\section{Introduction} The orbital period distribution of gas giants is a fundamental property of planetary systems and places constraints on their formation processes, migration mechanisms, and future evolution. The observed period distribution is not smooth. The majority of hot Jupiters, i.e. those with a semi-major axis $a<0.1$ AU, are found in a `pile-up' at periods of $P\sim3-4$ days ($\sim0.035-0.045$ AU), whereas only four hot Jupiters are found in very close-in orbits ($\lesssim0.02$ AU, $P\lesssim1$ day), namely WASP-18 b, WASP-19 b, WASP-43 b, and WASP-103 b (\citealt{Hel09,Heb10,Hel11,Gil14}, respectively). The sharp decline in the number of hot Jupiters at orbital periods of less than two days is a genuine feature of the exoplanet period distribution, confirmed by both ground-based and space-based planet searches, e.g. \textit{Kepler} \citep{How11} and SuperWASP \citep{Hel12}. This suggests that very close-in hot Jupiters are genuinely rare: were they common, current instrumentation would easily detect them on account of their very frequent and deep transits, and their large RV variations in comparison to longer-period, smaller planets. As a result, \citet{Hel11} argue that extreme systems like WASP-19 b are approximately one hundred times less common than the hot Jupiters in the pile-up, indicating either that it is difficult to bring gas giants into very close orbits, or that they are quickly destroyed by strong tidal forces once they arrive. The latter would imply that very close-in hot Jupiters with old host stars are in the last few per cent of their lifetimes, which raises the further question of how likely it is to have observed these systems in such a transient phase of their orbital evolution. Despite extensive theoretical work, our understanding of how tidal forces influence the orbital evolution of giant planets is poorly constrained by observation. 
The efficiency of the dissipation of the orbital energy due to frictional processes in the star is usually parameterised by a stellar tidal quality factor $Q_{\star}^{\prime}$. Studies of binary star systems estimate its value to be $Q_{\star}^{\prime}\sim10^{6}$ (see e.g. \citealt{Mei05}), and analysis of the tidal evolution of a small sample of exoplanets has found some evidence for consistency with this value ($10^{6}<Q_{\star}^{\prime}<10^{9}$, \citealt{Jack08}). On the other hand, a recent exoplanet population study, which tuned the value of $Q_{\star}^{\prime}$ until the distribution of remaining planet lifetimes was statistically likely, found $Q_{\star}^{\prime}\gtrsim10^{7}$ at the $99\%$ confidence level \citep{Pen12} for its specific set of initial conditions. However, direct observational measurements of $Q_{\star}^{\prime}$ in individual systems, i.e. the observation of a decaying orbital period, do not currently exist. $Q_{\star}^{\prime}$ is the dominant factor in setting the pace of the orbital evolution for very close-in hot Jupiters, and the unusually short predicted remaining lifetimes for planets such as WASP-18 b and WASP-19 b have led to a number of suggested modifications to the theory of stellar tides that reduce the efficiency of the dissipation. For example, \citet{Win10} speculate that the observed increase in misalignment between the planetary orbital and stellar spin axes for hot Jupiters orbiting hotter stars depends on the depth of the convective zone in the host star. Here, cooler stars with deeper convective envelopes dissipate the orbital energy more efficiently, resulting in a faster alignment of the stellar obliquity, in keeping with theoretical studies (e.g. \citealt{Bark09,Bark10,Pen11}). 
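To make the role of $Q_{\star}^{\prime}$ explicit, recall the commonly adopted constant-$Q_{\star}^{\prime}$ decay rate for a circular orbit (in the spirit of Goldreich \& Soter; the studies cited above use variants of this expression):
\[
\frac{1}{a}\frac{{\rm d}a}{{\rm d}t} = -\frac{9}{2}\,\frac{n}{Q_{\star}^{\prime}}\,\frac{M_{\rm p}}{M_{\star}}\left(\frac{R_{\star}}{a}\right)^{5}, \qquad n=\sqrt{\frac{GM_{\star}}{a^{3}}},
\]
where $n$ is the planetary mean motion. Integrating inwards shows that the remaining inspiral time scales as $a^{13/2}$ and linearly with $Q_{\star}^{\prime}$, so an order-of-magnitude uncertainty in $Q_{\star}^{\prime}$ translates directly into an order-of-magnitude uncertainty in a planet's remaining lifetime.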
Others suggest that there is a complicated dependency on the planetary mean motion that results in zones of inefficient tidal dissipation during inspiral \citep{Ogi07}, or even possible mass loss effects that act to slow the orbital evolution of the planet \citep{Li10}. In this paper, we present the discovery and characterisation of the hot Jupiter WTS-2 b. It is the second planet to be detected in the infrared light curves of the WFCAM Transit Survey (WTS) \citep{Cap12,Kov12}, and orbits a mid-K dwarf star at just 1.5 times the separation at which it would be destroyed by tidal forces, making it a useful benchmark in constraining the theory of stellar tides. The remainder of this paper is organised as follows: in Section~\ref{sec:WTS}, we briefly summarise the goals of the WTS and its atypical observing strategy, the reduction procedure used to generate the infrared light curves, and the processes used to identify WTS-2 b as a transiting candidate and the checks performed before proceeding with its follow-up observations. Section~\ref{sec:obs} describes all of the follow-up data we obtained for WTS-2 b and their data reduction. We characterise the WTS-2 host star in Section~\ref{sec:char_star}, and derive the corresponding properties of its planetary companion WTS-2 b in Section~\ref{sec:char_planet}. Section~\ref{sec:falsepos} summarises our investigation into possible blending scenarios. In Section~\ref{sec:discussion}, we calculate and discuss the tidal evolution and remaining lifetime of WTS-2 b. We calculate the expected shift in its transit arrival time after 10 years, assuming that its orbit is decaying under tidal forces with $Q_{\star}^{\prime}=10^{6}$, and we give a correction to the previously published expected shift in the transit arrival time for WASP-18 b in Section~\ref{sec:wasp-18}. In Section~\ref{sec:pop}, we also attempt to constrain $Q_{\star}^{\prime}$ using the known population of hot Jupiters. 
Finally, we assess the potential for characterising the atmosphere of WTS-2 b using ground-based telescopes in Section~\ref{sec:followup}. Our conclusions are summarised in Section~\ref{sec:conclusion}. \section{The WFCAM Transit Survey}\label{sec:WTS} The WTS was a photometric monitoring campaign that covered $\sim6$ sq. degrees of the sky. It used the $3.8$ m United Kingdom Infrared Telescope (UKIRT) on Mauna Kea, Hawaii, in conjunction with the Wide-Field Camera (WFCAM), to observe at infrared wavelengths ($J$-band, $1.25\mu$m). The survey began on $5^{\rm th}$ August 2007. A detailed description of the WTS and its goals can be found in \citet{Kov12}, \citet{Bir12}, and \citet{Zen13}, but its main features are recounted here briefly for reference. The WTS light curves were observed at infrared wavelengths in order to maximise sensitivity to photons from M-dwarfs. However, for the earlier-type stars in the WTS fields, infrared observations had the added advantage of being less sensitive to low-level star spot modulation, thus providing more stable light curves in which to hunt for planets \citep{Goul12}. The WTS covered four fields distributed in RA so that at least one field was always visible within 15 degrees of zenith from Mauna Kea. This was key to the survey's observing strategy, as it operated as a back-up program in the highly efficient, queue-scheduled operational mode of UKIRT, observing in sky conditions that the UKIRT large programs, such as UKIDSS \citep{Lawr07}, could not use. Consequently, the majority of the WTS observations were taken in the first hour of the night, when the atmosphere is still cooling and settling; however, the back-up nature of the program served to randomise the observing pattern. The exact field locations were chosen to minimise contamination by giant stars, while maximising the number of early M-dwarfs and maintaining $E(B-V)<0.1$ mag, which kept the fields at $b>5$ degrees. 
WTS-2 b was found in the `19 hr field', which was centred at RA$=19^{h}$, Dec$=+36^{\circ}$ and contained $\sim 65,000$ stellar sources at $J\leq17$ mag. Note that this field is very close to, but does not overlap with, the Kepler field-of-view \citep{Bat06}, which has been shown to have a low fraction of late-K and early-M giants at optical magnitudes comparable to the WTS \citep{Mann12}. \subsection{Observation and reduction of the UKIRT/WFCAM $J$-band time-series photometry}\label{sec:jband} The infrared light curves of the WTS were generated from time-series photometry taken with the WFCAM imager \citep{Cas07} mounted at the prime focus of UKIRT. WFCAM consists of four $2048\times2048$ $18~\mu$m pixel HgCdTe Rockwell Hawaii-II, non-buttable, infrared arrays. The arrays each cover $13.65^{\prime}\times13.65^{\prime}$ ($0.4^{\prime\prime}$/pixel) and are arranged in a square paw-print pattern, separated by $94$ per cent of an array width. The four WTS fields cover $1.5$ sq. deg. each, which requires $8$ pointings of the WFCAM paw-print, tiled together to give uniform coverage. The WTS observed a $9$-point jitter pattern of 10 second exposures at each pointing, resulting in a cadence of one data point per 15 minutes in any given one hour observing block ($9 \times 10$ s $\times 8$ + overheads). The 2-D image processing of the WFCAM images and the generation of the WTS light curves are described in detail by \citet{Kov12} and closely follow the methods of \citet{Irw07}. In summary, we remove the dark current and reset anomaly from the raw images, apply a flat-field correction using twilight flats, then decurtain and sky subtract. Astrometric and photometric calibration was achieved using 2MASS stars in the field-of-view \citep{Hod09}. To generate the light curves, we made a master catalogue of source positions using a stacked image of the $20$ best frames and used it to perform list-driven, co-located, variable aperture photometry. 
For WTS-2, the best aperture radius (i.e. the one that gave the smallest RMS) was equal to $\sqrt{2}$ times the typical FWHM of the stellar images across all frames, i.e. 3.5 pixels ($1.98^{\prime\prime}$). In an attempt to remove systematic trends in the light curves, e.g. those caused by flat-fielding inaccuracies or varying differential atmospheric extinction across the wide field-of-view, we fit a 2-D quadratic polynomial to the flux residuals in each light curve as a function of the source position on the detector. This step can significantly reduce the RMS of the brightest objects in wide-field surveys \citep{Irw07}. Finally, we removed residual seeing-correlated effects by fitting a quadratic polynomial to the flux residuals in each light curve as a function of the stellar image FWHM on the corresponding frame. The resulting $J$-band light curves for the 19hr field have a median RMS of $\sim1$ per cent ($\sim10$ mmag) or better for $J\leq16$ mag, with a per-data-point precision of $\sim3-5$ mmag for the brightest targets (saturation occurs at $J\sim13$ mag)\footnote{The RMS is calculated using the robust median of absolute deviations (MAD) estimator, scaled to the equivalent Gaussian standard deviation (i.e. RMS$\sim1.48\times$MAD).}. The out-of-eclipse data in the light curve of WTS-2 ($J_{\rm WFCAM}=13.88$ mag) have a per-data-point precision of $5.3$ mmag. The full, phase-folded, unbinned $J$-band light curve of WTS-2 b is shown in Figure~\ref{fig:lc_J}, and the data are given in Table~\ref{tab:lc_J}. \begin{table} \centering \begin{tabular}{@{\extracolsep{\fill}}ccc} \hline \hline HJD&$J_{\rm WFCAM}$&$\sigma_{J_{\rm WFCAM}}$\\ &(mag)&(mag)\\ \hline 2454317.810999&13.9219&0.0033\\ 2454317.823059&13.9245&0.0032\\ ...&...&...\\ \hline \end{tabular} \caption{The observed WFCAM $J$-band light curve data for WTS-2 b without correction for dilution. Magnitudes are given in the WFCAM system. \citet{Hod09} provide conversions for other systems. 
The errors, $\sigma_{J}$, are estimated using a standard noise model, including contributions from Poisson noise within the object aperture, sky noise, readout noise and errors in the sky background estimation. (This table is published in full in the online journal and is shown partially here for guidance regarding its form and content.)} \label{tab:lc_J} \end{table} \begin{figure*} \centering \includegraphics[width=\textwidth]{WTS_J} \caption{The full phase-folded discovery light curve of WTS-2 b from WTS $J$-band observations. The data are not binned. The out-of-eclipse RMS is 5.3 mmag. The data for this figure are given in Table~\ref{tab:lc_J}. Note that this light curve has not been corrected for dilution by the additional faint light source also within the aperture (see Section~\ref{sec:lucky}).} \label{fig:lc_J} \end{figure*} \subsection{Detection and prioritisation of WTS transit candidates}\label{sec:detect} The vast sample of stars in the WTS and its randomised observing strategy do not permit a straightforward eyeball search for transits in the light curves, so we undertook several steps to reduce the scale of this task. All stellar sources in the 19hr field with $J< 17$ mag were first passed through the box-least-squares transit detection algorithm {\sc occfit}, which is described in detail by \citet{Aig04}. As in all ground-based transit surveys, the processed WTS light curves suffer from residual correlated red noise, which can mimic transit events. We therefore adjusted the detection significance statistic, $S$, calculated by {\sc occfit~} to account for the presence of red noise following the model of \citet{Pon06b} to give $S_{\rm red}$. In order to qualify as a WTS transit candidate, a detection must have $S_{\rm red}\geq5$. We also rejected transit detections in the period range $0.99<P<1.005$ days, as the majority of these were found to be aliases caused by the observing window function of our ground-based survey. 
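In outline, the robust noise estimate from the footnote in Section~\ref{sec:jband} and the two detection cuts just described amount to the following (an illustrative sketch with our own function names, not code from the survey pipeline; {\sc occfit} itself computes $S_{\rm red}$):

```python
import numpy as np

def robust_rms(flux):
    """Gaussian-equivalent RMS from the median absolute deviation,
    RMS ~ 1.48 x MAD, which is insensitive to outliers and eclipses."""
    mad = np.median(np.abs(flux - np.median(flux)))
    return 1.48 * mad

def passes_detection_cuts(s_red, period_days):
    """Keep a detection only if the red-noise-adjusted significance
    satisfies S_red >= 5 and the period avoids the ~1 day alias window
    caused by the observing window function."""
    if s_red < 5.0:
        return False
    if 0.99 < period_days < 1.005:
        return False
    return True
```

With WTS-2 b's values ($S_{\rm red}=23$, $P=1.0187$ days, quoted below), `passes_detection_cuts` returns `True`; the same significance at exactly $P=1.0$ days would be rejected as a window-function alias.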
In the final step before eyeballing the remaining light curves, we used $ZYJHK$ single epoch photometry from WFCAM (see Section~\ref{sec:broadband}), plus complementary $griz$ photometry from SDSS DR7 \citep{Yor00}, to create a spectral energy distribution (SED) for each object and estimate its effective temperature (see \citealt{Bir12} for details). The effective temperature, ${\rm T}_{\rm eff}$, was converted to an approximate stellar radius for each source using the stellar evolution models of \citet{Bar98}, at an age of 1 Gyr with a mixing length equal to the scale height. Assuming a maximum planetary radius of $2{\rm R}_{\rm J}$, we defined an envelope of transit depths as a function of stellar radius that were consistent with planetary transit events. Only detections with changes in flux ($\Delta F$) corresponding to $R_{\star}\sqrt{(\Delta F)} \leq 2{\rm R}_{\rm J}$ were allowed through to the eyeballing stage. It is important to note, firstly, that {\sc occfit~} tends to under-estimate transit depths because it does not allow for the trapezoidal shape of a transit, nor does it account for limb-darkening effects. Secondly, the models we use to estimate the stellar radii systematically under-estimate the temperature of solar-like stars \citep{Bar98}, making our first estimates of ${\rm T}_{\rm eff}$ too cool for stars of earlier type than M-dwarfs, and hence our initial radius estimates too small. Both of these factors combined make it unlikely that genuine hot-Jupiter transit events are rejected by this final selection criterion. The $\sim3500$ candidates that survived to the eyeball stage were mostly false positives arising from nights of bad data or singular bad frames that we do not filter from the data. We also removed binary systems that were detected on half their true orbital period (as is favoured by the detection statistic). 
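The radius-envelope cut can be sketched as follows (a simplified illustration with our own function names; the solar and Jovian radii are standard values):

```python
import math

R_SUN_M = 6.957e8   # nominal solar radius in metres
R_JUP_M = 7.1492e7  # nominal Jupiter equatorial radius in metres

def implied_planet_radius_rjup(stellar_radius_rsun, delta_f):
    """Companion radius implied by a box-shaped transit of fractional
    depth delta_f: R_p = R_star * sqrt(delta_f), returned in R_Jup."""
    return stellar_radius_rsun * math.sqrt(delta_f) * R_SUN_M / R_JUP_M

def passes_radius_envelope(stellar_radius_rsun, delta_f, max_rjup=2.0):
    """Selection cut: only depths consistent with R_p <= 2 R_Jup survive."""
    return implied_planet_radius_rjup(stellar_radius_rsun, delta_f) <= max_rjup
```

For example, a $0.82\,{\rm R}_{\odot}$ star showing a $3.1$ per cent deep transit implies $\approx1.4\,{\rm R}_{\rm J}$ and passes the cut.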
Overall, with this method we detected $40$ good transiting candidates, including WTS-2 b, which has $S_{\rm red}=23$, an {\sc occfit}-detected period of $1.0187$ days, an initial estimated stellar effective temperature ${\rm T}_{\rm eff}=4777$ K, and an {\sc occfit}-detected transit depth of $\Delta F=0.031$, corresponding to an estimated planet radius of $\sqrt{0.031}\times0.82{\rm R}_{\odot}\sim1.4{\rm R}_{\rm J}$. Before proceeding with follow-up observations, we checked that the stellar density calculated from the phase-folded light curve of WTS-2 b matched the estimated stellar type from the initial SED model fit, using the method described by \citet{Sea03}. A large discrepancy would suggest a blended or grazing binary system. We found a light curve stellar density of $\rho_{\star}\sim1.49{\rho}_{\odot}$, which is within $\sim0.05{\rho}_{\odot}$ of the model density for a $4777$ K star at 1 Gyr in the \citet{Bar98} models. The close agreement between the stellar densities from SED modelling and the phase-folded light curve triggered the follow-up observations to characterise WTS-2 b. \section{Follow-up observations and data reduction}\label{sec:obs} \subsection{Multi-wavelength single epoch broadband photometry}\label{sec:broadband} In order to measure the photometric colours and estimate the spectral type of WTS-2 (and all the other sources in the WTS), we used WFCAM to observe single, deep exposures of the four WTS fields in five filters ($ZYJHK$), with exposure times 180, 90, 90, $4\times90$, and $4\times90$ seconds, respectively. The 2-D image processing for these data are the same as described in Section~\ref{sec:jband}. For WTS-2, we also obtained Johnson $B$-, $V$- and $R$-band single epoch photometry on the nights of $8^{\rm th}$ and $22^{\rm nd}$ March 2012 at the University of Hertfordshire's Bayfordbury Observatory (latitude$=51.8$ degrees North, longitude$=0.1$ degrees West). 
We used a Meade LX200GPS $16$-inch $f/10$ telescope fitted with an SBIG STL-6303E CCD camera, and integration times of 300 seconds per band. Images were bias, dark, and flat-field corrected, and the extracted aperture photometry was calibrated using three bright reference stars within the image. The quoted photometric uncertainties for these data combine the contribution from the signal-to-noise of the source (typically $\sim20$) with the scatter in the zero-point from the calibration stars. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{vosa_sed} \caption{The spectral energy distribution of WTS-2. The best-fitting Kurucz model spectrum from a $\chi^{2}$ analysis (see Section~\ref{sec:SED}) is overlaid in grey (${\rm T}_{\rm eff}=5000$ K), while the synthetic photometry for the corresponding observed bandpasses is shown by the blue open triangles. The observed data are shown by the black open circles and the dereddened photometry is shown by the red open squares. Note that the errors on the photometry include the photon error listed in Table~\ref{tab:broadband}, plus a $2\%$ uncertainty added in quadrature to the WFCAM and SDSS bandpasses to allow for calibration between the different surveys. However, the magnitudes have not been adjusted for contamination by a faint red source of third light within 0.6 arcsec of the host star (see Section~\ref{sec:lucky}). The symbols are generally larger than the error bars. 
The filter transmission profiles for our observed bandpasses (see Table~\ref{tab:broadband}) are shown by the lines at the bottom of the plot.} \label{fig:SED} \end{figure} A further nine photometric data points at optical and infrared wavelengths were gathered for WTS-2 using the publicly available Virtual Observatory SED Analyzer\footnote{http://svo2.cab.inta-csic.es/svo/theory/vosa} ({\sc vosa}, \citealt{Bay08,Bay13}), including $ugriz$ from the Sloan Digital Sky Survey data release 7 (SDSS DR7) \citep{Yor00}, $JHK$ from the Two Micron All Sky Survey (2MASS, \citealt{Skr06}), and $W1W2$ from the Wide-field Infrared Survey Explorer (WISE, \citealt{Wrig10}). We do not give the $W3$ and $W4$ bandpasses as they fall below the WISE $5\sigma$ point source sensitivity for detection. We also note that the $u$-band for SDSS photometry is affected by a known red leak in the filter and has been assigned an accordingly larger error\footnote{See http://www.sdss3.org/dr8/imaging/caveats.php}. All of the available single epoch broadband photometry for WTS-2 is reported in Table~\ref{tab:broadband} and plotted in Figure~\ref{fig:SED}. The data are used in Section~\ref{sec:char_star} to determine the best-fitting SED for WTS-2. 
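The grid comparison behind the best-fitting model in Figure~\ref{fig:SED} is, in outline, a $\chi^{2}$ minimisation over synthetic photometry. A minimal sketch (our own function names; a free multiplicative scale absorbs the overall flux normalisation, and the real analysis also treats reddening):

```python
import numpy as np

def best_sed_model(obs_flux, obs_err, model_grid):
    """Pick the model whose synthetic band fluxes best match the observed
    fluxes in a chi-squared sense. model_grid maps a label (e.g. T_eff)
    to an array of synthetic fluxes in the same bandpasses; an overall
    multiplicative scale is fitted analytically for each model."""
    best = None
    for label, model_flux in model_grid.items():
        w = 1.0 / obs_err**2
        # optimal scale minimising chi^2 for this model
        scale = np.sum(w * obs_flux * model_flux) / np.sum(w * model_flux**2)
        chi2 = np.sum(w * (obs_flux - scale * model_flux) ** 2)
        if best is None or chi2 < best[1]:
            best = (label, chi2)
    return best
```

Fitting the scale factor analytically per model keeps the grid search one-dimensional in the model label, which is why a coarse temperature grid suffices for a first pass.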
\begin{table} \centering \begin{tabular}{lrrr} \hline \hline Filter&$\lambda_{\rm eff}$ (\AA)&EW (\AA)&Magnitude\\ \hline SDSS-$u$&3546&558&$18.361\pm0.039$\\ Johnson-$B$&4378&1158&$16.8\pm0.1$\\ SDSS-$g$&4670&1158&$16.283\pm0.004$\\ Johnson-$V$&5466&890&$15.9\pm0.1$\\ SDSS-$r$&6156&1111&$15.464\pm0.003$\\ Johnson-$R$&6696&2070&$15.3\pm0.1$\\ SDSS-$i$&7471&1045&$15.146\pm0.003$\\ WFCAM-$Z$&8802&927&$14.501\pm0.003$\\ SDSS-$z$&8918&1124&$14.959\pm0.005$\\ WFCAM-$Y$&10339&999&$14.352\pm0.004$\\ 2MASS-$J$&12350&1624&$13.928\pm0.025$\\ WFCAM-$J$&12490&1513&$13.963\pm0.003$\\ WFCAM-$H$&16338&2810&$13.470\pm0.002$\\ 2MASS-$H$&16620&2509&$13.464\pm0.026$\\ 2MASS-$K_{s}$&21590&2619&$13.414\pm0.039$\\ WFCAM-$K$&22185&3251&$13.360\pm0.003$\\ WISE-$W1$&34002&6626&$13.292\pm0.027$\\ WISE-$W2$&46520&10422&$13.368\pm0.038$\\ \hline \end{tabular} \caption{Broadband photometry for WTS-2. All reported magnitudes are in the Vega system except the SDSS photometry, which is in the AB magnitude system. These magnitudes have not been corrected for reddening, nor for the dilution by the faint red source within 0.6 arcsec of the host star (see Section~\ref{sec:lucky}). $\lambda_{\rm eff}$ is the effective wavelength defined as the mean wavelength weighted by the transmission function of the filter, and EW is the equivalent width of the bandpass.} \label{tab:broadband} \end{table} \subsection{INT/WFC $i$-band time-series photometry} In order to confirm the transit of WTS-2 b and to help constrain the transit model, on $18^{\rm th}$ July 2010 we obtained further time-series photometry in the Sloan $i$-band using the Wide Field Camera (WFC) on the $2.5$ m Isaac Newton Telescope (INT) at Roque de Los Muchachos, La Palma. 
A total of $67$ frames covering the full transit with some out-of-transit baseline were obtained with exposure times of $90$ seconds, at a cadence of $1$ data point every $2.45$ minutes (the overheads include the CCD read-out time plus time allowed for the auto-guider to place the star back onto the exact same pixel after every exposure). \begin{figure} \centering \includegraphics[width=0.49\textwidth]{mcmc_jband_dilute}\\ \includegraphics[width=0.49\textwidth]{mcmc_iband_dilute} \caption{{\bf Top:} Phase-folded WTS $J$-band light curve for WTS-2 b corrected for dilution by a faint red source and zoomed around the transit. The adopted best model from a simultaneous fit to the $J$-band and $i$-band dilution-corrected light curves is shown by the solid red line (see Section~\ref{sec:transitfit}). The errors have been scaled such that the out-of-transit data have $\chi_{\nu}^{2}=1$ when compared to a flat line. The lower panel shows the residuals to the model. {\bf Bottom:} Same as above but for the dilution-corrected INT $i$-band light curve. All the available data are shown. Note the change in y-axis scale for the residuals.} \label{fig:lc_i} \end{figure} The CASU INT/WFC data reduction pipeline \citep{Irw01,Irw07} was used to reduce the $i$-band images. The pipeline follows a standard CCD reduction of de-biasing, correcting for nonlinearity, flat-fielding and defringing. A master source catalogue was extracted from a stacked image of the 20 best frames and variable aperture photometry was performed for all sources in all images to generate light curves. The out-of-transit RMS in the WTS-2 $i$-band light curve is $1.5$ mmag. The light curve is used simultaneously with the $J$-band light curve to find the best-fitting model transit to WTS-2 b in Section~\ref{sec:transitfit}. The WTS-2 b $i$-band light curve is shown in Figure~\ref{fig:lc_i} and the data are given in Table~\ref{tab:lc_i}. 
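The dilution correction referred to in the figure caption follows from a constant third light: if a contaminant contributes a fraction $f$ of the total flux (measured in Section~\ref{sec:lucky}), the host's transit appears shallower by the factor $1-f$. A minimal sketch (our own function names):

```python
def undilute_depth(observed_depth, contaminant_fraction):
    """Transit depth of the host star alone, given the depth observed
    in a blend where the contaminant contributes a fraction f of the
    total flux: depth_true = depth_obs / (1 - f)."""
    return observed_depth / (1.0 - contaminant_fraction)

def undilute_flux(normalised_flux, contaminant_fraction):
    """Remove a constant third light from a normalised light curve:
    F_host = (F_obs - f) / (1 - f)."""
    f = contaminant_fraction
    return (normalised_flux - f) / (1.0 - f)
```

Note that the out-of-transit level maps to unity under both operations, so only the depth (and hence the inferred radius ratio) changes.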
\begin{table} \centering \begin{tabular}{@{\extracolsep{\fill}}ccc} \hline \hline HJD&Normalised flux&Error\\ \hline 2455396.56449361&1.0047&0.0015\\ 2455396.56754940&0.9988&0.0015\\ ...&...&...\\ \hline \end{tabular} \caption{The observed INT $i$-band light curve for WTS-2 b without correction for dilution. The errors were derived in the same manner as for the $J$-band, but have been scaled so that the out-of-transit baseline has a reduced $\chi^{2}=1$ when compared to a flat line to avoid underestimating the errors (see Figure~\ref{fig:lc_i}). (This table is published in full in the online journal and is shown partially here for guidance regarding its form and content.)} \label{tab:lc_i} \end{table} \subsection{CAHA/TWIN intermediate-resolution spectroscopy} We carried out intermediate-resolution reconnaissance spectroscopy of WTS-2 to obtain an estimate of the host star effective temperature and its surface gravity (see Section~\ref{sec:char_star}), and to measure preliminary radial velocity (RV) variations to test for the presence of a blended or grazing eclipsing binary system (see Section~\ref{sec:falsepos} and Table~\ref{tab:recon}). Spectroscopic observations of WTS-2 and several RV standards were taken over 6 nights between June and August 2011 as part of a wider follow-up campaign of the WTS planet candidates and M-dwarf eclipsing binaries (Cruz et al., \emph{in prep.}). We used the Cassegrain Twin Spectrograph (TWIN) mounted on the 3.5-m telescope at the Calar Alto Observatory (CAHA) in southern Spain, with its T10 grism and a $1.2^{\prime\prime}$ slit, resulting in a dispersion of $\sim0.39$\AA/pix ($R\sim8000$) and a wavelength coverage of $\sim6200-6950$\AA. A total of 18 epochs were observed for WTS-2, with integration times between $600$ and $900$ seconds. The spectra were reduced in the standard way using {\sc iraf} packages. 
To measure the RV variations of WTS-2 and the RV standards, the {\sc iraf} package {\sc fxcor} was used to perform Fourier cross-correlation of the observed spectra with synthetic templates generated from \citet{Mun05}. The effective temperature and surface gravity of the cross-correlation template were chosen to match the results of the SED fit in Section~\ref{sec:char_star}, but with a solar metallicity. We also used the TWIN spectra in Section~\ref{sec:char_star} to confirm the stellar characteristics found via the SED fit. For this, we used a spectrum created by aligning and stacking eight of the TWIN spectra obtained in August 2011 into a single spectrum with SNR$\sim25$. The stacked spectrum is shown in Figure~\ref{fig:TWIN_spec}. \begin{table} \centering \begin{tabular}{ccc} \hline \hline HJD&RV&$\sigma_{\rm RV}$\\ &(km/s)&(km/s)\\ \hline 2455721.417173& -19.794& 1.540\\ 2455721.501447& -19.122& 2.015\\ 2455721.586693& -19.091& 1.820\\ 2455762.651947& -19.050& 1.888\\ 2455762.659494& -18.415& 1.670\\ 2455763.590589& -21.118& 1.560\\ 2455763.658298& -20.079& 1.614\\ 2455763.665845& -18.796& 1.656\\ 2455783.377699& -22.410& 1.599\\ 2455783.567465& -19.215& 1.374\\ 2455783.645763& -18.072& 1.888\\ 2455783.656805& -19.118& 1.851\\ 2455784.508995& -19.121& 1.132\\ 2455784.661690& -20.206& 1.710\\ 2455784.672731& -20.004& 1.313\\ 2455785.444262& -19.560& 1.265\\ 2455785.508347& -21.585& 1.863\\ 2455785.668461& -19.728& 1.971\\ \hline \end{tabular} \caption{Reconnaissance radial velocities from CAHA 3.5m/TWIN.} \label{tab:recon} \end{table} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{twin_spec} \caption{A stacked spectrum of WTS-2 using eight of the TWIN observations from August 2011. The individual spectra have been aligned to the same RV and continuum-normalised. 
The rest-wavelength of H$\alpha$ is labelled.} \label{fig:TWIN_spec} \end{figure} \subsection{HET high-resolution spectroscopy} High-resolution spectroscopic observations of WTS-2 were obtained between August and November 2011 at McDonald Observatory in Texas, using the High Resolution Spectrograph (HRS, \citealt{Tul98}) at the Hobby-Eberly Telescope (HET). These spectra were used to measure the RV variations of the star and hence calculate the Keplerian parameters of the WTS-2 spectroscopic orbit (see Section~\ref{sec:precisionRV}), and to measure the bisector variations to help assess false-positive scenarios (see Section~\ref{sec:falsepos}). The relative faintness of WTS-2 necessitated a large aperture telescope in order to achieve high-precision RV measurements. HET has an effective aperture of 9.2 meters \citep{Ram98} and sits at a fixed elevation angle of $55^{\circ}$, rotating in azimuth to access $81\%$ of the sky visible from the observatory. HRS is a single-object fibre-coupled spectrograph with two additional sky fibres that uses a mosaic of two R-4 echelle gratings with cross-dispersing gratings to separate the spectral orders. We used an effective slit width of $2^{\prime\prime}$ with the 600g5271 grating to give a resolution of $R=60,000$ and a wavelength coverage of $\sim4400-6280$\AA, separated into 40 echelle orders across the two CCD detectors (18 on the red CCD, 22 on the blue CCD). Each science image was a 1-hour integration, split into two 30-minute exposures. Due to the faintness of the star, we did not use the iodine gas cell but instead observed several exposures of the ThAr arc lamp before and after each science frame for wavelength calibration and to monitor any systematic shifts. A high signal-to-noise (SNR) exposure of a white dwarf was also obtained as a telluric standard. The {\sc iraf.echelle}\footnote{http://iraf.net/irafdocs/ech.pdf} package was used to reduce the HET spectra. 
After subtracting the bias and flat-fielding the images, the science and sky spectra for each 30-minute exposure were extracted order-by-order, and the corresponding sky spectrum was then subtracted. Wavelength calibration was achieved using the extracted ThAr arc lamp spectra. The dispersion functions calculated for the ThAr spectra (RMS$\sim0.003$\AA) taken before and after the science frames were checked for consistency and then linearly interpolated to create a final dispersion function to apply to the stellar spectra. No significant drift or abnormalities were observed in the wavelength solution during each observing run. Before combining the two 30-minute exposures at each epoch, the individual spectra were continuum-normalised, filtered for residual cosmic rays, and corrected for telluric features at the redder wavelengths (using the extracted white dwarf spectrum) using a custom set of {\sc matlab} programs. After combining the exposures at each epoch we obtained a total of seven spectra with average SNRs of $\sim15$. We note here that due to the faintness of WTS-2, the cores of the deepest lines in the HRS spectra are distorted during the calibration process, particularly after sky subtraction. This means only the weaker lines in these high-resolution spectra are suitable for any detailed spectroscopic analysis of the host star, such as abundance calculations or measuring the projected rotational velocity (see Section~\ref{sec:char_star}). To measure the RVs, each echelle order in the spectrum was cross-correlated with a synthetic template using {\sc iraf.fxcor}. The template was taken from the MAFAGS-OS grid of model atmospheres \citep{Grup04} with ${\rm T}_{\rm eff}=4800$K, $\log(g)=4.5$ and solar metallicity. 
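The core idea of the cross-correlation measurement can be sketched as follows (a simplified illustration of the principle, not the {\sc iraf} {\sc fxcor} implementation; it returns the velocity at the pixel-grid CCF peak, without the sub-pixel peak fitting a real pipeline would add):

```python
import numpy as np

def ccf_rv(flux_obs, flux_tmpl, dv_kms):
    """Radial velocity from the peak of the cross-correlation between an
    observed spectrum and a template, both resampled onto the same
    uniform log-wavelength grid so that one pixel corresponds to a
    fixed velocity step dv_kms."""
    o = flux_obs - flux_obs.mean()   # remove the continuum offset
    t = flux_tmpl - flux_tmpl.mean()
    ccf = np.correlate(o, t, mode="full")
    lag = int(np.argmax(ccf)) - (len(t) - 1)  # pixel shift of the CCF peak
    return lag * dv_kms
```

In practice this is repeated per echelle order and the per-order velocities are averaged, with the standard deviation on the mean taken as the uncertainty, as described below.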
The template parameters are within the errors of the final host star properties obtained by the detailed analysis in Section~\ref{sec:char_star}, and the variation of the RVs for different templates within these errors is negligible compared to the errors on the measured RVs. The RVs reported in Table~\ref{tab:HET_RVs} and shown in Figure~\ref{fig:rv_het} are the mean RV from all the echelle orders at a given epoch, with the uncertainties equal to the standard deviation on the mean of the RVs. \begin{table} \centering \begin{tabular}{cccccc} \hline \hline HJD&Phase&RV&$\sigma_{\rm RV}$&BS&$\sigma_{\rm BS}$\\ &&(km/s)&(km/s)&(km/s)&(km/s)\\ \hline 2455790.83253&0.9599&-19.922&0.043&1.73&0.90\\ 2455822.73790&0.2790&-20.332&0.061&0.83&0.93\\ 2455845.68006&0.7997&-19.761&0.046&0.72&0.95\\ 2455856.65080&0.5693&-20.021&0.047&-0.44&0.81\\ 2455867.61787&0.3349&-20.295&0.051&0.04&0.90\\ 2455869.61445&0.2952&-20.282&0.048&-0.48&0.75\\ 2455876.59697&0.1490&-20.115&0.040&0.08&0.86\\ \hline \end{tabular} \caption{Radial velocity data for WTS-2 derived from the HET/HRS spectra and their associated bisector span (BS) variations (see Section~\ref{sec:falsepos}). The error on each RV data point ($\sigma_{\rm RV}$) in this table is the standard deviation on the mean of the measured RVs across all echelle orders for that epoch. The phases are calculated using the best-fitting period from the simultaneous light curve analysis in Section~\ref{sec:transitfit}.} \label{tab:HET_RVs} \end{table} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{HET_RVs_new} \caption{{\bf Top:} The radial velocity curve of WTS-2 as a function of the orbital phase, measured using high-resolution spectra from HET. The red solid curve shows the best-fitting model to the data ($K_{\star}=256\pm32$ m/s, see Section~\ref{sec:precisionRV}), while the dotted horizontal line marks the measured systemic velocity of the system. The best-fitting model parameters are given in Table~\ref{tab:parameters}. 
A circular orbit was assumed in the model. The RV error bars in this plot have been scaled such that the fit gives $\chi^{2}_{\nu}=1$. {\bf Bottom:} Residuals after subtracting the best-fitting model.} \label{fig:rv_het} \end{figure} \subsection{High-resolution $i/z$-band AstraLux/CAHA lucky imaging}\label{sec:lucky} Although the WFCAM $J$-band survey images are of relatively high spatial resolution ($0.4$ arcsec/pixel) compared to most ground-based transit surveys, in order to adequately address false positive scenarios and search for unresolved stellar companions, we obtained high-resolution images of WTS-2 with the lucky imaging camera AstraLux \citep{Horm08} mounted on the CAHA 2.2m telescope. The observations were carried out on the night of June 14th 2013, with a mean seeing of 0.6 arcsec. We obtained $30000$ frames in the $i$- and $z$-bands with single frame exposure times of $0.08$ and $0.06$ s, respectively. The basic reduction, frame selection and image combination were carried out with the AstraLux pipeline\footnote{www.mpia.mpg.de/ASTRALUX} \citep{horm07}. During the reduction process, the images are resampled to half their pixel size. The resulting plate scale is then $23.61\pm0.20$ mas/pixel (Lillo-Box et al. 2014, submitted). The plate scale was measured with the {\sc ccmap} package of {\sc iraf} by matching the $XY$ positions of $66$ stars identified in an AstraLux image with their counterparts in the \citet{Yan94} catalog of the Hubble Space Telescope (see \citealt{Lil12} for a more detailed explanation of this method, which was used to study $\sim100$ Kepler planet host candidates). For our analysis, we used the best $1\%$ of exposures in the $i$- and $z$-bands, which have PSFs with FWHMs of $0.24$ and $0.18$ arcsec, respectively. Figure \ref{fig:AstraLux_z} shows the $z$-band stack in which a faint source is visible $0.567\pm0.005$ arcsec South of WTS-2. 
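The frame selection at the heart of lucky imaging can be sketched as follows (Python; a deliberately minimal stand-in for the AstraLux pipeline, which additionally resamples the frames and ranks them by a proper Strehl-ratio estimate rather than the raw peak counts used here):

```python
import numpy as np

def lucky_stack(frames, keep_frac=0.01):
    """Keep the sharpest fraction of short exposures, ranked by the
    peak brightness of the reference star (a crude Strehl proxy), and
    shift-and-add them on the position of their brightest pixel."""
    peaks = np.array([f.max() for f in frames])
    n_keep = max(1, int(round(keep_frac * len(frames))))
    best = np.argsort(peaks)[::-1][:n_keep]
    ny, nx = frames[0].shape
    cy, cx = ny // 2, nx // 2
    stack = np.zeros((ny, nx))
    for idx in best:
        y, x = np.unravel_index(np.argmax(frames[idx]), frames[idx].shape)
        stack += np.roll(np.roll(frames[idx], cy - y, axis=0), cx - x, axis=1)
    return stack / n_keep

# Toy demo: 200 frames of one star, jittered and blurred by "seeing"
rng = np.random.default_rng(1)
yy, xx = np.mgrid[:64, :64]
frames = []
for _ in range(200):
    y0, x0 = 32.0 + rng.normal(0.0, 3.0, size=2)
    sigma = rng.uniform(1.0, 4.0)  # per-frame blur width in pixels
    frames.append(np.exp(-((yy - y0) ** 2 + (xx - x0) ** 2) / (2.0 * sigma ** 2)))
stacked = lucky_stack(frames, keep_frac=0.05)
```

Keeping only the sharpest few per cent of frames trades total exposure time for angular resolution, which is what allows the $0.24$ and $0.18$ arcsec FWHMs quoted above.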
We performed an iterative PSF fitting of WTS-2 and this nearby source to estimate the flux of each of them. The PSF was constructed using the two brighter sources in the field of view. We find that the nearby faint source contributes $10.4\pm1.0\%$ and $13.1\pm1.0\%$ of the total light in the $i$- and $z$-bands, respectively. We can exclude any other companions beyond a projected separation of 0.4 arcsec from WTS-2 down to a magnitude difference of $\Delta m=3$ mag at the $3\sigma$ level. Motivated by this result, we extended our analysis to $ZYJHK$-band images taken with WFCAM (see Section~\ref{sec:broadband}). Although these data have a significantly larger pixel scale ($0.4$ arcsec/pixel) and considerably larger PSFs (FWHM $\sim1.2-1.7$ arcsec), we were able to perform a simultaneous fit of two PSFs and in this way estimated the blended light coming from the faint source located South of WTS-2. For the PSF fitting we made use of the position information obtained from the high-resolution AstraLux images by restricting the separation and position angle to those measured on the $i$- and $z$-band images. For the five WFCAM bands we find that the faint source contributes $9.5\pm3.0\%$, $10.7\pm4.0\%$, $19.0\pm4.0\%$, $19.8\pm4.0\%$ and $22.5\pm3.0\%$ in the $Z$-, $Y$-, $J$-, $H$- and $K$-bands respectively. \begin{figure} \centering {\setlength{\fboxsep}{0pt} \setlength{\fboxrule}{0.5pt} \fbox{\includegraphics[width=0.4\textwidth]{AstraLux_z_anon}}} \caption{AstraLux $z$-band image of WTS-2 and two other bright stars in the field. The field of view is $24\times24$ arcsec. A faint source is clearly visible $0.567\pm0.005$ arcsec South of WTS-2.} \label{fig:AstraLux_z} \end{figure} The resulting magnitudes for the contaminant are thus as follows: SDSS-$i=17.66$ mag, WFCAM-$Z=17.12$ mag, SDSS-$z=17.14$ mag, WFCAM-$Y=16.97$ mag, WFCAM-$J=15.77$ mag, WFCAM-$H=15.23$ mag and WFCAM-$K=14.98$ mag. 
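The contaminant magnitudes follow directly from the measured blend fractions, since a component contributing a fraction $f$ of the total flux is fainter than the combined light by $-2.5\log_{10}(f)$ magnitudes. A minimal sketch (Python; the combined $J$-band aperture magnitude of $13.97$ used below is illustrative):

```python
import math

def component_mag(m_combined, frac):
    """Magnitude of a blend component contributing fraction `frac`
    of the total flux in an aperture of combined magnitude m_combined."""
    return m_combined - 2.5 * math.log10(frac)

# WFCAM J band: the faint source contributes 19.0 +/- 4.0 per cent of
# the light. For an assumed combined aperture magnitude of J = 13.97,
# the contaminant magnitude is
m_faint_J = component_mag(13.97, 0.190)   # ~15.77 mag, as in the text
# and the de-blended magnitude of WTS-2 itself follows from the
# remaining 81 per cent of the flux:
m_host_J = component_mag(13.97, 0.810)
```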
The $H-K$ and $J-H$ colours of the contaminant correspond to a spectral type of $\sim$M$1$V and ${\rm T}_{\rm eff}\sim3600$ K when compared with the \citet{Bar98} models. This situation is similar to the hot Jupiter WASP-12 b, which was also recently shown to be diluted by a faint M-dwarf at $1$ arcsec separation \citep{Cross12}. If the M-dwarf in our aperture is gravitationally bound to the K-dwarf, the projected separation would correspond to an orbital separation of $\sim600$ AU, and an orbital period of $\sim12~500$ years, assuming a face-on, circular orbit. To assess the likelihood of physical association between the two sources, we estimated the probability that the faint source is a chance alignment star using the Besan\c{c}on stellar population synthesis models \citep{Robin03}. We extracted the predicted number of stars in a 1.6 sq. degree region centred on the coordinates of WTS-2 in the magnitude range of the faint source ($I=16.95-17.1$ mag, where the SDSS $i$ magnitude was converted to $I$ using the relations on the SDSS website\footnote{http://www.sdss.org/dr5/algorithms/sdssUBVRITransform.html\#Lupton2005}). In this range, we find $2\times10^{-4}$ stars per sq. arcsec. Multiplying this by our aperture area ($\pi(2^{\prime\prime})^{2}$), we find an a priori probability of $<0.26\%$ of finding a suitably faint red star in our aperture. This value is a factor of ten lower if one only considers stars within the projected separation of WTS-2 and the faint source. We therefore conclude that the faint source is likely to be a wide-orbit companion to WTS-2; however, the small proper motion means confirmation of physical association could take many years. \section{Characterization of the host star}\label{sec:char_star} The properties of the planet WTS-2 b depend directly on the characterisation of its host star. Due to the faintness of the host star, the usual method of deriving the stellar parameters from very high-resolution spectra (see e.g. 
\citealt{Tor12}) is not appropriate because the SNR in our high-resolution HET spectra is too low, and furthermore the previously mentioned issue of distorted features in the cores of the deepest lines could bias the results. Instead, we use two datasets of lower resolution and complementary analyses to arrive at consistent estimates of the stellar properties, albeit with comparatively larger uncertainties. Table~\ref{tab:parameters} gives the final adopted parameters and their errors based on the results of this section. We note here that we have not corrected the following analysis for contamination by the faint red source within the aperture or slit of the observations. The majority of this analysis is based on data at optical wavelengths where the contribution from the faint red source is low ($<10\%$), hence our derived host star properties are unlikely to deviate outside the presented uncertainties when accounting for the faint red source. \subsection{Effective temperature, surface gravity, metallicity, lithium abundance, and rotation}\label{sec:SED} \subsubsection{Photometric analysis} To begin, we refined the initial SED fit to the WFCAM photometry using {\sc vosa} to add more bandpasses, to explore a wider range of ${\rm T}_{\rm eff}$, surface gravity, and metallicity, and to fit for reddening. {\sc vosa} calculates synthetic photometry by convolving theoretical atmospheric models with the filter transmission curves of the observed bandpasses, then performs a $\chi^{2}$ minimisation to find the best-fitting model to the data \citep{Bay08,Bay13}. We used a grid of Kurucz ATLAS9 model spectra \citep{Cast97} in the range $3500\leq{\rm T}_{\rm eff}\leq6000$ K in steps of $250$ K, with $\log(g)=4.0-5.0$ in steps of $0.5$ dex (to be consistent with the light curve stellar density estimate), [Fe/H]=$[-0.5,0.0,+0.2,+0.5]$, and $0\leq A_{V}\leq0.5$ in steps of $0.025$. 
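A grid search of this kind reduces to a brute-force $\chi^{2}$ minimisation over model SEDs, trial extinctions, and an analytic flux scale. The sketch below (Python) is not {\sc vosa} itself: the two-model grid and per-band extinction coefficients are invented purely for illustration.

```python
import numpy as np

def fit_sed(obs_flux, obs_err, model_grid, extinction_coeff, av_grid):
    """Brute-force chi^2 SED fit: for each model (synthetic photometry
    per band) and each trial A_V, redden the model, solve analytically
    for the best multiplicative flux scale, and record chi^2."""
    best = (np.inf, None, None)
    for name, model in model_grid.items():
        for av in av_grid:
            # A_lambda = A_V * (A_lambda / A_V) per band
            reddened = model * 10.0 ** (-0.4 * av * extinction_coeff)
            w = 1.0 / obs_err ** 2
            # optimal scale from linear least squares
            scale = np.sum(w * obs_flux * reddened) / np.sum(w * reddened ** 2)
            chi2 = np.sum(w * (obs_flux - scale * reddened) ** 2)
            if chi2 < best[0]:
                best = (chi2, name, av)
    return best

# Toy grid: two "models" and illustrative A_lambda/A_V coefficients
ext = np.array([1.2, 1.0, 0.8, 0.5, 0.3])
models = {"T5000": np.array([1.0, 1.5, 1.8, 1.6, 1.2]),
          "T4500": np.array([0.6, 1.1, 1.6, 1.7, 1.4])}
truth = models["T5000"] * 10.0 ** (-0.4 * 0.25 * ext) * 3.0
chi2, name, av = fit_sed(truth, 0.02 * truth, models, ext,
                         np.arange(0.0, 0.5, 0.025))
```

With noiseless synthetic photometry the fit recovers the input model and $A_{V}=0.25$; with real data the $\chi^{2}$ surface over the full grid is what feeds the Bayesian posterior analysis described below.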
The upper boundary on the extinction range was chosen to approximately match the total integrated line-of-sight extinction for the $2$ degree region around the centre of the 19hr field ($A_{V}=0.439$ mag, $E(B-V)=0.132$ mag), calculated using the infrared dust maps of \citet{Sch98}. Figure~\ref{fig:SED} shows the best-fitting SED for WTS-2, which has a reduced $\chi_{\nu}^{2}=3.9$. Note that a $2\%$ error was added in quadrature to the SDSS and WFCAM photometric errors given in Table~\ref{tab:broadband}, to allow for calibration between the surveys, but that the magnitudes have not been corrected for the presence of the faint red source within 0.6 arcsec of the brighter star (see Section~\ref{sec:lucky}). In addition to the $\chi^{2}$ model fit, {\sc vosa} performs a Bayesian analysis of the model fit, resulting in a posterior probability density function covering the range of fitted values for each parameter. A Gaussian fit to the ${\rm T}_{\rm eff}$ and $A_{V}$ distributions gives approximate errors as follows: ${\rm T}_{\rm eff}=5000\pm140$ K and $A_{V}=0.27\pm0.07$. For $\log(g)$ the distribution is essentially flat due to the intrinsic insensitivity of the available broadband photometry to gravity-sensitive features, so we adopt $\log(g)=4.5\pm0.5$. For [Fe/H], higher metallicity is preferred, with the most probable solutions being [Fe/H]$=+0.2$ and $+0.5$ ($37\%$ and $42\%$, respectively). The ${\rm T}_{\rm eff}$ from the refined SED fit is higher than our original estimate, which is not surprising given that the initial estimate was made using models known to underestimate ${\rm T}_{\rm eff}$ for stars earlier than M-type. \subsubsection{Spectroscopic analysis} We checked the results of the SED fitting in two ways: firstly by fitting synthetic spectra to the stacked TWIN spectrum, and secondly through a standard spectroscopic abundance analysis of the same spectrum. 
From the latter, we also derived estimates of the rotational and microturbulence velocities, and an upper limit on the lithium abundance. Firstly, we compared the stacked TWIN spectrum of WTS-2 to synthetic spectra in the \citet{Coe05} library. The spectral library was generated by the {\sc pfant} code \citep{Barb03}, which computes the synthetic spectra using the updated ATLAS9 model atmospheres of \citet{Cast04} (with a mixing length equal to twice the scale height) and a list of atomic and molecular lines, under the assumption of local thermodynamic equilibrium. Before we performed a $\chi^{2}$ minimisation to find the best-fitting model, the synthetic spectra were degraded to the resolution of the TWIN spectra, then normalised to their continuum along with the observed stacked spectrum. Our model grid covered $4250\leq{\rm T}_{\rm eff}\leq5500$ K in steps of $250$ K, $4.0\leq\log(g)\leq5.0$ in steps of $0.5$, and [Fe/H]$=[-1.0,-0.5,0.0,+0.2,+0.5]$. We noted that the synthetic spectra systematically under-predicted the depth of some absorption features in the Solar spectrum (most likely due to neglect of non-LTE effects and/or errors in the continuum normalisation) such that our $\chi^{2}$ minimisation would preferentially select metal-rich spectra (see \citealt{Cap12} for a more detailed explanation). We therefore only performed the $\chi^{2}$ analysis on those lines that were well-reproduced for the Solar spectrum. The best-fitting model was consistent with the {\sc vosa} result, giving ${\rm T}_{\rm eff}=5000\pm250$ K, $\log(g)=4.5\pm 0.5$, and [Fe/H]=$+0.2^{+0.3}_{-0.2}$, where the errors correspond simply to the step-size in the models. This corresponds to a spectral type of K$2$V$\pm2$, according to Table B1 of \citet{Gray08}. For the standard spectroscopic abundance analysis, we measured the excitation potential of neutral Fe I and ionised Fe II lines in the TWIN stacked spectrum and compared them to synthetic spectra. 
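Degrading a synthetic spectrum to the instrument resolution, as done before the $\chi^{2}$ comparison above, amounts to a convolution with a Gaussian kernel whose FWHM follows from the quadrature difference of the two resolution elements (assuming Gaussian line-spread functions). A sketch in Python:

```python
import numpy as np

def degrade_resolution(wave, flux, r_in, r_out):
    """Convolve a high-resolution synthetic spectrum down to a lower
    resolving power r_out, assuming Gaussian line-spread functions, so
    the kernel FWHM is the quadrature difference of the two resolution
    elements evaluated at the median wavelength."""
    lam = np.median(wave)
    dlam = np.median(np.diff(wave))                       # pixel size
    fwhm = lam * np.sqrt(1.0 / r_out**2 - 1.0 / r_in**2)  # Angstrom
    sigma_pix = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / dlam
    half = int(np.ceil(4.0 * sigma_pix))
    x = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (x / sigma_pix) ** 2)
    kernel /= kernel.sum()                                # flux-conserving
    return np.convolve(flux, kernel, mode="same")

# Toy check: a delta-like line broadened from R = 100000 to R ~ 8000
wave = np.arange(6000.0, 6100.0, 0.05)
flux = np.ones_like(wave)
flux[len(flux) // 2] = 0.0          # one narrow absorption line
low = degrade_resolution(wave, flux, 1.0e5, 8000.0)
```

Away from the array edges the convolution conserves the equivalent width of the line while spreading it over the instrumental profile.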
All synthetic spectra were calculated using 1D LTE model atmospheres computed with the SAM12 and WITA6 routines \citep{Pav03} and constants taken from the VALD2 database \citep{Kup99}. For a complete description of our procedure, see \citet{Pav12}. For a range of synthetic models with microturbulence velocity $\xi=0.0-2.5$ km/s, in steps of $0.25$ km/s, and ${\rm T}_{\rm eff}=4900-5100$ K in steps of $100$ K, we found that the ionisation equilibrium condition was met at $\xi=0.75\pm0.50$ km/s for $\log(g)=4.45\pm0.25$. The corresponding iron abundance was [Fe/H]=$+0.095\pm0.021$, again consistent with the SED-fitting results. To measure the rotational velocity of the star, the $v\sin(i)$ value was calculated independently for each of the 20 Fe II lines in the TWIN stacked spectrum, by convolving the model line profile with a set of rotation profiles \citep{Gray08}, ranging from $0-6$ km/s in steps of $0.2$ km/s. The average and standard deviation of all the lines gave $v\sin(i)=2.2\pm1.0$ km/s. Finally, we placed an upper limit on the lithium abundance of $\log N$(Li) $< 1.8$ ($=12+\log N$(Li/H)), with an equivalent width upper limit of EW(Li)$\sim 0.089$\AA. Only upper limits are possible due to noise contamination and the relatively low resolution of the TWIN spectrum (see Figure~\ref{fig:lithium}). \begin{figure} \centering \includegraphics[width=0.5\textwidth]{lithium} \caption{A section of the stacked TWIN spectrum covering the lithium feature at $6707.55-6709.21$\AA~with models overlaid for different Li abundances. 
The apparent redshift of the observed Li feature is most likely due to noise contamination, so we place an upper limit at $\log$N(Li)$_{\rm LTE}=1.8$ dex.} \label{fig:lithium} \end{figure} \subsection{Mass and age constraints}\label{sec:mass-age} The mass of WTS-2 was derived using a modified Hertzsprung-Russell diagram, as shown in Figure~\ref{fig:evo_tracks}, comparing the spectroscopically measured ${\rm T}_{\rm eff}$ to the stellar density measured from the light curves. Model isochrones were generated using the PARSEC (PAdova and TRieste Stellar Evolution Code) v1.0 code, which includes the pre-main sequence phase \citep{Bres12}, for $Z=0.019$ (i.e. Solar). The observational errors on ${\rm T}_{\rm eff}$ allow solutions in the pre-main sequence phase; however, we rule out young ages using other indicators. For example, comparing the upper limit on the lithium abundance of WTS-2 to that observed in open clusters of a known age \citep{Sest05} constrains the age to $>250$ Myr. This already places the system beyond the pre-main sequence phase of the isochrones. For K-dwarfs, one can also obtain age constraints via gyrochronology \citep{Barne07,Mam08}, which depends on the stellar rotation period, usually measured from star spot modulation in the light curve. However, all significant peaks in the WTS-2 periodogram are consistent with aliases of the observing window function or of long-term systematic trends present in all the WTS light curves. This is not surprising, given that we are using infrared light curves, which have less contrast between the spot and stellar temperatures, resulting in lower amplitude rotational modulation signals \citep{Goul12}. However, the minimum possible rotation period is set by the upper limit on $v\sin(i)$ ($3.2$ km/s for WTS-2). Using Equation 7 of \citet{Mald10}, we find that the upper limit on $v\sin(i)$ is consistent with a gyrochronology lower age limit of $\gtrsim600$ Myr. 
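The link between the $v\sin(i)$ limit and the allowed rotation periods is simple arithmetic: assuming the stellar spin axis is roughly aligned with the orbit normal ($\sin i\approx1$ for this transiting system), the fastest-spinning, and hence youngest, configuration allowed has $P=2\pi R_{\star}\sin i/(v\sin i)$ evaluated at the $v\sin(i)$ upper limit. A sketch (Python; the $\gtrsim600$ Myr limit itself comes from the \citet{Mald10} relation, which is not reproduced here):

```python
import math

R_SUN_KM = 6.957e5  # solar radius in km

def rotation_period_days(r_star_rsun, vsini_kms, sin_i=1.0):
    """Rotation period implied by a projected rotational velocity,
    P = 2*pi*R_star*sin(i) / (v sin i). Evaluated at the vsini upper
    limit this gives the shortest (fastest-spinning) allowed period."""
    return 2.0 * math.pi * r_star_rsun * R_SUN_KM * sin_i / vsini_kms / 86400.0

# WTS-2 (R ~ 0.75 R_sun) with the vsini upper limit of 3.2 km/s,
# assuming spin-orbit alignment (sin i ~ 1):
p_min = rotation_period_days(0.752, 3.2)   # ~12 days
```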
Our spectra do not cover sufficient activity-sensitive spectral features so we cannot use the age-activity relationship, although the lack of emission in the H$\alpha$ line rules out a very young star. However, the age constraint is in agreement with association to the young and young-old Galactic disk \citep{Leg92}, determined from the space velocities given in Table~\ref{tab:parameters}, which were derived using proper motions from SDSS DR7 \citep{Munn04,Munn08} and the systemic velocity derived in Section~\ref{sec:precisionRV}. The model isochrones between $0.6$ and $13.5$ Gyr allow a mass range of $M_{\star}=0.820\pm0.082{\rm M}_{\odot}$, which we adopt as the mass of WTS-2. As a K-dwarf, WTS-2 has a deeper outer convective envelope than the Sun. According to the models of \citet{Sad12}, the base of the convective envelope lies at $R_{\rm cz}\sim0.54{\rm R}_{\odot}$ (i.e. the envelope spans the outer $\sim30\%$ of the stellar radius), and it has a mass of $M_{\rm cz}\sim0.05{\rm M}_{\odot}$ according to the \citet{Pin01} models (compared to $M_{\rm cz}\sim0.001{\rm M}_{\odot}$ for a late F star such as WASP-18). \begin{figure} \centering \includegraphics[width=0.49\textwidth]{evo_tracks_use} \caption{Modified Hertzsprung-Russell diagram comparing the effective temperature and light curve stellar density for WTS-2, with the PARSEC v1.0 stellar evolution isochrones \citep{Bres12} for Z=0.019. The data point and its error box mark the allowed values for WTS-2. Assuming the star is not on the pre-main sequence, the isochrones give an age constraint of $>600$ Myr.} \label{fig:evo_tracks} \end{figure} \section{System parameters}\label{sec:char_planet} The orbital elements and physical properties of WTS-2 b are derived from a simultaneous fitting of the $J$-band and $i$-band light curves, then combining the results with a separate analysis of the RVs measured with HET. 
Given that we have an estimate of the blended light contribution in both the $i$- and $J$-band filters, we present an analysis of both the diluted and dilution-corrected light curves for completeness. We adopt the dilution-corrected solution for the remainder of this paper; however, many of the derived parameters are consistent within the $1\sigma$ error bars from both analyses due to the relatively large errors on the fractions of blended light. We also address the limits we can place on the $J$-band secondary eclipse of WTS-2 b. \subsection{Light curve analysis}\label{sec:transitfit} In both the diluted and dilution-corrected cases, the $J$-band and $i$-band light curves were modelled jointly using the analytic formulae presented by \citet{Mand02}. A Markov-Chain Monte Carlo (MCMC) analysis was used to derive the uncertainties on the fitted parameters and their correlations. We fixed the limb-darkening coefficients in the fit by adopting values from the tables of \citet{Cla11}. We used the ATLAS atmospheric models and the flux conversion method (FCM) to obtain the quadratic law limb-darkening coefficients in the $i$- and $J$-bands ($\gamma_{1i}, \gamma_{2i}, \gamma_{1J}, \gamma_{2J}$) corresponding to ${\rm T}_{\rm eff}=5000$ K, $\log(g)=4.5$, [Fe/H]$=+0.2$, and $\xi=2$ km/s. This gave $\gamma_{1i}=0.4622$, $\gamma_{2i}=0.1784$, $\gamma_{1J}=0.2609$, and $\gamma_{2J}=0.2469$. Before fitting the light curves, we applied a scaling factor to the per data point errors in the $J$- and $i$-band light curves, such that the out-of-transit data when compared to a flat line gave a $\chi^{2}_{\nu}$ of unity. This was to account for any under-estimation of the errors. 
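The error-bar rescaling is a one-line correction once the out-of-transit scatter has been measured; a sketch (Python, with synthetic data standing in for the WTS photometry):

```python
import numpy as np

def rescale_errors(flux, err, oot_mask):
    """Scale per-point error bars so that the out-of-transit data,
    compared to a flat line at their weighted mean, give a reduced
    chi^2 of unity (guards against under-estimated photometric errors)."""
    f, e = flux[oot_mask], err[oot_mask]
    w = 1.0 / e ** 2
    mean = np.sum(w * f) / np.sum(w)
    chi2_nu = np.sum(((f - mean) / e) ** 2) / (f.size - 1)
    return err * np.sqrt(chi2_nu) if chi2_nu > 1.0 else err

# Toy check: quoted errors a factor ~2 too small
rng = np.random.default_rng(42)
flux = 1.0 + rng.normal(0.0, 0.004, 500)   # true scatter 4 mmag
err = np.full(500, 0.002)                  # quoted errors 2 mmag
scaled = rescale_errors(flux, err, np.ones(500, bool))
```

The scaled errors recover the true $\sim4$ mmag scatter, so the subsequent fit cannot report artificially small parameter uncertainties.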
The following parameters were allowed to vary in the MCMC analysis: the period ($P$), the epoch of mid-transit ($T_{0}$), the planet/star radius ratio ($R_{P}/R_{\star}$), the impact parameter ($b=a\cos(i)/R_{\star}$), where $i$ is the inclination of the system to our line-of-sight, and the semi-major axis in units of the stellar radius ($a/R_{\star}$). Note that the radius ratio was assumed to be the same in both the $J$-band and $i$-band transit models. The orbit was assumed to be circular, hence the eccentricity ($e$) was fixed to zero. Three chains of $1\times10^{6}$ steps were run each time to check convergence, then combined after discarding the first $10\%$ of each chain (the burn-in length). The dilution-corrected light curves and their combined best-fitting model are shown in Figure~\ref{fig:lc_i}, and the resulting best-fitting model parameters are listed in Table~\ref{tab:parameters}. Figure~\ref{fig:mcmc_correlations} shows the extent of correlation between some of the more strongly correlated model parameters in this analysis. The distributions are not perfectly Gaussian and result in slightly asymmetric errors for the $68.3\%$ confidence interval about the median. In order to propagate these errors into the calculation of absolute dimensions, we have symmetrized the errors by adopting the mean of the $68.3\%$ boundaries (the $15.85\%$ and $84.15\%$ confidence limits) as the parameter value (rather than the median), and we then quote the $68.3\%$ confidence interval as the $\pm1\sigma$ errors. The full extent of the relatively large errors on the blending fractions was explored by running the MCMC analysis on light curves corrected with the $\pm1\sigma$ limits of the estimated blending fractions. The quoted errors and parameter values were derived using the distributions from all of these runs. The results of fitting the original, diluted light curves are also given in Table~\ref{tab:parameters} for completeness. 
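The symmetrisation of the skewed posteriors can be sketched as follows (Python; the log-normal toy chain stands in for the real MCMC samples, from which the first $10\%$ would first have been discarded as burn-in):

```python
import numpy as np

def symmetrized(chain):
    """Adopt the mean of the 15.85th and 84.15th percentiles as the
    parameter value and half their separation as the 1-sigma error,
    so a skewed posterior yields symmetric, propagatable errors."""
    lo, hi = np.percentile(chain, [15.85, 84.15])
    return 0.5 * (lo + hi), 0.5 * (hi - lo)

# Toy check on a skewed (log-normal) posterior sample
rng = np.random.default_rng(0)
chain = np.exp(rng.normal(0.0, 0.1, 200_000))
value, sigma = symmetrized(chain)
```

Note that for a skewed distribution the adopted value deliberately differs slightly from the posterior median, by construction.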
\begin{figure*} \centering \includegraphics[width=\textwidth]{hess_mcmc_kate_new} \caption{Distributions of correlated parameters in the MCMC analysis of the dilution-corrected light curves. The insets show individual parameter (normalised) histograms to highlight the skew in the distributions.} \label{fig:mcmc_correlations} \end{figure*} \begin{table*} \centering \begin{tabular}{lll} \hline \hline \multicolumn{1}{l}{Stellar properties:} &Diluted&Dilution-corrected\\ \hline Names& WTS-2&---\\ & 2MASS 19345587+3648557&---\\ &SDSS J193455.87+364855.6&---\\ &WISE J193455.86+364855.6&---\\ &KIC 1173581&---\\ RA&19h34m55.87s (293.732792 deg)&---\\ Dec&+36d48m55.79s (36.815497 deg)&---\\ ${\rm T}_{\rm eff}$& $5000\pm250$ K&---\\ Spectral Type&K$2(\pm2)$V&---\\ $\log(g)^{a}$& $4.5\pm0.5$&---\\ $\log(g)^{b}$&$4.589\pm0.023$&$4.600\pm0.023$\\ $\rm [Fe/H]$&$+0.2^{+0.3}_{-0.2}$&---\\ $v\sin(i)$& $2.2\pm1.0$ km/s&---\\ $\xi$& $0.75\pm0.5$ km/s&---\\ $\log N(Li)_{\rm LTE}$& $<1.8$ dex&---\\ $M_{\star}^{b}$& $0.820\pm0.082$ ${\rm M}_{\odot}$& $0.820\pm0.082{\rm M}_{\odot}$\\ $R_{\star}^{b}$&$0.761\pm0.033{\rm R}_{\odot}$&$0.752\pm0.032{\rm R}_{\odot}$\\ $\rho_{\star}^{c}$&$1.86\pm0.15\rho_{\rm{\odot}}$&$1.93\pm0.16\rho_{\rm{\odot}}$\\ $R_{\rm cz}$&$\sim0.54{\rm R}_{\odot}$&$\sim0.54{\rm R}_{\odot}$\\ $M_{\rm cz}$&$\sim0.05{\rm M}_{\odot}$&$\sim0.05{\rm M}_{\odot}$\\ Age& $>600$ Myr&---\\ $A_{V}$&$0.27\pm0.07$ mag&---\\ Distance& $\sim1$ kpc&---\\ $\mu_{\alpha}\cos\delta$&$2.3\pm2.3$ mas/yr&---\\ $\mu_{\delta}$&$-1.9\pm2.3$ mas/yr&---\\ $U$&$-13.3\pm5.6$ km/s&---\\ $V$&$-0.3\pm7.7$ km/s&---\\ $W$&$-15.1\pm5.3$ km/s&---\\ \hline \multicolumn{1}{l}{System properties:} &Diluted&Dilution-corrected\\ \hline $P$&$1.0187074\pm7.1\times10^{-7}$ days&$1.0187068\pm6.5\times10^{-7}$ days\\ $T_{0}-2454317$&$0.81264\pm6.4\times10^{-4}$ HJD&$0.81333\pm6.5\times10^{-4}$ HJD\\ $R_{\rm p}/R_{\star}$&$0.1755\pm0.0018$&$0.1863\pm0.0021$\\ $b$&$0.597\pm0.032$&$0.584\pm0.033$\\ $i$&$83.43\pm0.53$ $^{\circ}$&$83.55\pm0.53$ $^{\circ}$\\ $a$&$0.01855\pm0.00062$ AU&$0.01855\pm0.00062$ AU\\ &$1.51\pm0.11$ $a_{Roche}$&$1.44\pm0.12$ $a_{Roche}$\\ $\bar{\chi}_{\rm LC}^{2}$&$457.8$&$387.2$\\ $K$&$256\pm32$ m/s&---\\ $V_{\rm sys}$&$-20.026\pm0.019$ km/s&---\\ $e$&$0$ (fixed)&---\\ \hline \multicolumn{1}{l}{Planet properties:} &Diluted&Dilution-corrected\\ \hline $M_{\rm P}$&$1.12\pm0.16{\rm M}_{\rm J}$&$1.12\pm0.16{\rm M}_{\rm J}$\\ $R_{\rm p}$&$1.300\pm0.058{\rm R}_{\rm J}$&$1.363\pm0.061{\rm R}_{\rm J}$\\ $\rho_{\rm P}$&$0.63\pm0.12$ g cm$^{-3}$ ($0.477\pm0.093{\rho}_{\rm J}$)&$0.54\pm0.11$ g cm$^{-3}$ ($0.413\pm0.080{\rho}_{\rm J}$)\\ $g_{b}$&$16.4\pm2.7$ m s$^{-2}$&$14.9\pm2.5$ m s$^{-2}$\\ $F_{inc}$&$(1.29\pm0.29)\times10^{9}$ erg/s/cm$^{2}$&$(1.26\pm0.29)\times10^{9}$ erg/s/cm$^{2}$\\ $\rm T_{\rm eq}^{d}$&$2000\pm100$ K&$2000\pm100$ K\\ $\Theta$&$0.0389\pm0.0070$&$0.0371\pm0.0068$\\ \hline \end{tabular} \caption{Characterisation of the WTS-2 system. The `Dilution-corrected' column is based on an analysis in which the light curves were corrected for contamination by a faint red source contributing $10.4\pm1\%$ and $19.0\pm4\%$ of the flux in the aperture in the $i$- and $J$-bands, respectively (see Section~\ref{sec:lucky}). The `Diluted' column gives the results without correction for the faint red source. $^{a}$From spectroscopic analysis. $^{b}$From light curve mean stellar density and stellar evolution isochrones. $^{c}$From light curve analysis. The proper motions $\mu_{\alpha}\cos\delta$ and $\mu_{\delta}$ are from the SDSS DR7 database. The space velocities $U,V,W$ are with respect to the Sun (heliocentric) but for a left-handed coordinate system, i.e. $U$ is positive away from the Galactic centre. $\bar{\chi}_{\rm LC}^{2}$ is the mean of the $\chi^{2}$ values in the MCMC runs. $^{d}$ Equilibrium temperature assuming $A_{B}=0$ and $f=2/3$. 
Note that we used the equatorial radius of Jupiter (${\rm R}_{\rm J}=7.1492\times10^{7}$ m). $a_{Roche}$ is the Roche limit separation, i.e. the critical distance inside which the planet would lose mass via Roche lobe overflow. $g_{b}$ is the planet surface gravity according to equation 7 in \citet{Sou08}. $\Theta$ is the Safronov number ($\Theta=\frac{1}{2}(V_{esc}/V_{orb})^{2}=(a/R_{P})(M_{P}/M_{\star})$) \citep{Han07}. The errors on the light curve parameters are the $68.3\%$ confidence interval, while the parameter value is the mean of the $68.3\%$ confidence level boundaries, such that the errors are symmetric. Despite its KIC name, WTS-2 b is unfortunately not in the Kepler field-of-view.} \label{tab:parameters} \end{table*} \subsubsection{$J$-band secondary eclipse limits}\label{sec:secondary} WTS-2 b orbits very close to its host star and receives a high level of incident radiation ($F_{inc}\sim1.3\times10^{9}$ erg/s/cm$^{2}$). Following the prescription of \citet{Lop07b} and assuming that the atmosphere has zero albedo ($A_{B}=0$) and instantaneously re-radiates the incident stellar flux (i.e. no advection, $f=2/3$), the expected equilibrium temperature of the planet is $\rm T_{\rm eq}=2000\pm100$ K. Although this is not as high as the hottest hot Jupiters (e.g. KOI-13 b has $\rm T_{\rm eq}\sim2900$ K, \citealt{Mis12}), WTS-2 b is one of the hottest planets orbiting a K-dwarf. Adopting this value as the maximum day-side temperature of the planet and approximating the spectra of the planet and star as black bodies, we expect the observed secondary eclipse depth in the WTS $J$-band light curve to be $\sim0.63$ mmag. This value is calculated using the dilution-corrected light curve analysis and then adding back in the $\sim19\%$ contamination of the WTS $J$-band light curve by the additional red source in the aperture. 
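For reference, the equilibrium temperature quoted above follows from $T_{\rm eq}=T_{\rm eff}\sqrt{R_{\star}/a}\,[f(1-A_{B})]^{1/4}$; a sketch (Python, with $a/R_{\star}\simeq5.3$ implied by the fitted $a$ and $R_{\star}$):

```python
def t_equilibrium(t_eff, r_star_over_a, bond_albedo=0.0, f=2.0 / 3.0):
    """Planet equilibrium temperature in the parameterisation used in
    the text: T_eq = T_eff * sqrt(R_star/a) * [f * (1 - A_B)]**0.25,
    with f = 2/3 for instantaneous re-radiation (no advection)."""
    return t_eff * r_star_over_a ** 0.5 * (f * (1.0 - bond_albedo)) ** 0.25

# WTS-2 b: T_eff = 5000 K and a/R_star ~ 5.3 give T_eq ~ 2000 K
t_eq = t_equilibrium(5000.0, 1.0 / 5.3)
```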
The out-of-eclipse data in the WTS-2 b $J$-band light curve have an RMS of $\sim4$ mmag, which is typical of the WTS $J$-band light curves at $J=13.9$ mag. There are $88$ data points in the expected secondary eclipse of the WTS-2 b $J$-band light curve (according to the best-fitting model). Assuming white noise only, this would result in a precision of $\sim0.43$ mmag on the secondary eclipse depth. We performed a basic linear regression fit to the WTS-2 $J$-band light curve with a model from the \citet{Mand02} routines to attempt to detect the secondary eclipse of WTS-2 b. The best-fitting model is shown in Figure~\ref{fig:sec_eclip} and corresponds to a flux ratio of $(F_{P}/F_{\star})=0.93\times10^{-3}\pm0.69\times10^{-3}$. The large uncertainty is unsurprising and means that the WTS survey light curve is not capable of detecting the secondary eclipse, hence we are unable to constrain the properties of the planet's day-side from the WTS data. The sparse sampling of the eclipse in the WTS light curve and the randomised observing pattern of the survey over many nights make it difficult to monitor the systematic effects during a single eclipse, hindering a robust measurement of the flux ratio. However, we do note that we find no evidence for an anomalously deep event, which supports the planetary nature of WTS-2 b. A single dedicated night of observation would in principle be able to measure the eclipse depth to a sufficient precision. The potential for follow-up studies of the planet's atmosphere is discussed in Section~\ref{sec:followup}. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{sec_eclip} \caption{The observed WTS-2 b $J$-band light curve, i.e. without correction for dilution, zoomed around the expected secondary eclipse phase. The observed data are shown as grey dots, while the black filled circles are the data binned in phase with uncertainties equal to the standard error on the mean. 
The black solid horizontal line shows a flat model, i.e. no secondary eclipse detection. The solid red line is the best-fitting model light curve from a basic linear regression analysis, which gave a depth of $(F_{P}/F_{\star})=0.93\times10^{-3}\pm0.69\times10^{-3}$, i.e. a non-detection. The model was fitted to data between phases $0.4<\phi<0.6$.} \label{fig:sec_eclip} \end{figure} \subsection{Radial velocity analysis}\label{sec:precisionRV} The RV curve has been modelled with constraints from the light curve fit, rather than being fitted simultaneously with the light curve data, due to the limited amount of RV data. To fit the RV curve, we adopted the well-defined period and transit ephemeris from the light curves and fixed these parameters in the RV curve model. We also fixed the orbit to be circular as we do not have enough data to model an eccentric orbit. Furthermore, a circular orbit is arguably the most reasonable approximation for a planet so close to its host star (see e.g. \citealt{Ande12}). The model takes the form: \begin{equation} {\rm RV}=V_{\rm sys}+K_{\star}\sin(2\pi\phi) \label{eqn:sinefit} \end{equation} \noindent where $\phi$ is the phase, $K_{\star}$ is the RV semi-amplitude, and $V_{\rm sys}$ is the systemic velocity of the WTS-2 system. The phase-folded radial velocities and the best-fitting model are plotted in Figure~\ref{fig:rv_het}, while Table~\ref{tab:parameters} gives the resulting model parameter values. In the fit, the RV error bars have been scaled by $\sqrt{\chi^{2}_{\nu}}=1.32$ such that $\chi^{2}_{\nu}=1$. This accounts for possible under-estimation of the RV errors, or conversely, reflects the quality of the fit, and acts to enlarge the uncertainties on the model parameters, which are the $1\sigma$ errors from the $\chi^{2}$-fit. The best-fitting model gives a planet mass of $1.12\pm0.16 M_{J}$, where the error is calculated by propagating the errors of the relevant observables ($K_{\star}$, $M_{\star}$, $P$, and $i$). 
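For a circular orbit in the $M_{\rm P}\ll M_{\star}$ limit, the planet mass follows from the semi-amplitude as $M_{\rm P}\sin i=K_{\star}\,(P/2\pi G)^{1/3}M_{\star}^{2/3}$; the sketch below (Python) reproduces the quoted $1.12~{\rm M}_{\rm J}$ from the fitted values (error propagation omitted):

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
M_JUP = 1.898e27       # kg

def planet_mass_mjup(k_ms, m_star_msun, period_days, incl_deg):
    """Planet mass (in Jupiter masses) from the RV semi-amplitude for
    a circular orbit, valid in the limit M_p << M_star:
    M_p sin i = K * (P / (2*pi*G))**(1/3) * M_star**(2/3)."""
    p_s = period_days * 86400.0
    m_star = m_star_msun * M_SUN
    mp_sini = k_ms * (p_s / (2.0 * math.pi * G)) ** (1.0 / 3.0) \
              * m_star ** (2.0 / 3.0)
    return mp_sini / math.sin(math.radians(incl_deg)) / M_JUP

# WTS-2 b: K = 256 m/s, M_star = 0.82 M_sun, P = 1.0187 d, i = 83.55 deg
m_p = planet_mass_mjup(256.0, 0.82, 1.0187068, 83.55)   # ~1.12 M_J
```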
\section{Eliminating false positives}\label{sec:falsepos} Wide-field transit surveys invariably suffer from transit mimics, usually eclipsing binaries, either as grazing systems or as eclipsing binaries contaminated by a source of third light. Given that WTS-2 b is a relatively unusual planetary system, and given the presence of a faint third-light source in our aperture, it is important to investigate viable false positive scenarios. \subsection{Non-blended false positives} Due to the faintness of our target, before proceeding to precision RV measurements with the 9.2-m HET, we carried out reconnaissance intermediate-resolution spectroscopy with the 3.5-m telescope at CAHA to check for large RV variations indicative of non-blended false positives, such as a grazing binary, or a binary containing two identical size stars whose light curve has been erroneously phase-folded on half of the true orbital period. The CAHA spectra were single-lined, with no evidence for a double peak in the cross-correlation functions, indicating that the system was not a non-blended false positive. Such scenarios would also have been reflected in the stellar density measured from the transit shape, as the density scales as $P^{-2}$ \citep{Sea03}. The measured RVs, given in Table~\ref{tab:recon}, had an RMS of $1.1$ km/s and were consistent with no significant RV variation within the precision of the measurements, ruling out companion masses $>5{\rm M}_{\rm J}$ for non-blended scenarios. \subsection{Blended false positives} Despite the orders of magnitude larger RV variations expected for a binary system, in the case where the binary spectral lines are blended with a brighter foreground star, the overall variations in the cross-correlation profile can have significantly smaller amplitudes, potentially as small as those expected for a giant planet. Such a system would produce significant line-profile variations, so we measured the bisector spans (i.e. 
the difference between the bisector values at the top and at the bottom of the correlation function, \citealt{Tor05}) for each epoch of high-resolution HET spectra. In the case of contamination from a blended binary, or stellar atmospheric oscillations, we would expect to measure bisector span values that are consistently different from zero and vary as a strong function of the measured radial velocities \citep{Que01,Man05}. Figure~\ref{fig:bisectors} shows the measured bisector spans as a function of phase and RV. Although the bisector span values are scattered around zero, they have large errors and an RMS scatter ($\sim1$ km/s) that exceeds the measured RV semi-amplitude ($0.256$ km/s). The result is that they are too noisy to conclusively rule out any blended eclipsing binary scenario. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{bisectors} \caption{Bisector spans for WTS-2 b measured from high-resolution HET spectra, as a function of phase and as a function of the change in RV. Due to the large errors on the bisectors, we cannot use them to assess the possibility of a blended eclipsing binary in the aperture and instead use other methods detailed in this section.} \label{fig:bisectors} \end{figure} Instead, to further rule out blended eclipsing binary scenarios, we consider the following information. Firstly, the transit depths in the $i$- and $J$-bands are very consistent. Thus, if the light curves were generated by a background eclipsing binary blended with a bright foreground K-dwarf, then the colour (i.e. surface temperature) of the eclipsed star should also be similar to that of a K-dwarf. Secondly, the mean stellar density derived from the best-fitting transit model is in excellent agreement with the stellar density inferred from spectroscopic observations of the brightest source in the aperture, i.e. a K-dwarf. Again, this implies the eclipsed star should be similar in nature to the spectroscopically observed K-dwarf.
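The first argument can be quantified with a simple blackbody sketch: a red blend dilutes the $J$-band transit more than the $i$-band one, so closely matching depths in the two bands limit the amount of red third light. The effective wavelengths, temperatures, and the $10\%$ blend fraction used below are illustrative assumptions.

```python
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck, speed of light, Boltzmann (SI)

def planck(wav, T):
    """Blackbody spectral radiance B_lambda (arbitrary normalisation)."""
    return 1.0 / (wav**5 * (math.exp(H * C / (wav * KB * T)) - 1.0))

# Illustrative effective wavelengths and temperatures (assumed values)
lam_i, lam_j = 0.76e-6, 1.25e-6        # i- and J-band [m]
t_target, t_blend = 5000.0, 3600.0     # K-dwarf target and red contaminant [K]

# Suppose the contaminant supplies 10% of the i-band flux; scale its
# contribution to the J band using blackbody flux ratios.
f3_i = 0.10 / 0.90                     # F_3rd / F_target in the i band
colour = (planck(lam_j, t_blend) / planck(lam_j, t_target)) / \
         (planck(lam_i, t_blend) / planck(lam_i, t_target))
f3_j = f3_i * colour                   # F_3rd / F_target in the J band

# Third light dilutes the observed depth by F_target / (F_target + F_3rd)
dil_i = 1.0 / (1.0 + f3_i)
dil_j = 1.0 / (1.0 + f3_j)
print(f"depth dilution: i-band {dil_i:.3f}, J-band {dil_j:.3f}")
```

A faint ($\sim10\%$) red blend therefore changes the relative $i$- and $J$-band depths by only a few per cent, whereas a background M-dwarf eclipsing binary dominating the eclipsed light would be strongly chromatic.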
Now, if we assume a significant fraction of the light in the observed light curves originates from the foreground K-dwarf and subtract it, we find that the transit can no longer be fitted by a K-dwarf star, instead requiring a cooler, denser star to fit the transit shape, which contradicts our first two statements. This already indicates that a blended eclipsing binary scenario can be rejected but it is important to robustly rule out the detected red object within the aperture as the source of the occultations. To further explore the role of additional light in the observed light curves, we use constraints provided by a simultaneous modelling of the $i$- and $J$-band light curves. Following a method similar to that outlined by \citet{Sne09} and \citet{Kopp13} for assessing the blend scenarios for OGLE-TR-L9 b and POTS-1 b, we simulate background eclipsing binary systems blended by different amounts of light from a third star using the \citet{Mand02} algorithms. This analysis was carried out on the observed light curves, i.e. the light curves that have not been corrected for the known amount of dilution by the faint red source identified in the AstraLux imaging (see Section~\ref{sec:lucky}), and can thus be considered an independent test of the blending fraction. In the simulations, we vary two parameters: i) the difference in surface temperature between the eclipsed star and the blending source, $\Delta T$, and ii) the fraction of light from the blending source ($0<F_{3^{rd}}<90\%$). The combined light should produce a spectrum with a temperature that matches the spectroscopic measurement, i.e. $5000$ K. Any small fraction of light originating from the eclipsing star is included in $F_{3^{rd}}$.
We exclude models which give stellar densities inconsistent with stellar evolutionary tracks, although we allow any evolutionary status for the eclipsed star since we do not insist that the contaminant is bound to the observed K-dwarf, even though this is quite likely (see Section~\ref{sec:lucky}). Figure~\ref{fig:blends} shows the upper limits on the allowed eclipsed star density across a range of masses based on the \citet{Sie00} evolutionary tracks. For each combination of $F_{3^{rd}}$ and $\Delta T$, we allow the binary radius ratio and the impact parameter to vary freely in the simulation, while the density of the eclipsed star is limited to be below the maximum density allowed based on the temperature of the eclipsed star ($T_{ecl}$). The fractional contribution of the light from the third star is also adjusted in each waveband based on blackbody spectra. Given that we are most concerned about blends with a background eclipsing M-dwarf, we set the limb darkening coefficients to be appropriate for a $3500$ K eclipsed star throughout. Although this is not strictly valid for hotter models, the effect of the limb darkening is marginal compared to the large chromatic variations caused by observing in different filters. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{temp_dens} \includegraphics[width=0.5\textwidth]{blend_contours} \caption{{\bf Top:} Stellar evolutionary tracks showing the variation in mean stellar density with stellar surface temperature for a range of ages and masses, based on the \citet{Sie00} models. The dashed line marks the maximum possible density for a given surface temperature. The horizontal dotted line marks the stellar density measured from the best-fitting model to the dilution-corrected light curves. {\bf Bottom:} Confidence intervals from the simultaneous $\chi^{2}$ analysis of all possible blended eclipsing binary scenarios fitted to the $i$- and $J$-band light curves.
The y-axis gives the blended light fraction for the $i$-band and the x-axis gives the difference in surface temperature between the eclipsed star and the third light star. The combined light curves significantly favour a low level of blending by a source that is redder than the main contributor of light in the aperture. Note that blended background M-dwarf eclipsing binary solutions lie in the upper left of this plot and are very poor fits to the light curves in the two different bandpasses.} \label{fig:blends} \end{figure} The bottom panel of Figure~\ref{fig:blends} shows the $\chi^{2}$ confidence contours of fitting blended transit models to the two light curves simultaneously. $\Delta T=0$ K corresponds to $T_{ecl}=T_{3^{rd}}=5000$ K, while $F_{3^{rd}}$ refers to the fraction of blending light in the $i$-band. It shows that the data can only be fitted well by a low level of blending light from a source that is redder than the occulted star. In fact, the preferred solution is for a blending fraction of $\sim10\%$ by an object of $T_{3^{rd}}\sim3600$ K. This matches extremely well with the independent measurement of the blended light fraction from the AstraLux imaging (see Section~\ref{sec:lucky}). If the observed light curve had been generated by a foreground K-dwarf diluting eclipses from a background M-dwarf eclipsing binary, the simulations would have congregated in the upper left corner of the plot. However, these models produce transit shapes that are too wide, too V-shaped or too colour-dependent to match the data. Finally, we note that it is unlikely that star spots are responsible for the RV variations as a $\sim1$-day period would correspond to a rotational velocity of $\sim40$ km/s for a K2V star, which is inconsistent with our measured $v\sin i$ unless there is a high degree of spin-orbit axis misalignment, which seems unlikely for cool dwarfs \citep{Win10}.
All of these factors combined lead us to conclude that the planetary nature of the detected system is robust, despite the lack of a conclusive bisector span analysis. \section{Discussion}\label{sec:discussion} We have presented WTS-2 b, the second planet to have been discovered in the infrared light curves of the WTS. The notable property of this otherwise typical hot Jupiter is its orbital separation of just $a=0.01855$ AU, which places the planet in the small but growing sample of extreme giant planets in sub-$0.02$ AU orbits. The planet's orbit is just 1.5 times the tidal destruction radius, i.e. the critical separation inside which the planet would lose mass via Roche lobe overflow, $a_{Roche}\approx2.16R_{P}(M_{\star}/M_{P})^{1/3}$ \citep{Fab05,Ford06}. Figure~\ref{fig:roche} shows the distribution of $a/a_{Roche}$ as a function of stellar mass for transiting exoplanets, marking WTS-2 b as one of the closest systems to tidal destruction, particularly for low-mass host stars. Throughout this discussion, we use parameter values for WTS-2 b derived from the analysis of the dilution-corrected light curve (see Section~\ref{sec:transitfit}). \subsection{Remaining lifetime} The close proximity of WTS-2 b to its host star suggests that its orbital evolution is dominated by tidal forces (e.g. \citealt{Ras96,Paz02}). The tide raised on the star by the planet exerts a strong torque that transfers the angular momentum of the planetary orbit to the stellar spin (e.g. \citealt{Gold66,Zah77,Hut81,Egg98}), causing the planet to spiral inwards and the star to spin up. In our case, the tide raised on the planet by the star is ignored as we have (reasonably) assumed that the planet is on a circular orbit and synchronised. Following \citet{Mats10}, we find that the total angular momentum, $L_{\rm tot}$, in the WTS-2 b system compared to the critical angular momentum, $L_{\rm crit}$, required for the star--planet system to reach a state of tidal equilibrium, i.e.
dual synchronisation, is $L_{\rm tot}/L_{\rm crit}\sim0.57$, which is $<1$, indicating that WTS-2 b will never reach a stable orbit and will continue to spiral in towards the host star under tidal forces until it is inside $a_{\rm Roche}$, where it will presumably be destroyed by Roche lobe overflow \citep{Gu03}. First, let us estimate how long it will take before the planet meets its demise and if the orbital decay will be directly observable on the decade timescale, according to the standard $Q_{\star}^{\prime}=10^{6}$ calibration. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{roche_2014} \caption{The distribution of $a/a_{Roche}$ as a function of stellar mass for all known transiting exoplanets. WTS-2 b is marked by the red filled square and is one of the closest exoplanets to the separation at which it would begin to lose mass. A few other systems of note are labelled. The horizontal dashed line marks the ideal circularisation radius \citep{Ford06}, i.e. where highly eccentric orbits caused by e.g. planet-planet scattering are circularised. Data from exoplanets.org.} \label{fig:roche} \end{figure} To estimate the remaining lifetime of WTS-2 b, we take a simple model of tidal interactions, namely the damping of the equilibrium tide by viscous forces inside the star, i.e. the hydrostatic adjustment of the star to the imposed gravitational field of the planet, with the tidal bulge lagging the planet by a constant time \citep{Hut81,Egg98}. Note that we chose this model as it has been shown by \citet{Soc12} that the constant time lag model has a better physical motivation than the constant phase lag model \citep{Gold66}, as it is independent of the orbital configuration.
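For reference, the proximity to the tidal destruction radius quoted above can be reproduced with the \citet{Ford06} scaling. The planet radius and host-star mass used here are illustrative values (a typical inflated hot-Jupiter radius and a K2V mass), not the adopted fit parameters.

```python
# Proximity to the tidal destruction radius, a_Roche = 2.16 R_p (M_star/M_p)^(1/3)
AU, R_JUP, M_JUP, M_SUN = 1.496e11, 7.149e7, 1.898e27, 1.989e30  # SI units

a      = 0.01855 * AU     # orbital separation [m]
R_p    = 1.3 * R_JUP      # assumed (inflated) planet radius [m]
M_p    = 1.12 * M_JUP     # planet mass [kg]
M_star = 0.82 * M_SUN     # assumed K2V host-star mass [kg]

a_roche = 2.16 * R_p * (M_star / M_p) ** (1.0 / 3.0)
print(f"a_Roche ~ {a_roche / AU:.4f} AU, a/a_Roche ~ {a / a_roche:.2f}")  # ~1.5
```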
In this model, the rate of semi-major axis decay is given by \citep{Mats10}: \begin{equation} \frac{\dot{a}}{a}=-6k_2\Delta t \frac{M_{P}}{M_{\star}}\left(\frac{R_{\star}}{a}\right)^5n^2, \label{eqn:adot} \end{equation} under the simplifying assumptions that the relevant tidal frequency is simply the planet's mean motion ($n=2\pi/P$), the orbit is circular ($e=0$), the planet rotation is synchronised with the orbit, and that the star is non-rotating. Here, $k_{2}$ is the star's second-order Love number (related to the star's density profile) and $\Delta t$ is the constant time lag. In this model, assuming that the planet does not change the star's spin significantly, integrating equation~\ref{eqn:adot} gives the future lifetime: \begin{equation} t_{\rm life}=\frac{a^{8}}{48k_{2}\Delta t\,GM_{P}R_{\star}^{5}}. \label{eqn:tlife} \end{equation} Note that $t_{\rm life}$ is the time until $a=0$ AU, but that the difference in time between this and $a=a_{Roche}$ is negligible. As mentioned previously, the strength of tidal forces is commonly parametrised by means of the tidal quality factor $Q_{\star}^{\prime}$, with a higher $Q_{\star}^{\prime}$ meaning weaker tidal dissipation. While in the highly simplified constant phase lag model \citep{Gold66} $Q_{\star}^{\prime}$ is a constant, this is not in general true. In our adopted constant time lag model, $Q_{\star}^{\prime}$ is related to the lag time by \citep{Mats10}: \begin{equation} Q_{\star}^{\prime}=\frac{3}{4k_2\Delta tn}. \label{eqn:qprime} \end{equation} Adopting $Q_{\star}^{\prime}=10^{6}$ for the current-day WTS-2 b system, based on previous studies of $Q_{\star}^{\prime}$ \citep{Trill98,Mei05,Jack08}, we find a remaining lifetime of $\sim40$ Myr, which is just $7\%$ of the youngest possible age of the system ($\gtrsim600$ Myr), and $<1\%$ for the more typical older field star ages allowed by the stellar model isochrones used in Section~\ref{sec:mass-age}.
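This lifetime estimate can be sketched in a few lines by combining equations~\ref{eqn:tlife} and~\ref{eqn:qprime} to eliminate $k_{2}\Delta t$. The system parameters adopted below (host mass, host radius) are illustrative approximations rather than the adopted fit values.

```python
import math

G, AU = 6.674e-11, 1.496e11
M_SUN, R_SUN, M_JUP = 1.989e30, 6.957e8, 1.898e27
MYR = 3.156e13                     # seconds per Myr

# Illustrative present-day parameters for WTS-2 b
a      = 0.01855 * AU              # separation [m]
M_star = 0.82 * M_SUN              # assumed host mass [kg]
R_star = 0.75 * R_SUN              # assumed host radius [m]
M_p    = 1.12 * M_JUP              # planet mass [kg]
Q_star = 1.0e6                     # stellar tidal quality factor

n = math.sqrt(G * M_star / a**3)   # mean motion [rad/s]

# Substituting k2*dt = 3 / (4 Q' n) into the integrated decay law gives
# t_life = a^8 n Q' / (36 G M_p R_star^5)
t_life = a**8 * n * Q_star / (36.0 * G * M_p * R_star**5)
print(f"remaining lifetime ~ {t_life / MYR:.0f} Myr")   # ~40 Myr
```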
Two situations arise from this, either i) the system is undergoing a rapid orbital decay and is genuinely close to destruction, in which case we can measure the tidal decay directly by monitoring the transit time shift over tens of years, or ii) $Q_{\star}^{\prime}$ is larger so that the system decays more slowly, or has a more complicated dependency on other system parameters. \subsection{Transit arrival time shift}\label{sec:tshift} In the case of scenario i), we can calculate how long it would take to observe a significant shift in the transit arrival time of WTS-2 b. For this, we need to know the current rate of orbital angular frequency change ($dn/dt$), which can be calculated via the chain rule using equation~\ref{eqn:adot} and the derivative of Kepler's third law in terms of $n$ with respect to $a$ (i.e. $dn/da$): \begin{eqnarray} \frac{dn}{dt} = \left(\frac{dn}{da}\right)\dot{a} =\left(\frac{27}{4}\right)n^{2}\left(\frac{M_{P}}{M_{\star}}\right)\left(\frac{R_{\star}}{a}\right)^{5}\left(\frac{1}{Q_{\star}^{\prime}}\right). \label{eqn:dndt} \end{eqnarray} For WTS-2 b, assuming $Q_{\star}^{\prime}=10^{6}$, we find that $dn/dt=1.0594599\times10^{-20}$ rad/s$^{2}$. To calculate the expected transit time shift, $T_{\rm shift}$, after a time $T$, we note that the angle $\theta$ swept out by a planet orbiting with angular frequency $n=d\theta/dT$ increasing at a constant rate of $dn/dT$ is, via Taylor expansion: \begin{equation} \theta=n_{0}T + \frac{1}{2}T^{2}\left(\frac{dn}{dT}\right). \end{equation} The angular difference between the linear ephemeris and the quadratic ephemeris after $T$ years is simply the quadratic term, thus the transit arrival time shift is: \begin{equation} T_{\rm shift}=\frac{1}{2}T^{2}\left(\frac{dn}{dT}\right)\left(\frac{P}{2\pi}\right), \label{eqn:shift} \end{equation} where $P$ is the orbital period. Note that equation~\ref{eqn:dndt} and equation~\ref{eqn:shift} carry the same assumptions as equation~\ref{eqn:adot}.
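Equations~\ref{eqn:dndt} and~\ref{eqn:shift} can be evaluated in a few lines. The sketch below adopts $Q_{\star}^{\prime}=10^{6}$ and the same illustrative system parameters as above (the host mass and radius are assumed values, not the adopted fit).

```python
import math

G, AU = 6.674e-11, 1.496e11
M_SUN, R_SUN, M_JUP = 1.989e30, 6.957e8, 1.898e27
YR = 3.156e7                                # seconds per year

# Illustrative WTS-2 b parameters and Q' = 1e6
a, Q_star = 0.01855 * AU, 1.0e6
M_star, R_star, M_p = 0.82 * M_SUN, 0.75 * R_SUN, 1.12 * M_JUP

n = math.sqrt(G * M_star / a**3)            # mean motion [rad/s]
P = 2.0 * math.pi / n                       # orbital period [s]

# Spin-up of the mean motion (the dn/dt equation above)
dn_dt = (27.0 / 4.0) * n**2 * (M_p / M_star) * (R_star / a)**5 / Q_star

# Transit arrival-time shift accumulated after T years (quadratic term)
T = 15.0 * YR
t_shift = 0.5 * T**2 * dn_dt * P / (2.0 * math.pi)
print(f"dn/dt ~ {dn_dt:.2e} rad/s^2; shift after 15 yr ~ {t_shift:.0f} s")
```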
Assuming that current instrumentation can reach a timing accuracy of $5$ seconds (see e.g. \citealt{Gil09}), the decay of WTS-2 b's orbit would be detectable after $\sim15$ years ($T_{\rm shift}\sim17$ s for $Q_{\star}^{\prime}=10^{6}$), and it remains the best target to observe this phenomenon for early-to-mid K-dwarf host stars. If no detectable transit time shift is found in the WTS-2 b system, it provides a stringent lower limit for the value of $Q_{\star}^{\prime}$ in the sparsely sampled K-dwarf regime, thus helping to constrain tidal evolution theories that argue $Q_{\star}^{\prime}$ is dependent on the depth and mass of the convective outer envelope of the host star \citep{Bark09,Bark10,Pen11}. We have predicted the $T_{\rm shift}$ values for a sample of known transiting hot Jupiters ($M_{P}>0.3{\rm M}_{\rm J}$) to determine if direct observational constraints across the entire mass range of planet host stars are achievable within a decade. Such constraints could be used to address the dependence of $Q_{\star}^{\prime}$ on the depth of the stellar convective envelope. The sample was selected from the exoplanets.org database, choosing systems with approximately circular orbits ($e<0.01$), and contains $101$ planets (as of $20^{\rm th}$ January 2014). We assume these hot Jupiter systems contain only one planet and do not have stellar companions, such that additional transit timing variations can be ignored. Choosing near circular orbit systems also allows us to neglect issues such as precession of the orbit, the stellar oblateness, and the value of the planetary tidal quality factor (although these effects are likely to be small). For these reasons, the well-known close-in hot Jupiter WASP-12 b is excluded from our sample, owing to its stellar and planetary companions, and slightly eccentric orbit \citep{Hus11,Berg13,Maci13}.
The left panel of Figure~\ref{fig:shift} shows our predicted $T_{\rm shift}$ values for the sample after 10 years as a function of stellar host mass assuming $Q_{\star}^{\prime}=10^{6}$ in equation~\ref{eqn:dndt}. There are a number of systems with feasibly observable variations in their transit arrival time whose host stars span a variety of stellar internal structures and could provide direct observational constraints on $Q_{\star}^{\prime}$ within a decade with current instrumentation. The right hand panel of Figure~\ref{fig:shift} depicts how long one would need to wait in order to place a lower limit constraint on $Q_{\star}^{\prime}$ in our adopted model for some of the most perturbed systems; e.g. after $25$ years, if no detectable transit time shift is observed, one could rule out values of $Q_{\star}^{\prime}<1\times10^{7}$ across a wide range of stellar masses. Note however that future transit arrival time shift measurements require similarly accurate measurements of the planets' current-day periods and ephemerides. Intriguingly, tentative measurements of period decay rates in WASP-43 b and OGLE-TR-113 b for example, which orbit M- and K-dwarf host stars, suggest relatively small values of $Q_{\star}^{\prime}$, on the order of $10^{3}-10^{4}$; however, further data over several years are required to confirm these results \citep{Ada10,Ble14,Murg14}. \begin{figure*} \centering \includegraphics[width=0.49\textwidth]{shift_blend_2014} \includegraphics[width=0.49\textwidth]{obs_constraint_blend_2014} \caption{{\bf Left:} Transit time shifts after 10 years for known transiting hot Jupiters assuming $Q_{\star}^{\prime}=10^{6}$ in equation~\ref{eqn:dndt}. The more significantly shifted planets are labelled. The horizontal dotted line marks the 5 second timing accuracy possible with current instrumentation. After 10 years, strong observational constraints on $Q_{\star}^{\prime}$ would be available across the full stellar mass range of exoplanet host stars.
{\bf Right:} The amount of time after discovery one would need to wait to detect $T_{\rm shift}=5$ seconds for a given $Q_{\star}^{\prime}$, e.g. after $T\sim25$ years, one could rule out $Q_{\star}^{\prime}\leq10^{7}$ across a range of stellar masses if no detectable shift was observed.} \label{fig:shift} \end{figure*} \subsubsection{WASP-18 b}\label{sec:wasp-18} Importantly, we note here that for the most extreme planet, WASP-18 b, equations~\ref{eqn:dndt} and~\ref{eqn:shift} give $dn/dt=5.53\times10^{-19}$ rad/s$^{2}$ (which corresponds to a rate of change of period of $dP/dt=-0.018$ s/yr) and a corresponding $T_{\rm shift}\sim356$ seconds after $10$ years for $Q_{\star}^{\prime}=10^{6}$, which is significantly more than the predicted $T_{\rm shift}=28$ seconds reported in \citet{Hel09}. There are several differences between our methods of calculating $T_{\rm shift}$ and the WASP-18 b discovery paper, for example, \citet{Hel09} used the tidal evolution formalism of \citet{Dobb04}, which defines $Q_{\star}^{\prime}$ to be a factor of 2 different from that in our adopted formalism, and they also included the effects of stellar rotation and the stellar wind which we have neglected here. However, none of these factors are sufficient to explain the order of magnitude difference between the predicted $T_{\rm shift}$ values for WASP-18 b. We also note that a $T_{\rm shift}$ of order 100s of seconds for WASP-18 b is consistent with scaling the theoretical calculations of \citet{Pen11} for $Q_{\star}^{\prime}=10^{6}$. Given that our equations give a similar remaining lifetime for WASP-18 b ($\sim0.72$ Myr) to that reported by \citeauthor{Hel09} ($0.65$ Myr), our orbital evolution tracks and in-spiral times appear to agree. We have therefore concluded that a simple numerical error occurred in the WASP-18 b discovery paper at the final stage of converting the orbital evolution into $dP/dt$ and a corresponding transit arrival time shift (Collier Cameron, priv.
comm), and that under the assumption of $Q_{\star}^{\prime}=10^{6}$, observable shifts in the transit timing of WASP-18 will arrive much earlier than previously thought. In fact, we calculate that a shift of $28$ seconds for the WASP-18 b transit would take only $\sim3$ years, which is an encouraging prospect. \citet{Max13} found no evidence for variations in the times of transit from a linear ephemeris for WASP-18 b greater than 100 seconds after 3 years, but if $Q_{\star}^{\prime}$ is genuinely close to $10^{6}$, we expect to see evidence of this much sooner than a decade. We also note that our predicted timing variation for WASP-18 b over 10 years is now much larger than that predicted to be caused by the Applegate effect on similar timescales \citep{Wat10}. \subsection{Current observational constraints on $Q_{\star}^{\prime}$}\label{sec:pop} Rather than waiting to observe a decaying orbital period by measuring transit arrival time shifts, can we already rule out low values of $Q_{\star}^{\prime}$ ($\lesssim10^{6}$)? For example, in the individual case of WASP-19 b, \citet{Heb10} suggest $Q_{\star}^{\prime}\sim10^{7}$, else the probability of observing the planet in its current evolutionary state is unlikely given the known population of hot Jupiters. However, the growing number of very close-in hot Jupiters suggests that the population should be treated as a whole. \citet{Pen12} performed a population study of transiting exoplanets in circular orbits around stars with surface convective zones, to find a $Q_{\star}^{\prime}$ that would give a statistically likely distribution of remaining planet lifetimes. They assumed that the orbits of the planets initially evolved only under gas disc migration and then by tidal forces alone since the zero-age main-sequence.
They integrated the orbital evolution from $5$ Myr based on the given ages of the host stars, and argued that $Q_{\star}^{\prime}\gtrsim10^7$ in order to fit the observed population at the $99\%$ confidence level. Their largest source of uncertainty was the error on the stellar ages, but even accounting for this they still found inconsistency with low values of $Q_{\star}^{\prime}$. However, \citet{Pen12} point out that their result may not be valid for other giant planet migration mechanisms, such as dynamical scattering, and that their model is not valid for stars without surface convective layers so they excluded any host star with $M_{\star}>1.25{\rm M}_{\odot}$, which could be subject to a different mode of tidal dissipation. We also note that high values of $Q_{\star}^{\prime}$ for those planets deposited close to the host star before the dispersal of the gas disk ($\lesssim10$ Myr, \citealt{Her07,Wya08}) are perhaps expected as the tidal migration would need to be slow over the host star's main sequence lifetime. Here, we attempt a complementary study to that of \citet{Pen12}, in that we assume the population of hot Jupiters instead migrated by scattering onto eccentric orbits (it is interesting to note here that the likely bound M-dwarf at $0.6$ arcsec separation from WTS-2 is a potential source of Kozai perturbations which could also trigger the migration of the gas giant). Planets scattered such that their eccentric orbit just grazes $a=a_{Roche}$ are tidally circularised to $2a_{Roche}$ \citep{Ford06,Nag08}, and any planet inside $2a_{Roche}$ at the present day is assumed to have migrated under tidal forces alone from there (see Figure~\ref{fig:roche}). The key difference is that we assume the scattering event can occur at any point during the planet's total lifetime so the tidal forces have not necessarily been dominant during the majority of the planet's lifetime.
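The circularisation at $2a_{Roche}$ follows from conservation of orbital angular momentum during tidal circularisation: since $L^{2}$ is proportional to $a(1-e^{2})$, an orbit with pericentre $q=a(1-e)$ settles at $a_{\rm final}=a(1-e^{2})=q(1+e)$, which tends to $2q$ as $e\rightarrow1$. A quick numerical illustration:

```python
# Tidal circularisation approximately conserves the orbital angular
# momentum, with L^2 proportional to a(1 - e^2), so an orbit with
# pericentre q = a(1 - e) settles at a_final = a(1 - e^2) = q(1 + e).
q = 1.0                              # pericentre in units of a_Roche
for e in (0.9, 0.99, 0.999):
    a = q / (1.0 - e)                # semi-major axis after scattering
    a_final = a * (1.0 - e**2)       # circularised separation = q(1 + e)
    print(f"e = {e}: a_final = {a_final:.3f} a_Roche")
```

As the eccentricity of the scattered orbit approaches unity, the circularised separation converges on $2a_{Roche}$, as assumed in the population model.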
This assumption means that the pile-up of planets near $2a_{Roche}$ is constantly replenished. If planets are continuously falling in from the pile-up at a constant rate in time due to tidal forces, then our model given in equation~\ref{eqn:adot} will give a distribution of remaining lifetimes that is uniformly distributed in time. For example, for every one planet we see with a remaining lifetime of $0.1-1$ Myr, we expect to see $10$ with remaining lifetimes of $1-10$ Myr, $100$ with remaining lifetimes of $10-100$ Myr, and so on. If the calculated remaining lifetime distribution for the observed population diverges from this, our model and adopted value of $Q_{\star}^{\prime}=10^{6}$ are not observationally supported. However, if the distribution matches, planets such as WASP-18 b and WASP-19 b are consistent with being genuinely close to destruction and their detection is not so unlikely. For simplicity, we have used the sample of hot Jupiters that we created in Section~\ref{sec:tshift}. Due to the dependence of $Q_{\star}^{\prime}$ on the orbital period, we assign a current-day $Q_{\star}^{\prime}$ to each system by assuming that it had $Q_{\star}^{\prime}=10^{6}$ at its $3$-day orbital separation. This ensures that $\Delta t$ in equation~\ref{eqn:tlife} is constant for all systems, allowing a physically meaningful comparison between planets. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{hut_twotail_newprobs_2014} \caption{Histogram of the calculated remaining lifetimes of observed systems (red dotted line) using $Q_{\star}^{\prime}=10^{6}$. The black solid line histogram shows the observed distribution after a correction for transit probability and survey completeness. The dashed black line shows the distribution we expect to observe if planets are falling in under tidal forces at a constant rate in time, using $Q_{\star}^{\prime}=10^{6}$.
The predicted distribution is scaled using the $1-10$ Myr bin which we have assumed is the least affected by incompleteness. The corrected and predicted distributions are discrepant at long remaining lifetimes, implying that our model of the tidal forces is incorrect, but the survey completeness correction makes these bins uncertain (see Section~\ref{sec:pop}).} \label{fig:remaining} \end{figure} The resulting distribution of observed remaining lifetimes is shown as a red dotted-line histogram in Figure~\ref{fig:remaining}. We correct the observed distribution of remaining lifetimes to account for the geometrical alignment bias in the transit detection probability, such that each planet observed is representative of a population of $\sim(a/R_\star)$ planets. We also correct for survey incompleteness using the detection probability function described by \citet{Pen12}, which is $100\%$ complete out to $2$ days and tails off at longer periods. Applying these corrections yields the histogram shown by the solid black line in Figure~\ref{fig:remaining}. The predicted distribution (dashed line), i.e. that which increases by a factor of 10 for each bin, is created by scaling to the $1-10$ Myr bin. This bin was chosen for the scaling as it has the best combination of sample size and completeness. In the longest remaining lifetime bins, the bias-corrected distribution is highly discrepant with the predicted one, suggesting that either $Q_{\star}^{\prime}$ is indeed higher, so that planets do not typically spiral into their hosts within the age of the system, or that $Q_{\star}^{\prime}$ may have a complicated frequency dependence making the future lifetime of the system hard to predict. The latter possibility is predicted by various dynamical tide mechanisms, i.e. the excitation of normal modes in the star by the imposed gravitational field (see e.g.
\citealt{Ogi07}), with the tidal quality factor varying by orders of magnitude with small changes in the planet's orbital frequency as different modes are excited in the star. In this case, the planets with supposedly short lifetimes could be temporarily stuck in a region of high $Q_{\star}^{\prime}$ after migrating rapidly from a feeding region where $Q_{\star}^{\prime}$ is lower. A third possibility is that mass loss as the planet's size approaches its Roche lobe causes orbit expansion that retards the tidal decay \citep{Li10,Fos10,Has12}. However, using the equations of \citet{Li10}, we estimate that at least in the case of WTS-2 b such mass loss is negligible, around $10^{4}$ times less than that for WASP-12 b. Nevertheless, the simplifications in our population study bias us against longer remaining lifetimes, i.e. we have excluded eccentric systems which tend to have longer periods, and we do not have a detailed treatment of the long-period sensitivity of the transit surveys contributing to the sample. While RV surveys suggest that it is unlikely the number of longer lifetime (longer period) systems will increase dramatically, our bias-corrected distribution of remaining lifetimes is still uncertain, and it is not straight-forward to reconstruct it. Although more detailed population studies, such as that by \citet{Pen12}, strongly advocate $Q_{\star}^{\prime}\gtrsim10^{7}$ for the general population of exoplanets, this is under one specific set of initial conditions (e.g. gas disk migration) with some idealised assumptions about the chances of a planet candidate being confirmed by follow-up, given the human element involved in its assessment and the availability of resources.
Such studies will always be hampered by these uncertainties and while they provide some generalised constraints on $Q_{\star}^{\prime}$, we conclude that the most informative and straight-forward constraints on $Q_{\star}^{\prime}$ are best obtained through the monitoring of orbital periods in individual close-in giant planet systems. Even in the case of no detectable period decay, this places a constraint on the rate of change of orbital period, and hence definitive limits on the value of $Q_{\star}^{\prime}$. Importantly, each system acts as a probe of different parameters that $Q_{\star}^{\prime}$ may be dependent on, e.g. the internal structure of the host star, such that even a relatively small sample of planets can lead to strong observational constraints on $Q_{\star}^{\prime}$ (see right panel of Figure~\ref{fig:shift}). To achieve the same results with population studies, i.e. studying $Q_{\star}^{\prime}$ as a function of host star mass, would require many more well-characterised systems per host star mass bin, and although future space-based and ground-based planet discovery missions may provide this, it is likely to be on a similar timescale to the technological advancements in precision timing measurements. Consequently, we find that monitoring changes in orbital periods of close-in giant planets will be the most informative and least assumption-prone method for observationally constraining $Q_{\star}^{\prime}$. \subsection{Follow-up potential}\label{sec:followup} In terms of planetary mass and host star, WTS-2 b is very similar to the well-known hot Jupiter HD 189733 b \citep{Bou05b}. However, WTS-2 b receives almost three times as much incident stellar radiation on account of its closer orbit, resulting in an expected maximum day-side temperature that is $\sim500$ K hotter than HD 189733 b. Stellar irradiation is expected to be a dominant factor in determining the atmospheric properties of a hot Jupiter (e.g. Fortney et al. 2008).
WTS-2 b is therefore expected to have an inversion layer (stratosphere) in its atmosphere, caused by gaseous absorbing compounds \citep{Bur08b}. In cooler atmospheres, these absorbers condense out, as may be the case for HD 189733 b, which does not exhibit an inversion layer \citep{Char08,Bir13}. Multi-wavelength measurements of the WTS-2 b secondary eclipse depth will allow the temperature structure of its atmosphere to be determined. The fact that WTS-2 b is very hot and orbits a relatively small star means that its secondary eclipse depths will be deeper than those of other hot Jupiters of similar $T_{\rm eq}$ orbiting more luminous stars. To assess the potential of ground-based follow-up studies of WTS-2 b's atmosphere, we have calculated the expected secondary eclipse depths for WTS-2 b at optical and infrared wavelengths, again following the equations of \citet{Lop07b}. We approximate the stellar and planetary spectra as black-bodies, and assume the maximum day-side temperature for the planet $T_{\rm eff,p}=2000$ K, i.e. zero albedo (no reflection) and no advection of incident energy from the day-side to the night-side. The expected planet/star flux ratios in the $I$, $Z$, $J$, $H$, and $K_{s}$ bandpasses, based on the dilution-corrected light curve analysis, are $\sim0.14\times10^{-3}$, $\sim0.19\times10^{-3}$, $\sim0.75\times10^{-3}$, $\sim1.5\times10^{-3}$, and $\sim2.6\times10^{-3}$, respectively. Note that any potential follow-up observations would need to add the expected contamination from the M-dwarf companion to these values. For example, if the M-dwarf is entirely contained within the photometric aperture, the expected observed depths would be $\sim0.12\times10^{-3}$, $\sim0.17\times10^{-3}$, $\sim0.63\times10^{-3}$, $\sim1.25\times10^{-3}$, and $\sim2.12\times10^{-3}$, in the $I$, $Z$, $J$, $H$, and $K_{s}$ bandpasses, respectively. 
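The scale of these flux ratios can be illustrated with a simplified, monochromatic version of the black-body calculation. Note that the host-star temperature ($\sim5000$ K for a K2V star) and radius ratio ($R_{p}/R_{\star}\approx0.16$) adopted below are illustrative assumptions of ours, not values taken from the paper's light curve analysis, and a monochromatic ratio only approximates a bandpass-integrated one:

```python
import math

H = 6.62607015e-34   # Planck constant [J s]
C = 2.99792458e8     # speed of light [m/s]
KB = 1.380649e-23    # Boltzmann constant [J/K]

def planck(wav_m, t_k):
    """Planck spectral radiance B_lambda(T) [W m^-3 sr^-1]."""
    x = H * C / (wav_m * KB * t_k)
    return (2.0 * H * C**2 / wav_m**5) / math.expm1(x)

def eclipse_depth(wav_m, t_planet, t_star, radius_ratio):
    """Black-body planet/star flux ratio at a single wavelength
    (the wavelength-dependent prefactors cancel in the ratio)."""
    return (planck(wav_m, t_planet) / planck(wav_m, t_star)) * radius_ratio**2

# Assumed illustrative parameters: ~5000 K K2V host, R_p/R_* ~ 0.16,
# maximum day-side planet temperature of 2000 K as quoted in the text.
bands = {"I": 0.80e-6, "Z": 0.90e-6, "J": 1.25e-6, "H": 1.65e-6, "Ks": 2.15e-6}
for name, wav in bands.items():
    print(name, eclipse_depth(wav, 2000.0, 5000.0, 0.16))
```

With these assumed inputs the $J$, $H$, and $K_{s}$ ratios come out at roughly $0.7\times10^{-3}$, $1.6\times10^{-3}$, and $2.6\times10^{-3}$, consistent in order of magnitude with the quoted bandpass values.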
\begin{figure} \centering \includegraphics[width=0.49\textwidth]{all_secs_blend_cor_2014} \caption{Expected $K_{s}$-band secondary eclipse depths for the currently known exoplanets as a function of the system's $K_{s}$-band magnitude. Eclipse depths are calculated assuming the maximum day-side temperature ($A_{B}=0$, $f=2/3$ i.e. no advection of energy from the day-side to the night-side) for all planets. WTS-2 b is marked by the red square and has one of the deepest predicted $K_{s}$-band secondary eclipse depths, making it more favourable for ground-based atmospheric characterisation studies. Note that these are not measurements and that the true depth can deviate, e.g. HD 189733 b ($0.039\%$, \citealt{Swa09}). Also note that the WTS-2 b point shows the expected depth without adding in contamination from the red companion, which will depend on the spatial resolution of the observations. Planet and stellar data from exoplanets.org 20/01/2014.} \label{fig:all_secs} \end{figure} Figure~\ref{fig:all_secs} shows the expected non-contaminated $K_{s}$-band secondary eclipse depth of WTS-2 b in the context of other transiting exoplanets, again assuming that each planet has $A_{B}=0$ and $f=2/3$. We find that WTS-2 b has one of the deepest predicted $K_{s}$-band secondary eclipses amongst the known exoplanet population. Although the host star is relatively faint, such a deep secondary eclipse could potentially be detected with ground-based infrared facilities. For example, both the Long-slit Intermediate Resolution Infrared Spectrograph (LIRIS) at the 4-m William Herschel Telescope in La Palma, and WFCAM on UKIRT in Hawaii have a proven record for detecting such events (see e.g. \citealt{Sne07,Moo09,Moo11}). It has also been shown that the presence of an inversion layer may depend on the activity of the host star, whereby UV flux from an active host star causes photodissociation of the absorbing compounds in the planet's upper atmosphere, preventing the temperature inversion \citep{Knu10}. 
A measurement of the activity level in the WTS-2 host star is not only a useful ageing diagnostic, but also key to understanding the planet's atmospheric properties. Measurements of the WTS-2 b secondary eclipse would also help constrain the eccentricity of the system, and improve the ephemeris of the orbit, aiding future studies of orbital decay in the system. \section{Conclusions}\label{sec:conclusion} We have reported the discovery of WTS-2 b, a typical transiting hot Jupiter in an unusually close orbit around a K2V star, which has a likely gravitationally-bound M-dwarf companion at a projected separation of $0.6$ arcsec. The proximity of the planet to its host star places it at just 1.5 times the separation at which it would be destroyed by Roche lobe overflow. The system provides a calibration point for theories describing the effect of tidal forces on the orbital evolution of giant planets, which are poorly constrained by observations. In particular, the system is useful for constraining theories that predict that host stars with deeper convective envelopes lead to more efficient tidal dissipation. Using a simple model of tidal orbital evolution with a tidal dissipation quality factor $Q_{\star}^{\prime}=10^{6}$, we calculated a remaining lifetime for WTS-2 b of just $40$ Myr. The decaying orbit corresponds to a shift in the transit arrival time of WTS-2 b of $\sim17$ seconds after $15$ years. We have also reported a correction to the previously published predicted shift in the transit arrival time of WASP-18 b, which used a very similar model for the stellar tides. We have calculated that the WASP-18 b transit time shift is $356$ seconds after $10$ years for $Q_{\star}^{\prime}=10^{6}$, which is much larger than the previously reported $28$ seconds. We found that transit arrival time measurements in individual systems could place stringent observational constraints on $Q_{\star}^{\prime}$ across the full mass spectrum of exoplanet host stars within the next decade. 
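The order of magnitude of such transit-time shifts follows from a constant-$Q_{\star}^{\prime}$ decay law. The sketch below uses the standard constant-phase-lag decay rate $\dot{a}=-\tfrac{9}{2}\sqrt{G/M_{\star}}\,R_{\star}^{5}\,(M_{p}/Q_{\star}^{\prime})\,a^{-11/2}$ together with the leading-order cumulative timing offset $\Delta t \approx \tfrac{1}{2}|\dot{P}|\,t^{2}/P$; the WASP-18-like parameter values are illustrative assumptions of ours, not the adopted values of this paper:

```python
import math

G = 6.674e-11                      # gravitational constant [SI]
MSUN, RSUN = 1.989e30, 6.957e8     # solar mass [kg], radius [m]
MJUP = 1.898e27                    # Jupiter mass [kg]
AU, YR = 1.496e11, 3.156e7         # astronomical unit [m], year [s]

def transit_shift(m_star, r_star, m_planet, a, q_star, t_elapsed):
    """Cumulative transit arrival-time shift after t_elapsed for a
    circular orbit decaying at constant Q'_*, to leading order."""
    # Orbital period from Kepler's third law
    period = 2.0 * math.pi * math.sqrt(a**3 / (G * m_star))
    # Constant-phase-lag tidal decay rate of the semi-major axis
    da_dt = -4.5 * math.sqrt(G / m_star) * r_star**5 * (m_planet / q_star) * a**-5.5
    # Corresponding period derivative and accumulated timing offset
    dp_dt = 1.5 * (period / a) * da_dt
    return 0.5 * abs(dp_dt) * t_elapsed**2 / period

# Illustrative WASP-18-like parameters (assumed, not from this paper's tables)
shift = transit_shift(1.28 * MSUN, 1.23 * RSUN, 10.4 * MJUP,
                      0.0205 * AU, 1e6, 10 * YR)
print(f"{shift:.0f} s")  # of order a few hundred seconds after a decade
```

For these assumed inputs the shift evaluates to a few hundred seconds after ten years, consistent with the corrected WASP-18 b figure quoted above.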
Our attempt to constrain $Q_{\star}^{\prime}$ via a study of the observed population of currently known transiting hot Jupiters was inconclusive, requiring a more detailed and precise determination of transit survey sensitivities at long periods. We conclude that the most informative and straightforward constraints on $Q_{\star}^{\prime}$ and the theory of tidal orbital evolution for exoplanets will be provided by transit arrival time shifts in individual systems. Finally, WTS-2 b is one of the most highly irradiated gas giants orbiting a K-dwarf and is therefore expected to have an inversion layer in its atmosphere. This is in contrast to the non-inverted atmosphere of HD 189733 b, which has a very similar planet mass and host star to WTS-2 b, but receives $\sim3$ times less incident radiation. Despite the relatively faint magnitude of the host star, the system size ratio and hot day-side temperature result in predicted infrared secondary eclipses that are within the reach of current ground-based instrumentation. \section*{Acknowledgements} The authors would like to thank A. Collier Cameron and C. Hellier for their time and help in addressing the WASP-18 b transit arrival time shift discrepancy. JLB would also like to thank Doug Lin for some engaging and very helpful discussions, and our anonymous referee for asking some very pertinent questions that improved this manuscript. We also thank the excellent TOs and support staff at UKIRT, and all those observers who clicked on U/CMP/2. All authors of this paper have received support during this research from the RoPACS network, a Marie Curie Initial Training Network funded by the European Commission’s Seventh Framework Programme. The United Kingdom Infrared Telescope is operated by the Joint Astronomy Centre on behalf of the Science and Technology Facilities Council of the U.K. 
This article is based on observations made with the INT operated on the island of La Palma by the ING in the Spanish Observatorio del Roque de los Muchachos. The Hobby-Eberly Telescope (HET) is a joint project of the University of Texas at Austin, the Pennsylvania State University, Stanford University, Ludwig-Maximilians-Universit\"at M\"unchen, and Georg-August-Universit\"at G\"ottingen. The HET is named in honor of its principal benefactors, William P. Hobby and Robert E. Eberly. This article is also based on observations collected at the Calar Alto Observatory, the German-Spanish Astronomical Center at Calar Alto, jointly operated by the Max-Planck-Institut f\"ur Astronomie Heidelberg and the Instituto de Astrof\'isica de Andaluc\'ia (CSIC). This research has been funded by the Spanish National Plan of R\&D grants AYA2010-20630, AYA2010-19136, AYA2010-21161-C02-02, AYA2011-30147-C03-03, AYA2012-38897-C02-01, CONSOLIDER-INGENIO GTC CSD2006-00070 and PRICIT-S2009/ESP-1496. This work was partly funded by the Funda\c{c}\~ao para a Ci\^encia e a Tecnologia (FCT)-Portugal through the project PEst-OE/EEI/UI0066/2011. NL was funded by the Ram\'on y Cajal fellowship number 08-303-01-02 of the Spanish Ministry of Science and Innovation. Lillo-Box thanks the CSIC JAE-predoc program for the PhD fellowship. This publication makes use of VOSA, developed under the Spanish Virtual Observatory project supported from the Spanish MICINN through grant AyA2008-02156. This research has made use of the Exoplanet Orbit Database and the Exoplanet Data Explorer at exoplanets.org \citep{Wri10} and the Extrasolar Planets Encyclopaedia exoplanet.eu \citep{Sch11}. This research uses products from SDSS DR7. Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. 
Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This work also makes use of NASA’s Astrophysics Data System (ADS) bibliographic services, and the SIMBAD database, operated at CDS, Strasbourg, France. {\sc iraf} is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under cooperative agreement with the National Science Foundation. \bibliographystyle{mn2e}
\section{Introduction} Convolutional neural networks~(CNNs)~\citep{lecun2012efficient} have demonstrated great capability in various challenging artificial intelligence tasks, especially in fields of computer vision~\citep{he2017mask,huang2016densely} and natural language processing~\citep{bahdanau2014neural}. One common property behind these tasks is that both images and texts have grid-like structures. Elements on feature maps have locality and order information, which enables the application of convolutional operations~\citep{defferrard2016convolutional}. In practice, many real-world data can be naturally represented as graphs such as social and biological networks. Due to the great success of CNNs on grid-like data, applying them on graph data~\citep{gori2005new,scarselli2009graph} is particularly appealing. Recently, there have been many attempts to extend convolutions to graph data~(GNNs) \citep{kipf2016semi,velivckovic2017graph,gao2018large}. One common use of convolutions on graphs is to compute node representations~\citep{hamilton2017inductive,ying2018hierarchical}. With learned node representations, we can perform various tasks on graphs such as node classification and link prediction. Images can be considered as special cases of graphs, in which nodes lie on regular 2D lattices. It is this special structure that enables the use of convolution and pooling operations on images. Based on this relationship, node classification and embedding tasks have a natural correspondence with pixel-wise prediction tasks such as image segmentation~\citep{noh2015learning,gao2017efficient,jegou2017one}. In particular, both tasks aim to make predictions for each input unit, corresponding to a pixel on images or a node in graphs. In the computer vision field, pixel-wise prediction tasks have achieved major advances recently. Encoder-decoder architectures like the U-Net~\citep{ronneberger2015u} are state-of-the-art methods for these tasks. 
It is thus highly interesting to develop U-Net-like architectures for graph data. In addition to convolutions, pooling and up-sampling operations are essential building blocks in these architectures. However, extending these operations to graph data is highly challenging. Unlike grid-like data such as images and texts, nodes in graphs have no spatial locality and order information as required by regular pooling operations. To bridge the above gap, we propose novel graph pooling~(gPool) and unpooling~(gUnpool) operations in this work. Based on these two operations, we propose U-Net-like architectures for graph data. The gPool operation samples some nodes to form a smaller graph based on their scalar projection values on a trainable projection vector. As an inverse operation of gPool, we propose a corresponding graph unpooling~(gUnpool) operation, which restores the graph to its original structure with the help of locations of nodes selected in the corresponding gPool layer. Based on the gPool and gUnpool layers, we develop graph U-Nets, which allow high-level feature encoding and decoding for network embedding. Experimental results on node classification and graph classification tasks demonstrate the effectiveness of our proposed methods as compared to previous methods. \section{Related Work} Recently, there has been a rich line of research on graph neural networks~\citep{gilmer2017neural}. Inspired by the first order graph Laplacian methods, \cite{kipf2016semi} proposed graph convolutional networks~(GCNs), which achieved promising performance on graph node classification tasks. The layer-wise forward-propagation operation of GCNs is defined as: \begin{equation} \begin{aligned} X_{\ell+1} = \sigma(\hat{D}^{-\frac{1}{2}}\hat{A}\hat{D}^{-\frac{1}{2}}X_{\ell}W_{\ell}), \end{aligned}\label{eq:gcn} \end{equation} where $\hat A = A + I$ is used to add self-loops in the input adjacency matrix $A$, $X_{\ell}$ is the feature matrix of layer $\ell$. 
The GCN layer uses the diagonal node degree matrix $\hat{D}$ to normalize $\hat{A}$. $W_{\ell}$ is a trainable weight matrix that applies a linear transformation to feature vectors. GCNs essentially perform aggregation and transformation on node features without learning trainable filters. \cite{hamilton2017inductive} tried to sample a fixed number of neighboring nodes to keep the computational footprint consistent. \cite{velivckovic2017graph} proposed to use attention mechanisms to enable different weights for neighboring nodes. \cite{schlichtkrull2018modeling} used relational graph convolutional networks for link prediction and entity classification. Some studies applied GNNs to graph classification tasks~\citep{duvenaud2015convolutional,dai2016discriminative,zhang2018end}. \cite{bronstein2017geometric} discussed possible ways of applying deep learning to graph data. \cite{henaff2015deep} and \cite{bruna2014spectral} proposed to use spectral networks for large-scale graph classification tasks. Some studies also applied graph kernels on traditional computer vision tasks~\citep{gama2019convolutional,fey2018splinecnn,monti2017geometric}. In addition to convolution, some studies tried to extend pooling operations to graphs. \cite{defferrard2016convolutional} proposed to use binary tree indexing for graph coarsening, which fixes indices of nodes before applying 1-D pooling operations. \cite{simonovsky2017dynamic} used a deterministic graph clustering algorithm to determine pooling patterns. \cite{ying2018hierarchical} used an assignment matrix to achieve pooling by assigning nodes to different clusters of the next layer. \section{Graph U-Nets} In this section, we introduce the graph pooling~(gPool) layer and graph unpooling~(gUnpool) layer. Based on these two new layers, we develop the graph U-Nets for node classification tasks. 
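Since the GCN propagation rule of Eq.~\ref{eq:gcn} serves as the basic building block for the layers introduced below, a minimal NumPy sketch of one such propagation step may be helpful (the naming is ours, and the activation $\sigma$ is left as a parameter since the choice is layer-dependent):

```python
import numpy as np

def gcn_layer(adj, x, weight, activation=np.tanh):
    """One GCN propagation step: sigma(D^-1/2 (A+I) D^-1/2 X W)."""
    a_hat = adj + np.eye(adj.shape[0])             # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))  # diagonal of D^-1/2
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return activation(a_norm @ x @ weight)

# Toy graph: 4 nodes on a path, 5 input features, 3 output features
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = np.random.randn(4, 5)
w = np.random.randn(5, 3)
out = gcn_layer(adj, x, w)
print(out.shape)  # (4, 3)
```

Each output row aggregates a node's own (self-loop) features with those of its neighbors before the shared linear transformation $W_{\ell}$ is applied.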
\subsection{Graph Pooling Layer} \begin{figure*}[t] \includegraphics[width=\textwidth]{FIG/GPool2.pdf} \caption{An illustration of the proposed graph pooling layer with $k=2$. $\times$ and $\odot$ denote matrix multiplication and element-wise product, respectively. We consider a graph with 4 nodes, and each node has 5 features. By processing this graph, we obtain the adjacency matrix $A^\ell \in \mathbb{R}^{4 \times 4}$ and the input feature matrix $X^\ell \in \mathbb{R}^{4 \times 5} $ of layer $\ell$. In the projection stage, $\mathbf p \in \mathbb{R}^{5} $ is a trainable projection vector. By matrix multiplication and $\mbox{sigmoid}(\cdot)$, we obtain $\mathbf y$, a vector of scores estimating the scalar projection of each node onto the projection vector. With $k=2$, we select the two nodes with the highest scores and record their indices in the top-$k$-node selection stage. We use the indices to extract the corresponding nodes to form a new graph, resulting in the pooled feature map $\tilde X^{\ell}$ and the corresponding new adjacency matrix $A^{\ell+1}$. At the gate stage, we perform element-wise multiplication between $\tilde X^{\ell}$ and the vector of selected node scores $\mathbf{\tilde y}$, resulting in $X^{\ell+1}$. This graph pooling layer outputs $A^{\ell+1}$ and $X^{\ell+1}$.} \label{fig:gpool} \end{figure*} Pooling layers play important roles in CNNs on grid-like data. They can reduce the sizes of feature maps and enlarge receptive fields, thereby giving rise to better generalization and performance~\citep{yu2015multi}. On grid-like data such as images, feature maps are partitioned into non-overlapping rectangles, on which non-linear down-sampling functions like the maximum are applied. In addition to local pooling, global pooling layers~\citep{zhao2015self} perform down-sampling operations on all input units, thereby reducing each feature map to a single number. 
In contrast, $k$-max pooling layers~\citep{blunsom2014convolutional} select the $k$-largest units out of each feature map. However, we cannot directly apply these pooling operations to graphs. In particular, there is no locality information among nodes in graphs. Thus the partition operation is not applicable to graphs. The global pooling operation will reduce all nodes to one single node, which restricts the flexibility of networks. The $k$-max pooling operation outputs the $k$-largest units that may come from different nodes in graphs, resulting in inconsistency in the connectivity of selected nodes. In this section, we propose the graph pooling~(gPool) layer to enable down-sampling on graph data. In this layer, we adaptively select a subset of nodes to form a new but smaller graph. To this end, we employ a trainable projection vector $\mathbf{p}$. By projecting all node features to 1D, we can perform $k$-max pooling for node selection. Since the selection is based on the 1D footprint of each node, the connectivity in the new graph is consistent across nodes. Given a node $i$ with its feature vector $\mathbf{x}_i$, the scalar projection of $\mathbf{x}_i$ on $\mathbf{p}$ is $y_i = \mathbf{x}_i \mathbf{p} / \lVert \mathbf{p} \rVert$. Here, $y_i$ measures how much information of node $i$ can be retained when projected onto the direction of $\mathbf{p}$. By sampling nodes, we wish to preserve as much information as possible from the original graph. To achieve this, we select nodes with the largest scalar projection values on $\mathbf{p}$ to form a new graph. Suppose there are $N$ nodes in a graph $\mathbb G$, each of which has $C$ features. The graph can be represented by two matrices; those are the adjacency matrix $A^{\ell} \in \mathbb{R}^{N \times N} $ and the feature matrix $X^{\ell} \in \mathbb{R}^{N\times C}$. Each non-zero entry in the adjacency matrix $A$ represents an edge between two nodes in the graph. 
Each row vector $\mathbf{x}^{\ell}_i$ in the feature matrix $X^{\ell}$ denotes the feature vector of node $i$ in the graph. The layer-wise propagation rule of the graph pooling layer $\ell$ is defined as: \begin{equation} \begin{aligned} \mathbf y & = X^{\ell} \mathbf p^{\ell} / \lVert \mathbf p^{\ell} \rVert, \\ \mbox{idx} &= \mbox{rank}(\mathbf y, k), \\ \tilde{\mathbf y} &= \mbox{sigmoid} (\mathbf y(\mbox{idx})), \\ \tilde X^{\ell} & = X^{\ell}(\mbox{idx}, :), \\ A^{\ell+1} &= A^{\ell}(\mbox{idx}, \mbox{idx}), \\ X^{\ell+1} &= \tilde X^{\ell} \odot \left(\tilde{\mathbf y} \mathbf{1}_C^{T}\right), \end{aligned}\label{eq:gpool} \end{equation} where $k$ is the number of nodes selected in the new graph. $\mbox{rank}(\mathbf y, k)$ is the operation of node ranking, which returns indices of the $k$-largest values in $\mathbf y$. The $\mbox{idx}$ returned by $\mbox{rank}(\mathbf y, k)$ contains the indices of nodes selected for the new graph. $A^{\ell}(\mbox{idx}, \mbox{idx})$ and $X^{\ell}(\mbox{idx}, :)$ perform the row and/or column extraction to form the adjacency matrix and the feature matrix for the new graph. $\mathbf y(\mbox{idx})$ extracts values in $\mathbf y$ with indices idx followed by a $\mbox{sigmoid}$ operation. $\mathbf 1_{C}\in \mathbb{R}^{C}$ is a vector of size $C$ with all components being 1, and $\odot$ represents the element-wise matrix multiplication. $X^{\ell}$ is the feature matrix with row vectors $\mathbf x^{\ell}_1, \mathbf x^{\ell}_2, \cdots, \mathbf x^{\ell}_N$, each of which corresponds to a node in the graph. We first compute the scalar projection of $X^{\ell}$ on $\mathbf{p}^{\ell}$, resulting in $\mathbf y = [ y_1, y_2, \cdots, y_N ]^T$ with each $y_i$ measuring the scalar projection value of each node on the projection vector $\mathbf p^{\ell}$. Based on the scalar projection vector $\mathbf y$, the $\mbox{rank}(\cdot)$ operation ranks values and returns the indices of the $k$-largest values in $\mathbf y$. 
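A minimal NumPy sketch of the propagation rule in Eq.~\ref{eq:gpool} may clarify the data flow (the naming is ours; this is an illustration, not the authors' implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def g_pool(adj, x, p, k):
    """Graph pooling: keep the k nodes with the largest scalar
    projection onto p, and gate their features by sigmoid scores."""
    y = x @ p / np.linalg.norm(p)        # scalar projections, one per node
    idx = np.sort(np.argsort(y)[-k:])    # top-k indices, original order kept
    y_tilde = sigmoid(y[idx])            # gate values for the kept nodes
    x_new = x[idx, :] * y_tilde[:, None] # gated feature matrix X^{l+1}
    adj_new = adj[np.ix_(idx, idx)]      # induced subgraph A^{l+1}
    return adj_new, x_new, idx

adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 1],
                [0, 1, 0, 1],
                [0, 1, 1, 0]], dtype=float)
x = np.random.randn(4, 5)
p = np.random.randn(5)
adj2, x2, idx = g_pool(adj, x, p, k=2)
print(adj2.shape, x2.shape)  # (2, 2) (2, 5)
```

The `np.sort` on the selected indices reflects the fact that node selection preserves the relative order of nodes from the original graph.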
Suppose the $k$-selected indices are $i_1, i_2, \cdots, i_k $ with $i_m < i_n$ and $1 \le m < n \le k$. Note that the index selection process preserves the position order information in the original graph. With indices idx, we extract the adjacency matrix $A^{\ell+1} \in \mathbb{R}^{k\times k}$ and the feature matrix $\tilde X^{\ell} \in \mathbb{R}^{k \times C}$ for the new graph. Finally, we employ a gate operation to control information flow. With selected indices idx, we obtain the gate vector $\mathbf{\tilde y} \in \mathbb{R}^k$ by applying $\mbox{sigmoid}$ to each element in the extracted scalar projection vector. Using the element-wise matrix product of $\tilde X^{\ell}$ and $\mathbf{\tilde y} \mathbf 1^T_C$, information of selected nodes is controlled. The $i$th row vector in $X^{\ell+1}$ is the product of the $i$th row vector in $\tilde X^{\ell}$ and the $i$th scalar value in $\mathbf{\tilde y}$. Notably, the gate operation makes the projection vector $\mathbf p$ trainable by back-propagation~\citep{lecun2012efficient}. Without the gate operation, the projection vector $\mathbf p$ produces discrete outputs, which makes it not trainable by back-propagation. Figure~\ref{fig:gpool} provides an illustration of our proposed graph pooling layer. Compared to pooling operations used on grid-like data, our graph pooling layer employs extra training parameters in the projection vector $\mathbf p$. We will show that the extra parameters are negligible but can boost performance. \subsection{Graph Unpooling Layer} \begin{figure*}[t] \centering \includegraphics[width=0.8\textwidth]{FIG/GUnpool.pdf} \caption{An illustration of the proposed graph unpooling~(gUnpool) layer. In this example, a graph with 7 nodes is down-sampled using a gPool layer, resulting in a coarsened graph with 4 nodes and position information of selected nodes. 
The corresponding gUnpool layer uses the position information to reconstruct the original graph structure by using empty feature vectors for unselected nodes.} \label{fig:gunpool} \end{figure*} Up-sampling operations are important for encoder-decoder networks such as U-Net. The encoders of networks usually employ pooling operations to reduce feature map size and increase receptive field. In decoders, feature maps then need to be up-sampled to restore their original resolutions. On grid-like data like images, there are several up-sampling operations such as the deconvolution~\citep{isola2017image,Zhao2015StackedWA} and unpooling layers~\citep{long2015fully}. However, such operations are not currently available on graph data. To enable up-sampling operations on graph data, we propose the graph unpooling~(gUnpool) layer, which performs the inverse operation of the gPool layer and restores the graph to its original structure. To achieve this, we record the locations of nodes selected in the corresponding gPool layer and use this information to place nodes back to their original positions in the graph. Formally, we propose the layer-wise propagation rule of the graph unpooling layer as \begin{equation} \begin{aligned} X^{\ell+1} &= \mbox{distribute}(0_{N\times C}, X^{\ell}, \mbox{idx}), \\ \end{aligned}\label{eq:gunpool} \end{equation} where $\mbox{idx} \in \mathbb{Z}^{*k}$ contains indices of selected nodes in the corresponding gPool layer that reduces the graph size from $N$ nodes to $k$ nodes. $X^{\ell} \in \mathbb{R}^{k \times C}$ is the feature matrix of the current graph, and $0_{N\times C}$ is the initially empty feature matrix for the new graph. $\mbox{distribute}(0_{N\times C}, X^{\ell}, \mbox{idx})$ is the operation that distributes row vectors in $X^{\ell}$ into the $0_{N\times C}$ feature matrix according to their corresponding indices stored in $\mbox{idx}$. 
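The distribute operation of Eq.~\ref{eq:gunpool} amounts to a scatter of row vectors into an all-zero matrix; a minimal NumPy sketch (naming is ours):

```python
import numpy as np

def g_unpool(x_small, idx, n_nodes):
    """Graph unpooling: scatter the k pooled rows back to their
    original node positions; all other rows stay zero."""
    x_full = np.zeros((n_nodes, x_small.shape[1]))
    x_full[idx, :] = x_small
    return x_full

x_small = np.arange(8, dtype=float).reshape(2, 4)  # 2 kept nodes, 4 features
idx = np.array([0, 3])                             # their original positions
x_full = g_unpool(x_small, idx, n_nodes=5)
print(x_full.shape)  # (5, 4)
```

Because the indices recorded by the gPool layer are reused, the operation is an exact structural inverse of the pooling step, even though the discarded feature vectors themselves are not recovered.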
In $X^{\ell+1}$, row vectors with indices in $\mbox{idx}$ are updated by row vectors in $X^{\ell}$, while other row vectors remain zero. \subsection{Graph U-Nets Architecture}\label{sec:gunet} \begin{figure*}[t] \includegraphics[width=\textwidth]{FIG/GUnet.pdf} \caption{An illustration of the proposed graph U-Nets~(g-U-Nets). In this example, each node in the input graph has two features. The input feature vectors are transformed into low-dimensional representations using a GCN layer. After that, we stack two encoder blocks, each of which contains a gPool layer and a GCN layer. In the decoder part, there are also two decoder blocks. Each block consists of a gUnpool layer and a GCN layer. For blocks at the same level, the decoder block uses a skip connection to fuse the low-level spatial features from the corresponding encoder block. The output feature vectors of nodes in the last layer are the network embeddings, which can be used for various tasks such as node classification and link prediction. } \label{fig:unet} \end{figure*} It is well-known that encoder-decoder networks like U-Net achieve promising performance on pixel-wise prediction tasks, since they can encode and decode high-level features while maintaining local spatial information. Similar to pixel-wise prediction tasks~\citep{gong2013deep,ronneberger2015u}, node classification tasks aim to make a prediction for each input unit. Based on our proposed gPool and gUnpool layers, we propose our graph U-Nets~(g-U-Nets) architecture for node classification tasks. In our graph U-Nets~(g-U-Nets), we first apply a graph embedding layer to convert nodes into low-dimensional representations, since original inputs of some datasets like Cora~\citep{sen2008collective} usually have very high-dimensional feature vectors. After the graph embedding layer, we build the encoder by stacking several encoding blocks, each of which contains a gPool layer followed by a GCN layer. 
gPool layers reduce the size of the graph to encode higher-order features, while GCN layers are responsible for aggregating information from each node's first-order neighborhood. In the decoder part, we stack the same number of decoding blocks as in the encoder part. Each decoder block is composed of a gUnpool layer and a GCN layer. The gUnpool layer restores the graph to its higher-resolution structure, and the GCN layer aggregates information from the neighborhood. There are skip-connections between corresponding blocks of encoder and decoder layers, which transmit spatial information to decoders for better performance. The skip-connection can be either feature map addition or concatenation. Finally, we employ a GCN layer for final predictions before the softmax function. Figure~\ref{fig:unet} provides an illustration of a sample g-U-Nets with two blocks in encoder and decoder. Notably, there is a GCN layer before each gPool layer, thereby enabling gPool layers to capture the topological information in graphs implicitly. \subsection{Graph Connectivity Augmentation via Graph Power}\label{sec:aug} In our proposed gPool layer, we sample some important nodes to form a new graph for high-level feature encoding. Since related edges are removed when removing nodes in gPool, the nodes in the pooled graph might become isolated. This may influence the information propagation in subsequent layers, especially when GCN layers are used to aggregate information from neighboring nodes. We need to increase connectivity among nodes in the pooled graph. To address this problem, we propose to use the $k^{th}$ graph power $\mathbb{G}^k$ to increase the graph connectivity. This operation builds links between nodes whose distances are at most $k$ hops~\citep{chepuri2016subsampling}. In this work, we employ $k=2$ since there is a GCN layer before each gPool layer to aggregate information from its first-order neighboring nodes. 
Formally, we replace the fifth equation in Eq.~\ref{eq:gpool} by: \begin{equation} A^{2} = A^{\ell}A^{\ell}, \,\,\,\,\,\,\,\,A^{\ell+1} = A^{2}(\mbox{idx}, \mbox{idx}), \label{eq:gaug} \end{equation} where $A^2 \in \mathbb{R}^{N \times N}$ is the $2^{nd}$ graph power. Now, the graph sampling is performed on the augmented graph with better connectivity. \subsection{Improved GCN Layer} In Eq.~\ref{eq:gcn}, the adjacency matrix before normalization is computed as $\hat A = A + I$ in which a self-loop is added to each node in the graph. When performing information aggregation, the same weight is given to a node's own feature vector and its neighboring nodes. In this work, we wish to give a higher weight to a node's own feature vector, since its own features should be more important for prediction. To this end, we change the calculation to $\hat{A} = A + 2I$ by imposing larger weights on self loops in the graph, which is common in graph processing. All experiments in this work use this modified version of the GCN layer for better performance. \section{Experimental Study} In this section, we evaluate our gPool and gUnpool layers based on the g-U-Nets proposed in Section~\ref{sec:gunet}. We compare our networks with previous state-of-the-art models on node classification and graph classification tasks. Experimental results show that our methods achieve new state-of-the-art results in terms of node classification accuracy and graph classification accuracy. Some ablation studies are performed to examine the contributions of the proposed gPool layer, gUnpool layer, and graph connectivity augmentation to performance improvements. We conduct studies on the relationship between network depth and node classification performance. We investigate if additional parameters involved in gPool layers can increase the risk of over-fitting. \begin{table*}[t] \centering \caption{Summary of datasets used in our node classification experiments~\citep{yang2016revisiting,zitnik2017predicting}. 
The Cora, Citeseer, and Pubmed datasets are used for transductive learning experiments.} \label{table:transdatasets} \begin{tabularx}{\textwidth}{ X YYYYYYY } \hline \textbf{Dataset} & \textbf{Nodes} & \textbf{Features} & \textbf{Classes} & \textbf{Training} & \textbf{Validation} & \textbf{Testing} & \textbf{Degree} \\ \hline\hline Cora & 2708 & 1433 & 7 & 140 & 500 & 1000 & 4 \\ \hline Citeseer & 3327 & 3703 & 6 & 120 & 500 & 1000 & 5 \\ \hline Pubmed & 19717 & 500 & 3 & 60 & 500 & 1000 & 6 \\ \hline \hline \end{tabularx} \end{table*} \begin{table*}[t] \centering \caption{Summary of datasets used in our inductive learning experiments. The D\&D~\citep{dobson2003distinguishing}, PROTEINS~\citep{borgwardt2005protein}, and COLLAB~\citep{yanardag2015structural} datasets are used for inductive learning experiments.} \label{table:inducdatasets} \begin{tabularx}{\textwidth}{ X YYYY } \hline \textbf{Dataset} & \textbf{Graphs} & \textbf{Nodes (max)} & \textbf{Nodes (avg)} & \textbf{Classes} \\ \hline\hline D\&D & 1178 & 5748 & 284.32 & 2 \\ \hline PROTEINS & 1113 & 620 & 39.06 & 2 \\ \hline COLLAB & 5000 & 492 & 74.49 & 3 \\ \hline \hline \end{tabularx} \end{table*} \subsection{Datasets} In experiments, we evaluate our networks on node classification tasks under transductive learning settings and graph classification tasks under inductive learning settings. Under transductive learning settings, unlabeled data are accessible for training, which enables the network to learn about the graph structure. To be specific, only part of the nodes are labeled while labels of other nodes in the same graph remain unknown. We employ three benchmark datasets for this setting; those are Cora, Citeseer, and Pubmed~\citep{kipf2016semi}, which are summarized in Table~\ref{table:transdatasets}. These datasets are citation networks, with each node and each edge representing a document and a citation, respectively. 
The feature vector of each node is the bag-of-words representation, whose dimension is determined by the dictionary size. We follow the same experimental settings as in~\citep{kipf2016semi}. For each class, there are 20 nodes for training, 500 nodes for validation, and 1000 nodes for testing. Under inductive learning settings, testing data are not available during training, which means the training process does not use the graph structures of the testing data. We evaluate our methods on relatively large graph datasets selected from common benchmarks used in graph classification tasks~\citep{ying2018hierarchical,niepert2016learning,zhang2018end}. We use the protein datasets D\&D~\citep{dobson2003distinguishing} and PROTEINS~\citep{borgwardt2005protein}, and the scientific collaboration dataset COLLAB~\citep{yanardag2015structural}. These datasets are summarized in Table~\ref{table:inducdatasets}. \subsection{Experimental Setup} We describe the experimental setup for both transductive and inductive learning settings. For transductive learning tasks, we employ the g-U-Nets proposed in Section~\ref{sec:gunet}. Since nodes in the three datasets are associated with high-dimensional features, we employ a GCN layer to reduce them to low-dimensional representations. In the encoder part, we stack four blocks, each of which consists of a gPool layer and a GCN layer. We sample 2000, 1000, 500, and 200 nodes in the four gPool layers, respectively. Correspondingly, the decoder part also contains four blocks. Each decoder block is composed of a gUnpool layer and a GCN layer. We use the addition operation in the skip connections between blocks of the encoder and decoder parts. Finally, we apply a GCN layer for the final prediction. For all layers in the model, we use the identity activation function~\cite{gao2018large} after each GCN layer. To avoid over-fitting, we apply $L_2$ regularization on the weights with $\lambda=0.001$.
Dropout~\citep{srivastava2014dropout} is applied to both adjacency matrices and feature matrices with keep rates of 0.8 and 0.08, respectively. For inductive learning tasks, we follow the same experimental setup as in~\cite{zhang2018end}, using our g-U-Nets architecture as described for the transductive learning settings for feature extraction. Since the sizes of graphs vary in graph classification tasks, we sample a proportion of nodes in each of the four gPool layers: 90\%, 70\%, 60\%, and 50\%, respectively. The dropout keep rate imposed on feature matrices is 0.3. \begin{table*}[t] \centering \caption{Results of transductive learning experiments in terms of node classification accuracies on Cora, Citeseer, and Pubmed datasets. g-U-Nets denotes our proposed graph U-Nets model.} \label{table:trans} \begin{tabularx}{\textwidth}{ lx{3.5cm} YYY } \hline \textbf{Models} & \textbf{Cora} & \textbf{Citeseer} & \textbf{Pubmed} \\ \hline\hline DeepWalk~\citep{perozzi2014deepwalk} & 67.2\% & 43.2\% & 65.3\% \\ \hline Planetoid~\citep{yang2016revisiting} & 75.7\% & 64.7\% & 77.2\% \\ \hline Chebyshev~\citep{defferrard2016convolutional} & 81.2\% & 69.8\% & 74.4\% \\ \hline GCN~\citep{kipf2016semi} & 81.5\% & 70.3\% & 79.0\% \\ \hline GAT~\citep{velivckovic2017graph} & 83.0 $\pm$ 0.7\% & 72.5 $\pm$ 0.7\% & 79.0 $\pm$ 0.3\% \\ \hline \textbf{g-U-Nets (Ours)} & \textbf{84.4 $\pm$ 0.6\%} & \textbf{73.2 $\pm$ 0.5\%} & \textbf{79.6 $\pm$ 0.2\%} \\ \hline \hline \end{tabularx} \end{table*} \begin{table*}[t] \centering \caption{Results of inductive learning experiments in terms of graph classification accuracies on D\&D, PROTEINS, and COLLAB datasets.
g-U-Nets denotes our proposed graph U-Nets model.} \label{table:induc} \begin{tabularx}{\textwidth}{ lx{3.5cm} YYY } \hline \textbf{Models} & \textbf{D\&D} & \textbf{PROTEINS} & \textbf{COLLAB} \\ \hline\hline PSCN~\citep{niepert2016learning} & 76.27\% & 75.00\% & 72.60\% \\ \hline DGCNN~\citep{zhang2018end} & 79.37\% & 76.26\% & 73.76\% \\ \hline DiffPool-DET~\citep{ying2018hierarchical} & 75.47\% & 75.62\% & \textbf{82.13}\% \\ \hline DiffPool-NOLP~\citep{ying2018hierarchical} & 79.98\% & 76.22\% & 75.58\% \\ \hline DiffPool~\citep{ying2018hierarchical} & 80.64\% & 76.25\% & 75.48\% \\ \hline \textbf{g-U-Nets (Ours)} & \textbf{82.43\%} & \textbf{77.68\%} & 77.56\% \\ \hline \hline \end{tabularx} \end{table*} \subsection{Performance Study} Under transductive learning settings, we compare our proposed g-U-Nets with other state-of-the-art models in terms of node classification accuracy. We report node classification accuracies on the Cora, Citeseer, and Pubmed datasets, and the results are summarized in Table~\ref{table:trans}. We can observe from the results that our g-U-Nets achieve consistently better performance than the other networks. The baseline results listed for the node classification tasks represent the state of the art on these datasets. Our proposed model is composed of GCN, gPool, and gUnpool layers, without involving more advanced graph convolution layers such as GAT. When compared to GCN directly, our g-U-Nets significantly improve performance on all three datasets, by margins of 2.9\%, 2.9\%, and 0.6\%, respectively. Note that the only difference between our g-U-Nets and GCN is the use of the encoder-decoder architecture containing gPool and gUnpool layers. These results demonstrate the effectiveness of g-U-Nets in network embedding. Under inductive learning settings, we compare our methods with other state-of-the-art models on graph classification tasks with the D\&D, PROTEINS, and COLLAB datasets, and the results are summarized in Table~\ref{table:induc}.
We can observe from the results that our proposed gPool method outperforms DiffPool~\citep{ying2018hierarchical} by margins of 1.79\% and 1.43\% on the D\&D and PROTEINS datasets, respectively. Notably, the result obtained by DiffPool-DET on COLLAB is significantly higher than those of all other methods, including the other two DiffPool variants. Apart from this case, our model outperforms the baseline models, including DiffPool, on all three datasets. In addition, the authors of DiffPool reported that their training employed an auxiliary link-prediction task to stabilize model performance, which points to an instability of the DiffPool model. In our experiments, by contrast, we use only graph labels for training, without any auxiliary tasks to stabilize training. \subsection{Ablation Study of gPool and gUnpool layers} Although GCNs have been reported to perform worse when the network goes deeper~\citep{kipf2016semi}, it may still be argued that the performance improvement over GCN in Table~\ref{table:trans} is due to the use of a deeper network architecture. In this section, we investigate the contributions of the gPool and gUnpool layers to the performance of g-U-Nets. We conduct experiments by removing all gPool and gUnpool layers from our g-U-Nets, leading to a network with only GCN layers and skip connections. Table~\ref{table:gunet_vs_gunet_no_pool} provides the comparison results between g-U-Nets with and without gPool or gUnpool layers. The results show that g-U-Nets outperform g-U-Nets without gPool or gUnpool layers by margins of 2.3\%, 1.6\%, and 0.5\% on the Cora, Citeseer, and Pubmed datasets, respectively. These results demonstrate the contributions of the gPool and gUnpool layers to the performance improvement. Considering the difference between the two models in terms of architecture, g-U-Nets enable higher-level feature encoding, thereby resulting in better generalization and performance.
\begin{table*}[!th] \centering \caption{Comparison of g-U-Nets with and without gPool or gUnpool layers in terms of node classification accuracy on Cora, Citeseer, and Pubmed datasets.} \label{table:gunet_vs_gunet_no_pool} \begin{tabularx}{\textwidth}{ lx{3cm} YYY } \hline \textbf{Models} & \textbf{Cora} & \textbf{Citeseer} & \textbf{Pubmed} \\ \hline\hline g-U-Nets without gPool or gUnpool & 82.1 $\pm$ 0.6\% & 71.6 $\pm$ 0.5\% & 79.1 $\pm$ 0.2\% \\ \hline \textbf{g-U-Nets (Ours)} & \textbf{84.4 $\pm$ 0.6\%} & \textbf{73.2 $\pm$ 0.5\%} & \textbf{79.6 $\pm$ 0.2\%} \\ \hline \hline \end{tabularx} \end{table*} \begin{table*}[!th] \centering \caption{Comparison of g-U-Nets with and without graph connectivity augmentation in terms of node classification accuracy on Cora, Citeseer, and Pubmed datasets. } \label{table:gunet_vs_gunet_no_aug} \begin{tabularx}{\textwidth}{ lx{3.5cm} YYY } \hline \textbf{Models} & \textbf{Cora} & \textbf{Citeseer} & \textbf{Pubmed} \\ \hline\hline g-U-Nets without augmentation & 83.7 $\pm$ 0.7\% & 72.5 $\pm$ 0.6\% & 79.0 $\pm$ 0.3\% \\ \hline \textbf{g-U-Nets (Ours)} & \textbf{84.4 $\pm$ 0.6\%} & \textbf{73.2 $\pm$ 0.5\%} & \textbf{79.6 $\pm$ 0.2\%} \\ \hline \hline \end{tabularx} \end{table*} \begin{table*}[!th] \centering \caption{Comparison of different network depths in terms of node classification accuracy on Cora, Citeseer, and Pubmed datasets. 
Based on g-U-Nets, we experiment with different network depths in terms of the number of blocks in encoder and decoder parts.} \label{table:depth} \begin{tabularx}{\textwidth}{ YYYY } \hline \textbf{Depth} & \textbf{Cora} & \textbf{Citeseer} & \textbf{Pubmed} \\ \hline\hline 2 & 82.6 $\pm$ 0.6\% & 71.8 $\pm$ 0.5\% & 79.1 $\pm$ 0.3\% \\ \hline 3 & 83.8 $\pm$ 0.7\% & 72.7 $\pm$ 0.7\% & 79.4 $\pm$ 0.4\% \\ \hline 4 & \textbf{84.4 $\pm$ 0.6\%} & \textbf{73.2 $\pm$ 0.5\%} & \textbf{79.6 $\pm$ 0.2\%} \\ \hline 5 & 84.1 $\pm$ 0.5\% & 72.8 $\pm$ 0.6\% & 79.5 $\pm$ 0.3\% \\ \hline \hline \end{tabularx} \end{table*} \begin{table*}[!th] \centering \caption{Comparison of the g-U-Nets with and without gPool or gUnpool layers in terms of the node classification accuracy and the number of parameters on Cora dataset.} \label{table:param} \begin{tabularx}{\textwidth}{ lx{3cm} YYY} \hline \textbf{Models} & \textbf{Accuracy} & \textbf{\#Params} & \textbf{Ratio of increase} \\ \hline\hline g-U-Nets without gPool or gUnpool & 82.1 $\pm$ 0.6\% & 75,643 & 0.00\% \\ \hline \textbf{g-U-Nets (Ours)} & \textbf{84.4 $\pm$ 0.6\%} & 75,737 & 0.12\% \\ \hline \hline \end{tabularx} \end{table*} \subsection{Graph Connectivity Augmentation Study} In the above experiments, we employ gPool layers with graph connectivity augmentation, using the $2^{nd}$ graph power introduced in Section~\ref{sec:aug}. Here, we conduct experiments on node classification tasks to investigate the benefits of graph connectivity augmentation in g-U-Nets. We remove the graph connectivity augmentation from the gPool layers while keeping the other settings the same for a fair comparison. Table~\ref{table:gunet_vs_gunet_no_aug} provides comparison results between g-U-Nets with and without graph connectivity augmentation. The results show that the absence of graph connectivity augmentation causes a consistent performance degradation on all three datasets.
This demonstrates that the graph connectivity augmentation via the $2^{nd}$ graph power improves the connectivity and the information transfer among nodes in the sampled graphs. \subsection{Network Depth Study of Graph U-Nets}\label{sec:exp_depth} Since the network depth, in terms of the number of blocks in the encoder and decoder parts, is an important hyper-parameter of the g-U-Nets, we conduct experiments to investigate the relationship between network depth and performance in terms of node classification accuracy. We use different network depths on node classification tasks and report the classification accuracies. The results are summarized in Table~\ref{table:depth}. We can observe from the results that the performance improves as the network becomes deeper, up to a depth of 4. Beyond that depth, over-fitting sets in and prevents deeper networks from improving further. In image segmentation, U-Net models of depth 3 or 4 are commonly used~\citep{badrinarayanan2017segnet,cciccek20163d}, which is consistent with our choice in the experiments. This indicates the capacity of the gPool and gUnpool layers for receptive field enlargement and high-level feature encoding, even in relatively shallow networks. \subsection{Parameter Study of Graph Pooling Layers}\label{sec:exp_param} Since our proposed gPool layer involves extra parameters, we compute the number of additional parameters based on our g-U-Nets. The comparison results between g-U-Nets with and without gPool or gUnpool layers on the Cora dataset are summarized in Table~\ref{table:param}. From the results, we can observe that the gPool layers add only 0.12\% additional parameters to the model, yet improve the performance by a margin of 2.3\%. We believe this negligible increase in parameters will not raise the risk of over-fitting. Compared to g-U-Nets without gPool or gUnpool layers, the encoder-decoder architecture with our gPool and gUnpool layers yields a significant performance improvement.
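As a concrete illustration of the connectivity augmentation studied above (the $2^{nd}$ graph power of Eq.~\ref{eq:gaug}) and the modified normalization $\hat{A} = A + 2I$, the following is a minimal NumPy sketch; the helper names and the toy path graph are ours for illustration, not part of the released implementation:

```python
import numpy as np

def augment_and_pool(A, idx):
    """Connectivity augmentation: compute the 2nd graph power A @ A,
    which links 2-hop neighbors, then restrict it to the selected nodes."""
    A2 = A @ A
    return A2[np.ix_(idx, idx)]

def normalize_with_strong_self_loops(A):
    """Modified GCN normalization using A_hat = A + 2I, so that a node's
    own features receive a larger weight during aggregation."""
    A_hat = A + 2.0 * np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    # D^{-1/2} A_hat D^{-1/2} via broadcasting
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

# Toy path graph 0-1-2-3: keeping nodes {0, 2} disconnects them in A,
# but they are 2-hop neighbors, so the 2nd graph power keeps an edge.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
pooled = augment_and_pool(A, [0, 2])
```

Here `pooled[0, 1]` is nonzero even though nodes 0 and 2 are not adjacent in `A`, which is exactly the isolated-node problem the augmentation addresses.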
\section{Conclusion} In this work, we propose novel gPool and gUnpool layers for network embedding. The gPool layer adapts the global $k$-max pooling operation to graph data. It samples a subset of important nodes to enable high-level feature encoding and receptive field enlargement. By employing a trainable projection vector, gPool layers sample nodes based on their scalar projection values. Furthermore, we propose the gUnpool layer, which applies unpooling operations to graph data. By using the position information of nodes in the original graph, the gUnpool layer performs the inverse operation of the corresponding gPool layer and restores the original graph structure. Based on our gPool and gUnpool layers, we propose the graph U-Nets~(g-U-Nets) architecture, which uses an encoder-decoder architecture similar to that of the regular U-Net for image data. Experimental results demonstrate that our g-U-Nets achieve performance improvements over other GNNs on transductive learning tasks. To avoid the isolated-node problem that may exist in sampled graphs, we employ the $2^{nd}$ graph power to improve graph connectivity. Ablation studies confirm the contributions of our graph connectivity augmentation approach. \clearpage \section*{Acknowledgments} This work was supported in part by National Science Foundation grants IIS-1908166 and IIS-1908198.
\section{Introduction} The short-range hard-core attractive Yukawa (HCAY) fluid has been widely used as a simple model for testing a variety of theories\cite{Pini99, Paricaud06, Nezbeda07, Weiss07, Haro07, Shiqi04} and as a reference system for modeling the behavior of real fluids, such as colloidal suspensions\cite{Pini02, Wu03} and protein solutions\cite{Caccamo00, Sciortino02,Lekker00}. Its popularity stems from the fact that it captures the most important features of these systems, such as the coexistence and interfacial properties that rule many industrial processes and biological phenomena. The HCAY pair potential is given by \begin{equation} \label{potencial} u(r)=\left\{\begin{array}{ll}\infty, & \mbox{ for $r<\sigma,$}\\ -\epsilon \frac{\exp[-\kappa (r-\sigma)/\sigma]}{r/\sigma}, & \mbox{ for $ \sigma\leq r, $} \end{array} \right. \end{equation} \noindent where $\kappa$ is the interaction range parameter, $\epsilon$ is its depth, and $\sigma =1$ is the hard-core diameter. The vapor-liquid phase diagrams of HCAY fluids have been determined using different simulation techniques as well as different theoretical approaches. In recent works, the coexistence properties of the HCAY fluid were reported for a wide interaction range~\cite{Minerva01,Orea07,Orea08,Lemus08,Pini10}. For long-range interactions, i.~e. for small $\kappa$, the coexistence properties obtained by simulation and by different theoretical approaches agree. However, for short-range attractions, strong discrepancies appear between theory and simulation. Moreover, not even the different simulation approaches agree well with one another. On the other hand, short-range attractive potentials are especially relevant for modeling protein solutions, i.~e., systems having a key role in biological applications~\cite{Frenkel97,Odriozola11}. Hence, this type of interaction is receiving increasing attention~\cite{Weiss08,Weiss09,Nezbeda11,Chapela10,Jakse10,Benavides10}.
For this type of interaction, the vapor-liquid coexistence curve becomes metastable, since it lies within the vapor-solid coexistence region~\cite{Chen08,Lekker00,Frenkel07,Prausnitz04,Kumar05}. The simulation of such systems is rather difficult with standard simulation techniques. Determining the surface tension is by far more computationally demanding than obtaining the coexistence curves. The situation becomes even worse when the potential is extremely short-ranged and discontinuous. This explains why the vapor-liquid interfacial properties of discontinuous potentials have rarely been studied so far. For that reason there is growing interest in developing new simulation techniques for the efficient computation of the surface tension. In this direction, some researchers have implemented clever simulation approaches~\cite{Alejandre99, Errington03, Singh03, Jackson05, Errington07, Bryk07, Miguel08, Jackson11}. These approaches, however, show considerable variation in their surface tension results \cite{Miguel08}. Furthermore, there are very few data for strongly short-ranged potentials. A recent work of Singh~\cite{Singh09} deals with this issue. He reported the interfacial properties of the strong short-range HCAY fluid with $\kappa=8, 9, 10$, using Grand Canonical transition-matrix Monte Carlo (GC-TMMC). Even so, the surface tension data are scarce and restricted to the temperature region close to the critical point. This lack of data at lower temperatures motivated us to implement the replica exchange Monte Carlo (REMC) method, which improves sampling and thus provides new data in this region. In view of the previous paragraphs, we understand that there is a demand for improved simulation approaches, especially for short-range interaction potentials and at low temperatures. Hence, we combine the slab technique with the REMC method to study the HCAY fluid for $\kappa=10, 9, 8$ and $7$, and for a wide temperature range.
The main purposes of this work are threefold: first, to show that the methodology proposed to calculate the surface tension of discontinuous potentials\cite{Alejandre99,Orea03} is valid even for strongly short-ranged systems; second, to show that the vapor-liquid coexistence and interfacial properties can be obtained at lower temperatures by using the REMC technique; and third, to report new thermodynamic properties of the strong short-range HCAY fluid for a broader range of thermodynamic conditions, comparing the results with those previously reported in the literature when available. \section{Methods\label{methods}} In the first subsection we summarize the interface tension evaluation by the virial route for discontinuous potentials. In the second, some details are given on the implementation of the REMC technique. \subsection{Interface tension calculation} The methodology employed to calculate the interface tension of discontinuous potentials is given in detail in previous papers~\cite{Orea03, Rendon06}. Its main idea is to obtain the pressure tensor through the derivatives of the potential given by Eq.~\ref{potencial}. The derivative of the discontinuous contribution in Eq.~\ref{potencial} is given by \begin{equation} \label{deltar} \frac{du(r)}{dr}=-kT \delta(r-\sigma), \end{equation} where $\delta(r-\sigma)$ is a $\delta$-function. This expression can be evaluated during an MC simulation through \begin{equation} \label{deltax} \delta(x) = \frac{\Theta(x) - \Theta(x-\Delta \sigma)}{\Delta\sigma}, ~~~~~~\mbox{as ~~~~~~$\Delta \sigma \rightarrow 0$}, \end{equation} where $\Theta(x)$ is the unit step function: $\Theta(x)=0$ for $x < 0$ and $\Theta(x)=1$ for $x > 0$. In this work the parameter $\Delta$ is given the following values: $\Delta=0.005, 0.010, 0.015, 0.020$ and $0.025$.
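In practice, Eqs.~\ref{deltar} and \ref{deltax} amount to counting, during the run, the pairs whose separation falls within a thin shell of width $\Delta\sigma$ just outside the hard core. The following is a minimal sketch of that bookkeeping (the function name and arrays are illustrative, not taken from the actual simulation code):

```python
import numpy as np

def hard_core_delta(r_pairs, sigma=1.0, delta=0.01):
    """Step-function approximation of delta(r - sigma), Eq. (3):
    pairs with sigma <= r < sigma*(1 + delta) contribute 1/(delta*sigma),
    all others contribute zero. The limit delta -> 0 recovers the
    delta-function appearing in the virial expression, Eq. (2)."""
    in_shell = (r_pairs >= sigma) & (r_pairs < sigma * (1.0 + delta))
    return in_shell.astype(float) / (delta * sigma)
```

Averaging these per-pair contributions over configurations, for each of the five $\Delta$ values above, yields the hard-core part of the pressure tensor entering Eq.~\ref{st}.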
For the virial route, the surface tension is calculated by \begin{equation} \label{st} \gamma =\frac{L_z}2\Bigg\{\big<P_{zz}\big> - \frac 12\big[\big<P_{xx}\big> + \big<P_{yy}\big>\big]\Bigg\}, \end{equation} where $L_z$ is the length of the simulation box in the direction perpendicular to the interfaces. The factor $1/2$ accounts for the existence of two interfaces in the system. \subsection{Replica exchange method and simulation details} As mentioned, we combine the replica exchange Monte Carlo method~\cite{Geyer91,Lyubartsev92,Marinari92,Hukushima96,Frenkel} with the slab technique~\cite{Chapela77}. The general idea is to simulate $M$ replicas of the original system of interest, each replica at a different temperature, while allowing the exchange of microstates among the ensemble cells (swap moves). In this way, replicas at high temperatures travel long distances in phase space, whereas low-temperature systems perform a precise sampling of local regions of phase space. Hence, by introducing these swap trials, a particular replica travels through many temperatures, allowing it to overcome free-energy barriers. The particular ensembles are not disturbed but enriched by the contributions of the $M$ replicas. This technique allows one to easily explore the coexistence regions at low temperature, where other methods freeze. The formal justification of the swap trials relies on the definition of an extended ensemble, as follows: \begin{equation} Q_{\rm extended}=\prod_{i=1}^{M} Q_{N V T_i}, \end{equation} where $Q_{NVT_i}$ is the partition function of the canonical ensemble of the system at temperature $T_i$, volume $V$, and particle number $N$. $M$ is the number of replicas of the system, which equals the number of different temperatures defining the extended ensemble. $Q_{\rm extended}$ may be sampled by $M$ independent standard $NVT$-MC simulations.
However, swap trials can now be introduced between the replicas, which improves the sampling of the low-temperature ensembles. The acceptance probability for these swap trials (performed between adjacent replicas) is given by \begin{equation}\label{accP} P_{acc}\!=\!\min(1,\exp[(\beta_j- \beta_i)(U_i-U_j)]) \end{equation} where $U_i-U_j$ is the potential energy difference between replicas $i$ and $j$, and $\beta_j-\beta_i$ is the difference between the corresponding reciprocal temperatures. Adjacent temperatures should be close enough to provide large exchange acceptance rates between neighboring ensembles. In order to take full advantage of the method, the ensemble at the highest temperature must ensure large jumps in configuration space, so that the low-temperature ensembles can sample from uncorrelated configurations. We employed parallelepiped boxes with sides $L_x=L_y=8.0\sigma$ and $L_z=40.0\sigma$ for simulating the vapor-liquid interface of the HCAY fluid. Each of them is initially set with all particles ($N=1000$) randomly placed within a slab, which is initially surrounded by vacuum\cite{Chapela77}. The center of mass is placed at the box center. Periodic boundary conditions are set in the three directions. Verlet lists are implemented to improve performance. Simulations were carried out in the vapor-liquid metastable region, so the highest temperature is set below the critical temperature. The remaining temperatures follow a geometrically decreasing sequence. The replicas are equilibrated for $10^{7}$ MC steps, while the maximum displacements are varied to yield acceptance rates close to 30\%. Long-displacement trials are also considered. These displacements are important since they allow large jumps in the vapor phase with relatively high acceptance rates, while naturally performing particle-transfer trials between the two phases. The thermodynamic properties are calculated by considering an additional $4\times10^{7}$ configurations.
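A swap trial of Eq.~\ref{accP} reduces to a one-line Metropolis test on a pair of adjacent replicas. The sketch below is illustrative only (the data layout and function names are ours, not the production code):

```python
import math
import random

def swap_probability(beta_i, beta_j, U_i, U_j):
    """Acceptance probability of Eq. (6):
    P_acc = min(1, exp[(beta_j - beta_i)(U_i - U_j)])."""
    return min(1.0, math.exp((beta_j - beta_i) * (U_i - U_j)))

def try_swap(replicas, i, j, rng=random):
    """Attempt to exchange the microstates of adjacent replicas i and j.
    Each replica is a dict holding its inverse temperature 'beta' and the
    potential energy 'U' of its current configuration; here the energy
    stands in for the full configuration being exchanged."""
    p = swap_probability(replicas[i]["beta"], replicas[j]["beta"],
                         replicas[i]["U"], replicas[j]["U"])
    if rng.random() < p:
        replicas[i]["U"], replicas[j]["U"] = replicas[j]["U"], replicas[i]["U"]
        return True
    return False
```

Note that when the colder replica (larger $\beta$) would receive the lower-energy configuration, the exponent is positive and the swap is always accepted; otherwise it is accepted with Boltzmann-like probability.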
The new data reported with standard MC simulations were obtained using the same parameters as in previous works~\cite{Orea07,Orea08,Lemus08,Pini10}, i.~e., they were performed in a parallelepiped cell with sides $L_z=40\sigma>L_x=L_y=10\sigma$ and $1500$ particles. It is well known that these parameters avoid finite-size effects~\cite{Orea05,Malfreyt09,Janecek09}. The new results were obtained with $6\times 10^7$ MC steps to get more accurate values than in previous work~\cite{Orea07}. These new results for $\kappa=7$ differ slightly from those previously reported by Duda {\it et~al.}~\cite{Orea07}. Additional standard MC simulations were also performed with $N=1000$ and $L_x=L_y=8\sigma$ for several conditions, leading to the same results. This suggests that size effects are less important for strongly short-ranged potentials. The critical density and temperature were calculated by using the rectilinear diameters law and the universal value of $\beta=0.325$\cite{Allen}. Our results are given in dimensionless units, as follows: $r^*=r/\sigma$ for distance, $T^*= k_BT/\epsilon$ for temperature, $\rho^*=\rho \sigma^3$ for density, and $\gamma^*= \gamma\sigma^2/\epsilon$ for surface tension. \section{Results and discussion} \begin{figure} \resizebox{0.4\textwidth}{!}{\includegraphics{Fig1.eps}} \caption{\label{fig1} Density profiles, $\rho^*(z)$, of HCAY fluids with $\kappa=10$ for different temperatures by means of REMC. Larger temperatures produce larger vapor densities and lower liquid densities. } \end{figure} The vapor-liquid interfacial properties of strong short-range attractive Yukawa systems with $\kappa=10, 9, 8$, and $7$ were obtained using two simulation techniques, i.~e., standard MC and REMC. In the following we focus on the obtained results. Fig.~\ref{fig1} shows typical interfacial density profiles for strong short-range HCAY fluids with $\kappa = 10$ obtained at different temperatures.
Well-defined liquid and vapor regions are observed, which makes us confident that the interfaces are stable. Furthermore, the systems show bulk vapor and liquid regions of similar volume. This is convenient for generating relatively large bulk regions and making sure they are fully developed. As expected, a temperature increase leads to a lower liquid density and a higher vapor density. Besides, one can clearly see that the interface width increases with temperature, signaling a decrease of the interface tension. It should be noted that the REMC method yields very smooth density profiles (a good sampling of highly uncorrelated configurations is performed), which allows one to obtain precise values of the liquid and vapor coexistence densities, $\rho^*_l$ and $\rho^*_v$, by averaging over the different bulk regions of the profiles. This is done without the need of fitting a hyperbolic function, as is commonly done. \begin{figure} \resizebox{0.4\textwidth}{!}{\includegraphics{Fig2.eps}} \caption{\label{fig2} Phase diagrams of HCAY fluids. Open circles correspond to REMC whereas squares to standard MC. A good agreement is found for all cases.} \end{figure} Fig.~\ref{fig2} shows the phase diagrams obtained from REMC for the studied systems (open circles). This figure is built from the values of $\rho^*_l$ and $\rho^*_v$ obtained from the profiles at different temperatures and for $\kappa=10, 9, 8$ and $7$. The same figure also includes previously reported data for $\kappa=10, 9$ and $7$ (open squares)~\cite{Pini10}. The open squares corresponding to $\kappa=8$ are new data. All the data represented with open squares were obtained by using the standard MC method~\cite{Pini10}. An excellent agreement is found between REMC and standard MC (the two codes are completely independent of one another). The data also agree well with the coexistence curves reported by Singh~\cite{Singh09} (as shown in Ref.~\cite{Pini10}).
Note that the REMC technique provides data at lower temperatures, where the standard methodologies have sampling issues as the system dynamics becomes glassy~\cite{Testard11}. At even lower temperatures the vapor-liquid coexistence metastability breaks down and crystallization takes place~\cite{Odriozola11} (not shown). For obtaining the critical points we employed the exponential fit (with $\beta=0.325$)~\cite{Frenkel}. The critical points also show a good match between standard MC and REMC simulations. \begin{figure} \resizebox{0.4\textwidth}{!}{\includegraphics{Fig3.eps}} \caption{\label{fig3} Surface tension against $\Delta$ for $\kappa=10$ and different temperatures. Upper curves correspond to lower temperatures. Lines correspond to linear regressions.} \end{figure} As mentioned in section~\ref{methods}, the accurate evaluation of the surface tension implies determining it as a function of the parameter $\Delta$ and performing an extrapolation for $\Delta \rightarrow 0$. Fig.~\ref{fig3} shows the surface tension as a function of $\Delta$ for $\kappa=10$ and different temperatures. In this figure one can see the linear behavior of the surface tension as a function of $\Delta$ for all studied temperatures. This allows one to obtain an accurate value of this property by simply performing a linear extrapolation for $\Delta \rightarrow 0$. It should be noted that the slopes of the regressions never vanish, meaning that a single $\Delta$ value cannot be used for an accurate determination of the surface tension. Moreover, the slopes become steeper with decreasing temperature (see Fig.~\ref{fig3}). Thus, determining the surface tension of the short-range HCAY potential at low temperatures implies obtaining it as a function of $\Delta$ and then performing a careful extrapolation for $\Delta \rightarrow 0$.
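The extrapolation itself is a simple least-squares line through the $\gamma^*(\Delta)$ points, evaluated at $\Delta = 0$. A minimal sketch with synthetic, exactly linear data (the slope and intercept below are hypothetical, not measured values):

```python
import numpy as np

def gamma_at_zero_delta(deltas, gammas):
    """Fit gamma(Delta) = a*Delta + b by least squares and return the
    Delta -> 0 intercept b, as done graphically in Fig. 3."""
    a, b = np.polyfit(deltas, gammas, 1)
    return b

# the five Delta values used in this work, with hypothetical gamma(Delta)
deltas = np.array([0.005, 0.010, 0.015, 0.020, 0.025])
gammas = 0.120 - 1.6 * deltas        # synthetic, exactly linear trend
gamma0 = gamma_at_zero_delta(deltas, gammas)
```

With real data the residuals quantify how well the linear behavior seen in Fig.~\ref{fig3} holds at each temperature.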
On the other hand, when comparing the behavior of the surface tension as a function of $\Delta$ for the HCAY fluid to that of the square-well fluid, we notice that the slopes for the HCAY fluid have the opposite sign. This is probably the effect of having a single discontinuity instead of the two corresponding to the square-well potential. In both cases, however, the linear extrapolation is essential for the accurate evaluation of the surface tension of discontinuous potentials by the virial route. \begin{figure} \resizebox{0.4\textwidth}{!}{\includegraphics{Fig4.eps}} \caption{\label{fig4} Surface tension of the strong short-range HCAY fluid as a function of temperature and $\kappa$. Circles are obtained by REMC and squares by the standard MC method. Crosses correspond to data reported by Singh~\cite{Singh09}. The inset shows the REMC reduced surface tension against the reduced temperature.} \end{figure} The results for the surface tension as a function of the temperature for all considered $\kappa$ values are shown in Fig.~\ref{fig4} and given in Table~\ref{Table1}. The behavior of the curves is very similar to that found for longer interaction ranges, i.~e., the surface tension rises as the temperature decreases, and the curves shift to the left with increasing $\kappa$. As in previous plots, circles correspond to REMC and squares to the standard MC method. The obtained agreement is remarkable. This provides confidence in the proposed virial-route evaluation of the surface tension~\cite{Alejandre99,Orea03,Orea05} for these particular extremely short-ranged and discontinuous potentials. \begingroup \squeezetable \begin{table} \caption{Surface tension values for the strong short-range Yukawa fluids. The subscripts are the estimated errors.
} \label{Table1} \begin{tabular}{||c|c|c|c|c||} \hline \hline & \multicolumn{2}{c|}{Standard MC} & \multicolumn{2}{c||}{REMC} \\ \hline $\kappa$ & $T^*$ & $\gamma^*$ & $T^*$ & $\gamma^*$ \\ \hline & 0.395 & $0.029_{07}$ & 0.4000 & $0.023_{06}$ \\ & 0.390 & $0.038_{05}$ & 0.3936 & $0.033_{07}$ \\ & 0.380 & $0.062_{06}$ & 0.3873 & $0.045_{04}$ \\ & 0.370 & $0.095_{10}$ & 0.3811 & $0.059_{05}$ \\ & 0.360 & $0.126_{08}$ & 0.3750 & $0.078_{07}$ \\ 7 & 0.350 & $0.158_{09}$ & 0.3690 & $0.092_{06}$ \\ & & & 0.3631 & $0.113_{07}$ \\ & & & 0.3573 & $0.134_{06}$ \\ & & & 0.3516 & $0.153_{05}$ \\ & & & 0.3460 & $0.170_{11}$ \\ & & & 0.3404 & $0.188_{08}$ \\ \hline & 0.375 & $0.021_{06}$ & 0.3750 & $0.019_{06}$ \\ & 0.370 & $0.032_{07}$ & 0.3698 & $0.027_{05}$ \\ & 0.365 & $0.043_{05}$ & 0.3646 & $0.041_{07}$ \\ & 0.360 & $0.058_{06}$ & 0.3595 & $0.055_{05}$ \\ & 0.350 & $0.088_{08}$ & 0.3544 & $0.072_{05}$ \\ 8 & 0.340 & $0.120_{11}$ & 0.3495 & $0.090_{06}$ \\ & & & 0.3446 & $0.107_{08}$ \\ & & & 0.3398 & $0.125_{06}$ \\ & & & 0.3350 & $0.140_{10}$ \\ & & & 0.3303 & $0.162_{08}$ \\ & & & 0.3257 & $0.182_{09}$ \\ \hline & 0.355 & $0.018_{04}$ & 0.3524 & $0.023_{06}$ \\ & 0.350 & $0.030_{06}$ & 0.3478 & $0.034_{08}$ \\ & 0.345 & $0.045_{08}$ & 0.3433 & $0.049_{05}$ \\ 9 & 0.340 & $0.062_{05}$ & 0.3388 & $0.065_{07}$ \\ & 0.335 & $0.078_{11}$ & 0.3344 & $0.082_{06}$ \\ & 0.330 & $0.101_{08}$ & 0.3300 & $0.099_{08}$ \\ & & & 0.3258 & $0.115_{10}$ \\ & & & 0.3215 & $0.130_{09}$ \\ \hline & 0.338 & $0.021_{05}$ & 0.3360 & $0.025_{04}$ \\ & 0.335 & $0.029_{08}$ & 0.3321 & $0.037_{07}$ \\ & 0.330 & $0.044_{07}$ & 0.3282 & $0.054_{06}$ \\ 10& 0.325 & $0.062_{07}$ & 0.3243 & $0.068_{09}$ \\ & 0.320 & $0.085_{09}$ & 0.3205 & $0.081_{11}$ \\ & & & 0.3168 & $0.097_{05}$ \\ & & & 0.3131 & $0.114_{08}$ \\ \hline \hline \end{tabular} \end{table} \endgroup We also included in Fig.~\ref{fig4} the data recently reported by Singh by means of GC-TMMC method~\cite{Singh09}. 
A comparison with these data reveals good qualitative agreement: the trends match well. Nonetheless, our data lie systematically above Singh's values, especially close to the critical point, where the differences exceed $30\%$. These differences are considerably larger than our error bars, which are always smaller than twice the symbol size for both methods (see Table~\ref{Table1}). The discrepancy persists despite the very good match previously found between the vapor-liquid phase diagrams obtained by GC-TMMC and the standard MC technique~\cite{Pini10}. We believe that the GC-TMMC technique would probably yield larger surface-tension values for a considerably larger number of MC steps (the author reported a short run time of $48$--$72$ hrs). Note that we performed $6\times 10^7$ and $4\times 10^7$ steps with the standard MC and the REMC method, respectively, to produce the data shown in Fig.~\ref{fig4}; an $M=12$ REMC run takes approximately a month on a quad-core desktop machine. Thus, in both cases a large number of steps was required, and the gain from employing REMC instead of a standard MC technique for the surface-tension determination is not as large as we expected. A similar finding was obtained by de Miguel~\cite{Miguel08} when comparing the expanded-ensemble approach (a method also based on the introduction of an extended ensemble and thus related to REMC) with the virial route for obtaining the surface tension. Previous studies of the same pair potential at intermediate and long interaction ranges show that the surface tension and the coexistence curves collapse onto master curves when reduced by their critical parameters~\cite{Orea08}. Furthermore, a single master curve was also found for the coexistence data of the strong short-range HCAY~\cite{Pini10}.
However, the reduced surface tension, $\gamma^*_r=\gamma/(\rho^{*2/3}_c T^*_c)$, obtained by REMC as a function of the reduced temperature, $T^*_R=T^*/T^*_c$, shown in the inset of Fig.~\ref{fig4}, forms only an imperfect master curve, with slight deviations. Thus, we cannot confirm that the strong short-range HCAY fluid obeys a corresponding-states law on the basis of this property. \section{Conclusions} The present work fulfills the three purposes stated in the introduction. First, it shows that the virial-route calculation of the surface tension for discontinuous potentials\cite{Alejandre99,Orea03} produces reliable data for strong short-range systems; for this purpose, the surface tension must be obtained as a function of $\Delta$ and then carefully extrapolated to $\Delta \rightarrow 0$. Second, it demonstrates the utility of the REMC technique in assisting the virial route to obtain data at low temperatures. Finally, it reports new interfacial tension values for the strong short-range HCAY fluids, which extend the knowledge of the behavior of these systems to a broader range of thermodynamic conditions. These new data may help test the results of existing and forthcoming theoretical approaches. \section{Acknowledgments} The authors thank The Molecular Engineering Program of IMP as well as CONACyT of M\'exico for financial support (projects Nos. D.00406 and Y.00119 SENER-CONACyT).\\
\section{Continuous joint measurement of non-commuting observables} \label{sec:contmeasurements} Let us start by reviewing the continuous measurement process for a single observable represented by the Hermitian operator $A$. We assume a steady-state detector that consists of a continuous stream of identically prepared Gaussian states \cite{JacobsIntroContMeas06}, each of which briefly interacts with the system for a duration $\delta t$ and is later measured to produce a noisy result $r$. This idea of a continuous measurement is fairly general, but for specificity we will consider a steady-state coherent microwave field in a pumped resonator with rapid decay rate $\kappa$, which very briefly interacts with a transmon qubit for a duration $\delta t \sim \kappa^{-1}$ before escaping the resonator, traveling down a transmission line, being amplified with a phase-sensitive amplifier, and then being measured to produce a homodyne signal \cite{Murch2013,Weber2014,Tan2015,Foroozani2016}. We model this measurement phenomenologically with the quantum Bayesian approach~\cite{KorotkovBayesian1,KorotkovBayesian2,KorotkovBayesian3}, which is equivalent to the optical quantum trajectories formalism~\cite{WisemanBook,diosi1988continuous,Gambetta2008} with coherent resonator states, which disentangle from the qubit in this ``bad cavity'' limit where $\kappa \to \infty$. Each segment of the detecting microwave field of duration $\delta t$ interacts with the qubit independently to produce a result $r$, yielding a Markov chain of quantum state updates (equivalent to Bayes' rule) that becomes a stochastic process in the continuum limit. For the sake of simplifying the discussion here, we assume that the collection of the field is perfectly efficient, and that there are no other dephasing or energy-relaxation effects that disrupt the qubit evolution.
We also assume that the measured result $r$ has been scaled such that the probability distribution $P(r|a)$ for obtaining $r$ if the system is in an eigenstate $\left| a \right\rangle$ of $A$ is Gaussian with variance $\tau/\delta t$ centered around the eigenvalue $a$ corresponding to $\left| a \right\rangle$. The measurement time $\tau$ is an experimental parameter that depends on the coupling between the measurement device and the system, and characterizes the rate at which the device acquires information about the state of the system. With this choice of normalization, $\tau$ is the time for an accumulated noisy readout to achieve unit signal-to-noise ratio given a definite initial eigenstate. Such a Gaussian measurement for observing $r$ during each independent duration $\delta t$ corresponds to a Gaussian positive operator-valued measure (POVM) $E(r)$ that is diagonal in the $A$ basis such that $P(r|a) = \langle a|E(r)|a \rangle$. As such, $E(r)$ satisfies the probability normalization condition $\int E(r)dr = \id$. In the absence of experimental inefficiency and phase backaction \cite{KorotkovBayesian2}, each $E(r)$ factors into a single Kraus operator \begin{equation}\label{eq:gausskraus} M(r) = \left( \frac{\delta t}{2 \pi \tau }\right)^{1/4} \exp\left[-\frac{ \delta t}{2\tau} \frac{(r - A)^2}{2} \right] \end{equation} such that $E(r) = M(r)^\dagger M(r)$. This Kraus operator describes the state update $\rho \mapsto M(r)\rho M(r)^\dagger / \text{Tr}[\rho E(r)]$ resulting from observing a particular $r$, given an initial density matrix $\rho = \sum_{a,a'}\rho_{a,a'}|a\rangle\langle a'|$. For simulation purposes, $r$ is a random variable sampled from the mixture distribution $ P(r|\rho(t))=\text{Tr}[\rho(t)E(r)]$ at each time step. In the continuum limit $\delta t \ll \tau$, the readout $r$ approximates a moving-average stochastic process \begin{align} \label{eq:readout} r(t) = \text{Tr}[\rho(t) A] + \sqrt{\tau} \ \xi(t).
\end{align} That is, the Gaussians with variance $\tau/\delta t$ centered at each eigenvalue $a$ broaden and merge, so the mean of $r(t)$ at each $t$ approximates the mean $\text{Tr}[\rho(t) A]$ of the eigenvalues in the state $\rho(t)$, with the approximately Gaussian spread around that mean becoming additive white noise $\xi(t)$ satisfying $\langle \xi(t) \rangle = 0$ and $\langle \xi(t_1) \xi(t_2) \rangle = \delta(t_1 - t_2) $. Here the averaging $\langle \cdot \rangle$ can denote either a temporal average or an ensemble average since the white noise process is stationary. Considering two noncommuting observables is a straightforward generalization of the single observable case, obtained by alternating the measurements prior to taking the continuum limit. For simplicity we now restrict our discussion to a qubit with Bloch coordinates $x(t) = \text{Tr}[\rho(t)\sigma_x]$, $y(t) = \text{Tr}[\rho(t)\sigma_y]$, and $z(t) = \text{Tr}[\rho(t)\sigma_z]$ defined by the Pauli operators $\sigma_x$, $\sigma_y$, and $\sigma_z$. We will simultaneously measure $x$ and $z$ with equal measurement times $\tau_x = \tau_z = \tau$, in the absence of Hamiltonian evolution. The effects of each independent measurement with records $r_x$ and $r_z$ are described by Kraus operators of the form in Eq.~\eqref{eq:gausskraus} with $A = \sigma_x,\,\sigma_z$, which we denote as $M_x(r_x)$ and $M_z(r_z)$, respectively. After obtaining both measurements over a timestep $\delta t$, the approximate state update is given by \begin{equation} \rho(t+\delta t) \approx \frac{M_z M_x \rho(t) M_x^\dag M_z^\dag}{\text{Tr}\left[ M_z M_x \rho(t) M_x^\dag M_z^\dag \right]}, \end{equation} which is valid to first order in $\delta t/\tau \ll 1$. 
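As a concrete illustration of the single-observable case, the following Python sketch applies the Gaussian Kraus operator of Eq.~\eqref{eq:gausskraus} to a qubit monitored along $\sigma_z$. The parameter values, seed, and helper names are our own choices, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(1)
tau, dt = 1.0, 0.01            # measurement time and step, dt << tau
sz = np.diag([1.0, -1.0])      # measured observable A = sigma_z

def kraus(r):
    # M(r) = (dt/2 pi tau)^{1/4} exp[-(dt/4 tau)(r - A)^2], diagonal in the A basis
    pref = (dt / (2 * np.pi * tau)) ** 0.25
    return pref * np.diag(np.exp(-dt / (4 * tau) * (r - np.diag(sz)) ** 2))

def sample_r(rho):
    # r is drawn from the mixture P(r|rho): pick an eigenvalue a with weight
    # <a|rho|a>, then add Gaussian noise of variance tau/dt around it
    p = np.real(np.diag(rho))
    a = rng.choice(np.diag(sz), p=p / p.sum())
    return a + np.sqrt(tau / dt) * rng.standard_normal()

rho = np.full((2, 2), 0.5, dtype=complex)   # pure |+x> initial state
for _ in range(1000):                       # monitor sigma_z for t = 10 tau
    M = kraus(sample_r(rho))
    rho = M @ rho @ M.conj().T
    rho /= np.trace(rho).real               # Bayesian renormalization
z = np.trace(rho @ sz).real
print("z after 10 tau of sigma_z monitoring:", z)
```

Starting from $|+x\rangle$, the repeated updates drive the state toward one of the $\sigma_z$ eigenstates while preserving purity, as expected for an efficient measurement.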
Though this discrete form that performs the two measurements separately will accumulate error of order $(\delta t/\tau)^2$ over time, it is still a useful approximation for numerical simulations, since it properly preserves the properties of the state (unit trace and complete positivity). The accumulated sequencing error may be quantified by comparing the state after the update order ($x$, $z$) to that after the reverse ordering, which will verify whether $\delta t/\tau$ is sufficiently small. In practice, explicitly first-order stochastic update methods can accumulate more subtle evolution errors over time if care is not taken to preserve the state properties. For analytic purposes, expanding the discrete update to linear order and formally taking the continuum limit $\delta t \to 0$ produces a stochastic master equation for $\rho(t)$ \begin{align}\label{eq:strat} \dot{\rho} &= \frac{r_x}{\tau}\left[ \frac{\left\{ \sigma_x,\rho \right\}}{2} - x \rho\right] + \frac{r_z}{\tau} \left[\frac{\left\{ \sigma_z,\rho \right\}}{2} - z \rho\right] \end{align} in Stratonovich form (with time-symmetric derivative $\dot{\rho}(t) \equiv \lim_{\delta t\to 0}[\rho(t+\delta t) - \rho(t - \delta t)]/2\delta t$), where we suppress explicit time dependencies for brevity. This form makes it clear that the effect of continuous qubit measurements at each time $t$ is completely described by a renormalized Jordan product $\{A,B\}/2 \equiv (AB + BA)/2$ of each measured observable with the state $\rho$. In Bloch coordinates, this master equation splits into: \begin{subequations}\label{eq:dynamicsStrato} \begin{align} \dot{x} &= \left(1-x^2\right) \frac{r_x}{{\tau}} - x z \frac{r_z}{{\tau}} \label{eq:dynamicsAStrato} \\ \dot{y} &= - y x \frac{r_x}{{\tau}} - y z \frac{r_z}{{\tau}} \label{eq:dynamicsBStrato} \\ \dot{z} &= \left(1-z^2 \right) \frac{r_z}{{\tau}} - x z \frac{r_x}{{\tau}}.
\label{eq:dynamicsCStrato} \end{align} \end{subequations} The correlation functions for the observed readouts may be computed from these differential equations~\cite{KorotkovXYZ} \begin{align} \label{eq:correlators} \langle r_x(0) r_x(t) \rangle &= \langle r_z(0) r_z(t) \rangle = \exp(-t/2\tau) \\ \langle r_x(0) r_z(t) \rangle &= \langle r_z(0) r_x(t) \rangle = 0, \nonumber \end{align} and match our numerical simulations shown in Fig.~\ref{fig:correlators} for any $t>0$ and any initial qubit state. \begin{figure} \begin{center} \includegraphics[trim={0 1cm 0 0},width=0.5\textwidth]{correlatorsv8.eps} \end{center} \caption{Correlation functions for the independent readout signals $r_x(t)$ and $r_z(t)$ obtained from simultaneously monitoring the noncommuting qubit observables $\sigma_x$ and $\sigma_z$ with equal characteristic measurement times $\tau$. While cross-correlations like $\langle r_x(0) r_z(t) \rangle$ vanish, autocorrelations like $\langle r_x(0) r_x(t) \rangle$ decay exponentially with the delay at rate $1/2\tau$. } \label{fig:correlators} \end{figure} It is interesting to compare Eq.~\eqref{eq:strat} with its analogue in It\^o form (with forward-derivative $\dot{\rho} \equiv [\rho(t+\delta t)-\rho(t)]/\delta t$)~\cite{WisemanBook}. The qubit state evolves in It\^o form according to \begin{align}\label{eq:Ito} \dot{\rho} = &-\frac{1}{2\tau}\frac{\big[[\rho,\sigma_x],\sigma_x\big]}{4} -\frac{1}{2\tau}\frac{\big[[\rho,\sigma_z],\sigma_z\big]}{4} \nonumber \\ &+ \frac{\xi_x}{\sqrt{\tau}}\left[ \frac{\left\{ \sigma_x , \rho \right\}}{2} - x \rho \right] + \frac{\xi_z}{\sqrt{\tau}}\left[ \frac{\left\{ \sigma_z , \rho \right\}}{2} - z \rho \right]. \end{align} The first two terms are in Lindblad form, and correspond to the ensemble-average dissipation (decoherence) due to the detector, which acts as an external bath on average during the measurement process. 
The last two terms describe measurement innovation, and are similar to the Stratonovich evolution but involve only the effective white noises $\xi_x$ and $\xi_z$ of each readout (defined as in Eq.~\eqref{eq:readout}); they increase the purity of the state due to the acquisition of information by the measurement devices~\cite{JacobsIntroContMeas06}. For the efficient measurements considered here, the innovation precisely compensates for the dissipation to preserve the purity of an initially pure state, which is not apparent in the It\^o picture. However, the relation between individual trajectories and the ensemble average is clearer in the It\^o picture, since the white noise simply averages away. The solutions to Eqs.~\eqref{eq:strat} and \eqref{eq:Ito} are identical, so the choice of derivative definition is a matter of taste. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{trackedxz5.eps} \caption{Example filtered output signals $\overline{r}_x(t)$ (top, blue) and $\overline{r}_z(t)$ (bottom, blue), and qubit Bloch coordinates $x(t)$ (top, black) and $z(t)$ (bottom, black), simulated with a normalized time step $\delta t/\tau = 0.01$. The raw readout signals $r_x(t)$ and $r_z(t)$ were filtered with a simple exponentially-weighted moving average with decay time $\tau$, and approximately track the qubit state even without more sophisticated state estimation. } \label{fig:trackedXZ} \end{figure} Fig.~\ref{fig:trackedXZ} demonstrates a remarkable feature of the simultaneous measurement of both $x$ and $z$ that is not possible for a single continuous measurement: filtering the raw readout signals $r_x(t)$ and $r_z(t)$ allows the true qubit state $x(t)$ and $z(t)$ to be tracked with reasonably high fidelity \cite{KorotkovXYZ}. For a single measurement this sort of tracking is only possible in the stronger-measurement case where the qubit remains mostly in the eigenstates of the measurement (as in Zeno-pinned quantum jump dynamics \cite{Vijay2011}).
Such a single continuous measurement effectively hides the qubit dynamics for timescales shorter than the collapse to the measurement eigenstates (${\sim}3\tau$). However, we see in Fig.~\ref{fig:trackedXZ} that in the two-measurement case even the simplest exponential moving-average filter manages to smooth out the excess readout noise and recover the qualitative qubit state dynamics as $\langle r_x(t)\rangle \approx x(t)$ and $\langle r_z(t) \rangle \approx z(t)$. This \emph{model-independent} state estimation method uses the directly observed readouts with minimal processing, and shows that the qubit state no longer collapses to definite eigenstates, but instead seems to behave as if the coordinates $x$ and $z$ are always simultaneously well-defined but also randomly evolving. Importantly, the observer need have no prior knowledge about the qubit to arrive at precisely the same conclusion, with the same estimation of the qubit state. Indeed, Fig.~\ref{fig:trackedXZ} shows an arbitrary evolution segment from a much longer trajectory run, with no further context. This behavior is contrary to what one would naively expect, since we are monitoring two noncommuting observables that should disturb one another. However, we can intuit that the two observables are mutually disrupting the progressive qubit state collapse to their respective eigenstates, such that the disruptions perfectly balance due to the equal $\tau$. This symmetric joint observation thus seems to permit the observables to behave somewhat more \emph{classically}, with both seeming to be reasonably well-defined at all times in an observationally meaningful way. This behavior is very much in the spirit of the macrorealist assumptions of Leggett and Garg that were discussed in the introduction \cite{LeggettGarg85}, despite the fact that the noncommutativity of the monitored observables is precisely what is expected to be responsible for \emph{causing} violations of such macrorealism. 
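The tracking behavior described above can be emulated with a short simulation that alternates $M_x$ and $M_z$ updates and then applies the exponential moving-average filter. This is a sketch under our own parameter choices; the Kraus prefactor is dropped since it cancels upon normalization, and the readouts are sampled in the first-order-in-$\delta t/\tau$ Gaussian approximation:

```python
import numpy as np

rng = np.random.default_rng(2)
tau, dt, steps = 1.0, 0.01, 4000
sx = np.array([[0., 1.], [1., 0.]])
sz = np.diag([1., -1.])
H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)   # sigma_x eigenbasis

def kraus(r, basis=None):
    # Gaussian Kraus operator for a Pauli observable with eigenvalues +/-1;
    # the overall prefactor cancels when the state is renormalized
    d = np.diag(np.exp(-dt / (4 * tau) * (r - np.array([1., -1.])) ** 2))
    return d if basis is None else basis @ d @ basis

rho = np.eye(2) / 2                      # maximally mixed start (y stays 0)
xs = np.empty(steps); zs = np.empty(steps)
rx = np.empty(steps); rz = np.empty(steps)
for k in range(steps):
    xs[k] = np.trace(rho @ sx)
    zs[k] = np.trace(rho @ sz)
    rx[k] = xs[k] + np.sqrt(tau / dt) * rng.standard_normal()
    rz[k] = zs[k] + np.sqrt(tau / dt) * rng.standard_normal()
    Mx, Mz = kraus(rx[k], H), kraus(rz[k])
    rho = Mz @ Mx @ rho @ Mx.T @ Mz.T
    rho /= np.trace(rho)

# Exponential moving average with decay time tau smooths the raw readouts
alpha = dt / tau
fx = np.zeros(steps); fz = np.zeros(steps)
for k in range(1, steps):
    fx[k] = fx[k - 1] + alpha * (rx[k] - fx[k - 1])
    fz[k] = fz[k - 1] + alpha * (rz[k] - fz[k - 1])
```

After a transient of a few $\tau$, the filtered signals follow the true Bloch coordinates far more closely than the raw readouts do, reproducing the qualitative behavior of Fig.~\ref{fig:trackedXZ}.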
As such, we are motivated to ask whether the qubit will still violate macrorealistic inequalities for continuous measurements that are similar to existing tests performed with single continuous measurement signals \cite{KorotkovLG,KorotkovLGexperiment,Bednorz2012}. (As an interesting side note, the Cauchy-Schwarz inequality for qubit observable variances produces \begin{align} (\Delta\sigma_x)^2(\Delta\sigma_z)^2 &\geq \left|\frac{\langle[\sigma_x,\sigma_z]\rangle}{2i}\right|^2 \\ & + \left|\frac{\langle\{\sigma_x,\sigma_z\}\rangle}{2} - \langle\sigma_x\rangle\langle\sigma_z\rangle\right|^2, \nonumber \end{align} which can be rearranged to produce a trivial Bloch sphere inequality $x^2 + y^2 + z^2 \leq 1$. Neither this, nor the coarser Heisenberg-Kennard uncertainty relation derived from it, prevent classical spin-like behavior for a qubit.) \section{Violation of Leggett-Garg macrorealism} \label{sec:LG} In the standard Leggett-Garg scenario one considers projective measurements of a dichotomic quantity $z(t)$, with $|z(t)|~=~1$, though this restriction can be relaxed to permit $|z(t)|\leq 1$ \cite{Dressel2011,Dressel2014}. Under the assumption that the system obeys \emph{macrorealism}, i.e. that \begin{enumerate} \item[(A1)] $z(t)$ evolves causally with a well-defined value at any given time $t$ (\emph{macrorealism per-se}), and that \item[(A2)] $z(t)$ can be measured without disturbing subsequent evolution (\emph{noninvasive measurability}), \end{enumerate} the following three-time inequality holds \begin{equation} \label{LGinequalityProj} \langle z(t_1) z(t_2) \rangle + \langle z(t_2) z(t_3) \rangle - \langle z(t_1) z(t_3) \rangle \le 1, \end{equation} where $\langle \cdot \rangle$ indicates an ensemble average over many realizations of the experiment, each of the realizations consisting of the projective measurement of $z$ at two different times.
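For equally spaced times under coherent qubit evolution (discussed next), the left-hand side of Eq.~\eqref{LGinequalityProj} takes the form $K(\theta) = 2\cos\theta - \cos 2\theta$ with $\theta = \Omega\Delta t$. Its maximum, the standard L\"uders bound of $3/2$, is easily verified numerically; the sketch is ours:

```python
import numpy as np

# K(theta) = 2 cos(theta) - cos(2 theta), theta = Omega * Delta_t
theta = np.linspace(0, np.pi, 100001)
K = 2 * np.cos(theta) - np.cos(2 * theta)
i = np.argmax(K)
print(f"max K = {K[i]:.4f} at Omega*Delta_t = {theta[i]:.4f} (pi/3 = {np.pi/3:.4f})")
```

The maximum occurs at $\theta = \pi/3$, where $K = 3/2$, exceeding the macrorealistic bound of $1$.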
Evolving quantum systems can violate the inequality~\eqref{LGinequalityProj}, implying the failure of at least one of the macrorealism postulates. For qubit measurements of $\sigma_z$ evolving with a Rabi Hamiltonian $H = (\Omega/2) \sigma_x$, the left hand side of~\eqref{LGinequalityProj} becomes $2\cos \left( \Omega \Delta t \right) - \cos \left(2\Omega \Delta t\right) = 3/2$ if the time intervals are chosen to be equal such that $t_3 - t_2 = t_2 - t_1 \equiv \Delta t = \pi/3 \Omega$. Note that the violation of the inequality depends crucially on the relation between $\Delta t$ and the period $2\pi/\Omega$---there is no violation in the limit that $\Omega\to 0$ (no evolution). For continuous monitoring of only $\sigma_z$, this logic is generalized in the following way \cite{KorotkovLG,KorotkovLGexperiment}. First, the noisy measured readout is assumed to be unbiased: $r_z(t) = z(t) + \sqrt{\tau}\,\xi_z(t)$. Second, the noise $\xi_z(t)$ is assumed to be only apparent (i.e., produced by the detector itself) and not to cause additional evolution of the qubit (e.g., through measurement backaction, or invasive physical coupling); in this case, $\langle \xi_z(0)z(t) \rangle = 0$ for $t>0$. With this interpretation of continuous \emph{noninvasive measurability}, we can rewrite the correlation functions in Eq.~\eqref{LGinequalityProj} as correlations of the readout directly, $\langle z(t_1)z(t_2)\rangle = \langle r_z(t_1)r_z(t_2)\rangle$, using the fact that the white noise is itself $\delta$-correlated. After this replacement, we recover results completely analogous to the projective measurement case in Eq.~\eqref{LGinequalityProj}. \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{spinbloch7.jpg} \caption{Bloch representation of a qubit contained in the $y = 0$ plane. The components $x(t) = \cos\left(\theta(t) \right)$ and $z(t) = \sin\left(\theta(t)\right)$ determine the state at any given time.
For a classical spin, these components are sufficient to deduce the component $l_\varphi(t) = x(t)\cos\varphi + z(t)\sin\varphi$ along any direction defined by the angle $\varphi$, as shown.} \label{fig:spinbloch} \end{figure} Let us now assume macrorealism holds for the system being considered, and use the same logic as above to derive suitable macrorealistic constraints for joint $x$ and $z$ monitoring. Since now two orthogonal axes are involved, we expect the macrorealistic state of the qubit to mimic that of a classical spin. From Fig.~\ref{fig:spinbloch} it is easy to see that from the observed components $x(t)$ and $z(t)$ we can deduce the component $l_\varphi(t)$ of such a definite spin state in an arbitrary direction defined by the angle $\varphi$, \begin{equation} \label{eq:length} l_\varphi = \cos(\varphi) x + \sin(\varphi) z. \end{equation} Similarly, for a given direction $\varphi$ we can construct an effective readout signal for $l_\varphi(t)$ as \begin{equation} \label{eq:rphi} r_{\varphi} = \cos(\varphi) r_x + \sin(\varphi) r_z \equiv l_\varphi + \sqrt{\tau} \ \xi_\varphi, \end{equation} where $\xi_\varphi = \cos(\varphi)\xi_x + \sin(\varphi)\xi_z$ is still zero-mean $\delta$-correlated white noise. If $r_x(t)$ and $r_z(t)$ do convey information about the instantaneous values of $x(t)$ and $z(t)$, then $r_\varphi(t)$ should also provide information about the instantaneous value of $l_\varphi(t)$. It then follows that if we assume \emph{noninvasive measurability} as before, so the apparent noise does not disturb the measured quantity, $\langle \xi_\varphi(0) l_\varphi(t) \rangle = 0$, then \begin{align} \label{eq:phiphi} \langle r_\varphi(0) r_\varphi(t) \rangle &= \langle l_\varphi(0) l_\varphi(t) \rangle. \end{align} This is the natural generalization of the continuous Leggett-Garg assumptions to the case of joint continuous measurements. 
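That the rotated noise $\xi_\varphi$ remains zero-mean $\delta$-correlated white noise can be checked directly with discrete noises of variance $1/\delta t$ per step; the direction, step size, and sample count below are our own choices:

```python
import numpy as np

rng = np.random.default_rng(3)
dt, n = 0.01, 200000
phi = 0.7   # arbitrary direction angle (our choice)

# Discrete white noises: variance 1/dt per step approximates delta-correlation
xi_x = rng.standard_normal(n) / np.sqrt(dt)
xi_z = rng.standard_normal(n) / np.sqrt(dt)
xi_phi = np.cos(phi) * xi_x + np.sin(phi) * xi_z

var = np.var(xi_phi) * dt                       # ~1: unit spectral density
lag1 = np.mean(xi_phi[:-1] * xi_phi[1:]) * dt   # ~0: no memory between steps
print(f"normalized variance {var:.3f}, lag-1 correlation {lag1:.4f}")
```

Since $\cos^2\varphi + \sin^2\varphi = 1$, the variance is preserved for every direction $\varphi$, so $r_\varphi$ has exactly the same noise statistics as the raw readouts.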
From the correlation functions in Eq.~\eqref{eq:correlators} the observations for the quantum mechanical model then become \begin{align} \label{eq:phiphi1} \langle r_\varphi(0) r_\varphi(t) \rangle &= \cos^2(\varphi) \langle r_x(0) r_x(t) \rangle + \sin^2(\varphi) \langle r_z(0) r_z(t) \rangle \nonumber \\ & + \sin(\varphi) \cos(\varphi) \Big\langle r_x(0) r_z(t) + r_z(0) r_x(t) \Big\rangle \nonumber \\ & = \exp(-t/2\tau), \qquad \forall \varphi. \end{align} For short times $t \ll 2\tau$ this implies $\langle r_\varphi(0) r_\varphi(t) \rangle \approx 1$. Since $|l_\varphi| \le 1 $, Eq.~\eqref{eq:phiphi} can only be fulfilled if $|l_\varphi(t)| \approx 1$ for all times and for \emph{any direction $\varphi$}. This is clearly inconsistent with the qubit acting like a spin with a well-determined state in the Bloch representation, even before invoking an inequality like Eq.~\eqref{LGinequalityProj}. The incongruities do not end there. Now consider the product of components of the spin along orthogonal directions defined by the angles $\varphi$ and $\varphi + \frac{\pi}{2}$. We obtain, by similar calculations as above, that \begin{align} \label{eq:phiphi2} &\langle r_\varphi(0) r_{\varphi+\pi/2}(t) \rangle = 0 \qquad \forall \ \varphi. \end{align} We thus conclude $|l_\varphi(t)| = 0$ for all $t$ and $\varphi$, which is incompatible with the previous conclusion that $|l_\varphi(t)| \approx 1$. In Fig.~\ref{fig:quantumvsclassical} we check these results with numerical simulations, and compare them to what one would obtain for a well-defined classical spin. \begin{figure} \centering \includegraphics[width=0.46\textwidth]{quantumvsclassical3.eps} \caption{ Comparison of the component $l_\varphi$ for a classical spin pointed along the positive $x$ axis (red), and the correlation results for a qubit, both theoretical (black), and numerically simulated (blue, dotted).
The quantum correlation functions $\langle r_\varphi(0) r_\varphi(t) \rangle$ for short delay times $t \ll \tau$ nearly saturate the maximum value of $1$ that $l_\varphi(0) l_\varphi(t)$ can take at any given time. With the Leggett-Garg noninvasive measurability assumption, this implies that \emph{for all times} and \emph{for all angles} $l_\varphi \approx 1$. This is clearly incompatible with the possible values that the component $l_\varphi$ can take for a classical spin with a definite direction in the Bloch sphere. } \label{fig:quantumvsclassical} \end{figure} Notice that with these results it is also simple to construct Leggett-Garg inequalities for non-commuting measurements that are similar to Eq.~\eqref{LGinequalityProj}. As an example, invoking the same noninvasive measurability assumption as before we obtain \begin{align} \label{LGnewclassical} \langle r_x(0) r_\varphi(t) \rangle + \langle r_\varphi(t) r_z(2t) \rangle - \langle r_x(0) r_z(2t) \rangle &~\le~1 \end{align} for any $\varphi$. However, the actual evolution under joint measurement of $x$ and $z$ yields \begin{align} &\langle r_x(0) r_\varphi(t) \rangle + \langle r_\varphi(t) r_z(2t) \rangle - \langle r_x(0) r_z(2t) \rangle \nonumber \\ &\quad = \left( \cos(\varphi) + \sin(\varphi) \right) \exp(-t/2\tau) \le \sqrt{2}, \label{eq:LGnew} \end{align} using the correlation functions~\eqref{eq:correlators}. For $\varphi = \pi/4$ the left-hand side of Eq.~\eqref{eq:LGnew} approaches $\sqrt{2}$ for $t \ll \tau$, violating the macrorealistic bound~\eqref{LGnewclassical}. Notably, these bound violations occur without Hamiltonian evolution, unlike Leggett-Garg inequalities formed from a single output, like $r_x$ or $r_z$ independently; as such, in order to see these violations it is crucial to consider both measurement outputs simultaneously. Following the usual logic for Leggett-Garg inequalities, we infer from these absurd conclusions that at least one of the assumptions of macrorealism is being violated.
One option is to reject realism, but this is unlikely given the strongly realistic behavior suggested by Fig.~\ref{fig:trackedXZ}. It is thus more likely that the standard assumption of noninvasive measurability being used for continuous measurements is overly restrictive. It is therefore interesting to compute what the quantum dynamics actually imply about the necessary form of the noise invasiveness in order to reproduce the apparent contradictions above. Let us revisit Eqs.~\eqref{eq:phiphi1} and~\eqref{eq:phiphi2} and expand them properly in terms of the quantum model. Since $\xi_x$ and $\xi_z$ are independent white noises, we have $\langle \xi_\varphi(0) \xi_\varphi(t) \rangle = \delta(t)$, and since the values of prior state components do not influence later white noise we also get $\langle l_\varphi(0)~\xi_\varphi(t)\rangle = 0$. As such, the proper correlation expansion that includes the invasiveness of the noise for $t > 0$ is \begin{align} \langle r_\varphi(0) r_\varphi(t) \rangle = \langle l_\varphi(0) l_\varphi(t) \rangle + \sqrt{\tau} \langle \xi_\varphi(0) l_\varphi(t)\rangle. \end{align} Imposing that the quantum predictions from Eqs.~\eqref{eq:correlators} be satisfied then places the following constraints on the noises. \begin{subequations} \begin{align} \big\langle x(0) x(t) + \sqrt{\tau}\xi_x(0) x(t) \big\rangle &= \exp(-t/2\tau) \label{eq:constraint1} \\ \big\langle z(0) z(t) + \sqrt{\tau}\xi_z(0) z(t) \big\rangle &= \exp(-t/2\tau) \label{eq:constraint2} \\ \big\langle x(0) z(t) + \sqrt{\tau}\xi_x(0) z(t) \big\rangle &= 0 \label{eq:constraint3} \\ \big\langle z(0) x(t) + \sqrt{\tau}\xi_z(0) x(t) \big\rangle &= 0, \label{eq:constraint4} \end{align} \end{subequations} which intertwine the dynamics of the system with the noise output from each measurement device. These equations need to be satisfied by any macrorealistic model of the underlying evolution of the quantum state that models the invasiveness of the noise.
\section{Violating macrorealism with an epistemically restricted classical model} \label{sec:ClassicalModel} We will now construct a classical model for a spin in a fluctuating magnetic field that accounts for the effect of the noise, and that perfectly emulates both the dynamics of the qubit and the readout signals output in an experiment. The form of this model is sufficient only for measurements of symmetric strength (equal $\tau$), but it provides interesting insight into the structure of the preceding Leggett-Garg violations. To derive a classical model, we write the equations of motion for the angle $\theta(t)$ in the $x$-$z$ plane \cite{KorotkovXYZ,Areeya2013,Areeya2015}, defined for a pure state by $x(t) = \cos(\theta(t))$ and $z(t) = \sin(\theta(t))$. From the Stratonovich Eqs.~\eqref{eq:dynamicsStrato} this angle has the equivalent dynamics \begin{align} \dot{\theta} & = x \dot{z} - z \dot{x} = \frac{ x r_z}{{\tau}} - \frac{z r_x}{{\tau}} \equiv \frac{\widetilde{r}}{\tau}, \end{align} where we have redefined the noise as \begin{equation} \label{eq:rtilde} \widetilde{r} \equiv x r_z- z r_x. \end{equation} Surprisingly, this new noise $\widetilde{r}(t)$ behaves precisely as state-independent white noise, \begin{align} \label{eq:rtildewhite} &\big\langle \widetilde{r}(0) \widetilde{r}(t) \big\rangle= \tau\, \delta(t), \end{align} which can be shown by noticing $\widetilde{r} = \sqrt{\tau} \big( x \xi_z - z\xi_x\big) $ from the expressions in Eq.~\eqref{eq:readout} for $r_x$ and $r_z$, along with the pure state condition $x^2~+~z^2~=~1$, and that $\xi_x$ and $\xi_z$ are independent white noises. This identity completely eliminates the nonlinear state dependence in the evolution, so that the angular velocity $\dot{\theta}$ instantaneously responds to an arbitrary white noise drive.
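The whiteness of $\widetilde{r}$ is easy to verify numerically: simulate the plane angle driven by the combined noise and check that $\widetilde{r} = x r_z - z r_x$ has a flat variance and no memory. The discretization choices in this sketch are our own:

```python
import numpy as np

rng = np.random.default_rng(4)
tau, dt, n = 1.0, 0.01, 100000

# Discrete white noises xi with variance 1/dt per step
xi_x = rng.standard_normal(n) / np.sqrt(dt)
xi_z = rng.standard_normal(n) / np.sqrt(dt)

# Pure-state trajectory confined to the x-z plane: theta_dot = rtilde / tau
theta = np.empty(n)
theta[0] = 0.3
for k in range(n - 1):
    rt = np.sqrt(tau) * (np.cos(theta[k]) * xi_z[k] - np.sin(theta[k]) * xi_x[k])
    theta[k + 1] = theta[k] + dt * rt / tau

xs, zs = np.cos(theta), np.sin(theta)
r_x = xs + np.sqrt(tau) * xi_x
r_z = zs + np.sqrt(tau) * xi_z
rtilde = xs * r_z - zs * r_x       # = sqrt(tau) (x xi_z - z xi_x)

var = np.var(rtilde) * dt / tau                        # ~1: strength tau
lag1 = np.mean(rtilde[:-1] * rtilde[1:]) * dt / tau    # ~0: no memory
print(f"normalized variance {var:.3f}, lag-1 correlation {lag1:.4f}")
```

The means of $r_x$ and $r_z$ cancel exactly in the combination, and $x^2 + z^2 = 1$ removes the residual state dependence of the variance, so $\widetilde{r}$ is statistically indistinguishable from an external white noise drive.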
We can think of a classical magnetic moment $\vec{\mu}$ for a spin in the $x$-$z$ plane with evolution $\dot{\vec{\mu}} \propto \vec{B}(t)\times\vec{\mu}$ due to an environmental magnetic field $\vec{B}(t) = B(t)\hat{y} \propto \widetilde{r}(t) \hat{y}$, with fluctuating magnitude and fixed direction along the $y$-axis. This evolution produces random spin rotations in a similar manner to the random velocity kicks received during the Brownian motion of a particle. Note, however, that for Eq.~\eqref{eq:rtildewhite} to produce truly white noise it is crucial that the timescale $\tau$ is the same for both measurements. Now that the spin dynamics have been physically fixed in an observer-independent way by environmental white noise, suppose an agent can measure both the spin angle $\theta(t)$ and the environmental noise $\widetilde{r}(t)$ without disturbing them (i.e., assume true macrorealism). This agent can now construct \emph{effective} readouts $\widetilde{r}_x$ and $\widetilde{r}_z$ from the measured \emph{physical white noise} $\widetilde{r}$ and a second auxiliary \emph{subjective white noise} $\widetilde{s}$ (also satisfying $\langle\widetilde{s}(0)\widetilde{s}(t)\rangle = \tau\,\delta(t)$) that is known only to the agent. The construction of the effective readouts has the structure of a rotation that inverts the transformation of Eq.~\eqref{eq:rtilde} by mixing the physical and subjective noises \begin{align}\label{eq:effectivereadout} \widetilde{r}_x = x + (-z \widetilde{r} + x\widetilde{s}) , \qquad \widetilde{r}_z = z + (x \widetilde{r} + z\widetilde{s}). \end{align} It is then easy to check that these effective readouts satisfy the expected correlation functions, and that averaging the effective readouts will approximately track the state components $x$ and $z$ with additive white noise precisely as illustrated by Fig.~\ref{fig:trackedXZ}.
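The agent's construction can be simulated directly. The sketch below evolves an ensemble of classical spins driven by the physical noise $\widetilde{r}$, builds the effective readouts with an independent subjective noise $\widetilde{s}$, and compares the resulting correlators with the quantum predictions of Eq.~\eqref{eq:correlators}; the ensemble size, step count, and lag are our own choices:

```python
import numpy as np

rng = np.random.default_rng(5)
tau, dt = 1.0, 0.05
n_traj, n_steps, lag = 50000, 40, 10

theta = rng.uniform(0, 2 * np.pi, n_traj)    # ensemble of classical spins
r_eff_x = np.empty((n_traj, n_steps))
r_eff_z = np.empty((n_traj, n_steps))
for k in range(n_steps):
    x, z = np.cos(theta), np.sin(theta)
    rt = np.sqrt(tau / dt) * rng.standard_normal(n_traj)  # physical noise
    st = np.sqrt(tau / dt) * rng.standard_normal(n_traj)  # subjective noise
    r_eff_x[:, k] = x + (-z * rt + x * st)   # effective readouts
    r_eff_z[:, k] = z + (x * rt + z * st)
    theta = theta + dt * rt / tau            # B(t) ~ rt rotates the spin

# Ensemble correlators of the effective readouts at delay lag*dt
auto = np.mean(r_eff_x[:, 0] * r_eff_x[:, lag])
cross = np.mean(r_eff_x[:, 0] * r_eff_z[:, lag])
print(f"auto {auto:.3f} (theory {np.exp(-lag * dt / (2 * tau)):.3f}), cross {cross:.3f}")
```

Up to statistical error, the autocorrelator follows $e^{-t/2\tau}$ and the cross-correlator vanishes, matching the quantum correlators: the invasive contribution of the physical noise on the later state supplies exactly the missing part of the autocorrelation.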
Notice that this is true in spite of the fact that these effective white noises have been constructed from both the physical white noise $\widetilde{r}$ and an unrelated subjective noise $\widetilde{s}$ introduced by the agent. Suppose now that, as depicted in Fig.~\ref{fig:EpistemicModel}, the agent sends these effective readouts to a third party, and informs the third party that they are true measurement records for a continuous qubit measurement. This third party, hampered by the lack of knowledge about the signal preparation, will be unable to find any discrepancy with this claim. As far as the third party will be able to tell, the two readouts $\widetilde{r}_x$ and $\widetilde{r}_z$ will appear to have been generated by the continuous measurement of a qubit. Indeed, the evolution Eqs.~\eqref{eq:dynamicsStrato} can be used by the third party to integrate these readouts and perfectly emulate what will seem like genuine qubit evolution; only the agent will know that these reconstructed ``qubit trajectories'' are actually equivalent to an observer-independent physical spin evolution. \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{classicalemulation2.jpg} \caption{Illustration of a classical system that emulates the dynamics of the qubit subjected to the joint continuous measurement of $\sigma_x$ and $\sigma_z$. A magnetic field $B(t) \propto \widetilde{r}(t)$, with $\widetilde{r}(t)$ a stochastic white noise, drives the magnetic moment $\vec{\mu}$ of the classical spin. An agent can then combine the driving \emph{physical white noise} $\widetilde{r}$, along with the state of the spin, with an independent \emph{subjective white noise} $\widetilde{s}$ to produce the effective readouts $\widetilde{r}_x = x + (-z \widetilde{r} + x\widetilde{s})$ and $\widetilde{r}_z = z + (x \widetilde{r} + z\widetilde{s})$, which can later be given to a third party.
These effective readouts, as well as the dynamics of the classical spin, perfectly emulate the readout and dynamics expected for a monitored qubit as in Fig.~\ref{fig:trackedXZ}.} \label{fig:EpistemicModel} \end{figure} By construction, the dynamics of the classical spin with these \emph{epistemically restricted effective readouts} will be indistinguishable from those of a qubit undergoing joint continuous measurements of $x$ and $z$. Although the agent has perfect knowledge of the classical spin dynamics, the physical noise, and the irrelevant subjective noise, the third party only receives restricted knowledge that hides the structure of the noise, and so would draw the same macrorealistic conclusions about the dynamics that were derived in the previous section. This equivalence is consistent with other observations in the literature that quantum models can share many features with epistemically restricted classical models \cite{Spekkens2007,Harrigan2010,Bartlett2012}. We emphasize that when the measurement is asymmetric ($\tau_x \neq \tau_z$), this simple spin model becomes invalid and more complicated classical dynamics will be needed to explain the basins of attraction that appear around the dominant measurement poles (e.g., an additional electric field). Nevertheless, the simplicity of the present model suggests a way to understand how the measured output in Fig.~\ref{fig:trackedXZ} could be consistent with realistic behavior. \section{Conclusion} \label{sec:Conclusion} By considering simultaneous monitoring of both the $x$ and $z$ Bloch coordinates of a qubit, we have shown that the measured readouts contain structure that challenges the usual application of the notion of Leggett-Garg macrorealism to continuous quantum measurement.
Assuming noninvasive measurability---by treating the observed unbiased noise as only apparent and not driving the physical dynamics---the collected readouts manifestly violate macrorealistic inequalities for arbitrarily short correlation times. Interpreted as a spin, such correlations would imply the striking conclusion that the spin points in all directions simultaneously with magnitude one at all times, while also having a magnitude of zero. To be logically consistent according to macrorealism, one has to admit the possibility that either i) the measurement process is invasive, with the observed noise having a physical effect on the system, or ii) the physical quantities being measured do not have definite values at all times. Since the qualitative qubit dynamics may be recovered from model-independent averaging of the collected readouts directly, rejecting the latter assumption seems unwarranted in this case. Instead, intrinsic measurement invasiveness seems much more likely. The apparent invasiveness of the measurement process leaves an imprint, in the form of correlations created between the intrinsic noises from the measurement devices and the physical values being measured. Any postulated underlying dynamics for the system are thus constrained by the structure of the correlation functions predicted by quantum mechanics from the collapse postulate. Consistency with quantum predictions is not sufficient to guarantee ``quantumness'' of the mechanism for invasiveness, however. To emphasize this point, we constructed an equivalent classical model for a spin undergoing the same dynamics as the qubit, which is valid for the special case of equal measurement strengths for $x$ and $z$. The stochastic evolution is driven by a fluctuating environmental magnetic field, and produces experimental output that perfectly emulates the records one would obtain from continuously monitoring both $x$ and $z$ coordinates of a qubit. 
Hence, the output of this classical emulation also violates Leggett-Garg inequalities, and thus seems to violate macrorealism, even though the state of the classical system is well defined and in principle knowable at all times. Importantly, the actual effect of the measurement is not invasive at the level of the observer, since the dynamics of the classical spin and the physical environmental noise are independent of the generation and collection of the observed records. To reproduce the qubit measurement records using the classical model, an agent (possibly the measuring device itself) must transform the physical noise driving the evolution to include additional \emph{subjective} noise that has no relevance to the evolution. This extra noise thus constitutes an epistemic restriction on what the observer is allowed to learn about the physical state of the system. That is, the experimental readouts give disguised, as opposed to full, information about the ontic state of the classical system and its physical noise. \acknowledgments We thank Alexander Korotkov, Juan Atalaya, Leigh Martin, Shay Hacohen-Gourgy, Irfan Siddiqi and Andrew Jordan for helpful discussions. This work was supported by US Army Research Office Grant No. W911NF-15-1-0496. We also acknowledge partial support by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Economic Development \& Innovation.
\section{Introduction} In polynomial optimisation it is a central problem to certify that a real polynomial in $n$ variables is non-negative as a function on $\R^n$. One prominent approach is to write it as a sum of squares of polynomials, which is studied in real algebraic geometry and used in semi-definite programming methods, cf.~\cite{BleParThoMR3075433}. This idea goes back to work by Hilbert \cite{HilMR1510517}, where he proved that every non-negative polynomial is a sum of squares of polynomials in the cases of $1$ variable, of degree $2$, and of bivariate polynomials of degree $4$. In every other case, there are non-negative polynomials that cannot be written as a sum of squares of polynomials. Hilbert's proof relies on methods from algebraic geometry, in particular the Cayley-Bacharach Theorem. In modern words, Hilbert constructed a supporting hyperplane to the sums of squares cone in a strictly positive polynomial, i.e.~a linear functional that is non-negative on all sums of squares but takes a negative value in a strictly positive polynomial, see \cite{Reznick}. We revisit this construction using modern developments in classical algebraic geometry with the aim of controlling the rank of the associated Hankel matrix. The most important ingredients are the Cayley-Bacharach Theorem and the Buchsbaum-Eisenbud Structure Theorem for height $3$ Gorenstein algebras. Here is our setup: A linear functional $\ell$ on the real vector space of ternary forms of degree $2d$ is non-negative on every square if and only if the bilinear form \[ B_\ell\colon \left \{ \begin{array}[]{rcl} \R[x,y,z]_d\times \R[x,y,z]_d & \to & \R \\ (f,g) & \mapsto & \ell(f\cdot g) \end{array} \right. \] is positive semi-definite. The representing matrix of this bilinear form with respect to the monomial basis is the Hankel matrix associated with $\ell$.
Therefore, the convex cone dual to the cone $\Sigma_{2d}$ of sums of squares of polynomials is the Hankel spectrahedron \[ \Sigma_{2d}^\vee = \{\ell\in\R[x,y,z]_{2d}^\ast \colon (\ell(x^{\alpha+\beta}))_{\alpha,\beta} \text{ is positive semi-definite}\}. \] Every real point evaluation $\ev_x\colon\R[x,y,z]_{2d}\to \R$, $p\mapsto p(x)$, at $x\in\R^3$ is an extreme ray of $\Sigma_{2d}^\vee$. In fact, by the Veronese embedding of $\PP^2$ of degree $2d$, they are exactly the positive semi-definite rank $1$ Hankel matrices. We are interested in extreme rays of higher rank. These correspond to supporting hyperplanes of $\Sigma_{2d}$ which expose a face whose relative interior consists of strictly positive polynomials. Conversely, for every non-negative polynomial $p$ that is not a sum of squares, there exists an extreme ray $\R_+\ell$ of $\Sigma_{2d}^\vee$ such that $\ell(p)<0$. Our construction of extreme rays of higher rank is a generalisation of a construction of the first author, which extends Hilbert's methods and characterises the extreme rays of $\Sigma_6^\vee$ of rank greater than $1$ by special point configurations in the plane, namely the transversal intersection of two plane cubics in $9$ points, see \cite{BleNP}. In our main result, we construct an extreme ray $\R_+ \ell$ of $\Sigma_{2d}^\vee$ for $2d\geq 8$ such that the associated bilinear form $B_\ell$ has maximal rank, which is $\binom{d+2}{2}-4$ by Theorem \ref{Thm:MaxRank}. Using the extensive work on height $3$ Gorenstein algebras, we conclude that there are many extreme rays of maximal rank; precisely, the Zariski closure of the set of extreme rays is an irreducible variety of codimension $10$. The intricate rank stratification of the semi-algebraic set of extreme rays characterises the algebraic boundary of the sums of squares cone via projective duality theory. 
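The rank $1$ case just described is easy to see in coordinates. The following sketch (our own, with an arbitrarily chosen point and the obvious monomial ordering) builds the Hankel matrix of a point evaluation $\ev_P$ for $d=2$ and confirms that it is the positive semi-definite rank $1$ outer product of the Veronese vector of $P$, so that $\ell(f^2)=f(P)^2\geq 0$ for every form $f$ of degree $d$.

```python
import itertools
import numpy as np

d = 2
P = np.array([2.0, -1.0, 3.0])  # an arbitrary real point (x, y, z)

# Exponent vectors of the ternary monomials of degree d.
mons = [np.array(a) for a in itertools.product(range(d + 1), repeat=3)
        if sum(a) == d]

# Hankel matrix of ev_P: the entry indexed by (a, b) is P^(a + b).
H = np.array([[np.prod(P ** (a + b)) for b in mons] for a in mons])

# Veronese vector of P: v_a = P^a, so that H = v v^T.
v = np.array([np.prod(P ** a) for a in mons])
assert np.allclose(H, np.outer(v, v))
assert np.linalg.matrix_rank(H) == 1
assert np.all(np.linalg.eigvalsh(H) >= -1e-9)  # positive semi-definite

# ev_P(f^2) = f(P)^2 for a form f with coefficient vector c:
c = np.arange(1.0, len(mons) + 1)
assert np.isclose(c @ H @ c, (c @ v) ** 2)
```

Extreme rays of higher rank, by contrast, are not visible from single points; their construction occupies the rest of the paper.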
We completely work out the first three nontrivial cases $d=3,4,5$ in Section \ref{sec:Dextics}, extending the study of the algebraic boundary of the sums of squares cones for ternary sextics and quaternary quartics in \cite{BleHauOttRanStuMR2999301}. The question of which ranks can occur for extreme rays is closely related to Hilbert functions of Gorenstein ideals with socle in even degree $2d$ that contain a regular sequence in degree $d$. Our presentation is organised as follows: In Section \ref{sec:Gorenstein}, we introduce a main tool, namely $0$-dimensional Gorenstein ideals, in a hands-on way and show that the variety of all Hankel matrices of rank at most $\binom{d+2}{2}-4$ is irreducible and has the expected codimension $10$ in the $\binom{2d+2}{2}$-dimensional space $\R[x,y,z]_{2d}^\ast$. This result is based on the Buchsbaum-Eisenbud Structure Theorem and its refined analysis by Diesel in \cite{Die}. It allows us to later identify this variety as the Zariski closure of the extreme rays of $\Sigma_{2d}^\vee$. In Section \ref{sec:ExtRays}, we conclude from the results in \cite{BlePositiveGorensteinIdeals} that $\binom{d+2}{2}-4$ is indeed the maximal rank of an extreme ray for $d\geq 4$ and construct such an extreme ray using the Cayley-Bacharach Theorem for plane curves. Section \ref{sec:Dextics} is dedicated to extreme rays of lower rank that we work out explicitly in the first intriguing case $d=5$. We also discuss the mentioned applications of the rank-stratification of the locus of extreme rays to the study of the algebraic boundary of $\Sigma_{2d}$. \section{Interlude on Gorenstein Ideals}\label{sec:Gorenstein} Let us fix the following notation: We denote by $k[\ul{x}]=k[x,y,z]$ the polynomial ring in $3$ variables over a field $k$. We consider it with the standard total degree grading and denote by $k[\ul{x}]_m$ the $k$-vector space of homogeneous polynomials of degree $m$, which has dimension $\binom{m+2}{2}$.
Let $\ell\in \C[\ul{x}]_{m}^\ast$ be a linear functional on ternary forms of degree $m$. To $\ell$ and every pair of positive integers $u,v\in\N$ with $u+v=m$, we associate the bilinear form \[ B_{\ell,u,v}\colon \left\{ \begin{array}[h]{rcl} \C[\ul{x}]_u\times\C[\ul{x}]_v & \to & \C \\ (p,q) & \mapsto & \ell(pq). \end{array}\right. \] The representing matrices of these bilinear forms with respect to the monomial bases are called the \emph{Catalecticant matrices} of $\ell$. \begin{Def}\label{Def:Gorenstein} Let $\ell\in\C[\ul{x}]_{m}^\ast$ be a linear functional. We call the homogeneous ideal $I(\ell)$ of $\C[\ul{x}]$ generated by \[ \{p\in \C[\ul{x}]_k\colon k>m \text{ or } \ell(pq)=0 \text{ for all } q\in\C[\ul{x}]_{m-k}\} \] the \emph{Gorenstein ideal with socle} $\ell$. We call $m$ the \emph{socle degree} of the ideal. \end{Def} These ideals were studied extensively in the literature, cf.~Iarrobino-Kanev \cite{IarKanMR1735271}. Our definition is probably the most direct for $0$-dimensional Gorenstein ideals, cf.~\cite[Theorem 21.6 and Exercise 21.7]{EisenbudMR1322960}. \begin{Rem}\label{Rem:SymHilbFunc} The degree $u$ part of the ideal is the left-kernel of the bilinear form $B_{\ell,u,v}$ for $u\leq m$. In particular, the Hilbert function of a Gorenstein ideal $I$ with even socle degree $2d$ is symmetric around $d$, i.e.~$\hilb(I,i)=\hilb(I,2d-i)$ for all $0\leq i \leq 2d$. \end{Rem} We can consider the set of all Gorenstein ideals with a fixed socle degree $m$ as a projective space by identifying an ideal with its socle, which is uniquely determined by the ideal up to scaling. In this projective space, we consider the set $\gor(T)$ of all Gorenstein ideals with a given Hilbert function $T$. \begin{Prop} The set $\gor(T)$ of all Gorenstein ideals with socle degree $m$ and Hilbert function $T$ is a quasiprojective subvariety of the projective space of all Gorenstein ideals with socle degree $m$. 
\end{Prop} \begin{proof} The condition of having a given Hilbert function can be expressed as rank conditions on the Catalecticant matrices, namely \[ \rk(B_{\ell,u,v}) = T(u). \] \end{proof} \begin{Rem}\label{Rem:gorTQ} (a) The quasiprojective variety $\gor(T)$ is defined over $\Q$, because the minors of the Catalecticant matrices are polynomials with coefficients in $\Z$.\\ (b) Note that a $k$-rational point $\ell\in\gor(T)$ for a subfield $k\subset \C$ is a linear functional $\ell = \ell\otimes 1 \in k[\ul{x}]_m\otimes \C$. \end{Rem} \begin{Def} We call a Hilbert function $T$ \emph{permissible} if there is a Gorenstein ideal $I\subset\C[\ul{x}]$ with Hilbert function $T$. \end{Def} Using the Buchsbaum-Eisenbud Structure Theorem for height $3$ Gorenstein ideals (cf.~Buchsbaum-Eisenbud \cite{BE}), Diesel proved the following. \begin{Thm}[cf.~Diesel {\cite[Theorem 1.1 and 2.7]{Die}}]\label{Thm:DieGorT} For every permissible Hilbert function $T$, the variety $\gor(T)$ is an irreducible unirational variety. \end{Thm} We will use the fact that $\gor(T)$ is unirational to determine the dimension of $\gor(T)$ for special Hilbert functions $T$. In order to do this, we need the more precise information on the unirationality of $\gor(T)$ given by Diesel. The information we need is spread across the paper of Diesel \cite{Die}. We will give a short summary with references. \begin{Rem}\label{Rem:DieselCom} Diesel proves that for a given permissible Hilbert function $T$ there is a minimal set (with respect to inclusion) $D_{min}=(Q,P)$ of degrees of generators $Q = \{q_1,\ldots,q_u\}$ and relations $P = \{p_1,\ldots,p_u\}$ for a Gorenstein ideal with Hilbert function $T$. We assume $q_1\leq q_2\leq\ldots\leq q_u$ and $p_1\geq p_2\geq\ldots\geq p_u$. The set $\gor_{D_{min}}$ of all Gorenstein ideals with generators of degree as specified by $Q$ is a dense subset of $\gor(T)$, see the proof of \cite[Theorem 2.7 and Theorem 3.8]{Die}.
Given $D_{min}$, we consider the affine space $\A^{h(E_M)}$ of skew-symmetric matrices with entries in $\C[\ul{x}]$ where the $(i,j)$-th entry is homogeneous of degree $p_j-q_i$ ($i\neq j$) and the rational map $\pi\colon \A^{h(E_M)} \ratto \gor_{D_{min}}$ that takes a matrix to the Gorenstein ideal generated by its Pfaffians. This statement uses the Buchsbaum-Eisenbud Structure Theorem, cf.~\cite{Die}, p.~367 and p.~369. Given a Hilbert function $T$, the set $D_{min}$ of degrees of generators and relations for $T$ is determined in a combinatorial way: Given the socle degree $m$ and the minimal degree $k$ of a generator of the ideal, there is a one-to-one correspondence between permissible Hilbert functions of order $k$ and self-complementary partitions of $2k$ by $m-2k+2$ blocks, cf.~\cite[Proposition 3.9]{Die}. These partitions give the maximum number of generators, which is $2k+1$, cf.~\cite[Theorem 3.3]{Die}. To refine these sequences to $D_{min}$, we iteratively delete pairs $(q_i,q_j)$ from $Q$ and $(p_i,p_j)$ from $P$ whenever they satisfy $r_i+r_j=p_i+p_j-q_i-q_j=0$, cf.~\cite{Die}, p.~380. \end{Rem} We are particularly interested in Gorenstein ideals with socle in even degree $2d$ with the property that the middle Catalecticant has corank $4$, i.e.~rank $\binom{d+2}{2}-4$. The proof of the following statement is analogous to the proof of Diesel \cite[Theorem 4.4]{Die}. \begin{Lem}\label{Lem:Corank4dim} Let $d\geq 4$ be an integer. The projective variety $X_{-4}$ of middle Catalecticant matrices of corank at least $4$, i.e.~of rank at most $\binom{d+2}{2}-4$, is irreducible of codimension $10$ in the space of middle Catalecticant matrices. In particular, it is defined by the symmetric $(\binom{d+2}{2}-3)$-minors of the generic middle Catalecticant matrix. \end{Lem} \begin{proof} Let $N = \binom{d+2}{2}$. 
The quasiprojective variety $S_{-4}$ of symmetric $N\times N$-matrices of rank $N-4$ has codimension $10$ in the projective space of the vector space of symmetric $N\times N$-matrices. Therefore the intersection $X_{-4}$ of $S_{-4}$ with the subspace of middle Catalecticant matrices has codimension at most $10$ in this linear space. We will show that it has codimension exactly $10$ by counting dimensions of the possible $\gor(T)$ using their unirationality. There are only two possible Hilbert functions for a Gorenstein ideal $I$ with socle degree $2d$ and $\hilb(I,d)=\binom{d+2}{2}-4$ by the symmetry from Remark \ref{Rem:SymHilbFunc}, namely \[ T_1=(1,3,6,\ldots,\binom{d+1}{2},\binom{d+2}{2}-4,\ldots), \] which corresponds to the case of four generators in degree $d$ and no generators of lower degree, and \[ T_2 = (1,3,6,\ldots,\binom{d+1}{2}-1,\binom{d+2}{2}-4,\ldots), \] which corresponds to the case of one generator of degree $d-1$ and one generator of degree $d$. More precisely, these two Hilbert functions correspond to the self-complementary partitions of $2\times 2d$ resp.~$4\times(2d-2)$ blocks shown in Figure \ref{fig:GorensteinPartitions} by the correspondence explained in Diesel \cite[section 3.4, in particular Proposition 3.9]{Die}.
\begin{figure}[h] \begin{center} \begin{tikzpicture} \filldraw[color = black!20!white] (-6,0) rectangle (-5,7); \filldraw[color = black!20!white] (-5,0) rectangle (-4,3); \draw[black] (-6,0) -- (-6,10) -- (-4,10) -- (-4,0) -- (-6,0); \draw (-5,0) -- (-5,10); \draw (-6,1) -- (-4,1); \draw (-6,2) -- (-4,2); \draw (-6,3) -- (-4,3); \draw (-6,7) -- (-4,7); \draw (-6,8) -- (-4,8); \draw (-6,9) -- (-4,9); \draw[very thick] (-6,10) -- (-6,7) -- (-5,7) -- (-5,3) -- (-4,3) -- (-4,0); \draw[decorate,decoration={brace,amplitude=8pt}] (-4,7) -- (-4,3) node[midway,xshift=0.9cm] {2d-6}; \filldraw[color = black!20!white] (2,0) rectangle (3,10); \filldraw[color = black!20!white] (3,0) rectangle (4,9); \filldraw[color = black!20!white] (4,0) rectangle (5,1); \draw[black] (2,0) -- (2,10) -- (6,10) -- (6,0) -- (2,0); \draw (3,0) -- (3,10); \draw (4,0) -- (4,10); \draw (5,0) -- (5,10); \draw (2,9) -- (6,9); \draw (2,1) -- (6,1); \draw[very thick] (3,10) -- (3,9) -- (4,9) -- (4,1) -- (5,1) -- (5,0); \draw[decorate,decoration={brace,amplitude=8pt}] (6,9) -- (6,1) node[midway,xshift=0.9cm] {2d-4}; \end{tikzpicture} \end{center} \caption{The partition on the right of $2d\times 2$ blocks corresponds to $T_1$, the partition on the left of $(2d-2)\times 4$ blocks to $T_2$.} \label{fig:GorensteinPartitions} \end{figure} We first consider $T_1$. The sequence of degrees of the generators for the minimal set $D_{min}$ is in this case different for $d=4$ and $d\geq 5$, namely $(4,4,4,4,6)$ for $d=4$ and $(d,d,d,d,d+1,\ldots,d+1)$ with $(2d-9)$ many generators of degree $d+1$ for $d\geq 5$, cf.~Remark \ref{Rem:DieselCom}. 
Since $q_i+p_i = 2d+3$, the degree matrices are \[ \left( \begin{array}[l]{ccccc} 0 & 3 & 3 & 3 & 1 \\ & 0 & 3 & 3 & 1 \\ & & 0 & 3 & 1 \\ & & & 0 & 1 \\ & & & & 0 \\ \end{array}\right), \left( \begin{array}[r]{cccccccc} 0 & 3 & 3 & 3 & 2 & \cdots & \cdots & 2 \\ & 0 & 3 & 3 & \vdots & & & \vdots \\ & & 0 & 3 & \vdots & & & \vdots \\ & & & 0 & 2 & \cdots & \cdots & 2 \\ & & & & 0 & 1 & \cdots & 1 \\ & & & & & 0 & \ddots & \vdots \\ & & & & & & \ddots & 1 \\ & & & & & & & 0 \end{array}\right) \] where the right one is of size $(2d-5)\times (2d-5)$. Every entry of the matrix can be generically chosen among the forms of the indicated degree and its Pfaffians will generate a Gorenstein ideal with Hilbert function $T_1$. Therefore, for $d=4$, we have $h(E_M)=6\dim(\C[\ul{x}]_3) + 4\dim(\C[\ul{x}]_1)=72$ and for $d\geq 5$ we have \begin{eqnarray*} h(E_M)& = & 6\dim(\C[\ul{x}]_3)+4(2d-9)\dim(\C[\ul{x}]_2) \\ & & + \binom{2d-9}{2}\dim(\C[\ul{x}]_1)\\ & = & 6d^2-9d-21. \end{eqnarray*} This is an overcount of the dimension of $\gor_{D_{min}}$ because for every choice of generators of a given ideal we get a matrix with these generators as Pfaffians. So for $d=4$, we choose a basis of a $4$-dimensional subspace of forms of degree $4$ and one generator of degree $6$ from a $\dim(\C[\ul{x}]_6) - T_1(2)=22$-dimensional space. Therefore we overcount the dimension of $\gor_{D_{min}}$ by at least $4^2+22=38$ and the dimension of $\gor(T_1)$ is at most $34$. Since $\dim(\P(\C[\ul{x}]_8^\ast)) = 44$, its codimension is at least $10$. For $d\geq 5$, we choose a basis of a $4$-dimensional subspace of forms of degree $d$ and $2d-9$ linearly independent generators from a space of dimension $\dim(\C[\ul{x}]_{d+1})-T_1(d-1)=2d+3$. The overcount in this case is at least $4^2+(2d-9)(2d+3)$ and the dimension of $\gor_{D_{min}}$ is at most $2d^2+3d-10$.
The projective dimension of the space of middle Catalecticant matrices is $\dim(\C[\ul{x}]_{2d}^\ast)-1=2d^2+3d$, which again implies that the codimension of $\gor(T_1)$ is at least $10$. From the fact that it can be at most $10$, it follows that it is exactly $10$. We now repeat the count for the Hilbert function $T_2$. In this case, $D_{min} = \{Q_{min},P_{min}\} = \{(d-1,d,d+1,d+1,\ldots,d+1),(d+4,d+3,d+2,d+2,\ldots,d+2)\}$ with $(2d-5)$ times the entry $d+1$ in $Q_{min}$ and $d+2$ in $P_{min}$, cf.~Figure \ref{fig:GorensteinPartitions}. Therefore, the degree matrix is \[ \left( \begin{array}[h]{cccccc} 0 & 4 & 3 & \cdots & \cdots & 3 \\ & 0 & 2 & \cdots & \cdots & 2 \\ & & 0 & 1 & \cdots & 1 \\ & & & 0 & \ddots & \vdots \\ & & & & & 1 \\ & & & & & 0 \end{array}\right) \] which is of size $(2d-3)\times (2d-3)$. We compute \begin{eqnarray*} h(E_M) & = & \dim(\C[\ul{x}]_4)+(2d-5)\dim(\C[\ul{x}]_3)+(2d-5)\dim(\C[\ul{x}]_2) \\ & & +\binom{2d-5}{2}\dim(\C[\ul{x}]_1) \\ & = & 6d^2-d-20. \end{eqnarray*} Here we choose one generator of degree $d-1$, one generator of degree $d$ from a $4$-dimensional space and $(2d-5)$ generators from a $\dim(\C[\ul{x}]_{d+1})-T_2(d-1)=(2d+4)$-dimensional space. Therefore the dimension of $\gor(T_2)$ is at most $6d^2-d-20-4-(2d-5)(2d+4)=2d^2+d-4$. The codimension is at least $2d+4\geq 12$. So $\gor(T_2)$ cannot be an irreducible component of $X_{-4}$ and we conclude that $\gor(T_2)\subset \cl(\gor(T_1))$. In summary, $\gor(T_1)$ is a dense subset of $X_{-4}$ and $X_{-4}$ is irreducible (cf.~Diesel \cite[Theorem 2.7]{Die}) and has the expected codimension $10$ in the space of middle Catalecticant matrices. \end{proof} The tangent space to the quasiprojective variety $\gor(T)$ for a permissible Hilbert function $T$ at a Gorenstein ideal $I$ can be described in terms of the ideal. 
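The closed-form dimension counts in the proof of Lemma \ref{Lem:Corank4dim} are easy to double-check symbolically. A small sketch (our own; the variable names are arbitrary) re-derives $h(E_M)$ and the codimension bounds for both Hilbert functions:

```python
import sympy as sp

d = sp.symbols('d')
dim = lambda m: sp.Rational(1, 2) * (m + 2) * (m + 1)   # dim of C[x,y,z]_m
choose2 = lambda n: sp.Rational(1, 2) * n * (n - 1)     # binomial(n, 2)

# T_1, d >= 5: six entries of degree 3, 4(2d-9) of degree 2, and
# binomial(2d-9, 2) of degree 1 in the skew-symmetric degree matrix.
h1 = 6 * dim(3) + 4 * (2*d - 9) * dim(2) + choose2(2*d - 9) * dim(1)
assert sp.expand(h1 - (6*d**2 - 9*d - 21)) == 0

gor1 = h1 - 4**2 - (2*d - 9) * (2*d + 3)    # subtract the overcount
assert sp.expand(gor1 - (2*d**2 + 3*d - 10)) == 0
ambient = dim(2*d) - 1                      # projective dim of C[x,y,z]_{2d}*
assert sp.expand(ambient - gor1 - 10) == 0  # codim of gor(T_1) is >= 10

# T_2: analogous count with generator degrees (d-1, d, d+1, ..., d+1).
h2 = dim(4) + (2*d - 5) * (dim(3) + dim(2)) + choose2(2*d - 5) * dim(1)
assert sp.expand(h2 - (6*d**2 - d - 20)) == 0
gor2 = h2 - 4 - (2*d - 5) * (2*d + 4)
assert sp.expand(gor2 - (2*d**2 + d - 4)) == 0
assert sp.expand(ambient - gor2 - (2*d + 4)) == 0  # codim 2d+4 >= 12 for d >= 4
```

In particular, the codimension gap $2d+4 > 10$ is exactly what forces $\gor(T_2)$ into the closure of $\gor(T_1)$.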
We identify $\C[\ul{x}]_m$ with its dual space by using the apolar bilinear form, i.e.~we identify a monomial $x^\alpha\in\C[\ul{x}]_m$ with the linear form $p\mapsto \frac{1}{\alpha!} \frac {\partial^{|\alpha|}}{\partial x^\alpha}p$ that takes a polynomial $p=\sum p_\beta x^\beta$ to $p_\alpha$. Using this identification, we can state a characterisation of the tangent space to $\gor(T)$ at an ideal $I$ in terms of this ideal. \begin{Thm}[Iarrobino-Kanev {\cite[Theorem 3.9 and 4.21]{IarKanMR1735271}}] \label{Thm:GorTTangentSpace} Let $T$ be a permissible Hilbert function. The quasiprojective variety $\gor(T)$ is smooth. Let $\ell\in\C[\ul{x}]_m^\ast$ be a linear functional such that the corresponding Gorenstein ideal $I=I(\ell)$ has Hilbert function $T$. Then the tangent space to $\gor(T)$ at $\ell$ is \[ ( (I^2)_m)^\perp\subset \C[\ul{x}]_m. \] \end{Thm} The irreducible variety $X_{-4}$ of middle Catalecticant matrices of corank at least $4$ is defined over $\Q$ and we will later show that it has a smooth rational point, i.e.~a point with rational coordinates. Therefore, the real points of $X_{-4}$ are Zariski-dense in it and the above statement of Theorem \ref{Thm:GorTTangentSpace} also applies to real points of $X_{-4}$, cf.~\cite[Section 2.8]{BochnakMR1659509}. \section{Extreme Rays of Maximal Rank and Positive Gorenstein Ideals}\label{sec:ExtRays} In this section, we recapitulate bounds on the rank of Hankel matrices of extreme rays of $\Sigma_{2d}^\vee$ which are not point evaluations. The lower bound and its tightness are proved in Blekherman \cite[Theorem 2.1]{BlePositiveGorensteinIdeals}. We constructively establish tightness of the upper bound. We show that the Zariski closure of the set of extreme rays is the variety of Hankel matrices of corank at least $4$, which is irreducible; in particular, it is (at least set-theoretically) defined by the symmetric $r\times r$ minors of the generic Hankel matrix, where $r=\binom{d+2}{2}-3$. 
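The engine behind these constructions is the Cayley-Bacharach Theorem, recalled below. Its classical special case for two cubics can be checked directly in coordinates. The following sketch (our own; the two cubics are products of lines chosen for convenience) verifies that the $9$ intersection points of two plane cubics impose only $8$ independent linear conditions on ternary cubics, so every cubic through $8$ of the points automatically vanishes at the ninth.

```python
import itertools
import numpy as np

# The cubics x(x-1)(x-2) and y(y-1)(y-2) (affine chart z = 1)
# intersect transversally in the 9 grid points {0,1,2}^2.
pts = list(itertools.product(range(3), repeat=2))

# The 10 ternary cubic monomials restricted to z = 1: x^i y^j, i + j <= 3.
mons = [(i, j) for i in range(4) for j in range(4 - i)]
row = lambda p: [p[0] ** i * p[1] ** j for (i, j) in mons]

A = np.array([row(p) for p in pts], dtype=float)  # 9 x 10 evaluation matrix
assert np.linalg.matrix_rank(A) == 8  # only 8 independent conditions

# Dropping any one point does not lower the rank: a cubic through the
# remaining 8 points already vanishes at the ninth.
for k in range(len(pts)):
    assert np.linalg.matrix_rank(np.delete(A, k, axis=0)) == 8
```

The $2$-dimensional kernel of the evaluation matrix is spanned by the two cubics themselves, matching the unique linear relation among the nine point evaluations discussed in the example below.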
To a linear functional $\ell \in \R[\ul{x}]_{2d}^\ast$, we associate the bilinear form \[ B_{\ell}\colon \left\{ \begin{array}[h]{rcl} \R[\ul{x}]_d\times\R[\ul{x}]_d & \to & \R \\ (p,q) & \mapsto & \ell(pq), \end{array}\right. \] whose representing matrix with respect to the monomial bases is called the \emph{Hankel matrix} of $\ell$. One of the main results of Blekherman is a characterisation of extreme rays of $\Sigma_{2d}^\vee$ by the associated Gorenstein ideals. \begin{Prop}[Blekherman {\cite[Lemma 2.2]{BleNP}} and {\cite[Proposition 4.2]{BlePositiveGorensteinIdeals}}] \label{Prop:ExtremeRays} (a) A linear functional $\ell\in\R[\ul{x}]_{2d}^\ast$ spans an extreme ray of $\Sigma_{2d}^\vee$ if and only if the bilinear form $B_\ell$ is positive semi-definite and the degree $d$ part $I(\ell)_d$ of the Gorenstein ideal $I(\ell)$ is maximal with respect to inclusion over all Gorenstein ideals with socle degree $2d$.\\ (b) Let $I$ be a Gorenstein ideal with socle degree $2d$. Then $I_d$ is maximal with respect to inclusion over all Gorenstein ideals with socle degree $2d$ if and only if the degree $2d$ part of the ideal generated by $I_d$ is a hyperplane in $\R[\ul{x}]_{2d}$. In this case, it is equal to $I_{2d}$. \end{Prop} Lower bounds on the ranks for extreme rays were established by Blekherman. \begin{Thm}[Blekherman {\cite[Theorem 2.1]{BlePositiveGorensteinIdeals}}]\label{Thm:ExtremeRays} Let $d\geq 3$ and $\ell\in \Sigma_{2d}^\vee$ and suppose $\R_+\ell$ is an extreme ray. Then the rank of $B_\ell$ is either $1$, in which case $\ell$ is a point evaluation, or at least $3d-2$. These bounds are tight and extreme rays of $\Sigma_{2d}^\vee$ of rank $3d-2$ can be explicitly constructed. \end{Thm} From Blekherman's work, we can easily deduce an upper bound. \begin{Thm}\label{Thm:MaxRank} Let $\ell\in\Sigma_{2d}^\vee$, $d\geq 4$ and suppose $\R_+\ell$ is an extreme ray. The rank of $B_\ell$ is at most $\binom{d+2}{2}-4$, i.e.~the corank is at least $4$.
\end{Thm} \begin{proof} Since $\R_+\ell$ is an extreme ray, we know that the degree $2d$ part of the ideal generated by $I(\ell)_d$ is a hyperplane in the space of forms of degree $2d$. The dimension of the space $\R[\ul{x}]_d I(\ell)_d$ is bounded by $\dim(\R[\ul{x}]_d) \dim(I(\ell)_d)=\binom{d+2}{2}\crk(B_\ell)$. In case $\crk(B_\ell)\leq 3$ and $d\geq 5$, this bound is smaller than the dimension $\binom{2d+2}{2}-1$ of a hyperplane in $\R[\ul{x}]_{2d}$, a contradiction. The case $\crk(B_\ell) \leq 3$ and $d=4$ needs a more precise count: Suppose that $\crk(B_\ell)=3$ and the kernel of $B_\ell$ is spanned by $f_1,f_2,f_3$. Then the dimension of the space $\R[\ul{x}]_4 I(\ell)_4$ is bounded by $3\dim(\R[\ul{x}]_4)-3 = 42<45-1 = \dim(\R[\ul{x}]_8) -1$, again a contradiction, because the multiplication map has the $3$ obvious Koszul relations $f_i\otimes f_j-f_j\otimes f_i$ for $i<j$ in its kernel. \end{proof} \begin{Rem} The upper bound in the case $d=3$ is corank $3$, which agrees with the lower bound. \end{Rem} A main tool in this section is the Cayley-Bacharach Theorem. \begin{Thm}[Cayley-Bacharach, cf.~Eisenbud-Green-Harris {\cite[CB5]{EisGreHar}}] Let $X_1,X_2\subset\P^2$ be plane curves defined over $\R$ of degree $d$ and $e$ intersecting in $d\cdot e$ points. Set $s= d+e-3$ and decompose $X_1\cap X_2 = \Gamma_1\cup \Gamma_2$ into two disjoint sets defined over $\R$. Then for all $k\leq s$, the following equality holds \begin{eqnarray*} & \dim(\I(\Gamma_1)_k) - \dim(\I(X_1\cap X_2)_k) = \\ & |\Gamma_2| - \dim \lspan \{ {\rm Re}\ev_x, {\rm Im}\ev_x \in\R[\ul{x}]_{s-k}^\ast\colon x\in\Gamma_2\}. \end{eqnarray*} The left hand side is the dimension of the space of forms of degree $k$ vanishing on $\Gamma_1$ modulo the subspace of forms vanishing in every point of $X_1\cap X_2$. The right hand side is the linear defect of point evaluations on forms of dual degree $s-k$ at points of $\Gamma_2$.
\end{Thm} Probably the most famous instance of this theorem is the following application to the complete intersection of two cubic curves, stated here for a totally real intersection. \begin{Exm} Suppose $X_1,X_2\subset\P^2$ are plane cubic curves intersecting in $9$ points. Then $d=e=3$ and so $s=3$. Pick $\Gamma_2 = \{P\}$ for any intersection point $P$ and put $\Gamma_1 = (X_1\cap X_2) \setminus \{P\}$. Let us consider $k=3$ and compute the right hand side of the Cayley-Bacharach equality: Since $\dim \lspan\{\ev_x\in\R[\ul{x}]_0^\ast\colon x\in\Gamma_2\} = 1$, we conclude \[ \dim(\I(\Gamma_1)_3) - \dim(\I(X_1\cap X_2)_3) = 0, \] which means that every cubic form that vanishes in the $8$ points of $\Gamma_1$ also vanishes at the ninth point $P$ of the intersection. In other words, the point evaluation $\ev_P\in\R[\ul{x}]_3^\ast$ lies in the subspace $U_{\Gamma_1}$ spanned by the eight point evaluations $\{\ev_x\in\R[\ul{x}]_3^\ast\colon x\in\Gamma_1\}$. The annihilator of $U_{\Gamma_1}$ is the $2$-dimensional subspace of $\R[\ul{x}]_3$ spanned by the defining equations of $X_1$ and $X_2$. Since this is true for any point $P\in X_1\cap X_2$, we conclude that there is a unique linear relation among the point evaluations $\{\ev_x\in\R[\ul{x}]_3^\ast\colon x\in X_1\cap X_2\}$ and all coefficients of this relation are non-zero. \end{Exm} Using the Cayley-Bacharach Theorem, we will first show that there are extreme rays of corank $4$ under the following constraint on the degree. \begin{Const}\label{Const:ellipse} Let $d\geq 4$. There is a unique conic $C$ going through the following six points in the plane: $(0,0),(1,0),(0,1),(d-1,d-1),(d-2,d-1),(d-1,d-2)$; it is given by $C = \V(x^2+y^2-\frac{2(d-2)}{d-1}xy-x-y)$. From now on, we assume that this conic does not go through any other integer point. The only exceptional cases in the interval $\{4,5,\ldots,100\}$ are: $9,19,21,29,33,34,36,40,49,51,57,61,73,78,79,81,89,99$.
\end{Const} \begin{Prop}\label{Prop:CB} Set $L_1=\prod_{j=0}^{d-1}(x-jz)$ and $L_2=\prod_{j=0}^{d-1}(y-jz)$ and let $\Gamma=\V(L_1,L_2)=\{(j:k:1)\colon j,k=0,\ldots,d-1\}$ be the intersection of their zero sets in $\P^2$. Split these points into \[ \Gamma_2 = \{(x:y:1)\colon x+y=2\}\cup\bigcup_{j=1}^{d-4}\{(x:y:1)\colon x+y=d+j\} \] and $\Gamma_1=\Gamma\setminus\Gamma_2$. Then there is a unique linear relation $\sum_{v\in\Gamma_1} u_v{\rm ev}_v=0$ among the point evaluations on forms of degree $d$ at points of $\Gamma_1$ and all coefficients $u_v\in\R$ in this relation are non-zero. The set of all forms of degree $d$ vanishing on $\Gamma_1$ is a $3$-dimensional space spanned by $L_1,L_2$ and a form $p$ which is non-zero at every point of $\Gamma_2$. \end{Prop} See Figure \ref{fig:ExtRayC4} for the case $d=5$ and Figure \ref{fig:perturbation} for the case $d=9$. \begin{proof} First observe that there is a unique (up to scaling) form of degree $d-3$ vanishing on $\Gamma_2$, namely $(x+y-2z)\prod_{j=1}^{d-4}(x+y-(d+j)z)$, the product of diagonals defining $\Gamma_2$: Indeed, suppose $f$ is a form of degree $d-3$ vanishing on $\Gamma_2$; then it intersects the line $x+y=d+1$ in $d-2$ integer points. Therefore it vanishes identically on it and we can divide $f$ by this linear polynomial and get a form of degree $d-4$ vanishing on $d-3$ points on the line $x+y=d+2$. Inductively, we conclude that $f$ is (again up to scaling) the claimed product of linear forms. Therefore, by the Cayley-Bacharach Theorem, the space of forms of degree $d$ vanishing on $\Gamma_1$ is $3$-dimensional, so it is spanned by $L_1, L_2$ and a third form $p$. We will explicitly construct this form: Let $p$ be the product of the linear forms $x+y-jz$ for $j=3,\ldots,d$ and of the ellipse $\V(x^2+y^2-\frac{2(d-2)}{(d-1)}xy-x-y)$ passing through the six points $(0,0),(1,0),(0,1),(d-2,d-1),(d-1,d-1)$ and $(d-1,d-2)$.
By construction, $p$ vanishes on $\Gamma_1$, is of degree $d$ and does not vanish on all of $\Gamma$. Therefore $\{L_1,L_2,p\}$ is a basis of the space of forms of degree $d$ vanishing on $\Gamma_1$. By assumption on $d$, the form $p$ does not vanish on any point of $\Gamma$ other than the six mentioned above. Note that $|\Gamma_1| = \binom{d+2}{2}-2$, because $|\Gamma_1| = d^2 - |\Gamma_2| = d^2 - (3+\sum_{j=1}^{d-4}(d-1-j))=d^2-\binom{d-1}{2}$. So the fact that the space of forms of degree $d$ vanishing on $\Gamma_1$ is $3$-dimensional implies that there is a unique linear relation among the point evaluations on forms of degree $d$ at points of $\Gamma_1$. To see that all coefficients $u_v$ in the relation $\sum_{v\in\Gamma_1}u_v{\rm ev}_v=0$ are non-zero, note that the unique form $f$ of degree $d-3$ vanishing on $\Gamma_2$ does not vanish on any point of $\Gamma_1$. Therefore, there is no form of degree $d-3$ vanishing on $\Gamma_2\cup\{v_0\}$ for any $v_0\in\Gamma_1$ and Cayley-Bacharach implies that the point evaluations $\{ {\rm ev}_v\colon v\in \Gamma_1\}\setminus\{ {\rm ev}_{v_0}\}$ are linearly independent. \end{proof} \begin{Lem}\label{Lem:ExtremeRaysC4} There is an extreme ray $\R_+\ell$ of $\Sigma_{2d}^\vee$ such that $B_\ell$ has corank $4$. The Hilbert function of the ideal $I(\ell)$ is $\hilb(I(\ell),j)=\binom{j+2}{2} = \hilb(I(\ell),2d-j)$ for $0\leq j<d$ and $\hilb(I(\ell),d)=\binom{d+2}{2}-4$. \end{Lem} \begin{proof} Let $L_1,L_2,p$ be as in Proposition \ref{Prop:CB} and consider the splitting $\V(L_1,L_2) = \Gamma = \Gamma_1 \cup \Gamma_2$ of the points defined there. Pick a point $P\in\Gamma_1$ and set $\Lambda = \Gamma_1\setminus\{P\}$.
We claim that the linear functional \[ \ell = \sum_{v\in \Lambda} \ev_v - \frac{u_P^2}{\sum_{v\in\Lambda}u_v^2} \ev_P, \] where $u_v$ are the coefficients of the Cayley-Bacharach relation as in Proposition \ref{Prop:CB}, spans an extreme ray of $\Sigma_{2d}^\vee$ and that the corresponding Hankel matrix $B_\ell$ has corank $4$. First note that $B_\ell$ is positive semi-definite because \begin{eqnarray*} \ell(f^2) & = & \sum_{v\in\Lambda} f(v)^2 - \frac{u_P^2}{\sum_{v\in\Lambda} u_v^2} f(P)^2 \\ & = & \sum_{v\in\Lambda} f(v)^2 - \frac{u_P^2}{\sum_{v\in\Lambda} u_v^2} \frac{1}{u_P^2} \left(\sum_{v\in\Lambda} u_vf(v)\right)^2 \\ & = & \|(f(v))_{v\in\Lambda}\|^2 - \left| \left\langle \frac{1}{\|(u_v)_{v\in\Lambda}\|} (u_v)_{v\in\Lambda} , (f(v))_{v\in\Lambda} \right\rangle \right|^2 \\ & \geq & 0 \end{eqnarray*} by the Cauchy-Schwarz inequality for all polynomials $f\in\R[\ul{x}]_d$. More precisely, $\ell(f^2)$ is zero for a form $f$ not identically zero on $\Gamma$ if and only if $f(v)=\alpha u_v$ for all $v\in\Lambda$ and some $\alpha\in\R^\ast$. Therefore, the degeneration space of the Hankel matrix is spanned by $L_1,L_2,p$ and the form uniquely determined (modulo $L_1,L_2,p$) by $f(v)=u_v$ for all $v\in\Lambda$; it has dimension $4$ as desired. Indeed, the form $f$ is uniquely determined because $\{\ev_x\in (\R[\ul{x}]_d/\lspan(L_1,L_2,p))^\ast\colon x\in\Lambda\}$ is a basis. We now prove extremality of $\ell$ in $\Sigma_{2d}^\vee$ by checking the characterisation that $I(\ell)_d$ generates a hyperplane in the vector space of forms of degree $2d$, cf.~Blekherman \cite[Proposition 4.2]{BlePositiveGorensteinIdeals}. As a first step, we show that $\langle L_1,L_2,p \rangle_{2d-3}$ has codimension $|\V(L_1,L_2,p)|=|\Gamma_1|$. So suppose $a_1 L_1+a_2 L_2 + bp=0$, where $a_1,a_2,b\in\R[\ul{x}]_{d-3}$ are forms of degree $d-3$.
By evaluating at points of $\Gamma_2$, we conclude that $b$ is the uniquely determined form of degree $d-3$ vanishing on $\Gamma_2$, cf.~proof of Proposition \ref{Prop:CB}. Since $L_1$ and $L_2$ are coprime, this syzygy is unique up to scaling and we conclude \[ \dim(\langle L_1,L_2,p\rangle_{2d-3}) = 3\binom{d-1}{2} -1 = \frac32(d^2-3d+2)-1, \] which means codimension $|\Gamma_1|=d^2-\binom{d-1}{2}$ in $\R[\ul{x}]_{2d-3}$. In particular, the codimension of $\langle L_1,L_2,p\rangle_{2d}$ in $\R[\ul{x}]_{2d}$ is also $|\Gamma_1|$ because the point evaluations $\{\ev_v\colon v\in\Gamma_1\}$ are linearly independent on forms of degree $2d-3$ and consequently also on forms of degree $2d$. Now suppose $a_1L_1 + a_2L_2+bp + cf=0$ for forms $a_1,a_2,b,c\in\R[\ul{x}]_d$ of degree $d$. Evaluation at points of $\Gamma_1$ implies that $c$ lies in the span of $L_1,L_2,p$. So we have three syzygies and the codimension of $\langle L_1,L_2,p,f\rangle_{2d}$ is $|\Gamma_1| - \binom{d+2}{2}+3=1$, as desired. \end{proof} \begin{Exm}\label{Exm:Hankel} We follow the construction in Proposition \ref{Prop:CB} and Lemma \ref{Lem:ExtremeRaysC4} in the case $d=5$. Then $\Gamma=\V(L_1,L_2)$ consists of the $25$ points $(i:j:1)\in\P^2$ where $i,j=0,\ldots,4$, see Figure \ref{fig:ExtRayC4}. The six points on the two lines $x+y=2$ and $x+y=6$ are the points of $\Gamma_2$. Indeed, the point evaluations at the $19$ points of $\Gamma_1 = \Gamma\setminus \Gamma_2$ on forms of degree $5$ satisfy a unique linear relation, with coefficient matrix $\begin{pmatrix} -1 & 3 & 0 & -5 & 3 \\ 3 & -16 & 18 & 0 & -5 \\ 0 & 18 & -36 & 18 & 0 \\ -5 & 0 & 18 & -16 & 3 \\ 3 & -5 & 0 & 3 & -1 \\ \end{pmatrix}$, where the $(i,j)$-th entry of this matrix is the coefficient of the point evaluation at $(5-i:j-1:1)$ in the linear relation, i.e.~visually, it is the coefficient corresponding to the points in the $5\times 5$-grid seen in Figure \ref{fig:ExtRayC4}.
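This relation, and the corank of the Hankel matrix constructed from it, can be verified with exact rational arithmetic. The following Python sketch is an independent check (not part of the original computation); the dropped point $(4:4:1)$ is an arbitrary choice of $P\in\Gamma_1$.

```python
from fractions import Fraction
from itertools import product

# Degree-5 monomials x^a y^b z^(5-a-b), evaluated at z = 1.
monos = [(a, b) for a in range(6) for b in range(6 - a)]
# Gamma_1: the 5x5 grid minus the diagonals x + y = 2 and x + y = 6.
pts = [p for p in product(range(5), repeat=2) if sum(p) not in (2, 6)]
assert len(monos) == 21 and len(pts) == 19

def ev(p):
    return [Fraction(p[0]**a * p[1]**b) for a, b in monos]

def nullspace(rows):
    """Kernel basis of the matrix with the given rows (exact Gauss-Jordan)."""
    m = [list(r) for r in rows]
    ncols, pivots, r = len(m[0]), [], 0
    for c in range(ncols):
        pr = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pr is None:
            continue
        m[r], m[pr] = m[pr], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                m[i] = [x - m[i][c] * y for x, y in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    basis = []
    for f in (c for c in range(ncols) if c not in pivots):
        v = [Fraction(0)] * ncols
        v[f] = Fraction(1)
        for row, c in zip(m, pivots):
            v[c] = -row[f]
        basis.append(v)
    return basis

# The unique relation sum_v u_v ev_v = 0 spans ker(E^T) for the
# 19 x 21 evaluation matrix E.
E = [ev(p) for p in pts]
rel = nullspace([[E[i][k] for i in range(19)] for k in range(21)])
assert len(rel) == 1
u = rel[0]
assert all(c != 0 for c in u)

# Hankel matrix of ell = sum_{v in Lambda} ev_v - (u_P^2 / sum u_v^2) ev_P
# with the dropped point P = (4:4:1).
P = pts.index((4, 4))
s = sum(u[i]**2 for i in range(19) if i != P)
B = [[Fraction(0)] * 21 for _ in range(21)]
for i in range(19):
    w = -u[P]**2 / s if i == P else Fraction(1)
    for a in range(21):
        for b in range(21):
            B[a][b] += w * E[i][a] * E[i][b]

rank = 21 - len(nullspace(B))
assert rank == 17  # corank 4, as claimed in Lemma above
print(rank)
```

The script recovers the relation up to scaling and confirms that the $21\times 21$ Hankel matrix has rank $17$, i.e.~corank $4$.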
\begin{figure}[h] \begin{center} \begin{tikzpicture} \filldraw (0,0) circle(2pt); \filldraw (0,1) circle(2pt); \filldraw (0,2) circle(2pt); \filldraw (0,3) circle(2pt); \filldraw (0,4) circle(2pt); \filldraw (1,0) circle(2pt); \filldraw (1,1) circle(2pt); \filldraw (1,2) circle(2pt); \filldraw (1,3) circle(2pt); \filldraw (1,4) circle(2pt); \filldraw (2,0) circle(2pt); \filldraw (2,1) circle(2pt); \filldraw (2,2) circle(2pt); \filldraw (2,3) circle(2pt); \filldraw (2,4) circle(2pt); \filldraw (3,0) circle(2pt); \filldraw (3,1) circle(2pt); \filldraw (3,2) circle(2pt); \filldraw (3,3) circle(2pt); \filldraw (3,4) circle(2pt); \filldraw (4,0) circle(2pt); \filldraw (4,1) circle(2pt); \filldraw (4,2) circle(2pt); \filldraw (4,3) circle(2pt); \filldraw (4,4) circle(2pt); \draw (-1,3) -- (3,-1); \draw (1,5) -- (5,1); \end{tikzpicture} \end{center} \caption{The construction of an extreme ray of $\Sigma_{2d}^\vee$ of corank $4$ for $d=5$.} \label{fig:ExtRayC4} \end{figure} The $21\times 21$ Hankel matrix can be exactly computed using a computer algebra system. In Mathematica, the following code will do the job: \begin{verbatim}
d = 5;
m = MonomialList[(x+y+z)^d];
q1 = x (x - z) (x - 2 z) (x - 3 z) (x - 4 z);
q2 = y (y - z) (y - 2 z) (y - 3 z) (y - 4 z);
Pevalall = Solve[{q1 == 0, q2 == 0, z == 1}, {x, y, z}];
Pevalfoo = Select[Pevalall, ({y + x - 2 z} /. #) != {0} &];
Peval = Select[Pevalfoo, ({y + x - 6 z} /. #) != {0} &];
Peval0 = Drop[Peval, -1];
M = Table[m /. pt, {pt, Peval}];
rel = NullSpace[Transpose[M]];
CB = Drop[rel[[1]], -1];
H = Transpose[{m}].{m};
P = Peval0;
Q = Sum[H /. P[[i]], {i, 1, Length[P]}];
Qp = H /. Peval[[-1]];
l = Norm[CB]^2;
Hankel = Q - 1/l (rel[[1]][[-1]])^2 Qp;
\end{verbatim} We set up the monomial basis \texttt{m} and the totally real complete intersection of $25$ points, where $\texttt{q1}=L_1$ and $\texttt{q2} = L_2$. The two lines using the \texttt{Select}-command remove the points on the two diagonals $x+y=2$ and $x+y=6$, so $\Gamma_1 = \texttt{Peval}$.
With the \texttt{Drop}-command, we remove one of the points from the list. The matrix \texttt{H} is the general Hankel matrix, \texttt{Q} is the Hankel matrix of the linear functional $\sum_{v\in\Gamma_1\setminus \texttt{Peval[[-1]]}} \ev_v$ and \texttt{Qp} is the Hankel matrix of the point evaluation at \texttt{Peval[[-1]]}. The vector \texttt{rel[[1]]} contains the coefficients of the unique linear relation among the point evaluations at the points of $\Gamma_1$, \texttt{CB} is its restriction to the points in \texttt{Peval0}, and \texttt{l} is the squared norm of \texttt{CB}. So \texttt{Hankel} is the Hankel matrix of the extreme ray that we constructed. \end{Exm} \begin{Rem} Note that the proof of Lemma \ref{Lem:ExtremeRaysC4} shows that the face of the cone $\Sigma_{2d}$ of sums of squares exposed by the constructed extreme ray consists of the sums of squares of polynomials in $I(\ell)_d$. \end{Rem} The fact that the conic vanishes in additional integer points on the $d\times d$ grid defined by the products of linear forms $L_1$ and $L_2$ in Proposition \ref{Prop:CB} destroys the extremality of the constructed linear functional because we get additional syzygies among the generators of the corresponding Gorenstein ideal. In order to deal with this problem, we will make a perturbation to our point arrangement. First, we want to observe the following fact, which motivates why we should be able to get around this obstacle by perturbation: \begin{Rem}\label{Rem:CBApplication} Consider the setup in Proposition \ref{Prop:CB} and suppose the conic $C$ vanishes in additional integer points in the $d\times d$ integer grid $\Gamma=\V(L_1)\cap \V(L_2)$. Pick such a point $P\in\Gamma$. Then every form of degree $d$ vanishing on $\Gamma_1$ will also vanish at $P$ because $L_1$, $L_2$ and the third form $p$, which is the product of lines and the conic, form a basis of this space. By the Theorem of Cayley-Bacharach applied to $\Gamma = \Gamma_1'\cup\Gamma_2'$ for $\Gamma_1' = \Gamma_1 \cup\{P\}$ and $\Gamma_2' = \Gamma_2\setminus\{P\}$, there is a unique linear relation among the point evaluations at points of $\Gamma_2'$ on forms of degree $d-3$.
In particular, the coefficient of the point evaluation at $P$ in the unique linear relation among point evaluations at $\Gamma_2$ on forms of degree $d-3$ is zero. The converse is also true by Cayley-Bacharach, so we have:\\ The conic $C$ vanishes at a point $P\in\Gamma_2$ if and only if the coefficient of the point evaluation at $P$ in the unique linear relation among $\{\ev_v\in \R[\ul{x}]_{d-3}^\ast \colon v\in\Gamma_2\}$ is zero. This seems to be a non-generic property and we will indeed show that we can make all coefficients in the linear relation among these point evaluations non-zero by a careful perturbation of $L_1$ and $L_2$. \end{Rem} We now drop the assumptions on $d$ made in \ref{Const:ellipse} and prove Lemma \ref{Lem:ExtremeRaysC4} for all $d\geq 4$: \begin{Lem}\label{Lem:ExRayC4Per} For any $d\geq 4$, there is an extreme ray $\R_+\ell$ of $\Sigma_{2d}^\vee$ such that $B_\ell$ has corank $4$. The Hilbert function of the ideal $I(\ell)$ is $\hilb(I(\ell),j) = \binom{j+2}{2}$ for $0\leq j<d$ and $\hilb(I(\ell),d) = \binom{d+2}{2}-4$. \end{Lem} \begin{proof} We start as above with the products of linear forms $L_1 =\prod_{j=0}^{d-1}(x-jz)$ and $L_2 =\prod_{j=0}^{d-1}(y-jz) $ and denote by $\Gamma$ the complete intersection $\V(L_1)\cap \V(L_2)$. Split $\Gamma$ into \[ \Gamma_2 = \{(x:y:1)\colon x+y=2\}\cup\bigcup_{j=1}^{d-4}\{(x:y:1)\colon x+y=d+j\} \] and $\Gamma_1=\Gamma\setminus\Gamma_2$. Then the space of forms of degree $d$ vanishing on $\Gamma_1$ has dimension $3$. Let $p$ be the uniquely determined form of degree $d$ such that $L_1,L_2,p$ is a basis of this space. By Cayley-Bacharach, we know that there is a unique relation among the point evaluations $\{\ev_x\in \R[\ul{x}]_{d-3}^\ast \colon x\in\Gamma_2\}$, say \[ \sum_{x\in\Gamma_2} w_x\ev_x=0. \] Note that by the preceding Remark \ref{Rem:CBApplication}, the coefficient of $\ev_{(1:1:1)}$ is non-zero. Set $\Gamma_1'=\Gamma_1\cup \{(1:1:1)\}$ and $\Gamma_2'=\Gamma_2\setminus\{(1:1:1)\}$.
Then the point evaluations $\{\ev_x\in\R[\ul{x}]_{d-3}^\ast\colon x\in\Gamma_2'\}$ are linearly independent and span a hyperplane $H$ in $\R[\ul{x}]_{d-3}^\ast$. So there is a unique form $q$ of degree $d-3$ vanishing on $\Gamma_2'$, namely the one vanishing on all of $\Gamma_2$, i.e.~$q=(x+y-2z)\prod_{j=1}^{d-4}(x+y-(d+j)z)$. We will now perturb the point $(1:1:1)$ along the line $x+y=2$, see Figure \ref{fig:perturbation} for a visualisation in case $d=9$: Let $v_t := (t,2-t)$. Of course, $q(v_t)=0$ for every $t\in\R$, i.e.~the point evaluation $\ev_{v_t}\in\R[\ul{x}]_{d-3}^\ast$ lies in the hyperplane spanned by the point evaluations at $\Gamma_2'$; write \[ \ev_{v_t} = \sum_{x\in\Gamma_2'} \alpha_x(t)\ev_x, \] where the coefficients $\alpha_x(t)$ are rational functions of the parameter $t$. Suppose there is a point $P\in\Gamma_2'$ such that $\alpha_P(t)=0$ for all $t\in\R$. Then $\ev_{v_t}\in\lspan(\ev_v\colon v\in\Gamma_2'\setminus\{P\})$. Dually, this means that there is a form $f_P$ of degree $d-3$, uniquely determined modulo $q$, such that $f_P(P)=1$, $f_P(v)=0$ for all $v\in\Gamma_2'\setminus\{P\}$ and consequently $f_P(v_t) = 0$ for all $t\in\R$. Such a form cannot exist: Since $v_t$ ranges over the whole line defined by $x+y=2$, the form $f_P$ vanishes identically on this line; so we can factor it out. Furthermore, $f_P$ vanishes identically on every diagonal defining $\Gamma_2$ to the left of $P$, i.e.~$f_P(x,j-x)=0$ for all $d<j<P_1+P_2$ because it has too many zeros on these lines from $\Gamma_2'$. Now $\Gamma_2'\cap\{x+y=P_1+P_2\}$ consists of $2d-1-P_1-P_2$ points. We have already established $P_1+P_2-d$ linear factors of $f_P$, so the remaining cofactor has degree $2d-P_1-P_2-3$. Since $f_P$ vanishes at the $2d-2-P_1-P_2$ points of this set other than $P$, which exceeds the degree of the cofactor, $f_P$ vanishes identically on this line, which is a contradiction because the line contains $P$.
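For the first critical case $d=9$, the non-vanishing of the coefficients can be confirmed by exact computation at a concrete parameter value. The following Python sketch is an independent check (not part of the original argument); the specific value $t_0=\frac34$ is our choice, matching the perturbed configuration drawn in Figure \ref{fig:perturbation}.

```python
from fractions import Fraction
from itertools import product

d = 9
# Gamma_2' = (diagonals x+y = 2 and x+y = 10,...,14 in the grid) minus (1,1).
pts = [p for p in product(range(d), repeat=2)
       if sum(p) in (2, 10, 11, 12, 13, 14) and p != (1, 1)]
monos = [(a, b) for a in range(7) for b in range(7 - a)]  # degree d - 3 = 6
assert len(pts) == 27 and len(monos) == 28

def ev(x, y):
    return [x**a * y**b for a, b in monos]

# Solve ev_{v_t} = sum_x alpha_x(t) ev_x at t0 = 3/4, v_t = (t0, 2 - t0),
# by exact Gauss-Jordan elimination on the augmented 28 x 28 system.
t0 = Fraction(3, 4)
cols = [ev(i, j) for i, j in pts]
rhs = ev(t0, 2 - t0)
A = [[Fraction(cols[x][k]) for x in range(27)] + [rhs[k]] for k in range(28)]
r = 0
for c in range(27):
    pr = next((i for i in range(r, 28) if A[i][c] != 0), None)
    if pr is None:
        continue
    A[r], A[pr] = A[pr], A[r]
    A[r] = [x / A[r][c] for x in A[r]]
    for i in range(28):
        if i != r and A[i][c] != 0:
            A[i] = [x - A[i][c] * y for x, y in zip(A[i], A[r])]
    r += 1

assert r == 27         # the 27 point evaluations are linearly independent
assert A[27][27] == 0  # consistent: ev_{v_t} lies in their span, since q(v_t) = 0
alpha = [A[k][27] for k in range(27)]
assert all(a != 0 for a in alpha)  # no coefficient vanishes at t0 = 3/4
print(len(alpha))
```

The proof above only guarantees an interval $(1-\epsilon,1)$ of admissible parameters; the computation shows that $t_0=\frac34$ happens to be admissible as well.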
\begin{figure}[h] \begin{center} \begin{tikzpicture} \filldraw (0,0) circle(2pt); \filldraw (0,2) circle(2pt); \filldraw (0,3) circle(2pt); \filldraw (0,4) circle(2pt); \filldraw (0,5) circle(2pt); \filldraw (0,6) circle(2pt); \filldraw (0,7) circle(2pt); \filldraw (0,8) circle(2pt); \filldraw (2,0) circle(2pt); \filldraw (2,2) circle(2pt); \filldraw (2,3) circle(2pt); \filldraw[red] (2,4) circle(2pt); \filldraw (2,5) circle(2pt); \filldraw (2,6) circle(2pt); \filldraw (2,7) circle(2pt); \filldraw (2,8) circle(2pt); \filldraw (3,0) circle(2pt); \filldraw (3,2) circle(2pt); \filldraw (3,3) circle(2pt); \filldraw (3,4) circle(2pt); \filldraw (3,5) circle(2pt); \filldraw (3,6) circle(2pt); \filldraw (3,7) circle(2pt); \filldraw (3,8) circle(2pt); \filldraw (4,0) circle(2pt); \filldraw[red] (4,2) circle(2pt); \filldraw (4,3) circle(2pt); \filldraw (4,4) circle(2pt); \filldraw (4,5) circle(2pt); \filldraw[red] (4,6) circle(2pt); \filldraw (4,7) circle(2pt); \filldraw (4,8) circle(2pt); \filldraw (5,0) circle(2pt); \filldraw (5,2) circle(2pt); \filldraw (5,3) circle(2pt); \filldraw (5,4) circle(2pt); \filldraw (5,5) circle(2pt); \filldraw (5,6) circle(2pt); \filldraw (5,7) circle(2pt); \filldraw (5,8) circle(2pt); \filldraw (6,0) circle(2pt); \filldraw (6,2) circle(2pt); \filldraw (6,3) circle(2pt); \filldraw[red] (6,4) circle(2pt); \filldraw (6,5) circle(2pt); \filldraw (6,6) circle(2pt); \filldraw (6,7) circle(2pt); \filldraw (6,8) circle(2pt); \filldraw (7,0) circle(2pt); \filldraw (7,2) circle(2pt); \filldraw (7,3) circle(2pt); \filldraw (7,4) circle(2pt); \filldraw (7,5) circle(2pt); \filldraw (7,6) circle(2pt); \filldraw (7,7) circle(2pt); \filldraw (7,8) circle(2pt); \filldraw (8,0) circle(2pt); \filldraw (8,2) circle(2pt); \filldraw (8,3) circle(2pt); \filldraw (8,4) circle(2pt); \filldraw (8,5) circle(2pt); \filldraw (8,6) circle(2pt); \filldraw (8,7) circle(2pt); \filldraw (8,8) circle(2pt); \filldraw (3/4,0) circle(2pt); \filldraw (3/4,5/4) circle(2pt); 
\filldraw (3/4,2) circle(2pt); \filldraw (3/4,3) circle(2pt); \filldraw (3/4,4) circle(2pt); \filldraw (3/4,5) circle(2pt); \filldraw (3/4,6) circle(2pt); \filldraw (3/4,7) circle(2pt); \filldraw (3/4,8) circle(2pt); \filldraw (0,5/4) circle(2pt); \filldraw (2,5/4) circle(2pt); \filldraw (3,5/4) circle(2pt); \filldraw (4,5/4) circle(2pt); \filldraw (5,5/4) circle(2pt); \filldraw (6,5/4) circle(2pt); \filldraw (7,5/4) circle(2pt); \filldraw (8,5/4) circle(2pt); \filldraw[black!30!white] (0,1) circle(2pt); \filldraw[black!30!white] (1,1) circle(2pt); \filldraw[black!30!white] (2,1) circle(2pt); \filldraw[black!30!white] (3,1) circle(2pt); \filldraw[black!30!white] (4,1) circle(2pt); \filldraw[black!30!white] (5,1) circle(2pt); \filldraw[black!30!white] (6,1) circle(2pt); \filldraw[black!30!white] (7,1) circle(2pt); \filldraw[black!30!white] (8,1) circle(2pt); \filldraw[black!30!white] (1,0) circle(2pt); \filldraw[black!30!white] (1,2) circle(2pt); \filldraw[black!30!white] (1,3) circle(2pt); \filldraw[black!30!white] (1,4) circle(2pt); \filldraw[black!30!white] (1,5) circle(2pt); \filldraw[black!30!white] (1,6) circle(2pt); \filldraw[black!30!white] (1,7) circle(2pt); \filldraw[black!30!white] (1,8) circle(2pt); \draw (-1,3) -- (3,-1); \draw (1,9) -- (9,1); \draw (2,9) -- (9,2); \draw (3,9) -- (9,3); \draw (4,9) -- (9,4); \draw (5,9) -- (9,5); \draw[black!30!white] (1,-0.2) -- (1,8.2); \draw[black!30!white] (-0.2,1) -- (8.2,1); \draw[rotate=45,black!30!white] (5.65685,0) ellipse (5.65685cm and 1.46059cm); \end{tikzpicture} \end{center} \caption{A picture of the perturbation for general $d\geq 4$ shown for the first critical case $d=9$: The black points and the four red points are the perturbed point configuration for which our construction works. 
The four red points are the additional points through which the grey ellipse goes.} \label{fig:perturbation} \end{figure} So there is an $\epsilon>0$ such that for all $t\in (1-\epsilon,1)$, all coefficients of the linear relation \[ \ev_{v_t} = \sum_{v\in\Gamma_2'}\alpha_v(t)\ev_v \] are non-zero. Pick a $t_0$ in this interval and consider the totally real complete intersection $\Gamma=\V(L_1')\cap\V(L_2')$ for $L_1' = x(x-t_0z)\prod_{j=2}^{d-1}(x-jz)$ and $L_2' = y(y-(2-t_0)z)\prod_{j=2}^{d-1}(y-jz)$ and argue as above: we split the points into $\Gamma_1$ and $\Gamma_2$, where $\Gamma_2$ is the same union of diagonals as above. The Theorem of Cayley-Bacharach then implies the existence of a form of degree $d$ vanishing on $\Gamma_1$ and not identically on $\Gamma$. In fact, by Remark \ref{Rem:CBApplication}, this form does not vanish at any point of $\Gamma_2$, so we can now complete the proof as in Lemma \ref{Lem:ExtremeRaysC4}. \end{proof} \begin{Rem} In particular, the union of all extreme rays of $\Sigma_{2d}^\vee$ need not be closed, e.g.~for $d=9$, extremality fails in our original construction but a perturbation gives an extreme ray. \end{Rem} \begin{Thm}\label{Thm:ZarClExtRays} For any $d\geq 4$, the Zariski closure of the set of extreme rays of $\Sigma_{2d}^\vee$ is the variety of Hankel matrices of corank at least $4$. It is irreducible and has codimension $10$. \end{Thm} \begin{proof} We have shown in the proof of Lemma \ref{Lem:Corank4dim} that the quasi-projective variety $\gor(T)$ of all Gorenstein ideals with Hilbert function $T(j)=\binom{j+2}{2}$ for $0\leq j<d$ and $T(d)=\binom{d+2}{2}-4$ is dense in $X_{-4}$. It is also smooth, cf.~Theorem \ref{Thm:GorTTangentSpace} or Iarrobino-Kanev \cite[Theorem 4.21]{IarKanMR1735271}. We have shown in Lemma \ref{Lem:ExRayC4Per} that there is an extreme ray $\R_+\ell_0$ of $\Sigma_{2d}^\vee$ with $I(\ell_0)\in\gor(T)$.
We will now show that every linear functional in an open neighbourhood of $\ell_0$ in $\gor(T)$ spans an extreme ray of $\Sigma_{2d}^\vee$. Since $I(\ell)\in\gor(T)$ implies that the corank of the Hankel matrix $B_\ell$ is $4$, there is an open neighbourhood of $\ell_0$ such that $B_\ell$ is positive semi-definite for all $\ell$ in this neighbourhood, because the eigenvalues of a symmetric matrix depend continuously on its entries. Therefore, a linear functional $\ell$ in this neighbourhood spans an extreme ray of $\Sigma_{2d}^\vee$ if and only if $I(\ell)_d$ generates a hyperplane in $\R[\ul{x}]_{2d}$, i.e.~$\langle I(\ell)_d\rangle_{2d}=I(\ell)_{2d}$. By Gauss' algorithm (column echelon form), we can write a basis $(b_1,b_2,b_3,b_4)$ of the kernel of $B_\ell$ in terms of rational functions in the entries of $B_\ell$. We consider the linear map \[ \R[\ul{x}]_d^4\to \R[\ul{x}]_{2d}, (f_1,f_2,f_3,f_4)\mapsto f_1b_1+f_2b_2+f_3b_3+f_4b_4. \] The rank of this map is at most $\binom{2d+2}{2}-1$ because its image is contained in $I(\ell)_{2d}$, which is a hyperplane in $\R[\ul{x}]_{2d}$ since $\ell\in\gor(T)$. For $\ell=\ell_0$, the image is exactly this hyperplane, so the rank is maximal. Since some maximal minor is non-zero at $\ell_0$ and depends continuously on $\ell$, the rank stays maximal nearby, so the image is a hyperplane for every $\ell$ in a neighbourhood of $\ell_0$ in $\gor(T)$, which shows that these $\ell$ span extreme rays of $\Sigma_{2d}^\vee$. \end{proof} \begin{Rem} In the proof of the above Theorem, we see that if $T$ is a Hilbert function occurring for a Gorenstein ideal corresponding to an extreme ray of $\Sigma_{2d}^\vee$, then there is an open subset of extreme rays in a connected component of $\gor(T)(\R)$ because $\gor(T)$ is smooth. As we remarked above, it might not be the entire connected component. \end{Rem} Our construction of an extreme ray of maximal rank also gives a base-point free special linear system with a totally real representative on a smooth curve of degree $d\geq 4$, which might be interesting in itself. \begin{Prop} Let $d\geq 4$.
There is a smooth real curve $X\subset\P^2$ of degree $d$ and an effective divisor $D$ of degree $g = \binom{d-1}{2}$ supported on $X(\R)$ such that $|D|$ has dimension $1$ and is base-point free. \end{Prop} \begin{proof} Start with a complete intersection $\V(L_1)\cap\V(L_2)$ of products of linear forms and a choice of $\binom{d-1}{2}$ points $\Gamma_2\subset\V(L_1)\cap\V(L_2)$ such that there is a unique curve of degree $d-3$ passing through these points. Moreover, assume that all coefficients in the linear relation among the point evaluations $\{\ev_v\in\R[\ul{x}]_{d-3}^\ast\colon v\in\Gamma_2\}$ are non-zero. This situation is established in the proof of Lemma \ref{Lem:ExRayC4Per}. By Bertini's Theorem \cite[Theorem 6.2.11]{BelCarMR2549804} or \cite[Th\'eor\`eme 6.6.2]{JouMR725671}, there is a smooth curve $\V(f)$ of degree $d$ passing through $\Gamma_2$ such that $f$ is a small perturbation of $L_1$; more precisely, we want $\Gamma = \V(f)\cap \V(L_2)$ to be a totally real transversal intersection. Then the complete linear system $|\Gamma_2|\subset \divi(\V(f))$ is cut out by forms of degree $d$, i.e. \[ |\Gamma_2| = \{C.\V(f)-(\Gamma-\Gamma_2)\geq 0\colon C\subset\P^2\text{ of degree }d\}, \] cf.~Eisenbud-Green-Harris \cite[Corollary 5 (to Brill-Noether's Restsatz)]{EisGreHar}. We have argued in Remark \ref{Rem:CBApplication} that this linear system is base-point free. We compute its dimension with the help of the Cayley-Bacharach Theorem, more precisely \cite[Corollary 6]{EisGreHar}: \[ 1 =|\Gamma_2|-( \ell( (d-3)H) - \ell( (d-3)H-\Gamma_2) )= g-( g-\ell( (d-3)H-\Gamma_2)), \] which implies \[ \ell(\Gamma_2) = \deg(\Gamma_2) +1 - g +\ell( (d-3)H-\Gamma_2) = 2.
\] \end{proof} \begin{Rem} Conversely, given such a linear system on a smooth curve $X\subset\P^2$, we can apply the construction in the proof of Lemma \ref{Lem:ExtremeRaysC4} to construct an extreme ray of $\Sigma_{2d}^\vee$ of maximal rank, at least if there is a totally real transversal intersection $C\cap X$ with $C.X-D\geq0$. The fact that the linear system has dimension $1$ gives the unique linear relation among the point evaluations at $C.X-D$ on forms of degree $d$. Extremality then follows from the fact that $|D|$ is base-point free by the count of dimensions as in the proof of Lemma \ref{Lem:ExtremeRaysC4}. \end{Rem} \section{The case $d=5$ or Ternary Decics.}\label{sec:Dextics} For $d=3$, a complete characterisation of extreme rays of $\Sigma_6^\vee$ was given by Blekherman in \cite{BleNP}. It led to a complete description of the algebraic boundary of the sums of squares cone $\Sigma_6$ by Blekherman, Hauenstein, Ottem, Ranestad and Sturmfels, cf.~\cite{BleHauOttRanStuMR2999301}. For $d=4$, there are only two possible ranks for extreme rays of $\Sigma_8^\vee$, namely $10$ and $11$; in particular, we know how to construct one of each rank. It is possible to prove, similarly to the cases below, that both these ranks give rise to irreducible components of the algebraic boundary of $\Sigma_8$ by projective duality. So the first interesting case from this point of view is $d=5$: In fact, we can construct an extreme ray of $\Sigma_{10}^\vee$ of every rank in the interval $\{13,\ldots,17\}$ between the lower and upper bound using the Cayley-Bacharach Theorem. Moreover, using the results of \cite{SinnAlgBound}, we can prove by projective duality that there is an irreducible component of the algebraic boundary of $\Sigma_{10}$ for every one of these ranks; in particular, $\partial_a\Sigma_{10}$ has at least $6$ irreducible components. The propositions in this section together prove the following theorem.
\begin{Thm}\label{Thm:d5} For every $r\in\{13,\ldots,17\}$, there is an extreme ray $\R_+\ell_r$ of $\Sigma_{10}^\vee$ such that the rank of the Hankel matrix $B_{\ell_r}$ is $r$. The Hilbert function $T_r$ of $I(\ell_r)$ is \begin{compactitem} \item $T_{13} = (1,3,6,9,12,13,12,9,6,3,1)$ \item $T_{14} = (1,3,6,10,13,14,13,10,6,3,1)$ \item $T_{15} = (1,3,6,10,14,15,14,10,6,3,1)$ \item $T_{16} = (1,3,6,10,14,16,14,10,6,3,1)$ \item $T_{17} = (1,3,6,10,15,17,15,10,6,3,1)$ \end{compactitem} The dual varieties to $\gor(T_r)$ are irreducible components of the algebraic boundary of the sums of squares cone $\Sigma_{10}$ for all $r\in\{13,\ldots,17\}$. \end{Thm} The construction given in the preceding section for extreme rays of maximal rank $\binom{d+2}{2}-4$ leads to an extreme ray $\R_+\ell$ of $\Sigma_{10}^\vee$ such that the Hilbert function of the corresponding Gorenstein ideal $I(\ell)$ is \[ T_{17} = (1,3,6,10,15,17,15,10,6,3,1). \] By Theorem \ref{Thm:ZarClExtRays}, the Zariski closure of the set of extreme rays of $\Sigma_{10}^\vee$ is $\cl(\gor(T_{17}))$, a unirational variety of codimension $10$ in $\P^{65}$. So \cite[Theorem 3.8]{SinnAlgBound} implies that its dual variety is an irreducible component of the algebraic boundary of $\Sigma_{10}$. We now work our way up, beginning with the lowest rank $13$, following the construction in Blekherman \cite{BlePositiveGorensteinIdeals}: \begin{Prop} There is an extreme ray $\R_+\ell$ of $\Sigma_{10}^\vee$ of rank $13$. The Hilbert function of the Gorenstein ideal $I(\ell)$ is \[ T_{13}=(1,3,6,9,12,13,12,9,6,3,1) \] and the variety dual to $\cl(\gor(T_{13}))$ is an irreducible component of the algebraic boundary of $\Sigma_{10}$. \end{Prop} \begin{proof} Let $L_1 = x(x-z)(x-2z)(x-3z)(x-4z)$ and $L_2 = y(y-z)(y-2z)$ and $\Gamma = \V(L_1)\cap\V(L_2)$.
By construction, there is a unique linear relation among $\{\ev_v\in\R[\ul{x}]_5^\ast\colon v\in\Gamma\}$, say $\sum_{v\in\Gamma}u_v\ev_v = 0$, and all coefficients in this relation are non-zero. The linear functional \[ \ell = \sum_{v\in\Gamma\setminus\{P\}} \ev_v - \frac{u_P^2}{\sum_{v\in\Gamma\setminus\{P\}} u_v^2} \ev_P \] is positive semi-definite of rank $13$ for any $P\in\Gamma$ by the Cauchy-Schwarz inequality, cf.~proof of Lemma \ref{Lem:ExtremeRaysC4}. By a Hilbert function computation using \texttt{Macaulay2} \cite{Macaulay2}, we verify that the degree $5$ part of the corresponding Gorenstein ideal $I(\ell)$ generates a hyperplane in degree $10$. To prove that the dual variety to $\gor(T_{13})$ is an irreducible component of $\partial_a \Sigma_{10}$, we use \cite[Theorem 3.8]{SinnAlgBound}. The condition given there is equivalent to \[ (T_{\ell} \gor(T_{13}))^\perp = (I(\ell)_5)^2 \] because the face of $\Sigma_{10}$ supported by $\ell$ is the set of sums of squares of polynomials in $I(\ell)_5$, which spans the vector space $(I(\ell)_5)^2$. By the description of the tangent space to $\gor(T_{13})$ at $\ell$ (cf.~Theorem \ref{Thm:GorTTangentSpace}), this is equivalent to \[ (I(\ell)^2)_{10} = (I(\ell)_5)^2, \] which we also check using \texttt{Macaulay2} \cite{Macaulay2}. \end{proof} \begin{Prop} There is an extreme ray $\R_+\ell$ of $\Sigma_{10}^\vee$ of rank $14$. The Hilbert function of the Gorenstein ideal $I(\ell)$ is \[ T_{14}=(1,3,6,10,13,14,13,10,6,3,1) \] and the variety dual to $\cl(\gor(T_{14}))$ is an irreducible component of the algebraic boundary of $\Sigma_{10}$. \end{Prop} \begin{proof} In this case, take $L_1 = x(x-z)(x-2z)(x-3z)$ and $L_2 = y(y-z)(y-2z)(y-3z)$ and set $\Gamma = \V(L_1)\cap \V(L_2)$. There is a unique linear relation among $\{\ev_v\in\R[\ul{x}]_5^\ast \colon v\in\Gamma\}$, say $\sum_{v\in\Gamma} u_v\ev_v = 0$, and all its coefficients are non-zero.
As above, the linear functional \[ \ell = \sum_{v\in\Gamma\setminus\{P\}} \ev_v - \frac{u_P^2}{\sum_{v\in\Gamma\setminus\{P\}} u_v^2} \ev_P \] is positive semi-definite of rank $14$ for any $P\in\Gamma$. Again, using \texttt{Macaulay2} \cite{Macaulay2}, we verify that the degree $5$ part of the corresponding Gorenstein ideal $I(\ell)$ generates a hyperplane in degree $10$ and that \[ (I(\ell)^2)_{10} = (I(\ell)_5)^2. \] \end{proof} \begin{Prop} There is an extreme ray $\R_+\ell$ of $\Sigma_{10}^\vee$ of rank $15$. The Hilbert function of the Gorenstein ideal $I(\ell)$ is \[ T_{15}=(1,3,6,10,14,15,14,10,6,3,1) \] and the variety dual to $\cl(\gor(T_{15}))$ is an irreducible component of the algebraic boundary of $\Sigma_{10}$. \end{Prop} \begin{proof} In this case, we start with a complete intersection of a quintic and a quartic, $L_1 = x(x-z)(x-2z)(x-3z)(x-4z)$, $L_2 = y (y-z)(y-2z)(y-3z)$ and $\Gamma = \V(L_1)\cap \V(L_2)$. Choose $\Gamma_2 = \{(0:2:1),(1:1:1),(2:0:1)\}$ and set $\Gamma_1 = \Gamma\setminus\Gamma_2$. By Cayley-Bacharach, there is a unique linear relation among the point evaluations at the $17$ points of $\Gamma_1$. Using \texttt{Macaulay2} \cite{Macaulay2}, we complete the proof as above. \end{proof} \begin{Prop} There is an extreme ray $\R_+\ell$ of $\Sigma_{10}^\vee$ of rank $16$. The Hilbert function of the Gorenstein ideal $I(\ell)$ is \[ T_{16}=(1,3,6,10,14,16,14,10,6,3,1) \] and the variety dual to $\cl(\gor(T_{16}))$ is an irreducible component of the algebraic boundary of $\Sigma_{10}$. \end{Prop} \begin{proof} Again, choose $L_1 = x(x-z)(x-2z)(x-3z)(x-4z)$, $L_2 = y (y-z)(y-2z)(y-3z)$ and $\Gamma = \V(L_1)\cap \V(L_2)$. This time, $\Gamma_2 = \{(0:1:1),(1:0:1)\}$ and $\Gamma_1 = \Gamma\setminus\Gamma_2$ do the job: Cayley-Bacharach gives a unique linear relation among the point evaluations at the $18$ points of $\Gamma_1$. Using \texttt{Macaulay2} \cite{Macaulay2}, we complete the proof as above.
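The rank counts in this and the three preceding constructions can be double-checked with exact rational arithmetic. The following Python sketch is an independent verification (not the \texttt{Macaulay2} computations used in the proofs); dropping the last grid point is an arbitrary admissible choice of $P$.

```python
from fractions import Fraction
from itertools import product

monos = [(a, b) for a in range(6) for b in range(6 - a)]  # degree-5 monomials at z = 1

def nullspace(rows):
    """Kernel basis of the matrix with the given rows (exact Gauss-Jordan)."""
    m = [list(r) for r in rows]
    ncols, pivots, r = len(m[0]), [], 0
    for c in range(ncols):
        pr = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pr is None:
            continue
        m[r], m[pr] = m[pr], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                m[i] = [x - m[i][c] * y for x, y in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    basis = []
    for f in (c for c in range(ncols) if c not in pivots):
        v = [Fraction(0)] * ncols
        v[f] = Fraction(1)
        for row, c in zip(m, pivots):
            v[c] = -row[f]
        basis.append(v)
    return basis

def hankel_rank(nx, ny, removed):
    """Rank of the Hankel matrix of the functional built from the nx x ny
    grid minus `removed`, dropping the last remaining grid point as P."""
    pts = [p for p in product(range(nx), range(ny)) if p not in removed]
    evs = [[Fraction(i**a * j**b) for a, b in monos] for i, j in pts]
    n = len(pts)
    rel = nullspace([[evs[i][k] for i in range(n)] for k in range(21)])
    assert len(rel) == 1 and all(c != 0 for c in rel[0])
    u, P = rel[0], n - 1
    s = sum(u[i]**2 for i in range(n) if i != P)
    B = [[Fraction(0)] * 21 for _ in range(21)]
    for i in range(n):
        w = -u[P]**2 / s if i == P else Fraction(1)
        for a in range(21):
            for b in range(21):
                B[a][b] += w * evs[i][a] * evs[i][b]
    return 21 - len(nullspace(B))

ranks = [hankel_rank(5, 3, set()),                     # quintic x cubic
         hankel_rank(4, 4, set()),                     # quartic x quartic
         hankel_rank(5, 4, {(0, 2), (1, 1), (2, 0)}),  # remove the diagonal x + y = 2
         hankel_rank(5, 4, {(0, 1), (1, 0)})]          # remove the diagonal x + y = 1
print(ranks)
```

The four configurations yield, in order, the ranks $13$, $14$, $15$ and $16$.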
\end{proof} For general $d>5$, our constructive method using the Cayley-Bacharach Theorem cannot construct an extreme ray of every rank in the interval $\{3d-2,\ldots,\binom{d+2}{2}-4\}$ given by the lower and upper bound. The first failure occurs for $d=6$ and rank $17\in\{16,\ldots,24\}$. In fact, $\Sigma_{12}^\vee$ does not have an extreme ray of rank $17$, as we will see below, cf.~Lemma \ref{Lem:Rank17}. \begin{Rem}\label{Rem:d6r17} Our construction starts with a totally real intersection of two curves $X_1$ and $X_2$ with $\deg(X_1)+\deg(X_2)\geq d+3$; we then need $19$ intersection points such that the corresponding point evaluations on forms of degree $6$ satisfy a unique linear relation in which all coefficients are non-zero. This configuration would lead to a positive linear functional such that the Hankel matrix has the desired rank $17$ (of course we would still need to prove extremality). We will see that this is not possible: The following tuples are permissible choices for the degrees of the curves: $(3,6)$, $(4,5)$, $(4,6)$, $(5,5)$, $(5,6)$ and $(6,6)$. For $(\deg(X_1),\deg(X_2)) = (3,6)$, the transversal intersection has only $18$ points. In the case $(4,5)$, there is a unique linear relation among point evaluations at the $20$ intersection points such that all coefficients are non-zero; in particular, whatever point we remove, the remaining $19$ point evaluations are linearly independent on forms of degree $6$. In the cases $(4,6)$, $(5,5)$ and $(5,6)$, we cannot have the desired number of points on a curve of dual degree $s-d$: For example, in order to apply the duality of the Cayley-Bacharach Theorem to the $24$ intersection points in the case $(4,6)$, we would need to have $5$ of the intersection points on a line, which intersects the quartic in only $4$ points.
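The claim in the case $(4,5)$ can be confirmed computationally. The following Python sketch is an independent check on a concrete totally real $(4,5)$ complete intersection, the $4\times 5$ integer grid: it computes the left kernel of the evaluation matrix on sextics and verifies that the unique relation has no vanishing coefficient.

```python
from fractions import Fraction
from itertools import product

# Transversal intersection of curves of degrees 4 and 5: the 4 x 5 grid.
pts = list(product(range(4), range(5)))
monos = [(a, b) for a in range(7) for b in range(7 - a)]  # degree-6 monomials
assert len(pts) == 20 and len(monos) == 28

# Relations among the 20 point evaluations on sextics = ker(E^T) for the
# 20 x 28 evaluation matrix E; eliminate the 28 x 20 transpose exactly.
m = [[Fraction(i**a * j**b) for i, j in pts] for a, b in monos]
pivots, r = [], 0
for c in range(20):
    pr = next((i for i in range(r, 28) if m[i][c] != 0), None)
    if pr is None:
        continue
    m[r], m[pr] = m[pr], m[r]
    m[r] = [x / m[r][c] for x in m[r]]
    for i in range(28):
        if i != r and m[i][c] != 0:
            m[i] = [x - m[i][c] * y for x, y in zip(m[i], m[r])]
    pivots.append(c)
    r += 1

assert r == 19                 # rank 19: there is exactly one relation
free = [c for c in range(20) if c not in pivots][0]
u = [Fraction(0)] * 20
u[free] = Fraction(1)
for row, c in zip(m, pivots):
    u[c] = -row[free]
assert all(c != 0 for c in u)  # every coefficient is non-zero
print(r)
```

Consequently, after removing any one of the $20$ points, the remaining $19$ evaluations are linearly independent, as stated above.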
The last case $(6,6)$ is more subtle: We would like to find exactly $17$ intersection points on a cubic, which is impossible, because there is a unique linear relation among the corresponding point evaluations on forms of degree $6$ on the complete intersection of a cubic and a sextic, cf.~Eisenbud-Green-Harris \cite[CB4]{EisGreHar}. \end{Rem} This is not a defect of our construction in this case. In fact, there are no extreme rays of $\Sigma_{12}^\vee$ of rank $17$. \begin{Lem}\label{Lem:Rank17} There is no Gorenstein ideal $I\subset \C[x,y,z]$ with socle in degree $12$ such that $\hilb(I,6) = 17$ and $I_6$ is maximal with respect to inclusion among $J_6$, where $J$ runs over all Gorenstein ideals with socle in degree $12$. \end{Lem} In the proof of this lemma, we will use the following fact multiple times. \begin{Rem} The complete intersection of three ternary forms of degree $d_1$, $d_2$, and $d_3$, respectively, is a Gorenstein ideal in $\C[x,y,z]$ with socle in degree $d_1+d_2+d_3-3$, see \cite[Theorem CB8]{EisGreHar}. The socle degree follows from an elementary count of dimensions and the fact that the generators must be relatively prime. \end{Rem} \begin{proof} We exclude the possibilities by arguing on the lowest degree $k$ of a generator of $I$. First note that maximality of $I_6$ implies that $\langle I_6 \rangle_{12} = I_{12}$ and $\V(I_6)=\emptyset$. In particular, we can always choose a complete intersection of three forms in $I_6$, one of which can be chosen to be a suitable multiple of a generator of minimal degree of $I$. Let $k$ be the minimal degree of a generator of $I$. The ideal $I$ cannot contain a quadric generator, because the linear functional defining $I$ would then be supported on $12$ points by the apolarity lemma, which implies $\hilb(I,6) \leq 12$. In case $k=3$, the Gorenstein ideal is actually generated by a cubic and two sextics that form a complete intersection, see Stanley \cite{Stanley}, so $\hilb(I,6) = 16 = 28 - (10 + 2)$.
The case $k=6$ is also easily excluded because Hilbert functions of Gorenstein ideals in $\C[x,y,z]$ are unimodal by \cite[Theorem 4.2]{Stanley}, which implies in this case that $\hilb(I,5)\leq \hilb(I,6) = 17$, or equivalently $\dim(I_5) \geq 4$, contradicting $I_5 = 0$. This leaves the two cases $k=5$ and $k=4$: Suppose $k=5$; then the Hilbert functions of Gorenstein ideals with socle in degree $12$ and order $5$ are in 1-1 correspondence with self-complementary partitions of $10\times 4$ blocks, cf.~\cite[Proposition 3.9]{Die}. By unimodality of Hilbert functions, we have $\dim(I_5)\geq 4$. This determines the first three rows of the blocks, so we can choose two more generators of degree $\leq 6$. Since a block of degree $5$ forces a relation in degree $6$ by the self-complementarity of the partition, the $4$ generators of degree $5$ generate a $4\cdot 3 - 3 = 9$-dimensional subspace of $\C[x,y,z]_6$. The only two possible degrees for the other two generators to achieve $\dim(I_6) = 28-17 = 11$ are therefore one more generator of degree $5$ and one generator of degree $7$ or two generators in degree $6$. In the first case, the degrees of generators are $q = (5,5,5,5,5,7,7,9,9,9,9)$ and the corresponding relation degrees are $p = (10,10,10,10,10,8,8,6,6,6,6)$. Since there is no generator of degree $6$ and $\V(I_6) = \emptyset$, we find a complete intersection of three forms of degree $5$ contained in $I$. These generate a Gorenstein ideal with socle in degree $5+5+5-3 = 12$. This is impossible, because $I$ contains further generators. In the second case, the degrees of generators are $q = (5,5,5,5,6,6,8,8,9,9,9)$ and the corresponding relation degrees are $p = (10,10,10,10,9,9,7,7,6,6,6)$. The quasiprojective variety $\gor(T)$, where $T$ is given by these generator and relation degrees, contains a dense subset of Gorenstein ideals generated by polynomials of degree $q_{\rm min} = (5,5,5,5,8,8,9)$ by \cite[Section 3.3]{Die}.
So the same argument as in the first case excludes this possibility, too. So now we are left with the case $k=4$: In this case, Hilbert functions of Gorenstein ideals with socle in degree $12$ and order $4$ correspond to self-complementary partitions of $8\times 6$ blocks. If $\dim(I_4) = 2$, then these would generate a $2\cdot 6 = 12$-dimensional subspace in $\C[x,y,z]_6$, because they correspond to relations in degree $7$; this exceeds $\dim(I_6) = 28 - 17 = 11$. So $\dim(I_4) = 1$. We can choose $4$ more degrees of generators $\leq 6$. A generator of degree $5$ comes with a relation in degree $5$ and therefore, the only two possible choices of degrees for the generators with $\hilb(I,6) = 17$ are one generator of degree $5$ and three generators of degree $6$ or two generators of degree $5$ and one generator of degree $6$. Let us first consider $q = (4,5,5,6,7,7,8,9,9)$, which corresponds to relation degrees $p = (11,10,10,9,8,8,7,6,6)$. Again, $\gor(T)$, where $T$ is given by these generator and relation degrees, contains a dense subset of Gorenstein ideals generated by polynomials of degree $q_{\rm min} = (4,5,5,7,9)$. There is no generator of degree $6$ anymore, so $\V(I_6) = \emptyset$ implies that we find a complete intersection of a quartic and two quintics, which generate a Gorenstein ideal with socle in degree $4+5+5-3=11$, which is impossible. The last remaining case is $q = (4,5,6,6,6,8,8,8,9)$ with corresponding relation degrees $p = (11,10,9,9,9,7,7,7,6)$. Here, $\gor(T)$ contains a dense subset of Gorenstein ideals with generators of degree $q_{\rm min} = (4,5,6,6,8,8,8)$ and correspondingly $p_{\rm min} = (11,10,9,9,7,7,7)$. The assumption $\V(I_6) = \emptyset$ implies only that we can find a complete intersection of a quartic and two sextics. They generate a Gorenstein ideal with socle in degree $4+6+6-3 = 13$. This Gorenstein ideal has Hilbert function $3$ in degree $12$ because Hilbert functions of Gorenstein ideals are symmetric, cf.~Remark \ref{Rem:SymHilbFunc}.
This is impossible because this complete intersection together with the other $4$ generators would then fill $\C[x,y,z]_{12}$. This concludes the case study. \end{proof} \textbf{Acknowledgements.} This paper grew out of the PhD thesis of the second author, who thanks his adviser Claus Scheiderer for his support, ideas, and input. The first author was partially supported by an Alfred P. Sloan Research Fellowship and an NSF DMS CAREER award. The second author was supported by the Studienstiftung des deutschen Volkes and the National Institute of Mathematical Sciences, Daejeon, Korea, during the Summer 2014 Thematic Program on {\em Applied Algebraic Geometry}.
\section{introduction} Recently, a huge amount of research on complex networks has been carried out in interdisciplinary fields including mathematics, statistical physics, computer science, sociology, and biology~\cite{Newman2003a,Albert2002,Dorogovtsev2001a}. Complex networks are ubiquitous in the real world: there are technological networks such as the Internet~\cite{Faloutsos1999}, biological networks such as protein interaction networks~\cite{Jeong2001}, and social networks such as scientific collaboration networks~\cite{Newman2001a}. Various models to explain the observed properties of those real networks have been introduced and studied by both numerical and analytic approaches. Relatively little work, however, has been done on possible errors or biases in collecting data and identifying real networks in practice, and most such works deal exclusively with either social networks or the Internet~\cite{Costenbader2003,Robins2004,Kossinets2004,Petermann2004b, Clauset2004c,Achlioptas2005,Dallasta2005,Stumpf2005,Scholz2005, JDHan2005,Stumpf2005c}. For instance, a survey of relationships among participants has to be conducted to construct a social network, but the collected network data may be incomplete or erroneous since a survey usually targets only a partial sample of a whole population~\cite{Kossinets2004}. The topology of the Internet is inferred by aggregating paths or {\em traceroutes}~\cite{Traceroute}, which also reveal only a part of the whole Internet~\cite{Petermann2004b,Clauset2004c,Achlioptas2005,Dallasta2005}. In biology, protein-protein interaction networks are identified by seeking contextual or cellular functions mostly within specific functional modules~\cite{Jeong2001}; experimental identification of such networks naturally has fundamental limits as well. Thus, all these identified networks are {\em sampled} networks from complete structures.
In addition, if the size of an entire network is too large to measure some quantities such as betweenness centrality~\cite{KIGoh2001,KIGoh2002} due to time complexity, a sampling process is inevitably necessary. So far, models of networks have been designed based on features observed in real networks, such as the small-world effect~\cite{Watts1998} and the power-law degree distribution~\cite{Barabasi1999,Krapivsky2001}. But what if those {\em observed} characteristics of the {\em sampled} networks are considerably different from the original structures of the real networks? It has been shown that sampled networks based on the traceroute sampling method may have significantly different topological properties from the original network in some cases~\cite{Petermann2004b,Clauset2004c, Achlioptas2005,Dallasta2005}. Effects of missing data in social networks are discussed in Ref.~\cite{Kossinets2004}, in which it was shown that some problems in collecting social network data can cause incompleteness and lead to misestimation of quantities like mean node degree, clustering coefficient, assortativity, etc. At this point, bias in such quantities needs to be considered in a more general sense. In a statistical sense, inference from a sample provides fairly reasonable estimation of a whole population if a large number of objects are selected randomly enough to be representative of the population. This naive criterion, however, cannot be applied directly to sampling networks, since there are two different kinds of elements in a network, i.e., nodes and links. The degree distribution of nodes, for example, is a statistic of a network, but the degree is not an independent characteristic of each node: nodes are connected to one another by the other kind of element, the links, from which the degree is defined. Similarly, other properties of a network also heavily depend on the way that nodes and links are interwoven.
There could be several different ways of sampling networks due to the two interrelated elements (nodes and links), and each method may give distinctive features with respect to such properties. There has been a large amount of work on random breakdowns or intentional attacks on complex networks, considered as the exact reverse process of sampling, in the physics community~\cite{Albert2000a,Cohen2000,Cohen2000a, Gallos2005b}. The analytic methods in that work, therefore, can also be applied to the sampling problem. In this paper, we adopt three basic methods of sampling networks and investigate the effect of each method on measuring several well-known network quantities such as degree distribution, average path length, betweenness centrality distribution~\cite{KIGoh2001,KIGoh2002}, assortativity~\cite{Newman2002a}, and clustering coefficient~\cite{Watts1998}. Observed bias of such quantities is explained, and we provide appropriate criteria for choosing sampling methods to measure the quantities more accurately. Some typical real networks as well as the Barab{\'a}si-Albert model~\cite{Barabasi1999} are sampled for this analysis. More general sampling processes used to identify real networks may consist of combinations of the methods presented here or variations of them, but their effects can be inferred from the results for the basic methods. \section{sampling methods and networks} We introduce three kinds of sampling method called node sampling, link sampling, and snowball sampling. In node sampling, a certain number of nodes are randomly chosen and links among them are kept. The sampling fraction in this method is defined as the ratio of the number of chosen nodes (including isolated nodes that will be removed later) to that of all the nodes in the original network. As in Fig.~\ref{method}(a), isolated nodes are neglected for convenience, although they are fully predictable, so the number of nodes in a sampled network is slightly smaller than the number of selected nodes.
We observe the dependence of the number of chosen links on that of nodes, since it is related to the average degrees and average path length of sampled networks, discussed later on. Suppose the fraction of selected nodes is $\alpha$ and that of links among them is $\beta$. Then it is found that $\beta \sim \alpha^2$ if we pick nodes randomly, since the maximum number of (undirected) links possible for $n$ selected nodes is ${n \choose 2} = n(n-1)/2 \sim n^2$~\cite{pick_node}. In link sampling, a certain number of links are randomly selected and the nodes attached to them are kept, as in Fig.~\ref{method}(b). In snowball sampling~\cite{snowball,Newman2003c}, we first choose a single node and all the nodes directly linked to it are picked. Then all the nodes connected to those picked in the last step are selected, and this process is continued until the desired number of nodes are sampled. The set of nodes selected in the $n$th step is denoted as the $n$th layer, in the same sense as the ``radius'' for ego-centered networks in Ref.~\cite{Newman2003c}. See Fig.~\ref{method}(c) for illustration. To control the number of nodes in the sampled network, a necessary number of nodes are randomly chosen from the last layer. Similar to the cluster-growing method used to calculate the fractal dimension of percolation clusters in Ref.~\cite{Song2005}, the snowball sampling method tends to pick hubs (nodes with many links) within a few steps due to their high connectivity. Thus, whether the initial node is a hub or not makes no noticeable difference in characterizing the sampled network. \begin{figure} \includegraphics[width=0.4\textwidth]{network.eps} \caption{Three kinds of sampling method. (a) Node sampling: Select the circled nodes, keep three links among them, and the isolated node is removed. (b) Link sampling: Select the three circled links and six nodes attached to them.
(c) Snowball sampling: Starting from the circled node, select nodes and links attached to them by tracing links.} \label{method} \end{figure} For numerical analysis of the sampled networks, we use the Barab{\'a}si-Albert (BA) scale-free network as an example of a model network, which follows the power-law degree distribution $p(k) \sim k^{-3}$, with $30000$ nodes and $m_0 = m = 4$~\cite{Barabasi1999}. We also consider three real-world networks from various fields, including the protein interaction network (PIN)~\cite{Jeong2001,KIGoh2003}, the Internet at the autonomous systems (AS) level~\cite{Meyer}, and the e-print archive coauthorship network (arxiv.org)~\cite{Newman2001a}. The numbers of nodes and links for each network are in Table~\ref{net_table}. Although results from other homogeneous networks are also discussed in Sec. IV, most of the networks considered in this work are undirected scale-free networks following a power-law degree distribution, $p(k) \sim k^{-\gamma}$, where $2 < \gamma \leq 3$. \begin{table} \begin{tabular}{cccc} \hline \hline Network & $n$ & $l$ & Ref. \\ \hline PIN & 5077 & 16449 & \cite{Jeong2001,KIGoh2003} \\ Internet AS & 10515 & 21455 & \cite{Meyer} \\ arxiv.org & 49983 & 245300 & \cite{Newman2001a} \\ \hline \hline \end{tabular} \caption{The numbers of nodes $n$ and links $l$ for each real network.} \label{net_table} \end{table} \section{characteristics of sampled networks} \subsection{Degree distribution and average path length} The degree of a node is defined as the number of links attached to the node. Many real networks are shown to have a power-law degree distribution $p(k) \sim k^{-\gamma}$~\cite{Newman2003a,Albert2002, Dorogovtsev2001a}, including the networks considered in this paper. We find that, in general, the degree distributions of the networks sampled from the four networks by all three methods follow a power law, as do those of the original networks.
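For concreteness, the three sampling schemes of Sec.~II can be sketched in a few lines of Python. This is a minimal sketch of ours on a dict-of-sets adjacency; the function names are illustrative and not from any standard library:

```python
import random

def node_sample(adj, alpha, rng):
    """Node sampling: keep a random fraction alpha of the nodes and the
    links among them; isolated nodes are dropped, as in Fig. 1(a)."""
    chosen = set(rng.sample(sorted(adj), int(alpha * len(adj))))
    return {v: adj[v] & chosen for v in chosen if adj[v] & chosen}

def link_sample(adj, alpha, rng):
    """Link sampling: keep a random fraction alpha of the links and the
    nodes attached to them, as in Fig. 1(b)."""
    edges = [(u, v) for u in adj for v in adj[u] if u < v]
    sub = {}
    for u, v in rng.sample(edges, int(alpha * len(edges))):
        sub.setdefault(u, set()).add(v)
        sub.setdefault(v, set()).add(u)
    return sub

def snowball_sample(adj, n_target, seed, rng):
    """Snowball sampling: grow layers from a seed node; only as many nodes
    as needed are randomly taken from the last layer, as in Fig. 1(c)."""
    selected, layer = [seed], {seed}
    while layer and len(selected) < n_target:
        layer = {w for v in layer for w in adj[v]} - set(selected)
        selected += rng.sample(sorted(layer), min(len(layer), n_target - len(selected)))
    kept = set(selected)
    return {v: adj[v] & kept for v in kept}

# toy chain 0-1-2-3-4
chain = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(sorted(snowball_sample(chain, 3, 0, random.Random(0))))  # -> [0, 1, 2]
```

On the toy chain, a snowball sample of three nodes starting from node $0$ collects layers $\{0\}$, $\{1\}$, $\{2\}$, regardless of the random seed.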
The exponents of degree distribution $\gamma$ (degree exponent) are extracted using maximum likelihood estimate given by the formula~\cite{Newman2004b} \begin{equation} \gamma = 1 + n \left[ \sum_{i=1}^n \ln \frac{k_i}{k_\textrm{min}} \right]^{-1} , \label{eq:MLE} \end{equation} where $n$ is the number of elements in a set $\{k_i \}$ whose elements follow the power-law distribution $p(k) \sim k^{-\gamma}$, and $k_\textrm{min}$ is the smallest element for which the power-law behavior holds. Figure~\ref{MLE_deg} shows the change of the degree exponent for the sampled networks from each network obtained by numerical simulation for each method as we change the sampling fraction $\alpha$. \begin{figure} \includegraphics[width=0.45\textwidth]{MLE_formula.eps} \caption{Changes of degree exponent $\gamma$ for each network's sampled networks according to the sampling fraction $\alpha$, averaged over ten independent realizations. Empty squares ($\square$) stand for node sampling, filled squares ($\blacksquare$) for link sampling, and empty triangles ($\vartriangle$) for snowball sampling. The horizontal dashed lines are the values for the original exponent of each network, and the solid lines represent the values obtained by Eq.~(\ref{new_degree_node}).} \label{MLE_deg} \end{figure} For node sampling, we {\em fix} the number of sampled nodes and select nodes randomly. In this case, the new degree distribution $p'(k)$ of the sampled network is expressed as \begin{equation} p'(k) = \sum_{k_0 = k}^{n-1} p(k_0 ) {k_0 \choose k}{n - k_0 - 1 \choose n_s - k -1} \Big/ {n-1 \choose n_s - 1} , \label{new_degree_node_pj} \end{equation} where $p(k)$ is the degree distribution of the original network, $n$ is the number of nodes in the original network, and $n_s = \alpha n$ is the fixed number of sampled nodes. 
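As an illustration, Eq.~(\ref{eq:MLE}) is straightforward to evaluate. The sketch below (our own code, not from the paper) checks the estimator on a synthetic continuous power-law sample drawn by inverse-transform sampling:

```python
import math
import random

def mle_exponent(ks, k_min):
    """Maximum-likelihood estimate gamma = 1 + n [sum ln(k_i/k_min)]^(-1)
    for p(k) ~ k^-gamma with k >= k_min, as in Eq. (1)."""
    tail = [k for k in ks if k >= k_min]
    return 1.0 + len(tail) / sum(math.log(k / k_min) for k in tail)

# draw from p(k) ~ k^-2.5, k >= 1, via k = (1 - u)^(-1/(gamma - 1))
rng = random.Random(42)
sample = [(1.0 - rng.random()) ** (-1.0 / 1.5) for _ in range(100_000)]
print(round(mle_exponent(sample, 1.0), 1))  # recovers a value close to 2.5
```

The standard error of the estimate scales as $(\gamma - 1)/\sqrt{n}$, so with $10^5$ samples the true exponent is recovered to within a few thousandths.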
In the case that the number of nodes in sampled networks is not fixed but only the probability $\alpha$ with which individual nodes are selected is given~\cite{Stumpf2005,Stumpf2005c}, Eq.~(\ref{new_degree_node_pj}) should be written as \begin{equation} p'(k) = \sum_{n_s = k+1}^n \sum_{k_0 = k}^{n-1} p(k_0 ) f(n_s ) {k_0 \choose k} \frac{\displaystyle {n - k_0 - 1 \choose n_s - k -1}} {\displaystyle{n-1 \choose n_s - 1}} , \label{new_degree_node_pj_alpha} \end{equation} where the probability that $n_s$ number of nodes are chosen is $f(n_s ) = {n \choose n_s} \alpha^{n_s} (1 - \alpha)^{n - n_s}$. If the number of nodes is fixed, $f(n_s ) = \delta(n_s - \alpha n)$ and Eq.~(\ref{new_degree_node_pj_alpha}) becomes Eq.~(\ref{new_degree_node_pj}) with $n_s = \alpha n$. Even if the number of nodes is not fixed, when the system size is large enough to use the approximation $f(n_s ) \simeq \delta(n_s - \alpha n)$, we can safely use Eq.~(\ref{new_degree_node_pj}). Equation~(\ref{new_degree_node_pj}) can be further reduced by $n! / (n-m)! \simeq n^m$ for $n \gg m$. Suppose $n, n_s \gg k_0 , k$. Then \begin{equation} \begin{array}{l} \displaystyle{{n - k_0 - 1 \choose n_s - k -1} \Big/ {n-1 \choose n_s - 1}} \simeq n_s^k (n - n_s )^{k_0 - k}/ n^{k_0} \\ \\ = \displaystyle{\left(\frac{n_s}{n}\right)^k \left( 1 - \frac{n_s}{n}\right)^{k_0 - k}} = \alpha^k (1 - \alpha)^{k_0 - k}, \end{array} \end{equation} which leads to the formula previously used in Refs.~\cite{Cohen2000,Stumpf2005,Stumpf2005c} \begin{equation} p'(k) = \sum_{k_0 = k}^{\infty} p(k_0 ) {k_0 \choose k} \alpha^k (1-\alpha)^{k_0 - k}. \label{new_degree_node} \end{equation} \begin{figure} \includegraphics[width=0.4\textwidth]{SmallNetInset.eps} \caption{Degree distribution for sampled networks of (a) {\em C. elegans} neural network with $\alpha = 120/297$ and (b) Zachary karate club network with $\alpha = 20/34$, obtained from the node sampling. Empty circles are simulation results from $1000$ sampling processes. 
Solid lines correspond to Eq.~(\ref{new_degree_node_pj}) and dashed lines to Eq.~(\ref{new_degree_node}). Insets show the large-degree part, where the difference between the two formulae is prominent, for each graph.} \label{SmallNet} \end{figure} The sizes of all four networks studied in this paper are larger than $5000$, and we have checked that Eqs.~(\ref{new_degree_node_pj}) and (\ref{new_degree_node}) give practically the same values of $p'(k)$ and are indistinguishable in the graphs. For much smaller networks, on the other hand, Eq.~(\ref{new_degree_node_pj}) actually predicts the degree distribution of sampled networks better than Eq.~(\ref{new_degree_node}). In Fig.~\ref{SmallNet}, we compare the simulation results for two small networks, the nematode {\em C. elegans} neural network~\cite{YYAhn2005} with $297$ nodes and $2359$ links and the Zachary karate club network~\cite{Girvan2002} with $34$ nodes and $77$ links, with those two equations by substituting the original degree distribution $p(k) = n_k / n$, where $n_k$ is the number of nodes with degree $k$. The figure clearly shows that Eq.~(\ref{new_degree_node_pj}) is more accurate. The above equations turn out to apply to link sampling with the same sampling fraction $\alpha$ as well. Here we can use the technique of Ref.~\cite{Newman2005} for solving bond percolation and epidemic models. Suppose a node, which originally had $k_0$ links before sampling, comes to have $k$ links. Because the random link sampling chooses links uniformly, the probability of the node keeping $k$ out of $k_0$ links is $p(k|k_0) = {k_0 \choose k} \alpha^k (1-\alpha)^{k_0 - k}$. Consequently, the probability that a node in the sampled network has degree $k$, summed over all possible original degrees $k_0$, is $p'(k) = \sum_{k_0 = k}^{\infty} p(k_0) p(k|k_0)$, which leads us back to Eq.~(\ref{new_degree_node}).
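Equation~(\ref{new_degree_node}) is a binomial thinning of the original distribution, so in particular the mean degree is rescaled exactly to $\alpha\langle k\rangle$. A minimal numerical sketch of ours, with a toy degree distribution:

```python
from math import comb

def thinned(p, alpha):
    """p'(k) = sum_{k0 >= k} p(k0) C(k0, k) alpha^k (1 - alpha)^(k0 - k), Eq. (5)."""
    kmax = max(p)
    return {k: sum(p.get(k0, 0.0) * comb(k0, k) * alpha ** k * (1 - alpha) ** (k0 - k)
                   for k0 in range(k, kmax + 1))
            for k in range(kmax + 1)}

p = {1: 0.5, 3: 0.3, 6: 0.2}           # toy degree distribution
pp = thinned(p, 0.4)
mean = sum(k * q for k, q in p.items())       # original mean degree, 2.6
mean_pp = sum(k * q for k, q in pp.items())   # thinned mean degree
print(round(mean_pp, 10), round(0.4 * mean, 10))  # both 1.04
```

The identity $\langle k\rangle' = \alpha\langle k\rangle$ follows directly from the mean of the binomial factor in Eq.~(\ref{new_degree_node}).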
The fact that those two sorts of sampling are described by the same equation is also supported by Fig.~\ref{MLE_deg}, which shows similar degree exponent changes for both node and link sampling. As Stumpf {\em et al.} point out in Refs.~\cite{Stumpf2005,Stumpf2005c}, Eq.~(\ref{new_degree_node}) for a power-law degree distribution $p(k) \sim k^{-\gamma}$ yields deviation of $p'(k)$ from the original power-law form for quite small sampling fractions $\alpha$. For moderate values of $\alpha$, however, the deviation is not significant, and we observe that the slope of $p'(k)$ in the log-log plot actually becomes steeper according to Eq.~(\ref{new_degree_node}), consistent with our numerical observation for node and link sampling as shown in Fig.~\ref{MLE_deg}. To extract the degree exponents from Eq.~(\ref{new_degree_node}), first we calculate the degree distribution of the original networks by $p(k) = n_k / n$. Substituting that $p(k)$ into Eq.~(\ref{new_degree_node}), we obtain the degree distribution $p'(k)$ of the sampled networks corresponding to a given sampling fraction $\alpha$. The degree exponents from those $p'(k)$ in Fig.~\ref{MLE_deg} show good agreement with the values from numerical simulation for both node and link sampling. \begin{figure} \includegraphics[width=0.4\textwidth]{kk_snowball2.eps} \caption{Change of nodes' degrees in the BA network under snowball sampling. The sampling fraction is $10000/30000$.} \label{kk_snowball} \end{figure} In contrast, the degree exponent decreases for snowball sampling as we decrease the sampling fraction. By the definition of snowball sampling, hubs are more likely to be selected by this method. Furthermore, once a hub is picked, every node connected to the hub is selected in the next step unless it belongs to the previous layer.
This characteristic of snowball sampling tends to {\em conserve} the degrees of easily selected hubs, which leads to the decrease of degree exponents by holding the ``tail'' of the power-law degree distribution. Figure~\ref{kk_snowball} shows the degrees in a sampled network obtained by snowball sampling, and the nodes with large degrees on the $y = x$ line clearly indicate a tendency to choose hubs and conserve their degrees. Therefore, snowball sampling underestimates the degree exponent. In Ref.~\cite{Clauset2004c}, it is shown that traceroute sampling can underestimate the degree exponent of a scale-free network by undersampling the low-degree nodes relative to the high-degree ones. In spite of the difference between the snowball and traceroute sampling, both of these methods overrepresent hubs and have the same ``crawling'' character used to identify the nodes. We infer that the decrease of degree exponents for both sampling methods is caused by these similar features. \begin{figure} \includegraphics[width=0.45\textwidth]{apl_multi.eps} \caption{Changes of APL for each network's sampled networks according to the ratio $\xi$ of the size of giant component in the sampled networks to that of the original ones, averaged over ten independent realizations. Empty squares ($\square$) stand for node sampling, filled squares ($\blacksquare$) for link sampling, and empty triangles ($\vartriangle$) for snowball sampling. The horizontal dashed lines are the values for the original APL of each network, and the other lines are guides to the eyes.} \label{APL} \end{figure} We also check two closely related quantities, namely the average degree and the average path length (APL) in the sampled networks. APL is the average of the shortest path lengths between all pairs of nodes in a network, often used as a measure of network efficiency. In Fig.~\ref{APL}, we present the APL of the {\em giant component} in the sampled networks obtained by the numerical simulation.
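For an unweighted graph, APL can be computed by breadth-first search from every node; a minimal brute-force sketch of ours (averaging over all connected ordered pairs):

```python
from collections import deque

def average_path_length(adj):
    """Average shortest-path length over all connected ordered pairs of nodes."""
    total = pairs = 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:                     # standard BFS from source s
            v = queue.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
        total += sum(dist.values())
        pairs += len(dist) - 1           # pairs reachable from s, excluding s itself
    return total / pairs

chain4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(average_path_length(chain4))  # 20/12 = 1.666...
```

Restricting the loop to the nodes of the giant component reproduces the quantity plotted in Fig.~\ref{APL}.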
For snowball sampling, APL decreases according to the decreased system size of sampled networks. On the other hand, for node and link sampling, the APL of a sampled network is larger than that of the original network for not-so-small sampling fractions, even though the size of the sampled network itself is smaller than the original one. As presented earlier, for node sampling, the number of links is proportional to the square of the number of nodes, which leads to $\langle k \rangle = 2l/n \propto n$, where $l$ and $n$ are the numbers of links and nodes in a sampled network, respectively. This suggests that the average degree in a sampled network decreases as the sampling fraction becomes smaller. Obviously, for a given network, APL decreases as the average degree increases. The decrease in the average degree, therefore, seems to have a stronger effect on APL than the reduced system size in this case. Similar behavior of the average degree and APL is observed for link sampling, but in this case it seems that the ``treelike'' structure of sampled networks, related to the clustering coefficient discussed later, is responsible for that behavior. \subsection{Betweenness centrality distribution} The betweenness centrality (BC or load) of node $b$, which measures the centrality of a node by the traffic flow in a network, is defined as \begin{equation} g_b = \sum_{i \neq j} \frac{C_b (i,j)}{C (i,j)}, \end{equation} where $C (i,j)$ is the number of all the shortest pathways between a pair of nodes $(i,j)$ and $C_b (i,j)$ is that of the shortest pathways running through a node $b$~\cite{KIGoh2002}. It is known that the BC distribution follows a power law $p(g) \sim g^{-\eta}$ for scale-free networks~\cite{KIGoh2001,KIGoh2002}. Similar to the degree distribution, the BC distribution of sampled networks also follows a power law well, as does that of the original networks.
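The definition of $g_b$ can be computed by brute force on small graphs. A sketch of ours, counting shortest paths with BFS and summing over ordered pairs $i \neq j$ with $b \notin \{i,j\}$ (it assumes a connected graph):

```python
from collections import deque

def betweenness(adj):
    """g_b = sum_{i != j} C_b(i, j) / C(i, j) over ordered pairs, where C
    counts shortest paths and C_b those passing through b (b not an endpoint)."""
    dist, sigma = {}, {}
    for s in adj:                        # BFS shortest-path counts from each source
        d, c = {s: 0}, {s: 1}
        queue = deque([s])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in d:
                    d[w], c[w] = d[v] + 1, 0
                    queue.append(w)
                if d[w] == d[v] + 1:     # w lies one step beyond v
                    c[w] += c[v]
        dist[s], sigma[s] = d, c
    g = {b: 0.0 for b in adj}
    for i in adj:
        for j in adj:
            if i == j:
                continue
            for b in adj:
                # b is on a shortest i-j path iff the distances add up
                if b not in (i, j) and dist[i][b] + dist[b][j] == dist[i][j]:
                    g[b] += sigma[i][b] * sigma[b][j] / sigma[i][j]
    return g

chain5 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(betweenness(chain5))  # -> {0: 0.0, 1: 6.0, 2: 8.0, 3: 6.0, 4: 0.0}
```

For large networks, Brandes' algorithm is the standard efficient alternative to this $O(n^3)$ pair enumeration.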
Figure~\ref{MLE_bc_BA} shows the change of the BC exponent, also obtained by Eq.~(\ref{eq:MLE}), for each network and each sampling method. Similar to the degree exponent case, in general, BC exponents increase for node and link sampling and decrease for snowball sampling as the sampling fraction gets lower. Figure~\ref{MLE_bc_BA} bears a resemblance to Fig.~\ref{MLE_deg} except for the case of arxiv.org, for which the BC exponent seems to be conserved for all the sampling methods. The correlation between degree and BC of nodes~\cite{Barthelemy2003b}, shown in Fig.~\ref{kbc_node}, could explain the same direction of changes of degree and BC exponents. For assortative networks such as arxiv.org here, however, it is known that the degree-BC correlation is not clear~\cite{KIGoh2003a}, which explains the different behavior in Fig.~\ref{MLE_bc_BA}(d). Therefore, at least empirically, we expect overestimation of a BC exponent by node and link sampling and underestimation by snowball sampling. \begin{figure} \includegraphics[width=0.45\textwidth]{MLE_bc_multi.eps} \caption{Changes of BC exponent $\eta$ for each network's sampled networks according to the sampling fraction $\alpha$, averaged over ten realizations. Empty squares ($\square$) stand for node sampling, filled squares ($\blacksquare$) for link sampling, and empty triangles ($\vartriangle$) for snowball sampling. The horizontal dashed lines are the values for the original exponent of each network, and the other lines are guides to the eyes.} \label{MLE_bc_BA} \end{figure} \begin{figure} \includegraphics[width=0.4\textwidth]{kbc_node.eps} \caption{Degree and BC of nodes in a sampled network of BA network by node sampling. The sampling fraction is $10000/30000$. 
The value of BC is rescaled by the number of nodes.} \label{kbc_node} \end{figure} \subsection{Assortativity} The assortativity $r$, which measures the correlation between the degrees of nodes linked to each other, is defined as the Pearson correlation coefficient of degrees between pairs of nodes~\cite{Newman2002a}. Positive values of $r$ stand for positive degree-degree correlation, which means that nodes with large degrees tend to be connected to one another. Most social networks have this positive degree correlation ({\em assortative} mixing), including the arxiv.org network considered in this paper. On the other hand, most biological and technological networks show negative degree correlation $r < 0$ ({\em disassortative} mixing), including the PIN and Internet AS networks here. If there is no degree correlation among nodes (neutral), as in the case of the BA model, the value of $r$ is in the vicinity of $0$. \begin{figure} \includegraphics[width=0.45\textwidth]{as_multi.eps} \caption{Changes of assortativity $r$ for each network's sampled networks according to the sampling fraction $\alpha$, averaged over ten realizations. Empty squares ($\square$) stand for node sampling, filled squares ($\blacksquare$) for link sampling, and empty triangles ($\vartriangle$) for snowball sampling. The horizontal dashed lines are the values for the original assortativity of each network, and the other lines are guides to the eyes.} \label{as_snowball} \end{figure} \begin{figure} \includegraphics[width=0.45\textwidth]{Sampling_figS.eps} \centering \caption{Changes of assortativity $r$ under the link sampling for our four datasets, and comparison with Eq.~(\ref{link_assort_change}).} \label{Sampling_figS} \end{figure} The change of assortativity for each network and each method is shown in Fig.~\ref{as_snowball}. For node and link sampling, no noticeable changes of assortativity in the sampled networks are observed.
Random choice of nodes or links appears to conserve assortativity well for these two methods. Sampled networks from snowball sampling, however, are shown to be more disassortative than the original networks. This pattern is common no matter whether the original network is assortative (arxiv.org), disassortative (PIN and Internet AS), or neutral (BA). In Ref.~\cite{JDNoh2007a}, a formula for the change of assortativity under the link sampling process is presented as follows, \begin{equation} \label{link_assort_change} r' = \frac{r}{\displaystyle{1 + \frac{1-\alpha}{\alpha} \left[ \frac{\langle k^2 \rangle / \langle k \rangle - 1} {\langle k^3 \rangle / \langle k \rangle - ( \langle k^2 \rangle / \langle k \rangle )^2 } \right]}}, \end{equation} where $\langle k^n \rangle$ is the $n$th moment of the degree of the original network. Our data agree very well with Eq.~(\ref{link_assort_change}), as shown in Fig.~\ref{Sampling_figS}. In our datasets, where the degree exponent $\gamma < 4$, $\langle k^3 \rangle$ dominates in Eq.~(\ref{link_assort_change}) and $r' \simeq r$ in most cases, which is consistent with our numerical data for the link sampling. There is another way to check the degree correlation, which is to measure the quantity $\langle k_{nn} (k) \rangle = \sum_{k'} k' p(k'|k)$, i.e., the average degree of the nearest neighbors of nodes with degree $k$~\cite{Satorras2001b}. Assortative mixing is represented by a positive slope of the $\langle k_{nn} (k) \rangle$ graph, while a horizontal graph indicates neutral and a negative slope disassortative mixing. Figure~\ref{knn} shows the changes of these slopes for the $\langle k_{nn} (k) \rangle$ graphs of the sampled networks from two kinds of original networks by snowball sampling. The slope decreases, i.e., moves toward negative values as the sampling fraction gets lower for both the disassortative Internet AS and the assortative arxiv.org.
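Both the measured $r$ and the prediction of Eq.~(\ref{link_assort_change}) are easy to sketch. The following is our own toy illustration on a four-node chain, with the degree moments computed from the original degree sequence:

```python
def assortativity(adj):
    """Pearson correlation r of the degrees at the two ends of each link."""
    deg = {v: len(ns) for v, ns in adj.items()}
    ends = [(deg[u], deg[v]) for u in adj for v in adj[u]]  # both edge directions
    n = len(ends)
    mx = sum(x for x, _ in ends) / n               # mean_x = mean_y by symmetry
    cov = sum(x * y for x, y in ends) / n - mx * mx
    var = sum(x * x for x, _ in ends) / n - mx * mx
    return cov / var

def link_sampled_r(r, degrees, alpha):
    """Predicted assortativity after link sampling with fraction alpha, Eq. (10)."""
    n = len(degrees)
    m1 = sum(degrees) / n
    m2 = sum(k ** 2 for k in degrees) / n
    m3 = sum(k ** 3 for k in degrees) / n
    bracket = (m2 / m1 - 1.0) / (m3 / m1 - (m2 / m1) ** 2)
    return r / (1.0 + (1.0 - alpha) / alpha * bracket)

chain4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
r = assortativity(chain4)
print(r, link_sampled_r(r, [1, 2, 2, 1], 0.5))  # close to -0.5 and -0.125
```

For this chain $r = -1/2$ exactly, and Eq.~(\ref{link_assort_change}) with $\alpha = 1/2$ predicts $r' = -1/8$; a heavy-tailed degree sequence would instead make the bracket small and $r' \simeq r$, as noted above.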
\begin{figure} \includegraphics[width=0.4\textwidth]{knn_multi.eps} \caption{$\langle k_{nn} (k) \rangle$ distribution for sampled networks of (a) PIN, (b) arxiv.org by snowball sampling.} \label{knn} \end{figure} We suggest that the more disassortative nature of sampled networks compared with the original ones is due to the last layer of the snowball sampling method. In contrast to the conserved structure of the inner layers, a considerable number of links are lost for the nodes in the last layer. Meanwhile, hubs are likely to be selected by snowball sampling. This separation into a ``core'' and a ``periphery'' is seen in Fig.~\ref{kk_snowball}, and the connections between hubs and nodes of the last layer can reduce the value of assortativity. The simulation shows that a sampled network containing the entire last layer is more disassortative than one where only parts of the last layer are kept, which supports the hypothesis that the effect of the last layer induces disassortative mixing. Therefore, we have to be careful when measuring the assortativity of networks obtained by snowball sampling. \subsection{Clustering coefficient} The clustering coefficient $C_i$ of node $i$ is the ratio of the number $y$ of links connecting its nearest neighbors to the number of all possible links between these nearest neighbors~\cite{Dorogovtsev2001a}, \begin{equation} C_i = \frac{2 y}{k_i (k_i - 1)} , \end{equation} where $k_i$ is the degree of node $i$. The clustering coefficient of a network is the average of this value over all the nodes, $C = \sum_i C_i / n$, where $n$ is the number of nodes. Most real networks have a much larger clustering coefficient than model networks such as the ER or BA network, due to, e.g., community or modular structure. In Fig.~\ref{cc2_link}, we show the change of the clustering coefficient for each original network and each sampling method.
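The definition above, and the effect of randomly removing links, can be illustrated on a toy graph (a minimal sketch; the helper names and the $K_4$ example are ours):

```python
def avg_clustering(adj):
    # C = (1/n) * sum_i 2*y_i / (k_i*(k_i - 1)), with C_i = 0 whenever k_i < 2;
    # adj maps each node to the set of its neighbors
    total = 0.0
    for i, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        y = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
        total += 2.0 * y / (k * (k - 1))
    return total / len(adj)

def remove_link(adj, u, v):
    # omit a single link, as random link removal would do
    adj = {i: set(nbrs) for i, nbrs in adj.items()}
    adj[u].discard(v)
    adj[v].discard(u)
    return adj

# complete graph K4: every triangle is closed, so C = 1
k4 = {i: {j for j in range(4) if j != i} for i in range(4)}
```

Removing a single link from $K_4$ already lowers $C$ from $1$ to $5/6$: the two endpoints of the removed link keep $C_i=1$, while the other two nodes drop to $C_i=2/3$.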
For node and snowball sampling, there is little change of the clustering coefficient, with the direction of the change depending on the network. On the other hand, link sampling prominently reduces the clustering coefficient. This effect is expected, since the random omission of links, i.e., link sampling viewed as a removal process, ``opens up triangles fast,'' as stated in Ref.~\cite{Kossinets2004}. Link sampling, therefore, underestimates the clustering coefficient of a network. \begin{figure} \includegraphics[width=0.45\textwidth]{cc_multi.eps} \caption{Changes of clustering coefficient $C$ for each network's sampled networks according to the sampling fraction $\alpha$, averaged over ten realizations. Empty squares ($\square$) stand for node sampling, filled squares ($\blacksquare$) for link sampling, and empty triangles ($\vartriangle$) for snowball sampling. The horizontal dashed lines are the values for the original clustering coefficient of each network, and the other lines are guides to the eyes.} \label{cc2_link} \end{figure} \section{discussion and conclusions} \begin{table} \begin{tabular}{ccccc} \hline \hline & Degree & BC & & Clustering \\ & Exponent & Exponent & Assortativity & Coefficient \\ & $\gamma$ & $\eta$ & $r$ & $C$ \\ \hline Node $\Downarrow$ & $\Uparrow$ & $\Uparrow$ & $=$ & $\Updownarrow$ \\ \hline Link $\Downarrow$ & $\Uparrow$ & $\Uparrow$ & $=$ & $\Downarrow$ \\ \hline Snowball $\Downarrow$ & $\Downarrow$ & $\Downarrow$ & $\Downarrow$ & $\Updownarrow$ \\ \hline \hline \end{tabular} \caption{The changes of quantities in networks under each sampling method as the sampling fraction gets lower (indicated by the $\Downarrow$ next to each sampling method): $\Uparrow$ stands for increase, $\Downarrow$ for decrease, $=$ for no change, and $\Updownarrow$ for a change that depends on the network.} \label{summary_table} \end{table} In this paper, we have studied the changes of well-known quantities in complex networks for randomly sampled networks.
Three kinds of sampling methods are applied, and three representative real-world networks, along with the BA model, are used as the original networks for numerical investigation. We have measured four typical quantities in the sampled networks, which reveals characteristic patterns in how the quantities change under each sampling method. Based on the properties of the sampling methods, we have provided possible explanations for these changes, along with mathematical analysis. We have also analyzed networks other than scale-free ones, such as the Erd\H{o}s-R{\'e}nyi random network~\cite{Erdos1959} and the growing network without preferential attachment~\cite{Barabasi1999}, and the results show that the form of the degree distribution is conserved under node and link sampling in those cases, consistent with the previous work~\cite{Stumpf2005c}. Table~\ref{summary_table} summarizes the results. To check the generality of the results, we also investigated the randomized version of each network in a similar fashion. The randomized networks were constructed by shuffling the links while conserving only the degree distribution~\cite{Newman2002a}. We found the same results as with the original networks. The results in Table~\ref{summary_table}, therefore, seem to hold for scale-free networks in general and provide criteria for choosing a sampling method when a specific quantity is to be estimated from the sample. From another viewpoint, the bias of some quantities can be predicted if the sampling method used to obtain a network is known. If we are interested in the assortativity of a network, for example, node or link sampling can give fairly accurate values. For the clustering coefficient, on the other hand, the link sampling method should be avoided. Sampling effects should be taken into account in real network research, but not much work has been done on them so far.
Exploring other characteristics of complex networks, using other sampling methods, developing rigorous analytic approaches, and establishing solid principles through more systematic investigation could all be important future research topics. We hope this work contributes to this direction of research. \begin{acknowledgments} We would like to thank Kwang-Il Goh for providing useful information, and Yong-Yeol Ahn for help with the link sampling formula. S.H.L. is grateful to the Kim Bojeong Basic Science Foundation and KAIST for generous help. This work was supported by KOSEF through Grant No. R14-2002-059-01002-0 (P-J.K.) and by R01-2005-000-1112-0 (H.J.). \end{acknowledgments}
\section{Introduction} Modern microscopy techniques allow researchers to observe phenomena on a sub--cellular, cellular and supra--cellular level. The observation of cells at different scales gives insight into key biological questions in modern science, fostering increasingly systematic approaches to understanding the essence of life~\cite{abbott2003cell}. Contemporary microscopy offers a wide spectrum of techniques with distinct advantages and disadvantages. In particular, fluorescence microscopy allows biologists to observe live specimens and dynamic processes within a tissue or specimen. This technique is based on the addition of fluorescent molecules called fluorophores, which attach to target proteins or cellular structures on a sub--cellular or cellular level, such as DNA, membranes, the cytoskeleton, or the extracellular matrix~\cite{lakowicz2013principles}. Fluorophores are excited by photons, usually from a laser beam, and the fluorescent emission is captured by a photonic detector or camera. Fluorescence microscopes vary in the excitation procedure, observation and volumetric resolution. In recent decades, fluorescence microscopy has become the standard tool for in vivo and in toto (whole sample) imaging; however, photo--toxicity, photo--bleaching, out--of--focus contribution and acquisition speed limit its application. In particular, Light Sheet Fluorescence Microscopy (LSFM) is a technique which uses a thin light sheet (plane) to excite the fluorophores in the focal plane of the detection objective~\cite{olarte2018light}. This technique has several advantages over regular confocal fluorescence microscopes. Thanks to the perpendicular excitation through the thin plane, optical sectioning is achieved. This reduces the out--of--focus contribution, since the light sheet only excites fluorophores present in the observed focal plane.
The photo--toxicity and photo--bleaching are also reduced (the energy load decreases from $10^3$ E to E~\cite{keller2008quantitative,ritter2011cylindrical}), allowing acquisition of specimens in vivo over long periods of time. Moreover, the reduced out--of--focus contribution improves the edges and contrast of the images. Additionally, the acquisition speed can reach a few seconds for an entire 3D scan, and large specimens (on the scale of millimeters to centimeters) can be observed \cite{santi2011light}. Thus, LSFM is currently one of the preferred techniques for a wide range of applications, especially for large specimens and long observation times, obtaining reasonable image contrast for cell segmentation and time resolution for cell tracking~\cite{girkin2018light}. Another related LSFM technique is the so--called lattice light--sheet microscopy, where the laser beam consists of a very narrow Bessel--type lattice, intended to capture much smaller spatial scales of nanometers~\cite{chen2014lattice,reynaud2014guide}. In this study, we only consider LSFM with Gaussian--type laser beams. During the image acquisition process, the farther we are from the point of light emission, the greater the loss of image resolution (see e.g. Figure 2 in~\cite{huisken2012slicing} and Figure 3 in~\cite{huisken2007even}). We also see an increasing dominance of blur and shadows as the laser goes through the object~\cite{huisken2012slicing, reynaud2008light}. The standard reconstruction procedure used to overcome these issues consists of merging different images obtained with opposite and complementary excitation directions~\cite{huisken2007even, huisken2012slicing, reynaud2008light} (left and right), as in the three images in Figure~\ref{fig:measurements}.
This process is feasible in practice since the microscope structure is set up in such a way that the laser beam can illuminate the object from opposite sides, preventing interference between the lasers. A critical problem with this merging process is the presence of artifacts in the middle plane of the final images. On the other hand, there exist calibration problems in the experimental setting for the acquisition process, such as errors in the position and orientation of the lasers with respect to the cameras, object displacements, opposite laser correspondence, etc. To avoid this merging technique and hence improve the final images, we establish a mathematical model that allows us to understand the laser behaviour and the subsequent fluorescence process. Moreover, we propose to study this imaging technique as an inverse problem, where we seek to reconstruct the distribution $\mu$ of the fluorophore from the set of measurements (images) obtained by the camera. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{constant0_01_measurements.eps} \caption{An example of an LSFM image (density of the fluorescent molecules $\mu$). The first and second images show the scattering effects observed by the camera when left and right excitations are applied. The third one is the ``fused image'', taking the best side of the previous ones (as in~\cite{huisken2012slicing, huisken2007even}).}\label{fig:measurements} \end{figure} In Section~\ref{sec:model}, we first describe an operator $\mathcal{P}$ that relates the measurements to our unknown variable $\mu$, identifying two meaningful stages in an LSFM experiment: excitation and fluorescence. To model the first stage, we use the Fermi--Eyges pencil--beam equation to describe the spatial and angular distributions of the laser beam as it propagates in a near--transparent object. This equation was first presented by Fermi in 1940 and studied later by Rossi and Greisen in \cite[Section~23]{rossi1941cosmic}.
In \cite{borgers1996asymptotic,borgers1996accuracy}, B{\"o}rgers \emph{et al.} present an asymptotic derivation of the Fermi pencil--beam equation from the Fokker--Planck equation and from the linear Boltzmann equation under two different conditions. On the other hand, the fluorescence stage takes place once the fluorescent molecules have been activated by the laser beam. For the second stage we use the Radiative Transport Equation (RTE) (see e.g. \cite{bal2009inverse}) to describe how the photons propagate until reaching the collimated camera. In this way, we completely define the forward operator $\mathcal{P}$ describing the proposed mathematical model. In Section \ref{sec:IP} we summarize the mathematical model obtained and describe the inverse problem that we will study. In Section \ref{sec:uniqueness} we show that the reconstruction of the function $\mu$ in the proposed inverse problem is unique. Injectivity of the operator $\mathcal P$ is presented in Theorem~\ref{th:uniqueness}. We obtain these results by considering the relationship between the solutions of the Fermi pencil--beam and heat equations. By interpreting our measurements in terms of heat propagation, we obtain injectivity of $\mathcal{P}$ by reducing the problem to one of backward uniqueness for a heat equation from a nontrivial space--time curve; the uniqueness for such a problem is presented in Section~\ref{sec:backward_heat}. Finally, in Sections \ref{sec:numerics} and \ref{sec:numerical_results}, we present a discretized version of the forward operator to numerically solve the direct and inverse problems. We propose to find a numerical solution for the LSFM reconstruction problem by solving a linear system. In this context, we use different algorithms that are already available to solve these problems optimally.
Mainly, we refer to~\cite{hansen2010discrete,hansen2018air,gazzola2019ir}, where discrete inverse problems are studied and iterative regularization methods for sparse and large--scale problems are detailed. \section{Mathematical model in LSFM}\label{sec:model} \subsection{Notation and model scheme} Let $\Omega\subset\mathbb{R}^2$ be an open set with smooth boundary, which represents the object studied under the microscope. We assume that $\Omega$ is contained in the rectangle $[0,s_1]\times [-y_1,y_1]$, for some $s_1>0,\ y_1 > 0$, both large enough. For each $h\in[-y_1,y_1]$ we define $x_h:=\inf\{x:(x,h)\in\Omega\}$ (see Figure~\ref{fig:Experiment_2D} for the corresponding terms). \begin{figure}[H] \centering \includegraphics[scale=0.8]{Experiment_2D} \caption{\footnotesize Geometric representation of the excitation and emission beams. An incident laser at point $(0,h)$ illuminates the object from the left and propagates inside the object according to the Fermi pencil--beam equation, exciting the fluorescent molecules within the sample. Then, the excited fluorescent molecules emit photons in all directions. For collimated cameras, only photons emitted in straight vertical directions are detected, at different positions $s$.\label{fig:Experiment_2D}} \end{figure} The modelling of the LSFM experiment has two main stages, excitation and fluorescence, which are divided into the following components (see Figure~\ref{fig:Experiment_2D}): \begin{enumerate} \item\label{item:i} The excitation beam is emitted at the point $ (0,h)$ in the direction $\nu =(1,0)$. We call $h\in[-y_1, y_1]$ the \emph{height of incidence}. \item\label{item:ii} The laser follows a free transport equation, without attenuation or scattering, until entering the domain $\Omega$ at the point $(x_h,h)$. \item\label{item:iii} Once the laser enters the object, its propagation is described by the Fermi pencil--beam equation (equation \eqref{eq:FermiEq}).
We denote by $u := u_h(x, y, \boldsymbol{\omega})$ the intensity of photons at position $(x,y)\in[0,s_1]\times[-y_1,y_1]$ traveling in the direction $ \boldsymbol{\omega}=(\cos(\omega),\sin(\omega))$ for $\omega\in\mathbb{R}/2\pi\mathbb{Z}$. Therefore, the total intensity of excitation photons at $(x,y)$, arising from an incident excitation at $(0,h)$, is $v_h(x,y)=\int u_h(x,y, \boldsymbol{\omega}) d \boldsymbol{\omega}$. \item\label{item:iv} The excitation beam reaching $(x,y)$ excites the fluorescent molecules at that point, and the amount of excited fluorophores is proportional to the density of fluorescent molecules and to the excitation intensity. Namely, if $\mu(x,y)$ is the density of fluorescent molecules at $(x,y)$, then the excited fluorophores are given by $w_h(x,y) = c~ v_h(x,y)\mu(x,y)$, where $c$ is the activation constant. \item\label{item:v} The excited fluorescent molecules $w_h$ emit photons in all directions, which propagate according to a linear transport equation (equation \eqref{eq:RTE}). The camera is vertically collimated, hence it only measures those photons traveling in the direction $(0,1)$. We will denote by $p_h(s)$ the fluorescent measurement at pixel $s\in[0,s_1]$ arising from an excitation at $(0,h)$. \end{enumerate} The previous description of LSFM involves some simplifications and does not include all the physical phenomena present in LSFM. The proposed model is a step toward understanding and tackling difficulties observed in LSFM, such as blurring effects, and an attempt to improve this imaging technique by analyzing the simplified, related inverse problem.
LSFM can be considered as a particular illumination-detection geometrical setting of Fluorescence Molecular Tomography (FMT) (a review of Fluorescence Molecular Imaging and Fluorescence Molecular Tomography can be found in \cite{ntziachristos2006fluorescence} and \cite{stuker2011fluorescence}), but for a less diffusive medium than the one usually considered in FMT. This less diffusive medium implies a number of differences between our approach and the usual descriptions used in FMT; namely, in FMT the photon propagation is usually described by a diffusion equation without directionality of photons (see e.g. equation (1) in \cite{lam2005time}, and equations (1) and (2) in~\cite{stuker2011fluorescence}), which translates into a very different mathematical equation for the illumination model. Furthermore, the detection model generally employed in FMT does not allow for directional collimation, and also requires measurements from multiple angles (see e.g. \cite{ntziachristos2006fluorescence} and \cite{stuker2011fluorescence}). In the next subsection we present more details about stages \eqref{item:iii} and \eqref{item:v}, which we have briefly introduced above. \subsection{Excitation: the Fermi pencil--beam equation} In this part we look into the details of stage \eqref{item:iii} above, \emph{i.e.} the propagation of the excitation laser inside the object, described by the Fermi pencil--beam equation.
To describe the transport of photons in a highly scattering, strongly forward--peaked regime, a possible model is the following Fokker--Planck equation (see \cite{bal2009inverse}), \begin{equation}\label{eq:FokkerPlanckn} \boldsymbol{\omega}\cdot \nabla u(\boldsymbol{x}, \boldsymbol{\omega})+\lambda(\boldsymbol{x},\boldsymbol{\omega})u(\boldsymbol{x},\boldsymbol{\omega})=\psi(\boldsymbol{x})\Delta_{\boldsymbol{\omega}}u(\boldsymbol{x},\boldsymbol{\omega}) \end{equation} where $\boldsymbol{x}=(x,y)\in\mathbb R^2$ and $\boldsymbol{\omega}\in S^1$ is the direction of propagation, with $\boldsymbol{\omega}=(\cos(\omega),\sin(\omega))$ for $\omega\in \mathbb{R}/2\pi\mathbb{Z}$. The quantity $u(\boldsymbol{x},\boldsymbol{\omega})$ corresponds to the intensity of photons at the point $\boldsymbol{x}$ moving in the direction $\boldsymbol{\omega}$. The coefficient $\lambda := \lambda_h(\boldsymbol{x}, \boldsymbol{\omega})$ represents the fraction of photons absorbed at the point $\boldsymbol{x}$ while moving in direction $\boldsymbol{\omega}$. The operator $\Delta_{\boldsymbol{\omega}} $ is the Laplace--Beltrami operator on $S^1$ and $\psi(\boldsymbol{x})$ is the diffusion coefficient related to the scattering of the medium. In isotropic media (when $\lambda(\boldsymbol{x}, \boldsymbol{\omega})=\lambda(\boldsymbol{x})$), and since we are in $\mathbb{R}^2$ (letting $\boldsymbol{\omega}=(\cos(\omega),\sin(\omega))$), we can rewrite the Fokker--Planck equation \eqref{eq:FokkerPlanckn} as \begin{equation} \label{eq:FokkerPlanck2} Lu(\boldsymbol{x},\omega)=(\cos(\omega)\partial_x+\sin(\omega)\partial_y +\lambda(\boldsymbol{x})-\psi(\boldsymbol{x}) \partial_\omega^2)u(\boldsymbol{x},\omega)=0. \end{equation} If the diffusion coefficient $\psi(\boldsymbol{x})$ is small enough and the source is spatially and directionally concentrated, the photons will concentrate along a line and direction determined by the source.
Namely, in \cite{borgers1996accuracy} it was shown that, under adequate smallness and ellipticity assumptions on the diffusion coefficient, the Fokker--Planck equation \begin{align*} L u(x,y,\omega)&=\left(\cos(\omega)\partial_x+\sin(\omega)\partial_y +\lambda(\boldsymbol{x})- \psi(\boldsymbol{x})\partial_\omega^2\right)u(x,y,\omega) =0,\\ u(x_h ,y,\omega)&=\delta_h(y)\delta_0(\omega),\qquad x\in (x_h,\infty),\ y \in\mathbb{R},\ \omega\in \mathbb{R}/2\pi\mathbb{Z}, \end{align*} admits a \emph{paraxial approximation} with $\omega\sim 0$, given by the Fermi pencil--beam equation \begin{align} \label{eq:FermiEq} &{}L_\textnormal{approx} u(x,y,\omega)=\left(\partial_x+\omega\partial_y +\lambda(x,h)- \psi(x,h)\partial_\omega^2\right)u(x,y,\omega)=0,\\ \nonumber &{}u(x_h,y,\omega)=\delta_h(y)\delta_0(\omega) ,\qquad x\in (x_h,\infty),\ y \in\mathbb{R},\ \omega\in \mathbb{R}. \end{align} Here we have used the following approximations, valid since $\omega$ is concentrated around zero: \[ \cos(\omega)\approx 1,\quad \sin(\omega)\approx \omega, \] and \[ |\omega|\ll 1,\ \omega \in \mathbb{R}/2\pi\mathbb{Z}\ \Longleftrightarrow\ |\omega|\ll 1,\ \omega\in \mathbb{R}. \] The Fermi equation has been derived from the Fokker--Planck equation in~\cite{borgers1996asymptotic} by means of stereographic--type coordinates on the unit circle and by dropping higher order terms coming from asymptotic expansions with respect to the diffusion magnitude. Let $\lambda_h(x)=\lambda(x,h)$ and $\psi_h(x)=\psi(x,h)$. Equation \eqref{eq:FermiEq} can be explicitly solved (see e.g.
\cite{eyges1948multiple}) and the solution for $x\in (x_h,\infty),\ y \in\mathbb{R},\ \omega\in \mathbb{R}$ is given by \begin{equation}\label{eq:paraxial} u_h(x,y,\omega) =\exp\left(-\int_{x_h}^x\lambda_h(\tau)d\tau\right)f_Z(\boldsymbol{z}), \end{equation} where $ \boldsymbol{z} =((y-h)-\omega (x-x_h),\omega)^\top$, and where \[ f_Z(\boldsymbol{z})=\frac{1}{2\pi \sqrt{\det \Sigma(x,h)}}\cdot\exp\left[-\frac{1}{2}\boldsymbol{z}^\top \Sigma^{-1}(x,h)\boldsymbol{z}\right], \] with \[ \Sigma(x,h):=\begin{pmatrix} E_2&-E_1\\-E_1&E_0 \end{pmatrix}(x,h), \qquad \Sigma^{-1}(x,h)=\frac 1{\det \Sigma}\begin{pmatrix} E_0&E_1\\E_1&E_2 \end{pmatrix}(x,h), \] and \begin{align}\label{eq:Ek} E_k(x,h)=\int_{x_h}^x (\tau-x_h)^k\psi_h(\tau)d\tau,\quad k=0,1,2. \end{align} By letting $\Lambda = \begin{pmatrix} 1 & (x-x_h) \\ 0 & 1 \\ \end{pmatrix} $ (hence $\det(\Lambda)=1$ and $\Lambda^{-1} = \begin{pmatrix} 1 & -(x-x_h) \\ 0 & 1 \\ \end{pmatrix} $) we obtain \[ \boldsymbol{z}=\binom{(y-h)-\omega (x-x_h)}{\omega}= \Lambda^{-1} \binom{y-h}{\omega}, \] and \[ f_Z(\boldsymbol{z})=\frac{1}{2\pi \sqrt{\det \Lambda\Sigma(x,h)\Lambda^\top}}\cdot\exp\left[-\frac{1}{2}\binom{y-h}{\omega}^\top \left(\left(\Lambda^{-1}\right)^\top\Sigma^{-1}(x,h)\Lambda^{-1}\right) \binom{y-h}{\omega}\right]. \] Denoting $\alpha^2=(\Lambda \Sigma \Lambda^\top)_{11} = \left(E_2(x,h)-2(x-x_h)E_1(x,h)+(x-x_h)^2E_0(x,h)\right)$ we get (the marginal distribution of a multivariate normal distribution), \[ \int_\mathbb{R} f_Z(\boldsymbol{z}) dw =\frac{1}{\alpha \sqrt{2\pi}}\exp\left(-\frac{(y-h)^2}{2\alpha^2}\right).
\] From the solution \eqref{eq:paraxial}, the previous calculation gives us the total excitation intensity at a point $(x,y)\in(x_h,\infty)\times(-y_1,y_1)$ arising from an incident excitation at $(0,h)$, namely \begin{align} v_h(x,y) \nonumber &=\int_\mathbb{R} u_h(x,y,w) dw = \exp\left(-\int_{x_h}^x\lambda_h(\tau)d\tau\right)\int_\mathbb{R} f_Z(\boldsymbol{z})dw \\ \label{eq:paraxialxy} &= \frac{1}{\alpha_h(x) \sqrt{2\pi}}\exp\left(-\int_{x_h}^x\lambda_h(\tau)d\tau\right) \exp\left(-\frac{(y-h)^2}{2\alpha_h^2(x)}\right), \end{align} where \begin{align} \nonumber \alpha^2_h(x)&=\left(E_2(x,h)-2(x-x_h)E_1(x,h)+(x-x_h)^2E_0(x,h)\right)\\ \nonumber&=\int_{x_h}^x \psi_h(\tau)[(\tau-x_h)^2-2(x-x_h)(\tau-x_h)+(x-x_h)^2]d\tau\\ \label{eq:alpha_h} &=\int_{x_h}^x (x-\tau)^2\psi_h(\tau)d\tau. \end{align} We can notice that, for a fixed $x$, $v_x(y) = v_h(x, y)$ in~\eref{eq:paraxialxy} is the density function of a univariate normal distribution with mean $h$ and variance $\alpha_h^2(x)$, multiplied by an exponential term depending on $\lambda_h$. This is illustrated in Figure~\ref{fig:vh_gauss}. \begin{figure}[ht] \begin{center} \includegraphics[width=0.8\linewidth]{vh_gauss_code} \end{center} \caption{Graphic interpretation of equation~\eref{eq:paraxialxy}. Figure A shows the function $v_h$ when an illumination is made at height $y=h$. For fixed points $x_0$ and $x_1$, the expressions $v_h(x_0,\cdot)$ and $v_h(x_1,\cdot)$ are normal density functions multiplied by a constant that depends on $\lambda_h$. Figures B and C show these normal distributions.
In both cases, the mean is $\hat \mu= h$ with variance $\alpha^2(x_0)$ and $\alpha^2(x_1)$, respectively.} \label{fig:vh_gauss} \end{figure} Given the excitation intensity $v_h(x,y)$ and the density of fluorescent molecules $\mu(x,y)$, the fluorescent source is $w_h(x,y) = c~v_h(x,y) \mu(x,y)$. In the following we provide the details of the model that relates the photon sources to the measurements obtained at the camera, using the linear transport equation. \subsection{Fluorescence: Radiative Transfer Equation} In this detection stage we assume perfect collimation of the camera in the direction $(0,1)$; this means that only photons travelling parallel to the $y$--axis are measured. The collimation at the camera allows us to remove the contribution of scattered photons to the measurements. Let us denote by $p_h(\boldsymbol{x},\btheta)$ the intensity of photons at position $\boldsymbol{x}\in\mathbb{R}^2$ traveling in a direction $\btheta\in S^1$, arising from an incident excitation at $(0,h)$. We will consider that the propagation of photons is governed by a linear transport equation with attenuation $a$ and source $w_{h}$ (see \cite{bal2011combined, bal2007inverse}); namely, we will assume that $p_h$ satisfies \begin{align} \label{eq:RTE}\btheta\cdot \nabla_{\boldsymbol{x}}\ p_h(\boldsymbol{x},\btheta )+a(\boldsymbol{x})p_h(\boldsymbol{x}, \btheta\, )=w_{h}(\boldsymbol{x}),\quad &{}\forall \boldsymbol{x}\in\mathbb{R}^2,\ \btheta\in S^1\\[5pt] \nonumber\lim_{t\to\infty}p_h(\boldsymbol{x}-t\btheta,\btheta\,)=0,\quad &{}\forall \boldsymbol{x}\in\mathbb{R}^2, \ \btheta\in S^1, \end{align} where the boundary condition states that there are no external radiation sources, and $w_h$ is supported inside $\Omega$.
Under mild regularity conditions on $w_{h}$ and $a$, the unique solution of equation~\eqref{eq:RTE} is \[ p_{h}(\boldsymbol{x},\btheta)=\int_{-\infty}^0 w_{h}(\boldsymbol{x}+r\btheta) \exp\left(-\int_{r}^0 a(\boldsymbol{x}+\tau\btheta)d\tau\right)dr, \] hence providing an expression for the intensity of photons detected at position $\boldsymbol{x}$ when collimated in direction $\btheta$. Since the cameras are outside the bounded object supporting the source, it is useful to consider the total number of photons traveling along lines. In order to do so, let us parametrize the lines in the plane as $L(s,\btheta^\perp)=\{\boldsymbol{x}\in \mathbb{R}^2\colon \boldsymbol{x}\cdot\btheta =s\}$, where $s\in\mathbb{R}$ is the distance of the line to the origin, $\btheta\in S^1$ is the direction perpendicular to the line, and $\btheta^\perp$, the rotation of $\btheta$ by $\pi/2$, is the direction of the line. The total intensity of photons along the line $L(s,\btheta^\perp)$ is \begin{align} p_{h}(s,\btheta^\perp) \nonumber&{}=\lim_{\tau\to\infty}p_h(\tau\btheta^\perp+s\btheta,\btheta^\perp)\\ \nonumber&{}=\int_{\mathbb{R}}w_{h}(r\btheta^\perp+s\btheta)\exp\left(-\int_{r}^\infty a(\tau \btheta^\perp +s\btheta)d\tau\right)dr\\ \label{eq:Measurements} &{}=c\int_{\mathbb{R}}\mu(r\btheta^\perp+ s\btheta)v_{h}(r\btheta^\perp+s\btheta)\exp\left(-\int_{r}^\infty a(\tau \btheta^\perp +s\btheta)d\tau\right)dr, \end{align} where the last equality is obtained from the assumption $w_h(x,y) = c~ v_h(x,y)\mu(x,y)$ described in \eref{item:iv}. Figure~\ref{fig:domain2D} shows an example of the integral along one line. Under the standard setup of the microscope, the object does not rotate with respect to the camera, hence for the measurements we will consider only the fixed direction $\btheta^\perp=(0,1)$.
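The attenuated ray integral above can be checked numerically in a simple toy case (a uniform source on a unit interval with constant attenuation; the function and the configuration below are our own illustrative choices, not part of the model):

```python
import math

def ray_intensity(w, a, r_min=0.0, r_max=1.0, n=4000):
    # total intensity along one ray: int w(r) * exp(-int_r^{r_max} a(t) dt) dr,
    # with both integrals evaluated by the trapezoid rule (the attenuation a
    # is assumed to vanish beyond r_max, so int_r^inf a = int_r^{r_max} a)
    dr = (r_max - r_min) / n
    tail = [0.0] * (n + 1)          # tail[i] = int_{r_i}^{r_max} a(t) dt
    for i in range(n - 1, -1, -1):
        r0, r1 = r_min + i * dr, r_min + (i + 1) * dr
        tail[i] = tail[i + 1] + 0.5 * dr * (a(r0) + a(r1))
    vals = [w(r_min + i * dr) * math.exp(-tail[i]) for i in range(n + 1)]
    return dr * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
```

For $w\equiv 1$ and constant attenuation $a$, the exact value is $(1-e^{-a})/a$, which the quadrature reproduces to high accuracy.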
Rewriting \eqref{eq:Measurements}, and including the expression for $v_h$ given by \eqref{eq:paraxialxy}, we can finally write an expression for $p_h(s)=p_h(s,(0,1))$, the intensity of fluorescent photons measured at the camera pixel at position $s\in[0,s_1]$ arising from an incident excitation at height $h$ (see Figure \ref{fig:Experiment_2D}): \begin{align} \nonumber p_h(s) &{}=c\int_{\mathbb{R}}\mu(s,r)v_{h}(s,r)\exp\left(-\int_{r}^\infty a(s,\tau)d\tau\right)dr\\ \label{eq:measure_h}&{}= c\cdot\exp\left(-\int^s_{x_h}\lambda_h(\tau)d\tau\right)\bigintsss_\mathbb{R} \frac{\mu(s,r)e^{-\int^\infty_r a(s,\tau)d\tau}}{\alpha_h(s)\sqrt{2\pi}} \exp\left(-\frac{(r-h)^2}{2\alpha_h^2(s)}\right)dr. \end{align} We can observe that if $a, \lambda$ and $\psi$ are known, then for each fixed $h$, the operator $\mu \mapsto p_{h}(s,\btheta^\perp)$ is a weighted X--ray transform resembling an attenuated X--ray transform with an extra weight. The approach presented here considers observations at multiple heights $h$ for a single angle $\btheta$. Another interesting problem arises if we additionally consider observations for several angles $\btheta\in S^1$, in order to simultaneously recover $\mu$ and the attenuation $a$ (or $\lambda$), as in the related works \cite{hertle1988identification,solmon1995identification,stefanov2014identification,courdurier2015simultaneous}. In the next section, we introduce the measurement operator $\mathcal P$ to study the inverse problem related to the reconstruction of $\mu$ from the expression~\eqref{eq:measure_h}.
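Under strong simplifying assumptions (constant coefficients $\psi$, $\lambda$, $a$, entry point $x_h=0$, activation constant $c=1$; all numerical values below are illustrative choices of ours, not the discretization used later), the forward map \eqref{eq:measure_h} can be evaluated with a few lines of quadrature:

```python
import math

PSI, LAM, A = 0.01, 0.5, 0.0    # constant diffusion, laser and light attenuation

def alpha_sq(x):
    # Eq. (eq:alpha_h) with x_h = 0 and constant psi:
    # alpha_h^2(x) = int_0^x (x - tau)^2 * PSI dtau = PSI * x**3 / 3
    return PSI * x ** 3 / 3.0

def v_h(x, y, h):
    # Eq. (eq:paraxialxy): Gaussian excitation profile, attenuated by exp(-LAM*x)
    a2 = alpha_sq(x)
    return (math.exp(-LAM * x) / math.sqrt(2.0 * math.pi * a2)
            * math.exp(-(y - h) ** 2 / (2.0 * a2)))

def p_h(s, h, mu, y1=5.0, n=2000):
    # Eq. (eq:measure_h) with c = 1: vertical line integral at pixel s by the
    # trapezoid rule; with A constant on [-y1, y1], int_r^inf a = A * (y1 - r)
    dr = 2.0 * y1 / n
    rs = [-y1 + i * dr for i in range(n + 1)]
    vals = [mu(s, r) * v_h(s, r, h) * math.exp(-A * (y1 - r)) for r in rs]
    return dr * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
```

With $\mu\equiv 1$ and $a=0$, the Gaussian in $r$ integrates to one, so $p_h(s)\approx e^{-\lambda s}$, a quick consistency check of the implementation.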
\section{Inverse problem}\label{sec:IP} In this section we summarize all the elements involved in the description of the measurement operator $\mathcal{P}$, discuss the admissible sections of a domain $\Omega$ on which the model $\mathcal P$ is a more adequate description of the phenomena, and pose the inverse problem of reconstructing $\mu$ as the inversion of the measurement operator $\mathcal P$. \subsection{Physical Quantities} In the previous section we considered the following quantities involved in the phenomena: \begin{enumerate} \item $\lambda(x,y)$, describing the attenuation of the incident laser inside the domain. \item $\psi(x,y)$, describing the diffusion of the laser as it propagates inside the domain. \item $\mu(x,y)$, the density of fluorescent molecules at each point $(x,y)$ in the domain. \item $a(x,y)$, describing the attenuation of the fluorescent light inside the domain. \item $c$, the activation constant, describing the proportion of incident light that excites the fluorophores. \end{enumerate} We will assume $\lambda, \mu, a\in C_\textnormal{pw}(\overline{\Omega})$ and $\psi\in C^1(\overline{\Omega})$, where $C_\textnormal{pw}$ and $C^1$ denote the sets of piecewise continuous and continuously differentiable functions, respectively. We assume that these functions vanish outside of $\overline{\Omega}$ and that $\psi>0$ in $\overline{\Omega}$. Under these conditions all the solutions to the equations in Section \ref{sec:model} exist and are unique (piecewise continuous regularity could be replaced by $L^1$ regularity). We recall that we are using the notation $\lambda_h(x):=\lambda(x,h)$ and $\psi_h(x):=\psi(x,h)$. \subsection{Admissible domain} It is important to observe that \eqref{eq:paraxial} is a solution to equation \eqref{eq:FermiEq} only under the hypothesis that $\psi_h>0$.
Therefore the model for the incident excitation is no longer accurate after the laser exits the domain $\Omega$; hence equation \eqref{eq:measure_h}, describing the fluorescent measurement $p_h(s)$ in pixel $s$ arising from an incident excitation at height $h$, is more adequate if the segment $[x_h,s]\times\{h\}$ is contained in $\overline{\Omega}$. We take this aspect into account in the theoretical part of this work, which motivates the following definitions. \begin{definition}\label{def:admisible_section} (See Figure \ref{fig:graph_gamma} for an illustration of the following definitions). Let $\Omega\subset [0,s_1]\times[-y_1,y_1]$ be an open set with smooth boundary. Recall that for $h\in[-y_1,y_1]$ we defined $x_h=\inf\{x:(x,h)\in\Omega\}$. For $s\in[0,s_1]$ define \begin{align*} &Y_s=\{h\in[-y_1,y_1] : x_h\leq s\}\\ &s^-=\inf\{s : Y_s\neq \emptyset\}, \end{align*} and observe that $Y_s\subset Y_r$ for $s < r$. We say that $s\in [s^-,s_1]$ is admissible if $[x_h,s]\times\{h\}\subset \overline{\Omega}$, for all $h\in Y_s$. We define $s^+$ as the supremum over the admissible $s$, we define $\underline{y}(s)=\inf(Y_{s})$ and $\overline{y}(s)=\sup(Y_{s})$ for all $s\in[s^-,s^+]$, and we let $y^-=\underline{y}(s^+), y^+=\overline{y}(s^+)$. We define the admissible section of $\Omega$ as $\Omega_\textnormal{ad}=\{(x,y)\in\Omega : x\leq s^+\}$ and we also define $\gamma:Y_{s^+}\to [0,s^+]$ as $\gamma(h):=x_h$, \emph{i.e.} as the unique smooth function satisfying \begin{align*} \Omega_\textnormal{ad} = \{(x,y) : \gamma(y)\leq x\leq s^+\}. \end{align*} If the set $\Omega$ is additionally convex, then $Y_{s^+}=[y^-,y^+]$, and if the set $\Omega$ is convex and oriented properly then $\Omega_\textnormal{ad}$ covers half of $\Omega$, in the sense that at both boundary points $(s^+,y^-)$ and $(s^+,y^+)$ the boundary is tangent to a horizontal line (see Figure \ref{fig:domain2D}).
\end{definition} \begin{figure}[H] \begin{center} \includegraphics[scale=0.8]{graph} \end{center} \caption{Example of an admissible domain and the corresponding $\gamma$ function for a generic set $\Omega$. Figure A presents the definition of the quantities $s^-$ and $s^+$ and the set $ Y_{s^+}$. Figure B shows the function $\gamma$ and its domain $Y_{s^+}$ in the new coordinates.} \label{fig:graph_gamma} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[scale=0.8]{domain2D} \end{center} \caption{Example of an admissible domain for a convex set $\Omega$. On the left, in Figure A, we present its admissible section $\Omega_{\text{ad}}$ (filled). All variables are defined under this scenario. Figure B, on the right, shows the corresponding function $\gamma$ and its domain $Y_{s^+}$.}\label{fig:domain2D} \end{figure} Following the discussion above, we proceed to the theoretical analysis of the inverse problem considering only the admissible section $\Omega_\textnormal{ad}$ of the domain $\Omega$, even though the proposed model could still be used as an approximate description of the whole phenomenon in the full domain $\Omega$. Once we are able to solve the inverse problem on an admissible section, the solution to the inverse problem in the full domain follows in a similar fashion to the merging method suggested in~\cite{huisken2007even}. For the right orientation of the camera, which depends on the geometry of the sample, it is possible to solve the inverse problem in $\Omega$ by solving two (or possibly more) local problems on admissible regions. This assumes, of course, the possibility of illuminating the domain from different directions, which might be limited by the particular microscope setup.
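On a discretized domain, the quantities $x_h$, $Y_s$ and $s^+$ of Definition~\ref{def:admisible_section} are easy to approximate directly from a binary mask of $\Omega$. The following sketch (the grid, the disk-shaped $\Omega$ and all sizes are illustrative assumptions) recovers $\gamma(h)=x_h$ row by row and finds the largest admissible $s$; for the disk it lands near the horizontal tangency at $x=1$, up to grid resolution.

```python
import numpy as np

# Sketch: recover x_h = gamma(h), Y_s and s^+ from a discretized mask of
# Omega. The grid and the disk-shaped Omega below are illustrative choices.
N = 129
xs = np.linspace(0.0, 2.0, N)
ys = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(xs, ys)               # rows indexed by the height h
inside = (X - 1.0)**2 + Y**2 < 0.8**2    # disk centered at (1, 0)

# x_h: leftmost x with (x, h) in Omega; +inf encodes an empty row
x_h = np.where(inside.any(axis=1), xs[np.argmax(inside, axis=1)], np.inf)

def admissible(s):
    """s is admissible if [x_h, s] x {h} stays inside for every h in Y_s."""
    rows = np.flatnonzero(x_h <= s)
    return all(inside[i, (xs >= x_h[i]) & (xs <= s)].all() for i in rows)

s_plus = max(s for s in xs if admissible(s))
```

Working with the open mask instead of the closure $\overline\Omega$ makes this a pixel-level approximation of the definition, which is all the discrete reconstruction later requires.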
From a numerical point of view, once the laser leaves the domain $\Omega$ we are assuming that no diffusion takes place (since the diffusion coefficient $\psi$ vanishes outside of $\overline{\Omega}$), so integration along lines is only a rough approximation of the real experiment, as in the line shown in Figure~\ref{fig:domain2D}. But if we restrict our analysis to the admissible domain $\Omega_{\textnormal{ad}}$, we guarantee that the integrals along $L(x, \btheta^\perp)$ after excitation at height $y$ with $(x,y)\in \Omega_{\textnormal{ad}}$ match the exact value given by the model and not just an approximation. We illustrate this in Figure~\ref{fig:domain2D}. To complete the framework for the theoretical study we require one more condition on the shape of the domain $\Omega$, prescribed in the following definition. \begin{definition} We say that a domain $\Omega$ is admissible if $\Omega=\Omega_\textnormal{ad}$ and if additionally $\gamma\in C^1(Y_s)$ and $\gamma'(\underline{y}(s))<0, \forall s\in(s^-,s^+)$. \end{definition} \subsection{Measurements and Inverse Problem} For the rest of the paper, we assume that $\Omega=\Omega_\textnormal{ad}$ is an admissible domain, in addition to the aforementioned conditions that $\lambda, \mu, a\in C_\textnormal{pw}(\overline{\Omega}_\textnormal{ad})$, $\psi\in C^1(\overline{\Omega}_\textnormal{ad})$, that these functions vanish at $(x,h)$ if $x<x_h$, and that $\psi>0$ in $\overline{\Omega}_\textnormal{ad}$. In terms of the inverse problem we consider $\lambda, a$ and $\psi$ to be known, while $\mu$ is the unknown quantity. \begin{definition}[measurement operator]\label{def:measurement_operator} We define the measurement operator $\mathcal P$, acting on functions $\mu\in C_\textnormal{pw}(\overline{\Omega}_\textnormal{ad})$, by (see equation \eqref{eq:measure_h}) \begin{align*} \mathcal P [\mu] (s,h) = p_h(s), \quad (s, h)\in \Omega_{\text{ad}}.
\end{align*} \end{definition} The inverse problem therefore consists in recovering $\mu$ from the knowledge of $\mathcal{P}[\mu]$, \emph{i.e.}, we want to study the invertibility of the linear operator $\mathcal P$. In the next section, we present an injectivity result for the operator $\mathcal P$; this guarantees that $\ker \mathcal P =\{ 0 \}$ and consequently, if the data $p_h(s)$ is in the range of $\mathcal P$, it uniquely characterizes the unknown function $\mu$~\cite{bal2019introduction}. In practice, our measurement operator has to be discretized, and the available data contain noise. Hence, the discretized measurement operator is often not injective, but it can be seen as an approximation of $\mathcal P$, which we will prove is injective. We overcome the ill-posedness generated by noisy data in the discretized inverse problem by introducing regularization techniques, as described in Section~\ref{sec:numerical_results}. \section{Injectivity of the measurement operator}\label{sec:uniqueness} For an admissible domain $\Omega_\textnormal{ad}$ and under the hypotheses described in the previous section, we have the following injectivity result for the operator $\mathcal P$. \begin{theorem}\label{th:uniqueness} The measurements $\mathcal{P}[\mu]$ uniquely determine the density of fluorophores $\mu$ in $\Omega_{\text{ad}}$, \emph{i.e.} if $\mathcal{P}[\mu](s, h) = \mathcal{P}[\nu](s, h)$ for all $(s,h)\in \Omega_{\text{ad}}$ then $\mu(x,y)=\nu(x,y)$ for all $(x,y)\in\Omega_{\text{ad}}$. \end{theorem} This result is a direct consequence of a more localized injectivity property of the linear operator $\mathcal P$, described in the following theorem. \begin{theorem} Let $s\in(s^-,s^+)$. If $\mathcal P [\mu] (s,h)=0$ for all $h\in Y_s$ then $\mu(s,y)=0, \forall y\in Y_s$. \end{theorem} \begin{Proof} Let $s\in(s^-,s^+)$ be fixed.
Let us recall that for $h\in Y_s$ the measurements take the form (see equations \eqref{eq:alpha_h} and \eqref{eq:measure_h}) \[ \mathcal{P}[\mu](s,h) = \exp\left(-\int^s_{\gamma(h)}\lambda_h(\tau)d\tau\right)\bigintsss_\mathbb{R} \frac{c\mu(s,r)e^{-\int^\infty_r a(s,\tau)d\tau}}{\sqrt{2\pi \alpha^2_h(s)}} \exp\left(-\frac{(r-h)^2}{2\alpha^2_h(s)}\right)dr, \] where \begin{align*} &\alpha_h^2(s) = \int_{\gamma(h)}^s (s-\tau)^2 \psi(\tau,h)d\tau. \end{align*} We observe that by letting \begin{align*} f(y) &{}:= c\mu(s,y)\exp\left({-\int^\infty_y a(s,\tau)d\tau}\right),\\[8pt] g(h) &{}:= \exp\left(\int^s_{\gamma(h)}\lambda_h(\tau)d\tau\right) \mathcal P[\mu](s,h),\ \text{and}\\[8pt] \sigma(h) &{}:= \alpha_h^2(s)/2, \end{align*} the theorem reduces to showing that $f(y)=0, \forall y\in Y_s$ whenever $g(h)=0, \forall h\in Y_s$, where \begin{equation}\label{eq:data_yb} g(h) = \int_\mathbb{R} \frac{f(r)}{\sqrt{4\pi \sigma(h)}} \exp\left(-\frac{(r-h)^2}{4\sigma(h)}\right)dr. \end{equation} If $U(t,y)$ is the unique solution to the following initial value problem for the heat equation, \begin{equation} \left\{\begin{aligned} & (\partial_t - \partial^2_y)U(t,y) = 0, \quad(t,y)\in(0,+\infty)\times\mathbb{R},\\ &U(0,y) = f(y), \textnormal{ if } y \in Y_s,\\ & U(0,y) = 0, \textnormal{ if } y\notin Y_s, \\ & \lim_{|y|\to\infty} U(t,y) = 0, \forall t>0, \end{aligned} \right. \end{equation} then \[ U(t,y) = \int_\mathbb{R} \frac{f(r)}{\sqrt{4\pi t}}\exp\left(-\frac{(r-y)^2}{4t}\right)dr, \] and \begin{align*} g(y) = U(\sigma(y),y),\ \forall y\in Y_s,\quad \textnormal{ while } \quad f(y) = U(0,y),\ \forall y\in Y_s. \end{align*} Let $\Gamma := \{(\sigma(y),y) : y\in Y_s\}\cup\{(0,y) : y\notin Y_s\}$. Since $g(y)=0, \forall y\in Y_s$ if and only if $U|_\Gamma =0$, we can recast our problem as the problem of proving that \begin{align*} U|_{\Gamma} = 0 \textnormal{ implies } U(0,y) = 0,\ \forall y \in Y_s.
\end{align*} This is exactly what Theorem \ref{thm:backward_uniqueness} in the following section shows. But to use Theorem \ref{thm:backward_uniqueness} we need to check that $\Gamma$ satisfies the required conditions, which reduces to proving the following \begin{enumerate} \item\label{sigma_i} $\sigma:Y_s\to \mathbb{R}$ is $C^1$. \item\label{sigma_ii} $\sigma(y)=0$ if $y\in \partial Y_s$. \item\label{sigma_iii} $\sigma'(y)=0$ whenever $\sigma(y)=0$. \item\label{sigma_iv} There exists $\delta>0$ such that $\sigma'(y)>0$ for $y\in(\underline{y}(s),\underline{y}(s)+\delta)$. \end{enumerate} Let us prove these four points. Recall that for $y\in Y_s$ \begin{align}\label{eq:sigma} \sigma(y) = \frac 12\int_{\gamma(y)}^s (s-\tau)^2 \psi(\tau,y)d\tau, \end{align} therefore \begin{align}\label{eq:sigma'} \sigma'(y) = -\frac 12\gamma'(y) (s-\gamma(y))^2 \psi(\gamma(y),y)+ \frac 12\int_{\gamma(y)}^s (s-\tau)^2 \frac{\partial \psi}{\partial y} (\tau,y)d\tau. \end{align} The hypotheses on the regularity of $\gamma$ and $\psi$ clearly imply that $\sigma\in C^1(Y_s)$ and therefore \eqref{sigma_i} is satisfied. Property \eqref{sigma_ii} follows from equation \eqref{eq:sigma} and the fact that if $y\in \partial Y_s$ then $\gamma(y)=s$. In order to check \eqref{sigma_iii} let us recall that $\psi>0$ in $\Omega$, therefore $\sigma(y)=0$ only if $\gamma(y)=s$ (see equation \eqref{eq:sigma}), in which case equation \eqref{eq:sigma'} implies $\sigma'(y)=0$.
To establish \eqref{sigma_iv}, we observe that if $m=\inf_{(x,y)\in\overline{\Omega}}|\psi (x,y)|>0$ and $M=\sup_{(x,y)\in\overline{\Omega}}|\partial \psi/\partial y (x,y)|$ then from equation \eqref{eq:sigma'} \begin{align*} \frac{2\sigma'(y)}{(s-\gamma(y))^2} \geq \Big[-\gamma'(y)m - \frac{1}{3} (s-\gamma(y))M\Big]\stackrel{y\to\underline{y}(s)}{\longrightarrow} -\gamma'(\underline{y}(s))m. \end{align*} Since $\Omega$ is admissible, $\gamma'(\underline{y}(s))<0$ and therefore $\sigma'(y)>0$ for $y\in(\underline{y}(s),\underline{y}(s)+\delta)$, for some $\delta>0$. \end{Proof} \section{A uniqueness result for the heat equation}\label{sec:backward_heat} The purpose of this section is to prove the following result. \begin{theorem}\label{thm:backward_uniqueness} Let $\sigma(y)\in C^1_c(\mathbb{R})$ and denote $\Gamma =\{(t,y)\in\mathbb{R}^2: t=\sigma(y)\}$. Let $$\underline{y}=\inf(\supp\sigma),\quad\overline{y}=\sup(\supp\sigma)$$ and assume there is $\delta>0$ so that $\sigma'(y)> 0$ in $(\underline{y},\underline{y}+\delta)$. If $U(t,y)$ is a solution to the heat equation \begin{align*} (\partial_t - \partial^2_y)U(t,y) = 0,\quad (t,y)\in(0,+\infty)\times\mathbb{R},\\ U(t,y)\to 0\;\text{as}\; |y|\to\infty,\ \forall t>0, \end{align*} satisfying $\supp U|_{t=0}\subset\supp\sigma$ and $ U|_{\Gamma} = 0$, then $U= 0$ everywhere in $(0,+\infty)\times\mathbb{R}$. In particular $U(0,y)=\lim_{t\to 0^+}U(t,y)=0, \forall y\in\mathbb{R}$. \end{theorem} \begin{Proof} Let $T=\sigma(\underline{y}+\delta)$; by hypothesis the restriction of $\sigma$ to the interval $(\underline{y},\underline{y}+\delta)$ has an inverse $\rho(t) = \sigma^{-1}(t)\in C^1(0,T)\cap C[0,T]$, and since $\sigma(\underline{y}) = 0$ we have $\rho(0) = \underline{y}$. Then we can parameterize the section of $\Gamma$ immediately to the right of $(\underline{y},0)$ as $\{(\rho(t),t):0\leq t\leq T\}$ (see Figure~\ref{fig:Gamma}).
\begin{figure} \centering \includegraphics[scale=0.9]{Gamma} \caption{Curve $\Gamma$ in variables $(y,t)\in \mathbb{R} \times (0,+\infty)$. The filled zone $\{(y,t)\colon 0\leq t< T,\ y<\rho(t)\}$ has been denoted by $S$. The assumption that $\supp U|_{t=0}\subset\supp\sigma$ is also represented, since $\underline{y}=\inf(\supp\sigma)$ and $\overline{y}=\sup(\supp\sigma)$.} \label{fig:Gamma} \end{figure} Let us define the following one--sided exterior energy $$I(t):= \frac{1}{2}\int^{\rho(t)}_{-\infty}|U(t,y)|^2dy,\quad t\in[0,T),$$ and notice that for all $t\in(0,T)$ $$\frac{d}{dt}I(t) = \frac{1}{2}|U(\rho(t),t)|^2\frac{d}{dt}\rho(t) + \int^{\rho(t)}_{-\infty}U(t,y)\partial_tU(t,y)dy ,$$ where the first term in the sum vanishes since $U|_{\Gamma}=0$. On the other hand, since $U$ solves the heat equation, integrating by parts yields $$ \begin{aligned} \int^{\rho(t)}_{-\infty}U(t,y)\partial_tU(t,y)dy &= \int^{\rho(t)}_{-\infty}U(t,y)\partial_y^2 U(t,y)dy\\ &= U(t,\cdot)\partial_y U(t,\cdot)\Big|^{\rho(t)}_{-\infty} - \int^{\rho(t)}_{-\infty}|\partial_y U(t,y)|^2dy, \end{aligned} $$ and again the first term in the sum vanishes since $U|_{\Gamma}=0$. Therefore $$\frac{d}{dt}I(t) = - \int^{\rho(t)}_{-\infty}|\partial_y U(t,y)|^2dy\leq 0,\quad \forall t\in[0,T),$$ and $I(t)$ is a nonnegative nonincreasing function. But $\supp U(0,y)\subset\supp\sigma$, implying that $I(0) = 0$ and concluding that $I(t)=0$ for all $t\in [0,T)$. It follows that $$U(t,y)= 0,\quad \forall t\in[0,T), \forall y<\rho(t),$$ and from classical unique continuation results for parabolic equations (see for instance \cite{lin1990uniqueness}) we deduce that $U$ must vanish in the whole upper-half plane. \end{Proof} In the next sections, we present the numerical implementation of the direct and inverse problems. \section{Discrete direct and inverse problems}\label{sec:numerics} The main objective of this and the following sections is to present a numerical analysis and solution of the direct and inverse problems.
This will allow us to verify that the diffusion and artifacts observed during the traditional acquisition process can be described by the proposed model. \subsection{Direct model} Here, we present how to simulate our data set using the proposed forward operator $\mathcal P$. Given the fluorescence density $\mu$ in a given domain $\Omega$, we are able to compute the value of $p_h(s)$ for all $s$ thanks to the expression~\eqref{eq:measure_h}. The density of fluorophores $\mu$ and the two cases of attenuation $\lambda$ that we consider in the experiments are presented in Figure~\ref{fig:constant_data}. The variable attenuation is proportional to the fluorophore density plus a constant value which represents the medium where the object is submerged. We assume that the attenuation of the fluorescence stage $a$ satisfies the relation $a = \hat c \cdot \lambda$. We choose the parameter $\hat c$ so that the diffusion effect obtained in the numerical experiments remains close to the one observed in the real data. Here, we also assume that the diffusion term $\psi_h$ is proportional to the attenuation $\lambda_h$, \emph{i.e.} $\psi_h = \tilde c\cdot \lambda_h$. For all the experiments we set this constant to $\tilde c = 0.6$. Additionally, recalling that $w_h = c\cdot \mu \cdot v_h$ represents the amount of fluorescent molecules activated after the excitation process, we take $c=1$ throughout the experiments. \begin{figure}[ht] \centering \includegraphics[width=\linewidth,height=0.35\linewidth]{constant_data.eps} \caption{\textbf{From left to right:} fluorophore density distribution ($\mu$), constant and variable attenuation ($\lambda$) for the excitation stage.} \label{fig:constant_data} \end{figure} For all experiments, we work over the domain $\Omega=[0,2]\times[-1,1]$ and with images of size $N \times N$ with $N=257$. The discretization step is given by $\tau = 2/(N+1)$ in the $x$ and $y$ axes.
We start by calculating the values $v_h(x,y)$ over $\Omega$ for a discretized set of excitation points along the interval $[-1,1]$. We take $N$ excitation heights with step size $\tau$. The excitation points are considered in two directions, left and right: since the support of our object is a circle (as shown in Figure~\ref{fig:constant_data}), by Definition~\ref{def:admisible_section} two directions are needed to guarantee the uniqueness of our solution in the whole domain. The total number of excitation points is then $2N$. The discretization of equation~\eqref{eq:paraxialxy} is straightforward if we approximate the integrals of $\lambda_h$ as finite sums of its pixel intensities, since we are representing $\lambda_h$ as an image of size $N\times N$. The same is done for the integrals of $\psi_h$ in expression~\eqref{eq:Ek}. Figure~\ref{fig:vh} presents a single simulation of $v_h(x,y)$ when the excitation occurs at $h=-0.1406$, from both directions (left and right). We also include a visualization of the function $w_h$. \begin{figure}[H] \centering \includegraphics[width=0.48\linewidth,height=0.24\linewidth]{constant0_01_vh.eps} \includegraphics[width=0.48\linewidth,height=0.24\linewidth]{constant0_01_wh.eps} \caption{The left image corresponds to the $v_h$ image after illuminating at $h=-0.1406$ from left and right, respectively. In the second image, we show the function $w_h$ for the same height. We include the support of our object in dashed red lines for visualization purposes.} \label{fig:vh} \end{figure} To discretize equation~\eqref{eq:measure_h}, we define the set of discrete values of $h$ as $\{h_l\}$ for $l=1,\ldots, 2N$ and analogously, for $s$ we consider $\{s_k\}$ for $k=1,\ldots,N.$ Additionally, as the images $a$ and $\mu$ are seen as matrices, we index them as $a_{ij}$ and $\mu_{ij}$ for $i,j = 1,\ldots, N$. Finally, a line of observation is defined by the distance $s_k$, and we denote it by $L_k$.
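The cumulative-sum discretization just described can be sketched as follows for a single row $h$. The grid matches the $N=257$, $\tau=2/(N+1)$ setup above, but the constant profiles of $\lambda_h$ and $\psi_h$, the entry point $x_h=0$, and the beam shape (an attenuated Gaussian of variance $\alpha_h^2$, as in \eqref{eq:measure_h}) are illustrative simplifications of the full expressions \eqref{eq:paraxialxy} and \eqref{eq:Ek}.

```python
import numpy as np

# Cumulative-sum discretization of the beam on a single row h. Grid matches
# N = 257, tau = 2/(N+1); the constant profiles of lambda_h and psi_h, the
# entry point x_h = 0 and the Gaussian beam shape are illustrative choices.
N = 257
tau = 2.0 / (N + 1)
x = tau * np.arange(N)
h = -0.1406
lam_h = 0.3 * np.ones(N)            # attenuation sampled along the row
psi_h = 0.18 * np.ones(N)           # diffusion sampled along the row

# int_{x_h}^{x_j} lambda_h(t) dt  ~  cumulative sum of pixel intensities
transmission = np.exp(-np.cumsum(lam_h) * tau)

# alpha_h^2(x_j) = int_{x_h}^{x_j} (x_j - t)^2 psi_h(t) dt, one sum per pixel j
alpha2 = np.array([np.sum((x[j] - x[:j + 1])**2 * psi_h[:j + 1]) * tau
                   for j in range(N)])

# attenuated Gaussian profile of the beam over heights r and depths x_j
r = tau * np.arange(N) - 1.0
sig2 = np.maximum(alpha2, 1e-12)    # guard against the zero width at x_h
v = transmission * np.exp(-(r[:, None] - h)**2 / (2 * sig2)) / np.sqrt(2 * np.pi * sig2)
```

By construction the beam width $\alpha_h^2$ grows monotonically with depth while the transmitted intensity decays, which is the widening-and-fading pattern visible in Figure~\ref{fig:vh}.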
In Figure~\ref{fig:new_discretization}, we describe all the discrete variables that we have introduced. The filled pixels represent an example of the discretized function $v_h$ when the excitation occurs at the point $h_l$ of our discrete domain. We denote by $v_{ijl}$ the value of $v_{h}$ in the pixel indexed by $(i,j)$ when $h=h_l$. \begin{figure}[ht] \centering \includegraphics[scale=1.3]{new_discretization.pdf} \caption{Discretization of the image and the variables used in the AtRt.}\label{fig:new_discretization} \end{figure} We use the Kronecker delta to determine whether a line $L_k$ intersects a pixel $(i,j)$; this happens at pixels where $j=k$: \[ \delta_{jk} = \left\{ \begin{array}{ll} 1,& \text{if $j=k$,}\\ 0,& \text{otherwise.} \end{array} \right. \] Then $\mathcal P[\mu] (s_k, h_l)= p_{h_l}(s_k)$ is calculated as: \begin{eqnarray} \label{eq:02} \mathcal P[\mu](s_k, h_{l}) & = & c\sum_{i,j=1}^{N} \delta_{jk}\mu_{ij}v_{ijl}\exp\left(-D_{ik}(a)\right),\\ \label{eq:03} & = & c\sum_{i=1}^{N} \mu_{ik}v_{ikl}\exp\left(-D_{ik}(a)\right), \end{eqnarray} where \[ D_{ik}(a) = \sum_{z=1}^{i} a_{zk} \] is interpreted as partial sums along the columns of the attenuation $a$. Under this discretization, our set of measurements is of size $2N^2$, for all $(s_k, h_l)$ with two--sided excitations (we highlight that the density $\mu$ has $N^2$ pixels, which is the number of unknowns of our problem). In Figure~\ref{fig:measurements}, the first two images represent the matrix of measurements obtained from left and right excitations, respectively. In the third one, the fused image (as in \cite{huisken2012slicing}) is presented to compare it with the reconstruction obtained by the proposed model. In Figure~\ref{fig:measurementsVSsource}, we compare the fused image and the ground truth density $\mu$ under the same scale of values.
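Equation~\eqref{eq:03} vectorizes naturally: the partial sums $D_{ik}(a)$ are cumulative sums down each column, and the remaining sum over $i$ is a tensor contraction. A minimal sketch, with small random stand-ins for $\mu$, $a$ and $v$ (the paper itself uses $N=257$ and $2N$ excitations):

```python
import numpy as np

# Vectorized evaluation of eq. (03): D is a cumulative sum down each column
# and the sum over i is a tensor contraction. Small random stand-ins below;
# the paper uses N = 257 pixels and 2N excitation heights.
rng = np.random.default_rng(0)
N, L = 32, 64                       # image side, number of excitations
c = 1.0
mu = rng.random((N, N))             # density mu_{ij}
a = 0.05 * rng.random((N, N))       # attenuation a_{ij}
v = rng.random((N, N, L))           # beam values v_{ijl}

D = np.cumsum(a, axis=0)            # D_{ik} = sum_{z<=i} a_{zk}
p = c * np.einsum('ik,ikl,ik->kl', mu, v, np.exp(-D))   # p[k, l]
```

The Kronecker delta of eq.~\eqref{eq:02} never appears explicitly: restricting the contraction to the shared column index $k$ performs the same selection.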
This figure shows that the density measured by the camera is degraded and needs to be corrected in the central zone, which was our initial motivation. In the next section, we study the numerical inversion of the proposed inverse problem and present possible improvements that can be obtained through our approach. \begin{figure}[ht] \centering \includegraphics[width=0.7\linewidth]{constant0_01_measurementsVSsource.eps} \caption{The fused image (see Figure~\ref{fig:measurements}) of measurements (left) compared to the ground truth density (right).}\label{fig:measurementsVSsource} \end{figure} \subsection{Inverse model} We take advantage of the linearity of the operator $\mathcal P$ described in Definition~\ref{def:measurement_operator} to represent the solution of our discretized inverse problem as the solution of a linear system of the form: \begin{equation}\label{eq:linear_system} A{\bm\mu}=b,\qquad A\in\mathbb{R}^{m\times n}, \ b\in \mathbb{R}^m,\ \bm\mu\in\mathbb{R}^n. \end{equation} To build the matrix $A$ associated to our problem, we have to make small changes to the previous discretization. We just reorder $(\mu_{ij})$ as a vector $\bm \mu$ of size $N^2\times 1$ as shown in the expression below. We use the variable $z$ to index pixels, so $z=1, \ldots, N^2$. The same is needed for ${\bm v}_l := (v_{ijl})$: \[ \bm \mu = (\mu_z) = \begin{bmatrix} \mu_{11}\\ \mu_{21}\\ \mu_{31}\\ \vdots\\ \mu_{NN} \end{bmatrix},\qquad {\bm v}_l = (v_{zl})= \begin{bmatrix} v_{11l}\\ v_{21l}\\ v_{31l}\\ \vdots\\ v_{NNl} \end{bmatrix}, \ \forall l=1,\ldots, 2N. \] As a counterpart of the Kronecker delta, we introduce a matrix that encodes all the information about the intersections between the lines $L_k$ and a pixel $z$. For a fixed pixel $z$ and distance $s_k$, we define \[ w_{zk}= \left\{ \begin{array}{ll} 1,& \text{if line $L_k$ crosses the pixel $z$,}\\ 0,& \text{otherwise.} \end{array} \right.
\] Then, we can write a vector ${\bm w}_k$ of size $N^2\times 1$, as follows: \[ {\bm w}_k = \begin{bmatrix} w_{1k},\ w_{2k},\ w_{3k},\ \cdots,\ w_{N^2k} \end{bmatrix}^\top. \] We then define ${\bm W}_{kl} = {\bm v}_l\odot {\bm w}_k$, where $\odot$ represents the Hadamard or point--wise product. The only part of expression~\eqref{eq:02} that still needs to be written as a vector is the exponential term; for this, we define a matrix ${(D_z)}$ as the cumulative sums of the attenuation matrix $a$ in the direction of the camera. The farther a pixel is from the camera, the greater its accumulated value. As before, we rewrite this matrix as a ($N^2\times 1$)--vector, which we denote by $\bm D$: \[ {\bm D} = \begin{bmatrix} D_{1},\ D_{2},\ D_{3},\ \cdots,\ D_{N^2} \end{bmatrix}^\top. \] Now, for each $k$ and $l$, we write a row of our final matrix $A$ as: \[ {\bm a}_{kl} = {\bm W}_{kl}\odot \exp({-\bm D}), \] where $\exp(-\bm D)$ is understood as the exponential of each component of $\bm D$. Then, varying $k$ and $l$, we build $A$ of size $m\times n$, with $m = 2N^2$ and $n=N^2$. To build the vector of measurements $b$ from our set of observations (such as the first two images presented in Figure~\ref{fig:measurements}), we just need to reshape each of them into a column vector, taking the entries row by row and transposing. The shapes of the matrix $A$ and vector $b$ are: \begin{align*} A &{} = \left[\begin{array}{cccc|cccc|c|cccc} {\bm a}_{11} & {\bm a}_{21}& \cdots&{\bm a}_{N1}& {\bm a}_{12}& {\bm a}_{22}& \cdots& {\bm a}_{N2}& \cdots& {\bm a}_{1,2N}& {\bm a}_{2,2N}& \cdots& {\bm a}_{N,2N} \end{array}\right]^\top, \\[8pt] b &{} = \left[\begin{array}{cccc|cccc|c|cccc} {b}_{11}& {b}_{21}& \cdots& {b}_{N1}& {b}_{12}& {b}_{22}& \cdots& {b}_{N2}& \cdots& {b}_{1,2N}& {b}_{2,2N}& \cdots& {b}_{N,2N} \end{array}\right]^\top.
\end{align*} \subsubsection{Solution of the linear system.} As the matrix $A$ is sparse and large, a factorization process to solve~\eqref{eq:linear_system} could be impossible or computationally expensive. For this reason, the use of iterative methods is highly desirable for this type of linear system. Additionally, we consider that our measurements (represented by the right-hand side vector $b$) are corrupted by an unknown noise vector $\varepsilon\in \mathbb{R}^m$, as is usual in real cases. For the different iterative algorithms that we will present, we assume that at least the norm $\delta := \|\varepsilon\|$ is known. Then, due to the ill--posedness produced by the presence of noise and the possibly ill-conditioned matrix $A$, a \emph{regularization process} can be used to overcome these issues~\cite{calvetti2004non}. The regularized minimization problem associated to the solution of the linear system~\eqref{eq:linear_system} is: \begin{equation}\label{eq:regularization_problem} \mu = \argmin_{x\in \mathbb{R}^n} \left\{\frac 12 \|Ax-b\|_2^2 + \lambda \mathcal R(x)\right\} \end{equation} where the data--fit term $\|Ax-b\|_2^2$ forces the problem to find $x$ that remains close to the given data $b$, and the regularizer term $\mathcal R$ is chosen to meet the particular requirements of each problem. An alternative way to include the regularization is to apply an iterative method directly to the data--fit term and use the number of iterations as a stopping criterion once semi-convergence is achieved. The general principle of semi--convergence is to obtain a desired approximation before the noise starts to show up in the current solution~\cite[Chapter 6]{hansen2010discrete}. The algorithms used to solve our problem consider these two possible approaches. In the next section, we briefly describe the algorithms that are used to solve our linear system and hence the inverse problem.
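The row-by-row construction ${\bm a}_{kl} = {\bm W}_{kl}\odot \exp(-{\bm D})$ is naturally sparse: each row touches only the $N$ pixels of one image column. A minimal assembly sketch with \texttt{scipy} follows; the sizes and the random stand-ins for $a$ and $v$ are illustrative (the paper uses $N=257$ and $2N$ excitations), and the column-major flattening matches the ordering $\mu_{11},\mu_{21},\dots$ of $\bm\mu$.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Sketch of the sparse assembly of A, row by row, following the construction
# above: a_{kl} = W_{kl} .* exp(-D). Sizes and the random stand-ins for a and
# v are illustrative (the paper uses N = 257 and 2N excitations).
rng = np.random.default_rng(1)
N, L = 16, 32
a = 0.05 * rng.random((N, N))
v = rng.random((N, N, L))                   # v[i, j, l]

D = np.cumsum(a, axis=0)                    # cumulative attenuation per column
expD = np.exp(-D).reshape(-1, order='F')    # column-major: index i runs fastest
rows, cols, vals = [], [], []
for l in range(L):
    v_l = v[:, :, l].reshape(-1, order='F')
    for k in range(N):
        z = np.arange(k * N, (k + 1) * N)   # line L_k only meets column k
        rows.extend([l * N + k] * N)        # row ordering: k fastest, then l
        cols.extend(z)
        vals.extend(v_l[z] * expD[z])

A = csr_matrix((vals, (rows, cols)), shape=(N * L, N * N))
```

Applying $A$ to a column-major flattening of a density then reproduces the sums of the discrete forward model, while storing only $N$ nonzeros per row.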
We have implemented the discretization of our problem in \textsc{Matlab} and we solve the linear system using the \textsc{IR tools} package, which is detailed in~\cite{gazzola2019ir}. \section{Numerical results}\label{sec:numerical_results} In this part, we propose to solve our discrete inverse problem using two different minimization approaches, which we denote by (P1) and (P2) and define as follows: \begin{align} &\tag{P1}\label{eq:P1} \left\{\begin{aligned} & \underset{x}{\text{minimize}} & & \|Ax -b\|^2_2 \\ & \text{subject to} & & x \in \mathcal C \end{aligned}\right.\\[10pt] &\tag{P2}\label{eq:P2} \left\{\begin{aligned} & \underset{x}{\text{minimize}} & & \|Ax -b\|^2_2 +\lambda \TV(x) \\ & \text{subject to} & & x \geq 0 \end{aligned}\right. \end{align} Problem~\eqref{eq:P1} corresponds to the \emph{semi--convergence} case, where the regularization is included within the iterations of the optimization algorithms. We compare the results obtained by the following algorithms: the \emph{Modified residual norm steepest descent method}~\cite{nagy2000enforcing} (\texttt{mrnsd}), the \emph{Flexible CGLS method}~\cite{gazzola2017fast} (\texttt{nnfcgls}), the \emph{Simultaneous algebraic reconstruction technique}~\cite{hansen2018air} (\texttt{sart}) and the \emph{Fast Iterative Shrinkage-Thresholding Algorithm}~\cite{beck2009fast} (\texttt{fista}), which solves the Tikhonov problem with box constraints when the parameter $\lambda = 0$ (a penalized version is also available if $\lambda \not=0$, but we do not consider this case). Problem~\eqref{eq:P2} has the form of~\eqref{eq:regularization_problem}, where we have considered the \emph{total variation} (TV, \cite{rudin1992nonlinear}) as our regularizer $\mathcal R$. To solve it, we use a particular case of the \emph{Projected-restarted iteration method} (PRI)~\cite{calvetti2004non} which incorporates a heuristic TV penalization term~\cite{gazzola2014generalized}.
As in~\cite{gazzola2019ir}, we denote this method by (\texttt{htv}). \subsection{Simulated noise measurements} To avoid an \emph{inverse crime} in our reconstructions, we add noise to our simulated measurements. For this, we consider a scaling factor $\beta$ to generate Poisson distributed noise (since this random variable takes integer values, it is necessary to amplify the signal). The factor $\beta$ controls the level of noise, \emph{i.e.}, if $\beta$ takes large values, we get lower intensity images and therefore higher Poisson noise~\cite{li2017pure}. Accordingly, each pixel value $p$ is replaced by a draw $\beta\cdot \text{Pois}\left(\frac{p}{\beta}\right)$ as in~\cite[eq. 2]{li2017pure}. Examples 1 and 2 described below are implemented with the values $\beta = 0.01$ and $\beta = 0.001$, respectively. \subsection{Stopping criterion} In the \textsc{IR tools} package, all the algorithms mentioned above use the \emph{discrepancy principle} to stop at the \emph{best} iteration. For the algorithms \texttt{sart}, \texttt{fista}, \texttt{mrnsd} and \texttt{nnfcgls}, this means that the algorithm stops as soon as the relative norm of the residual $b-Ax^{(k)}$ is sufficiently small, typically of the same size as the norm of the noise $\varepsilon$, \emph{i.e.} when \[ \frac{\|b-Ax^{(k)}\|_2}{\|b\|_2}\leq \eta \cdot \texttt{NoiseLevel} \] where $\eta$ is a ``safety factor'' slightly larger than 1, and \texttt{NoiseLevel} is the relative noise $\|\varepsilon\|_2/\|b\|_2$. For the algorithm \texttt{htv}, which is a PRI method with inner--outer iterations, the discrepancy principle is used to stop the inner iterations, whilst the outer iterations are stopped when $\|x^{(k)}\|$, $\|\TV(x^{(k)})\|_2$ or the value of the regularization parameter $\lambda$ becomes stable.
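The noise model $\beta\cdot\mathrm{Pois}(p/\beta)$ and the discrepancy-principle stop can be sketched together on a tiny toy system. Here the system $A$, the ground truth, and $\beta$ are illustrative assumptions, and a projected Landweber iteration stands in for the \textsc{IR tools} solvers listed above:

```python
import numpy as np

# Toy sketch of the noise model beta * Pois(p / beta) and of the discrepancy
# principle as a stopping rule. The system, x_true, beta and the projected
# Landweber iteration are illustrative stand-ins for the IR tools solvers.
rng = np.random.default_rng(2)
m, n = 200, 100
A = rng.random((m, n)) / n
x_true = np.abs(rng.standard_normal(n))
b_clean = A @ x_true

beta = 0.01                             # larger beta: lower counts, more noise
b = beta * rng.poisson(b_clean / beta)
noise_level = np.linalg.norm(b - b_clean) / np.linalg.norm(b)

eta = 1.01                              # safety factor of the discrepancy principle
step = 1.0 / np.linalg.norm(A, 2)**2    # Landweber step size
x = np.zeros(n)
for k in range(20000):
    r_vec = b - A @ x
    if np.linalg.norm(r_vec) <= eta * noise_level * np.linalg.norm(b):
        break                           # discrepancy principle: stop here
    x = np.maximum(x + step * A.T @ r_vec, 0.0)   # enforce x >= 0
```

Stopping once the residual reaches the noise level is what prevents the iteration from fitting the noise, which is the semi-convergence behavior exploited by Problem~\eqref{eq:P1}.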
\subsection{Initialization} We use the fused image of measurements (see Figure~\ref{fig:measurements}) as the initial value $x^{(0)}$ (see Figure~\ref{fig:measurementsVSsource}); this initialization helps to improve the speed of the algorithms and reduce the number of iterations. When the parameter $\eta$ is needed, we consider $\eta = 1.01$. Additionally, since we simulate the data as shown in Section~\ref{sec:numerics}, we have at our disposal the true value of the unknown image $\mathbf{\mu}$, which is included in the algorithm to calculate the relative error. \subsection*{Example 1:} In this first simulated example, we consider that the attenuations $\lambda$ and $a$ are constant over the domain $\Omega$. This means that we are only considering the effects of the medium where our object of interest is submerged. In Table~\ref{tab:constant_algorithms}, we present the results in terms of the number of (outer) iterations, execution time, relative error (NRE) and the structural similarity coefficient (SSIM, \cite{wang2004image}) between the reference (true) density and the reconstruction. In this example, all the algorithms present a quantitative improvement compared to the values of the fused image. The \texttt{htv} method gives the smallest NRE value (0.13914) and \texttt{fista} the highest value of the SSIM (0.98439). In Figure~\ref{fig:constant_zoom}, we can visually compare the different results. \begin{table}[ht] \caption{Number of iterations, execution time, relative error and SSIM for the different algorithms when the attenuation is assumed to be known and constant.
The ``fused image'' row corresponds to the third image in Figure~\ref{fig:measurements}, which has been perturbed by noise.}\label{tab:constant_algorithms} \begin{center} \begin{tabular}{lcccl} \toprule Algorithm & iterations & time (s) & $\|x - x^{(k)}\|_2/\|x\|_2$ & SSIM\\\mr \texttt{fused image} & -- -- & -- -- & 0.1637 & 0.96402\\ \texttt{fista} & \,31 & 4.8129 & 0.15077 & $\mathbf{0.98439}$\\ \texttt{htv} & \,34 & 1.2496 & $\mathbf{0.13914}$ & 0.98349\\ \texttt{mrnsd} & 150 & 2.8388 & 0.14965 & 0.98278\\ \texttt{nnfcgls} & 106 & 3.7828 & 0.14001 & 0.98383\\ \texttt{sart} & \,10 & 1.9969 & 0.15856 & 0.98305\\ \bottomrule \end{tabular}\\ {\footnotesize $^\ast x$ is the true solution.} \end{center} \end{table} \begin{figure}[ht] \begin{center} \begin{tikzpicture}[spy using outlines={rectangle,white,magnification=2,size=1.5cm, height=5.5cm, connect spies}] \node {\pgfimage[height=5cm, width=5cm]{constant0_01_source.png}}; \spy on (0,0) in node [left] at (3.5,0); \end{tikzpicture} \end{center} \caption{True simulated density, with a zoomed zone for visual comparisons.}\label{fig:zoom_source} \end{figure} \begin{figure}[ht] \begin{center} \begin{tabular}{cc} \subfloat[Noisy measurements]{\begin{tikzpicture}[spy using outlines={rectangle,white,magnification=2,size=1.5cm, height=5.5cm, connect spies}] \node {\pgfimage[height=5cm, width=5cm]{constant0_01_meas.png}}; \spy on (0,0) in node [left] at (3.5,0); \end{tikzpicture}}& \subfloat[fista]{\begin{tikzpicture}[spy using outlines={rectangle,white,magnification=2,size=1.5cm, height=5.5cm, connect spies}] \node {\pgfimage[height=5cm, width=5cm]{constant0_01_fista.png}}; \spy on (0,0) in node [left] at (3.5,0); \end{tikzpicture}}\\ \subfloat[htv]{\begin{tikzpicture}[spy using outlines={rectangle,white,magnification=2,size=1.5cm, height=5.5cm, connect spies}] \node {\pgfimage[height=5cm, width=5cm]{constant0_01_htv.png}}; \spy on (0,0) in node [left] at (3.5,0); \end{tikzpicture}}&
\subfloat[mrnsd]{\begin{tikzpicture}[spy using outlines={rectangle,white,magnification=2,size=1.5cm, height=5.5cm, connect spies}] \node {\pgfimage[height=5cm, width=5cm]{constant0_01_mrnsd.png}}; \spy on (0,0) in node [left] at (3.5,0); \end{tikzpicture}}\\ \subfloat[nnfcgls]{\begin{tikzpicture}[spy using outlines={rectangle,white,magnification=2,size=1.5cm, height=5.5cm, connect spies}] \node {\pgfimage[height=5cm, width=5cm]{constant0_01_nnfcgls.png}}; \spy on (0,0) in node [left] at (3.5,0); \end{tikzpicture}}& \subfloat[sart]{\begin{tikzpicture}[spy using outlines={rectangle,white,magnification=2,size=1.5cm, height=5.5cm, connect spies}] \node {\pgfimage[height=5cm, width=5cm]{constant0_01_sart.png}}; \spy on (0,0) in node [left] at (3.5,0); \end{tikzpicture}}\\ \end{tabular} \end{center} \caption{For Example 1: zoomed images to visualize the difference between the reconstructions.}\label{fig:constant_zoom} \end{figure} In Figure~\ref{fig:line_comparison_constant}, we draw the profiles of the reconstructions along $x=1$ in order to observe the improvements achieved in the central region of the image. \begin{figure}[ht] \begin{center} \includegraphics[width=\linewidth]{constant0_01_comparisonlines.eps} \end{center} \caption{For Example 1: Profiles of the reconstructions at $x=1$, corresponding to column 129 of the images.} \label{fig:line_comparison_constant} \end{figure} \subsection*{Example 2:} In this case, the simulated measurements are generated using variable attenuations $\lambda$ and $a$, in order to include some attenuation effects produced by the presence of the fluorescent molecules. However, since in more realistic settings the attenuation may also be unknown, we propose to reconstruct the density $\mu$ with a constant attenuation $a$, which could be experimentally determined. In our case, we take $a=1.1$ over $\Omega$. We have included Poisson noise with $\texttt{NoiseLevel}=0.01$.
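To make the evaluation reproducible, the following minimal Python sketch shows how the reported NRE could be computed and how Poisson noise of a prescribed relative size could be added to simulated data. The function names are ours, and the rescaling convention for \texttt{NoiseLevel} is an assumption; the implementation actually used may define it differently.

```python
import numpy as np

def nre(x_true, x_rec):
    """Normalized relative error ||x_true - x_rec||_2 / ||x_true||_2,
    the quantity reported in the tables."""
    x_true = np.ravel(np.asarray(x_true, dtype=float))
    x_rec = np.ravel(np.asarray(x_rec, dtype=float))
    return float(np.linalg.norm(x_true - x_rec) / np.linalg.norm(x_true))

def add_poisson_noise(b, noise_level=0.01, rng=None):
    """Perturb nonnegative data b with Poisson noise, rescaled so the
    perturbation has relative 2-norm at most noise_level.
    (Assumed convention; the paper does not spell out NoiseLevel.)"""
    rng = np.random.default_rng(0) if rng is None else rng
    b = np.asarray(b, dtype=float)
    e = rng.poisson(np.maximum(b, 0.0)).astype(float) - b  # raw Poisson perturbation
    norm_e = np.linalg.norm(e)
    if norm_e == 0.0:
        return b.copy()
    return b + noise_level * np.linalg.norm(b) / norm_e * e
```

With this convention, the perturbation keeps the Poisson noise structure while matching a prescribed relative magnitude, which makes results at different \texttt{NoiseLevel} values directly comparable.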
The results are presented as before in Figures~\ref{fig:variable_zoom}--\ref{fig:line_comparison_variable} and Table~\ref{tab:variable_algorithms}. The values of the \texttt{nnfcgls} and \texttt{sart} methods are slightly better than those of the other algorithms, but all of them improve on the fused image values. We do not focus on which algorithm is better; we are interested in the improvements observed in the proposed reconstruction independently of the choice of optimization algorithm. \begin{table}[ht] \caption{Number of iterations, execution time, relative error and SSIM for the different algorithms when the attenuation is variable but is considered as constant during the reconstruction. The ``fused image'' row corresponds to the third image in Figure~\ref{fig:measurements}, which has been perturbed by Poisson noise.}\label{tab:variable_algorithms} \begin{center} \begin{tabular}{lcccl} \bottomrule Algorithm & iterations & time (s) & $\|x - x^{(k)}\|_2/\|x\|_2$ & SSIM\\\mr \texttt{fused image} & -- -- & -- -- & 0.40466 & 0.92454\\ \texttt{fista} & \,\,\,29 & 5.3721 & 0.29567 & 0.95875\\ \texttt{htv} & \,\,\,41 & 2.0133 & 0.26267 & 0.96326\\ \texttt{mrnsd} &$>2000$ & 27.976 & 0.24255 & 0.97345\\ \texttt{nnfcgls} &$>2000$ & 96.577 & 0.22798 & 0.97695\\ \texttt{sart} &$>2000$ & 45.802 & $\mathbf{0.22783}$ & $\mathbf{0.97994}$\\ \bottomrule \end{tabular}\\[5pt] {\footnotesize $^\ast x$ is the true solution; the symbol $>$ means the algorithm stopped at the maximum number of iterations.} \end{center} \end{table} \begin{figure}[ht] \begin{center} \begin{tabular}{cc} \subfloat[Noisy measurements]{\begin{tikzpicture}[spy using outlines={rectangle,white,magnification=2,size=1.5cm, height=5.5cm, connect spies}] \node {\pgfimage[height=5cm, width=5cm]{variable0_001_meas.png}}; \spy on (0,0) in node [left] at (3.5,0); \end{tikzpicture}}& \subfloat[fista]{\begin{tikzpicture}[spy using outlines={rectangle,white,magnification=2,size=1.5cm, height=5.5cm, connect spies}] \node
{\pgfimage[height=5cm, width=5cm]{variable0_001_fista.png}}; \spy on (0,0) in node [left] at (3.5,0); \end{tikzpicture}}\\ \subfloat[htv]{\begin{tikzpicture}[spy using outlines={rectangle,white,magnification=2,size=1.5cm, height=5.5cm, connect spies}] \node {\pgfimage[height=5cm, width=5cm]{variable0_001_htv.png}}; \spy on (0,0) in node [left] at (3.5,0); \end{tikzpicture}}& \subfloat[mrnsd]{\begin{tikzpicture}[spy using outlines={rectangle,white,magnification=2,size=1.5cm, height=5.5cm, connect spies}] \node {\pgfimage[height=5cm, width=5cm]{variable0_001_mrnsd.png}}; \spy on (0,0) in node [left] at (3.5,0); \end{tikzpicture}}\\ \subfloat[nnfcgls]{\begin{tikzpicture}[spy using outlines={rectangle,white,magnification=2,size=1.5cm, height=5.5cm, connect spies}] \node {\pgfimage[height=5cm, width=5cm]{variable0_001_nnfcgls.png}}; \spy on (0,0) in node [left] at (3.5,0); \end{tikzpicture}}& \subfloat[sart]{\begin{tikzpicture}[spy using outlines={rectangle,white,magnification=2,size=1.5cm, height=5.5cm, connect spies}] \node {\pgfimage[height=5cm, width=5cm]{variable0_001_sart.png}}; \spy on (0,0) in node [left] at (3.5,0); \end{tikzpicture}}\\ \end{tabular} \end{center} \caption{For Example 2: zoomed images to visualize the difference between the reconstructions. The images are re-scaled to the range of the ground truth density.}\label{fig:variable_zoom} \end{figure} In Figure~\ref{fig:line_comparison_variable}, we draw the profiles of the reconstructions along $x=1$ as before. Here we observe that the assumption of a constant attenuation leads, in some regions, to an underestimation of the true value. This depends directly on the constant value chosen for $a$.
\begin{figure}[ht] \begin{center} \includegraphics[width=\linewidth]{variable0_001_comparisonlines.eps} \end{center} \caption{For Example 2: Profiles of the reconstructions at $x=1$, corresponding to column 129 of the images.} \label{fig:line_comparison_variable} \end{figure} \section{Conclusions and outlook} We presented a novel mathematical model for Light Sheet Fluorescence Microscopy (LSFM). To the best of our knowledge, this is the first approach in this direction and an initial step towards understanding and tackling some of the issues observed in LSFM. This work shows that, by considering the acquisition of the density $\mu$ as an inverse problem, a better reconstruction can be obtained compared to the traditional merging method that is currently used. From the theoretical point of view, we presented a uniqueness result for the proposed inverse problem, by reducing it to the recovery of the initial condition of a heat equation from measurements on a space--time curve. The stability of the reconstruction of $\mu$ is not considered in this article. However, due to the clear link between the microscopy inverse problem and backward heat propagation, the former is expected to be severely ill-posed. The question then is whether logarithmic stability is the optimal result or whether a H\"older-type inequality can be obtained; such a result would also open the door to stability results for more physically complete models. Questions of this type are expected to be addressed in future works. Additional future work includes the extension of these results to the three-dimensional case, where some extra assumptions might be necessary and where we would need to consider a light-sheet illumination or a beam illumination as the natural extension of the technique presented here. Questions about a simultaneous reconstruction are also open.
For example, one could consider recovering the density and the attenuation (either in the illumination or the fluorescence stage) at the same time, by considering additional measurements obtained when rotating the object in multiple directions. A more ambitious extension of this work would be to consider more complete and less simplified physics for the illumination and fluorescence stages. In this paper we rely heavily on the explicit solution of the Fermi pencil-beam equation, which makes it very challenging to extend our results to other illumination models. We also assume perfect collimation of the fluorescence measurements, and different collimation schemes would give rise to other difficulties. Another ambitious extension of this work would be to include the stochastic nature of the fluorescence stage, which would require MLEM or similar reconstruction techniques to be considered. % % % \section*{Acknowledgments} E.C. was partially funded by CONICYT-PCHA/Doctorado Nacional/2016-21161721 grant, by SENESCYT/Convocatoria2015 and Project UCH-1566 from the Department of Mathematical Engineering at Universidad de Chile. \noindent A.O. was partially funded by CONICYT grant Fondecyt \#1191903, CONICYT Basal Program PFB-03 (AFB170001) and MathAmsud 18-MATH-04 and CONICYT/FONDAP/15110009. \noindent M.C. was partially funded by CONICYT grant Fondecyt \#1191903 and M.C. thanks Bo\u{g}azi\c{c}i University, Istanbul, Turkey, where part of this work was completed as a visiting researcher at the institution. \noindent S.H. and V.C. are part of SCIAN-Lab funded by Fondecyt \#1181823, EQM140119, CONICYT (PIA ACT 1402), CENS CORFO (16CTTS-66390) and BNI (ICM P09-015-F). SCIAN-Lab is a selected member of the German-Chilean Center of Excellence Initiative (DAAD 57220037 and 57168868). V.C. is also partially funded by CONICYT grant Fondecyt \#11170475. \noindent B.P. was partially funded by ONR grant N00014-17-1-2096. \noindent We acknowledge M.D.
Miguel Concha for providing us with light-sheet microscopy data (funded by Fondequip EQM130051). \section*{References} \bibliographystyle{plain}
\section{Introduction} In recent years, Artificial Intelligence (AI) has become much more closely connected to human activity. Many tasks that once required human labor are now gradually being automated and shifted to AI. For instance, in order to cope with the COVID-19 pandemic, the use of robot workers has been suggested to minimize physical contact between humans. These robot technologies depend heavily on the accuracy of action recognition/prediction and the consequent interaction between humans and machines. State-of-the-art action recognition and prediction models are deep neural networks (DNNs), due to their capability of modeling complex problems \cite{Si_2019_CVPR, li2019spatio, li2019actional} in an accurate way. Nonetheless, it has also been shown that these models are prone to adversarial examples (or attacks) \cite{biggio2013evasion,szegedy2013intriguing,goodfellow2014explaining}. DNNs can behave erratically when processing inputs with carefully crafted perturbations, even though such perturbations are imperceptible to humans \cite{carlini2017towards,madry2017towards,croce2020reliable,jiang2020imbalanced,wang2020unified}. This has raised security concerns about the deployment of DNN-powered AI systems in security-critical applications such as autonomous driving \cite{eykholt2018robust,duan2020adversarial} and medical diagnosis \cite{finlayson2019adversarial,ma2020understanding}. Investigating and understanding these abnormalities is a crucial task before machine learning based AI agents can become practical. In this work, we investigate the adversarial vulnerability of DNN reaction prediction (i.e., regression) models in skeleton-based interactions. Skeleton signals are among the most commonly used representations for human or robot motion \cite{zhang2016rgb, wang2018rgb}.
While adversarial attacks have been extensively studied on images \cite{goodfellow2014explaining,su2019one,brown2017adversarial,duan2020adversarial}, very few works have been proposed for skeletons \cite{liu2019adversarial, wang2019smart, zheng2020towards}. In comparison to the image space, which is continuous and where pixels can be perturbed freely without raising obvious attack suspicions, the skeleton space is sparse and discrete, and it has a temporal nature that needs to be taken into account. Consequently, attacking skeleton-based models requires many more constraints than attacking in the image space. Existing works on attacking skeleton-based models have only considered the single-person scenario, and have all focused on recognition (i.e., classification) models \cite{liu2019adversarial, wang2019smart, zheng2020towards}. However, interaction scenarios involving two or more characters are essential to the interaction between humans and AI. They should not be overlooked if our ultimate goal is to build AI agents that can fit into our daily life. Neglecting possible attacks might lead to AI agents malfunctioning or behaving aggressively when they are not supposed to. To close this gap, we propose an Adversarial Interaction Attack (AIA) to test the vulnerability of regression DNNs in skeleton-based interactions involving two characters. Being able to accurately recognize a person's action is important, but it is equally important to be able to go a step further and \textit{respond} to the action in an appropriate way. In light of this, the use of regression models is necessary. We hence modified the output layers of two previous state-of-the-art action recognition models. One model is based on a Temporal Convolutional Neural Network (TCN) \cite{BaiTCN2018} and the other on Gated Recurrent Units (GRUs) \cite{maghoumi2019deepgru}. The models were modified to return reactor sequences instead of class labels, and we trained them on skeleton-based interaction data.
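To make the modification concrete, the following toy Python sketch illustrates the input/output contract of such a modified model: a causal sequence-to-sequence regressor that maps actor frames to reactor skeleton frames rather than to class labels. The architecture and all weights here are hypothetical stand-ins, not the actual TCN/GRU networks used in the paper.

```python
import numpy as np

def predict_reaction_sequence(X, W_in, W_rec, W_out):
    """Toy causal sequence-to-sequence regressor: each reactor frame
    y_t depends only on the actor frames x_1..x_t through a recurrent
    hidden state (all weights here are hypothetical).

    X: (T, 3N) actor sequence; returns Y: (T, 3N) reactor sequence."""
    T = X.shape[0]
    h = np.zeros(W_rec.shape[0])
    Y = np.zeros((T, W_out.shape[0]))
    for t in range(T):
        h = np.tanh(W_in @ X[t] + W_rec @ h)  # causal update: no future frames used
        Y[t] = W_out @ h                      # a skeleton frame, not a class label
    return Y
```

The essential property preserved from the paper's setting is causality: perturbing a later actor frame cannot change the reactor frames predicted earlier.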
We examine the performance of the AIA attack under both white-box and black-box settings. We show that the AIA attack can easily fool the two regression models into misinterpreting the actor's intentions and predicting unexpected reactions. Such reactions can have detrimental effects on either the actor or the reactor. Overall, our work reveals potential threats of subtle adversarial attacks on interactions involving AI. In summary, our contributions are: \begin{itemize} \item We propose an adversarial attack approach, the Adversarial Interaction Attack (AIA), that is domain-independent and works for general sequential regression models. \item We propose an evaluation metric that can be applied to evaluate the performance of sequential regression attacks. Such a metric is currently missing from the literature. \item We empirically show that our AIA attack can generate targeted adversarial action sequences with small perturbations, which fool DNN regression models into making incorrect (possibly dangerous) predictions. \item We demonstrate via three case studies how our AIA attack may affect human and AI interactions in real scenarios, which motivates the need for effective defense strategies. \end{itemize} We highlight that our work is the \textit{first} on targeted sequential regression attacks in the strict sense (i.e., purely numerical outputs without labels of any kind). We do not compare our work to previous works on skeleton-based action recognition, as the focus of our work is fundamentally different. Specifically, the goal of our work is to design a new type of attack and evaluation metric capable of handling any type of regression-based problem in general. We thus leave the compatibility between our work and the previously proposed anthropomorphic constraints \cite{liu2019adversarial, wang2019smart, zheng2020towards} as a future area of interest.
\section{Related Work} \subsection{Adversarial Attack} Adversarial attacks can be either white-box or black-box depending on the attacker's knowledge about the target model. White-box attacks have full knowledge about the target model, including parameters and training details \cite{goodfellow2014explaining,zheng2019distributionally,croce2020reliable,jiang2020imbalanced}, while black-box attacks can only query the target model \cite{chen2017zoo,ilyas2018prior,bhagoji2018practical,dong2019efficient,jiang2019black,bai2020improving} or use a surrogate model \cite{liu2016delving,tramer2017space,dong2018boosting,dong2019evading,andriushchenko2020square,wu2020skip,wang2020unified}. Adversarial attacks can also be targeted or untargeted. Under the classification setting, untargeted attacks aim to fool the model such that its output is different from the correct label, whereas targeted attacks aim to fool the model into returning a target label of the attacker's interest. White-box attacks can be achieved by solving either the targeted or untargeted adversarial objective using first-order gradient methods \cite{goodfellow2014explaining,kurakin2016adversarial}. Optimization-based methods have also been proposed to achieve the adversarial objective and, at the same time, minimize the perturbation size \cite{carlini2017towards,chen2017ead}. Most of the above existing attacks were proposed for images and classification models, and the perturbation is usually constrained to be small (e.g., $\|\epsilon\|_{\infty}=8$ for pixel values in $[0,255]$) so as to be imperceptible to human observers. Defenses against adversarial attacks have also been explored on image datasets \cite{madry2017towards,zhang2019theoretically,wang2019convergence,bai2019hilbert,wang2020improving,wu2020adversarial,bai2021improving}. \noindent\textbf{Attacking Regression Models.} Untargeted regression attacks can be derived from classification attacks by simply attacking the regression loss \cite{8846746}.
However, it is more difficult to perform a targeted regression attack such that the model outputs a target sequence. This is because, unlike classification models that contain a finite set of discrete labels, regression models can have infinitely many possible outcomes. Hence, most existing attacks on regression models have focused on the untargeted setting. \citet{10.1007/978-3-030-36708-4_39} proposed a univariate regression loss with the goal of changing the outputs of EEG-based BCI regression models to a value that is at least $t$ away from the natural outcome. This loss function guarantees only that the adversarial output will be at a specified distance away from the natural output; it does not constrain how large or small the output can actually become. In natural language processing (NLP), \citet{Cheng2020Seq2SickET} proposed a targeted attack towards recurrent language models. This work aims to replace arbitrary words in the output sequence with a small set of target adversarial keywords, regardless of their order and occurrence position; in contrast, the order of the target sequence is significant for our problem. While word embeddings can be used to evaluate attack performance on language models, an appropriate performance metric is still lacking in the field of interaction prediction, making it difficult to evaluate the effectiveness of an attack. None of the existing works have implemented an attack that is able to change the whole output sequence completely. In our work, we propose such an attack, which can change the entire output sequence with target frames appearing in our desired order. \noindent\textbf{Adversarial Attack on Action Recognition.} Previous attacks on skeleton-based action recognition have proposed several constraints based on extensive study of anthropomorphism and motion.
These include postural constraints such as maximum changes in joint angles, and inter-frame constraints based on the notions of velocity, acceleration, and jerk \cite{liu2019adversarial, wang2019smart, zheng2020towards}. Additionally, \citet{liu2019adversarial} utilized a Generative Adversarial Network (GAN) loss to model anthropomorphic plausibility. These constraints are distinct from our work, but could potentially be employed in combination with our proposed attack to improve the naturalness of adversarial action sequences. \subsection{Interaction Recognition and Prediction} The use of skeleton data has gained popularity in action recognition and prediction research. Owing to the fact that reliable skeleton data can be easily extracted from modern RGB-D sensors or RGB camera images, these techniques can be easily extended to practical applications \cite{kiwon_hau3d12}. One benchmark interaction dataset is the SBU Kinect Interaction Dataset. Different from most skeleton-based action recognition datasets, which focus on single-person activities, the SBU Kinect Interaction Dataset captures various activities involving two characters. Predicting interactions is a much harder task than predicting single-person activities, due to the complexity and the non-periodicity of the problem \cite{kiwon_hau3d12}. Specifically, in the interaction scenario, two characters are involved, but the contribution from each character may not be equal. For instance, interactions such as approaching and departing have only one active character, while the other character remains steady over all time frames. Convolutional Neural Networks (CNNs) \cite{du2015skeleton,nunez2018convolutional,li2017skeleton} and Recurrent Neural Networks (RNNs) \cite{du2015hierarchical} are two popular choices for tackling the interaction recognition problem.
Models from the RNN family, such as the Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM), are commonly chosen for interaction recognition, because it is natural for them to handle sequential data. \citet{maghoumi2019deepgru} proposed a recurrent model, DeepGRU, that was able to reach state-of-the-art performance. Temporal Convolutional Networks (TCNs) are also a common choice of model when dealing with spatio-temporal data. TCNs, just like RNNs, can take sequences of any length; they rely on a causal convolution operation to ensure no information leakage from the future to the past \cite{BaiTCN2018}. The TCN is also a previous state-of-the-art model \cite{kim2017interpretable} and a component adopted by many recent works on skeleton-based action recognition \cite{meng2018human, yan2018spatial}. In this paper, we modify the DeepGRU network proposed by \citet{maghoumi2019deepgru} and the TCN network proposed by \citet{BaiTCN2018} for interaction prediction, and examine their vulnerability to our proposed attack on the SBU Kinect Interaction Dataset. \section{Proposed Adversarial Interaction Attack} In this section, we first provide a mathematical formulation of the targeted adversarial sequence attack problem. We then introduce the loss functions used by our AIA attack. \noindent\textbf{Overview.} Intuitively, the goal of our AIA attack is to deceive the \emph{reactor} AI agent into thinking that the \emph{actor} is performing a different, specific action by making minor changes to the positions of the \emph{actor's} joints or the angles between joints. The reactor agent will consequently respond by performing the reaction that is targeted by the attack.
\subsection{Formal Problem Definition} A skeleton sequence with $T$ frames can be represented mathematically as the vector $\mathbf{X} = (\mathbf{x}_1, \mathbf{x}_2, ..., \mathbf{x}_T)$, where $\mathbf{x}_i$ is the skeleton representation of the $i^{th}$ frame, a vector consisting of the 3D coordinates of the human skeleton joints. More specifically, $\mathbf{x}_i \in \mathbb{R}^{N \times 3}$, where $N$ denotes the number of joints. In our approach, we flatten $\mathbf{x}_i$ into $\mathbb{R}^{3N}$. First, we define the formal notion of interaction. Suppose the two characters in a two-person interaction scenario are \emph{actor A} and \emph{reactor B}. The task of an interaction prediction model $f$ is to predict an appropriate reaction (i.e., skeleton) $\mathbf{y}_t$ at each time step $t$ for reactor B based on the observed skeleton sequence of actor A, $(\mathbf{x}_1, \cdots, \mathbf{x}_{t})$. This can be written mathematically as: $$f(\mathbf{x}_1, \cdots, \mathbf{x}_{t-1},\mathbf{x}_{t}) = \mathbf{y}_t.$$ Given an input skeleton sequence $\mathbf{X} = (\mathbf{x}_1, \mathbf{x}_2, ..., \mathbf{x}_T)$, an adversarial target skeleton sequence $\mathbf{Y}'= (\mathbf{y}'_1, \mathbf{y}'_2, ..., \mathbf{y}'_T)$, and a prediction model $f: \mathbb{R}^{T \times 3N} \rightarrow \mathbb{R}^{T \times 3N}$, the goal of our AIA attack is to find an adversarial input sequence $\mathbf{X}'=(\mathbf{x}'_1, \cdots, \mathbf{x}'_{T})$ by solving the following optimization problem: \begin{equation}\label{eq:obj} \begin{aligned} & \min_{\mathbf{X}'} \sum_{t \in T} \|\mathbf{x}_t' - \mathbf{x}_t \|_{\infty} \\ & s.t. \;\; \sum\limits_{t \in T} \|f(\mathbf{x}'_1, \cdots ,\mathbf{x}'_t) - \mathbf{y}_t' \|_2 < \kappa, \end{aligned} \end{equation} where $\|\cdot\|_{p}$ is the $L_{p}$ norm, and $\kappa \geq 0$ is a \emph{tolerance factor}, which serves as a cutoff that distinguishes whether the output sequence is recognizable as the target reaction.
This gives us more flexibility when crafting the adversarial input sequence $\mathbf{X}'$, because the set of acceptable output sequences is not a single point; the output sequence does not need to be exactly the same as the target sequence to resemble a particular action. We determine this factor empirically based on an informal user survey in Section \ref{sec:5.1}. Intuitively, the above objective is to find a sequence $\mathbf{X}'$ with minimum perturbation from $\mathbf{X}$, such that the distance between the output and the target is less than $\kappa / T$ on average at each time step. \subsection{Adversarial Loss Function} Our goal is to develop a mechanism that crafts an adversarial input sequence solving the above optimization problem for any given target output sequence, while also maintaining the naturalness of the adversarial input sequence. In order to achieve this goal, we propose the following adversarial loss function: \begin{equation}\label{eq:adv} \mathcal{L}_{adv} = \mathcal{L}_{spatial} + \lambda \mathcal{L}_{temporal}, \end{equation} where the $\mathcal{L}_{spatial}$ loss term minimizes the spatial distance between the output sequence and the target sequence, and the $\mathcal{L}_{temporal}$ loss term maximizes the coherence of the perturbed input sequence so as to maintain its naturalness. \noindent\textbf{Spatial Loss.} The spatial loss term aims to generate adversarial output sequences that are visually similar to the target reaction sequences; that is, its objective is to minimize the spatial distance between the output joint locations and the \textit{neighbourhood} of the target joints at every time step.
Following the formulation of the relaxed optimization problem in \eqref{eq:obj}, we use the $L_2$ norm to measure the distance between two sets of joint locations: \begin{equation}\label{eq:spatial} \mathcal{L}_{spatial} = \sum\limits_{t \in T} \inf\{\|f(\mathbf{x}'_1, \cdots ,\mathbf{x}'_t) - \mathbf{p}_t\|_2 \; | \; \mathbf{p}_t \in S_t\} \end{equation} with $S_t$ being a $(3N \mbox{-} 1)$-sphere defined by: \begin{equation}\label{eq:spatial2} S_t(\mathbf{y}'_t, \eta) = \{\mathbf{p}_t \in \mathbb{R}^{3N} \;|\; \|\mathbf{p}_t - \mathbf{y}'_t\|_2 = \eta\}. \end{equation} Here, $\eta = \kappa / T$ is the tolerance factor $\kappa$ of equation \eqref{eq:obj} averaged over the $T$ time steps. \noindent\textbf{Temporal Loss.} The temporal loss term guarantees the naturalness of the generated adversarial input sequence. Specifically, the movement of each joint should be continuous in time, and motions with abrupt changes or teleportation should be penalised. The $\mathcal{L}_{temporal}$ term achieves this goal by maximizing the coherence of each element in the perturbed input sequence with respect to its neighboring elements in the temporal dimension. This gives: \begin{equation}\label{eq:temporal} \mathcal{L}_{temporal} = \sum\limits_{t \in T} (\|\mathbf{x}'_t - \mathbf{x'}_{t-1}\|_2 + \|\mathbf{x}'_t - \mathbf{x'}_{t+1}\|_2) \end{equation} Note that a scaling factor $0 \leq \lambda \leq 1$ is introduced in front of $\mathcal{L}_{temporal}$ to balance the two loss terms. We use the first-order method Projected Gradient Descent (PGD) \cite{madry2017towards} to minimize the combined adversarial loss iteratively as follows: \iffalse \begin{equation} \vx^{t+1}_{adv} = \Pi_{\epsilon} \Big( \vx^{t}_{adv} + \alpha \cdot \text{sign}\big(\frac{\partial \ell}{\partial \vz_L} \prod_{i=0}^{L-1} ( \gamma \frac{\partial f_{i+1}}{\partial \vz_{i}} + 1 ) \frac{\partial \vz_0}{\partial \vx}\big) \Big).
\end{equation} \fi \begin{equation} \begin{aligned} &\mathbf{X}'_0 = \mathbf{X} \\ &\mathbf{X}'_{m+1} = \Pi_{\mathbf{X}, \epsilon} \big(\mathbf{X}'_m - \alpha \cdot \sign (\nabla_{\mathbf{X}'_m} \mathcal{L}_{adv}(\mathbf{X}'_m, \mathbf{Y'}))\big) \end{aligned} \end{equation} where, $\Pi_{\mathbf{X}, \epsilon}(\cdot)$ is the projection operation that clips the perturbation back to $\epsilon$-distance away from $\mathbf{X}$ when it goes beyond, $\nabla_{\mathbf{X}'_m}\mathcal{L}_{adv}(\mathbf{X}'_m, \mathbf{Y'})$ is the gradient of the adversarial loss to the input sequence, $m$ is the current perturbation step for a total number of $M$ steps, $\alpha$ is the step size and $\epsilon$ is the maximum perturbation factor. The sequence $\mathbf{Y'}$ for a target reaction can be either customized or sampled from the original dataset. \begin{figure*}[h!] \includegraphics[width=17.6cm]{1-orig.png} \includegraphics[width=17.6cm]{1-adv.png} \caption{Side-by-side comparison of Case Study 1 ‘handshaking’ to ‘punching’. Top-Bottom: original prediction, adversarial prediction. Blue character: input, green character: output.} \label{fig:1} \end{figure*} \begin{figure*}[h!] \includegraphics[width=17.6cm]{2-orig.png} \includegraphics[width=17.6cm]{2-adv.png} \caption{Side-by-side comparison of Case Study 2 ‘punching’ to ‘handshaking’. Top-Bottom: original prediction, adversarial prediction. Blue character: input, green character: output.} \label{fig:2} \end{figure*} \begin{figure*}[h!] \includegraphics[width=17.6cm]{3-orig.png} \includegraphics[width=17.6cm]{3-adv.png} \caption{Side-by-side comparison of Case Study 3 ‘approaching’ to ‘remaining’. Top-Bottom: original prediction, adversarial prediction. 
Blue character: input, green character: output.} \label{fig:3} \vspace{-0.1in} \end{figure*} \section{Overview on Several Case Studies} In this section, we conduct case studies on three selected sets of attack objectives that can be easily associated with real scenarios and serve as motivation for our approach. Detailed experimental settings can be found in Section \ref{sec:5}. The dynamic versions of the case studies and more examples are provided in the supplementary materials. \subsection{Case Study 1: `handshaking' to `punching'} Figure \ref{fig:1} illustrates a successful AIA attack that fools the model into predicting a `punching' action for the reactor (the green character) as a response to the adversarially perturbed `handshaking' action of the actor (the blue character). Note that the perturbation only slightly changed the actor's action. This reveals an important safety risk that needs to be carefully addressed before machine learning based AI agents can be widely used in human daily life. Suppose we are at an interactive AI exhibition and a participant would like to shake hands with an AI robot agent. He gradually extends his hand, sending out an interaction request to the AI agent, and expects the AI agent to respond to his invitation by shaking hands with him. However, instead of reaching its hand out gently, the AI agent decides to punch the participant in the face because the participant's body does not stay straight. It would be extremely hazardous if the human character unintentionally wiggled his body in a pattern similar to the adversarial perturbation introduced in this case study. While the actual chance of this happening is extremely low due to the high complexity of the data in both the spatial and the temporal dimensions, this threat might nevertheless materialize if AI workers become widely deployed worldwide. In this case, the human is a victim who inadvertently performs an adversarial attack (wiggling their body).
\subsection{Case Study 2: `punching' to `handshaking'} In this case study we consider a case opposite to the previous one, in which human exploiters are capable of actively attacking AI agents and can derive benefit from being the attackers. In the future, it could become common practice to use AI agents to complete dangerous tasks so as to lower the chance of human operators incurring injuries or fatalities. Security guard is one such job that might be taken over by an AI agent. Imagine that a secret agency that employs AI security guards is invaded by intruders and is placed in a scenario where combat becomes necessary. The AI guard will fail in its role if the invaders know how to apply effective adversarial attacks against it. This is the case in Figure \ref{fig:2}, where the model was fooled into suggesting `handshaking' for the reactor (the green character) rather than `punching'. \subsection{Case Study 3: `approaching' to `remaining'} Finally, Case Study 3, demonstrated in Figure \ref{fig:3}, examines how a cheater might bypass an AI agent's detection. Whilst automatic ticket checkers have been widely adopted, manual ticket checking is still required in numerous situations. For instance, public transportation companies may want to check whether a passenger has paid the upgrade fee if he or she is in a first-class seat. Now suppose that a public transportation company decides to hire AI agents to do the ticket checking job. The company will lose a large amount of income if passengers know how to stop the ticket checkers from `approaching', as in Figure \ref{fig:3}, or even change their `approaching' response to `departing'.
\section{Empirical Understanding of AIA}\label{sec:5} \subsection{Tolerance Factor $\kappa$}\label{sec:5.1} The objective of the AIA attack is defined with respect to a tolerance factor $\kappa$ (see \eqref{eq:obj}, \eqref{eq:spatial} and \eqref{eq:spatial2}), a flexible metric that determines whether the output sequence is close enough to the targeted adversarial reaction. Because many factors are involved, such as the character's height, handedness, and the direction the character is facing, conventional distance metrics such as the $L_1$ and $L_2$ norms are not suitable for defining precisely what the pattern of a specific action should look like. Therefore, we determine the value of $\kappa$ based on human perception via an informal user survey. In order to obtain appropriate values of $\kappa$ for evaluating whether an attack is successful, we randomly sampled 5 out of 8 sets of attack objectives and presented them to 82 human judges, including computer science faculty members and students. Each objective set is composed of an action-reaction pair and contains output sequences generated from 6 different values of $\epsilon$ (from left to right in ascending order). For each sample set, we asked the human judges to choose the leftmost sequence they believe is performing the target reaction. The sampled objectives and the responses from the 82 human judges are recorded in Table \ref{tab:1}. Based on these responses, we computed the tolerance factor $\kappa$ in the optimization problem defined in \eqref{eq:obj} based on the average of \begin{equation}\label{eq:kappa} \sum\limits_{t \in T} \|f(\mathbf{x}'_1, \cdots ,\mathbf{x}'_t) - \mathbf{y}_t' \|_2 \end{equation} over the 5 sample objective sets. The calculation of \eqref{eq:kappa} for each objective set is based on the minimum $\epsilon$ polled from the 82 human judges, and the corresponding value of $\kappa$ is then selected as the optimal value (boldfaced in Table \ref{tab:1}).
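The selection procedure can be sketched in a few lines; the arrays below are hypothetical stand-ins for the model outputs at each surveyed set's minimum accepted $\epsilon$ and the corresponding target reactions, not our actual survey data:

```python
import numpy as np

def reaction_distance(pred_frames, target_frames):
    """Sum over frames of the L2 distance between the predicted pose and
    the target pose, i.e. the quantity in Eq. (eq:kappa)."""
    return float(sum(np.linalg.norm(p - t)
                     for p, t in zip(pred_frames, target_frames)))

rng = np.random.default_rng(0)
# Hypothetical stand-ins: 5 surveyed objective sets, each with 10 frames of
# 45 pose coordinates (15 joints x 3 dims), taken at the minimum epsilon
# accepted by the human judges.
preds   = [rng.uniform(0, 1, size=(10, 45)) for _ in range(5)]
targets = [rng.uniform(0, 1, size=(10, 45)) for _ in range(5)]

per_set = [reaction_distance(p, t) for p, t in zip(preds, targets)]
kappa = float(np.mean(per_set))   # tolerance factor used in the attack objective
```

For each set, `reaction_distance` evaluates the sum in \eqref{eq:kappa}; averaging over the sets yields the working tolerance.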
Note that $\kappa$ serves as a topological boundary between the natural and the adversarial \textit{outputs}, whereas $\epsilon$ is a maximum perturbation constraint that the \textit{input} perturbation must not exceed. \begin{table*}[!htbp] \caption{Responses from the 82 human judges. The optimal $\kappa$ for each attack objective is highlighted in \textbf{bold}. } \label{tab:1} \centering \small \begin{tabular}{llllllll} \hline $\epsilon =$ & 0.075 & 0.15 & 0.225 & 0.3 & 0.375 & 0.45 \\ \hline Handshaking & 1 ($\kappa = 90.9$) & 4 ($\kappa = 84.28$) & \textbf{44 ($\mathbf{\kappa = 79.52}$)} & 3 ($\kappa = 74.49$) & 12 ($\kappa = 45.04$) & 14 ($\kappa = 35.03$) \\ Punching & \textbf{58 ($\mathbf{\kappa = 52.04}$)} & 13 ($\kappa = 47.63$) & 6 ($\kappa = 43.97$) & 3 ($\kappa = 41.76$) & 0 ($\kappa = 39.14$) & 2 ($\kappa = 34.91$) \\ Kicking & 3 ($\kappa = 100.61$) & \textbf{71 ($\mathbf{\kappa = 93.17}$)} & 7 ($\kappa = 86.57$) & 1 ($\kappa = 80.68$) & 0 ($\kappa = 47.47$) & 0 ($\kappa = 35.36$) \\ Departing & 0 ($\kappa = 85.03$) & 7 ($\kappa = 76.78$) & \textbf{26 ($\mathbf{\kappa = 71.77}$)} & 12 ($\kappa = 67.58$) & 1 ($\kappa = 41.78$) & 10 ($\kappa = 32.70$) \\ Pushing & 6 ($\kappa = 28.66$) & 3 ($\kappa = 26.55$) & 2 ($\kappa = 25.16$) & 14 ($\kappa = 23.98$) & \textbf{49 ($\mathbf{\kappa = 22.77}$)} & 5 ($\kappa = 21.31$) \\ \hline \end{tabular} \end{table*} \begin{figure*}[!ht] \includegraphics[width=17.6cm]{ablation_wo.png} \includegraphics[width=17.6cm]{ablation_w.png} \caption{Adversarial input action sequences generated by our AIA attack with (bottom row, with $\lambda = 0.1$) or without (top row) the temporal constraint $\mathcal{L}_{temporal}$.} \label{fig:5} \vspace{-0.1in} \end{figure*} \subsection{Effect of the Temporal Constraint} Here, we study the effect of the temporal constraint $\mathcal{L}_{temporal}$ defined in \eqref{eq:temporal} on the naturalness of the generated adversarial input action sequence.
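The exact definition of $\mathcal{L}_{temporal}$ is given in \eqref{eq:temporal}; one plausible concrete form, a sum of squared frame-to-frame differences of the perturbation, is sketched below to illustrate the intended behaviour (an illustrative assumption, not necessarily the exact term used):

```python
import numpy as np

def temporal_smoothness(delta):
    # Penalize frame-to-frame changes of the perturbation (frames x dims):
    # a sum of squared first differences along the time axis.
    diffs = np.diff(delta, axis=0)
    return float((diffs ** 2).sum())

rng = np.random.default_rng(1)
jagged = rng.normal(size=(10, 45))                   # abrupt per-frame changes
smooth = np.cumsum(np.full((10, 45), 0.01), axis=0)  # gradual drift over time

# The smooth perturbation incurs a far lower penalty than the jagged one.
print(temporal_smoothness(smooth), temporal_smoothness(jagged))
```

Minimizing such a term alongside the adversarial loss steers the optimizer toward perturbations that evolve gradually over time.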
Specifically, we investigate how the input skeleton sequence changes along the depth axis, since that is the only perturbed dimension throughout our experiments. Our hypothesis is that this additional term enables our AIA attack to find adversarial input sequences that change more smoothly over time. Figure \ref{fig:5} visually compares adversarial sequences generated with and without the temporal constraint. The top sequence is an adversarial input sequence generated with the $\mathcal{L}_{temporal}$ term removed, whereas the bottom sequence was generated with a scaling factor of $\lambda=0.1$ applied to the $\mathcal{L}_{temporal}$ term. In contrast to the previous experiment, we plot the skeletons from the depth-y point of view, as we are interested in visualizing the perturbation. As Figure \ref{fig:5} shows, the top sequence generally has more abrupt changes in body position between time steps; this almost never happens in the bottom sequence. More specifically, in the bottom sequence, when a larger change to the body posture is necessary, the change is always preceded by smaller changes in the same direction. In contrast, in the top sequence, large changes can take place in a single time step. This type of aggressive change should be avoided as much as possible, as it makes the attack more easily detectable. \section{Performance Evaluation} We conduct two sets of experiments to evaluate the effectiveness (white-box attack success rate) and the transferability (black-box attack success rate) of our AIA attack. \subsection{Experimental Settings}\label{sec:6.1} \noindent\textbf{Dataset.} We conduct our experiments on the benchmark SBU Kinect Interaction Dataset, which comprises interactions of eight categories: `approaching', `departing', `kicking', `punching', `pushing', `hugging', `handshaking', and `exchanging'.
It contains 21 sets of data sampled from 7 participants using a Microsoft Kinect sensor, with approximately 300 interactions in total. Each character is encoded as 15 joints with $x$, $y$, and depth coordinates. The values of $x$ and $y$ fall within $[0, 1]$, and depth within $[0, 7.8125]$. In order to extract action and reaction sequences, we partitioned each interaction into two individual sequences, one per character. One sequence is used as the action (input) and the other as the reaction (output). Because this dataset is small, we trained our response predictors from the perspectives of both characters; that is, we used the skeleton sequences of both characters as input data independently. For each interaction sequence $\mathbf{x} = \mathbf{x}_1 \frown \mathbf{x}_2$, we create two input/target pairs $(\mathbf{x}_1, \mathbf{x}_2)$ and $(\mathbf{x}_2, \mathbf{x}_1)$. \noindent\textbf{Models and Training.} We adopted one convolutional model, TCN \cite{BaiTCN2018}, and one recurrent model, DeepGRU \cite{maghoumi2019deepgru}, and modified them such that the models predict sequences instead of categorical labels. Our TCN model has 10 hidden layers with 256 units each, and our DeepGRU model follows \citet{maghoumi2019deepgru} exactly, except that the output is a linear layer instead of the attention-classifier framework. We trained each model on the preprocessed dataset for 1,000 epochs using the Adam optimizer with a learning rate of 0.001. We held out the sets s01s02, s03s04, s05s02, and s06s04 of the original dataset as our test set. \noindent\textbf{Attack Setting.} In all experiments, we used the same step size $\alpha=0.03$ and ran our AIA attack for $M=400$ iterations. In addition, we used the Adam optimizer with a learning rate of $10^{-3}$ to maximize the adversarial loss function $\mathcal{L}_{adv}$.
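The resulting attack loop can be sketched as follows; for brevity this sketch substitutes a plain sign-gradient step with projection for the Adam update, and a random linear map for the trained response predictor (both simplifying assumptions, not our actual setup):

```python
import numpy as np

rng = np.random.default_rng(2)
T, J = 8, 45                              # frames, pose dims (15 joints x 3)
W = rng.normal(scale=0.1, size=(J, J))    # stand-in linear "response predictor"
x = rng.uniform(0, 1, size=(T, J))        # natural input action sequence
y_tgt = rng.uniform(0, 1, size=(T, J))    # target adversarial reaction

eps, alpha, M = 0.3, 0.03, 400            # perturbation bound, step size, iterations
depth = np.arange(2, J, 3)                # indices of each joint's depth coordinate
delta = np.zeros_like(x)

init_loss = np.linalg.norm(x @ W - y_tgt)
for _ in range(M):
    resid = (x + delta) @ W - y_tgt       # error between prediction and target
    grad = resid @ W.T                    # gradient of 0.5*||resid||^2 w.r.t. input
    delta[:, depth] -= alpha * np.sign(grad[:, depth])  # perturb depth dims only
    np.clip(delta, -eps, eps, out=delta)  # project back into the eps-ball

final_loss = np.linalg.norm((x + delta) @ W - y_tgt)
```

Only the depth columns of the perturbation are updated, and the clip enforces the $\epsilon$-ball after every step.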
The scaling factor $\lambda$ for the temporal loss term $\mathcal{L}_{temporal}$ was set to 0.1. The tolerance factor $\kappa$ was selected for each target reaction based on our informal user survey in Section \ref{sec:5.1} (the exact values can be found in Table \ref{tab:1}). \subsection{Effectiveness of our AIA Attack}\label{sec:6.2} In this experiment, we examine the effectiveness of our AIA attack under the white-box setting for different values of the maximum allowed perturbation $\epsilon$. For an attack to be considered successful, it has to satisfy two conditions: 1) the adversarial output sequence needs to be recognizable as the target reaction (governed by $\kappa$), and 2) the adversarial input sequence needs to be visually similar enough to the natural input sequence that it can circumvent security detection (governed by $\epsilon$). Hence, the smaller the $\epsilon$ under which the attack succeeds, the more effective the attack. Without loss of generality, and in order to control the overall change to the input sequence, we perturbed only the depth dimension of each joint. This makes it much easier to visualize perturbations. Note that this is a more strictly constrained optimization problem than the one originally proposed, so the outcome of this experiment applies to the original problem as well. \subsubsection{Adversarial Targets.} We created 8 sets of target reactions, corresponding to all 8 interactions in the SBU Kinect Interaction Dataset. The objective of each set of targets is to change the output reactions of all test data into one specific target reaction. We then perform targeted adversarial attacks based on these objectives over a range of $\epsilon$ values. We consider an attack to be successful if the sum term in \eqref{eq:obj} computed on the test datum is less than the human-determined $\kappa$ based on the sample sets; otherwise we consider the attack to have failed. The average attack success rates over all 8 target sets under various $\epsilon$ are reported for both models in the left subfigure of Figure \ref{fig:6}. We used the $\kappa$ values sampled from the human judges to evaluate attack success rates for the 5 surveyed objectives.
We expect $\kappa$ to generalize to unseen reactions, so we used the average $\kappa$ over the 5 surveyed objective sets to evaluate the remaining 3 attack objectives. \begin{figure}[!ht] \includegraphics[width=0.49\linewidth]{whitebox_cropped.pdf} \includegraphics[width=0.49\linewidth]{blackbox_cropped.pdf} \caption{Average white-box (left) and black-box (right) attack success rate of our AIA attack on TCN and DeepGRU.} \label{fig:6} \vspace{-0.1in} \end{figure} \subsubsection{Results.} On average, with a perturbation factor $\epsilon$ of 0.225 to 0.3, our AIA attack is able to alter almost all output sequences of the DeepGRU model into any target sequence. In contrast, a larger $\epsilon$ of 0.375 to 0.45 is necessary for AIA to achieve a similar level of performance on the TCN model. In general, the TCN model is more robust to our attack than the DeepGRU model. Nevertheless, under this white-box setting, we were able to achieve a 100\% attack success rate on almost all target sets for both models. We conclude from this experiment that, when model parameters are available, our AIA attack is very effective against deep sequential regression models. Note that the depth value falls within [0, 7.8125]. This indicates that our AIA algorithm is able to accomplish most attack objectives with small perturbations of 2\% to 5\% of the natural input sequences. More generally, our attack works for any target sequence, not only those corresponding to specific interactions from the dataset. This allows our attack method to pursue both targeted and untargeted goals: for an untargeted goal, the attacker simply picks an arbitrary target sequence that is sufficiently far from the original output. \subsection{Black-Box Transferability} In addition to white-box effectiveness, we examine how transferable our attack is. An adversarial example generated from one model is said to be transferable if it can also fool other independently trained models. In this experiment, we examine the robustness of the TCN model and the DeepGRU model to adversarial examples generated from each other. \subsubsection{Black-box Setting.} We employed the same metric established in Section \ref{sec:5.1} to determine whether an attack is successful. To evaluate how strong our attack is under the black-box setting, we reused the adversarial input sequences from the previous experiment.
We feed all adversarial sequences generated from one model into the other and inspect their effectiveness against the unseen model; that is, we feed adversarial sequences generated from the DeepGRU model into the TCN model and vice versa. The average black-box attack success rates over a range of $\epsilon$ are reported for both models in the right subfigure of Figure \ref{fig:6}. \subsubsection{Results.} Surprisingly, adversarial examples generated from the TCN model are remarkably strong. With an $\epsilon$ value of 0.375 to 0.45, adversarial actions generated from the TCN model successfully fooled the DeepGRU model more than 80\% of the time for almost all attack objectives. Along with the results in Section \ref{sec:6.2}, this substantiates that our AIA attack is highly transferable in addition to being effective. We also observed that adversarial actions generated from the DeepGRU model are rather weak against the TCN model under the black-box setting: they achieve an average success rate of only 30\%, irrespective of the maximum perturbation $\epsilon$ permitted. This is consistent with the TCN model also being more robust than DeepGRU in the white-box setting. We suspect that this is because the convolutional layers used in TCN are more robust than the gated recurrent units of DeepGRU. Specifically, in order to fool the TCN model, the attack needs to take into account the high-level feature maps between the convolutional layers. However, adversarial examples generated from the DeepGRU model might not fool the convolutional layers of TCN because these high-level features were never taken into consideration in the first place. Note that, while being relatively more robust, TCN also yields more transferable attacks. We leave further investigation of this disparity to future work. \section{Conclusion} In this paper, we presented a framework for attacking general spatio-temporal regression models. We proposed the first targeted sequential regression attack that is capable of altering the entire output sequence completely: the Adversarial Interaction Attack (AIA). In addition, we defined an evaluation metric that can be adopted to evaluate the performance of adversarial attacks on sequential regression problems. We demonstrated on variants of two previous state-of-the-art action recognition models, TCN and DeepGRU, that our AIA attack is very effective. Additionally, we showed that AIA attacks are highly transferable when generated from an appropriate source model.
We also discussed, through three case studies, how AIA might impact interactions between humans and AI in real scenarios. We hope this serves to motivate careful consideration of how to effectively incorporate AI-based agents into human daily life. \bibliographystyle{aaai}
\section{Introduction} Cherenkov radiation is a well-understood phenomenon related to the passage of charged, ultra-relativistic particles through a dielectric medium. Since its discovery by P. A. Cherenkov in 1934~\cite{Cherenkov}, it has been studied both experimentally and theoretically with such success that Cherenkov radiation is now routinely used in applications such as Cherenkov particle counters, the study of biomolecules, and astronomical observatories. One particularly interesting application is the search for energetic neutrinos from cosmic sources. Because of their low flux and small interaction cross-sections, this requires very large detectors, with volumes of order 1 km$^3$. Building a detector of this size requires naturally occurring water or ice as the Cherenkov medium. Currently, the largest detector is IceCube \cite{IceCube}, located at the South Pole. It observes the Cherenkov radiation from the charged particles produced in neutrino-induced showers. To understand this radiation, it is necessary to understand how charged particles radiate photons in the Antarctic ice. This ice consists of hexagonal ice crystals that are oriented in the same direction \cite{ice,ice2}. This orientation leads to an anisotropy, and the Cherenkov radiation may depend on the direction of the ice orientation. This is of particular interest because IceCube has already observed an anisotropy in the ice, believed to arise because the scattering depends on the azimuthal direction the photon follows through the medium \cite{ICice}. Any Cherenkov production anisotropy may be confused with this scattering anisotropy, and the presence of a directional Cherenkov anisotropy can also affect neutrino directional and energy reconstructions. This is particularly important for the next-generation PINGU detector, which will need to reach very low levels of systematic error to be able to determine the neutrino mass hierarchy \cite{Aartsen:2014oha}.
The first theory of Cherenkov radiation in an isotropic medium was produced by I. Y. Tamm and I. M. Frank in 1937~\cite{FrankTheTank}. Since then, a number of flawed attempts have been made to describe the same process in an anisotropic medium, where the particle propagation direction becomes important for the Cherenkov emission. In 1956, V. E. Pafomov performed a calculation~\cite{Pafomov} that reproduced the correct emission angles and described the intensity distribution, but failed to give the correct result when applied to an isotropic material. Then, in 1960, C. Muzicar~\cite{Muzicar} obtained a similar result for the emission angles but a different dependence of the number of photons emitted on the propagation direction of the emitting particle with respect to the symmetry axis of the medium. An experiment carried out by D. Gf\"oller \cite{Gfoller} the following year supported Pafomov's result. The first fully convincing calculation of the Cherenkov photon emission from an anisotropic material was given in a 1997 paper~\cite{Delbart} by A. Delbart, J. Derr\'e and R. Chipaux; despite the successful calculation, however, the paper includes a number of typographical errors in the central equations for the Cherenkov photon yield. In this paper, we provide a corrected version of the description of Cherenkov radiation in a uniaxial medium as published by Delbart \textit{et al}. We then proceed to calculate some energy spectra and discuss the possible implications for IceCube and whether this effect should be implemented in the in-ice propagation codes. Lastly, we prescribe a procedure to test the present calculations in a small-scale experiment, and present results relevant for high-energy calorimetry. \section{Theory of Cherenkov radiation in uniaxial optical materials} {\label{sec:theory}} Consider a relativistic charged particle with velocity $\beta c$ moving in an isotropic medium of refractive index $n$.
The particle emits Cherenkov radiation if the condition $\beta n > 1$ is fulfilled; that is, only for relativistic particle speeds. Throughout this paper, we take as input a relativistic charged particle with $\beta=1$. The radiation is emitted at a characteristic angle, $\cos\theta_{\textrm{C}} = (\beta n)^{-1}$, with respect to the propagation direction of the emitting particle. Contrary to other relativistic radiation phenomena like synchrotron radiation and bremsstrahlung, this angle can be quite large: $40^{\degree}$ for ice, and $70^{\degree}$ for the birefringent material rutile, both of which will be discussed later. The emission angle is azimuthally symmetric in isotropic materials. If the material is anisotropic, this symmetry may be broken, and the refractive index may depend on the angle of the incoming particle, complicating the radiation pattern. Here, we study uniaxial materials, where the optical axis defines a direction around which there is azimuthal symmetry. These materials have two different refractive indices. Light polarized along the optical axis experiences a refractive index $n_e$, with $e$ for extraordinary; light with a polarization perpendicular to the optical axis experiences a refractive index $n_o$, with $o$ for ordinary. These terms originate from optics theory and are applied to Cherenkov radiation only for anisotropic media. The ordinary wave denotes photons travelling along the optical axis; their propagation does not depend on the photon polarization. In contrast, the propagation properties of the extraordinary wave depend on its polarization. Photons travelling at other angles experience a refractive index between $n_o$ and $n_e$. The relationship between $\mathbf{\hat{d}}$, a unit vector in the direction of the electric displacement, and $\mathbf{\hat{e}}$, which points in the direction of the electric field, is $d_i = \sum_k \epsilon_{ik} e_k$.
This will be used along with the expression for the phase velocity, $v_p = c/ \sqrt{\mu \epsilon}$. Combining these equations and using that $\mathbf{\hat{d}}$ is a unit vector gives $v_p^2= \mathbf{e}\cdot\mathbf{\hat{d}}\;c^2/ \mu$. We define the geometry as shown in Figure \ref{fig:Geometry}. Because of the symmetry around the optical axis, i.e. $\mathbf{x_1}$, the propagation direction of the emitting particle is defined by the single angle $\chi$, and written \begin{align} \mathbf{\hat{r}} = \begin{pmatrix} r_1 \\ r_2 \\ r_3 \end{pmatrix} = \begin{pmatrix} \cos\chi \\ \sin\chi \\ 0 \end{pmatrix}, \end{align} such that the particle propagates in the $(\mathbf{x_1},\mathbf{x_2})$ plane. When $\chi=0$, the optical axis is aligned with the direction of the radiating particle. In this geometry the dielectric tensor is diagonal with $\epsilon_{11} = n_e^2$ and $\epsilon_{22} =\epsilon_{33} = n_o^2$. \begin{figure}[hbt] \begin{center} \includegraphics[width=.5\textwidth]{Geometry} \caption{The coordinate system and the angles used in the calculation. The $\mathbf{x}_1$ axis is chosen to point in the direction of the optical axis. The angle $\chi$ between the incoming particle and the optical axis is in the plane spanned by $\mathbf{x}_1$ and $\mathbf{x}_2$. } \label{fig:Geometry} \end{center} \end{figure} The unit vector $\mathbf{\hat{k}}$ shown in Figure \ref{fig:Geometry} lies in the plane spanned by $\mathbf{\hat{r}}$ and the optical axis. Rather than expressing a physical direction, $k_1$ and $k_2$ are free parameters that can be chosen so as to simplify the calculation of the integrals below. The unit vector $\mathbf{\hat{u}}$ points in the direction of the Cherenkov wave phase propagation. It is expressed in polar coordinates around $\mathbf{\hat{k}}$, as illustrated in Fig.
\ref{fig:Geometry}: \begin{align} \mathbf{\hat{u}} = \begin{pmatrix} u_1 \\ u_2 \\ u_3 \end{pmatrix} = \begin{pmatrix} k_1\cos(\theta) - k_2\sin(\theta)\cos(\phi) \\ k_2\cos(\theta) + k_1\sin(\theta)\cos(\phi) \\ \sin(\theta)\sin(\phi) \end{pmatrix}. \end{align} The vector $\mathbf{\hat{u}}$ points in the direction of photon propagation when there is no dispersion in the medium. With these definitions, the expression for the differential number of photons emitted in a length interval $dl$ within an energy interval $dE$ is~\cite{Ginzburg} \begin{align} \frac{d^2N}{dldE}=\frac{\alpha c^3}{2\pi \hbar \mu} \int_{0}^{4\pi} \frac{(\mathbf{e}\cdot\mathbf{\hat{r}})^2}{v_p^4}\delta \left(\mathbf{\hat{r}}\cdot \mathbf{\hat{u}}-\frac{v_p}{c\beta}\right) d\Omega, \label{eq:Ginzburg} \end{align} where $\alpha$ is the fine structure constant, $v_p$ is the phase velocity of the emitted wave, $\mathbf{e}$ is a vector in the direction of the electric field, and $\mu$ is the scalar magnetic permeability of the medium, set to $1$ throughout this paper. With the dielectric tensor as defined above, Eq. \ref{eq:Ginzburg} can be decomposed into two contributions representing the numbers of ordinary, $N^{(o)}$, and extraordinary, $N^{(e)}$, photons, respectively. \subsection{The ordinary waves} The following approach parallels \cite{Muzicar} and \cite{Delbart}, but with a few corrections along the way. We use the notation of the latter to calculate the number of Cherenkov photons emitted. For the ordinary wave, $\mathbf{\hat{r}}$ points along $\mathbf{\hat{k}}$, and the integration of Eq. \ref{eq:Ginzburg} over $\theta$ can be done by inserting the values of $\mathbf{e}$, $\mathbf{\hat{r}}$ and $\mathbf{\hat{u}}$.
We then differentiate the result with respect to $\phi$ and obtain the triply differential number of ordinary photons emitted~\footnote{In Ref.~\cite{Delbart}, $\cos(\chi)$ was written in the denominator rather than $\cos(\phi)$.} \begin{align} \label{eq:N_o} \frac{d^3N^{(o)}}{d l d E d\phi} = \frac{\alpha}{2\pi\hbar c} \times \frac{\sin^2(\theta_{o})\sin^2(\chi)\sin^2(\phi)}{1-\left[\cos(\theta_{o})\cos(\chi)-\sin(\chi)\cos(\phi)\sin(\theta_{o})\right]^2}. \end{align} The argument of the Dirac delta function in (\ref{eq:Ginzburg}) determines the geometrical properties of the emitted photons. For the ordinary photons, the phase velocity is $v_p^{(o)} = c/n_o$ and therefore $\mathbf{\hat{r}}\cdot\mathbf{\hat{u}}=1/(n_o\beta)=\cos (\theta _{o})$, which is the Cherenkov emission angle in an isotropic medium. It is a general result that if the particle propagates along the optical axis, $\chi = 0$, no ordinary photons are emitted. \subsection{The extraordinary waves} The calculation of the triply differential number of emitted extraordinary photons follows a route similar to that for the ordinary photons. It is somewhat more complicated because now $\mathbf{\hat{r}} \neq \mathbf{\hat{k}}$. However, $\mathbf{\hat{k}}$ is still a symmetry axis for the emission along $\mathbf{\hat{u}}$, although it turns out that the extraordinary cone has an elliptical shape. Again, the first step is to evaluate the argument of the Dirac delta function, which is $\mathbf{\hat{r}}\cdot\mathbf{\hat{u}} = v_p^{(e)}/c\beta$ for the extraordinary wave. From the definition of the phase velocity, we get for the extraordinary wave \begin{align} v_p^{(e)} = c\sqrt{\cfrac{1}{n_e^2} + \left( \cfrac{1}{n_o^2} - \cfrac{1}{n_e^2} \right) u_1^2}.
\end{align} Using this to solve the Dirac delta function and obtain the geometrical properties of the wave, one finds that the choice \begin{align} \label{eq:k1k2} k_1 = \frac{1}{\sqrt{2}}\sqrt{1+\frac{A}{\sqrt{A^2+4r_1^2r_2^2}}} \\ k_2 = \frac{1}{\sqrt{2}}\sqrt{1-\frac{A}{\sqrt{A^2+4r_1^2r_2^2}}}, \end{align} with \begin{align} \label{eq:A} A = \left( r_1^2-\cfrac{1}{n_o^2\beta^2} \right) - \left( r_2^2-\cfrac{1}{n_e^2\beta^2} \right), \end{align} simplifies the expression (\ref{eq:Ginzburg}) the most. In Appendix A, we show the two final steps needed to evaluate this expression for the extraordinary waves and discuss some errors in the results given in \cite{Muzicar,Delbart}. The number of extraordinary photons emitted is \begin{align} \label{eq:N_e} \frac{d^3N^{(e)}}{d l d E d\phi} = \frac{\alpha}{2\pi\hbar c} \frac{1}{n_o^2 \beta^2}\times \cfrac{-\cfrac{1}{\mathbf{\hat{u} \cdot \hat{r}} n_o^2\beta^2}+\cfrac{1}{1-u_1^2}\left(\mathbf{\hat{u} \cdot \hat{r}}\left(r_1^2\beta^2n_o^2-2\right)+\cfrac{1}{\beta^2n_o^2\mathbf{\hat{u} \cdot \hat{r}}} +2u_2r_2\right)} {\sqrt{\left(R-Q\cos^2(\phi)\right)\left(P-Q\cos^2(\phi)\right)}}, \end{align} where $R = \cfrac{1}{n_e^2\beta^2}$, $Q=\left(k_2r_1-k_1r_2 \right)^2 - \left(\cfrac{1}{n_o^2} - \cfrac{1}{n_e^2}\right)\cfrac{k_2^2}{\beta ^2}$ and $P=\left(k_1r_1+k_2r_2 \right)^2 - \left(\cfrac{1}{n_o^2} - \cfrac{1}{n_e^2}\right)\cfrac{k_1^2}{\beta ^2}$. \subsection{Theory summary - observables} In the previous sections, we listed the corrected results for the number of ordinary and extraordinary photons, differential in emission angle, energy and target thickness. When these contributions are integrated over the azimuthal angle and added, we obtain the total number of photons emitted per unit path length and photon energy.
For the cases $\chi = 0^{\degree}$ and $\chi= 90^{\degree}$ we get, respectively: \begin{align} N_\parallel^t &= \frac{\alpha}{\hbar c} \left(1- \frac{1}{n_o^2\beta^2}\right), \textrm{and}\\ N_\perp^t &= \frac{\alpha}{\hbar c} \left(1- \frac{1}{n_on_e\beta^2}\right). \end{align} When calculated in units of eV$^{-1}$ and mm$^{-1}$, $\frac{\alpha}{\hbar c} \approx 37$. Later, we will calculate the ratio $R_t = N_\perp^t /N_\parallel^t$ to compare the two most extreme scenarios. We will also assume that the refractive indices $n_o$ and $n_e$ are constant. The overall change of the refractive index of ice from the top to the bottom of IceCube due to pressure and temperature variations was estimated~\cite{Price} to be $\Delta n = 0.002$. Ref.~\cite{iceReview} reviews the dependence of the refractive index on the photon wavelength. \section{The Antarctic ice} We have seen in the theory section that the birefringence of crystals affects the emission of Cherenkov radiation. This will be relevant for IceCube if all or most of the crystals in which radiation is emitted point in the same direction. Figure \ref{fig:AntarcticIce} summarizes, from \cite{ice}, the size and orientation of the Antarctic ice crystallites measured at five different locations. The circular plots are called LPOs, for lattice-preferred orientations; they show the crystallographic orientations of the crystallites composing the sample, in stereographic projection (see \cite{LPO} for a textbook definition), and hence the degree to which the individual crystals in the ice point in the same direction. At all five drilling locations, the ice is randomly oriented near the surface and increasingly aligned deeper down, except just next to the bedrock. In all five samples, at the depths of the IceCube optical sensors, {\it i.e.} $1450-2450$ m, the ice crystals are not randomly oriented but rather have a high degree of common orientation.
Generally, the optical axis tends to rotate towards an axis of compression. At large depths the ice crystals are no longer randomly oriented as at the surface, but have a preferred direction which depends on the flow history \cite{Dorte}. No such measurement has been performed at the Geographic South Pole where IceCube is located, and the IceCube dust logger \cite{dustlogger} is insensitive to any azimuthal dependence of the ice properties. However, the IceCube optical modules are equipped with LED flashers which can send signals to other IceCube optical modules. Using this system, the IceCube collaboration has observed a significant anisotropy in the scattering of light at the detector site~\cite{IceCubeIce}. The anisotropy is oriented along the direction of ice flow. It is typically attributed to dust grains in the ice which should be aligned by the pressure gradient in the ice \cite{Bay}, but may have other origins such as the effect discussed here. \begin{figure}[hbt] \begin{center} \includegraphics[width=1\textwidth]{AntarcticIce.jpg} \caption{The blue line shows the grain size as a function of depth for Antarctic ice measured at five different locations. The circular plots called LPOs (see text for a definition) show the orientation of the ice crystals at various depths. The dark tilde symbols represent the bedrock. The distance from the South Pole to Byrd station is 1100 km; to Vostok is 1250 km; to Dome C is 1765 km; to Kohnen station where EPICA DML was drilled is 1670 km; and to the Dome Fuji station is 1250 km. From \cite{ice}. } \label{fig:AntarcticIce} \end{center} \end{figure} \section{Results} \subsection{Relevance for IceCube} We now apply the results presented in the theory section to Antarctic ice, with a focus on the extreme cases with $\chi = 0^{\degree}$ and $\chi = 90^{\degree}$. 
Figure \ref{fig:HexagonalIce} shows the number of emitted photons per azimuthal angle, $d\phi$, energy, $dE$, and path length $dl$, as a function of azimuthal angle, $\phi$. We use values from Figure \ref{fig:wavelength} for $n_o= 1.3115$ and $n_e=1.3192$ given by \cite{Japan} for an ice temperature of $-27.5^{\degree}$ C, in agreement with the standard reference for the optical properties of ice \cite{IceIndices}. The temperature of the ice in IceCube varies from $-32^{\degree}$ C at the top to $-9^{\degree}$ C at the lowest elevation optical detector modules~\cite{Lutz,IceCubeData}. The refractive indices also depend on photon wavelength. To study the extreme case, we use a value of $546$ nm where $n_o$ and $n_e$ differ the most. The detector optical modules used in IceCube are most sensitive at $390$ nm \cite{MaxEfficiency}, but at short wavelengths, light scatters much more than at longer wavelengths, so it travels a shorter distance from its source. \begin{figure}[hbt] \begin{center} \includegraphics[width=.5\textwidth]{ice_best.eps} \caption{Properties of Cherenkov radiation emitted in a hexagonal ice crystal. Top: The number of photons, $\frac{d^3N}{dldEd\phi}$, as a function of azimuthal angle $\phi$. The blue lines are for $\chi = 90^{\degree}$ and the red curve is for $\chi = 0^{\degree}$; see legend for details. Middle: A zoom of the above figure so that the variation in the total number of emitted photons when $\chi = 90^{\degree}$ can be seen. Bottom: Cherenkov radiation emission angle as a function of $\phi$.} \label{fig:HexagonalIce} \end{center} \end{figure} The blue curves in Figure \ref{fig:HexagonalIce} pertain to the case $\chi = 90^{\degree}$, with the solid curve being the sum of the ordinary photons (dashed line) and the extraordinary photons (dotted line). For the case $\chi = 0^{\degree}$, no ordinary photons are emitted and the solid red line only stems from the contribution of extraordinary photons. 
The difference between the solid blue and red curves is the net difference in Cherenkov radiation yield depending on the direction of particle propagation relative to the optical axis. The lower plot in Figure \ref{fig:HexagonalIce} shows how the Cherenkov emission angle $\theta$ depends on the azimuthal angle $\phi$. When $\chi=90^{\degree}$, the ordinary photons are emitted at a constant angle $\theta_{o}=40.32^{\degree}$, while the emission angle of the extraordinary photons varies by $0.17^{\degree}$ with $\phi$, with peaks located at $90^{\degree}$ and $270^{\degree}$. The solid red curve shows the constant emission angle of $\theta = 40.48^{\degree}$ of the extraordinary photons, which are the sole contribution when $\chi = 0^{\degree}$. Lastly, Figure \ref{fig:HexagonalIce} shows that the hexagonal nature of ice affects the number of Cherenkov photons emitted by up to $0.43\%$ depending on the angle $\chi$ of propagation of the radiating particle. The dependence of the refractive indices on wavelength influences the results. Data for both $n_o$ and $n_e$ are available for wavelengths from $250$ to $546$ nm~\cite{Japan}. In both cases, the index of refraction increases with decreasing wavelength. For the available data points, we have recalculated the ratio $R_t$ as a function of wavelength, see Figure \ref{fig:wavelength}. The top panel shows that both the ordinary and extraordinary refractive indices decrease with increasing wavelength. The bottom panel shows that the ratio $R_t$ is relatively stable for wavelengths from about $250$ up to about $450$ nm. The single data point at $546$ nm gives a deviation of $R_t$ from unity which is three to four times larger; this is also where the difference between $n_o$ and $n_e$ is largest according to the top panel. 
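The angles quoted above follow from the Cherenkov condition $\cos\theta = 1/(n\beta)$; for the extraordinary wave at $\chi = 0^{\degree}$ the effective index depends on $\theta$ through the standard uniaxial index surface, $1/n(\theta)^2 = \cos^2\theta/n_o^2 + \sin^2\theta/n_e^2$, so the condition can be solved in closed form. A minimal numerical sketch (the variable names are ours, and $\beta = 1$ is assumed):

```python
import math

N_O, N_E = 1.3115, 1.3192  # ice indices at 546 nm, as quoted above

# Ordinary wave: cos(theta) = 1/(n_o * beta)
theta_o = math.degrees(math.acos(1.0 / N_O))

# Extraordinary wave along the optical axis (chi = 0 deg): combine the
# Cherenkov condition with 1/n(theta)^2 = cos^2/n_o^2 + sin^2/n_e^2
cos2 = (1.0 / N_E**2) / (1.0 - 1.0 / N_O**2 + 1.0 / N_E**2)
theta_e = math.degrees(math.acos(math.sqrt(cos2)))

# Ratio of the two integrated totals from the theory summary (chi = 90 vs 0)
r_t = (1.0 - 1.0 / (N_O * N_E)) / (1.0 - 1.0 / N_O**2)

print(round(theta_o, 2), round(theta_e, 2), round(r_t, 4))
# prints 40.32 40.48 1.0081
```

This reproduces the quoted $40.32^{\degree}$ and $40.48^{\degree}$, with a yield ratio slightly above one, as expected for $n_e > n_o$.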
\begin{figure}[hbt] \begin{center} \includegraphics[width=.5\textwidth]{wavelength.eps} \caption{Top: The ordinary and extraordinary refractive indices of ice for different photon wavelengths~\cite{Japan}. The crosses represent data measured at an ice temperature of $-27.5^{\degree}$C; presented without error bars as in the original paper. The extraordinary refractive index at the largest wavelength of $546$ nm differs somewhat from the trend. Bottom: The ratio $R_t = N_\perp^t /N_\parallel^t$ calculated using the above values. } \label{fig:wavelength} \end{center} \end{figure} The effect of anisotropic Cherenkov emission on neutrino detection depends on the observation channel. For the most energetic event yet seen by IceCube, with an observed energy of 2.6 PeV, the collaboration reported a preliminary angular uncertainty of 0.26 degrees \cite{ATEL}, comparable to these variations. However, this was a muon track; the excellent angular resolution came from the 1 km long lever arm, and it is unlikely that small variations in emission would affect this reconstruction. This also holds for less energetic muons. In IceCube, $\nu_e$ are observed through their electromagnetic showers, where the direction is determined by observing the effects of the Cherenkov cone. However, in IceCube most photons scatter before they are detected, so the Cherenkov cone is largely washed out, and the angular uncertainty for these events is at least 10 to 15 degrees, increasing at lower energies \cite{Aartsen:2014gkd}. It seems unlikely that the variation in Cherenkov angle will have an effect on current resolution, especially considering that uncertainties due to light propagation in the ice are more significant, with an effect of order $10\%$ in IceCube. However, it could be important for smaller detectors with less scattering ({\it e.g.} PINGU) and for improved reconstruction algorithms with better angular resolution. 
The amplitude variation of 0.3\% is smaller than the energy uncertainty of $15\%$ for IceCube (above 10 TeV), so it does not seem to be significant~\cite{ICenergyResolution}. \subsection{Anisotropic radio Cherenkov emission} A few other experiments use the Antarctic ice as a Cherenkov medium. The Askaryan Radio Array (ARA)~\cite{ARA} and the Antarctic Ross Ice-Shelf ANtenna Neutrino Array (ARIANNA)~\cite{ARIANNA1} are proposed experiments to study extremely rare cosmic neutrinos with energy above $\sim 10^{17}$ eV. Both designs take advantage of the Askaryan effect~\cite{Askaryan}: the interaction of a cosmic neutrino with a nucleus near the detector causes an extended particle shower with a net negative charge, due to annihilation of shower positrons with electrons in the medium through which the shower propagates. The propagation of this net charge leads to the emission of radio waves: the Askaryan effect. ARA will consist of an array of multiple measuring stations distributed over roughly $200$ km$^2$; $16$ test stations are already in place, taking data. ARIANNA is a proposed array of multiple measurement stations distributed over roughly $900$ km$^2$ on the Ross ice shelf in Antarctica; 7 test stations are already taking data~\cite{ARIANNA}. ARIANNA is located on a floating ice shelf with a thickness of $\sim 580$ m and it aims at performing both direct and indirect measurements of high energy cosmic neutrinos; in indirect measurements, the radio waves reflect off the ice-water interface before reaching the detectors. The ice sheet at the ARIANNA site was largely formed on the Antarctic plateau and gradually pushed north over a time scale of 100,000 years, going through gaps in the trans-Antarctic mountains onto the Ross Sea. Anisotropies may have formed before, during and/or after the course of this transport. These experiments need to understand how radio waves propagate through Antarctic ice. 
Fortunately, partly because radar is used to survey Antarctic ice and the underlying rock, a fair amount is known about the ice. The refractive indices depend on depth. Below the firn, $R_t = N_\perp^t /N_\parallel^t=0.9975$ at $9.7$ GHz \cite{RadioNeNo}, similar to the value obtained at $39$ GHz~\cite{RadioNeNo2}. The reported refractive indices compare well with the standard value $n=3.18$~\cite{n318} at radio frequencies referred to in ARIANNA papers. Ice crystal orientation is believed to play a significant role, and the birefringence due to oriented ice crystals is appreciable. Refs.~\cite{AnisotropyAndFlow,AnisotropyAndFlow2} show that the scattering of radio waves in Antarctic ice depends on the angle between the radio polarization and the direction of ice flow. So, possible anisotropy in Cherenkov radiation must be considered. Greenland also has glaciers which could host a high-energy neutrino detector; this would provide Northern hemisphere coverage. One group has measured radio propagation near the Summit Station in Greenland~\cite{Greenland}. They find radio attenuation properties that are, after accounting for the warmer temperatures, similar to those measured at the South Pole. Likewise, studies of ice crystal orientation seem to be similar. A plot similar to the present Figure \ref{fig:AntarcticIce}, but for Greenland, is available in~\cite{ice}. Radio waves do not scatter significantly in the ice, so radio-detection experiments directly observe the Cherenkov cone; in fact, measurements of the radio spectrum are used for directional reconstruction, by determining how far off the Cherenkov cone the observer is. So, any alteration of the Cherenkov cone is more important for radio-detection than for optical experiments; the change in Cherenkov angle translates fairly directly into angular uncertainty and hence leads to a change in the apparent neutrino direction. 
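To put the radio numbers in perspective, one can invert the quoted below-firn ratio $R_t = 0.9975$ for the implied index splitting and the corresponding shift of the Cherenkov angle. This is only a rough sketch of our own (assuming $\beta = 1$, $n_o \approx 1.78$ for deep ice, and the same total-yield formulas as in the optical case):

```python
import math

N_O = 1.78    # deep-ice radio refractive index
R_T = 0.9975  # below-firn ratio at 9.7 GHz quoted above

# Invert R_t = (1 - 1/(n_o*n_e)) / (1 - 1/n_o^2) for the extraordinary index
n_e = 1.0 / (N_O * (1.0 - R_T * (1.0 - 1.0 / N_O**2)))

# Corresponding ordinary/extraordinary Cherenkov angles (beta = 1)
theta_o = math.degrees(math.acos(1.0 / N_O))
theta_e = math.degrees(math.acos(1.0 / n_e))
dtheta = theta_o - theta_e  # roughly a 0.2 degree angular shift
```

Under these assumptions the quoted radio asymmetry corresponds to an angular shift of a few tenths of a degree, which supports the statement that any Cherenkov-cone alteration matters more in the radio channel.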
The fractional changes in index of refraction between the two extreme directions are somewhat larger than at optical frequencies, at least at atmospheric pressure. In the top $150$ m, the refractive index of the Antarctic ice changes from $1.35$ to $1.78$, i.e. by $32\%$, for radio waves \cite{nOnDepth,nOnDepth2}. The pressure dependence of the refractive indices has not been studied for hexagonal ice, so it is unknown whether the difference between $n_o$ and $n_e$ remains constant with increasing pressure. \subsection{Calculations for extreme cases, and a possible experiment to test the theory} Here, we calculate the angular emission spectra for Cherenkov photons in a selection of anisotropic media. Table \ref{tbl:Crystals} shows three materials chosen for their large anisotropy: rutile (TiO$_2$), calcium carbonate (CaCO$_3$) and sodium nitrate (NaNO$_3$), plus hexagonal ice (H$_2$O I$_h$) crystals. \begin{table}[ht] \centering \begin{tabular}{l l l l c} Crystal & $n_o$ & $n_e$ & $R$ & Max ang. diff. \\ \toprule TiO$_2$ & 2.616 & 2.903 & 1.017 & 2.32$^{\degree}$ \\ CaCO$_3$ & 1.6584 & 1.4864 & 0.9339 & 5.20$^{\degree}$ \\ NaNO$_3$ & 1.5854 & 1.3369 & 0.8433 & 9.40$^{\degree}$ \\ H$_2$O I$_h$ & 1.309 & 1.313 & 1.003 & 0.39$^{\degree}$ \\ \end{tabular} \caption{Properties of three uniaxial crystals plus H$_2$O I$_h$ and a summary of the influence on the Cherenkov radiation originating from these materials. The rightmost column shows the maximum difference between the emission angle for ordinary and extraordinary photons when $\chi = 90^{\degree}$.} \label{tbl:Crystals} \end{table} The results of the calculations are summarized for the three crystals in Figure \ref{fig:Crystals}. We apply the same legends and distinctions between top and bottom plots as in Figure \ref{fig:HexagonalIce}. 
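Some of the tabulated numbers can be cross-checked with elementary relations: for $\beta = 1$, the difference $\arccos(1/n_e) - \arccos(1/n_o)$ already reproduces the tabulated maximum angular differences for rutile and calcite (the remaining entries require the full formalism), and the simple yield ratio reproduces the lead tungstate asymmetry quoted below. A sketch (the helper names are ours):

```python
import math

def cherenkov_angle(n: float, beta: float = 1.0) -> float:
    """Cherenkov emission angle in degrees for refractive index n."""
    return math.degrees(math.acos(1.0 / (n * beta)))

def yield_ratio(n_o: float, n_e: float, beta: float = 1.0) -> float:
    """R_t = N_perp / N_parallel from the theory-summary formulas."""
    return (1.0 - 1.0 / (n_o * n_e * beta**2)) / (1.0 - 1.0 / (n_o**2 * beta**2))

# Naive ordinary-vs-extraordinary angular differences, beta = 1
d_rutile = abs(cherenkov_angle(2.903) - cherenkov_angle(2.616))    # ~2.32 deg
d_calcite = abs(cherenkov_angle(1.6584) - cherenkov_angle(1.4864))  # ~5.20 deg

# Lead tungstate asymmetry at 430 nm
r_pbwo4 = yield_ratio(2.3459, 2.2319)  # ~0.9887
```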
\begin{figure}[hbt] \begin{center} \includegraphics[width=1\textwidth]{3crystals.eps} \caption{Properties of Cherenkov radiation emitted in three different crystals. For the legend, see Figure \ref{fig:HexagonalIce}. Top: Number of photons, $\cfrac{d^3N}{dldEd\phi}$, as a function of azimuthal angle $\phi$. Bottom: Emission angle as a function of $\phi$. Left: rutile; middle: calcium carbonate; right: sodium nitrate.} \label{fig:Crystals} \end{center} \end{figure} As with hexagonal ice, no ordinary photons are emitted for $\chi = 0^{\degree}$, so there is only one red curve on all the plots. Oscillatory behavior of the ordinary and extraordinary contributions, and of their sum, is visible for all three crystals. The maximum differences in emission angle, $\theta$, are summarized in Table \ref{tbl:Crystals} along with the relative number of emitted photons, $R$. For calcium carbonate and sodium nitrate, $n_e < n_o$ and hence the total number of emitted photons is larger when $\chi = 0^{\degree}$. Finally, we perform calculations for lead tungstate crystals (PbWO$_4$), as used for example in the CMS calorimeter at the CERN LHC. At CMS, the lead tungstate light output is measured with Si avalanche photodiodes at a characteristic wavelength of 430 nm \cite{CMS_APD}. With a parameterization of the ordinary and extraordinary indices of refraction \cite{Chip98}, we get for this wavelength $n_o=2.3459$ and $n_e=2.2319$. The results are shown in Figure \ref{fig:leadtungstate}, and an asymmetry value of $R_t=0.9887$ is obtained, i.e.\ a small but possibly detectable effect. \begin{figure}[hbt] \begin{center} \includegraphics[width=.5\textwidth]{leadTungstate.eps} \caption{Properties of Cherenkov radiation emitted in a lead tungstate crystal. Top: The number of photons, $\frac{d^3N}{dldEd\phi}$, as a function of azimuthal angle $\phi$. The blue lines are for $\chi = 90^{\degree}$ and the red curve is for $\chi = 0^{\degree}$; see legend for details. 
Middle: A zoom of the above figure so that the variation in the total number of emitted photons when $\chi = 90^{\degree}$ can be seen. Bottom: Cherenkov radiation emission angle as a function of $\phi$.} \label{fig:leadtungstate} \end{center} \end{figure} These calculations could be tested in a simple experiment. By choosing a crystal with a large value of $|R-1|$, e.g. sodium nitrate, one would measure the total number of emitted photons as a function of $\chi$. One could also fix $\chi$ and measure the emission angle $\theta$ as a function of azimuthal angle $\phi$. It is even possible to choose a value of $\beta$ such that only parts of the Cherenkov cone would be filled. This measurement would be very sensitive to the value of $\beta$ and would constitute a very precise particle velocity test over a narrow velocity range. In all cases, the design of the crystal would have to take into account the fact that the Cherenkov emission angles are large. This means that the rear end of the crystal must be constructed with a geometry such that total internal reflection is avoided. \section{Conclusions} In anisotropic media, optical Cherenkov emission depends on the angle between the relativistic charged particle and the optical axis of the medium. In oriented ice crystals, the Cherenkov emission rate varies slightly, by 0.3\%, and the emission angle can vary by 0.4 degrees. Since such crystals are found at the South Pole, this effect of anisotropic Cherenkov emission is important to understand for the neutrino experiments located in Antarctica. However, the present results mean that such experiments, like IceCube, can safely neglect the effect of crystal anisotropy in their data analysis. Cherenkov radiation emission in materials where the difference between the ordinary and extraordinary refractive indices is larger than for ice is more strongly affected. 
For the case of lead tungstate, the variation in angle is 1.4 degrees, and the intensity variation is slightly above one percent, both of which may be relevant for precise calorimetry. An experiment to test the accuracy of the formalism seems possible with relatively little effort. \section*{Acknowledgments} SK acknowledges a useful conversation with Ryan Bay (UC Berkeley). This work was supported in part by the U.S. National Science Foundation under grant PHY-1307472 and by the U.S. Department of Energy under contract number DE-AC-76SF00098.
\section{\label{sect_toymodelproducthyperbolic}Parallels to arithmetic hyperbolic manifolds} \subsection{Phenomenology} We want to look at the geometry of Oeljeklaus-Toma manifolds by investigating to what extent there exist some parallels to the behaviour of hyperbolic manifolds in dimension $\geq3$. In the following list it might not be clear (especially to the author of these lines) which analogies are meaningful, and which are accidents\footnote{Especially since mathematics does not have accidents. It has, however, plenty of room for misleading analogies.}: \begin{enumerate} \item Going through the universal covering space, we have the presentation \[ X=\mathbf{H}_{n}/\Gamma\qquad\text{versus}\qquad X^{\prime}=(\mathbf{H}^{s}\times\mathbf{C})/\Gamma^{\prime}\text{,}\] where $X$ denotes a hyperbolic $n$-manifold; $\mathbf{H}_{n}$ denotes hyperbolic $n$-space with the hyperbolic metric. On the right hand side $X^{\prime}$ is an Oeljeklaus-Toma manifold and $\mathbf{H}^{s}\times\mathbf{C}$ is equipped with the Oeljeklaus-Toma locally conformally K\"{a}hler metric. In either case, the group ($\Gamma$ resp. $\Gamma^{\prime}$) is a discrete, finite covolume subgroup of the relevant isometry group. \item In dimension $\geq3$ finite volume hyperbolic manifolds are determined (up to isometry) by their fundamental groups (Mostow-Prasad Rigidity). For Oeljeklaus-Toma manifolds we find in Proposition \ref{prop_reconstruct} that they are also uniquely determined by their fundamental groups. Up to diffeomorphism this is just Mostow Rigidity for real solvmanifolds, but we even get the number field $K$ back. \item As a consequence of Rigidity, the volume of a hyperbolic $n$-manifold for $n\geq3$ is a \textit{topological} invariant. Similarly, once we pick an overall normalization for the K\"{a}hler potential on $\mathbf{H}^{s}\times\mathbf{C}$, diffeomorphic Oeljeklaus-Toma manifolds admit a canonical notion of volume. \item Among the hyperbolic manifolds, there is the special class of `arithmetic' ones. 
They come from arithmetic groups defined through number fields $K$, and their volume relates to the special value $\zeta_{K}(2)$ of the zeta function of the number field. For example, there is Humbert's formula, discovered in 1919, \begin{equation} \operatorname*{Vol}\left( \mathbf{H}_{3}/\operatorname*{PSL}\nolimits_{2}\left( \mathcal{O}_{K}\right) \right) =\frac{1}{4}\cdot\left\vert \triangle_{K/\mathbf{Q}}\right\vert ^{\frac{3}{2}}\cdot\pi^{-2}\cdot\zeta_{K}(2) \label{ly8}\end{equation} for $K$ an imaginary quadratic number field. This can be generalized broadly, for example encompassing number fields with $s\geq1$ real and $t=1$ complex places. One switches to product-hyperbolic geometries \[ \Gamma\circlearrowright\mathbf{H}^{s}\times\mathbf{H}_{3}^{t}\text{,}\qquad\qquad X:=\left( \mathbf{H}^{s}\times\mathbf{H}_{3}^{t}\right) /\Gamma \] and still gets volume formulas of the shape \begin{equation} \operatorname*{Vol}\left( X\right) =(\text{rational factor})\cdot\left\vert \triangle_{K/\mathbf{Q}}\right\vert ^{\frac{3}{2}}\cdot\pi^{-s-2t}\cdot\zeta_{K}(2)\text{.} \label{ly3}\end{equation} See \cite[Thm. 3.2]{MR3117524} for a whole panorama of related volume computations. On the other hand, Proposition \ref{prop_main} shows that the volume of Oeljeklaus-Toma manifolds is \[ (\text{rational factor})\cdot\pi^{-1}\cdot\operatorname*{res}\nolimits_{s=1}\zeta_{K}(s)\text{.}\] This is certainly far less exciting than obtaining $\zeta_{K}(2)$, but it is pleasant to see that the Oeljeklaus-Toma LCK\ metric, i.e. a metric with special complex-geometric properties, \textit{entirely naturally} leads to this kind of volume formula. In some sense, this behaviour was already built-in within Tricerri's LCK metric for the $\mathrm{S}^{0}$ Inoue surface. \item Finally, the work of Thurston and Jorgensen \cite{MR636516} has shown a very interesting structure on the volume distribution among hyperbolic $3$-manifolds. 
In particular, there is a unique hyperbolic $3$-manifold of smallest volume. Asking the same question for Oeljeklaus-Toma manifolds, one also finds that there is (in each dimension) a smallest volume, attained by at least one, and at most finitely many Oeljeklaus-Toma manifolds. Incidentally, the smallest hyperbolic $3$-manifold (the Weeks manifold) is arithmetic, and comes from the \textit{same} number field as the smallest Oeljeklaus-Toma manifold. Quite possibly, however, this is just a sporadic effect of small numbers. \end{enumerate} \[ \begin{tabular} [c]{r|l} hyperbolic $3$-manifold & Oeljeklaus-Toma\\\hline universal cover $\mathbf{H}_{3}$ & universal cover $\mathbf{H}\times\mathbf{C}$\\ $\operatorname*{SL}\nolimits_{2}(\mathcal{O}_{K})$ & $\supseteq\begin{pmatrix} a & b\\ & a^{-1}\end{pmatrix}$\\ hyperbolic metric & locally conformally K\"{a}hler\\ $\zeta_{K}(2)$ & residue at $\zeta_{K}(1)$ \end{tabular} \] There is one thing I should say very clearly: The above analogies might all be purely phenomenological. Many of them could be attributed to Oeljeklaus-Toma manifolds being real solvmanifolds. However, it remains curious that the LCK metric (i.e. a metric chosen for its special holomorphic features) leads to a zeta value volume so naturally. \begin{remark} On the other hand, it must be said that all of the above analogies completely collapse if one considers number fields with $t>1$ complex places as well: \begin{enumerate} \item In this case the formation of $X(K;U)$ depends on a choice, which is perhaps unnatural. \item It seems to be a very delicate question whether there exists some $U$ so that $X(K;U)$ can be equipped with an LCK metric. See Remark \ref{rmk_XKU_can_it_be_lck} for recent work on this issue. This makes it difficult to speak of volume at all. \item We show in Example \ref{rmk_XKU_can_it_be_lck} that a matching of fundamental groups, although it still implies being diffeomorphic, does not allow to reconstruct the field $K$. 
So even if one can come up with a normalized LCK metric of some sort, like the one of Battisti \cite[Appendix]{MR3193953}, it seems improbable that there is a unique choice within each diffeomorphism class. \end{enumerate} \end{remark} \section{Preparations} We shall exclusively use Poincar\'{e}'s upper half plane model for the hyperbolic $2$-space $\mathbf{H}$. The Iwasawa decomposition of the group $\operatorname*{SL}\nolimits_{2}(\mathbf{R})$ is the homeomorphism $\operatorname*{SL}\nolimits_{2}(\mathbf{R})\approx K\times A\times N$, where \[ K: \begin{pmatrix} \cos\theta & -\sin\theta\\ \sin\theta & \cos\theta \end{pmatrix} \qquad A: \begin{pmatrix} a & \\ & a^{-1}\end{pmatrix} \qquad N: \begin{pmatrix} 1 & b\\ & 1 \end{pmatrix} \text{,}\] (for $a>0$, $b\in\mathbf{R}$) are three subgroups; $K=\operatorname*{SO}\nolimits_{2}(\mathbf{R})$ is a maximal compact subgroup. We can use this to obtain a very pleasant parametrization of the complex upper half plane $\mathbf{H}:=\{x+iy\mid y>0\}\subset\mathbf{C}$, namely \[ A\cdot N=\frac{\operatorname*{SL}\nolimits_{2}(\mathbf{R})}{K}=\frac{\operatorname*{SL}\nolimits_{2}(\mathbf{R})}{\operatorname*{SO}\nolimits_{2}(\mathbf{R})}=\mathbf{H}\text{.}\] Let us recall the details: $\operatorname*{SL}\nolimits_{2}(\mathbf{R})$ acts on $\mathbf{H}$ via the standard M\"{o}bius action, i.e. \[ \begin{pmatrix} a & b\\ c & d \end{pmatrix} \cdot z:=\frac{az+b}{cz+d}\] for $z\in\mathbf{H}$ a complex number. Since \[ A\cdot N=\left\{ \left. \begin{pmatrix} a & b\\ & a^{-1}\end{pmatrix} \right\vert a\in\mathbf{R}_{>0}^{\times}\text{, }b\in\mathbf{R}\right\} \text{,}\] the orbit of $i\in\mathbf{H}$ under the action of $A\cdot N$ unwinds as \begin{equation} A\cdot N\cong\mathbf{H}\qquad\begin{pmatrix} \sqrt{y} & \frac{x}{\sqrt{y}}\\ & \frac{1}{\sqrt{y}}\end{pmatrix} \cdot i=x+iy\qquad\in\mathbf{H} \label{l2}\end{equation} for $x\in\mathbf{R}$ and $y>0$; this is obviously simply transitive. 
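The parametrization in Equation \ref{l2}, and the matrix actions computed from it later, can be checked numerically with ordinary complex arithmetic. The following sketch (our own code, not part of the paper) verifies that the displayed matrix sends $i$ to $x+iy$ and that an element of $A\cdot N$ acts by $z\mapsto a^{2}z+ab$:

```python
import math

def mobius(m, z: complex) -> complex:
    """Moebius action of a 2x2 real matrix m = ((a, b), (c, d)) on z."""
    (a, b), (c, d) = m
    return (a * z + b) / (c * z + d)

# The matrix ((sqrt(y), x/sqrt(y)), (0, 1/sqrt(y))) sends i to x + iy
x, y = 0.7, 2.5
m = ((math.sqrt(y), x / math.sqrt(y)), (0.0, 1.0 / math.sqrt(y)))
assert abs(mobius(m, 1j) - (x + 1j * y)) < 1e-12

# An element ((a, b), (0, 1/a)) of A*N acts as z -> a^2 z + a b, which is
# the translation-plus-unit-multiplication shape used further below
a, b = 1.3, -0.4
z = x + 1j * y
assert abs(mobius(((a, b), (0.0, 1.0 / a)), z) - (a * a * z + a * b)) < 1e-12
```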
\section{Oeljeklaus-Toma manifolds} Let $K$ be a number field with $s$ real places and $t$ complex places. Let $n:=[K:\mathbf{Q}]=s+2t$; the ring of integers $\mathcal{O}_{K}$ is a free $\mathbf{Z}$-module of rank $n$ and the group of units $\mathcal{O}_{K}^{\times}$ decomposes as $\mathcal{O}_{K,\operatorname*{tor}}^{\times}\times\mathbf{Z}^{s+t-1}$, where the torsion subgroup $\mathcal{O}_{K,\operatorname*{tor}}^{\times}$ is the group of roots of unity in $K$. Whenever $s\geq1$, we necessarily have $\mathcal{O}_{K,\operatorname*{tor}}^{\times}=\{\pm1\}$. Suppose $\sigma_{1},\ldots,\sigma_{s}:K\rightarrow\mathbf{R}$ are the real embeddings and $\sigma_{s+1},\ldots,\sigma_{s+t},\overline{\sigma_{s+1}},\ldots,\overline{\sigma_{s+t}}:K\rightarrow\mathbf{C}$ the $t$ complex conjugate pairs of complex embeddings (the numbering and choice of complex conjugate partners is non-canonical, but does not affect any of the following). Write $\mathcal{O}_{K}^{\times,+}:=\{x\in\mathcal{O}_{K}^{\times}\mid\sigma_{j}(x)>0$ for $1\leq j\leq s\}$ for the group of totally positive units. Suppose $s\geq1$ and $t\geq1$. Following Oeljeklaus and Toma \cite{MR2141693} we define \begin{equation} X(K;U):=\frac{\mathbf{H}^{s}\times\mathbf{C}^{t}}{\mathcal{O}_{K}\rtimes U}\text{,} \label{ly5}\end{equation} where $U\subseteq\mathcal{O}_{K}^{\times,+}$ is a suitably chosen subgroup; it has to be \textquotedblleft admissible\textquotedblright\ in the sense of \cite{MR2141693}. The semi-direct product $\mathcal{O}_{K}\rtimes U$ is formed by letting $U$ act on $\mathcal{O}_{K}$ by multiplication. See \cite[\S 1]{MR2141693} for details. It is shown there that the action is properly discontinuous, full rank, and holomorphic. As a result, $X(K;U)$ canonically becomes a compact complex manifold. In the present text we will mostly deal with the case $t=1$. It is special in two ways: Firstly, there is a canonical choice for $U$ because $U:=\mathcal{O}_{K}^{\times,+}$ is always an admissible subgroup. 
Secondly, these $X(K;U)$ admit an LCK\ metric. We will review this in \S \ref{sect_InvariantVolumeForm}. \begin{remark} \label{rmk_XKU_can_it_be_lck}In fact by the work of Vuletescu \cite{MR3236651} and Battisti \cite[Appendix, Theorem 8]{MR3193953}, we now know that $X(K;U)$ as in Equation \ref{ly5} admits an LCK metric if and only if for all $\alpha\in U$ one has $\left\vert \sigma_{s+1}(\alpha)\right\vert =\cdots=\left\vert \sigma_{s+t}(\alpha)\right\vert$. For the case we are mostly interested in, i.e. $t=1$, this condition is trivially met. See Dubickas \cite{MR3193953} for an extensive study of whether this condition can be satisfied for $t>1$. \end{remark} We return to the case of $t=1$ complex places and $U:=\mathcal{O}_{K}^{\times,+}$. Firstly, let us point out that the action underlying the definition of $X(K;\mathcal{O}_{K}^{\times,+})$ can be written in a different fashion: \begin{lemma} \label{lemma_sl2_vs_OT_action}On the $\mathbf{H}$-factors in $\mathbf{H}^{s}\times\mathbf{C}$ the M\"{o}bius action of the subgroup \begin{equation} \left\{ \left. \begin{pmatrix} a & b\\ & a^{-1}\end{pmatrix} \right\vert a\in\mathcal{O}_{K}^{\times,+}\text{ and }b\in\mathcal{O}_{K}\right\} \subseteq\operatorname*{SL}\nolimits_{2}\left( \mathcal{O}_{K}\right) \label{ly4}\end{equation} under the embedding \begin{equation} \operatorname*{SL}\nolimits_{2}\left( \mathcal{O}_{K}\right) \subseteq\prod_{v\mid\infty}\operatorname*{SL}\nolimits_{2}\left( K_{v}\right) \subseteq\underset{\circlearrowright\mathbf{H}^{s}}{\operatorname*{SL}\nolimits_{2}\left( \mathbf{R}\right) ^{s}}\times\underset{\circlearrowright\mathbf{C}}{\operatorname*{Aut}(\mathbf{C})} \label{ly6}\end{equation} agrees with the Oeljeklaus-Toma action of the subgroup $\mathcal{O}_{K}\rtimes(\mathcal{O}_{K}^{\times,+})^{2}$. In particular, the subgroup in Equation \ref{ly4} is isomorphic to the semi-direct product $\mathcal{O}_{K}\rtimes(\mathcal{O}_{K}^{\times,+})^{2}$. 
\end{lemma} \begin{proof} This is a straightforward computation. The isomorphism is induced from the map \begin{align*} \varphi:\mathcal{O}_{K}\rtimes(\mathcal{O}_{K}^{\times,+})^{2} & \longrightarrow\operatorname*{SL}\nolimits_{2}\left( \mathcal{O}_{K}\right) \\ (u,v) & \longmapsto\begin{pmatrix} \sqrt{v} & \frac{u}{\sqrt{v}}\\ & \frac{1}{\sqrt{v}}\end{pmatrix}\text{.}\end{align*} Note that each unit in $(\mathcal{O}_{K}^{\times,+})^{2}$ has a unique totally positive square root in $\mathcal{O}_{K}^{\times,+}$ so that the square roots have a well-defined sense. The multiplication on the left-hand side is $(u,v)(\tilde{u},\tilde{v})=(u+v\tilde{u},v\tilde{v})$ and we leave it to the reader to check that $\varphi$ is a group homomorphism. Its image is the group of Equation \ref{ly4} and an inverse is given by \[ \begin{pmatrix} a & b\\ & a^{-1}\end{pmatrix} \longmapsto(ab,a^{2})\qquad\in\mathcal{O}_{K}\rtimes(\mathcal{O}_{K}^{\times,+})^{2}\text{.}\] Thus, the groups are indeed isomorphic. Let $z:=x+iy\in\mathbf{H}$ be an arbitrary point, i.e. \[ x+iy=\begin{pmatrix} \sqrt{y} & \frac{x}{\sqrt{y}}\\ & \frac{1}{\sqrt{y}}\end{pmatrix} \cdot i\text{,}\] following the orbit parametrization coming from Equation \ref{l2}. We compute \[ \begin{pmatrix} 1 & b\\ & 1 \end{pmatrix} \cdot z=\begin{pmatrix} 1 & b\\ & 1 \end{pmatrix} \cdot\begin{pmatrix} \sqrt{y} & \frac{x}{\sqrt{y}}\\ & \frac{1}{\sqrt{y}}\end{pmatrix} \cdot i=(x+b)+iy \] and \[ \begin{pmatrix} a & \\ & a^{-1}\end{pmatrix} \cdot z=\begin{pmatrix} a & \\ & a^{-1}\end{pmatrix} \cdot\begin{pmatrix} \sqrt{y} & \frac{x}{\sqrt{y}}\\ & \frac{1}{\sqrt{y}}\end{pmatrix} \cdot i=a^{2}(x+iy) \] and we immediately see that this agrees under the embedding in Equation \ref{ly6} with the action of the subgroup $\mathcal{O}_{K}\rtimes(\mathcal{O}_{K}^{\times,+})^{2}$ as defined in \cite{MR2141693}. Thus, not only are the groups isomorphic; the Oeljeklaus-Toma action also matches the M\"{o}bius action. 
\end{proof} \section{\label{section_CommutatorSubgroup}The commutator subgroup $[\pi,\pi]$} Next, we want to understand the structure of the commutator subgroup of $\mathcal{O}_{K}\rtimes U$. Since it will not be any more difficult to treat the general case, let us for the moment allow $U$ to be any subgroup of $\mathcal{O}_{K}^{\times}$. The following definition is suggested by the observations in the paper of Parton and Vuletescu \cite[Proof of Thm. 4.2]{MR2875828}: \begin{definition} \label{def_IdealJ}For a number field $K$ and an arbitrary subgroup $U\subseteq\mathcal{O}_{K}^{\times}$ define \[ J(U):=\{\text{\emph{ideal generated by} }1-v\text{ \emph{for all} }v\in U\}\qquad\subseteq\mathcal{O}_{K}\text{.}\] \end{definition} Of course this ideal might just be the entire ring of integers. If one actually wants to compute this ideal, it will be convenient to reduce its definition to a finite number of generators: \begin{lemma} \label{Lemma_JUCanBeDefinedOnGenerators}If $\varepsilon_{1},\ldots,\varepsilon_{s}$ are generators of the group $U$, then the ideal $J(U)$ can also be described as \[ J(U)=(1-\varepsilon_{1},\ldots,1-\varepsilon_{s})\text{.}\] \end{lemma} \begin{proof} It is clear that the $\{1-\varepsilon_{i}\}_{i=1,\ldots,s}$ generate a sub-ideal of $J(U)$, so it suffices to prove the converse inclusion. Observe that if $v,\tilde{v}\in U$ then \begin{equation} v(1-\tilde{v})+(1-v)=1-v\tilde{v}\text{,} \label{lcca1}\end{equation} where the left-hand side lies in the ideal generated by $1-v$ and $1-\tilde{v}$. Therefore, for all products $v=\varepsilon_{1}^{n_{1}}\cdots\varepsilon_{s}^{n_{s}}$ with $n_{1},\ldots,n_{s}\in\mathbf{Z}$ we may inductively rewrite $1-v$ along Equation \ref{lcca1} as an element of the ideal generated by the $\{1-\varepsilon_{i},1-\varepsilon_{i}^{-1}\}_{i=1,\ldots,s}$. Here $1-\varepsilon_{i}^{-1}$ occurs since Equation \ref{lcca1} just allows to reduce products, but not inverses. 
However, we also have
\[
1-v^{-1}=-v^{-1}(1-v)
\]
for all units $v\in U$, showing that the generators $1-\varepsilon_{i}^{-1}$ for $i=1,\ldots,s$ are actually not needed.
\end{proof}

\begin{lemma}
For a number field $K$ and subgroups $U,V\subseteq\mathcal{O}_{K}^{\times}$ we have
\[
J(U)+J(V)=J(U\cdot V)\text{,}
\]
where we have a sum of ideals on the left-hand side and $U\cdot V$ denotes the smallest subgroup of $\mathcal{O}_{K}^{\times}$ containing both $U$ and $V$.
\end{lemma}

\begin{proof}
Every element of $U\cdot V$ has the shape $uv$ with $u\in U,v\in V$ since $\mathcal{O}_{K}^{\times}$ is abelian. By Equation \ref{lcca1} the element $1-uv$ lies in the ideal sum $J(U)+J(V)$. Conversely, any element of the sum can be written as $\sum a_{i}(1-u_{i})$ with $a_{i}\in\mathcal{O}_{K}$ and $u_{i}$ (for each $i$) either in $U$ or in $V$, so in either case $u_{i}\in U\cdot V$.
\end{proof}

\begin{proposition}
\label{Prop_StructureOfCommutator}Let $K$ be a number field and $U\subseteq\mathcal{O}_{K}^{\times}$ a subgroup. Then for the semi-direct product $\pi:=\mathcal{O}_{K}\rtimes U$ we have
\[
\lbrack\pi,\pi]=\{(u,1)\in\pi\mid u\in J(U)\}\text{.}
\]
Analogously, in the subgroup of $\operatorname*{SL}\nolimits_{2}(\mathcal{O}_{K})$ defined in Equation \ref{ly4} the commutator subgroup consists of all matrices
\[
\begin{pmatrix}
1 & b\\
& 1
\end{pmatrix}
\]
with $b\in J(U^{2})$.
\end{proposition}

\begin{proof}
We have $(u,v)(\tilde{u},\tilde{v})=(u+v\tilde{u},v\tilde{v})$ and $(u,v)^{-1}=(-uv^{-1},v^{-1})$.
Since the commutator subgroup $[\pi,\pi]$ is generated by commutators, it is actually generated by all elements of the shape
\begin{align}
(u,v)(\tilde{u},\tilde{v})(u,v)^{-1}(\tilde{u},\tilde{v})^{-1} & =(u+v\tilde{u},v\tilde{v})(-uv^{-1}-\tilde{u}v^{-1}\tilde{v}^{-1},v^{-1}\tilde{v}^{-1})\nonumber\\
& =((1-\tilde{v})u-(1-v)\tilde{u},1)\text{.} \label{lcca2}
\end{align}
This shows that the commutator subgroup $[\pi,\pi]$ is contained in the abelian group $(\mathcal{O}_{K},1)$ and then necessarily a subgroup. Furthermore, since $u,\tilde{u}\in\mathcal{O}_{K}$ are arbitrary, $[\pi,\pi]$ contains the subgroup $(I,1)$ for $I=(1-\tilde{v},1-v)$, i.e. the ideal in $\mathcal{O}_{K}$ generated by the elements $1-\tilde{v},1-v$. Since the latter is true for all $v\in U$ and $[\pi,\pi]$ is closed under addition in $(\mathcal{O}_{K},1)$, it follows that $[\pi,\pi]$ contains all elements $(x,1)$ with $x\in J(U)$. Conversely, from Equation \ref{lcca2} it is clear that all elements in $[\pi,\pi]$ are of the shape $(x,1)$ with $x\in J(U)$, proving the claim. The claim about $\operatorname*{SL}\nolimits_{2}$ follows directly from Lemma \ref{lemma_sl2_vs_OT_action}.
\end{proof}

\begin{proposition}
\label{Prop_KappaAgreesWithOK}Let $K$ be a number field and $U\subseteq\mathcal{O}_{K}^{\times}$ a torsion-free subgroup $\neq1$. Then for $\pi:=\mathcal{O}_{K}\rtimes U$ the kernel $\varkappa$ in the short exact sequence
\begin{equation}
1\longrightarrow\varkappa\longrightarrow\pi\longrightarrow\pi_{\operatorname*{ab},\operatorname*{fr}}\longrightarrow1 \label{tsy1}
\end{equation}
is just the subgroup $\mathcal{O}_{K}$. Moreover, if $U$ is admissible in the sense of \cite{MR2141693}, we have a canonical short exact sequence
\begin{equation}
0\longrightarrow\mathcal{O}_{K}/J(U)\longrightarrow H_{1}(X(K;U),\mathbf{Z})\longrightarrow U\longrightarrow0\text{,} \label{tsy2}
\end{equation}
where $\mathcal{O}_{K}/J(U)$ is precisely the torsion subgroup.
In particular, this group needs at most $s+2t$ generators.
\end{proposition}

\begin{proof}
(after Parton and Vuletescu \cite[Thm. 4.2]{MR2875828}) We have the commutative diagram with exact rows
\begin{equation}
\xymatrix{
1 \ar[r] & [\pi,\pi] \ar[r] \ar[d] & \pi\ar[r] \ar[d]_{\mathrm{id}} & \pi_{\mathrm{ab}} \ar[d] \ar[r] & 1 \\
1 \ar[r] & \mathcal{O}_{K} \ar[r] & \pi\ar[r] & U \ar[r] & 1,
} \label{lsn1}
\end{equation}
where the upper row is formed from abelianization and the lower row from the semi-direct product structure of $\pi$. The existence of the downward arrows follows from Prop. \ref{Prop_StructureOfCommutator}, along with $[\pi,\pi]=J(U)$, where $J(U)$ is an ideal in $\mathcal{O}_{K}$. Since we assume $U\neq1$, $J(U)$ is not the zero ideal and therefore must have finite index in $\mathcal{O}_{K}$ as an abelian group; this means that
\[
m\mathcal{O}_{K}\subseteq\lbrack\pi,\pi]\subseteq\mathcal{O}_{K}
\]
for some $m$. It is now an easy diagram chase\footnote{The Snake Lemma is false for arbitrary non-abelian groups, but it \textit{does} hold for the specific Diagram \ref{lsn1}. The essential reason is that all kernels and cokernels in this diagram exist. This would not necessarily hold for a general diagram of non-abelian groups.} to see that $\ker\left( \pi_{\operatorname*{ab}}\rightarrow U\right) $ is a pure torsion group; in fact it is annihilated by $m$. On the other hand, since $U$ is torsion-free, the kernel of $\pi_{\operatorname*{ab}}\rightarrow U$ must contain the full torsion subgroup. We conclude that $\ker\left( \pi_{\operatorname*{ab}}\rightarrow U\right) $ actually agrees with the torsion subgroup in $\pi_{\operatorname*{ab}}$, and via the snake map with $\mathcal{O}_{K}/J(U)$. Since the right-hand side downward arrow in Diagram \ref{lsn1} is moreover surjective, we deduce that $U=\pi_{\operatorname*{ab},\operatorname*{fr}}$ and therefore $\varkappa$ in Equation \ref{tsy1} agrees with $\mathcal{O}_{K}$.
Finally, if $U$ is admissible, we can form $X(K;U)$ and by the Hurewicz Theorem there is a canonical isomorphism
\[
\pi_{1}(X(K;U),\ast)_{\operatorname*{ab}}\overset{\sim}{\longrightarrow}H_{1}(X(K;U),\mathbf{Z})
\]
and our previous argument decomposes the left-hand side just in the shape of Equation \ref{tsy2}. Since $\mathcal{O}_{K}\simeq\mathbf{Z}^{s+2t}$, every quotient group requires at most $s+2t$ generators itself.
\end{proof}

We should give some examples regarding the structure of $J(U)$. Firstly, it is easy to produce examples where the ideal is non-trivial:

\begin{example}
\label{example_ComputeJ2}We give a few examples for the group of totally positive units, i.e. we consider the ideal norms $N(J(\mathcal{O}_{K}^{\times,+}))=\#\mathcal{O}_{K}/J(\mathcal{O}_{K}^{\times,+})$. We perform this computation for the number fields
\[
F_{m}=\mathbf{Q}[T]/(T^{3}-T+m)\quad G_{m}=\mathbf{Q}[T]/(T^{7}-T-m)\quad H_{m}=\mathbf{Q}[T]/(T^{3}-2T-m)\text{,}
\]
the result spelled out in the respective columns:
\[
\begin{tabular}[c]{r|r|r|r}
$m$ & $F_{m}$ & $G_{m}$ & $H_{m}$\\\hline
$1$ & $1$ & $1$ & $-$\\
$2$ & $2^{2}$ & $2^{2}$ & $2$\\
$3$ & $3^{2}$ & $1$ & $2\cdot3^{2}$\\
$4$ & $2^{3}$ & $2^{2}$ & $-$\\
$5$ & $19$ & $1$ & $2^{4}$\\
$6$ & $-$ & $2^{2}$ & $2\cdot3\cdot5\cdot11$\\
$7$ & $17$ & $1$ & $2\cdot7\cdot109$
\end{tabular}
\]
This table was generated by computer, see
Code \ref{comp_code_ComputeIdealJ} on page \pageref{comp_code_ComputeIdealJ} for details. The dashes \textquotedblleft$-$\textquotedblright\ indicate whenever the given polynomial is not irreducible. Solely for the entertainment of the reader, let us also list a large example: For the randomly chosen number field $\mathbf{Q}[T]/(T^{3}+2T+2000)$ one gets
\[
\#\mathcal{O}_{K}/J(\mathcal{O}_{K}^{\times,+})=2^{2}\cdot5^{2}\cdot7\cdot967\cdot1649120827309715616889\text{.}
\]
As far as I can tell, there does not seem to be an obvious pattern governing the structure of the ideal $J(U)$.
\end{example}

\begin{remark}
\label{remark_MakeJBeOne}If the reader wants to produce infinite families of number fields so that $J(\mathcal{O}_{K}^{\times,+})=(1)$ for all of them, the easiest way is to pick some number field $L$ with $J(\mathcal{O}_{L}^{\times,+})=(1)$. Then take any family $L_{i}$ of number fields and consider the composita $K_{i}:=L_{i}\cdot L$. Then we have
\[
\mathcal{O}_{K_{i}}=J_{L_{i}}(\mathcal{O}_{L_{i}}^{\times,+})\mathcal{O}_{K_{i}}\subseteq J_{K_{i}}(\mathcal{O}_{L_{i}}^{\times,+})\subseteq J_{K_{i}}(\mathcal{O}_{K_{i}}^{\times,+})\text{,}
\]
where $J_{F}$ refers to forming the ideal $J(U)$ with respect to the number field $F$.
\end{remark}

Of course Proposition \ref{Prop_KappaAgreesWithOK} provokes the question: What abelian groups can occur for $\mathcal{O}_{K}/J(U)$ at all? For example, for $s=t=1$ we see that only finite abelian groups with at most three generators are possible; this is of course already in Inoue's original paper \cite[\S 2, p. 274]{MR0342734}. Do all of them really occur? I supply a crude `first approach' for cyclic groups in Prop.
\ref{prop_ConstructInoueSurfaceWithTorsionZm} below, but it is not quite satisfactory.\medskip

There is also a completely different way to characterize $\mathcal{O}_{K}$ inside $\mathcal{O}_{K}\rtimes U$ and it could serve as an alternative definition of $\varkappa$ in the formulation of Proposition \ref{prop_reconstruct}:

\begin{proposition}
\label{prop_IdentifyOKAsLargestZnSubgroup}Let $K$ be a number field and $U\subseteq\mathcal{O}_{K}^{\times}$ a subgroup. Consider the family of all subgroups
\[
\mathcal{H}:=\left\{ H\subseteq\mathcal{O}_{K}\rtimes U\mid H\simeq\mathbf{Z}^{n}\text{ for some }n\geq0\right\} \text{.}
\]
Then there is a maximal $n$ which can occur, and all those realizing the maximal $n$ are partially ordered by inclusion and there is a unique maximal $H$ among them. In fact, this maximal $H$ is the subgroup $\mathcal{O}_{K}$.
\end{proposition}

\begin{proof}
Suppose some non-trivial $H\in\mathcal{H}$ exists and let $(u,v)\in H$ be some element which is not the identity. Since $H$ is abelian, all $(\tilde{u},\tilde{v})\in H$ must commute with $(u,v)$. By Equation \ref{lcca2} this forces
\[
\mathbf{1}_{H}=(u,v)(\tilde{u},\tilde{v})(u,v)^{-1}(\tilde{u},\tilde{v})^{-1}=((1-\tilde{v})u-(1-v)\tilde{u},1)\text{,}
\]
so
\begin{equation}
(1-\tilde{v})u-(1-v)\tilde{u}=0\text{.} \label{lT2}
\end{equation}
Now we need a case distinction:\newline(1) Suppose $v\neq1$. Then in the field $K$ we can solve for $\tilde{u}$ and find
\begin{equation}
\tilde{u}=\frac{1-\tilde{v}}{1-v}u\text{.} \label{lT1}
\end{equation}
For any given $\tilde{v}\in U$ it can be true or false that the right-hand side lies in $\mathcal{O}_{K}$ (recall that $u,v$ are fixed). We obtain that the largest subset of $\mathcal{O}_{K}\rtimes U$ of elements commuting with $(u,v)$ is
\begin{equation}
C_{u,v}:=\left\{ (\tilde{u},\tilde{v})\text{ so that }\tilde{u}=\frac{1-\tilde{v}}{1-v}u\in\mathcal{O}_{K}\right\} \label{lcca3}
\end{equation}
and by definition as a centralizer this is actually a subgroup.
The latter can also be checked directly. As $H$ is abelian and contains $(u,v)$, we must have $H\subseteq C_{u,v}$. We compose the inclusion of $H$ with the projection of the semi-direct product, i.e.
\begin{align*}
H\hookrightarrow C_{u,v}\hookrightarrow\mathcal{O}_{K}\rtimes U & \longrightarrow U\hookrightarrow\mathcal{O}_{K}^{\times}\simeq\mu_{K}\times\mathbf{Z}^{s+t-1}\\
(\tilde{u},\tilde{v}) & \longmapsto\tilde{v}\text{.}
\end{align*}
Since Equation \ref{lT1} implies that $\tilde{u}$ can be computed from $\tilde{v}$ (and $u,v$ were fixed), this composition is actually injective. It follows that the $\mathbf{Z}$-rank of $H$ can be at most $s+t-1$.\newline(2) Suppose $v=1$. Then Equation \ref{lT2} becomes $(1-\tilde{v})u=0$. We have $u\neq0$ since $(u,v)=(0,1)$ would then be the identity element, which we had excluded. Hence, the elements in $\mathcal{O}_{K}\rtimes U$ commuting with $(u,v)$ are precisely those with $\tilde{v}=1$; and these are precisely those forming the subgroup $\mathcal{O}_{K}\simeq\mathbf{Z}^{s+2t}$. We always have $s+2t>s+t-1$, so we conclude that the subgroups of Equation \ref{lcca3} never realize the maximal $\mathbf{Z}$-rank. Instead, it follows that only those $H\subseteq\mathcal{O}_{K}$ with $H\simeq\mathbf{Z}^{s+2t}$ realize the maximal $\mathbf{Z}$-rank, and among all such $H$ contained in $\mathcal{O}_{K}$ clearly the full group $\mathcal{O}_{K}\simeq\mathbf{Z}^{s+2t}$ is the unique maximal one.
\end{proof}

\section{Proof of Proposition \ref{prop_reconstruct}}

This is inspired from the perspective taken in the proof of \cite[Thm. 4.2]{MR2875828} by Parton and Vuletescu.

\begin{proof}
[Proof of Prop. \ref{prop_reconstruct}]Since $X$ is an Oeljeklaus-Toma manifold, it is connected and therefore $\pi:=\pi_{1}(X,x)$ is well-defined up to the inner automorphism coming from the choice of picking a base point.
We denote by $\pi_{\operatorname*{ab}}$ its maximal abelian quotient, and by $\pi_{\operatorname*{ab},\operatorname*{fr}}$ the maximal torsion-free quotient of the latter. We may then define $\varkappa$ just as the corresponding kernel in the short exact sequence
\begin{equation}
1\longrightarrow\varkappa\longrightarrow\pi\longrightarrow\pi_{\operatorname*{ab},\operatorname*{fr}}\longrightarrow1\text{.}\label{ltt7}
\end{equation}
Since $\mathbf{H}^{s}\times\mathbf{C}$ is contractible and the action of $\mathcal{O}_{K}\rtimes\mathcal{O}_{K}^{\times,+}$ is easily checked to be free, the Oeljeklaus-Toma manifold $X=(\mathbf{H}^{s}\times\mathbf{C})/(\mathcal{O}_{K}\rtimes\mathcal{O}_{K}^{\times,+})$ is actually a classifying space for the group, i.e. its homotopy type is a $B(\mathcal{O}_{K}\rtimes\mathcal{O}_{K}^{\times,+})=K(\mathcal{O}_{K}\rtimes\mathcal{O}_{K}^{\times,+},1)$, an Eilenberg-MacLane space: $\pi_{1}(X,x)\simeq\mathcal{O}_{K}\rtimes\mathcal{O}_{K}^{\times,+}$ (and moreover $\pi_{i}(X,x)=0$ for $i\geq2$, but we do not need this). By Prop. \ref{Prop_KappaAgreesWithOK} we already know that for the group $\pi=\mathcal{O}_{K}\rtimes\mathcal{O}_{K}^{\times,+}$ our Equation \ref{ltt7} becomes
\[
1\longrightarrow\mathcal{O}_{K}\longrightarrow\pi\longrightarrow\mathcal{O}_{K}^{\times,+}\longrightarrow1\text{,}
\]
where $K$ is our still unknown number field. Of course we just know $\mathcal{O}_{K}$ as an abelian group under addition here; we do not know the ring structure. However, by the rank of rings of integers and Dirichlet's Unit Theorem, we know that there exist non-unique isomorphisms
\begin{equation}
\mathcal{O}_{K}\simeq\mathbf{Z}^{s+2t}\qquad\text{and}\qquad\mathcal{O}_{K}^{\times,+}\simeq\mathbf{Z}^{s+t-1}\text{,}\label{ltt8}
\end{equation}
where $s,t$ are the number of real and complex places of $K$. We have $t=1$ by assumption\footnote{we had restricted our attention to this case in the entire text right from the beginning}.
We recall that for any short exact sequence of groups as in Equation \ref{ltt7} there is a morphism $\rho:\pi_{\operatorname*{ab},\operatorname*{fr}}\rightarrow\operatorname*{Out}(\varkappa)$, $\phi\mapsto(x\mapsto\tilde{\phi}x\tilde{\phi}^{-1})$, where $\tilde{\phi}$ is any lift of $\phi\in\pi_{\operatorname*{ab},\operatorname*{fr}}$ to $\pi$ and \textquotedblleft$\operatorname*{Out}$\textquotedblright\ denotes the group of outer automorphisms, i.e. automorphisms modulo conjugations. Since $\mathcal{O}_{K}$ is abelian and conjugations are trivial, we may lift this $\rho$ to take values in $\operatorname*{Aut}(\varkappa)$, and moreover $\rho$ recovers the action used in the formation of the semi-direct product. From Equation \ref{ltt8} we already know that $\rho$ becomes
\[
\rho:\mathcal{O}_{K}^{\times,+}\longrightarrow\operatorname*{GL}(\mathbf{Z}^{s+2})
\]
in our particular situation. Choosing a different splitting would just change $\rho$ into an equivalent representation. Since we know that $\pi=\mathcal{O}_{K}\rtimes\mathcal{O}_{K}^{\times,+}$ was formed by letting $\mathcal{O}_{K}^{\times,+}$ act by multiplication on $\mathcal{O}_{K}$, we know that $\rho$ must (up to conjugation) be precisely the multiplication action. Hence, the minimal polynomial of any $\alpha\in\mathcal{O}_{K}^{\times,+}$ acting on $\mathbf{Z}^{s+2}$ will be nothing but its minimal polynomial as an algebraic number. A generically chosen $\alpha\in\mathcal{O}_{K}^{\times,+}$ is a primitive element for the field extension $K/\mathbf{Q}$, so $K$ is uniquely determined. Note moreover that any conjugation does not affect minimal polynomials, so it does not matter that we only know $\rho$ up to equivalence. We conclude that all complex eigenvalues of all $\alpha\in\mathcal{O}_{K}^{\times,+}$ lie in $K^{\operatorname*{g}}$.
Now we are done if adjoining all of them to $\mathbf{Q}$ contains $K$ (because adjoining all roots of the minimal polynomials produces a Galois extension and the smallest Galois extension containing $K$ is $K^{\operatorname*{g}}$). Suppose not. This means that $K$ is a number field such that $\mathcal{O}_{K}^{\times,+}$ lies in a proper subfield $L\subsetneqq K$, so even $\mathcal{O}_{K}^{\times,+}\subseteq\mathcal{O}_{L}^{\times}$, and we get
\begin{equation}
s^{\prime}+2t^{\prime}<s+2\qquad s\leq s^{\prime}+t^{\prime}-1\label{ltt9}
\end{equation}
for the real and complex places $s^{\prime},t^{\prime}$ of $L$ by comparing ranks of rings of integers and units. This forces $t^{\prime}=0$ and $s^{\prime}=s+1$, so
\[
s+2=[K:\mathbf{Q}]=[K:L]\cdot\lbrack L:\mathbf{Q}]=[K:L](s+1)\text{,}
\]
implying $s=0$, which is impossible since our Oeljeklaus-Toma manifolds always come from number fields with at least one real place.
\end{proof}

\begin{remark}
Instead of identifying $\mathcal{O}_{K}$ inside the fundamental group via the torsion-free quotient of the abelianization, it can also be characterized as \textquotedblleft the largest\textquotedblright\ subgroup isomorphic to $\mathbf{Z}^{n}$ for some $n$. See Proposition \ref{prop_IdentifyOKAsLargestZnSubgroup} for this alternative perspective.
\end{remark}

There is also the following harmless generalization:

\begin{proposition}
\label{prop_reconstruct_with_U}Let $X$ be known to be of the shape
\[
X(K;U):=\left( \mathbf{H}^{s}\times\mathbf{C}\right) /\left( \mathcal{O}_{K}\rtimes U\right)
\]
with $U$ a finite-index subgroup of $\mathcal{O}_{K}^{\times,+}$ for some number field $K$ which has $s\geq1$ real places and precisely one complex place.

\begin{enumerate}
\item Then $X(K;U)$ is an LCK manifold.

\item Just knowing its fundamental group $\pi$, the field $K^{\operatorname*{g}}$ can be reconstructed from $\pi$ by the same recipe as in Prop. \ref{prop_reconstruct}.
\item We have
\begin{align*}
& \sum_{\sigma\in\operatorname*{Gal}(K^{\operatorname*{g}}/\mathbf{Q})}\sigma U\\
& \qquad\quad=\left\{ \alpha\in\mathcal{O}_{K^{\operatorname*{g}}}^{\times}\left\vert
\begin{array}[c]{l}
\text{there exists }x\in\pi_{\operatorname*{ab},\operatorname*{fr}}\text{ such that }\rho(x)\in\operatorname*{GL}\nolimits_{n}(\mathbf{Z})\text{ has}\\
\text{the same minimal polynomial as }\alpha\text{.}
\end{array}
\right. \right\} \text{.}
\end{align*}
The sum on the left-hand side is the smallest subgroup of $\mathcal{O}_{K^{\operatorname*{g}}}^{\times}$ containing $U$ and being closed under the Galois action of $K^{\operatorname*{g}}/\mathbf{Q}$.
\end{enumerate}
\end{proposition}

\begin{proof}
(1) Once $\mathcal{O}_{K}^{\times,+}$ is admissible, any finite-index subgroup like $U$ is as well, so $X(K;U)$ is just an instance of the construction in \cite{MR2141693}. (2) The proof of Prop. \ref{prop_reconstruct} applies word for word, just replace each $\mathcal{O}_{K}^{\times,+}$ by $U$ and whenever Dirichlet's Unit Theorem is applied, use that a finite-index subgroup of $\mathbf{Z}^{n}$ must itself be isomorphic to $\mathbf{Z}^{n}$. (3) For any $\alpha\in\pi_{\operatorname*{ab},\operatorname*{fr}}\cong U$ the minimal polynomial of $\rho(\alpha)$ matches the minimal polynomial of $\alpha$ as an algebraic integer.
\end{proof}

\begin{example}
\label{Example_CannotReconstructForMoreComplexPlaces}In this example we will show that for $t>1$ complex places, the construction $X(K;U)$ of \cite{MR2141693} can also produce (non-LCK) complex manifolds with isomorphic fundamental groups so that the Galois closures of their underlying number fields differ. Thus, the scope of Proposition \ref{prop_reconstruct} does not extend to general $X(K;U)$. To this end, consider the number fields
\[
L_{1}:=\mathbf{Q}[S]/(S^{3}+S+1)\quad L_{2}:=\mathbf{Q}[T]/(T^{3}-T+2)\quad L_{3}:=\mathbf{Q}[T]/(T^{3}-T+1)\text{.}
\]
All of these fields satisfy $s=t=1$.
Henceforth, we shall also write $S$ and $T$ to denote the image of these elements in these fields. The element $S$ lies in $\mathcal{O}_{L_{1}}^{\times}$ since its minimal polynomial $S^{3}+S+1$ has constant coefficient $1$, so it must be a unit. We compute its single real embedding to be $-0.6823\ldots$, so $S^{2}\in\mathcal{O}_{L_{1}}^{\times,+}$ generates a subgroup isomorphic to $\mathbf{Z}$. One can show that $\mathcal{O}_{L_{1}}^{\times,+}=\mathbf{Z}\left\langle -S\right\rangle $, but we will not need to know this. Now fix $i=2,3$ and consider the compositum
\[
\xymatrix{
& L_{1}\cdot L_{i} \\
L_{1} \ar[ur] & & L_{i} \ar[ul] \\
& \mathbf{Q} \ar[ul] \ar[ur]
}
\]
We find that $L_{1}\cdot L_{i}$ is a degree $9$ number field with $s=1$ real places and $t=4$ complex places. Since $T$ is integral, the submodule
\begin{equation}
J_{i}:=\mathbf{Z}\left\langle 1,S,S^{2},T,T^{2},ST,S^{2}T,ST^{2},S^{2}T^{2}\right\rangle \qquad\subset L_{1}\cdot L_{i} \label{lup1}
\end{equation}
defines a subring of the ring of integers and one checks that we actually have equality. As far as I can tell, confirming this is the only difficult part of this computation. See Code \ref{comp_code_SageForExampleNoReconstruct} on page \pageref{comp_code_SageForExampleNoReconstruct} for a verification by computer. Since our number field has precisely one real place, the admissible subgroups of $\mathcal{O}_{L_{1}\cdot L_{i}}^{\times,+}$ (in the sense of \cite{MR2141693}) have rank one and we will simply take $S^{2}\in\mathcal{O}_{L_{1}}^{\times,+}$ as the generator. The action on the basis elements of Equation \ref{lup1} is easy to compute, we obtain
\begin{equation}
\begin{array}[c]{lll}
S\cdot1=S & S\cdot T=ST & S\cdot S^{2}T=(-S-1)T\\
S\cdot S=S^{2} & S\cdot T^{2}=ST^{2} & S\cdot ST^{2}=S^{2}T^{2}\\
S\cdot S^{2}=-S-1 & S\cdot ST=S^{2}T & S\cdot S^{2}T^{2}=(-S-1)T^{2}\text{.}
\end{array}
\label{lup2}
\end{equation}
The action of $S^{2}$ follows immediately.
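The multiplication table in Equation \ref{lup2} can be verified mechanically. A small sketch in plain Python (not the verification code referenced above), representing elements of $\mathbf{Z}[S]/(S^{3}+S+1)$ by coefficient triples and reducing with the relation $S^{3}=-S-1$:

```python
# Elements of Z[S]/(S^3 + S + 1) as triples (c0, c1, c2)
# meaning c0 + c1*S + c2*S^2; reduction rule: S^3 = -S - 1.

def mul_by_S(c):
    c0, c1, c2 = c
    # S*(c0 + c1*S + c2*S^2) = c0*S + c1*S^2 + c2*S^3
    #                        = -c2 + (c0 - c2)*S + c1*S^2
    return (-c2, c0 - c2, c1)

one, S, S2 = (1, 0, 0), (0, 1, 0), (0, 0, 1)

assert mul_by_S(one) == S            # S * 1   = S
assert mul_by_S(S) == S2             # S * S   = S^2
assert mul_by_S(S2) == (-1, -1, 0)   # S * S^2 = -S - 1

# The T-columns of the table follow from the same reduction, since T
# commutes with S and is untouched by the rule S^3 = -S - 1.
print("table rows verified")
```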
The main point here is that this table does not depend on whether $i=2$ or $3$. For the complex manifolds $X_{i}:=(\mathbf{H}\times\mathbf{C}^{4})/(\mathcal{O}_{L_{1}\cdot L_{i}}\rtimes\left\langle S^{2}\right\rangle )$ of \cite[\S 1]{MR2141693} this table entirely determines the group structure of the semi-direct product
\[
\pi_{1}(X_{i},\ast)=J_{i}\rtimes\left\langle S^{2}\right\rangle =\mathbf{Z}^{9}\rtimes\mathbf{Z}\text{.}
\]
Hence, $X_{2}$ and $X_{3}$ have isomorphic fundamental groups. Since they are actually classifying spaces, it follows that they even have the same homotopy type. It follows from a result of Oeljeklaus-Toma \cite[Prop. 2.9]{MR2141693} that the manifolds $X_{i}$ \textsl{do not }admit LCK metrics. They are also concrete examples which are not `of simple type' (in the sense of \cite[Definition 1.5 and Remark 1.8]{MR2141693}). Their underlying number fields have Galois closure $(L_{1}\cdot L_{i})^{\operatorname*{g}}=L_{1}^{\operatorname*{g}}\cdot L_{i}^{\operatorname*{g}}$. It is easy to compute $L_{i}^{\operatorname*{g}}$ for $i=1,2,3$ and we find that each of them has degree $6$ over $\mathbf{Q}$. We also find that $L_{1}^{\operatorname*{g}}\cdot L_{2}^{\operatorname*{g}}\cdot L_{3}^{\operatorname*{g}}$ has degree $6^{3}=216$ over $\mathbf{Q}$. This implies that the degree $36$ fields $L_{1}^{\operatorname*{g}}\cdot L_{2}^{\operatorname*{g}}$ and $L_{1}^{\operatorname*{g}}\cdot L_{3}^{\operatorname*{g}}$ must be different. See Code \ref{comp_code_SageForExampleNoReconstruct} on page \pageref{comp_code_SageForExampleNoReconstruct} for an automated verification of this example by a computer algebra system.
\end{example}

Going far beyond the case of just Oeljeklaus-Toma manifolds, it is a classical theorem due to Mostow that any two compact solvmanifolds with isomorphic fundamental groups must be diffeomorphic \cite[Theorem A]{MR0061611}.
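Returning once more to Definition \ref{def_IdealJ}: the index $\#\mathcal{O}_{K}/J(\mathcal{O}_{K}^{\times,+})$ can also be computed entirely by hand in the real quadratic field $\mathbf{Q}(\sqrt{2})$ (which lies outside the $t=1$ setting of this text, but illustrates the definition). There the totally positive units are generated by $(1+\sqrt{2})^{2}=3+2\sqrt{2}$, so by Lemma \ref{Lemma_JUCanBeDefinedOnGenerators} we have $J=(1-(3+2\sqrt{2}))=(-2-2\sqrt{2})=(2)$, of index $4$. A sketch in plain Python:

```python
# Z[sqrt(2)] represented as pairs (a, b) meaning a + b*sqrt(2).

def mul(x, y):
    a, b = x
    c, d = y
    return (a * c + 2 * b * d, a * d + b * c)

def norm(x):
    a, b = x
    return a * a - 2 * b * b  # N(a + b*sqrt(2)) = a^2 - 2*b^2

eps = (1, 1)           # fundamental unit 1 + sqrt(2), of norm -1
eps2 = mul(eps, eps)   # (3, 2): generator of the totally positive units
assert eps2 == (3, 2) and norm(eps2) == 1

gen = (1 - eps2[0], -eps2[1])  # 1 - eps^2 = -2 - 2*sqrt(2)
# The ideal (gen) is principal, so its index in Z[sqrt(2)] is |N(gen)|.
assert abs(norm(gen)) == 4
print("index of J =", abs(norm(gen)))
```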
\section{\label{sect_InvariantVolumeForm}The invariant volume form}

Oeljeklaus and Toma found a very nice canonical LCK metric on $X(K;\mathcal{O}_{K}^{\times,+})$. See \cite{MR1481969} or \cite{MR0418003} for an introduction to LCK metrics. This Hermitian metric is induced from a global K\"{a}hler potential on the universal covering space $\mathbf{H}^{s}\times\mathbf{C}=\{(z_{1},\ldots,z_{s+1})\mid z_{1},\ldots,z_{s}\in\mathbf{H}$, $z_{s+1}\in\mathbf{C}\}$. The relevant K\"{a}hler potential is
\begin{equation}
\phi:=\phi_{1}+\left\vert z_{s+1}\right\vert ^{2} \label{lyt1}
\end{equation}
with
\[
\phi_{1}:=\prod_{j=1}^{s}\frac{i}{(z_{j}-\overline{z_{j}})}=\frac{1}{2^{s}}\prod_{j=1}^{s}\frac{1}{y_{j}}\text{,}
\]
where $z_{j}=x_{j}+iy_{j}$ for $x_{j},y_{j}\in\mathbf{R}$, cf. \cite{MR2141693}, modulo the typo corrected in \cite{MR2875828}. Then the underlying Riemannian metric and the associated $(1,1)$-form are given by
\begin{equation}
g=\frac{1}{2}{\textstyle\sum_{k,l=1}^{s+1}}g_{kl}\left( \mathrm{d}z_{k}\otimes\mathrm{d}\overline{z_{l}}+\mathrm{d}\overline{z_{l}}\otimes\mathrm{d}z_{k}\right) \qquad\omega={\textstyle\sum_{k,l=1}^{s+1}}\frac{i}{2}g_{kl}\mathrm{d}z_{k}\wedge\mathrm{d}\overline{z_{l}} \label{lyt2}
\end{equation}
with $g_{kl}:=(\partial_{z_{k}}\partial_{\overline{z_{l}}}\phi)$. One may follow the efficient explicit computation in \cite[proof of Thm. 5.1]{MR2875828}, leading us to
\[
(g_{kl})=
\begin{pmatrix}
\frac{\phi_{1}}{2y_{1}^{2}} & \frac{\phi_{1}}{4y_{1}y_{2}} & \frac{\phi_{1}}{4y_{1}y_{3}} & \cdots & 0\\
\frac{\phi_{1}}{4y_{1}y_{2}} & \frac{\phi_{1}}{2y_{2}^{2}} & & & 0\\
\vdots & & \ddots & & \vdots\\
\frac{\phi_{1}}{4y_{1}y_{s}} & & & \frac{\phi_{1}}{2y_{s}^{2}} & 0\\
0 & 0 & \cdots & 0 & 2
\end{pmatrix}
\text{.}
\]
In particular, the determinant of $\left( g_{kl}\right) $ is twice the determinant of the top left $(s\times s)$-minor.
For the latter, the Leibniz formula yields
\begin{align*}
& \det(\text{top left }(s\times s)\text{-minor})\\
& \qquad=\sum_{\sigma\in\Sigma_{s}}\operatorname*{sgn}(\sigma)2^{\#\{j\mid j=\sigma(j)\}}\frac{\phi_{1}}{4y_{1}y_{\sigma(1)}}\cdots\frac{\phi_{1}}{4y_{s}y_{\sigma(s)}}\\
& \qquad=\frac{\phi_{1}^{s}}{4^{s}}\left( \sum_{\sigma\in\Sigma_{s}}\operatorname*{sgn}(\sigma)2^{\#\{j\mid j=\sigma(j)\}}\right) \frac{1}{y_{1}^{2}\cdots y_{s}^{2}}\text{.}
\end{align*}
The inner bracket is easily seen to be $s+1$. One way to evaluate this is as follows: We readily see that the bracket agrees with
\[
\det
\begin{pmatrix}
2 & 1 & \cdots & 1\\
1 & 2 & 1 & \vdots\\
\vdots & 1 & 2 & 1\\
1 & \cdots & 1 & 2
\end{pmatrix}
=\left( -1\right) ^{s}\det\left( -\operatorname*{id}-
\begin{pmatrix}
1 & \cdots & 1\\
\vdots & 1 & \vdots\\
1 & \cdots & 1
\end{pmatrix}
\right) \text{,}
\]
so the determinant is nothing but $\left( -1\right) ^{s}p_{A}(-1)$ with $p_{A}(t):=\det(t-A)$ the characteristic polynomial of the matrix $A$ whose entries are all $1$. The latter has $(1,1,\ldots,1)^{t}$ as an eigenvector for its single non-zero eigenvalue $s$. Therefore, the characteristic polynomial is $(t-s)t^{s-1}$. Hence, $(-1)^{s}p_{A}(-1)=(-1-s)(-1)^{2s-1}=(s+1)$, proving the claim. Returning to our main computation,
\begin{equation}
\det(g_{kl})=2\cdot\frac{(s+1)}{4^{s}}\frac{1}{y_{1}^{2}\cdots y_{s}^{2}}\phi_{1}^{s}=\frac{(s+1)}{2^{2s+s^{2}-1}}\frac{1}{y_{1}^{s+2}\cdots y_{s}^{s+2}}\text{.} \label{ltf1}
\end{equation}
The K\"{a}hler potential of Equation \ref{lyt1} defines a genuine K\"{a}hler form on $\mathbf{H}^{s}\times\mathbf{C}$. One easily checks that translations from $\mathcal{O}_{K}$ leave it invariant, while the multiplication with elements $\alpha\in\mathcal{O}_{K}^{\times,+}$ changes it by a homothety.
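The combinatorial identity $\sum_{\sigma\in\Sigma_{s}}\operatorname{sgn}(\sigma)2^{\#\{j\mid j=\sigma(j)\}}=s+1$ used above is also easy to confirm by brute force for small $s$; a quick sketch:

```python
from itertools import permutations

def sign(perm):
    # parity of a permutation given as a tuple of 0..n-1 (cycle decomposition)
    n, seen, sgn = len(perm), [False] * len(perm), 1
    for i in range(n):
        if seen[i]:
            continue
        j, length = i, 0
        while not seen[j]:
            seen[j] = True
            j = perm[j]
            length += 1
        if length % 2 == 0:  # even-length cycle flips the sign
            sgn = -sgn
    return sgn

def bracket(s):
    # sum over sigma in Sigma_s of sgn(sigma) * 2^(number of fixed points)
    total = 0
    for perm in permutations(range(s)):
        fixed = sum(1 for j in range(s) if perm[j] == j)
        total += sign(perm) * 2 ** fixed
    return total

for s in range(1, 7):
    assert bracket(s) == s + 1  # matches det(id + all-ones matrix) = s + 1
print([bracket(s) for s in range(1, 7)])
```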
More precisely, from Equation \ref{lyt2} we get
\[
\omega=\frac{i}{2^{s+3}y_{1}\cdots y_{s}}\left( \sum_{k,l=1}^{s}\frac{2^{\delta_{k=l}}}{y_{k}y_{l}}\mathrm{d}z_{k}\wedge\mathrm{d}\overline{z_{l}}\right) +i\mathrm{d}z_{s+1}\wedge\mathrm{d}\overline{z_{s+1}}
\]
and therefore
\begin{align*}
\alpha^{\ast}\omega & =\frac{i}{2^{s+3}\sigma_{1}(\alpha)\cdots\sigma_{s}(\alpha)y_{1}\cdots y_{s}}\left( \sum_{k,l=1}^{s}\frac{2^{\delta_{k=l}}}{\sigma_{k}(\alpha)\sigma_{l}(\alpha)y_{k}y_{l}}\sigma_{k}(\alpha)\overline{\sigma_{l}(\alpha)}\mathrm{d}z_{k}\wedge\mathrm{d}\overline{z_{l}}\right) \\
& \qquad+i\left\vert \sigma_{s+1}(\alpha)\right\vert ^{2}\mathrm{d}z_{s+1}\wedge\mathrm{d}\overline{z_{s+1}}\text{.}
\end{align*}
Here $\alpha^{\ast}$ denotes the pullback along the action of $\alpha\in\mathcal{O}_{K}^{\times,+}$. Note that $\prod_{j=1}^{s+2}\sigma_{j}(\alpha)=N(\alpha)=+1$ (usually $\pm1$ since it is a unit, but $+1$ since all real embeddings $\sigma_{j}(\alpha)>0$ are positive by assumption and the remaining factor is $\left\vert \sigma_{s+1}(\alpha)\right\vert ^{2}=\left\vert \sigma_{s+1}(\alpha)\sigma_{s+2}(\alpha)\right\vert $ of the last pair of complex embeddings; this is also positive). Hence, $\frac{1}{\sigma_{1}(\alpha)\cdots\sigma_{s}(\alpha)}=\left\vert \sigma_{s+1}(\alpha)\right\vert ^{2}$, thus $\alpha^{\ast}\omega=\left\vert \sigma_{s+1}(\alpha)\right\vert ^{2}\omega$. The group $\mathcal{O}_{K}\rtimes\mathcal{O}_{K}^{\times,+}$ acts by homotheties on the honest K\"{a}hler form $\omega$, therefore it does not descend to the quotient and will not equip it with a K\"{a}hler metric itself, but it means that the quotient is at least locally conformally K\"{a}hler (LCK). The relevant invariant form is
\[
\tilde{\omega}:=y_{1}\cdots y_{s}\omega\qquad\text{i.e.}\qquad\tilde{\omega}:=e^{f}\omega\text{ with }f:=\log(y_{1}\cdots y_{s})\text{,}
\]
i.e. this form is invariant under the group action, descends to the Oeljeklaus-Toma manifold, but is not K\"{a}hler anymore.
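The identity $\frac{1}{\sigma_{1}(\alpha)\cdots\sigma_{s}(\alpha)}=\left\vert \sigma_{s+1}(\alpha)\right\vert^{2}$ can be sanity-checked numerically; the following sketch uses the cubic $S^{3}+S+1$ from Example \ref{Example_CannotReconstructForMoreComplexPlaces} (so $s=t=1$) and the totally positive unit $\alpha=S^{2}$:

```python
# Check 1/(sigma_1(alpha)) = |sigma_2(alpha)|^2 for alpha = S^2,
# where S is a root of x^3 + x + 1 (a unit of norm -1; s = 1, t = 1).

def real_root(f, lo, hi, iters=200):
    # plain bisection, assuming f(lo) and f(hi) have opposite signs
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

f = lambda x: x**3 + x + 1
r = real_root(f, -1.0, 0.0)
assert abs(r + 0.6823) < 1e-3        # the single real embedding of S

# The product of all three roots of x^3 + x + 1 is -1,
# so the complex pair satisfies |complex root|^2 = -1/r.
abs_c_sq = -1.0 / r

sigma1_alpha = r * r                 # real embedding of alpha = S^2
abs_sigma2_alpha_sq = abs_c_sq ** 2  # |sigma_2(alpha)|^2

assert abs(1.0 / sigma1_alpha - abs_sigma2_alpha_sq) < 1e-9
print("homothety factor identity verified")
```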
We compute $\alpha^{\ast}\tilde{\omega}=\tilde{\omega}$ and $d\tilde{\omega}=df\wedge\tilde{\omega}$. In particular, the Lee form \cite[Ch. 1]{MR1481969} of an Oeljeklaus-Toma manifold is
\[
\theta:=df=d\log(y_{1}\cdots y_{s})=\sum_{j=1}^{s}d\log(y_{j})=\sum_{j=1}^{s}\frac{\mathrm{d}y_{j}}{y_{j}}\text{.}
\]

\begin{remark}
[Inoue surface of type $\mathrm{S}^{0}$]In the case $s=1$ this simplifies to
\[
\tilde{\omega}=\frac{i}{8}\frac{\mathrm{d}z_{1}\wedge\mathrm{d}\overline{z_{1}}}{y_{1}^{2}}+iy_{1}\mathrm{d}z_{2}\wedge\mathrm{d}\overline{z_{2}}\text{,}
\]
which is essentially the $(1,1)$-form associated to the Tricerri metric.
\end{remark}

The Oeljeklaus-Toma manifold comes with a canonical volume form. Just like $\mathbf{H}^{s}\times\mathbf{C}$ carries the K\"{a}hler form $\omega$ and an invariant form $\tilde{\omega}$ associated to the LCK metric, $\mathbf{H}^{s}\times\mathbf{C}$ has a canonical volume form $vol$ from $\omega$, and an invariant counterpart $\widetilde{vol}$ belonging to $\tilde{\omega}$. The volume form can be computed for example by
\[
vol=\frac{\omega^{s+1}}{(s+1)!}=\left( \frac{i}{2}\right) ^{s+1}\det(g_{kl})\mathrm{d}z_{1}\wedge\mathrm{d}\overline{z_{1}}\wedge\cdots\wedge\mathrm{d}z_{s+1}\wedge\mathrm{d}\overline{z_{s+1}}\text{,}
\]
which we can unravel further thanks to Equation \ref{ltf1}. We may now either switch to $\tilde{\omega}$, or we can equivalently work with the scaled metric $(y_{1}\cdots y_{s}g_{kl})$ instead of $(g_{kl})$. Then the determinant scales to $\det(y_{1}\cdots y_{s}\cdot g_{kl})=(y_{1}\cdots y_{s})^{s+1}\det(g_{kl})$ since $(g_{kl})$ is a $(s+1)\times(s+1)$-matrix.
Hence, we obtain
\[
\widetilde{vol}:=\left( \frac{i}{2}\right) ^{s+1}\frac{(s+1)}{2^{2s+s^{2}-1}}\frac{1}{y_{1}\cdots y_{s}}\mathrm{d}z_{1}\wedge\mathrm{d}\overline{z_{1}}\wedge\cdots\wedge\mathrm{d}z_{s+1}\wedge\mathrm{d}\overline{z_{s+1}}\text{,}
\]
which we may rewrite as
\begin{equation}
\widetilde{vol}=\frac{(s+1)}{2^{2s+s^{2}-1}}\frac{1}{y_{1}\cdots y_{s}}\mathrm{d}x_{1}\wedge\mathrm{d}y_{1}\wedge\cdots\wedge\mathrm{d}x_{s+1}\wedge\mathrm{d}y_{s+1}\text{.} \label{lcw1}
\end{equation}
Note that by writing $\frac{\mathrm{d}y}{y}=\mathrm{d}\log y$, this looks just like the Euclidean volume form in suitable coordinates, say $y:=e^{r}$ or $e^{2r}$. This is the key reason why Prop. \ref{prop_main} will turn out to be true:
\[
\widetilde{vol}=\frac{(s+1)}{2^{2s+s^{2}-1}}\cdot\bigwedge_{j=1}^{s}(\mathrm{d}x_{j}\wedge\mathrm{d}\log(y_{j}))\wedge\mathrm{d}x_{s+1}\wedge\mathrm{d}y_{s+1}\text{.}
\]
In the next section we shall tailor a fundamental domain suitable to this volume form.

\section{A fundamental domain}

Fix a number field $K$ with $s\geq1$ real places and precisely one complex place.\medskip

In this section we shall determine a fundamental domain for the action of $\mathcal{O}_{K}\rtimes(\mathcal{O}_{K}^{\times,+})^{2}$ on $\mathbf{H}^{s}\times\mathbf{C}$. By Lemma \ref{lemma_sl2_vs_OT_action}\ this action is precisely the same as the action of the (almost) \textquotedblleft standard Borel\textquotedblright
\[
\begin{pmatrix}
a & b\\
& a^{-1}
\end{pmatrix}
\subset\operatorname*{SL}\nolimits_{2}(\mathcal{O}_{K})\qquad\text{(with }a\in\mathcal{O}_{K}^{\times,+}\text{, }b\in\mathcal{O}_{K}\text{).}
\]
Here $\operatorname*{SL}\nolimits_{2}(\mathcal{O}_{K})$ acts under the diagonal embeddings of Equation \ref{ly6}. On the individual factors $\mathbf{H}$ this is precisely the M\"{o}bius action. We will therefore work with the coordinates coming from the Iwasawa decomposition of $\operatorname*{SL}\nolimits_{2}(\mathbf{R})$, see Equation \ref{l2}.
It parametrizes $\mathbf{H}$ in exactly this shape, namely $\mathbf{H}\simeq A\cdot N$. We use the following explicit coordinates
\begin{equation}
\begin{pmatrix}
e^{r} & b\\
& e^{-r}
\end{pmatrix}
\begin{pmatrix}
\sqrt{y} & \frac{x}{\sqrt{y}}\\
& \frac{1}{\sqrt{y}}
\end{pmatrix}
\label{ltz1}
\end{equation}
with $r,b\in\mathbf{R}$, giving a semi-direct product presentation
\begin{align}
& 0\longrightarrow\mathbf{R}\longrightarrow A\cdot N\longrightarrow\mathbf{R}\longrightarrow0\label{ltz2}\\
& b\mapsto
\begin{pmatrix}
1 & b\\
& 1
\end{pmatrix}
\text{,}\qquad
\begin{pmatrix}
e^{r} & b\\
& e^{-r}
\end{pmatrix}
\mapsto r\text{.}\nonumber
\end{align}
Replicating the decomposition stemming from the coordinates of Equation \ref{ltz2} for each real place, we get
\begin{equation}
\begin{array}
[c]{ccccccc}
\mathbf{R}^{s}\times\mathbf{C} & \longrightarrow & \mathbf{R} & \times\cdots\times & \mathbf{R} & \times & \mathbf{C}\\
\downarrow & & \downarrow & & \downarrow & & \downarrow\\
\mathbf{H}^{s}\times\mathbf{C} & \longrightarrow & AN & \times\cdots\times & AN & \times & \mathbf{C}\\
\downarrow & & \downarrow & & \downarrow & & \\
\mathbf{R}^{s} & \longrightarrow & \mathbf{R} & \times\cdots\times & \mathbf{R}\text{.} & &
\end{array}
\label{l3}
\end{equation}
This picture is also related to the solvmanifold viewpoint proposed by Kasuya \cite{MR3033950}. Under the diagonal embedding of Equation \ref{ly6}, we have $\mathcal{O}_{K}\rtimes(\mathcal{O}_{K}^{\times,+})^{2}\subset\prod_{j=1}^{s}AN\times\operatorname*{Aut}\mathbf{C}$, and on the factors $AN$ this group action is just matrix multiplication, see Lemma \ref{lemma_sl2_vs_OT_action}.
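As a quick numerical plausibility check of these coordinates (a small Python sketch, not part of the mathematical argument; the helper names are invented for illustration): the matrix of Equation \ref{ltz1} with $r=b=0$ sends the base point $i\in\mathbf{H}$ to $x+iy$ under the M\"{o}bius action, and the upper triangular Borel factor translates the $r$-coordinate $r=\frac{1}{2}\log(\operatorname{Im}z)$.

```python
import math

def moebius(M, z):
    """Apply the Moebius transformation of a real 2x2 matrix M to a point z."""
    (a, b), (c, d) = M
    return (a * z + b) / (c * z + d)

def AN_matrix(x, y):
    """The Iwasawa coordinate matrix of Equation (ltz1) with r = b = 0."""
    s = math.sqrt(y)
    return ((s, x / s), (0.0, 1.0 / s))

# The coordinate matrix sends the base point i to x + iy:
z = moebius(AN_matrix(0.3, 2.5), 1j)
assert abs(z - (0.3 + 2.5j)) < 1e-12

# The Borel factor diag(e^r, e^{-r}) with upper entry b acts by
# z -> e^{2r} z + b e^{r}, i.e. it translates r = (1/2) log(Im z) by r0:
r0, b0 = 0.7, 1.1
B = ((math.exp(r0), b0), (0.0, math.exp(-r0)))
w = moebius(B, z)
assert abs(0.5 * math.log(w.imag) - (0.5 * math.log(z.imag) + r0)) < 1e-12
```

This is exactly the translation action on the base row that the next diagram exploits.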
Moreover, the arrow
\begin{equation}
\begin{array}
[c]{cccc}
\mathbf{H}^{s}\times\mathbf{C} & \qquad & \begin{pmatrix}
e^{r_{i}} & b_{i}\\
& e^{-r_{i}}
\end{pmatrix}_{i=1,\ldots,s}\times(b_{s+1}) & \\
\downarrow & & \downarrow & \\
\mathbf{R}^{s} & & (r_{1},\ldots,r_{s}) &
\end{array}
\label{lq1}
\end{equation}
is $\mathcal{O}_{K}\rtimes(\mathcal{O}_{K}^{\times,+})^{2}$-equivariant, where the action on the bottom row unravels to factor through
\[
\mathcal{O}_{K}\rtimes(\mathcal{O}_{K}^{\times,+})^{2}\twoheadrightarrow(\mathcal{O}_{K}^{\times,+})^{2}
\]
and $\alpha\in(\mathcal{O}_{K}^{\times,+})^{2}$ is easily seen to act as translation
\[
\alpha\cdot(r_{1},\ldots,r_{s})=(\log\left\vert \sigma_{1}\alpha\right\vert +r_{1},\ldots,\log\left\vert \sigma_{s}\alpha\right\vert +r_{s})\text{.}
\]
By Dirichlet's Unit Theorem we can pick free generators $(\mathcal{O}_{K}^{\times,+})^{2}=\mathbf{Z}\left\langle \varepsilon_{1},\ldots,\varepsilon_{s}\right\rangle \simeq\mathbf{Z}^{s}$, i.e. a multiplicatively independent system of units in this group. Then
\begin{align*}
\Lambda:= & \left\{ \sum_{i=1}^{s}\beta_{i}B_{i}\mid0\leq\beta_{i}<1\right\} \\
B_{i}:= & (\log\left\vert \sigma_{1}(\varepsilon_{i})\right\vert ,\ldots,\log\left\vert \sigma_{s}(\varepsilon_{i})\right\vert )^{t}
\end{align*}
is a fundamental domain for the action of $\mathcal{O}_{K}\rtimes(\mathcal{O}_{K}^{\times,+})^{2}$ on the base row of Diagram \ref{l3}. Next, suppose we are given an element in the middle row, say $(b_{1},\ldots,b_{s},b_{s+1},r_{1},\ldots,r_{s})$ with $b_{1},\ldots,b_{s}\in\mathbf{R}$, $b_{s+1}\in\mathbf{C}$, $r_{1},\ldots,r_{s}\in\mathbf{R}$.
Then for \textit{fixed} $r_{1},\ldots,r_{s}$ an element $\alpha\in\mathcal{O}_{K}\subset\mathcal{O}_{K}\rtimes(\mathcal{O}_{K}^{\times,+})^{2}$ is easily checked to act as
\[
\begin{pmatrix}
1 & \sigma_{i}\alpha\\
& 1
\end{pmatrix}
\cdot
\begin{pmatrix}
e^{r_{i}} & b_{i}\\
& e^{-r_{i}}
\end{pmatrix}
=
\begin{pmatrix}
e^{r_{i}} & b_{i}+e^{-r_{i}}\sigma_{i}(\alpha)\\
& e^{-r_{i}}
\end{pmatrix}
\]
in the $i$-th coordinate. In particular, the orbit under $\mathcal{O}_{K}$ stays in the same fiber over $r_{1},\ldots,r_{s}$. Fixing the fiber, we see that $\mathcal{O}_{K}$ acts solely on the coordinates $b_{1},\ldots,b_{s}\in\mathbf{R}$ and $b_{s+1}\in\mathbf{C}$ by translation. Moreover, if we pick generators $\mathcal{O}_{K}=\mathbf{Z}\left\langle a_{1},\ldots,a_{s+2}\right\rangle \simeq\mathbf{Z}^{s+2}$, a fundamental domain for the action of $\mathcal{O}_{K}$ \textit{in the fiber over} $r_{1},\ldots,r_{s}$ is given by
\begin{align}
\Phi(r_{1},\ldots,r_{s}):= & \left\{ \sum_{i=1}^{s+2}\alpha_{i}\tilde{A}_{i}\mid0\leq\alpha_{i}<1\right\} \label{l5}\\
\text{with }\tilde{A}_{i}:= & (e^{-r_{1}}\sigma_{1}(a_{i}),\ldots,e^{-r_{s}}\sigma_{s}(a_{i}),\sigma_{s+1}(a_{i}))^{t}\text{.}\nonumber
\end{align}

\begin{proposition}
\label{prop_fund_domain}The set
\begin{align*}
\mathcal{F}und & :=\left\{ \coprod_{(r_{1},\ldots,r_{s})\in\Lambda}^{\cdot}\Phi(r_{1},\ldots,r_{s})\right\} \\
& =\left\{ (r_{1},\ldots,r_{s},b_{1},\ldots,b_{s+1})\left\vert
\begin{array}
[c]{l}
(r_{1},\ldots,r_{s})\in\Lambda\\
(b_{1},\ldots,b_{s+1})\in\Phi(r_{1},\ldots,r_{s})
\end{array}
\right. \right\}
\end{align*}
is a fundamental domain for the action of $\mathcal{O}_{K}\rtimes(\mathcal{O}_{K}^{\times,+})^{2}$ on $\mathbf{H}^{s}\times\mathbf{C}$ in $(r_{i},b_{i})$-coordinates.
\end{proposition}

\begin{proof}
[Proof of Prop.
\ref{prop_fund_domain}]We prove that the inclusion
\[
\mathcal{F}und\hookrightarrow(\mathbf{R}^{s}\times\mathbf{C})\times\mathbf{R}^{s}
\]
induces a bijection onto the quotient by $\mathcal{O}_{K}\rtimes(\mathcal{O}_{K}^{\times,+})^{2}$.\newline\textit{(Surjectivity)} We have already observed that the downward arrow in Diagram \ref{lq1} is equivariant. By Dirichlet's Unit Theorem the $s$ different vectors
\[
B_{i}^{\prime}:=(\log\left\vert \sigma_{1}(\varepsilon_{i})\right\vert ,\ldots,\log\left\vert \sigma_{s}(\varepsilon_{i})\right\vert ,\log\left\vert \sigma_{s+1}(\varepsilon_{i})\right\vert )^{t}
\]
for $i=1,\ldots,s$ give a full rank lattice in the \textquotedblleft log-norm\textquotedblright\ hyperplane
\[
H:=\left\{ (v_{1},\ldots,v_{s+1})\mid v_{1}+\cdots+v_{s}+2v_{s+1}=0\right\} \subseteq\mathbf{R}^{s+1}\text{.}
\]
Thus, there is a linear isomorphism
\begin{align*}
\mathbf{R}^{s} & \rightarrow H\\
(v_{1},\ldots,v_{s}) & \mapsto(v_{1},\ldots,v_{s},-\frac{1}{2}(v_{1}+\cdots+v_{s}))\\
(v_{1},\ldots,v_{s}) & \leftarrowtail(v_{1},\ldots,v_{s},v_{s+1})\text{,}
\end{align*}
where $\mathbf{R}^{s}$ is understood to refer to the base in Diagram \ref{lq1}. It follows that the $s$ vectors $B_{i}:=(\log\left\vert \sigma_{1}(\varepsilon_{i})\right\vert ,\ldots,\log\left\vert \sigma_{s}(\varepsilon_{i})\right\vert )^{t}$, i.e. just the image of the $B_{i}^{\prime}$ under this isomorphism, span a full rank lattice in $\mathbf{R}^{s}$. Hence, since the $B_{i}$ are thus an $\mathbf{R}$-vector space basis, each element in $\mathbf{R}^{s}$ has a unique presentation as
\[
\sum_{i=1}^{s}(n_{i}+\beta_{i})B_{i}\qquad\text{with}\qquad n_{i}\in\mathbf{Z}\text{, }0\leq\beta_{i}<1\text{.}
\]
Thus, letting $\alpha:=\varepsilon_{1}^{-n_{1}}\cdots\varepsilon_{s}^{-n_{s}}\in(\mathcal{O}_{K}^{\times,+})^{2}\subset\mathcal{O}_{K}\rtimes(\mathcal{O}_{K}^{\times,+})^{2}$ act, we obtain an element of the orbit which lies in our fundamental domain $\Lambda$ for the base.
Obviously, since our map is equivariant, we can let the same uniquely determined element act on the entire space. Thus, we have found a representative of our element in $(\mathbf{R}^{s}\times\mathbf{C})\times\Lambda$. Next, the translation action of $\mathcal{O}_{K}$ leaves the base invariant and just acts in the fibers of Equation \ref{l3}. We get a unique presentation
\[
\sum_{i=1}^{s+2}(m_{i}+\alpha_{i})\tilde{A}_{i}\qquad\text{with}\qquad m_{i}\in\mathbf{Z}\text{, }0\leq\alpha_{i}<1\text{,}
\]
thus, letting $\beta:=-\sum m_{i}a_{i}$ act for $\mathcal{O}_{K}=\mathbf{Z}\left\langle a_{1},\ldots,a_{s+2}\right\rangle $, we get a unique representative in $\Phi(r_{1},\ldots,r_{s})\times\{(r_{1},\ldots,r_{s})\}\subseteq\mathcal{F}und$, as desired. Note that the group elements we acted by were canonically determined, so we actually get a well-defined map
\[
(\mathbf{R}^{s}\times\mathbf{C})\times\mathbf{R}^{s}\longrightarrow\mathcal{F}und\text{.}
\]
\newline\textit{(Injectivity)} Suppose $x,y\in\mathcal{F}und$ lie in the same orbit of the action of $\mathcal{O}_{K}\rtimes(\mathcal{O}_{K}^{\times,+})^{2}$. By the equivariance of the morphism in Diagram \ref{lq1} it follows that their images in $\mathbf{R}^{s}$ lie in the same orbit of the action of $(\mathcal{O}_{K}^{\times,+})^{2}$ on $\mathbf{R}^{s}$. But since $x,y\in\mathcal{F}und$, their images $\overline{x},\overline{y}\in\mathbf{R}^{s}$ lie in $\Lambda$, and since this was a fundamental domain for $(\mathcal{O}_{K}^{\times,+})^{2}$ we must have $\overline{x}=\overline{y}$. But then $x,y$ lie in the same fiber $\Phi(r_{1},\ldots,r_{s})$. We check that $\mathcal{O}_{K}\subseteq\mathcal{O}_{K}\rtimes(\mathcal{O}_{K}^{\times,+})^{2}$ is the largest subgroup stabilizing a fiber, which implies that $x,y$ only differ by the translation action of $\mathcal{O}_{K}$ inside the fiber. But $\Phi(r_{1},\ldots,r_{s})$ was constructed as a fundamental domain for this action, so we deduce $x=y$.
\end{proof}

We may restate this in more conventional coordinates. Define (or recall) the standard Minkowski fundamental domain
\[
\Phi_{\operatorname*{Mink}}:=\left\{ \sum_{i=1}^{s+2}\alpha_{i}\tilde{A}_{i}^{\ast}\mid0\leq\alpha_{i}<1\right\} \subseteq\mathbf{R}^{s}\times\mathbf{C}
\]
with $\tilde{A}_{i}^{\ast}:=(\sigma_{1}(a_{i}),\ldots,\sigma_{s}(a_{i}),\sigma_{s+1}(a_{i}))^{t}$. Just from a change of coordinates Prop. \ref{prop_fund_domain} can equivalently be reformulated as follows:

\begin{corollary}
\label{cor_fund_domain_in_xy_coords}The set
\[
\Lambda\times\Phi_{\operatorname*{Mink}}=\left\{ (x_{1},y_{1},\ldots,x_{s},y_{s},x_{s+1}+iy_{s+1})\left\vert
\begin{array}
[c]{l}
\frac{1}{2}\log y_{1},\ldots,\frac{1}{2}\log y_{s}\in\Lambda\\
x_{1},\ldots,x_{s},x_{s+1}+iy_{s+1}\in\Phi_{\operatorname*{Mink}}
\end{array}
\right. \right\}
\]
is the same fundamental domain, but in $(x_{i},y_{i})$-coordinates.
\end{corollary}

\section{The volume computation}

\begin{proof}
[Proof of Prop. \ref{prop_main}]Let us compute the volume of $X:=X(K;\mathcal{O}_{K}^{\times,+})$. For this we will integrate its canonical volume form $\widetilde{vol}$ on $X$, which is best done by integrating it over our fundamental domain of Cor. \ref{cor_fund_domain_in_xy_coords}. We compute
\begin{align*}
& \int_{X}\widetilde{vol}=\frac{1}{2^{s}}\int_{(\mathbf{H}^{s}\times\mathbf{C})/(\mathcal{O}_{K}\rtimes(\mathcal{O}_{K}^{\times,+})^{2})}\widetilde{vol}=\frac{1}{2^{s}}\int_{\Lambda\times\Phi_{\operatorname*{Mink}}}\widetilde{vol}\\
& =\frac{1}{2^{s}}\int_{\Lambda\times\Phi_{\operatorname*{Mink}}}\frac{(s+1)}{2^{2s+s^{2}-1}}\frac{1}{y_{1}\cdots y_{s}}\mathrm{d}x_{1}\wedge\mathrm{d}y_{1}\wedge\cdots\wedge\mathrm{d}x_{s+1}\wedge\mathrm{d}y_{s+1}\text{.}
\end{align*}
Switching to $r$-coordinates, i.e.
substituting $y_{i}=e^{2r_{i}}$ for $i=1,\ldots,s$, this effectively reduces to computing a Euclidean volume, namely
\begin{align*}
& =\frac{1}{2^{s}}\frac{(s+1)}{2^{2s+s^{2}-1}}\int_{\Lambda\times\Phi_{\operatorname*{Mink}}}\mathrm{d}x_{1}\wedge\mathrm{d}r_{1}\wedge\cdots\wedge\mathrm{d}x_{s}\wedge\mathrm{d}r_{s}\wedge\mathrm{d}x_{s+1}\wedge\mathrm{d}y_{s+1}\\
& =\frac{1}{2^{s}}\frac{(s+1)}{2^{2s+s^{2}-1}}\left( \int_{\Phi_{\operatorname*{Mink}}}\mathrm{d}x_{1}\wedge\cdots\wedge\mathrm{d}x_{s}\wedge\mathrm{d}x_{s+1}\wedge\mathrm{d}y_{s+1}\right) \left( \int_{\Lambda}\mathrm{d}r_{1}\wedge\cdots\wedge\mathrm{d}r_{s}\right) \\
& =\frac{1}{2^{s}}\frac{(s+1)}{2^{2s+s^{2}-1}}\cdot\det\left( \tilde{A}_{1}^{\ast},\ldots,\tilde{A}_{s+2}^{\ast}\right) \cdot\det(B_{1},\ldots,B_{s})\text{.}
\end{align*}
Now we can use the classical fact that the vectors $\tilde{A}_{1}^{\ast},\ldots,\tilde{A}_{s+2}^{\ast}$, which generate the Minkowski fundamental domain, span a parallelepiped of Euclidean volume $2^{-t}\cdot\sqrt{\left\vert \triangle_{K/\mathbf{Q}}\right\vert }$ with $t$ the number of complex places. Moreover, the determinant of the vectors $B_{1},\ldots,B_{s}$ is almost literally the definition of the Dirichlet regulator:
\begin{align*}
& =\frac{1}{2^{s}}\frac{(s+1)}{2^{2s+s^{2}-1}}\cdot\left( \frac{1}{2}\cdot\sqrt{\left\vert \triangle_{K/\mathbf{Q}}\right\vert }\right) \cdot(2^{s}\cdot R_{K})\\
& =\frac{(s+1)}{2^{2s+s^{2}}}\cdot\sqrt{\left\vert \triangle_{K/\mathbf{Q}}\right\vert }\cdot R_{K}\text{.}
\end{align*}
The factor $2^{s}$ in front of the regulator $R_{K}$ occurs as follows: We would get precisely the regulator here if $\varepsilon_{1},\ldots,\varepsilon_{s}$ were a basis for $\mathcal{O}_{K}^{\times}$. However, our $\varepsilon_{1},\ldots,\varepsilon_{s}$ are a basis for $(\mathcal{O}_{K}^{\times,+})^{2}$.
We recall the analytic class number formula, stating that (in the case we consider)
\[
\operatorname*{res}\nolimits_{s=1}\zeta_{K}(s)=\frac{2^{s}\pi h_{K}R_{K}}{\sqrt{\left\vert \triangle_{K/\mathbf{Q}}\right\vert }}\text{.}
\]
Solving for $R_{K}$ yields the claim by plugging it into our previous formula for the volume. We get Equation \ref{lcy3} as desired.
\end{proof}

One can actually `speed up' this computation slightly by working directly with a fundamental domain under the action of the full group $\mathcal{O}_{K}\rtimes\mathcal{O}_{K}^{\times,+}$, leading the two mutually cancelling factors $\frac{1}{2^{s}}$ and $2^{s}$ to disappear altogether.

\begin{example}
Consider the cubic field
\begin{equation}
K:=\mathbf{Q}[T]/(T^{3}+T^{2}-1)\text{.}\label{l35}
\end{equation}
The image $\overline{T}\in K$ is actually a generator of $\mathcal{O}_{K}^{\times,+}$ because its norm is one and its single real embedding has value $0.7548\ldots>0$. The number field $K$ has discriminant $\triangle_{K/\mathbf{Q}}=-23$, class number $h_{K}=1$ and regulator
\[
R_{K}=\left\vert \log0.754877\ldots\right\vert =0.28119957432\ldots
\]
It has one real and one complex place. We may form its Oeljeklaus-Toma manifold
\[
X:=X(K;\mathcal{O}_{K}^{\times,+})\text{,}
\]
giving a (non-K\"{a}hler) Inoue surface of type $\mathrm{S}^{0}$ with Tricerri's metric. According to Prop. \ref{prop_main} its volume is
\[
\operatorname*{Vol}\left( X\right) =\frac{1}{4}\cdot\sqrt{23}\cdot0.28119957432\ldots\approx0.3371\ldots\text{.}
\]
We will show in Prop. \ref{prop_intext_minvol} that no smaller volume is possible among cubic fields. The commutator subgroup of its fundamental group is (by Prop. \ref{Prop_StructureOfCommutator})
\[
\lbrack\pi,\pi]=J(\mathcal{O}_{K}^{\times,+})=(1-\overline{T})\text{,}
\]
the ideal in $\mathcal{O}_{K}$ generated by $1-\overline{T}$.
From the minimal polynomial, Equation \ref{l35}, we see that $\overline{T}^{3}+\overline{T}^{2}=1$ and a simple polynomial division reveals that $(1-\overline{T})\cdot(\overline{T}^{2}+2\overline{T}+2)=1$, showing that $J(\mathcal{O}_{K}^{\times,+})=(1)$ is actually the entire ring of integers. So the maximal abelian quotient $\pi_{\operatorname*{ab}}\simeq\mathbf{Z}$ is already torsion-free itself. By Prop. \ref{prop_TorsionInH1} we therefore have
\[
H_{1}(X,\mathbf{Z})=\mathbf{Z}\text{.}
\]
Following the recipe of Prop. \ref{prop_reconstruct} we let a generator act on $\mathbf{Z}^{3}$ and this will be $\overline{T}$ or $\overline{T}^{-1}$. We have no way of distinguishing them if we are just given $\mathbf{Z}$ abstractly. Say it was $\overline{T}$, and we get precisely that its action on $\mathbf{Z}^{3}$ has minimal polynomial $x^{3}+x^{2}-1$ in $\mathbf{Z}[x]$, generating $K$ over $\mathbf{Q}$. Adjoining all three complex roots yields $K^{\operatorname*{g}}/\mathbf{Q}$, a field of degree $6$.
\end{example}

\section{Prescribed torsion}

We want to exhibit a particularly nice family of Inoue surfaces for which we can freely prescribe the order of the torsion in $H_{1}(X,\mathbf{Z})$. As will be clear from the proof, this construction largely rests on ideas of Ishida, porting from number theory to geometry.

\begin{proposition}
\label{prop_ConstructInoueSurfaceWithTorsionZm}For any given $m\geq1$ there exists an Inoue surface $X$ of type $\mathrm{S}^{0}$ with
\[
H_{1}(X,\mathbf{Z})\cong\mathbf{Z}\oplus\mathbf{Z}/m
\]
and equipped with the Oeljeklaus-Toma metric, it has volume
\[
\operatorname*{Vol}(X)=\frac{1}{4}\cdot\sqrt{4m^{3}+27}\cdot\left\vert \log\left\vert z-\frac{m}{3z}\right\vert \right\vert
\]
for the real number
\[
z:=\sqrt[3]{\frac{1}{2}+\frac{\sqrt{3}}{18}\sqrt{4m^{3}+27}}\text{.}
\]
In fact, $X$ can be constructed as a finite unramified covering
\begin{equation}
\xymatrix{ X \ar[d] \\ X(K;\mathcal{O}_{K}^{\times,+}) }
\label{lja21}
\end{equation}
of the Oeljeklaus-Toma manifold
\begin{equation}
X(K;\mathcal{O}_{K}^{\times,+})\qquad\text{for}\qquad K:=\mathbf{Q}[T]/(T^{3}+mT-1)\text{.} \label{lja20}
\end{equation}
If $4m^{3}+27$ is square-free, this covering is trivial. Alternatively, suppose $m=3k$ and $4k^{3}+1$ is square-free: Then if $3\nmid k$, the covering is also trivial. If $3\mid k$, it is a covering of degree $3$.
\end{proposition}

I suspect that all $H_{1}(X,\mathbf{Z})\cong\mathbf{Z}\oplus\mathbf{Z}/m$ can be realized by genuine Oeljeklaus-Toma manifolds without the need to allow finite coverings.

\begin{proof}
Let $m\geq1$ be given. The polynomial $T^{3}+mT-1$ has one sign change in its coefficients, so by Descartes' Sign Rule it has a single positive real root and no negative real roots. Moreover, it is irreducible over $\mathbf{Q}$ (\textit{Proof:} Otherwise it has a rational root $\alpha_{1}$. Hence, over the algebraic closure it factors as $(T-\alpha_{1})(T-\alpha_{2})(T-\alpha_{3})$ with $\alpha_{1}\in\mathbf{Q}\cap\overline{\mathbf{Z}}=\mathbf{Z}$ and $\alpha_{2},\alpha_{3}\in\overline{\mathbf{Z}}$. Since $\alpha_{1}\alpha_{2}\alpha_{3}=1$ it follows that $\alpha_{1}$ is also a unit, so $\alpha_{1}=1$ since we already know that there is no negative real root. But by plugging in we see that this is certainly not a root). It follows that
\[
K:=\mathbf{Q}[T]/(T^{3}+mT-1)
\]
is a cubic number field with $s=t=1$. We write $\overline{T}$ to denote the image of $T$ in $K$. Since the constant coefficient in the minimal polynomial of $\overline{T}$ is $-1$, it is a unit in $\mathcal{O}_{K}^{\times}$ and we had already seen that its single real embedding is necessarily positive. Thus, $\overline{T}\in\mathcal{O}_{K}^{\times,+}$ and it generates a subgroup $U:=\mathbf{Z}\left\langle \overline{T}\right\rangle $ of finite index.
Similarly, instead of the full ring of integers we so far just understand $\mathbf{Z}[\overline{T}]\subseteq\mathcal{O}_{K}$, which might be of some finite index, too.\newline This elementary construction already allows us to construct $X$: We consider the complex manifold $X$, defined by
\begin{equation}
\begin{array}
[c]{cl}
\dfrac{\mathbf{H}\times\mathbf{C}}{\mathbf{Z}[\overline{T}]\rtimes U} & =X\\
\downarrow & \\
\dfrac{\mathbf{H}\times\mathbf{C}}{\mathcal{O}_{K}\rtimes\mathcal{O}_{K}^{\times,+}} & =X(K,\mathcal{O}_{K}^{\times,+})
\end{array}
\label{lja10}
\end{equation}
and equip it with the Oeljeklaus-Toma metric, which is of course also invariant under the action since $\mathbf{Z}[\overline{T}]\rtimes U$ forms some finite index subgroup of $\mathcal{O}_{K}\rtimes U$. In particular, $X$ is compact as well. It clearly is an Inoue surface. The definition of the ideal $J$ (Definition \ref{def_IdealJ}) also makes sense in the subring $\mathbf{Z}[\overline{T}]\subset\mathcal{O}_{K}$ and we compute
\begin{align}
\mathbf{Z}[\overline{T}]/J(U) & =\mathbf{Z}[\overline{T}]/(\overline{T}-1)\label{lja9}\\
& =\mathbf{Z}[T]/(T-1,T^{3}+mT-1)=\mathbf{Z}/m\text{.}\nonumber
\end{align}
We leave it to the reader to check that Prop. \ref{Prop_KappaAgreesWithOK} can be generalized to the manifold $X$ and gives us $H_{1}(X,\mathbf{Z})\cong\mathbf{Z}\oplus\mathbf{Z}/m$. In fact, the proof carries over verbatim. Finally, we can compute its volume as follows: Instead of the discriminant $\triangle_{K/\mathbf{Q}}$ of the number field $K$, we now just get the discriminant of the order $\mathbf{Z}[\overline{T}]\subset\mathcal{O}_{K}$, but this makes things easier since that is just the discriminant of the generating polynomial, i.e. $-4m^{3}-27$.
The regulator matrix for $K$ is the $\left( 1\times1\right) $-matrix with the single entry $\log\left\vert \sigma_{1}(\overline{T})\right\vert $, where $\sigma_{1}(\overline{T})$ denotes the single real embedding of $\overline{T}$, or equivalently the single real root of $T^{3}+mT-1$. We may solve this using the classical Vieta substitution for depressed cubics (a variant of the Cardano-Tartaglia formula): The real root is given by
\[
t:=z-\frac{m}{3z}\qquad\text{for}\qquad z:=\sqrt[3]{\frac{1}{2}+\frac{\sqrt{3}}{18}\sqrt{4m^{3}+27}}\text{.}
\]
This formula is `fairly' simple since in the polynomial $T^{3}+mT-1$ the quadratic term is already eliminated.\newline The rest of the proof, and the only difficult part, exclusively concerns the question to control the index of
\[
\mathbf{Z}[\overline{T}]\rtimes U\subseteq\mathcal{O}_{K}\rtimes\mathcal{O}_{K}^{\times,+}
\]
in order to understand the degree of the covering. For the discriminant of the order $\mathbf{Z}[\overline{T}]\subseteq\mathcal{O}_{K}$ we compute $\operatorname*{disc}(T^{3}+mT-1)=-4m^{3}-27$ and therefore
\begin{equation}
-4m^{3}-27=\triangle_{K/\mathbf{Q}}\cdot\lbrack\mathcal{O}_{K}:\mathbf{Z}[\overline{T}]]^{2} \label{lja11}
\end{equation}
by the discriminant-index formula. Hence, if $4m^{3}+27$ is square-free, we must have $[\mathcal{O}_{K}:\mathbf{Z}[\overline{T}]]=1$ and therefore $\mathbf{Z}[\overline{T}]=\mathcal{O}_{K}$. Next, we use a clever theorem of Ishida telling us that this also implies that $\mathcal{O}_{K}^{\times}$ is generated by $\overline{T}$, namely \cite[Theorem 1]{MR0335469} (strictly speaking, Ishida's theorem only applies for $m\geq2$, so we ask the reader to deal with the single case $m=1$ either by using a computer or by hand. The latter can be done by checking that the norm equation $N(-)=+1$ cannot have a real solution of smaller absolute value).
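The Vieta expression for the real root is easy to check numerically (a Python sketch; the helper name \texttt{real\_root} is ours and nothing here is specific to this paper beyond the polynomial $T^{3}+mT-1$):

```python
import math

def real_root(m):
    """The real root of T^3 + mT - 1, via the Vieta substitution above."""
    z = (0.5 + (math.sqrt(3.0) / 18.0) * math.sqrt(4.0 * m**3 + 27.0)) ** (1.0 / 3.0)
    return z - m / (3.0 * z)

for m in range(1, 50):
    t = real_root(m)
    assert t > 0                          # the single positive root (Descartes)
    assert abs(t**3 + m * t - 1) < 1e-9   # it satisfies the defining polynomial

# e.g. real_root(1) ~ 0.6823; note t < 1 for all m >= 1, so |log t| = -log t
```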
The fundamental unit $\overline{T}$ must moreover be totally positive since it was chosen from a polynomial which did not have negative real roots. Thus,
\begin{equation}
\mathbf{Z}[\overline{T}]\rtimes U=\mathcal{O}_{K}\rtimes\mathcal{O}_{K}^{\times,+} \label{lja17}
\end{equation}
and the manifold $X$ of Equation \ref{lja10} becomes literally a genuine Oeljeklaus-Toma manifold.\newline Let us deal with the remaining case: $m=3k$ and $4k^{3}+1$ is square-free. The same theorem of Ishida \cite[Theorem 1]{MR0335469} tells us that this also suffices to have $U=\mathcal{O}_{K}^{\times,+}$. However, $[\mathcal{O}_{K}:\mathbf{Z}[\overline{T}]]$ can be larger than one. Actually, the paper of Ishida gives us all the tools we need to deal with this problem, but Ishida does not summarize his findings in this case as a separate theorem, so let me guide you through his argument: In \cite[\S 3, all on page $248$]{MR0335469} he first deduces from the discriminant-index formula, i.e. Equation \ref{lja11}, that
\[
\lbrack\mathcal{O}_{K}:\mathbf{Z}[\overline{T}]]=3^{d}
\]
for some $d\geq0$. In the case that $3\nmid k$, he uses that $\overline{T}$ and $\overline{T}+1$ clearly generate the same number field, but
\[
(T+1)^{3}+m(T+1)-1=T^{3}+3T^{2}+(3k+3)T+3k
\]
is an Eisenstein polynomial at the prime $p=3$, which implies that $3\nmid\lbrack\mathcal{O}_{K}:\mathbf{Z}[\overline{T}]]$. Thus, again $\mathbf{Z}[\overline{T}]=\mathcal{O}_{K}$ and we are back in the situation of Equation \ref{lja17}. It remains to deal with the case $3\mid k$, so $3^{2}\mid m$. In this case Ishida exhibits the element
\[
\frac{1}{3}(1+\overline{T}+\overline{T}^{2})\in\frac{1}{3}\mathbf{Z}[\overline{T}]\text{,}
\]
which can be checked by direct computation to be integral, i.e. it lies in $\mathcal{O}_{K}$. This forces $d\geq1$ and using the discriminant-index formula once more, he concludes $[\mathcal{O}_{K}:\mathbf{Z}[\overline{T}]]=3$. Thus, our covering is also of degree $3$.
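This integrality claim lends itself to machine verification by exact rational arithmetic: $\frac{1}{3}(1+\overline{T}+\overline{T}^{2})$ is integral iff the characteristic polynomial of $\frac{1}{3}(I+C+C^{2})$ has integer coefficients, where $C$ is the companion matrix of $T^{3}+mT-1$. A Python sketch (the companion-matrix reformulation is ours, not Ishida's):

```python
from fractions import Fraction

def charpoly_coeffs(M):
    """Coefficients (e1, e2, e3) of det(xI - M) = x^3 - e1 x^2 + e2 x - e3
    for an exact 3x3 matrix M."""
    e1 = M[0][0] + M[1][1] + M[2][2]
    # sum of the three principal 2x2 minors
    e2 = (M[0][0] * M[1][1] - M[0][1] * M[1][0]
          + M[0][0] * M[2][2] - M[0][2] * M[2][0]
          + M[1][1] * M[2][2] - M[1][2] * M[2][1])
    e3 = (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
    return e1, e2, e3

def is_integral(m):
    """Is (1 + T + T^2)/3 integral, T a root of T^3 + mT - 1?"""
    # C is the matrix of multiplication by T in the basis 1, T, T^2
    C = [[0, 0, 1], [1, 0, -m], [0, 1, 0]]
    third = Fraction(1, 3)
    # B = (I + C + C^2)/3, computed exactly
    B = [[third * ((i == j) + C[i][j] + sum(C[i][k] * C[k][j] for k in range(3)))
          for j in range(3)] for i in range(3)]
    return all(c.denominator == 1 for c in charpoly_coeffs(B))

# for m = 3k the element is integral precisely when 3 | k,
# matching the case distinction of the proof above:
assert is_integral(9) and is_integral(18) and is_integral(36)
assert not is_integral(3) and not is_integral(6) and not is_integral(12)
```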
\end{proof}

\begin{example}
With the help of the computer we can compute the index of $\mathbf{Z}[\overline{T}]$ inside $\mathcal{O}_{K}$, resp. of $U$ inside $\mathcal{O}_{K}^{\times,+}$. Several of the cases below are of course fully explained by the proposition above. However, not all of them, and in particular we see that the covering of Equation \ref{lja21} can sometimes have fairly large degree:
\[
\begin{tabular}
[c]{r|r|r|r}
$m$ & $[\mathcal{O}_{K}:\mathbf{Z}[\overline{T}]]$ & $[\mathcal{O}_{K}^{\times,+}:\mathbf{Z}\left\langle \overline{T}\right\rangle ]$ & $x^{2}\mid4m^{3}+27$\\\hline
$8$ & $5$ & $2$ & $5^{2}$\\
$16$ & $1$ & $1$ & \\
$24$ & $1$ & $1$ & $3^{4}$\\
$32$ & $1$ & $1$ & \\
$40$ & $1$ & $1$ & \\
$48$ & $1$ & $1$ & $3^{2}$\\
$56$ & $31$ & $2$ & $31^{2}$\\
$64$ & $1$ & $1$ & \\
$72$ & $3\cdot11$ & $2$ & $3^{2}\cdot11^{2}$
\end{tabular}
\]
The rightmost column lists square factors. Among the first $500$ values of $m$ we get $\mathcal{O}_{K}=\mathbf{Z}[\overline{T}]$ for $415$ of them. The condition for $4m^{3}+27$ to be square-free gives a reasonable sufficient condition to have $[\mathcal{O}_{K}:\mathbf{Z}[\overline{T}]]=1$, but is still quite remote from a precise criterion.
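The rightmost column of the table is cheap to reproduce; a small Python sketch (the function name \texttt{square\_part} is ours):

```python
def square_part(n):
    """Largest perfect square dividing n, by trial factorization."""
    result, p = 1, 2
    while p * p <= n:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        result *= p ** (2 * (e // 2))
        p += 1
    return result  # any leftover prime occurs to the first power only

# square factors of 4m^3 + 27 for the rows of the table above:
assert square_part(4 * 8**3 + 27) == 5**2
assert square_part(4 * 16**3 + 27) == 1              # square-free
assert square_part(4 * 24**3 + 27) == 3**4
assert square_part(4 * 56**3 + 27) == 31**2
assert square_part(4 * 72**3 + 27) == 3**2 * 11**2
```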
As I have learnt from Ishida's paper \cite{MR0335469}, it was shown by the famous Erd\H{o}s that $4m^{3}+27$ is square-free for infinitely many $m$.
\end{example}

\section{A curiosity}

As we have seen in \S \ref{section_CommutatorSubgroup}, the structure of the ideal $J(U)$ can be quite a non-trivial matter. Even though its concrete structure seems fairly elusive from the outset, one can bound its index in terms of the units of the underlying number field. Sadly, controlling their size is similarly inaccessible. However, these two elusive bounds control each other.\medskip

I only record the following estimate as a curiosity. Since I know of no way to compute the volume of an Oeljeklaus-Toma manifold except from the arithmetic invariants of the underlying number field, I would not know how to put the following inequality into any computational use.

\begin{proposition}
Let $K$ be a number field with $s=t=1$. Then the torsion in the first homology of the Oeljeklaus-Toma surface $X:=X(K;\mathcal{O}_{K}^{\times,+})$ can be bounded in terms of the volume and discriminant. Specifically,
\[
\#H_{1}(X,\mathbf{Z})_{\operatorname*{tor}}\leq3(z+z^{2})
\]
where
\[
z:=\max(w,\sqrt{1/w})\qquad\text{and}\qquad w:=\exp\left( 4\frac{\operatorname*{Vol}\left( X\right) }{\sqrt{\left\vert \triangle_{K/\mathbf{Q}}\right\vert }}\right) \text{.}
\]
\end{proposition}

\begin{proof}
From Prop. \ref{Prop_KappaAgreesWithOK} we have the equality
\[
\#H_{1}(X,\mathbf{Z})_{\operatorname*{tor}}=\#(\mathcal{O}_{K}/J(\mathcal{O}_{K}^{\times,+}))\text{.}
\]
By Dirichlet's Unit Theorem $\mathcal{O}_{K}^{\times}\simeq\left\langle -1\right\rangle \times\mathbf{Z}\left\langle u\right\rangle $ with $u$ a fundamental unit. Without loss of generality we can assume that $u$ is totally positive, otherwise replace $u$ by $-u$. Then $u$ is a generator of $\mathcal{O}_{K}^{\times,+}$. By Lemma \ref{Lemma_JUCanBeDefinedOnGenerators} we therefore have
\[
J(\mathcal{O}_{K}^{\times,+})=(1-u)\text{.}
\]
As this is a principal ideal, its ideal norm can be computed just in terms of the norm of the generating element. This means that
\[
\#(\mathcal{O}_{K}/J(\mathcal{O}_{K}^{\times,+}))=\left\vert N_{K/\mathbf{Q}}(1-u)\right\vert \text{,}\qquad N_{K/\mathbf{Q}}(1-u)={\textstyle\prod\nolimits_{i=1}^{3}}\sigma_{i}(1-u)\text{.}
\]
As usual, let $\sigma_{1}$ denote the single real embedding and $\sigma_{2},\overline{\sigma_{2}}=\sigma_{3}$ are the complex conjugate embeddings of the single complex place. We have $\sigma_{1}(u)>0$ and therefore $N_{K/\mathbf{Q}}(u)=\sigma_{1}(u)\left\vert \sigma_{2}(u)\right\vert ^{2}>0$ and the norm lies in $\{\pm1\}=\mathbf{Z}^{\times}$ since $u$ is a unit. Hence, $N_{K/\mathbf{Q}}(u)=1$ and we can continue the above computation with
\begin{align}
& =1-\sum_{i=1}^{3}\sigma_{i}(u)+\sum_{1\leq i<j\leq3}\sigma_{i}(u)\sigma_{j}(u)-N_{K/\mathbf{Q}}(u)\nonumber\\
& =\sum_{i<j}\sigma_{i}(u)\sigma_{j}(u)-\sum_{i=1}^{3}\sigma_{i}(u)\text{.} \label{ltta1}
\end{align}
By $\sigma_{1}(u)\left\vert \sigma_{2}(u)\right\vert ^{2}=1$ we have $\left\vert \sigma_{2}(u)\right\vert =\sqrt{1/\sigma_{1}(u)}$. Thus, for $z:=\max(\sigma_{1}(u),\sqrt{1/\sigma_{1}(u)})>0$ we get the estimate
\[
\#(\mathcal{O}_{K}/J(\mathcal{O}_{K}^{\times,+}))\leq3z^{2}+3z=3(z+z^{2})
\]
from Equation \ref{ltta1}. From Prop. \ref{prop_main} we know that
\[
\operatorname*{Vol}\left( X\right) =\frac{1}{4}\cdot\sqrt{\left\vert \triangle_{K/\mathbf{Q}}\right\vert }\cdot R_{K}=\frac{1}{4}\cdot\sqrt{\left\vert \triangle_{K/\mathbf{Q}}\right\vert }\cdot\log\left\vert \sigma_{1}u\right\vert \text{,}
\]
since the Dirichlet regulator is just formed from a $(1\times1)$-matrix in the present situation, and the single entry comes from the logarithmic embedding of the fundamental unit. Thus, reversing the usual logic, we can also say that
\[
\sigma_{1}u=\exp\left( 4\frac{\operatorname*{Vol}\left( X\right) }{\sqrt{\left\vert \triangle_{K/\mathbf{Q}}\right\vert }}\right) \text{.}
\]
The claim follows from connecting this with our previous upper bound.
\end{proof}

\begin{example}
\label{Example_VolumeBounds}As usual in this text, let us compare this estimate to precise values: We shall study the number field
\[
K:=\mathbf{Q}[T]/(T^{3}+8T-m)
\]
for $m\geq1$, whenever the given polynomial is irreducible. It is easy to see that these are number fields with $s=t=1$. The discriminant of the order $\mathbf{Z}[\overline{T}]\subseteq\mathcal{O}_{K}$ is easily computed to be
\[
\triangle_{\mathbf{Z}[\overline{T}]/\mathbf{Q}}=-27m^{2}-2048
\]
and for most of the $1\leq m\leq10$ the order $\mathbf{Z}[\overline{T}]$ is the entire ring of integers or at least has only a small index. With the help of the computer we obtain
\[
\begin{tabular}
[c]{r|r|r|r}
$m$ & $H_{1}(X,\mathbf{Z})_{\operatorname*{tor}}$ & upper bound & $\operatorname*{Vol}\left( X\right) $\\\hline
$1$ & $4$ & $13.54$ & $2.3702...$\\
$2$ & $2$ & $9.58$ & $1.0105...$\\
$3$ & $2856582$ & $8575220$ & $177.8782...$\\
$4$ & $32$ & $122.47$ & $22.1167...$\\
$5$ & $5146$ & $15731.73$ & $111.5530...$\\
$6$ & $288$ & $1022.58$ & $79.3724...$\\
$7$ & $1288$ & $4175.28$ & $104.6757...$\\
$8$ & $2$ & $11.07$ & $1.5189...$\\
$10$ & $14$ & $43.89$ &
$41.7309...$
\end{tabular}
\]
The values in the two right-hand side columns have been truncated. The particularly large values for $m=3,5$ are mostly caused by the fact that these number fields have exceptionally large Dirichlet regulators. Allow me to emphasize once more that the computation of the upper bounds requires the determination of the fundamental unit just as does finding the torsion group. Therefore, this estimate is truly not of any algorithmic use.
\end{example}

\section{\label{sect_SmallestVolume}Smallest volume}

Firstly, we must ask: Is this question well-defined at all?\medskip

Usually, when one looks at questions like
\[
(\text{complex surfaces})\cap(\text{LCK manifolds})
\]
as in Vaisman's paper \cite{MR1038005}\footnote{This paper seems to have been written in response to Wall's study \cite{MR827276}, \cite{MR837617}. Taking inspiration from Thurston's geometries, Wall asks which $4$-dimensional geometries (= nice simply connected Riemannian real manifolds whose isometry group acts transitively and admits lattices) possess a complex structure so that the isometry action is holomorphic. He finds that a complex structure often exists, often unique, but not always K\"{a}hler.}, or
\[
\left( \text{real solvmanifolds}\right) \cap\left( \text{LCK manifolds}\right)
\]
as suggested in work of Hasegawa \cite{MR2235860}, we might primarily be interested in the existence of a K\"{a}hler or LCK metric at all. Once such a metric exists, there can be many, at the very least we can rescale it (\textquotedblleft K\"{a}hler cones\textquotedblright). In this sense the volume depends on choices and it is a pointless task to find a smallest volume among arbitrary choices.
However, the situation is a little different for Oeljeklaus-Toma manifolds.\medskip\newline For finite volume hyperbolic $n$-manifolds $X,X^{\prime}$ (with $n\geq3$), if there exists an isomorphism of fundamental groups $\pi_{1}(X,\ast)\overset{\sim}{\longrightarrow}\pi_{1}(X^{\prime},\ast)$, then there even exists an isometry $\phi:X\overset{\sim}{\longrightarrow}X^{\prime}$ (Mostow-Prasad Rigidity). In particular, the volume is a topological invariant; homeomorphic spaces must have the same volume. This makes it very interesting to study the possible volumes, and to search for a smallest volume.\newline For the Oeljeklaus-Toma manifolds $X(K;\mathcal{O}_{K}^{\times,+})$, Proposition \ref{prop_reconstruct} creates a somewhat similar situation. We get a well-defined function \[ \operatorname*{Vol}:\left\{
\begin{array}[c]{c}
\text{spaces }X\text{ homeomorphic to an}\\
\text{Oeljeklaus-Toma manifold}
\end{array}
\right\} \longrightarrow\mathbf{R} \] by associating to any $X$ its \textquotedblleft canonical model\textquotedblright\ $(\mathbf{H}^{s}\times\mathbf{C})/(\pi_{1}(X,\ast))$, which comes with the standard normalized Oeljeklaus-Toma metric. So at least after fixing once and for all a normalized metric (as we have done in this text), we get a well-defined volume and in particular a well-defined infimum of volumes. The situation might be quite different for the spaces $X(K;U)$ with $t>1$ complex places.
As Example \ref{Example_CannotReconstructForMoreComplexPlaces} shows, there are different number fields $K,K^{\prime}$ and admissible subgroups $U,U^{\prime}$ so that there exists a diffeomorphism \[ \frac{\mathbf{H}^{s}\times\mathbf{C}^{t}}{\mathcal{O}_{K}\rtimes U}\overset{\sim}{\longrightarrow}\frac{\mathbf{H}^{s}\times\mathbf{C}^{t}}{\mathcal{O}_{K^{\prime}}\rtimes U^{\prime}}\text{,} \] yet even if there happens to exist a normalized LCK metric (for example Battisti's generalized Oeljeklaus-Toma metric, \cite[Appendix]{MR3193953}), I would suspect the volumes to differ. Example \ref{Example_CannotReconstructForMoreComplexPlaces}\ however says nothing about this since these spaces certainly do not admit any LCK metrics, as we explain \textit{loc. cit}.\medskip This being said, and an overall normalization chosen, let us investigate whether there is a smallest volume. Certainly, the infimum of volumes could just be zero. For those readers who like the bridge to hyperbolic $3$-manifolds as alluded to in \S \ref{sect_toymodelproducthyperbolic}, it should be said that there is a unique smallest compact orientable hyperbolic $3$-manifold, the Weeks manifold \cite{MR1882023}, \cite{MR2525782}. Its volume is \[ \frac{3\cdot23^{\frac{3}{2}}}{4\pi^{4}}\zeta_{K}(2)\qquad\text{for}\qquad K:=\mathbf{Q}[T]/(T^{3}-T+1)\text{.} \] This cubic number field $K$ is the one whose discriminant has the smallest absolute value among all cubic fields. The volume of its Oeljeklaus-Toma manifold is $\approx0.33714644$. Surprisingly, it turns out that this is also the smallest possible volume of an Oeljeklaus-Toma manifold with $s=1$.
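This value is easy to reproduce numerically. The sketch below uses the formula $\operatorname*{Vol}(X)=\frac{1}{4}\sqrt{\left\vert \triangle_{K/\mathbf{Q}}\right\vert }\cdot R_{K}$ for $s=t=1$ appearing in the proof below, together with the classical fact (assumed here, not proved) that the real root of $T^{3}-T-1$, the absolute value of the root of $T^{3}-T+1$, is a fundamental unit of this field, so that the regulator is simply its logarithm:

```python
# Numerical check of the minimal volume for s = 1. Assumes (classically
# known, not proved here) that the real root of T^3 - T - 1 is a
# fundamental unit of the cubic field of discriminant -23, so the
# regulator is just its logarithm.
import math

x = 1.5                      # Newton iteration for the root of T^3 - T - 1
for _ in range(50):
    x -= (x**3 - x - 1) / (3 * x**2 - 1)

regulator = math.log(x)                     # R_K ~ 0.2812
volume = math.sqrt(23) / 4 * regulator      # Vol = (1/4) * sqrt(|disc|) * R_K
print(round(volume, 8))                     # 0.33714644
```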
By quoting some rather hard results from analytic number theory and the geometry of numbers, one can show with little effort that, once a number of real places $s$ is fixed, the volume among all Oeljeklaus-Toma manifolds stays bounded away from zero: \begin{proposition} \label{prop_intext_minvol}For every $s\geq1$ there exists a unique real number $\operatorname*{Vol}\nolimits_{s}$ so that the following holds: \begin{enumerate} \item All Oeljeklaus-Toma manifolds with fixed $s$ have volume $\geq\operatorname*{Vol}\nolimits_{s}$. \item There exist at least one, but at most finitely many, Oeljeklaus-Toma manifolds actually attaining this minimal volume $\operatorname*{Vol}\nolimits_{s}$. \item We have the crude lower bound \[ \operatorname*{Vol}\nolimits_{s}\geq\pi\frac{(s+2)^{s+1}}{4^{s+2}\cdot2^{s^{2}}\cdot s!}\text{.} \] \end{enumerate} For the special case $s=1$ there is a unique Oeljeklaus-Toma manifold of smallest volume, namely \[ \operatorname*{Vol}\nolimits_{1}=0.337146\ldots \] It is the one coming from the number field \[ K:=\mathbf{Q}[T]/(T^{3}-T+1)\text{.} \] \end{proposition} \begin{proof} All the real work here lies in a deep result of Friedman \cite{MR1022309}, based on earlier work of Remak and Zimmert. We have: \begin{itemize} \item For every number field $K$, apart from three exceptions with $[K:\mathbf{Q}]=6$, we have $R_{K}>\frac{1}{4}$ (\cite[Theorem B]{MR1022309}). \item For every number field $K$ with $s=t=1$ and $\left\vert \triangle_{K/\mathbf{Q}}\right\vert <18.7^{3}$ we have $R_{K}/2\geq0.14$ (\cite[Prop. 2.2 and Table 2 for $(r_{1},r_{2})=(1,1)$]{MR1022309}). \end{itemize} If one is willing to accept far weaker bounds, a short proof of a lower bound for the regulator in terms of $s$ can also be found in \cite{MR1225260}. Let $X$ be an arbitrary Oeljeklaus-Toma manifold for a given $s\geq1$. From the first estimate and Prop.
\ref{prop_main} we readily obtain the bound \[ \operatorname*{Vol}\left( X\right) >\frac{(s+1)}{4^{s+1}\cdot2^{s^{2}}}\cdot\sqrt{\left\vert \triangle_{K/\mathbf{Q}}\right\vert }\text{,} \] except for finitely many fields, which we may safely ignore as this does not affect the validity of our claim (since their regulators are explicitly known and listed in Friedman's work, we could also just work with the overall minimal regulator). Furthermore, there is the standard Minkowski discriminant estimate \[ \sqrt{\left\vert \triangle_{K/\mathbf{Q}}\right\vert }\geq\left( \frac{\pi}{4}\right) \frac{n^{n}}{n!}\qquad\text{for }n:=s+2 \] the degree of the field. Combining these inequalities, we arrive at \begin{equation} \operatorname*{Vol}\left( X\right) >\pi\frac{(s+1)(s+2)^{s+2}}{4^{s+2}\cdot2^{s^{2}}(s+2)!}\text{.} \label{lja1} \end{equation} Work of Odlyzko, Martinet, and many others would give much better lower bounds for particular ranges of $s$, but this estimate suffices for our needs. Define \[ V_{s}=\{\operatorname*{Vol}\left( X\right) \mid X(K,\mathcal{O}_{K}^{\times,+})\text{ for any }K\text{ with }t=1\text{ and given }s\}\subset\mathbf{R}\text{,} \] the set of all volumes that can occur for fixed $s$. This set is non-empty and bounded from below by Equation \ref{lja1}, so it has some infimum $\wp:=\inf(V_{s})$. We now argue by contradiction: Suppose there is no $X$ whose volume attains this infimum.
This means that there exists a sequence of number fields $K_{n}$ so that \begin{align} \wp & =\underset{n\rightarrow\infty}{\lim}\operatorname*{Vol}\left( X(K_{n},\mathcal{O}_{K_{n}}^{\times,+})\right) \nonumber\\ & =\frac{(s+1)}{4^{s}\cdot2^{s^{2}}}\cdot\underset{n\rightarrow\infty}{\lim}\sqrt{\left\vert \triangle_{K_{n}/\mathbf{Q}}\right\vert }\cdot R_{K_{n}}\text{.} \label{lja2} \end{align} The Hermite-Minkowski Theorem tells us that there are only finitely many number fields of bounded discriminant $\left\vert \triangle_{K/\mathbf{Q}}\right\vert <C$ for any $C\geq0$, so if the sequence $(\left\vert \triangle_{K_{n}/\mathbf{Q}}\right\vert )_{n\geq0}$ stays bounded, $\{K_{0},K_{1},K_{2},\ldots\}$ is actually a finite set and therefore some $K_{i}$ will realize the infimum, contradicting our assumption. Thus, we must have $\lim\nolimits_{n\rightarrow\infty}\sqrt{\left\vert \triangle_{K_{n}/\mathbf{Q}}\right\vert }=+\infty$. Hence, from Equation \ref{lja2} we can deduce that $\lim\nolimits_{n\rightarrow\infty}R_{K_{n}}=0$. This contradicts Friedman's bound $R_{K_{n}}>\frac{1}{4}$. Thus, there exists at least one $K_{i}$ with $\operatorname*{Vol}\left( X(K_{i},\mathcal{O}_{K_{i}}^{\times,+})\right) =\wp$. If $\{K_{i}\}$ now denotes the (possibly infinite) set of all number fields realizing the volume $\wp$, that is \[ \wp=\frac{(s+1)}{4^{s}\cdot2^{s^{2}}}\cdot\sqrt{\left\vert \triangle_{K_{i}/\mathbf{Q}}\right\vert }\cdot R_{K_{i}}\text{,} \] the same argument as above shows that the set $\{K_{i}\}$ must be finite, for otherwise the discriminants grow arbitrarily large, ultimately forcing regulators $\leq\frac{1}{4}$, which is impossible.
Next, consider the case $s=t=1$: Firstly, for $\left\vert \triangle_{K/\mathbf{Q}}\right\vert \geq18.7^{3}$ the first Friedman estimate shows that \[ \operatorname*{Vol}\left( X\right) =\frac{1}{4}\cdot\sqrt{\left\vert \triangle_{K/\mathbf{Q}}\right\vert }\cdot R_{K}>\frac{1}{16}\cdot\sqrt{18.7^{3}}>5\text{.} \] Next, suppose $\left\vert \triangle_{K/\mathbf{Q}}\right\vert <18.7^{3}$. Then the second Friedman estimate implies \[ \operatorname*{Vol}\left( X\right) =\frac{1}{4}\cdot\sqrt{\left\vert \triangle_{K/\mathbf{Q}}\right\vert }\cdot R_{K}>\frac{0.28}{4}\cdot\sqrt{\left\vert \triangle_{K/\mathbf{Q}}\right\vert } \] and since the smallest possible discriminant of a cubic field is $\left\vert \triangle_{K/\mathbf{Q}}\right\vert =23$, we deduce $\operatorname*{Vol}\left( X\right) >0.335708$. Thus, we have a good lower bound for the smallest possible volume. Next, let us assume that $K$ has a discriminant of absolute value larger than $23$, so at least $24$. Then Friedman's bound shows that \[ \operatorname*{Vol}\left( X\right) >\frac{0.28}{4}\cdot\sqrt{24}>0.3429\text{.} \] Since the Oeljeklaus-Toma manifold of $K:=\mathbf{Q}[T]/(T^{3}-T+1)$ has the underlined volume in \[ 0.335708<\underline{0.3371\ldots}<0.3429\text{,} \] we deduce that the minimal volume can only be (and indeed is) attained for number fields $K$ with $s=t=1$ and discriminant $\left\vert \triangle_{K/\mathbf{Q}}\right\vert =23$. Moreover, in the present case it is known that there exists only one number field with discriminant of absolute value $23$. \end{proof} \begin{proof}[Proof of Prop. \ref{prop_boundedminvolume}]This is just a reformulation of the previous result, using that the dimension of an Oeljeklaus-Toma manifold is $\dim X=s+2$. \end{proof} After the Weeks manifold, the compact oriented arithmetic hyperbolic $3$-manifold of next larger volume is the Meyerhoff manifold, \cite{MR1882023}.
It was shown by Chinburg \cite{MR883417} to be arithmetic and to have volume \[ \frac{12\cdot283^{\frac{3}{2}}}{(2\pi)^{6}}\zeta_{K}(2)\qquad\text{for}\qquad K:=\mathbf{Q}[T]/(T^{4}-T-1)\text{.} \] This quartic number field $K$ has $s=2$ and $t=1$ real resp. complex places and discriminant $-283$. It is known that the smallest discriminants for these numbers of places are as given in the left-hand column of the following table \[
\begin{tabular}[c]{c|c|c}
$\triangle_{K/\mathbf{Q}}$ & $\operatorname*{Vol}\left( X\right) $ & min. polynomial\\\hline
\multicolumn{1}{r|}{$-275$} & \multicolumn{1}{|r|}{$0.0717$} & \multicolumn{1}{|r}{$T^{4}-T^{3}+2T-1$}\\
\multicolumn{1}{r|}{$-283$} & \multicolumn{1}{|r|}{$0.0745$} & \multicolumn{1}{|r}{$T^{4}-T-1$}\\
\multicolumn{1}{r|}{$-331$} & \multicolumn{1}{|r|}{$0.0921$} & \multicolumn{1}{|r}{$T^{4}-T^{3}+T^{2}+T-1$}\\
\multicolumn{1}{r|}{$-400$} & \multicolumn{1}{|r|}{$0.1196$} & \multicolumn{1}{|r}{$T^{4}-T^{2}-1$}\\
\multicolumn{1}{r|}{$-475$} & \multicolumn{1}{|r|}{$0.1473$} & \multicolumn{1}{|r}{$T^{4}-2T^{3}+T^{2}-2T+1$}
\end{tabular} \] We leave it to the reader to show that the middle column indeed gives the four smallest possible volumes for Oeljeklaus-Toma manifolds with two real places. One can proceed as in the argument above, this time using Friedman's estimate $R_{K}/2>0.1835$ for $\left\vert \triangle_{K/\mathbf{Q}}\right\vert \leq36^{4}$, \cite[Prop. 2.2 and Table 2 for $(r_{1},r_{2})=(2,1)$]{MR1022309}. \begin{example} We will now determine the minimal volumes of Oeljeklaus-Toma manifolds for $s=1,2,3,4,5$. We follow the same method as in the proof of Prop. \ref{prop_intext_minvol}, but suppress a number of details and just explain the general pattern. It would seem entirely hopeless to me to perform the necessary verifications below without the help of a computer.
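As a warm-up for such machine verifications, the crude lower bound of Prop. \ref{prop_intext_minvol} can be checked against the minimal volumes that we determine in this example (the minimal volume values are copied in as plain numbers; this is a plausibility check, not part of any proof):

```python
# Check that the crude bound Vol_s >= pi*(s+2)^(s+1) / (4^(s+2)*2^(s^2)*s!)
# indeed lies below the minimal volumes determined in this example
# (minimal volume values copied in by hand).
import math

minimal_vol = {1: 0.33714, 2: 0.07174, 3: 0.00515, 4: 1.146e-4, 5: 7.650e-7}

for s, vol in minimal_vol.items():
    bound = math.pi * (s + 2)**(s + 1) / (4**(s + 2) * 2**(s * s) * math.factorial(s))
    assert bound <= vol
    print(f"s={s}: {bound:.3e} <= {vol:.3e}")
```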
Using tables for minimal known discriminants, we first compile the following table \begin{equation}
\begin{tabular}[c]{c|c|c|c|c|c}
$s$ & $R_{K}^{>}$ & $\left\vert \triangle_{K/\mathbf{Q}}\right\vert _{1^{st}}$ & $\left\vert \triangle_{K/\mathbf{Q}}\right\vert _{2^{nd}}$ & $\operatorname*{Vol}$ of $1^{st}$ & $V_{2^{nd}}^{>}$\\\hline
\multicolumn{1}{r|}{$1$} & \multicolumn{1}{|r|}{$0.28$} & \multicolumn{1}{|r|}{$23$} & \multicolumn{1}{|r|}{$31$} & \multicolumn{1}{|r|}{$0.33714$} & \multicolumn{1}{|r}{$0.38974$}\\
\multicolumn{1}{r|}{$2$} & \multicolumn{1}{|r|}{$0.367$} & \multicolumn{1}{|r|}{$275$} & \multicolumn{1}{|r|}{$283$} & \multicolumn{1}{|r|}{$0.07174$} & \multicolumn{1}{|r}{$0.07235$}\\
\multicolumn{1}{r|}{$3$} & \multicolumn{1}{|r|}{$0.6218$} & \multicolumn{1}{|r|}{$4511$} & \multicolumn{1}{|r|}{$4903$} & \multicolumn{1}{|r|}{$0.00515$} & \multicolumn{1}{|r}{$0.00531$}\\
\multicolumn{1}{r|}{$4$} & \multicolumn{1}{|r|}{$1.2376$} & \multicolumn{1}{|r|}{$92779$} & \multicolumn{1}{|r|}{$94363$} & \multicolumn{1}{|r|}{$0.0001146$} & \multicolumn{1}{|r}{$0.0001133$}\\
\multicolumn{1}{r|}{$5$} & \multicolumn{1}{|r|}{$2.7822$} & \multicolumn{1}{|r|}{$2306599$} & \multicolumn{1}{|r|}{$2369207$} & \multicolumn{1}{|r|}{$7.650\cdot10^{-7}$} & \multicolumn{1}{|r}{$7.478\cdot10^{-7}$}
\end{tabular} \label{lTable1} \end{equation} Here the column \textquotedblleft$R_{K}^{>}$\textquotedblright\ lists a lower bound for the regulator of all number fields with given $s$ and $t=1$. We just copied these values from the work of Friedman (\cite[Table $2$ for $(r_{1},r_{2})=(s,1)$]{MR1022309}), noting that his table spells out lower bounds for $R_{K}/2$. His values are only valid for discriminants smaller than certain bounds also given in \cite[Table $2$]{MR1022309}, but these are harmless in all cases we deal with. Unsurprisingly so, as we are mostly interested in the smallest possible discriminants.
The columns \textquotedblleft$\left\vert \triangle_{K/\mathbf{Q}}\right\vert _{1^{st}}$\textquotedblright\ and \textquotedblleft$\left\vert \triangle_{K/\mathbf{Q}}\right\vert _{2^{nd}}$\textquotedblright\ list the smallest and second smallest discriminants possible for the given $s$ and $t=1$. In principle there could be several number fields realizing the smallest discriminant, but in all cases we touch here, there is a unique one: \begin{equation}
\begin{tabular}[c]{c|c}
$s$ & number field of $\left\vert \triangle_{K/\mathbf{Q}}\right\vert _{1^{st}}$\\\hline
\multicolumn{1}{r|}{$1$} & \multicolumn{1}{|r}{$T^{3}-T^{2}+1$}\\
\multicolumn{1}{r|}{$2$} & \multicolumn{1}{|r}{$T^{4}-T^{3}+2T-1$}\\
\multicolumn{1}{r|}{$3$} & \multicolumn{1}{|r}{$T^{5}-T^{3}-2T^{2}+1$}\\
\multicolumn{1}{r|}{$4$} & \multicolumn{1}{|r}{$T^{6}-T^{5}-2T^{4}+3T^{3}-T^{2}-2T+1$}\\
\multicolumn{1}{r|}{$5$} & \multicolumn{1}{|r}{$T^{7}-3T^{5}-T^{4}+T^{3}+3T^{2}+T-1$}
\end{tabular} \label{lTable2} \end{equation} We compute their regulators with the help of a computer and thereby obtain the volumes of the Oeljeklaus-Toma manifolds associated to the number field of smallest possible discriminant. These values are listed in the column \textquotedblleft$\operatorname*{Vol}$ of $1^{st}$\textquotedblright. Next, we use Friedman's bound to compute a lower bound on the volumes of all Oeljeklaus-Toma manifolds coming from number fields of discriminant at least the second smallest, i.e. in the column \textquotedblleft$V_{2^{nd}}^{>}$\textquotedblright\ we list \begin{equation} \frac{(s+1)}{4^{s}\cdot2^{s^{2}}}\sqrt{\left\vert \triangle_{K/\mathbf{Q}}\right\vert _{2^{nd}}}\cdot\text{(Friedman~bound }R_{K}^{>}\text{).} \label{lja3} \end{equation} The cases $s=1,2,3$ are now settled: as soon as we use number fields whose discriminants are second smallest or larger, we exceed the volume of the Oeljeklaus-Toma manifold made from the number field of smallest discriminant.
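To illustrate, the column \textquotedblleft$V_{2^{nd}}^{>}$\textquotedblright\ of Table \ref{lTable1} can be reproduced by the following sketch, with Friedman's bounds and the second smallest discriminants copied in by hand:

```python
# Reproduce the column V_2nd^> of the table via
# (s+1)/(4^s * 2^(s^2)) * sqrt(|disc|_2nd) * (Friedman bound R_K^>).
import math

rows = {  # s: (R_K^>, |disc|_2nd, listed V_2nd^>)
    1: (0.28, 31, 0.38974),
    2: (0.367, 283, 0.07235),
    3: (0.6218, 4903, 0.00531),
    4: (1.2376, 94363, 1.133e-4),
    5: (2.7822, 2369207, 7.478e-7),
}

for s, (reg_bound, disc, listed) in rows.items():
    v = (s + 1) / (4**s * 2**(s * s)) * math.sqrt(disc) * reg_bound
    assert abs(v - listed) / listed < 5e-3   # matches to the printed precision
    print(f"s={s}: {v:.5g}")
```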
The case $s=4$ is more involved, since the Oeljeklaus-Toma manifold of the unique sextic field of smallest discriminant has a volume strictly larger than the lower bound $V_{2^{nd}}^{>}$, so a smaller volume could hypothetically occur for the second smallest discriminant as well. In fact, the second smallest discriminant for $s=4$, that is $-94363$, is realized by the number field of \[ T^{6}-2T^{4}-2T^{3}+3T+1\text{.} \] We compute its volume to be $0.000116$, so it is not smaller. The next larger discriminant is known to be $\left\vert \triangle_{K/\mathbf{Q}}\right\vert _{3^{rd}}=103243$, and Friedman's bound as in Equation \ref{lja3} yields a minimal volume of $0.00011851$ for discriminants $\geq\left\vert \triangle_{K/\mathbf{Q}}\right\vert _{3^{rd}}$. This settles the case: the manifold coming from the number field of smallest discriminant still also has the smallest volume. For $s=5$ the same happens. The second smallest discriminant is realized by \begin{equation} T^{7}-4T^{5}+3T^{3}-T^{2}+T+1 \label{lja4} \end{equation} and we compute its volume to be $7.88\cdot10^{-7}$, which is larger. We have $\left\vert \triangle_{K/\mathbf{Q}}\right\vert _{3^{rd}}=2616839$ and Friedman's bound shows that for this and larger discriminants the volumes must be at least $7.85\cdot10^{-7}$. This confirms that we have found the smallest one, but we do not know for sure whether Equation \ref{lja4} defines the second smallest one.\newline We conclude: The values listed under \textquotedblleft$\operatorname*{Vol}$ of $1^{st}$\textquotedblright\ in Table \ref{lTable1} provide the smallest possible volume for the given $s$; in each case it is realized by only a single manifold, namely the one coming from the field listed in Table \ref{lTable2}. \end{example} Although all of the above computations might suggest that the volumes follow the ordering of ascending discriminants, this simple pattern completely collapses as we move farther from the minimal volumes.
The polynomials in the following table have been chosen rather at random, but ordering the rows by increasing volume shows that this implies little about the ordering of the discriminants: \[
\begin{tabular}[c]{c|c|c}
$\triangle_{K/\mathbf{Q}}$ & $\operatorname*{Vol}\left( X\right) $ & min. polynomial\\\hline
\multicolumn{1}{r|}{$-1931$} & \multicolumn{1}{|r|}{$0.7162$} & \multicolumn{1}{|r}{$T^{4}+3T+1$}\\
\multicolumn{1}{r|}{$-6371$} & \multicolumn{1}{|r|}{$3.0870$} & \multicolumn{1}{|r}{$T^{4}+13T+1$}\\
\multicolumn{1}{r|}{$-8123$} & \multicolumn{1}{|r|}{$3.5939$} & \multicolumn{1}{|r}{$T^{4}-4T^{3}-T-1$}\\
\multicolumn{1}{r|}{$-12675$} & \multicolumn{1}{|r|}{$4.6792$} & \multicolumn{1}{|r}{$T^{4}-8T^{3}-T-1$}\\
\multicolumn{1}{r|}{$-6656$} & \multicolumn{1}{|r|}{$5.3600$} & \multicolumn{1}{|r}{$T^{4}-4T+1$}\\
\multicolumn{1}{r|}{$-16619$} & \multicolumn{1}{|r|}{$7.5061$} & \multicolumn{1}{|r}{$T^{4}-5T+1$}\\
\multicolumn{1}{r|}{$-8684$} & \multicolumn{1}{|r|}{$9.2152$} & \multicolumn{1}{|r}{$T^{4}-6T+1$}
\end{tabular} \] Although Friedman's estimates are very non-trivial results, the result that the Weeks manifold has the smallest volume among compact hyperbolic $3$-manifolds is of a completely different level of complexity. This remark truly applies to any comparison we make between Oeljeklaus-Toma manifolds and hyperbolic or product-hyperbolic geometries in this text. \begin{acknowledgement} I would like to express my sincere gratitude to Victor Vuletescu for teaching me a lot of things, not all of them of mathematical nature. This note is a direct result of his inspiring ideas about the interplay of geometric and arithmetic conditions in Oeljeklaus-Toma manifolds. I also thank Chris Wuthrich for introducing me to SAGE. \end{acknowledgement} \bibliographystyle{alpha}
\section{Introduction} \label{sec:introduction} Technological advancements allow for both longer life expectancy and higher quality of life. Both increase the demand on medical personnel, who are also increasingly expected to perform personalized and patient-specific procedures, such as surgical planning via morphological approaches~\cite{furnstahl2016surgical} or functional simulation~\cite{pean2017physical}. To that end, even when target anatomical structures are visible in an imaging modality such as MRI, CT, or ultrasound, automatically identifying and delineating (segmenting) them often remains the bottleneck. Due to limited resources for manual annotations, patient-specific procedures are still not a common practice for most clinical applications. In recent years, deep learning (DL) has shown encouraging segmentation performance when a sufficient amount of annotated data for the anatomical structure of interest is available. Annotating a sufficiently large dataset by medical experts is a time- and hence cost-intensive undertaking. The idea of \textit{active learning} is to identify the samples that, once annotated, will bring the most value, which can be defined, e.g., as the gain in segmentation performance of the learned model. In an iterative process, the developed framework selects a new set of samples --~also referred to as \textit{batch-mode} active learning~-- to be manually annotated at each active learning iteration. This is inherently feasible in the clinical environment, where medical experts anyhow annotate small batches of images at different intervals based on their availability between daily clinical responsibilities. In a clinical setting, typically at each annotation session, image data to be annotated is loaded from a picture archiving and communication system (PACS).
A \textit{pool-based} active learning system can thus intervene at that stage, in order to intelligently determine which volumes or which image slices to display and request the user to annotate. Active learning with DL remains a challenging problem, since DL solutions do not typically generalize well to unseen samples. Hence, there has been a wide range of approaches in the literature to improve sample selection in active learning. Most of these works can be grouped under \textit{uncertainty} and \textit{representation} based sampling methodologies. \\ \noindent\textbf{Uncertainty Sampling.} In~\cite{gal2017deep,gal2016dropout}, it was shown that \emph{dropout} layers can be used at inference time to sample from the approximate posterior, so-called Monte Carlo (MC) Dropouts. This gives flexibility for sampling as many posteriors as desired, with virtually zero cost added during training; i.e., a similar cost for training a single model as opposed to an ensemble. Then, the disagreement among posteriors, e.g.,\, variance, can be used to quantify the uncertainty. In~\cite{matthias2018deep} a classification approach was proposed, where ``pseudo-labels'' are assigned to non-annotated samples using a network trained on a small annotated sample set. The objective at an active learning iteration is then to keep prediction accuracy as high as possible on the annotated sample set while using MC Dropouts to query for the most uncertain non-annotated samples to be annotated. In~\cite{konyushkova2019geometry}, the proposed method queries a patch from 3D volumes using a combination of geometric smoothness priors and novel entropy-based uncertainty measures. \\ \noindent\textbf{Representation Sampling.} Uncertainty quantification with DL models can lead to out-of-distribution samples being ignored in the active learning process~\cite{sener2017geometric}. Consequently, population coverage for active learning is widely investigated.
In~\cite{sener2017active}, the authors propose a greedy sample selection algorithm using the last fully connected layer of a Convolutional Neural Network (CNN) to solve maximum set-cover~\cite{Feige1998a} between the pool of all images and the union of the currently annotated samples and the next sample to be queried. In~\cite{yang2017suggestive}, a similar representation sampling method is coupled with uncertainty sampling when tackling active learning for semantic segmentation. The authors compute an uncertainty measure as the variance of predictions from multiple CNNs, where each CNN is trained with a bootstrap of the available dataset. Next, a representative subset of the most uncertain samples is sought by computing the angle between \textit{image descriptor} vectors $x^\mathrm{id}$, defined as the spatially averaged activation tensor from the CNN where the spatial resolution is the coarsest. Distance metric approaches in high-dimensional spaces suffer from the so-called \textit{distance concentration}~\cite{franccois2008high}, which is a limitation of both works~\cite{sener2017active} and \cite{yang2017suggestive} above. Note that with the methods described above, the not-yet-annotated dataset is only weakly integrated at any stage prior to quantifying a fitness metric of samples from that dataset. In other words, a posterior estimated from the relatively small annotated dataset is taken to be a good predictor of the complete dataset distribution. This is a strong assumption, especially at early active learning iterations when the annotated set size is still small. Powerful tools have been proposed for unsupervised DL, such as Autoencoders (AEs)~\cite{bengio2007greedy}, which learn to map (encode) the high dimensional input space onto a manifold of substantially lower dimensions, such that the high dimensional input can be reconstructed with a second mapping function (decoder).
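To make the encode/decode idea concrete: in the linear case with a squared-error loss, the optimal autoencoder is known to coincide with principal component analysis, so a closed-form toy example (our own illustration, unrelated to the cited methods) suffices:

```python
# Toy linear "autoencoder": for a linear encoder/decoder and squared error,
# the optimum is PCA. We encode 2-D points that lie on a line into a 1-D
# latent code and reconstruct them (closed form, no training loop).
import math

points = [(t, 2.0 * t) for t in (-2.0, -1.0, 0.5, 1.0, 2.5)]

mx = sum(x for x, _ in points) / len(points)
my = sum(y for _, y in points) / len(points)
centered = [(x - mx, y - my) for x, y in points]

# Leading eigenvector of the 2x2 covariance matrix (closed form).
cxx = sum(x * x for x, _ in centered) / len(points)
cxy = sum(x * y for x, y in centered) / len(points)
cyy = sum(y * y for _, y in centered) / len(points)
lam = 0.5 * ((cxx + cyy) + math.sqrt((cxx - cyy) ** 2 + 4 * cxy**2))
nrm = math.sqrt(cxy**2 + (lam - cxx) ** 2)
d = (cxy / nrm, (lam - cxx) / nrm)          # unit principal direction

codes = [x * d[0] + y * d[1] for x, y in centered]        # encoder: R^2 -> R
recon = [(z * d[0] + mx, z * d[1] + my) for z in codes]   # decoder: R -> R^2

err = sum((a - x) ** 2 + (b - y) ** 2 for (a, b), (x, y) in zip(recon, points))
print(f"reconstruction error: {err:.1e}")   # ~0: the data lies on the manifold
```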
Variational autoencoders (VAEs)~\cite{Kingma2013autoencoding} build on AEs, with additional regularization enforced in the latent space. This regularization constraint penalizes the encoder part of the network such that the training dataset is mapped onto a known prior distribution of some random variables, often modelled as a standard normal distribution. Intuitively, this regularization promotes the creation of a continuous latent space of the observed samples. In ideal conditions, this means that, traversing the manifold from the latent vector of one image to that of another, one can generate realistic samples that gradually change from the former image to the latter. In active learning, having such an embedding space explaining the non-annotated data is a formidable source of information that can readily be exploited in order to ensure that key samples from the given population are queried for annotation early on. In~\cite{zhao2017infovae} the authors show that the latent space of a VAE can be suboptimal; hence, they propose \textit{infoVAE}, which uses Maximum Mean Discrepancy (MMD)~\cite{Gretton2006a,Li2015generative} instead of the KL-divergence measure, as MMD learns a more continuous and informative latent space representation. Recently, the authors of~\cite{sinha2019variational} presented an active learning framework where they train a VAE on all available images in an adversarial fashion with a discriminator classifying between annotated and non-annotated samples. Then, sample selection can be done with the discriminator using the latent space of the trained VAE, potentially solving the high-dimensional distance concentration problem of earlier works. In the medical field, UNet~\cite{ronneberger2015unet} and DCAN~\cite{Chen2016DCAN} are some of the most popular neural network architectures for segmentation.
Most works in the field of medical image analysis have adopted the UNet approach, thanks to its intuitive structure and consistently high performance in pixel-level tasks, such as~\cite{Zeng20173dunet,milletari2016vnet,ozdemir2018learn,Salehi2017precise}. On the other hand, DCAN won the 2015 MICCAI Gland Segmentation Challenge~\cite{Sirinukunwattana2017gland}. Thanks to its deeply supervised~\cite{lee2015deeply,guo2019btsdsn} architecture, DCAN can be trained faster, thereby being particularly attractive in active learning~\cite{yang2017suggestive}. In earlier work~\cite{ozdemir2018active}, we achieved state-of-the-art results in active learning for the segmentation of a shoulder MR dataset. Inspired by~\cite{yang2017suggestive}, we proposed metrics quantifying both uncertainty and representativeness for selecting the next batch of samples. In contrast to~\cite{yang2017suggestive}, we used variance from MC Dropout samples~\cite{gal2016dropout} as an uncertainty metric, experimented with different representativeness metrics, explored different means to combine uncertainty and representativeness measures, and proposed a latent space regularization term that promotes maximizing its information content during training of the segmentation network. Although the optimization of maximum entropy in the latent space can be counter-intuitive for segmentation, our results in~\cite{ozdemir2018active} showed that it can help ensure that a discriminative representation of the image dataset is generated. In this work, we approach the representativeness measure from a probabilistic point of view, where we optimize for the MMD~\cite{Gretton2006a} divergence using VAEs to learn meaningful latent features which follow a Gaussian distribution. This is herein studied for a segmentation task using a Bayesian approach for an efficient coverage of the entire set of images with the significantly smaller set of annotated images.
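For intuition, a biased (V-statistic) sample estimate of the squared MMD between two sets of latent vectors can be sketched as below; the Gaussian kernel and the toy latent codes are our own illustrative choices, not necessarily those of the cited works:

```python
# Biased (V-statistic) estimate of squared MMD with a Gaussian kernel:
# MMD^2(X, Y) = E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)].
import math

def gauss_kernel(a, b, sigma=1.0):
    sq = sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return math.exp(-sq / (2.0 * sigma**2))

def mmd2(xs, ys, sigma=1.0):
    kxx = sum(gauss_kernel(a, b, sigma) for a in xs for b in xs) / len(xs) ** 2
    kyy = sum(gauss_kernel(a, b, sigma) for a in ys for b in ys) / len(ys) ** 2
    kxy = sum(gauss_kernel(a, b, sigma) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2.0 * kxy

near = [(0.0, 0.1), (0.2, -0.1), (-0.1, 0.0)]   # toy 2-D latent codes
far = [(5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]

print(mmd2(near, near))   # 0.0 for identical samples
print(mmd2(near, far))    # close to 2 for well-separated samples
```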
Our representation sampling is agnostic to the current and future tasks, i.e.,\, independent of the task. Similarly to~\cite{sinha2019variational}, we herein adopt the idea of VAEs for a low dimensional representation for sampling. Additionally, we herein incorporate an uncertainty-based sampling criterion to further promote relevant sample selection. We utilize VAEs particularly with MMD, which was shown in~\cite{zhao2017infovae} to improve latent space representations. Note that for our purposes, in contrast to earlier works, additional training for representation sampling is not needed at new active learning iterations, which is an important advantage since the pool of medical image datasets can be vast and prohibitive for regular additional training in the clinical setting. \section{Methods} \label{sec:methods} \subsection{Notation} Below we define the notation used in this manuscript. \noindent \textbf{Dataset.} Let the pool of all images $D_\mathrm{pool}$ consist of images $\{x\}$ and their annotations $\{y\}$, the latter of which in an active learning iteration would be partially inaccessible for images not yet annotated. At a given active learning iteration $t$, there would then be a readily annotated dataset $D_\mathrm{an}^{(t)} \subset D_\mathrm{pool}$. The not-yet-annotated dataset is referred to as $D_\mathrm{non}^{(t)} = D_\mathrm{pool} \setminus D_\mathrm{an}^{(t)}$, for which in practice only the images are available. For brevity, we will omit the active learning iteration superscript $(\cdot)^{(t)}$ for descriptions within an iteration and use this only for formulations that affect multiple iterations. Note that typically $|D_\mathrm{an}| \ll |D_\mathrm{pool}|$, since active learning would be redundant if the two sets were of similar cardinality.
We will treat these sets as random variables, hence observations from the annotated and the pool image sets then become $x_\mathrm{an} \sim X_\mathrm{an}$ and $x_\mathrm{pool} \sim X_\mathrm{pool}$, respectively. At an active learning iteration, i.e.,\, prior to each manual annotation session, a method should select a set of samples $S_\mathrm{query}$ to be annotated, where $S_\mathrm{query} \subset D_\mathrm{non}$. Once annotated by the user, these samples will be appended to the annotated dataset along with their manual annotations, yielding $D_\mathrm{an}^{(t+1)}$ for the next iteration of active learning. \vspace{1ex}\noindent \textbf{Architecture.} The architecture of our fully convolutional networks (FCNs) for segmentation follows a DCAN-like structure~\cite{yang2017suggestive}, where the receptive field of the convolutional kernels increases through max-pooling operations, creating spatially coarser feature maps while increasing the number of feature channels being learned. We call the spatially coarsest level of the network the \textit{abstraction layer}~\cite{ozdemir2018active}; it is relevant for the baseline method we compare against. Segmentation models are trained using pairs of images and annotations $\{x_i, y_i\} \in D_\mathrm{an}$. For all VAE-based methods, the learned embedding space $Z \in \mathbb{R}^{n_\mathrm{lat}}$ is defined by $n_\mathrm{lat}$ latent variables. VAE models are trained using only images $\{x_i\} \in D_\mathrm{pool}$. Without loss of generality, different network architectures can also be envisioned for the active learning approach proposed in this work. What is essential is to accommodate the necessary modules in the segmentation model to quantify uncertainty, and to estimate a latent space that represents the image population in the form of a normal distribution for representativeness quantification.
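Using this notation, one active learning run can be summarized by the following sketch; \texttt{train}, \texttt{score}, and \texttt{annotate} are placeholders for the segmentation training, the selection criteria of the following subsections, and the manual expert annotation, respectively:

```python
# Skeleton of batch-mode, pool-based active learning in the above notation.
# `train`, `score`, and `annotate` are placeholders, not actual components.

def active_learning(pool_images, annotate, train, score, batch_size, n_iters):
    annotated = {}                        # D_an: image id -> annotation
    non_annotated = set(pool_images)      # D_non = D_pool \ D_an (images only)
    for _ in range(n_iters):
        model = train(annotated)
        # S_query: the batch of most valuable not-yet-annotated samples.
        query = sorted(non_annotated, key=lambda i: score(model, i),
                       reverse=True)[:batch_size]
        for i in query:                   # the expert annotates S_query
            annotated[i] = annotate(i)
            non_annotated.remove(i)
    return annotated
```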
\subsection{Quantifying Uncertainty} \label{sec:uncertainty} The model uncertainty expected when segmenting a non-annotated image is undoubtedly one of the most important cues to aim for in active learning. However, uncertainty is not inherently quantified in most CNNs. Consider a conventional supervised segmentation task using dataset $D_\mathrm{an}$. For an observation $x$, the task can be formulated as computing the maximum a posteriori $p(y|x, \Theta)$, where $\Theta$ is the set of learned model parameters using $X_\mathrm{an}$ and $Y_\mathrm{an}$. This can be formulated as \begin{equation} \begin{split} y^{*} & = \arg\max_{y} \int p(y|x, \theta) p(\theta|X_\mathrm{an}, Y_\mathrm{an}) d\theta \\ & \approx p(y|x, \Theta) \text{ s.t. } \Theta = \arg\max_{\theta} p(\theta|X_\mathrm{an}, Y_\mathrm{an}) \text{, } \end{split} \label{eqn:segmentation_formula} \end{equation} where the maximum a posteriori for $\theta$ is learned instead, due to the impracticality of integrating over the high-dimensional $\theta$. This then leads to deterministic predictions for $y$. In order to approximate $p(y|x, \theta)$, MC Dropout was proposed in~\cite{gal2017deep} to sample from the model parameters, aggregating a desired number of posterior predictions with only additional inference operations. In order to leverage the benefits of MC Dropout, we modify the DCAN architecture~\cite{yang2017suggestive} with additional spatial dropout layers~\cite{tompson2015efficient}, similarly to~\cite{ozdemir2018active}. First, we infer a tensor of segmentation predictions $p(y \!\! = \!\! l\, |\, x, \hat{\theta}) \in \mathbb{R}^{n_\mathrm{MC} \times N}$ for label $l$ given each draw of model parameters $\hat{\theta}$ depending on the random dropouts, where $n_\mathrm{MC}$ is the number of MC Dropout samples and $N$ is the number of input image pixels. Next, we compute the uncertainty map for label $l$ as the variance of each pixel prediction over the $n_\mathrm{MC}$ inferences.
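The per-pixel variance computation just described can be sketched in numpy as follows (the array shapes and function name are our assumptions):

```python
import numpy as np

def mc_dropout_uncertainty(mc_probs):
    """mc_probs: array of shape (n_MC, N) with the predicted probabilities
    p(y=l | x, theta_hat) of all N pixels for one label l, one row per
    stochastic (MC-Dropout) forward pass. Returns the per-pixel variance
    map and its spatial average."""
    var_map = mc_probs.var(axis=0)  # variance over the n_MC inferences
    return var_map, float(var_map.mean())

# Two MC passes over a 2-pixel image: the first pixel's prediction varies.
var_map, m_unc = mc_dropout_uncertainty(np.array([[0.2, 0.5],
                                                  [0.4, 0.5]]))
```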
Finally, we compute a scalar uncertainty measure as the spatial average of this uncertainty map, yielding \begin{equation} m_\mathrm{unc}(y=l) = \frac{1}{N}\sum_{n=1}^{N} \mathrm{var}\left[p(y=l\, |\,x, \hat{\theta})^{(n)}\right] \label{eqn:uncertainty} \end{equation} where $p(y \!\! = \!\! l\, |\, x, \hat{\theta})^{(n)}$ is the vector of $n_\mathrm{MC}$ predictions at pixel $n$. In the multi-class setting, where each anatomy is similarly important, we estimate the model uncertainty for the segmentation task as the mean of the scalar uncertainty measures over all segmentation labels. \subsection{Maximum Likelihood Sampling in Latent Space} \label{sec:latent_likelihood} Note that the above quantification of model uncertainty for an observation $x_i$ is conditioned on the annotated dataset $D_\mathrm{an}$ but not on $D_\mathrm{pool}$. The latter is ideally what is needed for good sample selection with respect to the image population. Below, we describe an approach to take into account the potential domain shift from the already-annotated to the entire dataset using unsupervised learning. The goal is to populate an image set $X_\mathrm{an}$ such that it provides a sufficiently good representative summary of $X_\mathrm{pool}$. For this purpose, consider a mapping function ${f_\mathrm{enc}:x_i \mapsto z_i}$, where each observation $x_i$ from $X_\mathrm{pool}$ is mapped onto $Z \in \mathbb{R}^{n_\mathrm{lat}}$, a continuously defined latent space with a desired probability distribution, e.g.,\, a multivariate normal. Intuitively, a batch of new queries for manual annotation from $X_\mathrm{non}$ after an active learning iteration $t$ should represent the distribution statistics of $X_\mathrm{pool}$, with an emphasis on the regions of this space that are unlikely under the distribution of $X_\mathrm{an}$. In other words, queried samples $x_i$ should not be redundant given the readily existing samples in $X_\mathrm{an}^{(t-1)}$.
Given that the mode of the latent space $Z$ will encode the most frequent attributes of $X_\mathrm{pool}$, the ideal sample $x^{*}$ can be queried as \begin{equation} x^{*} = \arg \!\!\!\!\! \max_{x_i\in X_\mathrm{non}} \frac{p(z|x_i, X_\mathrm{pool})}{p(z|x_i, X_\mathrm{an})} \ . \label{eqn:goal_in_real_life} \end{equation} Over the iterations, samples queried based on $x^{*}$ will align the posteriors $p(z|X_\mathrm{an})$ and $p(z|X_\mathrm{pool})$, making representations of observations from $X_\mathrm{an}$ cover both the breadth and the mode of $X_\mathrm{pool}$, hence achieving the desired objective. To compute Eq.\,(\ref{eqn:goal_in_real_life}), we utilize Bayesian inference as \begin{equation} p(z|x_i, X) = \frac{p(x_i, X|z)p(z)}{p(x_i, X)} = \frac{p(x_i|X,z)p(X|z)p(z)}{p(x_i|X)p(X)} \text{ .} \label{eqn:bayes_step1} \end{equation} The right-hand side contains the equivalent of the posterior $p(z|X)$, allowing for the simpler representation \begin{equation} p(z|x_i, X) = \frac{p(x_i|X,z)p(z|X)}{p(x_i|X)} \propto p(x_i|X,z)p(z|X) \text{ .} \label{eqn:bayes_step2} \end{equation} In order to approximate $f_\mathrm{enc}$, we train an infoVAE~\cite{zhao2017infovae} with the complete pool of images $X_\mathrm{pool}$ using MMD for latent space regularization as $L_\mathrm{infoVAE} = L_\mathrm{AE} + L_\mathrm{MMD}$, where \begin{equation} \begin{split} L_\mathrm{MMD}(q||p) = & \mathbb{E}_{z\sim q, z' \sim q}[k(z,z')] + \mathbb{E}_{z\sim p, z' \sim p}[k(z,z')]\\ & - 2 \mathbb{E}_{z\sim q, z' \sim p}[k(z,z')] \text{ ,} \end{split} \end{equation} $p$ is the prior, $q$ is the posterior inference in the latent space via the encoder, and $k(z,z')$ is the distance in a kernel space. We choose $p(z)$ to be a standard normal distribution and use a Gaussian kernel mapping $k(z,z') = \exp(-||z-z'||^2/2\sigma^2)$, where $\sigma$$=$$1$.
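For concreteness, the finite-sample estimate of $L_\mathrm{MMD}$ above can be sketched in numpy as follows (function names are ours, and we assume the squared-norm form of the Gaussian kernel):

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    """k(z, z') for all row pairs of a (n, d) and b (m, d)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd_loss(z_q, z_p, sigma=1.0):
    """Finite-sample MMD between encoder samples z_q ~ q and prior
    samples z_p ~ p, mirroring the three expectation terms above."""
    k_qq = gaussian_kernel(z_q, z_q, sigma).mean()
    k_pp = gaussian_kernel(z_p, z_p, sigma).mean()
    k_qp = gaussian_kernel(z_q, z_p, sigma).mean()
    return k_qq + k_pp - 2.0 * k_qp
```

The loss vanishes when the two sample sets coincide and grows as their distributions separate.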
Thereafter, the true posterior inference $p(z|x)$ is approximated with $q_\phi(z|x)$, where $\phi$ is the learned parameter set of the infoVAE encoder. Hence, we can approximate Eq.\,(\ref{eqn:goal_in_real_life}) as \begin{equation} x^{*} \approx \arg \!\!\!\!\! \max_{x_i\in X_\mathrm{non}} \!\!\! [ \log(q_\phi(z|x_i, X_\mathrm{pool})) - \log(q_\phi(z|x_i, X_\mathrm{an}))] \label{eqn:log_bayes_query_numeric} \end{equation} and Eq.\,(\ref{eqn:bayes_step2}) as $q_\phi(x_i|X,z)q_\phi(z|X)$. Accordingly, we project samples from (i) $X_\mathrm{an}$ and (ii) $X_\mathrm{pool}$ onto the latent space of the infoVAE. Next, to compute $q_\phi(z|X)$, we fit a multivariate diagonal Gaussian to each of the two projections separately. Finally, we estimate the likelihoods $q_\phi(z|x_i, X_\mathrm{an})$ and $q_\phi(z|x_i, X_\mathrm{pool})$ using the error function $\mathrm{erf}(x)=\frac{1}{\sqrt{\pi}} \int_{-x}^{x}\exp{(-t^2)}dt$, as follows: \begin{equation} q_\phi(z|x, X) \approx 1 + \mathrm{erf}\left(-\frac{|z - \mu_{X}|}{\sigma_{X}\sqrt{2}}\right) \text{ , } \end{equation} where $z$ is the latent projection of $x$, and $\mu_{X}$ and $\sigma_{X}$ are the parameters of the fitted Gaussians. In other words, we use the first half of the cumulative distribution function of the fitted Gaussian, since it is symmetric around its expected value $\mu_{X}$. Fig.~\ref{fig:bayesian_querying} illustrates this for a toy example, where $x^{*}$ would be selected by maximizing the likelihood of being sampled from the non-normalized distribution shown as the dashed red Gaussian. Additional experiments corroborating our intuition are provided in the Appendix. \begin{figure} \centering \begin{subfigure}[b]{0.999\linewidth} \centering \includegraphics[width=0.999\linewidth, trim= 0cm 0.0cm 0cm 0cm, clip]{figs/al_bsq_toy1_1.png} \end{subfigure}% \caption{Toy example for Bayesian sample querying.
Solid curves: Probability density function (pdf) of $q_\phi(z|X_\mathrm{pool})$ (blue) with $\mu$$=$$0$, $\sigma$$=$$1$ along with the pdf of $q_\phi(z|X_\mathrm{an})$ with hypothetical $\mu$$=$$-1$, $\sigma$$=$$1.5$ (orange). Dashed red curve: non-normalized pdf of underrepresented samples in $X_\mathrm{an}$ given by the ratio $q_\phi(z|X_\mathrm{pool}) / q_\phi(z|X_\mathrm{an})$.} \label{fig:bayesian_querying} \end{figure} \subsection{Comparative Evaluation} We define 5 methods for analysis and comparison: \noindent $\rightarrow \mathrm{FCN}_\mathrm{Random}$; a simplistic baseline approach of randomly selecting the samples to annotate; i.e.,\, random querying of $n_\mathrm{rep}$ samples. \noindent $\rightarrow \mathrm{FCN}_\mathrm{Uncertainty}$; the most uncertain $n_\mathrm{rep}$ samples based on Sec.~\ref{sec:uncertainty} are queried in each active learning iteration. \noindent $\rightarrow \mathrm{FCN}_\mathrm{Baseline}$; a baseline similar to~\cite{yang2017suggestive}, with the main difference being additional spatial dropout layers in the architecture, and using the uncertainty metric described in Sec.~\ref{sec:uncertainty} (instead of training 3 FCNs with different bootstrapped subsets of the available $D_\mathrm{an}$ and using the variance across FCNs as in~\cite{ozdemir2018active}). Consequently, the computational cost is reduced to a third and the entire $D_\mathrm{an}$ is observed by the trained model. To be precise, first a set $S_\mathrm{unc}$ of the $n_\mathrm{unc}$ most uncertain elements from the non-annotated dataset $X_\mathrm{non}$ is selected. Next, the \textit{image descriptor} $x^\mathrm{id} \in \mathbb{R}^{n_\mathrm{abs}}$ of each sample in $X_\mathrm{non}$ is computed as global average pooling applied to the coarsest-layer activations, where $n_\mathrm{abs}$ is the number of feature channels of the corresponding layer.
The representativeness metric can then be computed using the similarity measure \begin{equation} d_\mathrm{sim}(x_i, x_j) = \cos\left(x_i^\mathrm{id}, x_j^\mathrm{id}\right) \end{equation} between the two $x^\mathrm{id}$ vectors of any two images $x_i$ and $x_j$. In an iterative manner, we populate a representative sample set $S_\mathrm{rep} \subset S_\mathrm{unc}$ by adding the currently most representative sample $x^*_\mathrm{rep}$ via~\cite{yang2017suggestive} \begin{equation} x^*_\mathrm{rep} = \arg \max_{x_j \in S_\mathrm{unc} \setminus S_\textrm{rep}} \sum_{x_i \in X_\mathrm{non}} d_{\mathrm{sim}}(x_i, x_j \cup S_\textrm{rep}) \text{ .} \label{eqn:representativeness_metric} \end{equation} This greedily maximizes the maximum set-cover objective~\cite{Feige1998a} on $X_\mathrm{non}$ based on the $d_\mathrm{sim}$ metric. \noindent $\rightarrow \mathrm{FCN}_\mathrm{BSQ}$; \textit{Bayesian sample querying}, our proposed method, selects samples to be annotated based on the intersection of the most uncertain (Sec.~\ref{sec:uncertainty}) and most representative samples following Eq.\,(\ref{eqn:log_bayes_query_numeric}). Specifically, we first select the most uncertain samples from $X_\mathrm{non}$. Then, we form $S_\mathrm{rep} \subset S_\mathrm{unc}$ following Eq.\,(\ref{eqn:log_bayes_query_numeric}) with $n_\mathrm{rep}$ samples to be queried for annotation in the next active learning iteration. \noindent $\rightarrow \mathrm{FCN}_\mathrm{Upperbound}$; the upper bound used as a reference in our quantitative analysis. The upper bound uses the same segmentation architecture as the compared methods above, but is trained on the complete $D_\mathrm{pool}$ in a supervised setting; i.e.,\, assuming we already know all annotations at each sample query iteration.
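The greedy selection of Eq.\,(\ref{eqn:representativeness_metric}) can be sketched as follows. All names are ours, the descriptor computation is omitted, and we interpret $d_\mathrm{sim}(x_i, x_j \cup S_\mathrm{rep})$ as the maximum similarity of $x_i$ to any member of the candidate set, following the set-cover reading of~\cite{yang2017suggestive}:

```python
import numpy as np

def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def greedy_representatives(descriptors, S_unc, n_rep):
    """descriptors: (n, d) image descriptors x_id of all samples in X_non;
    S_unc: candidate indices (the most uncertain samples); returns the
    indices of S_rep chosen greedily by the set-cover objective."""
    S_rep = []
    for _ in range(n_rep):
        best, best_score = None, -float("inf")
        for j in S_unc:
            if j in S_rep:
                continue
            cand = S_rep + [j]
            # each x_i in X_non is "covered" by its most similar candidate
            score = sum(max(cosine_sim(descriptors[i], descriptors[c])
                            for c in cand)
                        for i in range(len(descriptors)))
            if score > best_score:
                best, best_score = j, score
        S_rep.append(best)
    return S_rep
```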
\subsection{Implementation} \label{sec:implementation} For all compared methods, we used a modified DCAN architecture~\cite{ozdemir2018active} trained on 2D image input for the segmentation network, using an inverse-frequency-weighted cross entropy loss. Horizontal-flip data augmentation was randomly applied to images with 0.5 uniform probability during training. For training, the Adam optimizer was used with a learning rate of $5\times 10^{-4}$ and a mini-batch size of 8 images. For both training and inference, the dropout rate was set to 0.5, with $n_\mathrm{MC}$$=$$17$ MC samples. At each active learning iteration, including the initial training, models were trained for 8000 steps. We trained an infoVAE with 5 convolutional blocks in both the encoder and the decoder on downsampled images of size $96\times 96$, and we set the dimensionality of the latent space to $n_\mathrm{lat}$$=$$200$. For infoVAE training, the Adam optimizer with a learning rate of $5 \times 10^{-5}$ and a mini-batch size of 32 images was used. Image-wise normalization was applied as preprocessing for the infoVAE training. We used the $l_2$-norm for the reconstruction loss $L_\mathrm{AE}$. The methods were implemented and tested with the Tensorflow library on a cluster of NVIDIA Titan X GPUs. \section{Experiments} \label{sec:Experiments} \begin{table} \centering \caption{The dataset consists of 2 different acquisition settings, which are merged together as shown in the 3rd row.
The depth resolution $\mathfrak{D}$ is 64 or 56, depending on the acquisition setting.} \small \label{tbl:dataset} \setlength\tabcolsep{5.pt} \begin{tabular}{r|r|r|r} \multicolumn{1}{c|}{Setting} & \multicolumn{1}{c|}{\#volumes} & \multicolumn{1}{c|}{vox res.\,{[}mm{]}} & \multicolumn{1}{c}{digital res.\,{[}px{]}} \\ \hline \#1 & 20 & 0.91 x 0.91 x 3.0 & 192 x 192 x 64 \\ \#2 & 16 & 0.83 x 0.83 x 3.0 & 144 x 144 x 56 \\ \hline Total & 36 & 0.91 x 0.91 x 3.0 & 192 x 192 x $\mathfrak{D}$ \end{tabular} \end{table} \noindent \textbf{Dataset.} We conducted experiments on a magnetic resonance imaging (MRI) dataset of 36 shoulders acquired with a Dixon sequence under two slightly varying acquisition settings, resulting in the specifications shown in Table~\ref{tbl:dataset}. For a more uniform dataset, images of the higher-resolution setting \#2 were bilinearly interpolated to match the voxel resolution of the coarser setting \#1, and then zero-padded to match the digital resolution of the images of setting \#1. The data has expert annotations of two bones (humerus \& scapula) and two muscle groups (supraspinatus \& infraspinatus + teres minor). A cross-sectional view of two subjects along with the superimposed expert annotations is shown in Fig.~\ref{fig:data_samples}. Ground truth annotations of setting \#2 were resized to match setting \#1 using nearest neighbor interpolation. All experiments were conducted on the \textit{Total} dataset listed in Table~\ref{tbl:dataset}.
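The zero-padding step of this harmonization can be sketched as follows (the preceding bilinear resampling is omitted, and the helper name and symmetric padding scheme are our illustration):

```python
import numpy as np

def pad_to(img_slice, target_hw=(192, 192)):
    """Zero-pad a 2D slice symmetrically so that a resampled setting-#2
    image matches the 192x192 in-plane digital resolution of setting #1."""
    h, w = img_slice.shape
    th, tw = target_hw
    ph, pw = th - h, tw - w  # total padding per axis
    return np.pad(img_slice,
                  ((ph // 2, ph - ph // 2), (pw // 2, pw - pw // 2)))
```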
\begin{figure} \centering \begin{subfigure}[b]{0.49\linewidth} \centering \includegraphics[width=0.999\textwidth, trim= 0cm 0.0cm 0cm 0cm, clip]{figs/308794.png}% \label{fig:data_samples:1} \end{subfigure}% \hspace{0.1em}% \begin{subfigure}[b]{0.49\linewidth} \centering \includegraphics[width=0.999\textwidth, trim= 0cm 0.0cm 0cm 0.3cm, clip]{figs/730093.png}% \label{fig:data_samples:2} \end{subfigure}% \caption{Cross-sectional view of 2 sample subject volumes along with expert annotations of humerus (red), scapula (blue), supraspinatus (yellow), and infraspinatus together with teres minor (green). } \label{fig:data_samples} \end{figure} \vspace{1ex}\noindent \textbf{Evaluation Metrics.} For quantitative results, we evaluated the Dice coefficient and the mean surface distance (MSD), two commonly used metrics in medical image segmentation. The Dice score is $\mathrm{Dice}(M_S^l, M_G^l) = 2 |M_S^l \cap M_G^l| / (|M_S^l| + |M_G^l|)$, where $M_S^l$ is the binary predicted segmentation mask and $M_G^l$ the ground truth mask for label $l$. MSD is computed symmetrically between the contours of the segmentation prediction ($C_S$) and the ground truth ($C_G$) for each label $l$ as \begin{equation} \mathrm{MSD}(C_S^l,C_G^l) = \frac{\sum_{\mathrm{p} \in C_S^l} d(\mathrm{p}, C_G^l) + \sum_{\mathrm{p}' \in C_G^l} d(\mathrm{p}', C_S^l)}{|C_S^l| + |C_G^l|} \end{equation} where $d(\mathrm{p},C)$ is the Euclidean distance from point $\mathrm{p}$ to the closest point on contour $C$. To compute the contour of a binary mask, we subtract its morphologically eroded version from itself using an erosion kernel of ${3\times3\times3}$. Average Dice and MSD scores over the four given anatomical structures of interest are reported herein. \vspace{1ex}\noindent \textbf{Experimental Setup.} Typically, expert annotations on MR volumes are conducted for all image slices of a volume at once, when the volume is fetched manually from the PACS.
However, this may lead to suboptimal use of limited annotation resources due to the redundancy of annotating potentially similar images within a volume. A PACS-compatible software can indeed fetch only the desired slices (2D images) from various volumes for annotation. Therefore, we conducted experiments for both \textit{slice}-based and \textit{volume}-based active learning. The former assumes the feasibility of random slice access and annotation query within $X_\mathrm{pool}$, whereas the latter treats each subject volume as an indivisible entity. In an effort to efficiently utilize the available dataset, we generated 5 holdout sets using a pseudo-random number generator, where dataset splits were performed to roughly respect a $|D_\mathrm{pool}|$/validation/test ratio of $70\%/5\%/25\%$, with each subject being strictly in a single set. This yields the following numbers of subjects: 25/2/9. Then, slices of roughly one volume (i.e.,\, 64 slices for slice-based and all slices of a single subject for volume-based experiments) were randomly picked for each holdout set to define the initial training set $D_\mathrm{an}^{(0)}$, and this initial set was kept constant across tests of different methods to ensure comparability. \section{Results} \noindent \textbf{2D Image Slices.} \label{sec:slice_based} All slice-based experiments were initially trained on 64 slices. For every active learning iteration, $n_\mathrm{unc}=64$ and $n_\mathrm{rep} = 32$ are used. In Fig.~\ref{fig:scores_slice}, we show the Dice score and MSD of the different methods over active learning iterations, evaluated on the test set over 11 iterations, representing annotations from $4\%$ up to $27\%$ of the complete set $D_\mathrm{pool}$.
\begin{figure} \centering \begin{subfigure}[b]{0.509\linewidth} \centering \includegraphics[width=\textwidth, trim= 0cm 0.0cm 1cm 0.7cm, clip]{figs/dice_slice_2.png}% \label{fig:scores_slice:dice} \end{subfigure}\hfill \begin{subfigure}[b]{0.489\linewidth} \centering \includegraphics[width=\textwidth, trim= 0cm 0.0cm 1cm 0.7cm, clip]{figs/assd_slice_2.png}% \label{fig:scores_slice:assd} \end{subfigure}% \caption{Average Dice score and mean surface distance (MSD) results of 2D image slice experiments for the compared methods. Upper bound marks the average performance when trained on the entire $D_\mathrm{pool}$, i.e. the images of all 25 training volumes. } \label{fig:scores_slice} \end{figure} One can see that all compared methods achieve higher segmentation performance than randomly querying samples ($\mathrm{FCN}_\mathrm{Random}$). While the holdout-set averages of Dice and MSD of $\mathrm{FCN}_\mathrm{Uncertainty}$ and $\mathrm{FCN}_\mathrm{Baseline}$ sometimes intersect, our proposed method ($\mathrm{FCN}_\mathrm{BSQ}$) clearly outperforms all compared methods, shown as the purple curve in Fig.~\ref{fig:scores_slice}. To highlight the improvement that our proposed method brings over the baseline, we also present in Fig.~\ref{fig:dice_differences} the Dice score difference of the two methods with the highest quantitative performance, i.e.,\, $\mathrm{FCN}_\mathrm{BSQ}$ and $\mathrm{FCN}_\mathrm{Baseline}$.
\begin{figure} \centering \begin{subfigure}[b]{0.499\linewidth} \centering \includegraphics[width=\textwidth, trim= 0cm 0.0cm 1cm 0.7cm, clip]{figs/dice_slice_BSQ_baseline_boxplot_3.png}% \caption{2D Image Slices} \label{fig:dice_differences:slice} \end{subfigure}\hfill \begin{subfigure}[b]{0.499\linewidth} \centering \includegraphics[width=\textwidth, trim= 0cm 0.0cm 1cm 0.7cm, clip]{figs/dice_vol_BSQ_baseline_boxplot_3.png}% \caption{3D Image Volumes} \label{fig:dice_differences:vol} \end{subfigure}% \caption{ Dice score differences between the top two competing methods, $\mathrm{FCN}_\mathrm{BSQ}$ and $\mathrm{FCN}_\mathrm{Baseline}$, represented for each holdout set as box plots of their quartiles. Red lines show the median values, the blue boxes range from the 25th to the 75th percentiles, and purple stars show the mean values. Overall positive values show the superiority of $\mathrm{FCN}_\mathrm{BSQ}$ over $\mathrm{FCN}_\mathrm{Baseline}$, especially at earlier iterations. } \label{fig:dice_differences} \end{figure} In Fig.~\ref{fig:dice_differences:slice}, it can be observed that the Dice scores of $\mathrm{FCN}_\mathrm{BSQ}$, averaged over the holdout sets, are strictly superior to those of $\mathrm{FCN}_\mathrm{Baseline}$ at each presented iteration. In Table~\ref{tbl:dice_diff_slice}, we list the mean and standard deviation of the Dice score differences from the upper bound at different active learning iterations for the top two performing methods, $\mathrm{FCN}_\mathrm{BSQ}$ and $\mathrm{FCN}_\mathrm{Baseline}$. Therein, one can see the percentage of the dataset that had to be annotated for these two methods in order to reach a segmentation performance within different tolerance limits of the upper bound. \begin{table*} \centering \caption{Dice score difference of $\mathrm{FCN}_\mathrm{BSQ}$ and $\mathrm{FCN}_\mathrm{Baseline}$ from the upper bound for the corresponding holdout set. Scores are from the 2D Image Slice experiments and are presented as mean ($\pm$ standard deviation) [\%].
Lower values indicate closer performance to the upper bound.} \label{tbl:dice_diff_slice} \scriptsize \setlength\tabcolsep{3pt} \begin{tabular}{r|llllllllllll} & \multicolumn{12}{c}{Annotation $\%$} \\ $\delta(\mathrm{Dice})\ [\%]$ & \multicolumn{1}{c}{4} & \multicolumn{1}{c}{6} & \multicolumn{1}{c}{8} & \multicolumn{1}{c}{10} & \multicolumn{1}{c}{12} & \multicolumn{1}{c}{14} & \multicolumn{1}{c}{16} & \multicolumn{1}{c}{19} & \multicolumn{1}{c}{21} & \multicolumn{1}{c}{23} & \multicolumn{1}{c}{25} & \multicolumn{1}{c}{27} \\ \hline $\mathrm{FCN}_\mathrm{Baseline}$ & 19.8 (4.5) & 12.2 (2.7) & 8.8 (3.2) & 5.9 (2.8) & 4.8 (2.3) & 3.6 (1.8) & 3.1 (1.7) & 2.6 (1.6) & 1.9 (1.5) & 1.5 (1.6) & 2.7 (1.3) & 1.1 (1.6) \\ $\mathrm{FCN}_\mathrm{BSQ}$ & 18.5 (5.3) & 10.6 (2.6) & 7.4 (2.0) & 5.3 (1.7) & 3.7 (1.4) & 2.8 (1.3) & 2.5 (1.3) & 2.0 (1.2) & 1.5 (1.2) & 1.2 (1.1) & 1.2 (1.1) & 0.7 (1.3) \end{tabular} \end{table*} \vspace{1ex}\noindent \textbf{3D Image Volumes.} \label{sec:volume_based} In these experiments, the networks were initially trained on the slices of a single random subject ($D_\mathrm{an}^{(0)}$), and at each active learning iteration the respective scores of each method are aggregated over a complete subject volume. For $\mathrm{FCN}_\mathrm{BSQ}$ and $\mathrm{FCN}_\mathrm{Baseline}$, the set sizes of $S_\mathrm{unc}$ and $S_\mathrm{rep}$ are fixed to $n_\mathrm{unc}=2$ volumes and $n_\mathrm{rep} = 1$ volume. Dice score and MSD of the compared methods for the volume-based experiments are shown in Fig.~\ref{fig:scores_vol}. Segmentation performance is evaluated at every active learning iteration, for a total of 11 iterations, using the same test set that was used for the slice-based experiments.
In the volume-based experiment results, where the annotations of entire volumes are added at each active learning iteration, the advantage of the compared methods appears more subtle, due to the larger range of Dice scores; e.g.,\, Dice scores ranging approximately from 0.3 to 0.9, as opposed to 0.65 to 0.9 in Fig.~\ref{fig:scores_slice}. The Dice score improvement of $\mathrm{FCN}_\mathrm{BSQ}$ over $\mathrm{FCN}_\mathrm{Baseline}$ can be seen in Fig.~\ref{fig:dice_differences:vol}, where we show their Dice score differences for each holdout set as boxplots. One can see that $\mathrm{FCN}_\mathrm{BSQ}$ has an improved average Dice score over $\mathrm{FCN}_\mathrm{Baseline}$ at every evaluation point (cf.~Fig.~\ref{fig:dice_differences:vol}, purple stars). In order to gain a precise understanding of the Dice score gap of the two competing methods from the upper bound, we present the mean and standard deviation of the Dice score differences of $\mathrm{FCN}_\mathrm{BSQ}$ and $\mathrm{FCN}_\mathrm{Baseline}$ in Table~\ref{tbl:dice_diff_vol}. \begin{figure} \centering \begin{subfigure}[b]{0.494\linewidth} \centering \includegraphics[width=\textwidth, trim= 0cm 0.0cm 1cm 0.7cm, clip]{figs/dice_vol_2.png}% \label{fig:scores_vol:dice} \end{subfigure}\hfill \begin{subfigure}[b]{0.504\linewidth} \centering \includegraphics[width=\textwidth, trim= 0cm 0.0cm 1.2cm 0.8cm, clip]{figs/assd_vol_2.png}% \label{fig:scores_vol:assd} \end{subfigure}% \caption{Average Dice score and mean surface distance (MSD) results of 3D volume experiments for the compared methods. Upper bound marks the average performance when trained on all 25 training volumes. } \label{fig:scores_vol} \end{figure} \begin{table*} \centering \caption{Dice score difference of $\mathrm{FCN}_\mathrm{BSQ}$ and $\mathrm{FCN}_\mathrm{Baseline}$ from the upper bound for the corresponding holdout set. Scores are from the 3D Image Volume experiments and are presented as mean ($\pm$ standard deviation) [\%].
Lower values indicate closer performance to the upper bound.} \label{tbl:dice_diff_vol} \scriptsize \setlength\tabcolsep{2.5pt} \begin{tabular}{r|llllllllllll} & \multicolumn{12}{c}{\#Annotated volumes} \\ $\delta(\mathrm{Dice})\ [\%]$ & \multicolumn{1}{c}{1} & \multicolumn{1}{c}{2} & \multicolumn{1}{c}{3} & \multicolumn{1}{c}{4} & \multicolumn{1}{c}{5} & \multicolumn{1}{c}{6} & \multicolumn{1}{c}{7} & \multicolumn{1}{c}{8} & \multicolumn{1}{c}{9} & \multicolumn{1}{c}{10} & \multicolumn{1}{c}{11} & \multicolumn{1}{c}{12} \\ \hline $\mathrm{FCN}_\mathrm{Baseline}$ & 55.9 (6.0) & 37.4 (4.1) & 27.4 (5.5) & 18.7 (8.7) & 14.5 (7.3) & 12.3 (5.1) & 10.5 (5.3) & 8.7 (2.8) & 6.3 (2.1) & 5.2 (1.8) & 5.6 (1.4) & 4.0 (1.4) \\ $\mathrm{FCN}_\mathrm{BSQ}$ & 52.4 (5.0) & 34.5 (2.1) & 25.2 (6.4) & 17.2 (4.1) & 12.6 (3.7) & 9.5 (4.1) & 7.5 (2.4) & 5.4 (1.9) & 4.9 (1.8) & 3.7 (1.6) & 3.2 (1.5) & 2.8 (0.9) \end{tabular} \end{table*} \section{Discussion} \label{sec:discussions} Our preliminary experiments comparing a standard VAE to the infoVAE corroborate the claims in~\cite{zhao2017infovae} that the variance of the latent space is overestimated. Furthermore, active learning of segmentation through Bayesian sample querying using the above-mentioned VAE network trained on $X_\mathrm{pool}$ showed lower performance compared to $\mathrm{FCN}_\mathrm{BSQ}$. Since we are strictly interested in the representational power of the latent variables for a given image, the poorer performance on active learning evaluations indirectly supports the claim of ``learning un-informative latent variables'' when using the KL divergence~\cite{zhao2017infovae}. The advantage of $\mathrm{FCN}_\mathrm{Baseline}$ over $\mathrm{FCN}_\mathrm{Uncertainty}$ only becomes evident after a sufficiently large $D_\mathrm{an}$ is reached ($\sim$$10\%$ in Fig.~\ref{fig:scores_slice}).
One can draw a similar conclusion from Fig.~\ref{fig:dice_differences:slice}, where the superiority of our proposed method is most prominent in the early iterations of active learning and decreases almost monotonically over time. This is in line with our previous findings in~\cite{ozdemir2018active} that an image-descriptor-based representativeness metric for $S_\mathrm{rep}$ may be redundant, if not adverse, until an adequate portion of the complete set is annotated. All methods in the 3D volume experiment approach the upper bound at a slower rate compared to the slice-based experiment (cf. Tables~\ref{tbl:dice_diff_slice}~\&~\ref{tbl:dice_diff_vol}). This can be due to having fewer options to select from (i.e.,\, a total of 24 volumes at the first active learning iteration) when compared to the slice-based setting. This hypothesis would be consistent with the reasonable expectation that certain slices carry significant importance for the segmentation task while others (e.g.,\, at the borders of the field-of-view) are less important in a given volume, whereas when a volume is given as a whole, the utilizable information therein is more uniform. Another point of interest is that after 6 volumes, $\mathrm{FCN}_\mathrm{Uncertainty}$ achieves performance closer to $\mathrm{FCN}_\mathrm{BSQ}$. This can possibly be due to the fact that our dataset consists of 2 different settings (cf.\ Table~\ref{tbl:dataset}): $\mathrm{FCN}_\mathrm{BSQ}$ may have queried key sample volumes from both settings early on to represent the Total dataset, while $\mathrm{FCN}_\mathrm{Uncertainty}$ may have seen key samples only after roughly 5 iterations of active learning. Another explanation can come from the design choice of assigning $n_\mathrm{unc}=2$ volumes and $n_\mathrm{rep} = 1$ volume, heavily restricting the sequential representativeness metric to picking one of the two options.
Upon comparing the slice-based and volume-based experimental setups, one can see the importance of querying slices as opposed to full volumes (e.g.,\, the Dice score gap from the upper bound in early iterations of active learning in Tables~\ref{tbl:dice_diff_slice}~\&~\ref{tbl:dice_diff_vol}), i.e.,\, achieving better outcomes with less effort from the experts. Furthermore, a Dice score gap of approximately $5\%$ from the upper bound is achieved with $\mathrm{FCN}_\mathrm{BSQ}$ only after 8 volumes ($32\%$ of $D_\mathrm{pool}$) in the volume-based experiments, whereas a similar score is reached with as little as $10\%$ annotation in the slice-based experiments. In the slice-based experiments, this gap drops to $2\%$ when less than a fifth ($19\%$) of the images are annotated; this amounts to only $\sim$$285$ 2D image annotations yielding a sufficiently high performance, compared to the $\sim$$1500$ annotations required for the upper bound scenario. \section{Conclusions} \label{sec:conclusions} In this work, we have proposed a novel method to quantify the representativeness of a sample from a large unsupervised dataset using Bayesian inference in the latent space of MMD VAEs. We have shown that, by using a learned mapping function onto a simple latent space and a sample selection that aligns probability distributions in this space, the representational power of a subset of samples approaches that of the complete set, for the complex case of MR imaging. Our results support the proposed approach as a suitable candidate for sample querying for the segmentation task in active learning. Although our experimental dataset already harbors domain variation from two different acquisition settings, additional diversity is common in the clinical setting. Consequently, the advantage of our proposed sample-picking approach is expected to be even more pronounced there, achieving a good coverage of the complete pool of images with only a few active learning iterations and annotations.
The main hypothesis herein is that a dataset can be represented in the latent space as a continuous, Gaussian distribution. Future work shall investigate other means of dataset representation. \begingroup \let\thefootnote\relax\footnotetext{This work was funded by the Swiss National Science Foundation (SNSF) and a Highly Specialized Medicine (HSM2) grant of the Canton of Zurich. We thank NVIDIA for their GPU support.} \endgroup \bibliographystyle{model2-names}
\section{Introduction} Unmanned Aerial Vehicles (UAVs), with their low cost and high mobility, are an emerging technology and have found a wide range of applications over the past few decades, including food delivery, public safety, traffic monitoring, and many others\cite{Zeng2016,Challita2019a,Yang2019,Fan2019}. As these applications require reliable control of UAVs and real-time application data transmission, there is an urgent need for a wireless technology that can guarantee ultra-reliable low latency communication (URLLC) for downlink command\&control (CC) transmission, and high throughput with low E2E delay for mission-related data transmission \cite{Azari2019}. Conventional UAV communication simply relies on short-range technologies (e.g., WiFi and Bluetooth), with a short transmission range, inefficient multi-UAV collaboration, and limited multi-UAV control. This may not be sufficient for beyond visual line-of-sight (LOS) communication needs, particularly for those applications requiring wide-area connectivity. To tackle this, cellular-connected UAV communication has been proposed to enable beyond-LOS control, low E2E delay, real-time communication, robust security, and ubiquitous coverage to support a myriad of applications ranging from real-time video streaming to surveillance. Most existing work has mainly focused on the analysis and simulation of cellular-connected UAVs \cite{lin2018,Azari2019,Zeng2019}. In \cite{Azari2019}, a comprehensive analysis of coverage and data rate was carried out for the downlink. In \cite{lin2018}, simulation results of uplink throughput were obtained to explore the feasibility of providing LTE connectivity for UAVs. In \cite{Zeng2019}, downlink data rates with a fixed antenna pattern and with 3D beamforming were compared. However, these theoretical papers mainly focused on the physical layer without considering the content of transmission or cross-layer protocols.
Moreover, mathematical channel models cannot capture the full characteristics of a practical implementation, such as antenna angle and pattern, packet transmission frequency, and obstacles. To evaluate the E2E performance of downlink control and uplink video transmission, we develop a cellular-connected UAV testbed based on the LTE network, which consists of the physical layer, MAC layer, network layer, transport layer, and application layer, and can be easily extended to 5G. The LTE network is established via OpenAirInterface (OAI) \cite{OAI} and Software Defined Radio (SDR). Then, we develop the downlink CC transmission algorithms and real-time uplink video streaming transmission through the LTE network. We also propose methods to measure the E2E delay of the CC transmission and video transmission. Finally, we carry out indoor experiments and analyze the experimental results for various flight heights, video resolutions, and transmission frequencies. The contributions of this paper are twofold. First, to the best of the authors' knowledge, this is the first paper to build a cellular-connected UAV testbed for real-time video streaming and control signal transmission. Second, the experimental results provide insights for the design of cellular-connected UAV systems in practice as follows: \begin{itemize} \item Although a higher transmission frequency offers more precise and smoother control of the UAV-UE, a transmission frequency above a certain threshold leads to buffer overflow and a significant increase in latency, which compromises operational safety; \item Due to the mobility of the UAV-UE, the UAV network parameters, including location and channel status, change easily during missions.
Therefore, it is crucial for the UAV system to dynamically adjust the frames per second (FPS) to maintain continuity of the video transmission; \item As the antennas of the base station (BS) are tilted downward toward ground users, they provide limited gain for the UAV-UE and result in lower throughput, especially when the UAV-UE passes over the top of the BS. Therefore, link outage in the cellular-connected UAV system remains an open problem to be solved; \end{itemize} Note that all the above insights can only be obtained from our experimental tests, rather than from theoretical analysis. The rest of this work is organized as follows. Section II presents the system overview and performance metrics. Section III provides the testbed setup. Section IV presents the experimental results. Finally, Section V concludes the paper. \section{System Overview} We consider cellular-connected UAVs to support various real-time video streaming applications, such as inspection and product delivery, where the uplink and downlink are established between the UAV User Equipment (UAV-UE) and the BS for video streaming and CC transmission, respectively \cite{BorYaliniz2019}. Since ultra-reliability and low latency are the main performance metrics for guaranteeing safe UAV control, while high throughput and low latency are important for real-time video streaming, we formulate these metrics for the uplink video transmission and the downlink CC transmission as follows: \subsubsection{E2E downlink CC delay} The E2E CC link delay $ D_{\mathrm{CC}} $ is defined from the Evolved Packet Core (EPC) to the UAV-UE, and is the sum of the transmission delay, propagation delay, processing delay, and queueing delay. To measure it, we perform clock synchronization between the EPC and the UAV-UE to eliminate clock error, which will be explained in Section III in detail.
The E2E delay of downlink CC transmission $ D_{\mathrm{CC}} $ can be expressed as \begin{equation} D_{\mathrm{CC}} = T_{\mathrm{Cr}} - T_{\mathrm{Ct}}, \label{CC link latency} \end{equation} where $ T_{\mathrm{Ct}} $ is the transmit timestamp of the CC recorded at the EPC, and $ T_{\mathrm{Cr}} $ is the receive timestamp of the CC recorded at the UAV-UE. \subsubsection{E2E uplink video delay} The E2E delay of video transmission $ D_{\mathrm{Vd}} $ represents the amount of time it takes for a single video frame to travel from the camera at the UAV-UE to the display at the EPC, which is formulated as: \begin{equation} D_{\mathrm{Vd}} = T_{\mathrm{Vr}} - T_{\mathrm{Vt}}, \label{Application link latency} \end{equation} where $ T_{\mathrm{Vt}} $ is the time when a video frame is captured by the camera, and $ T_{\mathrm{Vr}} $ is the time when that video frame is shown on the screen at the EPC. \subsubsection{Downlink CC transmission reliability} The downlink CC transmission reliability $ R_{\mathrm{CC}} $ is calculated as \begin{equation} R_{\mathrm{CC}} = \dfrac{N^{\mathrm{CC}}_{\mathrm{rece}}}{N^{\mathrm{CC}}_{\mathrm{trans}}}, \label{CC link data rate} \end{equation} where $ N^{\mathrm{CC}}_{\mathrm{rece}} $ and $N^{\mathrm{CC}}_{\mathrm{trans}}$ are the number of successfully received CC frames and the total number of transmitted CC frames, respectively. \subsubsection{Uplink video transmission throughput} The video transmission throughput $ T_{\mathrm{Vd}} $ can be calculated based on the video frame E2E delay and the video frame size, where the video frame size $ S_{\mathrm{Vd}} $ varies with the video resolution and video encoding format. Thus, the video transmission throughput $ T_{\mathrm{Vd}} $ can be evaluated using \begin{equation} T_{\mathrm{Vd}} = \dfrac{S_{\mathrm{Vd}}}{D_{\mathrm{Vd}}}.
\label{Application link data rate} \end{equation} \section{Testbed Setup} To evaluate the reliability, throughput, and E2E delay, we build a cellular-connected UAV testbed as described in this section. In detail, we first describe the hardware, including the UAV-UE and ground control station (GCS) setup, then describe the software configuration implemented based on OAI, and finally present the communication part, including the downlink CC transmission and the uplink video transmission. \subsection{Hardware}\label{AA} \subsubsection{\textbf{UAV-UE Setup}} As shown in Fig.~\ref{Hardware Setup} (a), the DJI MATRICE 100 \cite{dji2020} is selected as the UAV-UE, as it supports additional expansion bays to customize the payload and universal communication ports for connecting third-party components. The payload is fixed on top of the UAV-UE and consists of the following parts: \begin{figure}[!tb] \centerline{\includegraphics[scale=0.28]{Figure2_hardware.jpg}} \caption{Hardware setup.} \label{Hardware Setup} \end{figure} \begin{itemize} \item Raspberry Pi: A Raspberry Pi 4 is chosen as the UAV-UE onboard computer because of its light weight. It receives CC frames from the BS wirelessly and routes them to the UAV-UE through a USB-to-UART adapter. It also encodes the video and transmits the real-time video stream to the BS. \item LTE Module: To facilitate the UAV-UE communication with the BS, a Quectel EC25 LTE module is installed on the Pi through an interface bridge for LTE communication. Two LTE PCB antennas are installed on the LTE module to enable Maximum Ratio Combining (MRC) at the UAV-UE. \item Camera: The Raspberry Pi camera module is selected for video capturing due to its compatibility with the Pi and its capability of capturing high-definition video. \end{itemize} \subsubsection{\textbf{GCS Setup}} The GCS consists of the remote controller, the EPC, and the BS, as shown in Fig.~\ref{Hardware Setup} (b).
A third-party remote controller is wired to the EPC so that UAV-UE communication uses the established LTE network rather than DJI's embedded WiFi module. The EPC and the BS are installed on two different computers connected via Ethernet, and the BS is equipped with a Universal Software Radio Peripheral (USRP) as a radio frequency (RF) unit. \begin{itemize} \item USRP: A USRP B210 \cite{ETTUS2020} is selected for radio transmission and reception in our testbed. It provides a fully integrated, single-board platform with continuous frequency coverage from 70 MHz to 6 GHz. \item Antenna: Two omnidirectional antennas with 3 dBi gain, which support dual-band operation, are installed on the TX/RX ports of the USRP B210. \item PCs: The EPC is set up on a PC with an i7-7700 CPU and 16 GB memory, and the BS is set up on another PC with an i9-9900K CPU and 48 GB memory due to the heavy use of integer Single Instruction Multiple Data (SIMD) instructions. \end{itemize} \subsection{Software} We use the open-source project OAI, which provides a software EPC, BS, and UE, to implement the LTE network. In a typical EPC, the Home Subscriber Server (HSS) holds the database, which contains information related to UE authentication and access authorization. The Mobility Management Entity (MME) mainly controls mobility and access security. The Serving Gateway (S-GW) and Packet Data Network Gateway (P-GW) mainly act as interfaces that serve UEs by routing incoming and outgoing IP packets. A programmable SIM card is used in the LTE module to match the database registered in the HSS of the EPC. After correctly configuring the parameters in the SIM card, such as the security key, registration information, and corresponding gateway, the UAV-UE can attach to the configured BS and then connect with the EPC.
\begin{figure}[!htb] \centerline{\includegraphics[scale=0.6]{Figure3_system_diagram.jpg}} \caption{Diagram of cellular-connected UAV testbed.} \label{Diagram of Cellular-connected UAV Testbed} \end{figure} \subsection{Communication} Our cellular-connected UAV communication testbed is shown in Fig.~\ref{Diagram of Cellular-connected UAV Testbed}; in this part we present the downlink CC transmission and the uplink video transmission. Before establishing the links, we synchronize the time between the EPC and the UAV-UE for the E2E delay measurement. The time synchronization is achieved through the Network Time Protocol (NTP) with an NTP server query interval of 10 seconds, which keeps the time synchronization error below 1 ms. \subsubsection{\textbf{Downlink CC transmission}} The downlink CC transmission consists of two processes: sending the control signal and controlling the UAV-UE. First, the EPC encodes the CC signal from the remote controller and sends it to the UAV-UE over the established LTE network. After that, the UAV-UE decodes the CC signal and controls the UAV-UE movement through the DJI API. We present the algorithm for sending the control signal in Algorithm 1 and the algorithm for controlling the UAV-UE in Algorithm 2, and describe their details in the following. \begin{algorithm} \caption{The procedure of sending control signal} \label{control-sender} \begin{algorithmic}[1] \Procedure{control signal sending}{} \State Initialize UDP socket \State Initialize joystick \State Set destination IPv4 address \Loop \State Obtain CC parameters $roll,pitch,yaw,thrust$ \State Normalize CC parameter values \State Encode normalized CC parameters \State Send CC frame to destination \State Record frame ID and transmit time \State Wait for pre-set time \EndLoop \State Close socket \EndProcedure \end{algorithmic} \end{algorithm} Algorithm 1 begins by creating a UDP socket on the EPC to send data through.
There are two reasons to select the non-blocking UDP protocol in the cellular-connected UAV testbed: the control signal has to be sent at exact intervals without retransmission, and UDP avoids the delay introduced by handshakes in stateful protocols such as TCP. After that, we initialize the remote controller, which is a wired Xbox joystick in our implementation. After initialization, we keep monitoring the CC input from the remote controller; four movement parameters are defined to control the UAV-UE. Roll, pitch, and yaw rotate the UAV-UE in three dimensions, and thrust moves the UAV-UE up or down. The CC parameters are normalized to map into the range of the UAV-UE parameters for smooth control, and are then encoded into the predefined frame structure, which includes the four movement parameters along with the frame ID, each occupying 4 bytes. Finally, the CC frame is sent at a particular frequency, kept below 50 Hz to prevent buffer overflow on the UAV-UE side. Once a CC signal has been sent, the EPC records the corresponding frame ID and transmit time in a local file for E2E delay evaluation. \begin{algorithm} \caption{The procedure of controlling UAV-UE} \label{control-receiver} \begin{algorithmic}[1] \Procedure{Controlling UAV-UE}{} \State Initialize UDP socket \State Initialize UAV control API \State Set the control flag of the UAV \Loop \State Receive the CC frame $ data $ \If {$data$ is not $NULL$} \State Set DJI API parameters to $ data $ \Else \State Set DJI API to zeros \EndIf \State Send parameters and control flag through API \State Record frame ID and received time \State Sleep for 20 ms \EndLoop \State Close socket \EndProcedure \end{algorithmic} \end{algorithm} Algorithm 2, running at the UAV-UE, starts with the initialization of the UDP socket and the DJI control API.
We also set the control flag to indicate the intention to control the UAV-UE with the third-party remote controller. After that, the UAV-UE keeps monitoring the traffic from the UDP port and parses each CC frame to obtain the roll, pitch, yaw, thrust, and frame ID. If the obtained data is not null, the input parameters of the DJI API are set to the obtained data; otherwise, they are set to zeros. Finally, the corresponding frame ID and receive time are recorded at the UAV-UE. As suggested by DJI, the program remains idle for 20 ms and then parses the received CC frame again to achieve a 50 Hz control frequency. \subsubsection{\textbf{Uplink video transmission}} To allow real-time video streaming, an open-source WebRTC server \cite{WebRTC} is installed on the Raspberry Pi, and the EPC fetches the real-time video stream from port 8080 of the Raspberry Pi. WebRTC adopts an H.264-based codec and can adjust the FPS automatically. To measure the E2E delay, we use QR codes to carry the transmit and receive times of video frames \cite{Boyaci2009}. The E2E delay measurement scheme consists of two processes: QR-code generation and QR-code recognition. During the QR-code generation process, a program running at the EPC encodes the local time into a QR code at a 60 Hz refresh rate. The generated QR code is shown on the right side of the screen at the EPC and captured by the UAV-UE's camera. During the QR-code recognition process, the EPC receives the video frame carrying the transmit time from the UAV-UE and displays it on the left side of the screen. Another program running at the EPC takes a screenshot at 60 Hz and recognizes the QR codes on both sides to obtain the transmit time and the receive time. Finally, the E2E delay can be calculated based on \eqref{Application link latency}.
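As a concrete sketch of the frame handling in Algorithms 1 and 2, the encode and parse steps can be written in a few lines of Python. The 20-byte layout (four 4-byte floats for roll, pitch, yaw, and thrust, plus a 4-byte frame ID) follows the frame structure described above, but the helper names and the clamping range are our own illustrative assumptions, and the UDP socket I/O and DJI API calls are omitted.

```python
# Illustrative sketch of CC frame encoding (Algorithm 1) and parsing
# (Algorithm 2); names and the [-1, 1] range are assumptions.
import struct

FRAME_FMT = "<4fI"  # roll, pitch, yaw, thrust (4 bytes each) + frame ID

def normalize(value):
    """Clamp a raw joystick reading into the UAV-UE parameter range."""
    return max(-1.0, min(1.0, value))

def encode_cc_frame(roll, pitch, yaw, thrust, frame_id):
    """Algorithm 1: normalize the movement parameters and pack a frame."""
    params = [normalize(v) for v in (roll, pitch, yaw, thrust)]
    return struct.pack(FRAME_FMT, *params, frame_id)

def decode_cc_frame(data):
    """Algorithm 2: parse a frame; fall back to zeros on an empty read."""
    if not data:
        return (0.0, 0.0, 0.0, 0.0, None)  # fail-safe zero inputs
    return struct.unpack(FRAME_FMT, data)
```

In the testbed loop, a frame built by `encode_cc_frame` would be sent over the non-blocking UDP socket, and a `None` frame ID on the receive side corresponds to the zero-input branch of Algorithm 2.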
\section{Experiments} In this section, we carry out experiments to evaluate the throughput, reliability, and E2E delay of the uplink video transmission and the downlink CC transmission under various UAV-UE flight heights, CC transmission frequencies, and video resolutions. Due to the strict restrictions of the Federal Aviation Administration (FAA), we conduct indoor experiments for three UAV-UE flight heights $ h = $ 0 m, 1 m, and 2 m, where $h = 0$ m represents a stationary UAV-UE on the ground. However, with the algorithms and practical implementations developed for this real-time cellular-connected UAV testbed, the experiments can be easily extended to multi-UAV scenarios and greater heights once regulatory approval for outdoor flight tests is obtained. In all figures of this section, we use ``Ave'' to abbreviate ``Average''. \begin{table}[!htb] \caption{Bandwidth and round-trip delay of the LTE network} \begin{center} \begin{tabular}{|c|c|c|} \hline & 25 PRB & 50 PRB \\ \hline Uplink bandwidth (Mb/s)& 8.78 & 18.77 \\ \hline Downlink bandwidth (Mb/s)& 16.57 & 34.3 \\ \hline Round-trip delay (ms)& 27.66 & 29 \\ \hline \end{tabular} \label{Performance of the the established LTE network} \end{center} \end{table} We first use iperf \cite{iperf} and ping to measure the bandwidth and round-trip delay of the established LTE network; the detailed results under 25 physical resource blocks (PRB) and 50 PRB are given in Table~\ref{Performance of the the established LTE network}. Both the uplink and downlink bandwidth with 50 PRB are more than twice those with 25 PRB, while the round-trip delay is almost the same. Although 50 PRB can support higher bandwidth, its uplink and downlink are not stable compared to the 25 PRB LTE network in OAI. Therefore, an LTE network with 25 PRB is established for the following CC and video transmission experiments.
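The delay and reliability statistics reported in this section reduce to simple post-processing of the (frame ID, timestamp) logs recorded at the EPC and the UAV-UE, per \eqref{CC link latency} and \eqref{CC link data rate}. A hypothetical sketch (the function and log names are ours, not the testbed's actual scripts):

```python
# Hypothetical post-processing of the CC logs: per-frame E2E delay
# D_CC = T_Cr - T_Ct, plus average/max/min delay and reliability R_CC.

def cc_statistics(tx_log, rx_log):
    """tx_log/rx_log: dicts mapping frame ID -> timestamp (seconds)."""
    # A frame missing from rx_log was lost, so it only affects R_CC.
    delays = [rx_log[i] - tx_log[i] for i in tx_log if i in rx_log]
    return {
        "avg_delay": sum(delays) / len(delays),
        "max_delay": max(delays),
        "min_delay": min(delays),
        "reliability": len(delays) / len(tx_log),  # R_CC
    }
```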
\begin{figure}[!tb] \centerline{\includegraphics[scale=0.32]{new_control_latency-eps-converted-to.pdf}} \caption{E2E delay and reliability of CC transmission.} \label{Control latency} \end{figure} Fig.~\ref{Control latency} plots the E2E CC delay and reliability versus transmission frequency for various flight heights, where the delay represents the elapsed time from the generation of a CC packet at the EPC to its reception at the UAV-UE. We set the control signal transmission frequency from 10 Hz to 70 Hz and calculate the average, maximum, and minimum E2E delay over $ 10^{4} $ control signal transmissions. We can see that the E2E CC delay and reliability are almost the same for flight heights of 0 m, 1 m, and 2 m. For a given flight height, the E2E CC delay remains around 20 ms from 10 Hz to 40 Hz, and the corresponding reliability reaches 100\%. Surprisingly, the E2E CC delay increases rapidly to 1355 ms beyond 40 Hz, along with a significant decrease in reliability. This is because the CC transmission frequency exceeds the CC receiving frequency, as shown in Algorithm 2, leading to buffer overflow at the UAV-UE. We also observe a tradeoff between reliability and latency across low and high transmission frequencies. \begin{figure}[!h] \centerline{\includegraphics[scale=0.29]{video_result-eps-converted-to}} \caption{E2E video transmission performance.} \label{Video Streaming Latency} \end{figure} Fig.~\ref{Video Streaming Latency} plots the E2E real-time video delay and throughput versus the UAV-UE flight height for various video resolutions, namely $320\times240$, $640\times480$, and $1280\times720$. The E2E delay of video transmission represents the elapsed time from capturing an image at the UAV-UE to displaying it at the EPC. For the stationary UAV-UE ($h = 0$ m), the average E2E delay increases with resolution due to the larger data size and longer encoding/decoding time.
We can see that the average throughput of the $320\times240$ video is 7.08 Mb/s, while the average throughput of the $640\times480$ and $1280\times720$ videos reaches 8.6 Mb/s, which is limited by the uplink bandwidth in Table~\ref{Performance of the the established LTE network}. For the flying UAV-UE ($h = $ 1 m, 2 m), the average E2E delay depends more on the network condition than on the video resolution due to the UAV-UE's mobility. Since WebRTC automatically adjusts the FPS of the video based on the network condition, the average E2E delay of the $640\times480$ video transmission at $ h = $ 1 m is the lowest because of a lower FPS, which is consistent with its throughput performance. Overall, the UAV-UE flying at 2 m has a lower E2E video delay and higher throughput than the UAV-UE flying at 1 m because of a stronger LOS link. \begin{figure}[!htb] \centerline{\includegraphics[scale=0.42]{new_throughput_error-eps-converted-to.pdf}} \caption{E2E video transmission throughput and segment loss at different elevation points.} \label{coverage and error} \end{figure} Fig.~\ref{coverage and error} plots the E2E video transmission throughput and segment loss when the UAV flies at high-elevation-angle (i.e., close to the top of the BS) and low-elevation-angle (i.e., away from the top of the BS) positions, respectively. The UAV-UE first flies toward the BS and then flies away from it. The flight height and video resolution are $ \mathrm{2m} $ and $ 1280\times720 $, respectively. We can see that the throughput drops significantly to $ 2\times10^{6}~\mathrm{bits/s} $ when the UAV-UE is at the high-elevation-angle position, and increases to $ 8.5\times10^{6}~\mathrm{bits/s} $ after returning to the low-elevation-angle position. We can also observe the tradeoff between throughput and segment loss, where the segment loss increases at high elevation angles and decreases at low elevation angles.
This is because the downward-tilted antenna of the BS provides limited gain for a UAV at a high elevation angle, which is consistent with the analytical results in \cite{Mobilitysky}. \section{Conclusion} In this paper, we developed a cellular-connected UAV testbed to evaluate the throughput, E2E delay, and reliability of CC and real-time video streaming transmission. We first implemented the LTE network via OAI and a USRP, and equipped the UAV with a Raspberry Pi and an LTE module to act as the UAV-UE. We then established the downlink CC transmission and the uplink video transmission, and proposed corresponding schemes for throughput, E2E delay, and reliability measurement. Our indoor experimental results have shown that: 1) buffer overflow limits the CC transmission frequency; 2) the FPS needs to be dynamically adapted to guarantee the continuity of the video transmission service; and 3) the link outage problem caused by the BS antenna pattern remains an urgent issue to be solved. \bibliographystyle{IEEEtran}
\section{Introduction} Of the many types of data confronting signal processing researchers, time series data is perhaps one of the most common. While there are many possible ways to analyze a time series, one of the most important tasks in many areas of science and engineering is to characterize (or predict) the state of a dynamical system from a stream of its output data~\cite{book_brockwell2002timeseries,kantz2004nonlinear}. This type of state identification can be particularly challenging because the internal (possibly high-dimensional) system state $x(t) \in \mathbb{R}^N$ is often only indirectly observed via a one-dimensional time series of measurements produced through an observation function $s(t) = h(x(t))$, where $h:\mathbb{R}^N\to\mathbb{R}$. Surprisingly, when the dynamical system has low-dimensional structure because the state is confined to an attractor $\mathcal{M}$ of dimension $d$ ($d<N$) in the state space, Takens' Embedding Theorem~\cite{takens,embedology} shows that complete information about the hidden state of this system can be preserved in the time series output data $s(t)$. Indeed, many systems of interest do have this type of structure~\cite{book_strogatz1994nonlinear}, and a variety of algorithms for tasks such as time series prediction and attractor dimension estimation exploit Takens' result~\cite{kantz2004nonlinear}. Specifically, Takens defined the \emph{delay coordinate map} $F: \mathbb{R}^N \rightarrow \mathbb{R}^M$ as a mapping of the state vector $x(t)$ to a point in the \emph{reconstruction space} $(\mathbb{R}^M)$ by taking $M$ uniformly spaced samples of the past time series (with sampling interval $T_s$) and concatenating them into a single vector, \begin{equation} F(x(t)) = [s(t) \; s(t-T_s) \; s(t-2T_s) \;\cdots \; s(t-(M-1)T_s)]^T. 
\label{eqn:dcm} \end{equation} Takens' main result \cite{takens} (later refined in \cite{embedology}) states that (under a few conditions on $T_s$ discussed later) for almost every smooth observation function $h(\cdot)$, the delay coordinate map is an \textit{embedding}\footnote{An \textit{embedding} is a \textit{one-to-one immersion}.} of the state space attractor $\mathcal{M}$ when $M>2d$. In other words, despite the state being hidden from direct observation, the topology of the attractor that characterizes the dynamical system can be preserved in the time series data when it is arranged into a delay coordinate map. In the absence of imperfections such as measurement or system noise, Takens' result indicates that a delay coordinate map should be as useful for characterizing a system as direct observation of the hidden system state. However, in the presence of noise, a one-to-one mapping may not be sufficient to guarantee the robustness of any processing performed in the reconstruction space (e.g., dimensionality estimation). The main underlying problem is that while Takens' theorem guarantees the preservation of the attractor's \emph{topology}, it does not guarantee that the \emph{geometry} of the attractor is also preserved. For example, Takens' result guarantees that two points on the attractor $\mathcal{M}$ do not map to the same point in the reconstruction space, but there are no guarantees that close points on the attractor remain close under this mapping (or far away points remain far away). Consequently, relatively small imperfections could have arbitrarily large effects when the delay coordinate map is used in applications. In the signal processing community, recent work has highlighted the importance of well-conditioned measurement operators to ensure the geometry of a low-dimensional signal family is preserved. 
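For intuition, the delay coordinate map of~\eqref{eqn:dcm} is easy to realize numerically. The following sketch (our illustration, in Python with NumPy; the sinusoidal observation and unit sampling step are arbitrary toy choices) builds reconstruction-space points from a scalar time series:

```python
# Toy realization of the delay coordinate map: row t of the output is
# [s(t), s(t-1), ..., s(t-(M-1))], a point in the reconstruction space.
import numpy as np

def delay_coordinate_map(s, M):
    """Map each valid index t to its M-dimensional delay vector."""
    # The earliest valid index is t = M-1, since M past samples are needed.
    return np.stack([s[t - np.arange(M)] for t in range(M - 1, len(s))])

t = np.arange(200)
s = np.sin(0.1 * t)               # scalar observations s(t) = h(x(t))
X = delay_coordinate_map(s, M=3)  # points in the reconstruction space R^3
```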
Consider a signal class $\widetilde{\mathcal{M}}$ with intrinsic dimension $d$ residing in $\mathbb{R}^N$ and a measurement operator $\widetilde{F}: \mathbb{R}^N \rightarrow \mathbb{R}^M$. We say that $\widetilde{F}$ is a \emph{stable embedding} of the signal class $\widetilde{\mathcal{M}}$ if, for all distinct pairs of points $x,y \in \widetilde{\mathcal{M}}$, pairwise distances are preserved in the sense that \begin{eqnarray} C(1- \delta) \le \frac{\|\widetilde{F}(x)-\widetilde{F}(y)\|_2^2}{\|x - y\|_2^2} \le C(1 + \delta). \label{eqn:cond_embed} \end{eqnarray} The \emph{scaling constant} $C$ could be absorbed into $\widetilde{F}$, and the \emph{conditioning number} $0\leq \delta<1$ bounds how much pairwise distances between signals in $\widetilde{\mathcal{M}}$ can change when mapped by $\widetilde{F}$ (i.e., how near $\widetilde{F}$ is to an isometry). The Johnson-Lindenstrauss (JL) lemma \cite{dasgupta2002elementary,achlioptas2003database} gives an example of a stable embedding of a signal class $\widetilde{\mathcal{M}}$ consisting of a point cloud of $d=|\widetilde{\mathcal{M}}|$ distinct points in $\mathbb{R}^N$. In this result, a random measurement matrix $\widetilde{F}$ with $M=O(\log(d))$ rows ensures that \eqref{eqn:cond_embed} holds with high probability for all pairs of points in the point cloud $\widetilde{\mathcal{M}}$. Another example is the recent work in the field of \emph{compressed sensing} (CS)~\cite{CompSampCand,CompSenDon}, where the canonical results show that similar random matrices $\widetilde{F}$ satisfy the \emph{Restricted Isometry Property} (RIP) with high probability when $M=O(d\log(N/d))$~\cite{jlcs,mendelson2008uniform}. The RIP guarantees that~\eqref{eqn:cond_embed} holds for all pairs of $d$-sparse signals (i.e., the signal family $\widetilde{\mathcal{M}}$ is comprised of signals on the union of all $d$-dimensional subspaces within $\mathbb{R}^N$).
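The conditioning number in~\eqref{eqn:cond_embed} can be estimated empirically for a JL-style random projection. The following sketch (our illustration, with arbitrary sizes and the rows scaled by $1/\sqrt{M}$ so that the scaling constant is $C = 1$ in expectation) draws a Gaussian matrix and measures the worst-case distortion $\delta$ over a random point cloud:

```python
# Empirical check of the stable-embedding condition for a Gaussian
# random projection: compute ||F(x)-F(y)||^2 / ||x-y||^2 over all
# pairs and report the worst-case deviation from isometry.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
N, M, n_points = 1000, 200, 20

points = rng.standard_normal((n_points, N))
F = rng.standard_normal((M, N)) / np.sqrt(M)  # E[||Fx||^2] = ||x||^2

ratios = []
for x, y in combinations(points, 2):
    num = np.sum((F @ x - F @ y) ** 2)
    den = np.sum((x - y) ** 2)
    ratios.append(num / den)

# Smallest delta such that all ratios lie in [1 - delta, 1 + delta]:
delta = max(max(ratios) - 1.0, 1.0 - min(ratios))
```

With these sizes the measured $\delta$ is well below 1, illustrating how a modest number of random measurements already preserves pairwise distances across the whole point cloud.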
Beyond extending the concept of the JL lemma from a finite point cloud to an infinite signal family, the CS results show the value of stable measurement operators by also making guarantees about efficient and robust signal recovery from these measurements. The notion of a stable embedding has also been extended to other signal models~\cite{baraniuk2008model}, including manifold signal families~\cite{wakin_embedding,clarkson2008tighter}. The latter can be seen as an extension of Whitney's Embedding Theorem~\cite{whitney}; while Whitney's Embedding Theorem ensures a one-to-one mapping of a manifold $\widetilde{\mathcal{M}}$ with dimension $d$ for almost any smooth projection function $\widetilde{F}$ given that $M > 2d$, the results in~\cite{wakin_embedding} further guarantee that~\eqref{eqn:cond_embed} holds over this signal family for a given $\delta$ with high probability when $M=O(d\log(N))$ and $\widetilde{F}$ is a random orthoprojector.\footnote{The required number of measurements $M$ in~\cite{wakin_embedding} also depends on some properties of the manifold (e.g., the maximum curvature). Clarkson \cite{clarkson2008tighter} later improved upon $M$ to remove the dependence on the ambient dimension $N$ and reduce the dependence on certain properties of the manifold.} While the notion of embedding the state of a dynamical system may seem far removed from the CS results, there is actually a close connection. It is well-known that Takens' Embedding Theorem can be viewed as a special case of Whitney's Embedding Theorem where the measurement operator $\widetilde{F}$ is restricted to forming a delay coordinate map (i.e., $\widetilde{F}=F$) and $\widetilde{\mathcal{M}}$ is taken to be the state space attractor (i.e., $\widetilde{\mathcal{M}}=\mathcal{M}$)~\cite{kantz2004nonlinear}. 
The main contribution of this paper is to further these connections by establishing sufficient conditions whereby the delay coordinate map is a stable embedding of the state space attractor for linear systems with linear observation functions. Indeed, the main technical result of this paper establishes deterministic, explicit, and non-asymptotic sufficient conditions for the delay coordinate map to be a stable embedding with a given conditioning $\delta$. We also explore the meaning of these conditions for characterizing systems via delay coordinate maps. In particular, the results of this exploration are interesting because they contrast with the standard CS results in two principal ways: $(i)$ the conditioning of the operator cannot always be improved by taking more measurements, as some system/observation pairs will have a fundamental limit on how well the system geometry can be preserved, and $(ii)$ the necessary number of measurements scales with the dimension of the attractor $d$ but is independent of the dimension of the ambient space $N$. Due to the importance of nonlinear systems, a similar general stable embedding result for nonlinear dynamical systems is obviously of great interest. Linear systems have a wealth of tools available for their analysis, and the language of ``attractors'' is uncommon when studying these relatively simple systems (despite the notion of an attractor being technically well-posed for the restricted class of linear systems we study here). Therefore, beyond contributing a new tool for linear systems analysis and design (as demonstrated in the example of Section~\ref{sec:dimest}), our present results are perhaps most valuable for elucidating some of the unique issues that arise when trying to stabilize the embeddings of dynamical systems, helping to pave the way for extensions to nonlinear systems.
\section{Background and Related Work} \label{sec:prelim} In this section we will briefly review some preliminaries, including a precise statement of Takens' theorem, attractors of linear systems, and related work in stable embeddings of attractors and manifolds. \subsection{Linear Systems and Delay Coordinate Maps} \label{sec:linear_sys} Let a dynamical system be defined by the differential equation: \begin{eqnarray} \dot{x}(t) = \Psi\left(x(t) \right), \label{eq:diff_eq} \end{eqnarray} where $x(t)\in\mathbb{R}^N$ is the system state at time $t$, and $\Psi: \mathbb{R}^N \rightarrow \mathbb{R}^N$ is a smooth function. As stated earlier, in this paper we will restrict our examination to embeddings of linear dynamical systems where $\Psi \in \mathbb{R}^{N \times N}$ is a matrix. Before going on, our discussion of these systems will require us to establish a basic notation for complex vector spaces. For $u = [u_1 \; \cdots \; u_N]^T \in \mathbb{C}^N$, we denote the imaginary unit by $j$, the (element-wise) complex conjugate by ${u}^{*}$, and the Hermitian transpose by $u^H=(u^*)^T$. Given the system matrix $\Psi$ and the definition of a dynamical system~\eqref{eq:diff_eq}, knowing the state at some fixed time $t_0$ is equivalent to knowing the path that the system takes to and from that state (called the \emph{flow}). Classic results in linear systems theory~\cite{brogan_book} show that the explicit solution for this path is given by a matrix multiplication: $x(t_0+t) = e^{\Psi t} x(t_0) = \Phi_t x(t_0)$, where $\Phi_t =e^{\Psi t}$ is the \emph{flow matrix}. Note that this solution is valid for positive or negative values of $t$, describing the flow both forward and backward from time $t_0$. Delay coordinate maps that embed points on the attractor of a dynamical system are intimately connected with the flow of the system approaching that point.
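As a toy numerical check of the flow solution (our example, not part of the analysis that follows): for the rotation generator $\Psi = \begin{pmatrix} 0 & -\theta \\ \theta & 0 \end{pmatrix}$ the matrix exponential has a closed-form rotation expression, so the backward flow matrix $e^{-\Psi T_s}$ and the delayed observations $s(t - mT_s) = h^T (e^{-\Psi T_s})^m x(t)$ for a linear observation $h$ can be computed directly:

```python
# Toy check of x(t0 + t) = e^{Psi t} x(t0) for Psi = [[0, -theta],
# [theta, 0]]; its matrix exponential is a rotation matrix in closed
# form, so no general expm() routine is needed. Sampling the backward
# flow at interval T_s yields the delayed observations.
import numpy as np

theta, T_s, M = 2.0, 0.1, 4
c, s = np.cos(theta * T_s), np.sin(theta * T_s)
Phi = np.array([[c,  s],
                [-s, c]])            # e^{-Psi * T_s}: one backward step
h = np.array([1.0, 0.0])             # a linear observation function

x_t = np.array([1.0, 0.5])           # state at time t
x_prev = Phi @ x_t                   # state at time t - T_s

# s(t - m*T_s) = h^T Phi^m x(t), for m = 0, ..., M-1:
delays = np.array([h @ np.linalg.matrix_power(Phi, m) @ x_t
                   for m in range(M)])
```

Stacking the row vectors $h^T \Phi^m$ recovers exactly the delayed samples that a delay coordinate map concatenates, which motivates the matrix form introduced next.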
In particular, forming a delay coordinate map of a specific point in the state space requires collecting samples of the system flow backward in time from that point at regular intervals $T_s$. To enable mathematical descriptions of this sampling operation along the flow, we suppress the implicit dependence on the sampling time $T_s$ and define the compact notation for the flow matrix as $\Phi = \Phi_{-T_s}$ so that $x(t-T_s)= \Phi x(t)$. The delay coordinate map $F$ with $M$ delays given in~\eqref{eqn:dcm} for the case of linear dynamical systems and linear observation functions $h \in \mathbb{R}^N$ can then be written as an $M \times N$ matrix: \begin{equation} F = \left(h \;|\; \Phi^T h \;|\; \cdots \;|\; (\Phi^{M-1})^T h \right)^T. \label{eq:DCM_linear_arb_h} \end{equation} To ensure that the linear dynamical systems under consideration have non-trivial steady-state behavior (i.e., oscillations rather than convergence to a fixed point), we restrict our study to the class of systems ${\mathcal{A}(d)}$ described in the following definition. \begin{definition} \label{def:A-eigenvalues} We say that a linear dynamical system in $\mathbb{R}^N$ defined by~\eqref{eq:diff_eq} is of \textbf{Class $\mathbf{\mathcal{A}(d)}$} for $d \le \frac{N}{2}$ if the system matrix $\Psi$ is real, full rank and has distinct eigenvalues. Moreover, $\Psi$ has only $d$ strictly imaginary\footnote{A number $x$ is \textit{strictly imaginary} if $\operatorname{Re}\{x\} = 0$. This condition ensures that the system modes corresponding to these eigenvectors have persistent oscillation in the steady-state response.} conjugate \emph{pairs} of eigenvalues and the rest of its eigenvalues have real components strictly less than 0. The strictly imaginary conjugate pairs of eigenvalues are called the \textbf{$\mathcal{A}$-eigenvalues} and they can be expressed as $\{\pm j \theta_i\}_{i=1}^{d}$ where $\theta_1, \cdots, \theta_d > 0$ are $d$ distinct numbers. 
The corresponding unit-norm \textbf{$\mathcal{A}$-eigenvectors} are $v_1, {v_1}^{*}, \cdots, v_d, {v_d}^{*}$. The corresponding eigenvalues of the flow matrix $\Phi$ are called the \textbf{$\mathcal{A}_{\Phi}$-eigenvalues}, and are given by $\{ e^{\pm j \theta_i T_s} \}_{i=1}^{d}$. \end{definition} \noindent Furthermore, we define $\Lambda = \mathrm{diag}\left(j\theta_1, -j\theta_1, \dots, j\theta_d, -j\theta_d\right)$ as the diagonal matrix composed of the $\mathcal{A}$-eigenvalues and $V = \left( v_1\;|\; {v_1}^{*} \;|\; \cdots \;|\; v_d \;|\; {v_d}^{*} \right) \in \mathbb{C}^{N \times 2d}$ as the concatenation of the $\mathcal{A}$-eigenvectors into a matrix with $\operatorname{rank}(V) = 2d$. Since $\Phi$ is the matrix exponential of $\Psi$, it is well-known that they share the same eigenvectors~\cite{moon2000mathematical}. Therefore, if we denote $D = D_{-T_s} = e^{- \Lambda T_s}$ as the diagonal matrix comprised of the $\mathcal{A}_{\Phi}$-eigenvalues, then we have $\Phi V = V D$. In order to have a meaningful notion of an embedding, the dynamical system must have its state trajectory confined to a low-dimensional attractor in the state space. Even if the system has transient characteristics from a given starting point, the embedding of a system is only considered in steady-state when these transients have disappeared. Considering the steady-state dynamics of the system, we make explicit the notion of an \textit{attractor} through the following definition. 
\begin{definition} \label{def:mathcal_M} Let a linear dynamical system be of class $\mathcal{A}(d)$ and let $x_0 = V \alpha_0 \in \mathbb{R}^N$ for some $\alpha_0 \in \mathbb{C}^{2d}$ be an arbitrary initial state of the system.\footnote{We only need to consider $x_0$ in the span of the columns of $V$ because any orthogonal components vanish in steady-state.} We define the \textbf{attractor} of this linear dynamical system to be $\mathcal{M} = \left\{ x \in \mathbb{R}^N \;|\; x = V e^{\Lambda t} \alpha_0\;,\; t \in \mathbb{R} \right\}$. \end{definition} It is easy to see that $\mathcal{M}$ lives in the span of $V$. Also, the attractor of the system clearly depends on the initial state of the system. Because the main results of this paper do not depend on the choice of initial state, we will simply refer to the fixed attractor as $\mathcal{M}$ and suppress the implicit dependence on the initial state. Additionally, one can check that this definition meets the fundamental notion of an attractor, i.e., that any point on the attractor $\mathcal{M}$ when projected backwards (or forward) in time by $\Phi$ will remain on $\mathcal{M}$. Specifically, for any $x\in\mathcal{M}$, we can write $x = V \alpha_x$, where $\alpha_x = e^{\Lambda t_x} \alpha_0$ for some $t_x \in \mathbb{R}$. Then we see that for some $D$ (the diagonal matrix comprised of the $\mathcal{A}_{\Phi}$-eigenvalues as defined earlier) and any $k \in \mathbb{Z}$, $\Phi^k x = \Phi^k V \alpha_x = V D^k \alpha_x$, meaning that $x$ remains on the attractor even when it is projected forward or backward in time. Finally, while we will not show this in detail due to space constraints, one can show that for each $i$ the state $x(t)$ is moving in an elliptical orbit on the span of $\Reo{v_i}$ and $\Imo{v_i}$ with angular speed proportional to $\theta_i T_s$. For clarity and to build intuition, we give two brief examples where $N = 2$, $d = 1$ and $T_s = 1$. 
For the first example, consider a dynamical system of class $\mathcal{A}(d)$ with $\mathcal{A}$-eigenvalue $\theta = \frac{\pi}{4}$ and $\mathcal{A}$-eigenvector $v = \frac{1}{\sqrt{2}}[1,\; j]^T$. Shown in Figure~\ref{fig:attractor_example}(a) is the resulting circular attractor of this system, along with the real and imaginary components of the $\mathcal{A}$-eigenvector and a pair of states separated in time by $T_s$ (which corresponds to a separation of $\theta T_s$ in angle). For the second example, consider a dynamical system of class $\mathcal{A}(d)$ with the same parameters except that the $\mathcal{A}$-eigenvector is now defined as $v = [0.8165 + 0.4082j, \; -0.4082j]^T$. Shown in Figure~\ref{fig:attractor_example}(b) is the resulting elliptical attractor and state time samples, illustrating that the angular speed is unchanged at $\theta T_s$. In both of these examples, the elongation of the ellipse is determined by the inner product between $\Reo{v}$ and $\Imo{v}$, which governs how well the attractor fills the dimensions of the state space that it occupies. While this is intuitive to visualize in the present case of $d=1$, for general $d > 1$ this elongation is determined by the ratio between the smallest and largest eigenvalues of $V^H V$, denoted $A_1$ and $A_2$, respectively. When $A_1 = A_2$, the system state revolves around a circle when projected onto each of the subspaces spanned by $\Reo{v_p}$ and $\Imo{v_p}$ for $p = 1, \cdots, d$, and the resulting attractor is a product of these circular orbits. However, when $A_2 \gg A_1$, the projection of the attractor onto some (or all) of these subspaces will be a highly elongated ellipse, therefore not equally filling the dimensions of the state space that it occupies. 
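The elongation measure just described is straightforward to compute. The following sketch (our own illustrative code, using the two example eigenvectors above) forms $V^H V$ and compares its extreme eigenvalues $A_1$ and $A_2$:

```python
import numpy as np

def fill_eigs(vs):
    """Return (A1, A2): the extreme eigenvalues of V^H V built from the
    A-eigenvectors vs (each paired with its complex conjugate)."""
    V = np.column_stack([c for v in vs for c in (v, v.conj())])
    evals = np.linalg.eigvalsh(V.conj().T @ V)      # V^H V is Hermitian
    return evals.min(), evals.max()

# First example: Re{v} and Im{v} orthogonal with equal norms -> circular attractor.
A1, A2 = fill_eigs([np.array([1.0, 1j]) / np.sqrt(2)])

# Second example: same frequency, skewed eigenvector -> elongated ellipse.
B1, B2 = fill_eigs([np.array([0.8165 + 0.4082j, -0.4082j])])
```

The first pair returns $A_1 = A_2 = 1$ (the attractor fully fills its plane), while the second returns a ratio $A_2/A_1 \approx 6.9$, quantifying the elongation of the elliptical attractor.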
\begin{figure*} \hfil \begin{minipage}[t]{0.4\linewidth} % \centerline{\epsfysize = 45mm \epsffile{figs/fig1.eps}} \vspace{-4mm} \centerline{\small$\quad$~(a)} \end{minipage} % \hfil \begin{minipage}[t]{0.4\linewidth} \centerline{\epsfysize = 45mm \epsffile{figs/fig2.eps}} \vspace{-4mm} \centerline{\small$\quad$~(b)} \end{minipage} \vspace{-2mm} \caption{\sl\small Examples of attractors of linear dynamical systems of class $\mathcal{A}(d)$ in $\mathbb{R}^N$ for $N = 2$ and $d = 1$ with sampling interval $T_s = 1$. (a) A system attractor when $\theta = \frac{\pi}{4}$ and $v = \frac{1}{\sqrt{2}}[1,\; j]^T$. This results in a circular attractor where the system progresses at an angular speed determined by $\theta$. (b) A system attractor when $\theta = \frac{\pi}{4}$ and $v = [0.8165 + 0.4082j, \; -0.4082j]^T$. Here the system also progresses at the same angular speed, but the attractor is now an ellipse.} \label{fig:attractor_example} \vspace{-8mm} \end{figure*} \subsection{Attractor Embeddings} \label{sec:Takens} The following theorem is an extension of Takens' original result \cite{takens}, and gives a lower bound on the number of measurements $M$ sufficient to ensure that a delay coordinate map $F$ defined as in \eqref{eqn:dcm} is a one-to-one mapping from the state space attractor to the measurement (reconstruction) space. \begin{thm}[Takens' Embedding Theorem~\cite{embedology}] \it \label{thm:takens} Assume the dynamical system converges to an attractor $\mathcal{M}$ of dimension $d$ and pick a sampling interval $T_s > 0$. Let $M > 2d$ and suppose $\mathcal{M}$ has a finite number of equilibria, no periodic orbits of $\Psi$ of period $T_s$ or $2T_s$, and at most finitely many periodic orbits of period $kT_s$ for $k = 3, \cdots, M$. Then for almost every smooth function $h$, the {delay-coordinate map} $F$ is one-to-one on $\mathcal{M}$. 
\end{thm} \noindent The notion of ``almost every'' used in the theorem above is technical (see~\cite{embedology} for details), but is consistent with the heuristic notion that out of all possible functions $h$, most will indeed work. In this paper we consider the question of when the one-to-one property described in Theorem~\ref{thm:takens} can be improved to become a stable embedding where $F$ is (nearly) an isometry that preserves the geometry of $\mathcal{M}$. Specifically, we introduce the following definition to formalize the notion of a stable embedding. \begin{definition} Suppose we have a dynamical system in $\mathbb{R}^N$ that converges to an attractor $\mathcal{M}$ and a linear map $F: \mathbb{R}^N \rightarrow \mathbb{R}^M$. We say that $F$ is a \emph{stable embedding} of $\mathcal{M}$ with \emph{conditioning} $\delta$ if for all $x,y\in\mathcal{M}$ and for some \emph{scaling constant} $C$, we have \begin{equation} C(1- \delta) \le \frac{\|F(x)-F(y)\|_2^2}{\|x - y\|_2^2} \le C(1 + \delta). \label{eqn:ARIP} \end{equation} \end{definition} \noindent Note that smaller values of $\delta$ in the above definition imply a more stable embedding because they guarantee that the map is closer to an isometry. We also note that the preservation of Euclidean distances implies that the geodesic distances between points on the attractor are preserved~\cite{wakin_embedding}. Because Takens' result only tells us that the delay coordinate map $F$ is a one-to-one mapping, it does not guarantee any specific value of the conditioning, meaning that $\delta$ could be arbitrarily close to 1 and the embedding could be highly unstable. To see why Takens' Embedding can be insufficient, we present an illustrative example where the conditioning of the embedding can be made arbitrarily bad when $M$ is the minimum number of delays necessary to satisfy the sufficient conditions of Theorem~\ref{thm:takens}. 
Consider a linear system of class $\mathcal{A}(1)$ with $N=2$, $T_s=1$, $\mathcal{A}$-eigenvalue $\theta = 0.03$ and $\mathcal{A}$-eigenvector $v = \frac{1}{\sqrt{2}}[1, \; j]^T$. This system has a circular attractor as depicted in Figure \ref{fig:lead1_Q}(a). We set the observation function to be $h = \sqrt{\frac{2}{M}}[\sqrt{\epsilon},\; \sqrt{1-\epsilon}]^T$.\footnote{As will be described in Theorem \ref{thm:stability_thm}, the observation function is normalized so that we have scaling constant of $C=1$ regardless of $M$.} Given a particular pair of points $x,y$ on opposite ends of the circular attractor (shown in Figure \ref{fig:lead1_Q}(a)), we examine the ratio $Q(x,y) = \frac{\|F(x) - F(y)\|_2^2}{\|x - y\|_2^2}$, where $F$ is the delay coordinate map given in \eqref{eq:DCM_linear_arb_h}. Note that if $F$ is a perfect isometry then $Q(x,y)=1$, and we must have $Q(x,y)>0$ for $F$ to be one-to-one. Fixing the number of measurements at $M=3$ (the minimum required by Takens' theorem), Figure \ref{fig:lead1_Q}(b) shows the behavior of $Q(x,y)$ for this pair of points as a function of $\epsilon$. We see that while meeting the sufficient conditions of Takens' Theorem, $\lim_{\epsilon\to 0} Q(x,y)=0$. Stated another way, by adjusting the parameter $\epsilon$ the conditioning of $F$ can be made arbitrarily bad for this pair of points. To see that this is not simply a bad pairing of the measurement function to the system, note that for any admissible choice of $h$ there would exist a pair of points that would behave the same way.\footnote{One can imagine this by rotating the points $x,y$ by an angle equivalent to the angle between the new measurement function and the given $h$.} To explore this example further, Figure \ref{fig:lead1_Q}(c) plots $Q(x,y)$ with $\epsilon = 0.1$ and varying $M$ from 3 to 400. We see that with increasing $M$, the ratio $Q(x,y)$ increases, oscillates and converges to a value of $C = 1$. 
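This example can be reproduced numerically. The sketch below (our own hypothetical construction of $\Psi$; the system, $h(\epsilon)$ and the diametrically opposite pair $x = -y$ are as described above) evaluates the ratio $Q(x,y)$:

```python
import numpy as np
from scipy.linalg import expm

theta, Ts = 0.03, 1.0
Psi = np.array([[0.0, theta], [-theta, 0.0]])   # class A(1), circular attractor
Phi = expm(-Psi * Ts)                           # backward-time flow matrix

def Q(eps, M):
    """Conditioning ratio Q(x, y) for the opposite points x = [1, 0], y = -x."""
    h = np.sqrt(2.0 / M) * np.array([np.sqrt(eps), np.sqrt(1 - eps)])
    # Row k of the delay coordinate map F is h^T Phi^k.
    F = np.array([h @ np.linalg.matrix_power(Phi, k) for k in range(M)])
    x, y = np.array([1.0, 0.0]), np.array([-1.0, 0.0])
    return np.sum((F @ x - F @ y) ** 2) / np.sum((x - y) ** 2)
```

With $M = 3$ fixed, `Q(eps, 3)` collapses toward zero as `eps` shrinks, while fixing `eps = 0.1` and growing $M$ pulls the ratio back toward the scaling constant $C = 1$, consistent with the behavior described above.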
This provides evidence suggesting that as $M$ increases, the conditioning of $F$ improves because the distance between this pair of points is preserved with increasing fidelity. This effect is not predicted by Theorem~\ref{thm:takens}, but will be shown in our main results in Section~\ref{sec:STE_Ad}. \begin{figure*} \begin{center} \hfil \begin{minipage}[t]{0.3\linewidth} \centerline{\epsfysize = 40mm \epsffile{figs/fig3.eps}} \vspace{-1mm} \centerline{\small$\quad$~(a)} \end{minipage} \hfil \begin{minipage}[t]{0.3\linewidth} \centerline{\epsfysize = 40mm \epsffile{figs/fig4.eps}} \vspace{-1mm} \centerline{\small$\quad$~(b)} \end{minipage} % \hfil \begin{minipage}[t]{0.3\linewidth} \centerline{\epsfysize = 40mm \epsffile{figs/fig5.eps}} \vspace{-1mm} \centerline{\small$\quad$~(c)} \end{minipage} \end{center} \vspace{-3mm} \caption{\sl\small Examining the conditioning of Takens' embeddings. (a) The large (blue) circle shows the attractor of the linear system. The (black) diamond and (red) circle markers show 2 different points $x,y$ that we pick on the opposite ends of the attractor. The arrow depicts the measurement function $h(\epsilon)$. (b) The graph shows $Q(x,y)$ for the points $x,y$ in Figure \ref{fig:lead1_Q}(a) over a range of values of $\epsilon$ from 0.01 to 0.1. The number of measurements $M$ is fixed at 3, the minimum required by Takens' theorem. (c) Here $Q(x,y)$ is plotted for $M$ ranging from 3 to 400 (with $\epsilon$ fixed at 0.1), suggesting a near isometry for $F$ as $M$ increases.} \label{fig:lead1_Q} \vspace{-8mm} \end{figure*} \subsection{Related Work} Independently but at nearly the same time as Takens' original work, Aeyels \cite{aeyels1981generic} looked at the same problem from a control theory standpoint. 
He showed that the delay-coordinate map is related to the observability criteria and that given any system in $N$ dimensions (not just one confined to an attractor), a generic choice of observation function $h$ guarantees that the system is observable as long as $M\geq 2N + 1$. Similar to the idea of a stable embedding, the authors in~\cite{Kang2009a} developed a robustness measure for the observability of dynamical systems. Stated in the language of delay coordinate maps and sampled systems, they defined a system as \emph{observable with precision} $(\epsilon, \delta)$ if for any two states $x,y$ on a trajectory in the state space, $\|F(x) - F(y)\|_2 \le \epsilon$ implies $\|x - y\|_2 \le \delta$. In addition to Takens' original investigation of attractor embeddings~\cite{takens}, significant advances were made by Sauer et al. \cite{embedology} to extend these results to include attractors of non-integer dimensions (i.e., strange attractors) and to make the definition of ``almost every'' more in line with notions of an event that occurs with probability one. Our preliminary results showing conditions for a stable embedding for linear systems of class $\mathcal{A}(1)$ were reported in~\cite{yap2010stabletakens}. There has also been significant prior work related to embedding manifolds (or fractal sets), which has important implications for attractor embeddings. Specifically, embedding results for manifolds were derived by Whitney \cite{whitney} and later expanded on by Sauer et al. \cite{embedology}. These results show that if a manifold has dimension $d$, then almost every smooth function mapping into $\mathbb{R}^M$ with $M > 2d$ will be an embedding of the manifold. 
Baraniuk \& Wakin \cite{wakin_embedding} extended these results to show that for manifolds with dimension $d$ embedded in $\mathbb{R}^N$, random orthoprojections into $\mathbb{R}^M$ provide a stable embedding of the manifold as long as $M$ scales linearly with $d$ and logarithmically with $N$ (depending also on various properties of the manifold, such as the maximum curvature). Clarkson \cite{clarkson2008tighter} later improved on the required number of measurements $M$ by removing the dependence on $N$ and certain worst case properties of the manifold. We note that these stable embedding results have been used to show that manifold learning and dimensionality estimation algorithms can be performed in the compressed space with nearly the same accuracy as they could be performed in the original space~\cite{hegde2007manlearn}. The main distinction between these manifold embedding results and Takens' theorem is that these results acquire $M$ independent observations of each single point on the manifold, whereas Takens' result requires the repeated application of a single observation function to a system having its own internal time variations. In essence, the delay coordinate map relies on the system dynamics to provide measurement diversity when the observations are restricted to a single fixed function $h$. One of the principal benefits of a stable delay coordinate map would be resilience to noise and other imperfections. The effect of noise on the reconstruction of state space attractors has also been previously considered by several researchers apart from the notion of a stable embedding. In \cite{muldoon1998delay} the authors looked at a modified embedding theorem for systems corrupted by dynamical noise, considering specifically embeddings using multivariate time series system outputs and taking more measurements than is typically required for a delay-coordinate map. 
In \cite{casdagli1991state}, the authors study the effects of observational noise via statistical methods, showing how the choice of delay-coordinates (i.e., the choice of observation function $h$ and sampling time $T_s$ with respect to the system dynamics) affects the ability to make predictions. In particular, they showed that poor reconstruction amplifies noise and increases estimation error. In related work, there has also been considerable research on the choice of the optimal sampling interval $T_s$ for the construction of the delay coordinate map (typically for the study of chaotic dynamical systems). In particular, one of the more successful techniques is choosing $T_s$ to minimize the mutual information between any two time series samples separated by $T_s$~\cite{fraser1986independent}. The resulting reconstructed attractor usually makes the quantitative and qualitative study of the chaotic dynamics easier as the reconstructed trajectories tend to be unfolded to maximally fill the reconstruction space. In contrast, our goal is to characterize conditions on the system and observation functions (including but not limited to $T_s$) such that the geometry of the attractor is faithfully represented in the reconstruction space. \section{Stable Embeddings for Linear Dynamical Systems} In this section we present our main technical results. We first present a preliminary result in Section~\ref{sec:existence} that gives explicit sufficient conditions on the system and observation functions to guarantee that the delay coordinate map is a one-to-one map of the state space attractor. This is akin to Takens' Embedding Theorem, and we present it here to highlight the specific differences that arise under our restrictions (linear systems and measurement functions) and when seeking explicit conditions on system and measurement pairs (as opposed to the conditions for generic observation functions in Takens' theorem). 
We then present our main technical contribution in Section~\ref{sec:STE_Ad}, giving explicit conditions on the system and observation function for the delay coordinate map to be a stable embedding of the attractor with specific guarantees on the conditioning number of the embedding. \subsection{Takens' Embeddings} \label{sec:existence} The following theorem gives conditions on the system and the observation function such that the delay coordinate map $F$ is a one-to-one mapping. This is analogous to Theorem~\ref{thm:takens} in the context of linear dynamical systems and linear observation functions. \begin{thm}[Linear Takens' Embedding \cite{yap2010stabletakens}] \label{thm:existence_thm} Assume a linear dynamical system of class $\mathcal{A}(d)$ in $\mathbb{R}^N$ that is in steady state. Choose $T_s > 0$ to be the sampling interval, $h \in \mathbb{R}^N$ to be the observation function, and denote by $F$ the delay-coordinate map with $M$ delays as defined in \eqref{eq:DCM_linear_arb_h}. Suppose that $M \ge 2d$, the $\mathcal{A}_{\Phi}$-eigenvalues $\{e^{\pm j\theta_i T_s}\}$ are distinct and strictly complex,\footnote{We say that a number $x$ is \emph{strictly complex} if $\Imo{x} \neq 0$.} and $v_i^H h \neq 0$ for all $i = 1, \cdots, d$. Then for all distinct pairs of points $x,y \in \mathcal{M}$, $F$ satisfies \eqref{eqn:ARIP} for some constants $C$ and $\delta < 1$. \end{thm} \emph{Proof: The proof of this theorem can be found in Appendix~\ref{sec:proof_main}.} To explore the differences that arise in our specific setting of linear systems and linear observation functions, we compare the conditions of this theorem with that of Takens' theorem. First, we notice that the conditions on the measurement operation are very similar. Theorem~\ref{thm:existence_thm} requires $M \ge 2d$, which is similar to Takens' $M>2d$ and likely only different because of the specific structure of our attractors. 
There is also a close correspondence with the other condition on the measurement function $v_i^H h \neq 0$. This requirement is an explicit condition on the relationship between the system and observation function ensuring that the observation function can capture some information from every dimension of the attractor. We note that (Lebesgue) almost-every $h \in \mathbb{R}^N$ will satisfy this condition, and so we find that this is just a more explicit version of Takens' result that ``almost-every'' $h$ ensures an embedding. Next, we compare our conditions on the system with those imposed by Takens' theorem. Theorem~\ref{thm:existence_thm} requires that the $\mathcal{A}_{\Phi}$-eigenvalues are distinct and strictly complex, which is equivalent to having $e^{j\theta_p T_s} \neq e^{\pm j \theta_q T_s}$ (distinct) and $e^{j\theta_p T_s} \neq \pm 1$ (strictly complex) for all $p \neq q$ and $p,q = 1, \cdots, d$. While this requirement implies\footnote{This implication can be shown by contradiction. Pick any $1 \le k \le 2d$ and suppose that $\mathcal{M}$ has at least a periodic orbit of $\Psi$ with period $kT_s$. This would be equivalent to saying that $e^{j \theta_p kT_s} = \left(e^{j \theta_p T_s}\right)^{k} = 1$ for all $p$, meaning that for each $p$ from 1 to $d$ the quantity $e^{\pm j \theta_p T_s}$ is uniquely one of the $k$ roots of unity. However this is impossible as there are $2d$ distinct and strictly complex values of $\{e^{\pm j\theta_p T_s}\}$ and there are only $k \le 2d$ roots of unity (including $\pm 1$ which are not allowed), and hence we have a contradiction.} that $\mathcal{M}$ does not have periodic orbits of period $kT_s$ for $k = 1, \cdots, 2d$ (thus satisfying Takens' condition), our condition is actually more stringent than this restriction on periodic orbits (likely due to our restricted class of linear observation functions). 
We note that since $\{\theta_i\}_{i=1}^{d}$ are distinct by definition, this condition is dependent on the choice of sampling interval $T_s$. One can verify that choosing $T_s < \frac{\pi}{\max\{ \theta_i \}}$ is sufficient (but not necessary) to meet the condition of the theorem. \subsection{Stable Takens' Embeddings} \label{sec:STE_Ad} Before presenting our main result giving conditions for a stable embedding of a dynamical system in a delay coordinate map, it will be useful to define and understand the following quantities that characterize how well-behaved the system and measurement process are both individually and jointly. First, we define $\kappa_1 = \min_{i \in \{1, \dots, d\}} \left\{\frac{|v_i^H h|}{\|h\|_2} \right\}$ and $\kappa_2 = \max_{i \in \{1,\dots, d\}} \left\{\frac{|v_i^H h|}{\|h\|_2} \right\}$ characterizing the minimum and maximum projection of the (normalized) observation function on the $\mathcal{A}$-eigenvectors. Roughly speaking, these quantities are an indication of the disparity between the dimensions of the system attractor that are best and worst matched to the observation function. One would expect that a measurement system is most efficient when it observes all parts of the attractor equally such that $\kappa_1 \approx \kappa_2$. Second, we define $A_1, A_2$ as the smallest and largest eigenvalues of $V^H V$, respectively. As we discussed at the end of Section \ref{sec:linear_sys}, these quantities describe how well the system attractor fills the dimensions of the state space that it occupies (i.e., when $A_2\gg A_1$ the attractor is very elongated in the state space). Again, we would expect that a system will be most amenable to observation when it fills the space such that $A_1\approx A_2$. 
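These quantities are simple to evaluate in practice. The sketch below (a hypothetical $d = 2$ system of our own construction, chosen so the balanced case is easy to verify) computes $\kappa_1, \kappa_2$ and $A_1, A_2$ directly from the $\mathcal{A}$-eigenvectors and the observation function:

```python
import numpy as np

def kappas_and_fill(vs, h):
    """kappa_1, kappa_2 (projections of normalized h onto the A-eigenvectors)
    and A_1, A_2 (extreme eigenvalues of V^H V)."""
    proj = [abs(np.vdot(v, h)) / np.linalg.norm(h) for v in vs]  # |v_i^H h| / ||h||_2
    V = np.column_stack([c for v in vs for c in (v, v.conj())])
    evals = np.linalg.eigvalsh(V.conj().T @ V)
    return min(proj), max(proj), evals.min(), evals.max()

# Hypothetical d = 2 system in R^4: the eigenvectors occupy disjoint coordinate
# planes, and h projects identically onto each of them.
vs = [np.array([1.0, 1j, 0.0, 0.0]) / np.sqrt(2),
      np.array([0.0, 0.0, 1.0, 1j]) / np.sqrt(2)]
h = np.ones(4) / 2.0
k1, k2, A1, A2 = kappas_and_fill(vs, h)
```

For this balanced pair, $\kappa_1 = \kappa_2 = 1/2$ and $A_1 = A_2 = 1$: the favorable situation ($\kappa_1 \approx \kappa_2$, $A_1 \approx A_2$) described above.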
Finally, we define $\nu := \underset{p \neq q}{\max} \left\{ {\left|\sin(\theta_p T_s)\right|}^{-1}, {\left| \sin\left( \frac{(\theta_p - \theta_q)T_s}{2} \right) \right|}^{-1}, {\left| \sin\left( \frac{(\theta_p + \theta_q)T_s}{2} \right) \right|}^{-1} \right\}$, which will also bound the constants associated with the stable embedding. Notice that the first term is large if $\theta_p T_s$ is small for some $p$ (or that $\theta_p T_s \approx k\pi$ for some integer $k$), meaning that the system state proceeds in the span of $\Reo{v_p}$ and $\Imo{v_p}$ at a slow pace, thus not producing much diversity in consecutive measurements of the system along these dimensions. The second term is large if $\theta_p T_s - \theta_q T_s$ is small (or near $k\pi$) for some $p \neq q$ and $p,q = 1, \cdots, d$, implying that the system state is proceeding in the subspaces spanned by $\Reo{v_p}, \Imo{v_p}$ and $\Reo{v_q}, \Imo{v_q}$ at almost the same rate. % This condition would be unfavorable because the system will take an extremely long time to display enough diversity to determine that it is actually traveling on two separate subspaces instead of one. The third term is similar to the second term if we write $\theta_p T_s + \theta_q T_s = \theta_p T_s - (-\theta_q T_s)$. Thus if $\theta_p T_s \sim -\theta_q T_s$, then the system is again proceeding on two subspaces at almost the same rate (although the system is proceeding in one of the subspaces in the ``opposite'' direction). Armed with these definitions, we now present our main result giving deterministic, explicit and non-asymptotic guarantees on the conditioning of the delay coordinate map. \begin{thm}[Stable Linear Takens' Embedding] \label{thm:stability_thm} Assume a linear dynamical system of class $\mathcal{A}(d)$ in $\mathbb{R}^N$ that is in steady state. 
Choose $T_s > 0$ to be the sampling interval, $h \in \mathbb{R}^N$ to be the observation function such that $\|h\|_2^2 = \frac{2d}{M}$, and denote by $F$ the delay-coordinate map with $M$ delays as defined in \eqref{eq:DCM_linear_arb_h}. Suppose that $M~>~\left((2d~-~1)~\frac{A_2 \kappa_2^2}{A_1 \kappa_1^2}~\nu\right)$, the $\mathcal{A}_{\Phi}$-eigenvalues $\{e^{\pm j\theta_i T_s}\}$ are distinct and strictly complex, and $v_i^H h \neq 0$ for all $i = 1, \cdots, d$. Then for all distinct pairs of points $x,y \in \mathcal{M}$, $F$ satisfies \eqref{eqn:ARIP} with constants $C~:=~d~\left( \frac{\kappa_1^2}{A_2} + \frac{\kappa_2^2}{A_1} \right)$ and $\delta := \delta_0 + \delta_1(M)$, where: \begin{eqnarray} \delta_0 := \frac{A_2 \kappa_2^2 - A_1 \kappa_1^2 }{A_2 \kappa_2^2 + A_1 \kappa_1^2}, \;\;\;\;\; \delta_1(M) := \frac{(2d-1) \nu}{M} \left(\frac{2 A_2 \kappa_2^2}{A_2 \kappa_2^2 + A_1 \kappa_1^2}\right). \label{eqn:delta1} \end{eqnarray} \end{thm} \emph{Proof: The proof of this theorem can be found in Appendix \ref{sec:proof_main}.} We first note that the sufficient conditions of this theorem are the same as those in Theorem~\ref{thm:existence_thm}, except that the required number of measurements is larger to ensure specific guarantees on the conditioning number $\delta$ (i.e. $\delta < 1$). Also, note that this theorem requires an observation function with a particular norm $\|h\|_2^2 = \frac{2d}{M}$. This normalization is to remove from $C$ any dependence on the number of measurements $M$ and the dimension of the attractor $2d$ (since $\kappa_1^2$ and $\kappa_2^2$ both scale inversely with $d$). The normalization plays no other significant role in the proof (and therefore could be eliminated without losing generality, but at the expense of clarity). To understand the implications of Theorem~\ref{thm:stability_thm}, we examine the behavior of the conditioning number $\delta$ as it is the main quantity of interest. 
In the theorem statement, $\delta$ is a sum of $\delta_0$ (which does not depend on $M$) and $\delta_1(M)$ which is positive for all $M$ and for which $\lim_{M\to \infty} \delta_1(M)=0$. Thus, we see that by taking more observations one could drive the conditioning guarantee for the mapping to $\delta=\delta_0$, \emph{but not below}. In other words, some system and measurement pairs will have a plateau preventing the conditioning guarantee for the delay coordinate map from improving beyond a fundamental limit. This is in contrast with CS results where the conditioning can be continually improved by taking more measurements. Indeed, in order to get arbitrarily good conditioning we would need $\delta_0=0$, which happens if and only if $A_2 \kappa_2^2 - A_1 \kappa_1^2 = 0 \; \Leftrightarrow \; \frac{A_2}{A_1} = \frac{\kappa_1^2}{\kappa_2^2}=1$. Recall that $A_1 = A_2$ implies that the attractor $\mathcal{M}$ maximally fills the subspace spanned by $V$ and $\kappa_1 = \kappa_2$ means that the observation function $h$ projects equally onto the $\mathcal{A}$-eigenvectors. Thus even with an infinite number of measurements, the delay coordinate map can only be guaranteed to be an exact isometry ($\delta=0$) when the system and observation function maximally fill and measure the subspace containing the attractor. The quantity $\delta_1(M)$ can be used to determine the number of measurements necessary to ensure that the conditioning number $\delta$ is within $\epsilon$ of the optimal value $\delta_0$. To find the required number of measurements to meet this target $\widehat{M}(\epsilon)$, we set $\delta_1(M)=\epsilon$ and solve \eqref{eqn:delta1} for $M$ to get \begin{eqnarray} \widehat{M}(\epsilon) = \frac{(2d-1) \nu}{\epsilon} \left(\frac{2 A_2 \kappa_2^2}{A_2 \kappa_2^2 + A_1 \kappa_1^2}\right). 
\label{eq:def_widehatM} \end{eqnarray} By multiplying the numerator and denominator by $\frac{1}{A_2 \kappa_2^2}$ and noting that $0 < \frac{A_1 \kappa_1^2}{A_2 \kappa_2^2} \le 1$, we can deduce that $\frac{(2d-1) \nu}{\epsilon} \le \widehat{M}(\epsilon) < \frac{2(2d-1) \nu}{\epsilon}$. One immediate application of this fact is that we can calculate the number of measurements necessary to guarantee a stable embedding for the delay coordinate map with a specified conditioning $\delta \in (\delta_0, \; 1)$, which is made precise in the following corollary. \begin{cor} Suppose we have a linear system of class $\mathcal{A}(d)$, observation function $h$ and sampling time $T_s$ such that the conditions of Theorem \ref{thm:stability_thm} are satisfied. Choose any $0<\epsilon<\left(1-\delta_0\right)$. If the delay coordinate map $F$ defined in \eqref{eq:DCM_linear_arb_h} has a number of delays $M$ chosen to satisfy $M \ge \frac{2(2d - 1)\nu}{\epsilon}$, then $F$ is a stable embedding of $\mathcal{M}$ with conditioning $\delta \leq \delta_0 + \epsilon$. \end{cor} The proof of this corollary is not shown, but follows immediately from Theorem \ref{thm:stability_thm}. While the linear scaling with $d$ seen in this result is in line with state-of-the-art CS results, we see that in contrast to typical CS results $\widehat{M}(\epsilon)$ does not depend on the ambient dimension $N$. Also note that $\widehat{M}(\epsilon)$ depends strongly on the $\mathcal{A}$-eigenvalues via the quantity $\nu$. In contrast, the interactions of the $\mathcal{A}$-eigenvectors and the observation function $h$ determine the lower bound on the conditioning $\delta$, as evidenced by the roles played by the quantities $A_1,A_2$ and $\kappa_1, \kappa_2$ in the formula for $\delta_0$. 
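The constants of Theorem~3 and the measurement bound $\widehat{M}(\epsilon)$ can also be evaluated numerically. The sketch below (with hypothetical frequencies and constants of our own choosing) implements $\nu$, $\delta_0$, $\delta_1(M)$ and $\widehat{M}(\epsilon)$ as defined above:

```python
import numpy as np

def nu(thetas, Ts):
    """Worst-case inverse-sine factor from the definition of nu."""
    terms = [1.0 / abs(np.sin(t * Ts)) for t in thetas]
    for p, tp in enumerate(thetas):
        for tq in thetas[p + 1:]:
            terms += [1.0 / abs(np.sin((tp - tq) * Ts / 2)),
                      1.0 / abs(np.sin((tp + tq) * Ts / 2))]
    return max(terms)

def delta_bound(d, A1, A2, k1, k2, nu_val, M):
    """Conditioning guarantee delta = delta_0 + delta_1(M) of the theorem."""
    s = A2 * k2**2 + A1 * k1**2
    d0 = (A2 * k2**2 - A1 * k1**2) / s
    d1 = ((2 * d - 1) * nu_val / M) * (2 * A2 * k2**2 / s)
    return d0, d0 + d1

def M_hat(d, A1, A2, k1, k2, nu_val, eps):
    """Number of delays making delta_1(M) = eps."""
    s = A2 * k2**2 + A1 * k1**2
    return ((2 * d - 1) * nu_val / eps) * (2 * A2 * k2**2 / s)

# A hypothetical well-matched pair (A1 = A2, kappa_1 = kappa_2) has delta_0 = 0,
# so the conditioning guarantee improves without bound as M grows.
n = nu([0.5, 1.3], Ts=1.0)
d0, delta = delta_bound(d=2, A1=1.0, A2=1.0, k1=0.5, k2=0.5, nu_val=n, M=1000)
```

In this balanced example $\delta_0 = 0$ and the guarantee is driven by $\delta_1(M) \propto \nu/M$ alone; for an unbalanced pair ($A_2 \kappa_2^2 > A_1 \kappa_1^2$) the same code returns $\delta_0 > 0$, the plateau discussed above.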
\section{Simulation experiments} \label{sec:sims} While the main result in Theorem~\ref{thm:stability_thm} is encouraging, it remains to be shown that $(i)$ the theoretical quantities actually reflect the salient embedding characteristics seen in system and measurement combinations, and $(ii)$ having a stable embedding actually improves our ability to infer information about a hidden attractor. For example, it is important to know if the fundamental limits on the embedding quality $\delta(M)$ are artifacts of our proof technique or are empirically observed. If these limits on the embedding quality are actually present, it is also important to know if the related bounds are tight, both in their asymptotic values and in terms of their convergence speed as $M$ increases. Finally, for a stable embedding to be a valuable goal, we need to demonstrate that achieving this goal results in improved performance in specific tasks performed in the reconstruction space. This section will use a series of simple simulations to explore these aspects of our theoretical results. As a general approach, each simulation in Sections~\ref{sec:bounds} and~\ref{sec:convsp} below involves creating an observation function $h$ and a test system of dimension $N=50$ in class $\mathcal{A}(d)$ (defined by $\mathcal{A}$-eigenvalues and $\mathcal{A}$-eigenvectors) so that the conditions of Theorem~\ref{thm:stability_thm} are satisfied. We choose the arbitrary initial point $x_0$ defining the attractor such that $\alpha_0 = [1, \; \cdots,\; 1]^T$ and $x_0 = V \alpha_0$, and we assume a sample time of $T_s=1$. For a single trial, we generate a random pair of points on the attractor $x$ and $y$ by choosing uniform random numbers $t_x, t_y$ from $(0, 10000)$ and assigning $x = V e^{\Lambda t_x} \alpha_0$ and $y = V e^{\Lambda t_y} \alpha_0$. In other words, we start the system from the (arbitrary) initial condition and stop it after a random amount of time to get a single point on the attractor. 
We then vary $M$ from 1 to 200, and run 1000 trials for each $M$ (renormalizing $h$ for each $M$ as per Theorem~\ref{thm:stability_thm}). For each trial we calculate the quality of the conditioning $Q(x,y) = \frac{\|F(x) - F(y)\|_2^2}{\|x-y\|_2^2}$, and for each $M$ record the largest and smallest value of $Q(x,y)$ (denoted $\max\{Q\}$ and $\min\{Q\}$, respectively) as a way to quantify how the conditioning changes with the number of measurements. In the subsequent plots the dotted lines represent $C(1\pm\delta_0)$, showing the theoretical asymptotic bounds on the conditioning quality $Q(x,y)$, and the dashed lines are the theoretical bounds on the conditioning $C(1 \pm \delta(M))$ given by Theorem~\ref{thm:stability_thm}. \subsection{Bounds on the embedding quality} \label{sec:bounds} One of the fundamental characteristics of Theorem~\ref{thm:stability_thm} is that in general, the bound on the embedding quality $\delta (M)$ approaches $\delta_0\neq0$ as $M$ increases rather than approaching zero as is typical in CS results. The first question to ask is whether pairs of systems and observation functions can actually display such a plateau as predicted, or whether the conditioning instead continually improves with more measurements. To demonstrate this effect, we generate a simulation as described above with $d = 3$, choosing the $\mathcal{A}$-eigenvalues $\{\theta_i\}_{i=1}^{d}$ uniformly at random from $(0,\pi)$, and taking care to ensure that the resulting $\mathcal{A}_{\Phi}$-eigenvalues are distinct and strictly complex to satisfy the conditions of Theorem~\ref{thm:stability_thm}. We then create the $\mathcal{A}$-eigenvectors by letting $v_i = \frac{1}{\sqrt{2}} (e_{2i-1} + j e_{2i})$, where $\{e_i\}$ are the canonical basis vectors in $\mathbb{R}^{N}$. This choice of $\mathcal{A}$-eigenvectors ensures that $A_1 = A_2$. 
To generate a generic observation function $h$, we first create a vector $c \in \mathbb{R}^N$ such that $c = \sum_{i = 1}^{d}((1 + w_{2i-1}) \Reo{v_i} + (1 + w_{2i}) \Imo{v_i})$, where the $\{w_i\}$ are i.i.d. Gaussian random variables of zero mean and variance $0.1$. Thus $c$ is a (random) linear combination of the vectors that span the subspace containing the attractor. For each $M$ we let $h = h(M) = \sqrt{\frac{2d}{M}}\frac{c}{\|c\|_2}$ so that $\|h\|_2^2 = \frac{2d}{M}$ to meet the conditions of Theorem~\ref{thm:stability_thm}. Note that the small variance of $\{w_i\}$ produces $\{{|v_i^H h|^2}/{\|h\|_2^2}\}$ centered tightly around 1, making $\delta_0$ small (due to $A_1=A_2$ and $\kappa_1$, $\kappa_2$ both close to 1).\footnote{The random variables $\{w_i\}$ are used to ensure that $\kappa_1$, $\kappa_2$ are close to, but not exactly equal to 1. The case where $\kappa_1=\kappa_2=1$ is considered in the simulation in Figure \ref{fig:edu_STE}(b).} The specific parameters in this simulation are shown in Table~\ref{tab:ex_d3_Ae_Kne}. \begin{table*}[th] \small \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Index $i$ & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline $\theta_i$ (rad) & 2.3129 & 0.1765 & 1.4861 & --- & --- & --- \\ \hline ${|v_i^H h|^2}/{\|h\|_2^2}$ & 0.8346 & 1.1637 & 1.0017 & --- & --- & --- \\ \hline $\lambda_i(V^H V)$ & 1 & 1 & 1 & 1 & 1 & 1 \\ \hline \end{tabular} \end{center} \vspace{-5mm} \caption{\small\sl{Parameters for the simulation shown in Figure~\ref{fig:edu_STE}(a). 
In this case the relevant quantities are $A_1 = A_2 = 1$, $\kappa_1^2 = 0.8346$, $\kappa_2^2 = 1.1637$, $\nu = 5.6954$ and $\delta_0 = 0.1647$.}} \label{tab:ex_d3_Ae_Kne} \vspace{-12mm} \end{table*} \begin{figure*} \hfil \begin{minipage}[t]{0.32\linewidth} \centerline{\epsfysize = 44mm \epsffile{figs/fig6.eps}} \vspace{-1mm} \centerline{\small$\quad$~(a)} \end{minipage} \hfil \begin{minipage}[t]{0.32\linewidth} \centerline{\epsfysize = 44mm \epsffile{figs/fig7.eps}} \vspace{-1mm} \centerline{\small$\quad$~(b)} \end{minipage} \hfil \begin{minipage}[t]{0.32\linewidth} \centerline{\epsfysize = 44mm \epsffile{figs/fig8.eps}} \vspace{-1mm} \centerline{\small$\quad$~(c)} \end{minipage} \vspace{-1mm} \caption{\sl\small Simulations exploring the asymptotic bounds on the conditioning of the delay coordinate map. Plotted are the largest and smallest values of $Q(x,y)$ (denoted $\max\{Q\}$ and $\min\{Q\}$, respectively) attained by the 1000 pairs of $x,y$ for each $M$. The dotted (red) lines represent the values of $C(1\pm\delta_0)$ and $C$, and the dashed (black) lines are the theoretical values of $C(1 \pm \delta(M))$. (a) In this simulation, $A_1 = A_2$ but $\kappa_1 \neq \kappa_2$, and thus a plateau in the conditioning is seen. (b) In this simulation, $A_1 = A_2$ and $\kappa_1 = \kappa_2$. As expected, the conditioning number asymptotically reaches 0 as $M$ grows. (c) In this simulation, $A_1 \neq A_2$ and $\kappa_1 \neq \kappa_2$, and the predicted asymptotic values of the conditioning are not tight.} \label{fig:edu_STE}\label{fig:edu2_STE} \vspace{-8mm} \end{figure*} The results for this simulation are shown in Figure~\ref{fig:edu_STE}(a). We see from the behavior of $\max\{Q\}$ and $\min\{Q\}$ that the embedding does indeed reach a fundamental limit where the conditioning does not improve with more measurements. Furthermore, we see in this case that this plateau is correctly captured by the value $C(1\pm\delta_0)$ as described in Theorem~\ref{thm:stability_thm}. 
Additionally, the bounds $C(1 \pm \delta(M))$ do contain $\max\{Q\}$ and $\min\{Q\}$ as expected from the theorem, and the characteristic shape of these curves seems to qualitatively reflect the empirically observed convergence of the conditioning number. As confirmation, we also verify the implication of Theorem~\ref{thm:stability_thm} that system and measurement combinations can be constructed where the conditioning can be made arbitrarily good with more measurements (akin to the more typical CS results). To show this, we create another system with the same $\mathcal{A}$-eigenvalues and $\mathcal{A}$-eigenvectors as in the previous simulation, with the latter implying that $A_1 = A_2$. For the observation function, we first define $c = V [1,\; \cdots,\; 1]^T$, and for each $M$ we let $h = h(M) = \sqrt{\frac{2d}{M}} \frac{c}{\|c\|_2}$ as before. One can verify this choice results in $|v_i^H h|/\|h\|_2 = 1$ for all $i$, and thus $\kappa_1 = \kappa_2$. The parameters of this experiment are summarized in Table \ref{tab:ex_d3_Ae_Ke}. \begin{table*}[th] \small \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Index $i$ & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline $\theta_i$ (rad) & 2.3129 & 0.1765 & 1.4861 & --- & --- & --- \\ \hline ${|v_i^H h|^2}/{\|h\|_2^2}$ & 1 & 1 & 1 & --- & --- & --- \\ \hline $\lambda_i(V^H V)$ & 1 & 1 & 1 & 1 & 1 & 1 \\ \hline \end{tabular} \end{center} \vspace{-5mm} \caption{\small\sl{Parameters for the simulation shown in Figure \ref{fig:edu_STE}(b). The experiment was chosen such that $A_1 = A_2 = 1$ and $\kappa_1 = \kappa_2 = 1$, so that $\delta_0 = 0$. As the $\mathcal{A}$-eigenvalues are the same as in the previous experiment, $\nu$ remains at $5.6954$.}} \label{tab:ex_d3_Ae_Ke} \vspace{-12mm} \end{table*} With this choice of parameters such that $A_1 = A_2$ and $\kappa_1 = \kappa_2$, Theorem~\ref{thm:stability_thm} indicates that $\delta_0=0$ so that ${\lim_{M\to\infty} \delta(M)= 0}$. 
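This construction is easy to reproduce. The sketch below (a simplified re-implementation with a fixed seed and fewer trials per $M$, not the original experiment code) builds the class-$\mathcal{A}(3)$ system with the canonical eigenvectors $v_i = \frac{1}{\sqrt{2}}(e_{2i-1} + j e_{2i})$, uses the $\kappa_1 = \kappa_2$ observation seed $c = V[1,\;\cdots,\;1]^T$, and confirms that the spread of $Q(x,y)$ collapses as $M$ grows:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, Ts = 50, 3, 1.0
theta = np.array([2.3129, 0.1765, 1.4861])       # A-eigenvalue angles from the tables

# A-eigenvectors v_i = (e_{2i-1} + j e_{2i})/sqrt(2), stored alongside conjugates
V = np.zeros((N, 2 * d), dtype=complex)
lam = np.zeros(2 * d, dtype=complex)
for i in range(d):
    V[2 * i, 2 * i] = V[2 * i, 2 * i + 1] = 1 / np.sqrt(2)
    V[2 * i + 1, 2 * i], V[2 * i + 1, 2 * i + 1] = 1j / np.sqrt(2), -1j / np.sqrt(2)
    lam[2 * i], lam[2 * i + 1] = 1j * theta[i], -1j * theta[i]

alpha0 = np.ones(2 * d)
c = (V @ alpha0).real                             # kappa_1 = kappa_2 observation seed

def conditioning_spread(M, trials=200):
    """max Q / min Q over random attractor pairs, Q = ||F(x)-F(y)||^2 / ||x-y||^2."""
    h = np.sqrt(2 * d / M) * c / np.linalg.norm(c)       # normalization ||h||^2 = 2d/M
    W = h @ V                                            # h^T V
    powers = np.exp(np.outer(np.arange(M), lam * Ts))    # diagonals of D^{k-1}
    qs = []
    for _ in range(trials):
        tx, ty = rng.uniform(0, 10000, size=2)
        dalpha = (np.exp(lam * tx) - np.exp(lam * ty)) * alpha0   # alpha_x - alpha_y
        dF = powers @ (W * dalpha)                                # F(x) - F(y)
        qs.append(np.linalg.norm(dF) ** 2 / np.linalg.norm(V @ dalpha) ** 2)
    return max(qs) / min(qs)

# With delta_0 = 0 here, the spread of Q collapses toward 1 as M grows
s5, s200 = conditioning_spread(5), conditioning_spread(200)
```

At $M=5$ the map $G$ has fewer rows than the $2d=6$ attractor coordinates, so some pairs are nearly indistinguishable and the spread of $Q$ is large; by $M=200$ it is pinned near 1, consistent with $\delta(M)\to\delta_0=0$.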
Figure~\ref{fig:edu_STE}(b) shows the results of running the simulation in the same manner as before. The values of $\max\{Q\}$ and $\min\{Q\}$ clearly converge to $C$ as expected, showing that in this case the conditioning of the embedding can indeed be made arbitrarily good by taking more measurements. Although Theorem~\ref{thm:stability_thm} indicates that a finite limit on the conditioning number is always reached when either $A_1 \neq A_2$ or $\kappa_1 \neq \kappa_2$, this bound is not always tight and the predicted plateau level of $C(1 \pm \delta_0)$ may be conservative. To show this, we construct a simulation similar to the one above, now setting the $\mathcal{A}$-eigenvectors to be $v_i = \frac{1}{\sqrt{\|a_i\|_2^2 + \|b_i\|_2^2}} (a_{i} + j b_{i})$, where $\{a_i,b_i\}$ are randomly constructed vectors in $\mathbb{R}^N$ whose entries are i.i.d. zero-mean Gaussian random variables with a variance of $1$. We keep the $\mathcal{A}$-eigenvalues the same and generate $h$ in the same manner as the first simulation shown in Figure~\ref{fig:edu_STE}(a). The specific parameters for this simulation are shown in Table \ref{tab:ex_d3_Ane_Kne}, where we see that indeed $A_1 \neq A_2$ and $\kappa_1 \neq \kappa_2$. Figure \ref{fig:edu2_STE}(c) shows the results of running the simulation in the same manner as before. We see that although a limit on the conditioning number is reached as predicted by Theorem~\ref{thm:stability_thm}, the plateau level of $C(1\pm\delta_0)$ is not tight and the actual conditioning can be better than $\delta_0$ predicts. 
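That generic eigenvector draws produce $A_1 \neq A_2$ can be seen directly. The following sketch (fresh random draws with an arbitrary seed, not the paper's) repeats the random eigenvector construction above and inspects the spectrum of $V^H V$:

```python
import numpy as np

rng = np.random.default_rng(3)
N, d = 50, 3

# v_i = (a_i + j b_i)/sqrt(||a_i||^2 + ||b_i||^2), entries of a_i, b_i i.i.d. N(0,1)
V = np.zeros((N, 2 * d), dtype=complex)
for i in range(d):
    a, b = rng.standard_normal(N), rng.standard_normal(N)
    vi = (a + 1j * b) / np.sqrt(a @ a + b @ b)
    V[:, 2 * i], V[:, 2 * i + 1] = vi, vi.conj()

evals = np.linalg.eigvalsh(V.conj().T @ V)   # real, ascending
A1, A2 = evals.min(), evals.max()

# Unit-norm columns force trace(V^H V) = 2d, so the eigenvalues average to 1;
# generic (non-orthonormal) draws therefore straddle 1, giving A1 < 1 < A2
# and hence a nonzero conditioning floor delta_0.
```

The eigenvalues concentrate toward 1 only for orthonormal columns (a measure-zero event under this draw), which is why the plateau effect is the generic situation.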
\begin{table*}[th] \small \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Index $i$ & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline $\theta_i$ (rad) & 2.3129 & 0.1765 & 1.4861 & --- & --- & --- \\ \hline ${|v_i^H h|^2}/{\|h\|_2^2}$ & 1.8138 & 1.2064 & 1.1318 & --- & --- & --- \\ \hline $\lambda_i(V^H V)$ & 1.5316 & 1.3058 & 1.1294 & 0.8372 & 0.7644 & 0.4315 \\ \hline \end{tabular} \end{center} \vspace{-5mm} \caption{\small\sl{Parameters for the simulation shown in Figure \ref{fig:edu2_STE}(c). We see that $A_1 = 0.4315$, $A_2 = 1.5316$, $\kappa_1^2 = 1.1318$ and $\kappa_2^2 = 1.8138$. Since the $\mathcal{A}$-eigenvalues are the same as in the first simulation shown in Figure~\ref{fig:edu_STE}(a), $\nu$ remains the same at $5.6954$. We also calculate $\delta_0 = 0.7010$.}} \label{tab:ex_d3_Ane_Kne} \vspace{-12mm} \end{table*} \subsection{Convergence Speed} \label{sec:convsp} In the simulations of the previous section we concentrated on the conditioning limits predicted by Theorem~\ref{thm:stability_thm}, ignoring issues of the speed of convergence to those limits. Examining the formula for $\delta_1(M)$ in Theorem~\ref{thm:stability_thm}, we see that the $\mathcal{A}$-eigenvalues (via the parameter $\nu$) affect the convergence speed of $\delta(M)$ to its asymptotic value of $\delta_0$. In particular, the convergence speed scales with $1/\nu$, which is also demonstrated in \eqref{eq:def_widehatM}, where the number of measurements $\widehat{M}(\epsilon)$ necessary to get the conditioning $\delta$ within $\epsilon$ of the best possible value $(\delta_0)$ is proportional to $\nu$. 
\begin{figure*} \hfil \begin{minipage}[t]{0.4\linewidth} \centerline{\epsfysize = 50mm \epsffile{figs/fig9.eps}} \vspace{-1mm} \centerline{\small$\quad$~(a)} \end{minipage} \hfil \begin{minipage}[t]{0.4\linewidth} \centerline{\epsfysize = 50mm \epsffile{figs/fig10.eps}} \vspace{-1mm} \centerline{\small$\quad$~(b)} \end{minipage} \vspace{-1mm} \caption{\sl\small Examining the effect of the $\mathcal{A}$-eigenvalues on the convergence speed of the conditioning. (a) In this simulation, $d = 1$ and we test $\theta=\frac{\pi}{200}, \frac{\pi}{100}$ and $\frac{\pi}{40}$. As expected, the closer $\theta$ is to $\pi/2$, the faster the rate of convergence of $\delta(M)$ to $\delta_0$. (b) In this simulation, $d = 3$ and we vary between 3 sets of $\mathcal{A}$-eigenvalues with different values of $\nu$. As expected, the set of eigenvalues that gives the smallest $\nu$ provides the fastest rate of convergence of $\delta(M)$ to $\delta_0$ and vice versa.} \label{fig:d1_3theta_STE} \vspace{-8mm} \end{figure*} For ease of analysis, we first consider the case where $d=1$, meaning that $\nu=|\sin(\theta)|^{-1}$ (since $T_s = 1$), where $\pm j\theta$ are the sole $\mathcal{A}$-eigenvalues. In this case, $|\sin(\theta)|^{-1} \ge 1$ with the minimum attained when $\theta = \frac{\pi}{2} +k\pi$ for any integer $k$. The closer $\theta$ is to $\frac{\pi}{2} +k\pi$, the faster the convergence of $\delta(M)$ to $\delta_0$. This is illustrated by the following simulation where the $\mathcal{A}$-eigenvectors are chosen such that $A_1 = A_2$, and the observation function is chosen randomly as in the experiment shown in Figure~\ref{fig:edu2_STE}(a) (except with $d=1$). Figure~\ref{fig:d1_3theta_STE}(a) plots $\max\{Q\}$ and $\min\{Q\}$ for $\theta=\frac{\pi}{200}, \frac{\pi}{100}$ and $\frac{\pi}{40}$, showing that Theorem~\ref{thm:stability_thm} correctly captures that the convergence speed to the asymptotic value of $C(1 \pm \delta_0)$ varies inversely with the value of $\theta$. 
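For reference, the values of $\nu$ driving this experiment follow directly from the stated $d=1$ formula (a trivial sketch, not part of the original experiments):

```python
import math

Ts = 1.0

def nu(theta):
    # d = 1 case: nu = |sin(theta * Ts)|^{-1}, minimized (nu = 1) at theta = pi/2 + k*pi
    return 1.0 / abs(math.sin(theta * Ts))

vals = [nu(t) for t in (math.pi / 200, math.pi / 100, math.pi / 40, math.pi / 2)]
# roughly [63.7, 31.8, 12.7, 1.0]: smaller theta gives larger nu and slower convergence
```

Since $\widehat{M}(\epsilon) \propto \nu$, halving $\theta$ in this regime roughly doubles the number of delays needed to reach a given conditioning target.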
When $d > 1$, the joint relationship of the $\mathcal{A}$-eigenvalues (not just their individual values) determines $\nu$, and subsequently the convergence speed. One can see intuitively in the definition of $\nu$ that $\mathcal{A}$-eigenvalues which are maximally spread out should produce favorable convergence speeds. To illustrate this, we generate a simulated system with $d=3$, choosing the $\mathcal{A}$-eigenvectors such that $A_1 = A_2$, and generating an observation function $h$ randomly (as in the experiment in Figure \ref{fig:edu_STE}(a)). We also choose three sets of $\mathcal{A}$-eigenvalues: two uniformly random sets, and one set consisting of slight perturbations of equally spaced points around the unit circle according to $\theta_p = \frac{p \pi}{d+1}$ (the choices of $\theta_p$ and their respective $\nu$ are given in Table~\ref{tab:d3_3nu_STE}).\footnote{The slight perturbation is used for plotting convenience so all three curves converge to the same asymptotic value. If exactly equally spaced eigenvalues are used, the attractor is sampled uniformly and the limiting value will lie inside $C(1 \pm \delta_0)$, making comparative plots difficult.} Figure \ref{fig:d1_3theta_STE}(b) shows the results of the simulation, with the $\max\{Q\}$ and $\min\{Q\}$ curves showing clearly that $\nu$ indeed controls the speed of convergence of $\delta(M)$ as predicted. 
\begin{table*}[th] \small \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline & $\theta_1$ & $\theta_2$ & $\theta_3$ & $\nu$\\ \hline Set 1 (nearly equal spacing) & 0.7836 & 1.5864 & 2.3566 & 2.6619 \\ \hline Set 2 (random) & 0.0491 & 1.5737 & 2.3490 & 20.3851 \\ \hline Set 3 (random) & 0.0212 & 1.5684 & 2.3549 & 47.1388 \\ \hline \end{tabular} \end{center} \vspace{-5mm} \caption{\small\sl{Choice of $\{\theta_i\}$ (in radians) for the experiment in Figure \ref{fig:d1_3theta_STE}(b) and their respective $\nu$ values.}} \label{tab:d3_3nu_STE} \vspace{-12mm} \end{table*} \begin{figure*} \hfil \begin{minipage}[t]{0.4\linewidth} \centerline{\epsfysize = 50mm \epsffile{figs/fig11.eps}} \vspace{-1mm} \centerline{\small$\quad$~(a)} \end{minipage} \hfil \begin{minipage}[t]{0.4\linewidth} \centerline{\epsfysize = 50mm \epsffile{figs/fig12.eps}} \vspace{-1mm} \centerline{\small$\quad$~(b)} \end{minipage} \vspace{-1mm} \caption{\sl\small Examining the predicted number of measurements necessary to reach a specified conditioning level. (a) Plotted is the upper half of Figure \ref{fig:edu_STE}(a), also indicating $C(1+\delta_0 + \epsilon)$ with $\epsilon = 0.2$. (b) In this simulation, we explore how $\widehat{M}(\epsilon)$ (for a fixed $\epsilon = 0.1$) varies with the $\mathcal{A}$-eigenvalues for the system defined in Figure \ref{fig:d1_3theta_STE}(a). We plot the theoretical values of $\widehat{M}(\epsilon)$ (given in \eqref{eq:def_widehatM}) for $\theta$ varying from $0$ to $\pi/2$ together with its actual values (as described in the text) obtained by running experiments for each $\theta$. } \label{fig:d3_Ae_Kne_Mhat_STE} \vspace{-8mm} \end{figure*} Given that Theorem~\ref{thm:stability_thm} seems to be correctly capturing the convergence speed dependence on $\nu$, the last facet of the problem to explore is the tightness of this bound. 
Specifically, given a system of class $\mathcal{A}(d)$ and an observation function $h$, it is often of interest to estimate the minimum number of measurements $\left(\widehat{M}(\epsilon)\right)$ needed to ensure that for any $M' \ge M$ the conditioning number $\delta(M')$ is at most $\epsilon$ above the asymptotic level of $\delta_0$ (such an estimate is given in \eqref{eq:def_widehatM}). To examine this, we refer back to the simulation shown in Figure~\ref{fig:edu_STE}(a) with parameters given in Table~\ref{tab:ex_d3_Ae_Kne}. Fixing $\epsilon = 0.2$, Figure~\ref{fig:d3_Ae_Kne_Mhat_STE}(a) re-plots $\max\{Q\}$ together with the line $C(1 + \delta_0 + \epsilon)$. Using the given parameters and \eqref{eq:def_widehatM} we calculate that $\widehat{M}(\epsilon)\approx 166$. Note that this value is also the intersection of the curve $C(1+ \delta(M))$ with the line $C(1+\delta_0 + \epsilon)$. Figure~\ref{fig:d3_Ae_Kne_Mhat_STE}(a) shows that $\max\{Q\}$ actually met this tolerance with only around 30 measurements. Thus, although the theoretical value of $\widehat{M}(\epsilon)$ given by \eqref{eq:def_widehatM} is correct, it is pessimistic in at least this particular case. To demonstrate that the linear dependence of $\widehat{M}(\epsilon)$ on $\nu$ is correctly captured in the theorem, we restrict ourselves to $d = 1$. Recall that when $d=1$, $\nu = |\sin(\theta)|^{-1}$ (since $T_s = 1$) where $\pm j\theta$ are the sole $\mathcal{A}$-eigenvalues. We repeat the simulation shown in Figure~\ref{fig:d1_3theta_STE}(a), this time using 100 values of $\theta$ equally spaced between $(0, \pi/2)$. Fixing $\epsilon = 0.1$, for each value of $\theta$ we note the value of $M$ where for all $M' > M$, $\max\left\{ \frac{\max\{Q\}}{C} - 1, \; 1 - \frac{\min\{Q\}}{C} \right\} < \delta_0 + \epsilon$. We call this value the ``actual'' $\widehat{M}(\epsilon)$, in contrast to the ``theoretical'' $\widehat{M}(\epsilon)$ given by \eqref{eq:def_widehatM}. 
Figure~\ref{fig:d3_Ae_Kne_Mhat_STE}(b) shows these actual and theoretical values of $\widehat{M}(\epsilon)$ as a function of $\theta$. This comparison shows that while the theoretical $\widehat{M}(\epsilon)$ captures the same trend as the actual $\widehat{M}(\epsilon)$, the theoretical estimate can be pessimistic compared to the empirical values (though it is not clear if the theoretical bounds are achieved by some systems). \subsection{Stable Embeddings for Dimension Estimation} \label{sec:dimest} To demonstrate the value of stable Takens' embeddings, this section will explore a simulated task estimating the dimensionality of an attractor. The \emph{correlation dimension} is a measure of attractor dimension often applied to strange attractors of chaotic systems~\cite{grassberger1983measuring}, which corresponds to the actual geometric dimension of regular objects such as the circles and ellipses seen in linear system attractors~\cite{kantz2004nonlinear}. To be precise, we first define the \emph{correlation sum} of tolerance $\epsilon$ for a set of points $\{x_k\}$ lying on a subset $\mathcal{M}$ and temporally related via the flow (i.e., $x_k = \Phi^k x_0$) as \begin{eqnarray} C(\epsilon, K) := \frac{2}{K(K-1)} \sum_{p = 1}^{K} \sum_{q = p+1}^{K} \Theta(\epsilon - \|F(x_p) - F(x_q)\|_2), \label{eq:corr_sum} \end{eqnarray} where $F$ is the delay coordinate map and $\Theta(\cdot)$ is the \emph{Heaviside step function} defined as $\Theta(x) = 0$ if $x \le 0$ and $\Theta(x) = 1$ if $x > 0$. The \emph{correlation dimension} is defined as $D = \lim_{\epsilon \rightarrow 0} \lim_{K \rightarrow \infty} \frac{\partial \log C(\epsilon, K)}{\partial \log \epsilon}$. This makes intuitive sense: in the limit of small $\epsilon$ and large $K$, we expect $C(\epsilon,K)$ to scale like $C(\epsilon, K) \propto \epsilon^{D}$, where $D$ is the dimension of the subset $\mathcal{M}$ in question. 
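As a concrete illustration of the definition (a standalone sketch, not one of the experiments reported below), the correlation sum can be computed for points filling a unit circle, where the log-log slope recovers the geometric dimension of 1:

```python
import numpy as np

def correlation_sum(pts, eps):
    """C(eps, K): fraction of distinct pairs with ||z_p - z_q||_2 < eps."""
    K = len(pts)
    g = pts @ pts.T
    sq = np.diag(g)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2 * g, 0.0)  # squared distances
    iu = np.triu_indices(K, k=1)
    return np.count_nonzero(d2[iu] < eps ** 2) / (K * (K - 1) / 2)

# K points generated by an irrational rotation on the unit circle -- the kind
# of attractor produced by a d = 1 linear system viewed in two coordinates
K = 2000
t = 2 * np.pi * ((np.sqrt(5) - 1) / 2) * np.arange(K)
pts = np.column_stack([np.cos(t), np.sin(t)])

# Finite-difference slope of log C against log eps in the scaling region
e1, e2 = 0.1, 0.2
D_est = np.log(correlation_sum(pts, e2) / correlation_sum(pts, e1)) / np.log(e2 / e1)
# D_est is close to the true dimension of 1
```

For a uniformly filled circle the expected correlation sum is $(2/\pi)\arcsin(\epsilon/2)$, which is nearly linear in $\epsilon$ at these scales, so the slope estimate sits very close to 1.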
Theoretically, one way to estimate the correlation dimension is to plot the graph of $\log C(\epsilon, K)$ against $\log \epsilon$ for a large value of $K$, then simply read off the gradient for small values of $\log \epsilon$. In the absence of noise and with a topology preserving Takens' embedding (i.e. $M > 2d$), this estimate should be as good as if one had access to the hidden system state. However, when noise is present, small values of $\log \epsilon$ will be capturing the noise characteristics and overestimating the attractor dimension. A common approach in this case is to plot the local gradient $D(\epsilon) = \frac{\partial \log C(\epsilon,K)}{\partial \log \epsilon}$ against $\log (\epsilon)$ for a large value of $K$ and read off an estimate of the correlation dimension $D$ from a plateau in the graph, preferably in the regime of small $\epsilon$. In this section, we use the above approach to estimate the correlation dimension of linear system attractors $\mathcal{M}$ in the reconstruction space $\mathbb{R}^M$. For this simulation we construct a linear dynamical system of class $\mathcal{A}(1)$ with $N=100$, $\mathcal{A}$-eigenvalue $\theta = \frac{\pi}{300}$ and $\mathcal{A}$-eigenvector $v = [1,\; j]^T$ (resulting in $A_1 = A_2$ and a circular attractor). We also choose $h = [1, \; 1]^T$, implying that $\kappa_1 = \kappa_2$ and subsequently $\delta_0=0$. Figure \ref{fig:corr_dim}(a) shows that the actual conditioning\footnote{By actual conditioning, we mean the empirical value $\delta = \max\left\{ \frac{\max\{Q\}}{C} - 1, 1 - \frac{\min\{Q\}}{C} \right\}$, for $Q$ defined in Section \ref{sec:sims}.} of $F$ approaches zero as we increase $M$. To simulate noisy measurements, we corrupt the resulting time series formed by $h$ by adding white Gaussian noise with zero mean and standard deviation $\sigma = 0.05$ (to give an SNR of about $32$~dB). 
\begin{figure*} \hfil \begin{minipage}[t]{0.4\linewidth} \centerline{\epsfysize = 50mm \epsffile{figs/fig13.eps}} \vspace{-1mm} \centerline{\small$\quad$~(a)} \end{minipage} \begin{minipage}[t]{0.4\linewidth} \centerline{\epsfysize = 50mm \epsffile{figs/fig14.eps}} \vspace{-1mm} \centerline{\small$\quad$~(b)} \end{minipage} \vspace{-1mm} \caption{\sl\small Estimating the correlation dimension of a circular attractor $\mathcal{M}$ of a linear system of class $\mathcal{A}(1)$. (a) The conditioning of the stable embedding decreases with increasing number of measurements $M$. (b) The graphs of $D(\epsilon)$ for the various $M$ considered are plotted against $\log \epsilon$. The correlation dimension estimate can be read off the plateaus in these graphs. These plateau regions become more distinct with increasing $M$ (improving conditioning), and appear to converge to a value near the true dimension of 1.} \label{fig:corr_dim} \vspace{-8mm} \end{figure*} Figure \ref{fig:corr_dim}(b) shows the plots of $D(\epsilon)$ against $\log (\epsilon)$ with a number of delays $M = 3, 73, 153, 223$. For the graph corresponding to $M = 223$, a plateau is easily seen between $-1 < \log \epsilon < 0$, corresponding to a correct dimension estimate of approximately 1. We observe that by taking more measurements (i.e., improving the conditioning of the embedding), the estimate of the correlation dimension also improves. Moreover, the width of the plateau region where we read off the correlation dimension estimate increases with increasing $M$, making the estimate more precise. Note that when we take the minimum number of measurements $M = 3$ required by Takens' Theorem, there is no discernible plateau region in Figure \ref{fig:corr_dim}(b) from which to estimate the correlation dimension, and even the most reasonable estimate near $\log \epsilon=1$ is less accurate than the estimates produced by the embeddings with better conditioning. 
\section{Conclusion} The main result of this paper has established that a delay coordinate map (using linear observation functions) can form a stable embedding for all pairs of points on the attractor of a linear dynamical system of class $\mathcal{A}(d)$. The explicit, deterministic and non-asymptotic sufficient conditions we give for this stable embedding yield several observations about the embedding itself and favorable properties of system and measurement pairs. For example, for many system and measurement pairs, the conditioning number $\delta(M)$ reaches a non-zero asymptotic value of $\delta_0$ with increasing $M$. This ``plateau effect'' is in contrast with typical CS results where the conditioning of the stable embedding can be continually improved by increasing the number of measurements. Furthermore, the convergence speed of the embedding quality to this limit is governed by the joint relationship of the system eigenvalues, which capture the relative speed with which the system explores the different dimensions of the state space (i.e., more diversity in these speeds results in faster convergence). Finally, we also see that the minimum number of delays $M$ of the delay coordinate map scales linearly with the attractor dimension but is independent of the system dimension. This is again in contrast with typical CS results, where the number of compressive measurements also scales logarithmically with the system dimension (but interestingly does parallel recent improvements in these bounds for the stable embedding of manifolds~\cite{clarkson2008tighter}). While the comparisons with standard CS results reveal these interesting and non-intuitive technical differences between the results in each case, these discrepancies actually point to a much deeper difference in the problem setups that must be appreciated when embedding attractors of dynamical systems. 
Perhaps the easiest way to see this is to consider that in the present case of delay coordinate maps, while the number of measurements does not scale with the ambient system dimension, the total number of measurements may in fact have to be larger than the system dimension ($M>N$) in order to make a particular conditioning guarantee. In the typical CS case, this would of course be a ridiculous proposition. If the RIP required $M>N$ random measurements (e.g., due to very large constants in the typical sufficient conditions), one would likely abandon the CS strategy and simply take $N$ uncoded measurements (e.g., in the canonical basis). However, in the case of delay coordinate maps for dynamical systems, this luxury is simply not available. For example, observers often do not have any control over the choice of observation function $h$, and in these cases cannot simply change the way the system is measured. But, more importantly, even if we were given complete control over $h$, it is only a ``seed'' that is used in producing the whole measurement process. One can view the entire set of measurements as being generated by repeatedly forcing this observation function through the dynamics of the system (seen explicitly in writing the delay coordinate map in~\eqref{eq:DCM_linear_arb_h}). Said another way, because there is only a single observation function for the system, the total measurement process for a delay coordinate map is beholden to the dynamics of the system itself to provide sufficient diversity to make the measurements informative. Therefore, even with complete control over the observation function, delay coordinate maps represent a highly restricted total measurement process that cannot be completely controlled (without access to and control over the system that is hidden and in need of measurement). 
Characterizing the delay coordinate map embeddings for attractors of linear dynamical systems with linear observation functions is a subset of the more general problem of characterizing these embeddings for attractors of nonlinear systems and general observation functions. From the results here, we conclude that there is reason to be optimistic that similar stability results can be obtained for this more general case of interest. Furthermore, these results also lead us to conclude that there are several issues that differ from standard CS results and will need to be carefully considered in any generalization. \appendices \section{Proof of Stable Takens' Embedding Theorem} \label{sec:proof_main} Because Theorems \ref{thm:existence_thm} and \ref{thm:stability_thm} are very similar in structure, we will essentially lay out the proof approach for both of them together in this section and then separately establish the necessary details for each result. Before proceeding with the specific proofs, we will introduce some notation and preliminary results that will be useful. \subsection{Notation and preliminaries} \subsubsection{Frame theory} Drawing on some terminology from the field of \emph{frame theory}, we say that a sequence of vectors $\{g_i\}_{i=1}^{M}$ in $\mathbb{C}^K$, $M \ge K$, forms a \emph{frame}~\cite{christensen2003introduction} for $\mathbb{C}^K$ if there exist two real constants $0 < B_1 \le B_2 < \infty$ such that for all $\alpha \in \mathbb{C}^K$, $B_1\|\alpha\|_2^2 \le \sum_{i=1}^{M} |\langle g_i,\alpha \rangle|^2 = \|G \alpha\|_2^2 \le B_2 \|\alpha\|_2^2,$ where $G^H = \left(g_1 \; | \; g_2 \; | \; \cdots \; | \; g_M\right) \in \mathbb{C}^{K \times M}$, the concatenation of the $\{g_i\}_{i=1}^{M}$, is called the \emph{frame analysis operator} and $B_1, B_2$ are called the \emph{frame bounds}. 
The frame bounds can be defined as $B_1 = \lambda_{\min}$ and $B_2 = \lambda_{\max}$, where $\lambda_{\min}$ and $\lambda_{\max}$ are the minimum and maximum eigenvalues of $G^H G \in \mathbb{C}^{K \times K}$. \subsubsection{Linear delay coordinate maps} Because the attractor $\mathcal{M}$ is contained in the span of the columns of $V$, for any $x,y \in \mathcal{M}$ we can write $x = V \alpha_x$ and $y = V \alpha_y$ for some complex coefficients $\alpha_x, \alpha_y \in \mathbb{C}^{2d}$. Using $F$ to denote the delay coordinate map for a linear system with flow matrix $\Phi$ and observation function $h$ as described in \eqref{eq:DCM_linear_arb_h}, the $k$-th row (for $k = 1, \cdots, M$) of the vector $F(x) - F(y)$ can be written $h^T\left(\Phi^{k-1}(x-y)\right) = h^T \left( \Phi^{k-1} V(\alpha_x - \alpha_y) \right) = h^T \left( V D^{k-1}(\alpha_x - \alpha_y) \right) = \langle g_k, \alpha_x - \alpha_y \rangle$, where \begin{equation} g_k^H \hspace{-1mm}= h^T V D^{k-1} = \left[ (v_1^T h) e^{\mbox{-}j(k-1)\theta_1 T_s } , (v_1^H h) e^{j(k-1)\theta_1 T_s } , \dots, (v_d^T h) e^{\mbox{-}j(k-1)\theta_d T_s}, (v_d^H h) e^{j(k-1)\theta_d T_s } \right] \label{eqn:gvec} \end{equation} and $D$ is the diagonal matrix comprised of $\mathcal{A}_{\Phi}$-eigenvalues as defined in Section \ref{sec:linear_sys}. Thus, we have: $\|F(x) - F(y)\|_2^2 = \sum_{k = 1}^{M} |\langle g_k , (\alpha_x - \alpha_y) \rangle|^2 =\|G(\alpha_x - \alpha_y)\|_2^2$, where $G \in \mathbb{C}^{M \times 2d}$ is the concatenation of $\{g_k\}$ as described above. In the following, $G$ is fixed to be the matrix defined here. \subsubsection{Eigenvalue bounds} It will be important in the following proofs to determine bounds on the extreme eigenvalues of the matrix $G^H G$. 
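Both the delay-map identity and the frame-bound sandwich above can be verified numerically on a toy system (a sketch with hypothetical parameters, not drawn from the paper; for simplicity $N = 2d$, so that $\alpha_x = V^{-1}x$ is directly available):

```python
import numpy as np

rng = np.random.default_rng(1)
d, M, Ts = 2, 9, 1.0
N = 2 * d
theta = np.array([0.7, 1.9])                      # hypothetical A-eigenvalue angles

# Real flow matrix: block-diagonal rotations by theta_i * Ts
Phi = np.zeros((N, N))
for i, t in enumerate(theta):
    Phi[2*i:2*i+2, 2*i:2*i+2] = [[np.cos(t*Ts), -np.sin(t*Ts)],
                                 [np.sin(t*Ts),  np.cos(t*Ts)]]

# v_i = (e_{2i-1} - j e_{2i})/sqrt(2) is a Phi-eigenvector with eigenvalue e^{+j theta_i Ts}
V = np.zeros((N, 2*d), dtype=complex)
lam = np.zeros(2*d, dtype=complex)
for i in range(d):
    V[2*i, 2*i] = V[2*i, 2*i+1] = 1/np.sqrt(2)
    V[2*i+1, 2*i], V[2*i+1, 2*i+1] = -1j/np.sqrt(2), 1j/np.sqrt(2)
    lam[2*i], lam[2*i+1] = 1j*theta[i]*Ts, -1j*theta[i]*Ts
D = np.exp(lam)
assert np.allclose(Phi @ V, V * D)                # Phi V = V D

h = rng.standard_normal(N)
G = np.array([(h @ V) * D**k for k in range(M)])  # row k is g_{k+1}^H = h^T V D^k

x, y = rng.standard_normal(N), rng.standard_normal(N)
da = np.linalg.solve(V, (x - y).astype(complex))  # alpha_x - alpha_y

# ||F(x) - F(y)||^2 = ||G (alpha_x - alpha_y)||^2
Fdiff = np.array([h @ np.linalg.matrix_power(Phi, k) @ (x - y) for k in range(M)])
assert np.allclose(np.linalg.norm(Fdiff)**2, np.linalg.norm(G @ da)**2)

# Frame bounds B1, B2 are the extreme eigenvalues of G^H G
evals = np.linalg.eigvalsh(G.conj().T @ G)
B1, B2 = evals[0], evals[-1]
n2 = np.linalg.norm(da)**2
assert B1 * n2 - 1e-9 <= np.linalg.norm(G @ da)**2 <= B2 * n2 + 1e-9
```

This makes the proof strategy tangible: controlling the conditioning of the delay coordinate map amounts to controlling the frame bounds $B_1, B_2$ of $G$, i.e., the extreme eigenvalues of $G^H G$.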
To that end, we first introduce the well-known Gershgorin Circle Theorem, which we state here for notational convenience: \begin{thm}[Gershgorin Circle Theorem \cite{moon2000mathematical}] \label{thm:Gershgorin_disk} The eigenvalues of a $K \times K$ matrix $A$ all lie in the union of the Gershgorin disks of $A$. The Gershgorin disk $\mathcal{D}_i$ for $i = 1, \cdots, K$, is defined as $\mathcal{D}_i = \left\{ x \in \mathbb{C}\;:\; |x-\mathcal{C}_i| \le \widetilde{r}_i \right\},$ where $\widetilde{r}_i := \sum_{j=1,\;j\neq i}^{K} |(A)_{i,j}|$ is the radius, and $\mathcal{C}_i := (A)_{i,i}$ is the center of the $i$-th disk. Thus $\lambda(A) \subset \bigcup_{i=1}^{K} \mathcal{D}_i,$ where $\lambda(A) = \{ \lambda_1, \cdots, \lambda_K \}$, and $\{\lambda_i\}$ are the eigenvalues of $A$. \end{thm} To apply the Gershgorin Circle Theorem to obtain the extreme eigenvalues of $G^H G$, we introduce the following useful lemma that gives values for the centers $\mathcal{C}_i$ and radii $\widetilde{r}_i$ of the Gershgorin disks $\mathcal{D}_i$ of $G^H G$. \begin{lemma} \label{lem:Gershgorin_disk} For $i = 1, \cdots, d$, the centers of the Gershgorin disks of $G^H G$ are $\mathcal{C}_{2i-1} = \mathcal{C}_{2i} = |v_i^H h|^2 M$ while their radii are $\widetilde{r}_{2i-1} = \widetilde{r}_{2i} = |v_i^H h|^2 \left| \frac{\sin (M\theta_iT_s)}{\sin (\theta_i T_s)} \right| + \sum_{p=1, \;p\neq i}^{d} |v_i^H h||v_p^H h| \left| \frac{\sin \left( M(\theta_i - \theta_p)T_s/2 \right)}{\sin \left( {(\theta_i - \theta_p)T_s/2} \right)} \right| +\sum_{p=1, \;p\neq i}^{d} |v_i^H h||v_p^H h| \left| \frac{\sin \left( {M(\theta_i + \theta_p)T_s/2} \right)}{\sin \left( {(\theta_i + \theta_p)T_s/2} \right)} \right|$. \end{lemma} \begin{proof} We can write $G^H G = \sum_{k=1}^{M} g_k g_k^H,$ where we recall that $g_k$ is defined as in \eqref{eqn:gvec}.
Thus the $(p,q)$ entry of $G^H G$ can be expressed as: $(G^H G)_{p,q} = \sum_{k=1}^{M} g_k(p) g_k(q)^{*}$, where $g_k(p)$ denotes the $p$-th entry of the vector $g_k$. As such, the formation of $G^H G$ involves calculating sums of complex trigonometric polynomials, due to the complex exponentials $\left(\{e^{\pm j (k-1) \theta_p T_s} \}\right)$ appearing in the terms of each $g_k$. A few separate cases need to be considered because of the differences between the even ($2p$) and odd ($2p-1$) numbered rows of $G^H G$ for all $p$. We first consider the even numbered rows. The diagonal terms have a fairly simple form: $(G^H G)_{2p,2p} = \sum_{k=1}^{M} g_k(2p) g_k(2p)^{*}=\sum_{k=1}^{M} |v_p^H h|^2 = M |v_p^H h|^2$. The adjacent term to the left is given by: $(G^H G)_{2p,2p-1} = \sum_{k=0}^{M-1} \left( (v_p^T h) e^{-j k \theta_p T_s} \right)^2 = (v_p^T h)^2 \sum_{k=0}^{M-1} \left(e^{-j 2\theta_p T_s}\right)^k = (v_p^T h)^2 \frac{\sin(M\theta_p T_s)}{\sin(\theta_p T_s)}e^{-j(M-1)\theta_p T_s}$, where the last expression follows from the standard formula for a finite geometric sum, pulling out common exponential factors, and using Euler's formula.
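This finite geometric sum identity, which recurs throughout the entries of $G^H G$, can be spot-checked numerically (the angles are arbitrary test values):

```python
import numpy as np

# sum_{k=0}^{M-1} (e^{-2jt})^k = e^{-j(M-1)t} * sin(Mt) / sin(t)
M = 25
for t in (0.3, 0.7, 1.9):              # arbitrary angles t = theta * T_s
    direct = sum(np.exp(-2j * t * k) for k in range(M))
    closed = np.exp(-1j * (M - 1) * t) * np.sin(M * t) / np.sin(t)
    assert np.allclose(direct, closed)
```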
The other cross terms for all $p,q \in \{1,\dots, d\}$ such that $p \neq q$ can be derived similarly as: \begin{eqnarray*} (G^H G)_{2p,2q} &= (v_p^T h)(v_q^H h) \sum_{k=0}^{M-1} \left( e^{-j 2 \left( \frac{\theta_p - \theta_q}{2} \right) T_s} \right)^k = (v_p^T h)(v_q^H h) \frac{\sin\left(M\left( \frac{\theta_p - \theta_q}{2} \right) T_s\right)}{\sin\left(\left( \frac{\theta_p - \theta_q}{2} \right) T_s\right)} e^{-j(M-1)\left( \frac{\theta_p - \theta_q}{2} \right) T_s},\\ (G^H G)_{2p,2q-1} &= (v_p^T h)(v_q^T h) \sum_{k=0}^{M-1} \left( e^{-j 2 \left( \frac{\theta_p + \theta_q}{2} \right) T_s} \right)^k = (v_p^T h)(v_q^T h) \frac{\sin\left(M\left( \frac{\theta_p + \theta_q}{2} \right) T_s\right)}{\sin\left(\left( \frac{\theta_p + \theta_q}{2} \right) T_s\right)} e^{-j(M-1)\left( \frac{\theta_p + \theta_q}{2} \right) T_s}. \end{eqnarray*} The relevant quantities for the odd numbered rows are given similarly as \begin{align*} (G^H G)_{2p-1,2p-1} & = (G^H G)_{2p,2p} =M |v_p^H h|^2,\\ (G^H G)_{2p-1,2p} &= (G^H G)_{2p,2p-1}^{*} = (v_p^H h)^2 \frac{\sin(M\theta_p T_s)}{\sin(\theta_p T_s)}e^{j(M-1)\theta_p T_s} ,\\ (G^H G)_{2p-1,2q} &= (G^H G)_{2q,2p-1}^{*} = (v_q^H h)(v_p^H h) \frac{\sin\left(M (\theta_q + \theta_p) T_s/2 \right) }{\sin\left((\theta_q + \theta_p) T_s/2 \right)}e^{j(M-1)\left( \frac{\theta_q + \theta_p}{2} \right) T_s} ,\\ (G^H G)_{2p-1,2q-1} &= (v_p^H h)(v_q^T h) \frac{\sin\left(M(\theta_p - \theta_q) T_s/{2} \right)}{\sin\left( (\theta_p - \theta_q) T_s/{2} \right)}e^{j(M-1)\left( \frac{\theta_p - \theta_q}{2} \right) T_s}. \end{align*} Finally, we note that many of the above complex quantities differ only in their phase because of symmetry in the summations, making their magnitudes equal when calculating the radii of the Gershgorin disks. The expressions for $\mathcal{C}_i$ and $\widetilde{r}_i$ in the lemma are obtained simply by applying the notation of the Gershgorin Circle Theorem to the calculated magnitudes of the entries of $G^H G$.
\end{proof} \subsection{General proof approach} \label{sec:approach} Using the preliminaries above, we can now sketch out the general approach for the proof of both theorems below. Essentially, the theorems result from using (or establishing) the following three facts: \begin{enumerate} \item If $G^H G \in \mathbb{C}^{2d \times 2d}$ is established to be full rank, then $\{g_k\}_{k=1}^{M}$ form a frame in $\mathbb{C}^{2d}$. Thus there exist $0 < B_1 \le B_2 < \infty$ such that $B_1 \le \frac{\|F(x) - F(y)\|_2^2}{\|\alpha_x - \alpha_y\|_2^2} \le B_2$ holds for all distinct pairs of points $x,y \in \mathcal{M}$. In particular, to establish conditioning guarantees, we can let $B_1$ and $B_2$ be the smallest and largest eigenvalues of $G^H G$ (respectively) and determine bounds on those important quantities. \item Next, we use the fact that $\|x-y\|_2^2 = (\alpha_x - \alpha_y)^H V^H V (\alpha_x - \alpha_y)$ to get $A_1 \le \frac{\|x-y\|_2^2}{\|\alpha_x - \alpha_y\|_2^2} \le A_2$, where $A_1$ and $A_2$ are the smallest and largest eigenvalues of $V^H V \in \mathbb{C}^{2d \times 2d}$, respectively. By the definition of $V$ we know that $V^H V$ is well-defined and full rank, meaning that $0 < A_1 \le A_2 < \infty$. \item Putting the two previous steps together, we get $0 < \frac{B_1}{A_2} \le \frac{\|F(x) - F(y)\|_2^2}{\|x-y\|_2^2} \le \frac{B_2}{A_1} < \infty,$ where the bounds $\frac{B_1}{A_2}$ and $\frac{B_2}{A_1}$ can be manipulated to get the scaling constant $C$ and conditioning $\delta$ in \eqref{eqn:ARIP}. Specifically, we can set $C = \frac{1}{2}\left(\frac{B_1}{ A_2} + \frac{B_2}{ A_1} \right)$ and $\delta~=~1~-~\frac{B_1}{ C A_2}$. \end{enumerate} \subsection{Proof of Theorem \ref{thm:existence_thm}} \begin{proof} For Theorem \ref{thm:existence_thm}, we follow the three steps detailed in Appendix~\ref{sec:approach}, where we only need to show that $G^H G$ is indeed full rank given the conditions of the theorem.
Consider first the case when $M=2d$, where showing $G^H G$ is full rank is equivalent to showing $\det(G^H G) = |\det(G)|^2 > 0$. The matrix $G$ can be expressed in terms of a product of a Vandermonde matrix and a diagonal matrix: \begin{eqnarray*} G &=& \left( \begin{smallmatrix} 1 & 1 & \cdots & 1 & 1 \\ e^{-j\theta_1 T_s} & e^{j\theta_1 T_s} & \cdots & e^{-j\theta_d T_s} & e^{j\theta_d T_s} \\ \vdots & \vdots & & \vdots & \vdots \\ e^{-j(2d-1) \theta_1 T_s} & e^{j(2d-1) \theta_1 T_s} & \cdots & e^{-j(2d-1) \theta_d T_s} & e^{j(2d-1) \theta_d T_s} \\ \end{smallmatrix} \right) \left( \begin{smallmatrix} v_1^T h & & & & (0) \\ & v_1^H h & & & \\ & & \ddots & & \\ & & & v_d^T h & \\ (0) & & & & v_d^H h \end{smallmatrix} \right) = \widetilde{M}^T \widetilde{H}, \nonumber \end{eqnarray*} where $\widetilde{M}$ is the Vandermonde matrix with the $\mathcal{A}_{\Phi}$-eigenvalues as its parameters and $\widetilde{H}$ is a diagonal matrix whose diagonal elements are made up of the projections of $h$ onto the $\mathcal{A}$-eigenvectors. Thus, $\det(G) = \det(\widetilde{M}) \det(\widetilde{H})$. One of the conditions of Theorem~\ref{thm:existence_thm} ensures that the $\{e^{\pm j \theta_i T_s}\}_{i=1}^{d}$ are distinct, which implies that the determinant of this square Vandermonde matrix \cite[Ch 0]{horn1990matrix} obeys $|\det(\widetilde{M})| > 0$. Also, since $v_i^H h \neq 0$ for all $i = 1, \cdots, d$, we know that $|\det(\widetilde{H})| > 0$. Therefore for $M = 2d$, $\operatorname{rank}(G^H G) = 2d$. Since adding vectors to a frame does not change the rank of $G^H G$ (i.e., frame bounds cannot be lowered by adding more vectors to the frame), it follows that if $M \ge 2d$ then $\operatorname{rank}(G^H G) = 2d$ and the proof of Theorem~\ref{thm:existence_thm} is complete.
\end{proof} \subsection{Proof of Theorem \ref{thm:stability_thm}} \label{sec:proof_of_stability_thm} \begin{proof} To prove Theorem \ref{thm:stability_thm}, we again follow the three steps detailed in Appendix~\ref{sec:approach}, this time establishing specific guarantees on the frame bounds $B_1(M)$ and $B_2(M)$ appearing in the first step. From Lemma~\ref{lem:Gershgorin_disk}, we first observe that for all $i$ we can bound the Gershgorin disk radii by $\widetilde{r}_{2i-1} = \widetilde{r}_{2i} \le \left(|v_i^H h|^2 + \sum_{p=1,\;p\neq i}^{d} |v_i^H h||v_p^H h| + \sum_{p=1,\; p\neq i}^{d} |v_i^H h||v_p^H h|\right) \nu \leq (2d-1)\kappa_2^2 \|h\|_2^2 \nu$. Noting that $\|h\|_2^2 = \frac{2d}{M}$, we see that for each $i$, the Gershgorin disks of $G^H G$ satisfy $\mathcal{D}_{2i-1} = \mathcal{D}_{2i} \subset \left[ |v_i^H h|^2 M - \|h\|_2^2 (2d - 1) \nu \kappa_2^2, \; |v_i^H h|^2 M + \|h\|_2^2 (2d - 1) \nu \kappa_2^2 \right]$. Then applying the Gershgorin Circle Theorem, we get $\lambda(G^H G) \subset \bigcup_{j=1}^{2d} \mathcal{D}_j \subset \left[ 2d \kappa_1^2 - \frac{2d}{M} (2d - 1) \nu \kappa_2^2,\; 2d \kappa_2^2 + \frac{2d}{M} (2d - 1) \nu \kappa_2^2 \right]$. By choosing $B_1(M) = 2d \left( \kappa_1^2 - \frac{(2d-1)\nu \kappa_2^2}{M} \right)$ and $B_2(M) = 2d \left( \kappa_2^2 + \frac{(2d-1)\nu \kappa_2^2}{M} \right)$, and applying step 2 in Appendix~\ref{sec:approach}, we arrive at: \begin{eqnarray} \frac{B_1(M)}{A_2} \le \frac{\|F(x) - F(y)\|_2^2}{\|x-y\|_2^2} \le \frac{B_2(M)}{A_1} \label{eq:iso_bounds_repeat} \end{eqnarray} for all distinct pairs of points $x,y \in \mathcal{M}$ and for all $M$. Now as $M \rightarrow \infty$, $B_1(M) \rightarrow 2d \kappa_1^2$ and $B_2(M) \rightarrow 2d \kappa_2^2$. Thus in the limit of large $M$, the lower and upper bounds of the inequality \eqref{eq:iso_bounds_repeat} approach $\frac{2d \kappa_1^2}{A_2}$ and $\frac{2d \kappa_2^2}{A_1}$, respectively.
We define the scaling constant $C$ as the average of the asymptotic values of these lower and upper bounds: $C := \frac{2d}{2}\left(\frac{\kappa_1^2}{A_2} + \frac{\kappa_2^2}{A_1} \right)$. Also define the conditioning number $\delta(M)$ for a given $M$ as the maximum deviation of the lower and upper bounds of \eqref{eq:iso_bounds_repeat} from $C$, normalized by $C$: $\delta(M) := \max \left\{ 1 - \frac{B_1(M)}{C A_2}, \frac{B_2(M)}{C A_1} - 1 \right\}.$ Now $1 - \frac{B_1(M)}{C A_2} = 1 - \frac{2d \left( \kappa_1^2 - {(2d-1)\nu \kappa_2^2/M} \right)}{(2d/2)\left({\kappa_1^2} + \kappa_2^2(A_2/A_1) \right)} = \frac{A_2 \kappa_2^2 - A_1 \kappa_1^2 + {2 A_1 (2d-1)\nu \kappa_2^2/M}}{A_2 \kappa_2^2 + A_1 \kappa_1^2}$, and $\frac{B_2(M)}{C A_1} - 1 = \frac{2d \left( \kappa_2^2 + {(2d-1)\nu \kappa_2^2/M} \right)}{(2d/2)\left((A_1/A_2){\kappa_1^2} + \kappa_2^2 \right)} - 1 = \frac{A_2 \kappa_2^2 - A_1 \kappa_1^2 + {2 A_2 (2d-1)\nu \kappa_2^2/M}}{A_2 \kappa_2^2 + A_1 \kappa_1^2}$. Since $A_1 \le A_2$, we have that $\delta(M) = \frac{B_2(M)}{C A_1} - 1 = \frac{A_2 \kappa_2^2 - A_1 \kappa_1^2}{A_2 \kappa_2^2 + A_1 \kappa_1^2} + \frac{2 A_2 \kappa_2^2}{A_2 \kappa_2^2 + A_1 \kappa_1^2}\frac{(2d-1)\nu}{M}.$ We can then define $\delta_0$ and $\delta_1(M)$ as the first and second terms of the sum above. Notice that $\delta(M)$ represents a worst-case bound on the deviation from $C$, as we maximized over upper and lower bounds that may not be of the same magnitude (i.e., in general $C(1 - \delta(M)) \neq \frac{B_1(M)}{A_2}$). Finally, we recall that for the embedding conditioning number to be valid, we must have $0 \le \delta(M) < 1$. The first condition $\delta(M) \ge 0$ is achieved by construction. The upper bound is equivalent to the condition for $M$ required by the theorem statement, thus completing the proof. \end{proof} \section*{Acknowledgment} The authors are grateful to Michael Wakin and Armin Eftekhari for valuable discussions about this work. \bibliographystyle{IEEEtran}
\section{Introduction} Only within the last two decades has it been realised that the interplay between vigorous star formation and the state of the interstellar medium (ISM) can have profound implications for the evolution of galaxies and their environments (see for example Norman \& Ikeuchi 1989). Starbursts, and in particular the galactic mass outflows or winds driven by thermalised stellar winds from massive stars and supernovae, have implications for systems of all sizes. Galactic winds may be responsible for the destruction of dwarf galaxies (Dekel \& Silk 1986; Heckman {\rm et al.\thinspace} 1995), enrichment of the ICM and IGM in clusters and groups, and removal of gas from merger remnants. Heckman {\rm et al.\thinspace} (1993) provide a comprehensive review of the observational data and theory of galactic winds. Briefly, thermalised kinetic energy from stellar winds and supernovae from massive stars in the starburst creates a hot ($\sim10^{8} {\rm\thinspace K}$) bubble in the ISM. This expands, sweeping up ambient material into a dense shell. Eventually the bubble breaks out of the disk of the galaxy along the minor axis. The hot wind then escapes freely at several thousand kilometers per second. The dense shell fragments due to Rayleigh-Taylor instabilities, and is carried along by the wind at velocities of order hundreds of kilometers per second. This fragmented shell material, or ambient clouds overrun by the wind, is the source of the optical emission line filaments, and possibly of the soft \mbox{X-ray} emission. The archetypal starburst M82 presents possibly the best test case, given its proximity (3.63 {\rm\thinspace Mpc}, Freedman {\rm et al.\thinspace} 1994) and the wealth of observational data available.
Its high infrared luminosity ($L_{IR}=4\times10^{10} \hbox{$\thinspace L_{\odot}$}$, Rieke {\rm et al.\thinspace} 1993), disturbed morphology, population of supernova remnants (Muxlow {\rm et al.\thinspace} 1994) and luminous young super star clusters (O'Connell {\rm et al.\thinspace} 1995) are all signatures of a strong burst of star formation. The starburst was probably caused by a close interaction with M82's nearby (projected distance $\sim 40 {\rm\thinspace kpc}$) neighbour M81 about $2\times10^{8 } {\rm\thinspace yr}$ ago (Cottrell 1977), and a tidal bridge of H{\sc i} \, connects the two galaxies (Yun {\rm et al.\thinspace} 1993). A set of emission line filaments along M82's minor axis show velocities consistent with gas motions along the surface of a cone at $v=600 \hbox{$\km\s^{-1}\,$}$ (Axon \& Taylor 1978): cooler material swept out of the galaxy by the much hotter wind. The ${\rm H\alpha}$ emission defines an outflow that has a radius $\sim440{\rm\thinspace pc}$ at a distance of $z\sim180{\rm\thinspace pc}$ above the galactic plane (G\"{o}tz {\rm et al.\thinspace} 1990), and is approximately cylindrical for $z<350{\rm\thinspace pc}$. At larger $z$ the blowout flares out to a cone with an opening angle $\theta\approx30\hbox{$^\circ$}$ (see Fig.~5 in McKeith {\rm et al.\thinspace} 1995). Within $1 {\rm\thinspace kpc}$ of the nucleus the inferred electron density in the filaments decreases with increasing $z$. McKeith {\rm et al.\thinspace} (1995) claim this is consistent with a $\rho \propto z^{-2}$ model, as would be expected if the ${\rm H\alpha}$ filaments were in pressure equilibrium with freely expanding wind such as that proposed by Chevalier \& Clegg (1985). A similar density decrease was also inferred by McCarthy {\rm et al.\thinspace} (1987). 
Additional evidence for a galactic wind is the synchrotron emitting radio halo, extended preferentially along the minor axis (Seaquist \& Odegard 1991), due to relativistic electrons from supernovae swept out from the starburst region by the wind. This has a maximum extent comparable to the \mbox{X-ray} emission (this paper). A steepening of the spectral index, interpreted as arising from energy loss by Inverse Compton (IC) scattering of the electrons off IR photons, allows an estimate of the speed with which the electrons are being convected outwards, assuming re-acceleration in shocks to be negligible. Seaquist \& Odegard (1991) claim that a conservative estimate of the wind velocity, allowing for the uncertainties, lies in the range 1000-3000 $\hbox{$\km\s^{-1}\,$}$, similar to that predicted from theory (Chevalier \& Clegg 1985; Heckman {\rm et al.\thinspace} 1993). Schaaf {\rm et al.\thinspace}'s (1989) suggestion that \mbox{X-rays} produced by this IC scattering could be the source of the observed \mbox{X-ray} emission is argued against by Seaquist {\rm et al.\thinspace} (1991), who predict $L_{\rm IC}=10^{38} \hbox{$\erg\s^{-1}\,$}$, in contrast with the value we derive below of $2\times10^{40} \hbox{$\erg\s^{-1}\,$}$ in the {\em ROSAT} band. \mbox{X-ray} observations should provide a direct method of testing the galactic wind paradigm, given that thermalised stellar wind and supernova ejecta are expected to have a temperature of $\sim10^{8} {\rm\thinspace K}$. Previous X-ray observations of M82 have suffered from poor sensitivity, poor spectral resolution and, to a lesser extent, poor spatial resolution. Watson {\rm et al.\thinspace} (1984) detected several very luminous ($\sim10^{39} \hbox{$\erg\s^{-1}\,$}$) sources along with diffuse emission using the {\em Einstein} HRI. The diffuse emission was seen to extend out to $\sim3'$ ($\sim3{\rm\thinspace kpc}$) to the south-east and $\sim2'$ to the north-west.
Spectral fits using the {\em Einstein} IPC and MPC were inconclusive, in that they were unable to distinguish between a power law and a thermal origin for the emission. Given that they were unable to separate the point sources and the diffuse emission, this is not surprising. A reanalysis of the {\em Einstein} data by Fabbiano (1988) did attempt to separate the different components. The MPC (without any imaging capability) fitted temperature of $6.8^{+5.7}_{-2.3} {\rm\thinspace keV}$ would be dominated by the nuclear source and hence is not an estimate of the wind temperature. The IPC gives $T\sim1.2 {\rm\thinspace keV}$ for the emission within $3'$ of the nucleus, compared to $\sim2.7 {\rm\thinspace keV}$ for the emission between 200-300$\arcm\hskip -0.1em\arcm$. The radial surface brightness in the IPC falls off approximately as $r^{-3}$, consistent with the expectation for a free wind. Schaaf {\rm et al.\thinspace} (1989) use an {\em EXOSAT} observation together with the {\em Einstein} data. The {\em EXOSAT} spectrum is consistent with either a power law or a Raymond \& Smith plasma with temperature $9^{+9}_{-4} {\rm\thinspace keV}$, again without any separation of point source and diffuse components. Although the extent of the emission seen within the {\em EXOSAT} and {\em Einstein} observations compares well with the higher sensitivity observation of {\em ROSAT} (Fig. 1), Tsuru {\rm et al.\thinspace} (1990), from observations with Ginga, claim evidence for a very extended, $100{\rm\thinspace kpc}$ halo. Two north-south scans of a $5\hbox{$^\circ$}$ region centred on M82 show excess flux to the north of M82, but not to the south. A spectral fit to this emission is essentially unconstrained, with a temperature in the range 1-11 {\rm\thinspace keV}. Tsuru {\rm et al.\thinspace} argue that a single point source could not produce the observed feature, as the position of the extra source is inconsistent between the two scans.
They concede this could be due to two or more point sources, but estimate the chance of finding two sources of suitable flux in such a small region as 4 square degrees is $<5$\%. The dynamical age of such a halo is $\sim10^{8} {\rm\thinspace yr}$, hence this might be the remains of a wind from a starburst $\sim10^{8} {\rm\thinspace yr}$ ago. The {\em ROSAT} HRI observations of M82 (Bregman {\rm et al.\thinspace} 1995) show three sources within the nuclear region of M82, although two of them have very low S/N values above the strong and spatially varying wind emission. A very bright source present in the {\em Einstein} data appears to have faded away completely (Collura {\rm et al.\thinspace} 1994), although the main nuclear source is at a position consistent with the {\em Einstein} observation. Bregman {\rm et al.\thinspace} (1995) analyse the diffuse emission without the benefit of any spectral information. They conclude that the extended emission along the minor axis is consistent with an outflow of gas whose opening angle decreases with increasing radius within $1.7'$ of the nucleus and remains constant at larger radii. They model the emission successfully by adiabatically expanding gas of constant mass flux, and predict a decrease in the temperature of the gas with increasing radius. \begin{figure*} \picplace{13cm} \label{fig:wind-grey} \caption[]{Contours of X-ray emission ($0.1$--$2.4 {\rm\thinspace keV}$) from the PSPC overlaid on a digitised \mbox{sky-survey} optical image of M82. The X-ray emission has been lightly smoothed with a Gaussian of standard deviation $\sigma=10\arcm\hskip -0.1em\arcm$ to suppress noise. The contour levels increase in factors of two from $2.88 \times 10^{-3}$ cts \hbox{\s$^{-1}\,$} arcmin$^{-2}$ ($\sim 3\sigma$ above the background).} \end{figure*} We report below an analysis of the {\em ROSAT} PSPC and HRI observations of this \mbox{X-ray} emission.
The PSPC's mixture of good spatial and spectral capabilities, unmatched by any other \mbox{X-ray} instrument, allows the best determination yet of the properties of this emission. In particular, we can separate point source and diffuse emission, and investigate the variation of spectral properties as a function of distance from the nucleus. For the first time, we show that the diffuse emission is thermal in origin, and obtain temperatures, emission measures, metallicities, and, for an assumed geometry, electron densities, gas masses and total energies. We compare our results with Chevalier \& Clegg's (1985) analytical model of a galactic wind, and a simple model in which the emission comes from shock heated clouds rather than the wind itself. Our results allow us to reject the possibility that the \mbox{X-ray} emission comes from the wind itself, and show that it could be consistent with emission from shock heated clouds. \section{Data reduction} \label{sec:red} M82 was observed three times early in the {\em ROSAT} mission (Tr\"umper 1984) by both the PSPC and the HRI (Table~\ref{tab:rosobs}). Only the PSPC and the longer of the two HRI observations are used in the present analysis. The use of the HRI's good spatial resolution ($\sim 5\arcm\hskip -0.1em\arcm$ FWHM) to complement the spectral information available at moderate resolution ($27\arcm\hskip -0.1em\arcm$ at 1 {\rm\thinspace keV}) from the PSPC is advantageous, especially with regard to clarifying source confusion. The data sets were obtained from the Leicester Data Archive (LEDAS) and were analysed using the Starlink {\em ASTERIX} X-ray analysis system. \begin{table*} \caption[]{ROSAT observations of M82.
Although the first HRI observation rh600021 was ostensibly taken within a couple of days of the PSPC observation, only $\sim 170$ seconds were taken in March 1991, the rest being taken in early May 1991.} \begin{flushleft} \begin{tabular}{lllll} \hline\noalign{\smallskip} Instrument & Exposure (s) & ROR \# & P.I. & Start date \\ \noalign{\smallskip} \hline\noalign{\smallskip} HRI & 24613 & rh6200021 & Bregman & 25 03 1991 \\ PSPC & 26088 & rp600110 & Watson & 28 03 1991 \\ HRI & 9496 & rh600021ao1 & Bregman & 20 10 1992 \\ \noalign{\smallskip} \hline \end{tabular} \end{flushleft} \label{tab:rosobs} \end{table*} \subsection{Background subtraction of PSPC data} \label{sec:bgsub} The data were cleaned of periods of high background (both particle and Solar) and poor pointing stability, leaving 21194 seconds of good data. A spectral image (or {\em data cube}) was formed over a $0.3\hbox{$^\circ$} \times 0.3\hbox{$^\circ$}$ region centred on the $2.2 {\rm \mu m}$ nucleus, with a pixel size of $5\arcm\hskip -0.1em\arcm$ and 22 energy bins between channel numbers 11 and 230 (corresponding to roughly 0.11-2.3 keV). A model of the background was constructed using data from an annulus $r = 0.15\hbox{$^\circ$}-0.25\hbox{$^\circ$}$, centred on M82, with contaminating point sources removed. The particle contribution to the background was estimated using the master veto rate (Snowden {\rm et al.\thinspace} 1992) and the remainder was corrected for energy-dependent vignetting, to give a spatial-spectral model of the background covering the entire field. This model was then adjusted (by 3\%) as a result of an iterative process of further source searching using the PSS programme (Allan 1995), removal of sources from the background annulus, and rescaling of the background to achieve a drop to zero surface brightness away from M82. The resulting background subtraction should be accurate to 2\%. 
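A toy caricature of this background strategy (not the actual ASTERIX pipeline; image size, count rates, and radii are invented) estimates the background level from a source-free annulus and rescales it so that the source-free far field subtracts to zero:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy image: flat sky background plus a central "wind" source (all invented)
ny = nx = 101
y, x = np.mgrid[0:ny, 0:nx]
r = np.hypot(x - nx // 2, y - ny // 2)
image = rng.poisson(0.5 + 50.0 * np.exp(-(r / 6.0) ** 2)).astype(float)

# Background level from a source-free annulus, loosely mimicking the
# r = 0.15-0.25 deg annulus used for the PSPC
annulus = (r > 35) & (r < 48)
bkg_model = image[annulus].mean()

# Rescale the model until the source-free far field subtracts to ~zero,
# mimicking the iterative few-percent adjustment described in the text
periphery = r > 48
bkg_model *= image[periphery].mean() / bkg_model

residual = image - bkg_model
assert abs(residual[periphery].mean()) < 0.05   # flat, near-zero far field
```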
\subsection{HRI reduction} \label{sec:hri-red} The HRI's resolution and relative insensitivity to diffuse emission make it an ideal instrument for investigating point sources in the field, clarifying the PSPC analysis. The data were binned into an image of size $0.3 \hbox{$^\circ$} \times 0.3 \hbox{$^\circ$}$ centred on the $2.2 {\rm \mu m}$ nucleus, with a $3 \arcm\hskip -0.1em\arcm$ pixel size to exploit the HRI's superior spatial resolution. Given the flatness of the HRI vignetting function within the region of interest, no vignetting correction was applied. Source searching is described below. \begin{figure*} \picplace{9cm} \label{fig:hri} \caption[]{Background subtracted HRI image of M82. The data have been lightly smoothed with a Gaussian of $\sigma=6\arcm\hskip -0.1em\arcm$. Contour levels begin at $4.0\times10^{-3}\, {\rm counts} \hbox{\s$^{-1}\,$}$ arcmin$^{-2}$ and increase in factors of two. HRI sources detected as described in the text are marked by crosses, and are numbered as in Table 2.} \end{figure*} \begin{figure*} \picplace{13cm} \caption[]{The wind regions from which spectra were collected, overlaid onto an X-ray image. Sources (shown as circles and numbered as in Table 2) were removed from the analysis. Contour levels are as in Fig. 1.} \label{fig:wind-reg} \end{figure*} \section{Point sources} \begin{table*} \caption[]{PSPC and HRI detected sources. Positional errors are at 90\% confidence, flux errors are 1 $\sigma$. Where a source is detected both in the PSPC and the HRI, the HRI determined position is given.} \begin{flushleft} \begin{tabular}{lllllll} \noalign{\smallskip} \hline \noalign{\smallskip} Source & RA & Dec. & Pos. Err.
& Flux (PSPC) & Flux (HRI) & Identification \\ & (J2000.0) & (J2000.0) & (arcmin) & Counts & Counts & \\ \noalign{\smallskip} \hline \noalign{\smallskip} 1 & 09 54 20 & +69 46 42 & 0.10 & $ 41\pm{8} $ & & \\ 2 & 09 54 34 & +69 48 13 & 0.17 & $ 33\pm{8} $ & & \\ 3 & 09 55 07 & +69 43 40 & 0.11 & $ 40\pm{9} $ & & Wind enhancement? \\ 4 & 09 55 15 & +69 36 19 & 0.04 & $ 39\pm{8} $ & $ 17\pm{7} $ & \\ 5 & 09 55 16 & +69 47 41 & 0.06 & $ 120\pm{13} $ & $ 45\pm{9} $ & \\ 6 & 09 55 33 & +69 44 47 & 0.10 & $ 62\pm{12} $ & & Wind enhancement? \\ 7 & 09 55 47 & +69 41 28 & 0.03 & & $ 48\pm{13} $ & \\ 8 & 09 55 51 & +69 40 48 & 0.01 & $ 8398\pm{108} $ & $ 1195\pm{48} $ & M82 nuclear source \\ 9 & 09 56 02 & +69 41 13 & 0.02 & & $ 49\pm{11} $ & \\ 10 & 09 56 18 & +69 49 02 & 0.04 & & $ 18\pm{6} $ & \\ 11 & 09 56 22 & +69 38 53 & 0.04 & & $ 15\pm{6} $ & \\ 12 & 09 56 43 & +69 38 05 & 0.17 & $ 37\pm{9} $ & & Wind enhancement? \\ 13 & 09 56 59 & +69 38 52 & 0.04 & $ 35\pm{8} $ & $ 23\pm{7} $ & QSO 0952+698 \\ 14 & 09 57 00 & +69 34 21 & 0.11 & $ 57\pm{9} $ & & \\ 15 & 09 57 13 & +69 44 18 & 0.04 & & $ 15\pm{6} $ & \\ 16 & 09 57 23 & +69 35 36 & 0.03 & $ 67\pm{10} $ & $ 26\pm{8} $ & \\ \noalign{\smallskip} \hline \end{tabular} \end{flushleft} \label{tab:pspc-srcs} \end{table*} \begin{table*} \caption[]{PSPC source properties. Error bounds are 1 $\sigma$. 
The best-fit parameters for a Raymond-Smith hot plasma model and a power law model are given.} \begin{flushleft} \begin{tabular}{lllllll} \noalign{\smallskip} \hline \noalign{\smallskip} Source & \multicolumn{3}{l}{Raymond-Smith} & \multicolumn{3}{l}{Power Law} \\ & $\hbox{$N_{\rm H}$}$ & T & EM & $\hbox{$N_{\rm H}$}$ & $\alpha$ & normalisation \\ & {\scriptsize (\tpow{21} $\hbox{$\cm^{-2}\,$}$)} & {\scriptsize (${\rm\thinspace keV}$)} & {\scriptsize (\tpow{55} ${\rm\thinspace cm}^{3}$ / $10 {\rm\thinspace kpc}^{2}$)} & {\scriptsize (\tpow{21} $\hbox{$\cm^{-2}\,$}$)} && {\scriptsize (\tpow{-5} \mbox{photons} $\hbox{$\cm^{-2}\,$} \hbox{\s$^{-1}\,$} {\rm\thinspace keV}^{-1}$)} \\ \noalign{\smallskip} \hline \noalign{\smallskip} 1 & $ 0.28^{+0.28}_{-0.17} $ & $ 1.78^{+20}_{-0.47} $ & $ 5.4^{+5.7}_{-2.0} $ & $ 0.48^{+0.44}_{-0.31} $ & $ 2.3^{+1.0}_{-0.9} $ & $ 0.8^{+0.2}_{-0.2} $ \\ 2 & $ 0.35^{+0.73}_{-0.35} $ & $ 0.88^{+1.32}_{-0.40} $ & $ 4.9^{+9.8}_{-4.6} $ & $ 0.70^{+1.16}_{-0.47} $ & $ 2.7^{+1.3}_{-1.2} $ & $ 0.7^{+0.4}_{-0.2} $ \\ 3 & $ 4.2^{+7.6}_{-4.2} $ & $ 0.23^{+0.36}_{-0.18} $ & $ 48.7^{+26580}_{-48.7} $ & $ 7.5^{+0.9}_{-2.5} $ & $ 8.0\pm{1.6} $ & $ 12.4^{+4.6}_{-6.6} $ \\ 4 & $ 0.00^{+0.06}_{-0.00} $ & $ 0.57^{+0.18}_{-0.17} $ & $ 2.0^{+1.7}_{-0.7} $ & $ (0.19^{+0.33}_{-0.19}) $ & $ (2.2\pm{1.0}) $ & $ (0.7^{+0.1}_{-0.2}) $ \\ 5 & $ 0.79^{+1.26}_{-0.28} $ & $ 0.65^{+0.14}_{-0.34} $ & $ 32.1^{+59.1}_{-16.1} $ & $ 8.5^{+0.7}_{-3.2} $ & $ 8.0\pm{2.2} $ & $ 31.7^{+5.4}_{-14.8} $ \\ 6 & $ 0.87^{+0.30}_{-0.20} $ & $ 0.36^{+0.08}_{-0.06} $ & $ 100.3^{+97.8}_{-48.6} $ & $ 1.77^{+0.79}_{-0.36} $ & $ 4.6^{+0.7}_{-0.5} $ & $ 48.0^{+12.4}_{-6.3} $ \\ 8 & $ (3.17^{+0.24}_{-0.17}) $ & $ (2.39^{+0.48}_{-0.43}) $ & $ (3770^{+30}_{-20}) $ & $ 3.96^{+0.29}_{-0.28} $ & $ 2.17^{+0.14}_{-0.13} $ & $ 717^{+56}_{-44} $ \\ 12 & $ 0.9^{+3.4}_{-0.4} $ & $ 0.43^{+0.24}_{-0.30} $ & $ 36^{+144}_{-29} $ & $ 5^{+6}_{-1} $ & $ 6.3^{+4.2}_{-2.2} $ & $ 4.3^{+8.8}_{-4.3} $ \\ 13 & $ 
2^{+7}_{-2} $ & $ 0.7^{+3.5}_{-0.2} $ & $ 24^{+409}_{-24} $ & $ 4^{+7}_{-4} $ & $ 3.6^{+3.7}_{-2.2} $ & $ 2.8^{+10.9}_{-2.8} $ \\ 14 & $ 0.10^{+0.10}_{-0.08} $ & $ 1.4^{+3.6}_{-0.6} $ & $ 6.1^{+4.4}_{-2.0} $ & $ 0.21^{+0.21}_{-0.15} $ & $ 2.2\pm{0.7} $ & $ 0.8\pm{0.2} $ \\ 16 & $ 0.24^{+0.18}_{-0.14} $ & $ 2.9^{+21.2}_{-0.5} $ & $ 9.0^{+4.6}_{-2.3} $ & $ 0.32^{+0.28}_{-0.22} $ & $ 1.7\pm{0.7} $ & $ 1.4^{+0.3}_{-0.2} $ \\ \noalign{\smallskip} \hline \end{tabular} \end{flushleft} \label{tab:psrcs-fits} \end{table*} \subsection{Source searching in the presence of diffuse emission} \label{sec:srcs} For M82, source searching must take into account the presence of a spatially variable high surface brightness diffuse background due to the wind. This affects both the PSPC and the HRI. Failure to incorporate this additional background leads to the detection of a large number of low-significance sources within the diffuse emission, whilst simply increasing the threshold significance leads to the non-detection of what are clearly real sources in regions free of diffuse emission. As a result, we employ an iterative procedure which starts by using a smoothed image (including all sources and diffuse emission) as the background for the source searching procedure. Sources detected at $\gtsimm5\sigma$ are then excised from the image, out to the $\sim80\%$ enclosed energy radius ($7\arcm\hskip -0.1em\arcm$ in the HRI, $26\arcm\hskip -0.1em\arcm$ in the PSPC), and the resulting holes interpolated over. This dataset is then smoothed to provide a second approximation to the background. The 80\% radius to which sources are removed is a compromise: too large a radius makes the interpolation unreliable, while too small a radius leaves significant source contamination in the background. The above procedure of source searching followed by background estimation was repeated until there was no further change in the list of detected sources.
In practice this required three cycles. The results of the method depend on the smoothing scale employed when estimating the background; $\sigma=18\arcm\hskip -0.1em\arcm$ and $60\arcm\hskip -0.1em\arcm$ were found to give the best results for the HRI and PSPC, respectively. The final combined source list is shown in Table~\ref{tab:pspc-srcs}. \subsection{Source spectra} Individual exposure-corrected spectra centred on the positions given in Table~\ref{tab:pspc-srcs} were obtained for each PSPC source from the data cube within a radius corresponding to a 95\% enclosed energy fraction at an energy of $0.5 {\rm\thinspace keV}$ (an appropriate energy for QSOs, which form the majority of background sources). \mbox{Raymond \& Smith} (1977) and power law models were fitted to these spectra. Standard $\chi^{2}$ fitting is inappropriate due to the low numbers of counts per bin; we therefore used a \mbox{maximum-likelihood} fit. For each source, the spectrum predicted from the spectral model is added to an estimated background spectrum derived from the background model cube discussed in Sect.~\ref{sec:bgsub}. The resulting total source plus background spectrum is fitted to the observed spectral data using the Cash C-statistic (Cash 1979). Table~\ref{tab:psrcs-fits} gives the results of the spectral fits. As maximum likelihood does not provide a goodness-of-fit measure akin to $\chi^{2}$, it is difficult to assess how good the fits are, except by visual inspection and by comparing the fitted parameters with typical values for QSOs and stars. Source 8 (the nucleus) is strong enough that significant systematic discrepancies between the data and the fitted models are apparent, and is dealt with separately in Sect.~\ref{sec:nucleus}.
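For Poisson-distributed counts the Cash (1979) statistic is, up to a model-independent constant, $C = 2\sum_{i}(m_{i} - n_{i}\ln m_{i})$ for model counts $m_{i}$ and observed counts $n_{i}$. A minimal sketch of such a fit, here reduced to a single free normalisation on a fixed spectral shape plus a fixed background spectrum (a simplification of the actual multi-parameter fits), is:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def cash(observed, model):
    """Cash (1979) C-statistic for Poisson counts, up to a
    model-independent additive constant."""
    model = np.clip(model, 1e-12, None)  # guard against log(0)
    return 2.0 * np.sum(model - observed * np.log(model))

def fit_normalisation(observed, shape, background):
    """Fit one free normalisation a: observed ~ Poisson(a*shape + background),
    with the background spectrum held fixed as in the text."""
    result = minimize_scalar(
        lambda a: cash(observed, a * shape + background),
        bounds=(0.0, 1.0e6), method="bounded")
    return result.x, result.fun
```

The same statistic yields the relative probability of two fits to the same data, $P_{2}/P_{1}=\exp(\Delta C_{12}/2)$; a difference of $\Delta C \approx 12$ thus corresponds to one model being $e^{6} \approx 400$ times more probable, the level quoted later for the hot plasma versus power law fits to region n2.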
\subsection{Comparison between PSPC and HRI sources} \label{sec:hrisrc} Given the presence of several possible point sources within or in close proximity to the diffuse emission, the use of the HRI can clarify whether these are true point sources or not. As discussed in Sect.~\ref{sec:hri-red} the presence of strong diffuse emission complicates source searching, which may lead to the detection of spurious sources within the diffuse emission. Only five sources were detected independently in both the PSPC and the HRI. We have searched for counterparts to these objects at other wavelengths in the SIMBAD database. One (Source 8) is M82's nuclear source, another (Source 13) a QSO. The other three do not have any counterparts. For the remaining PSPC sources not detected in the HRI, we found $3\sigma$ upper limits for the HRI flux due to any point source within a region defined by the PSPC positional uncertainty. These, together with predicted HRI count rates from the spectral fits to the PSPC sources, are given in Table~\ref{tab:hri_uplim}. There is little difference between the predicted fluxes for the power law and \mbox{Raymond \& Smith} models. For the sources detected in both PSPC and HRI, the observed HRI count rates agree well with those predicted, except for the nucleus, which is discussed below. In most other cases the upper limits are greater than (i.e. consistent with) the predicted count rates. For the two possible sources within the northern wind, sources 3 \& 6, the predicted count rates are higher than the HRI $3\sigma$ upper limits. This could mean that these are not true point sources, but merely bright lumps in the wind. Note, however, that source 6 appears to be significantly cooler than the temperature of the wind emission in region n7, where it is centred. Alternatively, the predicted count rate may overestimate the true source flux, due to the inclusion of diffuse flux along with real source flux.
Variability is another possible explanation for the HRI non-detection. \subsection{The nucleus} \label{sec:nucleus} As the HRI observations have shown (Collura {\rm et al.\thinspace} 1994; Bregman {\rm et al.\thinspace} 1995), the nucleus of M82 contains a very luminous, variable \mbox{X-ray} source, along with non-uniform diffuse emission. It is little surprise that the single component fits to the nuclear source in the PSPC data (source 8 in Table~\ref{tab:pspc-srcs}) are of poor quality. Given the large number of counts in the spectrum ($11033\pm249$) we can use standard $\chi^{2}$ fitting. The power law from Table~\ref{tab:psrcs-fits} has a reduced $\chi^{2}$ of 7 with 19 degrees of freedom. An additional problem is the finite radius of $34\arcm\hskip -0.1em\arcm$ within which the spectrum was extracted. Although the 95\% enclosed energy radius at $0.5 {\rm\thinspace keV}$ is good for almost all sources, the brightness of the nucleus, coupled with its hardness, means that significant flux is scattered outside this radius. At energies above $\sim1.75 {\rm\thinspace keV}$ this radius only encloses $\sim70\%$ of the flux. Given the large number of photons involved, this is likely to have a significant effect on any fit. Fitting the data from within a larger radius about the nucleus ($r=0.02\hbox{$^\circ$}$, giving $\sim90$\% enclosed energy at $1.75{\rm\thinspace keV}$, $17473\pm415$ counts), we achieve a best-fit reduced $\chi^{2}$ of 2.08 with 15 degrees of freedom (see Table~\ref{tab:nuc}) using a two component soft \mbox{Raymond \& Smith} plus harder bremsstrahlung model. The fitted temperature of the hard component is outside {\em ROSAT}'s energy range, and so should only be interpreted as being significantly hotter than 6.2 {\rm\thinspace keV}. The spectrum can also be fitted by a two component power law plus \mbox{Raymond \& Smith} model; however, the fit requires the power law to have a {\it lower} column than the hot plasma component.
This is not physically sensible if the hard component represents nuclear emission, so we rejected this model in favour of the \mbox{Raymond \& Smith} plus bremsstrahlung model. \begin{table} \caption[]{HRI count rates and upper limits for the PSPC detected sources. The predicted HRI count rate was calculated as described in the text. Only five PSPC sources are directly detected in the HRI; the remaining sources have $3\sigma$ upper limits quoted. The range in predicted flux corresponds to using either the power law or Raymond-Smith fit.} \begin{flushleft} \begin{tabular}{lll} \hline\noalign{\smallskip} Source & Predicted flux & Observed flux \\ & ($10^{-4}$ cts $\hbox{\s$^{-1}\,$}$) & ($10^{-4}$ cts $\hbox{\s$^{-1}\,$}$) \\ && \\ \noalign{\smallskip} \hline\noalign{\smallskip} 1 & 5.9--6.3 & $<$ 9.5 \\ 2 & 4.9--5.4 & $<$ 9.4 \\ 3 & 12.8--13.4 & $<$ 10.2 \\ 4 & 6.1--6.9 & $6.6\pm2.6$ \\ 5 & 21.8--24.3 & $18.8\pm3.8$ \\ 6 & 25.8--29.5 & $<$ 12.5 \\ 8 & 1857--1996 & $500\pm20$ \\ 12 & 8.6--8.8 & $<$ 10.8 \\ 13 & 7.1--7.2 & $9.3\pm3.0$ \\ 14 & 7.9--8.0 & $<$ 9.7 \\ 16 & 10.9--11.3 & $11.0\pm3.0$ \\ \noalign{\smallskip} \hline \end{tabular} \end{flushleft} \label{tab:hri_uplim} \end{table} \begin{table} \caption[]{Best-fit parameters (reduced $\chi^{2}=2.08$) for the nuclear source. Luminosities are quoted in the {\em ROSAT} band ($0.1-2.4 {\rm\thinspace keV}$), for a distance of $3.63 {\rm\thinspace Mpc}$ (Freedman {\rm et al.\thinspace} 1994) to M82. The luminosity escaping M82, $L_{X-esc}$, is corrected for absorption in our own galaxy. The intrinsic source luminosities $L_{X-in}$ are corrected to zero absorption. $^{\dagger}$ This is the $1\sigma$ lower bound.
The temperature is unbounded above this value.} \begin{flushleft} \begin{tabular}{lll} \hline\noalign{\smallskip} Parameter & Hard component & Soft component \\ & {\scriptsize bremsstrahlung} & {\scriptsize Raymond \& Smith} \\ \noalign{\smallskip} \hline\noalign{\smallskip} $\hbox{$N_{\rm H}$}$ {\scriptsize($10^{21} \hbox{$\cm^{-2}\,$}$)} & $5.76^{+1.49}_{-1.17}$ & $0.92^{+0.08}_{-0.12}$ \\ EM {\scriptsize(\tpow{57} cm$^{3} / 10 {\rm\thinspace kpc}^{2}$)} & $36.7^{+2.6}_{-3.4}$ & $8.3^{+2.3}_{-2.5}$ \\ T {\scriptsize({\rm\thinspace keV})} & $>6.2^{\dagger}$ & $0.76^{+0.02}_{-0.03}$ \\ Metallicity {\scriptsize($Z_{\odot}$)} & - & $0.30^{+0.09}_{-0.05}$ \\ $L_{X-esc}$ {\scriptsize (\hbox{$\erg\s^{-1}\,$})} & $1.3\times10^{40}$ & $7.8\times10^{39}$\\ $L_{X-in}$ {\scriptsize (\hbox{$\erg\s^{-1}\,$})} & $3.5\times10^{40}$ & $1.2\times10^{40}$ \\ \noalign{\smallskip} \hline \end{tabular} \end{flushleft} \label{tab:nuc} \end{table} The predicted HRI count rate for the hard bremsstrahlung component derived above is $\sim0.15$ cts $\hbox{\s$^{-1}\,$}$, compared to the observed value of $\sim0.05$ cts $\hbox{\s$^{-1}\,$}$. The thin, hot component of the wind is predicted to provide very little emission in the {\em ROSAT} band (Suchkov {\rm et al.\thinspace} 1994), and there are no other strong point sources seen by the HRI. Since the HRI source associated with the nucleus is known to be variable (Collura {\rm et al.\thinspace} 1994) by a factor of several within the HRI observations, the difference between PSPC and HRI fluxes is most likely due to such variability. The first HRI observation (rh600021, see Table~\ref{tab:rosobs}) is divided into two blocks which are interleaved with the PSPC observation. The short ($\sim170 {\rm\thinspace s}$) initial HRI pointing showed the source intensity at a level $\sim\frac{1}{3}$ that of the remaining HRI data, taken $\sim40$ days later.
The first block of PSPC data falls in this 40 day gap, with the remainder commencing $\sim200$ days later. We have searched for variation of the nuclear source within the PSPC observation, which is broken into three main blocks separated by long gaps, but observe no significant variation between the blocks. In the second HRI observation, taken a year later, the nuclear count rate decreases steadily over a period of six days from the previous HRI level to about half that (Collura {\rm et al.\thinspace} 1994). The intrinsic nuclear point source luminosity in the {\em ROSAT} band of $\sim3.5\times10^{40} \hbox{$\erg\s^{-1}\,$}$ (Table~\ref{tab:nuc}) is significantly higher than the {\em ROSAT} HRI estimate of $\sim6.3\times10^{39} \hbox{$\erg\s^{-1}\,$}$. This estimate assumed a Raymond-Smith model with $T=3 {\rm\thinspace keV}$ and $\log \hbox{$N_{\rm H}$}=21.5$. With the higher temperature and absorption column inferred from the PSPC spectral fit, the HRI luminosity would increase, although the inferred PSPC luminosity remains several times higher. This PSPC luminosity corresponds to the Eddington luminosity for a $\sim250 \hbox{$\thinspace M_{\odot}$}$ object. The position of this source corresponds (to within {\em ROSAT} pointing accuracy) to the position of a strong $6 {\rm\thinspace cm}$ radio source (41.5+597) which, on the basis of a 100\% drop in flux within a year, is unlikely to be a supernova remnant (Muxlow {\rm et al.\thinspace} 1994). A high surface brightness complex is seen in the optical (region E in O'Connell {\rm et al.\thinspace} 1995) at this position. This is an unusual object deserving more study. \section{Wind properties} \label{sec:wind-prop} \begin{figure*} \picplace{8cm} \caption{Azimuthal profile of the emission within $0.1\hbox{$^\circ$}$ of the nuclear source. Sources other than the nucleus have been masked out. 
The northern minor axis is at an azimuth of $65\hbox{$^\circ$}$, the southern minor axis at $245\hbox{$^\circ$}$.} \label{fig:azimuth} \end{figure*} \subsection{Analysing the wind} \label{sec:an_wind} As our aim is to derive spatially resolved plasma parameters (temperature, density, metallicity and absorbing column) it is necessary to assume a geometry for the emission. This will dictate the regions from which spectra are taken and the volumes used in deriving the density of the emitting plasma. Given the lack of symmetry of the diffuse emission (henceforth called the wind, bearing in mind alternative explanations of its origin as wind-shocked ambient material or even a hydrostatic halo, as discussed in Sect.~\ref{sec:discuss}) apparent in Fig.~\ref{fig:wind-grey}, it is not obvious what geometry to choose. Within the wind paradigm, a conical ({\it e.g.\ } Suchkov {\rm et al.\thinspace} 1994) to cylindrical outflow ({\it e.g.\ } Tomisaka \& Ikeuchi 1988; Tomisaka \& Bregman 1993) along the galaxy's minor axis is expected, and if the emission arises from \mbox{wind-shocked} material a similar geometry would apply. The azimuthal profile (Fig.~\ref{fig:azimuth}) of the PSPC data about the centre of the galaxy can be used to explore the geometry of the diffuse emission. A biconical outflow would result in a sharp drop in the azimuthal brightness profile at the angles corresponding to the edges of the cone. If the cone were actually limb brightened (as suggested in some models), then a bimodal structure would be seen in the azimuthal profile of each outflow. In practice, the profile varies quite smoothly with azimuth in both hemispheres, and no suggestion of limb brightening is apparent. It appears that a conical geometry is a poor representation. Inspection of the surface brightness shows the northern wind to be reasonably well described as a cylinder of radius $r = 0.05 \hbox{$^\circ$}$. We therefore adopt a cylindrical geometry for the bulk of our analysis.
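As a check on the adopted geometry, the volume of a single analysis strip through this cylinder (spanning the full $0.1\hbox{$^\circ$}$ diameter, with the $0.01\hbox{$^\circ$}$ strip height adopted below, at the assumed distance of $3.63 {\rm\thinspace Mpc}$) follows from simple geometry; the parsec conversion below is a standard value, and the result agrees with the strip volumes listed in Table~\ref{tab:wind-pars}.

```python
import math

PC_CM = 3.0857e18          # 1 parsec in cm (standard value)
D_M82_CM = 3.63e6 * PC_CM  # adopted distance of 3.63 Mpc

def angular_size_cm(theta_deg, distance_cm=D_M82_CM):
    """Projected linear size subtended by an angle at the distance of M82."""
    return distance_cm * math.radians(theta_deg)

# cylinder of radius 0.05 deg; each strip is a slab of height 0.01 deg
r = angular_size_cm(0.05)
h = angular_size_cm(0.01)
strip_volume = math.pi * r**2 * h
print(f"strip volume = {strip_volume:.2e} cm^3")  # ~5.9e65 cm^3
```

This reproduces the $5.85\times10^{65} {\rm\thinspace cm}^{3}$ volumes quoted for the full-height strips in Table~\ref{tab:wind-pars} to within rounding of the constants.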
For consistency, we apply the same geometry to the southern wind, although it is clear from Fig.~3 that this is a poorer approximation, which will overestimate the emitting volume. A set of spectra along the northern and southern winds was accordingly extracted from a series of rectangular regions of width $0.1\hbox{$^\circ$}$ and height $h$ along the minor axis (Fig.~3). A compromise must be made between large $h$ (collecting a larger number of photons in the spectrum) and small $h$ (giving good spatial resolution along the wind). A value of $h=0.01 \hbox{$^\circ$}$ was chosen, giving 9 regions along the wind while still having sufficient photons to give reasonable constraints on the spectra for all but the outermost regions. Corresponding background spectra were formed from the \mbox{background-model} cube for each of the wind regions to allow maximum likelihood fitting. Since it is likely that there will in practice be some degree of divergence of the outflows (though as we will show, the X-ray emission is almost certainly not coming from the wind fluid itself), we have investigated the effects of this by performing an identical analysis using two truncated cones. These truncated cones have a radius in the galactic plane $r_{\rm base}=0.03\hbox{$^\circ$}$, and an opening angle of $50\hbox{$^\circ$}$. The height $h$ is again $0.01\hbox{$^\circ$}$. The cylindrical and diverging geometries are compared in Fig.~\ref{fig:cone}. \begin{figure} \picplace{5cm} \caption[]{Cylindrical and conical geometries assumed for the spectral analysis.} \label{fig:cone} \end{figure} As discussed in Sect.~\ref{sec:hrisrc}, several point sources were detected within or in close proximity to the diffuse wind emission. While it is possible that these represent regions of enhanced diffuse emission rather than truly independent sources, they were masked out of the wind regions to prevent any possible contamination of the wind spectra by foreign flux.
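In the diverging geometry each strip becomes a conical frustum rather than a cylindrical slab. A sketch of the corresponding volume is given below; note that interpreting the quoted $50\hbox{$^\circ$}$ opening angle as the full angle (so the radius grows as $\tan 25\hbox{$^\circ$}$) is an assumption, since the convention is not specified.

```python
import math

def frustum_volume(r1, r2, h):
    """Volume of a truncated cone (frustum) with end radii r1, r2, height h."""
    return math.pi * h * (r1**2 + r1 * r2 + r2**2) / 3.0

def diverging_radii(z_lo, z_hi, r_base, half_angle_deg):
    """End radii of one strip of a diverging outflow, r(z) = r_base + z*tan(alpha)."""
    t = math.tan(math.radians(half_angle_deg))
    return r_base + z_lo * t, r_base + z_hi * t

# first strip above the plane: r_base = 0.03 deg, h = 0.01 deg (angular
# units; scale by the distance to convert to cm, as for the cylinder)
r1, r2 = diverging_radii(0.0, 0.01, 0.03, 25.0)
v = frustum_volume(r1, r2, 0.01)
```

In the limit $r_{1}=r_{2}$ the frustum reduces to the cylindrical slab, so the two geometries can be compared strip by strip.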
The spectra were then corrected for \mbox{dead-time} and exposure. Raymond \& Smith (1977) hot-plasma models were then fitted to the strip spectra using maximum likelihood, initially allowing all parameters to optimise. It was found that the metallicities consistently fitted low, $0.00$--$0.07 Z_{\odot}$. In view of the present uncertainties in the accuracy of {\em ROSAT} metallicities (see Bauer \& Bregman 1996), and the expectation that the metallicity should not vary greatly through the wind, we refitted all the spectra with the metal abundance fixed at $0.05 Z_{\odot}$, which reduced the scatter in the other fit parameters. All results for the wind quoted below are derived from these $Z=0.05 Z_{\odot}$ fits. Such a low metallicity, whilst surprising, is supported by the results of recent ASCA observations (Ptak {\rm et al.\thinspace} 1996; Tsuru 1996). ASCA has the spectral resolution to clearly distinguish the iron L complex, which is the strongest metallicity indicator for plasmas of this temperature, and the implied iron abundance in the soft spectral component is 0.04--0.05 $Z_{\odot}$ (with a typical error of $\approx 0.02$), in good agreement with our results. Since emission lines are so strong in the {\em ROSAT} energy band for $T\sim 0.5 {\rm\thinspace keV}$ plasma, metallicity trades off against emission measure in fitted spectra. This can be clearly seen in Fig.~\ref{fig:zem}, which shows the error ellipses at 68\% confidence for {\em two} interesting parameters (optimising temperature and absorbing column) for all of the regions used in the analysis. Hence any error in our assumed metallicity will lead to a corresponding error in derived emission measure, and hence in inferred gas density. As can be seen from Fig.~\ref{fig:zem}, the error envelopes generally fall below $Z=0.2 Z_{\odot}$.
Hence, taking $Z=0.2 Z_{\odot}$ as the highest plausible value for the metallicity (though this falls well outside the ASCA errors), our emission measures would be overestimated by a factor $\sim4$, and hence the densities would be a factor $\sim2$ too high. Such an error is quite modest compared to those introduced by the unknown filling factor and uncertain geometry of the emitting material. \begin{figure*} \picplace{16.5cm} \caption{Error ellipses for metallicity against emission measure at 68\% confidence for two interesting parameters, for all the wind regions. Regions n7 and n8 are clearly peculiar, as discussed in Sect.~4.5. Region n5 is poorly constrained.} \label{fig:zem} \end{figure*} As previous X-ray observations of M82 have been unable to determine whether a hot plasma or a power law gives a better fit, we also fitted power law spectra to the data. These were found to give significantly poorer fits than the hot plasma fits (see {\it e.g.\ } Fig.~\ref{fig:fits}) for all but the outer regions, where the statistics were too poor to tell. Although maximum likelihood gives no absolute goodness-of-fit measure, the relative likelihood between two fits to the same data can be derived from the Cash C-statistic. From Cash (1979), the relative probabilities are $P_{\rm 2}/P_{\rm 1}=\exp{(\Delta C_{\rm 12}/ 2)}$, where $\Delta C_{\rm 12}=C_{\rm 1} - C_{\rm 2}$ is the change in Cash statistic between the two fits. For the inner wind, contamination-corrected Raymond \& Smith fits are clearly superior; {\it e.g.\ } for region n2, the hot plasma fit is $\sim400$ times more probable than the best-fit power law. \begin{figure*} \picplace{10cm} \caption[]{Spectral fits for two of the wind regions, n2 (left) and n6 (right), representative of the range in quality of the spectra obtained. Normalised background-subtracted spectra are shown overlaid with power law (dashed line) and contamination corrected Raymond \& Smith (solid line) best fits.
For the inner regions (such as n2) the power law is clearly a poorer fit than the Raymond \& Smith model. } \label{fig:fits} \end{figure*} \subsection{Nuclear contamination of the wind} \label{sec:contam} Given the presence of an extremely luminous hard point source (nearly a third of all counts detected from M82 with the PSPC are within a PSF sized region of $r\sim 30\arcm\hskip -0.1em\arcm$) at the centre of M82, and the increasing size of the PSPC PSF at higher energies, one expects some contamination of the wind emission in the inner strips by photons from the nuclear source. We can roughly assess the level of contamination by asking what fraction of the flux within the two innermost wind regions (n1 and s1) is due to scattered nuclear flux. If we assume all the flux within a $r=0.02\hbox{$^\circ$}$ region centred on the nuclear source were due to a point source, then the fraction of this flux scattered into wind region n1 is 15\% of the total flux observed in n1. For the innermost southern region, s1, the value is 12\%. These are overestimates, as only $\sim\frac{1}{2}$ of the flux within $r=0.02\hbox{$^\circ$}$ is due to the point source. We allow for nuclear contamination by using two-component models for the wind regions: a soft Raymond \& Smith plasma for the wind, and a harder bremsstrahlung component for the nuclear contamination. The bremsstrahlung component in each strip was fixed: the absorbing column and temperature taking the values derived from the nuclear fit discussed in Sect.~\ref{sec:nucleus}, and the contaminating flux being estimated from the (energy dependent) PSPC point spread function. \subsection{X-ray morphology} \begin{figure*} \picplace{9cm} \caption{Surface brightness along the minor axis in a slice $0.1\hbox{$^\circ$}$ wide. Diamonds represent the emission to the south, crosses the northern data. In each case the line represents the data with sources (other than the nuclear source) excluded.
The south is brighter than the north within $\sim2{\rm\thinspace kpc}$ in both the \mbox{X-ray} and the radio (Seaquist \& Odegard 1991). Diffuse emission extends out to $\sim6{\rm\thinspace kpc}$. The emission seen in the north at $z\sim7.5{\rm\thinspace kpc}$ is a point source (number 5 in Fig.~3).} \label{fig:surf-b} \end{figure*} It is clear from Fig.~\ref{fig:wind-grey} that the diffuse emission is not symmetric around the plane of the galaxy. The surface brightness in a strip of width $0.10\hbox{$^\circ$}$ parallel to the minor axis (Fig.~\ref{fig:surf-b}) is initially higher to the south, but then drops more rapidly than to the north. Beyond $2{\rm\thinspace kpc}$ ($\sim120\arcm\hskip -0.1em\arcm$) from the nucleus the northern wind is consistently brighter. This asymmetry is also seen in the radio data of Seaquist \& Odegard (1991). Within $100\arcm\hskip -0.1em\arcm$ of the nucleus, the brightness profile at $20{\rm\thinspace cm}$ is brighter towards the south, while beyond $100\arcm\hskip -0.1em\arcm$ the north is brighter. Emission can be traced out to $z\approx0.1\hbox{$^\circ$}$ ($\equiv6{\rm\thinspace kpc}$) from the nucleus along the minor axis, before dropping into the noise. The {\em Einstein} IPC estimate of emission extending to $\sim9\hbox{$^\prime$}$ from the nucleus was due to the inclusion of a point source (source 5 in Table~\ref{tab:pspc-srcs}) in the diffuse emission (see Fig. 3 in Fabbiano 1988). \subsection{Comparison with H{\sc i} \, distribution} As can be seen in Fig.~\ref{fig:h1-xray}, the X-ray emission appears to \mbox{anti-correlate} with the large scale distribution of H{\sc i} \, surrounding M82. To the north-east, the \mbox{X-ray} emission appears to be bounded by the northern tidal streamer. Yun {\rm et al.\thinspace} (1993) claim this to be M82's tidally disrupted outer H{\sc i} \, disk. To the north-west, another streamer of H{\sc i} \, intrudes onto the \mbox{X-ray} distribution on the eastern edge of regions n4 and n5. 
This northern H{\sc i} \, has a velocity consistent with being on the far side of M82, as is the northern wind. The southern wind appears confined between the clump of hydrogen to the south-east and the beginnings of the southern tidal streamer to the south-west. The south-eastern H{\sc i} \, clump shows a broad blueshifted line wing which Yun {\rm et al.\thinspace} (1993) note may be due to the impact of a wind. The H{\sc i} \, in the tidal streamers could provide a natural obstacle for the wind, constraining its expansion. The northern and southern streamers each contain $\sim6\times10^{7} \hbox{$\thinspace M_{\odot}$}$, similar to the mass of material we infer below for the soft X-ray emitting material in the wind; so the H{\sc i} \, could potentially form a significant barrier for the wind. In the inner regions, Fig.~\ref{fig:h1-xray} shows significant amounts of H{\sc i} \, in the region occupied by the optical filamentation and inner wind. The inner H{\sc i} \, displays a velocity gradient along the minor axis in the same sense as the ${\rm H\alpha}$ emission; hence the H{\sc i} \, probably consists of material swept out of the disk by the wind. As is discussed below, the X-ray spectra show signs of excess absorption. \begin{figure*} \picplace{9cm} \caption{Comparison between X-ray and H{\sc i} \, distributions. A greyscale \mbox{X-ray} image (lowest tone corresponding to a flux of $1.44\times10^{-3} {\rm counts} \hbox{\s$^{-1}\,$}$ arcmin$^{-2}$), overlaid with contours of H{\sc i} \, column density (adapted from Yun {\rm et al.\thinspace} 1993). The contours correspond to $2.7\times10^{19} \hbox{$\cm^{-2}\,$}$ times 1, 2, 3, 4, 6, 10, 15 and 25.} \label{fig:h1-xray} \end{figure*} \subsection{Wind parameters} \label{sec:windres} The results of the spectral fitting to the wind regions are given in Table~\ref{tab:wind-res}.
Contamination of the spectral properties of the wind by the nuclear point source has been allowed for as discussed in Sect.~\ref{sec:contam}. As can be seen from Fig.~3 there is little wind emission beyond regions n8 and s6, and no useful spectral parameters could be derived for these outermost strips. The absorbing column is found to decrease as the distance from the plane of the galaxy increases. Only in the south does the column drop to the Stark (1992) value of $4.0\pm{0.5}\times10^{20} \hbox{$\cm^{-2}\,$}$. It can be seen from Fig.~\ref{fig:h1-xray} that, on the basis of the H{\sc i} \, distribution, excess $\hbox{$N_{\rm H}$}$ would be expected to extend only to $z\sim2{\rm\thinspace kpc}$, and the magnitude of the observed excess for the north ($\sim3\times10^{20} \hbox{$\cm^{-2}\,$}$) is larger than expected, except in the nuclear region. Absorption in the ROSAT band arises predominantly from He, C, N, and O rather than H{\sc i}; hence, if the absorbing gas has a low metallicity such as is inferred for the hot \mbox{X-ray} emitting gas, the absorbing masses required at large heights above the plane are several $10^{7}$ to several $10^{8} M_{\odot}$. Fig.~\ref{fig:coltemp} shows 68\% confidence error ellipses for column and temperature. Excess absorption is required to the north, although the column for the south drops to close to the Stark value. The temperatures are well constrained, and do not depend strongly on the fitted column. The origin of the excess absorption to the north remains to be determined. \begin{figure*} \picplace{16.5cm} \caption{Error ellipses for column against temperature at 68\% confidence in two interesting parameters for the wind regions. Regions n7 and n8 are peculiar, as discussed in Sect.~4.5. The dotted line shows the Stark column.
For the northern regions (n1--n6) it is clear that: a) altering the temperature will not remove the need for excess absorption, and b) the temperatures for the innermost regions are well determined.} \label{fig:coltemp} \end{figure*} Temperature and density both decrease with increasing distance along the minor axis $z$, although the temperature drop is small (Figs~\ref{fig:temp}--\ref{fig:dens}). The density is initially higher to the south, but then drops below the density to the north beyond $\sim2{\rm\thinspace kpc}$, as indicated by the surface brightness profiles (Fig.~\ref{fig:surf-b}). \begin{table*} \caption[]{Spectral fits to the wind regions. The Stark column is $0.40\pm{0.05}\times10^{21} \hbox{$\cm^{-2}\,$}$. The metallicity is frozen at $0.05 Z_{\odot}$.} \begin{flushleft} \begin{tabular}{lllll} \noalign{\smallskip} \hline \noalign{\smallskip} Region & Counts & \multicolumn{3}{l} {Raymond-Smith parameters} \\ && $\hbox{$N_{\rm H}$}$ & T & EM \\ && (\tpow{21} $\hbox{$\cm^{-2}\,$}$) & (${\rm\thinspace keV}$) & (\tpow{56} ${\rm\thinspace cm}^{3}$ / $10 {\rm\thinspace kpc}^{2}$) \\ \noalign{\smallskip} \hline \noalign{\smallskip} n1 & $ 1714\pm{43}$ & $ 1.26^{+0.51}_{-0.21} $ & $ 0.65^{+0.04}_{-0.06} $ & $ 44.7^{+10.2}_{-3.6} $ \\ n2 & $ 810\pm{30}$ & $ 0.85^{+0.13}_{-0.10} $ & $ 0.44^{+0.04}_{-0.04} $ & $ 26.0^{+4.7}_{-3.5} $ \\ n3 & $ 579\pm{26}$ & $ 0.73^{+0.11}_{-0.09} $ & $ 0.41^{+0.04}_{-0.04} $ & $ 20.0^{+3.9}_{-3.1} $ \\ n4 & $ 416\pm{23}$ & $ 0.73^{+0.15}_{-0.11} $ & $ 0.41^{+0.05}_{-0.05} $ & $ 13.9^{+3.5}_{-2.6} $ \\ n5 & $ 295\pm{20}$ & $ 0.96^{+0.63}_{-0.21} $ & $ 0.32^{+0.05}_{-0.12} $ & $ 16.3^{+10.4}_{-4.3} $ \\ n6 & $ 198\pm{17}$ & $ 0.94^{+0.57}_{-0.24} $ & $ 0.33^{+0.07}_{-0.07} $ & $ 9.9^{+6.5}_{-3.2} $ \\ n7 & $ 116\pm{14}$ & $ 0.64^{+0.39}_{-0.18} $ & $ 0.76^{+0.20}_{-0.15} $ & $ 2.4^{+0.4}_{-0.3} $ \\ n8 & $ 107\pm{14}$ & $ 0.61^{+0.69}_{-0.19} $ & $ 0.89^{+0.27}_{-0.14} $ & $ 2.3^{+0.4}_{-0.3} $ \\ n9 & $ 40\pm{12}$ & $-$ &
$-$ & $-$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} s1 & $ 2859\pm{54}$ & $ 0.89^{+0.06}_{-0.05} $ & $ 0.60^{+0.02}_{-0.02} $ & $ 69.6^{+3.3}_{-2.9} $ \\ s2 & $ 1494\pm{40}$ & $ 0.84^{+0.08}_{-0.07} $ & $ 0.56^{+0.03}_{-0.03} $ & $ 38.5^{+2.9}_{-2.5} $ \\ s3 & $ 621\pm{27}$ & $ 0.61^{+0.08}_{-0.06} $ & $ 0.52^{+0.05}_{-0.05} $ & $ 14.7^{+2.1}_{-1.5} $ \\ s4 & $ 246\pm{19}$ & $ 0.74^{+0.30}_{-0.16} $ & $ 0.44^{+0.09}_{-0.07} $ & $ 7.6^{+3.1}_{-1.8} $ \\ s5 & $ 185\pm{17}$ & $ 0.50^{+0.16}_{-0.11} $ & $ 0.45^{+0.11}_{-0.08} $ & $ 4.6^{+1.9}_{-1.1} $ \\ s6 & $ 141\pm{16}$ & $ 0.52^{+0.16}_{-0.13} $ & $ 0.33^{+0.09}_{-0.06} $ & $ 5.3^{+2.4}_{-1.8} $ \\ s7 & $ 115\pm{15}$ & $-$ & $-$ & $-$ \\ s8 & $ 86\pm{14}$ & $-$ & $-$ & $-$ \\ s9 & $ 62\pm{13}$ & $-$ & $-$ & $-$ \\ \noalign{\smallskip} \hline \end{tabular} \end{flushleft} \label{tab:wind-res} \end{table*} Under our assumed cylindrical geometry, it is possible to derive further useful gas parameters (see Table~\ref{tab:wind-pars}). We assume a distance of $3.63 {\rm\thinspace Mpc}$ to M82 throughout. The volume $V$ is derived from the geometry, of which the emitting gas is assumed to occupy some fraction $\eta$ (the filling factor of the hot gas). The fitted emission measure then equals $\eta \, n^{2}_{e} \, V$, and (assuming an ionised hydrogen plasma for simplicity) the mean electron density, total gas mass $M_{\mbox{\small gas}}=m_{\rm H} \, n_{e} \, V$, thermal energy $E_{\mbox{\small th}}=3 \, n_{e}\, V \,{\rm k} T$, bulk kinetic energy $K_{\mbox{\small gas}}=\frac{1}{2} \, M_{\mbox{\small gas}} v^{2}_{\rm gas}$, cooling timescale $t_{\mbox{\small cool}}=E_{\mbox{\small th}} / L_{\rm X}$, mass deposition rate $\dot{M}_{\mbox{\small cool}}=M_{\mbox{\small gas}} / t_{\mbox{\small cool}}$, and sound speed $C_{\rm sound}=\sqrt{2 \, \gamma \, {\rm k} \, T / m_{\rm H}}$ can then be calculated.
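This chain of derived quantities can be sketched numerically as follows. This is a minimal illustration assuming $\eta=1$ and a pure ionised hydrogen plasma, as in the text; the constants are rounded standard cgs values, and the emission measure passed in is the tabulated value rescaled from the $10{\rm\thinspace kpc}$ normalisation to the adopted distance.

```python
import math

# rounded standard cgs constants
KEV_ERG = 1.602e-9   # erg per keV
M_H = 1.673e-24      # hydrogen mass (g)
MSUN = 1.989e33      # solar mass (g)
MYR_S = 3.156e13     # seconds per Myr

def wind_parameters(em, volume, kT_keV, lx, eta=1.0, gamma=5.0 / 3.0):
    """Gas parameters from a fitted emission measure EM = eta * n_e**2 * V.
    em: true emission measure (cm^-3), i.e. the tabulated value times
    (D / 10 kpc)**2; volume in cm^3; lx: intrinsic luminosity (erg/s)."""
    n_e = math.sqrt(em / (eta * volume))                 # mean electron density
    m_gas = M_H * n_e * volume * eta                     # total gas mass
    e_th = 3.0 * eta * n_e * volume * kT_keV * KEV_ERG   # thermal energy (e + p)
    t_cool = e_th / lx                                   # cooling timescale
    c_sound = math.sqrt(2.0 * gamma * kT_keV * KEV_ERG / M_H)
    return dict(n_e=n_e, m_gas=m_gas, e_th=e_th, t_cool=t_cool,
                c_sound=c_sound, mdot_cool=m_gas / t_cool)

# region n1: EM = 44.7e56 * (3.63 Mpc / 10 kpc)**2 cm^-3, V = 5.85e65 cm^3,
# kT = 0.65 keV, L_X = 33.8e38 erg/s (values from the wind tables)
n1 = wind_parameters(44.7e56 * 363.0**2, 5.85e65, 0.65, 33.8e38)
```

For region n1 this reproduces $n_{\rm e}\approx0.032 \hbox{$\cm^{-3}\,$}$, $M_{\rm gas}\approx1.5\times10^{7} \hbox{$\thinspace M_{\odot}$}$, $C_{\rm sound}\approx460 \hbox{$\km\s^{-1}\,$}$ and $t_{\rm cool}\approx540$ Myr, matching Table~\ref{tab:wind-pars} to within a few per cent.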
The intrinsic ({\it i.e.\ } corrected for both galactic and intrinsic absorption) X-ray luminosity $L_{X-in}$ in the {\em ROSAT} band is also given. \begin{table*} \caption[]{Derived gas parameters for the wind, assuming a distance of $3.63 {\rm\thinspace Mpc}$ to M82. $\eta$ is the volume filling factor of the gas. All parameters have been derived assuming $\eta=1$. Conversion factors to arbitrary $\eta$ are given. $v_{1000}$ is the outflow velocity of the \mbox{X-ray} emitting gas in units of $1000 \hbox{$\km\s^{-1}\,$}$, which may not be the same as the wind velocity. } \begin{flushleft} \begin{tabular}{llllllllll} \noalign{\smallskip} \hline \noalign{\smallskip} {\scriptsize Region} & {\scriptsize Volume} & {\scriptsize $n_{\rm e}$} & {\scriptsize $M_{\rm gas}$} & {\scriptsize $E_{\rm th}$} & {\scriptsize $C_{\rm sound}$} & {\scriptsize $K_{\rm gas}$} & {\scriptsize $L_{\rm X-in}$} & {\scriptsize $t_{\rm cool}$} & {\scriptsize $\dot{M}_{\rm cool}$} \\ & {\scriptsize(\tpow{65}${\rm\thinspace cm}^{3}$)} & {\scriptsize (\tpow{-3} $\hbox{$\cm^{-3}\,$}$)} & {\scriptsize($10^{6} M_{\odot}$)} & {\scriptsize($10^{55}$ ergs)} & {\scriptsize $\hbox{$\km\s^{-1}\,$}$} & {\scriptsize($10^{55}$ ergs)} & {\scriptsize($10^{38} \hbox{$\erg\s^{-1}\,$}$)} & {\scriptsize(Myr)} & {\scriptsize($\hbox{$\thinspace M_{\odot}$} yr^{-1}$)} \\ && {\scriptsize($\times1/\sqrt{\eta}$)} & {\scriptsize($\times\sqrt{\eta}$)} & {\scriptsize($\times\sqrt{\eta}$)} & & {\scriptsize($v_{1000}^{2}\times\sqrt{\eta}$)} & {\scriptsize($0.1-2.4 {\rm\thinspace keV}$)} & {\scriptsize($\times\sqrt{\eta}$)} & \\ \noalign{\smallskip} \hline \noalign{\smallskip} n1 & 5.85 & $ 31.7^{+3.6}_{-1.4} $ & 15.4 & 5.6 & 460 & 15.4 & 33.8 & 520 & 0.030 \\ n2 & 5.85 & $ 24.2^{+2.2}_{-1.6} $ & 11.8 & 2.9 & 370 & 11.8 & 16.2 & 560 & 0.021 \\ n3 & 5.85 & $ 21.2^{+2.1}_{-1.7} $ & 10.4 & 2.4 & 360 & 10.4 & 11.8 & 640 & 0.016 \\ n4 & 5.85 & $ 17.7^{+2.2}_{-1.6} $ & 8.6 & 2.0 & 360 & 8.6 & 8.3 & 750 & 0.011 \\ n5 & 5.85 & $ 
19.2^{+6.1}_{-2.5} $ & 9.4 & 1.7 & 320 & 9.4 & 8.3 & 640 & 0.015 \\ n6 & 4.17 & $ 17.7^{+5.8}_{-2.9} $ & 6.2 & 1.1 & 330 & 6.2 & 5.2 & 690 & 0.009 \\ n7 & 4.15 & $ 8.7^{+0.7}_{-0.5} $ & 3.0 & 1.3 & 490 & 3.0 & 1.9 & 2160 & 0.001 \\ n8 & 5.85 & $ 7.1^{+0.6}_{-0.4} $ & 3.5 & 1.7 & 540 & 3.5 & 1.8 & 3050 & 0.001 \\ \noalign{\smallskip} \hline \noalign{\smallskip} \multicolumn{3}{l}{{\scriptsize Sub-total (North)}} & 68.4 & 18.6 & & 68.4 & 87.2 & & 0.104\\ \noalign{\smallskip} \hline \noalign{\smallskip} s1 & 5.85 & $ 39.6^{+0.9}_{-0.8} $ & 19.3 & 6.4 & 440 & 19.3 & 51.0 & 400 & 0.048 \\ s2 & 5.85 & $ 29.4^{+1.1}_{-0.9} $ & 14.4 & 4.5 & 430 & 14.4 & 27.5 & 520 & 0.028 \\ s3 & 5.85 & $ 18.2^{+1.3}_{-0.9} $ & 8.9 & 2.6 & 410 & 8.9 & 10.1 & 800 & 0.011 \\ s4 & 5.85 & $ 13.0^{+2.7}_{-1.6} $ & 6.4 & 1.6 & 370 & 6.4 & 4.7 & 1050 & 0.006 \\ s5 & 5.85 & $ 10.1^{+2.1}_{-1.3} $ & 5.0 & 1.2 & 380 & 5.0 & 2.9 & 1370 & 0.004 \\ s6 & 4.55 & $ 10.9^{+2.5}_{-1.9} $ & 4.1 & 0.8 & 330 & 4.1 & 2.7 & 880 & 0.005 \\ \noalign{\smallskip} \hline \noalign{\smallskip} \multicolumn{3}{l}{{\scriptsize Sub-total (South)}} & 58.1 & 17.0 & & 58.1 & 98.9 & & 0.102 \\ \noalign{\smallskip} \hline \noalign{\smallskip} \multicolumn{3}{l}{{\scriptsize Total}} & 126.4 & 35.6 & & 126.4 & 186.2 & & 0.206 \\ \noalign{\smallskip} \hline \end{tabular} \end{flushleft} \label{tab:wind-pars} \end{table*} In order to quantify the trends in the data, particularly in the behaviour of the temperature and density with increasing distance along the wind, we perform weighted \mbox{least-squares} fits to the data for north and south separately. We also regress temperature against density using weighted orthogonal regression (Feigelson \& Babu 1992), allowing for the significant errors on both axes, using the package ODRPACK (Boggs {\rm et al.\thinspace} 1992). Table~\ref{tab:reg-fits} gives the fitted slopes, while the data and fitted lines are plotted in Figs. ~\ref{fig:temp} - \ref{fig:temp-dens}. 
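The orthogonal regression described above can be reproduced in Python via \texttt{scipy.odr}, which wraps the same ODRPACK library cited in the text. The sketch below is schematic only: it fits a straight line to synthetic $\log T$ : $\log n_{\rm e}$ points with errors on both axes, not to the PSPC measurements themselves.

```python
# Schematic orthogonal-distance regression with errors on BOTH axes,
# using scipy.odr (the Python wrapper around ODRPACK).  The data are
# synthetic stand-ins generated around the adiabatic slope of 2/3.
import numpy as np
from scipy import odr

rng = np.random.default_rng(1)
log_ne = np.linspace(-2.0, -1.3, 6)            # log10 n_e (cm^-3)
log_t = (2.0 / 3.0) * log_ne + 1.0             # true relation, slope 2/3
sx = np.full_like(log_ne, 0.03)                # 1-sigma error on x
sy = np.full_like(log_t, 0.03)                 # 1-sigma error on y
x = log_ne + rng.normal(0.0, 0.03, log_ne.size)
y = log_t + rng.normal(0.0, 0.03, log_t.size)

linear = odr.Model(lambda beta, x: beta[0] * x + beta[1])
data = odr.RealData(x, y, sx=sx, sy=sy)        # weights from the errors
fit = odr.ODR(data, linear, beta0=[1.0, 0.0]).run()
slope, intercept = fit.beta                    # fitted parameters
slope_err, intercept_err = fit.sd_beta         # 1-sigma uncertainties
```

The recovered slope scatters around the input value of $2/3$ within its quoted uncertainty, which is how the confidence intervals of Table~\ref{tab:reg-fits} are to be read.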
The fits used data for regions n1-n6 and s1-s6 only; regions n7 \& n8 were excluded as they clearly deviate from the general trend in the North. The elevated temperatures of n7 and n8 are difficult to explain. We have checked that the two point sources which fall in the vicinity have been effectively excluded from the data. One obvious possibility is that the temperature rise is due to a shock; however, in this case the density would also be expected to rise, whereas it appears {\it lower} than expected from the trend of the inner six northern regions. A hardness map shows a lack of soft flux at the edges of the wind in regions n7 and n8, with no corresponding lack of hard flux, but the regions of reduced soft flux do not seem to correspond to areas of excess H{\sc i} \, and hence higher absorption. We have investigated the possibility of the excess hard flux being due to energy-dependent scattering from inner regions of the wind, but such contamination from one region into the next is at too low a level, {\em decreases} in importance with $z$, and is not strongly energy dependent. Also, it should be noted that there is no corresponding temperature rise in the south. \begin{table*} \caption[]{Results of the linear regression applied separately to the data from both north and south winds as described in the text. $z$ is the distance along the minor axis. Results are given for both the contamination ``corrected'' and uncorrected data to demonstrate the effect of the contamination.
To assess the effect of the chosen geometry, results for a truncated conical geometry with radius on the major axis $r_{\rm base}=0.03\hbox{$^\circ$}$ and an opening angle of $50\hbox{$^\circ$}$ are also shown.} \begin{flushleft} \begin{tabular}{llllll} \noalign{\smallskip} \hline \noalign{\smallskip} \multicolumn{6}{l}{Cylinder, radius $r=0.05\hbox{$^\circ$}$ Corrected for nuclear contamination of the wind.} \\ \noalign{\smallskip} \hline \noalign{\smallskip} Relationship & Wind & \multicolumn{2}{l}{Slope} & \multicolumn{2}{l}{Intercept} \\ & & & {\scriptsize 95\% confidence interval} & & {\scriptsize 95\% confidence interval} \\ \noalign{\smallskip} \hline \noalign{\smallskip} T:$n_{e}$ & North & $1.047\pm{0.206}$ & 0.474 to 1.620 & $1.374\pm{0.335}$ & 0.445 to 2.303 \\ & South & $0.262\pm{0.058}$ & 0.100 to 0.423 & $0.146\pm{0.087}$ & $-0.097$ to 0.389 \\ \noalign{\smallskip} T:$z$ & North & $-0.474\pm{0.078}$ & $-0.691$ to $-0.258$ & $1.208\pm{0.251}$ & 0.511 to 1.906 \\ & South & $-0.227\pm{0.060}$ & $-0.394$ to $-0.061$ & $0.458\pm{0.186}$ & 0.057 to 0.974 \\ \noalign{\smallskip} $n_{e}$:$z$ & North & $-0.467\pm{0.045}$ & $-0.590$ to $-0.343$ & $0.117\pm{0.144}$ & $-0.517$ to 0.283 \\ & South & $-0.836\pm{0.108}$ & $-1.137$ to $-0.535$ & $1.095\pm{0.334}$ & 0.168 to 2.022 \\ \noalign{\smallskip} \hline \noalign{\smallskip} \multicolumn{6}{l}{Cylinder, radius $r=0.05\hbox{$^\circ$}$. 
No contamination correction.} \\ \noalign{\smallskip} \hline \noalign{\smallskip} Relationship & Wind & \multicolumn{2}{l}{Slope} & \multicolumn{2}{l}{Intercept} \\ & & & {\scriptsize 95\% confidence interval} & & {\scriptsize 95\% confidence interval} \\ \noalign{\smallskip} \hline \noalign{\smallskip} T:$n_{e}$ & North & $0.821\pm{0.119}$ & 0.490 to 1.151 & $1.074\pm{0.195}$ & 0.532 to 1.616 \\ & South & $0.185\pm{0.052}$ & 0.041 to 0.329 & $0.079\pm{0.078}$ & $-0.137$ to 0.295 \\ \noalign{\smallskip} T:$z$ & North & $-0.467\pm{0.058}$ & $-0.628$ to $-0.305$ & $1.236\pm{0.188}$ & 0.715 to 1.758 \\ & South & $-0.188\pm{0.039}$ & $-0.295$ to $-0.081$ & $0.384\pm{0.120}$ & $-0.052$ to 0.715 \\ \noalign{\smallskip} $n_{e}$:$z$ & North & $-0.568\pm{0.072}$ & $-0.769$ to $-0.368$ & $-0.189\pm{0.233}$ & $-0.457$ to 0.835 \\ & South & $-0.922\pm{0.114}$ & $-1.239$ to $-0.606$ & $1.353\pm{0.351}$ & 0.378 to 2.328 \\ \noalign{\smallskip} \hline \noalign{\smallskip} \multicolumn{6}{l}{Truncated cone, radius $r_{\rm base}=0.03\hbox{$^\circ$}$, opening angle $\theta_{\rm op}=50\hbox{$^\circ$}$, contamination corrected.} \\ \noalign{\smallskip} \hline \noalign{\smallskip} Relationship & Wind & \multicolumn{2}{l}{Slope} & \multicolumn{2}{l}{Intercept} \\ & & & {\scriptsize 95\% confidence interval} & & {\scriptsize 95\% confidence interval} \\ \noalign{\smallskip} \hline \noalign{\smallskip} T:$n_{e}$ & North & $0.641\pm{0.135}$ & 0.266 to 1.0170 & $0.695\pm{0.207}$ & 0.121 to 1.270 \\ & South & $0.204\pm{0.051}$ & 0.063 to 0.346 & $0.037\pm{0.072}$ & $-0.164$ to 0.237 \\ \noalign{\smallskip} T:$z$ & North & $-0.476\pm{0.089}$ & $-0.724$ to $-0.227$ & $1.223\pm{0.284}$ & 0.436 to 2.010 \\ & South & $-0.228\pm{0.068}$ & $-0.418$ to $-0.038$ & $0.459\pm{0.213}$ & $-0.132$ to 1.049 \\ \noalign{\smallskip} $n_{e}$:$z$ & North & $-0.762\pm{0.027}$ & $-0.837$ to $-0.687$ & $0.887\pm{0.086}$ & 0.649 to 1.125 \\ & South & $-1.103\pm{0.118}$ & $-1.431$ to $-0.775$ & $2.023\pm{0.365}$ & 
1.010 to 3.036 \\ \noalign{\smallskip} \hline \end{tabular} \end{flushleft} \label{tab:reg-fits} \end{table*} \begin{figure*} \picplace{9cm} \caption[]{Absorbing column against minor axis distance for Northern (crosses) and Southern (diamonds) winds. The column is in units of $10^{21} \hbox{$\cm^{-2}\,$}$. The Stark (1992) column is $0.40\pm{0.05}$ in these units. As the northern side of M82 is inclined away from us, the initially higher column to the north is entirely natural.} \label{fig:hcol} \end{figure*} \begin{figure*} \picplace{9cm} \caption[]{Temperature against minor axis distance for Northern (data: crosses, regression line: dashed) and Southern (data: diamonds, regression line: solid) winds. Only the first six points (n1 -- n6 and s1 -- s6) are used in the regressions.} \label{fig:temp} \end{figure*} \begin{figure*} \picplace{9cm} \caption[]{Derived density against minor axis distance for Northern (data: crosses, regression line: dashed) and Southern (data: diamonds, regression line: solid) winds. Only the first six points (n1 -- n6 and s1 -- s6) are used in the regressions.} \label{fig:dens} \end{figure*} \begin{figure*} \picplace{9cm} \caption[]{Temperature against density for Northern (data: crosses, regression line: dashed) and Southern (data: diamonds, regression line: solid) winds. For an adiabatically expanding gas the slope of the $\log T : \log \rho$ regression line would be $\gamma-1 = \frac{2}{3}$.} \label{fig:temp-dens} \end{figure*} A systematic error in the background subtraction could possibly mimic a real trend with increasing distance along the wind due to the increasing importance of the background as the surface brightness of the emission decreases. To check the effects of this, the analysis was repeated for backgrounds 5\% over- and undersubtracted with respect to the ideal background described above. The fitted parameters were within $1\sigma$ of those from the standard background in all cases.
Hence over or undersubtraction is not a serious problem. \subsection{The effect of the assumed geometry} \label{sec:assum_geom} As discussed in Sect.~\ref{sec:an_wind}, M82's \mbox{X-ray} emission is not obviously well approximated by either a conical or a cylindrical outflow. In addition, the asymmetry between north and south makes the choice of a consistent geometry difficult. The inclination of the galactic plane to our line of sight will also blur any results, by superposition of physically differing regions, even if the plasma's properties do vary only with $z$. It is not unreasonable to expect variation perpendicular to the minor axis, leading to further superposition of different components along the line of sight. We checked for this by performing spectral fits for regions n3, n4, s3 \& s4, binning the emission into eastern, central and western spectra. The resulting temperatures across the wind fell within $1\sigma$ of each other, indicating that cross-wind variations are not a major effect. Let us suppose that the plasma properties vary not with $z$, but with radius from the galactic centre, as in a spherical or conical outflow. Our derived spectral properties, using a cylindrical geometry, will then differ from the true properties. For a conical distribution, the degree to which the fitted parameters deviate from the true parameters depends on the opening angle of the cone. The fitted parameters will be some flux-weighted average of the various components of different $r$ that fall within a slice at constant $z$. The effect will be worst for the inner regions, and for large cone opening angle, but will be small at large $z$. For a parameter $F$ that decreases with $r$, the fitted parameter $F$ at some $z$ will always be lower than the true value for $r=z$, due to the incorporation of flux from regions of greater $r$ and hence lower $F$. 
This will have the effect of flattening the slope of any real trend in the data as the discrepancy between true and fitted values is less at large $r$. In the present case, given the flatness of the temperature profile, the true temperature will not vary much from the fitted values. The emission measure will not be too far out either, given that the higher surface brightness emission along the minor axis dominates the fitted emission measure ({\it i.e.\ } the inclusion of the lower surface brightness emission further from the minor axis in our cylindrical geometry has rather little effect). The major source of bias is the volume, which we would overestimate at small $r$, and underestimate at large $r$. However, it should be borne in mind that the derived gas density depends only on the inverse square root of the assumed volume, $n_{\rm e} \propto \sqrt{{\rm EM} / V}$. In conclusion, if the soft X-ray emission does have a rather more divergent geometry than we have assumed, then the true temperatures will be similar to those obtained, while the density will drop off faster than our result, the inner densities being higher and the outer densities lower than those we have derived. In order to test the magnitude of these effects, the standard analysis above was repeated treating the emission as arising from two inverted truncated cones of radius $r_{0}=0.03 \hbox{$^\circ$}$ on the galactic major axis and \mbox{semi-opening} angle $\theta = 25 \hbox{$^\circ$}$, chosen by eye to give a reasonable approximation to the observed emission in the North (see Fig.~\ref{fig:cone}). The effect of this on the derived trends in $n$ and $T$ can be seen in Table~\ref{tab:reg-fits}. The slope of the $T:z$ relation is essentially unchanged, whilst the $n_{\rm e}:z$ trend becomes steeper, as expected. The implications of this will be discussed below.
\section{Comparison with wind models} \label{sec:discuss} Given previous claims that the X-ray emission from M82 is consistent with that expected from the adiabatic expansion of a free-flowing wind (Fabbiano 1988; Bregman {\rm et al.\thinspace} 1995), in particular that the density derived from the surface brightness falls off as $\sim r^{-2}$, it is worth investigating whether our spectral results are consistent with this idea. \subsection{Chevalier \& Clegg's adiabatic wind} The simplest useful model of a galactic wind is that of Chevalier \& Clegg (1985). This is just a spherically-symmetric outflow from a region of constant mass and energy injection (the starburst), ignoring the effects of gravity, radiative cooling and the presence of any ambient medium. The hot gas smoothly passes through a sonic transition at the radius of the starburst region, and then becomes a supersonic outflow which cools adiabatically. Provided the kinetic energy supplied by the numerous SN and stellar winds within the starburst is efficiently thermalised, the temperature of the hot gas within the starburst region will be $\sim 10^{8} {\rm\thinspace K}$ for reasonable mass and energy injection rates, making the neglect of the effects of gravity and radiative cooling valid. For regions well outside the starburst region, $r \gg R_{\star}$, the wind density $\rho \propto r^{-2}$, wind temperature $T \propto r^{-4/3}$ and thermal pressure $P_{\rm th} \propto r^{-10/3}$. McCarthy {\rm et al.\thinspace} (1987) compared the variation of pressure with radius in the optical filamentation to that predicted by the Chevalier \& Clegg (CC) model, under the assumption that the thermal pressure in the filaments would equal the total (thermal plus ram) pressure in the wind, achieving a good match within a kiloparsec of the nucleus.
Fabbiano (1988) reported in a reanalysis of the {\em Einstein} IPC data, that the radial distribution of the X-ray emission was consistent with $\rho \propto r^{-2}$, {\it i.e.\ } a free-flowing wind. The Chevalier \& Clegg model assumes a spherical outflow, in contrast to the cylindrical geometry we have adopted. What would we expect to see from a free wind in a more generalised outflow geometry, {\it e.g.\ } a bubble that has broken out of the disk of the galaxy and now allows free escape of the wind material? Assuming a constant mass loss rate, $\dot{M} = \rho_{r} A_{r} v_{r}$, where $A_{r}$ and $v_{r}$ are the \mbox{cross-sectional} area and the velocity of the flow, and $A_{r}$ of the form $A_{r} \propto r^{\beta}$, together with a constant outflow velocity, it follows that $\rho_{r} \propto r^{-\beta}$. For a cylinder, $\beta=0$, hence $\rho$ is constant. For a sphere or a cone of constant opening angle, $\beta=2$. The density for a cone is just a constant ratio higher ($4\pi/ \Omega$, where $\Omega$ is the opening angle in steradians) than that for a spherical wind for a constant $\dot{M}$. Obviously we can produce any $\rho_{r}$ and retain the concept of the emission as arising from a free wind, by choosing the appropriate geometry. However, for isentropic gas, the temperature $T_{r} \propto \rho^{\gamma-1}_{r}$, where $\gamma=5/3$ in the present case. In the absence of cooling (a good approximation given that the outflow timescale is $\sim10^{6}-10^{7} {\rm\thinspace yr}$ while the cooling timescales are $\sim10^{9} {\rm\thinspace yr}$) a free wind would expand adiabatically. Hence we expect a $\log T : \log \rho$ graph to have a slope of $\gamma-1=\frac{2}{3}$ for {\em any} free wind irrespective of geometry. 
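This geometry independence is easy to verify numerically. The sketch below (illustrative only; the exponent $\beta$ parameterises the cross-sectional area $A \propto r^{\beta}$, and $\beta=0$, the pure cylinder, is omitted since constant $\rho$ and $T$ make the slope undefined) builds $\rho(r)$ from mass continuity, assigns $T \propto \rho^{\gamma-1}$, and recovers a $\log T : \log \rho$ slope of $2/3$ in every case:

```python
# Illustrative check: an adiabatic free wind gives a log T : log rho slope
# of gamma - 1 = 2/3 regardless of the outflow geometry (beta is the
# exponent in the cross-sectional area A ~ r^beta from mass continuity).
import numpy as np

GAMMA = 5.0 / 3.0
r = np.linspace(1.0, 5.0, 50)              # radius, arbitrary units

for beta in (0.5, 1.0, 2.0):               # near-cylinder ... cone/sphere
    rho = r**(-beta)                       # continuity: rho ~ r^-beta
    temp = rho**(GAMMA - 1.0)              # isentropic (adiabatic) gas
    slope = np.polyfit(np.log10(rho), np.log10(temp), 1)[0]
    assert abs(slope - 2.0 / 3.0) < 1e-6   # slope is 2/3 for every beta
```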
Inspection of Table~\ref{tab:reg-fits} shows that although the northern emission is consistent with this, the southern emission is inconsistent at greater than the 95\% confidence level, in the sense that the temperature drops too slowly relative to the density -- {\it i.e.\ } the entropy {\it rises} outwards. Note that if the absorbing columns in the northern outer regions are overestimated, then the real temperatures for these regions will be higher, reducing the temperature gradient. This would make the north less isentropic and reduce the difference between north and south. However the fits clearly require a higher column than Stark for these regions. As discussed in Sect.~\ref{sec:assum_geom}, our use of a cylindrical geometry will lead us to underestimate the slope of the density profile if the emission comes from a conical outflow, whilst our temperature estimate is quite robust. This means that the inconsistency of the southern wind with an adiabatic outflow can only be {\em accentuated} if the flow diverges. The slopes for the re-analysis with a truncated conical geometry confirm this, the southern emission being less isentropic than for the cylindrical analysis. Bearing in mind the geometry issue, we can attempt a more quantitative comparison between the observed temperatures and densities and the CC model, using the forms for $\rho$ and $T$ from CC, and the scaling relationships for the mass and energy injection given by Heckman {\rm et al.\thinspace} (1993). These use the predicted deposition of mass, kinetic energy and momentum from a starburst calculated by Leitherer {\rm et al.\thinspace} (1992). 
For a constant star formation rate over a period of $5\times10^{7} {\rm\thinspace yr}$, solar metallicity and a normal Salpeter IMF extending up to $100 \hbox{$\thinspace M_{\odot}$}$, the various injection rates are related to the starburst bolometric luminosity by: \begin{eqnarray} \dot{E} & = & 8\times10^{42} \, L_{\rm bol,11} \hbox{$\erg\s^{-1}\,$} \\ \dot{M} & = & 3 \, L_{\rm bol,11} \hbox{$\thinspace M_{\odot}$} \hbox{$\yr^{-1}\,$} \end{eqnarray} \noindent where $L_{\rm bol,11}$ is the bolometric luminosity in units of $10^{11} \hbox{$\thinspace L_{\odot}$}$. For most starbursts, $L_{\rm FIR}$ is the dominant contributor to $L_{\rm bol}$, so it is a reasonable approximation to equate $L_{\rm bol}$ with $L_{\rm FIR}$. How valid are these scaling relations for M82? Visual inspection of Leitherer {\rm et al.\thinspace}'s figures shows that for all but the lowest metallicities the injection rates are approximately constant after $\sim5\times10^{6} {\rm\thinspace yr}$, similar to the age of M82's starburst (Rieke {\rm et al.\thinspace}, 1993). Applying the above scaling relations to Chevalier \& Clegg's model we obtain, for radii large compared to the starburst radius $R_{\star}$: \begin{eqnarray} n_{\rm e} (\hbox{$\cm^{-3}\,$}) & = & 8.14\times10^{-2}L_{\rm bol,11} \left(\frac{r}{R_{\star}}\right)^{-2} \left(\frac{4\pi}{\Omega}\right) \\ T ({\rm\thinspace keV}) & = & 4.13 \left(\frac{r}{R_{\star}}\right)^{-4/3} \left(\frac{4\pi}{\Omega}\right)^{2/3} \end{eqnarray} \noindent where $n_{\rm e}$ is the electron number density, $T$ the temperature and $\Omega$ the total solid angle through which the wind flows out. Note that the temperature is independent of $L_{\rm bol}$, being a ratio of the energy and mass injection rates.
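These scalings are straightforward to evaluate; the sketch below is our own, adopting M82's values of $L_{\rm bol,11}=0.4$ and $R_{\star}=200{\rm\thinspace pc}$ and a spherical outflow ($\Omega=4\pi$), and reproduces the Chevalier \& Clegg columns of Table~\ref{tab:models}:

```python
# Evaluate the Chevalier & Clegg free-wind scalings above for M82,
# assuming L_bol = 4e10 L_sun (L_bol,11 = 0.4), a starburst radius
# R_star = 200 pc, and a spherical outflow (Omega = 4 pi).
L_BOL_11 = 0.4       # bolometric luminosity / 1e11 L_sun
R_STAR_PC = 200.0    # characteristic starburst radius, pc

def cc_wind(r_pc, omega_frac=1.0):
    """omega_frac = Omega / (4 pi); returns (n_e in cm^-3, T in keV)
    for r_pc well outside the starburst radius."""
    x = r_pc / R_STAR_PC
    n_e = 8.14e-2 * L_BOL_11 * x**-2 / omega_frac
    t = 4.13 * x**(-4.0 / 3.0) / omega_frac**(2.0 / 3.0)
    return n_e, t

# Region 1 of the models table sits at z = 945 pc:
n_e1, t1 = cc_wind(945.0)    # ~1.5e-3 cm^-3 and ~0.5 keV
```

These predictions fall an order of magnitude below the observed densities in the inner regions, and the predicted temperature drops far faster than the nearly flat observed profile, as discussed below.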
For a bolometric luminosity of $4\times10^{10} \hbox{$\thinspace L_{\odot}$}$ (Rieke {\rm et al.\thinspace} 1993) and a characteristic radius, $R_{\star}$, for the starburst of $200 {\rm\thinspace pc}$, we can predict $\rho$ and $T$ at radii corresponding to the regions in Fig.~\ref{fig:wind-reg} -- see Table~\ref{tab:models}. Even for the innermost regions (n1 and s1) the CC model underestimates the density by an order of magnitude. Although the predicted temperature is almost equal to that observed near the galactic centre, the adiabatically expanding wind cools too quickly to match the PSPC data, dropping to $\sim0.05 {\rm\thinspace keV}$ in the outermost regions, whereas the observed temperature along the minor axis is almost constant at $0.4-0.5 {\rm\thinspace keV}$ and even rises in regions $n7$ and $n8$ to $\sim0.8 {\rm\thinspace keV}$. In summary, the densities and temperatures derived under our assumed cylindrical geometry differ greatly from those predicted under a spherical geometry by the CC model. Given an arbitrary outflow geometry, it is in principle possible for the adiabatic wind model to reproduce the observed shallower trend in density, however the departure of the southern wind from constant entropy is a robust result which is incompatible with the basic assumptions of the CC model. We conclude, therefore, that the observed \mbox{X-ray} emission cannot arise from a single phase, expanding wind. The fact that the entropy actually {\it rises} with $z$ in the south, means that the CC model cannot be saved by supposing that additional material is entrained into the flow as it proceeds. Although this might raise the density, it would cause the entropy to decline with $z$, accentuating the disagreement with our results. \subsection{A hydrostatic halo} It has been suggested (W.~Pietsch, private communication) that the extended \mbox{X-ray} emission around NGC 253 may be due not to a wind or shocked cloud emission, but to a static halo. 
For M82 the optical emission line velocity data and spectral index variations in the radio halo (McKeith {\rm et al.\thinspace} 1995; Seaquist \& Odegard 1991) demonstrate conclusively the presence of a galactic wind. Since the synchrotron emission is very similar to the \mbox{X-ray} in extent, it is difficult to argue that the \mbox{X-ray} emission is not associated with a wind. Can we rule out a hydrostatic halo on the basis of the observed \mbox{X-ray} properties? From the observed density and temperature the mass within some radius $r$ for a hydrostatic halo, assuming spherical symmetry, is: \begin{equation} M(<r) \approx 4.45\times10^{10} \, (\beta+\epsilon) \, T_{{\rm\thinspace keV}} \, r_{\rm kpc} \hbox{$\thinspace M_{\odot}$}, \end{equation} \noindent where the observed temperature $T\propto r^{-\epsilon}$ and density $n_{\rm e} \propto r^{-\beta}$. For regions n3 and s3 the predicted masses within $2.2{\rm\thinspace kpc}$ are $4\times10^{10} \hbox{$\thinspace M_{\odot}$}$ and $5\times10^{10} \hbox{$\thinspace M_{\odot}$}$ respectively. From G\"{o}tz {\rm et al.\thinspace} (1990) the mass within this radius, based on velocities measured in H{\sc i} \,, is $\sim3\times10^{9} \hbox{$\thinspace M_{\odot}$}$, an order of magnitude lower. Hence we can rule out any possibility that the X-ray emitting gas is bound to M82 as a static halo. \subsection{Shocked clouds in a wind} An alternative to emission from a free wind is emission from shocked material embedded in such a wind. Any clouds of denser material overrun by the wind, be they fragmented remnants of the dense shell swept up by the wind in its ``snow-plough'' phase or clouds in the ISM, will be shock-heated. The ${\rm H\alpha}$ filamentation seen along the minor axis has line ratios indicative of material shocked to $\sim10^{4} {\rm\thinspace K}$.
Less dense material would be heated to even higher temperatures, and could be the source of the soft \mbox{X-rays} seen, rather than the emission being due to the wind itself. The temperature to which these clouds will be heated depends on the speed of the shock driven into them by the wind. This depends on the relative densities of the cloud and the wind, and the wind velocity. In the case of strong shocks (Mach number M $\gg$ 1) we can ignore the thermal pressure of both the wind and the cloud, and equate momentum flux across the shock. Given the mass and energy injection rates, one expects a wind velocity $v_{\rm w} \approx 3000 \hbox{$\km\s^{-1}\,$}$, whereas the sound speed in the \mbox{X-ray} emitting gas is $\sim400 \hbox{$\km\s^{-1}\,$}$. Clouds will be accelerated by the wind to varying extents depending on their column density and individual histories, but only to velocities of order hundreds of $\hbox{$\km\s^{-1}\,$}$, as seen in the ${\rm H\alpha}$ filaments. Hence the strong shock approximation is not unreasonable. The shock driven into the cloud will then have a velocity $v_{\rm c} \approx v_{\rm w} \sqrt{\rho_{\rm w}/\rho_{\rm c}}$. The eventual temperature of the cloud will be proportional to $v^{2}_{\rm c}$. \begin{table} \caption[]{Predicted parameters of the \mbox{X-ray} emitting gas using the simple models discussed in Section 5. The shocked cloud temperatures are calculated using the derived electron densities from the northern wind for unit filling factor. Predicted cloud temperatures for the south are similar. Values assume spherical outflow for the wind. 
For a conical outflow of total solid angle $\Omega$, the shocked cloud temperature $T_{\rm c}$ and the CC wind density are $\propto 4\pi / \Omega$, and the CC temperature $\propto (4\pi / \Omega)^{2/3}$.} \begin{flushleft} \begin{tabular}{lllll} \hline\noalign{\smallskip} Region & $z$ & \multicolumn{2}{l}{Chevalier \& Clegg} & Shocked clouds \\ && $n_{\rm e}$ & $T_{\rm w}$ & $T_{\rm c}$ \\ & (pc) & ($10^{-4}\hbox{$\cm^{-3}\,$}$) & (${\rm\thinspace keV}$)& (${\rm\thinspace keV}$) \\ \noalign{\smallskip} \hline\noalign{\smallskip} 1 & 945 & 14.6 & 0.52 & 1.58 \\ 2 & 1575 & 5.3 & 0.26 & 0.74 \\ 3 & 2205 & 2.7 & 0.17 & 0.43 \\ 4 & 2835 & 1.6 & 0.12 & 0.31 \\ 5 & 3465 & 1.1 & 0.09 & 0.19 \\ 6 & 4095 & 0.8 & 0.07 & 0.15 \\ 7 & 4725 & 0.6 & 0.06 & 0.23 \\ 8 & 5355 & 0.5 & 0.05 & 0.22 \\ \noalign{\smallskip} \hline \end{tabular} \end{flushleft} \label{tab:models} \end{table} For a constant velocity, spherical or conical wind flowing into solid angle $\Omega$, with constant mass injection rate $\dot{M}$, the wind density $\rho_{\rm w} \propto r^{-2}$, and the shocked cloud temperature \begin{equation} T_{\rm c} = 6.15\times10^{-3} v_{1000} \, \dot{M} \, r^{-2}_{\rm kpc} \, n^{-1}_{\rm c}\left(\frac{4\pi}{\Omega}\right) \, {\rm\thinspace keV}, \end{equation} \noindent where $v_{1000}$ is the wind velocity in units of $1000 \hbox{$\km\s^{-1}\,$}$, and $\dot{M}$ is in units of $\hbox{$\thinspace M_{\odot}$} \hbox{$\yr^{-1}\,$}$. We assume the postshock cloud density is four times the preshock density, and that ionization, dissociation, magnetic fields and radiative losses are negligible. Relaxing the latter four assumptions would result in lower shocked cloud temperatures. Given $n_{\rm c} \approx 2n_{\rm e}$, the number densities derived above, a wind velocity of $3000\hbox{$\km\s^{-1}\,$}$ and $\dot{M}$ from Eq.~(2) we can predict the temperature we expect assuming unit filling factor and $\Omega=4\pi$ (Table~\ref{tab:models}).
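The shocked-cloud relation can be evaluated in the same spirit. The sketch below is ours, and encodes our reading of the assumptions: the tabulated (X-ray emitting) densities are taken to be postshock values, so the preshock cloud density entering the formula is $n_{\rm c} = 2n_{\rm e}/4$ (total particle density $\approx 2n_{\rm e}$, divided by the factor-of-four strong-shock compression).

```python
# Sketch of the shocked-cloud temperature estimate, with Omega = 4 pi and
# unit filling factor.  We interpret the tabulated n_e as postshock, so
# the preshock cloud density is n_c = 2 * n_e / 4 (factor-of-four
# compression behind a strong shock).
V_1000 = 3.0          # wind velocity / (1000 km/s), from the text
MDOT = 3.0 * 0.4      # Msun/yr: Eq. (2) with L_bol,11 = 0.4

def cloud_temp(r_kpc, n_e, omega_frac=1.0):
    """Shock-heated cloud temperature (keV) for a cloud at r_kpc whose
    derived (postshock) electron density is n_e in cm^-3."""
    n_c = 2.0 * n_e / 4.0        # preshock total particle density, cm^-3
    return 6.15e-3 * V_1000 * MDOT / (r_kpc**2 * n_c * omega_frac)

# Region 1 (z = 0.945 kpc, northern n_e = 3.17e-2 cm^-3):
t_c1 = cloud_temp(0.945, 3.17e-2)   # ~1.6 keV, cf. 1.58 keV tabulated
```

With the tabulated northern densities this reproduces the shocked-cloud column of Table~\ref{tab:models}.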
From Fig.~\ref{fig:wind-grey} one can estimate the solid angle the wind flows into as $\sim\pi$ steradians, raising $T_{\rm c}$ by a factor $\sim4$. In the context of clouds in a wind, the filling factor should be substantially less than unity. The ${\rm H\alpha}$ filaments have $\eta \sim 10^{-2}$ (McCarthy {\rm et al.\thinspace} 1987) in the inner kiloparsec, so it would not be unreasonable to expect cloud filling factors of order $10^{-2}-10^{-1}$ at larger distances from the nucleus. This would reduce $T_{\rm c}$ by a factor $\sim3$--10. The net effect is that the predicted temperatures are rather lower than those observed, but considering the large uncertainties involved, this simple model must be regarded as giving results consistent with observation. Assuming that $\Omega$ and the filling factor do not vary with distance along the wind, the predicted temperatures (Table~\ref{tab:models}) drop off too quickly to match the observed trend of $T$ with $z$. The steepness of the predicted temperature profile could be due to the assumed geometry. As discussed in Sect.~\ref{sec:assum_geom} the inner densities may be higher than calculated. Higher inner cloud densities would lead to lower postshock cloud temperatures and hence flatten the trend. So, in summary, predicted postshock temperatures for a simple model where the wind shock heats clouds are consistent with, if slightly lower than, those observed, given the observed emission measure (density). \subsection{Numerical models} \label{sec:nummod} Tomisaka \& Ikeuchi (1988) were the first to explicitly model the wind in M82 using 2D hydrodynamical simulations. They found a roughly cylindrical bipolar wind formed naturally for a constant mass and energy input rate in the nucleus of 0.1 SN $\hbox{$\yr^{-1}\,$}$. However, they modelled the ISM as a cold rotationally supported disk in which the angular velocity was independent of the distance from the plane of the disk. 
This physically unrealistic configuration creates a strong funnel along the $z$-\mbox{axis} which strongly collimates the wind. Tomisaka \& Bregman (1993) allowed the rotational velocity to decrease exponentially away from the cold disk, into a hot low density halo. This distribution still provides a strong cylindrical funnel for the expansion of the wind at low $z$. Suchkov {\rm et al.\thinspace} (1994, hereafter SBHL) provide a more realistic ISM for their modelling of M82: a two-component medium of a cold, rotating, dense disk and a non-rotating, hot, tenuous halo, and a starburst history incorporating the milder mass and energy input from stellar winds before the more energetic supernova-dominated stages. They find bipolar outflows form easily over a wide range of halo and disk conditions. Shocked halo gas at temperatures $0.2-0.4{\rm\thinspace keV}$ provides the majority of the \mbox{X-ray} emission in their ``soft'' band ($0.1-2.2{\rm\thinspace keV}$). The eventual wind geometry depends on the initial gas distribution and the mass and energy input history. For those models with ``mild'' early winds (models A1 and A2 in SBHL) biconical outflows of opening angle $\theta \sim 60-90\hbox{$^\circ$}$ with dense disk material entrained along the surface do occur naturally. Initially the mild wind creates a cavity through the disk into the halo, allowing the wind to escape without substantially damaging the disk. The wind then propagates outward in the halo, sweeping it up and shocking it. In the later SN-driven stage of the starburst, the more vigorous wind does manage to disrupt some of the disk, dragging it out to form a cone within a much larger bubble. This provides a natural explanation for the difference in distribution between the outflow cone seen in H$\alpha$ and the more extended \mbox{X-ray} emission. Without an early mild wind (models B1 and B2), the disk is disrupted before the wind has easy access to the halo, and so no obvious cone in the \mbox{X-ray} is visible.
Even when present, the conical wind does not provide appreciable soft \mbox{X-ray} flux compared to the shocked halo (see Figs. 4, 10 and 14 in SBHL), so we would not expect to see such structures in the {\em ROSAT} data. The radial extent of the soft \mbox{X-ray} emitting material is much greater than that of the cones, the emission appearing more like a figure-of-eight (Fig. 4 in SBHL) or a cylinder (Fig. 10 in SBHL). None of the models SBHL present would be seen as more conical than cylindrical when projected along the line of sight and observed by a real \mbox{X-ray} instrument. As SBHL did not provide projected surface brightness plots, it is difficult to assess how limb-brightened the emission would be, but given the brightness of the shocked halo material it should be a detectable effect. The luminosity varies greatly between the different models, and is not simply proportional to the mass and energy input in the starburst. Model A1 has a time averaged mass and energy input an order of magnitude less than model B1, but has a soft ($0.1-2.2{\rm\thinspace keV}$) luminosity of $4\times10^{41} \hbox{$\erg\s^{-1}\,$}$ at $t=16.6$ Myr (when the starburst luminosity is $\sim10^{44} \hbox{$\erg\s^{-1}\,$}$, very close to M82's bolometric luminosity), well above the $0.1-2.4{\rm\thinspace keV}$ wind luminosity of $\sim2\times10^{40} \hbox{$\erg\s^{-1}\,$}$ derived above for M82. Model B1 has a corresponding \mbox{X-ray} luminosity of only $1.3\times10^{40} \hbox{$\erg\s^{-1}\,$}$, despite having a similar initial gas distribution to model A1. Given the wide range of predicted wind luminosity it would require a deeper investigation of the available parameter space to use M82's luminosity to constrain the allowable models. 
By comparing fluxes in two energy bands of $0.1-0.7{\rm\thinspace keV}$ and $0.7-2.2{\rm\thinspace keV}$, SBHL provide ``effective'' temperatures for the gas in their ``soft'' band, which corresponds well to the {\em ROSAT} band, as well as the temperature range for the gas that provides the majority of the soft emission. In all cases the emission is very soft, typically $T\sim0.2{\rm\thinspace keV}$. The hottest model (B1) has a characteristic temperature of only $0.4{\rm\thinspace keV}$. This is still cooler than M82's wind, where the temperature varies between $0.4-0.6{\rm\thinspace keV}$. SBHL stress the lack of appreciable amounts of gas hotter than $\sim6\times10^{6} {\rm\thinspace K}$. The wind itself is much hotter, but provides very little emission, $L_{X}\sim10^{38} \hbox{$\erg\s^{-1}\,$}$. The lower temperatures predicted by SBHL do correspond with observations of some galaxies for which soft \mbox{X-ray} emission can be detected. NGC 891 has a halo with $T\sim0.3{\rm\thinspace keV}$ (Bregman \& Pildis 1994), and Wang {\rm et al.\thinspace} (1995) detect soft $\sim0.25{\rm\thinspace keV}$ \mbox{X-ray} emission out to more than $8{\rm\thinspace kpc}$ from the plane of NGC 4631. A survey of {\em ROSAT} PSPC observations of nearby normal and starburst galaxies by Read {\rm et al.\thinspace} (1996) shows that many starbursts have diffuse gas with temperatures $\sim0.5{\rm\thinspace keV}$, more in line with our results. The density of the gas responsible for the emission in SBHL's models is typically $n\sim4-8 \times 10^{-2} \hbox{$\cm^{-3}\,$}$. This is not inconsistent with our results of $n_{\rm e}\sim1-4 \times 10^{-2} \eta^{-1/2} \hbox{$\cm^{-3}\,$}$. We can roughly estimate the filling factor of the emitting gas in their models as $\sim0.1$, which brings their densities close to ours. 
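The size of this filling-factor correction is simple to check; a minimal Python sketch (the $\eta \simeq 0.1$ value and both density ranges are taken directly from the comparison above):

```python
# Electron density scales with filling factor eta as eta^(-1/2).
eta = 0.1                  # rough filling factor estimated for the SBHL models
scale = eta ** -0.5        # ~3.2

# PSPC-derived wind densities for M82 (cm^-3), quoted for eta = 1
ne_lo, ne_hi = 1e-2, 4e-2
print(f"n_e(eta=0.1): {ne_lo * scale:.3f}-{ne_hi * scale:.3f} cm^-3")
# -> roughly 0.03-0.13 cm^-3, overlapping SBHL's 0.04-0.08 cm^-3
```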
The gas mass providing the bulk of the soft emission depends strongly on the model used, but model B1, which has a soft \mbox{X-ray} luminosity similar to that observed for M82's wind, has a gas mass of $10^{6} \hbox{$\thinspace M_{\odot}$}$, comparable with the total mass derived from the PSPC of $1.3\times10^{7} \eta^{1/2}\hbox{$\thinspace M_{\odot}$}$ for reasonable values of the filling factor. \section{Conclusions} We have provided a detailed spectral investigation of the extended soft \mbox{X-ray} emission associated with the starburst galaxy M82. Point sources have been located and removed from the diffuse wind emission, and the effects of contamination by the nuclear source allowed for. The diffuse emission was divided into a set of distinct regions as a function of distance from the galactic plane, allowing temperature, emission measure and absorbing column to be derived as a function of distance along the minor axis. The metallicity was found to be apparently low ($0.00$--$0.07\,Z_{\odot}$) throughout the wind, in agreement with results from {\em ASCA}. This work has shown the following: \begin{itemize} \item The observed soft \mbox{X-ray} morphology differs significantly between north and south, and is not well described as a conical outflow. The emission extends out to $\sim6{\rm\thinspace kpc}$ from the plane of the galaxy. There is no evidence for any limb brightening, as would be expected if the soft emission came from the shock heated halo surrounding the hot wind. \item The emission from the wind is thermal. The temperature drops slowly along the wind, from $\sim0.6{\rm\thinspace keV}$ near the nucleus, to $\sim0.4{\rm\thinspace keV}$ in the outer wind. Numerical models of galactic winds predict the majority of the emission in the {\em ROSAT} band to be shocked halo, with effective temperatures in the range $0.2-0.4{\rm\thinspace keV}$. 
\item Since the entropy of the gas is not constant, at least in the south, the observed emission cannot originate from a free wind itself. Our baseline analysis is based on an assumed cylindrical geometry, but a more divergent geometry only makes the southern wind less isentropic. \item The emission cannot come from a static halo of gas around M82, given that the mass required to bind a hydrostatic halo is an order of magnitude higher than that inferred from the H{\sc i}\ rotational velocity in the outer disk. \item For reasonable mass and energy input from the starburst, a simple model for emission from shock-heated clouds, using the observed density of the \mbox{X-ray} emitting gas to predict post-shock temperatures, is consistent with the observed temperatures. \end{itemize} \begin{acknowledgements} We thank the anonymous referee for valuable comments that have led to the improvement of this paper. We acknowledge the use of the {\em Starlink} node at the University of Birmingham and thank the developers and maintainers of the \mbox{X-ray} analysis package ASTERIX. DKS and IRS gratefully acknowledge PPARC funding. This research has made use of data obtained from the UK ROSAT Data Archive Centre at the Department of Physics and Astronomy, Leicester University, UK, and the SIMBAD astronomical database at the CDS, Strasbourg. \end{acknowledgements}
\section{Introduction} Quantum phase transitions involve a fundamental change in the symmetry of the ground state of a quantum system. Such a transition usually takes place due to the variation of some parameter $\lambda$ in the Hamiltonian of the system and is necessarily accompanied by diverging length and time scales \cite{sachdev}. A direct consequence of such a diverging time scale is that a quantum system fails to be in the adiabatic limit when it is sufficiently close to the quantum critical point. Thus a time evolution of the parameter $\lambda$ at a finite rate $1/\tau$, which takes such a system across a quantum critical point located at $\lambda=\lambda_c$, leads to a failure of the system to follow the instantaneous ground state in a finite region around $\lambda_c$. As a result, the state of the system after such a time evolution does not conform to the ground state of its final Hamiltonian, leading to the production of defects \cite{kz1,damski1}. It is well known that for a slow quench, the density of these defects $n$ depends on the quench time $\tau$ according to $n \sim 1/\tau^{d\nu/(\nu z+1)}$, where $\nu$ and $z$ are the correlation length and the dynamical critical exponents characterizing the critical point \cite{anatoly1,anatoly2,comment1}. A theoretical study of such quench dynamics requires a knowledge of the excited states of the system. As a result, early studies of the quench problem were mostly restricted to quantum phase transitions in exactly solvable models such as the one-dimensional (1D) Ising model in a transverse field \cite{ks1,dziar1,cardy}, the infinite range ferromagnetic Ising model \cite{das}, the 1D XY model \cite{levitov,sen1}, and quantum spin chains \cite{damski2,caneva,zurek}. On the experimental side, trapped ultracold atoms in optical lattices provide possible realizations of many of the above-mentioned systems \cite{bloch}. 
Experimental studies of defect production due to quenching of the magnetic field in a spin-one Bose condensate have also been undertaken \cite{sadler}. Recently, Kitaev proposed a 2D spin-1/2 model on a honeycomb lattice with a Hamiltonian \cite{kitaev1} \bea H_{1} &=& \sum_{j+l={\rm even}} ~(~ J_1 \sigma_{j,l}^x \sigma_{j+1,l}^x ~+~ J_2 \sigma_{j-1,l}^y \sigma_{j,l}^y \nonumber \\ & & ~~~~~~~~~~~~~~+ J_3 \sigma_{j,l}^z \sigma_{j,l+1}^z ~), \label{kham1} \eea where $j$ and $l$ denote the column and row indices of the honeycomb lattice. This model has several interesting features which have led to a plethora of theoretical works on it \cite{feng,baskaran,vidal}. For example, it provides a rare example where a 2D model can be exactly solved using a Jordan-Wigner transformation \cite{kitaev1,feng,nussinov1,nussinov2}. Further, when $J_3=0$, the model provides an example of a 1D spin model which supports a topological quantum phase transition with the critical point at $J_1=J_2$ \cite{feng}. Moreover, in $d=2$, the model supports a gapless phase for $|J_1-J_2|\le J_3 \le J_1+J_2$ which has a possible connection to a spin liquid state and demonstrates fermion fractionalization at all energy scales \cite{baskaran}. Finally, it has been shown in Ref.\ \onlinecite{kitaev1} that the presence of a magnetic field, which induces a gap in the 2D gapless phase, leads to non-Abelian statistics of the low-lying excitations of the model; these excitations can be viewed as robust qubits in a quantum computer \cite{kitaev2}. An extended version of this model has also been suggested in Ref.\ \onlinecite{lee} which has the Hamiltonian \bea H_2 &=& J_4 ~\Bigg [ \sum_{j+l={\rm odd}}\sigma_{j,l}^y \sigma_{j+1,l}^z \sigma_{j+2,l}^x \nonumber \\ && ~~~ + \sum_{j+l={\rm even}}\sigma_{j,l}^x\sigma_{j+1,l}^z\sigma_{j+2,l}^y \Bigg] ~+~ H_1. \label{extkit1} \eea The quench dynamics of the 2D Kitaev model has been studied very recently in Ref.\ \onlinecite{sms}. 
It has been shown that for this model, quenching $J_3$ takes the system through a critical line instead of a critical point, which leads to an unconventional variation of the defect density as a function of the quench rate. In this context, it has also been shown that for a general $d$-dimensional model, where the quench takes the system through a $d-m$ dimensional hypersurface characterized by the correlation length exponent $\nu$ and dynamical critical exponent $z$, the defect density obeys $n_d \sim 1/\tau^{m \nu/(z \nu +1)}$. The Kitaev model provides a concrete example of such a quench for $d=2$ and $m=1$. The defect correlation function for such a quench has also been computed in Ref.\ \onlinecite{sms}. In this work, we extend and elaborate on the results of Ref.\ \onlinecite{sms} and study the quench dynamics of the Kitaev model both in $d=1$ and $d=2$ and the extended Kitaev model in $d=2$. The main results that we have obtained are the following. First, we show that in 1D ($J_3=0$), where quenching $J_1$ takes the system across the topological quantum critical point located at $J_1=J_2$, the density of defects produced due to the quench scales as $1/\sqrt{\tau}$ in the limit of slow quench (large $\tau$). We also identify and compute all independent non-zero spin-spin correlation functions and use them to elucidate the spatial extent of the defect correlation function. Second, we outline a general proof of the result reported in Ref.\ \onlinecite{sms} that for a $d$ dimensional quantum model, where the quench takes the system through a $d-m$ dimensional hypersurface characterized by the correlation length exponent $\nu$ and dynamical critical exponent $z$, the defect density obeys $n_d \sim 1/\tau^{m \nu/(z \nu +1)}$. Third, we elaborate on the variation of shape and size of the defect correlation function for the 2D Kitaev model with the quench rate and the model parameters. 
Fourth, we compute the entropy generated due to such a quench and discuss its dependence on the model parameters and the quench rate. Finally, we study the defect scaling law, entropy generation and defect correlation function of the 2D extended Kitaev model described by $H_2$. \begin{figure} \rotatebox{0}{\includegraphics*[width=\linewidth]{figquench1.comp.ps}} \caption{Schematic representation of the Kitaev model on a honeycomb lattice showing the bonds $J_1$, $J_2$ and $J_3$. Schematic pictures of the ground states, which correspond to pairs of spins on vertical bonds locked parallel (antiparallel) to each other in the limit of large negative (positive) $J_3$, are shown at one bond on the left (right) edge respectively. ${\vec M}_1$ and ${\vec M}_2$ are spanning vectors of the lattice, and $a$ and $b$ represent inequivalent sites.} \label{fig0} \end{figure} The organization of the paper is as follows. In Sec.\ \ref{1da}, we analyze the quench dynamics of the Kitaev model in 1D and obtain the quench rate dependence of the defect density. This is followed by Sec.\ \ref{1db}, where we compute the 1D correlation functions and use them to discuss the nature of the defect correlation function. Next, in Sec.\ \ref{2da}, we obtain the quench rate dependence of the defect density in 2D. The computation of the defect correlation function is detailed in Sec.\ \ref{2db} and the entropy generated during the quench process is computed in Sec.\ \ref{ent}. This is followed by the study of quench dynamics of the extended Kitaev model in Sec.\ \ref{ekit}. Finally, we conclude in Sec.\ \ref{concl}. 
\section{Quench in 1D} \label{1d} \subsection{Defect density} \label{1da} For $J_3=0$, the Kitaev model represents a spin-1/2 model in 1D with the Hamiltonian \bea H_{\rm 1D}&=& \sum_{n} \left(J_1 \sigma_{2n}^x \sigma_{2n+1}^x + J_2 \sigma_{2n-1}^y \sigma_{2n}^y \right) \label{hamkit1d1}, \eea where $n$ denotes site indices of a one dimensional chain with $N$ sites (we will assume $N$ is a multiple of 4). The lattice spacing $a$ and the Planck constant $\hbar$ will be set equal to $1$ in the rest of this work. The Hamiltonian in Eq. (\ref{hamkit1d1}) can be exactly diagonalized using a standard Jordan-Wigner transformation \cite{kogut} \bea a_n &=& \left( \prod_{j=-\infty}^{2n-1} ~\sigma_j^z \right) ~\sigma_{2n}^y, \nonumber \\ b_n &=& \left( \prod_{j=-\infty}^{2n} ~\sigma_j^z \right) ~\sigma_{2n+1}^x , \label{maj1d} \eea where $a_n$ and $b_n$ are independent Majorana fermions at site $n$. They satisfy relations such as $a_n^\dagger = a_n$, $\{ a_m , a_n \} = 2 \delta_{m,n}$ and $\{ a_m , b_n \} = 0$. The label $n$ for $a_n$ and $b_n$ goes over $N/2$ values since that is the number of unit cells. In terms of these operators, $H_{\rm 1D}$ can be written as \bea H_{\rm 1D} &=& i ~\sum_n ~[~ J_1 ~b_n a_n ~+~ J_2 ~b_n a_{n+1} ~] \nonumber \\ &=& 2i ~\sum_{k=0}^\pi ~[~ b_k^\dagger a_k ~(J_1 + J_2 e^{ik}) \nonumber \\ & & ~~~~~~~~~~~+~ a_k^\dagger b_k ~(- J_1 - J_2 e^{-ik}) ~], \label{hamkit1d2} \eea where the Majorana fermion creation and destruction operators $a_k^\dagger$ and $a_k$ are Fourier components of the $a_n$'s, \bea a_n ~=~ \sqrt{\frac{4}{N}} ~\sum_{k=0}^\pi ~[~ a_k ~e^{ikn} ~+~ a_k^\dagger ~ e^{-ikn} ~]. \label{fourier1} \eea The sum over $k$ in Eq. (\ref{fourier1}) only goes over half the Brillouin zone because $a_n$ describes a Majorana fermion; the number of modes lying in the range $0 \le k \le \pi$ is $N/4$. [There is a small correction that one has to make in Eq. 
(\ref{fourier1}) for the modes with $k=0$ and $\pi$ for which there is no distinction between $k$ and $-k$; these two modes should have a coefficient of $\sqrt{2/N}$ instead of $\sqrt{4/N}$. However, we will ignore this correction here because we will be interested in the $N \to \infty$ limit, and we will change from a sum over $k$ to an integral over $k$.] The operators $a_k$ and $a_k^\dagger$ satisfy the anticommutation relations $\{ a_k , a_{k'}^\dagger \} = \delta_{kk'}$ and $\{ a_k , a_{k'} \} = 0$. One can now define a two-component fermionic operator $\psi_k = (a_k ~~ b_k)^T$, so that $H_{\rm 1D}$ can be written as \bea H_{\rm 1D} &=& \sum_{k=0}^\pi \psi_k^\dagger ~H_k \psi_k , \nonumber \\ {\rm where} ~~H_k &=& 2i ~\left( \begin{array}{cc} 0 & -J_1 - J_2 e^{-ik} \\ J_1 + J_2 e^{ik} & 0 \end{array} \right). \label{kitham1d3} \eea {}From Eq.\ (\ref{kitham1d3}), we find that $H_{\rm 1D}$ can be diagonalized leading to an energy spectrum consisting of two bands \bea E_k^{\pm} = \pm 2 ~\sqrt{J_1^2 + J_2^2 + 2J_1 J_2 \cos k}. \label{en1} \eea Note that the band gap vanishes at $J_1= \pm J_2$ for $k=\pi$ and $0$ respectively, where the bands touch each other. It was shown in Ref.\ \onlinecite{feng} that this vanishing of the energy gap signals a topological phase transition between the two phases of the model at $J_1 > J_2$ and $J_1 < J_2$. To study the quench of the system across this critical point, we will now consider what happens when we evolve $J_1$ linearly in time at a rate $1/\tau$ from $-\infty$ to $\infty$, keeping $J_2$ fixed at some positive value: we take $J_1=J_2 t/\tau$. The ground states of $H_{\rm 1D}$ in Eq.\ (\ref{kitham1d3}) have $\sigma_{2n}^x \sigma_{2n+1}^x = 1$ and $-1$ for $t=-\infty$ and $\infty$ respectively for all values of $n$. In terms of the Hamiltonian in Eq. 
(\ref{kitham1d3}), the ground and excited states for $J_1 \to - \infty$ are respectively given by \bea \psi_{1k} &=& \frac{1}{\sqrt{2}} ~\left( \begin{array}{c} 1 \\ i \end{array} \right) ~~{\rm and}~~ \psi_{2k} ~=~ \frac{1}{\sqrt{2}} ~\left( \begin{array}{c} 1 \\ -i \end{array} \right). \label{wave} \eea For $J_1 \to \infty$, the ground and excited states are given by $\psi_{2k}$ and $\psi_{1k}$ respectively. By a change of basis, one can rewrite Eq. (\ref{kitham1d3}) in the form $H_{\rm 1D} = \sum_k \psi^{'\dagger}_k H'_k \psi'_k$ where \bea H'_k ~=~ 2 ~\left( \begin{array}{cc} J_1 + J_2 \cos k & - J_2 \sin k \\ - J_2 \sin k & - J_1 - J_2 \cos k \end{array} \right). \label{kitham1d4} \eea Note that unlike Eq.\ (\ref{kitham1d3}), the off-diagonal elements of Eq.\ (\ref{kitham1d4}) do not change with time if $J_2$ is held fixed. As a result, the problem of quench dynamics is reduced to solving a standard Landau-Zener problem for each momentum $k$ \cite{lz}. The density of defect formation $n$ can thus be found to be \cite{book1} \bea n &=& \int_0^\pi ~\frac{dk}{\pi} ~p_k, \nonumber \\ {\rm where} ~~p_k &=& e^{-2 \pi J_2 \tau \sin^2 k} \label{defect1} \eea denotes the probability of the system to remain in the initial ($J_1 \to -\infty$) ground state for momentum $k$. For $J_2 \tau \gg 1$, the contribution to $n$ comes mainly from the regions near $k=0$ and $\pi$ where $p_k =1$. Thus one finds that in the slow quench regime $n \simeq 1/\sqrt{J_2 \tau}$. Such a $1/\sqrt{\tau}$ scaling of defect density conforms to the prediction of Ref.\ \onlinecite{anatoly1}. For the present case, it is easy to see from Eq.\ (\ref{en1}), that the gap $\Delta(k)= E^+ (k) - E^- (k)$ vanishes linearly at the critical point both with the quench parameter $J_1$ and with momentum around $k =0$ and $\pi$, so that $z\nu = z = 1$. Thus, $n \sim 1/\tau^{d\nu/(z\nu +1)} = 1/\sqrt{\tau}$ in 1D. 
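As a cross-check, the integral in Eq.~(\ref{defect1}) is easy to evaluate numerically; a minimal Python sketch (the function name and grid size are our own choices, and only the integrand is taken from the text):

```python
import numpy as np

def defect_density(J2_tau, nk=200001):
    # n = (1/pi) * Integral_0^pi dk exp(-2*pi*J2*tau*sin^2 k)
    k = np.linspace(0.0, np.pi, nk)
    p_k = np.exp(-2.0 * np.pi * J2_tau * np.sin(k) ** 2)
    # trapezoidal rule
    integral = (0.5 * p_k[0] + p_k[1:-1].sum() + 0.5 * p_k[-1]) * (k[1] - k[0])
    return integral / np.pi

# sudden quench (tau -> 0): the system stays in the old ground state, n = 1
print(defect_density(0.0))
# slow quench: n is proportional to 1/sqrt(J2*tau), so the last column
# of the output approaches a constant set by the Gaussian peaks at k = 0, pi
for J2_tau in (10.0, 100.0, 1000.0):
    n = defect_density(J2_tau)
    print(J2_tau, n, n * np.sqrt(J2_tau))
```

The last column settles near $1/(\pi\sqrt{2})\simeq0.225$, confirming $n \propto 1/\sqrt{J_2 \tau}$ up to an $O(1)$ prefactor.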
\begin{figure} \rotatebox{0}{\includegraphics*[width=\linewidth]{figquench2.ps}} \caption{Defect density produced by quenching $J_1$ in $d=1$.} \label{fig1} \end{figure} A plot of the defect density as a function of the quench time $\tau$ is shown in Fig.\ \ref{fig1}. The plot confirms the expected result, that the defect density is maximum for an infinite quench rate ($\tau \to 0$), when the system has no time to adjust to the quench and remains in the old ground state leading to a normalized defect density of $1$. As the rate of quench is decreased, $n$ decreases quickly before settling down to a $1/\sqrt{\tau}$ behavior for large $\tau$. It is useful to note that the Hamiltonian $H_k$ in Eq. (\ref{kitham1d3}) can also be written, after a suitable change of basis, as \beq H'_k ~=~ 2 ~\left( \begin{array}{cc} J_- \sin (k/2) & - i J_+ \cos (k/2)\\ i J_+ \cos (k/2) & - J_- \sin (k/2) \end{array} \right), \eeq where $J_{\pm} = J_1 \pm J_2$. This form is useful if, for instance, one wants to study the effect of quenching $J_-$ from $-\infty$ to $\infty$ keeping $J_+$ fixed. \subsection{Correlation functions} \label{1db} Let us now consider how the system may be described at the final time $t \to \infty$ when $J_1 = \infty$. In principle, the time evolution of the system is unitary, so that it will always be a pure state. However, for each momentum $k$, the wave function is given by $\sqrt{1 - p_k} \psi_{2k} e^{-iE_k^2 t} ~+~ \sqrt{p_k} \psi_{1k} e^{-iE_k^1 t}$, where $E_k^{1,2} = \pm \infty$. As a result, the final density matrix of the system will have off-diagonal terms involving $\psi_{2k}^* \psi_{1k}$ and $\psi_{1k}^* \psi_{2k}$ which vary extremely rapidly with time; their effects on physical quantities will therefore average to zero. Hence the final density matrix is effectively diagonal like that of a mixed state \cite{levitov}, where the diagonal entries are time-independent as $t \to \infty$ and are given by $1 - p_k$ and $p_k$. 
Such a density matrix is associated with an entropy which we will discuss in Sec. \ref{ent} in the context of 2D Kitaev model. Using the above density matrix, we will now compute the correlation functions corresponding to the operators $O_r = i b_n a_{n+r}$, where $r$ is an integer. In terms of the spins, as can be seen from Eq.\ (\ref{maj1d}), the operator $O_r$ can be written as \bea O_0 &=& \sigma_{2n}^x \sigma_{2n+1}^x, ~~~~ O_1 ~=~ \sigma_{2n+1}^y \sigma_{2n+2}^y, \nonumber \\ O_r &=& \sigma_{2n+1}^y ~\left( \prod_{j=2n+2}^{2n+2r-1} \sigma_j^z \right)~ \sigma_{2n+2r}^y ~~{\rm for} ~~ r \ge 2, \nonumber \\ &=& \sigma_{2n+2r}^y ~\left( \prod_{j=2n+2r+1}^{2n} \sigma_j^z \right)~ \sigma_{2n+1}^y ~~ {\rm for} ~~r \le -1. \nonumber \\ & & \eea We will calculate the expectation values of these operators shortly. In principle, one can also consider expectation values of the operators $i a_n a_{n+r}$ and $i b_n b_{n+r}$; however a direct calculation shows that these vanish if $r \ne 0$. Further, for the Kitaev model, it has been shown that the spin-spin correlations between sites lying on different bonds vanish, {\it i.e.}, $\langle \sigma_{2n}^x \sigma_{2n+r}^x \rangle =0$ for $|r| \ge 2$ \cite{baskaran}. Therefore $\langle O_r \rangle$ are the only non-vanishing spin-correlators of the model \cite{nussinov1}. To compute $\langle O_r \rangle$ we note that $O_r$ can be expressed in terms of the fermion operators $a_k$ and $b_k$. This will in general involve summations over two different momenta $k$ and $k'$. However, when $\langle O_r \rangle$ is computed in a direct product of states involving $a_k$ and $b_k$, only terms in which $k' = k$ will contribute. In the limit $N \to \infty$, the relevant part of $O_r$ which contributes to the correlation function can be written as \beq O_r ~=~ - ~\frac{4i}{N} ~\sum_{k=0}^\pi ~[~ b_k^\dagger a_k e^{ikr} ~-~ a_k^\dagger b_k e^{ikr} ~]. \label{corr2} \eeq Using the wave functions given in Eq. 
(\ref{wave}), we find that \beq \langle O_r \rangle ~=~ \pm ~\frac{1}{\pi} ~\int_0^\pi dk ~\cos (kr) ~=~ \pm \delta_{r,0}, \eeq where the $+$ and $-$ signs refer to the ground states of $J_1 = - \infty$ and $\infty$ respectively. This is expected since $\sigma_{2n}^x \sigma_{2n+1}^x = \pm 1$ while all other correlations vanish in those two states. Finally, after the quench, in a state in which we have a mixture of the ground and excited states of $J_1 = \infty$ with probabilities $1 - p_k$ and $p_k$ respectively, we find that \beq \langle O_r \rangle ~=~ - ~\delta_{r,0} ~+~ \frac{2}{\pi} ~\int_0^\pi dk ~p_k ~\cos (kr). \label{int1} \eeq A plot of $\langle O_r \rangle$ as a function of $r$ is shown for representative values of $J_2 \tau=1,10$ in Fig.\ \ref{fig2}. We find that $\langle O_r \rangle$ shows a damped oscillatory behavior. Note that since $\langle O_r \rangle = -\delta_{r,0}$ for the ground state of $H_{\rm 1D}$ for $J_1 \to \infty$, the plot of $\langle O_r \rangle$ in the state of the system after the quench provides a direct measurement of the spatial extent of the correlation between the defects generated during the quench. \begin{figure} \rotatebox{0}{\includegraphics*[width=\linewidth]{figquench3.ps}} \caption{Plot of correlation function $\langle O_r \rangle$ as a function of $r$ for $J_2 \tau = 10$ (red circles and red solid line) and $J_2 \tau =1$ (black squares and black dashed line). $\langle O_r \rangle$ shows a damped oscillatory behavior as a function of $r$.} \label{fig2} \end{figure} For $J_2 \tau \gg 1$, the dominant contribution in the integral in Eq. (\ref{int1}) comes from the regions near $k = 0$ and $\pi$ as can be seen from the expression for $p_k$ in Eq.\ (\ref{defect1}). 
One can combine these two regions, and write the expression in (\ref{int1}) approximately as \bea \langle O_r \rangle &=& - ~\delta_{r,0} ~+~ \frac{2}{\pi} ~\int_0^\infty dk ~e^{-2 \pi J_2 \tau k^2} \nonumber \\ & & ~~~~~~~~~~~~~~~~~~~~\times ~[\cos (kr) ~+~ \cos \{(\pi - k) r \}] \nonumber \\ &=& - ~\delta_{r,0} ~+~ \frac{1 ~+~ (-1)^r}{\pi} ~\frac{e^{-r^2 / (8 \pi J_2 \tau)}}{\sqrt{2 J_2 \tau}}. \label{or} \nonumber \\ \eea Note that this vanishes if $r$ is odd. For a given value of $J_2 \tau$, the expression in Eq. (\ref{or}) decreases with increasing $r$, particularly for $r > \sqrt{8 \pi J_2 \tau}$. On the other hand, for a given large value of $r$, Eq. (\ref{or}) has a maximum at $\tau = r^2 /(4 \pi J_2)$. The fact that the crossover in both cases occurs around $r \sim \sqrt{4 \pi J_2 \tau}$ signals the fact that the associated length scale for the defect correlation function is of order $\sqrt{4 \pi J_2 \tau}$. \subsection{Sum rule} \label{sumrule} There is a sum rule that we can write down for $\left<O_r\right>$. {}From Eq. (\ref{int1}), we see that \beq O_{total} ~\equiv~ \sum_{r=-\infty}^\infty ~\langle O_r \rangle ~=~ -1 ~+~ 2 p_0, \label{sum} \eeq where we have used the identity $\sum_r e^{ikr} = 2\pi \delta (k)$ for $-\pi < k < \pi$. Going back to Eq. (\ref{kitham1d4}), we see that for $k=0$, the Hamiltonians at different times commute with each other irrespective of how $J_1$ is varied in time from $-\infty$ to $\infty$. This means that if we start with the ground state of $J_1 = - \infty$, no transition will occur at any time, and we will have $p_0 = 1$. Eq. (\ref{sum}) then implies that $O_{total} = 1$. \section{2D Kitaev model} \label{2d} \subsection{Defect density} \label{2da} When $J_3 \ne 0$, the Kitaev model with Hamiltonian given by Eq.\ (\ref{kham1}) describes a spin model on a hexagonal 2D lattice. Usually spin models are not exactly solvable in two dimensions. 
One of the main properties of the Kitaev model which makes it theoretically attractive is that, even in 2D, it can be mapped onto a non-interacting fermionic model by a suitable Jordan-Wigner transformation \cite{kitaev1,feng,nussinov1,nussinov2}. In terms of the Majorana fermions $a_{jl}$ and $b_{jl}$ one can write \bea a_{jl} &=& \left( \prod_{i=-\infty}^{j-1} ~\sigma_{il}^z \right) ~ \sigma_{jl}^y ~~{\rm for}~{\rm ~ even }~ j+l, \nonumber \\ b_{jl} &=& \left( \prod_{i=-\infty}^{j-1} ~\sigma_{il}^z \right) ~\sigma_{jl}^x ~~ {\rm for}~{\rm ~odd }~ j+l. \label{maj2d} \eea Such a transformation maps the spin Hamiltonian $H_1$ in Eq.\ (\ref{kham1}) to a fermionic Hamiltonian given by \bea H_{\rm 2D} &=& i ~\sum_{\vec n} ~[J_1 ~b_{\vec n} a_{{\vec n} - {\vec M}_1} ~+~ J_2 ~ b_{\vec n} a_{{\vec n} + {\vec M}_2} \nonumber \\ && ~~~~~~~~~+~ J_3 D_{\vec n} ~b_{\vec n} a_{\vec n}], \label{kitham2d1} \eea where $\vec n = {\sqrt 3} {\hat i} ~n_1 + (\frac{\sqrt 3}{2} {\hat i} + \frac{3}{2} {\hat j} ) ~n_2$ denotes the midpoints of the vertical bonds. Here $n_1, n_2$ run over all integers so that the vectors $\vec n$ form a triangular lattice whose vertices lie at the centers of the vertical bonds of the underlying honeycomb lattice; the Majorana fermions $a_{\vec n}$ and $b_{\vec n}$ sit at the top and bottom sites respectively of the bond labeled $\vec n$. The vectors ${\vec M}_1 = \frac{\sqrt 3}{2} {\hat i} + \frac{3}{2} {\hat j}$ and ${\vec M}_2 = \frac{\sqrt 3}{2} {\hat i} - \frac{3}{2} {\hat j}$ are spanning vectors of the lattice, and $D_{\vec n}$ can take the values $\pm 1$ independently for each $\vec n$. The crucial point that makes the solution of the Kitaev model feasible is that $D_{\vec n}$ commutes with $H_{\rm 2D}$, so that all the eigenstates of $H_{\rm 2D}$ can be labeled by specific values of $D_{\vec n}$. It has been shown that for any value of the parameters $J_i$, the ground state of the model always corresponds to $D_{\vec n}=1$ on all the bonds. 
Since $D_{\vec n}$ is a constant of motion, the dynamics of the model starting from any ground state never takes the system outside the manifold of states with $D_{\vec n}=1$. For $D_{\vec n}=1$, it is straightforward to diagonalize $H_{\rm 2D}$ in momentum space. We define Fourier transforms of the Majorana operators $a_{\vec n}$ as \beq a_{\vec n} ~=~ \sqrt{\frac{4}{N}} ~\sum_{\vec k} ~[~ a_{\vec k} ~e^{i\vec k \cdot \vec n} ~+~ a_{\vec k}^\dagger ~ e^{-i\vec k \cdot \vec n} ~] \label{fourier2} \eeq (and similarly for $b_{\vec n}$), where $N$ is the number of sites (assumed to be even, so that the number of unit cells $N/2$ is an integer), and the sum over $\vec k$ extends over half the Brillouin zone of the 2D hexagonal lattice. We then obtain $H_{\rm 2D} = \sum_{\vec k} \psi_{\vec k}^\dagger H_{\vec k} \psi_{\vec k}$, where $\psi_{\vec k}^\dagger =(a_{\vec k}^\dagger , b_{\vec k}^\dagger)$, and $H_{\vec k}$ can be expressed in terms of Pauli matrices $\sigma^{1,2,3}$ as \bea H_{\vec k} &=& 2 ~[J_1 \sin ({\vec k} \cdot {\vec M}_1) ~-~ J_2 \sin ({\vec k} \cdot {\vec M}_2)] ~\sigma^1 \nonumber \\ & & +~ 2 ~[J_3 ~+~ J_1 \cos ({\vec k} \cdot {\vec M}_1) ~+~ J_2 \cos ({\vec k} \cdot {\vec M}_2)] ~\sigma^2 . \label{ham2} \nonumber \\ \eea The energy spectrum of $H_{\rm 2D}$ therefore consists of two bands with energies \bea E_{\vec k}^\pm &=& \pm ~2 ~[(J_1 \sin ({\vec k} \cdot {\vec M}_1) - J_2 \sin ({\vec k} \cdot {\vec M}_2))^2 \nonumber \\ && ~~~~+ (J_3 + J_1 \cos ({\vec k} \cdot {\vec M}_1) + J_2 \cos ({\vec k} \cdot {\vec M}_2))^2 ]^{1/2} . \nonumber \\ \label{hk1} \eea We note for $|J_1-J_2|\le J_3 \le (J_1+J_2)$, these bands touch each other so that the energy gap $\Delta_{\vec k} = E_{\vec k}^+ - E_{\vec k}^-$ vanishes for special values of $\vec k$ leading to the gapless phase of the model \cite{kitaev1,feng,lee,nussinov1}. 
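The location of the gapless phase quoted above can be verified directly from the spectrum in Eq.~(\ref{hk1}). A minimal Python sketch (the function name, couplings and grid size are illustrative; it uses the fact that ${\vec k}\cdot{\vec M}_1$ and ${\vec k}\cdot{\vec M}_2$ each sweep a full period as ${\vec k}$ covers the Brillouin zone, so they can be scanned as independent angles):

```python
import numpy as np

def min_gap(J1, J2, J3, n=400):
    # scan t1 = k.M1 and t2 = k.M2 over a full period each
    t = 2 * np.pi * np.arange(n) / n
    T1, T2 = np.meshgrid(t, t)
    A = J1 * np.sin(T1) - J2 * np.sin(T2)
    B = J3 + J1 * np.cos(T1) + J2 * np.cos(T2)
    return (4 * np.sqrt(A**2 + B**2)).min()   # gap = E+ - E-

J1, J2 = 1.0, 0.7            # gapless phase: 0.3 <= J3 <= 1.7
for J3 in (0.0, 0.3, 1.0, 2.5):
    print(J3, min_gap(J1, J2, J3))
# the minimum gap is ~0 (to grid accuracy) only for |J1-J2| <= J3 <= J1+J2
```

For these couplings the printed minimum gap vanishes only for $0.3 \le J_3 \le 1.7$, i.e.\ for $|J_1-J_2| \le J_3 \le J_1+J_2$, and is finite outside that window.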
We will now quench $J_3(t) =J t/\tau$ at a fixed rate $1/\tau$, from $-\infty$ to $\infty$, keeping $J$, $J_1$ and $J_2$ fixed at some non-zero values; we have introduced the quantity $J$ to fix the scale of energy. We note that the ground states of $H_{\rm 2D}$ corresponding to $J_3 \to -\infty(\infty)$ are gapped and have $\sigma_{j,l}^z \sigma_{j,l+1}^z = 1(-1)$ for all lattice sites $(j,l)$. To study the state of the system after the quench, we first note that after a unitary transformation $U= \exp(-i \sigma_1 \pi/4)$, one can write $H_{\rm 2D} = \sum_{\vec k} \psi_{\vec k}^{'\dagger} H'_{\vec k} \psi'_{\vec k}$, where $H'_{\vec k} = UH_{\vec k} U^\dagger$ is given by \bea H'_{\vec k} &=& 2 ~[J_1 \sin ({\vec k} \cdot {\vec M}_1) ~-~ J_2 \sin ({\vec k} \cdot {\vec M}_2)] ~\sigma^1 \nonumber \\ & & + ~2 ~[J_3(t) +J_1 \cos ({\vec k} \cdot {\vec M}_1) + J_2 \cos ({\vec k} \cdot {\vec M}_2)] ~\sigma^3 . \nonumber \\ \eea Hence the off-diagonal elements of $H'_{\vec k}$ remain time independent and the problem of quench dynamics reduces to a Landau-Zener problem for each ${\vec k}$. The defect density can then be computed following a standard prescription \cite{lz} \bea n &=& \frac{1}{A} ~\int_{\vec k} ~d^2 \vec k ~p_{\vec k}, \nonumber \\ p_{\vec k} &=& e^{ - 2 \pi \tau ~[J_1 \sin ({\vec k} \cdot {\vec M}_1)- J_2 \sin ({\vec k} \cdot {\vec M}_2)]^2/J}, \label{defect2d} \eea where $A = 4\pi^2 /(3\sqrt{3})$ denotes the area of half the Brillouin zone over which the integration is carried out. Since the integrand in Eq. (\ref{defect2d}) is an even function of $\vec k$, one can extend the region of integration over the full Brillouin zone. This region can be chosen to be a rhombus with vertices lying at $(k_x,k_y)= (\pm 2\pi / \sqrt{3}, 0)$ and $(0,\pm 2\pi /3)$. Introducing two independent integration variables $v_1, v_2$, each with a range $0\le v_1,v_2 \le 1$, one finds that \bea k_x &=& 2\pi ~\frac{v_1 + v_2 -1}{\sqrt 3}, \quad k_y = 2\pi ~ \frac{v_2 - v_1}{3}. 
\eea Such a substitution covers the rhombus uniformly and facilitates the numerical integration necessary for computing $n$. A plot of $n$ as a function of the quench time $J \tau$ and $\alpha = \tan^{-1} (J_2/J_1)$ (we have taken $J_{1[2]}= J \cos(\alpha)[\sin(\alpha)]$) is shown in Fig.\ \ref{fig3}. We note that the density of defects produced is maximum when $J_1=J_2$. This is due to the fact that the length of the gapless line through which the system passes during the quench is maximum at this point. This allows the system to remain in the non-adiabatic state for the maximum time during the quench, leading to the maximum density of defects. For $J_1/J_3 >2J_2/J_3$, the system does not pass through a gapless phase during the quench, and the defect production is exponentially suppressed. \begin{figure} \rotatebox{0}{\includegraphics*[width=\linewidth]{figquench4.comp.ps}} \caption{Plot of defect density $n$ as a function of the quench time $J \tau$ and $\alpha = \tan^{-1} (J_2/J_1)$. The density of defects is maximum at $J_1=J_2$.} \label{fig3} \end{figure} For sufficiently slow quench $2 \pi J \tau \gg 1$, $p_{\vec k}$ is exponentially small for all values of ${\vec k}$ except in the region near the line \beq J_1 ~\sin ({\vec k} \cdot {\vec M}_1) ~-~ J_2 \sin ({\vec k} \cdot {\vec M}_2) ~=~ 0, \label{line} \eeq and the contribution to the momentum integral in (\ref{defect2d}) comes from values of $\vec k$ close to this line of zeroes. We note that the line of zeroes where $p_{\vec k}=1$ precisely corresponds to the zeroes of the energy gap $\Delta_{\vec k}$ as $J_3$ is varied for a fixed $J_2$ and $J_1$. Thus the system becomes non-adiabatic when it passes through the intermediate gapless phase in the interval $|J_1-J_2|\le J_3(t) \le (J_1+J_2)$. It is then easy to see, by expanding $p_{\vec k}$ about this line that in the limit of slow quench, the defect density scales as $n \sim 1/\sqrt{\tau}$. 
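Because the Jacobian of the $(v_1,v_2)$ map is constant, $n$ in Eq.~(\ref{defect2d}) reduces to a simple average of $p_{\vec k}$ over the unit square; under the substitution one finds ${\vec k}\cdot{\vec M}_1 = \pi(2v_2-1)$ and ${\vec k}\cdot{\vec M}_2 = \pi(2v_1-1)$. A minimal Python sketch (function name, grid size and the representative couplings are our own choices; $J$ is set to $1$):

```python
import numpy as np

def defect_density_2d(J1, J2, tau, n=600):
    # constant Jacobian: n is the average of p_k over the unit square
    v = (np.arange(n) + 0.5) / n
    V1, V2 = np.meshgrid(v, v)
    # k.M1 = pi*(2*v2 - 1) and k.M2 = pi*(2*v1 - 1) under the substitution
    g = J1 * np.sin(np.pi * (2 * V2 - 1)) - J2 * np.sin(np.pi * (2 * V1 - 1))
    return np.exp(-2 * np.pi * tau * g**2).mean()

J1, J2 = 1.0, 0.6                         # representative couplings
print(defect_density_2d(J1, J2, 0.0))     # sudden quench: n = 1
for tau in (10.0, 40.0, 160.0):
    n_d = defect_density_2d(J1, J2, tau)
    print(tau, n_d, n_d * np.sqrt(tau))   # last column roughly constant
```

The last column is roughly constant at large $\tau$, confirming the $n \sim 1/\sqrt{\tau}$ scaling from the line of zeroes, while $\tau \to 0$ gives $n = 1$ as expected for a sudden quench.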
We note that the scaling of the defect density with the quench rate in a quench where the system passes through a critical {\it line} in momentum space is different from the situation where the quench takes the system through a critical {\it point}. In the latter case, for the Kitaev model which has $z=\nu=1$, Ref.\ \onlinecite{anatoly1} predicts a defect density $n \sim 1/\tau$ for $d=2$. Thus the defect density crucially depends on the dimensionality of the critical surface through which the system passes during the quench. This observation leads to a simple but general conclusion which we present below. Consider a $d$-dimensional model with $z=\nu=1$ described by a Hamiltonian \bea H_{d} &=& \sum_{\vec k} \psi^\dagger_{\vec k} \left( \begin{array}{cc} \epsilon(\vec k,t) & \Delta(\vec k) \\ \Delta^{\ast}(\vec k)& - \epsilon(\vec k,t) \end{array} \right) \psi_{\vec k}, \label{hd} \eea where $\epsilon(\vec k,t)=\epsilon(\vec k)t/\tau$. Now let us assume that the quench takes such a system through a critical surface of $d-m$ dimensions. The defect density for a sufficiently slow quench can be expressed as \cite{lz,book1} \bea n &=& \frac{1}{A} ~\int_{\rm BZ} d^d k ~p(\vec k), ~~{\rm where} ~~p(\vec k) =e^{-\pi \tau f(\vec k)}, \nonumber \\ &\simeq & \frac{1}{A} ~\int_{\rm BZ} d^d k ~ \exp [~-\tau \sum_{\alpha \beta=1,m} g_{\alpha \beta} k_{\alpha} k_{\beta}] \nonumber \\ & \sim & 1/\tau^{m/2} , \label{genres} \eea where $p_{\vec k}$ is the defect probability for momentum $\vec k$, $f (\vec k)=|\Delta(\vec k)|^2/|\epsilon(\vec k)|$ vanishes on the $d-m$ dimensional critical surface, $\alpha, \beta$ denote one of the $m$ orthogonal directions to the critical surface and $g_{\alpha \beta} = (\partial^2 f(\vec k)/\partial k_{\alpha} \partial k_{\beta})_{\vec k \in {\rm critical ~surface}}$. We note that this result depends only on the property that $f(\vec k)$ has to vanish on a $d-m$ dimensional surface, and not on the precise nature of $f(\vec k)$. 
For $m=d$, where the quench takes the system through a critical point, our results coincide with those of Ref.\ \onlinecite{anatoly1}. Finally, we generalize our arguments to models where the $d-m$ dimensional hypersurface is characterized by correlation length exponent $\nu$ and dynamical critical exponent $z$. Let us assume that the system is described by a Hamiltonian $H[\lambda(t)]$ with quasi-energy eigenvalues $E(\vec k,t)$ and that the time evolution of the parameter $\lambda(t)=\lambda_0 (t/\tau)$ takes the system through the critical point $\lambda=\lambda_c$ at $t=t_0$. First we note that for large $\tau$, a non-vanishing probability of defect formation requires the non-adiabaticity condition $|\Delta(\vec k)|^2 \sim |\partial E(\vec k,t)/\partial t|$ [\onlinecite{anatoly1}]. Also, since $\partial E(\vec k,t)/\partial t =(\partial E(\vec k,t)/\partial \lambda)\tau^{-1}$ and near the critical point $E \sim \lambda^{z\nu}$, we get \bea \Delta^2 \sim \tau^{-1} \lambda^{z\nu-1} \label{drule1} \eea Further, as shown in Ref.\ \onlinecite{anatoly1}, near any point on the critical surface, quite generally, one has $\Delta \sim |k|^z$, $\lambda \sim k^{1/\nu}$ and $ k \sim 1/\tau^{\nu/(z\nu +1)}$. Using these relations we find from Eq.\ (\ref{drule1}) that at any point near the gapless surface \bea \Delta \sim 1/\tau^{z\nu/(z\nu+1)} \label{drule2} \eea Next, let us consider the available phase space for formation of defects. When the quench takes the system through a $d-m$ dimensional hypersurface in momentum space, the available phase space is $\Omega \sim k^m \sim \Delta^{m/z}$. Since this available phase space is directly proportional to the defect density \cite{anatoly1}, we find, using Eq.\ (\ref{drule2}), \bea n \sim \Omega \sim \Delta^{m/z} \sim 1/\tau^{m \nu/(z\nu+1)} \label{gquench1} \eea This generalizes the scaling law for defect density to arbitrary critical systems. 
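The codimension counting behind Eq.\ (\ref{genres}) can be illustrated independently of any particular model: integrate $p(\vec k) = e^{-\pi \tau f(\vec k)}$ over a two-dimensional zone with $f$ vanishing either on lines ($m = 1$) or at isolated points ($m = 2$). The functions $f$ in the sketch below are illustrative choices, not taken from the Kitaev model.

```python
import numpy as np

# Toy check of n ~ 1/tau^(m/2): average p = exp(-pi*tau*f) over a 2d zone,
# with f vanishing on a set of codimension m. Illustrative f's only.
def n_of_tau(tau, f, N=800):
    k = np.linspace(-np.pi, np.pi, N)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    return np.exp(-np.pi * tau * f(kx, ky)).mean()

f_line = lambda kx, ky: np.sin(kx) ** 2                      # zeroes on lines (m = 1)
f_point = lambda kx, ky: np.sin(kx) ** 2 + np.sin(ky) ** 2   # zeroes at points (m = 2)

slopes = {name: np.log(n_of_tau(40.0, f) / n_of_tau(10.0, f)) / np.log(4.0)
          for name, f in [("line", f_line), ("point", f_point)]}
print(slopes)  # the extracted exponents should be close to -1/2 and -1
```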
Note that for $z=\nu=1$, we recover our earlier result $n \sim 1/\tau^{m/2}$ (Eq.\ (\ref{genres})). For $m=d$, which represents quench through a critical point, we also recover the result of Ref.\ \onlinecite{anatoly1} ($ n \sim 1/\tau^{d\nu/(z\nu+1)}$) as a special case. \subsection{Defect Correlation} \label{2db} The calculation of the correlation function can be accomplished along similar lines as in 1D. First, we define the operators \beq O_{\vec r}^{\rm 2D} ~=~ i b_{\vec n} a_{\vec n + \vec r}. \label{kitop1} \eeq In terms of the spin operators, we have $O_{\vec 0}^{\rm 2D} = \sigma_{j,l}^z \sigma_{j,l+1}^z$. For $\vec r \ne {\vec 0}$, $O_{\vec r}^{\rm 2D}$ can be written as a product of spin operators going from a $b$ site at $\vec n=(j,l)$ to an $a$ site at $\vec n + \vec r = (j',l')$: the product will begin with a $\sigma^x$ or $\sigma^y$ and end with a $\sigma^x$ or $\sigma^y$ with a string of $\sigma^z$'s in between, where the choice of the initial and final $\sigma$ matrices depends on whether the values of $j+l$ and $j'+l'$ are even or odd. Note that $O_{\vec r}^{\rm 2D}$ for $\vec r \ne {\vec 0}$ measures correlation between the defects produced during the quench. In particular, a plot of the correlation function $\langle O_{\vec r}^{\rm 2D} \rangle$ versus $\vec r$ in the defect ground state provides an estimate of the shape and spatial extent of the defect correlations produced during the quench. Note that $(O_{\vec r}^{\rm 2D})^2 = 1$, so that all the moments of $O_{\vec r}^{\rm 2D}$ can be found trivially: $\langle (O_{\vec r}^{\rm 2D})^n \rangle = \langle O_{\vec r}^{\rm 2D} \rangle$ if $n$ is odd and $=1$ if $n$ is even. $O_{\vec r}^{\rm 2D}$ can be written in terms of the Majorana fermion operators $a_{\vec k}$ and $b_{\vec k}$; this again involves a sum over two different momenta $\vec k$ and $\vec k'$. 
However, the expectation value of $O_{\vec r}^{\rm 2D}$ in a direct product of states involving $\vec k$ only gets a contribution from terms in which $\vec k' = \vec k$. It turns out that the relevant part of $O_{\vec r}^{\rm 2D}$ that contributes to the expectation value can be written as \beq O_{\vec r}^{\rm 2D} ~=~ \frac{4i}{N} ~\sum_{\vec k} ~[ b_{\vec k}^\dagger a_{\vec k} e^{i\vec k \cdot \vec r} ~-~ a_{\vec k}^\dagger b_{\vec k} e^{-i\vec k \cdot \vec r}]. \eeq The ground state and excited states for $J_3 = - \infty$ are given by $\psi_{1\vec k}$ and $\psi_{2\vec k}$ respectively, while the two states are interchanged for $J_3 = \infty$. Using Eq.\ (\ref{wave}), we find that \beq \langle O_{\vec r}^{\rm 2D} \rangle ~=~ \pm \frac{4}{N} ~\sum_{\vec k} ~\cos (\vec k \cdot \vec r), \eeq where the $+$ and $-$ signs refer to the ground states of $J_3 = - \infty$ and $\infty$ respectively. This confirms our earlier expectation that in the ground states of $J_3 \to - \infty (\infty)$, $\langle O_{\vec r}^{\rm 2D} \rangle = \pm \delta_{\vec r, {\vec 0}}$. Finally, in the state after the quench, in which we have a mixture of the ground and excited states of $J_3 = \infty$ with probabilities $1 - p_{\vec k}$ and $p_{\vec k}$ respectively, we find that \bea \langle O_{\vec r}^{\rm 2D} \rangle &=& - ~\delta_{\vec r,{\vec 0}} ~+~ \frac{2}{A} ~ \int ~d^2 \vec k ~ p_{\vec k} ~\cos (\vec k \cdot \vec r), \label{int2} \eea where the integral over momentum runs over half the Brillouin zone with area $A$. Note that the full Brillouin zone as well as $p_{\vec k}$ remains invariant under a reflection through the point $\vec k = (\pi/\sqrt{3}, 0)$: $k_x \to 2\pi/\sqrt{3} - k_x$, $k_y \to - k_y$. However, $\cos (\vec k \cdot {\vec r})$ changes by a factor of $(-1)^{n_2}$, if the components of $\vec r$ are given by $x=\sqrt{3}(n_1 +n_2/2)$ and $y=3n_2/2$. Hence, $\langle O_{\vec{r}}^{\rm 2D} \rangle =0$ for odd values of $n_2$. 
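Equation (\ref{int2}) and the odd-$n_2$ selection rule can be cross-checked numerically with the same $(v_1, v_2)$ substitution as for the defect density (spanning vectors ${\vec M}_1 = (\sqrt{3}/2, 3/2)$ and ${\vec M}_2 = (\sqrt{3}/2, -3/2)$ assumed, as before):

```python
import numpy as np

# <O_r^2D> of Eq. (int2) on a momentum grid; M1 = (sqrt(3)/2, 3/2) and
# M2 = (sqrt(3)/2, -3/2) are the assumed honeycomb spanning vectors.
SQ3 = np.sqrt(3.0)

def corr(n1, n2, tau, J1=1.0, J2=1.0, J=1.0, N=600):
    v1, v2 = np.meshgrid(np.arange(N) / N, np.arange(N) / N, indexing="ij")
    kx = 2 * np.pi * (v1 + v2 - 1) / SQ3
    ky = 2 * np.pi * (v2 - v1) / 3
    f = (J1 * np.sin(SQ3 / 2 * kx + 1.5 * ky)
         - J2 * np.sin(SQ3 / 2 * kx - 1.5 * ky))
    p = np.exp(-2 * np.pi * tau * f**2 / J)
    x, y = SQ3 * (n1 + n2 / 2), 1.5 * n2            # lattice vector r = (x, y)
    val = 2 * (p * np.cos(kx * x + ky * y)).mean()  # (2/A) * integral over half BZ
    return val - 1.0 if (n1, n2) == (0, 0) else val

# Sudden quench: <O_0> near +1; adiabatic quench: <O_0> near -1; odd n2: zero.
print(corr(0, 0, 0.001), corr(0, 0, 100.0), corr(0, 1, 10.0))
```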
\begin{figure} \rotatebox{0}{\includegraphics*[width=\linewidth]{figquench5.comp.ps}} \caption{Plot of $O_{\vec r}^{\rm 2D}$ sans the delta function peak at the origin for $J_1=J_2=J$ and $J \tau=10$ as a function of $n_1$ and $n_2$ (see text for details). The spatial anisotropy of the defect correlation function is clearly evident even for $J_1=J_2$.} \label{fig4} \end{figure} For large values of $\tau$, substituting the expression in Eq.\ (\ref{defect2d}) in the above integral, we find that the dominant contribution comes from the region near the line given in Eq. (\ref{line}). Thus at every point $\vec k_0$ lying on that line, we can introduce variables $k_\parallel$ and $k_\perp$ which vary along the line and perpendicular to it along the directions ${\hat n}_\parallel$ and ${\hat n}_\perp$ respectively. Close to $\vec k_0$, the integrand in Eq.\ (\ref{int2}) will take the form $\exp [-a\tau k_\perp^2 \pm i (\vec k_0 + k_\parallel {\hat n}_\parallel + k_\perp {\hat n}_\perp) \cdot \vec r]$, where $a$ is a number of order 1 whose value depends on $\vec k_0$. The integral over $k_\perp$ will give a factor of $\exp \left[-(\vec r \cdot {\hat n}_\perp)^2 /(4a\tau)\right]/\sqrt{a \tau}$. Thus we find that the density of defects is of order $1/\sqrt{\tau}$ in accordance with Eq.\ (\ref{genres}). This also leads us to expect that the spatial range of the defect correlation should go as $\sqrt{\tau}$. Next we consider the shape of the defect correlation function. For this purpose, we evaluate Eq.\ (\ref{int2}) numerically so as to obtain the $\vec r$ dependence of $\langle O_{\vec r}^{\rm 2D} \rangle$. In general, we expect the correlation will be anisotropic in space if $J_1/J_2 \gg 1$ or $\ll 1$ which can be most easily seen from the fact that setting $J_1=0$ or $J_2=0$ leads to the 1D result derived in Sec.\ \ref{1db}. 
A plot of the correlation function $\langle O_{\vec r}^{\rm 2D} \rangle$, without the delta function peak at ${\vec r}=0$, and as a function of $n_1$ and $n_2$, where $x=\sqrt{3}(n_1 +n_2/2)$ and $y=3n_2/2$, is shown in Fig.\ \ref{fig4}. In this plot, we have omitted the delta function contribution to $\langle O_{\vec r=0}^{\rm 2D}\rangle$ in order to make the correlations at $\vec r \ne {\vec 0}$ visible. In the $x$ direction, the correlations oscillate; the amplitude of oscillations decays monotonically with $x$, in a qualitatively similar manner to the 1D correlation function $O_r$ shown in Fig.\ \ref{fig1} for $y=n_2=0$. The correlations decay in a monotonic way with $y$ for $x = \sqrt{3}(n_1 + n_2/2) = 0$ (along the straight line at an angle $\theta = \tan^{-1}(-0.5)$ in the figure). Thus the correlations behave quite anisotropically even for $J_1=J_2$. \begin{figure} \rotatebox{0}{\includegraphics*[width=\linewidth]{figquench6.comp.ps}} \caption{Plot of $\left<O_{\vec r}^{\rm 2D}\right>$ sans the delta function peak at the origin as a function of $\vec r$ for several representative values of $J_2/J$ for $J_1=J$ and $J\tau=5$. The plot displays the change in the shape of the defect correlation function as a function of $J_2/J_1$ (see text for details).} \label{fig5} \end{figure} We now aim to understand the variation of the spatial dependence of $\left<O_{\vec r}^{\rm 2D}\right>$ with the parameters $J_1$ and $J_2$. Such a variation can be analytically understood by noting that for $J\tau \gg 1$, the maximum contribution to $\left< O_{\vec r}^{\rm 2D} \right>$ comes from around the wave vector ${\vec k}_0$ for which $p({\vec k}_0)=1$. For $J_2 \gg (\ll) J_1$, this occurs when $\sin[{\vec k}\cdot {\vec M}_2({\vec M}_1)] =0$, which yields ${\vec k}_0 = \pi (\sqrt{3} {\hat i} \mp {\hat j})/2$. The maximum contribution to $\left<O_{\vec r}^{\rm 2D} \right>$ occurs where $\cos({\vec k}_0 \cdot {\vec n})$ is maximum, i.e., ${\vec k}_0 \cdot {\vec n}=0$. 
Thus for $J_2 \gg (\ll) J_1$, $\left<O_{\vec r}^{\rm 2D}\right>$ is expected to be maximal along the lines $n_1 +n_2=0 \, (n_2=0)$ in the $n_1-n_2$ plane. This expectation is confirmed in Fig.\ \ref{fig5}, which shows $\left<O_{\vec r}^{\rm 2D}\right>$ for several representative values of $J_2/J$ for a fixed $J_1=J$ and $J\tau=5$. We find that $\left< O_{\vec r}^{\rm 2D} \right>$ is maximal along the $n_1+n_2=0 \, (n_2=0)$ line for $J_2=5 \, (0.25) \, J_1$. This clearly shows that the defects produced in the quench will be highly anisotropic in these limits. For intermediate values of $J_1$ and $J_2$, the anisotropy in $\left<O_{\vec r}^{\rm 2D}\right>$ can be similarly deduced by first finding ${\vec k}_0$ for which $p_{\vec k_0}=1$ and then computing ${\vec n}$ for which ${\vec k}_0 \cdot {\vec n}$ vanishes. The gradual evolution of the shape of $\left<O_{\vec r}^{\rm 2D}\right>$ as we go from the limit $J_2 \ll J_1$ to the limit $J_2 \gg J_1$ can be seen in Fig.\ \ref{fig5}. \begin{figure} \rotatebox{0}{\includegraphics*[width=\linewidth]{figquench7.ps}} \caption{Plot of $\left<O_{\vec r}^{\rm 2D}\right>$ (sans the delta function peak) at the representative points $(-1,0)$ on the $x$ axis (black solid line), $(0,2)$ on the $y$ axis (blue dotted line), and $(2,-2)$ along the $-45^{\circ}$ direction in the $n_1-n_2$ plane (red dashed line), as a function of $\alpha = \tan^{-1} (J_2/J_1)$ for fixed $J_1^2+J_2^2=J^2=1$.} \label{fig7} \end{figure} To obtain a more detailed picture of the spatial anisotropy of the defect correlations as a function of $J_1/J_2$, we define a parameter $\alpha$: $J_{1[2]} = J \cos(\alpha)[\sin(\alpha)]$. A variation of $\alpha$ therefore changes the ratio $J_2/J_1$ from $0$ to $\infty$ while fixing $J_1^2+J_2^2=J^2=1$. 
The plot of $\left<O_{\vec r}^{\rm 2D} \right>$ at the points $(n_1,n_2)=(-1,0)$ (on the $x$ axis of the $n_1-n_2$ plane), $(n_1,n_2)=(2,-2)$ (along the $-45^{\circ}$ line in the $n_1-n_2$ plane) and $(n_1,n_2)=(0,2)$ (on the $y$ axis of the $n_1-n_2$ plane) as a function of $\alpha$, shown in Fig.\ \ref{fig7}, clearly reveals the nature of the anisotropy of the correlation function. We find that as the ratio $J_2/J_1 = \tan(\alpha)$ is varied from $0$ to $\infty$, the correlation at the representative point $(-1,0)$ along the $x$ axis increases till it reaches the point $J_1=J_2$ ($\alpha=\pi/4$) and then decays to $0$ as $\alpha$ approaches $\pi/2$. This signifies that the correlation along the $x$ axis in the $n_1-n_2$ plane becomes maximum for $J_1=J_2$. On the other hand, for the representative points $(0,2)$ on the $y$ axis and $(2,-2)$ along the line with slope $-45^{\circ}$, the correlation becomes maximum when $J_2 \ll J_1$ ($\alpha=0$) and $J_2 \gg J_1$ ($\alpha=\pi/2$) respectively, as expected from Fig.\ \ref{fig5}. This leads us to conclude that the spatial anisotropy of the defect correlation function $\left<O_{\vec r}^{\rm 2D}\right>$ depends crucially on the ratio $J_1/J_2$. Finally we note that we can obtain a measure of the spatial extent of the defect correlation function by calculating \beq \langle \vec r^2 \rangle ~\equiv~ \sum_{\vec r} ~\vec r^2 ~\langle O_{\vec r}^{\rm 2D} \rangle . \eeq To evaluate this, we first rewrite Eq. (\ref{int2}) as \bea \langle O_{\vec r}^{\rm 2D} \rangle &=& - ~\delta_{\vec r,{\vec 0}} ~+~ \frac{1}{A} ~ \int ~d^2 \vec k ~ p_{\vec k} ~e^{i\vec k \cdot \vec r}, \label{int3} \eea where the integral now runs over the entire Brillouin zone. We then note that $\vec r^2 e^{i\vec k \cdot \vec r} = - \nabla^2_{\vec k} e^{i\vec k \cdot \vec r}$, integrate by parts in Eq. 
(\ref{int3}) so as to make $\nabla^2_{\vec k}$ act on $p_{\vec k}$, and use the identity $\sum_{\vec r} e^{i\vec k \cdot \vec r} = 2 A \delta^2 (\vec k)$, to obtain $\langle \vec r^2 \rangle = -2 (\nabla^2_{\vec k} p_{\vec k})_{\vec k = {\vec 0}} = 24 \pi \tau (J_1^2 + J_2^2 + J_1 J_2)/J$. This shows that the spatial extent of $\langle O_{\vec r}^{\rm 2D} \rangle $ grows as $\sqrt \tau$ for large $\tau$. [Eq. (\ref{or}) shows that we get the same behavior in 1D.] Finally, we can get an idea of the spatial anisotropy of $\langle O_{\vec r}^{\rm 2D} \rangle$ by computing \beq \langle \vec r^2 \rangle_\theta ~\equiv~ \sum_{\vec r} ~(x \cos \theta + y \sin \theta)^2 ~ \langle O_{\vec r}^{\rm 2D} \rangle , \eeq where $\vec r = (x,y)$, and $\theta$ denotes a direction along which the spatial extent is being calculated. By writing $(x \cos \theta + y \sin \theta)^2 e^{i\vec k \cdot \vec r} = - (\cos \theta \partial /\partial k_x + \sin \theta \partial /\partial k_y)^2 e^{i\vec k \cdot \vec r}$, we can prove that $\langle \vec r^2 \rangle_\theta = 6 \pi \tau [(J_1 - J_2) \cos \theta + \sqrt{3} (J_1 + J_2) \sin \theta]^2 /J$. We see that $\langle \vec r^2 \rangle_\theta$ has a marked dependence on $\theta$; in fact, it vanishes in the direction given by $\theta = \tan^{-1} [(J_2 - J_1)/\sqrt{3} (J_2 + J_1)]$, and is maximum in the perpendicular direction. These statements should be interpreted with some care; $\langle \vec r^2 \rangle_\theta$ may be small for some value of $\theta$ either due to a cancellation between positive and negative correlations or because $\langle O_{\vec r}^{\rm 2D} \rangle$ is small in that direction. We note that the sum rule discussed in Sec. \ref{sumrule} is also valid in 2D, and we get $\sum_{\vec r} \langle O_{\vec r}^{\rm 2D} \rangle = -1 + 2 p_{\vec 0} = 1$ regardless of how $J_3$ is varied from $- \infty$ to $\infty$. \subsection{Entropy} \label{ent} As discussed in Sec. 
\ref{1db}, for each momentum $\vec k$, the final density matrix is effectively diagonal, with entries $1 - p_{\vec k}$ and $p_{\vec k}$. The density matrix of the entire system takes the product form $\rho = \bigotimes \rho_{\vec k}$. The von Neumann entropy density corresponding to this state is given by \beq s ~=~ - ~\frac{1}{A} ~\int d^2 \vec k ~[~ (1 - p_{\vec k}) \ln (1 - p_{\vec k}) ~ +~ p_{\vec k} \ln p_{\vec k} ~], \label{entropy1}\eeq where the integral again goes over half the Brillouin zone. Let us now consider the dependence of this quantity on the quenching time scale $\tau$ \cite{sen1}. If $\tau$ is very small, the system stays in its initial state and $p_{\vec k}$ will be close to 1 for all values of $\vec k$; for the same reason, $\langle O_{\vec 0} \rangle$ will remain close to 1. If $\tau$ is very large, the system makes a transition to the final ground state for all momenta except near the line described in Eq. (\ref{line}). Hence $p_{\vec k}$ will be close to 0 for all $\vec k$ except near that line, and $\langle O_{\vec 0} \rangle$ will be close to $-1$. In both these cases, the entropy density will be small. We therefore expect that there will be an intermediate region of values of $\tau$ in which $s$ will show a maximum and $\langle O_{\vec 0} \rangle$ will show a crossover from $-1$ to 1. A plot of $s$ as a function of $J\tau$ and $\alpha$, shown in Fig.\ \ref{fig6}, confirms this expectation. We find that the entropy reaches a maximum at the intermediate value of $J\tau$ where $\langle O_{\vec 0} \rangle$ crosses over from $-1$ to 1 for all values of $\alpha$. \begin{figure} \rotatebox{0}{\includegraphics*[width=\linewidth]{figquench8.comp.ps}} \caption{Plot of the entropy density $s$ as a function of $J\tau$ and $\alpha=\tan^{-1} (J_2/J_1)$. 
The entropy density peaks when $\left<O_{\vec 0}\right>$ crosses from $-1$ to $1$ as discussed in the text.} \label{fig6} \end{figure} \section{Extended Kitaev Model} \label{ekit} The extended Kitaev model, described by $H_2$ (Eq.\ (\ref{extkit1})), can also be mapped, using the Majorana transformation given by Eq.\ (\ref{maj2d}), to a fermionic Hamiltonian \bea H'_1 &=& i J_4 \sum_{(j,l) \in A} \left[a_{j,l} a_{j+2,l} - b_{j,l+1} b_{j+2,l+1} \right] ~+~ H_{\rm 2D}, \nonumber \\ & & \label{exkit1} \eea where $H_{\rm 2D}$ is given by Eq.\ (\ref{kitham2d1}). We note that in this model, just as for $H_{\rm 2D}$, $D_n$ commutes with all the terms in the Hamiltonian and the ground state corresponds to $D_n=1$ for all links of the honeycomb lattice. Thus, in momentum space, $H'_1$ reduces to a bilinear $2 \times 2$ matrix Hamiltonian $H'_2 = \sum_{\vec k} \psi(\vec k)^\dagger H'_3(\vec k) \psi(\vec k)$, where \bea H'_3(\vec k) &=& 2 \Bigg\{~[J_1 \sin ({\vec k} \cdot {\vec M}_1) ~-~ J_2 \sin ({\vec k} \cdot {\vec M}_2)] ~\sigma^1 \nonumber \\ && + [J_3 ~+~ J_1 \cos ({\vec k} \cdot {\vec M}_1) ~+~ J_2 \cos ({\vec k} \cdot {\vec M}_2)] ~\sigma^2 \nonumber \\ && - J_4 \sin(\sqrt{3}k_x) \sigma^3 \Bigg\}. \label{exkit2} \eea This can be diagonalized to obtain the energy eigenvalues \bea E_{\vec k}^{'\,\pm} &=& \pm 2 \Bigg( J_4^2 \sin^2(\sqrt{3} k_x) + [J_3 ~+~ J_1 \cos ({\vec k} \cdot {\vec M}_1) \nonumber \\ && + J_2 \cos ({\vec k} \cdot {\vec M}_2)]^2 + [J_1 \sin ({\vec k} \cdot {\vec M}_1) \nonumber \\ && -J_2 \sin ({\vec k} \cdot {\vec M}_2)]^2 \Bigg)^{1/2} . \label{exkit3} \eea Note that the presence of a non-zero $J_4$ introduces a gap in the spectrum (except when $\sqrt{3}k_x = n \pi$) for all values of $J_1$, $J_2$ and $J_3$. Thus the quench of $J_4$ ($J_4 = J (t/\tau)$) carries the system through a critical point at $t=0$ provided $|J_1-J_2| \le J_3 \le (J_1+J_2)$. 
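The gap opening can be checked directly from Eq.\ (\ref{exkit3}): at the momentum $\vec k_0 = ((2/\sqrt{3}) \cos^{-1}(-\eta/2), 0)$ with $\eta = J_3/J_1$, where the $J_4$-independent brackets vanish for $J_1 = J_2$, the $J_4 \sin(\sqrt{3} k_x)$ term does not. A minimal numerical sketch (honeycomb spanning vectors ${\vec M}_1 = (\sqrt{3}/2, 3/2)$ and ${\vec M}_2 = (\sqrt{3}/2, -3/2)$ assumed):

```python
import numpy as np

# |E'_k| from Eq. (exkit3); M1, M2 are the assumed honeycomb spanning vectors.
SQ3 = np.sqrt(3.0)

def energy(kx, ky, J1, J2, J3, J4):
    kM1 = SQ3 / 2 * kx + 1.5 * ky
    kM2 = SQ3 / 2 * kx - 1.5 * ky
    return 2 * np.sqrt((J4 * np.sin(SQ3 * kx)) ** 2
                       + (J3 + J1 * np.cos(kM1) + J2 * np.cos(kM2)) ** 2
                       + (J1 * np.sin(kM1) - J2 * np.sin(kM2)) ** 2)

eta = 1.0                               # J3/J1, inside the window 0 <= eta <= 2
k0x = (2 / SQ3) * np.arccos(-eta / 2)   # critical momentum of the J4 = 0 model
gapless = energy(k0x, 0.0, 1.0, 1.0, eta, 0.0)
gapped = energy(k0x, 0.0, 1.0, 1.0, eta, 0.5)
print(gapless, gapped)   # zero gap at J4 = 0, finite gap once J4 != 0
```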
The probability $p_{\vec k}$ of defect formation in such a quench, where the system evolves according to Landau-Zener dynamics, can be read off from Eqs.\ (\ref{exkit2}-\ref{exkit3}) as \bea p_{\vec k} &=& e^{-\pi \tau (E_{\vec k}^{'\,\pm})^2|_{J_4=0}/| 2 J \sin (\sqrt{3} k_x) |}. \label{exkit4} \eea The density of defects is thus given by $n = \int d^2 \vec k p_{\vec k} /A$, where the integral is taken over half the Brillouin zone defined by the triangle with vertices lying at $(k_x,k_y)=(2\pi/\sqrt{3},0),(0,2\pi/3),(0,-2\pi/3)$ and $A$ is the area of this region. A plot of the defect density as a function of the quench rate $\tau$ and $\eta= J_3/J_1$ for $J_1=J_2=J$ is shown in Fig.\ \ref{fig9}. Note that for a large quench time $\tau$, the maximum contribution to the defect density comes from around the momentum $\vec k_0 = (k_{x0},k_{y0})$ for which $E_{\vec k_0}^{'\,\pm}|_{J_4=0}$ vanishes. Around this point $p_{\vec k} \sim \exp[-\pi J \tau \sum_{\alpha,\beta=x,y} f_{\alpha \beta} (\vec k_0) (\vec k -\vec k_0)_{\alpha} (\vec k - \vec k_0)_{\beta}]$, so that $n \sim 1/\tau$ in accordance with the prediction of the general formula Eq.\ (\ref{gquench1}) for $d=m=2$ and $\nu=z=1$. \begin{figure} \rotatebox{0}{\includegraphics*[width=\linewidth]{figquench9.comp.ps}} \caption{Plot of the defect density as a function of $\eta = J_3/J_1$ and $J\tau$ for $J_1=J_2=J$.} \label{fig9} \end{figure} Next, we look at the defect correlation functions for the extended Kitaev model. To this end, we define the operator \beq O_{\vec r}^{\rm ext} ~=~ i ~\left(a_{\vec n} a_{\vec n+\vec r}-b_{\vec n} b_{\vec n +\vec r} \right) \label{op1} \eeq and consider its expectation value for ${\vec r} \ne {\vec 0}$. Here $\vec r = (\sqrt{3}n_1+\sqrt{3}n_2/2,3n_2/2)$ (with integers $n_1$ and $n_2$) specifies the sites of the honeycomb lattice. (For ${\vec r} = {\vec 0}$, $O_{\vec r}^{\rm ext}$ vanishes since $a_{\vec n}^2 = b_{\vec n}^2 =1$). 
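The $1/\tau$ scaling can be verified numerically in the same way as for the pure Kitaev model, being careful to set $p_{\vec k} = 0$ where $\sin(\sqrt{3} k_x) = 0$, since the exponent in Eq.\ (\ref{exkit4}) diverges there. As before, the spanning vectors ${\vec M}_1 = (\sqrt{3}/2, 3/2)$ and ${\vec M}_2 = (\sqrt{3}/2, -3/2)$ are assumptions of the sketch:

```python
import numpy as np

# Defect density for the J4 quench, Eq. (exkit4), averaged over the Brillouin
# zone with the (v1, v2) substitution; assumed vectors M1, M2 as before.
SQ3 = np.sqrt(3.0)

def defect_density_ext(tau, J1=1.0, J2=1.0, eta=1.0, J=1.0, N=700):
    v1, v2 = np.meshgrid(np.arange(N) / N, np.arange(N) / N, indexing="ij")
    kx = 2 * np.pi * (v1 + v2 - 1) / SQ3
    ky = 2 * np.pi * (v2 - v1) / 3
    kM1 = SQ3 / 2 * kx + 1.5 * ky
    kM2 = SQ3 / 2 * kx - 1.5 * ky
    E2 = 4 * ((eta * J1 + J1 * np.cos(kM1) + J2 * np.cos(kM2)) ** 2
              + (J1 * np.sin(kM1) - J2 * np.sin(kM2)) ** 2)  # (E'_k)^2 at J4 = 0
    denom = np.abs(2 * J * np.sin(SQ3 * kx))
    p = np.zeros_like(denom)
    mask = denom > 1e-12        # p -> 0 where the exponent diverges
    p[mask] = np.exp(-np.pi * tau * E2[mask] / denom[mask])
    return p.mean()

n10, n40 = defect_density_ext(10.0), defect_density_ext(40.0)
slope = np.log(n40 / n10) / np.log(4.0)
print(n10, n40, slope)   # the extracted exponent should be close to -1
```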
For $J_4 \to \mp \infty$, the model reduces to a set of decoupled chains involving Majorana fermions on nearest neighbor sites. For this model, it is known \cite{shastry} that for $\vec r \ne \vec 0$, \beq \left< O_{\vec r}^{\rm ext} \right> ~=~ \mp \delta_{n_2,0} ~ \frac{2}{\pi n_1}~ [(-1)^{n_1} ~-~ 1] \eeq in the ground states for $J_4 \to \mp \infty$ respectively. For generic values of $J_4$ and for a mixed final state after the quench characterized by a defect probability $p_{\vec k}$, we find \bea \left< O_{\vec r}^{\rm ext} \right> &=& -\frac{8}{N} ~\sum_{\vec k} \left< a_{\vec k}^\dagger a_{\vec k}- b_{\vec k}^\dagger b_{\vec k} \right> \sin (\vec r \cdot {\vec k}) \nonumber \\ &=& \delta_{n_2,0} ~\frac{2}{\pi n_1}~ [(-1)^{n_1} ~-~ 1] \nonumber \\ & & +~ \frac{4}{A} \int ~d^2 \vec k ~{\rm sgn} [\sin (\sqrt{3} k_x)] ~p_{\vec k} ~\sin (\vec r \cdot {\vec k}). \nonumber \\ & & \label{op2} \eea The sign of $\sin (\sqrt{3} k_x)$ appears in Eq. (\ref{op2}) because for $J_4 \to \infty$, the ground state of Eq. (\ref{exkit2}) has $\left< a_{\vec k}^\dagger a_{\vec k} - b_{\vec k}^\dagger b_{\vec k} \right> = \pm 1$ depending on whether $\sin (\sqrt{3} k_x) > 0$ or $<0$ respectively. To obtain an analytical understanding of the nature of the correlation function, we look at the case where $J_1=J_2=J$, $J_3= \eta J$ and $J\tau \to \infty$. Note that one needs the condition $0\le \eta \le 2$ for the system to pass through a gapless (critical) point during the quench. In this case, the main contribution to the last term of the correlation function $\left< O_{\vec r}^{\rm ext} \right>$ in Eq.\ (\ref{op2}) comes from $\vec k=\vec k_0 =((2/\sqrt{3}) \cos^{-1} (-\eta/2),0)$ where $p_{\vec k =\vec k_0} =1$. Thus for $J\tau \to \infty$ one gets \bea \left< O_{\vec r}^{\rm ext} \right> \simeq \sin\left[ \left\{ 2n_1 + n_2 \right\} \cos^{-1} \left(\frac{-\eta}{2} \right)\right] \label{op3} \eea where we have omitted the first term (proportional to $\delta_{n_2 ,0}$) in Eq. 
(\ref{op2}). Eq.\ (\ref{op3}) clearly brings out the dependence of the spatial anisotropy of the defect correlation function on $\eta$. In particular, for $\eta=0$, $\left< O_{\vec r}^{\rm ext} \right> \sim \sin \{ (n_1+ n_2/2) \pi \}$, so that its sign alternates between sites with odd and even values of $n_1$ (if $n_2$ is odd). Similarly, for $\eta=2$, $\left< O_{\vec r}^{\rm ext} \right> \sim \sin \{ (2n_1 + n_2)\pi \} \sim 0$. Such behavior of the correlation function is qualitatively supported by the numerical computation of $\left< O_{\vec r}^{\rm ext} \right>$ for $J_1=J_2=J$, $J_3= \eta J$ and $J\tau =3$ as shown in Fig.\ \ref{fig10}. We find that for $\eta =0$ (top left plot of Fig.\ \ref{fig10}), it alternates between odd and even $n_1$ sites, while for $\eta$ close to $2$ (bottom right plot in Fig.\ \ref{fig10}), the correlation function is much smaller than for $\eta =0$. \begin{figure} \rotatebox{0}{\includegraphics*[width=\linewidth]{figquench10.comp.ps}} \caption{Plot of the defect correlation function (sans the first term with the delta function peak in Eq.\ (\ref{op2})) with $J\tau=3$ and $J_1=J_2= J$ for several representative values of $\eta=J_3/J_1$. See text for details.} \label{fig10} \end{figure} Finally, we compute the entropy generated in such a quench process, given by Eq.\ (\ref{entropy1}) where $p_{\vec k}$ is given by Eq.\ (\ref{exkit4}). A plot of the entropy density as a function of $J\tau$ and $\eta = J_3/J_1$ with $J_1=J_2=J$ is shown in Fig.\ \ref{fig11}. Once again we find, as in the Kitaev model, that the entropy density peaks for intermediate values of $\tau$. 
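The entropy peak can be reproduced with a few lines of numerics: evaluate Eq.\ (\ref{entropy1}) with $p_{\vec k}$ from Eq.\ (\ref{exkit4}) over a range of quench times (spanning vectors ${\vec M}_1 = (\sqrt{3}/2, 3/2)$ and ${\vec M}_2 = (\sqrt{3}/2, -3/2)$ assumed, as in the earlier sketches):

```python
import numpy as np

# Entropy density s(tau), Eq. (entropy1), with p_k from Eq. (exkit4);
# M1 = (sqrt(3)/2, 3/2) and M2 = (sqrt(3)/2, -3/2) are assumed.
SQ3 = np.sqrt(3.0)

def entropy_density(tau, J1=1.0, J2=1.0, eta=1.0, J=1.0, N=400):
    v1, v2 = np.meshgrid(np.arange(N) / N, np.arange(N) / N, indexing="ij")
    kx = 2 * np.pi * (v1 + v2 - 1) / SQ3
    ky = 2 * np.pi * (v2 - v1) / 3
    kM1 = SQ3 / 2 * kx + 1.5 * ky
    kM2 = SQ3 / 2 * kx - 1.5 * ky
    E2 = 4 * ((eta * J1 + J1 * np.cos(kM1) + J2 * np.cos(kM2)) ** 2
              + (J1 * np.sin(kM1) - J2 * np.sin(kM2)) ** 2)
    denom = np.abs(2 * J * np.sin(SQ3 * kx))
    p = np.zeros_like(denom)
    mask = denom > 1e-12
    p[mask] = np.exp(-np.pi * tau * E2[mask] / denom[mask])
    q = np.clip(p, 1e-12, 1 - 1e-12)    # regularize log(0) at p = 0 or 1
    return -(q * np.log(q) + (1 - q) * np.log(1 - q)).mean()

taus = [0.001, 0.01, 0.1, 1.0, 10.0, 100.0]
svals = [entropy_density(t) for t in taus]
print(dict(zip(taus, svals)))   # the maximum sits at an intermediate tau
```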
\section{Discussion} \label{concl} \begin{figure} \rotatebox{0}{\includegraphics*[width=\linewidth]{figquench11.comp.ps}} \caption{Plot of the entropy density $s$ as a function of quench time $\tau$ and $\eta = J_3/J_1$.} \label{fig11} \end{figure} In conclusion, we have studied the quench dynamics of the Kitaev model in 1D and 2D and the extended Kitaev model in 2D. For the 1D Kitaev model and the 2D extended Kitaev model, we have shown that the defect density scales as $1/{\tau}^{d/2}$ with the quench time $\tau$, in accordance with the general results of Ref.\ \onlinecite{anatoly1}. For the 2D Kitaev model, where the quench takes the system through a gapless line, we found that the scaling of the defect density with $\tau$ changes due to the presence of a critical {\it line} instead of a critical {\it point}. In this context, we have presented a general formula for the quench rate dependence of the defect density for a $d$ dimensional system when the quench takes such a system through a $d-m$ dimensional critical surface. We have also computed the defect correlation function for such quenches by an exact computation of all independent non-zero spin correlation functions in the defect ground state. In $d=2$, we have found that the defect correlation function exhibits spatial anisotropy, and we have studied the dependence of this anisotropy on the system parameters. Finally, we have computed the entropy generated in such processes and have shown that the entropy peaks approximately at the values of the quench time for which $\langle O_{\vec 0} \rangle$ crosses over from $-1$ to $1$. There have been proposals for experimentally realizing the Kitaev model in systems of ultracold atoms and molecules trapped in optical lattices \cite{duan}. If this can be done, the evolution of the defect correlations with various parameters (such as $J_2/J_1$ as shown in Fig.\ \ref{fig7}) can, in principle, be experimentally detected by spatial noise correlation measurements as pointed out in Ref. 
\onlinecite{altman}. Finally, we would like to note that the quench dynamics of the $XXZ$ spin-1/2 chain has been recently studied with the Hamiltonian being varied along a line in parameter space where the model is critical \cite{pellegrini}. In momentum space, the model only has a finite number of critical points, but the system stays close to those critical points for a long time. This is a different situation from the one that we have analyzed in Sec. III where there is a line of critical points in momentum space; hence our results for the scaling of the defect density are not applicable to the work in Ref. \onlinecite{pellegrini}. We thank Amit Dutta and Anatoly Polkovnikov for stimulating discussions and several important suggestions. DS thanks DST, India for financial support under the project SR/S2/CMP-27/2006. \vspace{-0.0 cm}
\section{Introduction} \noindent We continue our program of geometrical preserved quantities (in Article I \cite{PE-1}'s sense) being systematically derived as solutions to PDE systems -- preserved equations -- treated as Free \cite{CH1} Characteristic Problems \cite{CH2, John}. \cite{PE-1} moreover provides specific methods of solution for these PDE systems. Articles I and II having considered this for Similarity and Affine Geometry respectively, we now turn to the case of 1-$d$ Projective Geometry \cite{Desargues, VY10, HC32, S04, Stillwell, Hartshorne}, Projective Geometry playing a notable role in the Foundations of Geometry \cite{Hilb-Ax, HC32, Coxeter, Stillwell, 8-Pillars}. The corresponding geometrical automorphism group is \begin{equation} Proj(1) \m = \m PGL(2, \mathbb{R}) \mbox{ } , \end{equation} `PGL' standing for projective general linear group. This has three generators: a translation $P$, a dilation $D$ and a {\it special-projective transformation} $Q$ (cf.\ the more commonly used concept of a special-conformal transformation and generator). \mbox{ } \noindent $P$ and $D$ already being the 1-$d$ similarity group $Sim(1)$'s generators, they were already considered in Article I. We thus begin by considering $Q$ by itself in Secs 2 and 3. While the ensuing automorphism group is isomorphic to that of the 1-$d$ translation $P$, $Q$ nonetheless possesses a distinct notion of preserved quantity: {\it differences of reciprocals} rather than just differences. Upon including $D$ as well as $Q$ (Sec 4), the automorphism group deviates from the dilatations ($P$ and $D$) by just a sign, and yet again a distinct notion of preserved quantity ensues: {\it ratios of differences of reciprocals} rather than just ratios of differences. 
We take these distinctive geometrical invariants to further motivate study of these `Iso-Translational' and `Para-Dilatational' Geometries, which moreover in 1-$d$ further coincide with `Iso-Euclidean' and `Iso-Similarity' = `Iso-Affine' Geometries respectively. \mbox{ } \noindent Full 1-$d$ projective invariants are well-known to be {\it cross-ratios}, quantities whose invariance properties can already be inferred from the work of Pappus \cite{Pappus}, and whose modern-era development started with the elder Carnot \cite{Carnot}, for all that the name `cross-ratio' itself was only coined in subsequent work by Clifford \cite{Clifford}. We moreover derive that 1-$d$ projective preserved quantities are suitably smooth functions of cross-ratios in Secs 5 and 6, establishing these to be the {\sl unique} functional form solving the 1-$d$ projective preserved equations system's Free Characteristic Problem in Secs 7 and 8. The current Article's analysis points moreover to a new interpretation of cross-ratio: cross-ratio functional dependence is that functional dependence which is concurrently of ratios of differences and of differences of reciprocals. With $P$ and $Q$ being inconsistent by themselves, this completes the study of the geometrically-significant continuous subgroups of $Proj(1)$. \mbox{ } \noindent Preserved quantities as conceived of in the current Series of Articles are moreover underlied by consideration of constellations, constellation spaces, shapes and shape spaces \cite{Kendall84, Kendall89, Small, Kendall, FileR-Quad-I, PE16, ABook, I-II-III-Minimal-N}. In the context of Projective Geometry, the corresponding Projective Shape Theory has been developed and reviewed in particular in \cite{MP05, Bhatta, PE16, KKH16}. Its main application to date is to Shape Statistics \cite{Kendall, JM00, Bhatta, DM16, PE16} in connection with \end{titlepage} \noindent Image Analysis \cite{Images} and Computer Vision \cite{CV}. 
Projective Geometry has largely not yet entered comparative study \cite{I89-I91, ABook, ASoS, AMech, Minimal-N-2} of Background Independence \cite{DiracObs, BI, BI-2, Giu06, ABeables, AObs2, AObs3, AObs4, APoT, ABook, 5-6-7} in Foundational and Theoretical Physics. Upcoming preprints on this substantial research frontier will be linked in subsequent versions of the current preprint. \section{Special-projective transformations and preserved equations} In $d$-dimensional Projective Geometry, the special-projective transformation's generator is \begin{equation} Q^a := x^a x^b \pa_b \mbox{ } . \end{equation} The special-projective preserved equation is thus \noindent\begin{equation} \sum_{I = 1}^N Q^a(\underline{q}^I) \mbox{\scriptsize \boldmath$Q$} \m = \m \sum_{I = 1}^N q^{aI} q^{bI} \pa_{q^{bI}} \mbox{\scriptsize \boldmath$Q$} \m = \m 0 \mbox{ } . \end{equation} Special-projective transformations close by themselves, forming the geometrical -- if hitherto nonstandard -- automorphism group \begin{equation} P\mbox{-}Iso\mbox{-}Tr(d) \mbox{ } . \end{equation} `Iso' stands here for isomorphic, and the $P$ prefix for the projective version, there also being a conformal version denoted by a $C$ prefix in Article V. \mbox{ } \noindent While this is isomorphic to the non-compact $d$-dimensional Abelian group of translations, \begin{equation} P\mbox{-}Iso\mbox{-}Tr(d) \mbox{ } \cong \mbox{ } \mathbb{R}^d \mbox{ } \cong \mbox{ } Tr(d) \mbox{ } , \end{equation} it clearly involves a different representation of generators, $x^a x^b \pa_b$ rather than the $\pa_b$ used for the translations. We show in the current Article that this furthermore leads to $P\mbox{-}Iso\mbox{-}Tr(d)$ having different geometrical preserved quantities than $Tr(d)$, strengthening the position that $P\mbox{-}Iso\mbox{-}Tr(d)$ indeed corresponds to a distinct Geometry in its own right. 
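\mbox{ } \noindent As a quick illustration added here (a hedged numerical sketch in Python; the function name and sample values are ours, not the Article's), the finite flow of the 1-$d$ special-projective generator can be computed explicitly: solving $\dot{x} = x^2$ gives $x \mapsto x/(1 - t\,x)$ away from the pole, and these maps compose additively in $t$, exactly as translations act on the coordinate $-1/x$.

```python
# Illustrative check (not from the source): the finite flow of the 1-d
# special-projective generator Q = x^2 d/dx. Solving dx/dt = x^2 gives
# x(t) = x0/(1 - t*x0), valid away from the pole at t = 1/x0.

def sp_flow(x, t):
    """Apply the finite special-projective flow for parameter t."""
    return x / (1.0 - t * x)

x0, s, t = 0.3, 0.4, 0.7

# The flows compose additively in the parameter, as translations do:
assert abs(sp_flow(sp_flow(x0, s), t) - sp_flow(x0, s + t)) < 1e-12

# Equivalently, the flow is a plain translation of the coordinate -1/x:
assert abs(-1.0 / sp_flow(x0, t) - (-1.0 / x0 + t)) < 1e-12
```

This is the concrete sense in which the representation differs from the translations while the abstract group is the same.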
\mbox{ } \noindent In 1-$d$, the special-projective generator moreover simplifies to \begin{equation} Q \m = \m x^2 \, \frac{\d}{\d x} \mbox{ } . \end{equation} This is one of the ways in which 1-$d$ is a distinguished special case. The special-projective preserved equation is now \noindent\begin{equation} \sum_{I = 1}^N q^{I \, 2} \pa_{q^{I}} \mbox{\scriptsize \boldmath$Q$} \m = \m 0 \mbox{ } . \label{SP-PE} \end{equation} Moreover, in 1-$d$, \begin{equation} \mathbb{R} \m = \m P\mbox{-}Iso\mbox{-}Tr(1) \m = \m P\mbox{-}Iso\mbox{-}Eucl(1) \mbox{ } , \end{equation} due to the absence of rotations and of nontrivial special-linear transformations respectively. This is a second way in which 1-$d$ is a distinguished case. \section{Piecemeal solution of the 1-$d$ special-projective preserved equation} For $N = 1$, using the notation $q_1 = x$, our preserved equation reduces by the flow method to the ODE \begin{equation} x^2 \, \frac{\d \mbox{\scriptsize \boldmath$Q$}}{\d x} = 0 \mbox{ } . \end{equation} So for $x \neq 0$, this reduces to \begin{equation} \frac{\d \mbox{\scriptsize \boldmath$Q$}}{\d x} = 0 \mbox{ } , \end{equation} and thus admits just the trivial solution, \begin{equation} \mbox{\scriptsize \boldmath$Q$} = const \mbox{ } . \end{equation} For $x = 0$, $\mbox{\scriptsize \boldmath$Q$}$ is a free function, but as for the dilational case in Article I, this entails having no freedom in moving away from $x = 0$. \mbox{ } \noindent $N = 2$ is minimal as regards having a nontrivial solution. Using also the notation $q_2 = y$, our preserved equation is now the PDE \begin{equation} ( x^2 \pa_x + y^2 \pa_y ) \mbox{\scriptsize \boldmath$Q$} = 0 \mbox{ } . 
\label{Proj1N2} \end{equation} Being a homogeneous-linear PDE in 2 variables, this is equivalent to the ODE \begin{equation} \frac{\d x}{x^2} \m = \m \frac{\d y}{y^2} \end{equation} which is amenable to direct integration, giving \begin{equation} - \frac{1}{y} \m = \m - \frac{1}{x} + u \mbox{ } , \end{equation} i.e.\ \begin{equation} u \m = \m \frac{1}{x} - \frac{1}{y} \mbox{ } . \label{Char-N2} \end{equation} By the chain-rule, any \begin{equation} \mbox{\scriptsize \boldmath$Q$} \m = \m \mbox{\scriptsize \boldmath$Q$}(u) \m = \m \mbox{\scriptsize \boldmath$Q$}\left( \frac{1}{x} - \frac{1}{y} \right) \label{PP-N2} \end{equation} moreover also solves: \begin{equation} (x^2\pa_x + y^2 \pa_y)\mbox{\scriptsize \boldmath$Q$}(u) \m = \m \mbox{\scriptsize \boldmath$Q$}^{\prime}(u)(x^2\pa_x + y^2 \pa_y)\left( \frac{1}{x} - \frac{1}{y} \right) \m = \m \mbox{\scriptsize \boldmath$Q$}^{\prime}(u) \left( - \frac{x^2}{x^2} + \frac{y^2}{y^2} \right) \m = \m \mbox{\scriptsize \boldmath$Q$}^{\prime}(u)(-1 + 1) = 0 \mbox{ } , \end{equation} where $\mbox{}^{\prime} := \d/\d u$. Alternatively, by the flow method, PDE (\ref{Proj1N2}) is equivalent to the ODE system \begin{equation} \dot{x} = x^2 \mbox{ } , \end{equation} \begin{equation} \dot{y} = y^2 \mbox{ } , \end{equation} \begin{equation} \dot{\mbox{\scriptsize \boldmath$Q$}} = 0 \mbox{ } , \end{equation} to be treated as a Free Characteristic Problem. Integrating, \begin{equation} t \m = \m - \frac{1}{x} + u \mbox{ } , \label{ODE-1} \end{equation} \begin{equation} t \m = \m - \frac{1}{y} \mbox{ } , \label{ODE-2} \end{equation} \begin{equation} \mbox{\scriptsize \boldmath$Q$} = \mbox{\scriptsize \boldmath$Q$}(u) \mbox{ } . \label{ODE-3} \end{equation} Next, eliminating $t$ between (\ref{ODE-1}-\ref{ODE-2}), we obtain the form of the characteristic coordinate (\ref{Char-N2}). Finally substituting this in (\ref{ODE-3}), we recover the form (\ref{PP-N2}) for the preserved quantities. 
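\mbox{ } \noindent A hedged numerical aside (ours, in Python, with sample points chosen to avoid the poles): the characteristic coordinate $u = 1/x - 1/y$ is indeed constant along the exact flow of $\dot{x} = x^2$, $\dot{y} = y^2$.

```python
# Illustrative check: u = 1/x - 1/y is conserved along the exact
# characteristic flow x' = x^2, y' = y^2, whose flow map is
# x -> x/(1 - t*x) componentwise.

def flow(p, t):
    x, y = p
    return x / (1.0 - t * x), y / (1.0 - t * y)

def u(p):
    x, y = p
    return 1.0 / x - 1.0 / y

p0 = (0.2, 0.5)
for t in (0.1, 0.3, 0.6):
    assert abs(u(flow(p0, t)) - u(p0)) < 1e-12
```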
\mbox{ } \noindent Extending to the arbitrary-$N$ case, the preserved equation PDE (\ref{SP-PE}) is equivalent by the flow method to the ODE system \begin{equation} \dot{q}^I = q^{I \, 2} \mbox{ } , \end{equation} \begin{equation} \dot{\mbox{\scriptsize \boldmath$Q$}} = 0 \mbox{ } , \end{equation} to be treated as a Free Characteristic Problem. Integrating and splitting off $q^N$ for distinct treatment, \begin{equation} t \m = \m - \frac{1}{q^i} + u^i \mbox{ } , \label{ODE-13} \end{equation} \begin{equation} t \m = \m - \frac{1}{q^N} \mbox{ } , \label{ODE-14} \end{equation} \begin{equation} \mbox{\scriptsize \boldmath$Q$} = \mbox{\scriptsize \boldmath$Q$}(u^i) \mbox{ } . \label{ODE-15} \end{equation} Next, eliminating $t$ from (\ref{ODE-14}) in (\ref{ODE-13}), we obtain the form of the characteristic coordinates, \begin{equation} u^i \m = \m \frac{1}{q^i} - \frac{1}{q^N} \mbox{ } . \label{Char-N} \end{equation} Finally substituting these in (\ref{ODE-15}), we obtain the functional form \begin{equation} \mbox{\scriptsize \boldmath$Q$} \m = \m \mbox{\scriptsize \boldmath$Q$}\left( \frac{1}{q^i} - \frac{1}{q^N} \right) \end{equation} for the preserved quantities. We finally summarize 1-$d$ special-projective preserved quantities by \begin{equation} \mbox{\scriptsize \boldmath$Q$} \m = \m \mbox{\scriptsize \boldmath$Q$}(\, {\bm{/-/}} \,) \mbox{ } : \end{equation} suitably-smooth functions of differences of reciprocals, for which we have introduced the shorthand ${\bm{/-/}}$. \mbox{ } \noindent Subsequent restrictions of note render $N = 3$ and 4 as minimal cases of interest as well. For $N = 3$, using $q_3 := z$ as well, \begin{equation} \mbox{\scriptsize \boldmath$Q$} \m = \m \mbox{\scriptsize \boldmath$Q$}(u, \, v) := \mbox{\scriptsize \boldmath$Q$}\left( \frac{1}{x} - \frac{1}{z} \m , \m \m \frac{1}{y} - \frac{1}{z} \right) \mbox{ } . 
\end{equation} \noindent For $N = 4$, using $q_4 =: w$ as well, \begin{equation} \mbox{\scriptsize \boldmath$Q$} \m = \m \mbox{\scriptsize \boldmath$Q$}(u, \, v, \, \omega) := \mbox{\scriptsize \boldmath$Q$}\left( \frac{1}{w} - \frac{1}{z} \m , \m \m \frac{1}{x} - \frac{1}{z} \m , \m \m \frac{1}{y} - \frac{1}{z} \right) \mbox{ } . \end{equation} \section{$P\mbox{-}Para\mbox{-}Dilatat(1)$} The $Q$ and $D$ generators mutually close to form the geometrical automorphism group \begin{equation} P\mbox{-}Para\mbox{-}Dilatat(1) \m = \m \mathbb{R} \rtimes \mathbb{R}_+ \m = \m P\mbox{-}Para\mbox{-}Sim(1) \m = \m P\mbox{-}Para\mbox{-}Aff(1) \mbox{ } . \end{equation} `Para' refers to this case {\sl not} coinciding isomorphically with \begin{equation} Dilatat(1) \m = \m \mathbb{R} \rtimes \mathbb{R}_+ \m = \m Sim(1) \m = \m Aff(1) \mbox{ } . \end{equation} It is possible for two non-isomorphic groups to both be of the form $A \rtimes B$ because the semidirect product operation is {\sl not} precisely defined unless one supplements it by identifying which map is involved \cite{Cohn}. On the present occasion, this non-isomorphism is clear from \begin{equation} \mbox{\bf [} Q \mbox{\bf ,} \, D \mbox{\bf ]} \mbox{ } \sim \mbox{ } Q \mbox{ } \mbox{ and } \mbox{ } \mbox{\bf [} P \mbox{\bf ,} \, D \mbox{\bf ]} \mbox{ } \sim \mbox{ } P \end{equation} differing by a sign in their more detailed right-hand-sides. We shall encounter further $P$- and $C$- $Para$ groups of geometrical automorphisms in Articles IV and V. \mbox{ } \noindent The corresponding preserved equations are a system of 2 equations, \noindent\begin{equation} \mbox{\boldmath$q$} \circ \bnabla \mbox{\scriptsize \boldmath$Q$} \m = \m \sum_{I = 1}^N q^I \pa_I \mbox{\scriptsize \boldmath$Q$} \m = \m 0 \mbox{ } , \label{Sis-1} \end{equation} \noindent\begin{equation} \sum_{I = 1}^N q^{I \, 2} \pa_I \mbox{\scriptsize \boldmath$Q$} \m = \m 0 \mbox{ } . 
\label{Sis-2} \end{equation} Counting out, $N = 3$ is now minimal so as to realize nontrivial preserved quantities. In this case, our system reduces to \begin{equation} ( x \, \pa_x + y \, \pa_y + z \, \pa_z ) \mbox{\scriptsize \boldmath$Q$} = 0 \mbox{ } , \label{Dil1N3} \end{equation} \begin{equation} ( x^2 \pa_x + y^2 \pa_y + z^2 \pa_z ) \mbox{\scriptsize \boldmath$Q$} = 0 \mbox{ } . \label{Proj1N3-2} \end{equation} But we solved these equations piecemeal in Secs I.8 and III.3, so we have the {\it compatibility equation} \begin{equation} \mbox{\scriptsize \boldmath$Q$}\left( \frac{x}{z} \m , \m \m \frac{y}{z} \right) \m = \m \mbox{\scriptsize \boldmath$Q$}\left( \frac{1}{x} - \frac{1}{z} \m , \m \m \frac{1}{y} - \frac{1}{z} \right) \mbox{ } . \end{equation} {\bf Lemma 1} This is solved by \begin{equation} \mbox{\scriptsize \boldmath$Q$}\left( \frac{\frac{1}{\mbox{$x$}} - \frac{1}{\mbox{$z$}}}{\frac{1}{\mbox{$y$}} - \frac{1}{\mbox{$z$}}} \right) \mbox{ } . \end{equation} {\underline{Derivation}}. On the one hand, this is manifestly a function of \begin{equation} \frac{1}{x} - \frac{1}{z} \m , \m \m \frac{1}{y} - \frac{1}{z} \mbox{ } \mbox{ alone} \mbox{ } . \end{equation} On the other hand, \begin{equation} \frac{ \frac{1}{\mbox{$x$}} - \frac{1}{\mbox{$z$}} }{ \frac{1}{\mbox{$y$}} - \frac{1}{\mbox{$z$}} } \m = \m \frac{ \frac{1}{\mbox{$z$}} \left( \frac{\mbox{$z$}}{\mbox{$x$}} - 1 \right) }{ \frac{1}{\mbox{$z$}} \left( \frac{\mbox{$z$}}{\mbox{$y$}} - 1 \right) } \m = \m \frac{ \frac{\mbox{$z$}}{\mbox{$x$}} - 1 }{ \frac{\mbox{$z$}}{\mbox{$y$}} - 1 } \m = \m \frac{ \left( \frac{\mbox{$x$}}{\mbox{$z$}} \right)^{-1} - 1 }{ \left( \frac{\mbox{$y$}}{\mbox{$z$}} \right)^{-1} - 1 } \mbox{ } , \end{equation} which is manifestly a function of \begin{equation} \frac{x}{z} \m , \m \m \frac{y}{z} \mbox{ } \mbox{ alone} \mbox{ } . 
\mbox{ } \mbox{ } \Box \end{equation} Note also the following alternative form for this answer, \begin{equation} \mbox{\scriptsize \boldmath$Q$} \m = \m \mbox{\scriptsize \boldmath$Q$}\left( \frac{y(z - x)}{x(z - y)} \right) \mbox{ } . \end{equation} $N = 4$ is also of interest, as the minimal case in the next section. In this case, our system reduces to \begin{equation} ( w \pa_w + x \pa_x + y \pa_y + z \pa_z ) \mbox{\scriptsize \boldmath$Q$} = 0 \mbox{ } , \label{Dil1N4} \end{equation} \begin{equation} ( w^2 \pa_w + x^2 \pa_x + y^2 \pa_y + z^2 \pa_z ) \mbox{\scriptsize \boldmath$Q$} = 0 \mbox{ } . \label{Proj1N4-2} \end{equation} But we solved these equations piecemeal in Secs I.7 and III.2, so we have the {\it compatibility equation} \begin{equation} \mbox{\scriptsize \boldmath$Q$}\left( \frac{w}{z} \m , \m \m \frac{x}{z} \m , \m \m \frac{y}{z} \right) \m = \m \mbox{\scriptsize \boldmath$Q$}\left( \frac{1}{w} - \frac{1}{z} \m , \m \m \frac{1}{x} - \frac{1}{z} \m , \m \m \frac{1}{y} - \frac{1}{z} \right) \mbox{ } . \end{equation} This is solved similarly by \begin{equation} \mbox{\scriptsize \boldmath$Q$} \m = \m \mbox{\scriptsize \boldmath$Q$}\left( \frac{\frac{1}{\mbox{$w$}} - \frac{1}{\mbox{$z$}}}{\frac{1}{\mbox{$y$}} - \frac{1}{\mbox{$z$}}} \m , \m \m \frac{\frac{1}{\mbox{$x$}} - \frac{1}{\mbox{$z$}}}{\frac{1}{\mbox{$y$}} - \frac{1}{\mbox{$z$}}} \right) \m = \m \mbox{\scriptsize \boldmath$Q$}\left( \frac{y(z - w)}{w(z - y)} \m , \m \m \frac{y(z - x)}{x(z - y)} \right) \mbox{ } . 
\end{equation} Finally, in the arbitrary-$N$ case, (\ref{Sis-1}, \ref{Sis-2}) lead to the compatibility equation \begin{equation} \mbox{\scriptsize \boldmath$Q$}\left( \frac{\mbox{$q$}^{\mbox{\scriptsize$i$}}}{\mbox{$q$}^{\mbox{\scriptsize$N$}}} \right) \m = \m \mbox{\scriptsize \boldmath$Q$}\left( \frac{1}{\mbox{$q$}^{\mbox{\scriptsize$i$}}} - \frac{1}{\mbox{$q$}^{\mbox{\scriptsize$N$}}} \right) \mbox{ } , \end{equation} which is solved by \begin{equation} \mbox{\scriptsize \boldmath$Q$} \m = \m \mbox{\scriptsize \boldmath$Q$}\left( \frac{\frac{1}{\mbox{$q$}^{\bar{r}}} - \frac{1}{\mbox{$q$}^{\mbox{\scriptsize$N$}}}}{\frac{1}{\mbox{$q$}^{\mbox{\scriptsize$n$}}} - \frac{1}{\mbox{$q$}^{\mbox{\scriptsize$N$}}}} \right) \m = \m \mbox{\scriptsize \boldmath$Q$}\left( \frac{q^n(q^N - q^{\bar{r}})}{q^{\bar{r}}(q^N - q^n)} \right) \m = \m \mbox{\scriptsize \boldmath$Q$}\left( \, {\bm{/-/}} \, \bm{\mbox{\Large /}} \, {\bm{/-/}} \, \right) \mbox{ } : \end{equation} suitably-smooth functions of ratios of differences of reciprocals. \mbox{ } \noindent{\bf Remark 1} These are geometrical preserved quantities, and distinctively different from those of any of the hitherto well-studied Geometries. \section{$Proj(1)$ system of preserved equations} \noindent First note that considering translations $P$ alongside special-projective transformations $Q$ and no other generators is inconsistent: the integrability relation \begin{equation} \mbox{\bf [} P \mbox{\bf ,} \, Q \mbox{\bf ]} \mbox{ } \sim \mbox{ } D \end{equation} forces the dilation generator $D$ to be included as well. Taking all three of these generators together, one has the group of 1-$d$ projective transformations, \begin{equation} Proj(1) \m = \m PGL(2, \mathbb{R}) \mbox{ } . 
\end{equation} \noindent The corresponding system of preserved equations is \noindent\begin{equation} \sum_{I = 1}^N \pa_I \mbox{\scriptsize \boldmath$Q$} \m = \m 0 \mbox{ } , \end{equation} \noindent\begin{equation} \mbox{\boldmath$q$} \circ \bnabla \mbox{\scriptsize \boldmath$Q$} \m = \m \sum_{I = 1}^N q^I\pa_I \mbox{\scriptsize \boldmath$Q$} \m = \m 0 \mbox{ } , \end{equation} \noindent\begin{equation} \sum_{I = 1}^N q^{I \, 2}\pa_I \mbox{\scriptsize \boldmath$Q$} \m = \m 0 \mbox{ } . \end{equation} Counting out, $N = 4$ is now minimal to support nontrivial solutions. \mbox{ } \noindent The sequential working based on passing to centre of mass coordinates moreover also fails, as a consequence of this integrability involving $P$ more intimately in $Proj(1)$'s Lie algebra than as a simple semidirect product addendum. \section{$Proj(1)$ preserved quantities for $N = 4$} \noindent For $N = 4$, however, we have already solved the first pair of these equations in Sec I.8 and the last equation in Sec III.3, so we have the compatibility equation \begin{equation} \mbox{\scriptsize \boldmath$Q$}\left( \frac{1}{w} - \frac{1}{z} \m , \m \m \frac{1}{x} - \frac{1}{z} \m , \m \m \frac{1}{y} - \frac{1}{z} \right) \m = \m \mbox{\scriptsize \boldmath$Q$}\left( \frac{w - z}{y - z} \m , \m \m \frac{x - z}{y - z} \right) \mbox{ } . \end{equation} \noindent {\bf Lemma 2} This is solved by \begin{equation} \mbox{\scriptsize \boldmath$Q$} \m = \m \mbox{\scriptsize \boldmath$Q$}\left(\frac{(w - z)(x - y)}{(w - y)(x - z)}\right) := \mbox{\scriptsize \boldmath$Q$}\left( \, [z, \, y; \, w, \, x] \, \right) \mbox{ } , \end{equation} for $[ \mbox{ } , \mbox{ } ; \mbox{ } , \mbox{ } ]$ the quaternary {\it cross-ratio} operation. 
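\mbox{ } \noindent Before deriving this, a hedged numerical aside (a Python spot-check added here; the sample points and coefficients are ours): the cross-ratio of Lemma 2 is invariant under the full M\"{o}bius action $x \mapsto (ax + b)/(cx + d)$, $ad - bc \neq 0$, which is how $Proj(1) = PGL(2, \mathbb{R})$ acts on the line.

```python
# Illustrative spot-check (added, not part of the derivation): the
# cross-ratio is invariant under the Mobius action of PGL(2, R).

def cross_ratio(w, x, y, z):
    return ((w - z) * (x - y)) / ((w - y) * (x - z))

def mobius(x, a, b, c, d):
    return (a * x + b) / (c * x + d)

pts = (0.7, 1.9, -0.4, 3.2)          # sample points, all distinct
a, b, c, d = 2.0, -1.0, 0.5, 3.0     # ad - bc = 6.5 != 0
moved = tuple(mobius(p, a, b, c, d) for p in pts)
assert abs(cross_ratio(*moved) - cross_ratio(*pts)) < 1e-9
```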
\mbox{ } \noindent{\underline{Derivation}} On the one hand, \begin{equation} \frac{(w - z)(x - y)}{(w - y)(x - z)} \m = \m \frac{\frac{(\mbox{$w$} - \mbox{$z$})(\mbox{$x$} - \mbox{$y$})}{\mbox{$w$}\mbox{$x$}\mbox{$y$}\mbox{$z$}}} {\frac{(\mbox{$w$} - \mbox{$y$})(\mbox{$x$} - \mbox{$z$})}{\mbox{$w$}\mbox{$x$}\mbox{$y$}\mbox{$z$}}} \m = \m \frac{ \frac{\mbox{$w$} - \mbox{$z$}}{\mbox{$w$}\mbox{$z$}} \times \frac{\mbox{$x$} - \mbox{$y$}}{\mbox{$x$}\mbox{$y$}} }{ \frac{\mbox{$w$} - \mbox{$y$}}{\mbox{$w$}\mbox{$y$}} \times \frac{\mbox{$x$} - \mbox{$z$}}{\mbox{$x$}\mbox{$z$}} } \m = \m \frac{ \left( \frac{1}{\mbox{$z$}} - \frac{1}{\mbox{$w$}} \right)\left( \frac{1}{\mbox{$y$}} - \frac{1}{\mbox{$x$}} \right) } { \left( \frac{1}{\mbox{$y$}} - \frac{1}{\mbox{$w$}} \right)\left( \frac{1}{\mbox{$z$}} - \frac{1}{\mbox{$x$}} \right) } \m = \m \frac{ \left( \frac{1}{\mbox{$w$}} - \frac{1}{\mbox{$z$}} \right) \left( \left( \frac{1}{\mbox{$y$}} - \frac{1}{\mbox{$z$}} \right) - \left( \frac{1}{\mbox{$x$}} - \frac{1}{\mbox{$z$}} \right) \right) } { \left( \left( \frac{1}{\mbox{$y$}} - \frac{1}{\mbox{$z$}} \right) - \left( \frac{1}{\mbox{$w$}} - \frac{1}{\mbox{$z$}} \right) \right) \left( \frac{1}{\mbox{$x$}} - \frac{1}{\mbox{$z$}} \right) } \mbox{ } , \end{equation} which is manifestly a function of \begin{equation} \frac{\frac{1}{\mbox{$w$}} - \frac{1}{\mbox{$z$}}}{\frac{1}{\mbox{$y$}} - \frac{1}{\mbox{$z$}}} \m , \m \m \frac{\frac{1}{\mbox{$x$}} - \frac{1}{\mbox{$z$}}}{\frac{1}{\mbox{$y$}} - \frac{1}{\mbox{$z$}}} \mbox{ } \mbox{ alone} \mbox{ } . 
\end{equation} On the other hand, \begin{equation} \frac{ (w - z)(x - y)} { (w - y)(x - z) } \m = \m \frac{ (w - z)((x - z) - (y - z)) }{ ((w - z) - (y - z))(x - z) } \m = \m \frac{ \frac{\mbox{$x$} - \mbox{$z$}}{\mbox{$y$} - \mbox{$z$}} - 1 }{ \frac{\mbox{$x$} - \mbox{$z$}}{\mbox{$y$} - \mbox{$z$}} } \times \frac{ \frac{\mbox{$w$} - \mbox{$z$}}{\mbox{$y$} - \mbox{$z$}} }{ \frac{\mbox{$w$} - \mbox{$z$}}{\mbox{$y$} - \mbox{$z$}} - 1 } \mbox{ } , \end{equation} which is manifestly a function of \begin{equation} \frac{w - z}{y - z} \m , \m \m \frac{x - z}{y - z} \mbox{ } \mbox{ alone} \mbox{ } . \mbox{ } \mbox{ } \Box \end{equation} \noindent{\bf Remark 1} We can now {\sl characterize cross-ratios' functional dependence as being concurrently of ratios of differences and of differences of reciprocals}, \begin{equation} {\bm{;}} \m = \m {\bm{ - / - }} \mbox{ } \bigcap \mbox{ } {\bm{ / - / }} \mbox{ } . \end{equation} This characterization is enlightening since it is in terms of simpler, more well-known and more intuitive operations. \section{Uniqueness of cross-ratios in 1-$d$. I. $N = 4$} The above type of working is still open to the possibility that \begin{equation} \frac{(a - b)(c - d)(e - f) ...}{(\sigma(a) - \sigma(b))(\sigma(c) - \sigma(d))(\sigma(e) - \sigma(f))...} \end{equation} for $\sigma$ a permutation, provides further independent functions that are concurrently of differences of reciprocals and of ratios of differences. We now dismiss this possibility by use of the sequential chain rule method. \mbox{ } \noindent We first present this for $N = 4$. We begin by solving the special-projective preserved equation piecemeal, as per Sec 3. 
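\mbox{ } \noindent As a hedged numerical probe of the permutation possibility just raised (a Python aside with sample values of ours, not a proof): permuting the four points in the cross-ratio only produces the six classical anharmonic values $\lambda$, $1/\lambda$, $1 - \lambda$, $1/(1 - \lambda)$, $\lambda/(\lambda - 1)$, $(\lambda - 1)/\lambda$, each itself a function of $\lambda$, so no further independent invariants arise this way.

```python
# Illustrative probe: all 24 orderings of four points give cross-ratio
# values lying in the six-element anharmonic orbit of lam, each member
# being a function of lam alone.
from itertools import permutations

def cross_ratio(w, x, y, z):
    return ((w - z) * (x - y)) / ((w - y) * (x - z))

pts = (0.7, 1.9, -0.4, 3.2)
lam = cross_ratio(*pts)
orbit = (lam, 1/lam, 1 - lam, 1/(1 - lam), lam/(lam - 1), (lam - 1)/lam)

for perm in permutations(pts):
    val = cross_ratio(*perm)
    assert any(abs(val - o) < 1e-9 for o in orbit)
```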
Then by the chain rule, \begin{equation} \pa_w = u_w \pa_u + v_w \pa_v + \omega_{w} \pa_{\omega} \m = \m - \frac{1}{w^2} \pa_u \mbox{ } , \end{equation} \begin{equation} \pa_x = u_x \pa_u + v_x \pa_v + \omega_{x} \pa_{\omega} \m = \m - \frac{1}{x^2} \pa_v \mbox{ } , \end{equation} \begin{equation} \pa_y = u_y \pa_u + v_y \pa_v + \omega_{y} \pa_{\omega} \m = \m - \frac{1}{y^2} \pa_{\omega} \mbox{ } , \end{equation} \begin{equation} \pa_z = u_z \pa_u + v_z \pa_v + \omega_{z} \pa_{\omega} \m = \m \frac{1}{z^2} \, ( \pa_u + \pa_v + \pa_{\omega} ) \mbox{ } . \end{equation} \noindent The dilational preserved equation then becomes \begin{equation} 0 \m = \m \left( \left(\frac{1}{w} - \frac{1}{z} \right) \pa_u + \left( \frac{1}{x} - \frac{1}{z} \right) \pa_v + \left( \frac{1}{y} - \frac{1}{z} \right) \pa_{\omega}\right) \mbox{\scriptsize \boldmath$Q$} \m = \m ( u \, \pa_u + v \, \pa_v + \omega \, \pa_{\omega} ) \mbox{\scriptsize \boldmath$Q$} \mbox{ } . \end{equation} As this has the same form as the original dilational equation but for one object less, this can be envisaged as a parallel of the sequential method underlied by passing to centre of mass frame for translations. This analogy is in turn underlined by $P\mbox{-}Para\mbox{-}Dilatat(d)$ itself having a semidirect product structure like $Dilatat(d)$. \mbox{ } \noindent We then know from Sec I.7 that this is solved by \begin{equation} \mbox{\scriptsize \boldmath$Q$}(U, \, V) := \mbox{\scriptsize \boldmath$Q$} \left( \, \frac{u}{\omega} \m , \m \m \frac{v}{\omega} \, \right) \mbox{ } . \end{equation} So far, this consists of a sequential chain rule rederivation of Sec 4. 
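\mbox{ } \noindent At this halfway stage one can verify numerically (an illustrative Python check of ours) that the chain-rule coordinates $U = u/\omega$, $V = v/\omega$, with $u$, $v$, $\omega$ the differences of reciprocals, are invariant both under finite dilations $x \mapsto s\,x$ and under the finite special-projective flow $x \mapsto x/(1 - t\,x)$, as the $P\mbox{-}Para\mbox{-}Dilatat(1)$ result requires.

```python
# Illustrative check: the ratios of differences of reciprocals are
# invariant under finite dilations and special-projective flows.

def UV(w, x, y, z):
    u, v, om = 1/w - 1/z, 1/x - 1/z, 1/y - 1/z
    return u / om, v / om

pts = (0.7, 1.9, -0.4, 3.2)
base = UV(*pts)

s = 2.5                                   # finite dilation
assert all(abs(a - b) < 1e-9
           for a, b in zip(base, UV(*(s * p for p in pts))))

t = 0.11                                  # finite special-projective flow
assert all(abs(a - b) < 1e-9
           for a, b in zip(base, UV(*(p / (1 - t * p) for p in pts))))
```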
\mbox{ } \noindent If the translational preserved equation is moreover present, this becomes \begin{equation} 0 \m = \m \left( \left( \frac{1}{z^2} - \frac{1}{w^2} \right) \pa_u + \left( \frac{1}{z^2} - \frac{1}{x^2} \right) \pa_v + \left( \frac{1}{z^2} - \frac{1}{y^2} \right) \pa_{\omega} \right) \mbox{\scriptsize \boldmath$Q$} \end{equation} in the special-projective characteristic coordinates. Moreover, by the chain rule, \begin{equation} \pa_u = U_u \pa_U + V_u \pa_V \m = \m \frac{1}{\omega} \pa_U \mbox{ } , \end{equation} \begin{equation} \pa_v = U_v \pa_U + V_v \pa_V \m = \m \frac{1}{\omega} \pa_V \mbox{ } , \end{equation} \begin{equation} \pa_{\omega} = U_{\omega} \pa_U + V_{\omega} \pa_V \m = \m - \frac{ u \, \pa_U + v \, \pa_V }{ \omega^2 } \mbox{ } . \end{equation} This sends our remaining PDE to $$ 0 \m = \m \frac{1}{\omega} \left( \left( \frac{1}{z^2} - \frac{1}{w^2} - \left( \frac{1}{z^2} - \frac{1}{y^2} \right) U \right) \pa_U + \left( \frac{1}{z^2} - \frac{1}{x^2} - \left( \frac{1}{z^2} - \frac{1}{y^2} \right) V \right) \pa_V \right) \mbox{\scriptsize \boldmath$Q$} $$ $$ \m = \m \left( \left( - \frac{1}{z} - \frac{1}{w} + \frac{1}{z} + \frac{1}{y} \right) U \, \pa_U + \left( - \frac{1}{z} - \frac{1}{x} + \frac{1}{z} + \frac{1}{y} \right) V \, \pa_V \right) \mbox{\scriptsize \boldmath$Q$} $$ \begin{equation} \m = \m \left( \, ( \omega - u ) U \, \pa_U + ( \omega - v ) V \, \pa_V \, \right) \mbox{\scriptsize \boldmath$Q$} = \omega \left( \, ( 1 - U ) U \, \pa_U + ( 1 - V ) V \, \pa_V \, \right) \mbox{\scriptsize \boldmath$Q$} \mbox{ } . \end{equation} We have also used here differences of two squares and $x, y, z, w$ to $u, v, \omega$ relations in the second step, and $u, v, \omega$ to $U, V$ relations in the third and fourth steps. \mbox{ } \noindent So (assuming $\omega \neq 0$, if not permute allocation of coordinates), we have the first-order homogeneous linear PDE \begin{equation} \left( \, ( 1 - U ) U \, \pa_U + ( 1 - V ) V \, \pa_V \, \right) \mbox{\scriptsize \boldmath$Q$} = 0 \mbox{ } . 
\end{equation} By the flow method, this is equivalent to the ODE system \begin{equation} \dot{U} = ( 1 - U ) U \mbox{ } , \end{equation} \begin{equation} \dot{V} = ( 1 - V ) V \mbox{ } , \end{equation} \begin{equation} \dot{\mbox{\scriptsize \boldmath$Q$}} = 0 \mbox{ } , \end{equation} to be treated as a Free Characteristic Problem. Integrating by use of partial fractions, \begin{equation} t \m = \m \mbox{ln} \left( \frac{U}{1 - U} \right) + \mbox{ln} \, W \mbox{ } , \end{equation} \begin{equation} t \m = \m \mbox{ln} \left( \frac{V}{1 - V} \right) \mbox{ } , \end{equation} \begin{equation} \mbox{\scriptsize \boldmath$Q$} = \mbox{\scriptsize \boldmath$Q$}(W) \mbox{ } . \end{equation} Eliminating $t$ between the first two of these equations yields the form for the final characteristic variable, \begin{equation} W \m = \m \frac{ V ( 1 - U ) }{ U ( 1 - V ) } \m = \m \frac{ \frac{\mbox{$v$}}{\mbox{$\omega$}} \left( 1 - \frac{\mbox{$u$}}{\mbox{$\omega$}} \right) } { \frac{\mbox{$u$}}{\omega} \left( 1 - \frac{\mbox{$v$}}{\mbox{$\omega$}} \right) } \m = \m \frac{ v ( \omega - u )}{u( \omega - v )} \m = \m \frac{ \left( \frac{1}{\mbox{$x$}} - \frac{1}{\mbox{$z$}} \right)\left( \frac{1}{\mbox{$y$}} - \frac{1}{\mbox{$z$}} - \left( \frac{1}{\mbox{$w$}} - \frac{1}{\mbox{$z$}} \right) \right) } { \left( \frac{1}{\mbox{$w$}} - \frac{1}{\mbox{$z$}} \right)\left( \frac{1}{\mbox{$y$}} - \frac{1}{\mbox{$z$}} - \left( \frac{1}{\mbox{$x$}} - \frac{1}{\mbox{$z$}} \right) \right) } \m = \m \frac{ \frac{\mbox{$z$} - \mbox{$x$}}{\mbox{$z$}\mbox{$x$}} \times \frac{\mbox{$w$} - \mbox{$y$}}{\mbox{$w$}\mbox{$y$}} } { \frac{\mbox{$z$} - \mbox{$w$}}{\mbox{$z$}\mbox{$w$}} \times \frac{\mbox{$x$} - \mbox{$y$}}{\mbox{$x$}\mbox{$y$}} } \m = \m \frac{(z - x)(y - w)}{(z - w)(y - x)} \mbox{ } : \end{equation} the cross-ratio, now arising as the {\sl unique} solution of an explicit PDE. 
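\mbox{ } \noindent The equality of the chain-rule expression for $W$ with the cross-ratio in the original points can also be confirmed numerically (an illustrative Python check added here; the sample points are ours).

```python
# Illustrative confirmation: W = V(1-U)/(U(1-V)), built from the
# differences-of-reciprocals coordinates, agrees with the cross-ratio
# (z - x)(y - w)/((z - w)(y - x)) written directly in the points.

def W_via_chain(w, x, y, z):
    u, v, om = 1/w - 1/z, 1/x - 1/z, 1/y - 1/z
    U, V = u / om, v / om
    return V * (1 - U) / (U * (1 - V))

def W_direct(w, x, y, z):
    return ((z - x) * (y - w)) / ((z - w) * (y - x))

for pts in ((0.7, 1.9, -0.4, 3.2), (1.0, 2.0, 3.0, 4.0)):
    assert abs(W_via_chain(*pts) - W_direct(*pts)) < 1e-9
```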
Thus finally we recover that \begin{equation} \mbox{\scriptsize \boldmath$Q$} \m = \m \mbox{\scriptsize \boldmath$Q$}\left( \frac{(z - x)(y - w)}{(z - w)(y - x)} \right) \mbox{ } . \end{equation} \section{Uniqueness of cross-ratios in 1-$d$. II. Arbitrary $N$} We finally show that the previous section's method extends to arbitrary $N \geq 4$. We begin by solving the special-projective preserved equation piecemeal, as per Sec 3. Next, by the chain rule, \noindent\begin{equation} \pa_{q^i} \m = \m \sum_{j = 1}^n {u^j}_{q^i} \pa_{u^j} \m = \m - \frac{1}{q^{i \, 2}} \pa_{u^i} \mbox{ } , \end{equation} \noindent\begin{equation} \pa_{q^N} \m = \m \sum_{j = 1}^n {u^j}_{q^N} \pa_{u^j} \m = \m \frac{1}{q^{N \, 2}} \sum_{j = 1}^n \pa_{u^j} \mbox{ } . \end{equation} \noindent The dilational preserved equation then becomes \noindent\begin{equation} 0 \m = \m \left( \sum_{i = 1}^n \left( \frac{1}{q^i} - \frac{1}{q^N} \right) \pa_{u^i} \right) \mbox{\scriptsize \boldmath$Q$} \m = \m \sum_{i = 1}^n u^i \pa_{u^i} \mbox{\scriptsize \boldmath$Q$} \mbox{ } . \end{equation} \noindent We then know from Sec I.8 that this is solved by \begin{equation} \mbox{\scriptsize \boldmath$Q$}(U^{\bar{r}}) := \mbox{\scriptsize \boldmath$Q$} \left( \frac{u^{\bar{r}}}{u^n} \right) \mbox{ } , \end{equation} for $\bar{r}$ taking values 1 to $\bar{n} := n - 1$. So far, this consists of a sequential chain rule rederivation of the second half of Sec 4. \mbox{ } \noindent If the translational preserved equation is moreover present, this gets reformulated as \begin{equation} 0 \m = \m \sum_{i = 1}^n \left( \frac{1}{q^{N \, 2}} - \frac{1}{q^{i \, 2} } \right) \pa_{u^i} \mbox{\scriptsize \boldmath$Q$} \end{equation} in the special-projective characteristic coordinates. 
By the chain rule, \noindent\begin{equation} \pa_{u^{\bar{r}}} \m = \m \sum_{\bar{s} = 1}^{\bar{n}} {U^{\bar{s}}}_{u^{\bar{r}}} \pa_{U^{\bar{s}}} \m = \m \frac{1}{u^n} \pa_{U^{\bar{r}}} \mbox{ } , \end{equation} \noindent\begin{equation} \pa_{u^n} \m = \m \sum_{\bar{s} = 1}^{\bar{n}} {U^{\bar{s}}}_{u^n} \pa_{U^{\bar{s}}} \m = \m - \frac{1}{u^n}\sum_{\bar{r} = 1}^{\bar{n}}U^{\bar{r}}\pa_{U^{\bar{r}}} \mbox{ } , \end{equation} our remaining PDE is sent to \noindent$$ 0 \m = \m \frac{1}{u^n} \sum_{\bar{r} = 1}^{\bar{n}} \left( \frac{1}{q^{N \, 2}} - \frac{1}{q^{\bar{r} \, 2}} - \left( \frac{1}{q^{N \, 2}} - \frac{1}{q^{n \, 2}} \right) U^{\bar{r}} \right) \pa_{U^{\bar{r}}} \mbox{\scriptsize \boldmath$Q$} \m = \m \sum_{\bar{r} = 1}^{\bar{n}} \left( - \frac{1}{q^{N}} - \frac{1}{q^{\bar{r}}} + \frac{1}{q^{N}} + \frac{1}{q^{n}} \right) U^{\bar{r}} \pa_{U^{\bar{r}}} \mbox{\scriptsize \boldmath$Q$} $$ \noindent\begin{equation} \m = \m \sum_{\bar{r} = 1}^{\bar{n}} ( u^n - u^{\bar{r}} ) U^{\bar{r}} \pa_{U^{\bar{r}}} \mbox{\scriptsize \boldmath$Q$} = u^n \sum_{\bar{r} = 1}^{\bar{n}} ( 1 - U^{\bar{r}} ) U^{\bar{r}} \pa_{U^{\bar{r}}} \mbox{\scriptsize \boldmath$Q$} \mbox{ } , \end{equation} using analogous moves to those declared in the $N = 4$ version. \mbox{ } \noindent So (assuming $u^n \neq 0$, if not permute allocation of coordinates), we have the first-order homogeneous linear PDE \noindent\begin{equation} \sum_{\bar{r} = 1}^{\bar{n}} ( 1 - U^{\bar{r}} ) U^{\bar{r}} \pa_{U^{\bar{r}}} \mbox{\scriptsize \boldmath$Q$} \m = \m 0 \mbox{ } . \end{equation} By the flow method, this is equivalent to the ODE system \begin{equation} \dot{U}^{\bar{r}} = ( 1 - U^{\bar{r}} )U^{\bar{r}} \mbox{ } , \end{equation} \begin{equation} \dot{\mbox{\scriptsize \boldmath$Q$}} = 0 \mbox{ } , \end{equation} to be treated as a Free Characteristic Problem. 
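\mbox{ } \noindent Before integrating, note as an illustrative numerical aside (ours, in Python) that each characteristic ODE is logistic, with exact solution $U(t) = 1/(1 + (1/U_0 - 1)\,e^{-t})$ for $U_0 \in (0, 1)$; along the flow every factor $U/(1 - U)$ is multiplied by the common factor $e^t$, so ratios of such factors -- the $W$ variables below -- are constants of the motion.

```python
# Illustrative aside: along the logistic flow U' = (1 - U)U, each
# U/(1 - U) picks up a common factor exp(t), so the W combinations
# (ratios of such factors relative to a reference entry) are constant.
import math

def logistic(U0, t):
    """Exact solution of U' = (1 - U)U with U(0) = U0 in (0, 1)."""
    return 1.0 / (1.0 + (1.0 / U0 - 1.0) * math.exp(-t))

def W(Us):
    Ubar = Us[-1]                      # last entry as reference
    return [Ubar * (1 - U) / (U * (1 - Ubar)) for U in Us[:-1]]

U0s = (0.2, 0.5, 0.8)
w0 = W(U0s)
for t in (0.3, 1.0, 2.5):
    wt = W(tuple(logistic(U, t) for U in U0s))
    assert all(abs(a - b) < 1e-9 for a, b in zip(w0, wt))
```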
Integrating by use of partial fractions, \begin{equation} t \m = \m \mbox{ln} \left( \frac{ U^{\widetilde{r}} }{ 1 - U^{\widetilde{r}} } \right) + \mbox{ln} \, W^{\widetilde{r}} \mbox{ } , \label{tilder} \end{equation} \begin{equation} t \m = \m \mbox{ln} \left( \frac{ U^{\bar{n}} }{ 1 - U^{\bar{n}} } \right) \mbox{ } , \label{teq} \end{equation} \begin{equation} \mbox{\scriptsize \boldmath$Q$} = \mbox{\scriptsize \boldmath$Q$}(W^{\widetilde{r}}) \mbox{ } . \end{equation} Eliminating $t$ from (\ref{teq}) in (\ref{tilder}) yields the form for the final characteristic variables, \begin{equation} W^{\widetilde{r}} \m = \m \frac{U^{\bar{n}}(1 - U^{\widetilde{r}})}{U^{\widetilde{r}}(1 - U^{\bar{n}})} \m = \m \frac{u^{\bar{n}}(u^n - u^{\widetilde{r}})}{u^{\widetilde{r}}(u^n - u^{\bar{n}})} \m = \m \frac{(q^N - q^{\bar{n}})(q^n - q^{\widetilde{r}})}{(q^n - q^{\bar{n}})(q^N - q^{\widetilde{r}} )} \mbox{ } : \end{equation} the cross-ratio, now arising as the {\sl unique} solution of an explicit PDE. Thus finally we recover that \begin{equation} \mbox{\scriptsize \boldmath$Q$} \m = \m \mbox{\scriptsize \boldmath$Q$}\left( \frac{(q^N - q^{\bar{n}})(q^n - q^{\widetilde{r}})}{(q^n - q^{\bar{n}})(q^N - q^{\widetilde{r}} )} \right) \m = \m \mbox{\scriptsize \boldmath$Q$}( \, {\bm{;}} \, ) \mbox{ } : \end{equation} suitably-smooth functions of cross-ratios. \mbox{ } \noindent Note that the cross-ratios arise here in the form of a basis made by using 3 points $N$, $n$, $\bar{n}$ as fixed reference points with respect to which to form the cross-ratio with the $\widetilde{n} := N - 3$ remaining points $\widetilde{r}$. Because of this, moreover, it is clear that $N \geq 4$ is required. 
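\mbox{ } \noindent As a final numerical illustration of the arbitrary-$N$ statement (a hedged Python sketch; the sample points and M\"{o}bius coefficients are ours): taking the last three points of a list as the fixed references, each remaining point contributes one basis cross-ratio, and all $N - 3$ of these are M\"{o}bius-invariant.

```python
# Illustrative check of the arbitrary-N basis of cross-ratios built from
# three fixed reference points; all N - 3 of them are Mobius-invariant.

def mobius(x, a=2.0, b=-1.0, c=0.5, d=3.0):   # sample map, ad - bc != 0
    return (a * x + b) / (c * x + d)

def basis(qs):
    *rest, qnbar, qn, qN = qs                 # last three = references
    return [((qN - qnbar) * (qn - r)) / ((qn - qnbar) * (qN - r))
            for r in rest]

qs = [0.7, 1.9, -0.4, 3.2, 1.1, -1.3]         # N = 6 sample points
before = basis(qs)
after = basis([mobius(q) for q in qs])
assert len(before) == len(qs) - 3             # N - 3 basis invariants
assert all(abs(a - b) < 1e-9 for a, b in zip(before, after))
```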
\section{Conclusion} { \begin{figure}[!ht] \centering \includegraphics[width=0.85\textwidth]{Proj-1-Latt.png} \caption[Lattices of 1-$d$ geometries, automorphism groups, and preserved quantities]{ \footnotesize{Lattices of a) 1-$d$ notions of geometry, b) their corresponding automorphism groups, and c) the corresponding dual lattice of preserved quantities. }} \label{Proj-1-Latt} \end{figure} } \noindent We found preserved quantities for the $P$-$Iso$-$Tr(1)$ = $P$-$Iso$-$Eucl(1)$ Geometry whose automorphisms consist solely of special-projective transformations $Q$. These are suitably smooth functions of differences of reciprocals. Thus they do not coincide with the mere differences of $Tr(1) = Eucl(1)$ despite $Eucl(1) \cong P\mbox{-}Iso\mbox{-}Eucl(1)$, the difference in representation between $Q$ and the translational generator $P$ sufficing to have this effect. \mbox{ } \noindent Upon including dilations $D$ as well, we found preserved quantities for the corresponding \noindent $P$-$Para$-$Dilatat(1)$ = $P$-$Para$-$Sim(1)$ = $P$-$Para$-$Aff(1)$ Geometry. These are suitably smooth functions of ratios of differences of reciprocals. Thus they do not coincide with the mere ratios of differences of \noindent $Dilatat(1) = Sim(1) = Aff(1)$, despite the algebra for this differing by a single sign from our case. (This sign difference also accounts for us calling these `Para' rather than `Iso' Geometries). \mbox{ } \noindent We finally derived that 1-$d$ projective preserved quantities are suitably smooth functions of cross-ratios in Secs 5 and 6, establishing these to be moreover the {\sl unique} functional form solving the 1-$d$ projective preserved equations system's Free Characteristic Problem in Secs 7 and 8. The current Article's analysis points moreover to a new interpretation of cross-ratio. Namely, cross-ratio functional dependence is that functional dependence which is concurrently of ratios of differences and of differences of reciprocals, as can be read off Sec 6's compatibility equation. 
This is a significant result firstly due to the importance of cross-ratios in Projective Geometry and secondly because of Application 2 below. \mbox{ } \noindent{\bf Application 1} The above analysis featuring novel {\it partially} projective preserved quantities serves to disqualify the `counterexamples' to lattice duality of \cite{AObs3}, as these `counterexamples' failed to factor in the possibility of partially conformal preserved quantities (themselves covered in Article V). Now that preserved quantities are viewed as intersections of characteristic surfaces, it has become clear that the lattice of preserved quantities is dual to that of sums-over-points of generators, and, by extension via Article I's Bridge Theorem, that the lattice of observables is dual to that of first-class constraints. We can thus re-issue \cite{AObs3} free from this lacuna. \mbox{ } \noindent{\bf Application 2} The current Article is moreover a useful prototype as regards systematically solving PDEs to obtain the more involved higher-$d$ projective preserved quantities in Article IV, alongside yet further partially projective preserved quantities from interplay with rotations and affine transformations. \mbox{ } \noindent{\bf Acknowledgments} I thank Chris Isham and Don Page for previous discussions, and Reza Tavakol, Malcolm MacCallum, Enrique Alvarez and Jeremy Butterfield for support with my career.
\subsubsection{\@startsection{subsubsection}{3}% \normalparindent{.5\linespacing\@plus.7\linespacing}{-.5em}% {\normalfont\bfseries}} \setcounter{tocdepth}{1} \newtheorem{thm}{Theorem}[section] \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{cor}[thm]{Corollary} \newtheorem{claim}[thm]{Claim} \newtheorem{dfprop}[thm]{Definition-Proposition} \newtheorem{conj}[thm]{Conjecture} \newtheorem{ep}[thm]{Expectation} \newtheorem{df}[thm]{Definition} \newtheorem{qu}[thm]{Question} \newtheorem{rmk}[thm]{Remark} \newtheorem{nota}[thm]{Notation} \newtheorem{ex}[thm]{Example} \def\nonumber{\nonumber} \def\delta{\delta} \def\epsilon{\epsilon} \def\Gamma{\Gamma} \def\langle{\langle} \def\rangle{\rangle} \def{\mathbb R}{{\mathbb R}} \def{\mathbb C}{{\mathbb C}} \def{\mathbb Z}{{\mathbb Z}} \def{\mathbb T}{{\mathbb T}} \def{\mathbb F}{{\mathbb F}} \def{\mathscr H}{{\mathscr H}} \def{\mathscr A}{{\mathscr A}} \def\Lambda_{hex}{\Lambda_{hex}} \def\Lambda_{hc}{\Lambda_{hc}} \def{\mathscr B}{{\mathscr B}} \def\Lambda{\Lambda} \def\widehat\Theta{\widehat\Theta} \def\B_{\formTheta}{{\mathscr B}_{\widehat\Theta}} \def\B_{\Theta}{{\mathscr B}_{\Theta}} \def\rho_{\Gamma}{\rho_{\Gamma}} \def\alpha{\alpha} \def\bar\G_{\L}{\bar\Gamma_{\Lambda}} \def\G_{\L}{\Gamma_{\Lambda}} \def\Gamma_+{\Gamma_+} \def\bar\Gamma_+{\bar\Gamma_+} \def\it Cliff{\it Cliff} \def\bar\Gamma_+^{\it crystal}{\bar\Gamma_+^{\it crystal}} \def\L_{\rm hon}{\Lambda_{\rm hon}} \def\T^3_{\Theta}{{\mathbb T}^3_{\Theta}} \setcounter{tocdepth}{3} \begin{document} \title{The noncommutative geometry of wire networks from triply periodic surfaces } \author [Ralph M.\ Kaufmann]{Ralph M.\ Kaufmann} \address{Department of Mathematics, Purdue University, West Lafayette, IN 47907} \ead{rkaufman@math.purdue.edu} \author [Sergei Khlebnikov]{Sergei Khlebnikov} \address{Department of Physics, Purdue University, West Lafayette, IN 47907} \ead{skhleb@physics.purdue.edu} \author [Birgit Kaufmann]{Birgit 
Wehefritz--Kaufmann} \address{Department of Mathematics and Department of Physics, Purdue University, West Lafayette, IN 47907} \ead{ebkaufma@math.purdue.edu} \pacs{61.46 -w, 71.10 -w, 73.22 -f, 02.40 Gh } \submitto{\JPA} \maketitle \begin{abstract} We study wire networks that are the complements of triply periodic minimal surfaces. Here we consider the P, D, G surfaces which are exactly the cases in which the corresponding graphs are symmetric and self-dual. Our approach uses the Harper Hamiltonian in a constant magnetic field, as set forth in \cite{B,BE,KKWK}. We treat this system with the methods of noncommutative geometry and obtain a classification for all the $C^*$ geometries that appear. \end{abstract} \section*{Introduction} It is well known that the only triply periodic minimal surfaces whose complements are given by symmetric and self-dual graphs are the P, D and G surfaces, see e.g. \cite{Anderson}. While the P and D surfaces were already discovered by Schwarz in the 19th century \cite{Schwarz}, it took until 1970 for the G surface to be discovered by Alan Schoen \cite{Schoen}. In real situations these surfaces appear as the boundary between phases. We will concentrate on the complement of these surfaces which consists of two components or channels. For the P, D and G surfaces these two channels have the same underlying skeletal graph onto which they retract. This graph carries all the homotopical information, such as the K-theory. Figures \ref{Pgraph}, \ref{Dgraph} and \ref{Ggraph} show one channel and its skeletal graph for the respective cases. The guiding physical motivation for this study is that, when the boundary has a finite thickness (as it always does in real materials), the complement still forms two channels of a nanoporous structure. These channels can be filled with a (semi)conductor, forming a nanowire network of potential interest in applications. Indeed, for the G surface, or rather the double Gyroid, this has been achieved \cite{Hillhouse}.
Each channel is composed of approximately cylindrical segments joined together at triple junctions. Numerical simulations of a simple wave equation \cite{Khlebnikov&Hillhouse} have shown that the lowest-energy wavefunctions are supported primarily on the junctions. Thus, one may expect to reproduce the low-energy end of the spectrum by using the tight-binding approximation, in which the junctions are replaced by the vertices, and the segments connecting them by the edges, of a graph. Mathematically speaking, this means that each component of the complement of a G surface is indeed contracted onto its graph. Of particular interest is the behavior of periodic nanoporous materials in an external magnetic field, specifically, the questions of existence and number of any additional gaps in the spectrum the field may produce. Such gaps would be a 3-dimensional analog of Hofstadter's butterfly \cite{Hofstadter}. Note that the materials in question are ``supercrystals,'' whose lattice constants far exceed atomic dimensions. (For instance, for the double gyroid of \cite{Hillhouse}, the lattice constant is of order 20 nm.) As a result, the magnetic flux through the unit cell may be a sizable fraction of the flux quantum for realistic magnetic fields, opening the possibility of an experimental study of the additional gap structure. We would like to understand this structure from the point of view of non-commutative geometry, in parallel with the earlier studies of the quantum Hall effect \cite{BE}. In our previous article \cite{KKWK}, we gave a general approach for such wire systems treated as graphs with a given translational symmetry group. The key result of this analysis was that for a constant magnetic field the relevant $C^*$ algebra ${\mathscr B}$ generated by magnetic translation operators and the Harper Hamiltonian has a faithful matrix representation as a subalgebra of a matrix algebra of a noncommutative torus.
More precisely, let $n$ be the dimension of the ambient space, $k$ be the number of sites in a primitive cell and $B=2\pi \Theta$ be the magnetic field expressed as a 2--form. Then ${\mathscr B}$ embeds into $ M_k({\mathbb T}^n_{\Theta})$, where ${\mathbb T}^n_{\Theta}$ is the noncommutative n--torus with parameter $\Theta$ and $M_k({\mathbb T}^n_{\Theta})$ is the $C^*$ algebra of $k \times k$ matrices with entries in ${\mathbb T}^n_{\Theta}$. One expects that generically, that is if all entries of $\Theta$ are irrational, the algebra ${\mathscr B}$ is the full matrix algebra and thus is Morita equivalent to ${\mathbb T}^n_{\Theta}$ itself. At rational points there is no such expectation. An interesting question is to classify the points at which the algebra is a proper subalgebra, as they should have special physical properties. Applying this general theory, we will focus on the case $n=3$ and the graphs arising from the P, D and G surfaces. The classification of special points and their $C^*$ algebras for the $G$ surface was one of the main aims of \cite{KKWK}. We will review those results here, giving a concise statement of the main results. The $P$ surface is much simpler since in this case $k=1$. The D case has not been considered before, and we give the complete, entirely new calculation here. \section{General background} The general setup follows the noncommutative approach we call the Connes--Bellissard--Harper approach \cite{B, MM, Connes, Harper}. We start by considering a $C^*$ algebra ${\mathscr B}$ which is the smallest algebra containing the Hamiltonian and the symmetries. The standard choice of the Hamiltonian is the Harper Hamiltonian \cite{Harper}. This acts on the Hilbert space ${\mathscr H}=\ell^2(\Lambda)$ where $\Lambda$ are the vertices of the graph. Physically, this corresponds to using the tight-binding approximation and Peierls substitution \cite{PST}.
If we turn on a magnetic field this procedure expresses the Hamiltonian in terms of a sum of magnetic translation or Wannier \cite{Harper} operators. In the general setting the magnetic field will be given by a two-form on ${\mathbb R}^n$ which in ${\mathbb R}^3$ restricts to the familiar vector field $B$. We will concentrate on the case of a constant magnetic field $B=2\pi \theta_{ij}dx^idx^j$ where $\Theta=(\theta_{ij})$ is the skew--symmetric matrix of the corresponding skew--symmetric 2-tensor. We will now describe our setup in more detail. Fix $\Gamma\subset {\mathbb R}^n$ to be a connected embedded graph whose edges are line segments. We denote by $L$ a (maximal) translational symmetry group of $\Gamma$, s.t. $\bar \Gamma=\Gamma/L$ is finite. Here a translational symmetry group is a group isomorphic to a free Abelian group of rank $n$ which acts by translations on ${\mathbb R}^n$ leaving $\Gamma$ invariant. Let $\pi:\Gamma\to \bar \Gamma$ be the projection. The vertices of $\bar\Gamma$ are the vertices in a primitive cell, but the graph $\bar\Gamma$ is just an abstract graph\footnote{The graph $\bar\Gamma$ is naturally embedded in the torus ${\mathbb R}^n/L$, but not in ${\mathbb R}^n$ itself.}. Let $\Lambda$ be the set of vertices of $\Gamma$, $\bar\Lambda$ the set of vertices of $\bar\Gamma$, and denote by $T$ the (free Abelian) subgroup of ${\mathbb R}^n$ generated by the {\em edge vectors}. Notice that $L\subset T$, but in general this inclusion is strict. On ${\mathscr H}$ a magnetic translation by a vector ${\bf e}$ of $L$ is represented by a unitary operator $U_{\bf e}$, while a translation by a vector in $T$ only gives rise to a partial isometry. To see this, we decompose the Hilbert space ${\mathscr H}=\bigoplus_{v\in \bar\Lambda} {\mathscr H}_{v}$ where ${\mathscr H}_v=\ell^2(\pi^{-1}(v))$. Then a translation by $e\in T$ which goes from $w$ to $v$ will act as $U_{e}:{\mathscr H}_v\to {\mathscr H}_w$.
\footnote{This is assuming the standard action for magnetic translation operators.} In this formalism, the Hamiltonian is represented by a sum of partial isometries. As it is defined, ${\mathscr B}$ is a $C^*$ sub-algebra of the operators on ${\mathscr H}$. In order to calculate the algebra ${\mathscr B}$ more explicitly, we wish to define a matrix representation of it. For this one fixes a rooted spanning tree. A spanning tree is a subtree of the graph, which contains all vertices. Being rooted means that one vertex is distinguished. Our main theorem which allows us to do explicit computations is then: {\bf Theorem.} \cite{KKWK} For $\Gamma$, $L$ as above and a fixed $B$ given by $2\pi \Theta$, fixing a choice of rooted spanning tree for $\bar \Gamma$, an order of the vertices of $\bar \Gamma$ and a basis for $L$ defines a faithful matrix representation of ${\mathscr B}$ which is a sub--$C^*$--algebra $\B_{\Theta}$ of the $C^*$ algebra $M_{|V(\bar\Gamma)|}({\mathbb T}^{n}_{\Theta})$, where ${\mathbb T}^{n}_{\Theta}$ is the noncommutative torus. {\bf Consequence.} From this it follows that if $\Theta$ is rational then the spectrum of $H$ has finitely many gaps. Moreover the maximal number is determined by the entries of $\Theta$. In the case of the square lattice, this gives rise to the Hofstadter butterfly \cite{Hofstadter}. Hence our result can be viewed as a generalization to the lattices of our setup. In the above theorem, the translations of $L$ are what give rise to the noncommutative torus. In particular each fixed basis element $e_i$ of $L$ gives rise to a unitary diagonal operator valued matrix $\rho(U_i)$. These matrices satisfy the commutation relations $\rho(U_i)\rho(U_j)=e^{2\pi i\theta_{ij}} \rho(U_j)\rho(U_i)$ and hence give a representation of ${\mathbb T}^{n}_{\Theta}$, which is the $C^*$ algebra spanned by $n$ independent unitary operators $U_i$ satisfying the commutation relations $U_iU_j=e^{2\pi i\theta_{ij}} U_jU_i$.
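At a rational parameter $\theta_{ij}=p/q$ the defining relation of the noncommutative torus has a well-known finite-dimensional realization by clock and shift matrices. The following sketch (ours, purely illustrative and not taken from \cite{KKWK}) verifies the relation $UV=e^{2\pi i p/q}\,VU$ numerically for $p/q=1/5$:

```python
import numpy as np

p, q = 1, 5
omega = np.exp(2j * np.pi * p / q)

# clock matrix U = diag(1, omega, ..., omega^(q-1))
U = np.diag(omega ** np.arange(q))
# shift matrix V: cyclic permutation of the basis vectors
V = np.roll(np.eye(q), 1, axis=0)

# U V = omega V U, the defining relation of the rational noncommutative torus
assert np.allclose(U @ V, omega * (V @ U))
```

Such finite representations underlie the finitely-many-gaps statement at rational $\Theta$: the Harper Hamiltonian then acts through finite matrices.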
In the matrix representation of the Hamiltonian, each partial isometry which corresponds to the summand of the Hamiltonian describing the translation along the edge joining the vertex $k$ to the vertex $l$ gives rise to a ${\mathbb T}^{n}_{\Theta}$ valued matrix entry in the $(l,k)$-th position. Notice that there are two incarnations of the Harper Hamiltonian: the first, which we will simply call the Harper Hamiltonian, is the operator acting on $\ell^2(\Lambda)$. The second one is its representation in the matrix ring $M_k({\mathbb T}^n_{\Theta})$ which we call the matrix Harper Hamiltonian. {\bf Associated (non)-commutative geometries. } On general grounds we expect three types of different possible phenomenologies according to whether (a) $\Theta=0$ and there is no magnetic field, (b) $\Theta$ is generic (i.e.\ all entries are irrational), (c) $\Theta$ contains rational entries. If $\Theta=0$ then the $C^*$ algebra is unital and commutative, and by the Gelfand-Naimark Theorem it is isomorphic to the $C^*$ algebra $C(X)$ of continuous $\mathbb C$ valued functions on a compact Hausdorff space $X$. Thus starting with $\Gamma$ and $T$ in ${\mathbb R}^3$ we get a new geometry $X$. Here the base $T^n$ is given by the possible exponential values of momenta in the basis directions of $L$. \footnote{Notice these are {\em not} the momenta along the x,y,z axis.} The cover can then be interpreted as the different Eigenvalues of $H$. Let us call a point non--degenerate if $H$ at these fixed momenta has $k$ distinct Eigenvalues. Since this is an open condition, we get that if there is one point which is non--degenerate then this is generically the case. In general, we showed \cite{KKWK} {\bf Theorem.} The space $X$ is a branched cover of the torus $T^n=S^1\times \dots \times S^1$ ramified over the locus where $H$ has degenerate Eigenvalues. When $\Theta$ is {\em generic}, it is known that ${\mathbb T}_{\Theta}^n$ is simple, which means that it has no proper two-sided ideals.
So, we {\bf expect} $\B_{\Theta}=M_{|V(\bar\Gamma)|}({\mathbb T}^{n}_{\Theta})$ which is Morita equivalent to ${\mathbb T}^{n}_{\Theta}$. That is, the noncommutative geometry of $\Gamma$ in the magnetic field $B$ is given by the noncommutative torus. This is not a proof, however, and it has to be checked in each case. When $\Theta$ {\em contains rational entries}, there is {\bf no expectation} and in a sense this is the most interesting case. It can happen that the resulting algebra $\B_{\Theta}$ is (i) commutative, which corresponds to special commensurabilities, (ii) again the full matrix algebra or (iii) a proper subalgebra of the matrix algebra. In the next section, we will analyze the three cases of the P, D and G wire networks explicitly. In the case of ${\mathbb R}^3$ the skew--symmetric bilinear form $\Theta$ given by $B$ takes on the familiar form $$\Theta(v,w)= \frac{1}{2\pi}B \cdot (v\times w) \mbox{ } . $$ \section{Specific results for the cubic (P) case} \begin{figure} \begin{center} \includegraphics[height=5cm]{Pgraph.jpg} \caption{One channel of the P surface and its skeletal graph. This and Figures 2 and 5 were obtained using the level surface approximation for the corresponding minimal surfaces \cite{levelsurface}.} \label{Pgraph} \end{center} \end{figure} The P surface has a complement which has two connected components each of which can be retracted to the simple cubical graph whose vertices are the integer lattice ${\mathbb Z}^3\subset {\mathbb R}^3$. The translational group is again ${\mathbb Z}^3$ in this embedding as shown in Figure \ref{Pgraph}, so it reduces to the case of a Bravais lattice which we treated already in \cite{KKWK}. Let us review some of the details. The graph $\bar \Gamma$ is the graph with one point and three loops, so $k=1$. Fixing the standard basis $e_1,e_2,e_3$ of ${\mathbb Z}^3$, we get the operators $U_1,U_2,U_3$, which generate ${\mathbb T}^3_{\Theta}$ and the Hamiltonian is simply $H=\sum_i (U_i+U_i^*)$.
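In the commutative case each $U_i$ acts on the character labelled by momenta $(k_1,k_2,k_3)$ as multiplication by $e^{ik_i}$, so the Hamiltonian above has the single band $E(k)=2(\cos k_1+\cos k_2+\cos k_3)$ filling the interval $[-6,6]$. A quick numerical sketch of ours:

```python
import numpy as np

# Each U_i acts on a character of T^3 as multiplication by exp(i k_i),
# so H = sum_i (U_i + U_i^*) evaluates to 2(cos k1 + cos k2 + cos k3).
k = np.linspace(-np.pi, np.pi, 41)
k1, k2, k3 = np.meshgrid(k, k, k, indexing="ij")
E = 2 * (np.cos(k1) + np.cos(k2) + np.cos(k3))

# the band fills the interval [-6, 6]
assert np.isclose(E.max(), 6.0) and np.isclose(E.min(), -6.0)
```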
If $\Theta\neq 0$ then $\B_{\Theta}$ is simply the noncommutative torus and if $\Theta=0$ then this is the $C^*$ algebra of $T^3$. \section{The diamond lattice (D) case} \begin{figure} \begin{center} \includegraphics[height=5cm]{Dgraph.jpg} \caption{One channel of the diamond surface and its skeletal graph. The red and green dots refer to the vertices of the two interlaced fcc lattices.} \label{Dgraph} \end{center} \end{figure} The D surface has a complement consisting of two channels each of which can be retracted to the diamond lattice $\Gamma_{\diamond}$. The diamond lattice is given by two copies of the fcc lattice, where the second fcc is the shift by $\frac{1}{4}(1,1,1)$ of the standard fcc lattice, see Figure \ref{Dgraph}. The edges are nearest neighbor edges. The symmetry group is $Fd\bar3m$. In the diamond lattice case, we have 2 vertices in the primitive cell. The quotient graph $\Gamma_{\diamond}/fcc$ is the graph with 2 vertices and 4 edges connecting them, see Figure \ref{graphfig}. The edges correspond to the 4 vectors from $(0,0,0)$ to the vertices of a tetrahedron centered at $(0,0,0)$: $$e_1=\frac{1}{4}(1,1,1), e_2=\frac{1}{4}(-1,-1,1), e_3=\frac{1}{4}(-1,1,-1), e_4=\frac{1}{4}(1,-1,-1)$$ These vectors satisfy $\sum_i e_i=0$.
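The tetrahedral geometry of these edge vectors is easy to verify numerically. The sketch below (ours) checks that they sum to zero, have equal lengths and equal pairwise angles, and that the vectors $f_2,f_3,f_4$ appearing in the next paragraph coincide with the differences $e_j-e_1$:

```python
import numpy as np

e = np.array([[1, 1, 1], [-1, -1, 1], [-1, 1, -1], [1, -1, -1]]) / 4.0

# the four edge vectors sum to zero ...
assert np.allclose(e.sum(axis=0), 0)

# ... and form a regular tetrahedron: equal lengths, equal pairwise angles
G = e @ e.T
assert np.allclose(np.diag(G), 3 / 16)
assert np.allclose(G[~np.eye(4, dtype=bool)], -1 / 16)

# the edge vectors f_2, f_3, f_4 used below are the differences e_j - e_1
f = np.array([[-1, -1, 0], [-1, 0, -1], [0, -1, -1]]) / 2.0
assert np.allclose(f, e[1:] - e[0])
```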
We parameterize the $B$ field by fixing the values of the skew--symmetric bilinear form $\Theta$ on the basis elements $(-e_1,e_2,e_3)$ as follows: \begin{equation*} {\Theta} (-e_1,e_2)=\varphi_1\quad {\Theta} (-e_1,e_3)=\varphi_2\quad {\Theta} (e_2,e_3)=\varphi_3 \end{equation*} Our results will depend on the phases: \begin{equation} \chi_i=e^{i \varphi_i}\;\mbox{for}\; i=1,2,3 \end{equation} \begin{figure} \begin{center} \includegraphics[height=3cm]{graphs.pdf} \caption{The quotient graphs for the cubic, diamond and gyroid lattices} \label{graphfig} \end{center} \end{figure} The Harper Hamiltonian according to the construction of \cite{KKWK} in terms of the partial isometries reads $$ \left( \begin{array}{cc} 0 & U^*_{e_1}+U^*_{e_2}+U^*_{e_3}+U^*_{e_4}\\ U_{e_1}+U_{e_2}+U_{e_3}+U_{e_4} & 0 \end{array} \right) $$ Before we can write down the matrix Harper Hamiltonian, we have to fix some data and notations. The three edges of the tetrahedron incident to one point are $f_2=\frac{1}{2}(-1,-1,0),f_3=\frac{1}{2}(-1,0,-1),f_4=\frac{1}{2}(0,-1,-1)$. The translation operators along those edges fulfill the following commutation relations: \begin{equation} U_{f_i} U_{f_j} = e^{ 2 \pi i {\Theta}(f_i,f_j)}U_{f_j} U_{f_i} \end{equation} We set $U=\chi_1U_{f_2}$, $V=\chi_2U_{f_3}$ and $W=\bar \chi_1\bar \chi_2U_{f_4}$. 
These operators span a ${\mathbb T}^3_{\Theta}$: \begin{equation} U V = q_1 V U \quad UW = q_2 WU\quad VW = q_3 WV \end{equation} where the $q_i$ expressed in terms of the $\chi_i$ are: \begin{equation} q_1=\bar{\chi_1}^2 \chi_2^2 \chi_3^2 \quad q_2=\bar{\chi_1}^6 \bar{\chi_2}^2 \bar{\chi_3}^2 \quad q_3=\bar{\chi_1}^2\bar{ \chi_2}^6 \chi_3^2 \end{equation} Vice versa, fixing the values of the $q_i$ fixes the $\chi_i$ up to eighth roots of unity: \begin{equation} \chi_1^8=\bar{q}_1 \bar{q}_2 \quad \chi_2^8=q_1 \bar{q}_3 \quad \chi_3^8=q_1^2\bar{q}_2 {q}_3 \end{equation} Other useful relations are $ q_2 \bar{q}_3= \bar{\chi}_1^4 \chi_2^4 \bar{\chi}_3^4$ and $q_2 q_3 =\bar{\chi}_1^8 \bar{\chi}_2^8$. Using the $e_1$ edge as the spanning tree with the root being the vertex that corresponds to $\pi(0,0,0)$, we get that the embedding representation $\rho$ of ${\mathbb T}^3_{\Theta}$ into $M_2({\mathbb T}^3_{\Theta})$ defined by the action of $L$ is given by \begin{eqnarray} \rho(U)&=&diag(U, \chi^2_1U),\;\; \rho(V)=diag(V,\chi^2_2 V)\nonumber\\ \rho(W)&=&diag(W,\bar\chi^2_1\bar\chi^2_2 W). \end{eqnarray} And the matrix Harper Hamiltonian is $$ H= \left( \begin{array}{cc} 0&1+U^*+V^*+W^*\\ 1+U+V+W&0 \end{array} \right) $$ \subsection{The commutative case} In this case, we see that the algebra $\B_{\Theta}$ is a subalgebra of $M_2(C(T^3))$, where $C(T^3)$ is the $C^*$ algebra of complex functions on the torus $T^3$. The space $X$ corresponding to the commutative $C^*$ algebra is a ramified cover of $T^3$ which is generically $2:1$. The branching locus is given by the degenerate points. 
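At a character $(z_1,z_2,z_3)$ the matrix Harper Hamiltonian above becomes a $2\times 2$ Hermitian matrix with Eigenvalues $\pm|1+z_1+z_2+z_3|$, so the degenerate points are exactly the zeros of $1+z_1+z_2+z_3$. A small numerical sketch of ours, anticipating the computation that follows:

```python
import numpy as np

def H_D(z1, z2, z3):
    # matrix Harper Hamiltonian of the D network evaluated at a character of T^3
    s = 1 + z1 + z2 + z3
    return np.array([[0, np.conj(s)], [s, 0]])

# generic point: two distinct Eigenvalues +-|1 + z1 + z2 + z3|
z = [np.exp(1j * t) for t in (0.3, 1.1, 2.0)]
s = 1 + sum(z)
ev = np.linalg.eigvalsh(H_D(*z))
assert np.allclose(ev, [-abs(s), abs(s)])

# a degenerate point: z1 = -1 and z2 = -z3 gives a double Eigenvalue 0
ev_deg = np.linalg.eigvalsh(H_D(-1, -np.exp(0.7j), np.exp(0.7j)))
assert np.allclose(ev_deg, [0.0, 0.0])
```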
These are computed by: $$ \det(H-\lambda \, id)= \lambda^2-(1+U+V+W)(1+U+V+W)^* $$ The Eigenvalues of $H$ are degenerate at a point of $T^3$, corresponding to the character $\chi: {\mathbb T}^3\to {\mathbb C}$ given by evaluating at that point, precisely in the following situation: setting $e^{i\phi_1}=z_1=\chi(U)$, $e^{i\phi_2}=z_2=\chi(V)$, $e^{i\phi_3}=z_3=\chi(W) \in S^1\subset {\mathbb C}$, the two Eigenvalues $\pm|1+z_1+z_2+z_3|$ coincide (and both vanish) if and only if $$ 1+z_1+z_2+z_3=0 $$ We calculate $$ -z_1=1+z_2+z_3 $$ $$ 1=z_1\bar z_1=1+z_2\bar z_2+z_3\bar z_3+z_2+\bar z_2 +z_3+\bar z_3+z_2\bar z_3+\bar z_2z_3 $$ multiplying by $z_2z_3$ $$ 0=2z_2z_3+z_2^2z_3+z_3+z_2z_3^2+z_2+z_2^2+z_3^2=(z_2+z_3)(z_2+z_3+1+z_2z_3) $$ This gives the solution $z_2=-z_3, z_1=-1$ or $z_2(z_3+1)=-(z_3+1)$. The latter equation has the solutions $z_3=-1,z_1=-z_2$ and $z_2=-1, z_3=-z_1$. {\bf Cover of $T^3$ defined by the $D$ wire network.} We see that the space $X$ defined by ${\mathscr B}$ in the commutative case is a generically 2--fold cover of the 3--torus $T^3$ where the ramification is along three circles on $T^3$ given by the equations $\phi_i=\pi, \phi_j\equiv \phi_k+\pi\; \mbox{mod}\; 2\pi$ with $\{i,j,k\}=\{1,2,3\}$. They are shown in Figure \ref{solcomm}, where the cube has to be taken with periodic boundaries. Therefore the intersection points on opposite faces of the cube are identified and the six lines form three circles which pairwise touch at a point. \begin{figure} \begin{center} \includegraphics[height=5cm]{sol_comm_rot.pdf} \caption{Commutative case for the D surface: ramification locus on $T^3$ depicted as the cube with periodic boundaries} \label{solcomm} \end{center} \end{figure} \subsection{The non--commutative case} In the following we would like to characterize the algebra $\B_{\Theta}$ for general values of the magnetic field. The results will split into cases according to the values of the parameters $q_i$ and $\chi_i$. In a first step, set $ X_1= H -\rho(\bar\chi^2_1 U) H \rho(U^*)$.
$$ X_1= \left( \begin{array}{cc} 0& (1-\bar{\chi_1}^4)(1+U^*)+(1-\bar{\chi_1}^4\bar{q_1})V^*+(1-\bar{\chi_1}^4\bar{q_2})W^*\\ (1-q_1) V +(1-q_2) W&0 \end{array} \right) $$ Now, set $X_2 = X_1 -\rho(\bar\chi_2^2V) X_1 \rho(V^*)$ and $X_3= X_2 -\rho(\chi_1^2\chi_2^2 U_{f_4}) X_2 \rho(U_{f_4}^*)$. We obtain \begin{equation} \label{xmatrix} X_3= \left( \begin{array}{cc} 0& a 1+ b U^*+c V^*+d W^*\\ 0&0 \end{array} \right) \end{equation} with $$a=(1-\chi_1^4 \chi_2^4)(1-\bar{\chi_2}^4)(1-\bar{\chi_1}^4), \quad b=(1-\chi_1^4 \chi_2^4 \,q_2)(1-\bar{\chi_2}^4q_1)(1-\bar{\chi_1}^4) $$ $$ c= (1-\chi_1^4 \chi_2^4\, q_3 )(1-\bar{\chi_2}^4)(1-\bar{\chi_1}^4\,\bar{q_1}),\quad d=(1-\chi_1^4 \chi_2^4)(1-\bar{\chi_2}^4\bar{q_3})(1-\bar{\chi_1}^4\,\bar{q_2}) $$ Now the procedure is as follows. One treats the following two cases: either $a=b=c=d=0$, or not all of these coefficients vanish. We will summarize our results here and give the details of the calculation in the appendix. {\bf Classification Theorem.} The algebra $\B_{\Theta}$ is the {\em full} matrix algebra {\em except} in the following cases in which it is a proper subalgebra. \begin{enumerate} \item $q_1=q_2=q_3=1$ (the special bosonic cases) and one of the following is true: \begin{enumerate} \item All $\chi_i^2=1$; then $\B_{\Theta}$ is isomorphic to the commutative algebra in the case of no magnetic field above. \item Two of the $\chi_i^4=-1$, the third one necessarily being equal to $1$.\end{enumerate} \item All $q_i=-1$ (the special fermionic cases) and all $\chi_i^4=1$. This means that either \begin{enumerate} \item all $\chi_i^2=-1$ or \item only one of the $\chi_i^2=-1$, the other two being $1$. \end{enumerate} \item $\bar q_1=q_2=q_3=\bar \chi^4_2$ and $\chi^2_1=1$; it follows that $\chi_2^4=\chi_3^4$. This is a one parameter family. \item $q_1=q_2=q_3=\bar\chi_1^4$ and $\chi_2^2=1$; it follows that $ \chi_1^4=\bar \chi_3^4$. This is a one parameter family. \item $q_1=q_2=\bar q_3=\bar \chi_1^4$ and $\chi_1^2=\bar\chi_2^2$.
It follows that $\chi_3^4=1$. This is a one parameter family. \end{enumerate} The subalgebra in the case (i)(b) is the most complicated. Notice that in this case $\Theta$ has integer entries and so $\T^3_{\Theta}\simeq {\mathbb T}^3={\mathbb T}^3_{\Theta=0}$ is actually commutative, but $\B_{\Theta}$ is not. This can happen because we are looking at a sub-algebra of the non--commutative matrix algebra. It is explicitly given as follows. Consider $G_1=(1+U+V+W)(1+U^*+V^*+W^*)$; then the (2,2) entry of $\rho(G_1)$ will be of the form $G_2=A-B+iC-iD$ where $A,B,C,D$ are polynomials in $U,V,W,U^*,V^*,W^*$ of degree $0,1,2$ with positive integer coefficients. This is because the $\chi_i^2$ are $\pm i$ or $\pm 1$. Let $J$ be the ideal of ${\mathbb T}^3$ spanned by $G_1$ and $G_2$, let $J_{12}$ be the ideal spanned by $1+U^*+V^*+W^*$ and $J_{21}$ the ideal spanned by $1+U+V+W$. Then \begin{equation} \label{special} \B_{\Theta}=\rho(\T^3_{\Theta})+\left(\begin{array}{cc}J&J_{12}\\ J_{21}&J\end{array} \right) \end{equation} The special fermionic case (ii) is related to Clifford algebras. Consider the quadratic form $Q$ on ${\mathbb R}^3$ with basis vectors $b_1,b_2,b_3$ given by $diag(\chi_1^2,\chi_2^2,\bar \chi_1^2\bar\chi_2^2)$. The condition $\chi_i^4=1$ translates to the fact that the entries are $\pm 1$. Let $Cl={\it Cliff}(Q)\otimes {\mathbb C}$ be the complexified Clifford algebra of $Q$. In the fermionic case all the $q_i=-1$ so the generators of $\T^3_{\Theta}$ anti--commute and there is a $C^*$ algebra map $\phi:\T^3_{\Theta}\to Cl$ given by $\phi(U)=b_1,\phi(V)=b_2, \phi(W)=b_3$. Let ${\mathscr J}:=ker(\phi)$ be the ideal defined by the kernel of $\phi$. Since the $\chi_i^2=\pm 1$ there is an involution $\hat{}:\T^3_{\Theta}\to \T^3_{\Theta}$ given by $\hat U=\chi_1^2U, \hat V=\chi_2^2 V$ and $\hat W=\bar \chi_1^2\bar \chi_2^2 W$.
With these notations: \begin{equation} \label{cliffalg} \B_{\Theta}=\{\left(\begin{array}{cc}a & b \\\hat b & \hat a\end{array}\right)+ J, \text{ with } a,b\in \T^3_{\Theta} \text { and } J\in M_2({\mathscr J})\}. \end{equation} In the three families the algebra $\B_{\Theta}$ is the $C^*$ algebra generated by $\T^3_{\Theta}$ and two elements $A$ and $B$, which commute with each other and $\T^3_{\Theta}$, and satisfy equations $A^2=p$ and $B^2=q$ for fixed $p$ and $q$ in $\T^3_{\Theta}$, i.e.\ there are adjoined square roots. For details on $p$ and $q$, see the Appendix. \section{The Gyroid (G) case} \begin{figure} \begin{center} \includegraphics[height=5cm]{gyrlabels.jpg} \caption{One of the channels of the gyroid surface and its skeletal graph} \label{Ggraph} \end{center} \end{figure} We recall some of the setup from \cite{KKWK}. The Gyroid and its graph are very complex and we will not give all the details here. One channel and the Gyroid graph $\Gamma^+$ are shown in Figure \ref{Ggraph}. The symmetry group is $Ia\bar3d$. This means that the translation group is the bcc lattice. The graph $\bar \Gamma^+$ is the full square, see Figure \ref{graphfig}. We choose the generators of bcc to be the vectors $g_1=\frac{1}{2}(1,-1,1),\; g_2=\frac{1}{2}(-1,1,1), \; g_3=\frac{1}{2}(1,1,-1)$. These can be used to fix the cocycle defining the interaction with the magnetic field: $$ \theta_{12}=\frac{1}{2\pi}B\cdot (g_1\times g_2), \quad \theta_{13}=\frac{1}{2\pi} B\cdot (g_1\times g_3), \quad \theta_{23}=\frac{1}{2\pi}B\cdot (g_2\times g_3) $$ The edge vectors of $\Gamma^+$ span the fcc group. Explicitly the edge vectors are $e_1=\frac{1}{4} (-1,1,0),$ $e_2=\frac{1}{4} (0,-1,1),$ $e_3=\frac{1}{4} (1,0,-1),$ $e_4=\frac{1}{4} (1,1,0),$ $e_5=\frac{1}{4} (0,-1,-1),$ $e_6=\frac{1}{4} (-1,0,-1)$.
In the direct sum decomposition of ${\mathscr H}$ the Harper Hamiltonian reads \begin{equation} H_{\bar\Gamma_+}=\left( \begin{array}{cccc} 0&U_1^*&U_2^*&U_3^*\\ U_1&0&U_6^*&U_5\\ U_2&U_6&0&U_4\\ U_3&U_5^*&U_4^*&0\\ \end{array} \right) \end{equation} We choose the rooted spanning tree $\tau$ (root $A$, edges $e_1,e_2,e_3$). Using this we obtain the following matrix Harper operator according to \cite{KKWK} \begin{equation} H=\left( \begin{array}{cccc} 0&1&1&1\\ 1&0&U_1^*U_6^*U_2&U_1^*U_5U_3\\ 1&U_2^*U_6U_1&0&U_2^*U_4U_3\\ 1&U_3^*U_5^*U_1&U_3^*U_4^*U_2&0 \end{array} \right) =:\left( \begin{array}{cccc} 0&1&1&1\\ 1&0&A&B^*\\ 1&A^*&0&C\\ 1&B&C^*&0 \end{array} \right) \end{equation} The operators $A,B,C$ again span a non--commutative three torus: \begin{equation} AB=\alpha_1 BA, \quad AC=\bar\alpha_2CA, \quad BC=\alpha_3CB \end{equation} where now in terms of the $B$ field $\alpha_1:=e^{2\pi i \theta_{12}}, \bar \alpha_2:= e^{2\pi i \theta_{13}}$, $\alpha_3:=e^{2\pi i \theta_{23}} $. \subsection{The commutative case.} It is easy to check that generically the Hamiltonian has 4 distinct Eigenvalues. We can use the character $\chi(A)=-1,\chi(B)=1,\chi(C)=-1$ for this. The corresponding Eigenvalues are $\pm \sqrt 5,\pm1$. By the general theory we then know that the commutative geometry is given by a generically unramified 4-fold cover of the three torus, see \cite{KKWK}. The actual calculation of the branching behavior is more difficult. For this we have to analyze the characteristic polynomial of $H$ and thus we have to deal with a fourth order equation. Although it is in principle possible to solve the equation, this is rather difficult and lengthy. We will treat this case in a subsequent paper \cite{KKWK3}. There we show that there are only 4 ramification points. This means that the locus is of real codimension 3 contrary to the D case where it was of codimension 2.
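As a quick numerical sanity check (ours), the four distinct Eigenvalues $\pm\sqrt 5,\pm 1$ quoted above for the character $\chi(A)=-1$, $\chi(B)=1$, $\chi(C)=-1$ can be confirmed directly on the $4\times 4$ matrix:

```python
import numpy as np

# matrix Harper Hamiltonian of the gyroid evaluated at the character
# chi(A) = -1, chi(B) = 1, chi(C) = -1
A, B, C = -1.0, 1.0, -1.0
H = np.array([
    [0, 1, 1, 1],
    [1, 0, A, np.conj(B)],
    [1, np.conj(A), 0, C],
    [1, B, np.conj(C), 0],
])

ev = np.linalg.eigvalsh(H)
expected = [-np.sqrt(5), -1.0, 1.0, np.sqrt(5)]
assert np.allclose(ev, expected)
```

The characteristic polynomial here is $\lambda^4-6\lambda^2+5=(\lambda^2-1)(\lambda^2-5)$, in agreement with the quoted Eigenvalues.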
Furthermore the degenerations are 3 branches coming together at 2 points and 2 pairs of branches coming together at the other two points. \subsection{Non-commutative case} To state the results of \cite{KKWK} we use $$\phi_1=e^{\frac{\pi}{2} i \theta_{12}}, \quad \phi_2= e^{\frac{\pi}{2} i \theta_{31}},\quad \phi_3= e^{\frac{\pi}{2} i \theta_{23}}, \quad \Phi=\phi_1\phi_2\phi_3 $$ {\bf Classification Theorem.} \begin{enumerate} \item If $\Phi\neq1$, or if $\Phi=1$, at least one $\alpha_i\neq 1$ and all the $\phi_i$ are different, then $\B_{\Theta}=M_4({\mathbb T}^3_{\Theta})$. \item If $\phi_i=1$ for all $i$ then the algebra is the same as in the commutative case. \item In all other cases ${\mathscr B}$ is non--commutative and $\B_{\Theta}\subsetneq M_4({\mathbb T}^3_{\Theta})$. \end{enumerate} Further information about case (iii), which is too lengthy to reproduce here, can be found in \cite{KKWK}. We only wish to point out that the fermionic case, all $\alpha_i=-1$, is not a special case here. Rather, it is the mixed case, in which two of the $\alpha_i=-1$ and one $\alpha_i=1$, which yields a proper subalgebra involving a Clifford algebra. \section{Conclusion} We have treated all the triply periodic self-dual symmetric surface wire arrays ---given by the P, D, G geometries--- using the methods developed in \cite{KKWK} to study their commutative and noncommutative geometry. We gave the commutative geometry as an explicit branched cover of the three torus and classified all the noncommutative $C^*$ algebras that arise from turning on a constant magnetic field. The G case was considered before in \cite{KKWK}. As we discussed, the P case can be reduced to information contained in that paper as well. Here we completely treated the D case which has a much richer structure. A new feature of the commutative case is that the branching locus is not of dimension zero, but rather of dimension one.
A novel trait of the non--commutative case for the D surface is the appearance of whole one--dimensional families along which the algebra drops to a proper subalgebra of the matrix algebra. An intriguing question is whether these two features are related. Although the base space is $T^3$ in both cases, it parameterizes completely different moduli. In the commutative case the parameters are the momenta, while in the non--commutative case they are the parameters of the noncommutative torus, which are set by the magnetic field, a quantity that is completely absent in the commutative case. Thus there does not seem to be a direct relation, but one could expect such a relation on the grounds of a, yet to be determined, duality. We leave this for further investigation. \section*{Acknowledgments} RK thankfully acknowledges support from NSF DMS-0805881. BWK thankfully acknowledges support from the NSF under the grant PHY-0969689. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. \section*{Appendix} In this appendix we give the details of the calculations for the D surface wire network. As mentioned above, the proof boils down to two major cases depending on whether the matrix $X_3$ of equation (\ref{xmatrix}) is zero or not. \subsection{The matrix $X_3 \neq 0$} We also assume that not all $q_i=1$. The case of all $q_i=1$ will be treated separately below. The strategy is to reduce the matrix by conjugation so that only one term is non--zero. After multiplication with the appropriate matrix one can then obtain the matrix $E_{12}$ and hence the whole matrix algebra. The subcases one treats are (I) $a\neq 0$ and (II) $a=0$. In case (I), one can successively kill all the entries except for the one proportional to $1$.
Explicitly, after performing the three operations $X_4= \bar{q_1}X_3 -\chi_1^4 \rho(U_{f_2}) X_3 \rho(U_{f_2}^*)$, then $X_5= q_1X_4 -\chi_2^4 \rho(U_{f_3}) X_4\rho(U_{f_3}^*)$, and finally $X_6= \bar{q_2}X_5 -\bar{\chi_1}^4 \bar{\chi_2}^4 \rho(U_{f_4}) X_5 \rho(U_{f_4}^*)$, we obtain $X_6=a''E_{12}$, which has only one possibly non--zero entry, $$ a^{\prime\prime}=(\bar{q_2}-1)(q_1-1)(\bar{q_1}-1)(1-\chi_1^4 \chi_2^4)(1-\bar{\chi_2}^4)(1-\bar{\chi_1}^4) $$ Hence $X_6$ can be brought to $E_{12}$ by dividing by $a^{\prime\prime}$, provided it is non--zero. Since we assume not all $q_i=1$ and $a\neq0$, the remaining cases are when one or both of $q_1=1$, $q_2=1$ hold, but not all three $q_i=1$. These can be handled similarly and all lead to the full matrix algebra. The case (II) splits into several subcases corresponding to the factors of $a$: (A) $\chi_1^4=1$, (B) $\chi_2^4=1$ and (C) $\chi_1^4\chi_2^4=1$. All these cases are similar; we show how to treat (A). In this case, we already know that $b=0$, and if we further assume that $d=0$ it follows that $c=0$ and we are in the case $X_3=0$. So, we assume $d\neq0$. If $c=0$ there is only one term and we are done. If $c\neq0$ then we can conjugate with $\rho(V)$ and kill the $V^*$ term, leaving only the $W^*$ term, and we are done. \subsection{The matrix $X_3=0$.} This is more tedious. The cases we get from assuming that all the coefficients are zero are: (A) $\chi_1^4=\chi_2^4=1$, which implies $q_1=\bar q_2=q_3$. (B) $\chi_1^4=1$ and (1) $q_3=\bar\chi_2^4$, which implies $\bar q_1= q_2=q_3$, $\chi_2^4=\chi_3^4$, or (2) $q_2=1$, which implies $q_1=q_2=1$, $\bar \chi_2^4=\chi_3^4$. (C) $\chi_2^4=1$ and (1) $q_2=\bar \chi_1^4$, which implies $q_1=q_2=q_3$, $\bar\chi_1^4=\chi_3^4$, or (2) $q_3=1$, which implies $q_1=q_3=1$, $\chi_1^4=\chi_3^4$. And finally (D) $\chi_1^4=\bar \chi_2^4=1$ and (1) $q_1=\chi_2^4$, which implies $q_1=q_2=\bar q_3$, $\chi_3^4=1$, or (2) $q_2=1$, which implies $q_2=q_3=1$, $q_1=\chi_3^4$.
Again, all $q_i=1$ will be treated separately. In case (A), either $q_3\neq \bar q_3$, in which case we can proceed as usual and obtain the full matrix algebra, or $q_3=\bar q_3$, and then either all $q_i=1$ or all $q_i=-1$. In the latter case we will show that $\B_{\Theta}$ is indeed the algebra given by (\ref{cliffalg}). For the time being denote that algebra by ${\mathscr B}'$. It is easy to check that ${\mathscr B}'$ is a subalgebra. It is also proper, since it does not surject onto the image of $\phi$; for instance $E_{12}$ is not in the image. Since $$H=\left(\begin{array}{cc}0 & 1+\hat U +\hat V+\hat W \\1+U+V+W & 0\end{array}\right)+\left(\begin{array}{cc}0 & U^*-\hat U +V^*-\hat V+W^*-\hat W \\0 & 0\end{array}\right)$$ we see that $H\in {\mathscr B}'$; likewise one checks that $\rho(\T^3_{\Theta})\subset {\mathscr B}'$ and hence $\B_{\Theta}\subset {\mathscr B}'$. To get the other inclusion, one proceeds in the usual fashion to obtain the matrices $$ I=\left(\begin{array}{cc}0 & 1\\1 & 0\end{array}\right), \left(\begin{array}{cc}0 & U^*\\U&0\end{array}\right), \left(\begin{array}{cc}0 & V^* \\V & 0\end{array}\right), \left(\begin{array}{cc}0 & W^* \\W & 0\end{array}\right) $$ By multiplying $I$ with elements of $\rho(\T^3_{\Theta})$ and subtracting, we get the matrices $(U^*-\hat U) E_{12}$, $(V^*-\hat V) E_{12}$ and $(W^*-\hat W) E_{12}$, which generate $\mathscr J$. Thus ${\mathscr B}'\subset \B_{\Theta}$. In case (B)(1) with the assumption $q_i\neq 1$, we can either have $\chi_1^2\neq1$, in which case the usual procedure produces the full matrix algebra, or $\chi^2_1=1$, in which case we obtain the matrices $ A=\left(\begin{array}{cc}0 & U^*\\1&0\end{array}\right), C=\left(\begin{array}{cc}0 & W^* \\V & 0\end{array}\right) $ and their adjoints. Set $B=C\rho(V^*)=\left(\begin{array}{cc}0 & \bar\chi_2^2W^*V^* \\1 & 0\end{array}\right)$. Then both $A$ and $B$ commute with $\rho(\T^3_{\Theta})$ and with each other.
Now $A^2=\rho(U)$ and $B^2=\chi_2^2\rho(W^*V^*)$. Since $H=A+A^*+C+C^*$, we see that $H$ is in the $C^*$ sub--algebra spanned by $\rho(\T^3_{\Theta})$, $A$ and $B$ with the given relations. To show that this is not the full matrix algebra, we can use the map $\phi:\T^3_{\Theta}\to {\mathbb T}^2_{\frac{1}{2}}$ given by $\phi(U)=S, \phi(V)=T, \phi(W)=S^*T$, where $S,T$ are the generators of ${\mathbb T}_{\frac{1}{2}}^2$, which satisfy $ST=-TS$. We see that $\ker(\phi)$ is the two-sided $C^*$ ideal generated by $V^*W-U$. The map $\phi$ induces a map $\hat\phi:M_2(\T^3_{\Theta})\to M_2({\mathbb T}^2_{\frac{1}{2}})$. Since the image of $A$ is the image of $\bar\chi_2^2 B$, we see that the image of ${\mathscr B}$ is generated by $\hat\phi\rho(\T^3_{\Theta})$ and $\hat\phi(A)$, which does not contain $E_{12}$. Hence $\hat\phi|_{\B_{\Theta}}$ is not surjective and $\B_{\Theta}$ is not the full algebra. From this it is also easy to see that in $M_2(\T^3_{\Theta})$, $A$ and $B$ satisfy no other relations modulo $\rho(\T^3_{\Theta})$. This is the family (iii). The case (B)(2) yields the full algebra unless $q_3=1$, and hence all $q_i=1$. The case (C) is completely analogous upon switching $U$ and $V$. (C)(1) yields the family (iv). In case (D), $W$ plays the special role that $U$ played in (B)(1), and hence the condition is that $\chi_1^2=\chi_2^2=1$. This yields the case of the family (v).
\section{Introduction} \label{sec:intro} Inspired by the observed double-tail structure of comets, which indicates the presence of gas outflow \citep{Biremann_1951ZA}, \citet{Parker_1958ApJ} predicted the outward expansion of the hot coronal plasma, which results in the formation of a transonic outflow. Later on, the in-situ measurement by the Mariner 2 Venus probe confirmed the existence of supersonic plasma streams from the Sun, which are now known as the solar wind \citep{Neugebauer_1966JGR}. Hot coronae and stellar winds are also ubiquitously observed in low-mass main-sequence stars that possess a surface convection zone \citep{Wood_2005ApJ,Wood_2021ApJ,Gudel_2014_proceedings,Vidotto2021LRSP}. In the framework of the thermally-driven wind model, the energy source of the solar wind is the thermal energy of the solar corona. The thermally-driven wind model therefore predicts that faster solar wind emanates from hotter regions of the corona, and vice versa. In reality, however, several observations indicate that high-speed solar wind emanates from relatively cool parts of the corona. Fast solar wind is known to originate from coronal holes \citep{Krieger_1973SoPh,Kohl_2006A&ARv}, which exhibit cooler temperatures than the other regions of the corona \citep[e.g.,][]{Withbroe_1977ARA&A,Narukage_2011SoPh}. The observed anti-correlation between the freezing-in temperature and the velocity of the solar wind \citep{Geiss_1995SSRev,von_Steiger_2000JGR} also supports the picture that fast solar wind originates in cool portions of the corona. These observations indicate that the magnetic field plays a substantial role in the solar wind acceleration. It is believed that the convection beneath the photosphere is the source of the energy for the hot corona and the solar wind \citep{Klimchuk_2006_SolPhys, McIntosh_2011}. Convective fluctuations excite various modes of waves that propagate upward \citep{Lighthill_1952RSPSA,Stein_1967SoPh,Stepien1988ApJ,Bogdan_1991ApJ}.
Magnetic reconnection between open and closed field lines is another possible source of transverse waves \citep{Nishuzuka_2008ApJ}, in addition to the direct ejection of heated plasma \citep{Fisk_2003JGRA}. Among the various types of waves, Alfv\'{e}n(ic) waves have been highlighted as a reliable agent to effectively transfer the kinetic energy of the convection to the corona and the solar wind via the Poynting flux \citep[e.g.,][]{Belcher_1971ApJ,Shoda_2019ApJ,Sakaue_2020ApJ,Matsumoto_2021_MNRAS}. This is first because, owing to their incompressible nature, they are only weakly affected by shock dissipation, unlike compressible waves, which easily steepen into shocks as their velocity amplitude is amplified in the stratified atmosphere, and second because, unlike fast-mode magnetohydrodynamic (MHD hereafter) waves \citep[e.g.,][]{Matsumoto_2014MNRAS}, they do not refract but propagate along magnetic field lines \citep{Alazraki_1971A&A,Bogdan_2003ApJ}. In recent years transverse waves have been detected in the chromosphere \citep{Okamoto_2007Sci,De_Pontieu_2007_Science,McIntosh_2011,Okamoto_2011ApJ,Srivastava_2017NatSR}, whereas it is still under debate whether sufficient energy for the formation of the corona and the solar wind propagates into the corona \citep{Thurgood_2014ApJ}. Once Alfv\'enic waves enter the corona, the key is how the Poynting flux is transferred to the thermal and kinetic energies of the coronal plasma via the dissipation of the waves.
Various damping processes of Alfv\'enic waves have been explored, including turbulent cascade \citep{Velli_1989_PhysRevLett,Matthaeus_1999_ApJ,Cranmer_2007ApJS,Verdini_2010_ApJ,Howes_2013PhPl,Perez_2013ApJ,Shiota_2017ApJ,Adhikari_2020ApJS,Zank_2021PhPl}, nonlinear mode conversion to compressible waves \citep{Kudoh_1999ApJ,Suzuki_2005_ApJ,Suzuki_2006_JGRA,Vasheghani_Farahani_2021ApJ}, resonant absorption \citep{Ionson_1978_ApJ,VanDoorsselaere2004,Antolin2015_ApJ} and phase mixing \citep{Heyvaerts_1983_AA,DeMoortel2000A&A,Magyar_2017NatSR}. In contrast to Alfv\'enic waves, acoustic waves have not been considered to be a major player in coronal heating because the acoustic waves that are excited by $p$-mode oscillations at the photosphere \citep[e.g.,][]{Lighthill_1952RSPSA,Felipe_2010} rapidly steepen to form shocks before reaching the corona \citep{Stein_1972ApJ,Priest_2014masu.book,Cranmer_2007ApJS}. However, \cite{Morton_2019_Nature_Astronomy} pointed out the contribution of $p$-mode oscillations to the generation of Alfv\'enic waves via the mode conversion from longitudinal waves to transverse waves \citep{Cally_2011ApJ}. The aim of the present paper is to investigate the roles of the acoustic waves that are excited by vertical oscillations at the photosphere in the Alfv\'en-wave-driven wind. For this purpose, we perform MHD simulations that handle the propagation, dissipation, and mode conversion of both transverse and longitudinal waves from the photosphere to \shimizu{several tens of solar radii} with self-consistent heating and cooling. In Section \ref{sec:method} we describe the setup of our simulations. We present the main results in Section \ref{sec:Results}. We discuss related topics in Section \ref{sec:discussion} and summarize the paper in Section \ref{sec:summary}. \section{Methods} \label{sec:method} \begin{figure*}[t] \label{model} \begin{center} \includegraphics[width=18cm]{./figure/model.pdf} \caption{ Schematic pictures of the model.
The left picture is an overview of the model. The black solid lines represent the shape of the open flux tube, which expands super-radially. The right picture, in the blue frame, is a schematic of the wave dissipation in the model. The black dashed curve represents the solar surface. Red characters refer to the physical processes considered in this model. \label{fig:model} } \end{center} \end{figure*} We consider the magnetohydrodynamics of the solar wind in one-dimensional (1D hereafter) open magnetic flux tubes from the photosphere at $r=R_{\odot}$ (solar surface) to \shimizu{several tens of solar radii}. Figure \ref{fig:model} shows an overview of our model. \subsection{Basic Equations} We consider a one-dimensional (spherically symmetric, $\partial / \partial \theta=\partial / \partial \phi=0$), super-radially expanding flux tube. The cross section of the flux tube is defined by the filling factor of the open flux tube $f^{\rm op} (r)$, which is smaller than unity at the photosphere and asymptotically approaches unity as $r$ increases. The conservation of the open magnetic flux $\Phi^{\rm op}$ yields the following relation: \begin{align} \left|B_{r}(r)\right| r^{2} f^{\rm op}(r)=\left|B_{r,\odot}\right|R_{\odot}^2 f^{\rm op}_{\odot}=\Phi^{\rm op}, \label{eq:divB} \end{align} where $X_\odot$ represents the value of $X$ on the photosphere. We note that $\Phi^{\rm op}$ is constant in each simulation. We solve the one-dimensional MHD equations along the flux tube characterized by $f^{\rm op} (r)$. For simplicity, we consider the polar wind, which is not affected by the solar rotation. In deriving the MHD equations in a super-radially expanding flux tube, the scale factors of the coordinate system are required. Here, we assume that the magnetic flux tube expands isotropically in the $\theta$ and $\phi$ directions. In terms of scale factors, this assumption yields \begin{align} \label{expf} h_{r}=1, \quad h_{\theta}=h_{\phi}=r \sqrt{f^{\rm op}}.
\end{align} Using these scale factors, the MHD equations in an expanding flux tube are derived (see \cite{Shoda_Takasao_2021arXiv} for the derivation). The basic equations are given as follows. \begin{align} \frac{\partial}{\partial t} \rho + \frac{1}{r^{2} f^{\rm op}} \frac{\partial}{\partial r} \left(\rho v_{r} r^{2} f^{\rm op} \right)=0, \end{align} \begin{align} &\frac{\partial}{\partial t} \left(\rho v_{r}\right)+\frac{1}{r^2 f^{\rm op}} \frac{\partial}{\partial r}\left[\left(\rho v_r^2 + p_T \right) r^2f^{\rm op}\right] \nonumber \\ &=-\rho \frac{G M_\odot}{r^2} + \left( \rho \boldsymbol{v}_\perp^2+ 2p \right) \frac{d}{d r} \ln \left( r \sqrt{f^{\rm op}} \right),\label{rovr} \end{align} \begin{align} &\frac{\partial}{\partial t}\left(\rho \boldsymbol{v}_{\perp}\right) +\frac{1}{r^{2} f^{\rm op}} \frac{\partial}{\partial r}\left[\left(\rho v_{r} \boldsymbol{v}_{\perp}-\frac{1}{4 \pi} B_{r} \boldsymbol{B}_{\perp}\right) r^{2} f^{\rm op}\right] \nonumber \\ &=\left(\frac{B_{r} \boldsymbol{B}_{\perp}}{4 \pi}-\rho v_{r} \boldsymbol{v}_{\perp}\right) \frac{d}{d r} \ln \left(r \sqrt{f^{\rm op}}\right)+\rho \boldsymbol{D}_{v_\perp}^{\text {turb}}, \label{rovx} \end{align} \begin{align} \frac{1}{r^{2} f^{\rm op}} \frac{\partial}{\partial r}\left(B_{r} r^{2} f^{\rm op} \right)=0, \end{align} \begin{align} &\frac{\partial}{\partial t} \bm{B}_{\perp}+\frac{1}{r^{2} f^{\rm op}} \frac{\partial}{\partial r}\left[\left(v_{r} \bm{B}_{\perp}-\bm{v}_{\perp} B_{r}\right) r^{2} f^{\rm op} \right] \nonumber \\ &=\left(v_{r} \bm{B}_{\perp}-\bm{v}_{\perp} B_{r}\right) \frac{d}{d r} \ln \left(r \sqrt{f^{\rm op}}\right)+\sqrt{4 \pi \rho} \bm{D}_{b_\perp}^{\mathrm{turb}}, \label{induction_x} \end{align} \begin{align}\label{eq:energy} &\frac{\partial}{\partial t} e+\frac{1}{r^{2} f^{\rm op}} \frac{\partial}{\partial r} \left[\left(\left(e+p_{T}\right) v_{r}-\frac{B_{r}}{4 \pi}\left(\boldsymbol{v}_{\perp} \cdot \boldsymbol{B}_{\perp}\right) \right) \right.
\nonumber r^{2} f^{\rm op}\biggr] \nonumber \\ &=-\rho v_{r} \frac{G M_{\odot}}{r^{2}} + Q_{\rm C} - Q_{\rm R}, \end{align} where $\bm{v}$, $\bm{B}$, $\rho$ and $p$ are the velocity, magnetic field, density and gas pressure, respectively. $\bm{v}_\perp$ and $\bm{B}_\perp$ are the perpendicular ($\theta$ and $\phi$) components of $\bm{v}$ and $\bm{B}$, respectively, that is, \begin{align} \bm{v}_\perp =v_\theta\bm{e}_\theta+v_\phi\bm{e}_{\phi}, \quad \bm{B}_\perp =B_\theta\bm{e}_\theta+B_\phi\bm{e}_{\phi}, \end{align} where $\bm{e}_\theta$ and $\bm{e}_\phi$ are unit vectors in the $\theta$ and $\phi$ directions, respectively. $M_\odot$ is the solar mass. $e$ denotes the total energy density per unit volume given by \begin{align} e = e_{\rm int} +\frac{1}{2} \rho \boldsymbol{v}^{2}+\frac{\boldsymbol{B}_{\perp}^{2}}{8 \pi}, \label{eq:etotal} \end{align} where $e_{\rm int}$ is the internal energy density per unit volume. $p_T$ denotes the total pressure: \begin{align} p_{T}&=p+\frac{\boldsymbol{B}_\perp^2}{8 \pi}. \label{eq:ptotal} \end{align} $\bm{D}_{v_\perp}^{\text {turb }}$ and $\bm{D}_{b_\perp}^{\text {turb }}$ represent the phenomenological turbulent dissipation of Alfv\'en waves (see Section \ref{sec:Alfventurbulence} for details). $Q_{\rm C}$ and $Q_{\rm R}$ represent the conductive heating and radiative cooling rates per unit volume, respectively. In terms of the conductive flux $q_{\rm cnd}$, $Q_{\rm C}$ is given by \begin{align} Q_{\rm C} = - \frac{1}{r^2 f^{\rm op}} \frac{\partial}{\partial r} \left( q_{\rm cnd} r^2 f^{\rm op} \right). \end{align} For $q_{\rm cnd}$, we employ the Spitzer-Härm type conductive flux \citep{Spitzer_1953_PhRv}, which strongly depends on temperature and transports energy preferentially along the magnetic field line. In addition, to speed up the simulation without losing realism, we quench the conductivity in the low-density region. $q_{\rm cnd}$ is then given as follows.
\begin{equation} \label{F_C} q_{\rm cnd}= - \min \left(1, \frac{\rho}{\rho_{\rm cnd}}\right) \frac{B_{r}}{|\boldsymbol{B}|} \kappa_{0} T^{5 / 2} \frac{d T}{d r}, \end{equation} where $\kappa_{0}=10^{-6} \operatorname{erg} \mathrm{cm}^{-1} \mathrm{~s}^{-1} \mathrm{~K}^{-7 / 2}$. We set $\rho_{\rm cnd} = 10^{-20} \mathrm{~g} \mathrm{~cm}^{-3}$, following \citet{Shoda_2020_ApJ}. The radiative cooling rate is given by a linear combination of optically thick and thin components as follows. \begin{equation} Q_{\rm R} = Q_{\rm R}^{\rm thck} \xi_{\rm rad} + Q_{\rm R}^{\rm thin} \left( 1-\xi_{\rm rad} \right), \end{equation} where $Q_{\rm R}^{\rm thck}$ and $Q_{\rm R}^{\rm thin}$ correspond to the optically thick and thin radiative cooling rates, respectively. The control parameter $\xi_{\rm rad}$ should satisfy $\xi_{\rm rad} \approx 1$ in the photosphere and $\xi_{\rm rad} \approx 0$ above the transition region. Although the profile of $\xi_{\rm rad}$ should in principle be obtained as a solution of radiative transfer, here we simply model it as follows. \begin{equation} \label{eq:xi_rad} \xi_{\rm rad}=\max\left(0,1-\frac{p_{\rm chr}}{p}\right), \end{equation} where we set $p_{\rm {chr}}=0.1 p_\odot$. Thus the radiation is assumed to be optically thick in $p \gtrsim p_{\rm chr}$ and optically thin in $p \lesssim p_{\rm chr}$. Following \citet{Gudiksen_Nordlund_2005_Apj}, we approximate the optically thick cooling by an exponential cooling that forces the local internal energy to approach the reference value $e_{\rm int}^{\rm ref}$: \begin{align} Q_{\rm R}^{\rm thck} = \frac{1}{\tau_{\rm thck}} \left(e_{\rm int} - e_{\rm int}^{\rm ref} \right) , \label{eq:Newtoncl} \end{align} where we set the time scale as follows. \begin{align} \tau_{\rm thck} = 0.1 \left(\frac{\rho}{\bar{\rho}_\odot}\right)^{-1/2} \mathrm{s}, \end{align} where $\bar{\rho}_\odot=1.87\times10^{-7}\;\mathrm{g}\;\mathrm{cm}^{-3}$ is the mean (time-averaged) mass density in the photosphere.
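The conduction quench and the radiative switches above are simple piecewise functions of density and pressure; as a sanity check, a minimal sketch in Python (the function and variable names are ours, and only the constants quoted in the text are used):

```python
import numpy as np

# Values quoted in the text; everything else here is illustrative.
RHO_PHOT = 1.87e-7   # g cm^-3, mean photospheric mass density
RHO_CND = 1e-20      # g cm^-3, density below which conduction is quenched

def conduction_quench(rho):
    """Factor min(1, rho/rho_cnd) multiplying the Spitzer-Harm flux."""
    return np.minimum(1.0, rho / RHO_CND)

def xi_rad(p, p_phot):
    """Thick/thin switch: close to 1 at the photosphere, 0 above the
    transition region, with p_chr = 0.1 p_phot."""
    return np.maximum(0.0, 1.0 - 0.1 * p_phot / p)

def tau_thck(rho):
    """Timescale (s) of the exponential cooling toward e_int^ref."""
    return 0.1 * (rho / RHO_PHOT) ** (-0.5)

# Radiation is mostly thick at photospheric pressure ...
assert xi_rad(p=1.0, p_phot=1.0) == 0.9
# ... fully thin once p < p_chr, and the Newton-cooling time is 0.1 s
# at the mean photospheric density.
assert xi_rad(p=0.05, p_phot=1.0) == 0.0
assert np.isclose(tau_thck(RHO_PHOT), 0.1)
```

The quench factor leaves the flux untouched at chromospheric densities and only throttles conduction in the tenuous outer wind, where $\rho \ll \rho_{\rm cnd}$ would otherwise force very small time steps.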
The reference internal energy is calculated once the corresponding reference temperature $T^{\rm ref} (r)$ is given. Here, we set $T^{\rm ref} (r) = T_\odot$. The optically thin cooling function is composed of two different contributions. In the chromospheric temperature range, we employ the radiative cooling function given by \cite{Googman_2012ApJ} ($Q_{\rm GJ}$), while in the coronal temperature range, the loss function $\Lambda (T)$ is given by the CHIANTI atomic database. \begin{align} Q_{\rm R}^{\rm thin} &=Q_{\rm GJ}(\rho, T) \xi_{2} + n_{\rm H} n_{e} \Lambda(T) \left(1-\xi_{2}\right) , \end{align} where we set \begin{align} \xi_{2} &=\max \left(0, \min \left(1, \frac{T_{\mathrm{TR}}-T}{\Delta T}\right)\right), \nonumber \end{align} where $T_{\rm TR}=15000 \ {\rm K}$ and $\Delta T=5000 \ {\rm K}$. \subsection{Equation of state} The hydrogen in the lower atmosphere (photosphere and chromosphere) of the Sun is partially ionized because the temperature there is not sufficiently high. In this work, the effect of the partial ionization is considered in the equation of state. The internal energy is composed of the random thermal motion of the particles and the latent heat of the hydrogen atoms, which is given by \begin{align} e_{\rm int} = \frac{p}{\gamma-1} + n_{\rm H} \chi I_{\rm H}, \ \ \ \ n_{\rm H} = \rho/m_{\rm H}, \end{align} where $n_{\rm H}$ is the number density of hydrogen atoms, $\chi$ is the ionization degree and $I_{\rm H} = 13.6 {\rm \ eV}$ is the ionization potential of hydrogen. For simplicity, the formation of ${\rm H}_2$ molecules is not considered. Thermal equilibrium is assumed with respect to ionization, in which case the ionization degree is given by the Saha-Boltzmann equation: \begin{align} \frac{\chi^2}{1-\chi} = \frac{2}{n_{\rm H} \lambda_e^3} \exp \left( - \frac{I_{\rm H}}{k_B T} \right), \end{align} where $\lambda_e$ is the thermal de Broglie wavelength of an electron: \begin{align} \lambda_e = \sqrt{\frac{h^2}{2 \pi m_e k_B T}}.
\end{align} Note that pressure and ionization degree are connected by \begin{align} p = \left( 1 + \chi \right) n_{\rm H} k_B T. \end{align} \begin{center} \begin{table*} \hspace{-15mm} \scalebox{1.0}{ \begin{tabular}{ccccccccc} \hline \hline Model & $\langle \delta v_{\perp,\odot}^+\rangle$ & $\langle \delta v_{\parallel, \odot}\rangle$ & $B_{r,\odot}$ & $f_\odot^{\rm op}$ & $f_{\rm chr}^{\rm op}/f_{\rm \odot}^{\rm op}$ & $r_\text{out}$ & $\dot{M}$ & $v_{r,{\rm out}}$ \rule[0mm]{0mm}{4mm} \\ & $[{\rm km \ s^{-1}}]$ & $[{\rm km \ s^{-1}}]$ & $[{\rm G}]$ & & & & $[M_\odot \ {\rm yr}^{-1}]$ & $[{\rm km \ s^{-1}}]$ \rule[-2mm]{0mm}{4mm} \\ \hline \hline B0V06 & 0 & 0.6 & $1.3\times10^{-4}$ & $1.00\times 10^{-3}$ & 100 & $95.6R_\odot$ & accretion & \rule[-2mm]{0mm}{6mm} \\ \hline BsV00 & 0.6 & 0 & 1300 & $1.00\times 10^{-3}$ & 100 & $99.5R_\odot$ & $1.32\times 10^{-14}$ & 688.05 \rule[-2mm]{0mm}{6mm} \\ BsV04 & 0.6 & 0.4 & 1300 & $1.00\times 10^{-3}$ & 100 & $99.5R_\odot$ & $1.75\times 10^{-14}$ & 687.77 \rule[-2mm]{0mm}{6mm} \\ BsV06 & 0.6 & 0.6 & 1300 & $1.00\times 10^{-3}$ & 100 & $99.5R_\odot$ & $1.97\times 10^{-14}$ & 697.02 \rule[-2mm]{0mm}{6mm} \\ BsV09 & 0.6 & 0.9 & 1300 & $1.00\times 10^{-3}$ & 100& $99.5R_\odot$ &$2.63\times 10^{-14}$ & 701.24 \rule[-2mm]{0mm}{6mm} \\ BsV12 & 0.6 & 1.2 & 1300 & $1.00\times 10^{-3}$ & 100 & $99.5R_\odot$ & $3.10\times 10^{-14}$ & 716.19 \rule[-2mm]{0mm}{6mm} \\ BsV15 & 0.6 & 1.5 & 1300 & $1.00\times 10^{-3}$ & 100 & $99.5R_\odot$ & $3.54\times 10^{-14}$ & 691.51 \rule[-2mm]{0mm}{6mm} \\ BsV18 & 0.6 & 1.8 & 1300 & $1.00\times 10^{-3}$ & 100 & $39.1R_\odot$ & $4.18\times 10^{-14}$ & 633.64 \rule[-2mm]{0mm}{6mm} \\ BsV21 & 0.6 & 2.1 & 1300 & $1.00\times 10^{-3}$ & 100 & $39.1R_\odot$ & $4.57\times 10^{-14}$ & 635.62 \rule[-2mm]{0mm}{6mm} \\ BsV27 & 0.6 & 2.7 & 1300 & $1.00\times 10^{-3}$ & 100 & $39.1R_\odot$ & $5.09\times 10^{-14}$ & 560.80 \rule[-2mm]{0mm}{6mm} \\ BsV30 & 0.6 & 3.0 & 1300 & $1.00\times 10^{-3}$ & 100 & 
$39.1R_\odot$ & $4.97\times 10^{-14}$ & 561.31 \rule[-2mm]{0mm}{6mm} \\ \hline BwV00 & 0.6 & 0 & 325 & $4.00\times 10^{-3}$ & 25 & $95.3R_\odot$ & $1.26\times 10^{-14}$ & 586.55 \rule[-2mm]{0mm}{6mm} \\ BwV06 & 0.6 & 0.6 & 325 & $4.00\times 10^{-3}$ & 25 & $95.3R_\odot$ & $2.57\times 10^{-14}$ & 581.24 \rule[-2mm]{0mm}{6mm} \\ BwV18 & 0.6 & 1.8 & 325 & $4.00\times 10^{-3}$ & 25 & $37.8R_\odot$ & $4.76\times 10^{-14}$ & 460.09 \rule[-2mm]{0mm}{6mm} \\ \hline \hline \end{tabular} } \vspace{0.5em} \caption{\label{tab:settings} Input parameters (2nd - 7th columns) and output results (8th - 9th columns) of different cases. } \vspace{1em} \end{table*} \end{center} \subsection{Phenomenology of Alfv\'en-wave turbulence} \label{sec:Alfventurbulence} In heating the solar wind, an energy cascade is required to convert the kinetic and magnetic energies to heat via viscosity and resistivity. Alfv\'en-wave turbulence, a type of MHD turbulence that is triggered by the collision of counter-propagating Alfv\'en waves \citep[e.g.,][]{Goldreich_1995ApJ,Lazarian_2016}, is a promising process for the energy cascade in the solar wind. Because Alfv\'en-wave turbulence is an intrinsically three-dimensional process, one needs to model its effect in order to deal with the turbulent dissipation in a one-dimensional system. Here we adopt a phenomenological model of Alfv\'en-wave turbulence \citep{Hossain_1995PhFl,Dmitruk_2002,van_Ballegooijen_2016ApJ}, which yields the (averaged) turbulent heating rate in terms of mean-field quantities (Els\"asser variables).
Following \citet{Shoda_2018_ApJ_a_self-consistent_model}, the turbulent dissipation terms in Eqs. (\ref{rovx}) and (\ref{induction_x}) are explicitly given by \begin{align} \label{turblence} D_{v_\theta,\phi}^{\text {turb }} &=-\frac{c_{d}}{4 \lambda_{\perp}}\left(\left|z_{\theta,\phi}^{+}\right| z_{\theta,\phi}^{-}+\left|z_{\theta,\phi}^{-}\right| z_{\theta,\phi}^{+}\right) \end{align} and \begin{align} D_{b_\theta,\phi}^{\text {turb }} &=-\frac{c_{d}}{4 \lambda_{\perp}}\left(\left|z_{\theta,\phi}^{+}\right| z_{\theta,\phi}^{-}-\left|z_{\theta,\phi}^{-}\right| z_{\theta,\phi}^{+}\right), \end{align} where $\lambda_\perp$ is the perpendicular correlation length and ${\bm z}_\perp^{\pm}$ are the Els\"asser variables \citep{Elsasser_PhRv_1950} defined by \begin{equation} \label{eq:elsasser} z_{\theta,\phi}^{\pm}=v_{\theta,\phi} \mp B_{\theta,\phi} / \sqrt{4 \pi \rho} . \end{equation} We assume that the correlation length increases with the radius of the flux tube: \begin{equation} \lambda_{\perp}=\lambda_{\perp, \odot} \frac{r}{R_\odot} \sqrt{\frac{f^{\rm op}}{f_\odot^{\rm op}}}. \label{eq:corrlength} \end{equation} Because the Alfv\'enic fluctuations are localized in inter-granular lanes on the photosphere \citep{Chitta_2012_ApJ}, we set $\lambda_{\perp,\odot}$ to a typical width of inter-granular lanes: \begin{equation} \lambda_{\perp, \odot}=150 \ \mathrm{km}. \label{eq:lambda0} \end{equation} The dimensionless coefficient $c_d$ is chosen following \citet{van_Ballegooijen_2017} as \begin{equation} c_{d}=0.1. \end{equation} \subsection{Geometry of Flux Tubes} \label{sec:fluxtube} In modeling the solar wind in a one-dimensional flux tube, we need to prescribe the filling factor of the open flux tube $f^{\rm op}$ as a function of $r$. Since the open flux tube is localized on the photosphere and expands as the radial distance increases, $f^{\rm op} (r)$ should be an increasing function of $r$ that asymptotically approaches unity.
Following \citet{Shoda_2020_ApJ}, we employ a two-step expansion of the flux tube, which is described in terms of $f^{\rm op} (r)$ as \begin{align} f^{\rm op} (r)=f_{\odot}^{\rm op} f_{1}^{\rm exp}(r) f_{2}^{\rm exp}(r), \end{align} where $f_{1}^{\rm exp}(r)$ and $f_{2}^{\rm exp}(r)$ represent the first and second expansions, respectively. The first expansion occurs in the chromosphere until one flux tube merges with the adjacent flux tubes. Although direct observations of the chromospheric magnetic field are still missing, the expansion occurs in response to the exponential decrease in the ambient gas pressure, and therefore it is natural to assume that the filling factor increases exponentially with height. For this reason, the following formulation is adopted: \begin{align} f_{1}^{\rm exp}(r)=\min \left[f_{\rm cor}^{\rm op} / f_{\odot}^{\rm op}, \exp \left (\frac{r-R_\odot}{H_{\rm mag}} \right)\right], \end{align} where $f_{\rm cor}^{\rm op}$ is the open-flux filling factor in the corona and $H_{\rm mag}$ is the scale height of the flux-tube expansion. We relate $H_{\rm mag}$ to the pressure scale height on the photosphere $H_\odot$ by \begin{align} H_{\rm mag} = 2.5 H_\odot = 2.5 \frac{a_\odot^2}{g_\odot}, \end{align} where $a_{\odot}=6.9 \ {\rm km \; s^{-1}}$ and $g_{\odot}=0.274 \ {\rm km \ s^{-2}}$ are the sound speed and the gravitational acceleration on the photosphere, respectively. The second expansion occurs in the extended corona with a typical length scale of $\sim R_\odot$. Following \citet{Kopp_1976_SolPhys}, we adopt the coronal expansion as \begin{align} f_{2}^{\exp }(r)=\frac{\mathcal{F}(r)+f_{\rm cor}^{\rm op}+\mathcal{F}\left(R_{\odot}\right)\left(f_{\rm cor}^{\rm op}-1\right)}{f_{\rm cor}^{\rm op}(\mathcal{F}(r)+1)}, \end{align} where \begin{align} \mathcal{F}(r)=\exp \left(\frac{r-r_{\exp }}{\sigma_{\exp }}\right). \end{align} We adopt the fixed values $r_{\rm exp} / R_\odot =1.3$ and $\sigma_{\rm exp} / R_\odot=0.5$.
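The two-step expansion above is easy to tabulate numerically; a minimal sketch (our construction, taking $f_\odot^{\rm op}=10^{-3}$ and $f_{\rm cor}^{\rm op}=0.1$, the values of the Bs models in Table \ref{tab:settings}):

```python
import numpy as np

RSUN = 6.96e5                      # km, solar radius (standard value)
A0, G0 = 6.9, 0.274                # km/s and km/s^2, photospheric values
H_MAG = 2.5 * A0**2 / G0           # scale height of the first expansion, km
R_EXP, SIG_EXP = 1.3, 0.5          # in units of RSUN

def f_open(r, f_phot=1.0e-3, f_cor=0.1):
    """Two-step filling factor f_op(r) = f_phot * f1(r) * f2(r); r in km."""
    # First (chromospheric) expansion, capped at f_cor/f_phot.
    x = np.minimum((r - RSUN) / H_MAG, 50.0)   # cap avoids exp overflow
    f1 = np.minimum(f_cor / f_phot, np.exp(x))
    # Second (coronal) expansion with length scale ~ RSUN.
    F = lambda rr: np.exp((rr / RSUN - R_EXP) / SIG_EXP)
    f2 = (F(r) + f_cor + F(RSUN) * (f_cor - 1.0)) / (f_cor * (F(r) + 1.0))
    return f_phot * f1 * f2

# f2(RSUN) = 1 by construction, so f_op equals f_phot at the photosphere,
# and f_op approaches unity far from the Sun.
assert np.isclose(f_open(RSUN), 1.0e-3)
assert np.isclose(f_open(100.0 * RSUN), 1.0)
```

The cap on the exponent only matters far outside the chromosphere, where the first expansion has long since saturated at $f_{\rm cor}^{\rm op}/f_\odot^{\rm op}$.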
The values of $f^{\rm op}_\odot$ and $f^{\rm op}_{\rm cor}$ are summarized in Table \ref{tab:settings}. \subsection{Simulation Setup} \label{sec:setup} The simulation domain extends from the photosphere ($r=R_\odot$) to the outer boundary ($r=r_{\rm out}$) located at nearly $r=100R_{\odot}$ in most cases. The radial distance of $r_{\rm out}$ for each run is tabulated in Table \ref{tab:settings}. At $r=r_{\rm out}$, we impose free boundary conditions. \shimizu{A great advantage of setting the inner boundary at the photosphere is that we can self-consistently calculate the density at the coronal base, which is one of the critical parameters in determining the mass-loss rate, $\dot{M}_w$, of the solar wind \citep[e.g., ][]{Lamers_1999book}. The coronal base, where the density is nearly ten orders of magnitude smaller than the density at the photosphere, is frequently set as the inner boundary of simulations for solar and stellar winds \citep{Verdini_2010_ApJ,Lionello_2014_ApJ,Shoda_2019ApJ}. However, the coronal-base density is determined by the chromospheric evaporation as a result of the energy balance between conductive heating and radiative cooling at the transition region \citep{RTV_1978ApJ,Withbroe_1988ApJ}. Specifically, when the heating in the corona increases, denser chromospheric material is heated up by the thermal conduction from the corona, resulting in an increase in the density at the coronal base.
Since our numerical simulations solve these heating and cooling processes in a self-consistent manner, we can obtain a reliable $\dot{M}_w$ independently of the treatment of the inner boundary at the photosphere.} The size of the spatial grid, which varies with $r$, is set as follows: \begin{equation} \Delta r = \max \left[ \Delta r_{\rm m},{\rm min} \left[ \Delta r_{\rm M},\frac{2 \varepsilon_{\rm ge}}{2 + \varepsilon_{\rm ge}} (r-r_{\rm ge}) +\Delta r_{\rm m} \right] \right], \end{equation} where we set $\Delta r_{\rm m}=20 {\rm \ km}$, $\Delta r_{\rm M}=2000 {\rm \ km}$, $\varepsilon_{\rm ge} =0.01$, and $r_{\rm ge} = 1.04 R_\odot$. Figure \ref{grid_size} shows $\Delta r$ as a function of $r$. \begin{figure}[!t] \begin{center} \includegraphics[width=8.5cm]{./figure/grid_size.pdf} \caption{The radial profile of the grid size, $\Delta r$. \label{grid_size}} \end{center} \end{figure} At the inner boundary, we fix the temperature to the photospheric value, \begin{equation} T_{\odot} = 5770 \;\mathrm{K} . \end{equation} The initial temperature is set to $T=T_{\odot}$ in the entire simulation region. We initially set a hydrostatic density distribution with $T=T_{\odot}$ in the inner region, which is extended with a power-law profile in the outer region: \begin{equation} \bar{\rho}_{\rm init}(r)=\max \left[ \rho_\odot e^{-\frac{r-R_\odot}{H_{\odot}}},\rho_{\rm w,0}(r/R_\odot-1)^{-2.5}\right], \label{eq:initdens} \end{equation} where we adopt $\rho_{\rm w,0}=10^{-19}$ g cm$^{-3}$ unless otherwise stated. The inner hydrostatic profile switches to the outer power-law one at $r/R_{\odot}-1\approx 0.01$. We note that although the outer density is larger than the hydrostatic value with $T=T_{\odot}$, it is still smaller than the observed density in the solar corona and wind by a factor of five. The transverse components of the velocity and magnetic field correspond to the amplitudes of Alfv\'enic waves.
Their inner boundary conditions are defined in terms of the Els\"{a}sser variables (Equation \eqref{eq:elsasser}) at the photosphere. We impose a free boundary condition on the incoming component at the inner boundary so that it is absorbed there without reflection: \begin{equation} \left.\frac{\partial}{\partial r} z_{\theta,\phi}^{-}\right|_{\odot}=0 . \end{equation} To inject MHD waves from the photosphere, we impose time-dependent boundary conditions on the density, velocity, and perpendicular magnetic field. The transverse perturbation is injected via the outgoing component of the Els\"asser variables with a broadband spectrum, \begin{align} z_{\theta,\phi, \odot}^{+} \propto \sum_{N=0}^{100} \sin \left(2 \pi f_{N}^{t} t+\phi_{N}^{t}\right) / \sqrt{f_{N}^{t}}, \end{align} where $\phi_N^t$ is a random phase and \begin{align} 1.00\times10^{-3} \mathrm{Hz} \leq f_{N}^{t} \leq 1.00 \times 10^{-2}\; \mathrm{Hz}. \label{eq:freqtransverse} \end{align} The longitudinal perturbation, which originates from the $p$-mode oscillations, is excited via the density, \begin{align} \rho_{\odot}&=\overline{\rho_\odot} \left(1+\frac{v_{r, \odot}}{a_\odot}\right), \end{align} where $\overline{\rho_\odot} = 1.88\times 10^{-7} {\rm \ g \ cm^{-3}}$, and the radial velocity, \begin{align} v_{r,\odot}&= \delta v_{\parallel, \odot}(t), \end{align} with \begin{align} \delta v_{\parallel, \odot} \propto \sum_{N=0}^{100} \sin \left(2 \pi f_{N}^{l} t+\phi_{N}^{l}\right) / \sqrt{f_{N}^{l}}, \end{align} where $\phi_N^l$ is a random phase and \begin{align} 3.33 \times 10^{-3} \mathrm{Hz} \leq f_{N}^{l} \leq 1.00 \times 10^{-2}\; \mathrm{Hz}. \end{align} This corresponds to the period range between 100 seconds and 5 minutes, which is narrower than that of the transverse component.
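The broadband drivers can be sketched as a superposition of sinusoids. In the sketch below (Python), the linear frequency spacing and the normalization of the series to a target rms are our assumptions; the text specifies only the frequency bands, the $1/\sqrt{f_N}$ weighting, and the random phases.

```python
import numpy as np

def broadband_driver(t, f_min, f_max, v_rms, n_modes=101, seed=1):
    """Superposition of sinusoids with 1/sqrt(f) amplitude weighting and
    random phases, rescaled so that its rms over t equals v_rms."""
    rng = np.random.default_rng(seed)
    f = np.linspace(f_min, f_max, n_modes)        # spacing: an assumption
    phi = rng.uniform(0.0, 2.0 * np.pi, n_modes)  # random phases phi_N
    raw = np.sum(np.sin(2.0 * np.pi * np.outer(f, t) + phi[:, None])
                 / np.sqrt(f)[:, None], axis=0)
    return v_rms * raw / np.sqrt(np.mean(raw**2))

t = np.arange(0.0, 3.0e4, 10.0)  # 10 s sampling over ~8 hours
dv_perp = broadband_driver(t, 1.00e-3, 1.00e-2, v_rms=0.6)  # transverse band
dv_par  = broadband_driver(t, 3.33e-3, 1.00e-2, v_rms=0.6)  # longitudinal band
```

The same generator serves both components; only the band limits differ, reproducing the narrower 100 s to 5 min range for the longitudinal driver.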
We tabulate the transverse and longitudinal components of the root-mean-squared velocity amplitudes, $\langle \delta v_{\perp,\odot} \rangle$ and $\langle \delta v_{\parallel,\odot} \rangle$, of the input fluctuations at the photosphere in Table \ref{tab:settings}. \shimizu{ We take $\langle \delta v_{\perp,\odot}\rangle = \langle \delta v_{\parallel,\odot}\rangle = 0.6$ km s$^{-1}$ as standard values for the velocity perturbation at the photosphere. We note here that, because only the outward flux is selectively injected from the photosphere in both the transverse and longitudinal components, the corresponding ``random'' velocity amplitude is about $\sqrt{2}$ times larger than these values, and is thus comparable to the observed transverse \citep{Matsumoto_2010ApJ} and longitudinal \citep{Oba2017ApJ} amplitudes of $\sim 1$ km s$^{-1}$. } Each model is labeled BxVyy, where ``x'' indicates the type of the magnetic flux tube and ``yy'' denotes the amplitude of $\langle \delta v_{\parallel,\odot} \rangle$. We classify the cases into three groups by the effect of the magnetic field. The first group, which includes only one case, is labeled x $=0$. In this case, we switch off Alfv\'enic waves by setting $\langle \delta v_{\perp,\odot} \rangle = 0$; we test whether longitudinal waves alone can form the corona and wind. The result of this case is presented in Appendix \ref{sec:sound wave wind} \citep[see also][]{Suzuki_2002ApJ}. The second and third groups are labeled x $=$ s and w, which stand for ``standard (or strong)'' and ``weak'' magnetic fields, respectively. The aim of these groups is to investigate how the longitudinal-wave excitation at the photosphere affects the properties of the solar wind. For this purpose, we compare cases with different amplitudes of $\langle \delta v_{\parallel,\odot}\rangle$ for a fixed transverse amplitude of $\langle \delta v_{\perp,\odot}\rangle=0.6$ km s$^{-1}$.
In the second group, we adopt the equipartition magnetic field, $B_{\odot}=1300$ G, at the photosphere, derived from $8\pi p_{\odot}/B_{\odot}^2=1$ (Section \ref{sec:Results}), to model observed kilo-Gauss patches \citep{Tsuneta_2008ApJ,Ito_2010ApJ}. In the third group, in order to examine the effect of the geometry of magnetic flux tubes on the propagation and dissipation of waves in the chromosphere, we reduce $B_{\odot}$ to 1/4 of that of the second group while keeping the field strength above the corona unchanged (Section \ref{sec:dependenceonB}). \shimizu{In order to extensively investigate the effect of longitudinal waves on the solar wind, the second group in particular covers a wide range of $0<\langle\delta v_{\parallel,\odot}\rangle<3.0$ km s$^{-1}$. The cases with $\langle\delta v_{\parallel,\odot}\rangle\gtrsim 2$ km s$^{-1}$, which exceed the observed average value explained above, target transient large-amplitude disturbances \citep[e.g.,][]{Oba_2017ApJ}. } We perform the simulations for a sufficiently long time in order to study the average behavior of the atmosphere and wind after they reach a quasi-steady state. To satisfy this requirement, the simulation time is set to 4500 minutes for the cases with $\langle\delta v_{\parallel,\odot}\rangle=0-1.2$ km s$^{-1}$ and 6000 minutes for the cases with $\langle\delta v_{\parallel,\odot}\rangle=1.5-3.0$ km s$^{-1}$. Even after the quasi-steady state is achieved, the radial profile fluctuates in time. Therefore, when we compare average properties of different cases, we take the average of physical quantities over the last 1500 minutes of each simulation. \section{Results} \label{sec:Results} In this section, we show the results of the cases with the standard magnetic field, BsVyy.
\subsection{Overview: comparison of radial profiles} \begin{figure}[ht] \begin{center} \includegraphics[width=9cm]{./figure/temp_and_vr.pdf} \end{center} \caption{\label{fig:temp_vr} Time-averaged wind profiles of cases with $\langle \delta v_{\parallel,\odot}\rangle=0 \,{\rm km\ s^{-1}}$ (blue dashed; BsV00), $0.6 \,{\rm km\ s^{-1}}$ (green solid; BsV06) and $1.8 \,{\rm km\ s^{-1}}$ (red dotted; BsV18) in comparison with observations. {\bf (a)}: Temperature. The circles \citep{Fludra_1999SSRv} show the radial distribution of electron temperature observed by CDS/SOHO. {\bf (b)}: Radial velocity. The circles \citep{Teriaca_2003AIPC} and the stars \citep{Zangrilli_2002ApJ} represent proton outflow speeds in polar regions observed by SOHO. The location of the top of the chromosphere at $T = 2\times10^4$ K for each case is shown by diamonds in both panels. } \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=9cm]{./figure/rho_log.pdf} \end{center} \caption{\label{fig:rho} Time-averaged density profiles in the chromosphere and the low corona in comparison to observations. The line types are the same as those in Figure \ref{fig:temp_vr}. The squares represent electron density (right axis) obtained from observations of multiple total solar eclipses during solar minimum phases \citep{Saito_1970AnTok}. The crosses and triangles \citep{Wilhelm_1998ApJ} are electron density observed by SOHO in interplume lanes and plume lanes, respectively. Open and filled diamonds show the location of the top of the chromosphere at $T = 2\times10^4$ K and the location of the coronal base at $T=5\times 10^5$ K, respectively. } \end{figure} As an overview, we show how the radial profiles of the atmosphere and wind depend on the longitudinal-wave amplitude at the photosphere.
Figure \ref{fig:temp_vr} (a) and (b) show the time-averaged radial profiles of the temperature $T$ and radial velocity $v_r$ for three cases: $\langle \delta v_{\parallel,\odot}\rangle=0.0$ km s$^{-1}$ (BsV00, blue-dashed line), $\langle \delta v_{\parallel,\odot}\rangle=0.6$ km s$^{-1}$ (BsV06, green-solid line), and $\langle \delta v_{\parallel,\odot}\rangle=1.8$ km s$^{-1}$ (BsV18, red-dotted line). Also shown by symbols are the observed values taken from the literature (see the caption for details). Several features are found in this comparison. \begin{enumerate} \item The transition region is higher in the large-$\langle \delta v_{\parallel,\odot}\rangle$ cases. Given that the upward motion of the transition region (spicules; see Section \ref{sec:TimeVariability}) is likely to be driven by longitudinal waves, the higher transition region is a natural consequence of larger-amplitude longitudinal waves. \item No significant differences are seen in the coronal temperature, despite the larger energy injection at the photosphere. \item In the $v_r$ profiles, while the outflow in the inner region ($r/R_{\odot}-1\lesssim 10$) is slightly faster in large-$\langle \delta v_{\parallel,\odot}\rangle$ cases, the terminal velocity is nearly invariant with $\langle \delta v_{\parallel,\odot}\rangle$. This shows that the variety in the solar wind velocity is unlikely to originate from the longitudinal-wave injection at the photosphere. \end{enumerate} Figure \ref{fig:rho} shows the radial profiles of the mass density $\rho$ (left axis) in the chromosphere and corona ($0.005 \le r/R_\odot-1 \le 1$), in comparison to the observed electron densities $n_{\rm e}$ (right axis) in the corona. In converting $n_{\rm e}$ to $\rho$, we assume that the corona is composed of fully ionized hydrogen plasma, that is, $\rho = m_{\rm H} n_{\rm e}$. The line format is the same as that of Figure \ref{fig:temp_vr}.
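The density conversion used here, $\rho = m_{\rm H} n_{\rm e}$ for fully ionized hydrogen, amounts to a one-line helper (Python; the sample electron density is an assumed illustrative value, not taken from the observations plotted):

```python
M_H = 1.673e-24  # hydrogen mass [g]

def ne_to_rho(n_e):
    """Mass density (g cm^-3) of a fully ionized hydrogen plasma
    with electron number density n_e (cm^-3): rho = m_H * n_e."""
    return M_H * n_e

rho_corona = ne_to_rho(1.0e8)  # assumed low-coronal electron density [cm^-3]
```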
In contrast to the temperature and velocity, the density depends significantly on $\langle \delta v_{\parallel,\odot}\rangle$. Specifically, the coronal density is four times larger in the case with $\langle \delta v_{\parallel,\odot}\rangle=1.8$ km s$^{-1}$ than in that with $\langle \delta v_{\parallel,\odot}\rangle=0.0$ km s$^{-1}$. Given that the filling factor of the open flux tube $f^{\rm op}$ is fixed and the wind velocity is nearly independent of $\langle \delta v_{\parallel,\odot}\rangle$, the larger coronal density implies a larger mass-loss rate $\dot{M}_w$, which is given by \begin{equation} \dot{M}_w=4\pi r^2 f^{\rm op}\rho v_r. \label{eq:Mdot} \end{equation} Our simulation results show that the wind mass-loss rate is sensitive to the longitudinal-wave injection. The underlying physics is discussed in detail in the following sections. \subsection{Mass-loss Rate and Wind Energetics} \label{sec:MasslossEnergetics} \begin{figure}[!t] \begin{center} \includegraphics[width=9cm]{figure/massloss_powerlaw.pdf} \end{center} \caption{\label{fig:vr_mass_loss} Mass-loss rate versus injected longitudinal-wave amplitude. Left and right axes indicate $\Delta \dot{M}_w$ (Equation \eqref{eq:DeltaMdot}) and $\dot{M}_w$ (Equation \eqref{eq:Mdot}), respectively. Blue star symbols with error bars show time-averaged $\dot{M}_w$ of the BsVyy cases with maximum and minimum values during the period of the time average. Red open circles represent the theoretical prediction of Equation (\ref{eq:Cranmer_2011}) introduced by \citet{Cranmer_2011_ApJ}. The blue solid line is the power-law fit to the time-averaged $\Delta\dot{M}_{w}$ for $\langle \delta v_{\parallel,\odot}\rangle \le 2.7$ km s$^{-1}$ (Equation \eqref{eq:powerlawfit}).
} \end{figure} \shimizu{To see more quantitatively how the mass-loss rate depends on the photospheric longitudinal-wave amplitude $\langle \delta v_{\parallel,\odot}\rangle$, we present in Figure \ref{fig:vr_mass_loss} the relation between $\langle \delta v_{\parallel,\odot}\rangle$ and the mass-loss rate (blue stars) evaluated at $r=r_{\rm out}$}; shown on the right axis is $\dot{M}_w$ and shown on the left axis is the enhancement of the mass-loss rate $\Delta \dot{M}_w$, which is defined by \begin{align} \Delta \dot{M}_w = \dot{M}_w - \dot{M}_w^0, \label{eq:DeltaMdot} \end{align} where $\dot{M}_w^0$ ($=1.32\times 10^{-14}M_{\odot}$ yr$^{-1}$) denotes the mass-loss rate derived from the case with $\langle \delta v_{\parallel,\odot}\rangle = 0.0 {\rm \ km \ s^{-1}}$ (BsV00). The time variability is also presented by vertical error bars spanning the maximum and minimum values during the averaging period. Cases with larger $\langle \delta v_{\parallel,\odot}\rangle$ exhibit higher time variability, which is discussed later in Section \ref{sec:TimeVariability}. The blue solid line in Figure \ref{fig:vr_mass_loss} is the power-law fit to the time-averaged $\Delta \dot{M}_w$ in the range of $\langle \delta v_{\parallel,\odot}\rangle \le 2.7$ km s$^{-1}$: \begin{equation} \Delta\dot{M}_w=1.41\times 10^{-14}\left(\frac{\langle\delta v_{\parallel,\odot}\rangle}{{\rm km\ s^{-1}}}\right)^{1.05} M_\odot\; {\rm yr}^{-1}. \label{eq:powerlawfit} \end{equation} The fitting formula indicates that $\Delta\dot{M}_w$ increases almost linearly with $\langle \delta v_{\parallel,\odot}\rangle$ before saturating above $\langle \delta v_{\parallel,\odot}\rangle \gtrsim 2.7 {\rm \ km \ s^{-1}}$. The linear dependence indicates that the increase of $\dot{M}_w$ is slower than the increase of the injected energy flux carried by the longitudinal waves, which scales as $\langle\delta v_{\parallel,\odot} \rangle^2$.
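The power-law fit of Equation \eqref{eq:powerlawfit} corresponds to linear regression in log-log space. The sketch below (Python) illustrates the procedure on synthetic $(\langle\delta v_{\parallel,\odot}\rangle, \Delta\dot{M}_w)$ pairs generated from the fitting formula itself (our simulated data are not reproduced here), recovering the quoted amplitude and exponent:

```python
import numpy as np

# Synthetic (amplitude, mass-loss enhancement) pairs drawn from the
# fitting formula; in practice these come from the time-averaged runs.
v = np.array([0.3, 0.6, 0.9, 1.2, 1.5, 1.8, 2.1, 2.4, 2.7])  # km/s
dM = 1.41e-14 * v**1.05                                       # Msun/yr

# Power-law fit dM = A * v**alpha as a straight line in log-log space
alpha, logA = np.polyfit(np.log(v), np.log(dM), 1)
A = np.exp(logA)
```

The same regression applied to the simulated $\Delta\dot{M}_w$ values yields the coefficients quoted in the text.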
This implies that only a portion of the additional energy input by the longitudinal waves is used to enhance the mass loss. One possible reason is that, as $\langle \delta v_{\parallel,\odot}\rangle$ increases, a larger fraction of the input longitudinal waves dissipates in the chromosphere due to more efficient shock formation. \begin{figure}[!t] \centering \includegraphics[width=8.5cm]{figure/enegy_flux_corona_trans.pdf} \caption{Various components of surface-integrated energy fluxes, normalized by the Alfv\'enic Poynting flux at the photosphere, as functions of $\langle \delta v_{\parallel,\odot}\rangle$. Cyan stars, red pentagons, and green open circles with solid lines respectively denote the integrated radiative cooling loss (Equation \eqref{eq:L_R}), Alfv\'enic Poynting flux (Equation \eqref{eq:L_A}), and gravitational potential-energy flux (Equation \eqref{eq:L_G}) measured at the coronal base. Black triangles with a dashed line represent the kinetic energy flux (Equation \eqref{eq:L_K}) at \shimizu{$r=r_{\rm out}$}. \label{fig:energy_flux} } \end{figure} Although the mass-loss rate depends on the amplitude of the longitudinal waves at the photosphere, this does not mean that the longitudinal waves are the main driver of the solar wind. As shown in Appendix \ref{sec:sound wave wind}, without transverse-wave injection (B0V06), the atmosphere is heated only up to a few times $10^5$ K, and the acoustic waves from the photosphere alone do not drive steady outflows. Thus, the interaction between longitudinal and transverse waves is possibly the key to understanding the cause of the enhancement of the mass loss. To identify what causes the increase of $\dot{M}_w$, we investigate the global energetics of the wind, which is key to understanding the scaling law of the mass-loss rate \citep{Cranmer_2011_ApJ,Shoda_2020_ApJ}. In particular, we consider the radiative energy loss to discuss the energy conservation law from the photosphere to the solar wind \citep{Suzuki_2013_PASJ}.
In the quasi-steady state, the time-averaged energy conservation law (Equation \eqref{eq:energy}) is given by \begin{equation} \frac{d}{d r}\left(L_{\rm K}+L_{\rm E}+L_{\rm A}-L_{\rm C}-L_{\rm G}\right) \approx -4 \pi r^{2} f^{\rm op} Q_{\rm R}, \label{eq:energy_conservation_time_average} \end{equation} where \begin{align} \label{eq:L_K} L_{\rm K} &=\frac{1}{2} \rho v_{r}^{3} 4 \pi r^{2} f^{\rm op}, \\ \label{eq:L_E} L_{\rm E} &=\frac{\gamma}{\gamma-1} p v_{r} 4 \pi r^{2} f^{\rm op}, \\ \label{eq:L_A} L_{\rm A} &=\left[\left(\frac{1}{2} \rho \boldsymbol{v}_{\perp}^{2}+\frac{\boldsymbol{B}_{\perp}^{2}}{4 \pi}\right) v_{r}-\frac{B_{r}}{4 \pi}\left(\boldsymbol{v}_{\perp} \cdot \boldsymbol{B}_{\perp}\right)\right] 4 \pi r^{2} f^{\rm op}, \\ \label{eq:L_C} L_{\rm C} &=-q_{\rm cnd} 4 \pi r^{2} f^{\rm op}, \\ \label{eq:L_G} L_{\rm G} &=\rho v_{r} \frac{G M_{\odot}}{r} 4 \pi r^{2} f^{\rm op}=\dot{M}_w \frac{G M_{\odot}}{r} \end{align} are the surface-integrated kinetic energy flux, enthalpy flux, Poynting flux, conductive flux, and gravitational potential-energy flux, respectively. We note that $\dot{M}_w$ in Equation \eqref{eq:L_G} can be assumed to be constant in the quasi-steady state. We define the radiation luminosity $L_{\rm R}$ as follows: \begin{equation}\label{eq:L_R} L_{\rm R}(r)= \int_{r_{\rm lch}}^{r}Q_{\rm R} 4\pi r^2f^{\rm op}dr, \end{equation} where $r_{\rm lch}$ is a radial distance in the lower chromosphere. We set $r_{\rm lch}-R_\odot = 0.7 {\rm \ Mm}$ ($r_{\rm lch}/R_{\odot} = 1.001$). Below $r_{\rm lch}$, we assume $L_{\rm R}=0$ because the exponential (Newtonian) cooling, which dominates the radiation in $r \lesssim r_{\rm lch}$, should yield negligible net radiative loss. Equation \eqref{eq:energy_conservation_time_average} is then rewritten in terms of $L_{\rm R}$ as follows:
\begin{align} L_{\rm K}+L_{\rm E}+L_{\rm A}-L_{\rm C}-L_{\rm G} + L_{\rm R} \equiv L_{\rm tot} \approx {\rm const}, \label{eq:energy_conservation_Ltot} \end{align} where $L_{\rm tot}$ is the total surface-integrated energy flux, which is expected to be constant in $r$ in the quasi-steady state. By relating the values of $L_{\rm tot}$ at different radial distances, several analytical relations are derived. \begin{enumerate} \item Photosphere: Because the kinetic, thermal, and conductive energy fluxes are negligibly small at the nearly static and low-temperature photosphere, the dominant terms in $L_{\rm tot}$ are the Poynting flux and the gravitational potential-energy flux, that is, \begin{align} L_{\rm tot} \approx L_{{\rm A}, \odot} - L_{{\rm G}, \odot} = L_{{\rm A}, \odot} - \frac{1}{2}\dot{M}_w v_{g,\odot}^2 , \label{eq:Ltot_photosphere} \end{align} where $v_{g,\odot} = \sqrt{2GM_\odot/R_\odot} = 617$ km s$^{-1}$ is the escape velocity. We note that $L_{\rm R,\odot}=0$ is assumed as described above. \item Coronal base: Because the mean outflow velocity is small at the coronal base, $L_{\rm K}$ and $L_{\rm E}$ are negligible, and thus, $L_{\rm tot}$ is approximated by \begin{align} L_{\rm tot} \approx L_{{\rm A}, {\rm cb}} - L_{{\rm C}, {\rm cb}} - L_{{\rm G}, {\rm cb}} + L_{{\rm R}, {\rm cb}}, \label{eq:Ltot_coronal_base_ori} \end{align} where the subscript ``cb'' denotes the value at the coronal base, which we set at $r_{\rm cb}/R_\odot = 1.03$. We have confirmed that the conductive luminosity is small at the coronal base because the temperature gradient is already shallow. Therefore, we can safely simplify Equation \eqref{eq:Ltot_coronal_base_ori} to \begin{align} L_{\rm tot} \approx L_{{\rm A}, {\rm cb}} - L_{\rm G,cb} + L_{{\rm R}, {\rm cb}}.
\label{eq:Ltot_coronal_base} \end{align} \item Distant solar wind (outer boundary): Because the kinetic energy flux dominates the enthalpy, Poynting, and conductive fluxes in the super-Alfv\'enic region, $L_{\rm tot}$ is approximated by \begin{align} \label{eq:Ltot_solar_wind} L_{\rm tot} \approx L_{{\rm K}, {\rm out}} + L_{{\rm R}, {\rm out}} \approx L_{{\rm K}, {\rm out}} + L_{{\rm R}, {\rm cb}}, \end{align} where the subscript ``out'' denotes the value at the outer boundary ($r=r_{\rm out}$). Because the radiative loss above the coronal base is generally negligible, we use $L_{{\rm R}, {\rm out}} \approx L_{{\rm R}, {\rm cb}}$ (but see the discussion below). \end{enumerate} The energy conservation between the photosphere and the coronal base (Equations \eqref{eq:Ltot_photosphere} and \eqref{eq:Ltot_coronal_base}) yields \begin{align} L_{{\rm A}, \odot} \approx L_{{\rm A}, {\rm cb}} + L_{{\rm R}, {\rm cb}}, \label{eq:LA_coronal_base} \end{align} where we approximate $L_{{\rm G},\odot}\approx L_{\rm G,cb}$. Cyan stars and red pentagons in Figure \ref{fig:energy_flux} respectively denote $L_{\rm R,cb}$ and $L_{\rm A,cb}$ normalized by $L_{\rm A,\odot}$. Equation \eqref{eq:LA_coronal_base} is satisfied if the sum of these two components equals 100\% in Figure \ref{fig:energy_flux}. As one can see, however, this conservation is not perfectly fulfilled, possibly because of the treatment of the radiative cooling in the low chromosphere. As described previously, $L_{\rm R}$ excludes the contribution of the radiative cooling below $r_{\rm lch}$. Owing to this assumption, we underestimate $L_{\rm R}$, leading to $L_{{\rm A}, \odot} > L_{{\rm A}, {\rm cb}} + L_{{\rm R}, {\rm cb}}$.
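The surface-integrated fluxes entering this budget (Equations \eqref{eq:L_K}, \eqref{eq:L_E}, and \eqref{eq:L_G}) are straightforward to evaluate pointwise. A minimal sketch (Python, cgs units; the coronal-base sample values are arbitrary illustrative inputs, not numbers from our runs) also verifies the identity $L_{\rm G}=\dot{M}_w\,GM_\odot/r$ stated in Equation \eqref{eq:L_G}:

```python
import math

G, M_SUN = 6.674e-8, 1.989e33  # gravitational constant, solar mass (cgs)

def luminosities(r, f_op, rho, v_r, p, gamma=5.0 / 3.0):
    """Surface-integrated kinetic, enthalpy, and gravitational
    potential-energy fluxes (erg/s) on an open flux tube."""
    area = 4.0 * math.pi * r**2 * f_op
    L_K = 0.5 * rho * v_r**3 * area
    L_E = gamma / (gamma - 1.0) * p * v_r * area
    L_G = rho * v_r * G * M_SUN / r * area
    return L_K, L_E, L_G

# Illustrative coronal-base values (assumed, not from the paper's tables):
r_cb, f_cb = 1.03 * 6.957e10, 1.0e-3
rho_cb, v_cb, p_cb = 1.0e-15, 1.0e5, 0.1
L_K, L_E, L_G = luminosities(r_cb, f_cb, rho_cb, v_cb, p_cb)
mdot = 4.0 * math.pi * r_cb**2 * f_cb * rho_cb * v_cb  # Eq. for Mdot_w
```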
Although we have to bear in mind that $L_{\rm R,cb}$ could be underestimated, an increasing trend of $L_{\rm R,cb}$ with $\langle \delta v_{\parallel,\odot} \rangle$ is physically plausible; the density in the chromosphere and the low corona is higher for larger $\langle \delta v_{\parallel,\odot} \rangle$ (Figure \ref{fig:rho}), which yields larger radiative cooling. As a result, $L_{\rm A,cb}/L_{\rm A,\odot}$ does not monotonically increase with $\langle\delta v_{\parallel,\odot}\rangle$ but eventually saturates for $\langle\delta v_{\parallel,\odot}\rangle \gtrsim 2$ km s$^{-1}$ (Figure \ref{fig:energy_flux}) because a large portion of the input Alfv\'enic Poynting flux is already lost via radiation below the coronal base. Next, the energy conservation between the coronal base and the outer boundary (Equations \eqref{eq:Ltot_coronal_base} and \eqref{eq:Ltot_solar_wind}) yields \begin{align}\label{eq:L_Kout_approx} L_{{\rm K}, {\rm out}} \approx L_{{\rm A}, {\rm cb}} - L_{\rm G,cb}. \end{align} The green open circles and black triangles in Figure \ref{fig:energy_flux} respectively represent $L_{\rm G,cb}$ and $L_{\rm K,out}$ normalized by $L_{{\rm A},\odot}$. $L_{\rm A,cb}$, $L_{\rm G,cb}$, and $L_{\rm K,out}$ in Figure \ref{fig:energy_flux} exhibit a similar trend with $\langle\delta v_{\parallel,\odot}\rangle$; they increase for $\langle\delta v_{\parallel,\odot}\rangle\lesssim 2$ km s$^{-1}$ and saturate for $\langle\delta v_{\parallel,\odot}\rangle\gtrsim 2$ km s$^{-1}$. Figure \ref{fig:energy_flux} also shows that Equation \eqref{eq:L_Kout_approx} is reasonably satisfied, that is, $L_{\rm K,out} + L_{\rm G,cb} \approx L_{\rm A,cb}$. The wind velocity can be well approximated by the escape velocity, $v_{r,{\rm out}} \approx v_{g,\odot} = \sqrt{2GM_\odot/R_\odot}$, which is confirmed to be a reasonable approximation by our numerical results (Table \ref{tab:settings}).
Then, by using $L_{\rm K,out}\approx L_{\rm G,cb} \approx \dot{M}_w v_{g,\odot}^2/2$, we can rewrite Equation \eqref{eq:L_Kout_approx} as follows: \begin{align} \dot{M}_w \approx \frac{L_{{\rm A}, {\rm cb}}}{v_{g,\odot}^2}, \label{eq:Cranmer_2011} \end{align} as already found by \citet{Cranmer_2011_ApJ}. The comparison of $\dot{M}_w$ (blue stars) to ${L_{{\rm A}, {\rm cb}}}/{v_{g,\odot}^2}$ (red open circles) in Figure \ref{fig:vr_mass_loss} confirms that Equation \eqref{eq:Cranmer_2011} explains the obtained mass-loss rate quite well, particularly in the regime of small $\langle \delta v_{\parallel,\odot}\rangle \lesssim 1$ km s$^{-1}$. In other words, $\dot{M}_w$ is primarily determined by the Alfv\'{e}nic Poynting flux at the coronal base. In contrast, Equation \eqref{eq:Cranmer_2011} slightly overestimates the obtained $\dot{M}_w$ for larger $\langle \delta v_{\parallel,\odot}\rangle \gtrsim 1$ km s$^{-1}$ because the radiative cooling above $r_{\rm cb}$ is not negligible in Equation \eqref{eq:Ltot_solar_wind} owing to the larger density in the corona (Figure \ref{fig:rho}). However, even in these cases with large $\langle \delta v_{\parallel,\odot}\rangle$, Equation \eqref{eq:Cranmer_2011} still gives a reasonable estimate of $\dot{M}_w$. In summary, the increase and saturation of $\dot{M}_w$ with $\langle \delta v_{\parallel,\odot}\rangle$ directly reflect the trend of the Alfv\'enic Poynting flux at the coronal base. The saturation can be attributed to the excess radiative cooling discussed previously. On the other hand, in order to understand the increase of $L_{\rm A,cb}$ for $\langle \delta v_{\parallel,\odot}\rangle\le 2.7$ km s$^{-1}$, we need to further examine detailed properties of waves below the coronal base, which is presented in the following subsection.
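Equation \eqref{eq:Cranmer_2011} permits a quick order-of-magnitude estimate. In the sketch below (Python, cgs units), the adopted value of $L_{\rm A,cb}$ is an assumed illustrative input, not a number from our simulations; the escape velocity evaluates to the quoted 617 km s$^{-1}$, and the resulting $\dot{M}_w$ is of order $10^{-14}\,M_\odot$ yr$^{-1}$:

```python
import math

G, M_SUN, R_SUN = 6.674e-8, 1.989e33, 6.957e10  # cgs
SEC_PER_YR = 3.156e7

# Escape velocity from the solar surface [cm/s]
v_g = math.sqrt(2.0 * G * M_SUN / R_SUN)

# Assumed Alfvenic Poynting luminosity at the coronal base [erg/s]
L_A_cb = 5.0e27

mdot = L_A_cb / v_g**2                 # Cranmer-type scaling [g/s]
mdot_msun_yr = mdot * SEC_PER_YR / M_SUN
```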
\subsection{Dissipation and Mode Conversion of Waves}\label{sec:Alfven energy} \begin{figure}[!t] \begin{center} \includegraphics[width=8cm]{figure/alfven_energy1.pdf} \end{center} \caption{\label{fig:alfven_energy} Comparison of the radial profiles of the surface-integrated Alfv\'enic Poynting flux (a) and the fractions of the energy loss by the turbulent dissipation (b) and the mode conversion (c). Blue dashed, green solid and red dotted lines show the results of $\langle \delta v_{\parallel,\odot}\rangle=0 \,{\rm km\ s^{-1}}$ (BsV00), $0.6 \,{\rm km\ s^{-1}}$ (BsV06) and $1.8 \,{\rm km\ s^{-1}}$ (BsV18), respectively. In the top panel, the location of the top of the chromosphere at $T = 2\times10^4$ K for each case is shown by diamonds. } \end{figure} To understand the dependence of $L_{\rm A,cb}$ on $\langle\delta v_{\parallel,\odot}\rangle$ in Figure \ref{fig:energy_flux}, we examine the propagation and dissipation of transverse waves ($\approx$ Alfv\'{e}n waves) from the chromosphere to the low corona. \citet{Shoda_2020_ApJ} introduced an equation that describes the evolution of Alfv\'{e}n waves: \begin{equation} \frac{\partial}{\partial t}\left(\frac{1}{2}\rho v_\perp^2+\frac{B_\perp^2}{8\pi}\right) +\frac{1}{4\pi r^2f^{\rm op}}\frac{\partial}{\partial r}L_{\rm A} = -\varepsilon_{\parallel\leftrightarrow\perp}-Q_{\rm turb}, \end{equation} where $\varepsilon_{\parallel\leftrightarrow\perp}$ and $Q_{\rm turb}$ indicate the mode conversion from transverse waves to longitudinal waves and the energy loss by turbulent cascade, respectively.
These are explicitly written as \begin{align} \label{eq:mode_conversion} \varepsilon_{\parallel\leftrightarrow\perp} &= -v_r\frac{\partial}{\partial r}\left(\frac{B_\perp^2}{8\pi}\right) +\left(\rho v_\perp^2-\frac{B_\perp^2}{4\pi}\right)v_r \frac{d}{dr}\ln (r\sqrt{f^{\rm op}}), \\ \label{eq:turbulence} Q_{\rm turb} &= c_{d} \rho \sum_{i=\theta, \phi} \frac{\left|z_{i}^{+}\right| (z_{i}^{-})^2+\left|z_{i}^{-}\right| (z_{i}^{+})^2}{4 \lambda_{\perp}}. \end{align} We note that the first term of Equation (\ref{eq:mode_conversion}) denotes the nonlinear excitation of longitudinal perturbations from the magnetic fluctuation associated with transverse waves \citep{Hollweg_1971_JGR,Kudoh_1999ApJ,Suzuki_2005_ApJ,Matsumoto_2010_ApJ}. Using $\varepsilon_{\parallel\leftrightarrow\perp}$ and $Q_{\rm turb}$, we define the energy loss rates via the turbulent dissipation and the mode conversion as \begin{align} \Delta L_{\mathrm{A}, \mathrm{turb}}(r) &=\int_{R_{\odot}}^{r} d r 4 \pi r^{2} f^{\mathrm{op}} Q_{\mathrm{turb}} \end{align} and \begin{align} \Delta L_{\mathrm{A}, \mathrm{mc}}(r) &=\int_{R_{\odot}}^{r} dr 4 \pi r^{2} f^{\mathrm{op}} \varepsilon_{\parallel \leftrightarrow \perp}. \end{align} Figure \ref{fig:alfven_energy} shows the properties of the damping of Alfv\'enic waves in the chromosphere; Panel (a) presents the radial profile of $L_{\mathrm{A}}$; Panels (b) and (c) present $\Delta L_{\mathrm{A}, \mathrm{turb}}$ (energy loss by turbulence) and $\Delta L_{\mathrm{A}, \mathrm{mc}}$ (energy loss by mode conversion), respectively. We note that the net loss of Alfv\'enic waves, $L_{\mathrm{A},{\odot}}-L_\mathrm{A}$, is not always equal to the sum of these energy losses, possibly because of numerical dissipation. As shown in Figure \ref{fig:alfven_energy}, the mode conversion rate and the turbulent loss rate behave differently.
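The turbulent dissipation term of Equation \eqref{eq:turbulence} can be transcribed directly; in the sketch below (Python), the value of $c_d$ and the sample Els\"asser amplitudes are assumptions for illustration, and we verify that the expression is symmetric under the exchange $z^{+}\leftrightarrow z^{-}$:

```python
def q_turb(rho, z_plus, z_minus, lam_perp, c_d=0.1):
    """Phenomenological turbulent heating rate (erg cm^-3 s^-1), summed over
    the two transverse components; z_plus/z_minus are (z_theta, z_phi)
    tuples in cm/s. c_d is an assumed order-unity cascade coefficient."""
    total = 0.0
    for zp, zm in zip(z_plus, z_minus):
        total += abs(zp) * zm**2 + abs(zm) * zp**2
    return c_d * rho * total / (4.0 * lam_perp)

# Illustrative chromospheric values (assumed):
q1 = q_turb(1.0e-12, (3.0e6, -1.0e6), (5.0e5, 2.0e5), lam_perp=1.0e9)
q2 = q_turb(1.0e-12, (5.0e5, 2.0e5), (3.0e6, -1.0e6), lam_perp=1.0e9)
```

The symmetry means that the cascade heats the plasma whenever both counter-propagating populations are present, regardless of which dominates.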
A general trend is that $\Delta L_{\mathrm{A}, \mathrm{turb}}$ increases and $\Delta L_{\mathrm{A}, \mathrm{mc}}$ decreases with increasing $\langle \delta v_{\parallel,\odot}\rangle$. As $\langle \delta v_{\parallel,\odot}\rangle$ increases, the mode conversion from transverse (Alfv\'{e}nic) waves to longitudinal (acoustic) waves is suppressed; instead, the ``inverse conversion'' (longitudinal-to-transverse wave energy transfer), $\Delta L_{\mathrm{A}, \mathrm{mc}}<0$, takes place in the case with large $\langle \delta v_{\parallel,\odot}\rangle$ (red dotted; BsV18). This is because transverse waves are excited in the region with plasma $\beta\approx 1$ in the chromosphere by the mode conversion from the large-amplitude longitudinal waves injected from the photosphere \citep{Cally_2006, Schunker_2006, Cally_2008}. We note here that the conversion from the longitudinal mode to the transverse mode occurs even in the simple 1D geometry because the direction of the magnetic field, $(B_r\hat{\boldsymbol{r}}+\boldsymbol{B}_{\perp})/|B|$, is not parallel to the direction of wave propagation, which is strictly along the $r$ direction, where $\hat{\boldsymbol{r}}$ is the radial unit vector. As a result of the excitation of transverse waves from longitudinal waves, $L_{\rm A}$ increases near the surface (Figure \ref{fig:alfven_energy}(a)). The amplitude of the excited transverse waves increases with $\langle\delta v_{\parallel,\odot}\rangle$, which raises the Alfv\'enic Poynting flux at the coronal base (Figure \ref{fig:energy_flux}). \shimizu{As a consequence of the increased energy injection into the corona (increased $L_{\rm A,cb}$), the coronal heating rate increases, which leads to a larger coronal density (see the discussion in Section \ref{sec:setup}). In fact, as shown in Figure \ref{fig:rho}, the density at the coronal base (where $T=5\times 10^5$ K) is higher for larger $\langle\delta v_{\parallel,\odot}\rangle$, even though the coronal base is located at a higher altitude.
} \shimizu{ Another interesting point is that the velocity of the wind is insensitive to the value of $\langle \delta v_{\parallel,\odot}\rangle$ (bottom panel of Figure \ref{fig:temp_vr}); the increase of $\dot{M}_w (\propto \rho v_r)$ is caused solely by the increase in the density. According to the standard model of solar/stellar winds \citep{Hansteen_1995_JGR,Lamers_1999book}, additional heating and momentum inputs in the subsonic region ($v_r<a$) of a wind raise the mass-loss rate with negligible effects on the terminal velocity, whereas those in the supersonic region ($v_r>a$) do not affect the mass-loss rate but result in a higher terminal velocity.} \shimizu{With this background in mind, to understand the behavior of the wind speed with respect to $\langle \delta v_{\parallel,\odot}\rangle$, we examine the radial distribution of the energy transfer rate from the Alfv\'enic waves to the gas from the corona to the wind.} \shimizu{Specifically, we calculate the loss rate of the Alfv\'{e}nic Poynting flux per unit mass, defined as \begin{align} \zeta_{\rm A}=-\frac{1}{4\pi\rho r^2 f^{\rm op}}\frac{\partial L_{\rm A}}{\partial r}. \label{eq:dissipationrate} \end{align} Because of energy conservation, $\zeta_{\rm A}$ corresponds to the energy ($=$ heating $+$ work) transfer rate from the Alfv\'{e}nic Poynting flux to the plasma. } \shimizu{Figure \ref{fig:Alfven_critical} presents $\zeta_{\rm A}$ of the cases with $\langle \delta v_{\parallel,\odot}\rangle = 0.6$ km s$^{-1}$ (BsV06; green solid) and 1.8 km s$^{-1}$ (BsV18; red dashed) normalized by $\zeta_{\rm A}$ of the case with $\langle \delta v_{\parallel,\odot}\rangle = 0$ (BsV00). } \shimizu{The increase in $\langle \delta v_{\parallel,\odot}\rangle$ promotes the energy input in the subsonic region ($r\lesssim 2-3R_{\odot}$) but does not affect (or even reduces) the energy input in the supersonic region ($r\gtrsim 2-3R_{\odot}$). In other words, the vertical oscillation at the photosphere affects only the subsonic region.
For this reason, an addition of $\langle \delta v_{\parallel,\odot}\rangle$ does not affect the wind velocity but only increases the mass-loss rate. } \begin{figure}[!t] \begin{center} \includegraphics[width=8cm]{figure/alfven_energy_critical.pdf} \end{center} \caption{\label{fig:Alfven_critical} \shimizu{Dissipation rate of Alfv\'{e}nic Poynting flux per unit mass (Equation \eqref{eq:dissipationrate}) of $\langle \delta v_{\parallel,\odot}\rangle=0.6 \,{\rm km\ s^{-1}}$ (BsV06; green solid line) and $1.8 \,{\rm km\ s^{-1}}$ (BsV18; red dashed line). The values are normalized by that of $\langle \delta v_{\parallel,\odot}\rangle=0$ (BsV00).} } \end{figure} \section{Discussion} \label{sec:discussion} \subsection{Dependence on Magnetic Field in Chromosphere} \label{sec:dependenceonB} \begin{figure}[h] \begin{center} \includegraphics[width=8cm]{figure/beta_chr.pdf} \end{center} \caption{\label{fig:beta} Comparison of the radial profiles of time averaged $\langle \beta_r\rangle$ (Equation \eqref{eq:betar}). Thin and thick lines correspond to the cases of $B_{r,\odot}=1300$ G (BsVyy) and $325$ G (BwVyy), respectively. Dashed and solid lines show the cases with $\langle \delta v_{\parallel,\odot}\rangle = 0$ (BxV00) and 0.6 km s$^{-1}$ (BxV06), respectively. Diamonds show the location of the top of the chromosphere at $T = 2\times10^4$ K for each case. } \end{figure} We have shown that the nonlinear mode conversion between transverse and longitudinal waves in the chromosphere is the key to determining the wind properties when both transverse and longitudinal perturbations are input at the photosphere. The mode conversion rate sensitively depends on plasma $\beta=8\pi p/B^2$, with a peak at $\beta\approx 1$ \citep{Hollweg_1982_ApJ,Spruit_1992_ApJ}. Therefore, the magnetic field strength in the chromosphere is expected to play an essential role in determining the Alfv\'{e}nic Poynting flux that enters the corona.
Here, we perform simulations in a flux tube with a weaker magnetic field from the photosphere to the chromosphere but with the same field strength above the corona (``BwVyy'' in Table \ref{tab:settings}). Figure \ref{fig:beta} compares the radial profiles of the time-averaged plasma beta of four cases, BsV00, BsV06, BwV00, and BwV06, evaluated from the radial component of the magnetic field only, \begin{align} \langle\beta_r\rangle \equiv \frac{8\pi \langle p\rangle}{B_r^2}. \label{eq:betar} \end{align} We note that, although $\beta_r \ge \beta$, the difference is small because $|\delta B_{\perp}|<|B_r|$. \begin{figure}[t] \begin{center} \includegraphics[width=8cm]{figure/alfven_energy2.pdf} \end{center} \caption{\label{fig:alfven_energy2} Comparison of the radial profiles of the surface-integrated Alfv\'enic Poynting flux (a), the ratio of the inward Poynting flux to the outward Poynting flux (b), and the fractions of the energy loss by the turbulent dissipation (c) and the mode conversion (d). The line types are the same as those in Figure \ref{fig:beta}. In the top panel, the location of the top of the chromosphere at $T = 2\times10^4$ K for each case is shown by diamonds. } \end{figure} Figure \ref{fig:alfven_energy2} compares the properties of the Alfv\'{e}n waves in these four cases. Panels (a), (c), and (d) are the same as panels (a), (b), and (c) of Figure \ref{fig:alfven_energy}, except that the vertical axis of panel (a) and the horizontal axes are shown on a logarithmic scale. Panel (b) presents the ratio of the incoming component, $L_{\rm A}^{-}$, to the outgoing component, $L_{\rm A}^{+}$, of the Alfv\'{e}nic Poynting luminosity, which are defined by \begin{align} L_{\rm A}^{+} &= \rho (z_\perp^{+})^2 (v_r + v_{\rm A}) \pi r^2 f^{\rm op}, \\ L_{\rm A}^{-} &= \rho (z_\perp^{-})^2 (v_r - v_{\rm A}) \pi r^2 f^{\rm op}, \end{align} where $v_{\rm A}=B_r/\sqrt{4\pi\rho}$ is the Alfv\'en velocity along the $r$ direction.
We note that $L_{\rm A} = L_{\rm A}^{+} + L_{\rm A}^{-}$. Let us begin with the comparison of the cases without $\langle \delta v_{\parallel,\odot}\rangle$, BwV00 (light blue thick dashed lines) and BsV00 (deep blue thin dashed lines). Figure \ref{fig:alfven_energy2} (a) shows that $L_{\rm A}$ of the weak field case (BwV00) is larger than $L_{\rm A}$ of the standard field case (BsV00) near the photosphere. However, the former declines more rapidly in the chromosphere and becomes smaller than the latter in the upper chromosphere above $r/R_{\odot}-1>2\times 10^{-3}$. As a result, the mass-loss rate $\dot{M}_w$ of BwV00 is slightly smaller than that of BsV00 (Table \ref{tab:settings}), which is consistent with Equation \eqref{eq:Cranmer_2011}. The rapid damping of the Alfv\'en waves is mainly due to more efficient turbulent dissipation (Figure \ref{fig:alfven_energy2}(c)). Utilizing Equation \eqref{eq:divB}, we can rewrite the correlation length (Equation \eqref{eq:corrlength}) as follows: \begin{equation} \lambda_\perp=\lambda_{\perp,\odot}\frac{r}{R_\odot}\sqrt{\frac{f^{\rm op}}{f^{\rm op}_\odot}}, \end{equation} where we adopt the same $\lambda_{\perp,\odot} = 150 {\rm \ km}$ in both cases (Equation \eqref{eq:lambda0}). Since the flux-tube expansion of the weak field case is smaller in our setup, $f^{\rm op}/f^{\rm op}_\odot$ is smaller, which enhances the turbulent dissipation. The rapid turbulent damping in the chromosphere reduces the amplitude of the outgoing Alfv\'{e}n waves in the upper region. The Alfv\'{e}n waves reflected back toward the photosphere are also suppressed, which gives the smaller ratio of $L_{\rm A}^{-}/L_{\rm A}^{+}$ near the photosphere in the weak field case (Figure \ref{fig:alfven_energy2}(b)). Therefore, the net outgoing Poynting flux, $L_{\rm A} (=L_{\rm A}^{+} + L_{\rm A}^{-})$, is larger there (Figure \ref{fig:alfven_energy2}(a)).
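The wave-energy diagnostics above can be evaluated directly from simulation snapshots. The sketch below implements $L_{\rm A}^{\pm}$ as defined above; the single-point input values are placeholders, not output of our model.

```python
import numpy as np

def alfvenic_luminosities(rho, zp, zm, v_r, B_r, r, f_op):
    """Outgoing/ingoing Alfvenic Poynting luminosities (cgs units):
    L_A^+- = rho * (z_perp^+-)^2 * (v_r +- v_A) * pi * r^2 * f_op,
    with v_A = B_r / sqrt(4 pi rho) the radial Alfven velocity."""
    v_A = B_r / np.sqrt(4.0 * np.pi * rho)
    area = np.pi * r**2 * f_op
    return (rho * zp**2 * (v_r + v_A) * area,
            rho * zm**2 * (v_r - v_A) * area)

# Placeholder low-corona values (illustrative only):
rho, B_r, v_r = 1.0e-16, 5.0, 1.0e6     # g/cm^3, G, cm/s
zp, zm = 6.0e6, 2.0e6                   # Elsasser amplitudes [cm/s]
r, f_op = 7.0e10, 1.0                   # ~ R_sun, no extra expansion
Lp, Lm = alfvenic_luminosities(rho, zp, zm, v_r, B_r, r, f_op)
# In the sub-Alfvenic region (v_r < v_A), L_A^- < 0, so reflection
# reduces the net flux L_A = L_A^+ + L_A^-.
print(Lp, Lm, Lp + Lm)
```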
The comparison of BwV06 to BwV00 indicates that $\dot{M}_w$ increases by more than a factor of two with the additional input of the longitudinal perturbation of $\langle \delta v_{\parallel, \odot}\rangle = 0.6$ km s$^{-1}$ in the weak field condition (Table \ref{tab:settings}). The enhancement factor of $\dot{M}_w$ is considerably larger than the value ($\approx 1.5$) obtained in the standard field condition. This is because in the weak field case of BwV06 (orange thick lines) a larger Alfv\'enic Poynting flux reaches the coronal base (Figure \ref{fig:alfven_energy2}(a)) through the generation of transverse waves by the mode conversion (Figure \ref{fig:alfven_energy2}(d)), in spite of the higher turbulent loss (Figure \ref{fig:alfven_energy2}(c)). Figure \ref{fig:beta} shows that $\langle \beta_r\rangle$ of this case decreases with height and crosses unity in the chromosphere, which induces the efficient longitudinal-to-transverse mode conversion (Figure \ref{fig:alfven_energy2}(d)), as shown by \citet{Cally_2006} and \citet{Cally_2008}. In contrast, $\langle\beta_r\rangle$ stays $< 1$ in the standard field case of BsV06 (Figure \ref{fig:beta}). As a result, $\Delta L_{\rm A,mc}$ remains positive (Figure \ref{fig:alfven_energy2} (d)); namely, the excitation of transverse waves by the mode conversion is negligible. We can conclude that, in addition to the longitudinal fluctuation at the photosphere (Section \ref{sec:Results}), the magnetic field strength in the chromosphere is an essential factor in determining the global wind properties through the nonlinear processes of MHD waves. \subsection{Limitation of the 1D Geometry} We have simulated the propagation, dissipation, and mode conversion of MHD waves in 1D super-radially open flux tubes.
While we took the phenomenological approach to the Alfv\'en-wave turbulence (Section \ref{sec:Alfventurbulence}) to consider the 3D effect, multi-dimensional effects are also important in other wave processes \citep[e.g.][]{Hasan_2008ApJ,Matsumoto_2012ApJ,Iijima_2017ApJ,Matsumoto_2021_MNRAS}. The nonlinear mode conversion, which is a key process in the present paper, is probably one of those for which 3D effects have to be taken into account, because the conversion rate increases with the attack angle between the direction of a magnetic field line and the wave-number vector \citep{Schunker_2006}. Since the attack angle tends to be restricted to small values in the 1D treatment, the amount of transverse waves generated by the mode conversion may be underestimated in our simulations. Although we have only considered shear Alfv\'{e}n waves, torsional Alfv\'{e}n waves are also expected to be excited \citep{Kudoh_1999ApJ}. The nonlinear steepening of the torsional mode is slower than that of the shear mode \citep{Vasheghani_Farahani_2012A&A}. Therefore, if we considered torsional waves in addition to shear waves, the dissipation of the transverse waves would be slower, which may affect the global wind properties. \subsection{Missing physics in the chromosphere} We showed that the radiative cooling in the chromosphere governs the saturation of the Alfv\'enic Poynting flux that reaches the coronal base in the cases with large $\langle\delta v_{\parallel,\odot}\rangle$ (Figure \ref{fig:energy_flux} and Sections \ref{sec:MasslossEnergetics} \& \ref{sec:Alfven energy}). Local thermodynamic equilibrium is not strictly satisfied in the chromosphere, and the radiative cooling is governed by multiple bound-bound transitions \citep{Carlsson_2012A&A}. In addition, the radiative loss also affects the propagation of compressional waves such as acoustic waves \citep[e.g.,][]{Bogdan_1996ApJ}.
Ideally, detailed radiative transfer has to be solved to accurately handle these complicated processes, whereas we have adopted an approximate prescription to treat them phenomenologically (Section \ref{sec:MasslossEnergetics}). A more accurate treatment \citep[e.g.,][]{Hansteen2015ApJ,Iijima_2017ApJ} might modify the radiative loss rate, which we plan to tackle in future work. The gas in the chromosphere is partially ionized plasma. The relative motion between charged particles and neutral particles, which is called ambipolar diffusion, promotes the damping of transverse waves and heats the gas \citep{Khodachenko2004A&A,Khomenko_2012ApJ}. However, under the current conditions of the solar chromosphere, the ambipolar diffusion does not have a large impact on the low-frequency Alfv\'en waves ($<10^{-2}$ Hz) considered in this paper \citep[Equation \eqref{eq:freqtransverse};][]{Arber2016ApJ}, although it may affect higher-frequency Alfv\'en waves and rapid dynamical phenomena \citep[e.g.,][]{Singh2011PhPl}. Another interesting aspect is that the magnetic tension induced by ambipolar diffusion can be an additional generation mechanism of transverse waves in the chromosphere \citep{Martinez-Sykora_2017Sci}. In the future, the effect of partial ionization should also be investigated in the context of solar/stellar wind studies. \subsection{Density fluctuation} \begin{figure}[ht] \begin{center} \includegraphics[width=8cm]{./figure/density_fluctuation.pdf} \end{center} \caption{\label{fig:den_fluc} Comparison of the radial profiles of the root-mean-squared density fluctuations normalized by the average density, $\langle \delta \rho\rangle / \langle\rho\rangle=\sqrt{\langle\rho^2\rangle-\langle\rho\rangle^2}/\langle\rho\rangle$, of the three cases, BsV00 (dashed blue), BsV06 (solid green), and BsV18 (dotted red). Diamonds show the location of the top of the chromosphere at $T = 2\times10^4$ K for each case.
} \end{figure} Figure \ref{fig:den_fluc} compares the radial profiles of the normalized density fluctuations of the different $\langle \delta v_{\parallel,\odot}\rangle$ cases. Density fluctuations are larger near the surface for larger $\langle \delta v_{\parallel,\odot}\rangle$, simply because of the larger injection of the longitudinal perturbations. However, $\langle \delta \rho\rangle / \langle \rho\rangle$ of the different cases converges to a similar trend in the chromosphere at $r/R_{\odot}-1\gtrsim 3\times 10^{-2}$. This saturation possibly comes from the more efficient longitudinal-to-transverse mode conversion in the chromosphere (Figure \ref{fig:alfven_energy}), in addition to the more rapid dissipation of longitudinal waves by shock formation. Above this, all three presented cases exhibit a first peak of $\langle \delta \rho\rangle / \langle \rho\rangle$ around $r/R_{\odot}-1\sim (2-5)\times 10^{-2}$, from the transition region to the low corona. This peak reflects time-variable spicule activity (see Section \ref{sec:TimeVariability}); density fluctuations are excited by the nonlinear mode conversion from transverse waves to longitudinal waves via the gradient of the magnetic pressure associated with the Alfv\'enic waves \citep{Hollweg_1971_JGR,Kudoh_1999ApJ,Matsumoto_2010_ApJ}. As explained in Figure \ref{fig:alfven_energy}, the upward Alfv\'enic Poynting flux is larger for larger longitudinal-wave injection at the photosphere, even though the same amplitude of transverse fluctuations is excited. Therefore, taller spicules are generated and the first peak is located at a higher altitude for larger $\langle \delta v_{\parallel,\odot}\rangle$. Although the location of the first peak depends on $\langle \delta v_{\parallel,\odot}\rangle$, the radial profiles of $\langle \delta \rho\rangle / \langle \rho \rangle$ converge to a similar level above $r/R_{\odot}-1\gtrsim 1$, almost independently of $\langle \delta v_{\parallel,\odot}\rangle$.
A gentle second peak of $\langle \delta \rho \rangle / \langle \rho \rangle$ is formed around $r/R_{\odot}\sim 5$ by the parametric decay instability of Alfv\'{e}nic waves \citep{Terasawa_1986,Tenerani_2017ApJ,Suzuki_2006_JGRA,Shoda_2018ApJ_PDI,Reville_2018ApJ}. Since the density fluctuation in the corona and solar wind is observable by remote sensing, our model could be constrained by detailed comparisons with observations \citep{Miyamoto_2014_ApJ,Hahn_2018_ApJ,Krupar_2020_ApJ}. For example, it is reported that the relative density fluctuation at the coronal base is as large as $10\%$ or more \citep{Hahn_2018_ApJ,Krupar_2020_ApJ}, which possibly indicates that a non-negligible fraction of compressional waves is present at the coronal base. \begin{figure}[!t] \begin{center} \includegraphics[width=8cm]{./figure/elssaser_variable.pdf} \end{center} \caption{\label{fig:elssaser} Comparison of the radial profiles of the root-mean-squared Els\"asser variables of the outward (thick lines) and inward (thin lines) components. The line types are the same as those in Figure \ref{fig:den_fluc}. Red and blue circles respectively represent the observed amplitudes of $z_\perp^+$ and $z_\perp^-$ from PSP \citep{Chen_2020_ApjS}. Diamonds show the location of the top of the chromosphere at $T = 2\times10^4$ K for each case. } \end{figure} \subsection{Alfv\'enicity of the solar wind} In most cases, the simulated solar wind is fast, in that the terminal velocity mostly exceeds $500 {\rm \ km \ s^{-1}}$. The fast solar wind is known to be Alfv\'enic, that is, the outward Els\"asser variable is much larger than the inward Els\"asser variable in fast streams. Here, the Alfv\'enic nature of the simulated solar wind is discussed.
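For reference, the Els\"asser variables used in this comparison can be constructed from the transverse velocity and magnetic field as in the following sketch. The sign convention, namely that $z^{+}$ is the outward-propagating component on an outward-directed field line, is an assumption of this sketch, and the input values are placeholders.

```python
import numpy as np

def elsasser(v_perp, B_perp, rho, sign_Br=+1):
    """Elsasser variables z^{+/-} = v_perp -/+ sign(B_r) B_perp / sqrt(4 pi rho).
    With this (assumed) convention, z^+ is the outward- and z^- the
    inward-propagating Alfvenic component when B_r points outward."""
    b = sign_Br * B_perp / np.sqrt(4.0 * np.pi * rho)
    return v_perp - b, v_perp + b

# A purely outward Alfven wave on an outward field satisfies
# v_perp = -B_perp / sqrt(4 pi rho), so z^- should vanish:
rho = 1.0e-16                                  # g/cm^3 (placeholder)
B_perp = np.array([0.10, -0.20, 0.05])         # G (placeholder)
v_perp = -B_perp / np.sqrt(4.0 * np.pi * rho)
zp, zm = elsasser(v_perp, B_perp, rho)
print(np.abs(zm).max())   # ~ 0: no inward component
```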
Figure \ref{fig:elssaser} compares the numerical results for the time-averaged outgoing and incoming Els\"asser variables, $\langle z_{+}\rangle$ and $\langle z_{-}\rangle$, to observations at $r\approx 40 R_{\odot}$ by the \textit{Parker Solar Probe} (hereafter PSP) \citep{Chen_2020_ApjS}. The obtained $\langle z_{\pm}\rangle$ of these three cases are consistent with the observed $z_{+}$ and $z_{-}$, which show large scatter. Both $\langle z_{+}\rangle$ and $\langle z_{-}\rangle$ of our numerical results are larger for larger $\langle \delta v_{\parallel,\odot}\rangle$ in the coronal region of $10^{-2}<r/R_\odot-1<1$. However, larger $\langle z_{\pm}\rangle$ yields larger turbulent loss, as shown in Figure \ref{fig:alfven_energy}, which suppresses the increase of $\langle z_{\pm}\rangle$. As a result, almost the same maximum amplitudes of $\langle z_{\pm}\rangle$ are obtained for the different $\langle \delta v_{\parallel,\odot}\rangle$ cases in a self-regulated manner, similarly to the density perturbations (Figure \ref{fig:den_fluc}). In the solar wind region, $r/R_{\odot}\gtrsim 10$, $\langle z_{\pm}\rangle$ is smaller for larger $\langle \delta v_{\parallel,\odot}\rangle$ because the density is higher (Figure \ref{fig:rho}). \subsection{Time Variability} \label{sec:TimeVariability} \begin{figure*}[!t] \centering \includegraphics[width=16cm]{./figure/time_slice_ro.pdf} \caption{ \label{fig:r-t_diagram} Time-distance diagrams of the mass density in the lower atmospheres of BsV00 (a), BsV06 (b), BsV18 (c), BwV00 (d), BwV06 (e), and BwV18 (f). The yellow and blue solid lines represent contour lines of $T=2\times 10^4$ K and $\beta_r=1$, respectively. } \end{figure*} Figure \ref{fig:r-t_diagram} shows time versus radial distance diagrams of the mass density in the low atmosphere. The left and right columns respectively present the cases with the standard (BsVyy) and weak (BwVyy) magnetic field in the chromosphere.
The top, middle, and bottom rows correspond to $\langle\delta v_{\parallel,\odot}\rangle=$ 0, 0.6, and 1.8 $\rm km \,s^{-1}$, respectively. The yellow lines represent the positions of $T=2\times 10^4 \,{\rm K}$, which correspond to the bottom of the transition region. One can see that the transition region moves up and down in all the cases, which should be observed as spicules \citep[e.g.,][]{Beckers_1972ARA&A,De_Pontieu_2007_Science,Shoji_2010_pasj,Okamoto_2011ApJ,Yoshida_2019ApJ,Tei_2020ApJ}. The velocity of the upward motions, estimated from the slopes of the yellow lines, is of order $\sim 10$ km s$^{-1}$, which roughly coincides with the sound speed and, accordingly, the propagation speed of slow MHD waves. Namely, the gas in the upper chromosphere is lifted up by longitudinal slow-mode waves that are generated from transverse waves through the mode conversion in the upper chromosphere \citep{Hollweg_1982_ApJ,Suematsu_1982SoPh,Matsumoto_2010_ApJ,Sakaue_2021ApJ}. In the cases without longitudinal perturbations (top panels of Figure \ref{fig:r-t_diagram}), the stronger field in the chromosphere (top left) yields a higher and more dynamic transition region because of the larger Alfv\'enic Poynting flux (Figure \ref{fig:alfven_energy2} and Section \ref{sec:dependenceonB}). When the longitudinal perturbations are added at the photosphere, the transition region behaves more violently and its average height increases. The comparison of the middle right panel to the top right panel indicates that the activity of the transition region is drastically enhanced in the weak field case by the addition of $\langle \delta v_{\parallel,\odot}\rangle = 0.6$ km s$^{-1}$. This is because, in the weak field condition, transverse waves are generated more effectively from acoustic waves around $\beta_r= 1$ (blue line) in the chromosphere, as discussed in Section \ref{sec:dependenceonB}.
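The $\sim 10$ km s$^{-1}$ slope quoted above can be checked against a back-of-the-envelope chromospheric sound speed; the temperature and mean molecular weight below are illustrative assumptions (neutral atomic hydrogen at $T\sim 10^{4}$ K), not values taken from the simulations.

```python
import numpy as np

k_B = 1.380649e-16   # Boltzmann constant [erg/K]
m_H = 1.6726e-24     # hydrogen mass [g]

def sound_speed(T, mu=1.0, gamma=5.0 / 3.0):
    """Adiabatic sound speed c_s = sqrt(gamma k_B T / (mu m_H)) in cm/s.
    mu = 1 assumes neutral atomic hydrogen (illustrative)."""
    return np.sqrt(gamma * k_B * T / (mu * m_H))

# Upper-chromospheric temperature, T ~ 1e4 K:
cs = sound_speed(1.0e4)
# The result is of order 10 km/s, consistent with the slopes of the
# transition-region contours in the time-distance diagrams.
print(f"c_s ~ {cs / 1e5:.1f} km/s")
```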
The bottom panels of Figure \ref{fig:r-t_diagram} exhibit the dynamic behavior of the transition region, with chromospheric gas being violently uplifted to higher altitudes by the injection of the large-amplitude vertical perturbation with $\langle \delta v_{\parallel,\odot}\rangle = 1.8$ km s$^{-1}$. Multiple yellow lines frequently appear at a single time slice, which indicates that cooler gas with $T<2\times 10^4$ K is transiently distributed above hotter gas with $T>2\times 10^4$ K. \section{Summary} \label{sec:summary} We investigated how the properties of solar winds are affected by $p$-mode-like longitudinal perturbations at the photosphere. We performed 1D simulations from the photosphere to \shimizu{beyond several tens of solar radii} for Alfv\'en-wave-driven winds over a wide range of the amplitude of the vertical perturbation, $0.0 {\rm \ km \ s^{-1}} \le \langle\delta v_{\parallel,\odot}\rangle \le 3.0$ km s$^{-1}$, in super-radially open magnetic flux tubes. The coronal temperature and wind velocity are not significantly affected by the additional input of the longitudinal perturbation (Figure \ref{fig:temp_vr}). However, a higher coronal density is obtained for larger $\langle\delta v_{\parallel,\odot}\rangle$ (Figure \ref{fig:rho}), and accordingly, the mass-loss rate increases with $\langle\delta v_{\parallel,\odot}\rangle$ by up to $\approx 4$ times (Figure \ref{fig:vr_mass_loss}), because a larger Alfv\'enic Poynting flux enters the corona to drive denser outflows \shimizu{as a result of more efficient chromospheric evaporation.} The $p$-mode-like vertical oscillation excites acoustic waves, part of which are converted to transverse waves by the mode conversion in the chromosphere (Figure \ref{fig:alfven_energy}). These transverse waves contribute to the upgoing Alfv\'enic Poynting flux, in addition to the Alfv\'en waves that come from the photosphere.
This result confirms the observationally inferred link between $p$-mode oscillations and Alfv\'enic waves in the solar corona \citep{Morton_2019_Nature_Astronomy}. Cases with larger $\langle\delta v_{\parallel,\odot}\rangle$ exhibit higher time variability and larger density perturbations in the low corona. The mass-loss rate saturates when $\langle\delta v_{\parallel,\odot}\rangle\gtrsim 2.5$ km s$^{-1}$, because a further increase of $\langle\delta v_{\parallel,\odot}\rangle$ no longer leads to the excitation of transverse waves by the mode conversion but is instead compensated by the radiative loss from the direct shock dissipation of acoustic waves in the chromosphere. Simulations with a weaker field strength in the low atmosphere show that the magnetic field in the chromosphere controls the mode conversion between longitudinal and transverse modes. In the cases that include a region with plasma $\beta\approx 1$ in the middle chromosphere, the mode conversion effectively generates transverse waves there even for a moderate amplitude of $\langle\delta v_{\parallel,\odot}\rangle = 0.6$ km s$^{-1}$. We conclude that $p$-mode oscillations at the photosphere play an important role in enhancing the Alfv\'enic Poynting flux in the coronae of the Sun and solar-type stars. Numerical simulations in this work were partly carried out on Cray XC50 at the Center for Computational Astrophysics, National Astronomical Observatory of Japan. M.S. is supported by a Grant-in-Aid for Japan Society for the Promotion of Science (JSPS) Fellows and by the NINS program for cross-disciplinary study (grant Nos. 01321802 and 01311904) on Turbulence, Transport, and Heating Dynamics in Laboratory and Solar/Astrophysical Plasmas: “SoLaBo-X.” T.K.S.
is supported in part by Grants-in-Aid for Scientific Research from the MEXT/JSPS of Japan, 17H01105 and 21H00033, and by the Program for Promoting Research on the Supercomputer Fugaku by the RIKEN Center for Computational Science (Toward a unified view of the universe: from large-scale structures to planets, grant 20351188; PI: J. Makino) from the MEXT of Japan. \newpage \begin{appendix} \section{Acoustic wave-driven wind}\label{sec:sound wave wind} \begin{figure}[htp] \begin{center} \includegraphics[width=7cm]{./figure/B0V06_prim.pdf} \end{center} \caption{\label{fig:tate} Time-averaged radial profiles of B0V06. (a): Temperature. (b): Density. The black dotted and red solid lines represent the initial profile and the final profile, respectively. (c): Mass-loss rate. Black dotted and solid lines represent negative (inflow) and positive (outflow) values, respectively. } \end{figure} We examine the properties of the atmosphere in the case with only the longitudinal fluctuation (B0V06) to see whether the corona and solar wind can be formed solely by acoustic waves. In order to avoid the initial infall of material from the upper layer, we set a lower initial density ($\rho_{\rm w,0}=10^{-25}$ g cm$^{-3}$ in Equation \eqref{eq:initdens}) than that of the other cases. Figure \ref{fig:tate} presents the radial profiles of the atmosphere averaged from $t=3500$ min to $t=5000$ min. The acoustic waves that travel upward from the photosphere rapidly dissipate at low altitudes, $r/R_{\odot}-1\lesssim 0.1$. Although the atmosphere is heated up by the dissipation of the longitudinal waves, the temperature of the ``corona'' remains low ($\lesssim 2\times 10^5$ K, Figure \ref{fig:tate} (a)). As a result, the gas in the upper atmosphere does not stream out. Instead, it falls down to the surface, which is seen as a negative mass-loss rate, $\dot{M}_{w}<0$, in $r/R_{\odot}-1 > 1$ (Figure \ref{fig:tate} (c)).
The accretion reduces (raises) the density from the initial value in the outer (inner) region of $r/R_{\odot}-1 >$($<$)$\,0.5$ (Figure \ref{fig:tate} (b)). The accretion occurs partially because the initial density in the upper region is still higher than the hydrostatic density with $T\lesssim 10^5$ K. We note, however, that this initial density is lower by 6--7 orders of magnitude than the observed density in the solar wind. This simulation demonstrates that even such low-density gas cannot be driven outward by the acoustic waves alone. We thus conclude, through a direct numerical demonstration, that the solar coronal heating and the solar wind driving cannot be accomplished only by the sound waves from the photosphere. \end{appendix} \bibliographystyle{aasjournal} \section{Introduction} \label{sec:intro} While the temperature at the solar photosphere is $\lesssim$ 6000 K, the corona is heated up beyond one million K. \begin{comment} The corona can be divided into two main regions depending on the magnetic field configuration of the solar surface. One is a closed magnetic field region in which both magnetic field footpoints are rooted in the photosphere. The other is an open magnetic field region where one footpoint is anchored to the photosphere and the other is open to the interplanetary space. The region of the open magnetic field is roughly equivalent to a coronal hole. Fast solar winds are driven from coronal holes \citep{Phillips_1995_GRL}. \end{comment} Inspired by the observed motions of comet tails \citep{Biremann_1951ZA}, \citet{Parker_1958ApJ} predicted the outward expansion of the hot coronal plasma to form transonic outflows, because the gas pressure of the hot corona cannot be confined by the external pressure of the interstellar medium. Later on, the Mariner 2 Venus probe confirmed the existence of supersonic plasma streams from the Sun, the solar wind, by in situ measurements \citep{Neugebauer_1966JGR}.
Hot coronae and stellar winds are also ubiquitously observed in solar-type stars and lower-mass main-sequence stars \citep{Wood_2005ApJ,Wood_2021ApJ,Gudel_2014_proceedings}. In the framework of the Parker model, the solar wind is driven by the gradient of the gas pressure of the hot corona, which predicts faster winds from higher-temperature regions. In reality, however, high-speed solar winds emanate from open magnetic flux regions called coronal holes \citep{Krieger_1973SoPh,Kohl_2006A&ARv}, whose temperature is lower than that of the other coronal regions \citep[e.g.,][]{Withbroe_1977ARA&A,Narukage_2011SoPh}. This discrepancy implies the importance of magnetic fields in driving solar winds. It is widely accepted that the convection beneath the photosphere is the energy source for the hot corona and the solar wind \citep{Klimchuk_2006_SolPhys, McIntosh_2011}. Convective fluctuations excite various modes of waves that propagate upward \citep{Lighthill_1952RSPSA,Stein_1967SoPh,Stepien1988ApJ,Bogdan_1991ApJ}. Reconnection events between an open magnetic flux and a closed loop, which are a consequence of the restructuring of the magnetic topology by complex convective motions, also possibly generate transverse waves \citep{Nishuzuka_2008ApJ}, in addition to the direct ejection and heating of plasma \citep{Fisk_2003JGRA}. Among the various types of waves, Alfv\'{e}n(ic) waves have been highlighted as a reliable agent to effectively transfer the kinetic energy of the convection to the corona and the solar wind via the Poynting flux \citep[e.g.,][]{Belcher_1971ApJ,Shoda_2019ApJ,Sakaue_2020ApJ,Matsumoto_2021_MNRAS}.
This is first because, owing to their incompressible nature, they are not much affected by shock dissipation, unlike compressible waves, which easily steepen to form shocks as a result of the amplification of the velocity amplitude in the stratified atmosphere, and second because they do not refract, unlike fast-mode magnetohydrodynamic (MHD hereafter) waves \citep[e.g.,][]{Matsumoto_2014MNRAS}, but propagate along magnetic field lines \citep{Alazraki_1971A&A,Bogdan_2003ApJ}. In recent years, transverse waves have been detected in the chromosphere \citep{Okamoto_2007Sci,De_Pontieu_2007_Science,McIntosh_2011,Okamoto_2011ApJ,Srivastava_2017NatSR}, whereas it is still under debate whether sufficient energy for the formation of the corona and the solar wind propagates into the corona \citep{Thurgood_2014ApJ}. Once Alfv\'enic waves enter the corona, the key is how the Poynting flux is transferred to the thermal and kinetic energies of the coronal plasma via the dissipation of the waves. Various damping processes of Alfv\'enic waves have been explored, including turbulent cascade \citep{Velli_1989_PhysRevLett,Matthaeus_1999_ApJ,Cranmer_2007_ApJ,Verdini_2009,Howes_2013PhPl,Perez_2013ApJ}, nonlinear mode conversion to compressible waves \citep{Kudoh_1999ApJ,Suzuki_2005_ApJ,Suzuki_2006_JGRA}, resonant absorption \citep{Ionson_1978_ApJ}, and phase mixing \citep{Heyvaerts_1983_AA,Magyar_2017NatSR}. \begin{comment} In addition to Alfv\'en waves, sound waves ($p$-modes) are also excited in the Sun \citep{Felipe_2010}. However, the sound wave has not been considered to play a major role in heating the corona and driving the wind \citep{Jacques_1977_ApJ,Suzuki_2002ApJ}. Recently, however, \citet{Morton_2019_Nature_Astronomy} suggests that internal acoustic mode waves may play an important role in injecting Poynting flux into the upper atmosphere of the Sun.
Meanwhile, there has been no systematic study of the role of longitudinal (sound) waves in the corona and solar wind in the presence of transverse (Alfv\'enic) waves. The purpose of this study is to investigate the role of longitudinal waves in an Alfv\'en-wave-driven wind model. \end{comment} In contrast to Alfv\'enic waves, acoustic waves have not been considered to be a major player in the coronal heating, because the acoustic waves that are excited by $p$-mode oscillations at the photosphere \citep[e.g.,][]{Lighthill_1952RSPSA,Felipe_2010} rapidly steepen to form shocks before reaching the corona \citep{Stein_1972ApJ}. However, \cite{Morton_2019_Nature_Astronomy} pointed out the contribution of $p$-mode oscillations to the generation of Alfv\'enic waves via the mode conversion from longitudinal waves to transverse waves \citep{Cally_2011ApJ}. The aim of the present paper is to investigate the roles of the acoustic waves that are excited by vertical oscillations at the photosphere in the Alfv\'en-wave-driven wind. For this purpose, we perform MHD simulations that handle the propagation, dissipation, and mode conversion of both transverse and longitudinal waves from the photosphere to interplanetary space with self-consistent heating and cooling. In Section \ref{sec:method} we describe the setup of our simulations. We present the results in Section \ref{sec:Results} and discuss related topics in Section \ref{sec:discussion}. \begin{figure}[t] \label{model} \begin{center} \includegraphics[width=8cm]{./figure/figure1.pdf} \caption{Schematic diagram of the stellar-like model used in this study.
} \end{center} \end{figure} \section{Methods} \label{sec:method} We consider 1D open magnetic flux tubes that expand super-radially in the radial direction. The unsigned radial magnetic field, $\left|B_{r}(r)\right|$, is given by the conservation of the open magnetic flux, $\Phi_{\rm open}$, as follows: \begin{align} \left|B_{r}(r)\right| r^{2} f^{\rm open}(r)=\Phi_{\text {open }}(=\rm const) , \label{eq:divB} \end{align} where $f^{\text {open}}$ is the filling factor of the open magnetic flux. We note that $\Phi_{\text {open }}$ is constant in $r$ in each simulation. We assume spherical symmetry and solve one-dimensional (1D) MHD equations along the flux tube from the polar region in spherical coordinates. The magnetic flux tube is assumed to expand in both the $\theta$ and $\phi$ directions in the same manner; the scale factors are given as follows: \begin{align} \label{expf} h_{r}=1, \quad h_{\theta}=h_{\phi}=r \sqrt{f^{\text {open}}}. \end{align} Because the turbulent dissipation of Alfv\'en waves is in principle a 3D process, we adopt a phenomenological model to include it \citep{Hossain_1995PhFl,Dmitruk_2002,van_Ballegooijen_2016ApJ}. Figure \ref{model} shows an overview of our model. \subsection{Basic Equations} We assume a 1D geometry under the spherically symmetric $(\partial / \partial \theta=\partial / \partial \phi=0)$ approximation.
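Equation \eqref{eq:divB} fixes $|B_r(r)|$ once a super-radial expansion profile is chosen. The sketch below verifies the flux conservation numerically for an illustrative smooth $f^{\rm open}(r)$; the tanh profile and its parameters are placeholders, not the expansion law adopted in our simulations.

```python
import numpy as np

R_sun = 6.96e10   # solar radius [cm]

def f_open(r, f_max=100.0, r1=1.02 * R_sun, sigma=0.01 * R_sun):
    """Illustrative super-radial expansion factor, rising smoothly from
    ~1 to f_max across the low atmosphere (placeholder profile)."""
    return 1.0 + (f_max - 1.0) * 0.5 * (1.0 + np.tanh((r - r1) / sigma))

def B_r(r, Br_sun=1300.0):
    """|B_r(r)| from conservation of the open flux: B_r r^2 f_open = const."""
    Phi_open = Br_sun * R_sun**2 * f_open(R_sun)
    return Phi_open / (r**2 * f_open(r))

r = np.linspace(R_sun, 10.0 * R_sun, 1000)
B = B_r(r)
flux = B * r**2 * f_open(r)          # should be constant along the tube
print(flux.std() / flux.mean())      # ~ 0 (round-off level)
```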
The MHD equations are explicitly written with the scale factors in Equation \eqref{expf} (see \cite{Shoda_Takasao_2021arXiv} for the derivation) as \begin{align} \frac{\partial}{\partial t} \rho + \frac{1}{r^{2} f^{\text {open }}} \frac{\partial}{\partial r} \left(\rho v_{r} r^{2} f^{\text {open }}\right)=0, \end{align} \begin{align} \frac{\partial}{\partial t}&\left(\rho v_{r}\right)+\frac{1}{r^2 f^{\text{open}}} \frac{\partial}{\partial r}\left[\left(\rho v_r^2 + p_T \right) r^2f^{\text {open}}\right] \nonumber \\ \quad&=-\rho \frac{G M_{\odot}}{r^{2}} + \left[\rho (v_\theta^2+v_\phi^2)+ 2p \right] \frac{d}{d r} \ln (r \sqrt{f^{\text {open}}}),\label{rovr} \end{align} \begin{align} \frac{\partial}{\partial t}\left(\rho \boldsymbol{v}_{\perp}\right) +\frac{1}{r^{2} f^{\text {open }}} \frac{\partial}{\partial r}\left[\left(\rho v_{r} \boldsymbol{v}_{\perp}-\frac{1}{4 \pi} B_{r} \boldsymbol{B}_{\perp}\right) r^{2} f^{\text {open }}\right] \nonumber \\ \quad=\left(\frac{B_{r} \boldsymbol{B}_{\perp}}{4 \pi}-\rho v_{r} \boldsymbol{v}_{\perp}\right) \frac{d}{d r} \ln \left(r \sqrt{f^{\text {open}}}\right)+\rho \boldsymbol{D}_{v_\perp}^{\text {turb}}, \label{rovx} \end{align} \begin{align} \frac{1}{r^{2} f^{\text {open }}} \frac{\partial}{\partial r}\left(B_{r} r^{2} f^{\text {open }}\right)=0, \end{align} \begin{align} \frac{\partial}{\partial t} \bm{B}_{\perp}+\frac{1}{r^{2} f^{\mathrm{open}}} \frac{\partial}{\partial r}\left[\left(v_{r} \bm{B}_{\perp}-\bm{v}_{\perp} B_{r}\right) r^{2} f^{\mathrm{open}}\right] \nonumber \\ \quad=\left(v_{r} \bm{B}_{\perp}-\bm{v}_{\perp} B_{r}\right) \frac{d}{d r} \ln \left(r \sqrt{f^{\text {open }}}\right)+\sqrt{4 \pi \rho} \bm{D}_{b_\perp}^{\mathrm{turb}}, \label{induction_x} \end{align} \begin{comment} \begin{align} \frac{\partial}{\partial t} B_{\phi}+\frac{1}{r^{2} f^{\mathrm{open}}} \frac{\partial}{\partial r}\left[\left(v_{r} B_{\phi}-v_{\phi} B_{r}\right) r^{2} f^{\mathrm{open}}\right] \nonumber \\ \quad=\left(v_{r} B_{\phi}-v_{\phi} B_{r}\right)
\frac{d}{d r} \ln \left(r \sqrt{f^{\text {open }}}\right)+\sqrt{4 \pi \rho} D_{b_\phi}^{\mathrm{turb}}, \label{induction_y} \end{align} \end{comment} \begin{align}\label{eq:energy} \frac{\partial}{\partial t} e+\frac{1}{r^{2} f^{\text {open }}} \frac{\partial}{\partial r}&\left[\left(\left(e+p_{T}\right) v_{r}-\frac{B_{r}}{4 \pi}\left(\boldsymbol{v}_{\perp} \cdot \boldsymbol{B}_{\perp}\right)+F_{C}\right)\right. \nonumber \\ & r^{2} f^{\text {open }}\biggr] =-\rho v_{r} \frac{G M_{\odot}}{r^{2}}-Q_{R}, \end{align} where $\bm{v}$, $\bm{B}$, $\rho$, and $p$ are the velocity, magnetic field, density, and gas pressure, respectively. \begin{align} e&=e_{\rm int} +\frac{1}{2} \rho v^{2}+\frac{B_{\perp}^{2}}{8 \pi} \end{align} is the total energy per unit volume, which is the sum of the internal energy, $e_{\rm int}$, kinetic energy, and magnetic energy; \begin{align} p_{T}&=p+\frac{B_{\perp}^{2}}{8 \pi} \end{align} is the total pressure; \begin{align} \bm{B}_\perp &=B_\theta\bm{e}_\theta+B_\phi\bm{e}_{\phi} \end{align} and \begin{align} \bm{v}_\perp &=v_\theta\bm{e}_\theta+v_\phi\bm{e}_{\phi} \end{align} are the transverse magnetic field and velocity. $F_C$ and $Q_R$ represent the conductive flux and the radiative cooling rate. $\bm{D}_{v_\perp}^{\text {turb }}$ and $\bm{D}_{b_\perp}^{\text {turb }}$ represent the phenomenological turbulent dissipation of Alfv\'en waves (see Section 2.4 for details). $M_\odot$ is the solar mass. The subscript $\odot$ indicates the value evaluated at the photosphere, which is the inner boundary of our simulations. $\bm{e}_\theta$ and $\bm{e}_\phi$ are unit vectors in the $\theta$ and $\phi$ directions, respectively. We employ a Spitzer-Härm-type conductive flux \citep{Spitzer_1953_PhRv} that strongly depends on temperature and transports energy preferentially along the magnetic field line. In addition, we consider a quenching effect of thermal conduction in low-density plasma, which mainly operates at $r>5R_\odot$.
This quenching is introduced for numerical reasons, to avoid prohibitively small time steps. \begin{equation} \label{F_C} F_{C}=-\min \left(1, \frac{\rho}{\rho_{C}}\right) \frac{B_{r}}{|\boldsymbol{B}|} \kappa_{0} T^{5 / 2} \frac{d T}{d r}, \end{equation} where $\kappa_{0}=10^{-6} \operatorname{erg} \mathrm{cm}^{-1} \mathrm{~s}^{-1} \mathrm{~K}^{-7 / 2}$. We set $\rho_{C}=10^{-20} \mathrm{~g} \mathrm{~cm}^{-3}$ following \citet{Shoda_2020_ApJ}. \subsection{Equation of state} Due to its low temperature, the hydrogen in the lower atmosphere of the Sun is partially ionized. In this work, the effect of partial ionization is considered in the equation of state. The internal energy is composed of the random thermal motion of the gas particles and the latent heat of hydrogen ionization, and is given by \begin{align} e_{\rm int} = \frac{p}{\gamma-1} + n_{\rm H} \chi I_{\rm H}, \ \ \ \ n_{\rm H} = \rho/m_{\rm H}, \end{align} where $n_{\rm H}$ is the number density of hydrogen atoms, $\chi$ is the ionization degree, and $I_{\rm H} = 13.6 {\rm \ eV}$ is the ionization potential of hydrogen. For simplicity, the formation of ${\rm H}_2$ molecules is not considered. Thermal equilibrium is assumed with respect to ionization, in which the ionization degree is given by the Saha-Boltzmann equation, \begin{align} \frac{\chi^2}{1-\chi} = \frac{2}{n_{\rm H} \lambda_e^3} \exp \left( - \frac{I_{\rm H}}{k_B T} \right), \end{align} where $\lambda_e$ is the thermal de Broglie wavelength of the electron: \begin{align} \lambda_e = \sqrt{\frac{h^2}{2 \pi m_e k_B T}}. \end{align} Note that the pressure and ionization degree are connected by \begin{align} p = \left( 1 + \chi \right) n_{\rm H} k_B T. \end{align} \subsection{Radiative cooling} The radiative cooling rate $Q_R$ is given as follows.
\begin{equation} Q_{R}=Q_{R, \text { thck }} \xi_{\rm rad}+Q_{R, \text { thin }}\left(1-\xi_{\rm rad}\right), \end{equation} where $Q_{R, {\rm thck}}$ and $Q_{R, {\rm thin}}$ correspond to the optically thick and thin radiative cooling rates, respectively. The parameter $\xi_{\rm rad}$ behaves qualitatively like the optical depth, taking $\xi_{\rm rad} \approx 1$ in the photosphere and $\xi_{\rm rad} \approx 0$ in the corona. \begin{equation} \label{eq:xi_rad} \xi_{\rm rad}=\max\left(0,1-\frac{p_{\text {chr}}}{p}\right), \end{equation} where $p_{\rm {chr}}=0.1 p_\odot$. $T_{\rm eff}$ is the effective temperature. Following \citet{Gudiksen_Nordlund_2005_Apj}, we approximate the optically thick cooling as \begin{align} Q_{R, \text { thck }} &=\frac{1}{\tau_{\text {thck }}}\left(e_{\text {int }}-e_{\text {int },\text { ref }}\right) , \end{align} where \begin{align} \tau_{\text {thck}} &=0.1\left(\frac{\rho}{\bar{\rho}_\odot}\right)^{-5 / 3} \mathrm{s} \end{align} is a modeled cooling timescale. Here, $\bar{\rho}_\odot=1.87\times10^{-7}\, \mathrm{g} \ \mathrm{cm}^{-3}$ is the mean density at the solar surface, and $e_{\text {int }, \text { ref }}$ is the internal energy at a given reference temperature that mimics the radiation-balanced profile: \begin{equation} e_{\text {int }, \text { ref }}=\frac{n_H(1+\chi_{\rm ref})k_B T}{\gamma-1}+n_H\chi_{\rm ref}I_H. \end{equation} According to equation \eqref{eq:xi_rad}, $\xi_{\rm rad}=0$ except in the vicinity of the photosphere, so $Q_{R, \text { thck }}$ operates only near the photosphere. The optically thin cooling function is composed of two different contributions. In the chromosphere, we employ the radiative cooling function given by \cite{Googman_2012ApJ} ($Q_{\rm GJ}$), while in the corona, the optically thin cooling function taken from \cite{Rempel_2017ApJ} is used.
\begin{eqnarray} Q_{R, \text { thin }} &=Q_{\mathrm{GJ}}(\rho, T) \xi_{2}+n_{\rm H} n_{e} \Lambda(T)\left(1-\xi_{2}\right) \\ \xi_{2} &=\max \left(0, \min \left(1, \frac{T_{\mathrm{TR}}-T}{\Delta T}\right)\right), \end{eqnarray} where $T_{\rm TR}=15000\ {\rm K}$ and $\Delta T=5000\ {\rm K}$. \begin{center} \begin{table*} \hspace{-15mm} \scalebox{1.0}{ \begin{tabular}{|c|cccccccc|} \hline Model & $\langle \delta v_{\perp,\odot}^+\rangle$ & $\langle \delta v_{\parallel, \odot}\rangle$ & $B_{r,\odot}$ & $f_\odot^{\rm open}$ & $f_{\rm cor}^{\rm open}$ & $r_\text{out}$ & $\dot{M}$ & $v_{r,{\rm out}}$ \\ & $[{\rm km \ s^{-1}}]$ & $[{\rm km \ s^{-1}}]$ & $[{\rm G}]$ & & & & $[M_\odot \ {\rm yr}^{-1}]$ & $[{\rm km \ s^{-1}}]$ \\ \hline B0V06 & 0 & 0.6 & $1.3\times10^{-4}$ & $1.00\times 10^{-3}$ & $\left(f_{\odot}^{\text {open }}\right)^{1/3}$ & $95.6R_\odot$ & accretion & \\ \hline BsV00 & 0.6 & 0 & 1300 & $1.00\times 10^{-3}$ & $\left(f_{\odot}^{\text {open }}\right)^{1/3}$ & $99.5R_\odot$ & $1.32\times 10^{-14}$ & 688.05\\ BsV04 & 0.6 & 0.4 & 1300 & $1.00\times 10^{-3}$ & $\left(f_{\odot}^{\text {open }}\right)^{1/3}$ & $99.5R_\odot$ & $1.75\times 10^{-14}$ & 687.77\\ BsV06 & 0.6 & 0.6 & 1300 & $1.00\times 10^{-3}$ & $\left(f_{\odot}^{\text {open }}\right)^{1/3}$ & $99.5R_\odot$ & $1.97\times 10^{-14}$ & 697.02\\ BsV09 & 0.6 & 0.9 & 1300 & $1.00\times 10^{-3}$ & $\left(f_{\odot}^{\text {open }}\right)^{1/3}$ & $99.5R_\odot$ &$2.63\times 10^{-14}$ & 701.24\\ BsV12 & 0.6 & 1.2 & 1300 & $1.00\times 10^{-3}$ & $\left(f_{\odot}^{\text {open }}\right)^{1/3}$ & $99.5R_\odot$ & $3.10\times 10^{-14}$ & 716.19\\ BsV15 & 0.6 & 1.5 & 1300 & $1.00\times 10^{-3}$ & $\left(f_{\odot}^{\text {open }}\right)^{1/3}$ & $99.5R_\odot$ & $3.54\times 10^{-14}$ & 691.51\\ BsV18 & 0.6 & 1.8 & 1300 & $1.00\times 10^{-3}$ & $\left(f_{\odot}^{\text {open }}\right)^{1/3}$ & $39.1R_\odot$ & $4.18\times 10^{-14}$ & 633.64\\ BsV21 & 0.6 & 2.1 & 1300 & $1.00\times 10^{-3}$ & $\left(f_{\odot}^{\text
{open }}\right)^{1/3}$ & $39.1R_\odot$ & $4.57\times 10^{-14}$ & 635.62\\ BsV27 & 0.6 & 2.7 & 1300 & $1.00\times 10^{-3}$ & $\left(f_{\odot}^{\text {open }}\right)^{1/3}$ & $39.1R_\odot$ & $5.09\times 10^{-14}$ & 560.80\\ BsV30 & 0.6 & 3.0 & 1300 & $1.00\times 10^{-3}$ & $\left(f_{\odot}^{\text {open }}\right)^{1/3}$ & $39.1R_\odot$ & $4.97\times 10^{-14}$ & 561.31\\ \hline BwV00 & 0.6 & 0 & 325 & $0.25\times 10^{-3}$ & $0.25\left(f_{\odot}^{\text {open }}\right)^{1/3}$ & $95.3R_\odot$ & $1.26\times 10^{-14}$ & 586.55\\ BwV06 & 0.6 & 0.6 & 325 & $0.25\times 10^{-3}$ & $0.25\left(f_{\odot}^{\text {open }}\right)^{1/3}$ & $95.3R_\odot$ & $2.57\times 10^{-14}$ & 581.24\\ BwV18 & 0.6 & 1.8 & 325 & $0.25\times 10^{-3}$ & $0.25\left(f_{\odot}^{\text {open }}\right)^{1/3}$ & $37.8R_\odot$ & $4.76\times 10^{-14}$ & 460.09\\ \hline \end{tabular} } \caption{\label{tab:settings} Summary of the input and output parameters of our simulations. The first columns list the input parameters and the domain size $r_{\rm out}$, while the last two columns show the output parameters (mass-loss rate and wind velocity at $r=r_{\rm out}$).} \end{table*} \end{center} \subsection{Phenomenology of Alfv\'en-wave turbulence} Although it is not yet fully understood how the solar wind is heated in and above the coronal region, Alfv\'en-wave turbulence is a promising mechanism. It is triggered by collisions between counter-propagating Alfv\'en waves. Following \citet{Shoda_2018_ApJ_a_self-consistent_model}, we introduce phenomenological terms for the turbulent dissipation.
\begin{align} \label{turblence} D_{v_\theta,\phi}^{\text {turb }} &=-\frac{c_{d}}{4 \lambda_{\perp}}\left(\left|z_{\theta,\phi}^{+}\right| z_{\theta,\phi}^{-}+\left|z_{\theta,\phi}^{-}\right| z_{\theta,\phi}^{+}\right) \\ D_{b_\theta,\phi}^{\text {turb }} &=-\frac{c_{d}}{4 \lambda_{\perp}}\left(\left|z_{\theta,\phi}^{+}\right| z_{\theta,\phi}^{-}-\left|z_{\theta,\phi}^{-}\right| z_{\theta,\phi}^{+}\right), \end{align} where $\lambda_\perp$ is a perpendicular correlation length and $z_{\theta,\phi}^{\pm}$ are the Els\"asser variables \citep{Elsasser_PhRv_1950}: \begin{equation} \label{eq:elsasser} z_{\theta,\phi}^{\pm}=v_{\theta,\phi} \mp B_{\theta,\phi} / \sqrt{4 \pi \rho}. \end{equation} We assume that the correlation length increases with the radius of the flux tube: \begin{equation} \lambda_{\perp}=\lambda_{\perp, \odot} \sqrt{\frac{B_{\odot}}{B_{r}}}. \label{eq:corrlength} \end{equation} At the photosphere, the Alfv\'enic fluctuations are localized in inter-granular lanes where the magnetic fields are concentrated \citep{Chitta_2012_ApJ}, so we set \begin{equation} \lambda_{\perp, \odot}=150 \mathrm{\ km} \end{equation} as a typical width of the inter-granular lanes. We adopt the coefficient \begin{equation} c_{d}=0.1 \end{equation} from \citet{van_Ballegooijen_2017}. \subsection{Flux-tube expansion factor} We adopt the same functional form for the super-radial expansion of the magnetic flux tube as in \citet{Shoda_2020_ApJ}. The gas pressure in the atmosphere decreases with increasing height, causing an expansion of the flux tubes. Since the scale heights of the chromosphere and corona are very different, the flux tube expands first in the chromosphere and then in the corona \citep{Cranmer_2005ApJS}.
\begin{eqnarray} f^{\text {open }}(r)&=&f_{\odot}^{\text {open }} f_{1}^{\exp }(r) f_{2}^{\exp }(r) \\ f_{1}^{\exp }(r)&=&\min \left[f_{\text {cor }}^{\text {open }} / f_{\odot}^{\text {open }}, \exp \left(\frac{r-R_{\odot}}{2 h_{\exp }}\right)\right] \\ f_{2}^{\exp }(r)&=&\frac{\mathcal{F}(r)+f_{\text {cor }}^{\text {open }}+\mathcal{F}\left(R_{\odot}\right)\left(f_{\text {cor }}^{\text {open }}-1\right)}{f_{\text {cor }}^{\text {open }}(\mathcal{F}(r)+1)} \end{eqnarray} where $\mathcal{F}(r)=\exp \left(\frac{r-r_{\exp }}{\sigma_{\exp }}\right)$, and $f_{\odot}^{\text {open }}$ is given in Table \ref{tab:settings}. $f_{1}^{\exp }$ and $f_{2}^{\exp }$ represent the expansion of the magnetic flux tube in the chromosphere and in the corona, respectively. The pressure scale height from the photosphere to the chromosphere is expressed as \begin{equation} h_{\rm exp}=a_\odot^{2} / g_\odot, \end{equation} where $a_{\odot}$ is the speed of sound and $g_{\odot}$ is the gravitational acceleration at the solar surface. We assume that the pressure scale height does not change significantly from the photosphere to the chromosphere because the variation of the temperature is small. For the coronal expansion, we follow the formulation of \citet{Kopp_1976_SolPhys}, with $r_{\rm exp} / R_\odot =1.3$ and $\sigma_{\rm exp} / R_\odot=0.5$. \subsection{Simulation Setup} We set the simulation domain from the photosphere ($r=R_\odot$) to the outer boundary at $r=r_{\rm out}$. The size of the spatial grid varies with $r$; the grid size $\Delta r$ is set as follows: \begin{equation} \Delta r = \max \left[ \Delta r_{\rm min},\min \left[ \Delta r_{\rm max},\frac{0.02(r-1.04R_\odot)}{2.01}+\Delta r_{\rm min} \right] \right], \end{equation} where $\Delta r_{\rm min}=20{\;\rm km}$ and $\Delta r_{\rm max}=2000{\;\rm km}$. Figure \ref{grid_size} shows $\Delta r$ as a function of $r$. Beyond the outer boundary, $r_{\rm out}$, we set a free boundary condition.
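As defined above, $f_2^{\exp}(R_\odot)=1$ identically, so $f^{\rm open}(R_\odot)=f_\odot^{\rm open}$, and $f^{\rm open}\to1$ far from the star. A short sketch of the expansion profile, using the Table \ref{tab:settings} values $f_\odot^{\rm open}=10^{-3}$ and $f_{\rm cor}^{\rm open}=(f_\odot^{\rm open})^{1/3}$; the chromospheric scale height $h_{\rm exp}$ below is an illustrative number, not the simulated $a_\odot^2/g_\odot$:

```python
import math

R_SUN = 6.957e10              # solar radius [cm]
F_SUN = 1.0e-3                # photospheric filling factor (Table 1)
F_COR = F_SUN ** (1.0 / 3.0)  # coronal filling factor (Table 1)
R_EXP = 1.3 * R_SUN           # r_exp
SIG_EXP = 0.5 * R_SUN         # sigma_exp
H_EXP = 2.0e7                 # chromospheric scale height a^2/g [cm]; illustrative

def f_open(r):
    """Super-radial expansion factor f_open(r) = f_sun * f1_exp(r) * f2_exp(r)."""
    # f1: chromospheric expansion, capped at F_COR/F_SUN (avoiding exp overflow)
    x = (r - R_SUN) / (2.0 * H_EXP)
    cap = F_COR / F_SUN
    f1 = cap if x > math.log(cap) else math.exp(x)
    # f2: coronal expansion; valid while (r - R_EXP)/SIG_EXP stays below ~700
    F = lambda s: math.exp((s - R_EXP) / SIG_EXP)
    f2 = (F(r) + F_COR + F(R_SUN) * (F_COR - 1.0)) / (F_COR * (F(r) + 1.0))
    return F_SUN * f1 * f2
```

At the photosphere `f_open` returns `F_SUN` exactly, and far from the star it approaches unity from below, as expected for a fully opened flux tube.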
\begin{figure}[!t] \begin{center} \includegraphics[width=7cm]{./figure/grid_size.pdf} \caption{The grid size $\Delta r$ as a function of $r$. \label{grid_size}} \end{center} \end{figure} At the inner boundary, we fix the temperature to the photospheric value, \begin{equation} T_{\odot} = 5770 \;\mathrm{K}. \end{equation} The transverse components of the velocity and magnetic field correspond to the amplitudes of the Alfv\'enic waves; we handle them via the Els\"{a}sser variables (eq.~\ref{eq:elsasser}) at the photosphere. A free boundary condition is imposed on the incoming Alfv\'en waves so that they are absorbed without reflection at the inner boundary: \begin{equation} \left.\frac{\partial}{\partial r} z_{\theta,\phi}^{-}\right|_{\odot}=0. \end{equation} To inject MHD waves from the photosphere, we impose time-dependent boundary conditions on the density, velocity, and perpendicular magnetic field: \begin{align} \rho_{\odot}&=\bar{\rho}_\odot\left(1+\frac{v_{r, \odot}}{a_\odot}\right),\\ v_{r,\odot}&= \delta v_{\parallel, \odot}(t),\\ z_{\theta,\phi, \odot}^{+}&= \delta z_{\theta,\phi, \odot}^{+}(t), \end{align} where $a_{\odot}$ is the sound speed at the photosphere. Here we adopt broadband spectra for $\delta v_{\parallel,\odot}$ and $\delta z_{\theta,\phi,\odot}^{+}$: \begin{align} \delta v_{\parallel, \odot} \propto \sum_{N=0}^{100} \sin \left(2 \pi f_{N}^{l} t+\phi_{N}^{l}\right) / \sqrt{f_{N}^{l}} \end{align} \begin{align} \delta z_{\theta,\phi, \odot}^{+} \propto \sum_{N=0}^{100} \sin \left(2 \pi f_{N}^{t} t+\phi_{N}^{t}\right) / \sqrt{f_{N}^{t}}, \end{align} where $\phi_N^l$ and $\phi_N^t$ are random phases, and \begin{align} 3.33 \times 10^{-3}\; \mathrm{Hz} \leq f_{N}^{l} \leq 1.00 \times 10^{-2}\; \mathrm{Hz} \end{align} \begin{align} 1.00\times10^{-3}\; \mathrm{Hz} \leq f_{N}^{t} \leq 1.00 \times 10^{-2}\; \mathrm{Hz}. \end{align} We ran a set of simulations with different injected wave amplitudes and photospheric magnetic field strengths.
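The broadband drivers above are sums of 101 sinusoids with random phases and amplitudes $\propto f_N^{-1/2}$. A sketch of such a driver, normalized to a target RMS amplitude; the normalization assumes the modes are mutually uncorrelated, which holds on average for random phases:

```python
import math
import random

def broadband_driver(v_rms, f_min, f_max, n_modes=101, seed=0):
    """Build delta_v(t): a sum of sinusoids with random phases and amplitudes
    proportional to f^(-1/2), normalized to the requested RMS amplitude."""
    rng = random.Random(seed)
    freqs = [f_min + (f_max - f_min) * n / (n_modes - 1) for n in range(n_modes)]
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in freqs]
    # each mode contributes a_N^2/2 to the variance; normalize the sum to v_rms^2
    norm = math.sqrt(sum(0.5 / f for f in freqs))
    def delta_v(t):
        return (v_rms / norm) * sum(
            math.sin(2.0 * math.pi * f * t + p) / math.sqrt(f)
            for f, p in zip(freqs, phases))
    return delta_v
```

For the longitudinal driver one would take $f_N^l \in [3.33\times10^{-3}, 10^{-2}]$ Hz and for the transverse one $f_N^t \in [10^{-3}, 10^{-2}]$ Hz; the uniform spacing of the frequencies within the band is our assumption.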
The input parameters are summarized in Table \ref{tab:settings}. $\langle \delta v_{\perp,\odot} \rangle$ and $\langle \delta v_{\parallel,\odot} \rangle$ represent the root-mean-square values of $v_{\theta,\phi}$ and $v_r$ at the photosphere, respectively. In the top case, only longitudinal fluctuations are injected, while in the second case from the top, only transverse fluctuations are injected. The following nine cases differ in the amplitude of the longitudinal fluctuations. The bottom three cases adopt a weaker magnetic field at the photosphere and again differ in the longitudinal amplitude. \section{Results} \label{sec:Results} The aim of this work is to investigate how the longitudinal-wave excitation on the photosphere affects the properties of the solar wind. For this purpose, we compare the simulation results with the same transverse-wave amplitude and different longitudinal-wave amplitudes on the photosphere. We also perform a simulation without transverse-wave injection, which is discussed in Appendix \ref{sec:sound wave wind} \citep[see also][]{Suzuki_2002ApJ}. \subsection{Overview: comparison of radial profiles} To obtain an overview, we first show how the radial profiles change with the longitudinal-wave amplitude on the photosphere. Because the radial profile fluctuates in time, we compare the time-averaged profiles in the quasi-steady state; we take 1500 minutes for the time averaging. \begin{figure}[ht] \begin{center} \includegraphics[width=9cm]{./figure/temp_and_vr.pdf} \end{center} \caption{\label{fig:temp_vr} Time-averaged wind profiles of the cases with $\langle \delta v_{\parallel,\odot}\rangle=0 \,{\rm km\ s^{-1}}$ (blue dashed), $0.6 \,{\rm km\ s^{-1}}$ (green solid) and $1.8 \,{\rm km\ s^{-1}}$ (red dotted) in comparison with observations. {\bf (a)}: Temperature. The circles \citep{Fludra_1999SSRv} show the radial distribution of electron temperature observed by CDS/SOHO. {\bf (b)}: Radial velocity.
The circles \citep{Teriaca_2003AIPC} and the stars \citep{Zangrilli_2002ApJ} represent proton outflow speeds in polar regions observed by SOHO. } \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=9cm]{./figure/rho_linear.pdf} \end{center} \caption{\label{fig:rho} Time-averaged density profiles in the chromosphere and the low corona in comparison with observations. The three lines correspond to the cases with $\langle \delta v_{\parallel,\odot}\rangle=0 \,{\rm km\ s^{-1}}$ (blue dashed), $0.6 \,{\rm km\ s^{-1}}$ (green solid) and $1.8 \,{\rm km\ s^{-1}}$ (red dotted). The squares represent the empirical formula of \citet{Saito_1970AnTok}. The crosses and triangles \citep{Wilhelm_1998ApJ} are values observed by SOHO in inter-plume and plume regions, respectively. } \end{figure} Figures \ref{fig:temp_vr}(a) and (b) show the time-averaged radial profiles of the temperature $T$ and radial velocity $v_r$ for three cases. The red dotted, green solid, and blue dashed lines correspond to the cases with $\langle \delta v_{\parallel,\odot}\rangle=1.8$ km s$^{-1}$, $0.6$ km s$^{-1}$, and $0.0$ km s$^{-1}$, respectively. Also shown by symbols are observed values taken from the literature (see the captions for details). Several features can be inferred from the comparison. First, the transition region is located higher in the large-$\langle \delta v_{\parallel,\odot}\rangle$ cases. Given that the upward motions of the transition region (spicules) are likely to be driven by longitudinal waves, the higher transition region is a natural consequence of the larger-amplitude longitudinal waves. Meanwhile, no significant differences are found in the coronal temperature. The $v_r$ profiles show a trend that the outflow is slightly faster in the large-$\langle \delta v_{\parallel,\odot}\rangle$ cases. The terminal velocity is, on the other hand, nearly invariant with $\langle \delta v_{\parallel,\odot}\rangle$.
In other words, the variation in the solar wind velocity is unlikely to come from the longitudinal-wave injection in the lower atmosphere. Figure \ref{fig:rho} shows the radial profiles of the mass density $\rho$ from $r/R_\odot-1\sim 0.007$ to $r/R_\odot-1\sim 0.1$. The format is the same as in Figure \ref{fig:temp_vr}. In contrast to the temperature and velocity, the mass density depends significantly on $\langle \delta v_{\parallel,\odot}\rangle$; $\langle \delta v_{\parallel,\odot}\rangle=1.8$ km s$^{-1}$ yields a four times larger coronal density than $\langle \delta v_{\parallel,\odot}\rangle=0.0$ km s$^{-1}$. Given that the flux-tube expansion factor is fixed and the wind velocity is nearly independent of $\langle \delta v_{\parallel,\odot}\rangle$, the larger coronal density means a larger mass-loss rate. Thus, our simulations indicate that the wind mass-loss rate is sensitive to the longitudinal-wave injection, which is discussed in more detail in the following section. \subsection{Mass-loss rate} To see more quantitatively how the mass-loss rate ($\dot{M}_w=4\pi r^2 f^{\rm open}\rho v_r$) depends on the photospheric longitudinal-wave amplitude, we show in Figure \ref{fig:vr_mass_loss} by blue points the relation between $\langle \delta v_{\parallel,\odot}\rangle$ and the mass-loss rate $\dot{M}_w$ (right axis), together with the enhancement in the mass-loss rate $\Delta \dot{M}_w$ (left axis), which is defined by \begin{align} \Delta \dot{M}_w = \dot{M}_w - \dot{M}_w^0, \end{align} where $\dot{M}_w^0$ denotes the mass-loss rate for $\langle \delta v_{\parallel,\odot}\rangle = 0.0 {\rm \ km \ s^{-1}}$. Also shown by the blue solid line is the power-law fit to the blue points (in the small-$\langle \delta v_{\parallel,\odot}\rangle$ range) given by \begin{equation} \Delta\dot{M}_w=1.41\times10^{-14}\left(\frac{\langle\delta v_{\parallel,\odot}\rangle}{\rm km\ s^{-1}}\right)^{1.11}M_\odot\; {\rm yr}^{-1}.
\end{equation} It is seen from Figure \ref{fig:vr_mass_loss} that the mass-loss rate increases nearly linearly with $\langle \delta v_{\parallel,\odot}\rangle$ until saturating at $\langle \delta v_{\parallel,\odot}\rangle \sim 2.0 {\rm \ km \ s^{-1}}$. Because the energy flux of the longitudinal waves increases as $\propto \langle\delta v_{\parallel,\odot}^2\rangle$, the mass-loss rate increases sub-linearly with the longitudinal-wave energy flux. Given that the wind mass-loss rate is approximately proportional to the energy flux injected into the corona, the sub-linear dependence means that the longitudinal-wave energy flux in the corona is not proportional to the longitudinal-wave energy flux on the photosphere. One possible reason for this is that, as $\langle \delta v_{\parallel,\odot}\rangle$ increases, the longitudinal waves become more dissipative in the chromosphere due to faster shock formation. Also shown by the orange points in Figure \ref{fig:vr_mass_loss} is the coronal Alfv\'en-wave luminosity $L_{\rm A,cb}$ divided by the square of the escape velocity $v_{\rm esc}$. From the energy conservation of the solar wind, the mass-loss rate satisfies the following relation \citep{Cranmer_2011_ApJ}: \begin{align} \dot{M}_w \approx L_{\rm A,cb}/v_{\rm esc}^2. \end{align} Indeed, Figure \ref{fig:vr_mass_loss} directly confirms this approximation. In other words, the enhancement and saturation of the mass-loss rate can be understood in terms of the Alfv\'en-wave energy flux. We discuss the energetics of the solar wind and Alfv\'en waves in the following sections. \begin{figure}[!t] \begin{center} \includegraphics[width=8cm]{figure/massloss_powerlaw.pdf} \end{center} \caption{\label{fig:vr_mass_loss} Mass-loss rate versus injected longitudinal-wave amplitude. The blue and orange dots represent $\dot{M}_w$ and $L_{A, \mathrm{cb}}/v_{\rm esc}^{2}$, the latter derived from the energy balance equation (\ref{eq:Cranmer_equation}).
Shown by the dashed lines are the power-law fits to the numerical results. } \end{figure} \subsection{Wind energetics} \begin{figure}[!t] \centering \includegraphics[width=8cm]{figure/enegy_flux_corona_trans.pdf} \caption{Energy fluxes at $r/R_\odot-1=0.03$ (solid lines) and $r=39.1R_\odot$ (dashed lines) as a function of $\langle \delta v_{\parallel,\odot}\rangle$. Red, green, black, and cyan denote the Alfv\'en energy flux, kinetic energy flux, gravitational energy flux, and radiative cooling loss, respectively. \label{fig:energy_flux} } \end{figure} To understand what determines the mass-loss rate, it is often helpful to investigate the global wind energetics \citep{Cranmer_2011_ApJ,Suzuki_2013_PASJ,Shoda_2020_ApJ}. Here, we investigate the energy budget from the low atmosphere to the wind region to figure out what causes the increase and saturation of the mass-loss rate with respect to $\langle \delta v_{\parallel,\odot}\rangle$. When the steady-state condition is satisfied, taking the time average of the energy equation (eq.~\ref{eq:energy}), we obtain \begin{equation} \frac{d}{d r}\left(L_{\rm K}+L_{\rm E}+L_{\rm A}-L_{\rm C}-L_{\rm G}\right)=-4 \pi r^{2} f^{\mathrm{open}} Q_{R}, \end{equation} where \begin{align} \label{eq:L_K} L_{\rm K} &=\frac{1}{2} \rho v_{r}^{3} 4 \pi r^{2} f^{\text {open }}, \\ \label{eq:L_E} L_{\rm E} &=\frac{\gamma}{\gamma-1} p v_{r} 4 \pi r^{2} f^{\text {open }}, \\ \label{eq:L_A} L_{\rm A} &=\left[\left(\frac{1}{2} \rho \boldsymbol{v}_{\perp}^{2}+\frac{\boldsymbol{B}_{\perp}^{2}}{4 \pi}\right) v_{r}-\frac{B_{r}}{4 \pi}\left(\boldsymbol{v}_{\perp} \cdot \boldsymbol{B}_{\perp}\right)\right] 4 \pi r^{2} f^{\text {open }}, \\ \label{eq:L_C} L_{\rm C} &=-F_{C} 4 \pi r^{2} f^{\text {open }}, \\ \label{eq:L_G} L_{\rm G} &=\rho v_{r} \frac{G M_{\odot}}{r} 4 \pi r^{2} f^{\text {open }}=\dot{M}_w \frac{G M_{\odot}}{r}.
\end{align} Here, $L_K, L_E, L_A, L_C,$ and $L_G$ indicate the components of the luminosity that are respectively derived from the kinetic energy flux, enthalpy flux, Alfv\'en energy flux, conductive flux, and gravitational energy flux integrated over the surface area. We note that $\dot{M}_w$ in equation (\ref{eq:L_G}) can be assumed to be constant when the quasi-steady state is satisfied, and therefore $L_{\rm G}\propto r^{-1}$. Here, we also evaluate the radiative loss from the low chromosphere at $r_{\rm lch}-R_\odot= 0.7$ Mm to $r$ as follows: \begin{equation}\label{eq:L_R} L_R(r)= \int_{r_{\rm lch}}^{r}Q_{R} 4\pi r^2f^{\rm open}dr . \end{equation} Figure \ref{fig:energy_flux} compares $L_{\rm A, cb}$ and $L_{\rm G, cb}$ evaluated at the coronal base, $r=r_{\rm cb}=1.03R_{\odot}$, $L_{\rm R}(r_{\rm cb})$ integrated from the low chromosphere to the coronal base, and $L_{\rm K}$ measured in the wind region at $r=39.1R_{\odot}$, which corresponds to the outer boundary of the simulation box in the models with $\langle\delta v_{\parallel,\odot}\rangle\ge1.8\ {\rm km\ s^{-1}}$. These $L$ values are normalized by the Alfv\'en wave luminosity at the photosphere, $L_{A,\odot}$, so that the red solid line indicates the energy transmittance of the Alfv\'en waves to the low corona. The sum $L_K+L_E+L_A-L_C-L_G$ is almost conserved above the coronal base because the radiative cooling is not significant there, giving $L_{R,{\rm cor}}\approx L_{R,{\rm out}}$. If we compare the energy balance between the coronal base and the wind region, we can neglect $L_{\rm C}$, which only redistributes the energy within the considered region, and the less dominant $L_{\rm E}$. As a result, we finally obtain an approximate equation that determines the kinetic energy of the wind \citep{Cranmer_2011_ApJ}: \begin{equation}\label{eq:Cranmer_equation} L_{K, \text { out }} \approx L_{A, \text { cb }}-L_{G, \text { cb }}.
\end{equation} One clearly sees in Figure \ref{fig:energy_flux} that this relation is satisfied. Turning to the dependence on $\langle\delta v_{\parallel,\odot}\rangle$ (Figure \ref{fig:energy_flux}), $L_{\rm K,out}$, $L_{\rm A,cb}$ and $L_{\rm G,cb}$ exhibit similar trends; they increase initially for $\langle\delta v_{\parallel,\odot}\rangle\lesssim 2$ km s$^{-1}$ but saturate for $\langle\delta v_{\parallel,\odot}\rangle\gtrsim 2$ km s$^{-1}$. The dependence of $\dot{M}_w$ on $\langle\delta v_{\parallel,\odot}\rangle$ in Figure \ref{fig:vr_mass_loss} follows this trend because $\dot{M}_w\propto L_{\rm K,out}+L_{\rm G,cb}$ (eqs.~\ref{eq:L_K} \& \ref{eq:L_G}). Namely, $\dot{M}_w$ is determined by the Alfv\'{e}n wave luminosity at the coronal base. Therefore, the key is to understand how the Alfv\'en waves are damped in the chromosphere and transmitted to the corona. The increase of $\dot{M}_w$ is primarily because of the increase of the coronal density (Figure \ref{fig:rho}). The larger density enhances the radiative cooling loss, and therefore sufficient energy does not remain for the Alfv\'{e}n waves. As a result, $L_{\rm A,cb}/L_{\rm A,\odot}$ does not increase with $\langle\delta v_{\parallel,\odot}\rangle$ but saturates for $\langle\delta v_{\parallel,\odot}\rangle \gtrsim 2$ km s$^{-1}$ (Figure \ref{fig:energy_flux}), which is also the reason for the saturation of $\dot{M}_w$ (Figure \ref{fig:vr_mass_loss}). \subsection{Dissipation of Alfv\'en Waves}\label{sec:Alfven energy} \begin{figure}[!t] \begin{center} \includegraphics[width=8cm]{figure/alfven_energy1.pdf} \end{center} \caption{\label{fig:alfven_energy} The radial profile of the luminosity of Alfv\'en waves (a) and the fractions of the turbulent energy losses (b) and mode-conversion losses (c) from the photosphere to the coronal base.
The three lines correspond to the cases with $\langle \delta v_{\parallel,\odot}\rangle=0 \,{\rm km\ s^{-1}}$ (blue dashed), $0.6 \,{\rm km\ s^{-1}}$ (green solid) and $1.8 \,{\rm km\ s^{-1}}$ (red dotted). } \end{figure} In order to understand the dependence of $L_{\rm A,cb}$ on $\langle\delta v_{\parallel,\odot}\rangle$ in Figure \ref{fig:energy_flux}, we examine the propagation and dissipation of Alfv\'{e}n waves from the chromosphere to the low corona. \citet{Shoda_2018_ApJ_a_self-consistent_model} introduced an equation that describes the evolution of Alfv\'{e}n waves: \begin{equation} \frac{\partial}{\partial t}\left(\frac{1}{2}\rho v_\perp^2+\frac{B_\perp^2}{8\pi}\right) +\frac{1}{4\pi r^2f^{\rm open}}\frac{\partial}{\partial r}L_A = -\varepsilon_{\parallel\leftrightarrow\perp}-Q_{\rm turb}, \end{equation} where $\varepsilon_{\parallel\leftrightarrow\perp}$ and $Q_{\rm turb}$ indicate the mode conversion from transverse waves to longitudinal waves and the energy loss by the turbulent cascade, respectively. These are explicitly written as \begin{eqnarray} \label{eq:mode_conversion} \varepsilon_{\parallel\leftrightarrow\perp} &=&-v_r\frac{\partial}{\partial r}\left(\frac{B_\perp^2}{8\pi}\right) +\left(\rho v_\perp^2-\frac{B_\perp^2}{4\pi}\right)v_r \frac{d}{dr}\ln (r\sqrt{f^{\rm open}})\nonumber\\ \\ \text{and}\nonumber\\ \label{eq:turbulence} Q_{\rm turb} &=&c_{d} \rho \frac{\left|z_{\theta}^{+}\right| \left(z_{\theta}^{-}\right)^{2}+\left|z_{\theta}^{-}\right| \left(z_{\theta}^{+}\right)^{2}+\left|z_{\phi}^{+}\right| \left(z_{\phi}^{-}\right)^{2}+\left|z_{\phi}^{-}\right| \left(z_{\phi}^{+}\right)^{2}}{4 \lambda_{\perp}}. \nonumber\\ \end{eqnarray} We note that the first term of eq.~(\ref{eq:mode_conversion}) denotes the nonlinear excitation of longitudinal perturbations by the magnetic fluctuations associated with transverse waves \citep{Hollweg_1971_JGR,Suzuki_2006_JGRA}, which corresponds to the ponderomotive force.
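Both loss terms are local, algebraic functions of the Els\"asser variables, so they are cheap to evaluate pointwise. A minimal sketch of $Q_{\rm turb}$ (eq.~\ref{eq:turbulence}) in CGS units; the input values used in the checks are illustrative:

```python
import math

C_D = 0.1  # turbulence coefficient c_d

def elsasser(v, b, rho):
    """z^pm = v -/+ B/sqrt(4 pi rho) for one transverse component (CGS units)."""
    va = b / math.sqrt(4.0 * math.pi * rho)
    return v - va, v + va  # (z+, z-)

def q_turb(v_th, b_th, v_ph, b_ph, rho, lam_perp):
    """Turbulent cascade rate Q_turb, summed over the theta and phi components."""
    total = 0.0
    for v, b in ((v_th, b_th), (v_ph, b_ph)):
        zp, zm = elsasser(v, b, rho)
        total += abs(zp) * zm ** 2 + abs(zm) * zp ** 2
    return C_D * rho * total / (4.0 * lam_perp)
```

A purely outgoing wave ($z^-=0$) gives $Q_{\rm turb}=0$: the cascade requires counter-propagating waves, which is why wave reflection is essential for this heating channel.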
Using $\varepsilon_{\parallel\leftrightarrow\perp}$ and $Q_{\rm turb}$, we define the energy loss rates via the turbulent dissipation and the mode conversion as \begin{align} \Delta L_{A, \mathrm{turb}}(r) &=\int_{R_{\odot}}^{r} d r 4 \pi r^{2} f^{\mathrm{open}} Q_{\mathrm{turb}} ,\\ \text{and}\nonumber\\ \Delta L_{A, \mathrm{mc}}(r) &=\int_{R_{\odot}}^{r} dr 4 \pi r^{2} f^{\mathrm{open}} \varepsilon_{\parallel \leftrightarrow \perp}. \end{align} Figure \ref{fig:alfven_energy} presents properties of the damping of Alfv\'{e}n waves in the chromosphere; panel (a) presents the radial profile of the Alfv\'en wave luminosity, while panels (b) and (c) present $\Delta L_{A, \mathrm{turb}}$ and $\Delta L_{A, \mathrm{mc}}$, respectively. We note that the net loss of Alfv\'{e}n waves, $L_{A,{\odot}}-L_A$, is not always equal to the sum of these energy losses because of numerical dissipation. As shown in Figure \ref{fig:alfven_energy}, the mode-conversion rate and the turbulent loss rate behave differently; the general trends are that $\Delta L_{A, \mathrm{turb}}$ increases and $\Delta L_{A, \mathrm{mc}}$ decreases with increasing $\langle \delta v_{\parallel,\odot}\rangle$. As $\langle \delta v_{\parallel,\odot}\rangle$ increases, the mode conversion from transverse (Alfv\'{e}nic) waves to longitudinal (acoustic) waves is suppressed; instead, the inverse conversion, $\Delta L_{A, \mathrm{mc}}<0$, takes place in the case with $\langle \delta v_{\parallel,\odot}\rangle = 1.8$ km s$^{-1}$. This is because transverse waves are excited in the region with plasma $\beta\approx 1$ in the chromosphere by mode conversion from the large-amplitude longitudinal waves injected from the photosphere \citep{Cally_2006, Schunker_2006, Cally_2008}.
We note here that the conversion from the longitudinal mode to the transverse mode occurs even in our simple 1D geometry because there is a finite angle between the direction of wave propagation ($\parallel \hat{r}$) and the direction of the magnetic field, $B_r\hat{r}+\boldsymbol{B}_{\perp}$. This inverse conversion is seen as the increase of $L_{\rm A}$ near the surface (Figure \ref{fig:alfven_energy}(a)). \section{Discussion} \label{sec:discussion} \subsection{Dependence on Magnetic Field in Chromosphere} \begin{figure}[h] \begin{center} \includegraphics[width=8cm]{figure/beta_chr.pdf} \end{center} \caption{\label{fig:beta} Radial profiles of the time-averaged $\langle \beta_r\rangle$ (eq.~\ref{eq:betar}). The dashed and solid lines correspond to the models with $B_{r,\odot}=1300$ G and $B_{r,\odot}=325$ G, respectively. } \end{figure} We have shown that the nonlinear mode conversion between transverse waves and longitudinal waves in the chromosphere is key to determining the wind properties when both $\langle \delta v_{\perp,\odot}\rangle$ and $\langle \delta v_{\parallel,\odot}\rangle$ are considered. The mode-conversion rate sensitively depends on the plasma $\beta=p/(B^2/8\pi)$, with a peak at $\beta\approx 1$ \citep{Hollweg_1982_ApJ,Spruit_1992_ApJ}. Therefore, it is expected that the magnetic field strength in the chromosphere plays an essential role in determining the Alfv\'{e}nic wave luminosity that enters the corona. Here, we perform simulations in a flux tube with a weaker magnetic field in the chromosphere but with the same field strength above the corona (``BwVxx'' in Table \ref{tab:settings}). Figure \ref{fig:beta} compares the radial profiles of the time-averaged plasma beta values of four cases, BsV00, BsV06, BwV00, and BwV06, evaluated from only the radial component of the magnetic field, \begin{align} \langle\beta_r\rangle \equiv \frac{8\pi \langle p\rangle}{B_r^2}, \label{eq:betar} \end{align} in the chromosphere.
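Equation \eqref{eq:betar} is a one-line diagnostic; the sketch below, with purely illustrative chromospheric numbers, also shows why evaluating $\beta$ from $B_r$ alone is harmless while $|B_\perp|\ll|B_r|$:

```python
import math

def beta_r(p, b_r):
    """Plasma beta from the radial field only: beta_r = 8*pi*p / B_r^2 (CGS)."""
    return 8.0 * math.pi * p / b_r ** 2

def beta_total(p, b_r, b_perp):
    """Conventional plasma beta with the full field strength."""
    return 8.0 * math.pi * p / (b_r ** 2 + b_perp ** 2)
```

With, e.g., $p=0.1$ dyn cm$^{-2}$, $B_r=10$ G, and $B_\perp=1$ G (illustrative values), `beta_r` exceeds `beta_total` by only about one percent.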
We note that, although $\beta_r \ge \beta$, the difference is small because $|\delta B_{\perp}|<|B_r|$. \begin{figure}[t] \begin{center} \includegraphics[width=8cm]{figure/alfven_energy2.pdf} \end{center} \caption{\label{fig:alfven_energy2} The radial profiles of the luminosity of Alfv\'en waves (a), the ratio of the inward to outward luminosity of Alfv\'en waves (b), and the fractions of the turbulent energy losses (c) and mode conversion losses (d) from the photosphere to the coronal base. The four lines correspond to BsV00 (blue dotted), BsV06 (green solid), BwV00 (cyan solid), and BwV06 (yellow dotted). } \end{figure} Figure \ref{fig:alfven_energy2} compares the properties of Alfv\'{e}n waves of these four cases. Panels (a), (c), and (d) are the same as panels (a), (b), and (c) of Figure \ref{fig:alfven_energy}, but the vertical axis of panel (a) and the horizontal axes are shown on a logarithmic scale. Panel (b) presents the ratio of the incoming component to the outgoing component of the Alfv\'{e}nic wave luminosity, $L_{\rm A}^{-}/L_{\rm A}^{+}$; we here note that $L_{\rm A} = L_{\rm A}^{+} - L_{\rm A}^{-}$. Let us begin with the comparison of the cases without $\langle \delta v_{\parallel,\odot}\rangle$, BsV00 (blue dotted lines) and BwV00 (cyan solid lines). Panel (a) shows that $L_{\rm A}$ of the weak field case (BwV00) is larger than $L_{\rm A}$ of the strong field case (BsV00) in the inner part of the displayed region. However, $L_{\rm A}$ of BwV00 declines more rapidly in the chromosphere and is smaller than $L_{\rm A}$ of BsV00 at the coronal base. As a result, $\dot{M}$ of BwV00 is slightly smaller than that of BsV00 (Table \ref{tab:settings}), following eq.(\ref{eq:CS11}). The rapid damping is due to the turbulent dissipation (panel (c)) because the correlation length, $\lambda_{\perp}$, of the weak field cases is smaller except at the inner boundary.
The correlation length, eq.(\ref{eq:corrlength}), can be rewritten by using eq.(\ref{eq:divB}) as \begin{equation} \lambda_\perp=\lambda_{\perp,\odot}\frac{r}{R_\odot}\sqrt{\frac{f^{\rm open}}{f^{\rm open}_\odot}}. \end{equation} We assume the same $\lambda_{\perp,\odot}$ at the inner boundary in both models. Since the flux-tube expansion of the weak field case is suppressed in our setup, $f^{\rm open}/f^{\rm open}_\odot$ is smaller, which enhances the turbulent dissipation. The rapid turbulent damping in the chromosphere leads to a smaller amplitude of the outgoing Alfv\'{e}nic waves in BwV00 than in BsV00. As a result, the Alfv\'{e}nic waves reflected downward to the photosphere are also reduced, giving a smaller ratio, $L_{\rm A}^{-}/L_{\rm A}^{+}$, as shown in Figure \ref{fig:alfven_energy2}(b). Therefore, the net upgoing flux, $L_{\rm A}\,(=L_{\rm A}^{+} - L_{\rm A}^{-})$, in panel (a) is larger near the inner boundary. The comparison of BwV06 to BwV00 indicates that $\dot{M}$ increases by more than a factor of two when the longitudinal perturbation of $\langle \delta v_{\parallel, \odot}\rangle = 0.6$ km s$^{-1}$ is added in the weak field cases (Table \ref{tab:settings}). This enhancement factor of $\dot{M}_w$ is considerably larger than the factor of $\approx 1.5$ obtained in the strong field cases. This is because the longitudinal to transverse mode conversion excites transverse waves in the chromosphere (panel (d)) and a larger $L_{\rm A}$ enters the corona (panel (a)) in BwV06. On the other hand, in BsV06 the inverse mode conversion does not occur. Figure \ref{fig:beta} shows that $\langle\beta_r\rangle$ stays below unity in the strong field cases. In contrast, in the weak field cases $\langle\beta_r\rangle$ decreases from $\langle\beta_r \rangle > 1$ to $\langle\beta_r \rangle < 1$. The inverse mode conversion from the longitudinal mode to the transverse mode takes place most efficiently at $\beta\approx 1$ \citep{Cally_2006, Cally_2008}.
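The scaling above is easy to sketch numerically; the snippet below uses placeholder values (not the simulated profiles), and the inverse dependence of the turbulent damping rate on $\lambda_\perp$ mentioned in the comment is the standard Alfv\'{e}nic-turbulence phenomenology assumed in this kind of model.

```python
import math

def corr_length(r_over_rsun, f_open_ratio, lam0):
    """lambda_perp = lambda_perp_sun * (r/R_sun) * sqrt(f_open/f_open_sun)."""
    return lam0 * r_over_rsun * math.sqrt(f_open_ratio)

# Same lambda_perp,sun at the inner boundary in both models; suppressed
# flux-tube expansion (smaller f_open/f_open_sun) in the weak-field setup
# gives a smaller lambda_perp, hence a larger turbulent damping rate
# (which scales roughly as 1/lambda_perp):
lam_strong = corr_length(1.01, 1.0, 1.0)   # placeholder values
lam_weak = corr_length(1.01, 0.25, 1.0)
```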
This is the main reason why in BwV06 the Alfv\'{e}nic waves are excited in the chromosphere to give a larger $L_{\rm A}$ at the coronal base. \subsection{Limitation of 1D treatment} \suzuki{While we only consider shear Alfv\'{e}n waves in our 1D simulations, it is expected that in reality torsional Alfv\'{e}n waves are also excited. The nonlinear steepening of the torsional mode is slower than that of the shear mode, which will reduce the dissipation rate of Alfv\'{e}nic waves (Vasheghani Farahani et al. 2012). } Multi-dimensional effects also induce other wave-coupling processes \citep{Hasan_2005ApJ,Hasan_2008ApJ}. Longitudinal to transverse mode conversion is a multidimensional phenomenon, and the wave transmission decreases with increasing attack angle between the magnetic field and the wave vector \citep{Schunker_2006}. Partial mode conversion when a shock is obliquely incident on the magnetic field has been shown to smooth out slow shocks \citep{Pennicott_2019ApJ}. In order to further understand the solar wind acceleration quantitatively and qualitatively, it is necessary to perform multi-dimensional MHD simulations. \subsection{Weak ionization in low chromosphere} The chromosphere is partially ionized, and significant drift of charged particles relative to neutrals occurs there, which is known as ambipolar diffusion. This diffusion promotes the damping of transverse waves and the heating of the gas \citep{Khomenko_2012ApJ}. The magnetic tension induced by the ambipolar diffusion also generates transverse waves \citep{Martinez-Sykora_2017Sci}. \cite{Khomenko_2018A&A} suggest that the Poynting flux is indeed absorbed in numerical calculations for the chromosphere incorporating ambipolar diffusion. The absorption increases toward higher frequencies in the lower layers, and the transverse waves excited by the mode conversion are predominantly of high frequency \citep{Shoda_2018Apj}, which may suppress the increase of the energy flux found in this study.
Therefore, it is important to perform numerical calculations that incorporate ambipolar diffusion in future studies. \subsection{Radiation below chromosphere} Because of the ionization non-equilibrium in the chromosphere, the ionization degree cannot be derived from Saha's equation. In this study, radiative cooling (heating) is reproduced by combining several radiation functions depending on the optical thickness, but in general, the ionization degree may depend not only on the electron density and temperature, but also on the history of the atmosphere \citep{Carlsson_2012A&A}. Figure \ref{fig:energy_flux} shows that radiative cooling (heating) affects the saturation of the Alfv\'{e}n wave energy below the corona; therefore, accurate RMHD calculations may affect the results of this study. \subsection{Observation signature} Density fluctuations and Els\"{a}sser variables are important quantities in observational tests of Alfv\'{e}n wave-driven solar wind models, but our study shows that there is no significant difference between the models. Therefore, it is difficult to estimate traces of the mode conversion between longitudinal and transverse waves from these physical quantities. \begin{figure}[ht] \begin{center} \includegraphics[width=8cm]{./figure/density_fluctuation.pdf} \end{center} \caption{\label{fig:den_fluc} Radial profile of the r.m.s. density fluctuation normalized by the average density, $\langle \delta \rho\rangle / \langle\rho\rangle=\sqrt{\langle\rho^2\rangle-\langle\rho\rangle^2}/\langle\rho\rangle$. The three lines correspond to the models BsV00 (dashed blue), BsV06 (solid green), and BsV18 (dotted red), respectively. } \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=8cm]{./figure/elssaser_variable.pdf} \end{center} \caption{\label{fig:elssaser} Radial profile of the root mean squared Els\"asser variables of the outward (solid lines) and inward (dashed lines) components.
Red and blue circles represent the observed amplitudes of $z_\perp^+$ and $z_\perp^-$, respectively, from PSP \citep{Chen_2020_ApjS}. } \end{figure} Figure \ref{fig:den_fluc} presents the radial profile of the dimensionless density fluctuation, which exhibits two peaks: at $r/R_{\odot}-1\sim10^{-2}$ in the chromosphere and the transition region, and at $r/R_{\odot}\sim 5$ in the corona and the solar wind region. The first peak is formed by the nonlinear mode conversion from the Alfv\'enic waves to sound waves via the wave pressure gradient \citep{Hollweg_1971_JGR,Kudoh_1999ApJ,Matsumoto_2010_ApJ}. The second peak of $\langle \delta \rho \rangle / \langle \rho \rangle$ at $r/R_{\odot}\sim 5$ is formed by the parametric decay instability of Alfv\'{e}n waves \citep{Terasawa_1986,Tenerani_2017ApJ, Shoda_2018ApJ, Re'ville_2018ApJ}. Although $\langle \delta \rho\rangle / \langle \rho\rangle$ is larger near the surface for larger $\langle \delta v_{\parallel,\odot}\rangle$ simply because of the larger longitudinal injection, the different $\langle \delta v_{\parallel,\odot}\rangle$ cases give similar peak values at $r/R_{\odot}-1\approx 0.1$ in the low corona. This is because the efficient inverse conversion from the longitudinal mode to the transverse mode takes place in the chromosphere in the cases with larger $\langle \delta v_{\parallel,\odot}\rangle$ (Figure \ref{fig:alfven_energy}), which suppresses the increase of the density fluctuations. Finally, the radial profiles of $\langle \delta \rho\rangle / \langle \rho \rangle$ converge to a similar profile in and above the corona in a self-regulated manner, even though longitudinal perturbations with different $\langle \delta v_{\parallel,\odot}\rangle$ are injected from the photosphere. Figure \ref{fig:elssaser} compares the numerical results of the time averaged $\langle z_{+}\rangle$ and $\langle z_{-}\rangle$ to observations by the \textit{Parker Solar Probe} (hereafter PSP) at $r\approx 40 R_{\odot}$ \citep{Chen_2020_ApjS}.
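For reference, the normalized density fluctuation discussed above is the standard r.m.s. statistic; a minimal sketch (with made-up numbers, not simulation output) of how it would be computed from a density time series at a fixed radius:

```python
import math

def rms_density_fluctuation(rho_series):
    """<delta rho>/<rho> = sqrt(<rho^2> - <rho>^2) / <rho> for a density
    time series sampled at a fixed radius."""
    n = len(rho_series)
    mean = sum(rho_series) / n
    mean_sq = sum(x * x for x in rho_series) / n
    var = max(mean_sq - mean * mean, 0.0)  # clamp tiny negative round-off
    return math.sqrt(var) / mean

two_state = rms_density_fluctuation([1.0, 3.0])    # mean 2, variance 1 -> 0.5
steady = rms_density_fluctuation([2.0, 2.0, 2.0])  # no fluctuation -> 0.0
```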
The obtained $\langle z_{\pm}\rangle$ from these three cases are consistent with the observed $z_{+}$ and $z_{-}$, which show large scatter. Both $\langle z_{+}\rangle$ and $\langle z_{-}\rangle$ are larger for larger $\langle \delta v_{\parallel,\odot}\rangle$ in $10^{-2}<r/R_\odot-1<1$. Larger $\langle z_{\pm}\rangle$ yields a larger turbulent loss, as shown in Figure \ref{fig:alfven_energy}, which suppresses the increase of $\langle z_{\pm}\rangle$. As a result, almost the same maximum amplitudes of $\langle z_{\pm}\rangle$ are obtained for the different $\langle \delta v_{\parallel,\odot}\rangle$ cases. In the solar wind region, $r/R_{\odot}\gtrsim 10$, $\langle z_{\pm}\rangle$ is smaller for larger $\langle \delta v_{\parallel,\odot}\rangle$ because the density is higher (Figure \ref{fig:rho}). \subsection{Time dependency} \begin{figure*}[!t] \centering \includegraphics[width=16cm]{./figure/time_slice_ro.pdf} \caption{ \label{fig:r-t_diagram} The radial-time diagrams of the mass density in the lower atmospheres of BsV00, BsV06, BsV18, BwV00, BwV06, and BwV18. The yellow and blue solid lines represent contour lines of $T=2\times 10^4$ K and $\beta_r=1$, respectively. } \end{figure*} Figure \ref{fig:r-t_diagram} shows the $r$-$t$ diagram of the mass density in the lower atmosphere. The left column shows the models with a strong magnetic field (BsVxx) and the right column shows the models with a weak magnetic field (BwVxx). The top, middle, and bottom rows show $\langle\delta v_{\parallel,\odot}\rangle=0$, 0.6, and 1.8 km s$^{-1}$, respectively. The yellow lines represent the contours of $T=2\times 10^4$ K. The up and down motions of the chromosphere correspond to solar spicules. The longitudinal motions propagate upward as slow waves and lift up the chromosphere, which produces spicules \citep{Hollweg_1982_ApJ,Shibata_1982SoPh}. From Figure \ref{fig:r-t_diagram}(e), the maximum height of the chromosphere is about 10 Mm and the period is about 5 minutes.
This is consistent with observations of spicules \citep{Beckers_1972ARA&A}. The arrows represent $v_r$, the Alfv\'{e}n speed, and the sound speed. A longitudinal shock wave propagates as a fast shock in $\beta\gg1$ and as a slow shock in $\beta\ll 1$. In Figure \ref{fig:r-t_diagram}, the density shading appears as a slow mode in $\beta\ll 1$. The height of the chromosphere increases with $\langle\delta v_{\parallel,\odot}\rangle$ because the slow shocks are stronger for larger longitudinal wave amplitudes. \begin{comment} \begin{figure*}[!t] \centering \includegraphics[width=16cm]{./figure/time_slice_zp.pdf} \caption{ \label{fig:r-t_diagram} The radial-time diagrams of $z_\perp^+/c_a$ in the lower atmospheres of BsV00, BsV06, BsV18, BwV00, BwV06 and BwV18. } \end{figure*} \end{comment} \section{Summary} \suzuki{T.K.S. is supported in part by Grants-in-Aid for Scientific Research from the MEXT of Japan, 17H01105 and 21H00033.} \newpage \begin{appendix} \section{Acoustic wave-driven wind}\label{sec:sound wave wind} \begin{figure}[htp] \begin{center} \includegraphics[width=7cm]{./figure/B0V06_prim.pdf} \end{center} \caption{\label{fig:tate} The time-averaged radial profile of B0V06. (a): Temperature. (b): Density. The black dotted and red solid lines represent the initial profile and the final profile, respectively. (c): Mass loss rate. Black dotted and solid lines represent negative (inflow) and positive (outflow) values, respectively. } \end{figure} We examine the properties of the atmosphere heated solely by acoustic waves. Figure \ref{fig:tate} presents the radial profile of the atmosphere for the case with only the longitudinal fluctuation, averaged from $t=3500$ min to $t=5000$ min. The acoustic waves that travel upward from the photosphere rapidly dissipate at low altitudes, $r/R_{\odot}-1\lesssim 0.1$.
Although the atmosphere is heated by the wave dissipation, the obtained temperature $\lesssim 2\times 10^5$ K is too low to maintain the hot corona with $T\gtrsim 10^6$ K (Figure \ref{fig:tate}(a)). As a result, the gas in the upper atmosphere does not stream out. Instead, it accretes onto the surface, which is seen as the increase (decrease) of the density at $r/R_{\odot}-1 <$($>$)$\,2$ (Figure \ref{fig:tate}(b)). The accretion occurs partially because the initial density in the upper region is higher than the hydrostatic density with $T\lesssim 10^5$ K. We note, however, that the initial density is lower by 5 orders of magnitude than that of the solar wind region; the initial density profiles of this model and of the other models are \begin{equation} \rho(r)={\rm MAX}\left(\rho_\odot e^{-\frac{r-R_\odot}{h}},\ 10^{-25}(r/R_\odot-1)^{-2.5}\right) \end{equation} and \begin{equation} \rho(r)={\rm MAX}\left(\rho_\odot e^{-\frac{r-R_\odot}{h}},\ 10^{-19}(r/R_\odot-1)^{-2.5}\right), \end{equation} respectively. This simulation demonstrates that even such low-density gas cannot be driven outward by the acoustic waves alone. Figure \ref{fig:tate}(c) shows the radial distribution of the mass loss rate $\dot{M}_w$. In a quasi-steady state, the mass loss rate is constant with respect to $r$, as follows from eq.(3). However, the simulation of this model does not reach the quasi-steady state because the time that it takes to reach the quasi-steady state is much longer than in the other models. The magnitude of the negative mass loss rate at $r/R_\odot-1 \gtrsim 1$ decreases with time. We can conclude that the coronal heating and the solar wind driving at the levels observed in the Sun and solar-type stars cannot be accomplished by sound waves from the photosphere alone. \end{appendix} \bibliographystyle{aasjournal}
\section{Introduction}\label{model} \subsection{Modeling Computation} \label{intro-model} There is an interplay between the design of algorithms, modeling computation, algorithm implementation, and actual hardware. An algorithm is a high-level description of a way of solving a problem, and we use a model of computation to measure the quality of an algorithm. The goal of a model of computation is to be general enough to reason with, while predicting runtimes that correlate well with actual runtimes on a computer. However, as computers evolve, hardware advances to the point where existing models of computation may no longer accurately predict runtimes on real computers, thus rendering so-called optimal algorithms as less than optimal. \textbf{RAM.} The majority of research on algorithms and data structures is done with what is known as the RAM model~\cite{DBLP:books/daglib/0023376}. This models a computer as something that can do computation on a constant number of data items at unit cost. For example, two numbers can be added to yield another in unit time. This is also the model taught to students: you break down the algorithm into constant-cost chunks and count the number of chunks executed. \textbf{DAM.} It was realized long ago that the RAM model is ill-suited to modeling computation when the data does not reside in memory. The Disk-Access Model (DAM) \cite{DBLP:journals/cacm/AggarwalV88}, also known as the external memory model, is the archetypical model of a two-level memory hierarchy; there is a computer with memory size $M$ and a disk which stores data in blocks of size $B$, and blocks of data can be read or written to the disk from the memory at unit cost. The underlying premise is that since a disk is so much slower than internal memory, counting the number of block transfers and ignoring all else is a good model of runtime. The classic data structure in the DAM model is the B-tree~\cite{DBLP:journals/csur/Comer79}.
This model ignores the fact that accessing adjacent blocks on a disk is in real life much faster than accessing two random blocks~\cite{DBLP:books/daglib/0022093}. \textbf{Cache-oblivious.} As the modern computer has evolved, it has become not just a two-level hierarchy but a multi-level hierarchy; from the registers to the disk or SSD there can easily be seven or more levels of cache. Each level typically has a smaller amount of faster memory than the previous one. The most successful attempt to model this hierarchy to date has been the cache-oblivious model~\cite{DBLP:journals/talg/FrigoLPR12}. In this model, analysis is done with the same method as in the DAM model, except that the values of $B$ and $M$ are unknown to the algorithm but are used in the analysis. Some DAM-model algorithms are trivially cache-oblivious; for example, scanning an array of size $N$ takes time $N/B \pm 1$ in both models. Other DAM-model structures, such as the $B$-tree, are completely dependent on the provision of a single $B$ to the algorithm, and thus cache-oblivious searching requires completely different methods \cite{DBLP:journals/siamcomp/BenderDF05,DBLP:journals/jal/BenderDIW04}. \textbf{Locality of reference.} The cache-oblivious model tries to capture one view of a modern machine. However, a disk does not actually have blocks of fixed size that are moved in at unit cost, where accessing two neighboring blocks costs the same as accessing two random blocks. Instead, the performance of a disk can be characterized by locality of reference, where accessing adjacent items costs much less than accessing random items. Additionally, the performance of a computer is not just about cache hits and misses; there are numerous additional hardware features such as address translation and prefetching that affect runtime, typically in ways that reward locality of reference~\cite{DBLP:books/daglib/0022093}.
The importance of locality of reference stems from the large disparity between the time to perform a memory access and the time to perform a computation~\cite{Hennessy:2011:CAF:1999263}. Poor locality results in many slow accesses to memory, which can cause performance loss for a range of reasons including instruction pipeline stalls and memory bandwidth bottlenecks. While a number of low-level optimizations, such as prefetching and branch prediction, aim to combat these issues, locality remains an important aspect of computer performance. Yet, despite oversimplifying the interplay between locality of reference and performance, the cache-oblivious model is a success, and has become the leading ``realistic'' model of computation used by algorithms researchers for non-parallel algorithm design. The central advice when designing algorithms in the cache-oblivious model is to maximize locality of reference. Algorithms designed in the cache-oblivious model also tend to be quite beautiful, with recursive constructions ensuring locality at different levels of granularity. The central idea of this work is to turn the design principle on its head. What if ``maximize locality of reference'' is not simply a design principle, but the formal goal? We show how to formalize locality of reference, and show that broad classes of asymptotically optimal cache-oblivious algorithms and algorithms that maximize locality of reference are one and the same. Thus, optimal cache-oblivious algorithms are asymptotically optimal not only on hardware consisting of multi-level caches as envisioned by the model, but on any hardware that rewards locality of reference, under some reasonable assumptions.
\subsection{Motivation}\label{intro-inspiration} The underlying thesis of the cache-oblivious model and this work is that there is some one-dimensional address space that a program reads and writes data from, and that the cost (runtime) is a function of the sequence of the locations accessed in this address space. This is also how one typically programs: we allocate memory and read and write to it, but we let the compiler and operating system manage exactly how this is optimized. In the RAM model, the runtime is simply the number of accesses performed (we use the word \emph{access} to denote either a read or a write). Call the memory locations accessed by the running of a program on an input an \emph{execution sequence}. Let $E=(e_1, e_2, \ldots e_{|E|})$ denote such an execution sequence, where $e_i$ is the memory location of the $i$-th access of $E$. In the RAM model we would say that $E$ takes time $|E|$, since each access has cost 1, regardless of location. In the cache-oblivious and DAM models, however, the cost of an access $e_i$ depends on the memory size, $M$, the block size, $B$, and the locations of accesses prior to $e_i$. When accessing element $e_i$, if the memory block of size $B$ containing $e_i$ is already in memory, then it costs 0 (and costs 1 otherwise). In this discussion (and in Section~\ref{querytype}) we assume that the accesses of $E$ are at non-decreasing memory locations (i.e., $e_{i-1} \leq e_i$), and we call these \emph{query-type} sequences. This lets us ignore the role of the memory size, and simplifies the discussion by making the resulting cache-oblivious cost depend only on the block size, $B$. In Section~\ref{with-memory}, we remove this assumption and consider algorithms that make use of memory. However, we note that query-type sequences are not uncommon in data structures, as one can view many structures as a topological sort into memory of some abstract DAG or tree.
For example, searching in one dimension has an optimal cache-oblivious cost of $O(\log_B{N})$, which is independent of $M$. The cache-oblivious/DAM cost (runtime) of a non-decreasing execution sequence, $E$, is $$ \co{B}{E} = \sum_{i=2}^{|E|}\mathrm{sgn} \left( \left| \left\lfloor \frac{e_i}{B} \right\rfloor - \left\lfloor \frac{e_{i-1}}{B} \right\rfloor \right| \right) $$ where sgn denotes the sign function which is 0 for 0 and 1 for all positive values. This formula simply computes the block number of the current and previous access, and charges 0 if they are the same and 1 if they are different. Let's make some simplifying assumptions to try to obtain something more elegant and without ceilings and floors. Instead of computing the cache-oblivious cost for a single block size $B$, which represents the runtime on a two-level memory hierarchy with block size $B$, we consider total cost over all levels of a multi-level memory hierarchy. This gives rise to the questions: what are the sizes of the levels, and what is the relative cost of a cache miss at each level? To keep things simple (but unrealistic), we start by assuming that our memory hierarchy, $\mathcal{B} = (B_0, B_1, B_2, \ldots)$, is a sequence of block sizes, $B_l$, such that there is a block size for every power of two (i.e., $B_l = 2^l$). We also assume that a cache miss at any level has equal unit cost. So, with all of these assumptions, what is the total cost of execution sequence $E$? $$ \sum_{i=2}^{|E|} \sum_{l=0}^{\infty} \begin{cases} 1 & \text{if $e_i$ and $e_{i-1}$ are in different blocks of size $2^l$} \\ 0 & \text{otherwise} \end{cases} $$ Finally, to simplify this expression, we assume that at each level, the blocks are not aligned so that memory locations $B-1$ and $B$ are in different blocks, but are rather randomly shifted so that any two adjacent memory locations have a $\frac{1}{B}$ chance of being in different blocks of size $B$. 
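Before simplifying further, the two-level cost $\co{B}{E}$ itself is easy to compute directly; a minimal sketch (with aligned blocks, i.e., before applying the random-shift assumption, and charging one unit per block change):

```python
def co_cost(E, B):
    """Cache-oblivious/DAM cost of a non-decreasing execution sequence E:
    one unit whenever the (aligned) block of size B changes."""
    return sum(1 for prev, cur in zip(E, E[1:]) if prev // B != cur // B)

# Scanning 16 consecutive locations with B = 4 crosses 3 block boundaries,
# matching the N/B +- 1 cost of a scan:
scan_cost = co_cost(list(range(16)), 4)
```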
At step $i$, location $e_i$ is being accessed and the previously accessed location is $e_{i-1}$. Let $d=e_i-e_{i-1}$ be the distance between $e_{i-1}$ and $e_i$. For all block sizes $B_l \leq d$, there will be a cache miss and the access will have cost 1. If $B_l > d$ then the chance of a cache miss is simply $\frac{d}{B_l}$. This gives $$ \sum_{i=2}^{|E|} \Bigg( \sum_{l=0}^{\lfloor \log (e_i-e_{i-1}) \rfloor } 1 + \overbrace{\sum_{l=\lfloor \log (e_i-e_{i-1}) \rfloor+1}^{\infty} \frac{e_i-e_{i-1}}{2^l}}^{\leq 2} \Bigg) \\ = \sum_{i=2}^{|E|} (\log (e_i-e_{i-1})+O(1)) $$ This is truly simple. If we call the distance in memory between two accesses a \emph{jump}, then the runtime is simply the sum of the logarithms of the jump distances. Doing algorithm analysis with such a formula is easy. For example, binary search would cost $\Theta(\log^2 N)$ worst-case in this model, a $B$-tree $\Theta(\log^{\frac{3}{2}} N)$ (assuming binary search within a node, and $B$ chosen to be $2^{\sqrt{\log N}}$ to optimize $O(\frac{\log^2 N}{\log B}+\log N \log B)$), and performing a predecessor search on a cache-oblivious van Emde Boas layout would cost $\Theta(\log N \log \log N)$ (1 jump of size $\leq N$, 2 jumps of size $\leq \sqrt{N}$, 4 jumps of size $\leq N^{1/4}$, \ldots). What about all of the assumptions that led to the above simplification? Changing the block sizes from the powers of two to some other geometric progression will just change the base of the logarithm. The assumption that cache misses at all levels incur the same cost, however, is very unrealistic. A more realistic assumption is that the costs would also form a geometric sequence, and if one follows through with the math, one obtains a total cost that is also a function of jump distances, but instead of a logarithm it is a root. But is this the right answer? We don't attempt to answer that.
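The sum-of-log-jumps cost is easy to experiment with. The sketch below uses $\ell(d)=\log_2(1+d)$ (our choice, so that a repeated access at distance $0$ costs $0$) and drops the additive $O(1)$ per access; the probe sequence is a simple worst-ish case for binary search that always recurses left, forcing long jumps.

```python
import math

def log_jump_cost(E):
    """Sum of log2(1 + jump distance) over consecutive accesses; the O(1)
    additive term per access from the derivation is dropped."""
    return sum(math.log2(1 + abs(cur - prev)) for prev, cur in zip(E, E[1:]))

def binary_search_probes(n):
    """Indices probed by a binary search for position 0 in an array of size n."""
    lo, hi, probes = 0, n - 1, []
    while lo <= hi:
        mid = (lo + hi) // 2
        probes.append(mid)
        hi = mid - 1  # always recurse left
    return probes

n = 2 ** 16
bsearch_cost = log_jump_cost(binary_search_probes(n))  # ~ (log2 n)^2 / 2
seq_cost = log_jump_cost(list(range(n)))               # exactly n - 1
```

The jump distances of the binary search roughly halve each step, so their logs sum to about $\frac{1}{2}\log^2 N$, while the sequential scan pays $\log_2 2 = 1$ per step.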
However, the above exercise does inspire us to believe that the runtime should be given as some function of the jump distances, which we call the \emph{locality function}, $\ell$. Thus, using $\ell$, we define the total \emph{locality of reference (LoR) cost} of execution sequence $E$ as \begin{align*} \jr{\ell}{E} &=\sum_{i=2}^{|E|}\ell(|e_i-e_{i-1}|) \end{align*} If the function $\ell(d)=1$, then this is just the RAM model. If the function $\ell(d)=d$, then this gives us a simplified model of the runtime on a tape drive. And, as we saw above, different assumptions could give $\ell(d)=\log d$ or $\ell(d)=\sqrt{d}$. We place several restrictions on the locality functions we consider. The locality functions should be non-negative, non-decreasing, and subadditive. These restrictions follow naturally from hardware that rewards locality of reference, where big jumps do not cost less than small ones, and accessing locations (with increasing values) $e_1, e_2, e_3$ should cost no less than accessing $e_1, e_3$. We claim that, for the appropriate choice of locality function, this is the proper way to model a computer that rewards locality of reference. But what is the right locality function to perform analysis with? We avoid this question by allowing the possibility that an algorithm can be optimal for all valid locality functions; call such algorithms \emph{locality-of-reference optimal} (LoR optimal). Such algorithms would be asymptotically optimal for all computers that reward locality of reference, regardless of specifics. For example, to sum all the numbers in an array, scanning it sequentially rather than any other way (such as via a randomly chosen permutation) is clearly the optimal way no matter what the specifics of how locality of reference is rewarded. The central result of this work is that an algorithm is LoR optimal if and only if it is optimal in the cache-oblivious model, for a broad class of non-pathological problems.
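A small sketch of the LoR cost with a pluggable locality function (note we charge per jump, so the $\ell(d)=1$ case below gives $|E|-1$ rather than $|E|$; the example sequence is arbitrary):

```python
def lor_cost(E, ell):
    """Locality-of-reference cost: sum of ell(|e_i - e_{i-1}|) over
    consecutive accesses of the execution sequence E."""
    return sum(ell(abs(cur - prev)) for prev, cur in zip(E, E[1:]))

accesses = [0, 10, 15, 15, 40]
ram_like = lor_cost(accesses, lambda d: 1)   # RAM model: unit cost per jump
tape_like = lor_cost(accesses, lambda d: d)  # tape-like: cost = distance moved
# For a non-decreasing sequence, the tape-like cost telescopes to last - first.
```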
This implies that all optimal cache-oblivious algorithms are optimal not just in the originally envisioned scenario of a multi-level memory hierarchy but, in fact, on any hardware whose performance is characterized by locality of reference. \subsection{Summary of Results}\label{intro-summary} The central focus of this work is presenting the \emph{locality of reference (LoR)} model, a computational model that looks at memory in a new way: the cost of a memory access is based on the proximity to a prior access via what we call a \emph{locality function}. A wide range of locality functions can be used to compute the cost, and certain locality functions correspond to well-known models (e.g., RAM, DAM, etc.). We first consider in Section~\ref{querytype} the LoR model in the context of algorithms that do not rely on memory, which we call ``query-type'' algorithms. We prove that, for a broad class of non-pathological problems, any query-type algorithm that is asymptotically cache-obliviously optimal is asymptotically optimal for \emph{all} subadditive locality functions, and vice versa. Thus, cache-obliviously optimal query-type algorithms have asymptotically optimal cost on any system that reasonably rewards locality of reference. In Section~\ref{with-memory} we generalize the LoR model to apply to algorithms that make use of locality with relation to not just the last access, but where an access should be fast if it is close in address space to an item that has been accessed recently (in time). To do so, we extend the idea of a locality function to a bidimensional locality function that includes both spatial and temporal components. We propose a model that computes the cost of an access by considering the cost of a jump from the two previous accesses with maximal locality (temporal and spatial) to the left and right.
We prove that, when using a specific locality function, this cost equates to the number of cache misses with the \emph{least recently used} cache replacement policy, which is within a factor of two of the cache-oblivious cost with half the memory size. Thus, we prove that, for algorithms that are not drastically affected by memory size, optimal cache-oblivious cost implies optimality in the LoR model, and vice versa. The cache-oblivious model analyzes algorithms in terms of a two-level system and does not look at the memory hierarchy as a whole. However, the LoR model is easily extended to multi-level hierarchies of caches. In Section~\ref{multilevel}, we define general classes of memory hierarchies and prove that, under some reasonable assumptions, the LoR model cost is asymptotically equivalent to the cache-oblivious cost in a multi-level hierarchy of caches, for any execution sequence (even those that are heavily impacted by memory size). \subsection{Related work} One related attempt, besides the previously described cache-oblivious model, is the Hierarchical Memory Model \cite{DBLP:conf/stoc/AggarwalACS87}. In this model, accessing memory location $x$ takes time $f(x)$. This was extended to a blocked version where accessing $k$ memory locations takes time $f(x)+k$ \cite{DBLP:conf/focs/AggarwalCS87}. In particular, the case where $f(x)=\log x$ was studied and optimality obtained for a number of problems. This model, through its use of the memory cost function $f$, bears a number of similarities to ours, and it is meant to represent a multi-level cache where the user manually controls the movement of data from slow to fast memory. However, while it is able to capture temporal coherence well, even in the blocked version it does not fully capture the idea of spatio-temporal locality of reference, where an access is fast because it was close to something accessed recently.
Another model that proposed analyzing algorithm performance on a multi-level memory hierarchy is the Uniform Memory Hierarchy model (UMH)~\cite{DBLP:conf/focs/AlpernCF90}. The UMH model is a multi-level variation of the DAM that simplifies analysis by assuming that the block size increases by a fixed factor at each level and that there is a uniform ratio between block and memory size at each level of the hierarchy. In Section~\ref{multilevel}, we show how similar assumptions can be applied to the LoR model as well, although the LoR model provides a general framework that extends to arbitrary hierarchies or systems that reward locality of reference. \section{Query-type algorithms}\label{querytype} We first consider algorithms that generate execution sequences that operate on strictly non-decreasing memory locations. These types of execution sequences are indicative of query or search algorithms, as queries often can make little use of memory accessed in recent queries. Here we introduce the \emph{Locality of Reference} (LoR) model in this simplified, yet practical, context, and extend the model to the general case of algorithms that make use of memory in Section~\ref{with-memory}. \subsection{Query-type LoR model} The query-type LoR model of computation assumes that, given an infinite array of memory and a sequence of accesses that are in non-decreasing memory locations $E = (e_0, e_1,\ldots)$ (an \emph{execution sequence}), the cost to access a memory location $e_i$ immediately after accessing location $e_{i-1}$ is a function of the \emph{spatial distance} $|e_i - e_{i-1}|$. The model uses a locality function, denoted as $\ell$, which maps the spatial distance to a cost. The cost to execute the sequence $E$ in the query-LoR model with locality function $\ell$ is denoted as $\jr{\ell}{E}$ and is simply $\sum_{i=1}^{|E|}\ell(|e_i-e_{i-1}|)$. We define the class of all locality functions, $\mathcal{L}$, as all subadditive functions with $\ell(0)=0$. 
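To make the definition concrete, the query-LoR cost of a sequence can be computed directly from the distances between consecutive accesses. The following is a minimal sketch (the function names are ours, and the two sample locality functions are illustrative members of $\mathcal{L}$):

```python
def query_lor_cost(E, ell):
    """Query-LoR cost J_ell(E): sum the locality function over the
    spatial distances between consecutive accesses of a
    non-decreasing execution sequence E."""
    return sum(ell(E[i] - E[i - 1]) for i in range(1, len(E)))

# Two subadditive locality functions with ell(0) = 0:
def ell_unit(d):
    """DAM-like cost with unit blocks: any nonzero move costs 1."""
    return 0 if d == 0 else 1

def make_capped_linear(B):
    """Capped-linear function min(1, d/B): short moves within a
    size-B neighborhood are proportionally cheap."""
    return lambda d: min(1, d / B)
```

For example, with $B=4$ the sequence $(0,1,5,6)$ has cost $\min(1,\tfrac14)+\min(1,1)+\min(1,\tfrac14)=1.5$ under the capped-linear function, while under the unit function every move costs 1.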
The restriction that functions be subadditive captures the idea that spatial locality of reference reduces the cost of an access. \subsection{External memory and cache-oblivious}\label{prelim} Given an execution sequence $E$, what is the cost in the cache-oblivious model? Recall that in this model, memory is split into blocks of size $B$, and that when a block not in memory is encountered it is brought into memory at unit cost. In the query-type setting, this happens only when the block currently being accessed differs from the previously accessed block. Let $\co{B}{E}$ denote the cache-oblivious cost to execute $E$ with block size $B$; this is $ \sum_{i=1}^{|E|} \begin{cases} 1 &\text{if } \lfloor \frac{e_{i}}{B}\rfloor \neq \lfloor \frac{e_{i-1}}{B} \rfloor \\ 0 &\text{otherwise} \end{cases} $. Note that the external memory (DAM) cost and the cache-oblivious cost to execute $E$ have the same definition; the only difference is that $B$ can be used in the definition of an external-memory algorithm, but cannot be used in that of a cache-oblivious algorithm. \subsection{Optimality for query-type algorithms}\label{uni-optimal} To discuss optimality in the context of the query-LoR model, we first provide formal definitions of problems, algorithms, and optimality. We define a problem $P$ to have a set of problem instances $P=\{I_1,I_2,\ldots\}$, where each instance corresponds to some input sequence for which the problem can be solved. We further define $\mathcal{I}^P_n$ to be the set of instances of $P$ that correspond to inputs of size $n$. An algorithm $A$ is a method to turn an instance $I$ into an execution sequence $E(A,I)$. We say that algorithm $A$ solves $P$ if, for every instance $I \in P$, the algorithm $A$ generates an execution sequence $E(A,I)$ that correctly solves (i.e., generates the correct output for) instance $I$. We define the set of all algorithms that correctly solve $P$ as $\mathcal{A}_P$.
For any algorithm $A$ that solves problem $P$, we define the worst-case LoR model runtime for input size $n$ and locality function $\ell$ as $ W_\ell(P,A,n) = \max_{I \in \mathcal{I}^P_n} (\jr{\ell}{E(A,I)})$. We say that an algorithm is \emph{LoR optimal} if, for every locality function, its worst-case cost is within a constant factor of the optimal: \begin{definition}\label{def:jumpopt} Algorithm $A_{\text{LoR}}$ is asymptotically LoR optimal for problem $P$ iff $$ \exists_{c,n_0} \forall_{n>n_0}\forall_{A \in \mathcal{A}_P}\forall_{\ell \in \mathcal{L}} { \Big[ W_\ell(P,A_{\text{LoR}},n) \leq c\cdot W_\ell(P,A,n) \Big ] } $$ \end{definition} The notion of a LoR optimal algorithm seems to be very powerful, as such an algorithm would be asymptotically optimal on any computing device that rewards locality of reference. Similarly, we formally define cache-oblivious optimality (CO optimality) for query-type algorithms. We say $W_B(P,A,n) = \max_{I \in \mathcal{I}^P_n}(\co{B}{E(A,I)})$ is the worst-case cache-oblivious cost of algorithm $A$ on instances of size $n$ of problem $P$, for a specific block size $B$. \begin{definition}\label{def:coopt} Algorithm $A_{\text{CO-Opt}}$ is asymptotically CO optimal iff $$ \exists_{c,n_0} \forall_{n>n_0} \forall_{B\geq 1} \forall_{A \in \mathcal{A}_P} \Big [ W_B(P,A_{\text{CO-Opt}},n) \leq c\cdot W_B(P,A,n) \Big ] $$ \end{definition} Many algorithms are known to be asymptotically cache-obliviously optimal~\cite{DBLP:journals/talg/FrigoLPR12}. For example, performing a predecessor query on a van Emde Boas layout is a CO optimal query-type algorithm. \subsection{$B$-stable problems}\label{sub:bstable} To show the equivalence between LoR optimal and CO optimal algorithms, we must avoid pathological problems whose worst-case behavior varies dramatically across instances for different block sizes.
We say that a problem $P$ is \emph{\emph{B}-stable\xspace} if, for any algorithm $A$ that solves $P$, there is some ``worst-case'' instance $I_w$ that, for every $B$, has CO cost asymptotically no less than the optimal worst-case cost for that $B$, over all instances. Formally, \begin{definition}\label{def:bstable} Problem $P$ is \emph{\emph{B}-stable\xspace} if, for any algorithm $A$ that solves $P$, \begin{align*} \exists_{c,n_0}\forall_{n>n_0}\exists_{I_w \in \mathcal{I}_n^P} \forall_{B\ge 1} \Big [ \min_{A' \in \mathcal{A}_n^P} W_B(P,A',n) \le c \cdot \co{B}{E(A,I_w)} \Big ] \end{align*} \end{definition} Intuitively, for any algorithm that solves a \emph{B}-stable\xspace problem, there must be a single instance that, for all block sizes, has cost no less than the asymptotically worst-case optimal cost. By Definition~\ref{def:coopt}, this implies that, if $P$ is \emph{B}-stable\xspace and $A_{\text{CO-Opt}}$ solves it CO-optimally, \begin{align*} \exists_{c,n_0}\forall_{n>n_0}\forall_{A \in \mathcal{A}_P}\exists_{I_w \in \mathcal{I}_n^P} \forall_{B\ge 1} \Big [ W_B(P,A_{\text{CO-Opt}},n) \le c \cdot \co{B}{E(A,I_w)} \Big ] \end{align*} That is, every algorithm must have an instance on which it performs no better, asymptotically, than $A_{\text{CO-Opt}}$ for every $B$. In Appendix~\ref{s:needbstable} we show that there are non-$B$-stable problems and that our main result does not hold for them; this justifies our classification and exclusion of these pathological cases. \subsection{Main result} We will find it useful to define a \emph{smoothed} version of an execution sequence. Given a sequence $E$, let $E_{B-\text{smooth}}$ be the sequence derived from $E$ by adding a shift $s$ to every element of $E$, where $s$ is a uniform random integer in the range $[0,B)$. We define $\coshift{B}{E}$ to be $\mathbb{E}[\co{B}{E_{B-\text{smooth}}}]$, the expectation taken over the random shift.
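Since the shift is uniform over only $B$ values, the smoothed cost is an exact expectation that can be computed by direct averaging. A minimal sketch (function names are ours):

```python
def query_co_cost(E, B):
    """Cache-oblivious cost of a non-decreasing query-type sequence:
    count the accesses that cross a block boundary."""
    return sum(1 for i in range(1, len(E)) if E[i] // B != E[i - 1] // B)

def smoothed_co_cost(E, B):
    """Smoothed cost: the expected CO cost of E_{B-smooth}, computed
    exactly by averaging over every shift s in [0, B)."""
    return sum(query_co_cost([e + s for e in E], B) for s in range(B)) / B
```

For $E=(0,3,4,9)$ and $B=4$ both costs equal 2: the unshifted sequence crosses two block boundaries, and each access with distance $d < B$ crosses a boundary for exactly $d$ of the $B$ shifts.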
With this definition in place: \begin{lemma}\label{co-sco} $\co{B}{E}$ and $\coshift{B}{E}$ are within a factor of two of each other, for any query-type execution sequence $E$. \end{lemma} \begin{proof} Shifting the execution sequence may cause accesses that were in one block to now be in two consecutive blocks, and accesses that were in two consecutive blocks to be in one block. Thus the cost may grow or shrink by at most a factor of two. \end{proof} The use of the smoothed version of $E$ allows us to show that there is a locality function that yields the same runtime in the LoR model as in the cache-oblivious model for a single $B$. Let $\ell_B(d) = \min\left(1,\frac{d}{B}\right)$, which is a valid locality function. Then: \begin{lemma}\label{CO-jb} For any query-type execution sequence $E$ and block size $B$, $\coshift{B}{E} = \jr{\ell_B}{E} $. \end{lemma} \begin{proof} Consider a single access $e_i \in E$. Let $d=e_{i}-e_{i-1}$. If $d\geq B$ then by definition $\ell_B(d)=1$, and the $i$th term of $\coshift{B}{E}$ is also 1, as $\lfloor \frac{e_{i}+s}{B}\rfloor \neq \lfloor \frac{e_{i-1}+s}{B} \rfloor$ for all $0\leq s < B$. If $d<B$ then by definition $\ell_B(d)=\frac{d}{B}$, and the $i$th term of $\coshift{B}{E}$ is also $\frac{d}{B}$, as $\lfloor \frac{e_{i}+s}{B}\rfloor \neq \lfloor \frac{e_{i-1}+s}{B} \rfloor$ for exactly $d$ of the possible shifts $s \in [0,B)$. \end{proof} \begin{corollary}\label{J_Implies_CO} If query-type algorithm $A$ is asymptotically LoR optimal for problem $P$, then it is asymptotically CO optimal for $P$. \end{corollary} \begin{proof} Since $A$ is LoR optimal, it is within a constant factor of optimal for all locality functions, including $\ell_B(d) = \min\left(1,\frac{d}{B}\right)$. Thus Lemma~\ref{CO-jb} together with Lemma~\ref{co-sco} proves the corollary.
\end{proof} While the above relies on specific $\ell_B(d)$ functions, we now show that \emph{any} locality function can be represented as a linear combination of $\ell_B(d)$, for various $B$: \begin{restatable}{lem}{lincomb}\label{lin_comb} For every locality function $\ell(d) \in \mathcal{L}$ there exist nonnegative constants $\alpha_1, \alpha_2, \ldots, \alpha_N$ and $\beta_1, \beta_2, \ldots, \beta_N$ such that $ \ell(d) = \sum_{i=1}^N \alpha_i \ell_{\beta_i}(d) $ for integers $d$ in $[1..N)$. \end{restatable} \end{onlymain} \begin{onlyproof} \begin{proof} Let $\gamma_i=2\ell(i) - \ell(i+1) - \ell(i-1)$, $\alpha_i=i\gamma_i$, and $\beta_i=i$. As $\gamma_i$ is nonnegative when $\ell$ is a concave function, the $\alpha$ and $\beta$ values are all nonnegative. Thus: \begin{align*} \sum_{i=1}^N \alpha_i \ell_{\beta_i}(x) &= \sum_{i=1}^N \min\lrp{\alpha_i,\frac{\alpha_ix}{\beta_i}} \\ &= \sum_{i=1}^N \min(i\gamma_i,\gamma_i x) \\ &=\overbrace{\sum_{i=1}^{x-1} i\gamma_i}^{A} + \overbrace{\gamma_x x}^{B} + \overbrace{\sum_{i=x+1}^N \gamma_i x}^{C} \intertext{Simplifying the $A$ term gives us} A &= \sum_{i=1}^{x-1} \Big [2\ell(i)i - \ell(i-1)i - \ell(i+1)i \Big ]\\ &= \sum_{i=1}^{x-1}2\ell(i)i - \sum_{i=0}^{x-2}\ell(i)(i+1) - \sum_{i=2}^{x}\ell(i)(i-1) \\ &= \sum_{i=2}^{x-2}\Big[ \ell(i)(2i-(i-1)-(i+1))\Big ] \\ &+ 2\ell(1) + 2\ell(x-1)(x-1) - \ell(0) - 2\ell(1) - \ell(x-1)(x-2) - \ell(x)(x-1) \intertext{Since $\ell(0)=0$, we get} &= \ell(x-1)(2x-2-x+2) - \ell(x)(x-1) \\ &= \ell(x-1)x - \ell(x)(x-1) \intertext{We now simplify the $C$ term from above} C &= \sum_{i>x}\gamma_i x \\ &= \sum_{i=x+1}^{N} \Big [ 2\ell(i)x - \ell(i+1)x - \ell(i-1)x \Big ] \\ &= \sum_{i=x+1}^{N} 2\ell(i)x - \sum_{i=x+2}^{N+1} \ell(i)x - \sum_{i=x}^{N-1} \ell(i)x \\ &= \sum_{i=x+2}^{N-1}\Big [ 2\ell(i)x - \ell(i)x - \ell(i)x \Big ] + 2\ell(x+1)x - \ell(x)x - \ell(x+1)x \\ &+ 2\ell(N)x - \ell(N)x - \ell(N+1)x \intertext{Since we are only considering $\ell(d)$ such that $d \in
[1\ldots N)$, we say that $\ell(N+1) = \ell(N)$, thus} C &= \ell(x+1)x - \ell(x)x \intertext{Combining the simplified terms, we get} \sum_{i=1}^N \alpha_i \ell_{\beta_i}(x) &= \overbrace{\ell(x-1)x - \ell(x)(x-1)}^{A} + \overbrace{\gamma_x x}^{B} + \overbrace{\ell(x+1)x - \ell(x)x}^{C} \\ &= \ell(x-1)x - \ell(x)(x-1) + \Big( 2\ell(x)x - \ell(x+1)x - \ell(x-1)x \Big) \\&+ \ell(x+1)x - \ell(x)x \\ &= 2\ell(x)x - \ell(x)x - \ell(x)(x-1) \\ &= \ell(x) \end{align*} \end{proof} \end{onlyproof} \begin{onlymain} \begin{restatable}{coro}{lincombsum}\label{lin_comb_sum} For every locality function $\ell(d) \in \mathcal{L}$ there exist sets of $n$ nonnegative constants $\alpha_{1}, \alpha_{2}, \ldots, \alpha_{n}$ and $\beta_{1}, \beta_{2}, \ldots, \beta_{n}$ such that, for any execution sequence $E$, $$ \sum_{i=1}^{n} \alpha_{i} \jr{\ell_{\beta_{i}}}{E} = \jr{\ell}{E} $$ \end{restatable} \begin{proof} The proof follows from applying Lemma~\ref{lin_comb} to each term of the sum defining the LoR cost of $E$. \end{proof} \end{onlymain} \begin{onlymain} \begin{restatable}{lem}{cojump}\label{CO_to_Jump} If query-type algorithm $A_{\text{CO-Opt}}$ is asymptotically CO optimal for a problem $P$ that is \emph{B}-stable\xspace, then it is asymptotically LoR optimal for $P$. \end{restatable} \end{onlymain} \begin{onlyproof} \begin{proof} Consider some algorithm $A$ that solves $P$.
Since $P$ is \emph{B}-stable\xspace, by Definition~\ref{def:bstable}, \begin{align*} &\exists_{c,n_0}\forall_{n>n_0}\exists_{I_w \in \mathcal{I}_n^P}\forall_{B\ge1} \Big [ W_B(P,A_{\text{CO-Opt}},n) \leq c \cdot \co{B}{E(A,I_w)} \Big ] \intertext{Using the definition of the worst-case cost $W_B$, we get} &\exists_{c,n_0}\forall_{n>n_0}\exists_{I_w \in \mathcal{I}_n^P}\forall_{B\ge1} \Big [ \max_{I \in \mathcal{I}^P_n} \co{B}{E(A_{\text{CO-Opt}},I)} \leq c \cdot \co{B}{E(A,I_w)} \Big ] \intertext{By Lemma~\ref{co-sco}, we get} &\exists_{c,n_0}\forall_{n>n_0}\exists_{I_w \in \mathcal{I}_n^P}\forall_{B\ge1}\Big [ \max_{I \in \mathcal{I}_n^P}\coshift{B}{E(A_{\text{CO-Opt}},I)} \le 4c \cdot \coshift{B}{E(A,I_w)} \Big ] \intertext{and since, by Lemma~\ref{CO-jb}, for any $B$ the smoothed CO cost is equivalent to the LoR cost with the corresponding $\ell_B$ function,} &\exists_{c,n_0}\forall_{n>n_0}\exists_{I_w \in \mathcal{I}_n^P}\forall_{B\ge1}\Big [ \max_{I \in \mathcal{I}_n^P}\jr{\ell_B}{E(A_{\text{CO-Opt}},I)} \le 4c \cdot \jr{\ell_B}{E(A,I_w)} \Big ] \intertext{This inequality holds for all $B$ and thus all linear combinations of various $B$. For any locality function $\ell$ in the set of valid locality functions, $\mathcal{L}$, consider $\alpha^\ell_1, \alpha^\ell_2, \ldots, \alpha^\ell_n$ and $\beta^\ell_1, \beta^\ell_2, \ldots, \beta^\ell_n$ given by Lemma~\ref{lin_comb}. 
We use the $\beta$'s as the $B$ values and the $\alpha$'s as the coefficients in the linear combination to get} &\exists_{c',n_0}\forall_{n>n_0}\exists_{I_w \in \mathcal{I}_n^P}\forall_{\ell \in \mathcal{L}} \\ \Big [ & \sum_{k=1}^{n} \max_{I \in \mathcal{I}_n^P} \Big( \alpha^{\ell}_k \jr{\ell_{\beta^{\ell}_k}}{E(A_{\text{CO-Opt}},I)}\Big) \le c' \sum_{k=1}^{n} \Big( \alpha^{\ell}_k \jr{\ell_{\beta^{\ell}_k}}{E(A,I_w)} \Big ) \Big ] \intertext{$I_w$ is a single instance of $P$, therefore it cannot have a greater total cost than the single instance that maximizes the cost} &\exists_{c',n_0}\forall_{n>n_0}\forall_{\ell \in \mathcal{L}} \\ \Big [ & \sum_{k=1}^{n} \max_{I \in \mathcal{I}_n^P} \Big (\alpha^{\ell}_k \jr{\ell_{\beta^{\ell}_k}}{E(A_{\text{CO-Opt}},I)} \Big ) \le c' \max_{I \in \mathcal{I}_n^P}\sum_{k=1}^{n}\Big( \alpha^{\ell}_k \jr{\ell_{\beta^{\ell}_k}}{E(A,I)} \Big) \Big ] \intertext{Moving the max outside the summation can only decrease the overall cost of the left side of the inequality, thus} &\exists_{c',n_0}\forall_{n>n_0}\forall_{\ell \in \mathcal{L}} \\ \Big [ & \max_{I \in \mathcal{I}_n^P} \sum_{k=1}^{n}\Big (\alpha^{\ell}_k \jr{\ell_{\beta^{\ell}_k}}{E(A_{\text{CO-Opt}},I)} \Big ) \le c' \max_{I \in \mathcal{I}_n^P}\sum_{k=1}^{n}\Big( \alpha^{\ell}_k \jr{\ell_{\beta^{\ell}_k}}{E(A,I)} \Big) \Big ] \intertext{Using Corollary~\ref{lin_comb_sum}, we get} &\exists_{c',n_0} \forall_{n>n_0}\forall_{\ell \in \mathcal{L}} \Big [ \max_{I \in \mathcal{I}^P_n} ( \jr{\ell}{E(A_{\text{CO-Opt}},I)}) \leq c' \max_{I \in \mathcal{I}^P_n} (\jr{\ell}{E(A,I)}) \Big ] \\ \intertext{We are considering an arbitrary algorithm $A$ that solves $P$, so this applies to all $A \in \mathcal{A}_P$. 
By the definition of worst-case LoR cost,} &\exists_{c',n_0} \forall_{n>n_0}\forall_{\ell \in \mathcal{L}} \forall_{A \in \mathcal{A}_P} \Big [ W_{\ell}(P,A_{\text{CO-Opt}},n) \leq c' W_{\ell}(P,A,n) \Big ] \end{align*} Thus, by Definition~\ref{def:jumpopt}, $A_{\text{CO-Opt}}$ is asymptotically LoR optimal. \end{proof} \end{onlyproof} \begin{onlymain} \begin{theorem} \label{t:main} Any query-type algorithm $A$ that solves a \emph{B}-stable\xspace problem $P$ is LoR optimal if and only if it is CO optimal. \end{theorem} \begin{proof} The proof follows naturally from Corollary~\ref{J_Implies_CO} and Lemma~\ref{CO_to_Jump}. \end{proof} In \S\ref{s:needbstable} of the appendix we prove and discuss the following, which justifies the restriction to \emph{B}-stable\xspace problems: \begin{lemma} There exists a problem $P$ that is not \emph{B}-stable\xspace and that has a CO optimal algorithm that is not LoR optimal. Thus Theorem~\ref{t:main} would not hold if the restriction to \emph{B}-stable\xspace problems were removed. \end{lemma} \end{onlymain} \section{General models for algorithms with memory}\label{with-memory} In the previous section, we considered execution sequences that were unidirectional, and thus the size of internal memory was irrelevant. Now we generalize the model and apply it to execution sequences that are not necessarily unidirectional. This requires that we consider the size and contents of internal memory when computing the expected cost of an access. \subsection{Cache-oblivious model and related memory models} \begin{comment} The general cache-oblivious model incorporates memory size, $M$, and block size $B$ when computing the cost of an execution sequence. The $\frac{M}{B}$ blocks stored in internal memory make up the \emph{working set}, and we define $\mathcal{W}_{M,B}(E,i)$ to be the working set after the $i$-th access of execution sequence $E$.
Thus, the cache-oblivious cost of execution sequence $E$ is $\co{M,B}{E}~=~\sum\limits_{i=1}^{|E|}~\begin{cases} 0 \text{ if } e_i \in \mathcal{W}_{M,B}(E,i{-}1) \\ 1 \text{ otherwise} \end{cases}$. We note that the cache-oblivious model assumes \emph{ideal cache replacement}~\cite{DBLP:journals/talg/FrigoLPR12}, so evictions from the working set are selected such that the total cost of $E$ is minimized. We also define the \emph{LRU working set}, $\mathcal{W}^{\textsc{lru}}_{M,B}(E,i)$, which assumes the \emph{least recently used} cache replacement policy~\cite{DBLP:books/daglib/0022093}. With this, we define the \emph{LRU cost} of execution sequence $E$ as $\lru{M,B}{E}~=~\sum\limits_{i=1}^{|E|}~\begin{cases} 0 \text{ if } e_i \in \mathcal{W}^{\textsc{lru}}_{^{M,B}}(E,i{-}1) \\ 1 \text{ otherwise} \end{cases}$. A more rigorous and formal definition of working set, cache replacement policies, and the cache-oblivious and LRU costs is included in \S\ref{formal-CO-LRU} of the appendix. As we did with the unidirectional cache-oblivious cost in Section~\ref{prelim}, we define \emph{smoothed} versions of these cost functions using $E_{B-\text{smooth}}$, which is execution sequence $E$ with a uniform random shift, $s$, in the range $[0,B)$ applied to every access. Specifically, we define $\coshift{M,B}{E} = E[\co{M,B}{E_{B-\text{smooth}}}]$ and $\lrushift{M,B}{E}=E[\lru{M,B}{E_{B-\text{smooth}}}]$. \end{comment} \end{onlymain} \begin{onlyapp} The general cache-oblivious model uses $M$ as the size of internal memory, with $\frac{M}{B}$ blocks being stored in internal memory at a given time, which we call the \emph{working set}. The working set is made up of blocks of contiguous memory, each containing $B$ elements. For a given block size, $B$, we enumerate the blocks of memory by defining the block containing element $e$ as $\bl{B}{e}$ (the $\left \lfloor \frac{e}{B}\right \rfloor$-th block). 
Formally, we define the working set \emph{after} the $i$-th access of execution sequence $E$ on a system with memory size $M$, block size $B$, and cache replacement policy $\mathcal{P}$ (formally defined below) as $\mathcal{W}_{M,B,\mathcal{P}}(E,i)$. For simplicity of notation, we refer to the working set after the $i$-th access simply as $\mathcal{W}_{i}$ when the other parameters ($M$, $B$, $\mathcal{P}$, and $E$) are unambiguous. When we access an element $e_i$, if the block containing $e_i$ is in the working set (i.e., $\bl{B}{e_i} \in \mathcal{W}_{i-1}$), it is a \emph{cache hit} and, in the cache-oblivious model, it has a cost of 0. However, if $\bl{B}{e_i}$ is not in the working set, it is a \emph{cache miss}, resulting in a cost of 1. On a cache miss, the accessed block, $\bl{B}{e_i}$, is loaded into memory, replacing an existing block chosen by the \emph{cache replacement policy}. We define a general cache replacement policy as a function that selects the block of the working set to evict when a cache miss occurs, i.e., for memory size $M$ and block size $B$: \begin{align*} \mathcal{P}_{M,B}(E,\mathcal{W},i) &= \begin{cases} \bl{B}{e_k} &\text{ if }|\mathcal{W}| = \frac{M}{B} \text{ and } \bl{B}{e_i} \not\in \mathcal{W} \\ \emptyset &\text{ otherwise} \end{cases}\\ \intertext{where $\mathcal{W}$ is the working set, $e_i$ and $e_k$ are the $i$-th and $k$-th accesses in sequence $E$, respectively, $k<i$, and $\bl{B}{e_k} \in \mathcal{W}$. For a given cache replacement policy and execution sequence $E$, we define the working set after the $i$-th access of $E$ as} \mathcal{W}_{i} &= \left( \mathcal{W}_{i-1} \backslash \mathcal{P}_{M,B}(E,\mathcal{W}_{i-1},i)\right ) \cup \bl{B}{e_i} \end{align*} where $\mathcal{P}_{M,B}(E,\mathcal{W}_{i-1},i)$ defines the block to be evicted and $\bl{B}{e_i}$ is the new block being added to the working set.
Since a cache miss results in a cost of 1 and a cache hit has cost 0, the total cost of execution sequence $E$ is simply: \begin{align*} C_{M,B,\mathcal{P}}(E) &= \sum_{i=1}^{|E|} \begin{cases} 0 & \text{if } \bl{B}{e_i} \in \mathcal{W}_{M,B,\mathcal{P}}(E,i{-}1) \\ 1 & \text{otherwise}\\ \end{cases} \\ \end{align*} For this work, we focus on the following cache replacement policies: \begin{itemize} \item $\PI{M,B}(E,\mathcal{W},i)$: The ideal cache replacement policy with internal memory size $M$ and block size $B$. The number of evictions (and cache misses) over execution sequence $E$ is minimized. This is equivalent to Belady's algorithm~\cite{Belady1966}, which evicts the block $\bl{B}{e_k}$ whose next access is farthest in the future among all blocks in $\mathcal{W}$. \item $\PLRU{M,B}(E,\mathcal{W},i)$: The least recently used (LRU) cache replacement policy with internal memory size $M$ and block size $B$. The evicted block, $\bl{B}{e_k}$, is the ``least recently used'' block in $\mathcal{W}$. That is, $\bl{B}{e_k}$ is selected such that no element in $\bl{B}{e_k}$ has been accessed more recently than the most recently accessed element of any other block in $\mathcal{W}$. \end{itemize} We define $\mathcal{W}^{\textsc{opt}}_{^{M,B}}(E,i)$ and $\mathcal{W}^{\textsc{lru}}_{^{M,B}}(E,i)$ as the working sets after the $i$-th access of sequence $E$, when using the ideal and LRU cache replacement policies, respectively. Thus, the cache-oblivious cost (using the ideal cache replacement policy) of performing the $i$-th access of $E$ on a system with memory size $M$ and block size $B$ is $\co{M,B}{E,i}~=~\begin{cases} 0 \text{ if } \bl{B}{e_i} \in \mathcal{W}^{\textsc{opt}}_{^{M,B}}(E,i{-}1) \\ 1 \text{ otherwise}\\ \end{cases} $ and the total cost for the entire execution sequence $E$ is $\co{M,B}{E} = \sum_{i=1}^{|E|} \co{M,B}{E,i}$.
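Both policies are straightforward to simulate at block granularity. The following sketch (helper names are ours) counts misses under LRU and under Belady's farthest-in-future rule; the latter realizes the ideal-policy cost for demand paging:

```python
from collections import OrderedDict

def lru_misses(E, M, B):
    """Cache misses of access sequence E under LRU with M/B block slots."""
    cap, cache, misses = M // B, OrderedDict(), 0
    for e in E:
        b = e // B
        if b in cache:
            cache.move_to_end(b)           # b becomes most recently used
        else:
            misses += 1
            if len(cache) == cap:
                cache.popitem(last=False)  # evict least recently used
            cache[b] = None
    return misses

def belady_misses(E, M, B):
    """Cache misses under Belady's rule: on a miss with a full cache,
    evict the block whose next access lies farthest in the future."""
    cap = M // B
    blocks = [e // B for e in E]
    nxt, last = [0] * len(blocks), {}
    for i in range(len(blocks) - 1, -1, -1):   # precompute next-use indices
        nxt[i] = last.get(blocks[i], float('inf'))
        last[blocks[i]] = i
    cache, misses = {}, 0                      # block -> index of its next use
    for i, b in enumerate(blocks):
        if b not in cache:
            misses += 1
            if len(cache) == cap:
                del cache[max(cache, key=cache.get)]
        cache[b] = nxt[i]
    return misses
```

On a sequential scan both policies pay only the compulsory miss per block; on reuse-heavy sequences Belady's count lower-bounds LRU's.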
We similarly define the cost with the LRU cache replacement policy for a single access $e_i$ and a total execution sequence $E$ as $\lru{M,B}{E,i}$ and $\lru{M,B}{E}$, respectively. \begin{theorem}\label{thm:CO-LRU} For any execution sequence $E$, memory size $M$, and block size $B$, the total number of cache misses using the LRU cache replacement policy with a memory twice the size ($2M$) is \emph{2-competitive} with the number of cache misses using the ideal cache replacement policy, i.e., $\lru{2M,B}{E} \leq 2\cdot \co{M,B}{E}$. \end{theorem} \begin{proof} This follows from the work of Sleator and Tarjan~\cite{DBLP:journals/cacm/SleatorT85}. \end{proof} As we did with the unidirectional cache-oblivious cost in Section~\ref{prelim}, we define \emph{smoothed} versions of these cost functions using $E_{B-\text{smooth}}$, which is execution sequence $E$ with a uniform random shift in the range $[0,B)$ applied to every access. Specifically, we define $\coshift{M,B}{E} = \mathbb{E}[\co{M,B}{E_{B-\text{smooth}}}]$ and $\lrushift{M,B}{E}=\mathbb{E}[\lru{M,B}{E_{B-\text{smooth}}}]$, the expectations taken over the random shift. \begin{comment} As we did with the uni-directional cache-oblivious cost in Section~\ref{prelim}, we define \emph{smoothed} versions of these cost functions. Our smoothed versions compute the cost as the expected cost of each access for a range of alignment shifts $s$ such that $0 \leq s < B$, i.e., \begin{align*} \coshift{M,B}{E,i} &= \frac{1}{B}\sum_{s=0}^{B-1} \co{M,B}{E_s,i} \intertext{where $E_s$ is execution sequence $E$ with a shift of distance $s$ applied to each access (i.e., for each $e_i \in E$, there is $(e_i + s) \in E_s$).
Thus, the total smoothed cost is} \coshift{M,B}{E} &= \sum_{i=1}^{|E|} \left (\frac{1}{B} \sum_{s=0}^{B-1} \co{M,B}{E_s,i} \right )\\ &= \frac{1}{B} \sum_{s=0}^{B-1} \co{M,B}{E_s} \intertext{Similarly, we define the smoothed LRU cost as} \lrushift{M,B}{E,i} &= \frac{1}{B} \sum_{s=0}^{B-1} \lru{M,B}{E_s,i} \\ \lrushift{M,B}{E} &= \frac{1}{B} \sum_{s=0}^{B-1} \lru{M,B}{E_s} \end{align*} \end{comment} \end{onlyapp} \begin{onlymain} \subsection{General LoR model}\label{bivariate-jump} To capture the concept of the working set for algorithms that use internal memory, we define bidimensional locality functions that compute LoR cost based on two dimensions: distance and time. This bidimensional locality function, $\ell(d,\delta)$, represents the cost of a jump from a \emph{source} element, $s$, to a \emph{target} element, $t$, where $d$ and $\delta$ are the \emph{spatial distance} and \emph{temporal distance}, respectively, between $s$ and $t$. This captures the concept of the working set by using ``time'' to determine whether the source element is in memory or not. If the source is temporally close (was accessed recently) and spatially close to the target, $t$, the resulting locality cost of the jump is small. For the general LoR model, we define a class of bidimensional locality functions as functions of the form $\ell(d, \delta) = \max(f(d), g(\delta))$, where $f(d)$ is subadditive and $g(\delta)$ is a \emph{0-1 threshold function}. That is, $g(\delta) = \begin{cases} 0 \text{ if } 0\leq \delta \leq x \\ 1 \text{ otherwise} \end{cases}$, for some value $x$. For any $k \leq i$, the bidimensional locality cost of a jump from source element $e_k$ to target $e_i$ in execution sequence $E$ is $\ell(|e_i - e_k|, t(E,i) - t(E,k))$, where $t(E,i)$ is the \emph{time} of the $i$-th access. For simplicity of notation, we define $\dist{E,k,i}$ to be the temporal distance between the $i$-th access ($e_i$) and $k$-th access ($e_k$), i.e., $\dist{E,k,i} = t(E,i)-t(E,k)$.
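As an illustration (the parameter values here are our own, chosen only for the example), one member of this class pairs a capped-linear spatial component with a temporal 0-1 threshold:

```python
def make_bidimensional_ell(B, tau):
    """ell(d, delta) = max(f(d), g(delta)), with f(d) = min(1, d/B)
    subadditive and g a 0-1 threshold at temporal distance tau."""
    def ell(d, delta):
        f = min(1, d / B)                   # spatial component
        g = 0 if 0 <= delta <= tau else 1   # temporal 0-1 threshold
        return max(f, g)
    return ell
```

Under such a function, a spatially nearby source yields a cheap jump only while it is still temporally close; once $\delta$ exceeds the threshold, the jump costs 1 regardless of spatial distance.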
Intuitively, we can think of $\dist{E,k,i}$ as the time from access $e_k$ to ``present'' when accessing $e_i$. We add the additional restriction to our definition of bidimensional locality functions that they cannot be more ``sensitive'' to temporal locality than spatial locality, i.e., for any locality function $\ell(d,\delta)=\max(f(d),g(\delta))$, $\forall_x [ f(x) \geq g(x) ]$. This corresponds to the \emph{tall cache assumption} that is typically used with the cache-oblivious model~\cite{DBLP:conf/stoc/BrodalF03,DBLP:journals/tcs/Silvestri08}. A more in-depth discussion of the tall cache assumption and how it relates to the LoR model is in Section~\ref{tall-cache}. We form our definition of time based on the amount of change that occurs to the working set. For example, if an access causes a block of $B$ elements to be evicted, we say that time increases by 1. Thus, time depends on the locality function, so we define the time of the $i$-th access of $E$, for the given locality function $\ell$ to be $\tj{\ell}{E,i} = \sum_{k=1}^{i-1} (\jr{\ell}{E,k})$. That is, the time of access $e_i \in E$ is simply the sum of costs of all accesses prior to $e_i$ in sequence $E$. We note that the time after the last access of $E$ is the total LoR cost (i.e., $\jr{\ell}{E} = \tj{\ell}{E,|E|+1}$). Unlike the query-type LoR cost, we cannot simply compute the cost of access $e_i$ using the distance from the previous access, $e_{i-1}$, since any of the prior accesses may be in the working set when accessing $e_i$. Furthermore, since we no longer consider only non-decreasing execution sequences, when accessing $e_i$, there may be accesses to both the \emph{left} and \emph{right} that could be in the same block as $e_i$. Therefore, computing $\jr{\ell}{E,i}$ using the locality function from a single source is insufficient to capture the idea of the working set, and a detailed example showing why this is the case is included in Section~\ref{no-onefinger}. 
We define the general LoR cost of access $e_i \in E$ as \begin{align*} \jr{\ell}{E, i} &= \max\Bigg( \overbrace{\min_{\substack{\forall L<i \text{ s.t.} \\ e_L \leq e_i}} \ell(e_i-e_L, \dist{E,L,i})}^{\text{left side}} + \overbrace{\min_{\substack{\forall R<i \text{ s.t.} \\ e_R \geq e_i}} \ell(e_R-e_i, \dist{E,R,i})}^{\text{right side}} - 1, 0\Bigg ) \end{align*} Intuitively, the LoR cost of access $e_i \in E$ is computed from the minimum-cost jumps from both the left side and the right side of $e_i$. We note that this generalizes the query-type LoR cost definition, as the locality function from source $e_R$ will always evaluate to 1 for non-decreasing accesses. This formulation has the added benefit that it lets us easily visualize an execution sequence in a graphical representation. We include a discussion of this graphical representation in Section~\ref{graphical}. \end{onlymain} \begin{onlyapp} \subsection{On the tall cache assumption}\label{tall-cache} Cache-oblivious algorithms are analyzed in terms of a memory size $M$ and block size $B$, and the tall cache assumption simply assumes that $M \ge B^2$. This assumption is required by many cache-obliviously optimal algorithms because they require that at least $B$ \emph{blocks} can be loaded into internal memory at a time. It has been proven that without the tall cache assumption, one cannot achieve cache-oblivious optimality for several fundamental problems, including matrix transposition~\cite{DBLP:journals/tcs/Silvestri08} and comparison-based sorting~\cite{DBLP:conf/stoc/BrodalF03}. Thus, we consider how this assumption is reflected in the LoR model, and whether we can gain insight into the underlying need for the tall cache assumption. Recall that our class of bidimensional locality functions consists of functions of the form $\ell(d,\delta)=\max(f(d),g(\delta))$, where $f$ is subadditive and $g$ is a 0-1 threshold function.
In Section~\ref{sec:jump-CO} we define the locality function that corresponds to a memory system with memory size $M$ and block size $B$ to be \begin{align*} \ell_{M,B}(d,\delta)&=\max\left (\min\left(1, \frac{d}{B}\right), \min\left(1, \left\lfloor\frac{\delta}{M/B}\right\rfloor \right) \right) \end{align*} thus, for this function, $f(d)=\min\left(1, \frac{d}{B}\right)$ and $g(\delta)=\min\left(1, \left\lfloor\frac{\delta}{M/B}\right\rfloor \right)$. The tall cache assumption states that $M \ge B^2$, or equivalently $\frac{M}{B} \ge B$. This is reflected in our locality function as the requirement that $\forall_{k\ge 0} [f(k) \ge g(k)]$. This restriction between $f$ and $g$ implies that $\ell$ cannot be more ``sensitive'' to temporal locality than spatial locality. That is, the LoR cost when spatial and temporal distance are equal will be computed from the spatial distance (i.e., $\ell(d,\delta) = f(d)$ if $d\ge\delta$). Additionally, this implies that $\ell(x,x)$ is subadditive. Intuitively, this tells us that, with the tall cache assumption, any algorithm that balances spatial and temporal locality of reference will not have performance limited by temporal locality. Many cache-obliviously optimal algorithms aim to balance spatial and temporal locality, thus requiring the tall cache assumption to achieve optimality. \end{onlyapp} \begin{onlymain} \end{onlymain} \begin{onlyapp} \subsection{A single LoR source does not represent the working set}\label{no-onefinger} In this section, we show that computing the general LoR cost using only a single source (with the minimum cost) is insufficient to represent the working set. Specifically, we show the potential discrepancy between such a formulation of LoR cost and the smoothed LRU cost. We formally define this single-source definition of LoR cost of accessing $e_i$ as $\jrh{\ell}{E,i} = \min_{k=1}^{i-1}\ell(|e_i - e_k|, \dist{E,k,i})$. 
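For concreteness, the locality function $\ell_{M,B}$ used throughout this section can be transcribed directly (a minimal Python sketch; the function name is ours):

```python
def ell_MB(d, delta, M, B):
    """The locality function corresponding to a memory system with
    memory size M and block size B: the maximum of a capped spatial
    term f(d) = min(1, d/B) and a 0-1 temporal threshold
    g(delta) = min(1, floor(delta / (M/B)))."""
    spatial = min(1.0, d / B)
    temporal = min(1.0, delta // (M / B))
    return max(spatial, temporal)
```

With $M=8$ and $B=2$, an access one cell away within the working set costs $\frac{1}{2}$, while an access whose temporal distance reaches $\frac{M}{B}=4$ pays the full cost of 1 regardless of its spatial distance.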
To show the discrepancy between this formulation and the LRU cost, we consider the specific locality function that corresponds to the LRU cost for a specific memory size $M$ and block size $B$: $\ell_{M,B}(d,\delta) = \max\left(\min\left(1,\frac{d}{B}\right), \min\left(1, \left\lfloor\frac{\delta}{M/B}\right\rfloor\right)\right)$. Given an array of elements, $a$, located in contiguous memory, we define $a[i]$ as the $i$-th element in array $a$. Consider an execution sequence $E$ that first accesses $a[0]$ and $a[B]$, then performs a series of \emph{stages} of accesses of elements within the range $[0,B]$. At the first stage, $a[B/2]$ is accessed. At the second stage, $a[B/4]$ and $a[3B/4]$ are accessed. At stage 3, $a[B/8]$, $a[3B/8]$, $a[5B/8]$ and $a[7B/8]$ are accessed, and so on for $\log{B}$ stages. By the tall cache assumption, we know that $M \ge 2B$, so at any stage $k$, the blocks containing elements accessed during the previous stage are in the working set. Thus, the LRU cost of execution sequence $E$ is \begin{align*} &\lrushift{M,B}{E} = \\ &\overbrace{\sum_{k=0}^{\log{B}-1}}^{\text{stages}} \overbrace{\sum_{i=0}^{2^k-1}}^{\text{accesses}} \frac{1}{B}\sum_{s=0}^{B-1}\begin{cases} 0 & \text{ if }\left \{\frac{(2i+1)\cdot B}{2^{k{+}1}}{+} s\right \}_B = \left ( \left\{ \frac{2i\cdot B}{2^{k{+}1}}{+}s\right\}_B \text{ or } \left\{\frac{(2i+2)\cdot B}{2^{k+1}}{+}s\right \}_B \right )\\ 1 & \text{ otherwise} \end{cases} \end{align*} For every access, all elements between $a[0]$ and $a[B]$ will always be in the working set, for every shift value $s$. Thus, the cost is 0 at each stage after the first two accesses, so $\lrushift{M,B}{E} =2$. The single-source LoR cost, $\jrh{\ell_{M,B}}{E}$, however, depends only on the single access in the working set with the smallest spatial distance (i.e., minimum LoR cost). 
Since, at each stage, the accesses from the previous stage have temporal distance $<\frac{M}{B}$, the temporal component of the locality function is always 0 and $\frac{d}{B}$ dictates the cost of each access. At each stage, the spatial distance, $d$, decreases by a factor of 2, thus \begin{align*} \jrh{\ell_{M,B}}{E} &= \overbrace{\sum_{k=0}^{\log{B}-1}}^{\text{stages}} \overbrace{\sum_{i=0}^{2^k-1}}^{\text{accesses}} \ell_{M,B}\left(\frac{B}{2^{k+1}}, 1\right) \\ &= \sum_{k=0}^{\log{B}-1} \left ( 2^k \cdot \ell_{M,B}\left(\frac{B}{2^{k+1}}, 1\right)\right) \intertext{using the locality function, $\ell_{M,B}(d, \delta)$ defined above, we get} \jrh{\ell_{M,B}}{E} &= \sum_{k=0}^{\log{B}-1} \left ( 2^k \cdot \frac{B/2^{k+1}}{B} \right ) \\ &= \sum_{k=0}^{\log{B}-1} \frac{1}{2} \\ &= \frac{\log{B}}{2} \end{align*} Thus, the single-source cost formulation does not generalize the LRU cost, while using two sources does (as we prove in Theorem~\ref{equiv-LR}). \end{onlyapp} \begin{onlymain} \end{onlymain} \begin{onlyapp} \subsection{Graphical visualization}\label{graphical} One additional benefit of using the bidimensional locality function based on spatial and temporal distance is that it allows us to visualize an execution sequence graphically. We consider a series of accesses in execution sequence $E$ as points in a 2-dimensional plane. The point representing access $e_i$ is plotted with the $x$ and $y$ coordinates corresponding to the spatial position, $e_i$, and the temporal position, $t(E,i)$, respectively. Figure~\ref{fig:visualize-smooth} illustrates this visualization for a series of accesses with locality function $\ell_{M,B}(d,\delta)$ (defined in Section~\ref{sec:jump-CO}). The cost of access $e_i$ is simply computed from the LoR cost with sources $e_L$ and $e_R$ (the previous accesses with the minimum locality function cost to the left and right, respectively). 
We can visually determine which previous accesses correspond to $e_L$ and $e_R$: if a previous access is outside the gray region (i.e., $\delta > \frac{M}{B}$ or $d > B$), the cost is 1. Otherwise, it is simply $\frac{d}{B}$. \begin{figure}[h] \center \includegraphics{visualize-smooth} \caption{Graphical visualization of accesses in the general LoR model with locality function $\ell_{M,B}$. At time of access $e_i$, all prior accesses within the past time $\frac{M}{B}$ (above the dashed horizontal line) have $g(\delta) = 0$. The locality cost to jump from any element outside the gray region has the maximum cost of 1. In this example, there is both an $e_L$ and $e_R$ with cost $<1$.} \label{fig:visualize-smooth} \end{figure} \begin{comment} ** Needs to be fixed for two-finger if we want it in appendix. ** \begin{figure}[t] \center \includegraphics{visualize-scan} \caption{Graphical visualization of accesses when repeatedly scanning $k$ elements in the jump model with jump function $j_{M,B}$, when $k>M$ (left) and $k \leq M$ (right). In both cases, we only need to consider the jump cost from $e_L$ and each access in the initial scan has cost $j(1,\frac{1}{B})$ (the minimum finger is the previously accessed element). If $k > M$ (left) the first element is not in the working set when starting the second scan (access $e_k$ has cost $1$) and each subsequent access also costs $j(1,\frac{1}{B})$. When $k \leq M$ (right), every access after the first scan has cost 0 because it is in the working set.} \label{fig:visualize-scan} \end{figure} This method of visualization provides additional intuition when analyzing an execution sequence. For example, consider an execution sequence $E$ that repeatedly scans $k$ contiguous elements in memory. With the jump function $\ell_{M,B}(d,\delta) = \max\left (\min\left(1, \frac{d}{B}\right), \min\left(1, \left\lfloor\frac{\delta }{M/B}\right\rfloor\right) \right)$, we can see, graphically, how $k$ affects the overall cost of $E$. 
Figure~\ref{fig:visualize-scan} illustrates the graphical representation of $E$ when $k > M$ (left) and when $k \leq M$ (right). For this access pattern, we only need to consider the jump cost from $e_L$, since the cost from $e_R$ will always be 1. When $k>M$, each access has cost $\ell_{M,B}(1,\frac{1}{B})$ because $\ell_{M,B}(1,\frac{1}{B}) < \ell(0,\frac{k}{B})$ (every element is evicted from memory before it is accessed again). When $k \leq M$, after the first scan every element has temporal distance $<\frac{M}{B}$ (is in the working set), so $\ell_{M,B}(0,\delta) < \ell(1,\frac{1}{B})$ and the cost of each access is 0. \end{comment} \end{onlyapp} \begin{onlymain} \subsection{Equivalence to cache-oblivious cost}\label{sec:jump-CO} As with the query-type LoR model, we can define a specific class of locality functions that correspond to the cost in the cache-oblivious model. For the general LoR model we define $\ell_{M,B}(d,\delta)~=~\max\left (\min\left(1, \frac{d}{B}\right), \min\left(1, \left\lfloor\frac{\delta}{M/B}\right\rfloor \right) \right)$ to be the bidimensional locality function that corresponds to the cache-oblivious cost with memory size $M$ and block size $B$. \begin{restatable}{thrm}{equivLR}\label{equiv-LR} For any execution sequence $E$, $\jlrk{E}{M,B} = \lrushift{M,B}{E}$, thus $$\coshift{M,B}{E} \leq \jlrk{E}{M,B} \leq 2\cdot \coshift{\frac{M}{2},B}{E}$$ \end{restatable} \end{onlymain} \begin{onlyproof} \begin{proof} To prove that $\jlrk{E}{M,B} = \lrushift{M,B}{E}$, we consider the cost of performing access $e_i \in E$. Assume that, when accessing $e_i$, $e_L$ is the nearest element to the \emph{left} of $e_i$ ($e_L \leq e_i$) that is in the working set, i.e., $L<i$, $e_L \in \mathcal{W}^{\textsc{lru}}_{^{M,B}}(E,i-1)$, and $e_i-e_L$ is minimized. If there is no such element to the left of $e_i$ in the working set, then we say that $e_L = -\infty$. 
Similarly, assume that $e_R$ is the nearest element in $\mathcal{W}^{\textsc{lru}}_{^{M,B}}(E,i-1)$ to the \emph{right} of $e_i$ ($e_i \leq e_R$), and if there is no such element, then $e_R= +\infty$. By this definition, for any access $e_i$, we can simply consider the spatial components, because, if no element is within temporal distance $\frac{M}{B}$, the spatial distance is $\infty$ and the $\ell_{M,B}$ cost is 1. We consider three possible cases for the spatial distance of $e_L$ and $e_R$ from access $e_i$: \\ \noindent \textbf{Case 1: }$(e_i-e_L) \ge B$ AND $(e_R-e_i)\ge B$ In this case, there is no element in the working set within distance $B$ of $e_i$, so, for all alignment shifts, $0 \leq s < B$, we know that $e_i \not\in \mathcal{W}^{\textsc{lru}}_{M,B}(E,i-1)$. Thus, \begin{align*} \lrushift{M,B}{E,i} &= E[\co{M,B}{E_{B-\text{smooth}},i}] \\ &= \frac{1}{B}\sum_{s=0}^{B-1} 1 \\ &= 1 \intertext{and the LoR model cost is} \jlrk{E,i}{M,B} &= \max\left( \ell_{M,B}(e_i-e_L, \dist{E,i,L}) + \ell_{M,B}(e_R - e_i, \dist{E,i,R}) - 1, 0 \right ) \\ &= \max( 1 + 1 - 1, 0) = 1 \end{align*} We note that this includes the cases where $e_L$ and/or $e_R$ do not exist, since we set $e_L = -\infty$ and/or $e_R=\infty$, respectively, in such cases. \\ \noindent \textbf{Case 2: }$(e_i - e_L) < B$ OR $(e_R-e_i) < B$ In this case, only one side (left or right) is within distance $B$ of $e_i$. W.l.o.g., assume that $(e_R - e_i) < B$ and $(e_i - e_L) \ge B$. Since $(e_i - e_L) \ge B$, for all shifts $0\leq s < B$, we know that $e_L \not\in \mathcal{W}^{\textsc{lru}}_{M,B}(E,i-1)$. 
Thus, the smoothed LRU cost is simply \begin{align*} \lrushift{M,B}{E,i} &= \frac{1}{B}\sum_{s=0}^{B-1} \begin{cases} 1 & \text{ if } \lfloor\frac{e_R+s}{B}\rfloor \neq \lfloor\frac{e_i+s}{B}\rfloor \\ 0 & \text{ otherwise} \end{cases}\\ &= \frac{e_R - e_i}{B} \intertext{The LoR cost is} \jlrk{E,i}{M,B} &= \max\left( \min\left(1, \frac{e_i - e_L}{B}\right) + \min\left(1,\frac{e_R - e_i}{B}\right) - 1, 0 \right ) \\ &= \max\left( 1 + \frac{(e_R - e_i)}{B} - 1, 0\right) = \frac{(e_R - e_i)}{B} \end{align*} A symmetric argument holds in the case where $(e_i - e_L) < B$ and $(e_R-e_i) \geq B$. \\ \noindent \textbf{Case 3: }$(e_i - e_L) < B$ AND $(e_R-e_i) < B$ Both $e_L$ and $e_R$ are within distance $B$ of $e_i$, so the smoothed LRU cost depends on the number of alignment shifts, $s$, for which $e_i$ is not in the same block as either $e_L$ or $e_R$, i.e., \begin{align*} \lrushift{M,B}{E,i} &= \frac{1}{B}\sum_{s=0}^{B-1} \begin{cases} 1 & \text{ if } \lfloor\frac{e_i+s}{B}\rfloor \neq \lfloor\frac{e_R+s}{B}\rfloor \text{ \emph{and} } \lfloor\frac{e_i+s}{B}\rfloor \neq \lfloor\frac{e_L+s}{B}\rfloor \\ 0 & \text { otherwise} \end{cases} \intertext{For simplicity, assume that at alignment shift $s=0$, $e_i$ is in the last location of the block of size $B$. Thus, the shifts from $s=0$ to $s=(B-1)$ define a $2B$ range around $e_i$ (i.e., $[e_i-B, e_i+B]$). We define $p(e_L)$ and $p(e_R)$ to be the indexes of $e_L$ and $e_R$ in this $2B$ range, respectively. For all $0\leq s \leq p(e_L)$, $e_i$ is in the same block as $e_L$. 
Similarly, for all $(p(e_R)-B) \leq s < B$, $e_i$ is in the same block as $e_R$. Thus, the cost is simply the number of shifts, $s$, where the entire block of size $B$ containing $e_i$ is strictly between $p(e_L)$ and $p(e_R)$, i.e.,} \lrushift{M,B}{E,i} &= \frac{1}{B} \sum_{s=p(e_L)}^{p(e_R)-B} 1 \\ &= \frac{p(e_R) - B - p(e_L)}{B} \intertext{and, since the cost cannot be negative, this becomes} &= \max\left(\frac{p(e_R) - p(e_L)}{B} - 1, 0\right) \intertext{We know that $p(e_R) = B + e_R - e_i$ and $p(e_L) = B - (e_i - e_L)$, thus} \lrushift{M,B}{E,i} &= \max\left(\frac{(B + e_R - e_i) - (B - (e_i - e_L))}{B} - 1, 0\right) \\ &= \max\left(\frac{e_R - e_i}{B} + \frac{e_i - e_L}{B} - 1, 0\right) \intertext{Since both $e_L$ and $e_R$ are within distance $B$ of $e_i$, this is equal to the LoR cost, i.e.,} \jlrk{E,i}{M,B} &= \max\left( \min\left(1, \frac{e_i - e_L}{B}\right) + \min\left(1,\frac{e_R - e_i}{B}\right) - 1, 0 \right ) \\ &= \max\left(\frac{e_i - e_L}{B} + \frac{e_R - e_i}{B} - 1, 0\right) \intertext{Thus, for any access $e_i \in E$,} \jlrk{E,i}{M,B} &= \lrushift{M,B}{E,i} \end{align*} where $\ell_{M,B}(d,\delta) =\max\left (\min\left(1, \frac{d}{B}\right), \min\left(1, \left\lfloor\frac{\delta}{M/B}\right\rfloor \right) \right)$. Since they are equivalent for any access, $e_i \in E$, then for any execution sequence $E$, \begin{align*} \jlrk{E}{M,B} &= \lrushift{M,B}{E} \end{align*} Since the cache-oblivious cost is computed assuming ideal cache replacement, and LRU cache replacement with twice the memory is 2-competitive with ideal cache, we have $$\coshift{M,B}{E} \leq \lrushift{M,B}{E} = \jlrk{E}{M,B} \leq 2\cdot \coshift{\frac{M}{2},B}{E}$$ \end{proof} \end{onlyproof} \begin{onlymain} \noindent While the above theorem gives us a relation between the cache-oblivious cost and LoR cost, it does not give us an asymptotic equivalence for the same $M$. 
This stems from the discrepancy between the CO cost and LRU cost for the same memory size and is due to the cache-oblivious model using ideal cache replacement. We define a class of algorithms that we call memory-smooth\xspace, for which this issue does not impact asymptotic cost. Intuitively, the class of memory-smooth\xspace algorithms excludes those that are tuned for a particular memory size. \begin{definition}\label{def:cosmooth} Algorithm $A$ is memory-smooth\xspace if and only if increasing the memory size by a constant factor does not asymptotically change the runtime. That is, for any execution sequence $E_{A}$ generated by $A$, $$ \forall_{c>0} \forall_{B\ge 1} \forall_{M \ge B^2} \Big[ \coshift{M,B}{E_{A}} = \Theta(\coshift{c\cdot M,B}{E_{A}}) \Big] $$ \end{definition} \end{onlymain} \begin{onlyapp} \iffalse \section{Problems that are not memory-smooth\xspace}\label{non-cosmooth} To illustrate what memory-smooth\xspace problems and algorithms are, we provide an example of an algorithm that is clearly not memory-smooth\xspace. Problem $P$ is defined as follows: given an input of $n$ elements, repeatedly scan the first $k<n$ elements (where $k$ is a fixed parameter of the problem definition) a total of $2^n$ times. Algorithm $A$ solves this by simply scanning the first $k$ elements repeatedly. The smoothed cost of an execution sequence $E_A$ generated by $A$ has cost \begin{align*} \lrushift{M,B}{E_A} = \begin{cases} \left\lceil\frac{k\cdot 2^n}{B}\right\rceil &\text{if } M < k \\ \left\lceil\frac{k}{B}\right\rceil &\text{otherwise} \end{cases} \end{align*} Thus, how $k$ relates to $M$ has a significant impact on the cost. Furthermore, this algorithm can have drastically different LRU and cache-oblivious costs. Consider the case where $M = k-1$. With LRU cache replacement, every element will be evicted before it is accessed again, incurring a cost of $\lrushift{M,B}{E_A} = \left\lceil \frac{k\cdot 2^n}{B}\right\rceil$. 
Ideal cache replacement, however, can simply keep the first $k-1$ elements in memory at all times and incur a single cache miss per scan, for a cost of $\coshift{M,B}{E_A} = \left\lceil\frac{k}{B}\right\rceil + 2^n$. \fi \end{onlyapp} \begin{onlymain} \begin{lemma}\label{bivariate:equal} If algorithm $A$ is memory-smooth\xspace, then for any $M$, $B$, and execution sequence $E_A$ generated by $A$, $\jlrk{E_A}{M,B} = \Theta(\coshift{M,B}{E_A})$. \end{lemma} \begin{proof} If $A$ is memory-smooth\xspace, then $\coshift{M,B}{E_A} = \Theta(\coshift{\frac{M}{2},B}{E_A})$. Thus, by Theorem~\ref{equiv-LR}, $\jlrk{E_A}{M,B} = \Theta(\coshift{M,B}{E_A})$. \end{proof} \begin{comment} ** Maybe move to appendix if can explain simply and correctly ** \begin{corollary}\label{cosmooth-basic} If the cache-oblivious cost of an algorithm depends, asymptotically, only on the size of the input $n$ and block size $B$, then it is memory-smooth\xspace. \end{corollary} \begin{proof} If the cache-oblivious cost of algorithm $A$ only depends on the input size $n$ and the block size $B$, then it does not depend on $M$. Therefore, for any $M$ and $B$, $\coshift{M,B}{E_A} = \coshift{2M,B}{E_A}$, so $A$ is memory-smooth\xspace. \end{proof} \begin{lemma} Any cache-oblivious algorithm $A$ that has cost $\coshift{M,B}{E_A} = \frac{f(n,B)}{g(M)}$ is memory-smooth\xspace if $g(M)$ is subadditive. \end{lemma} \begin{proof} Consider algorithm $A$ that generates execution sequence $E_A$ with cache-oblivious cost $\coshift{M,B}{E_A} = \frac{f(n,B)}{g(M)}$, for some subadditive function $g(M)$. Since $f(n,B)$ does not depend on $M$, \begin{align*} \coshift{2M,B}{E_A} &= \frac{f(n,B)}{g(2M)} = \frac{f(n,B)}{C\cdot g(M)} = \frac{\coshift{M,B}{E_A}}{C} \end{align*} where $1\leq C \leq 2$. We have that $\coshift{2M,B}{E_A} = \frac{f(n,B)}{C\cdot g(M)}$, thus $\coshift{M,B}{E_A} = \Theta(\coshift{2M,B}{E_A})$ and $A$ is memory-smooth\xspace. 
\end{proof} \end{comment} \subsection{Algorithm optimality} We can extend our definitions of optimality from Section~\ref{uni-optimal} to include algorithms that use memory. We simply say that Definition~\ref{def:jumpopt} applies to bidimensional locality functions as well. For general definitions of LRU optimality and cache-oblivious optimality, we define $W^{\textsc{LRU}}_{M,B}(P,A,n) = \max_{I \in \mathcal{I}_n^P} (\lrushift{M,B}{E(A,I)})$ to be the worst-case LRU cost and $W_{M,B}(P,A,n) =$ $\max_{I \in \mathcal{I}_n^P} (\coshift{M,B}{E(A,I)})$ to be the worst-case cache-oblivious cost of algorithm $A$ on problem instances of size $n$ of problem $P$, for specific block size $B$ and memory size $M$. \begin{definition}\label{def:coopt-bivar} Algorithm $A_{\text{CO-Opt}}$ is asymptotically CO-optimal iff, \begin{align*} \exists_{c,n_0} \forall_{n>n_0} \forall_{B\geq 1} \forall_{M\geq B^2} \forall_{A \in \mathcal{A}_P} \Big [ W_{M,B}(P,A_{\text{CO-Opt}},n) \leq c\cdot W_{M,B}(P,A,n) \Big ] \end{align*} \end{definition} We note the restriction that $M \geq B^2$ corresponds to the ``tall-cache'' assumption that is typical of cache-obliviously optimal algorithms~\cite{DBLP:journals/tcs/Silvestri08,DBLP:conf/stoc/BrodalF03}. \begin{definition}\label{def:lruopt-bivar} Algorithm $A_{\text{LRU-Opt}}$ is asymptotically LRU-optimal iff, \begin{align*} \exists_{c,n_0} \forall_{n>n_0} \forall_{B\geq 1} \forall_{M\geq B^2} \forall_{A \in \mathcal{A}_P} \Big [ W^{\textsc{LRU}}_{M,B}(P,A_{\text{LRU-Opt}},n) \leq c\cdot W^{\textsc{LRU}}_{M,B}(P,A,n) \Big ] \end{align*} \end{definition} \begin{comment} To discuss how optimality relates between existing cost models and the jump model, we precisely define optimality for each. 
Given problem $P$ and algorithm $\textsc{OPT}^{\textsc{co}}_{^{M,B}}$ that solves $P$ with optimal (minimal) cache-oblivious cost for memory size $M$ and block size $B$, \begin{definition} Algorithm $A$ is cache-obliviously optimal (\emph{CO optimal}) if, for \emph{any} memory size $M$ and block size $B$, \begin{align*} \coshift{M,B}{E_A} &= \Theta(\coshift{M,B}{E_{\textsc{OPT}}}) \end{align*} where $E_A$ and $E_{\textsc{OPT}}$ are execution sequences generated by $A$ and $\textsc{OPT}^{\textsc{co}}_{^{M,B}}$, respectively, that solve the same instance of problem $P$. \end{definition} Similarly, we define $\textsc{OPT}^{\textsc{lru}}_{^{M,B}}$ to be the algorithm that solves $P$ with the minimum smoothed LRU cost for memory size $M$ and block size $B$. \begin{definition} Algorithm $A$ is \emph{LRU optimal} if, for \emph{any} memory size $M$ and block size $B$, \begin{align*} \lrushift{M,B}{E_A} &= \Theta(\lrushift{M,B}{E_{\textsc{OPT}}}) \end{align*} where $E_A$ and $E_{\textsc{OPT}}$ are execution sequences generated by $A$ and $\textsc{OPT}^{\textsc{lru}}_{^{M,B}}$, respectively, that solve the same instance of problem $P$. \end{definition} Finally, we define $\textsc{OPT}_{j}$ to be the algorithm that solves problem $P$ with the minimum modified jump cost for specific jump function $j(\cdot,\cdot)$. \begin{definition} Algorithm $A$ is \emph{Jump optimal} if, for any bivariate, subadditive/threshold jump function $j(\cdot,\cdot)$, \begin{align*} \jlr{E_A} = \Theta(\jlr{E_{\textsc{OPT}}}) \end{align*} where $E_A$ and $E_{\textsc{OPT}}$ are execution sequences that solve the same instance of problem $P$ and are generated by $A$ and $\textsc{OPT}_{j}$, respectively. \end{definition} \end{comment} \begin{comment} \begin{lemma}\label{jump-LRU} For any problem $P$, if algorithm $A$ solves $P$ and is jump optimal, it is also LRU-optimal. 
\end{lemma} \begin{proof} If algorithm $A$ is optimal for all bivariate subadditive/threshold jump functions, then it is optimal for jump function $j_{M,B}$, for any $M$ and $B$. By Theorem~\ref{equiv-LR}, it follows that $A$ is LRU optimal for any $M$ and $B$. \end{proof} \end{comment} \begin{comment} ** Don't think this observation is necessary. ** \begin{observation}\label{M-or-B} If algorithm $A$ is LRU optimal, then it is asymptotically optimal for every combination of $M$ and $B$. Therefore, for a single, fixed memory size $M=m$, $A$ is optimal for every block size $B$. Conversely, for a single, fixed block size $B=b$, it is optimal for every memory size $M$. \john{This needs to be stated better. What do you mean by optimal for fixed memory size, etc?}\ben{Is this clearer? Trying to say that we can keep either $M$ or $B$ fixed and $A$ is optimal for all values of the other. This is used in the proof of the next lemma.} \end{observation} \end{comment} \begin{comment} \begin{lemma}\label{LRU-jump} For any problem $P$, if algorithm $A$ solves $P$ and is LRU optimal, then it solves $P$ jump optimally. \end{lemma} \end{comment} \begin{restatable}{thrm}{lrujumpopt}\label{lru-jump-optimal} Algorithm $A$ that solves \emph{B}-stable\xspace problem $P$ is LRU optimal if and only if it is LoR optimal. \end{restatable} \end{onlymain} \begin{onlyproof} \begin{proof} If algorithm $A_{\text{LoR}}$ is optimal for all bidimensional locality functions, then it is optimal for locality functions $\ell_{M,B}$, for any $M$ and $B$. By Theorem~\ref{equiv-LR}, it follows that $A_{\text{LoR}}$ is LRU optimal for any $M$ and $B$. 
\begin{align*} \intertext{To prove that LRU optimal algorithms are also LoR optimal, we consider problem $P$ and algorithm $A_{\text{LRU}}$ that solves $P$ with optimal LRU cost, i.e.,} &\exists_{c,n_0} \forall_{n>n_0} \forall_{B\geq 1} \forall_{M\geq B^2} \forall_{A \in \mathcal{A}_P} \Big [ W^{\textsc{LRU}}_{M,B}(P,A_{\text{LRU}},n) \leq c\cdot W^{\textsc{LRU}}_{M,B}(P,A,n) \Big ] \intertext{And by the definition of the worst-case cost $W$,} &\exists_{c,n_0} \forall_{n>n_0} \forall_{B\geq 1} \forall_{M\geq B^2} \forall_{A \in \mathcal{A}_P} \\ &\Big [ \max_{I \in \mathcal{I}^P_n} (\lru{M,B}{E(A_{\text{LRU}},I)}) \leq c\cdot \max_{I \in \mathcal{I}^P_n} (\lru{M,B}{E(A,I)}) \Big ] \intertext{Since $P$ is \emph{B}-stable\xspace, there is some instance $I_w \in \mathcal{I}_n^P$ for each $A$ such that} &\exists_{c,n_0} \forall_{n>n_0} \forall_{B\geq 1} \forall_{M\geq B^2} \forall_{A \in \mathcal{A}_P} \Big [ \max_{I \in \mathcal{I}^P_n} (\lru{M,B}{E(A_{\text{LRU}},I)}) \leq c (\lru{M,B}{E(A,I_w)}) \Big ] \intertext{and by Lemma~\ref{co-sco},} &\exists_{c,n_0} \forall_{n>n_0} \forall_{B\geq 1} \forall_{M\geq B^2} \forall_{A \in \mathcal{A}_P} \\ &\Big [ \frac{1}{2}\max_{I \in \mathcal{I}^P_n} (\lrushift{M,B}{E(A_{\text{LRU}},I)}) \leq 2c (\lrushift{M,B}{E(A,I_w)}) \Big ] \intertext{therefore, by Theorem~\ref{equiv-LR},} &\exists_{c,n_0} \forall_{n>n_0} \forall_{B\geq 1} \forall_{M\geq B^2} \forall_{A \in \mathcal{A}_P} \Big [ \max_{I \in \mathcal{I}^P_n} (\jlrk{E(A_{\text{LRU}},I)}{M,B}) \leq 4c (\jlrk{E(A,I_w)}{M,B}) \Big ] \intertext{Since this inequality holds for all $\ell_{M,B}$ functions, we define a series of such functions that we use to represent any bidimensional locality function. Recall that $\ell_{M,B}$ functions are of the form } &\ell_{M,B}(d,\delta)=\max\left (\min\left(1, \frac{d}{B}\right), \min\left(1, \left\lfloor\frac{\delta}{M/B}\right\rfloor \right) \right) \end{align*} where $B\ge 1$ and $M\geq B^2$. 
Consider an arbitrary bidimensional locality function $\ell(d,\delta) = \max(f(d), g(\delta))$. By Lemma~\ref{lin_comb}, we can represent the $f(d)$ component by a linear combination of $n$ query-type $\ell_B$ functions (and therefore using the spatial component of $\ell_{M,B}$ functions). By our definition of bidimensional locality functions, $g(\delta) = \left\lfloor \frac{\delta}{x}\right\rfloor$, for some integer $x$. Thus, we simply set the temporal component of every one of our $\ell_{M,B}$ functions to be $g(\delta)$. For a given bidimensional locality function $\ell$, we define $\ell^{\ell}_{k}$ to be the $k$-th such $\ell_{M,B}$ function that we use to represent it, i.e., $\ell^{\ell}_{k} = \max(\alpha^{\ell}_k \ell_{\beta^{\ell}_k}, g(\delta))$.\footnote{We note that, because we are limited to $M \geq B^2$ for our $\ell_{M,B}$ functions, we can only construct functions where $f(k) \geq g(k)$, for all $k>0$. However, our definition of bidimensional locality functions includes this restriction, as it corresponds to the tall cache assumption (discussed in Section~\ref{tall-cache}). 
} Thus, we have \begin{align*} &\exists_{c',n_0} \forall_{n>n_0} \forall_{A \in \mathcal{A}_P} \forall_{\ell \in \mathcal{L}} \Big [ \sum_{k=1}^{n} \max_{I \in \mathcal{I}^P_n} \Big(\jr{\ell^{\ell}_{k}}{E(A_{\text{LRU}},I)} \Big) \leq c' \sum_{k=1}^{n} \Big(\jr{\ell^{\ell}_{k}}{E(A,I_w)} \Big) \Big ] \intertext{Instance $I_w$ cannot result in greater cost than the instance that maximizes the total cost, so} &\exists_{c',n_0} \forall_{n>n_0} \forall_{A \in \mathcal{A}_P} \forall_{\ell \in \mathcal{L}} \\ &\Big [ \sum_{k=1}^{n} \max_{I \in \mathcal{I}^P_n} \Big(\jr{\ell^{\ell}_{k}}{E(A_{\text{LRU}},I)} \Big) \leq c' \max_{I \in \mathcal{I}^P_n} \sum_{k=1}^{n} \Big(\jr{\ell^{\ell}_{k}}{E(A,I)} \Big) \Big ] \intertext{Moving the max outside of the summation can only decrease the cost of the left hand side of the inequality, thus} &\exists_{c',n_0} \forall_{n>n_0} \forall_{A \in \mathcal{A}_P} \forall_{\ell \in \mathcal{L}} \\ &\Big [ \max_{I \in \mathcal{I}^P_n} \sum_{k=1}^{n} \Big(\jr{\ell^{\ell}_{k}}{E(A_{\text{LRU}},I)} \Big) \leq c' \max_{I \in \mathcal{I}^P_n} \sum_{k=1}^{n} \Big(\jr{\ell^{\ell}_{k}}{E(A,I)} \Big) \Big ] \intertext{ The proof of Corollary~\ref{lin_comb_sum} applies, giving us} &\exists_{c'',n_0} \forall_{n>n_0} \forall_{A \in \mathcal{A}_P} \forall_{\ell \in \mathcal{L}} \Big [ \max_{I \in \mathcal{I}^P_n} (\jlr{E(A_{\text{LRU}},I)}) \leq c''\cdot \max_{I \in \mathcal{I}^P_n} (\jlr{E(A,I)}) \Big ] \intertext{Using our definition of the worst-case LoR cost,} &\exists_{c'',n_0} \forall_{n>n_0} \forall_{A \in \mathcal{A}_P} \forall_{\ell \in \mathcal{L}} \Big [ W_{\ell}(P,A_{\text{LRU}},n) \leq c'' \cdot W_{\ell}(P,A,n) \Big ] \end{align*} Therefore, any LRU optimal algorithm is also LoR optimal. \end{proof} \end{onlyproof} \begin{onlymain} \begin{restatable}{thrm}{cosmoothoptiff}\label{cosmooth-opt-iff} If algorithm $A$ is memory-smooth\xspace and solves \emph{B}-stable\xspace problem $P$, then it is CO optimal if and only if it is LoR optimal. 
\end{restatable} \begin{proof} Since the cache-oblivious model assumes ideal cache replacement, for any execution sequence $E$, \begin{align*} \coshift{M,B}{E} &\leq \lrushift{M,B}{E} \leq 2\cdot \coshift{\frac{M}{2},B}{E} \intertext{Assume algorithm $A$ is memory-smooth\xspace and solves problem $P$. Then for any execution sequence $E_A$ generated by $A$,} \coshift{\frac{M}{2},B}{E_A} &= \Theta(\coshift{M,B}{E_A}) \intertext{Therefore,} \lrushift{M,B}{E_A} &= \Theta(\coshift{M,B}{E_A}) \end{align*} Since the LRU cost and CO cost are asymptotically equivalent for every execution sequence generated by $A$, then $A$ is asymptotically LRU optimal if and only if it is asymptotically CO optimal and, by Theorem~\ref{lru-jump-optimal}, $A$ is LoR optimal if and only if it is CO optimal. \end{proof} \begin{comment} \begin{theorem}\label{EM-CO} For any memory-smooth\xspace problem $P$ and algorithm $A$ that solves it, $A$ is CO optimal if and only if it is EM optimal and memory-smooth\xspace. \ben{Kind of complicated. A better way of saying this?} \end{theorem} \begin{proof} It is straightforward to prove that, if $A$ is CO optimal, it must be EM optimal and memory-smooth\xspace. If $A$ is CO optimal, it is optimal for any $M$ and $B$ and is therefore EM optimal. Furthermore, if problem $P$ is memory-smooth\xspace and $A$ is optimal, then by definition $A$ must be memory-smooth\xspace. Next we prove the inverse: if $A$ is EM optimal and memory-smooth\xspace, then it is CO optimal. For the sake of contradiction, assume that $A$ is EM optimal and memory-smooth\xspace, but is not CO optimal. 
Since it is EM optimal, it is optimal for some memory size $M=m$ and block size $B=b$, \begin{align*} \coshift{m,b}{E_A} &= \Theta(\coshift{m,b}{E_{OPT}}) \intertext{and if $A$ is not CO optimal, there is some $M=m'$ for which it is not asymptotically optimal, i.e.,} \coshift{m',b}{E_A} &= \omega(\coshift{m',b}{E_{OPT}}) \intertext{where $E_{OPT}$ is generated by the algorithm $\textsc{OPT}^{\textsc{co}}_{^{M,B}}$ and solves the same instance of problem $P$ as $E_A$. $A$ is memory-smooth\xspace, so increasing $M$ by a factor $c$ decreases the cost by $\Theta(c)$, so} \coshift{m,b}{E_A} &= \Theta\left(\frac{m'}{m} \cdot \coshift{m',b}{E_A}\right) \\ &= \omega\left(\frac{m'}{m}\cdot \coshift{m',b}{E_{OPT}}\right) \intertext{However, if the problem $P$ is memory-smooth\xspace, then the optimal algorithm, $\textsc{OPT}^{\textsc{co}}_{^{M,B}}$, must be memory-smooth\xspace, thus} \coshift{m,b}{E_A} &= \omega(\coshift{m,b}{E_{OPT}}) \intertext{which contradicts our statement that $A$ is EM optimal for $M=m$. A symmetric proof can be shown for block size, $B$. Thus, if $A$ is EM optimal and memory-smooth\xspace, it must be optimal for all $M$ and $B$ and is therefore CO optimal.} \end{align*} \end{proof} Intuitively, the above Theorem tells us that for any problem that is memory-smooth\xspace, any algorithm that is memory-smooth\xspace and optimal for a single memory size $M$ and block size $B$ is optimal for \emph{every} $M$ and $B$ (i.e., CO optimal). Thus, for any EM optimal algorithm $A$, if we can prove that it is memory-smooth\xspace and the underlying problem is also memory-smooth\xspace, then we $A$ must be CO optimal. \subsubsection{Case study: X is CO optimal} Theorem~\ref{EM-CO} provides us with a powerful tool to prove that algorithms are CO optimal. 
\ben{Example showing how we can easily prove if something is CO optimal or not?} \end{comment} \subsection{Modeling a multi-level memory hierarchy}\label{multilevel} We define a memory hierarchy $\mathcal{H} = (H_1, H_2, \ldots, H_{|\mathcal{H}|})$ to be a sequence of triples, where each triple represents a level of the memory hierarchy. The $i$-th triple, $H_i = (M_i, B_i, C_i)$ represents the memory size, block size, and relative cost of an access at the $i$-th level of the memory hierarchy, respectively. Thus, the total cache-oblivious cost of execution sequence $E$ on memory hierarchy $\mathcal{H}$ is $\coshift{\mathcal{H}}{E} = \sum_{H_i \in \mathcal{H}}(C_i \cdot \coshift{M_i,B_i}{E})$, and we similarly define $\lrushift{\mathcal{H}}{E}$ to be the LRU cost of $E$ on memory hierarchy $\mathcal{H}$. \begin{restatable}{thrm}{hierarchycoopt}\label{hierarchy-coopt} If algorithm $A_{\text{CO-Opt}}$ is CO optimal and solves \emph{B}-stable\xspace problem $P$, then it is asymptotically optimal for any memory hierarchy, $\mathcal{H}$, i.e., $$\exists_{c,n_0} \forall_{n>n_0} \forall_{\mathcal{H}} \forall_{A \in \mathcal{A}_P} \Big [ \max_{I \in \mathcal{I}^P_n} (\coshift{\mathcal{H}}{E(A_{\text{CO-opt}},I)}) \leq c\cdot \max_{I \in \mathcal{I}^P_n} (\coshift{\mathcal{H}}{E(A,I)}) \Big ]$$ \end{restatable} \end{onlymain} \begin{onlyproof} \begin{proof} If $A_{\text{CO-Opt}}$ is asymptotically CO optimal, then by definition, \begin{align*} &\exists_{c,n_0} \forall_{n>n_0} \forall_{B\geq 1} \forall_{M\geq B^2} \forall_{A \in \mathcal{A}_P} \\ &\Big [ \max_{I \in \mathcal{I}^P_n} (\co{M,B}{E(A_{\text{CO-Opt}},I)}) \leq c\cdot \max_{I \in \mathcal{I}^P_n} (\co{M,B}{E(A,I)}) \Big ] \intertext{And by Lemma~\ref{co-sco},} &\exists_{c,n_0} \forall_{n>n_0} \forall_{B\geq 1} \forall_{M\geq B^2} \forall_{A \in \mathcal{A}_P} \\ &\Big [ \max_{I \in \mathcal{I}^P_n} (\coshift{M,B}{E(A_{\text{CO-Opt}},I)}) \leq 4c\cdot \max_{I \in \mathcal{I}^P_n} (\coshift{M,B}{E(A,I)}) \Big ] 
\intertext{It follows from the proof of Theorem~\ref{lru-jump-optimal} that if $P$ is \emph{B}-stable\xspace, $A_{\text{CO-Opt}}$ is also asymptotically optimal for a weighted maximum of CO costs, for different $M$ and $B$ values. Therefore,} &\exists_{c,n_0} \forall_{n>n_0} \forall_{\mathcal{H}} \forall_{A \in \mathcal{A}_P} \Big [ \max_{I \in \mathcal{I}^P_n} (\coshift{\mathcal{H}}{E(A_{\text{CO-Opt}},I)}) \leq 4c\cdot \max_{I \in \mathcal{I}^P_n} (\coshift{\mathcal{H}}{E(A,I)}) \Big ] \end{align*} \end{proof} \end{onlyproof} \begin{onlymain} \noindent In the LoR model, all features of a single level of a memory hierarchy are represented by a bidimensional locality function, $\ell$. Thus, in the LoR model we define a memory hierarchy as a set of \emph{weighted} locality functions, $\mathcal{L}_{\mathcal{H}}$, i.e., $\mathcal{L}_{\mathcal{H}} = \{ C \cdot \ell_{M,B} | (M,B,C) \in \mathcal{H} \}$. The LoR cost of execution sequence $E$ on a hierarchy represented by $\mathcal{L}_{\mathcal{H}}$ is $\jlrmulti{E} = \sum_{\ell \in \mathcal{L}_{\mathcal{H}}}(\jlrk{E}{})$. \begin{theorem} For any execution sequence $E$, $\lrushift{\mathcal{H}}{E} = \jlrmulti{E}$. \end{theorem} \begin{proof} It follows from Lemma~\ref{equiv-LR} that, if the cost at every level is equal, the total cost for any memory hierarchy $\mathcal{H}$ must be equal. \end{proof} \subsubsection{Geometrically increasing hierarchy} On modern computers, the block size, memory size, and cost per access increase by an order of magnitude or more at each level of the memory hierarchy. Thus, we consider the family of hierarchies that have geometrically increasing parameters. Specifically, we consider hierarchies such that, for all $H_i \in \mathcal{H}$, $B_{i+1} = c\cdot B_i$, where $c \ge 2$. We define functions relating memory and cost to the block size: $M_i = \mu(B_i)$ and $C_i = \gamma(B_i)$, where $\mu(B)$ and $\gamma(B)$ are \emph{non-decreasing} for all levels of the hierarchy.
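To make the weighted hierarchy cost concrete, the following Python sketch simulates LRU block misses at each level and sums the level costs $C_i \cdot Q_{M_i,B_i}(E)$. This is an illustrative sketch only (it counts plain LRU misses, not the shifted cost used in the proofs), and the names \texttt{lru\_misses} and \texttt{hierarchy\_cost} are ours.

```python
from collections import OrderedDict

def lru_misses(accesses, M, B):
    """Count LRU block misses for a sequence of addresses on a cache of
    M/B lines, each holding a block of B consecutive addresses."""
    lines = M // B
    cache = OrderedDict()  # block id -> None, kept in LRU order
    misses = 0
    for a in accesses:
        blk = a // B
        if blk in cache:
            cache.move_to_end(blk)       # refresh recency
        else:
            misses += 1
            cache[blk] = None
            if len(cache) > lines:
                cache.popitem(last=False)  # evict least-recently-used block
    return misses

def hierarchy_cost(accesses, H):
    """Total cost on a hierarchy H = [(M_i, B_i, C_i), ...]: the
    C_i-weighted sum of per-level miss counts."""
    return sum(C * lru_misses(accesses, M, B) for (M, B, C) in H)
```

A sequential scan of $n$ addresses incurs roughly $n/B_i$ misses at level $i$, so its hierarchy cost is $\sum_i C_i \lceil n/B_i \rceil$, matching the definition above.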
\begin{restatable}{lem}{hierarchycomp}\label{hierarchy-2competitive} For any execution sequence $E$ and geometrically increasing memory hierarchy $\mathcal{H}$, the maximum cost difference between any two levels of the memory hierarchy bounds the difference between the cache-oblivious cost and LoR cost, i.e., $$\jlrmulti{E} = O\left(\max_{i=2}^{|\mathcal{H}|} \left(\frac{C_{i}}{C_{i-1}}\right) \cdot \coshift{\mathcal{H}}{E} \right)$$ \end{restatable} \end{onlymain} \begin{onlyproof} \begin{proof} By Lemma~\ref{equiv-LR}, we have that $\jlrk{E}{M,B} \leq 2\cdot \coshift{\frac{M}{2},B}{E}$. Decreasing $B$ can only increase the total cost, so \begin{align*} \jlrk{E}{M,B} &\leq 2\cdot \coshift{\frac{M}{2},\frac{B}{2}}{E} \intertext{Since $\mathcal{H}$ is a memory hierarchy with $B$ and $M$ increasing geometrically with each level,} \jlrk{E}{M_{i+1},B_{i+1}} &\leq 2\cdot \coshift{M_{i},B_{i}}{E} \intertext{Using the total cost including the relative cost of each level of the hierarchy ($C_i = \gamma(B_i)$), we get} \jlrmulti{E} &= \sum_{\ell_i \in \mathcal{L}_{\mathcal{H}}}\left (\jlrk{E}{i}\right) \\ &= \sum_{i=1}^{|\mathcal{H}|} \left ( C_i \cdot \jlrk{E}{M_i,B_i} \right) \\ &\leq 2 \cdot \sum_{i=2}^{|\mathcal{H}|}\left (C_{i} \cdot \coshift{M_{i-1},B_{i-1}}{E}\right) \\ &\leq 2 \cdot \sum_{i=2}^{|\mathcal{H}|}\left (\frac{C_{i}}{C_{i-1}} \cdot C_{i-1} \cdot \coshift{M_{i-1},B_{i-1}}{E}\right) \\ &\leq 2\cdot \max_{i=2}^{|\mathcal{H}|} \left(\frac{C_{i}}{C_{i-1}}\right) \cdot \sum_{i=2}^{|\mathcal{H}|}\left ( C_{i-1} \cdot \coshift{M_{i-1},B_{i-1}}{E}\right) \\ &\leq 2\cdot \max_{i=2}^{|\mathcal{H}|} \left(\frac{C_{i}}{C_{i-1}}\right) \cdot \coshift{\mathcal{H}}{E} \\ &= O\left(\max_{i=2}^{|\mathcal{H}|} \left(\frac{C_{i}}{C_{i-1}}\right) \cdot \coshift{\mathcal{H}}{E} \right) \end{align*} \end{proof} \end{onlyproof} \begin{onlymain} \begin{restatable}{thrm}{hierarchypolycost}\label{hierarchy-polycost} For any geometrically increasing memory hierarchy,
$\mathcal{H}$, if $\gamma(B) = B^c$ for any constant $c$, then $\jlrmulti{E} = \Theta(\coshift{\mathcal{H}}{E})$, for any execution sequence $E$. \end{restatable} \end{onlymain} \begin{onlyproof} \begin{proof} By Lemma~\ref{hierarchy-2competitive}, \begin{align*} \jlrmulti{E} &= O\left(\max_{i=2}^{|\mathcal{H}|}\left(\frac{\gamma(B_{i})}{\gamma(B_{i-1})}\right) \cdot \coshift{\mathcal{H}}{E} \right) \intertext{If $\gamma(B) = B^c$ for some constant $c$, then the largest two levels of the hierarchy maximize the difference, so} &= O\left(\frac{\gamma(B_{|\mathcal{H}|})}{\gamma(B_{|\mathcal{H}|-1})} \cdot \coshift{\mathcal{H}}{E} \right) \intertext{Since $\mathcal{H}$ is geometrically increasing, $B_{|\mathcal{H}|} = d \cdot B_{|\mathcal{H}|-1}$ for some constant $d$. Thus, if $\gamma(B) = B^c$ for some constant $c$, then} \frac{\gamma(B_{|\mathcal{H}|})}{\gamma(B_{|\mathcal{H}|-1})} &\leq \frac{(B_{|\mathcal{H}|})^c}{(B_{|\mathcal{H}|-1})^c} \\ &= \frac{(d \cdot B_{|\mathcal{H}|-1})^c}{(B_{|\mathcal{H}|-1})^c} \\ &= d^c \intertext{$d$ and $c$ are constants, so $d^c$ is also a constant, therefore,} \jlrmulti{E} &= O(d^c \cdot \coshift{\mathcal{H}}{E}) \\ &= O(\coshift{\mathcal{H}}{E}) \intertext{The cache-oblivious cost is computed assuming ideal cache replacement, so it cannot be greater than the LoR cost, therefore} \jlrmulti{E} &= \Theta(\coshift{\mathcal{H}}{E}) \end{align*} \end{proof} \end{onlyproof} \begin{onlymain} \begin{comment} \begin{restatable}{thrm}{hierarchyonefinger}\label{hierarchy-onefinger} \ben{Remove this? If we want to keep it, need to separately define the one-finger vs. two-finger jump costs above.} If $\mathcal{J}$ is a set of jump functions that correspond to a geometrically increasing memory hierarchy and $\gamma(B) = O(B)$ and is non-decreasing, then the one-finger jump cost is sufficient to compute the cache-oblivious cost, i.e. $\jmulti{E} = \Theta(\coshift{\mathcal{H}}{E})$.
\end{restatable} \end{comment} \end{onlymain} \begin{onlyproof} \begin{comment} \begin{proof} We begin by focusing on a single level of the memory hierarchy ($l$) and a single access, $e_i$. Again, consider the three cases from the proof of Lemma~\ref{equiv-LR}. For case 1, since no element is in the same block as $e_i$, using only the nearest element, $e_j$, will result in the same cost (i.e., $\jr{j_l(\cdot,\cdot)}{E,i}=1$). For case 2, only one of $e_L$ or $e_R$ is within $B_l$ distance of $e_i$. Assume that $e_L$ is within distance $B_l$. Since $e_j$ is the closest element on either side, $e_j = e_L$ and the resulting cost will be $j_l(|e_i-e_j|, \dist{E,i,j})$, which is equivalent to the cost using both $e_L$ and $e_R$. For case 3, both $e_L$ and $e_R$ impact the cache-oblivious cost and, therefore, the expected cost on one level of the memory hierarchy will be inaccurate if we compute it using only the nearest element, $e_j$. However, we are concerned with the total cost of access $e_i$ across all levels of the memory hierarchy. Let $s$ be the level of the memory hierarchy with the smallest block size, $B_s$, such that $e_j$ is within distance $B_s$ (i.e., $|e_i - e_j| < B_s$). The one-finger jump cost at level $s$ is \begin{align*} \jr{j_s(\cdot,\cdot)}{E,i} &\le \frac{B_s - 1}{B_s} \intertext{All levels of the hierarchy with smaller block size have a cost of 1. For each level of the hierarchy with a larger block size, the cost decreases because the block size increases geometrically, i.e.,} \jr{j_{s+1}(\cdot,\cdot)}{E,i} &\le \frac{B_s - 1}{B_{s+1}} = \frac{B_s - 1}{c\cdot B_{s}}, \\ \jr{j_{s+2}(\cdot,\cdot)}{E,i} &\le \frac{B_s - 1}{c^2\cdot B_s}, \\ &\cdots \intertext{where $c>2$. 
Thus, the total one-finger jump cost of access $e_i$ across all levels of the memory hierarchy is} \jmulti{E,i} &= \sum_{j_l \in \mathcal{J}}\jr{j_l(\cdot,\cdot)}{E,i} \\ &\le \sum_{l=1}^{s-1} 1 + \sum_{l=s}^{|\mathcal{J}|} \frac{B_s - 1}{c^{(l-s)} \cdot B_s} \\ &= (s-1) + \frac{B_s-1}{B_s} \cdot \left (1 + \frac{1}{c} + \frac{1}{c^2} + \cdots\right ) \\ &\leq (s-1) + 2 \cdot \frac{B_s-1}{B_s} \leq s + 1 \end{align*} Using both $e_L$ and $e_R$, the cost must be at least $s - 1$, since all levels of the memory hierarchy $<s$ have a cost of 1. Therefore, computing the cost with only the single nearest element, $e_j$, results in, at most, a constant factor larger cost than two-finger jump cost, $\jlrmulti{E,e_i}$, when $\gamma(B) = O(B)$. By Theorem~\ref{hierarchy-polycost}, for any execution sequence $E$, \begin{align*} \jmulti{E} = O(\jlrmulti{E}) = O(\coshift{\mathcal{H}}{E}) \end{align*} \end{proof} \end{comment} \end{onlyproof} \begin{onlymain} \begin{comment} ** Maybe try to clean up for appendix... ** \ben{A bit rough after this. Might not be worth including, unless we really want the universal time diagram and discussion.} We make some further observations about how bounding $\gamma(B)$ affects the jump model. However, we first prove a few useful corollaries. \begin{corollary}\label{JtoJprime_bivariate} Let $J'(2^i, 2^j)$ ($\forall i>0, j>0$) denote the total number of jumps with spatial distance $d$ such that $2^i \leq d < 2^{i+1}$ and temporal distance $\delta$ such that $2^j \leq \delta < 2^{j+1}$ during execution sequence $E$. Then $\jlr{E} = \Theta\left( \sum_{i=0}^\infty J'(2^i, \mu(2^i)) j(2^i, \mu(2^i))\right)$, where $\mu(\cdot)$ is some function. \end{corollary} \begin{proof} By our definition of subadditive bivariate jump functions, the spatial cost, $f(d)$, and temporal cost, $g(\delta)$, are subadditive during the interval at which they dominate. 
Furthermore, for any jump, the cost $j(d,\delta) = \max(f(d),g(\delta))$ is either dominated by spatial or temporal cost. Thus, it follows from Lemma~\ref{JtoJprime} that $\jlr{E} = \Theta\left( \sum_{i=0}^\infty J'(2^i, \mu(2^i)) j(2^i, \mu(2^i))\right)$. \end{proof} \begin{corollary}\label{Jprime-cosmooth} If $A$ is memory-smooth\xspace, then for any execution sequence $E_A$ generated by $A$, the number of jumps, $J'(2^i, \mu(2^i))$, is non-increasing with both spatial and temporal distance. \end{corollary} \begin{proof} TODO \end{proof} \begin{lemma} If $\mathcal{J}$ is a set of jump functions that correspond to a geometrically increasing memory hierarchy and $\gamma(B) = \omega(B)$, then, for any memory-smooth\xspace algorithm $A$, the total Jump cost across the entire hierarchy is dominated by the jump function associated with the largest level, i.e, \begin{align*} \jlrmulti{E_A} = \Theta(\jlrk{E_A}{|\mathcal{J}|}) \end{align*} \end{lemma} \begin{proof} Since $B$ (and therefore $M$) is geometrically increasing, the cost associated with the jump function of memory level $l$ is \begin{align*} \jlrk{l}{E} &= \Theta\left (C_l \cdot \sum_{i=l}^{\infty} J'(2^i,\mu(2^i))\cdot j_l(2^i,\mu(2^i))\right) \\ &= \Theta\left(\gamma(2^l) \cdot \sum_{i=l}^{\infty} J'(2^i,\mu(2^i))\cdot j_l(2^i,\mu(2^i))\right) \end{align*} \noindent where the cost function $\gamma(B) = \omega(B)$. 
Since $J'()$ is subadditive, \begin{align*} \hat{JR}_{j_{l}(\cdot,\cdot),C_{l}}(E) &= \Theta\left( \gamma(2^{l}) \cdot \left (J'(2^l, \mu(2^l))\cdot j_l(2^l,\mu(2^l)) + \sum_{i=l+1}^{\infty} J'(2^i,\mu(2^i))\cdot j_l(2^i,\mu(2^i))\right )\right) \\ &= \Theta\left( \gamma(2^{l}) \cdot \sum_{i=l+1}^{\infty} J'(2^i,\mu(2^i))\cdot j_l(2^i,\mu(2^i))\right) \end{align*} \noindent The jump functions, $j_l(\cdot, \cdot)$ and $j_{l+1}(\cdot,\cdot)$ are each subadditive and bound by the maximum cost of 1, so \begin{align*} \hat{JR}_{j_{l}(\cdot,\cdot),C_{l}}(E) &= \Theta \left( \frac{\gamma(2^{l})}{\gamma(2^{l+1})} \hat{JR}_{j_{l+1}(\cdot,\cdot),C_{l+1}}(E) \right ) \end{align*} Thus, if $\gamma(B)$ is super-linear (i.e., $\gamma(B) = \omega(B)$), \begin{align*} \hat{JR}_{j_{l+1}(\cdot,\cdot),C_{l+1}}(E) = o(\hat{JR}_{j_{l}(\cdot,\cdot),C_{l}}(E)) \end{align*} \noindent for all levels of the memory hierarchy, $l$. Therefore, the cost at the largest level asymptotically dominates the overall cost. \end{proof} This lemma implies that, when the cost of memory loads increases super-linearly among levels of the memory hierarchy, memory-smooth\xspace algorithms have cost dominated by the largest level of memory. In such a case, we can define a single, universal time function based on the cost of the largest level \begin{align*} t(E,e_i) = t_{|\mathcal{J}|}(E,e_i) = \sum_{k=0}^{i-1}(\hat{JR}_{j_{|\mathcal{J}|}(\cdot,\cdot)}(E,e_k)) \end{align*} \noindent A universal time function allows us to graphically visualize an access pattern, $E$, as we did when considering only a single memory system. However, since we have multiple levels of memory hierarchy, when accessing element $e_i$ we can visualize the levels of the memory hierarchy as concentric rectangles, illustrated in Figure~\ref{fig:visualize-hierarchy}. The rectangle representing memory level $l$ has width $2B_l$ and height $M_l$. 
We can then easily see the cost of access $e_i$ by determining the smallest level $l$ such that a prior access falls within the corresponding rectangle. \begin{figure}[h] \center \includegraphics{visualize-hierarchy} \caption{Graphical visualization of accesses on a memory hierarchy of geometrically increasing size and memory size as a fixed function of $B$ (i.e., $M_l = \mu(B_l)$). We can determine the cost of each access simply by the level of the memory hierarchy that a previous access falls into, represented by concentric rectangles.} \label{fig:visualize-hierarchy} \end{figure} If $\gamma(B)$ is linear or sub-linear, then we cannot conclude that the largest level of the memory hierarchy will dominate. However, if a single level does \emph{not} dictate overall performance, we can simplify our model. First, we can use the simpler definition of the LoR model cost that uses only a single nearest element, $\jmulti{E}$, rather than both left and right, $\jlrmulti{E}$. \end{comment} \end{onlymain} \section{Case study: Sorting} \subsection{Merging lists} The jump cost to merge two lists of length $n/2$ each is $nj(1,1)$ to scan through the elements, plus an additional jump cost of $j(1,n/2)$ to access the first element of the second list. The total cost, $nj(1,1)+ j(1,n/2)$, is asymptotically the cost of scanning through both lists. The sizes of the individual lists do not affect the asymptotic cost as long as the total size is the same. The cost to merge $k$ lists of size $n/k$ depends on whether $k\leq M$ or $k>M$. If $k\leq M$, then a single pass merging all $k$ lists simultaneously is sufficient, and it requires $nj(1,M)+kj(n/k,M)$ jump cost, which is roughly the cost of a scan through the entire input. When $k>M$, take $M$ lists at a time and combine them into one list, reducing the $k$ lists to $k/M$ lists at a total cost of $\frac{k}{M}\left(\frac{nM}{k}j(1,M)+Mj\left(\frac{n}{k},M\right)\right)\leq (n+1)j(1,M)$.
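The reduction above — repeatedly combining groups of $M$ lists until one remains — can be sketched as follows. This is an illustrative Python sketch only (the name \texttt{merge\_M\_way} is ours); it does not model jump costs, but it does report the number of passes over the data, which is $\lceil \log_M k \rceil$.

```python
import heapq

def merge_M_way(lists, M):
    """Merge k sorted lists into one by repeatedly merging groups of up
    to M lists at a time; each round is one scan over all n elements,
    and there are ceil(log_M k) rounds."""
    rounds = 0
    while len(lists) > 1:
        # one round: merge each consecutive group of up to M lists
        lists = [list(heapq.merge(*lists[i:i + M]))
                 for i in range(0, len(lists), M)]
        rounds += 1
    return lists[0], rounds
```

For example, merging $k=8$ lists with $M=2$ takes $\log_2 8 = 3$ rounds, while with $M\geq 8$ a single round suffices.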
Thus the total jump cost to merge lists into a single list is at most $O\left(nj(1,M) \log_M k\right)$ which is asymptotically the cost of scanning through the input $O(\log_M k)$ times. This is the jump cost provided $M$ is known to the algorithm. \begin{comment} \begin{align*} \sum_{i=1}^{\log_M n }\left(nj(1,M)+ \frac{k}{M^i}j(M,nM^i/k)\right) &=n(\log_M n)j(M,1)+ k\sum_{i=1}^{\log_M n}\frac{j(M,nM^i/k)}{M^i}\\ &\leq n(\log_M n)j(M,1) + k \sum_{i=1}^{\log_M n}(nM^i/kM^{i})j(M,1)\\ &\leq n(\log_M n)j(M,1) + nj(M,1)\log_M n\\ & \leq 2( n(\log_M n)j(M,1) \\ \end{align*} The first term counts the cost of scanning the input $\log_M n$ times and the second term counts the cost for the jumps involved to find the first element of each of the lists. If we want to plug in the expected cost in the IO model $j(1,M)$ to be $1/B$ we get upper bound of $$ O\left(\frac{n(\log_M k)}{B}\right)$$ which gives a jump cost of $O\left(\frac{n\log_M n}{B}\right)$ for a implementation of merge sort over $n$ elements in Memory $M$. \end{comment} \begin{comment} \subsection{Sorting and permutation} Analysis of funnelsort algorithm:- The funnel sort uses a $k$-merger which takes $k$ lists and outputs $k^3$ elements in sorted order and these $k$-mergers are recursively built with $\sqrt{k}$ input mergers and 1 output merger. The base case merger is a 2-merger which when called merges two lists and outputs at most 8 elements. The largest $k$-merger which is used by a funnelsort implementation is an $n^{1/3}$ merger. Let $T(k)$ denotes the total jump cost of calling a $k$- merger to output $k^3$ elements in sorted order. Whenever a $k$-merger is called to output $k^3$ elements, it involves calling $\sqrt{k}$ input mergers $k\sqrt{k}$ times and the $\sqrt{k}$ merger is also required to be called $k\sqrt{k}$ times. Calling a $\sqrt{k}$ requires a jump of size at most $k^2$ (since the space required by a $k-$merger is $k^2$). 
The total jump cost of calling $T(k)$ is given by the following recursion \begin{align*} T(k)&=2k^{3/2} T(\sqrt{k}) + kj(k^2,M)\\ &= \sum_{i=1}^{\log\log k} 2^i k^{3-1/2^{i-1}} j(k^{1/2^{i-1}}, M) +k j(k^2,M)\\ &\leq \sum_{i=1}^{\log\log k +1} 2^i k^3(j(1,M)) \\ &=O(k^3\log k j(1,M)) \end{align*} Since the largest merger used by the funnelsort is $n^{1/3}$ merger, the total jump cost of running funnelsort with jump function $j$ is $O((n^{1/3})^3 \log n j(1, M))$ which is $O(n\log n j(1,M))$. Permutation in the jump model can be done in $O(nj(n,M))$ jump cost. Permutation can also be achieved by sorting using $O(n\log n j(1,M))$ cost. If $\log n \cdot j(1,M)> j(n,M)$ then brute force permutation is better else sorting is better. \end{comment} \section{Conclusion}\label{conclusion} Despite the increasing complexity of modern hardware architectures, the goal of many design and optimization principles remains the same: maximize locality of reference. Even many of the optimization techniques used by modern compilers, such as branch prediction or loop unrolling~\cite{DBLP:books/daglib/0022093}, can be seen as methods of increasing spatial and/or temporal locality. As we demonstrated in this work, cache-oblivious algorithms do just that, suggesting that the performance benefits of such algorithms extend beyond what was originally envisioned. \section{Case study: embedding a binary search tree into an array}\label{memoryless-binarysearch} As a case study demonstrating the efficacy of the jump model in proving algorithm optimality, we consider the problem of implicit search structures. Specifically, we consider the problem of taking a full and complete balanced Binary Search Tree (BST)~\cite{XXX} and placing the nodes in memory so as to minimize the cost of traversing any root-to-leaf path. In the cache-oblivious model, the so-called van Emde Boas (vEB) layout is known to be optimal, so we consider this layout in the jump model to prove that it is LoR-optimal.
Given a BST $T$ of height $H=\lceil \log{|T|}\rceil$, the vEB layout is defined recursively: if a tree $T$ has two or more nodes, let $H_0 = \lceil \frac{H}{2}\rceil$ and let $T_0$ be the subtree of $T$ consisting of all nodes in $T$ with depth at most $H_0$, and let $T_1,T_2,\ldots,T_k$ be the subtrees of $T$ rooted at nodes with depth $H_0 + 1$, numbered from left to right. The van Emde Boas layout of $T$ consists of the van Emde Boas layout of $T_0$ followed by the van Emde Boas layouts of $T_1,\ldots,T_k$, in that order. The design principle behind this structure ensures good behaviour with regard to locality of reference: in a root-to-leaf walk in a vEB layout, half of the time a node will be only one or two locations past its parent in the array. This is in contrast with, for example, a preorder, inorder, or level-order layout, where only a constant number of nodes are within a constant distance of their parent. Here we formalize the idea that the vEB layout has the best possible locality of reference by proving optimality in the LoR model. Preliminary notation: Suppose we are given a rooted BST with root $r$ and we are searching for a node $y$. Let the path followed in the BST while searching for $y$ be $r,y_1,y_2, \ldots, y$. Moving from one element of this path to the next is defined as a jump. The span of a jump from node $x$ to node $y$ is defined as the difference in their physical locations in the forward embedding. The span of a search path $P$ is defined as the difference in the physical locations of the first node and the last node on $P$. Some basic assumptions are made regarding the binary search tree being embedded in the array. The BST $T_n$ is a full binary search tree with $n(=2^{d}-1)$ nodes.
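The recursive definition above can be sketched directly in code. This is an illustrative Python sketch (the function name \texttt{veb\_order} and the BFS indexing convention, with the children of node $i$ at $2i$ and $2i+1$, are ours); it lists the nodes of a complete BST in vEB memory order, laying out the top tree of height $\lceil H/2 \rceil$ first and then each bottom subtree left to right, as in the text.

```python
def veb_order(root, height):
    """van Emde Boas layout of the complete BST of the given height,
    rooted at BFS index `root` (children of node i are 2i and 2i+1).
    Returns the node indices in memory order."""
    if height == 1:
        return [root]
    top_h = (height + 1) // 2          # ceil(H/2): height of the top tree T0
    bot_h = height - top_h
    order = veb_order(root, top_h)     # lay out T0 first ...
    # ... then the bottom subtrees T1..Tk, left to right; their roots are
    # the descendants of `root` exactly top_h levels down.
    for r in range(root << top_h, (root + 1) << top_h):
        order += veb_order(r, bot_h)
    return order
```

For height 4, this yields $[1,2,3,\;4,8,9,\;5,10,11,\;6,12,13,\;7,14,15]$: the three-node top tree, followed by four three-node bottom trees.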
An embedding of $T_n$ into memory is a one-to-one function from the nodes of $T_n$ to the integers (the memory locations); a forward embedding is one where every internal node is mapped to a smaller memory location than its children. In such an embedding, traversing any root-to-leaf path only moves forward in memory. A root-to-leaf path (or a path of length $d$) is referred to as a search path, and paths of length less than $d$ are called subpaths. The cost of a single jump of size $x$ is given by $f(x)$, a sub-additive function. By the above construction, a vEB embedding of a binary search tree is always a forward embedding. The cost of searching any leaf node (which lies in a bottom recursive subtree, say $T_i$) in the vEB tree with $n$ elements is given by the recursion $T(n) = $ (cost of all jumps in $T_0$) $+$ (cost of the jump from $T_0$ to $T_i$) $+$ (cost of all jumps in $T_i$). \newline Since $T_0$ and $T_i$ are vEB trees of size $\sqrt{n}$ and the jump from $T_0$ to $T_i$ can be of size $O(n)$, we get \begin{align*} T(n)&=2T(\sqrt{n})+ f(n) \intertext{which solves to} &=\sum_{i=0}^{\log\log n}2^if(n^{1/2^i}) \intertext{which can be rewritten as} &= \log n \sum_{k=0}^{\log \log n} \frac{1}{{2^{k}}} f\left( 2^{2^{k}}\right) \end{align*} which gives us the following lemma: \begin{lemma}\label{upvEB} Given a sub-additive function $f$, the jump cost of searching in a vEB tree is at most $$\log n \sum_{k=0}^{\log \log n} \frac{1}{{2^{k}}} f\left( 2^{2^{k}}\right)$$ \end{lemma} \begin{lemma}\label{searchpath} In a forward embedding of a full binary tree, a search path of length $d$ covers $2^{d-1}$ space in expectation over all search paths of length $d$. \begin{comment} \john{I think the $\Theta$ is wrong}.
\john{In any forward embedding of $T_n$, given a node $x$ and a random subpath from $x$ of length $d$ with final node $y$, the distance from $x$ to $y$ in memory is expected to be $\Omega(2^d)$.} \john{Not sure if we need this random formulation} \vkj{The theta was incorrect, I didnt think in terms of this simplified assumption for the embedding which helps. I was thinking in a general sense, but that sort of proof is probably not needed} \end{comment} \end{lemma} \begin{proof} A forward embedding of a full binary tree has $2^d$ elements; thus the average search path from the root to a leaf covers $2^{d-1}$ distance in memory. \end{proof} \begin{corollary} For any node $t$ in a forward embedding of a binary search tree, a random walk from $t$ which takes $k$ steps covers $2^{k-1}$ space in expectation. \end{corollary} \begin{proof} Since the binary search tree is a full binary tree, the subtree rooted at node $t$ is also a full binary tree. The proof follows from Lemma~\ref{searchpath}. \end{proof} \begin{lemma} Any search path of length $d$ that spans $X$ space must contain a jump of size at least $\frac{X}{d}$. \end{lemma} \begin{proof} The $d$ jumps together span $X$ space, so by the pigeonhole principle at least one jump has size at least $\frac{X}{d}$. \end{proof} \begin{corollary} Any subpath of length $d$ that spans $X$ space must contain a jump of size at least $\frac{X}{d}$. \end{corollary} \begin{lemma}\label{with_half_prob} The probability that a search path has a jump of size at least $\frac{n}{2\log n}$ is at least $1/2$. \end{lemma} \begin{proof} In any forward embedding of a full binary search tree, at most $n/2$ elements can occupy the first $n/2$ distance slots from the first element (the root of the tree), which implies that at most $n/4$ leaves of the tree occupy the first $n/2$ locations (the set of nodes in the first $n/2$ slots is closed under taking parents, and an ancestor-closed set containing $L$ leaves has size at least $2L-1$). Thus at least $n/4$ leaves lie at distance greater than $n/2$ from the root, which implies that at least $n/4$ leaves require a jump of size greater than $\frac{n}{2\log n}$ to reach from the root.
Thus the probability that a search for a leaf requires a jump of size at least $\frac{n}{2\log n}$ is at least $1/2$. Similarly, for the first contiguous $n/2$ elements in the forward embedding, the probability that the search for a leaf there requires a jump of size $\frac{n}{4\log n}$ is at least $1/4$. \end{proof} \begin{lemma}\label{jprimep} In every forward embedding of a BST, there exists a search path which has $\Omega(\frac{\log n}{4^k})$ jumps of size at least $\frac{2^{4^k-1}}{4^k}$ (where $k \in[1,\log_4 \log n]$). \end{lemma} \begin{proof} For every root-to-leaf search path of $\log n$\footnote{Logs are taken in base 2 unless specified otherwise, and ceilings on the functions are ignored.} jumps in the BST, split it into $\frac{\log n}{4^i}$ subpaths of $4^i$ jumps each, for all integers $i\in [1, \log_4 \log n]$. For a search path $S$, let $J_i(S)$ denote the set consisting of the largest jump in each of its subpaths. When a search path is split into subpaths of $4^i$ jumps, each subpath covers $2^{4^i-1}$ space in expectation by Lemma~\ref{searchpath}. Thus there exists at least one subpath (belonging to a search path, say $P$) which covers at least $2^{4^i-1}$ space. Let $I_{k,i}$ be an indicator random variable which is set to 1 in the event that the subpath from the $i4^k$-th jump to the $(i+1)4^k$-th jump has a jump of size $\Omega(2^{4^k}/4^k)$, and 0 otherwise. By Lemma~\ref{with_half_prob}, the probability $P(X_{k,i})$ that the subpath from the $i4^k$-th jump to the $(i+1)4^k$-th jump has a jump of size $\Omega(2^{4^k}/4^k)$ is at least $1/2$. Let $I_k=\sum_{i=1}^{\frac{\log n}{4^k}}I_{k,i}$. \begin{align*} E[I_k]&= \sum_{i=1}^{\frac{\log n}{4^k}} E[I_{k,i}]\\ &=\sum_{i=1}^{\frac{\log n}{4^k}}{ P(X_{k,i})}\\ &\geq\frac{\log n}{2\cdot 4^k} \end{align*} Thus, by an expectation argument, we can deduce that there exists a path $P$ whose $J_k(P)$ consists of $\Omega(\frac{\log n}{4^k})$ jumps of size at least $\frac{2^{4^{k}-1}}{4^k}$. Observe that $J_i(P)$ and $J_j(P)$ ($i\neq j$) might intersect.
We define $J'_i(P)$ to be based on $J_i(P)$, except that each jump appears in exactly one $J'_i(P)$; namely, a jump is in $J'_i(P)$ if $i$ is the largest index for which the jump appears in $J_i(P)$: $$ J'_k = J_k \setminus \bigcup_{i=k+1}^{\log_4 \log n} J_i $$ We know $|J_i(P)|=\frac{\log n}{4^i}$. Thus a simple bound on $J'_k(P)$ can be obtained as follows: \begin{align*} |J'_k(P)| & = \left|J_k(P) \setminus \bigcup_{i=k+1}^{\log_4 \log n} J_i(P) \right|\\ &= \Omega \left( |J_k(P)| - \sum_{i=k+1}^{\log_4 \log n} |J_i(P)| \right)\\ &= \Omega \left(\frac{\log n}{4^k} - \sum_{i=k+1}^{\log_4 \log n}\frac{\log n}{4^i} \right)\\ &= \Omega\left(\frac{\log n}{4^k}\right) \end{align*} \end{proof} \begin{lemma}\label{compLB} The cost to search for an element in any forward embedding of a BST is at least $$\Omega \left( \log n \sum_{i=1}^{\log \log n} \frac{1}{{2^{i}}} f\left( \frac{2^{2^{i}}}{2^i}\right) \right)$$ \end{lemma} \begin{proof} Lemma~\ref{jprimep} gives a lower bound on the number and sizes of the jumps along a search path that exists in every forward embedding of a BST.
Since the sets $J'_i(P)$ are disjoint, and each $J'_i(P)$ contains $\Omega\left(\frac{\log n}{4^i}\right)$ jumps of size at least $\frac{2^{4^i-1}}{4^i}$, there exists an element whose search incurs cost at least: \begin{align*} \text{cost}&=\sum_{i=1}^{\log_4 \log n} \Omega\left(\frac{\log n}{{4^i}}\right) f\left( \frac{2^{4^i-1}}{4^i}\right)\\ &= \Omega\left( \log n \sum_{i=1}^{\frac{1}{2}\log \log n} \frac{1}{{2^{2i}}} f\left( \frac{2^{2^{2i}-1}}{2^{2i}}\right) \right)\\ &= \Omega\left( \frac{\log n}{2} \sum_{i=1}^{\frac{1}{2}\log \log n} \frac{1}{{2^{2i}}} f\left( \frac{2^{2^{2i}}}{2^{2i}}\right) \right) \text{ since $f$ is subadditive}\\ & = \Omega \left( \log n \sum_{i=1}^{\log \log n} \frac{1}{{2^{i}}} f\left( \frac{2^{2^{i}}}{2^i}\right) \right) \end{align*} \end{proof} \begin{lemma}\label{linearlb} The cost to search for an element in any forward embedding of a BST is at least $f(n)$, provided $f$ is sub-additive. \end{lemma} \begin{proof} The jump function is sub-additive, which implies that the cost to search for the furthest element, at distance $n$ in the layout, is at least $f(n)$ (the cost that would be incurred by a single jump). \end{proof} \begin{lemma} The upper bound on the search cost for the vEB layout matches the lower bound on the search cost for any forward embedding of a BST. \end{lemma} \begin{proof} There are two possible cases regarding the upper bound in Lemma~\ref{upvEB} and the lower bound on the cost of search in a BST given by Lemma~\ref{compLB}. The following is either true or false.
$$ \frac{1}{{2^{k+1}}} f\left( {2^{2^{k+1}}} \right) > c \frac{1}{{2^{k}}} f\left( {2^{2^{k}}}\right)$$ If the above statement is true, then the upper bound can be simplified, since the sum forms a geometric series: $$\log n \sum_{k=1}^{\log \log n} \frac{1}{{2^k}} f\left( {2^{2^k}}\right)= \Theta\left( \log n \cdot \frac{1}{{2^{\log \log n}}} f\left( {2^{2^{\log \log n}}} \right) \right)=\Theta(f(n))$$ In this case the upper bound matches the lower bound in Lemma~\ref{linearlb}. Otherwise, we are in the false case: \begin{align*} \frac{1}{{2^{k+1}}} f\left( {2^{2^{k+1}}} \right) &\leq c \frac{1}{{2^{k}}} f\left( {2^{2^{k}}}\right)\\ f\left( {2^{2^{k+1}}} \right) &\leq 2 c f\left( {2^{2^{k}}}\right)\\ f\left( {2^{2\cdot 2^{k}}} \right) &\leq 2 c f\left( {2^{2^{k}}}\right)\\ f\left( {4^{2^{k}}} \right) &\leq 2 c f\left( {2^{2^{k}}}\right)\\ f\left( {2^{2^{k}}}\right) &\leq f\left( {4^{2^{k}}} \right) \leq 2 c f\left( {2^{2^{k}}}\right)\\ f\left( {4^{2^{k}}} \right) &= \Theta\left( f\left( {2^{2^{k}}} \right)\right)\\ \intertext{Substituting $k-1$ for $k$,} f\left( {2^{2^{k}}} \right) &= \Theta\left( f\left( {\sqrt{2}^{2^{k}}} \right)\right)\\ \intertext{Now, since $\frac{2^{2^{k}}}{2^k} = O(2^{2^k})$ and $\frac{2^{2^{k}}}{2^k} = \Omega(\sqrt{2}^{2^k})$, we can conclude} f\left( \frac{2^{2^{k}}}{2^k}\right) &= \Theta\left(f\left(2^{2^{k}}\right) \right)\\ \intertext{Plugging this into the lower bound gives:} \log n \sum_{k=1}^{\log \log n} \frac{1}{{2^{k}}} f\left( \frac{2^{2^{k}}}{2^k}\right) &= \Theta\left( \log n \sum_{k=1}^{\log \log n} \frac{1}{{2^{k}}} f\left( {2^{2^{k}}}\right) \right) \end{align*} \end{proof} Thus we get the following theorem.
\begin{theorem} The vEB layout is an optimal forward embedding layout for a balanced BST in the jump model for any sub-additive jump function $f$, and has a worst-case jump cost of $$\log n \sum_{k=1}^{\log \log n} \frac{1}{{2^k}} f\left( {2^{2^k}}\right)$$ \end{theorem} \subsection{Finding Median} In this subsection we compute the cost of finding a median among $n$ elements, placed contiguously on a tape. The algorithm is a minor modification of the classic median of medians algorithm \cite{DBLP:books/daglib/0023376}, adapted to ease the analysis of the jump cost. The input is broken into chunks of size 5 and the median of each chunk is moved to the end of the tape. This incurs a jump cost of $O(j(1,n)+nj(1,1)+\frac{nj(1,1)}{5})$, where $j$ is a subadditive function. Now the last $n/5$ locations on the tape contain the medians of all the 5-sized chunks. Recursively, we find the median of these $n/5$ elements and use it to discard at least $3n/10$ elements from contention; we then recursively select the element of the appropriate rank among the remaining elements, which number at most $7n/10$. The discarding step incurs an $O(nj(1,1)+j(1,n))$ jump cost to scan through the entire list. Thus the jump cost $J(n)$ for finding a median of $n$ elements is given by the following recursion. \begin{align*} J(n)&=5j(1,1) \text{\hspace{20pt} if $n\leq 5$}\\ &=J(n/5)+J(7n/10)+ [j(1,n)+nj(1,1)+(n/5)j(1,1)]+[nj(1,1)+j(1,n)]\text{ otherwise}\\ \end{align*} The recursion solves to \begin{align*} J(n)&=J(n/5)+J(7n/10)+ [j(1,n)+nj(1,1)+(n/5)j(1,1)]+[nj(1,1)+j(1,n)]\\ &= J(n/5)+J(7n/10)+ O(nj(1,1)) \hspace{21pt}\text{ since $j$ is subadditive}\\ &= O(nj(1,1)) \hspace{125pt}\text{ since $1/5+7/10<1$} \end{align*} \subsection{Distribution Sort} In this subsection we analyze the jump cost of Distribution Sort \cite{DBLP:conf/focs/FrigoLPR99}, a cache-obliviously optimal sorting algorithm.
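Distribution sort below reuses median finding for its bucket splitting, so the median-of-medians routine just analyzed is worth making concrete. The following is a minimal in-memory sketch (our illustrative Python, not the paper's tape-based version; the name \texttt{select} is ours, and all jump-cost bookkeeping is ignored):

```python
# Minimal in-memory sketch of median-of-medians selection (illustrative
# code; the tape layout and jump-cost accounting are ignored).

def select(a, k):
    """Return the k-th smallest element of a (0-indexed)."""
    if len(a) <= 5:
        return sorted(a)[k]
    # Median of each 5-sized chunk.
    chunks = [a[i:i + 5] for i in range(0, len(a), 5)]
    medians = [sorted(c)[len(c) // 2] for c in chunks]
    # Median of the ~n/5 chunk medians: discards at least 3n/10 elements.
    pivot = select(medians, len(medians) // 2)
    lo = [x for x in a if x < pivot]
    hi = [x for x in a if x > pivot]
    if k < len(lo):                        # target among the smaller part
        return select(lo, k)
    if k >= len(a) - len(hi):              # target among the larger part
        return select(hi, k - (len(a) - len(hi)))
    return pivot                           # target equals the pivot
```

Both recursive calls act on at most $n/5$ and $7n/10$ elements respectively, matching the recursion for $J(n)$ above.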
Like Funnelsort, this algorithm performs $O(n\log n)$ comparisons to sort $n$ elements and incurs $O(\frac{n}{B}\log_M n)$ cache misses, where $B$ is the block size and $M$ is the size of the main memory. The algorithm is an external memory algorithm: whenever a location not in memory is needed, the entire block containing that location is brought into memory by an optimal cache replacement policy. The algorithm uses a ``bucket splitting" technique to select pivots incrementally during the distribution step. Distribution sort works as follows. \begin{enumerate} \item Partition the input of $n$ elements into $\sqrt{n}$ subarrays $S_1,S_2\ldots S_{\sqrt{n}}$ of size $\sqrt{n}$ each and sort these subarrays individually. \item Distribute the sorted subarrays into $q\leq \sqrt{n}$ buckets $B_1,B_2\ldots B_q$ of sizes $n_1,n_2\ldots n_q$ respectively such that \begin{itemize} \item All elements in $B_i$ are less than all elements in $B_j$, $\forall i<j$. \item $n_x\leq 2\sqrt{n}+1;\forall x\leq q$ \end{itemize} \item Recursively sort each of the buckets. \end{enumerate} While distributing the elements across the $O(\sqrt{n})$ buckets it is ensured that every bucket has a pivot and all the elements in a bucket are smaller than its corresponding pivot. Every bucket has a size of at most $2\sqrt{n}+1$. If the size of a bucket $B_j$ with pivot $p_j$ reaches $2\sqrt{n}+1$, a median finding algorithm is used to break $B_j$ into two buckets with at least $\sqrt{n}$ elements each. The median becomes the pivot of the bucket holding the smallest $\sqrt{n}$ elements among the $2\sqrt{n}+1$ elements, while the pivot of the bucket containing the remaining elements larger than the median stays $p_j$. To make optimal use of the cache, the algorithm uses the following subroutine to distribute elements from subarrays to buckets.
\begin{algorithm}[H] \SetAlgoLined Distribute$(i,j,m)$\\ \eIf{m=1} { CopyElems$(i,j)$ }{ Distribute$(i,j,m/2)$\\ Distribute$(i+m/2,j,m/2)$\\ Distribute$(i,j+m/2,m/2)$\\ Distribute$(i+m/2,j+m/2,m/2)$\\ } \caption{Distribution subroutine} \end{algorithm} Note that the bucket indices change over time, but only $O(\sqrt{n})$ times: the buckets may be stored in arbitrary physical order (say $B_1,B_4,B_2\ldots$), and when a bucket $B_j$ splits, all buckets with index greater than $j$ have their indices incremented by 1 and the newly created bucket is appended at the end of the storage. When subroutine Distribute$(i,j,m)$ is called it is assumed that all elements remaining in subarrays $S_i,S_{i+1},\ldots, S_{i+m-1}$ are greater than the pivot of bucket $B_{j-1}$. Under this assumption, Distribute$(i,j,m)$ distributes the elements of subarrays $S_i,S_{i+1},\ldots, S_{i+m-1}$ into the buckets $B_j,B_{j+1},\ldots, B_{j+m-1}$ if they belong to any of those buckets. CopyElems$(i,j)$ simply copies all the elements in subarray $S_i$ which are less than the pivot of bucket $B_j$ into the bucket $B_j$. If, while copying elements from $S_i$ to $B_j$, the size of $B_j$ reaches $2\sqrt{n}+1$, the CopyElems subroutine is temporarily paused. The bucket id of all buckets with id greater than $j$ is incremented by 1, and the bucket $B_j$ is broken into two buckets $B_j$ and $B_{j+1}$ by calling a cache-oblivious median finding subroutine. Once the bucket is broken into two parts, CopyElems resumes copying the elements of $S_i$ which belong in bucket $B_j$. The initial call made by the cache-oblivious distribution sort algorithm is Distribute$(1,1,\sqrt{n})$.
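The recursion order and the copying step can be illustrated with the following simplified sketch (our hypothetical Python, 0-indexed; the pivots are fixed in advance and bucket splitting is omitted, so only the traversal pattern of Distribute and the prefix-copying of CopyElems are shown):

```python
# Simplified sketch of Distribute(i, j, m): pivots are fixed up front
# and bucket splitting is omitted (our illustration, not the paper's code).

def distribute(i, j, m, subarrays, buckets, pivots):
    """Move the elements of subarrays i..i+m-1 into buckets j..j+m-1."""
    if m == 1:
        copy_elems(i, j, subarrays, buckets, pivots)
    else:
        h = m // 2
        distribute(i,     j,     h, subarrays, buckets, pivots)
        distribute(i + h, j,     h, subarrays, buckets, pivots)
        distribute(i,     j + h, h, subarrays, buckets, pivots)
        distribute(i + h, j + h, h, subarrays, buckets, pivots)

def copy_elems(i, j, subarrays, buckets, pivots):
    # The subarrays are sorted, so the elements belonging to bucket j
    # (those not above its pivot) form a prefix of what remains.
    s = subarrays[i]
    while s and s[0] <= pivots[j]:
        buckets[j].append(s.pop(0))
```

For example, with subarrays $[0,4,8,12]$, $[1,5,9,13]$, $[2,6,10,14]$, $[3,7,11,15]$ and pivots $3,7,11,15$, the call \texttt{distribute(0, 0, 4, ...)} leaves bucket $j$ holding exactly the elements in $(p_{j-1}, p_j]$, illustrating the invariant stated above.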
The bucket splitting is triggered from within CopyElems: when a bucket's size reaches the threshold, the split subroutine is called. The total work (i.e., the number of comparisons) done by distribution sort is known to be $O(n\log n)$. We now examine the jump cost of this algorithm assuming an optimal cache replacement policy.\\ The jump cost of the distribution subroutine involves two main components. \begin{enumerate} \item Cost to break a bucket of size $2\sqrt{n}+1$ into two buckets of size at least $\sqrt{n}$. \item Cost of switching between different Distribute() subroutines and copying elements from subarrays to buckets. \end{enumerate} \subsubsection{Cost to break a bucket of size $2\sqrt{n}+1$ into two buckets of size at least $\sqrt{n}$} Breaking a bucket of size $2\sqrt{n}+1$ into two buckets of size at least $\sqrt{n}$ can be done at a cost of $O(\sqrt{n} j(1,1))$ using the median finding algorithm described in the previous subsection, where $j(1,1)$ is the expected cost to read a location at spatial distance 1 from the most recently accessed location; this expectation is $O(1/B)$ in the cache-oblivious model. Since the median finding subroutine is called at most $O(\sqrt{n})$ times, the total jump cost involved in breaking the buckets is $O\bigg(\sqrt{n} \times\sqrt{n} j(1,1)\bigg)=O\bigg(nj(1,1)\bigg)$. Another cost is the cost of moving the buckets once they are broken into two parts. Every time a bucket is split into two halves, one of the buckets is moved towards the end of the entire set of buckets, while the other bucket stays in its place.
If, during the course of the Distribute function, the median finding algorithm is called for the $i^{th}$ time, one of the buckets has to be copied at most $2i\sqrt{n}$ distance away from the first bucket (since the new bucket is written after the space allocated for the $i$ existing buckets, each of which occupies $2\sqrt{n}$ contiguous memory locations) and will incur a jump cost of $j(2i\sqrt{n},1) + \sqrt{n}j(1,1)$. This leads us to the following lemma. \begin{lemma}\label{copyelements} The total cost incurred in Distribute$(1,1,\sqrt{n})$ to copy elements from subarrays to the buckets and to split buckets into two is at most \begin{equation}\label{copycost} O\bigg(nj(1,1) + \sum_{i=1}^{\sqrt{n}} (j(2i\sqrt{n},1)+\sqrt{n}j(1,1))\bigg)=O\bigg(nj(1,1) + \sqrt{n} j(n,1)\bigg) \end{equation} \end{lemma} \subsubsection{Cost of switching between different Distribute() subroutines and copying elements from subarrays to buckets} The cost of switching between different Distribute() subroutines has two distinct components. \begin{enumerate}[label=(\alph*)] \item Cost associated with switching between subarrays and moving elements from subarrays to buckets. \item Cost associated with switching between buckets. \end{enumerate} Let $f(a,b)= \min\bigg({j(a,1), j(0,b)}\bigg)\leq 1$ denote the jump cost to access a target location which is spatially at distance $a$ from some location in memory and which was in the working set of size $b$.
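Anticipating the expected values that the cache-oblivious model assigns to $f$ later in this section (zero when the working set still fits in memory, $a/B$ for a short spatial jump, and 1 otherwise), $f$ can be sketched as follows (our hypothetical helper; the constant $\alpha$ and its default value are our assumptions):

```python
# Sketch of f(a, b) = min(j(a,1), j(0,b)) under the cache-oblivious
# expected values used later in the analysis (our code, not the paper's):
# cost 0 if the working set b still fits in memory, else min(1, a/B).

def f(a, b, M, B, alpha=0.5):
    if b < alpha * M:        # working set fits: the target is still cached
        return 0.0
    return min(1.0, a / B)   # short jumps cost a/B, long jumps cost 1
```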
\textbf{Cost associated with switching between subarrays and moving elements from subarrays to buckets.} Let $T(i,W,d)$ denote the jump cost when the distribute subroutine switches between $i$ subarrays, the size of the working set is $W$, and over the course of the subroutine $d$ elements are moved from these $i$ subarrays to the corresponding buckets.
The goal is to compute $T(\sqrt{n},\infty,n)$\footnote{we assume $n$ to be of the form $2^{k}$ in order to simplify the analysis}, which gives the jump cost to transfer $n$ elements from $\sqrt{n}$ subarrays to $\Theta({\sqrt{n}})$ buckets. If the subroutine involves only one subarray and one bucket, then the jump cost is $O(dj(1,1) +1)$, where $dj(1,1)$ is the cost to copy the $d$ elements. Otherwise, if more than one subarray is involved, the algorithm recursively breaks the call into 4 smaller distribute subroutines, as evident from the pseudocode of the Distribution subroutine. Switching between these 4 smaller subroutines requires 3 jumps. The parameter $d_k$ $(1\leq k\leq 4)$ denotes the number of elements copied from a subarray to a bucket during the $k^{th}$ recursive subroutine call. As described in the original paper, switching between the first and second recursive subroutines of $T(i,W,d)$ requires the algorithm to jump to the adjacent subarray, which is spatially $\sqrt{n}$ distance away and temporally $W+d_1$ away. Switching between the second and third recursive subroutines requires the algorithm to jump from the last subarray back to the first subarray of the distribute subroutine, which are spatially $i\sqrt{n}$ apart and temporally $d_1+d_2$ apart, since $O(d_1+d_2)$ elements have been accessed since the last time the subarrays involved in the first subroutine were touched. Switching between the third and fourth recursive subroutines is similar to switching between the first and second, and thus requires the algorithm to jump to the adjacent subarray, which is spatially $\sqrt{n}$ away but temporally $d_2+d_3$ away. The working set $W_1$ for the first subroutine is $W$. For the second subroutine the working set $W_2$ is $W+O(d_1)$.
When the third subroutine is called, all the subarrays associated with it have been in the working set within the past $O(d_1+d_2)$ time, and thus the working set $W_3$ in this case is $O(d_1+d_2)$. Similarly, the working set $W_4$ of the fourth subroutine is $O(d_2+d_3)$, since all the subarrays associated with it have been in the working set within the past $O(d_2+d_3)$ time. Thus, the jump costs of the first, second and third jumps between the 4 subroutines are $f(\sqrt{n},W)$, $f(i\sqrt{n},d_1+d_2)$ and $f(\sqrt{n},d_2+d_3)$ respectively, which gives us the following recursion for $T(i,W,d)$. \begin{align*} T(i,W,d)&=O(dj(1,1)+1) \hspace{50pt}\text{ if $i$=1}\\ &\leq T\left(\frac{i}{2},W_1,d_1\right) +T\left(\frac{i}{2},W_2,d_2\right) + T\left(\frac{i}{2},W_3,d_3\right)+ T\left(\frac{i}{2},W_4,d_4\right)\\ &\phantom{{}=1} + f(\sqrt{n},W) +f(i\sqrt{n},d_1+d_2) +f(\sqrt{n},d_2+d_3) \hspace{20pt} \text{Otherwise}\\ \end{align*} where $d=\sum_{k=1}^4 d_k$. If the temporal distance in $f()$ is less than a certain threshold, say $W'$, then the jump cost is essentially zero. In those cases the second and third jumps are of cost zero, and only the first jump between the internal subroutines can be non-zero. Let $m$ be the number of subarrays in the subroutine at which point $W\leq W'$, so that the second and third jumps are of zero cost.
Taking this threshold $W'$ into consideration, we get the following recursion for the jump cost. \begin{align*} T(i,W,d)&=O(dj(1,1)+i) \hspace{50pt}\text{ if $i\leq m$}\\ &\leq T\left(\frac{i}{2},W_1,d_1\right) +T\left(\frac{i}{2},W_2,d_2\right) + T\left(\frac{i}{2},W_3,d_3\right)+ T\left(\frac{i}{2},W_4,d_4\right)\\ &\phantom{{}=1} + 3 \hspace{20pt} \text{Otherwise}\\ \end{align*} This recursion solves to $$T(i,W,d)=O(dj(1,1)+m+i^2/m)$$ Thus $T(\sqrt{n},\infty,n)=O(nj(1,1)+m+n/m)$, where $m$ is determined by the threshold on the temporal distance. \begin{lemma}\label{subarrarys} The jump cost associated with switching between subarrays and moving elements from subarrays to buckets is $$T(\sqrt{n},\infty,n)=O(nj(1,1)+m+n/m)$$ where $m$ is the number of subarrays in the subroutine at which point $W$ is at most the threshold $W'$. \end{lemma} \textbf{Cost associated with switching between buckets} The jump cost of switching between buckets has the same recursive definition, with a few minor changes. Let $Q(i,W,d)$ denote the jump cost associated with switching between $i$ buckets, where $d$ elements are copied from $i$ subarrays to $i$ buckets and the working set is $W$. The change is that, in the case of buckets, the 3 jumps between the 4 recursive subroutine calls have slightly different spatial distance parameters. The three jump costs are $f(X_1,W)$, $f(X_2,d_1+d_2)$ and $f(X_3,d_2+d_3)$, where $X_1,X_2,X_3$ denote the spatial distances between the buckets involved and can have arbitrary values. Analogous to the previous case, these spatial distance parameters are not significant whenever the temporal distances fall below the threshold $W'$, and thus here also $Q(i,W,d)=O(dj(1,1)+i)$ when $i\leq m$. The spatial distances $X_1,X_2,X_3$ do matter otherwise, but the jump cost is then upper bounded by $Q(i/2,W_1,d_1)+Q(i/2,W_2,d_2)+Q(i/2,W_3,d_3)+Q(i/2,W_4,d_4)+3$, since the function $f$ has value at most 1. Hence the recursive equation remains the same, and the jump cost of switching between buckets is the same as well.
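Ignoring the $d$-dependent term (which telescopes to $O(dj(1,1))$ across the recursion tree), both $T$ and $Q$ have the shape $R(i)=4R(i/2)+O(1)$ with base case $R(i)=O(i)$ for $i\leq m$. A quick numeric check (our simplified sketch) that this solves to the claimed $O(m+i^2/m)$:

```python
# The d-independent part of the T/Q recursion: R(i) = 4 R(i/2) + 3 for
# i > m, and R(i) = i once i <= m (our simplification for illustration).

def R(i, m):
    if i <= m:
        return i
    return 4 * R(i // 2, m) + 3

# Unrolling log2(i/m) levels gives 4^(log2(i/m)) * m + (4^(log2(i/m)) - 1),
# i.e. Theta(i^2/m) for i > m, matching the stated bound.
```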
The bucket splitting subroutine, which breaks a bucket of size $2\sqrt{n}+1$ into two roughly equal parts, is called $O(\sqrt{n})$ times during the execution of Distribute$(1,1,\sqrt{n})$. For simplicity of the calculation, whenever the median finding subroutine is called we may assume that the entire memory is flushed before the call and restored to its previous state after the subroutine has finished. This is not how the algorithm actually works; the assumption merely simplifies the analysis. Reloading the previously used locations into memory after the median finding subroutine ends costs at most $Mj(1,1)$ each time. Thus the total cost of restoring the memory to the state it was in before the median finding subroutine started is at most $O(\sqrt{n}Mj(1,1))=o(nj(1,1))$, which is dominated by the other jump costs incurred during the execution of Distribute$(1,1,\sqrt{n})$. \begin{lemma}\label{buckets} The jump cost associated with switching between buckets is $$Q(\sqrt{n},\infty,n)=O(nj(1,1)+m+n/m)$$ where $m$ is the number of buckets in the subroutine at which point $W$ is at most the threshold $W'$. \end{lemma} The jump cost of sorting $n$ elements using distribution sort involves first breaking the input into $\sqrt{n}$ chunks of size $\sqrt{n}$ each, which are sorted and then distributed across $O(\sqrt{n})$ buckets such that, $\forall i<j$, all the elements in bucket $B_i$ are less than or equal to all the elements in bucket $B_j$. Finally these $O(\sqrt{n})$ buckets are sorted recursively to get the final solution.
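This overall structure (sort the $\sqrt{n}$-sized subarrays, distribute with on-the-fly bucket splitting, recurse on the buckets) can be sketched as follows; this is our toy Python illustration, in which the bucket split is done by a plain sort where the real algorithm would call the cache-oblivious median routine:

```python
import math
from bisect import bisect_left

# Toy sketch of distribution sort (ours, not the paper's implementation):
# the bucket split uses a plain sort in place of median finding.

def dist_sort(a):
    n = len(a)
    if n <= 16:
        return sorted(a)
    s = math.isqrt(n)
    # Step 1: sort the ~sqrt(n)-sized subarrays.
    subs = [sorted(a[k:k + s]) for k in range(0, n, s)]
    # Step 2: distribute into buckets of size at most 2*sqrt(n),
    # splitting a bucket at its median whenever it overflows.
    buckets, pivots = [[]], [float('inf')]   # pivots[j] bounds bucket j
    for sub in subs:
        for x in sub:
            j = bisect_left(pivots, x)       # first bucket with pivot >= x
            buckets[j].append(x)
            if len(buckets[j]) > 2 * s:      # overflow: split at the median
                b = sorted(buckets[j])
                mid = len(b) // 2
                buckets[j:j + 1] = [b[:mid], b[mid:]]
                pivots.insert(j, b[mid - 1])
    # Step 3: recursively sort each bucket and concatenate.
    return [x for b in buckets for x in dist_sort(b)]
```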
Thus, by Lemma \ref{copyelements}, Lemma \ref{subarrarys} and Lemma \ref{buckets}, the total cost $S(n,\infty)$ of sorting $n$ elements is given by the following recursion \begin{align*} S(n,W)= \begin{cases} 2\sqrt{n}S(\sqrt{n},W)+ O(nj(1,1)+m+n/m)\hspace{41pt}\text{if } W>W'\\ O(nj(1,1)) \hspace{187pt}\text{ Otherwise}\\ \end{cases} \end{align*} In the cache-oblivious model, we get the following expected values for $f$. \begin{align*} f(a,b)= \begin{cases} 0 \hspace{46pt}\text{if } b<\alpha M\\ a/B \hspace{31pt}\text{if } b>\alpha M \text{ and } a\leq B\\ 1 \hspace{45pt}\text{if } b>\alpha M \text{ and } a>B\\ \end{cases} \end{align*} The expected value of $j(1,1)$ is $1/B$, the threshold working set $W'$ is $M$ (the size of the main memory), and correspondingly the value of $m$ is $B$. Substituting these values into the above recursion, we can simplify it as follows. \begin{align*} S(n,W)= \begin{cases} 2\sqrt{n}S(\sqrt{n},W)+ O(n/B+B+n/B)\hspace{41pt}\text{if } W>\alpha M\\ O(n/B) \hspace{187pt}\text{ Otherwise}\\ \end{cases} \end{align*} Solving this recursion gives the cache-oblivious running time of the algorithm: $$S(n,\infty)=O\left(\frac{n\log_M n}{B}\right)$$ \begin{theorem} The jump cost of cache-oblivious distribution sort is $O\left(\frac{n}{B}\log_M n\right)$, if $O(1/B)$ is the expected jump cost to access an element which is at distance 1 from some element in the memory of size $M$. \end{theorem} \section{Deferred proofs}\label{proofs} \input{2-querytype} \input{3-bivariate} \excludecomment{onlymain} \excludecomment{onlyproof} \excludecomment{onlyapp} \input{2-querytype} \input{3-bivariate} \input{why-b-stable} \end{document} \section{Generalizing the jump model for $M$ and $B$} $CO(M,B)$ denotes the model with a main memory of size $M$ and blocks of size $B$.
In the previous sections, we can imagine the execution pattern as a single moving finger positioned over the most recently accessed location. In the generalized jump model we deal with several fingers instead of just one. Let $f_1,f_2\ldots f_M$ be the fingers (locations in the main memory) and let $S=T[e_1],T[e_2]\ldots T[e_t]$ be the locations to be accessed by the algorithm. Let $d$ be the distance of location $e_i$ with respect to some finger chosen by an optimal algorithm; the algorithm chooses the finger so as to minimize its overall cost. $J(M,d)$ denotes the number of jumps of size $d$ from one of the $M$ fingers chosen by the optimal algorithm. The cache replacement policy removes the finger which is nearest to the newly placed finger. \noindent There are three possible scenarios regarding the cost to access a location. \begin{enumerate} \item If the accessed location $l_i$ is at distance $d>B$ from all the fingers, then the cost to access it is 1. \item If the accessed location $l_i$ is at distance less than $B$ from a set of fingers $F$ and all the fingers in $F$ lie on the same side of $l_i$, then the expected cost to access it is $d/B$, where $d$ is the distance of the nearest finger in $F$. \item If the accessed location $l_i$ is at distance less than $B$ from a set of fingers $F$ and at least one finger lies on each side of $l_i$, then the expected cost to access $l_i$ is 0. \end{enumerate} Let $J_{\text{large}}(M,d)$ be the set of all jumps where either all the fingers were at distance more than $B$ from $l_i$, or all the fingers at distance less than $B$ from $l_i$ lay on the same side of $l_i$ (i.e., Cases 1 and 2). Let $J_{\text{small}}(M,d)$ be the set of all jumps of size $d$ less than $B$ where two fingers were within distance $B$ of the accessed location, one on each side (i.e., Case 3). Since all the jumps covered by $J_{\text{small}}(M,d)$ incur zero cost, we can concentrate on $J_{\text{large}}(M,d)$. Let $J'_{\text{large}}(M,2^i)$ be the number of jumps in $J_{\text{large}}(M,d)$ whose jump distances lie in the range $[2^{i},2^{i+1})$; then, analogous to Lemma \ref{min(*,*)}, we can show that $$CO(M,B)=\Theta\left(\frac{1}{B}\sum_{i=0}^{\infty} J'_{\text{large}}(M,2^i)\left[\min\left({1,\frac{2^i}{B}}\right)\right]\right)$$ On closer inspection, the probability depends on whether $d\leq B/2$ or $B/2<d<B$. In the first case, the probability that the new access requires a new block is $\frac{d}{B}$; in the second case it is $\frac{2d-B}{B}$. But $\frac{2d-B}{B}<\frac{d}{B}$ whenever $d<B$, so we upper bound the probability by $d/B$ in both cases. If the definition works out as expected, then with a minor change to the equations in the proof of Lemma \ref{J_Implies_CO} (jump optimal $\implies$ CO optimal) we can obtain the same result for the general case. Lemmas \ref{lin_comb} and \ref{leqn} do not use the previous definition (of $CO(B)$) in a major way and can easily be adapted to the new definition of $CO(M,B)$.
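The three access scenarios above can be captured by the following helper (our illustrative Python; fingers are given as memory positions and the returned value is the expected cost):

```python
# Expected cost to access location loc given the current finger positions
# (our sketch of the three scenarios above; B is the block size).

def access_cost(fingers, loc, B):
    near = [p for p in fingers if abs(p - loc) < B]
    if not near:
        return 1                                # case 1: every finger far
    left = any(p < loc for p in near)
    right = any(p > loc for p in near)
    if left and right:
        return 0                                # case 3: straddled, free
    d = min(abs(p - loc) for p in near)
    return d / B                                # case 2: expected d/B
```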
\section{Few Observations which have not been used} Define $J(B)$ as follows: $$J(B)=\begin{cases} 2A(1)&\text{when }B=1 \\ 2A(B)-\displaystyle\sum_{i=1}^{\lg B} \frac{J(B/2^i)}{2^i} & \text{when }B>1 \end{cases} $$ \begin{lemma} $J(2^i)=2A(2^i)-A(2^{i-1})$ \end{lemma} \begin{proof} As the induction hypothesis, assume $J(2^i)=2A(2^i)-A(2^{i-1})$ (for $i\geq 2$) \begin{align*} J(2^{i+1}) &= 2A(2^{i+1})- \sum_{j=1}^{i+1} \frac{J(2^{i+1}/2^j)}{2^j} & \text{Definition of $J$} \\ &= 2A(2^{i+1})- \frac{J(1)}{2^{i+1}} - \sum_{j=1}^{i} \frac{J(2^{i-j+1})}{2^j} & \text{Math}\\ &= 2A(2^{i+1})- \frac{2A(1)}{2^{i+1}} - \sum_{j=1}^{i} \frac{J(2^{i-j+1})}{2^j} & \text{Definition of $J(1)$}\\ &= 2A(2^{i+1})- \frac{A(1)}{2^i} - \sum_{j=1}^{i} \frac{J(2^{i-j+1})}{2^j} & \text{Math}\\ &= 2A(2^{i+1}) - \frac{A(1)}{2^i} - \sum_{j=1}^{i} \frac{2A(2^{i-j+1}) - A(2^{i-j})}{2^j} & \text{Induction}\\ &= 2A(2^{i+1}) - \frac{A(1)}{2^i} - \sum_{j=1}^{i} \left( \frac{A(2^{i-j+1})}{2^{j-1}} - \frac{A(2^{i-j})}{2^j}\right) & \text{Math}\\ &= 2A(2^{i+1}) - \frac{A(1)}{2^i} - \left( \sum_{j=0}^{i-1} \frac{A(2^{i-j})}{2^{j}} - \sum_{j=1}^{i} \frac{A(2^{i-j})}{2^j}\right) & \text{Change limits for clarity}\\ &= 2A(2^{i+1}) - \frac{A(1)}{2^i} -\left(\frac{A(2^{i})}{2^0} - \frac{A(2^{i-i})}{2^{i}} \right) & \text{Telescoping sum}\\ &=2A(2^{i+1})-A(2^i)&\text{Math} \end{align*} \end{proof} \end{comment} \begin{lemma}\label{lin_comb} If algorithm $A$ has asymptotically optimal (minimal) jump cost for all jump functions of the form $j_B(2^i)=\min(1,\frac{2^i}{B})$, for all $B$, then it is also optimal for all jump functions which can be expressed as \begin{align*} \sum_{k=0}^{x} a_k\min(1,\frac{2^i}{B_k}) \end{align*} for all $a_k > 0$, and $B_k > 0$.
\end{corollary} \begin{proof} From the definition of the jump model, we have \begin{align*} \exists_{c>0}\forall_B \left[ \left(\sum_{i=0}^{\infty} \min(1,\frac{2^i}{B})J'_{A_{opt}}(2^i) \right) \right. & \leq \left. c \min_{A\text{ solves }P}\left(\sum_{i=0}^{\infty} \min(1,\frac{2^i}{B})J'_{A}(2^i) \right)\right]\\ \intertext{Let $G=\{B_0,B_1,\ldots,B_x\}$ be different values of $B$; then for all $l\in[0,x]$ we get} \exists_{c>0}\forall_{B_l\in G} \left[ \left(\sum_{i=0}^{\infty} \min(1,\frac{2^i}{B_l})J'_{A_{opt}}(2^i) \right) \right. & \leq \left. c \min_{A \text{ solves }P}\left(\sum_{i=0}^{\infty}\min(1,\frac{2^i}{B_l})J'_{A}(2^i) \right)\right] \intertext{Multiplying by the positive constant $a_l$, we get} \exists_{c>0}\forall_{B_l\in G} \left[ \left(\sum_{i=0}^{\infty} a_l\min(1,\frac{2^i}{B_l})J'_{A_{opt}}(2^i) \right) \right. & \leq \left. c \min_{A\text{ solves }P}\left(\sum_{i=0}^{\infty}a_l\min(1,\frac{2^i}{B_l})J'_{A}(2^i) \right)\right] \intertext{Taking the summation over all the functions of the form $a_l\min(1,2^i/B_l)$, we get} \exists_{c>0} \left[ \left(\sum_{k=0}^{x}\sum_{i=0}^{\infty} a_k\min(1,\frac{2^i}{B_k})J'_{A_{opt}}(2^i) \right) \right. & \leq \left.c \min_{A\text{ solves }P}\left(\sum_{k=0}^{x}\sum_{i=0}^{\infty} a_k\min(1,\frac{2^i}{B_k})J'_{A}(2^i) \right)\right]\\ \exists_{c>0} \left[ \left(\sum_{i=0}^{\infty} \left(\sum_{k=0}^{x} a_k\min(1,\frac{2^i}{B_k})\right)J'_{A_{opt}}(2^i) \right) \right. & \leq \left.c \min_{A\text{ solves }P}\left(\sum_{i=0}^{\infty}\left( \sum_{k=0}^{x}a_k\min(1,\frac{2^i}{B_k})\right)J'_{A}(2^i) \right)\right] \end{align*} Thus, if the jump function can be expressed as $$\sum_{k=0}^{x} a_k\min(1,\frac{2^i}{B_k})$$ then algorithm $A_{opt}$ is jump optimal for this function as well.
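Since each step of the derivation above is just the same inequality summed with positive weights, the conclusion can also be checked numerically. The following sketch uses made-up jump counts $J'(2^i)$ and made-up weights $(a_k,B_k)$ (none of these values come from a particular algorithm); it confirms that the cost under the combined jump function is exactly the corresponding weighted sum of the per-$B$ costs.

```python
# Numerical sanity check of the linearity step (hypothetical data):
# the jump cost under a linear combination of jump functions
# j_B(d) = min(1, d/B) equals the same linear combination of the
# individual per-B jump costs.

def jump_cost(counts, B):
    """Sum over i of J'(2^i) * min(1, 2^i / B)."""
    return sum(c * min(1.0, 2**i / B) for i, c in enumerate(counts))

def combined_cost(counts, terms):
    """Jump cost under j(d) = sum_k a_k * min(1, d / B_k);
    terms is a list of (a_k, B_k) pairs."""
    return sum(c * sum(a * min(1.0, 2**i / B) for a, B in terms)
               for i, c in enumerate(counts))

counts = [40, 17, 8, 3, 1, 1]   # J'(2^0) .. J'(2^5), made up
terms = [(2.0, 4), (0.5, 16)]   # (a_k, B_k) pairs, made up

lhs = combined_cost(counts, terms)
rhs = sum(a * jump_cost(counts, B) for a, B in terms)
assert abs(lhs - rhs) < 1e-9
```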
\end{proof} \begin{lemma}\label{leqn} If a function $f(x)$ (for $x\in \mathbb{N}$) is piecewise linear and subadditive, given by the following equation \begin{align*} f(x) &= \alpha_1 x &\text{$0\leq x\leq x_1$}\\ &=\alpha_2 x+c_2 &\text{$x_1\leq x\leq x_2$}\\ &=\alpha_3 x+c_3 &\text{$x_2\leq x\leq x_3$}\\ &\vdots\\ &=\alpha_k x+c_k &\text{$x_{k-1}\leq x\leq x_k$}\\ \end{align*} where $\alpha_1>\alpha_2>\ldots >\alpha_k\geq 0$ (since the function $f(x)$ is subadditive) and $c_i\geq0$, then there exist $\beta=\{\beta_1,\beta_2,\ldots,\beta_k\}$ and $\gamma=\{\gamma_1,\gamma_2,\ldots,\gamma_k\}$ such that $f(x)$ can be represented as $\sum_{i=1}^k\min(\beta_ix,\gamma_i)$ \end{lemma} \begin{proof} For the sake of this proof, we assume that the function $f$ takes only integral parameters. Let $c_1=0$. We claim that $f$ can be represented as a linear combination of $\min(\cdot,\cdot)$ functions given as follows $$g(x)=\sum_{i=1}^k \min(\beta_i x,\gamma_i)$$ The values of $\beta_i$ and $\gamma_i$ are set as follows. \begin{align*} \beta_i &= \begin{dcases} \alpha_i & i = k \\ \alpha_{i}-\alpha_{i+1} & i < k \end{dcases} \end{align*} \begin{align*} \gamma_i &= \begin{dcases} c_2 & i = 1 \\ c_{i+1}-c_{i} & 1< i < k\\ \infty & i = k \end{dcases} \\ \end{align*} Thus $$g(x)= \min((\alpha_1-\alpha_2)x,c_2) + \sum_{i=2}^{k-1}\min((\alpha_{i}-\alpha_{i+1})x,c_{i+1}-c_{i}) +(\alpha_kx)$$ \begin{enumerate} \item $g(x)=\alpha_1 x$. When $g(x)=\alpha_1 x$, all the $\min$ functions have to evaluate to their first operand. Thus, for all $i\in [1,k-1]$, $(\alpha_i-\alpha_{i+1})x\leq c_{i+1}-c_i$ and therefore $\alpha_ix+c_i\leq \alpha_{i+1}x + c_{i+1}$. Since $\alpha_i>\alpha_{i+1}$, this implies that $x$ is smaller than the value $x'$ at which the two lines $y=\alpha_i x+c_i$ and $y=\alpha_{i+1} x+c_{i+1}$ meet. From the definition of $f(x)$, we know that the lines meet at $x'=x_i$. Thus, whenever $g(x)=\alpha_1 x$, we have $x\leq x_i$ for all $i$, which means that $x\leq x_1$.
\item $g(x)=\alpha_ix+c_i$, for $i$ such that $2\leq i\leq k$. Whenever $g(x)=\alpha_ix+c_i$, the first $i-1$ $\min$ functions evaluate to their second operand and the rest of the $\min$ functions evaluate to their first operand. By a similar argument, we can deduce that if $g(x)=\alpha_ix+c_i$ then $x_{i-1}\leq x\leq x_{i}$. \end{enumerate} This shows that $f$ and $g$ have the same evaluation at every integer in the range $[0,x_{k}]$. Thus, any piecewise linear subadditive function can be expressed as a linear combination of $\min$ functions. \end{proof} \begin{comment} \begin{lemma}\label{sandwich} Rounding off the jump distances to the nearest power of 2 over-estimates the cost incurred during an execution pattern by at most a factor of two. \end{lemma} \begin{proof} Let $l_1,l_2,\ldots,l_t$ be the indices accessed by the algorithm, in that order. The cost for the jump from index $l_{i-1}$ to index $l_{i}$ is $j(|l_{i}-l_{i-1}|)$. Let $x_i$ be an integer such that $2^{x_i}\leq |l_i-l_{i-1}| < 2^{x_i+1}$. Since $j$ is nondecreasing, we have $j(2^{x_i})\leq j(|l_i-l_{i-1}|)\leq j(2^{x_i+1})$, and since $j$ is subadditive, we also have $j(2^{x_i+1})\leq 2j(2^{x_i})$. \\ Thus, $$j(2^{x_i})\leq j(|l_i-l_{i-1}|)\leq 2j(2^{x_i})$$ $$\sum_{i=1}^{t} j(2^{x_i})\leq \sum_{i=1}^{t}j(|l_i-l_{i-1}|) \leq 2\sum_{i=1}^{t} j(2^{x_i})$$ \end{proof} In the remaining sections we shall assume that the distances (where needed) have been rounded off to the nearest larger power of two. \john{The lemma is too wordy. You are just trying to say that $j(x)=\Theta(j(2^{\lceil \log_2 x \rceil}))$, right?} \vkj{Not exactly but sort of, I mean that the summation over all distances is $\Theta()$. The lemma statement is now more concise} Let $J'(2^i)$ ($\forall i\geq 0$) denote the total number of jumps whose sizes are in the range $[2^{i},2^{i+1})$ during an execution $E$.
In the proof of Lemma \ref{sandwich}, notice that the costs of all jumps whose sizes lie in the range $[2^{i},2^{i+1})$ are overestimated using the same cost $j(2^{i+1})$. Thus \begin{equation}\label{JRA_twice} \sum_{i=0}^{\infty} J'(2^i)j(2^i)\leq JR_E^j\leq 2\sum_{i=0}^{\infty} J'(2^i)j(2^i) \end{equation} or simply \begin{equation}\label{JRA_twice1} JR_E^j = \Theta\left(\sum_{i=0}^{\infty} J'(2^i)j(2^i)\right) \end{equation} We now look at a simple variant of the cache oblivious model, where the main memory is of constant size. Alternatively, it can be imagined as though the access sequence always moves in the same direction along the input tape. We refer to this simplified cache oblivious model as the Cache Oblivious Forward Model, and its running time is given by $COF(B)$. \end{comment} \begin{corollary}\label{min(*,*)} \begin{align*} \coshift{B}{E}= \Theta\left(\sum_{i=0}^{\infty} \jcountp{E}{2^i}\left[\min\left({1,\frac{2^i}{B}}\right)\right]\right) \end{align*} \end{corollary} \begin{proof} By definition, we have that the smoothed cache oblivious cost is \begin{align*} \coshift{B}{E} &= \frac{1}{B}\sum_{s=0}^{B-1}\left(\sum_{i=2}^{|E|} \begin{cases} 1 &\text{if } \lfloor \frac{e_{i}-s}{B}\rfloor \neq \lfloor \frac{e_{i-1}-s}{B} \rfloor \\ 0 &\text{otherwise} \end{cases}\right) \end{align*} We define $d=|e_i - e_{i-1}|$ to be the distance of an access from the previous access. If $d<B$, the expected cost incurred when accessing $e_i$ is $\frac{d}{B}$. Thus, if $d$ is in the range $[2^t,2^{t+1})$ (where $t\leq \log{B}-1$), then the expected cost to access $e_i$ is between $\frac{2^t}{B}$ and $\frac{2^{t+1}}{B}$. Thus, the total cost of all accesses with distance $d$ in the range $[2^t,2^{t+1})$ is between $\jcountp{E}{2^t}\cdot \frac{2^t}{B}$ and $\jcountp{E}{2^t}\cdot \frac{2^{t+1}}{B}$.
Since the cost of all accesses with distance $d\geq B$ is 1, \begin{align*} \left[\sum_{i=0}^{\log{B} -1} \frac{2^i}{B}(\jcountp{E}{2^i}) + \sum_{i=\log{B}}^{\infty} \jcountp{E}{2^i} \right] &\leq \coshift{B}{E} \leq \left[\sum_{i=0}^{\log{B} -1} \frac{2^{i+1}}{B}(\jcountp{E}{2^i}) + \sum_{i=\log{B}}^{\infty} \jcountp{E}{2^i}\right] \intertext{Thus,} \coshift{B}{E} &= \Theta\left(\sum_{i=0}^{\infty} \jcountp{E}{2^i}\left[\min\left({1,\frac{2^i}{B}}\right)\right]\right) \end{align*} \end{proof} \begin{corollary}\label{J_Implies_CO} If algorithm $A$ is jump optimal for problem $P$, then it is CO-optimal for $P$. \end{corollary} \begin{proof} Since $A$ is jump optimal, it is optimal for all subadditive jump functions, including the jump function \begin{align*} j_B(d) &= \min\left(1, \frac{d}{B}\right) \intertext{Consider an instance of $P$ and execution sequences $E_A$ and $E_{\textsc{OPT}}$, generated by $A$ and $\textsc{OPT}^{\textsc{co}}_{B}$, that solve it. By Lemma~\ref{JtoJprime}, we have that} \jr{j(\cdot)}{E_A} &= \Theta\left(\sum_{i=0}^{\infty}\jcountp{E_A}{2^i}j(2^i) \right) \intertext{and from Corollary~\ref{min(*,*)},} \coshift{B}{E_A} &= \Theta\left(\sum_{i=0}^{\infty} \jcountp{E_A}{2^i}\left[\min\left({1,\frac{2^i}{B}}\right)\right]\right) \intertext{Thus, for the jump function $j_B(d)$ defined above,} \jr{j_B(\cdot)}{E_A} &= \Theta\left(\coshift{B}{E_A}\right) \intertext{If $E_A$ is not CO-optimal, then} \jr{j_B(\cdot)}{E_A} &= \omega\left(\coshift{B}{E_{\textsc{OPT}}}\right) \\ &= \omega\left(\jr{j_B(\cdot)}{E_{\textsc{OPT}}}\right) \end{align*} However, since we know $A$ is jump optimal, there can be no $E_{\textsc{OPT}}$ that has an asymptotically smaller jump runtime. Thus, $E_A$ must result in an asymptotically minimal cache oblivious cost and $A$ is CO-optimal.
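The two-sided bound above can also be checked empirically on a small example. This sketch (an arbitrary random access sequence, not produced by any particular algorithm) computes the smoothed cache-oblivious cost exactly and compares it against the bucketed sum $\sum_i \jcountp{E}{2^i}\min(1,\frac{2^i}{B})$; when $B$ is a power of two, the exact cost lies within a factor of two of the bucketed sum.

```python
import random

def co_shift_cost(E, B):
    """Smoothed CO cost: average over shifts s of the number of
    accesses whose block (relative to shift s) differs from the
    previous access's block."""
    total = 0
    for s in range(B):
        total += sum(1 for prev, cur in zip(E, E[1:])
                     if (cur - s) // B != (prev - s) // B)
    return total / B

def bucketed_jump_cost(E, B):
    """Sum over i of J'(2^i) * min(1, 2^i / B), where J'(2^i) counts
    jumps with distance in [2^i, 2^(i+1))."""
    cost = 0.0
    for prev, cur in zip(E, E[1:]):
        d = abs(cur - prev)
        if d == 0:
            continue            # repeated access: no block change
        i = d.bit_length() - 1  # 2^i <= d < 2^(i+1)
        cost += min(1.0, 2**i / B)
    return cost

random.seed(0)
E = [random.randrange(1000) for _ in range(500)]
B = 32                          # power of two, as in the analysis
exact = co_shift_cost(E, B)
bound = bucketed_jump_cost(E, B)
assert bound <= exact <= 2 * bound
```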
\end{proof} \begin{corollary}\label{lin_comb} \ben{Rewrite this with the stuff john came up with on the board.} If algorithm $A$ has asymptotically optimal (minimal) jump cost for all jump functions of the form $j_B(2^i)=\min(1,\frac{2^i}{B})$, for all $B$, then it is also optimal for all jump functions which can be expressed as \begin{align*} \sum_{k=0}^{C} a_k\min(1,\frac{2^i}{B_k}) \end{align*} for all constants $C>0$, $a_k > 0$, and $B_k > 0$. \end{corollary} \begin{proof} Algorithm $A$ is asymptotically optimal for jump functions $j_B(d) = \min(1,\frac{d}{B})$, for any $B$. Consider a jump function made up of a linear combination of a constant number of such functions, i.e., \begin{align*} j(d) &= \sum_{k=0}^C \min\left(1,\frac{d}{B_k}\right) \intertext{Since $A$ is asymptotically optimal for every term of the summation, it must be optimal for the entire summation. Multiplying each term by a constant also only adds a constant factor, thus $A$ is asymptotically optimal for all jump functions of the form} &\sum_{k=0}^C \left(a_k\cdot \min\left(1,\frac{d}{B_k}\right)\right) \end{align*} where $C$ is a constant and all $a_k$ and $B_k$ are positive constants. \end{proof} \begin{lemma}\label{leqn} Any piecewise linear, subadditive function can be represented as a linear combination of $\min(\cdot,\cdot)$ functions.
That is, if the function $f(x)$ is of the form \begin{align*} f(x) &= \alpha_1 x &\text{$0\leq x\leq x_1$}\\ &=\alpha_2 x+c_2 &\text{$x_1\leq x\leq x_2$}\\ &=\alpha_3 x+c_3 &\text{$x_2\leq x\leq x_3$}\\ &\vdots\\ &=\alpha_k x+c_k &\text{$x_{k-1}\leq x\leq x_k$}\\ \end{align*} where $\alpha_1>\alpha_2>\ldots >\alpha_k\geq 0$ (since the function $f(x)$ is subadditive) and $c_i\geq0$, then \begin{align*} f(x) &= \sum_{i=1}^k \min(\beta_i\cdot x, \gamma_i) \end{align*} for some $\beta=\{\beta_1,\beta_2,\ldots,\beta_k\}$ and $\gamma=\{\gamma_1,\gamma_2,\ldots,\gamma_k\}$. \end{lemma} \begin{proof} \ben{In Varun's proof, he assumed that $f$ only takes integral values as input. I don't see why we need that restriction so I removed it.} We claim that $f$ can be represented as a linear combination of $\min(\cdot,\cdot)$ functions. To show this, we construct such a function, $g(x)$, and show it is equivalent to $f$: \begin{align*} g(x)&=\sum_{i=1}^k \min(\beta_i x,\gamma_i) \intertext{To show they are equivalent, we assign the values of $\beta_i$ and $\gamma_i$ as follows} \beta_i &= \begin{dcases} \alpha_i & i = k \\ \alpha_{i}-\alpha_{i+1} & i < k \end{dcases} \\ \gamma_i &= \begin{dcases} c_2 & i = 1 \\ c_{i+1}-c_{i} & 1< i < k\\ \infty & i = k \end{dcases} \\ \end{align*} where $c_1=0$. Thus, we have \begin{align*} g(x)&= \sum_{i=1}^{k-1}\min((\alpha_{i}-\alpha_{i+1})x,c_{i+1}-c_{i}) +(\alpha_kx) \\ &= \min((\alpha_1-\alpha_2)x,c_2) + \sum_{i=2}^{k-1}\min((\alpha_{i}-\alpha_{i+1})x,c_{i+1}-c_{i}) +(\alpha_kx) \end{align*} Based on this formulation of $g(x)$, we consider two possible cases: \begin{enumerate} \item $g(x)=\alpha_1 x$. This occurs when all of the $\min$ functions that comprise $g(x)$ evaluate to their first operand. In this case, we have that $(\alpha_i-\alpha_{i+1})x\leq (c_{i+1}-c_i)$, for all $i < k$ and, therefore, $\alpha_ix+c_i\leq \alpha_{i+1}x + c_{i+1}$.
Since $\alpha_i > \alpha_{i+1}$, we know that $x\leq x'_i$, where $x'_i$ is defined as the value such that \begin{align*} \alpha_i x'_i + c_i &= \alpha_{i+1} x'_i + c_{i+1} \end{align*} By the definition of $f(x)$, we see that $x'_i = x_i$. We have that $x \leq x_i$, for all $i<k$. Therefore, when $g(x)=\alpha_1x$, $f(x)=\alpha_1x$ as well. \item $g(x)=\alpha_ix+c_i$, for $2\leq i\leq k$. When $g(x)=\alpha_ix+c_i$, the first $i-1$ $\min$ functions evaluate to their second operand and the rest of the $\min$ functions evaluate to their first operand. Using the argument from case 1, we can deduce that if $g(x)=\alpha_ix+c_i$ then $x_{i-1}\leq x\leq{x_{i}}$. Thus, if $g(x)=\alpha_ix+c_i$ then $f(x) = \alpha_ix+c_i$, for all $2 \leq i \leq k$. \end{enumerate} \noindent For all input values $x$ for which $f(x)$ is defined, we have shown that $g(x) = f(x)$. For any piecewise linear and subadditive function, we can construct such a $g(x)$. Therefore, any piecewise linear subadditive function can be expressed as a linear combination of $\min$ functions. \end{proof} \begin{comment} \vkj{backup proof kept for comparison} \begin{proof} Let $c_1=0$. We claim that $f(x)$ can be represented as a linear combination of $\min(\cdot,\cdot)$ functions given as follows \john{use $\Sigma$} $$g(x)=\sum_{i=1}^k \min(\beta_i x,\gamma_i)$$ Trivially, the function $g(x)$ evaluates to 0 whenever $x$ is 0. The $\min$ functions are designed in such a way that whenever $x_{i-1}<x\leq x_i$, the first $i-1$ $\min$ functions evaluate to the second parameter in the function and the rest of the $\min$ functions evaluate to their first parameter. The actual values of $\beta_i$ and $\gamma_i$ will be revealed later.
Thus, whenever $x_{i-1}<x\leq x_i$, the function $g(x)$ evaluates to $$\sum_{q=1}^{i-1} \gamma_{q} + \sum_{q=i}^{k}\beta_q x$$ To match $f(x)$ with $g(x)$ we get the following set of linear equations \begin{align*} \alpha_1x &=\sum_{i=1}^k\beta_ix \\ \alpha_2x +c_2&=\sum_{i=2}^k\beta_ix +\gamma_1 \\ \alpha_3x+c_3&=\sum_{i=3}^k\beta_ix +\sum_{i=1}^{2}\gamma_i\\ \vdots \\ \alpha_{k-1}x+c_{k-1}&=\sum_{i={k-1}}^k\beta_ix +\sum_{i=1}^{k-2}\gamma_i\\ \alpha_kx+c_k&=\beta_kx+\sum_{i=1}^{k-1}\gamma_i\\ \end{align*} which gives us the following set of equations $$\alpha_i=\sum_{q=i}^k \beta_q$$ and $$\forall i\geq 2, c_i=\sum_{q=1}^{i-1}\gamma_q$$ Solving these linear equations, we get \john{I think your presentation here is backwards. You need not show the work above. Just have these values below and show simply how this proves $f(x)$ and $g(x)$ are the same function.} \begin{align*} \beta_i &= \begin{dcases} \alpha_i & i = k \\ \alpha_{i}-\alpha_{i+1} & i < k \end{dcases} \end{align*} \begin{align*} \gamma_i &= \begin{dcases} c_2 & i = 1 \\ c_{i+1}-c_{i} & 1< i < k\\ \infty & i = k \end{dcases} \\ \end{align*} Thus, any piecewise linear subadditive function can be expressed as a linear combination of $\min$ functions. \end{proof} \end{comment} We note that, in the jump model, we define the jump distance as the number of memory locations between two accesses. Thus, the input to any jump function, $j$, is always a non-negative integer less than $n$ (the size of the input). \begin{corollary}\label{coro1} Any jump function $j$ can be expressed as a linear combination of finitely many $\min(\cdot,\cdot)$ functions. \end{corollary} \begin{proof} We define the jump distance, $d$, as the number of memory locations between the previous access and the current access, so jump functions take only integral inputs; jump functions are, by definition, subadditive. Evaluate the jump function $j$ over all $n$ possible distance parameters.
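The decomposition from Lemma~\ref{leqn} that this argument applies is easy to exercise concretely. The slopes and breakpoints below are made up for illustration; the sketch computes $\beta$ and $\gamma$ exactly as in the lemma and checks that $f$ and $g$ agree pointwise.

```python
# Sketch of the construction in Lemma "leqn" on a made-up example:
# decompose a piecewise linear subadditive function f into a sum of
# min(beta_i * x, gamma_i) terms and verify pointwise agreement.

alphas = [4.0, 2.0, 1.0, 0.5]   # slopes, strictly decreasing
xs = [2.0, 6.0, 10.0]           # breakpoints x_1 .. x_{k-1}

# Intercepts c_i chosen so consecutive pieces meet at the breakpoints
# (c_1 = 0, matching the lemma's convention).
cs = [0.0]
for i, bp in enumerate(xs):
    cs.append(cs[i] + (alphas[i] - alphas[i + 1]) * bp)

def f(x):
    """Evaluate the piecewise linear function directly."""
    piece = sum(1 for bp in xs if x > bp)
    return alphas[piece] * x + cs[piece]

k = len(alphas)
betas = [alphas[i] - alphas[i + 1] for i in range(k - 1)] + [alphas[-1]]
gammas = [cs[i + 1] - cs[i] for i in range(k - 1)] + [float('inf')]

def g(x):
    """The lemma's linear combination of min terms."""
    return sum(min(b * x, c) for b, c in zip(betas, gammas))

for x in [0, 1, 2, 3, 5, 6, 7, 9, 10, 12, 100]:
    assert abs(f(x) - g(x)) < 1e-9
```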
This jump function (over the $n$ distance parameters) can be expressed as a piecewise linear function with at most $n$ pieces, which is also subadditive since $j$ is subadditive. This yields a function $j'$ which has the same evaluation as $j$ whenever the input parameter is a non-negative integer less than $n$. Thus, every jump function on non-negative integral input parameters can be expressed as a piecewise linear and subadditive function, and the application of Lemma~\ref{leqn} proves the corollary. \end{proof} \begin{comment} \section{Appendix} \subsection{Even more rough thoughts on memory} Recall: $$JR_{j(\cdot,\cdot)}(E) = \sum_{i=1}^{|E|} \max\left( 0 , \min_{\ell\leq e_i}j(e_i-\ell,w_E(\ell,i)) + \min_{r \geq e_i}j(r-e_i,w_E(r,i)) - 1\right) $$ $$ j_{M,B}(d,w) = \begin{cases} \frac{d}{B} & \text{if } d < B\text{ and }w < M \\1 & \text{otherwise} \end{cases} $$ Let $$ww_m(E,i)= \min_k \left( \{e_{i-1}, e_{i-2} \ldots e_{i-m(k)} \} \cap [e_i-k ,e_i+k] \not = \emptyset \right)$$ This is a unified working set function, which depends on a function $m$ which we call the space-time ratio. In the geometric view, it asks what is the width $w$ of the largest empty rectangle of height $m(w)$ centered on the access $e_i$. Look at the cache oblivious runtimes of all levels, assuming geometric and $M=B$, and all weighted the same. I guess that: $$\sum_{b=0}^{\infty} JR_{j_{2^b,2^b}}(E) = \sum_{i=1}^{|E|} \Theta(\log(ww_{m(B)=B}(E,i))) $$ This could be generalized, using a tall-cache-like assumption, while still weighting the levels the same: $$\sum_{b=0}^{\infty} JR_{j_{2^b,2^{2b}}}(E) = \sum_{i=1}^{|E|} \Theta(\log(ww_{m(B)=B^2}(E,i))) $$ If you don't weight the levels the same, and bigger levels cost more, they will still dominate. You should still get something similar, but the $\log ww$ would be replaced with a faster growing function.
This gives the runtime as a function of $ww$ with a certain space-time aspect ratio. I believe we can prove that CO optimal algorithms are optimal for any $ww$ and space-time aspect ratio, and that CO optimal algorithms are optimal for any $ww$ with at least a polynomial space-time aspect ratio. \iffalse \ben{All defns. below are old and replaced with the new section 5 and 5.1. We don't need bivariate jump function or new working set defn. because of LRU defn.} The jump function is a bivariate function defined as follows \begin{align*} j_{M,B}(d,w) &= \begin{cases} \frac{d}{B} & \text{if } d < B\text{ and }w < M \\ 1 & \text{Otherwise} \end{cases} \end{align*} where $d$ is the jump distance and $w$ is the size of the working set. $W(e_i)$ is the working set after accessing $e_{i}\in E$. When a new location $e_i$ is accessed, it is given a cost based on how far $e_i$ is from a location whose contents are in the working set $W(e_{i-1})$ (the working set after $e_{i-1}$ was accessed). The right cost for location $e_i$ is denoted $R_i$ and is given by $\min_{x > e_i} j_{M,B}(x-e_i,w)$, where $x$ denotes the tape address of an element in the working set. Analogously, we define the left cost $L_i$ to be $\min_{x \leq e_i} j_{M,B}(e_i-x,w)$.\newline The section involving forward embedding can also be viewed in terms of costs, but it only involves the right cost. We define both right and left costs to capture algorithms which examine memory locations in a haphazard way that may not always proceed in a specific direction. There are four possible cases regarding the cost to access a new location $e_i$. \begin{enumerate} \item $L_i>0, R_i>0\text{ and } L_i+R_i\leq 1$ There are two locations $m_l, m_r\in M$ such that $m_l$ and $m_r$ are at most 1 block apart and $e_i$ lies between $m_l$ and $m_r$. The expected cost is zero: since the blocks containing $m_l$ and $m_r$ are in memory, the block containing $e_i$ is also already in memory.
\item $L_i>0,R_i>0, \text{ and } 1<L_i+R_i\leq 2$ There are two locations $m_l, m_r\in M$ such that $m_l$ and $m_r$ are at most 2 blocks apart and $e_i$ lies between $m_l$ and $m_r$. The expected cost in this case comes out to be $L_i+R_i-1$. \item $L_i<1,R_i=1$ There is a location $m_l\in M$ on the left of $e_i$ at distance less than $B$, and any locations in $M$ on the right of $e_i$ are at distance greater than $B$. The expected cost is $L_i$. \item $L_i=1,R_i=1$ There are no locations in $M$ at distance less than $B$ from $e_i$. The expected cost is 1. \end{enumerate} The access jump cost for location $e_i$ in all these cases can be expressed by the single cost function $\max(0, L_i+R_i-1)$. This expected access jump cost for each access $e_i$ is the same as the expected access cost for $e_i$ in the cache oblivious model with memory $M$ and blocks of size $B$. Thus, if we define the jump function to be $\max(0,L_i+R_i-1)$, we get a useful relationship between the jump model and the cache oblivious model. The expected jump cost for an execution sequence $E$, given memory $M$ with block size $B$, is \begin{align*} JR_{j_{M,B}(\cdot,\cdot)}(E) &=\sum_{i=2}^{|E|} \left(\max \left(0,\min_{x \leq e_i} j_{M,B}(e_i-x,w) + \min_{x > e_i} j_{M,B}(x-e_i,w)-1\right) \right)\\ &=\sum_{i=2}^{|E|} \left(\max(0,L_i+R_i-1)\right) \end{align*} \begin{comment} An alternate, cleaner way to look at this type of cost counting for the $i^{th}$ access is to charge unit cost for every access and then give discounts based on the current state of the memory and how far the location accessed at the $i^{th}$ iteration is from the previous states. The memory $M$ consists of $|M|/B$ blocks corresponding to the most recent $|M|/B$ accesses which have been performed. When a new location $e_i$ is accessed, it is given a discount based on how far $e_i$ is from a location whose contents are already in $M$.
The right discount for location $e_i$ when the state of memory is $M$ is denoted $R(M,B,e_i)$ and is given by $\min(1,\max_{j}(0, \frac{m_j-e_i}{B}))$, where $m_j$ denotes the tape address of the $j^{th}$ element in $M$ (the minimum and maximum functions are used to ensure that the right discount is between 0 and 1, inclusive). Analogously, we define the left discount to be $\min(1,\max_j(0, \frac{e_i-m_j}{B}))$. The $\min$ function caps the discount at 1. If $R(M,B,e_i)$ and $L(M,B,e_i)$ are both non-zero, then two blocks which are adjacent on the tape are also in $M$, and $e_i$ belongs to one of these adjacent blocks. These discounts capture the cases in cache oblivious memory where the newly accessed memory location is probably already in memory. The section involving forward embedding can also be viewed in terms of discounts, but it only involves the right discount. We define both right and left discounts to capture algorithms which examine memory locations in a haphazard way that may not always proceed in a specific direction. In the best case, a location $e_i$ can be accessed for free; thus the maximum discount which can be availed is at most 1. Let the nearest left neighbor of $e_i$ be the location $e_j\in M$ such that $e_i-e_j>0$ is minimum. The nearest right neighbor of $e_i$ is defined analogously.
The expected cost to access $e_i$ in cache oblivious memory $M$ is now defined in terms of discounts as \begin{align*} CO(M,B,e_i)&=\begin{cases}1-R(M,B,e_i) & \text{when $L$ is 0}\\ 1-L(M,B,e_i) & \text{when $R$ is 0}\\ 0 & \text{when } L+R\leq 1\\ 0 & \text{when } L+R>1 \text{ and the nearest neighbors are in adjacent blocks}\\ 1-\max(L(M,B,e_i),R(M,B,e_i)) & \text{when } L+R>1 \text{ and the nearest neighbors are 2 blocks apart} \end{cases} \end{align*} which can be simplified to \vkj{The following seems to be the correct and simplest view, but it is not that useful to get other desired results, since it also depends on the relative distance between its nearest left and right neighbors} \begin{align*} CO(M,B,e_i)&=\begin{cases} 0 & \text{when the left and right nearest neighbors are both 0 or 1 block apart}\\ 1-\max(L(M,B,e_i),R(M,B,e_i)) & \text{otherwise}\\ \end{cases} \end{align*} \vkj{Note that in case 2, it is a max function instead of summing up the left and right discounts, because if the sum is at least 1, it would imply that the expected cost is 0, which is not always true. Example: the left nearest neighbor is in block number $t-1$, the right nearest neighbor is in block number $t+1$, and the searched element is in the center of block $t$. So it is better to find the nearer of the two neighbors and give the discount with respect to that neighbor} Thus the total expected cost of an execution sequence $E$ is $$CO_E(M,B)=\sum_{i=1}^{|E|}CO(M,B,e_i)$$ Whenever the nearest left neighbor of $e_i$ and the nearest right neighbor of $e_i$ in $M$ are in the same block, $e_i$ is also in that block and hence already in $M$, making it free to access.
Similarly, whenever the nearest left neighbor of $e_i$ and the nearest right neighbor of $e_i$ in $M$ are exactly one block apart, $e_i$ lies either in the block corresponding to its nearest left neighbor or in the block corresponding to its nearest right neighbor, which implies that the content of location $e_i$ is already in $M$. Otherwise, at least one of the nearest neighbors on the right or left of $e_i$ is located at least 2 blocks away from the block of $e_i$, and thus the expected cost to access $e_i$ is $1-\max(L(M,B,e_i),R(M,B,e_i))$. \end{comment} The previous section can also be viewed as having just the right cost, where the access sequence moves in only one direction. Such a simplistic view is not always practical, as exemplified by the post-order traversal of a full binary tree with a forward embedding. A post-order traversal visits a node only after all its descendants have been accessed, which is not a unidirectional traversal of a forward embedding of a binary tree. One issue to keep in mind is the memory/cache replacement policy. By the same proof used in Theorem 6 of \cite{DBLP:journals/cacm/SleatorT85}, it can be shown that the LRU policy on an $M$-sized cache is asymptotically optimal when compared to an optimal cache replacement policy with an $M/2$-sized cache. \begin{comment} $J(M,d)$ is defined as the number of jumps of size $d$ when the memory was $M$. $J'(M,d)$ is defined as the number of jumps whose sizes are in the range $[d,2d)$ when the memory used is $M$. Lemmas \ref{JtoJprime} and \ref{sandwich_cache} carry over to this model without significant changes. Thus $$JR_E^j(M)=\Theta\left(\sum_{i=1}^{\infty} J'(2^i) j(2^i)\right)$$ We need something analogous to Lemma \ref{min(*,*)}. The issue is the following. In Lemma \ref{min(*,*)}, the expected cost of a jump in the range $[2^i,2^{i+1})$ (where $i\leq \lg B-1$) is between $\frac{2^i}{B}$ and $\frac{2^{i+1}}{B}$.
But here the expected cost of a jump in the range $[2^i,2^{i+1})$ is not the same. The expectation does not depend solely on the distance, but also on the relative distance between the right and left neighbors. If the right and left neighbors are close by, then the expected cost is 0; otherwise the expected cost is the jump distance of $e_i$ from its nearest neighbor, divided by $B$. \end{comment} \begin{comment} Precisely, if the distance between the nearest left and nearest right neighbor is less than $B$, then the expected cost is 0. If the distance between the nearest left neighbor and the nearest right neighbor is between $B$ and $2B$, then the expected cost is less than $d/B$, where $d$ is the distance of the nearest neighbor from $e_i$. A point to note: for a unidirectional execution on a forward embedding we had $|E|<n$, but in the generalized model this is not a requirement. In the simpler case, the small jumps did not add up to be more significant than the large jumps; here, however, the number of small jumps, and their corresponding cost, can far exceed the number and cost of the large jumps. If $e_i$ is accessed at time $i$, $e_f$ already resides in $M$, and the distance between $e_i$ and $e_f$ is less than $B$, then we can probably bring this entire contiguous chunk into memory, so that subsequent accesses are free. The jump cost is $j(|e_i-e_f|)$, and the elements between them will have cost zero, since they are already in memory. This alteration seems to capture locality of reference in a much more direct way. What this does is: if two somewhat nearby locations are accessed, then all the locations between them are cached, anticipating use of the locations between these two locations.
\end{comment} \begin{lemma} $$CO_{M,B}(E)=JR_{j_{M,B}(\cdot,\cdot)}(E)$$ where $j_{M,B}$ charges $\max(0,L_i+R_i-1)$ for the $i^{th}$ access \end{lemma} \begin{proof} From the definition of $CO_{M,B}(E)$ we know that \begin{align*} CO_{M,B}(E)&= \frac{1}{B}\sum_{s=0}^{B-1}\left(\sum_{i=2}^{|E|} \begin{cases} 1 &\text{if } \lfloor \frac{e_{i}-s}{B}\rfloor \neq \lfloor \frac{e_j-s}{B} \rfloor ,\forall e_j \in Horizon(E,M,B,i-1) \\ 0 &\text{otherwise} \end{cases}\right) \end{align*} This cache oblivious cost for running an execution $E$, where the memory has $M/B$ blocks of size $B$, can be interpreted in the following way. The expected cost to access a location $e_i\in E$ over all possible shifts $s$ is 1 if no location $e_j \in Horizon(E,M,B,i-1)$ lies within distance $B$ of $e_i$, since then, for every shift, the block which contains $e_i$ is not present in the memory. If there are two locations in $Horizon(E,M,B,i-1)$ which are less than $B$ distance apart and $e_i$ lies between those two locations, then the expected cost of $e_i$ is always 0, irrespective of the shift $s$. If there is a location $e_j$ (where $e_j$ is the nearest such location) in $Horizon(E,M,B,i-1)$ which is less than $B$ distance away, either on the right of $e_i$ or on the left of $e_i$ but not both, then the expected number of block misses is $|e_j-e_i|/B$, since there are $B-|e_j-e_i|$ shifts for which the cost is 0. Lastly, if there are two locations (say $e_l$ and $e_r$) in $Horizon(E,M,B,i-1)$ which are each less than $B$ distance away from $e_i$, one on either side of $e_i$, and the distance between the locations is more than $B$ (but less than $2B$), then the expected number of block misses is $\frac{|e_l-e_i|}{B} +\frac{ |e_r-e_i|}{B}-1$. Thus, for each $e_i\in E$, the expected cost in the cache oblivious model is the same as the expected cost in the jump model with jump function $\max(0,L_i+R_i-1)$, which gives us the desired result.
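The case analysis above can be brute-force verified on small examples. The sketch below treats the horizon as a given set of cached tape locations and assumes the working set fits in memory, so only distances matter ($L_i$ and $R_i$ reduce to $\min(1,d/B)$ for the nearest cached neighbor on each side); all parameters are made up.

```python
import random

def expected_co_cost(cached, e, B):
    """Brute force: average over all B shifts of whether e's block
    contains no cached location (i.e. a miss)."""
    misses = sum(1 for s in range(B)
                 if all((e - s) // B != (m - s) // B for m in cached))
    return misses / B

def jump_model_cost(cached, e, B):
    """max(0, L + R - 1), with L and R the left/right jump costs to
    the nearest cached neighbor on each side (1 if none exists)."""
    left = [e - m for m in cached if m <= e]
    right = [m - e for m in cached if m > e]
    L = min(1.0, min(left) / B) if left else 1.0
    R = min(1.0, min(right) / B) if right else 1.0
    return max(0.0, L + R - 1.0)

random.seed(1)
B = 16
for _ in range(200):
    cached = random.sample(range(200), 5)   # made-up horizon contents
    e = random.randrange(200)               # made-up access
    assert abs(expected_co_cost(cached, e, B)
               - jump_model_cost(cached, e, B)) < 1e-9
```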
\end{proof} \begin{comment} \begin{lemma} $$CO_E(M,B)= \Theta\left(\sum_{i=0}^{\infty} J'(2^i)\left[\min\left({1,\frac{2^i}{B}}\right)\right]\right)$$ \end{lemma} \begin{proof} We know that \begin{align*} CO_E(M,B) &= \frac{1}{B}\sum_{s=0}^{B-1}\left(\sum_{i=2}^{|E|} \begin{cases} 1 &\text{if } \lfloor \frac{e_{i}-s}{B}\rfloor \neq \lfloor \frac{m_{j}-s}{B} \rfloor ,\forall m_j\in M \\ 0 &\text{otherwise} \end{cases}\right) \end{align*} The above characterization of $CO_E(M,B)$ can be viewed alternatively as follows. The expected cost incurred to access a location which is at distance $d<B$ from a recently accessed location is $d/B$. Similarly, the expected cost of a jump in the range $[2^i,2^{i+1})$ (where $i\leq \lg B-1$) is between $\frac{2^i}{B}$ and $\frac{2^{i+1}}{B}$. Thus the expected cost of all jumps in the range $[2^i,2^{i+1})$ is between $J'(2^i)\frac{2^i}{B}$ and $J'(2^{i})\frac{2^{i+1}}{B}$. The expected cache oblivious cost for jumps larger than $B$ is 1, since such a jump always counts as a cache miss. Thus, the expected cost of an arbitrary access sequence in the cache oblivious model with memory blocks of size $B$ is $$\left[\sum_{i=0}^{\lg B -1} \frac{2^i}{B}(J'(2^i)) + \sum_{i=\lg B}^{n} J'(2^i) \right]\leq CO_E(M,B)\leq \left[\sum_{i=0}^{\lg B -1} \frac{2^{i+1}}{B}(J'(2^i)) + \sum_{i=\lg B}^{n} J'(2^i)\right]$$ Thus $$CO_E(M,B)= \Theta\left(\sum_{i=0}^{\infty} J'(2^i)\left[\min\left({1,\frac{2^i}{B}}\right)\right]\right)$$ \end{proof} \end{comment} \subsection{John's notes on what to add} \subsubsection{CO optimal is optimal for bivariate symmetric functions} (Note: this is written informally and needs to be formalized and proved.) The jump distance from a set $S$ is defined as the smallest jump needed to reach a target location from the locations in $S$. A jump function $j(W,d)$ in the bivariate scenario depends on the working set $W$ and the jump distance $d$ from the working set.
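A minimal sketch of these two definitions, with the bivariate cost instantiated as the $j_{M,B}(d,w)$ function recalled earlier in these notes (the concrete numbers are made up):

```python
def jump_distance(S, target):
    """Smallest jump needed to reach target from any location in S."""
    return min(abs(target - s) for s in S)

def j_cost(M, B, d, w):
    """Bivariate jump cost j_{M,B}(d, w): d/B for a short jump while
    the working set fits in memory, otherwise a full miss."""
    return d / B if (d < B and w < M) else 1.0

S = {3, 10, 42}                 # made-up working-set locations
d = jump_distance(S, 40)        # nearest is 42, so d = 2
assert d == 2
assert j_cost(M=8, B=4, d=d, w=3) == 0.5   # short jump, small working set
assert j_cost(M=8, B=4, d=d, w=9) == 1.0   # working set too large: miss
```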
A jump function is called symmetric if $j(W,d)=s(|W|,d)=s(d,|W|)$ for some sub-additive function $s$. We denote such a jump function as $j_s$. \begin{lemma} Suppose $A$ is CO-optimal. Then $A$ is asymptotically optimal in the bivariate jump model for all symmetric jump functions. \end{lemma} Merging 2 lists of length $n/2$ each requires $nj(1,1)$ cost to scan through the elements and an additional jump cost of $j(1,n/2)$ to access the first element of the second list. The total cost is $nj(1,1)+ j(1,n/2)$, which is asymptotically the cost of scanning through both lists. The sizes of the individual lists do not seem to matter much as long as the total size is the same. The cost to merge $k$ lists of size $n/k$ depends on whether $k\leq M$ or $k>M$. If $k\leq M$ then a single iteration of merging all $k$ lists simultaneously is sufficient, and it requires $nj(1,M)+kj(n/k,M)$ jump cost, which is roughly the cost of a scan through the entire input. When $k>M$, take $M$ lists and combine them into 1 list, reducing the $k$ lists to $k/M$ lists at a total cost of $\frac{k}{M}\left(\frac{nM}{k}j(1,M)+Mj\left(\frac{n}{k},M\right)\right)\leq 2n\,j(1,M)$, since $j$ is sub-additive in its first argument. Thus the total jump cost to merge the lists into a single list is at most $O\left(nj(1,M) \log_M k\right)$, which is asymptotically the cost of scanning through the input $O(\log_M k)$ times. This is the jump cost provided $M$ is known to the algorithm. \begin{comment} \begin{align*} \sum_{i=1}^{\log_M n }\left(nj(1,M)+ \frac{k}{M^i}j(M,nM^i/k)\right) &=n(\log_M n)j(M,1)+ k\sum_{i=1}^{\log_M n}\frac{j(M,nM^i/k)}{M^i}\\ &\leq n(\log_M n)j(M,1) + k \sum_{i=1}^{\log_M n}(nM^i/kM^{i})j(M,1)\\ &\leq n(\log_M n)j(M,1) + nj(M,1)\log_M n\\ & \leq 2 n(\log_M n)j(M,1) \\ \end{align*} The first term counts the cost of scanning the input $\log_M n$ times and the second term counts the cost of the jumps involved in finding the first element of each of the lists.
\end{comment} If we plug in the expected cost in the IO model, $j(1,M)=1/B$, we get an upper bound of $$ O\left(\frac{n(\log_M k)}{B}\right)$$ which gives a jump cost of $O\left(\frac{n\log_M n}{B}\right)$ for an implementation of merge sort over $n$ elements in memory $M$. \begin{comment} \subsection{Sorting and permutation} Analysis of the funnelsort algorithm: funnelsort uses a $k$-merger which takes $k$ lists and outputs $k^3$ elements in sorted order, and these $k$-mergers are recursively built from $\sqrt{k}$ input mergers and 1 output merger. The base case merger is a 2-merger which, when called, merges two lists and outputs at most 8 elements. The largest $k$-merger used by a funnelsort implementation is an $n^{1/3}$-merger. Let $T(k)$ denote the total jump cost of calling a $k$-merger to output $k^3$ elements in sorted order. Whenever a $k$-merger is called to output $k^3$ elements, it involves calling the $\sqrt{k}$ input mergers $k\sqrt{k}$ times in total, and the $\sqrt{k}$ output merger is also called $k\sqrt{k}$ times. Calling a $\sqrt{k}$-merger requires a jump of size at most $k^2$ (since the space required by a $k$-merger is $k^2$). The total jump cost $T(k)$ is given by the following recursion \begin{align*} T(k)&=2k^{3/2} T(\sqrt{k}) + kj(k^2,M)\\ &= \sum_{i=1}^{\log\log k} 2^i k^{3-1/2^{i-1}} j(k^{1/2^{i-1}}, M) +k j(k^2,M)\\ &\leq \sum_{i=1}^{\log\log k +1} 2^i k^3 j(1,M) \\ &=O(k^3\log k \, j(1,M)) \end{align*} Since the largest merger used by funnelsort is an $n^{1/3}$-merger, the total jump cost of running funnelsort with jump function $j$ is $O((n^{1/3})^3 \log n \, j(1, M))$, which is $O(n\log n \, j(1,M))$. Permutation in the jump model can be done at $O(nj(n,M))$ jump cost. Permutation can also be achieved by sorting, at $O(n\log n \, j(1,M))$ cost. If $\log n \cdot j(1,M)> j(n,M)$ then brute-force permutation is better; otherwise sorting is better.
\begin{comment} \subsection{Tall cache} Many cache-oblivious algorithms require what is known as the \emph{tall-cache} assumption. This assumes that $M\geq B^2$, which can typically be reduced to $M \geq B^{1+\epsilon}$ with minimal effort. Here we show the core of why this assumption is used, and what it means in the jump model. The canonical need to assume that $M\geq B^2$ lies in the following problem, which we call the \emph{disjoint scan}. Traverse a linked list of size $N$ embedded into an array such that there are $N$ sections contiguously embedded, but the $N$ sections could themselves appear in arbitrary order. \begin{figure}[h] \begin{tikzpicture} \node [rectangle split,rectangle split parts=6,rectangle split horizontal, text width=20mm,align=center,draw,dashed] (multi) {$N_1$\nodepart{two}$N_2$\nodepart{three}$\dots$\nodepart{four} $\dots$\nodepart{five}$N_{i-1}$\nodepart{six}$N_i$}; \node[draw,thick,fit=(multi),inner sep=-\pgflinewidth/2] {}; \draw[-latex] (multi.one south) to[out=-45,in=-135] (multi.six south); \draw[-latex] (multi.six south) to[out=-135,in=-45] (multi.two south); \draw[-latex] (multi.five south) to[out=-135,in=-45] (multi.four south); \draw[-latex] (multi.four south) to[out=-135,in=-45] (multi.three south); \draw[-latex] (multi.two north) to[out=45,in=135] (multi.five north); \end{tikzpicture} \caption{Traversal of $N$ sections in the array} \end{figure} An easy bound on the cache-oblivious runtime is: $$N \left\lceil \frac{N}{B} \right\rceil \leq \frac{N^2}{B} + N $$ The first term dominates when $N \geq B$, giving a runtime of $O( \frac{N^2}{B} ) $ which is optimal. However, when $N \leq B$ and does the term $N$ dominate? No, as a different analysis is possible with the tall cache assumption. Observe that if $N \leq B$ and $B^2 \leq M$, then $N^2 \leq M$. 
This means that the entire array of size $N^2$ fits in memory and thus none of the $\left\lceil \frac{N^2}{B} \right\rceil$ blocks will ever be loaded twice and the runtime will be $\left\lceil \frac{N^2}{B} \right\rceil$, which is the same time as to scan the array contiguously. Thus, the tall cache assumption allows a scan of $N$ lists of size $N$ which are placed arbitrarily across the array without affecting the runtime. What does the runtime look like in the jump model? A simple analysis gives $$ N^2j(1,1) + Nj(N^2,1)$$ The first term is the cost of the scans, and the second term is an upper bound on the cost of the discontinuities. Clearly at each discontinuity you move to an item at distance at most $N^2$ from the previous access. If $j(1,1) \geq \frac{1}{N} j(N^2,1)$ then the first term dominates. But subadditivity only guarantees that $j(1,1) \geq \frac{1}{N^2} j(N^2,1)$. A better analysis is: $$ N^2j(1,1) + \sum_{i=0}^{\log N} 2^i j( \frac{N^2}{2^i},N^2) $$ For the cache-oblivious jump function this is: $$ N^2 \cdot \frac{1}{B} + \sum_{i=0}^{\log N} 2^i j( \frac{N^2}{2^i},N^2) $$ \end{comment} \subsection{Even more rough thoughts on memory} Recall: $$JR_{j(\cdot,\cdot)}(E) = \sum_{i=1}^{|E|} \max\left( 0 , \min_{\ell\leq e_i}j(e_i-\ell,w_E(\ell,i)) + \min_{r \geq e_i}j(r-e_i,w_E(r,i)) - 1\right) $$ $$ j_{M,B}(d,w) = \begin{cases} \frac{d}{B} & \text{if } d < B\text{ and }w < M \\1 & \text{otherwise} \end{cases} $$ Let $$ww_m(E,i)= \min\left\{\, k : \{e_{i-1}, e_{i-2}, \ldots, e_{i-m(k)} \} \cap [e_i-k ,e_i+k] \neq \emptyset \,\right\}$$ This is a unified working set function, which depends on a function $m$ which we call the space-time ratio. In the geometric view, it asks for the width $w$ of the largest empty rectangle of height $m(w)$ centered on the access $e_i$. Look at the cache oblivious runtimes of all levels, assuming geometric and $M=B$, and all weighted the same.
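As a concrete reading of the definition, the following sketch (our own illustration; it assumes $i\ge 1$ and $m(k)\ge 1$ so the search terminates) computes $ww_m(E,i)$ by growing $k$ until a recent-enough access falls within distance $k$:

```python
def ww(E, i, m):
    """Unified working set ww_m(E, i): the smallest k such that one of the
    last m(k) accesses before E[i] lies within distance k of E[i]."""
    k = 1
    while True:
        recent = E[max(0, i - m(k)):i]          # the last m(k) accesses
        if any(abs(e - E[i]) <= k for e in recent):
            return k
        k += 1
```

With the square aspect ratio $m(k)=k$ (the $M=B$ case), an access at distance 1 from its predecessor gives $ww=1$, while an access whose recent neighbors are far in both space and time gives a larger value.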
I guess that: $$\sum_{b=0}^{\infty} JR_{j_{2^b,2^b}}(E) = \sum_{i=1}^{|E|} \Theta(\log(ww_{m(B)=B}(E,i))) $$ Using a tall-cache-like assumption, but still weighting the levels the same, this could be generalized to: $$\sum_{b=0}^{\infty} JR_{j_{2^b,2^{2b}}}(E) = \sum_{i=1}^{|E|} \Theta(\log(ww_{m(B)=B^2}(E,i))) $$ If you don't weight the levels the same, and bigger levels cost more, they will still dominate. You should still get something similar, but the $\log ww$ would be replaced with a faster-growing function. This gives the runtime as a function of $ww$ with a certain space-time aspect ratio. I believe we can prove that CO optimal algorithms are optimal for any $ww$ and space-time aspect ratio, and that CO optimal algorithms are optimal for any $ww$ with at least a polynomial space-time aspect ratio. \subsection{Tall cache} Many cache-oblivious algorithms require what is known as the \emph{tall-cache} assumption. The tall cache assumption states that if $M$ is the size of main memory and $B$ is the size of a block in the memory, then $M\geq B^2$. Here we show the core of why this assumption is used, and what it means in the jump model. The canonical need to assume that $M\geq B^2$ lies in the following problem, which we call the \emph{disjoint scan}. Traverse a linked list of size $N$ embedded into an array such that there are $\sqrt{N}$ sections (of $\sqrt{N}$ elements each) contiguously embedded, but the $\sqrt{N}$ sections could themselves appear in arbitrary order. The linked list $L=L_1\rightarrow L_2\rightarrow \cdots \rightarrow L_{\sqrt{N}}$ is embedded in an array, where the $L_i$ are the $\sqrt{N}$-sized sublists constituting the list $L$, and the elements in each sublist are stored in order.
\begin{figure}[h] \begin{tikzpicture} \node [rectangle split,rectangle split parts=6,rectangle split horizontal, text width=20mm,align=center,draw,dashed] (multi) {$L_1$\nodepart{two}$L_3$\nodepart{three}$\dots$\nodepart{four} $\dots$\nodepart{five}$L_{4}$\nodepart{six}$L_2$}; \node[draw,thick,fit=(multi),inner sep=-\pgflinewidth/2] {}; \draw[-latex] (multi.one south) to[out=-45,in=-135] (multi.six south); \draw[-latex] (multi.six south) to[out=-135,in=-45] (multi.two south); \draw[-latex] (multi.five south) to[out=-135,in=-45] (multi.four south); \draw[-latex] (multi.four south) to[out=-135,in=-45] (multi.three south); \draw[-latex] (multi.two north) to[out=45,in=135] (multi.five north); \end{tikzpicture} \caption{Traversal of $\sqrt{N}$ sections in the array} \end{figure} The cache oblivious cost to traverse one sublist is $\lceil \frac{\sqrt{N}}{B}\rceil$. Thus, an easy bound on the cache-oblivious runtime is: $$\sqrt{N} \left\lceil \frac{\sqrt{N}}{B} \right\rceil \leq \frac{N}{B} + \sqrt{N} $$ The first term dominates when $\sqrt{N} \geq B$, giving a runtime of $ \frac{N}{B} +o(N) $, which is optimal as this is the minimum cost required to do a linear scan on an array of size $N$. However, when $\sqrt{N} < B$, does the term $\sqrt{N}$ dominate? Without the tall cache assumption the $\sqrt{N}$ term may dominate, and it can be much greater than $\frac{N}{B}$. On the other hand, a different analysis is possible with the tall cache assumption. Observe that if $\sqrt{N} < B$ and $B^2 \leq M$, then $N \leq M$. This means that the entire array of size $N$ fits in memory, thus none of the blocks brought into memory at any point will ever be evicted, and the runtime will be $\left\lceil \frac{N}{B} \right\rceil$. Thus, the tall cache assumption is crucial: it allows a scan of $\sqrt{N}$ lists, each of size $\sqrt{N}$, placed arbitrarily across an array without (asymptotically) increasing the number of blocks required to be brought into memory.
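The effect of the tall-cache assumption on the disjoint scan can be seen in a direct simulation. The sketch below (illustrative, with a fixed sublist order chosen to defeat a short cache) counts LRU block misses for $N=256$, $\sqrt{N}=16$, and $B=32>\sqrt{N}$:

```python
from collections import OrderedDict

def lru_misses(addrs, M, B):
    """Count misses of an LRU cache holding M // B blocks of size B."""
    cache, misses = OrderedDict(), 0
    for a in addrs:
        blk = a // B
        if blk in cache:
            cache.move_to_end(blk)              # refresh recency
        else:
            misses += 1
            cache[blk] = True
            if len(cache) > M // B:
                cache.popitem(last=False)       # evict least recently used
    return misses

def disjoint_scan(order, L):
    """Address sequence visiting contiguous sections of length L in `order`."""
    return [s * L + j for s in order for j in range(L)]

order = [0, 2, 4, 6, 8, 10, 12, 14, 1, 3, 5, 7, 9, 11, 13, 15]
E = disjoint_scan(order, 16)                    # N = 256 addresses
```

With a tall cache ($M=B^2=1024$) the whole array fits and the scan incurs $N/B=8$ misses; with a short cache ($M=2B=64$) every sublist start misses, giving $\sqrt{N}=16$ misses.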
What does the runtime look like in the jump model? There are two types of jump costs involved in an instance of disjoint scan. The first type is the jump cost incurred by jumps between different sublists. The second type is the cost of scanning through each sublist once we have reached its start. A crude upper bound on the first type of cost is $\sqrt{N}j(N,1)$, since there are at most $\sqrt{N}$ such jumps and each jump has cost at most $j(N,1)$. The cost to do a linear scan on a list of size $\sqrt{N}$ is $\sqrt{N}j(1,1)$, so an upper bound on the second type of cost is $Nj(1,1)$. Thus the total jump cost for a disjoint scan is at most $$ Nj(1,1) + \sqrt{N}j(N,1)$$ Clearly at each discontinuity you move to a location at distance at most $N$ from the previous access. If $j(1,1) \geq \frac{1}{\sqrt{N}} j(N,1)$ then the first term dominates. But the subadditive property of $j$ only guarantees that $j(1,1) \geq \frac{1}{N} j(N,1)$. If the jump function is upper bounded by 1 and $j(1,1)$ is a non-zero constant independent of the value of $N$, then the jump cost is at most $O(Nj(1,1)+\sqrt{N})$. Clearly as $N$ grows, the first term will dominate and we can conclude that the jump cost for disjoint scan is $O(Nj(1,1))$. A slightly more rigorous analysis can be done for the first type of cost in the following way. The cost of the first jump is $\min(j(N,1),j(1,N))$. The combined cost of the second and third jumps, given that the first jump location is now in the working set, is $2\min(j(N,1),j(2\sqrt{N},N/2))$. In general, the cost of the $2^{i}$th through $(2^{i+1}-1)$th discontinuities is $2^i\min(j(N,1),j(2^i\sqrt{N},N/2^i))$. Thus the jump cost incurred by the jumps across discontinuities is $$ \sum_{i=0}^{\log \sqrt{N}} 2^i\min\left(j(N,1), j(2^i\sqrt{N},N/2^i)\right)$$ \vkj{I am having trouble seeing this in a clean way. It seems a lot like the problem you mentioned a few months back.
A linked list is given and the cost of accessing an element is its index. Once an index $i$ is accessed the linked list is broken into two linked lists at the point of access. Now the cost to access any location is its index in the newly formed set of lists. The cost there also comes out to be $O(n)$, and in this case it should come out to be $O(nj(1,1))$. Although the issue is that this problem has $O(n)$ cost and has nothing to do with the tall cache.} \end{document} \begin{comment} \begin{lemma}\label{sandwich_cache} \ben{Make this proof much simpler, it's pretty obvious. But this doesn't prove subadditivity, just powers of 2. Change to prove for correct defn. of subadditivity (if it's even necessary)} For any non-decreasing execution sequence, $E$, the cache oblivious cost, $\co{B}{E}$, and smoothed cache oblivious cost, $\coshift{B}{E}$, are subadditive functions of $B$. \ben{If this seems out of place, can move to optimality section.} \end{lemma} \begin{proof} Assume some fixed arbitrary memory alignment for the allocated memory when executing sequence $E$. We consider the cache oblivious cost when we have block sizes $B$ and $2B$ ($\co{B}{E}$ and $\co{2B}{E}$, respectively). \begin{align*} \intertext{First observation -- the number of blocks accessed when the blocks are of size $2B$ is at most the number of blocks accessed when the blocks are of size $B$, i.e.,} \co{2B}{E} &\leq \co{B}{E} \intertext{Second observation -- since we consider a fixed alignment, every block of size $2B$ is composed of two blocks of size $B$, which we call $L$ and $R$. If $E$ requires accessing either $L$ or $R$ but not both, then the cost incurred in both $\co{B}{E}$ and $\co{2B}{E}$ is 1. If $E$ requires accessing both $L$ and $R$, then the cost incurred in $\co{B}{E}$ is 2 while in $\co{2B}{E}$ it is 1.
Thus} \co{B}{E} & \leq 2\cdot\co{2B}{E} \intertext{Therefore,} \co{2B}{E} &\leq \co{B}{E} \leq 2\cdot\co{2B}{E} \intertext{so $\co{B}{E}$ is a subadditive function of $B$.} \intertext{The smoothed cache oblivious cost is defined as} \coshift{B}{E} &= \frac{1}{B} \sum_{s=0}^{B-1} \co{B}{E_s} \\ &\leq \frac{1}{B} \sum_{s=0}^{B-1} 2\cdot \co{2B}{E_s} \\ &= 2\cdot \coshift{2B}{E} \intertext{Thus, we have that} \coshift{2B}{E} &\leq \coshift{B}{E} \leq 2\cdot\coshift{2B}{E} \intertext{and $\coshift{B}{E}$ is also a subadditive function of $B$.} \end{align*} \end{proof} \begin{lemma}\label{JtoJprime} \ben{Fold into the proof that $SCO = JC_{j_B}$.} Let $\jcountp{E}{2^i}$ (for $i\geq 0$) denote the total number of jumps whose sizes are in the range $[2^{i},2^{i+1})$ during an execution $E$. Then $$\jr{j(\cdot)}{E} = \Theta\left( \sum_{i=0}^\infty \jcountp{E}{2^i} j(2^i)\right)$$ \end{lemma} \begin{proof} Observe that $j(2k)\leq 2j(k)$ because $j$ is sub-additive. \begin{align*} \jr{j(\cdot)}{E} &= \sum_{i=1}^\infty \jcount{E}{i}j(i) \\ \intertext{Since $j(2^{\lfloor \log i \rfloor})\leq j(i)\leq 2 j(\frac{i}{2}) \leq 2j(2^{\lfloor \log i \rfloor})$} \sum_{i=1}^\infty \jcount{E}{i}j(2^{\lfloor \log{i} \rfloor}) &\leq \sum_{i=1}^\infty \jcount{E}{i}j(i) \leq 2 \sum_{i=1}^\infty \jcount{E}{i}j(2^{\lfloor \log i \rfloor}) \intertext{Using the definition of $\jcountp{E}{2^i}$ we get} \sum_{i=0}^{\infty}\jcountp{E}{2^i}j(2^i)&=\sum_{i=1}^{\infty}\jcount{E}{i}j(2^{\lfloor \log{i} \rfloor}) \intertext{Thus} \sum_{i=0}^\infty \jcountp{E}{2^i}j(2^i) &\leq \jr{j(\cdot)}{E} \leq 2 \sum_{i=0}^\infty \jcountp{E}{2^i}j(2^i) \end{align*} \end{proof} This observation lets us work with the fair assumption that jump sizes are powers of 2.
\end{comment} \ben{Anything we want to move to Preliminaries or add to it?} Recall that the one-finger jump model cost computes the cost of an access $e_i$ as the jump cost from a single minimum finger (the minimum jump cost from any previous access), i.e., $\jr{j(\cdot,\cdot)}{E,i} = \min_{k=1}^{i-1}j(|e_i - e_k|, \dist{E,k,i})$. The cache-oblivious and LRU costs, however, consider the entire contents of the working set when computing the expected cost of an access. Is this significant? That is, can the cache-oblivious or LRU cost be determined by considering only the single prior access that minimizes the jump cost? Alas, no. We show that for certain sets of execution patterns, this has the potential to cause a discrepancy between the jump cost and the cache-oblivious or LRU cost. But all is not lost, as later we will show that using two fingers (prior accesses) to compute the jump cost is sufficient for it to be asymptotically equivalent to the smoothed LRU cost. To demonstrate why using a single finger is insufficient, we consider a case where the jump cost, $\jr{j(\cdot,\cdot)}{E}$, differs from the smoothed LRU cost, $\lrushift{M,B}{E}$. \begin{comment} \subsection{A jump function for the cache oblivious model} Given a memory system with memory size $M$ and block size $B$, when performing access $e_i$, the jump cost from a prior access $e_j$ is determined by either the spatial or the temporal component of the cost: if the temporal distance $\delta > \frac{M}{B}$, then $e_j$ is not in the LRU working set and the cost is 1; otherwise it is in the working set and the cost (the likelihood of a cache miss) is determined solely by the spatial distance. Thus, we define our jump function that models the specific memory system based on the smoothed LRU cost, $\lrushift{M,B}{E}$, by considering the spatial and temporal components separately.
As with the univariate case in Section~\ref{prelim}, the spatial component of the jump function (i.e., the cost when the jump source is in the working set) is $\frac{d}{B}$, where $d$ is the spatial distance (we use $\min\left(1, \frac{d}{B}\right)$, since the cache oblivious cost of an access cannot exceed 1). The temporal component, however, is simply a 0-1 threshold function: if $\delta > \frac{M}{B}$, then the source is not in memory and the temporal component is 1, otherwise it is 0. Thus, the jump function that we use to express the memory access cost of this system with memory size $M$, block size $B$, and LRU cache replacement is \begin{align*} j_{M,B}(d,\delta) &= \max\bigg(\overbrace{\min\left(1, \frac{d}{B}\right)}^{\substack{\text{spatial} \\ \text{component}}}, \overbrace{\min\bigg(1, \bigg\lfloor\frac{\delta}{M/B}\bigg\rfloor\bigg)}^{\substack{\text{temporal}\\ \text{component}}}\bigg) \end{align*} \end{comment} \begin{comment} We can, however, solve this issue with a \emph{two-finger jump model} that considers the elements with the minimum jump cost to the \emph{left} and \emph{right} of the access when computing the cost. \subsection{Two-finger jump model}\label{2-finger} We define $e_L$ and $e_R$ as the elements to the left and right of $e_i$, respectively, with the minimum jump cost. 
The two-finger jump cost of access $e_i$ is then \begin{align*} \jlr{E, i} &= \max\Bigg( \overbrace{\min_{\substack{\forall L<i \text{ s.t.} \\ e_L \leq e_i}} j(e_i-e_L, \dist{E,i,L})}^{\text{left side}} + \overbrace{\min_{\substack{\forall R<i \text{ s.t.} \\ e_R \geq e_i}} j(e_R-e_i, \dist{E,i,R})}^{\text{right side}} - 1, 0\Bigg ) \end{align*} \noindent Using this cost of a single access, we compute the cost of the entire execution sequence, $E$, as \begin{align*} \jlr{E} &= \sum_{i=1}^{|E|} \jlr{E,i} \\ \jlr{E} &= \sum_{i=1}^{|E|} \Bigg ( \max\Bigg( \min_{\substack{\forall L<i \text{ s.t.} \\ e_L \leq e_i}} j(e_i-e_L, \dist{E,i,L}) + \min_{\substack{\forall R<i \text{ s.t.} \\ e_R \geq e_i}} j(e_R-e_i, \dist{E,i,R}) - 1, 0\Bigg ) \Bigg ) \end{align*} \end{comment} \subsubsection{Geometrically increasing hierarchy} \section{Necessity of $B$ stability} \label{s:needbstable} \begin{lemma} \label{l:needbstable} There exists a problem $P$ which is not \emph{B}-stable\xspace and which has a CO optimal algorithm that is not LoR optimal. Thus Theorem~\ref{t:main} would not hold if the restriction to $B$-stable algorithms were removed. \end{lemma} \begin{proof} Here we demonstrate a toy problem that meets the requirements of the lemma while also illustrating the unnaturalness of such problems. It has two candidate algorithms: one whose runtime is the same on every instance, and a second that, on each instance, runs asymptotically faster than the first for some values of $B$ and asymptotically slower for others. Thus for each $B$ the worst-case time of the first algorithm is better than that of the second, but there is no single bad instance for the second algorithm. Consider a problem $P$ and a set $\mathcal{A}$ of two algorithms $A_1$ and $A_2$. The problem, given an $n$, has a set of $n$ instances $\mathcal{I}_n=\{I^n_1, I^n_2, \ldots, I^n_n\}$.
The runtimes of the algorithms are given as follows: \begin{align*} CO_B(E(A_1),I^n_i) &= \Theta \lrp{ \min\left( \frac{n \log n \log \log n} {\log i}, \frac{i \cdot n \log n \log \log n}{ B \log i } \right)}\\ CO_B(E(A_2),I^n_i) &= \Theta\lrp{\frac{n \log n \log \log \log n}{\log B}} \end{align*} These runtimes can be realized through an appropriately twisted problem definition that forces an algorithm, on each instance, to read all elements in one of two sets of memory locations in order to be considered a valid algorithm. In particular, our problem admits two algorithms: one of which, $A_2$, can solve any instance by performing $n\log n \log \log \log n$ reads in memory generated by $n \log \log \log n$ searches in a van Emde Boas search structure, and the other, $A_1$, by reading memory locations generated by an arithmetic progression, where the step and the number of locations depend on the instance. Accessing $k$ memory locations evenly spaced $\sigma$ apart takes time $\Theta(1+\min\left( k,\frac{k \sigma}{B} \right))$ in the CO model; thus the desired runtime of algorithm $A_1$ on instance $I^n_i$ can be forced by having $A_1$, on instance $I^n_i$, read $\frac{n \log n \log \log n}{\log i}$ memory locations evenly spaced $i$ apart. What are the worst-case runtimes of these algorithms? \begin{align*} W_B(P,A_1,n) &= \max_{I^n_i \in \mathcal{I}_n} CO_B(E(A_1),I^n_i) \\ &= \max_{I^n_i \in \mathcal{I}_n}\min \overbrace{\left( \frac{n\log n \log \log n} {\log i}, \frac{i\cdot n \log n \log \log n}{ B \log i } \right)}^{\text{Equal when $i=B$}} \\ &= \frac{n \log n \log \log n}{\log B} \\ W_B(P,A_2,n)&=\frac{n\log n \log \log \log n}{\log B} \end{align*} So, looking at these two algorithms, $A_2$ is clearly the worst-case optimal in the CO model.
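As a sanity check on the claim that the two terms of the min are equal when $i=B$ (so the worst case over $i$ is attained there), the following sketch evaluates the $A_1$ cost numerically (natural logarithms; the constants are illustrative):

```python
import math

def a1_cost(n, i, B):
    """CO cost of A_1 on instance I^n_i, up to constant factors."""
    base = n * math.log(n) * math.log(math.log(n))
    return min(base / math.log(i), i * base / (B * math.log(i)))

n, B = 1 << 20, 64
worst_i = max(range(2, 1025), key=lambda i: a1_cost(n, i, B))
```

The maximizing instance is $i=B$, where the cost equals $n\log n\log\log n/\log B$, matching the displayed worst case.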
Now, recall the definition of \emph{B}-stable\xspace: Problem $P$ is \emph{\emph{B}-stable\xspace} if, for any algorithm $A$ that solves $P$, $$ \exists_{c,d} \forall_{n>d}\exists_{I_w}\forall_{B\ge 1} \Big [ \min_{A' \in \mathcal{A}_n^P} W_B(P,A',n) \le c \cdot \co{B}{E(A,I_w)} \Big ]. $$ Applying this to our problem gives: \begin{align*} \exists_{c,d} \forall_{n>d} \exists_{I^n_i \in \mathcal{I}_n} \forall_{B\ge 1} &\left[ \min\left(\frac{n\log n \log \log n}{\log B}, \frac{n\log n \log \log \log n}{\log B}\right) \right. \\ &\hspace{2cm} \le \left. c \cdot\min\left( \frac{n\log n \log \log n} {\log i}, \frac{i n\log n \log \log n}{ B \log i } \right)\right] \\ \exists_{c,d} \forall_{n>d} \exists_{I^n_i \in \mathcal{I}_n} \forall_{B\ge 1} &\left[ \frac{n\log n \log \log \log n}{\log B} \right. \\ &\hspace{2cm} \le \left. c \cdot\min\left( \frac{n\log n \log \log n} {\log i}, \frac{i n\log n \log \log n}{ B \log i } \right) \right] \end{align*} This is false for all choices of ${I^n_i \in \mathcal{I}_n}$.
Specifically, if $i\geq \log n$, then setting $B=2$ and using the first term of the min gives the following contradiction for any $c$ as $n$ grows: \begin{align*} \frac{n \log n \log \log \log n}{\log 2} &\leq c \cdot\min\left( \frac{n\log n \log \log n} {\log i}, \frac{in \log n \log \log n}{ B \log i } \right) \\ &\leq c \cdot \frac{n\log n \log \log n} {\log i} \\&\leq c \cdot \frac{n\log n \log \log n} {\log \log n} \\&= c \cdot n \log n \end{align*} And, if $i\leq \log n$, then setting $B=n$ and using the second term on the right gives the following contradiction, since $\frac{i}{\log i}\leq\frac{\log n}{\log \log n}$: \begin{align*} \frac{ n \log n \log \log \log n}{\log n}=n \log \log \log n&\leq c \cdot\min\left( \frac{n\log n \log \log n} {\log i}, \frac{in \log n \log \log n}{ B \log i } \right) \\&\leq c \cdot\frac{n \log n \log n \log \log n }{ n \log \log n } \\&=c \cdot\log^2 n \end{align*} This concludes the proof that $P$ is not $B$-stable. We now argue that while $A_2$ is CO optimal for $P$, and $A_1$ is not, with locality function $\ell(d)=\log(d)$ the reverse is true, as $A_1$ will have the asymptotically better runtime with this locality function in the LoR model. In the introduction we mentioned that the LoR runtime with locality function $\ell(d)=\log d$ for searching in a VeB structure is $\Theta(\log n \log \log n)$; thus, since $A_2$ does this $n \log \log \log n$ times, its cost is $\Theta(n \log n \log \log n \log \log \log n)$. Algorithm $A_1$ is easy to analyze: on instance $I_i^n$ it accesses $\frac{n \log n \log \log n}{\log i}$ memory locations evenly spaced $i$ apart, thus its cost is $$ \frac{n \log n \log \log n}{\log i} \cdot \log i = n \log n \log \log n . 
$$ Thus the CO-optimal $A_2$ has a LoR runtime (with $\ell(d)=\log d$) of $\Theta(n \log n \log \log n \log \log \log n)$, which is a $\Theta(\log \log \log n)$ factor worse than the non-CO-optimal $A_1$ with its LoR runtime of $\Theta(n \log n \log \log n)$. Since $A_2$ is not optimal for one locality function, it cannot be optimal for all valid locality functions. \end{proof} What made this problem not \emph{B}-stable\xspace? It was the fact that every instance was constructed to be faster for one algorithm for some values of $B$, and slower for others, than the optimal worst-case algorithm. In this example $A_1$ ran instance $I_i^n$ slower than $A_2$ for $B$ close to $i$ and faster than $A_2$ for $B$ far from $i$. However, it is far from natural to have an instance in effect encode faster-than-worst-case performance on selected values of $B$. In a standard data structure query, such as ``what is the predecessor of a given item in an ordered set,'' the query item itself carries nothing that, combined with the problem definition, allows a query to encode a preference for fast execution at certain values of $B$ in a non-optimal algorithm. We note that this is very different from algorithms which may ``hard-code'' some instances and make them fast; that poses no problem with regard to $B$-stability, as it makes the instance fast for all values of $B$.
\section{Introduction} \noindent Deep learning models have excelled in solving difficult problems in machine learning, including Natural Language Processing (NLP) tasks like text classification \citep{zhang2015character,kim2014convolutional} and language understanding \citep{devlin2019bert}. However, research has discovered that inputs can be modified to cause trained deep learning models to produce incorrect results and predictions \citep{szegedy2014intriguing}. Models in computer vision are vulnerable to these attacks \citep{goodfellow2015explaining}, and studies have found that models in the NLP domain are also vulnerable \citep{kuleshov2018adversarial,gao2018black,garg2020bae}. One use of these adversarial attacks is to test and verify the robustness of NLP models. \\\indent With the potential for adversarial attacks, there comes the need for prevention and protection. There are three main categories of defense methods: identification, reconstruction, and prevention \citep{goldblum2020data}. Identification methods rely on detecting either poisoned data or the poisoned model \citep{ijcai2019-647}. While reconstruction methods actively work to repair the model after training \citep{10.1145/3394171.3413546}, prevention methods rely on input preprocessing, majority voting, and other techniques to mitigate adversarial attacks \citep{goldblum2020data,alshemali2020generalization}. Although most NLP adversarial attacks are easily detectable, some new forms of adversarial attacks have become more difficult to detect like concealed data poisoning attacks \citep{wallace2021concealed} and backdoor attacks \citep{chen2020badnl}. The use of these concealed and hard-to-detect attacks has revealed new vulnerabilities in NLP models. Considering the increasing difficulty in detecting attacks, a more prudent approach would be to work on neutralizing the effect of potential attacks rather than solely relying on detection. 
Here we offer a novel and highly effective defense solution that preprocesses inputs by random perturbations to mitigate potential hard-to-detect attacks. \section{Related Work} \indent The work in this paper relates to attacks on NLP models using the TextAttack library \citep{morris2020textattack}, the current state-of-the-art defense methods for NLP models, and the use of randomness against adversarial attacks. \\\indent The TextAttack library and the associated GitHub repository \citep{morris2020textattack} represent current efforts to centralize attack and data augmentation methods for the NLP community. The library supports attack creation through the use of four components: a goal function, a search method, a transformation, and constraints. An attack method uses these components to perturb the input to fulfill the given goal function while complying with the constraints, and the search method finds transformations that produce adversarial examples. The library contains a total of 16 attack model recipes based on the literature. The work reported in this paper pertains to the 14 ready-to-use classification attack recipes from the TextAttack library. We believe that successful defense against such attacks will provide guidelines for the general defense of deep learning NLP classification models. \\\indent There are many methods to defend NLP models against adversarial attacks, including input preprocessing. Input preprocessing defenses insert a step between the input and the given model that aims to mitigate any potential attacks. \citet{alshemali2020generalization} use an input preprocessing defense that employs synonym set averages and majority voting to mitigate synonym substitution attacks. Their method is deployed before the input is run through a trained model.
Another defense against synonym substitution attacks, Random Substitution Encoding (RSE), encodes randomly selected synonyms to train a robust deep neural network \citep{10.1007/978-3-030-55393-7_28}. The RSE defense occurs between the input and the embedding layer. \\\indent Randomness has been deployed in computer vision defense methods against adversarial attacks. \citet{levine2020robustness} use random ablations to defend against adversarial attacks on computer vision classification models. Their defense is based on a random-smoothing technique that creates certifiably robust classification. \citeauthor{levine2020robustness} defend against sparse adversarial attacks that perturb a small number of features in the input images, and found their random ablation defense method to produce certifiably robust results on the MNIST, CIFAR-10, and ImageNet datasets. \section{Input Perturbation Approach \& Adversarial Defense} \noindent The use and availability of successful adversarial attack methods reveal the need for defense methods that do not rely on detection and that leverage intuitions gathered from popular attack methods to protect NLP models. In particular, we present a simple but highly effective defense against attacks on deep learning models that perform sentiment analysis. \\\indent The approach taken is based on certain assumptions about the sentiment analysis task. Given a short piece of text, we believe that a human does not necessarily need to analyze every sentence carefully to get a grasp on the sentiment. Our hypothesis is that humans can ascertain the expressed sentiment in a text by paying attention to a few key sentences while ignoring or skimming over the others. This thought experiment led us to make intermediate classifications on individual sentences of a review in the IMDB dataset and then combine the results for a collective final decision. \\\indent This process was refined further by considering how attackers actually perturb data.
Usually, they select a small number of characters or tokens within the original data to perturb. To mitigate those perturbations, we choose to perform our own random perturbations. Because the attacking perturbations could occur anywhere within the original data, and we do not necessarily know where they are, it is prudent to randomly select the tokens we perturb. This randomization has the potential to negate the effect the attacking perturbations have on the overall sentiment analysis. \\\indent We wish to highlight the importance of randomness in our approach and in possible future defenses against adversarial attacks. The positive impact of randomness in classification tasks on featured datasets can be seen in work using Random Forests \citep{breiman2001random}. Random Forests have been useful for prediction in many domains, including disease prediction \citep{lebedev2014random,corradi2018prediction,paul2017feature,khalilia2011predicting} and stock market price prediction \citep{khaidem2016predicting,ballings2015evaluating,nti2019random}. The use of randomness has made these prediction methods robust and useful. We have chosen to harness the capability of randomness in defending against adversarial attacks in NLP. We demonstrate that the impact randomness has on our defense method is highly positive and that its use in defending neural networks against adversarial attacks should be explored further. We present two algorithms below---the first with two levels of randomness, and the second with three. \subsection{Random Perturbations Defense} \noindent Our algorithm is based on random processes: the randomization of perturbations of the sentences of a review $R$, followed by majority voting to decide the final prediction for sentiment analysis. We consider each review $R$ to be represented as a set $R$ = $\{r_1, r_2,..., r_i,..., r_N\}$ of sentences $r_i$.
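The sentence segmentation step can be done with the Natural Language Toolkit, which our experiments use; the minimal regex splitter below is only an illustrative stand-in to keep the sketch self-contained:

```python
import re

def split_into_sentences(review):
    """Stand-in for sentence segmentation (NLTK's tokenizer in practice).

    Splits on sentence-ending punctuation followed by whitespace.
    """
    parts = re.split(r'(?<=[.!?])\s+', review.strip())
    return [p for p in parts if p]
```

A real implementation would substitute a trained sentence tokenizer, but any segmenter producing the set $\{r_1,\dots,r_N\}$ fits the algorithms below.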
Once $R$ is broken down into its sentences (Line 1 of Algorithm \ref{algorithm:algo}), we create $l$ replicates of sentence $r_i$: $\{\hat{r}_{i1},...,\hat{r}_{ij},...,\hat{r}_{il}\}$. Each replicate $\hat{r}_{ij}$ has $k$ perturbations made to it, each determined randomly (Lines 4-7). \\\indent In Line 5, a random token $t \in \hat{r}_{ij}$ is selected, and in Line 6, a random perturbation is performed on $t$. This random perturbation can be a spellcheck (with correction if necessary), a synonym substitution, or a word drop. These perturbations were selected because they are likely to be the same operations an attacker performs, and they may even counter the effect of a large portion of the perturbations in attacked data. The spellcheck is performed using SpellChecker, a pure-Python spell-checking package. If a spellcheck is performed on a token without a spelling error, the token is left unchanged. The synonym substitution is also performed in a random manner. A synonym set for token $t$ is found using the WordNet synsets \citep{wordnet}. The synonym set is then processed to remove any duplicate synonyms or copies of token $t$, and a random synonym from the set is chosen to replace token $t$ in $\hat{r}_{ij}$. A word drop removes the randomly selected token $t$ from the replicate altogether and replaces it with a space. Conceptually speaking, the random perturbations may be chosen from an extended set of allowed changes. \\\indent Once $l$ replicates have been created for each sentence $r_i$ and perturbations made to their tokens, they are put together to create the replicate review set $\hat{R}$ (Line 8). Then, in Line 9, each $\hat{r}_{ij} \in \hat{R}$ is classified individually as $f(\hat{r}_{ij})$ using classifier $f()$. After each replicate has been classified, we perform majority voting with function $V()$.
We call the final prediction that this majority voting produces $\hat{f}(R)$. This function can be written as follows (Line 12): $$\hat{f}(R) = V(\{f(\hat{r}_{ij})\ |\ \hat{r}_{ij} \in \hat{R}\}).$$ The goal is to maximize the probability that $\hat{f}(R) = f(R)$, where $f(R)$ is the classification of the original review $R$. In this paper, this maximization is done through tuning of the parameters $l$ and $k$. The certainty $T$ is also determined for each calculation of $\hat{f}(R)$ and represents how sure the algorithm is of the final prediction it has made. In general, the certainty $T$ is determined as follows (Lines 13-17): $$T = count(f(\hat{r}_{ij}) == \hat{f}(R))\ /\ (N*l).$$ The full algorithm is given in Algorithm \ref{algorithm:algo} and illustrated in Figure \ref{fig:AlgoImage}. \begin{figure}[h] \centering \includegraphics[scale=.2]{images/RandomPerturbationsAlgoImage.png} \caption{Visual representation of Algorithm \ref{algorithm:algo}.} \label{fig:AlgoImage} \end{figure} \begin{algorithm}[h] \SetAlgoLined \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Parameters} \KwResult{$\hat{f}(R)$, the classification of $R$ after defense} \Input{Review $R = \{r_1, r_2,...,r_N\}$ where $r_i$ is a sentence} \Output{$l$ = number of copies made of each $r_i$, $k$ = number of corrections made per $r_i$, $C = \{c_1, c_2,..., c_k\}$, set of corrections} $\hat{R} = \emptyset$\\ \For{$r_i \in R$}{ \For{j = 1 to l}{ $\hat{r}_{ij} = r_i$\\ \For{p = 1 to k}{ Select random token $t$ where $t \in \hat{r}_{ij}$ \\Perform random correction $c \in C$ to $t$ } Append $\hat{r}_{ij}$\ to\ $\hat{R}$ \\Classify: $f(\hat{r}_{ij})$ } } $\hat{f}(R) = V(\{f(\hat{r}_{ij}) \ |\ \hat{r}_{ij} \in \hat{R}\})$, $V()$ is a voting function \\\uIf{$\hat{f}(R) == negative$}{ $T = count(f(\hat{r}_{ij}) == negative)\ /\ (N*l)$ } \Else{ $T = count(f(\hat{r}_{ij}) == positive)\ /\ (N*l)$ } \caption{Random Perturbation Defense}
\label{algorithm:algo} \end{algorithm} \subsection{Increasing Randomness} \noindent Our first algorithm, presented in Algorithm \ref{algorithm:algo} and in Figure \ref{fig:AlgoImage}, uses randomness at two key points in the decision making process for making the perturbations; this is its main source of randomness. Our second algorithm introduces additional randomness into this process. It is visually represented in Figure \ref{fig:MoreRandImage} and presented in Algorithm \ref{algorithm:MoreRandAlgo}. This new defense method adds a third random process before making random corrections to a sentence. A randomly chosen $r_i$ from $R$ is randomly corrected to create a replicate $\hat{r}_{j}$, which is placed in $\hat{R}$ (Lines 2-6). The original sentence $r_i$ is placed back into $R$ and a new sentence is randomly selected; this is random selection with replacement. This process is repeated until there are a total of $k$ replicates $\hat{r}_{j}$ in $\hat{R}$. This algorithm follows the spirit of Random Forests more closely than the first algorithm. \\\indent In Line 2, we randomly select a sentence $r_i$ from $R$. This is one of the main differences between Algorithm \ref{algorithm:algo} for the Random Perturbations Defense and Algorithm \ref{algorithm:MoreRandAlgo} for the Increased Randomness Defense. The extra random element allows for more randomization in the corrections we make to create the replicates $\hat{r}_j$. Lines 3 and 4 are practically identical to Lines 5 and 6 in Algorithm \ref{algorithm:algo}. The only difference is that only one random correction is made to obtain the final replicate $\hat{r}_j$ in the Increased Randomness Defense, while the Random Perturbations Defense makes $k$ random corrections to obtain the final replicate $\hat{r}_{ij}$.
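As a concrete illustration, both defenses can be sketched in a few lines of Python. The toy synonym table and the identity spellcheck below are placeholders for the WordNet synsets and the SpellChecker package used in our experiments, and $f$ stands for any sentence-level sentiment classifier:

```python
import random
from collections import Counter

# Toy stand-ins for the three corrections in C; the experiments use the
# SpellChecker package, WordNet synsets, and word dropping.
SYNONYMS = {"good": ["fine", "nice"], "bad": ["poor", "awful"]}  # assumption

def spellcheck(token):
    return token  # a real spellchecker corrects misspelled tokens

def synonym_sub(token):
    choices = [s for s in SYNONYMS.get(token.lower(), []) if s != token.lower()]
    return random.choice(choices) if choices else token

def drop(token):
    return ""  # word drop: token replaced with a space

CORRECTIONS = [spellcheck, synonym_sub, drop]

def perturb(sentence, k):
    """Apply k random corrections at random token positions."""
    tokens = sentence.split()
    for _ in range(k):
        i = random.randrange(len(tokens))
        tokens[i] = random.choice(CORRECTIONS)(tokens[i])
    return " ".join(t for t in tokens if t)

def random_perturbations_defense(sentences, f, l=7, k=5):
    """Algorithm 1: l perturbed replicates per sentence, then majority vote."""
    preds = [f(perturb(r, k)) for r in sentences for _ in range(l)]
    label, count = Counter(preds).most_common(1)[0]
    return label, count / (len(sentences) * l)   # (prediction, certainty T)

def increased_randomness_defense(sentences, f, k=41):
    """Algorithm 2: sample k sentences with replacement, one correction each."""
    preds = []
    for _ in range(k):
        r = random.choice(sentences)     # third level of randomness
        preds.append(f(perturb(r, 1)))
    label, count = Counter(preds).most_common(1)[0]
    return label, count / k
```

Swapping in the real classifier and correction operations does not change the control flow; only `f`, `spellcheck`, and `synonym_sub` would be replaced.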
\begin{algorithm}[h] \SetAlgoLined \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Parameters} \KwResult{$\hat{f}(R)$, the classification of $R$ after defense} \Input{Review $R = \{r_1, r_2,...,r_N\}$ where $r_i$ is a sentence} \Output{$k$ = number of replicates $\hat{r}_j$ made for $\hat{R}$, $C = \{c_1, c_2,..., c_k\}$, set of corrections} $\hat{R} = \emptyset$, $P = []$\\ \For{j = 1 to k}{ Randomly select $r_i \in R$ \\Select random token $t$ where $t \in r_i$ \\Perform random correction $c \in C$ to $t$ to get $\hat{r}_j$ \\Append $\hat{r}_j$ to $\hat{R}$ } \For{j = 1 to k}{ Classify: $f(\hat{r}_j)$ \\Append the result to predictions array $P$ } $\hat{f}(R) = V(P)$, $V()$ is a voting function \uIf{$\hat{f}(R) == negative$}{ $T = count(f(\hat{r}_{j}) == negative)\ /\ k$ } \Else{ $T = count(f(\hat{r}_{j}) == positive)\ /\ k$ } \caption{Increased Randomness Defense} \label{algorithm:MoreRandAlgo} \end{algorithm} \begin{figure}[h] \centering \includegraphics[scale=.2]{images/MoreRandomnessImage.png} \caption{Visual representation of Algorithm \ref{algorithm:MoreRandAlgo}, which includes more randomness.} \label{fig:MoreRandImage} \end{figure} \subsection{Overcoming the Attacks} \noindent We define an attack as making random perturbations to an input, specifically for this work, a review $R$. We assume a uniform distribution for the randomness. We interpret these random changes as occurring throughout each review $R$ with probability $\frac{1}{W} = \frac{1}{N*m}$, where $W$ is the number of words in $R$, $N$ is the number of sentences in $R$, and $m$ is the average length of each sentence in $R$.
We refer to the probability that an attack makes changes to the review text as $P_{attack}$, where $a$ is the total number of perturbations made by the attack: $$P_{attack} = \frac{a}{W} = \frac{a}{N*m}.$$ If each random perturbation performed by the attack has a probability of $\frac{1}{N*m}$, then our defense method needs to overcome that probability to overcome the attack. \\\indent Our two defense methods, the Random Perturbations Defense and the Increased Randomness Defense, both offer ways to overcome the attack, i.e., undo the attack's changes, with a probability greater than $\frac{a}{N*m}$. \newtheorem{theorem}{Proposition} \begin{theorem} The Random Perturbations Defense overcomes an attack that makes a small number of random perturbations to a review document by having a probability greater than the attack probability $P_{attack}$. \end{theorem} \indent Our Random Perturbations Defense picks a random token $t$ from each sentence $r_i \in R$ and repeats this $k$ times to get a final replicate $\hat{r}_{ij}$. This gives an initial probability that the defense picks a certain token from the text, $P_{RPD}$, of: $$P_{RPD} = \frac{N*l*m!}{k!(m-k)!}.$$ We obtain this probability from choosing $k$ tokens from $r_i$ of length $m$, which corresponds to the binomial coefficient $\binom{m}{k} = \frac{m!}{k!(m-k)!}$. This is then repeated $l$ times for each sentence in $R$, which multiplies the initial probability by $l$ and $N$. After rearranging the probabilities, we can see that for certain values of $l$ and $k$ where $k < m$: \small \[P_{RPD} = \frac{N^2 m^2 l(m-1)(m-2)\cdots(m-k+1)}{k!} > a.\] \normalsize $P_{RPD}$ is now the total probability that the defense makes random changes to $lN$ tokens. We know that $W = N*m$, that $a \leq W$ for the attack methods we are testing against, and that $k$ should be selected so that $k \ll W$.
This means that we know $W^2 > a$, $W^2 > k!$, and $l(m-1)(m-2)\cdots(m-k+1) > 0$ for the selected attack methods, which gives us the necessary conditions to assert that $P_{RPD} > P_{attack}$. Therefore, our Random Perturbations Defense will overcome $P_{attack}$ and should overcome the given attack method, as stated in Proposition 1. \begin{theorem} The Increased Randomness Defense overcomes an attack that makes a small number of random perturbations to a review document by having a probability greater than the attack probability $P_{attack}$. \end{theorem} \indent Our Increased Randomness Defense first chooses a random sentence $r_i$, selected with probability $\frac{1}{N}$. Next, we choose a random word within that sentence, selected with probability $\frac{1}{m}$. This gives us a probability for changes as follows: $$P_{IRD} = \frac{1}{N} * \frac{1}{m} = \frac{1}{N*m}.$$ We can see that $P_{IRD} * a = P_{attack}$. We need to overcome the attack probability, and we do this in two ways: we either find an attack perturbation by chance and reverse it, or we counterbalance the attack perturbations with enough replicates $\hat{r}_j$. With each replicate $\hat{r}_j$ created, we increase our probability $P_{IRD}$, so that the final probability for our Increased Randomness Defense is: $$P_{IRD} = \frac{k}{N*m}.$$ As long as our selected parameter value for $k$ is greater than the number of perturbations made by the attack method, $a$, we have $P_{IRD} > P_{attack}$ and our Increased Randomness Defense will overcome the given attack method, as stated in Proposition 2. \section{Experiments \& Results} \subsection{Dataset \& Models} \noindent We used the IMDB dataset \citep{maas-EtAl:2011:ACL-HLT2011} for our experiments. Each attack was used to perturb 100 reviews from the dataset. The 100 reviews were selected randomly from the dataset with a mix of positive and negative sentiments.
Note that the Kuleshov attack data \citep{kuleshov2018adversarial} contained only 77 reviews. \\\indent The models used in this research are from the TextAttack \citep{morris2020textattack} and HuggingFace \citep{wolf-etal-2020-transformers} libraries. These libraries offer many different models for both attacked data generation and general NLP tasks. For this research, we used the \emph{bert-base-uncased-imdb} model, which resides in both the TextAttack and HuggingFace libraries. This model was fine-tuned and trained with a cross-entropy loss function. It was used with the API functions of the TextAttack library to create the attacked reviews for each of the attacks we used. We chose this model because BERT models are useful in many NLP tasks, and this model specifically was fine-tuned for text classification and trained on the dataset we wanted to use for these experiments. \\\indent The HuggingFace library was also used in the sentiment-analysis classification of the attacked data and in the defense method. We used the HuggingFace transformer pipeline for sentiment analysis to test our defense method. This pipeline returns either ``negative'' or ``positive'' to classify the sentiment of the input text, together with a score for that prediction \citep{wolf-etal-2020-transformers}. The pipeline was used to classify each replicate $\hat{r}_{ij}$ in our algorithm and is represented as the function $f()$. \subsection{Experiments} \noindent The attacks from the TextAttack library were used to generate attack data. Attack data was created from 7 different attack models from the library: BERT-based Adversarial Examples (BAE) \citep{garg2020bae}, DeepWordBug \citep{gao2018black}, FasterGeneticAlgorithm \citep{jia2019certified}, Kuleshov \citep{kuleshov2018adversarial}, Probability Weighted Word Saliency (PWWS) \citep{ren2019generating}, TextBugger \citep{li2019textbugger}, and TextFooler \citep{jin2020bert} \citep{morris2020textattack}.
Each of these attacks was used to create 100 perturbed reviews from the IMDB dataset \citep{maas-EtAl:2011:ACL-HLT2011}. These attacks were chosen from the 14 classification attack recipes because they represent different kinds of attack methods, including misspelling, synonym substitution, and antonym substitution. \\\indent Each attack method used in our experiments takes a slightly different approach to perturbing the input data. Each perturbation method is unique and follows a distinct pattern; examples can be found in Figure \ref{fig:perturbedData}. The BAE attack determines the most important token in the input and replaces that token with the most similar replacement using a Universal Sentence Encoder. This helps the perturbed data remain semantically similar to the original input \citep{garg2020bae}. The DeepWordBug attack identifies the most important tokens in the input and performs character-level perturbations on the highest-ranked tokens, while minimizing edit distance, to change the original classification \citep{gao2018black}. The FasterGeneticAlgorithm perturbs every token in a given input while maintaining the original sentiment, choosing each perturbation carefully to create the most effective adversarial example \citep{jia2019certified}. The Kuleshov attack is a synonym substitution attack that replaces 10\%--30\% of the tokens in the input with synonyms that do not change the meaning of the input \citep{kuleshov2018adversarial}. \\\indent The PWWS attack determines the word saliency score of each token and performs synonym substitutions based on the word saliency score and the maximum effectiveness of each substitution \citep{ren2019generating}. The TextBugger attack determines the important sentences from the input first.
It then determines the important words in those sentences and generates 5 possible ``bugs'' through different perturbation methods: insert, swap, delete, sub-c (visual similarity substitution), and sub-w (semantic similarity substitution). The attack implements whichever of these 5 generated bugs is the most effective in changing the original prediction \citep{li2019textbugger}. Finally, the TextFooler attack determines the most important tokens in the input using synonym extraction, part-of-speech checking, and semantic similarity checking. If there are multiple candidates to substitute with, the most semantically similar substitution is chosen and replaces the original token in the input \citep{jin2020bert}. \begin{figure}[ht] \centering \includegraphics[scale=.28]{images/PerturbedData1.png} \caption{Example of original data and of how the BAE \cite{garg2020bae} and TextBugger \cite{li2019textbugger} attack methods perturb data. The BAE attack method uses semantic similarity, while the TextBugger attack method uses visual similarity. } \label{fig:perturbedData} \end{figure} \indent After each attack had corresponding attack data, the TextAttack functions gave the results for the success of the attack. The accuracy of the sentiment-analysis task under attack, without the defense method, is reported in the first column of Table \ref{tab:results}. Each attack caused a large decrease in the accuracy of the model, which began with an average accuracy of 80\% on the IMDB dataset. Once the attack data was created and the accuracy under attack was recorded, the attack data was run through our Random Perturbations and Increased Randomness defense methods. All of the experiments were run on Google Colaboratory using TPUs and the Natural Language Toolkit \citep{Loper02nltk:the}. \subsection{Results} \noindent We began by testing the HuggingFace sentiment analysis pipeline on the original IMDB dataset. This gave an original accuracy of 80\%.
This percentage represents the goal for our defense method accuracy, as we aim to return the model to its original accuracy or higher. The accuracy under each attack is listed in the first column of Table \ref{tab:results}. These percentages show how effective each attack is at causing misclassification in the sentiment analysis task. The attacks range in effectiveness, with PWWS \citep{ren2019generating} and Kuleshov \citep{kuleshov2018adversarial} being the most successful attacks at 0\% accuracy under attack and the FasterGeneticAlgorithm \citep{jia2019certified} being the least successful attack at 44\% accuracy under attack, which is still almost a 40\% drop in accuracy. \begin{table}[h] \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{|c|c|c|} \hline \textbf{Attack} & \textbf{w/o Defense} & \textbf{w/ Defense} \\ \hline\hline BAE & 33\% & 80.80\%$\pm$1.47 \\ \hline DeepWordBug & 34\% & 76.60\%$\pm$1.85 \\ \hline FasterGeneticAlgo & 44\% & 82.20\%$\pm$1.72 \\ \hline Kuleshov* & 0\% & 60.00\%$\pm$2.24 \\ \hline PWWS & 0\% & 81.80\%$\pm$1.17 \\ \hline TextBugger & 6\% & 79.20\%$\pm$2.32 \\ \hline TextFooler & 1\% & 83.20\%$\pm$2.48 \\ \hline \end{tabular}} \caption{Accuracy for each of the attack methods under attack, and under attack with the defense method from Algorithm \ref{algorithm:algo} deployed with $l=7$ and $k=5$. The accuracy prior to attack is 80\%.} \label{tab:results} \end{table} \subsubsection{Random Perturbations Defense} For the Random Perturbations Defense to be successful, it is necessary to choose suitable values of the two parameters $l$ and $k$. Each attack was tested against our Random Perturbations Defense 5 times. The accuracy was averaged over all 5 tests and the standard deviation was calculated for the given mean. The mean accuracy with standard deviation is presented for each attack in the second column of Table \ref{tab:results}. The results presented are for $l = 7$ and $k = 5$.
These parameters were chosen after testing showed that greater values of $l$ and $k$ resulted in longer run times and too many changes made to the original input, while lower values of $l$ and $k$ gave the model lower accuracy and not enough perturbations to outweigh any potential adversarial attacks. The values behind this logic can be seen in Table \ref{tab:landk}. \begin{table}[h] \centering \begin{tabular}{|c|c|c|c|} \hline \textbf{Attack} & \textbf{l} & \textbf{k} & \textbf{Accuracy w/ Defense}\\ \hline\hline BAE & 5 & 2 & 55\% \\ \hline BAE & 10 & 5 & 50\% \\ \hline BAE & 7 & 5 & 79\% \\ \hline \end{tabular} \caption{Accuracy of the Random Perturbations Defense on the BAE attack for different values of $l$ and $k$.} \label{tab:landk} \end{table} \indent The defense method was able to return the model to its original accuracy, within statistical significance, while under attack for most of the attacks, with the exception of the Kuleshov method \citep{kuleshov2018adversarial}. The accuracy for the other attacks returned to the original level, ranging from 76.60\% to 83.20\% with the Random Perturbations Defense deployed. This shows that our defense method is successful at mitigating most potential adversarial attacks on sentiment classification models. Our defense method was able to increase the accuracy of the model while under attack for the FasterGeneticAlgorithm, PWWS, and TextFooler; these three attack methods with our defense achieved accuracy higher than the original accuracy, with statistical significance. \subsubsection{Increased Randomness Defense} The Increased Randomness Defense was also tested on all seven of the attacks, 5 times per attack. The results for these experiments can be seen in Table \ref{tab:MoreRandRes}. Tests were done to determine the proper value for $k$. These tests were performed on the BAE \citep{garg2020bae} attack and the results can be found in Table \ref{tab:MoreRandk}.
These tests revealed that 40-45 replicates $\hat{r}_j$ per $\hat{R}$ were ideal, with $k = 41$ being the final value used for the tests on each attack. This defense method was also more efficient. \begin{table}[h] \centering \begin{tabular}{|c|c|c|} \hline \textbf{Attack} & \textbf{k} & \textbf{Accuracy w/ Defense} \\ \hline\hline BAE & 10 & 67\% \\ \hline BAE & 20 & 76\% \\ \hline BAE & 25 & 72\% \\ \hline BAE & 30 & 76\% \\ \hline BAE & 35 & 74\% \\ \hline BAE & 40 & 82\% \\ \hline BAE & 45 & 74\% \\ \hline BAE & 41 & 77\% \\ \hline \end{tabular} \caption{Accuracy of the Increased Randomness Defense on the BAE attack for different values of $k$.} \label{tab:MoreRandk} \end{table} The runtime and the resources used for this method were lower than for the original Random Perturbations Defense, whose runtime was nearly 4 times longer than that of the increased randomness method. A comparison of the two defense methods on the seven attacks tested can be seen in Figure \ref{fig:Graph}. This defense was successful in returning the model to the original accuracy, within statistical significance, for most of the attacks, with the exception of the Kuleshov attack \citep{kuleshov2018adversarial}. A t-test was performed to determine the statistical significance of the difference between the defense method accuracy and the original accuracy.
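The one-sample t statistic underlying these significance checks can be computed with the standard library alone; the five run accuracies below are illustrative values, not the paper's raw data:

```python
import math
import statistics

def one_sample_t(samples, mu):
    """t statistic for H0: the mean of samples equals mu."""
    n = len(samples)
    mean = statistics.fmean(samples)
    se = statistics.stdev(samples) / math.sqrt(n)  # standard error
    return (mean - mu) / se

# Hypothetical accuracies from 5 defense runs, compared against the
# 80% clean accuracy; with 4 degrees of freedom the two-sided critical
# value at the 5% level is about 2.776.
t = one_sample_t([80.0, 82.0, 79.0, 83.0, 80.0], 80.0)
```

A |t| below the critical value means the defense accuracy is statistically indistinguishable from the original accuracy, which is the criterion used in the tables above.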
\begin{table}[h] \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{|c|c|c|} \hline \textbf{Attack} & \textbf{w/o Defense} & \textbf{w/ Defense} \\ \hline\hline BAE & 33\% & 78.40\%$\pm$3.14 \\ \hline DeepWordBug & 34\% & 76.80\%$\pm$2.64 \\ \hline FasterGeneticAlgo & 44\% & 82.80\%$\pm$2.48 \\ \hline Kuleshov* & 0\% & 66.23\%$\pm$4.65 \\ \hline PWWS & 0\% & 79.20\%$\pm$1.72 \\ \hline TextBugger & 6\% & 77.00\%$\pm$2.97 \\ \hline TextFooler & 1\% & 80.20\%$\pm$2.48 \\ \hline \end{tabular}} \caption{Accuracy for the Increased Randomness Defense from Algorithm \ref{algorithm:MoreRandAlgo} against each attack method with $k = 41$. The accuracy prior to attack is 80\%.} \label{tab:MoreRandRes} \end{table} \begin{figure}[h] \centering \includegraphics[scale=.29]{images/GraphOfResults1.png} \caption{Comparing the average accuracy of the Random Perturbations Defense and the Increased Randomness Defense methods to the under-attack accuracy without defense on the seven attacks.} \label{fig:Graph} \end{figure} \subsection{Comparison to Recent Defense Methods} Our defense methods are comparable to some recent defense methods created for text classification. Our defense methods return the model to the original accuracy within statistical significance. This is comparable to the work done by \citet{zhou2021defense} on their Dirichlet Neighborhood Ensemble (DNE) defense method. They were able to bring the model within 10\% of the original accuracy for CNN, LSTM, and BOW models on the IMDB dataset. However, their work is only applicable to synonym-substitution-based attacks. Since our defense methods apply equally well to seven attacks, they are general and can be applied without determining the exact type of attack (assuming it is one of the seven). \\\indent Another recent defense method, the Synonym Encoding Method (SEM), was tested on synonym-substitution attacks on Word-CNN, LSTM, Bi-LSTM, and BERT models \citep{wangnatural}.
This defense method was most successful on the BERT model and was able to return to within 3\% of the original accuracy on the IMDB dataset. Our work is comparable to both DNE and SEM, which represent recent work in defending NLP models against adversarial attacks, and more specifically against synonym-substitution-based attacks. \\\indent WordDP is another recent defense method for adversarial attacks against NLP models \citep{wang-etal-2021-certified}. This defense method uses Differential Privacy (DP) to create certified robust text classification models against word substitution adversarial attacks. Tested on the IMDB dataset, the WordDP method was successful at raising the accuracy to within 3\% of the original clean model, outperforming other defense methods including DNE. This is similar to our defense methods, but the authors do not report whether these results are statistically significant. \\\indent We also compare our defense methods, RPD and IRD, against these recent defense methods on cost and efficiency. Our RPD and IRD methods have comparable time complexity of $O(cn)$, where $c$ is the time taken for classification and $n$ is the number of reviews; each carries a similar constant representing the number of perturbations and replicates made. We cannot directly compare the time complexity of our defense methods with the SEM, DNE, and WordDP methods, since those methods require specialized training and/or encodings while RPD and IRD do not. The comparison between our methods and these recent defense methods thus comes down to specialized training vs. input preprocessing, and training and developing new encodings tends to be more time consuming and expensive than input preprocessing, which can occur during the testing phase.
\section{Conclusion} The work in this paper details a successful defense method against adversarial attacks generated from the TextAttack library. These attack methods use several different perturbation approaches to change the predictions made by NLP models. Our Random Perturbations Defense was successful in mitigating 6 different attack methods, returning the attacked models to their original accuracy within statistical significance. Our second method, the Increased Randomness Defense, used more randomization to create an equally successful defense that was 4 times more efficient than our Random Perturbations Defense. Overall, our defense methods are effective in mitigating a range of NLP adversarial attacks, presenting evidence for the effectiveness of randomness in NLP defense methods. The work done here opens up further study into the use of randomness in defending NLP models against adversarial attacks, including the use of these defense methods for multi-class classification. It also encourages further mathematical and theoretical explanation of the benefits of randomness in the defense of NLP models. \section*{Acknowledgement} The work reported in this paper is supported by the National Science Foundation under Grant No. 2050919. Any opinions, findings and conclusions or recommendations expressed in this work are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
\section{Introduction} Throughout this paper, $K$ denotes a perfect field, and $\overline{K}$ an algebraic closure of $K$. Recall that a field $K$ is said to be perfect if every irreducible polynomial over $K$ has only simple roots in $\overline{K}$. \smallskip For $M\in\mathcal{M}_n(K)$, the set of $n\times{n}$ matrices with entries in $K$, $\sigma{(}M)$ denotes its spectrum, that is, the set of its eigenvalues in $\overline{K}$. Two matrices $A,B\in\mathcal{M}_n(K)$ are said to be simultaneously triangularizable (denoted by ST) over $K$ if there exists a matrix $P\in{G}L_n(K)$ such that $P^{-1}AP$ and $P^{-1}BP$ are upper triangular. Such matrices thus have common invariant subspaces that form a complete flag over $K$. Note that if $A,B\in\mathcal{M}_n(K)$ commute, then they are ST over an extension field of $K$. In the sequel, $L$ denotes an extension field of $K$. \medskip In Section 2, we consider $A,B\in\mathcal{M}_n(K)$ and we assume that $A$ has $n$ distinct eigenvalues in $L$, an extension field of $K$, and that $\sigma(A)$ is explicitly known. We give an algorithm which allows one to check whether or not $A$ and $B$ are ST over $L$. Moreover, when $A$ and $B$ are ST, we obtain a basis of $L^n$ that diagonalizes $A$ and triangularizes $B$. \\ \indent In Section 3, we assume that $A,B\in\mathcal{M}_n(K)$ have a common invariant proper vector subspace of dimension $k$ over $L$. We recall some criteria for the existence of common invariant proper subspaces of matrices. Shemesh gives the following efficient criterion, for $k=1$, in \cite{1} \begin{thm2} Let $A,B\in\mathcal{M}_n(\mathbb{C})$. Then $A$ and $B$ have a common eigenvector if and only if $$\bigcap_{p,q=1}^{n-1}\ker([A^p,B^q])\not=\{0\}.$$ \end{thm2} Note that the complexity of this test is in $O(n^5)$.\\ When $k\geq 2$, a particular case, which is sufficient for our purpose, is treated in \cite{7},\cite{6} as follows. If $U\in\mathcal{M}_n(K)$, then $U^{(k)}$ denotes its $k^{th}$ compound.
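For concreteness, the $k^{th}$ compound appearing in the criterion below can be computed directly from the $k\times k$ minors; the following pure-Python sketch (exact arithmetic on integer or rational entries assumed, index sets in lexicographic order) illustrates the definition:

```python
from itertools import combinations

def det(M):
    """Determinant by Laplace expansion along the first row
    (adequate for the small minors needed here)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def compound(M, k):
    """k-th compound of M: the C(n,k) x C(n,k) matrix whose entries are
    the k x k minors of M, rows and columns indexed by the k-subsets
    of {0,...,n-1} in lexicographic order."""
    n = len(M)
    idx = list(combinations(range(n), k))
    return [[det([[M[r][c] for c in cols] for r in rows]) for cols in idx]
            for rows in idx]
```

For instance, the $n^{th}$ compound of an $n\times n$ matrix is the $1\times 1$ matrix $[\det M]$, and the compound of the identity is again an identity matrix.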
\begin{thm} \label{tsatso} Let $A,B\in \mathcal{M}_n(\mathbb{C})$, $k\in\llbracket{2},n-1\rrbracket$ be such that $A$ has distinct eigenvalues. The following are equivalent \begin{enumerate} \item[(i)] $A,B$ have a common invariant subspace $W$ of dimension $k$. \item[(ii)] There exists $s\in\mathbb{C}$ such that $(A+sI_n)^{(k)}$ has distinct eigenvalues and $(A+sI_n)^{(k)},(B+sI_n)^{(k)}$ are invertible and have a common eigenvector in $\mathbb{C}^{\binom{n}{k}}$. Moreover this eigenvector is decomposable in the exterior product of $k$ vectors that constitute a basis of $W$. \end{enumerate} \end{thm} We can show that the complexity of this test is at most (when $k=n/2$) in $O(\dfrac{2^{5n}}{n^{5/2}})$. \begin{rem} The previous two results are also valid over a field that is algebraically closed and has characteristic $0$. \end{rem} \medskip In the sequel, we work on a perfect field $K$ and we will use the following notation \medskip \textbf{Notation}. Let $P\in K[x]$ be an irreducible polynomial of degree $n$. The splitting field $S_P$ of $P$ is $K(x_1,\cdots,x_n)$ where $x_1,\cdots,x_n$ are the roots of $P$ in $\overline{K}$. The Galois group of $P$ is the set of the $K$-automorphisms of $S_P$, that is $$Gal(S_P/K)=\{\tau\in Aut(S_P)\;|\;\forall{t}\in{K},\tau(t)=t\};$$ it is isomorphic to a subgroup of $\mathcal{S}_n$, the group of all the permutations of $\{1,\cdots,n\}$. If $M\in\mathcal{M}_n(K)$ and $\chi_M$, the characteristic polynomial of $M$, is irreducible, then $G_M$ denotes the Galois group of $\chi_M$. \medskip \indent Assume that $A,B\in\mathcal{M}_n(K)$ have a common invariant proper subspace of dimension $k$ over an extension field $L$ of $K$ and that $\chi_A$ is irreducible over $K$. We consider conditions that imply that $A,B$ commute. We show the following results. \begin{enumerate} \item[(i)] If $k\in\{1,n-1\}$, then $A,B$ commute. \item[(ii)] If $1\leq k\leq n-1$ and $G_A=\mathcal{S}_n$ or $G_A=\mathcal{A}_n$, then $AB=BA$. 
\item[(iii)] If $1\leq k\leq n-1$ and $n$ is a prime number, then $AB=BA$. \end{enumerate} The idea is as follows: let $F=[u_1,\cdots,u_k]$ be an $A$-invariant vector space where the $(u_i)_{i\leq k}$ are eigenvectors of $A$ associated to the eigenvalues $E=(\alpha_i)_{i\leq k}\subset\sigma(A)$. We seek elements of $G_A$ whose orbits contain elements of $E$ and elements of $\sigma(A)\setminus E$. We consider $Bu_1\in F$ and we show that it is collinear with $u_1$.\\ In Section 4, we consider the case when $n=4,k=2$ and we show that the conclusion of (ii) may be false if we drop the hypothesis $G_A=\mathcal{S}_4$ or $G_A=\mathcal{A}_4$.\\ In Section 5, we use (i) and simultaneous triangularization to solve the matrix equation $AX-XA=X^{\alpha}$ in a particular case. \section{An algorithm checking the ST property} \begin{prop} \label{diag} Let $A,B\in\mathcal{M}_n(K)$ be ST over $L$, an extension field of $K$. We assume that $A$ has distinct eigenvalues over $L$. Then there exists $S\in{G}L_n(L)$ such that $S^{-1}AS$ is diagonal and $S^{-1}BS$ is upper triangular. \end{prop} \begin{proof} There exists $P\in{G}L_n(L)$ such that $P^{-1}AP=T,P^{-1}BP=U$ where $T$ and $U$ are upper triangular. Note that $\sigma(A)\subset L$. The leading principal submatrices of $T$ are diagonalizable over $L$. By induction, we can construct a basis of $L^n$ consisting of eigenvectors of $T$ such that the associated change of basis matrix is an upper triangular matrix $Q\in GL_n(L)$. Let $S=PQ$. Then $S^{-1}AS$ is diagonal and $S^{-1}BS=Q^{-1}UQ$ is upper triangular. \end{proof} \begin{rem} \label{propor} We may replace each column of $S$ with a proportional column. \end{rem} The previous result leads to an algorithm to check whether two such matrices are ST or not. Its complexity is in $O(n^4)$. \begin{prop} Let $A,B\in\mathcal{M}_n(K)$. We assume that $A$ has $n$ distinct eigenvalues in $L$, an extension field of $K$, and that we know explicitly $\sigma(A)$.
Then we can decide whether or not $A$ and $B$ are ST over $L$. If $A$ and $B$ are ST over $L$, then we obtain explicitly a matrix $S\in{G}L_n(L)$ that diagonalizes $A$ and triangularizes $B$. \end{prop} \begin{proof} Since $A$ has distinct eigenvalues in $L$, we can calculate, from $\sigma(A)$, a basis of $L^n$ consisting of eigenvectors of $A$. Let $R$ be the associated matrix and $Z=R^{-1}BR=[z_{i,j}]$. \begin{itemize} \item[Case 1.] The matrices $A,B$ are ST. According to the proof of Proposition \ref{diag} and Remark \ref{propor}, there exists a permutation matrix $D$ such that if $S=RD$, then $S^{-1}BS=D^{-1}ZD$ is upper triangular.\\ We consider the following algorithm: \medskip \noindent $U:=\{1,\cdots,n\}$. \\ For every $i\leq n$, \\ \indent if the $i^{th}$ column of $Z$ is zero, then $\alpha_i:=n$ \\ \indent else $\alpha_i:=n-\sup \{j\leq n\;|\;z[j,i]\not= 0\}$. \\ For $r$ from $1$ to $n$, do\\ \indent find $i_r$ such that $\alpha_{i_r}=\sup_U \alpha_i$.\\ \indent if $\alpha_{i_r}< n-r$ then $BREAK$\\ \indent $d_r:=i_r$\\ \indent $U:=U\setminus\{i_r\}$.\\ If $r=n$ then $OUTPUT:=(d_1,\cdots,d_n)$\\ else $OUTPUT:=NULL$. \medskip \noindent The output $(d_1,\cdots,d_n)$ constitutes a suitable permutation. \item[Case 2.] The matrices $A,B$ are not ST. The previous algorithm gives the output $NULL$. \end{itemize} \end{proof} \begin{rem} When $A$ has multiple eigenvalues or $\sigma(A)$ is unknown, finding an efficient algorithm is hard. A finite rational algorithm for checking whether two given $n\times{n}$ complex matrices are ST is presented in \cite[Theorem 6]{5}. The complexity analysis of that algorithm is omitted in \cite{5} and, as the author shows in \cite{4}, this test is impractical for $n\geq{6}$.
\end{rem} \section{Common invariant subspace and commutativity} \begin{prop} \label{subs} Let $A\in\mathcal{M}_n(K)$ be such that $A$ has $n$ distinct eigenvalues in an extension field $L$ of $K$ and let $Z=\{B\in\mathcal{M}_n(K)\;|\;A,B \text{ have a common eigenvector }$\\ $\text{in }L^n\}$. Then $Z$ is the union of $n$ subspaces of $\mathcal{M}_n(K)$, each of them containing the commutant of $A$. \end{prop} \begin{proof} Let $\alpha\in\sigma(A)$, $L_{\alpha}=K[\alpha]$ and $[{{L}_{\alpha}}:K]=k_{\alpha}$. Let $u\in{{L}_{\alpha}}^n\setminus\{0\}$ be such that $Au=\alpha{u}$. If $B=[b_{i,j}]\in{Z}$, then the condition ``$Bu$ and $u$ are linearly dependent" can be written in the form of $n-1$ ${{L}_{\alpha}}$-linear conditions on the $(b_{i,j})_{i,j}$, that is $k_{\alpha}\times{(}n-1)$ $K$-linear conditions on the $(b_{i,j})_{i,j}$. Thus $B$ is in a $K$-vector space of dimension at least $n^2-k_{\alpha}(n-1)$ that contains the commutant of $A$. Finally, $B$ ranges over the union of $n$ such subspaces. \end{proof} \begin{rem} \label{invsp} One has several interesting properties when $\chi_A$ is irreducible over $K$:\\ $i)$ The endomorphism $A$ has no invariant proper subspaces over $K$. \\ $ii)$ Since $K$ is a perfect field, $A$ has simple eigenvalues in $S_{\chi_A}$ and its commutant is $K[A]$, of dimension $n$.\\ $iii)$ According to \cite[p. 51]{2} and $ii)$, any $A$-invariant subspace of dimension $k$ over $\overline{K}$ is spanned by $k$ $A$-eigenvectors. \end{rem} From now on, we suppose that $A$ and $B$ have a proper common invariant subspace of dimension $k$ over an extension field of $K$. \begin{thm} \label{comeig} Let $n\geq{2}$. Let $A,B\in\mathcal{M}_n(K)$ be such that they have a common eigenvector over an extension field of $K$. We assume that the characteristic polynomial of $A$ is irreducible over $K$. Then $AB=BA$. \end{thm} \begin{proof} \label{commut} Let $u$ be a common eigenvector and put $Au=\alpha u,Bu=\beta u$.
Recall that $G_A$ is a transitive group, that is, there exist $(\tau_i)_{i=1,\cdots,n-1}\in G_A$ such that $$\sigma(A)=\{\alpha,\tau_1(\alpha),\cdots,\tau_{n-1}(\alpha)\}.$$ Moreover, $\tau_i(u)$ is defined componentwise and $A(\tau_i(u))=\tau_i(Au)=\tau_i(\alpha)\tau_i(u)$. Finally, $\{u,\tau_1(u),\cdots,\tau_{n-1}(u)\}$ is a basis of eigenvectors of $A$. Thus $B(\tau_i(u))=\tau_i(Bu)=\tau_i(\beta)\tau_i(u)$ and $AB=BA$. \end{proof} We can slightly improve the previous result as follows. \begin{lem} \label{transp} If $A,B\in\mathcal{M}_n(L)$ have a common invariant subspace of dimension $k$ over $L$, then $A^T$ and $B^T$ have a common invariant subspace of dimension $n-k$ over $L$. \end{lem} \begin{proof} The common invariant subspace of dimension $k$ can be written $V=\{X\in{L}^n\;|\;\Lambda{X}=0\}$ where $\Lambda\in\mathcal{M}_{n-k,n}(L)$ has maximal rank $n-k$. Since $\ker(\Lambda)\subset\ker(\Lambda{A})$, there exists $Z\in\mathcal{M}_{n-k}(L)$ such that $\Lambda{A}=Z\Lambda$, that is $A^T\Lambda^T=\Lambda^TZ^T$. The $n-k$ columns of $\Lambda^T$ span a vector space of dimension $n-k$ that is invariant for $A^T$. \end{proof} \begin{cor} \label{hyper} Let $A,B\in\mathcal{M}_n(K)$ be such that they have a common invariant hyperplane over an extension field of $K$. We assume that the characteristic polynomial of $A$ is irreducible over $K$. Then $AB=BA$. \end{cor} \begin{proof} According to Lemma \ref{transp}, $A^T$ and $B^T$ have a common eigenvector and by Theorem \ref{comeig}, $A^TB^T=B^TA^T$, which implies $AB=BA$. \end{proof} Now we consider the case where $A$ and $B$ have a common invariant proper subspace of dimension $\geq{2}$. Recall that $\mathcal{A}_n$, the group of even permutations of $\{1,\cdots,n\}$, contains the cycles of odd length. \begin{thm} \label{invsub} Let $n\geq 3$ and $A,B\in\mathcal{M}_n(K)$ be such that they have a common invariant proper vector subspace over an extension field of $K$.
We assume that $\chi_A$ is irreducible over $K$ and $G_A=\mathcal{S}_n$ or $G_A=\mathcal{A}_n$. Then $AB=BA$. \end{thm} \begin{proof} Since $\chi_A=\chi_{A^T}$ and according to Lemma \ref{transp}, we may replace $k$ by $n-k$ and assume that $k\leq \dfrac{n}{2}$, which implies $k+2\leq n$. Let $F$ be a common invariant subspace of dimension $k\geq 2$ for $A,B$. According to Remark \ref{invsp} $iii)$, the subspace $F$ is generated by certain eigenvectors $u_1,\cdots,u_k$ of $A$ respectively associated to the pairwise distinct eigenvalues of $A$: $\alpha_1,\cdots,\alpha_k$. Let $\sigma(A)=\{\alpha_1,\cdots,\alpha_k,\cdots,\alpha_n\}$. There exists $\tau\in \mathcal{A}_n\subset G_A$, a cycle of length $r=k+1$ if $k$ is even (resp. $r=k+2$ if $k$ is odd) such that, for every $1\leq i\leq r-1$, $\alpha_{i+1}=\tau(\alpha_i)$. Note that $F=[u_1,\cdots,\tau^{k-1}(u_1)]$ and $\tau^k(u_1)\notin F$. Assume that $Bu_1=\sum_{i=0}^q\lambda_i\tau^i(u_1)$ where $q\in\llbracket 1,k-1\rrbracket$, for every $i$, $\lambda_i\in S_{\chi_{A}}$ and $\lambda_q\not= 0$. Therefore $$Bu_{k-q+1}=B(\tau^{k-q}(u_1))=\sum_{i=0}^{q-1}\tau^{k-q}(\lambda_i)\tau^{k-q+i}(u_1)+\tau^{k-q}(\lambda_q)\tau^k(u_1)\in F.$$ Since $\tau^k(u_1)\notin F$, we get $\tau^{k-q}(\lambda_q)=0$, hence $\lambda_q=0$, a contradiction. Finally $Bu_1=\lambda_0u_1$ and we conclude by Theorem \ref{comeig}. \end{proof} One may wonder whether the conclusion of Theorem \ref{invsub} still holds when dropping the hypothesis $G_A=\mathcal{S}_n$ or $G_A=\mathcal{A}_n$. In general the answer is no, but it is yes when $n$ is prime. \begin{thm} Assume that $n\geq{3}$ is a prime number and let $A,B\in\mathcal{M}_n(K)$ be such that $\chi_A$ is irreducible over $K$. If $A$ and $B$ have a proper common invariant subspace, then $AB=BA$. \end{thm} \begin{proof} Let $F$ be a common invariant subspace of dimension $k\in\llbracket{2},n-1\rrbracket$ for $A,B$. Let $u\in F$ be an eigenvector of $A$ associated to $\alpha\in\sigma(A)$. Note that $n$ divides the cardinality of $G_A$.
Since $n$ is prime and according to Cauchy's theorem, there exists $\tau\in G_A$ of order $n$. Necessarily the permutation $\tau$ is a cycle of length $n$ and $\sigma(A)=\{\alpha,\cdots,\tau^{n-1}(\alpha)\}$. Moreover $\{u,\cdots,\tau^{n-1}(u)\}$ is a basis of eigenvectors of $A$ and some among these vectors constitute a basis of $F$. Put $Bu=\lambda_0u+\sum_{0<i\leq n-1}\lambda_i\tau^i(u)$ where the $(\lambda_i)_i$ are in $\overline{K}$. Assume that there exists $p\in \llbracket{1},n-1\rrbracket$ such that $\lambda_p\not=0$. Since $n$ is prime and $k<n$, there exists an integer $q$ such that $\tau^q(u)\in F$ and $\tau^{q+p}(u)\notin F$. Therefore $$B(\tau^q(u))=\tau^q(\lambda_0)\tau^{q}(u)+\sum_{0<i<n,i\not= p}\tau^q(\lambda_i)\tau^{q+i}(u)+\tau^q(\lambda_p)\tau^{q+p}(u)\in F.$$ Thus $B(\tau^q(u))$ is written as a linear combination of the basis $\{u,\cdots,\tau^{n-1}(u)\}$ and the coefficients of the vectors that are not in $F$ are zero. Consequently $\lambda_p=0$, a contradiction. Finally $Bu=\lambda_0 u$ and we conclude by Theorem \ref{comeig}. \end{proof} \begin{rem} Consider $A,B\in\mathcal{M}_{35}(\mathbb{Q})$ such that $AB\not= BA$ (the verification is easy) and $G_A=\mathcal{S}_{35}$ or $\mathcal{A}_{35}$ (the verification is easy with the ``Magma'' software). Then, by Theorem \ref{invsub}, we deduce that $A,B$ admit no common invariant proper subspaces (the direct verification is impossible because the algorithm associated to Theorem \ref{tsatso} is impractical for $n>12$). \end{rem} \section{The case $n=4$} Assume that $A,B\in\mathcal{M}_4(K)$ have a common invariant subspace of dimension $k\in\{1,2,3\}$ and that $\chi_A$ is irreducible over $K$. If $k=1,3$, then from Theorem \ref{comeig} and Corollary \ref{hyper}, $AB=BA$. From Theorem \ref{invsub}, we obtain the same conclusion if $k=2$ and $G_A=\mathcal{S}_4$ or $\mathcal{A}_4$.
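The exceptional $n=4$ configurations can be certified by machine. For the instance developed in the example at the end of this section ($K=\mathbb{Q}$, $\chi_A=x^4+x^3+x^2+x+1$), both quadratic factors of $\chi_A$ over $\mathbb{Q}(\sqrt5)$ have the form $x^2+\varphi x+1$ with $\varphi^2=\varphi+1$, so everything can be computed exactly in $\mathbb{Z}[\varphi]/(\varphi^2-\varphi-1)$: writing $M_\varphi=A^2+\varphi A+I_4$ and $M_{1-\varphi}=A^2+(1-\varphi)A+I_4$, we have $M_\varphi M_{1-\varphi}=\chi_A(A)=0$ and each factor has rank $2$, hence $\ker(M_\varphi)=\operatorname{im}(M_{1-\varphi})$, and the $B$-invariance of both planes $\Pi_\epsilon$ reduces to the identities $M_\varphi B M_{1-\varphi}=M_{1-\varphi} B M_\varphi=0$. A sketch in plain Python (matrices taken from the example; the pair encoding is ours):

```python
def add(x, y):
    """Sum in Z[phi], phi^2 = phi + 1; a + b*phi is stored as the pair (a, b)."""
    return (x[0] + y[0], x[1] + y[1])

def mul(x, y):
    """(a + b*phi)(c + d*phi) = ac + bd + (ad + bc + bd)*phi."""
    a, b = x
    c, d = y
    return (a * c + b * d, a * d + b * c + b * d)

def mmul(X, Y):
    """Matrix product over Z[phi]."""
    n = len(X)
    out = []
    for i in range(n):
        row = []
        for j in range(n):
            s = (0, 0)
            for k in range(n):
                s = add(s, mul(X[i][k], Y[k][j]))
            row.append(s)
        out.append(row)
    return out

def madd(X, Y):
    return [[add(x, y) for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def lift(M, coef=(1, 0)):
    """Embed an integer matrix into Z[phi], scaled by the ring element coef."""
    return [[mul((x, 0), coef) for x in row] for row in M]

# companion matrix of x^4+x^3+x^2+x+1 and the matrix B of the example
A = [[0, 0, 0, -1], [1, 0, 0, -1], [0, 1, 0, -1], [0, 0, 1, -1]]
B = [[0, -1, 0, 2], [-1, -1, 1, 1], [0, 0, 0, 1], [1, 0, 0, 0]]
I4 = [[int(i == j) for j in range(4)] for i in range(4)]

Ap, Bp, Ip = lift(A), lift(B), lift(I4)
A2 = mmul(Ap, Ap)
M_phi  = madd(madd(A2, lift(A, (0, 1))), Ip)    # A^2 + phi*A + I_4
M_conj = madd(madd(A2, lift(A, (1, -1))), Ip)   # A^2 + (1 - phi)*A + I_4
ZERO = [[(0, 0)] * 4 for _ in range(4)]
```

Since both values $\varphi=(1\pm\sqrt5)/2$ satisfy $\varphi^2=\varphi+1$, identities verified in this ring cover both planes $\Pi_{\pm1}$ at once; the last check below confirms that $A$ and $B$ nevertheless do not commute, so the hypothesis $G_A=\mathcal{S}_4$ or $\mathcal{A}_4$ in Theorem \ref{invsub} cannot simply be dropped.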
It remains to study the cases where $A$ admits an invariant plane $\Pi$ and $G_A=\mathcal{C}_4$, the cyclic group with four elements, ${\mathcal{C}_2}^2$ or $\mathcal{D}_4$, the dihedral group with eight elements. Of course, if $K$ is a finite field, then necessarily $G_A=\mathcal{C}_4$. \medskip Let $A\in\mathcal{M}_n(K)$ be such that $\chi_A$ is irreducible over $K$ and $\Pi$ be an $A$-invariant plane. We denote by $r_A(\Pi)$ the dimension of the $K$-vector space of the matrices $B\in\mathcal{M}_n(K)$ such that $\Pi$ is a $B$-invariant plane. We will see that $r_A(\Pi)$ does not depend only on $k$ and $G_A$. \medskip \begin{prop} \label{diedral} Let $A\in\mathcal{M}_4(K)$ be such that $\chi_A$ is irreducible, $G_A=\mathcal{D}_4$ and $\Pi$ is an $A$-invariant plane. Then $r_A(\Pi)=4$ or $8$. \end{prop} \begin{proof} There exist $\alpha_1,\alpha_2\in\sigma(A)$ such that $\Pi=\ker((A-\alpha_1 I_4)(A-\alpha_2 I_4))$. Let $L=K(\alpha_1,\alpha_2)$. Note that $H=\{\tau\in G_A\;|\;\tau(\alpha_1)=\alpha_2\}$ has two elements. Let $u$ be an eigenvector of $A$ associated to $\alpha_1$. If $\tau\in H$, then $\{u,\tau(u)\}$ is a basis of $\Pi$. Let $B\in\mathcal{M}_4(K)$ be such that $\Pi$ is $B$-invariant. Therefore $Bu=\lambda u+\mu \tau(u)$ where $\lambda,\mu\in L$. \begin{itemize} \item Case 1. One element $\tau$ of $H$ has order $4$. Then $\sigma(A)=(\tau^i(\alpha_1))_{0\leq i\leq 3}$ and $(\tau^i(u))_{0\leq i\leq 3}$ is a basis of $\overline{K}^4$ constituted by eigenvectors of $A$. Thus $$B(\tau(u))=\tau(\lambda)\tau(u)+\tau(\mu)\tau^2(u)\in\Pi,$$ which implies $\mu=0$. Therefore, for every $i$, $B(\tau^i(u))=\tau^i(\lambda)\tau^i(u)$ and $AB=BA$. \item Case 2. The elements of $H$ have order $2$. Then $$B(\tau(u))=\tau(\lambda)\tau(u)+\tau(\mu)u\in\Pi.$$ Let $(\tau_i)_{i=3,4}\in G_A$ such that $\tau_i(\alpha_1)=\alpha_i$. Clearly, $\{u,\tau(u),\tau_3(u),\tau_4(u)\}$ is a basis of eigenvectors of $A$ and, for $i=3,4$, $B(\tau_i(u))=\tau_i(Bu)$ depends only on $Bu$.
Finally, $B$ depends only on $\lambda,\mu\in L$ and $r_A(\Pi)=2[L:K]$. Since $[K(\alpha_1):K]=4$, we have $[L:K]=4$ or $8$; as necessarily $r_A(\Pi)<16$, this forces $[L:K]=4$ and $r_A(\Pi)=8$. \end{itemize} \end{proof} \begin{prop} \label{C4} Let $A\in\mathcal{M}_4(K)$ be such that $\chi_A$ is irreducible, $G_A=\mathcal{C}_4$ and $\Pi$ is an $A$-invariant plane. Then $r_A(\Pi)=4$ or $8$. \end{prop} \begin{proof} We use the notation of Proposition \ref{diedral}. Here $[L:K]=4$ and $H$ has a unique element $\tau$.\\ \begin{itemize} \item Case 1. $\tau$ is a generator of $G_A$. As in the proof of Proposition \ref{diedral}, Case 1, we show that $AB=BA$. \item Case 2. $\tau$ has order $2$. As in the proof of Proposition \ref{diedral}, Case 2, we show that $r_A(\Pi)=2[L:K]=8$. \end{itemize} \end{proof} \begin{prop} Let $A\in\mathcal{M}_4(K)$ be such that $\chi_A$ is irreducible, $G_A={\mathcal{C}_2}^2$ and $\Pi$ is an $A$-invariant plane. Then $r_A(\Pi)=8$. \end{prop} \begin{proof} Again we use the notation of Proposition \ref{diedral}. Here $[L:K]=4$, $H$ has a unique element $\tau$ and $\tau$ has order $2$. As in the proof of Proposition \ref{diedral}, Case 2, we show that $r_A(\Pi)=2[L:K]=8$. \end{proof} \begin{exam2}{}~ \begin{itemize} \item We consider the following instance where $K=\mathbb{Q}$, $\chi_A(x)=x^4+x^3+x^2+x+1$ and $\Pi_{\epsilon}=\ker(A^2+\dfrac{1+\epsilon\sqrt{5}}{2}A+I_4)$ where $\epsilon=\pm 1$. Here $G_A=\mathcal{C}_4$, the element of $H$ has order $2$ and, according to Proposition \ref{C4}, $r_A(\Pi_{\epsilon})=8$. In particular, the following pair $(A,B)$ is such that the planes $\Pi_{\epsilon}$ are invariant for $A,B$ and yet, $A$ and $B$ are not ST.
$$\;\;\;\;\;\;\;\;\;\;\;A=\begin{pmatrix}0&0&0&-1\\1&0&0&-1\\0&1&0&-1\\0&0&1&-1\end{pmatrix},B=\begin{pmatrix}0&-1&0&2\\-1&-1&1&1\\0&0&0&1\\1&0&0&0\end{pmatrix}\text{ where } G_B=\mathcal{D}_4.$$ With the help of Theorem \ref{tsatso}, we show that $A,B$ admit only the planes $\Pi_{\epsilon}$ as proper common invariant subspaces over $\mathbb{C}$.\\ $i)$ Applying Shemesh's criterion to the pairs $(A,B)$ and $(A^T,B^T)$, we conclude that there are no solutions in dimensions $1$ or $3$.\\ $ii)$ We prove easily that $(A+I_4)^{(2)}$ and $(B+I_4)^{(2)}$ have two common eigenvectors $$u_{\epsilon}=[1,\dfrac{-\epsilon\sqrt{5}+1}{2},1,\dfrac{-\epsilon\sqrt{5}+1}{2},\dfrac{-\epsilon\sqrt{5}+1}{2},1]^T.$$ An easy calculation shows that $u_{\epsilon}$ is the exterior product of the vectors of a basis of $\Pi_{\epsilon}$.\\ \item Now we assume $K=\mathbb{Z}/7\mathbb{Z}$ and $\chi_A$ is as above. Then $\chi_A$ is irreducible and since $K$ is finite, $G_A=\mathcal{C}_4$. Moreover $5$ is not a square in $K$ and we can define $\sqrt{5}$ in an extension field of $K$ of degree $2$. Then the planes $\Pi_{\epsilon}$, as above, are $A$-invariant. By the previous reasoning, we obtain $r_A(\Pi_{\epsilon})=8$. The matrices $A,B$, as above, admit the planes $\Pi_{\epsilon}$ as common invariant subspaces. We can show that $A,B$ have no other proper common invariant subspaces. Note that $\chi_B(x)=(x^2-x+4)(x^2+2x+2)$ and $B$ admits two invariant planes over $K$. \end{itemize} \end{exam2} \section{Solving a matrix equation} We give an application of Section 3 using the following known result. \begin{thm2} (McCoy's theorem \cite{3}) Let $L$ be an algebraically closed field and $A,B\in\mathcal{M}_n(L)$. Then $A,B$ are ST over $L$ if and only if for any polynomial $p(\lambda,\mu)$ in non-commuting indeterminates, $p(A,B)(AB-BA)$ is nilpotent.
\end{thm2} \begin{prop} \label{ST} Let $A=\begin{pmatrix}U&0_{p,q}\\0_{q,p}&V\end{pmatrix}\in\mathcal{M}_{p+q}(K)$ be such that $(U,V)\in \mathcal{M}_{p}(K)\times\mathcal{M}_{q}(K)$, $\chi_U$ and $\chi_V$ are distinct irreducible polynomials over $K$. If $B\in\mathcal{M}_{p+q}(K)$, then $A,B$ are ST over $\overline{K}$ if and only if $B$ is in the form $$B=\begin{pmatrix}f(U)&Q\\0_{q,p}&g(V)\end{pmatrix},\text{ respectively }B=\begin{pmatrix}f(U)&0_{p,q}\\Q&g(V)\end{pmatrix}$$ where $Q\in\mathcal{M}_{p,q}(K),\text{ respectively }Q\in\mathcal{M}_{q,p}(K)$ and $f,g\in K[x]$ are arbitrary. \end{prop} \begin{proof} ($\Rightarrow$) Clearly $\sigma(A)=\sigma(U)\cup\sigma(V)$ and $A$ has $p+q$ distinct eigenvalues. The eigenvectors of $A$ are in the form $[u,0]^T$ where ${u}^T$ is an eigenvector of $U$ or $[0,v]^T$ where ${v}^T$ is an eigenvector of $V$. Note that $A,B$ have a common eigenvector and assume, for instance, that it is in the form $[u,0]^T$ with $Uu^T=\alpha u^T$. We adapt the proof of Theorem \ref{comeig}: there exist $(\tau_i)_{i=1,\cdots,p-1}\in G_U$ such that $\sigma(U)=\{\alpha,\tau_1(\alpha),\cdots,\tau_{p-1}(\alpha)\}$. We deduce that $[u,0]^T$ and the $([\tau_i(u),0]^T)_{i\leq p-1}$ are eigenvectors of $B$ and $B$ is in the form $B=\begin{pmatrix}P&Q\\0_{q,p}&R\end{pmatrix}$ where $UP=PU$, that is $P=f(U)\in K[U]$. Then $AB-BA=\begin{pmatrix}0_{p}&UQ-QV\\0_{q,p}&VR-RV\end{pmatrix}$ and, more generally, $$p(A,B)(AB-BA)=\begin{pmatrix}0_{p}&*\\0_{q,p}&p(V,R)(VR-RV)\end{pmatrix}$$ where $p(\lambda,\mu)$ is any polynomial in non-commuting indeterminates $\lambda,\mu$. According to McCoy's theorem, $A,B$ are ST over $\overline{K}$ if and only if, for any polynomial $p(\lambda,\mu)$ in non-commuting indeterminates, $p(V,R)(VR-RV)$ is nilpotent, which is equivalent to $V,R$ being ST. Then $V,R$ have a common eigenvector and, according to Theorem \ref{comeig}, $VR=RV$, that is $R$ is a polynomial in $V$. \\ ($\Leftarrow$) Again using McCoy's theorem, the converse is clear.
\end{proof} Finally we apply Proposition \ref{ST} to solve a matrix equation. \begin{prop} Let $p,q$ be distinct positive integers. Let $A\in\mathcal{M}_{p+q}(K)$ be such that $\chi_A=\Phi\Psi$ where $\Phi$ and $\Psi$ are polynomials of degree $p$ and $q$, irreducible over $K$. Let $\alpha$ be a positive integer. Then the equation, in the unknown $X\in\mathcal{M}_{p+q}(K)$, \begin{equation} \label{shap} AX-XA=X^{\alpha} \end{equation} admits the unique solution $X=0$. \end{prop} \begin{proof} We may assume that $A=\begin{pmatrix}U&0_{p,q}\\0_{q,p}&V\end{pmatrix}$ where $U,V$ are the companion matrices of $\Phi,\Psi$. Since $X$ satisfies Equation (\ref{shap}), $A$ and $X$ are ST over $\overline{K}$ (cf. \cite{8}). According to Proposition \ref{ST}, necessarily $X$ takes one of two possible forms, for instance this one $$X=\begin{pmatrix}f(U)&Q\\0_{q,p}&g(V)\end{pmatrix}\text{ and consequently }AX-XA=\begin{pmatrix}0_{p}&UQ-QV\\0_{q,p}&0_{q}\end{pmatrix}.$$ $i)$ Assume $\alpha=1$. Equation (\ref{shap}) reduces to $$f(U)=0\;,\;g(V)=0\;,\;UQ-QV=Q.$$ The last equation can be rewritten $\phi(Q)=Q$ where $\phi=U\otimes I_q-I_p\otimes V^T$ is the sum of two linear functions that commute. Therefore $$\sigma(\phi)=\{\lambda-\mu\;|\;\lambda\in\sigma(U),\mu\in\sigma(V)\}.$$ If there are non-zero solutions, then there exist $\lambda\in\sigma(U),\mu\in\sigma(V)$ such that $\lambda-\mu=1$. Since $\chi_U$ is the minimal polynomial of $\lambda$ over $K$, then $\chi_U(x+1)$ is the minimal polynomial of $\mu$ over $K$ and $\chi_U(x+1)=\chi_V(x)$. That implies $p=q$, a contradiction.\\ $ii)$ Assume $\alpha>1$. Equation (\ref{shap}) reduces to $$f(U)^{\alpha}=0\;,\;g(V)^{\alpha}=0\;,\;UQ-QV=0.$$ Let $\sigma(U)=(\lambda_i)_{i\leq p}$. Then $(f(\lambda_i))_{i\leq p}=\sigma(f(U))=\{0\}$. Since $\chi_U$ is irreducible, it is the minimal polynomial of each $\lambda_i$; from $f(\lambda_1)=0$ we deduce that $\chi_U$ divides $f$, hence $f(U)=0$ and, in the same way, $g(V)=0$.
By the reasoning used in $i)$, for every $\lambda\in\sigma(U),\mu\in\sigma(V)$, $\lambda-\mu\not= 0$ and $\phi$ is a linear bijection. We conclude that $Q=0$. \end{proof} \medskip \textbf{Acknowledgements.} The author thanks David Adam and Roger Oyono for many valuable discussions. The author thanks the referee for helpful comments. \bibliographystyle{plain}
\allowdisplaybreaks \section{Introduction}\label{Introduction} Freshness of the status information of various physical processes collected by multiple sensors is a key performance enabler in many applications of wireless sensor networks (WSNs) \cite{8187436,8469047,8901143}, e.g., surveillance in smart home systems and drone control. The Age of Information (AoI) was introduced as a destination-centric metric that characterizes the freshness in status update systems \cite{6195689,6310931,5984917}. A status update packet of each sensor contains a time stamp representing the time when the sample was generated and the measured value of the monitored process. If at a time instant $t$, the most recently received status update packet contains the time stamp $U(t)$, the AoI is defined as $\Delta(t)=t-U(t)$. In other words, the AoI of each sensor is the time elapsed since the last received status update packet was generated at the sensor. The average AoI is the most commonly used metric to evaluate the AoI \cite{8469047,6195689,6310931,5984917,8486307,8006703,chen2019optimal,8606155,8901143,8123937,moltafet2019age}. The authors of \cite{8486307} considered a WSN in which sensors share one unreliable subchannel in each slot. They minimized the expected weighted sum AoI of the network by determining the transmission scheduling policy. The authors of \cite{8006703} considered an energy harvesting sensor and derived the optimal threshold in terms of remaining energy to trigger a new sample. The authors of \cite{8123937} considered an energy harvesting sensor and minimized the time average AoI by determining the optimal status update policy. The authors of \cite{chen2019optimal} considered two source nodes generating heterogeneous traffic with different power supplies and studied the peak-age-optimal status update scheduling.
The authors of \cite{8606155} considered a wireless power transfer powered sensor network and studied the performance of the system in terms of the average AoI. In this paper, we minimize the time average total transmit power of sensors by jointly optimizing the sampling action, the transmit power allocation, and the subchannel assignment in each slot under the constraints on the maximum time average AoI and maximum power of each sensor. To solve the proposed optimization problem, we apply the Lyapunov drift-plus-penalty method. To the best of our knowledge, joint optimization of the transmit power allocation, subchannel assignment, and sampling action with constrained AoI has not been studied earlier. The work most closely related to this paper is \cite{8486307}. Differently from \cite{8486307}, besides the sampling action of each sensor, we consider both transmit power allocation and subchannel assignment in each slot. \section{System Model and Problem Formulation}\label{System Model and Problem Formulation} We consider a WSN consisting of a set $\mathcal{K}$ of $K$ sensors and one sink, as depicted in Fig. \ref{model}. The sink is interested in time-sensitive information from the sensors which measure a physical phenomenon. We assume a slotted communication with normalized slots ${t\in\{1,2,\dots\}}$, where in each slot, the sensors share a set $\mathcal{N}$ of $N$ orthogonal subchannels with bandwidth $W$ per subchannel. We consider that each sensor can control the sampling process by deciding whether to take a sample or not at the beginning of each slot $t$. We assume that perfect channel state information is available at the sink. Let $\rho_{k,n}(t)$ denote the subchannel assignment at time slot $t$ as $\rho_{k,n}(t)\in \{0,1\}, \forall k \in\mathcal{K}, n \in\mathcal{N}$, where $\rho_{k,n}(t)=1$ indicates that subchannel $n$ is assigned to sensor $k$ at time slot $t$, and $\rho_{k,n}(t)=0$ otherwise.
To ensure that at any given time slot $t$, each subchannel can be assigned to at most one sensor, the following constraint is used: \begin{align}\label{mn001} \textstyle\sum_{k\in\mathcal{K}}\rho_{k,n}(t)\le 1, n\in \mathcal{N}, \forall t. \end{align} \begin{figure} \centering \includegraphics[scale=.4]{Model.pdf} \caption{System model.} \vspace{-5mm} \label{model} \end{figure} Let $p_{k,n}(t)$ denote the transmitted power of sensor $k$ over subchannel $n$ at slot $t$. Then, the signal-to-noise ratio (SNR) with respect to sensor $k$ over subchannel $n$ at slot $t$ is given by \begin{align} \gamma_{k,n}(t)=\dfrac{p_{k,n}(t)|h_{k,n}(t)|^2}{WN_0}, \end{align} where $h_{k,n}(t)$ is the channel coefficient from sensor $k$ to the sink over subchannel $n$ at slot $t$ and $N_0$ is the noise power spectral density. Accordingly, the achievable rate for sensor $k$ over subchannel $n$ in slot $ t $ is given by \begin{align} r_{k,n}(t)=W\log_2\left(1+\gamma_{k,n}(t)\right). \end{align} The achievable data rate of sensor $k$ at slot $t$ is equal to the summation of achievable data rates over all assigned subchannels at slot $t$, expressed as $$ R_{k}(t)=\textstyle\sum_{n\in\mathcal{N}}\rho_{k,n}(t)r_{k,n}(t). $$ Let $b_k(t)$ denote the sampling action of sensor $k$ at time slot $t$ as $b_k(t)\in \{0,1\}, \forall k \in\mathcal{K}$, where $b_k(t)=1$ indicates that sensor $k$ takes a sample at the beginning of time slot $t$, and $b_k(t)=0$ otherwise. We assume that sampling time (i.e., the time needed to take a sample) is negligible. We consider that sensor $k$ takes a sample at the beginning of slot $t$ only if there are enough resources to transmit the sample during the same slot $t$. In other words, if sensor $k$ takes a sample at the beginning of slot $t$ (i.e., $b_k(t)=1$), the sample will be transmitted during the same slot $t$. 
To this end, we use the following constraint: \begin{align} R_{k}(t)= \eta b_{k}(t), k\in \mathcal{K}, \forall t, \end{align} where $\eta$ is the size of each status update packet (bits). This constraint ensures that when sensor $k$ takes a sample at the beginning of slot $t$ (i.e., $b_k(t)=1$), the achievable rate for sensor $k$ at slot $t$ is $R_{k}(t)= \eta$, which guarantees that the sample is transmitted during the slot. Let $\delta_k(t)$ denote the AoI of the sensor $k$ at the beginning of slot $t$. If sensor $k$ takes a sample at the beginning of slot $t$ (i.e., $b_k(t)=1$), the AoI at the beginning of slot $t+1$ drops to one, and otherwise (i.e., $b_k(t)=0$), the AoI is incremented by one. Accordingly, the evolution of $\delta_k(t)$ is characterized as \begin{align}\label{AoI1} \delta_k(t+1)&=\begin{cases} 1&,\text{if} \,\,b_k(t)=1;\\ \delta_k(t)+1&,\text{otherwise}. \end{cases} \end{align} \begin{figure} \centering \includegraphics[scale=.9]{AoI02.pdf} \caption{The evolution of the AoI of sensor $k$.} \vspace{-5mm} \label{AoI} \end{figure} The evolution of the AoI of sensor $k$ is illustrated in Fig. \ref{AoI}. The time average AoI of sensor $k$ is calculated as the area under the AoI curve, normalized by the observation interval. As it can be seen, during slot $t$, the area under the AoI curve of sensor $k$ is calculated as a sum of the areas of a triangle and a parallelogram. The area of the triangle is equal to $1/2$ and the area of the parallelogram is equal to $\delta_k(t)$. {Therefore, the time average AoI of sensor $k$ is calculated as \begin{align}\label{mnbhg010} \Delta_k&= \dfrac{1}{2}+\lim_{t\to \infty}\dfrac{1}{t}\textstyle\sum_{\tau=1}^{t}\delta_k(\tau). 
\end{align}} {To make the calculations tractable, we follow the common approach of considering, instead of the time average AoI in \eqref{mnbhg010}, the time average of the expectation of the AoI \cite{8486307,neely2010stochastic,stochast009om}, given as \begin{align}\label{mnbhg01} \Delta_k&= \dfrac{1}{2}+\lim_{t\to \infty}\dfrac{1}{t}\textstyle\sum_{\tau=1}^{t}{\mathbb{E}}[\delta_k(\tau)], \end{align} where the expectation is with respect to the random wireless channel states and control actions made in reaction to the channel states\footnote{Throughout the paper, all expectations are taken with respect to the randomness of the wireless channel states and control actions made in reaction to the channel states.}.} We consider that the initial value of the AoI of all sensors is $\delta_k(1)=0, \,\,\forall k\in \mathcal{K}$. \subsection{Problem Formulation} Our objective is to minimize the time average total transmit power of sensors by optimizing the sampling action, the transmit power allocation, and the subchannel assignment in each slot subject to the maximum time average AoI and maximum power constraints for each sensor.
Thus, the optimization problem is formulated as follows \begin{subequations}\label{eqo1} \begin{align}\label{eq8a} &\text{minimize} \,\,\lim_{t\to \infty}\dfrac{1}{t}\textstyle\sum_{\tau=1}^{t}\textstyle\sum_{k\in \mathcal{K}}\textstyle\sum_{n\in\mathcal{N}}\mathbb{E}[p_{k,n}(\tau)]\\&\label{eq8o2} \text{subject\,\,to}\hspace{.37cm} \textstyle\sum_{k\in\mathcal{K}}\rho_{k,n}(t)\le 1, \forall n\in \mathcal{N}, \forall t\\&\label{eqo3} \hspace{1.8cm}\textstyle\sum_{n\in\mathcal{N}}p_{k,n}(t)\le P_k^{\text{max}},\,\,\, k\in \mathcal{K}, \forall t\\&\label{eqo5} \hspace{1.8cm}\Delta_k\le \Delta^{\text{max}}_k,\,\,\, k\in \mathcal{K} \\&\label{eq1o5} \hspace{1.8cm}\textstyle\sum_{n\in\mathcal{N}}\rho_{k,n}(t)r_{k,n}(t)= \eta b_{k}(t), k\in \mathcal{K}, \forall t \\&\label{eq1o50} \hspace{1.8cm}\rho_{k,n}(t)\in\{0,1\}, k\in \mathcal{K}, n\in \mathcal{N}, \forall t \\&\label{eq1o51} \hspace{1.8cm} b_{k}(t)\in\{0,1\}, k\in \mathcal{K}, \forall t, \end{align} \end{subequations} with variables $\{p_{k,n}(t),\rho_{k,n}(t)\}_{k \in\mathcal{K}, n \in\mathcal{N}}$ and $\{b_{k}(t)\}_{k \in\mathcal{K}}$ for all ${t\in\{1,2,\dots\}}$. The constraints of problem \eqref{eqo1} are as follows. The inequality \eqref{eq8o2} constrains that each subchannel can be assigned to at most one sensor in each slot; the inequality \eqref{eqo3} constrains the power of each sensor with respect to the maximum budget $P_k^{\text{max}}$; the inequality \eqref{eqo5} is the maximum acceptable time average AoI constraint for each sensor; the equality \eqref{eq1o5} ensures that each sample is transmitted during one slot; \eqref{eq1o50} and \eqref{eq1o51} represent the feasible values for the subchannel assignment and sampling policy variables, respectively. The proposed optimization problem is a mixed integer programming problem where the constraints and the objective function both contain time averages over the optimization variables. 
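As a sanity check on the model, the AoI recursion \eqref{AoI1} and the area-based average \eqref{mnbhg010} are easy to simulate. The sketch below (plain Python) drives a single sensor with an illustrative i.i.d. Bernoulli sampling policy of our own choosing, not the policy optimized in this paper:

```python
import random

def average_aoi(T, sample_prob, seed=0):
    """Simulate one sensor's AoI over T slots: delta drops to 1 in the slot
    after a sample (b = 1) and otherwise grows by 1, as in the recursion;
    the average AoI is 1/2 (the triangles) plus the time-averaged delta
    (the parallelograms), matching the area-under-the-curve computation."""
    rng = random.Random(seed)
    delta, area = 0, 0          # delta_k(1) = 0, as assumed in the paper
    for _ in range(T):
        area += delta           # parallelogram of height delta in this slot
        b = 1 if rng.random() < sample_prob else 0
        delta = 1 if b else delta + 1
    return 0.5 + area / T
```

With a sample taken in every slot the average approaches its minimum of $3/2$; rarer sampling raises the average AoI, which is exactly the quantity that constraint \eqref{eqo5} caps while the transmit power is being minimized.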
In the following section, a dynamic control algorithm using the Lyapunov optimization approach is presented to solve optimization problem \eqref{eqo1}. \section{Solution Algorithm}\label{Solution Algorithm} We use the Lyapunov drift-plus-penalty method introduced in \cite{neely2010stochastic} and \cite{stochast009om} to solve the optimization problem \eqref{eqo1}. According to the drift-plus-penalty method, the time average constraint \eqref{eqo5} is enforced by transforming the problem into a queue stability problem. In other words, for each time average inequality in constraint \eqref{eqo5}, a virtual queue is associated in such a way that the stability of these virtual queues implies the feasibility of constraint \eqref{eqo5}. To use the drift-plus-penalty method, we rewrite constraint \eqref{eqo5} as follows \begin{align}\label{consta1} \lim_{t\to \infty}\dfrac{1}{t}\textstyle\sum_{\tau=1}^{t}\mathbb{E}[\delta_k(\tau)]\le \Delta^{\text{max}}_k-\dfrac{1}{2},\,\,\, k \in\mathcal{K}. \end{align} Let $\{Q_k(t)\}_{k\in \mathcal{K}}$ denote the virtual queues associated with constraint \eqref{consta1}. Then, the virtual queues are updated at the beginning of each time slot as \begin{align}\label{consta2} Q_k(t\!+\!1)\!=\!\max\!\left[Q_k(t)-\left(\Delta^{\text{max}}_k\!\!-\!\!\dfrac{1}{2}\right),0\right]\!\!+\!\delta_k(t\!+\!1), \forall k\!\in\! \mathcal{K}. \end{align} Here, we use the notion of strong stability; the virtual queues are strongly stable if \cite[Ch. 2]{neely2010stochastic} \begin{align}\label{consta3} \lim_{t\to \infty}\dfrac{1}{t}\textstyle\sum_{\tau=1}^{t}\mathbb{E}[Q_k(\tau)]<\infty, \forall k\in \mathcal{K}. \end{align} According to \eqref{consta3}, a queue is strongly stable if its time average backlog is finite. Next, we introduce the Lyapunov function and its drift which are needed to define the queue stability problem.
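The virtual-queue recursion \eqref{consta2} and the strong-stability notion \eqref{consta3} can be illustrated with a few lines of plain Python (the helper names are ours):

```python
def virtual_queue_update(Q, delta_next, aoi_max):
    """One step of Q_k(t+1) = max[Q_k(t) - (Delta_max_k - 1/2), 0] + delta_k(t+1)."""
    return max(Q - (aoi_max - 0.5), 0.0) + delta_next

def run_queue(deltas, aoi_max):
    """Drive the virtual queue with a given AoI trajectory; return the final backlog."""
    Q = 0.0
    for d in deltas:
        Q = virtual_queue_update(Q, d, aoi_max)
    return Q
```

Driving the queue with an AoI trajectory whose time average stays below $\Delta^{\text{max}}_k-\frac{1}{2}$ keeps the backlog bounded, whereas a persistently violating trajectory makes $Q_k(t)$ grow linearly; this is the mechanism by which queue stability enforces \eqref{consta1}.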
Let $\bold{S}(t)=\{Q_k(t),\delta_k(t)\}_{k\in \mathcal{K}}$ denote the network state at the beginning of slot $t$, and $\bold{Q}(t)$ denote a vector containing all the virtual queues, i.e., ${\bold{Q}(t)=[Q_1(t),Q_2(t),\dots,Q_K(t)]}$. Then, a quadratic Lyapunov function $L(\bold{Q}(t))$ is defined by \cite[Ch. 3]{neely2010stochastic} \begin{align}\label{consta4} L(\bold{Q}(t))=\dfrac{1}{2}\textstyle\sum_{k\in\mathcal{K}}Q^2_k(t). \end{align} The Lyapunov function measures the network congestion: if the Lyapunov function is small, then all the queues are small, and if the Lyapunov function is large, then at least one queue is large. Therefore, by minimizing the expected change of the Lyapunov function from one slot to the next slot, queues $\{Q_k(t)\}_{k\in \mathcal{K}}$ can be stabilized \cite[Ch. 4]{neely2010stochastic}. The expected change of the Lyapunov function from one slot to the next is called the Lyapunov drift, defined as \begin{align}\label{consta5} \alpha(\bold{S}(t))=\mathbb{E}\left[L\left(\bold{Q}(t+1)\right)-L\left(\bold{Q}(t)\right)|\bold{S}(t)\right]. \end{align} According to the drift-plus-penalty minimization method, a control policy that minimizes the objective function of the optimization problem \eqref{eqo1} is obtained by minimizing the drift-plus-penalty in each slot $t$ \cite[Ch. 
3]{neely2010stochastic}, i.e., \begin{align}\label{consta6} \alpha(\bold{S}(t))+V\textstyle\sum_{k\in \mathcal{K}}\textstyle\sum_{n\in\mathcal{N}}\mathbb{E}[p_{k,n}(t)], \end{align} subject to the following constraints \begin{subequations}\label{eqo2} \begin{align}\label{eq8a2} &\textstyle\sum_{k\in\mathcal{K}}\rho_{k,n}(t)\le 1, \forall n\in \mathcal{N}, \forall t\\&\label{eqo32} \textstyle\sum_{n\in\mathcal{N}}p_{k,n}(t)\le P_k^{\text{max}},\,\,\, k\in \mathcal{K}, \forall t \\&\label{eq1o52} R_{k}(t)=\eta b_{k}(t), k\in \mathcal{K}, \forall t \\&\label{eq1o502} \rho_{k,n}(t)\in\{0,1\}, k\in \mathcal{K}, n\in \mathcal{N}, \forall t \\&\label{eq1o512} b_{k}(t)\in\{0,1\}, k\in \mathcal{K}, \forall t, \end{align} \end{subequations} where $V\ge0$ is a parameter that controls how much emphasis is placed on the objective function (power minimization). Therefore, by varying $V$, a desired trade-off between the sizes of the queue backlogs and the objective function can be obtained. Since minimizing the objective function \eqref{consta6} is intractable, we minimize an upper bound of \eqref{consta6} in each slot $t$ \cite[Ch. 3]{neely2010stochastic}. To find an upper bound for \eqref{consta6}, we use the following inequality, which holds for any $\hat{A}\ge0$, $\tilde{A}\ge0$, and $\bar{A}\ge0$ \cite[Ch. 3]{neely2010stochastic} \begin{align}\label{wrf01} \left(\max\left[\hat{A}-\tilde{A},0\right]+\bar{A}\right)^2\le\hat{A}^2+\tilde{A}^2+\bar{A}^2+2\hat{A}(\bar{A}-\tilde{A}). \end{align} By applying \eqref{wrf01} to \eqref{consta2}, an upper bound for $Q^2_k(t+1)$ is given as \begin{align}\nonumber & Q^2_k(t+1)\le Q^2_k(t)+\left(\Delta^{\text{max}}_k-\dfrac{1}{2}\right)^2+\delta^2_k(t+1)+2Q_k(t)\\&\label{021mb4} \left(\delta_k(t+1)-(\Delta^{\text{max}}_k-\dfrac{1}{2})\right). 
\end{align} By applying \eqref{021mb4} to \eqref{consta6}, an upper bound for \eqref{consta6} is given as \begin{align}\nonumber &\alpha(\bold{S}(t))+V\sum_{k\in \mathcal{K}}\sum_{n\in\mathcal{N}}\mathbb{E}[p_{k,n}(t)]\le V\sum_{k\in \mathcal{K}}\sum_{n\in\mathcal{N}}\mathbb{E}[p_{k,n}(t)]+\\&\nonumber\dfrac{1}{2}\mathbb{E}\bigg[\sum_{k\in\mathcal{K}}\!\bigg((\Delta^{\text{max}}_k-\dfrac{1}{2})^2+\delta^2_k(t+1)+2Q_k(t)\big(\delta_k(t+1)-\\&\nonumber(\Delta^{\text{max}}_k-\dfrac{1}{2})\big)\bigg)\bigg|\bold{S}(t)\bigg]=V\textstyle\sum_{k\in \mathcal{K}}\textstyle\sum_{n\in\mathcal{N}}\mathbb{E}[p_{k,n}(t)]\\&\nonumber+\dfrac{1}{2}\sum_{k\in\mathcal{K}}\bigg((\Delta^{\text{max}}_k-\dfrac{1}{2})^2+\mathbb{E}[\delta^2_k(t+1)|\bold{S}(t)]+2Q_k(t)\\&\label{021mb40}\big(\mathbb{E}[\delta_k(t+1)|\bold{S}(t)]-(\Delta^{\text{max}}_k-\dfrac{1}{2})\big)\bigg). \end{align} To derive the upper bound for \eqref{consta6}, we need to determine $\mathbb{E}[\delta_k(t+1)|\bold{S}(t)]$ and $\mathbb{E}[\delta^2_k(t+1)|\bold{S}(t)]$. To this end, by using the evolution of the AoI in \eqref{AoI1}, $\delta_k(t+1)$ and $\delta^2_k(t+1)$ are calculated as \begin{align}\label{021mb401} &\delta_k(t+1)=b_k(t)+\left(1-b_k(t)\right)(\delta_k(t)+1), k\in \mathcal{K} \\&\nonumber \delta^2_k(t+1)=b_k(t)+\left(1-b_k(t)\right)(\delta_k(t)+1)^2, k\in \mathcal{K}. \end{align} By using the expressions in \eqref{021mb401}, $\mathbb{E}[\delta_k(t+1)|\bold{S}(t)]$ and $\mathbb{E}[\delta^2_k(t+1)|\bold{S}(t)]$ are given as \begin{align}\label{021mb4010} &\mathbb{E}[\delta_k(t+1)|\bold{S}(t)]\!=\!\mathbb{E}[b_k(t)]\!+\!(1\!-\!\mathbb{E}[b_k(t)])(\delta_k(t)\!+\!1), k\in \mathcal{K} \\&\nonumber \mathbb{E}[\delta^2_k(t\!+\!1)|\bold{S}(t)]\!=\!\mathbb{E}[b_k(t)]\!+\!(1\!-\!\mathbb{E}[b_k(t)])(\delta_k(t)\!+\!1)^2, k\!\in\! \mathcal{K} . 
\end{align} By substituting \eqref{021mb4010} into the right hand side of \eqref{021mb40}, the upper bound for \eqref{consta6} is given as \begin{align}\nonumber &V\sum_{k\in \mathcal{K}}\sum_{n\in\mathcal{N}}\mathbb{E}[p_{k,n}(t)]+\dfrac{1}{2}\sum_{k\in\mathcal{K}}\bigg( (\Delta^{\text{max}}_k-\dfrac{1}{2})^2+(\delta_k(t)+1)^2\\&\nonumber+(2Q_k(t)-1)(\delta_k(t)+1)+ \mathbb{E}[b_k(t)]\big(1-(\delta_k(t)+1)^2\\&\label{uppervn} -{{2Q_k(t)}}\delta_k(t)\big)\bigg). \end{align} Next, we explain the proposed dynamic algorithm to solve the optimization problem \eqref{eqo1}. The main steps of the algorithm are summarized in Algorithm 1. The algorithm observes the channel states $\{h_{k,n}(t)\}_{k\in \mathcal{K},n\in \mathcal{N}}$ and network state $\bold{S}(t)$ in each time slot $t$ and makes a control action to minimize \eqref{uppervn} subject to the constraints \eqref{eq8a2}-\eqref{eq1o512}. Note that the drift-plus-penalty method exploits the principle of \textit{opportunistically minimizing an expectation} \cite[Ch. 8]{neely2010stochastic} to solve the subproblem in each slot. To solve the optimization problem \eqref{eqoi1} in each slot, we resort to an exhaustive search algorithm. 
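A possible sketch of this per-slot exhaustive search is given below. For illustration only, it assumes a Shannon-type rate model with unit slot length, so that the minimum power delivering a packet of $\eta$ bits on subchannel $n$ is $(2^{\eta/W}-1)N_0W/h_{k,n}(t)$; all names, the rate model, and the units are our assumptions, not part of the paper's system model:

```python
# Per-slot exhaustive search over sampling vectors b and injective
# subchannel assignments, minimizing the drift-plus-penalty objective
# V*sum(p) + (1/2)*sum_k b_k*(1 - (delta_k+1)^2 - 2*Q_k*delta_k).
import itertools, math

def per_slot_action(h, Q, delta, V, eta, W, N0, P_max):
    K, N = len(h), len(h[0])
    best_cost, best_action = math.inf, None
    for b in itertools.product([0, 1], repeat=K):
        sampling = [k for k in range(K) if b[k]]
        if len(sampling) > N:              # not enough subchannels
            continue
        for chans in itertools.permutations(range(N), len(sampling)):
            power, ok = {}, True
            for k, n in zip(sampling, chans):
                # minimum power that delivers eta bits on subchannel n
                p = (2 ** (eta / W) - 1) * N0 * W / h[k][n]
                if p > P_max[k]:
                    ok = False
                    break
                power[k] = (n, p)
            if not ok:
                continue
            cost = V * sum(p for _, p in power.values())
            for k in range(K):
                cost += 0.5 * b[k] * (1 - (delta[k] + 1) ** 2
                                      - 2 * Q[k] * delta[k])
            if cost < best_cost:
                best_cost, best_action = cost, (b, power)
    return best_action
```

The behavior matches the trade-off discussed later: with a large virtual-queue backlog and small $V$ the search samples and transmits, while a large $V$ makes the power term dominate and suppresses sampling.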
\begin{algorithm}[t] { \caption{Proposed solution algorithm for problem \eqref{eqo1}} \label{table-1} Step 1: \textbf{initialization}: set ${t=0}$, set $V$, and initialize $~~~~~~~~~~~~~$$\{Q_k(0),\delta_k(0)\}_{k\in \mathcal{K}}$, \\ \textbf{for} each time slot $t$ \textbf{do} \\ Step 2: Sampling action, transmit power, and subchannel $~~~~~~~~~$assignment: obtain $\{p_{k,n}(t),\rho_{k,n}(t)\}_{k \in\mathcal{K}, n \in\mathcal{N}}$ and $~~~~~~~~~$$\{b_{k}(t)\}_{k \in\mathcal{K}}$ by solving the following optimization $~~~~~~~~~~~~~$problem: \begin{align}\label{eqoi1} &\text{minimize} \,\,V\sum_{k\in \mathcal{K}}\sum_{n\in\mathcal{N}}p_{k,n}(t)+\\&\nonumber\hspace{1.5cm}\dfrac{1}{2}\sum_{k\in\mathcal{K}}\bigg( b_k(t)\big(1-(\delta_k(t)+1)^2-2Q_k(t)\delta_k(t)\big)\bigg)\\&\nonumber \text{subject\,\,to}\hspace{.2cm} \eqref{eq8a2}-\eqref{eq1o512}, \end{align} $~~~$with variables $\{p_{k,n}(t),\rho_{k,n}(t)\}_{k \in\mathcal{K}, n \in\mathcal{N}}$ and $~~~~~~~~~~~$$\{b_{k}(t)\}_{k \in\mathcal{K}}$, \\ Step 3: Queue update: update $\{Q_k(t+1),\delta_k(t+1)\}_{k\in \mathcal{K}}$ $~~~~~~~~~~~~$using \eqref{consta2} and \eqref{021mb401}, \\ $~~~~~~~~~~~~$Set $t=t+1$, and go to Step 2, \\ \textbf{end for} } \end{algorithm} \section{Numerical Results}\label{Numerical Results} In this section, we evaluate the performance of the proposed dynamic control algorithm presented in Algorithm 1. Due to the complexity of the exhaustive search solution used to solve the optimization problem \eqref{eqoi1}, we evaluate the performance of the system with a small number of sensors and subchannels. We consider ${K=2}$ sensors placed in a two-dimensional plane and ${N=2}$ subchannels with bandwidth ${W=180}$ kHz. The coordinate of sensor $1$ is $(0,300)$, the coordinate of sensor $2$ is $(300,0)$, and the coordinate of the sink is $(0,0)$. 
The channel coefficient from sensor $k$ to the sink over subchannel $n$ at slot $t$ is modeled by $h_{k,n}(t)=(d_k/d_0)^\xi c_{k,n}(t)$, where $d_k$ is the distance from sensor $k$ to the sink, $d_0$ is the far field reference distance, $\xi$ is the path loss exponent, and $c_{k,n}(t)$ is a Rayleigh distributed random coefficient. Accordingly, $(d_k/d_0)^\xi$ represents large scale fading and the term ${c_{k,n}(t)}$ denotes small scale Rayleigh fading. We set ${\xi=-3}$, ${d_0=1}$, the maximum acceptable average AoI of the sensors ${\Delta_{k}^{\text{max}}=4, \forall k}$, the size of each packet ${\eta=600}$ Bytes, and the parameter of the Rayleigh distribution to $0.5$. Fig. \ref{f02} depicts the average AoI of sensor 1 as a function of $V$. According to this figure, when $V$ increases, the average AoI of sensor 1 increases as well. This is because when $V$ increases, the backlogs of the virtual queues associated with the time average AoI constraints \eqref{eqo5} increase. We can also observe that the average AoI of the sensor is always smaller than the maximum acceptable average AoI ${\Delta_{k}^{\text{max}}=4}$. Fig. \ref{f03} illustrates the time average total transmit power as a function of $V$. The figure shows that when $V$ increases, the average total transmit power decreases. This is because when $V$ increases, more emphasis is placed on minimizing the total transmit power in the objective function of optimization problem \eqref{eqoi1}. Fig. \ref{f01} illustrates the trade-off between the average AoI and the average total transmit power of the sensors for different values of $V$. By increasing $V$, the average AoI of the sensors increases and the average total transmit power decreases. Note that the average AoI of the sensors always remains smaller than the maximum acceptable average AoI. From Figs. \ref{f02} and \ref{f01}, we observe that for sufficiently large $V$, the average AoI values of the sensors eventually reach the maximum acceptable average AoI. 
Similarly, as can be seen in Figs. \ref{f03} and \ref{f01}, for high values of $V$, the average total transmit power of the sensors saturates to a certain level. \begin{figure} \centering \includegraphics[scale=0.4]{AAge-eps-converted-to.pdf} \caption{ Average AoI of sensor 1 as a function of $V$. } \vspace{-5mm} \label{f02} \end{figure} \begin{figure} \centering \includegraphics[scale=0.41]{PPower-eps-converted-to.pdf} \caption{Average total transmit power of the sensors as a function of $V$. } \vspace{-5mm} \label{f03} \end{figure} \begin{figure} \centering \includegraphics[scale=0.39]{Tradeoff-eps-converted-to.pdf} \caption{Trade-off between the average total transmit power and average AoI of sensor 1 and sensor 2 as a function of $V$. } \vspace{-5mm} \label{f01} \end{figure} \section{Conclusions}\label{Conclusion} In this paper, we considered a status update system consisting of a set of sensors that can control the sampling action. The status update packets of the sensors are transmitted by sharing a set of orthogonal subchannels in each slot. We formulated an optimization problem to minimize the time average total transmit power of the sensors subject to time average AoI and maximum power constraints for each sensor. To solve the proposed optimization problem, we used the Lyapunov drift-plus-penalty method. This method provides a trade-off between the average total transmit power and the average AoI of the sensors, as was shown in the numerical results. \section*{Acknowledgements} This research has been financially supported by the Infotech Oulu, the Academy of Finland (grant 323698), and Academy of Finland 6Genesis Flagship (grant 318927). M. Codreanu would like to acknowledge the support of the European Union's Horizon 2020 research and innovation programme under the Marie Sk\l{}odowska-Curie Grant Agreement No. 793402 (COMPRESS NETS). M. Moltafet would like to acknowledge the support of Finnish Foundation for Technology Promotion. 
\bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:introduction} \IEEEPARstart{I}{n} this paper we study possible approaches to the design of electrically thin layers (sheets) which would behave as {\em perfect absorbers} for normally-incident electromagnetic plane waves. We say that absorption in a layer at some frequency is ``perfect'' or ``total'' if all incident power is dissipated in the layer. This implies that both reflection and transmission coefficients are equal to zero. In this study we consider only the case of normal incidence; thus, this term should not be confused with the {\em perfectly matched layer} or PML, which implies a zero reflection coefficient at any incidence angle and for any polarization of the incident wave. The theory and design of absorbers for electromagnetic radiation has a long history and there exists a wide variety of designs, especially for microwave frequencies (see, e.g., \cite{RCS, radar_absorbers2, radar_absorbers}). However, in most of these designs the absorbing structure is backed by a reflecting wall (usually a metal surface), because most often the goal is to reduce microwave reflections from metal structures. Recently, there has been considerable interest in thin absorbing layers for situations where there is no reflector behind, so that the electromagnetic properties of the object which one wants to ``hide" can be arbitrary. Naturally, a thin reflector can be incorporated in the absorber structure, but often it is desirable to allow off-band electromagnetic waves to pass through the structure, or avoiding conductors is one of the application requirements. Also, for infrared and visible-light applications the use of perfect reflectors as parts of absorbing layers is not practically possible, unless the use of electrically thick layers of photonic crystals is allowed in the design. 
Electrically thin matched absorbers can be realized in many ways, but, to the best of our knowledge, only a very limited set of opportunities has been explored so far. One known possibility is to combine two thin metamaterial layers with contrasting material parameters \cite{Bilotti2006} or combine a thin resistive sheet with an array of small resonant split rings (which realize the necessary magnetic response) \cite{Bilotti2011}. A fundamental limitation on the bandwidth of metal-backed absorbers has been discussed in \cite{Rozanov}. A review of recently introduced multilayer absorbers can be found in \cite{Watts}. Here we will not consider such two- or multilayer structures, concentrating on the basic and fundamentally simplest case of a single sheet with properly designed properties. These single-layer absorbers provide ultimately thin design solutions, because the layer thickness cannot be made smaller than just one layer of particles (molecules). Conceptually, the simplest possible thin absorbing sheet is a uniform or composite layer of electrically negligible thickness (impedance sheet). In this case, the incident electric field induces an infinitesimally thin sheet of electric current in the layer, which eventually leads to dissipation of the incident power, if the layer is lossy. However, it is obvious that in this case the absorbed power can reach only one half of the incident power, and total absorption is not possible (e.g., \cite{add1,Pozar_array}). This follows from the fact that the induced current sheet symmetrically radiates plane waves in the forward and backward directions. A zero transmission coefficient implies that the amplitude of this secondary wave behind the sheet equals that of the incident wave (so that the two waves cancel each other behind the sheet), but this means that the magnitude of the reflection coefficient equals unity. Thus, in order to enable total absorption, we must also allow a magnetic current to be induced in the layer. 
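This one-half bound is easy to verify for a uniform resistive sheet. For normal incidence, a sheet of impedance $Z_{\rm s}$ in free space has $r=-\eta_0/(2Z_{\rm s}+\eta_0)$ and $t=2Z_{\rm s}/(2Z_{\rm s}+\eta_0)$ (a standard transmission-line result, stated here as an aside), so the absorbed fraction $1-|r|^2-|t|^2$ peaks at exactly $1/2$ for $Z_{\rm s}=\eta_0/2$:

```python
# Absorbed power fraction of a purely electric (resistive) impedance
# sheet in free space: it never exceeds 1/2, attained at Zs = eta0/2.
ETA0 = 376.73  # free-space wave impedance, ohms

def sheet_absorption(Zs, eta0=ETA0):
    r = -eta0 / (2 * Zs + eta0)        # reflection coefficient
    t = 2 * Zs / (2 * Zs + eta0)       # transmission coefficient
    return 1 - abs(r) ** 2 - abs(t) ** 2
```

At the optimum $Z_{\rm s}=\eta_0/2$ one has $r=-1/2$ and $t=1/2$; removing the remaining reflected and transmitted halves is precisely what requires the magnetic response.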
Strictly speaking, this implies that the layer thickness cannot be negligibly small (electrically), at least if no natural magnetics are used, but it can still be made very small compared with the wavelength. In view of the practical requirements for realizing layers with the desired electromagnetic response, which call for the use of composite structures, we do not model the layer as a homogeneous sheet described by some surface susceptibility or impedance, but assume from the beginning that the layer is a composite structure: a single layer of small polarizable particles. By engineering these inclusions, we can tune the reflection and transmission responses of the composite layer. Such artificial sheets with engineered electromagnetic properties are called ``metasurfaces'' or ``metasheets'', see recent reviews \cite{Holloway,Shalaev_review}. The absorber designs which we will develop here will give the required polarizabilities of individual inclusions together with the appropriate array period. In order to have full design freedom in defining how the induced electric and magnetic moments of the absorbing dipolar particles depend on the incident fields, we assume that the particles are the most general bi-anisotropic particles, possibly nonreciprocal. Here, we will consider only the case of normal plane-wave incidence. In practice, performance stability for oblique incidence is an important issue. Conventional absorbers are made of material layers of considerable electrical thickness, and this thickness is the key parameter defining the resonant frequency. Because this thickness effectively changes when the incidence angle deviates from the normal, a shift in the frequency of the absorption maximum is expected. The ultimately thin absorbing layers proposed in this paper are expected to be more stable with respect to changes of the incidence angle, but a separate study is needed to understand the angular dependence of the response of these new structures. 
The condition for total absorption of normally incident plane waves by a single infinite periodic array of electric and magnetic dipoles is known from the antenna theory \cite{Pozar_array}. Let us assume that an infinite array with the period $a$ ($a$ is smaller than the wavelength in the surrounding space) in each unit cell contains one isotropic particle in which the incident electric and magnetic fields induce electric dipole moment $\_p$ and magnetic moment $\_m$. The two moments will be orthogonal: electric moment along the incident electric field and magnetic moment along the magnetic field. Arrays of both moments will create secondary plane waves, and in the forward direction the secondary electric field amplitude reads \begin{equation} E_{\rm forward}={-j\omega \over 2S}\left(\eta_0 p +{1\over \eta_0} m\right)\end{equation} where $S=a^2$ is the unit-cell area, so that $j\omega p/S$ is the surface-averaged electric current density and $j\omega m/S$ is the magnetic current surface density. $\eta_0=\sqrt{\mu_0/\epsilon_0}$ is the wave impedance of the surrounding space. Derivation of these formulas for plane-wave fields created by planar sheets of electric and magnetic currents can be found e.g. in \cite{Felsen_M}. In the opposite direction (the reflection direction), the same induced electric and magnetic currents generate a plane wave with the amplitude \begin{equation} E_{\rm back}={-j\omega \over 2S}\left(\eta_0 p -{1\over \eta_0} m\right)\end{equation} Now, we see that it is possible to choose the moments so that the secondary field would cancel the incident field $E_{\rm inc}$ in the forward direction (zero transmission coefficient) and at the same time the secondary field would be zero in the back direction (zero reflection coefficient). 
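With the balanced choice $m=\eta_0^2 p$ the backward field vanishes, and choosing $p$ so that the forward secondary field equals $-E_{\rm inc}$ cancels the transmitted wave. A quick numerical check of these two field expressions (frequency, cell size, and field amplitude are arbitrary illustrative values):

```python
# Verify the total-absorption conditions for the dipole array: with
# m = eta0**2 * p and p = -1j*S*E_inc/(omega*eta0), the backward radiated
# field is zero and the forward field cancels the incident wave.
import math

eta0 = 376.73                      # free-space wave impedance, ohms
omega = 2 * math.pi * 10e9         # 10 GHz (illustrative)
S = (5e-3) ** 2                    # 5 mm x 5 mm unit cell (illustrative)
E_inc = 1.0

p = -1j * S * E_inc / (omega * eta0)
m = eta0 ** 2 * p

E_forward = -1j * omega / (2 * S) * (eta0 * p + m / eta0)
E_back = -1j * omega / (2 * S) * (eta0 * p - m / eta0)
```

Here `E_forward` evaluates to $-E_{\rm inc}$ (total cancellation of the transmitted wave) and `E_back` to zero, up to rounding.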
Obviously, the conditions are \begin{equation} p={-jS\over \omega \eta_0}E_{\rm inc},\qquad m=\eta_0^2 p \l{ccc} \end{equation} This arrangement of electric and magnetic current sheets is in fact a Huygens surface, and for volumetric material layers that would correspond to materials with equal relative permittivity and permeability. We note in passing that the use of volumetric materials with matched wave impedance in absorbers is well known, see e.g. \cite{RCS,Sol} or \cite[ch.~12]{basic}. Thus, a simple approach to realization of total absorbing layers is to arrange electrically and magnetically polarizable particles in a dense lattice and tune the polarizabilities so that \r{ccc} are satisfied. However, this is not the only possible approach. We only need to ensure that the dipole moments have the required values, but the particles in which these dipole moments are induced can be any electrically small objects which one can describe as dipole scatterers. We expect that there should be considerable design freedom and possibilities for realizing additional practically useful properties if we do not restrict the design space by the simplest case of small electrically and magnetically polarizable scatterers (like small magnetodielectric spheres, for example). In this paper we consider planar layers formed by electrically small particles modeled by the most general linear relations between the induced dipole moments $\_p$ and $\_m$ and the local fields $\mathbf{E}_{\rm loc}$ and $\mathbf{H}_{\rm loc}$ at the positions of the particles: \begin{equation} \left[ \begin{array}{c} \mathbf{p} \\ \mathbf{m}\end{array} \right] =\left[ \begin{array}{cc} \={\alpha}_{\rm ee}& \={\alpha}_{\rm em}\\ \={\alpha}_{\rm me}& \={\alpha}_{\rm mm} \end{array} \right]\cdot \left[ \begin{array}{c} \mathbf{E}_{\rm loc} \\ \mathbf{H}_{\rm loc}\end{array} \right]. 
\label{eq:e1}\end{equation} Although for the desired operation of the layer the induced dipole moments must satisfy the same conditions of the Huygens sheet \r{ccc}, the actual design space is vastly larger, since we can exploit the magneto-electric coupling parameters $\={\alpha}_{\rm em}$ and $\={\alpha}_{\rm me}$ to bring the induced moments to the desired balance and required amplitudes. Furthermore, additional functionalities will become possible, as we will see in the following. While the simple and well-known solution in the form of electric and magnetic dipole particles corresponds to simple magnetodielectric layers with $\epsilon_r=\mu_r$ (if we think about layers of homogeneous materials), the general case of bi-anisotropic polarizabilities of individual particles corresponds to bi-anisotropic absorbing layers. In the past, chiral absorbing layers were studied in detail \cite{chiral87,chiral89,chiral89a,chiral92,chiral96,cloete,Koschny}, but only for metal-backed volumetric layers. The use of the omega coupling phenomenon for matched absorber layers was explored in \cite{basic, omega, reference2}, but also only for material layers on perfectly conducting surfaces. Recently, different kinds of absorbers have been proposed to absorb electromagnetic waves in the microwave or optical spectra \cite{Sajuyigbe1,Zhou1,Korolev1,Yuan1,Jiang1,Cui1,Shvets1}. As mentioned, most of these absorbers are backed with a metal sheet, which limits their functionality for waves coming from the other side. These absorbers contain more than one layer of particles and are designed to absorb the wave from one side, while they have some uncontrollable properties for the wave coming from the other side. Here, we answer the important questions of how one can realize single-layer absorbers that are perfect from one side of the sheet, and what functionalities can be realized for waves coming from the other side. 
Of course, this implies that there is no metal (PEC) ground plane as a part of the absorbing structure. Here, we study the possible use of single arrays of bi-anisotropic particles of all known classes: reciprocal chiral and omega and nonreciprocal Tellegen and ``moving" particles \cite{classes,basic}. In this paper we also consider the use of particles which have hybrid electromagnetic properties of several classes, e.g. ``moving" chiral and Tellegen omega particles. The electromagnetic coupling of the artificial omega-Tellegen particle was measured experimentally in \cite{mTellegen}. It was shown that nonreciprocal electromagnetic coupling really exists in the particle and that the electromagnetic coupling coefficient is comparable in magnitude with the electric and magnetic polarizabilities. The described implementation of the particle implies the presence of an external bias magnetic field (3570 Oe in \cite{mTellegen}). The ferrite inclusions were made of yttrium iron garnet. \section{Total absorption in arrays of general bi-anisotropic particles} \subsection{Effective polarizability dyadics of particles in periodic arrays} In this paper, we consider thin absorbers for normally incident plane waves and concentrate on uniaxial structures, isotropic in the plane of the layer. This property ensures that the absorber functions for arbitrarily polarized incident plane waves. The orientation of the absorbing sheet in space is defined by the unit vector $\_z_0$, orthogonal to its plane. The layer consists of an array of electrically small uniaxial particles. As discussed above, total absorption requires at least electric and magnetic dipole moments induced in the particles, and the requirement of ultimately small thickness means that higher-order multipoles are negligible. 
Thus, we assume that the particles are bi-anisotropic particles characterized by four dyadic polarizabilities: electric, magnetic, electromagnetic, and magnetoelectric, which relate the local electromagnetic fields to the induced electric and magnetic dipole moments as in (\ref{eq:e1}). The uniaxial symmetry allows only isotropic response and rotation around the axis $\_z_0$. Thus, all the polarizabilities in (\ref{eq:e1}) take the forms: \begin{equation} \begin{array}{c} \={\alpha}_{\rm ee}=\alpha_{\rm ee}^{\rm co}\overline{\overline{I}}_{\rm t}+\alpha_{\rm ee}^{\rm cr}\overline{\overline{J}}_{\rm t},\qquad \displaystyle \={\alpha}_{\rm mm}=\alpha_{\rm mm}^{\rm co}\overline{\overline{I}}_{\rm t}+\alpha_{\rm mm}^{\rm cr}\overline{\overline{J}}_{\rm t}\\\vspace*{.1cm}\displaystyle \={\alpha}_{\rm em}=\alpha_{\rm em}^{\rm co}\overline{\overline{I}}_{\rm t}+\alpha_{\rm em}^{\rm cr}\overline{\overline{J}}_{\rm t},\qquad \displaystyle \={\alpha}_{\rm me}=\alpha_{\rm me}^{\rm co}\overline{\overline{I}}_{\rm t}+\alpha_{\rm me}^{\rm cr}\overline{\overline{J}}_{\rm t}, \end{array}\label{eq:g1} \end{equation} where indices ${\rm co}$ and ${\rm cr}$ refer to the symmetric and antisymmetric parts of the corresponding dyadics, respectively. $\overline{\overline{I}}_{\rm t}=\overline{\overline{I}}-\mathbf{z}_0\mathbf{z}_0$ is the transverse unit dyadic and $\overline{\overline{J}}_{\rm t}=\mathbf{z}_0\times\overline{\overline{I}}_{\rm t}$ is the vector-product operator. The particles are arranged in a square lattice with a unit cell of size $a\times a$. The grid is excited by an arbitrarily polarized plane wave with electric and magnetic fields $\mathbf{E}_{\rm inc}$ and $\mathbf{H}_{\rm inc}$, respectively, which are uniform in the array plane (normal incidence). In this situation, the induced dipole moments are the same for all particles. We assume that the grid period $a$ is smaller than the wavelength, so that no grating lobes are generated. 
The local fields exciting the particles are the sums of the external incident field and the interaction field caused by the induced dipole moments in other particles: \begin{equation} \begin{array}{c} \mathbf{E}_{\rm loc}=\mathbf{E}_{\rm{inc}}+\overline{\overline{\beta}}_{\rm e}\cdot\mathbf{p} \vspace*{.2cm}\\\displaystyle \mathbf{H}_{\rm loc}=\mathbf{H}_{\rm inc}+\overline{\overline{\beta}}_{\rm m}\cdot\mathbf{m} , \end{array}\label{eq:h1} \end{equation} where $\overline{\overline{\beta}}_{\rm e}$ and $\overline{\overline{\beta}}_{\rm m}$ are the interaction constants. These dyadic coefficients are proportional to the two-dimensional unit dyadic $\overline{\overline{I}}_{\rm t}$. Explicit analytical expressions for the interaction constants can be found in \cite{basic}. Equations (\ref{eq:e1}) and (\ref{eq:h1}) can be re-written as relations between the induced dipole moments and the incident fields: \begin{equation} \left[ \begin{array}{c} \mathbf{p} \\ \mathbf{m}\end{array} \right] =\left[ \begin{array}{cc} \={\widehat{\alpha}}_{\rm ee} & \={\widehat{\alpha}}_{\rm em}\\\={\widehat{\alpha}}_{\rm me}& \={\widehat{\alpha}}_{\rm mm} \end{array} \right]\cdot \left[ \begin{array}{c} \mathbf{E}_{\rm inc} \\ \mathbf{H}_{\rm inc}\end{array} \right] , \label{eq:j1} \end{equation} where the effective polarizabilities (marked by hats) include the effects of particle interactions in the array. 
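As a sanity check on these effective parameters, one can solve the coupled relations (\ref{eq:e1}) and (\ref{eq:h1}) directly as a linear system and compare the result with the closed-form expression for $\={\widehat{\alpha}}_{\rm ee}$ quoted below. A sketch with arbitrary test values (our notation; the uniaxial dyadics $\alpha^{\rm co}\overline{\overline{I}}_{\rm t}+\alpha^{\rm cr}\overline{\overline{J}}_{\rm t}$ act as $2\times2$ matrices in the transverse plane):

```python
# Solve the coupled dipole equations p = a_ee.(E + be*p) + a_em.(bm*m),
# m = a_me.(E + be*p) + a_mm.(bm*m) for H_inc = 0, and compare with the
# closed-form effective polarizability hat_a_ee.  All values are test data.
import numpy as np

I2 = np.eye(2)
J2 = np.array([[0.0, -1.0], [1.0, 0.0]])      # J_t = z0 x I_t

def uni(co, cr):                               # uniaxial dyadic as 2x2
    return co * I2 + cr * J2

a_ee = uni(2.0 - 0.3j, 0.1j)
a_mm = uni(1.5 - 0.2j, -0.05j)
a_em = uni(0.4j, 0.2)
a_me = uni(-0.4j, 0.2)
be, bm = 0.3 + 0.05j, 0.2 + 0.04j              # scalar interaction constants

E = np.array([1.0, 0.0])                        # incident field, H_inc = 0

# direct solution of the 4x4 linear system for (p, m)
A = np.block([[I2 - be * a_ee, -bm * a_em],
              [-be * a_me, I2 - bm * a_mm]])
pm = np.linalg.solve(A, np.concatenate([a_ee @ E, a_me @ E]))
p_direct = pm[:2]

# closed-form effective polarizability (scalar interaction constants
# commute, so the dyadic formula reduces to plain matrix algebra)
M = np.linalg.inv(I2 - bm * a_mm)
hat_a_ee = np.linalg.inv(I2 - be * a_ee - be * bm * a_em @ M @ a_me) \
           @ (a_ee + bm * a_em @ M @ a_me)
p_closed = hat_a_ee @ E
```

The two solutions for the induced electric dipole moment agree to machine precision, confirming that the effective polarizabilities indeed absorb all array-interaction effects.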
Explicit formulas for the effective polarizabilities in terms of the individual polarizabilities and interaction constants are given in \cite{Teemu}: \begin{equation} \hspace*{-.2cm}\begin{array}{c} \={\widehat{\alpha}}_{\rm ee}\!=\!\left(\overline{\overline{I}}_{\rm t}\!-\!\={\alpha}_{\rm ee}\!\cdot\!\overline{\overline{\beta}}_{\rm e}\!-\!\={\alpha}_{\rm em}\!\cdot\!\overline{\overline{\beta}}_{\rm m}\!\cdot\!(\overline{\overline{I}}_{\rm t}\!-\!\={\alpha}_{\rm mm}\!\cdot\!\overline{\overline{\beta}}_{\rm m})^{-1}\!\cdot\!\={\alpha}_{\rm me}\!\cdot\!\overline{\overline{\beta}}_{\rm e}\right)^{-1} \vspace*{.2cm}\\\displaystyle .\left(\={\alpha}_{\rm ee}+\={\alpha}_{\rm em}\cdot\overline{\overline{\beta}}_{\rm m}\cdot(\overline{\overline{I}}_{\rm t}-\={\alpha}_{\rm mm}\cdot\overline{\overline{\beta}}_{\rm m})^{-1}\cdot\={\alpha}_{\rm me}\right) \vspace*{.4cm}\\\displaystyle \={\widehat{\alpha}}_{\rm em}\!=\!\left(\overline{\overline{I}}_{\rm t}\!-\!\={\alpha}_{\rm ee}\!\cdot\!\overline{\overline{\beta}}_{\rm e}\!-\!\={\alpha}_{\rm em}\!\cdot\!\overline{\overline{\beta}}_{\rm m}\!\cdot\!(\overline{\overline{I}}_{\rm t}\!-\!\={\alpha}_{\rm mm}\!\cdot\!\overline{\overline{\beta}}_{\rm m})^{-1}\!\cdot\!\={\alpha}_{\rm me}\!\cdot\!\overline{\overline{\beta}}_{\rm e}\right)^{-1} \vspace*{.2cm}\\\displaystyle .\left(\={\alpha}_{\rm em}+\={\alpha}_{\rm em}\cdot\overline{\overline{\beta}}_{\rm m}\cdot(\overline{\overline{I}}_{\rm t}-\={\alpha}_{\rm mm}\cdot\overline{\overline{\beta}}_{\rm m})^{-1}\cdot\={\alpha}_{\rm mm}\right) \vspace*{.4cm}\\\displaystyle \={\widehat{\alpha}}_{\rm me}\!=\!\left(\overline{\overline{I}}_{\rm t}\!-\!\={\alpha}_{\rm me}\!\cdot\!\overline{\overline{\beta}}_{\rm e}\!\cdot\!(\overline{\overline{I}}_{\rm t}\!-\!\={\alpha}_{\rm ee}\!\cdot\!\overline{\overline{\beta}}_{\rm e})^{-1}\!\cdot\!\={\alpha}_{\rm em}\!\cdot\!\overline{\overline{\beta}}_{\rm m}\!-\!\={\alpha}_{\rm mm}\!\cdot\!\overline{\overline{\beta}}_{\rm m}\right)^{-1} 
\vspace*{.2cm}\\\displaystyle .\left(\={\alpha}_{\rm me}+\={\alpha}_{\rm me}\cdot\overline{\overline{\beta}}_{\rm e}\cdot(\overline{\overline{I}}_{\rm t}-\={\alpha}_{\rm ee}\cdot\overline{\overline{\beta}}_{\rm e})^{-1}\cdot\={\alpha}_{\rm ee}\right) \vspace*{.4cm}\\\displaystyle \={\widehat{\alpha}}_{\rm mm}\!=\!\left(\overline{\overline{I}}_{\rm t}\!-\!\={\alpha}_{\rm me}\!\cdot\!\overline{\overline{\beta}}_{\rm e}\!\cdot\!(\overline{\overline{I}}_{\rm t}\!-\!\={\alpha}_{\rm ee}\!\cdot\!\overline{\overline{\beta}}_{\rm e})^{-1}\!\cdot\!\={\alpha}_{\rm em}\!\cdot\!\overline{\overline{\beta}}_{\rm m}\!-\!\={\alpha}_{\rm mm}\!\cdot\!\overline{\overline{\beta}}_{\rm m}\right)^{-1} \vspace*{.2cm}\\\displaystyle .\left(\={\alpha}_{\rm mm}+\={\alpha}_{\rm me}\cdot\overline{\overline{\beta}}_{\rm e}\cdot(\overline{\overline{I}}_{\rm t}-\={\alpha}_{\rm ee}\cdot\overline{\overline{\beta}}_{\rm e})^{-1}\cdot\={\alpha}_{\rm em}\right). \end{array}\label{eq:l1} \end{equation} Because the interaction constants are diagonal dyadics, the symmetry properties of the effective polarizabilities are the same as for the individual particle polarizabilities (as defined in (\ref{eq:g1})): \begin{equation} \begin{array}{c} \={\widehat{\alpha}}_{\rm ee}=\widehat{\alpha}_{\rm ee}^{\rm co}\overline{\overline{I}}_{\rm t}+\widehat{\alpha}_{\rm ee}^{\rm cr}\overline{\overline{J}}_{\rm t},\qquad \displaystyle \={\widehat{\alpha}}_{\rm mm}=\widehat{\alpha}_{\rm mm}^{\rm co}\overline{\overline{I}}_{\rm t}+\widehat{\alpha}_{\rm mm}^{\rm cr}\overline{\overline{J}}_{\rm t}\\\vspace*{.1cm}\displaystyle \={\widehat{\alpha}}_{\rm em}=\widehat{\alpha}_{\rm em}^{\rm co}\overline{\overline{I}}_{\rm t}+\widehat{\alpha}_{\rm em}^{\rm cr}\overline{\overline{J}}_{\rm t},\qquad\displaystyle \={\widehat{\alpha}}_{\rm me}=\widehat{\alpha}_{\rm me}^{\rm co}\overline{\overline{I}}_{\rm t}+\widehat{\alpha}_{\rm me}^{\rm cr}\overline{\overline{J}}_{\rm t}\displaystyle. 
\end{array}\label{eq:k1} \end{equation} This can be checked by substituting (\ref{eq:g1}) in (\ref{eq:l1}). \subsection{Reflection and transmission coefficients} In the theory of absorbing sheets, we will distinguish between illuminations of the sheet from its two opposite sides, along $-\mathbf{z}_0$ and $\mathbf{z}_0$. In the rest of the paper, we will use double signs for these two cases, where the top and bottom signs correspond to the incident plane wave propagating in $-\mathbf{z}_0$ and $\mathbf{z}_0$ directions, respectively. In the incident plane wave, the electric and magnetic fields satisfy \begin{equation} \mathbf{H}_{\rm inc}=\mp\frac{1}{\eta_0}\overline{\overline{J}}_{\rm t}\cdot\mathbf{E}_{\rm inc}. \label{eq:m1}\end{equation} Thus, the dipole moments in (\ref{eq:j1}) can be written as \begin{equation} \displaystyle \left[ \displaystyle\begin{array}{c} \mathbf{p} \\ \mathbf{m}\end{array} \right] =\left[\displaystyle \begin{array}{c}\displaystyle \={\widehat{\alpha}}_{\rm ee}\mp\frac{1}{\eta_0}\={\widehat{\alpha}}_{\rm em}\cdot(\mathbf{z}_0\times\overline{\overline{I}}_{\rm t})\vspace{.1cm}\vspace*{.2cm}\\\displaystyle \={\widehat{\alpha}}_{\rm me}\mp\frac{1}{\eta_0}\={\widehat{\alpha}}_{\rm mm}\cdot(\mathbf{z}_0\times\overline{\overline{I}}_{\rm t}) \end{array}\right]\cdot \begin{array}{c} \mathbf{E}_{\rm inc} \end{array}. \label{eq:n1}\end{equation} Secondary plane waves (reflected and transmitted) are generated by surface-averaged current densities \begin{equation} \displaystyle \mathbf{J}_{\rm e}=\frac{j\omega}{S}\mathbf{p}, \qquad \mathbf{J}_{\rm m}=\frac{j\omega}{S}\mathbf{m}. 
\label{eq:o1} \end{equation} Radiation from infinite sheets of electric and magnetic currents can be easily solved \cite{Teemu} from the Maxwell equations: \begin{equation} \begin{array}{l} \displaystyle \mathbf{E}_{\rm r}=-\frac{j\omega}{2S}\left\{\left[\eta_0\widehat{\alpha}_{\rm ee}^{\rm co}\pm \widehat{\alpha}_{\rm em}^{\rm cr}\pm \widehat{\alpha}_{\rm me}^{\rm cr}-\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm co}\right]\overline{\overline{I}}_{\rm t}\right.\vspace*{.2cm}\\\displaystyle \hspace*{1.6cm}\left.+\left[\eta_0\widehat{\alpha}_{\rm ee}^{\rm cr}\mp \widehat{\alpha}_{\rm em}^{\rm co}\mp \widehat{\alpha}_{\rm me}^{\rm co}-\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm cr}\right]\overline{\overline{J}}_{\rm t}\right\}\cdot\mathbf{E}_{\rm inc} \end{array}\label{eq:q1} \end{equation} \begin{equation} \begin{array}{l} \displaystyle \mathbf{E}_{\rm t}=\left\{\left[1-\frac{j\omega}{2S}\left(\eta_0\widehat{\alpha}_{\rm ee}^{\rm co}\pm\widehat{\alpha}_{\rm em}^{\rm cr}\mp\widehat{\alpha}_{\rm me}^{\rm cr} +\frac{1}{\eta_0}\widehat{\alpha}_{\rm mm}^{\rm co}\right)\right]\overline{\overline{I}}_{\rm t}\right.\vspace*{.2cm}\\\displaystyle \hspace*{.3cm}\left. -\frac{j\omega}{2S} \left[\eta_0\widehat{\alpha}_{\rm ee}^{\rm cr}\mp\widehat{\alpha}_{\rm em}^{\rm co} \pm\widehat{\alpha}_{\rm me}^{\rm co}+\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm cr}\right] \overline{\overline{J}}_{\rm t}\right\}\cdot\mathbf{E}_{\rm inc}. \end{array}\label{eq:s1} \end{equation} Using these general expressions for the reflected and transmitted fields from general bi-anisotropic planar arrays, we are ready to study how we can make these fields equal to zero, as required for perfect absorbers. \subsection{General conditions for total absorption} \subsubsection{Total absorption from both sides of the sheet} The definition of a perfect absorber implies that \begin{equation} \begin{array}{c} \hspace*{.5cm}\mathbf{E}_{\rm r}=0,\hspace*{.5cm}\mathbf{E}_{\rm t}=0 \end{array}\label{eq:t1}. 
\end{equation} Equating to zero the expressions in square brackets in (\ref{eq:q1}) and (\ref{eq:s1}), we arrive at sufficient conditions for total absorption of arbitrarily polarized incident plane waves: \begin{equation} \begin{array}{c} \displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm co}\pm \widehat{\alpha}_{\rm em}^{\rm cr}\pm\widehat{\alpha}_{\rm me}^{\rm cr}-\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm co}=0 \vspace*{.2cm}\\\displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm cr}\mp \widehat{\alpha}_{\rm em}^{\rm co}\mp \widehat{\alpha}_{\rm me}^{\rm co}-\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm cr}=0 \vspace*{.2cm}\\\displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm co}\pm\widehat{\alpha}_{\rm em}^{\rm cr}\mp\widehat{\alpha}_{\rm me}^{\rm cr}+\frac{1}{\eta_0}\widehat{\alpha}_{\rm mm}^{\rm co}=\frac{2S}{j\omega} \vspace*{.2cm}\\\displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm cr}\mp\widehat{\alpha}_{\rm em}^{\rm co}\pm\widehat{\alpha}_{\rm me}^{\rm co}+\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm cr}=0. \end{array}\label{eq:u1} \end{equation} Because in the expressions for the reflected and transmitted fields (\ref{eq:q1}) and (\ref{eq:s1}) the terms proportional to $\overline{\overline{I}}_{\rm t}$ and $\overline{\overline{J}}_{\rm t}$ are orthogonal, these conditions are also the necessary conditions for total absorption. The exception to this statement is the case of circularly polarized incident waves, for which these conditions are sufficient but not necessary, opening up further design possibilities when only circularly polarized waves need to be totally absorbed.
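As a numerical cross-check, conditions (\ref{eq:u1}) can be substituted back into the plane-wave coefficients of (\ref{eq:q1}) and (\ref{eq:s1}). The following minimal Python sketch does this for arbitrarily chosen (hypothetical) magnetoelectric coefficients; the operating frequency and unit-cell area are illustrative assumptions, not values taken from the text:

```python
import math

eta0 = 376.730            # free-space wave impedance (ohm)
omega = 2 * math.pi * 10e9  # assumed angular frequency (10 GHz)
S = (5e-3) ** 2             # assumed unit-cell area (5 mm period)

def sheet_coeffs(aee_co, aee_cr, amm_co, amm_cr,
                 aem_co, aem_cr, ame_co, ame_cr, s):
    """Co- and cross-polarized reflection/transmission coefficients of the
    array: the bracketed factors of Eqs. (q1) and (s1); s = +1 selects the
    upper signs (incidence along -z0), s = -1 the lower signs."""
    k = 1j * omega / (2 * S)
    r_co = -k * (eta0 * aee_co + s * aem_cr + s * ame_cr - amm_co / eta0)
    r_cr = -k * (eta0 * aee_cr - s * aem_co - s * ame_co - amm_cr / eta0)
    t_co = 1 - k * (eta0 * aee_co + s * aem_cr - s * ame_cr + amm_co / eta0)
    t_cr = -k * (eta0 * aee_cr - s * aem_co + s * ame_co + amm_cr / eta0)
    return r_co, r_cr, t_co, t_cr

s = +1
# Arbitrary (hypothetical) magnetoelectric coefficients ...
aem_co, aem_cr = 0.1 * S / (1j * omega), 0.2 * S / (1j * omega)
ame_co, ame_cr = 0.3 * S / (1j * omega), -0.1 * S / (1j * omega)
# ... and the remaining polarizabilities solved from the four conditions (u1):
aee_co = (S / (1j * omega) - s * aem_cr) / eta0
amm_co = eta0 * (S / (1j * omega) + s * ame_cr)
aee_cr = s * aem_co / eta0
amm_cr = -s * eta0 * ame_co

coeffs = sheet_coeffs(aee_co, aee_cr, amm_co, amm_cr,
                      aem_co, aem_cr, ame_co, ame_cr, s)
assert max(abs(c) for c in coeffs) < 1e-9   # total absorption: r = t = 0
```

Any choice of coupling coefficients, once balanced according to (\ref{eq:u1}), yields vanishing reflected and transmitted fields for the chosen incidence direction.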
For circularly polarized incidence, the total absorption conditions read \begin{equation} \begin{array}{l} \displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm co}\pm \widehat{\alpha}_{\rm em}^{\rm cr}\pm\widehat{\alpha}_{\rm me}^{\rm cr}-\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm co} \vspace*{.2cm}\\\displaystyle \hspace*{1.6cm}=(\pm j) \left[ \eta_0\widehat{\alpha}_{\rm ee}^{\rm cr}\mp \widehat{\alpha}_{\rm em}^{\rm co}\mp \widehat{\alpha}_{\rm me}^{\rm co}-\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm cr}\right] \vspace*{.4cm}\\\displaystyle 1-\frac{j\omega}{2S}\left(\eta_0\widehat{\alpha}_{\rm ee}^{\rm co}\pm\widehat{\alpha}_{\rm em}^{\rm cr}\mp\widehat{\alpha}_{\rm me}^{\rm cr} +\frac{1}{\eta_0}\widehat{\alpha}_{\rm mm}^{\rm co}\right) \vspace*{.2cm}\\\displaystyle \hspace*{1.4cm}= (\mp j) \frac{j\omega}{2S} \left[\eta_0\widehat{\alpha}_{\rm ee}^{\rm cr}\mp\widehat{\alpha}_{\rm em}^{\rm co} \pm\widehat{\alpha}_{\rm me}^{\rm co}+\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm cr}\right] . \end{array}\label{eq:u1CP} \end{equation} Here the $\pm j$ coefficients correspond to the two orthogonal polarizations of the incident circularly polarized fields. In the following, we will use the general sufficient conditions (\ref{eq:u1}), which ensure total absorption for an arbitrary polarization of the incident waves. As one can see, these conditions connect the symmetric and antisymmetric parts of the electric and magnetic polarizabilities to the antisymmetric and symmetric parts of the cross-coupling polarizabilities, respectively. This is a very important point, because for reciprocal particles (for example, arbitrarily shaped metal or dielectric particles) the antisymmetric parts of the electric and magnetic polarizabilities are zero (e.g., \cite{basic}), which limits the symmetric components of the electromagnetic coupling dyadics.
Furthermore, we see from (\ref{eq:u1}) that for zero reflection it is not necessary to have absorption inside the particles, while it is necessary for zero transmission (note the imaginary quantity on the right-hand side of the third equation). Let us first analyze layers which exhibit the total absorption property from both sides of the sheet. In this case, conditions (\ref{eq:u1}) should hold for both choices of the $\pm$ signs, and we find that all the magnetoelectric coefficients must vanish: \begin{equation} \widehat{\alpha}_{\rm em}^{\rm cr}=\widehat{\alpha}_{\rm me}^{\rm cr}=\widehat{\alpha}_{\rm em}^{\rm co}= \widehat{\alpha}_{\rm me}^{\rm co}=0.\end{equation} Thus, we conclude that the only possible realization of total absorbers in the form of a single layer of particles is the use of electrically and magnetically polarizable uniaxial particles with the polarizabilities balanced as in a Huygens' pair: \begin{equation} \widehat{\alpha}_{\rm ee}^{\rm co}={S\over j\omega \eta_0}, \qquad \widehat{\alpha}_{\rm mm}^{\rm co}=\eta_0^2 \widehat{\alpha}_{\rm ee}^{\rm co}\l{both_sides}\end{equation} (and all the other polarizability components equal zero). The effective polarizabilities, which include the effect of particle interactions in the array, should thus be purely imaginary, corresponding to a resonance where the particles show purely absorptive properties. The relations between the collective polarizabilities and the polarizabilities of the same particles in free space (\ref{eq:l1}) in this special case simplify to \begin{equation} {1\over \widehat{\alpha}_{\rm ee}^{\rm co}}={1\over \alpha_{\rm ee}^{\rm co}}-\beta_{\rm e},\qquad {1\over \widehat{\alpha}_{\rm mm}^{\rm co}}={1\over \alpha_{\rm mm}^{\rm co}}-\beta_{\rm m} .
\end{equation} Using the known expression for the interaction constants in regular dipolar arrays \cite[eq.~(4.89)]{modeboo}, we can find the required particle polarizabilities in free space: \begin{equation} {1\over \alpha_{\rm ee}^{\rm co}}={\rm Re}(\beta_{\rm e})+j{k^3\over 6\pi\epsilon_0}+j{\omega \eta_0 \over 2 S},\end{equation} \begin{equation} {1\over \alpha_{\rm mm}^{\rm co}}={\rm Re}(\beta_{\rm m})+j{k^3\over 6\pi\mu_0}+j{\omega \over 2S\eta_0}.\end{equation} We again see that the reactive response of the individual particles should be such that, together with the reactive part of the interaction field, a resonance condition is satisfied. We can also check that the amplitudes of the secondary plane waves created by the two dipolar arrays of the perfect absorber are equal to one half of the incident field amplitude: \begin{equation} E_{\rm sc}=-{\eta_0\over 2}{j\omega p\over S}=-{\eta_0\over 2}{j\omega \over S}\widehat{\alpha}_{\rm ee}^{\rm co} E_{\rm inc}=-{1\over 2}E_{\rm inc}\end{equation} (we have substituted $\widehat{\alpha}_{\rm ee}^{\rm co}$ from \r{both_sides}). The field created by the magnetic-dipole array has the same amplitude. In the forward direction, the sum of these two plane waves compensates the incident field, while in the reflection direction the two waves are out of phase and their sum is zero. \subsubsection{Total absorption from one side of the sheet} Next, we consider single-layer sheets which work as total absorbers only from one side and study what functionalities can be engineered for illumination from the opposite side. From (\ref{eq:u1}), we know that the presence of cross-coupling polarizabilities (as well as the anti-symmetric parts of the electric and magnetic polarizabilities) causes asymmetry in the interaction of the sheet with incident waves coming from opposite directions. Let us assume that we satisfy (\ref{eq:u1}) for waves incident from one of the two sides.
This corresponds to conditions \begin{equation} \begin{array}{c} \displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm co}-\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm co}=\mp ( \widehat{\alpha}_{\rm em}^{\rm cr}+\widehat{\alpha}_{\rm me}^{\rm cr}) \vspace*{.2cm}\\\displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm cr}-\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm cr}=\pm (\widehat{\alpha}_{\rm em}^{\rm co}+ \widehat{\alpha}_{\rm me}^{\rm co}) \vspace*{.2cm}\\\displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm co}+\frac{1}{\eta_0}\widehat{\alpha}_{\rm mm}^{\rm co}= \frac{2S}{j\omega}\mp (\widehat{\alpha}_{\rm em}^{\rm cr}-\widehat{\alpha}_{\rm me}^{\rm cr}) \vspace*{.2cm}\\\displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm cr}+\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm cr}=\pm (\widehat{\alpha}_{\rm em}^{\rm co}-\widehat{\alpha}_{\rm me}^{\rm co}). \end{array}\label{eq:teem5} \end{equation} Here, as above, the upper sign corresponds to incident plane waves propagating along $-\mathbf{z}_0$ and the lower sign to oppositely directed waves.
Using the same conditions for total absorption for the opposite incidence direction (taking the lower signs in (\ref{eq:u1}), (\ref{eq:q1}), and (\ref{eq:s1})), we find the reflected and transmitted electric fields for the same sheet when the incidence is from the other side: \begin{equation} \begin{array}{l} \displaystyle \mathbf{E}_{\rm r}=\frac{j\omega}{S}\left\{\pm \left[ \widehat{\alpha}_{\rm em}^{\rm cr}+\widehat{\alpha}_{\rm me}^{\rm cr}\right]\overline{\overline{I}}_{\rm t}\mp\left[ \widehat{\alpha}_{\rm em}^{\rm co}+ \widehat{\alpha}_{\rm me}^{\rm co}\right]\overline{\overline{J}}_{\rm t}\right\}\cdot\mathbf{E}_{\rm inc} \end{array} \label{eq:teem6} \end{equation} \begin{equation} \begin{array}{l} \displaystyle \mathbf{E}_{\rm t}=\frac{j\omega}{S}\left\{\pm \left[ \widehat{\alpha}_{\rm em}^{\rm cr}-\widehat{\alpha}_{\rm me}^{\rm cr}\right]\overline{\overline{I}}_{\rm t}\mp\left[ \widehat{\alpha}_{\rm em}^{\rm co}- \widehat{\alpha}_{\rm me}^{\rm co}\right]\overline{\overline{J}}_{\rm t}\right\}\cdot\mathbf{E}_{\rm inc}. \end{array}\label{eq:teem7} \end{equation} These equations show that, by tuning the layer to act as a perfect absorber from one side, it is possible to realize some special properties (in reflection and transmission) from the other side. To study these properties, we begin with the case of reciprocal structures. In this case, the electric and magnetic polarizabilities are symmetric dyadics ($\widehat{\alpha}_{\rm ee}^{\rm cr}=0$, $\widehat{\alpha}_{\rm mm}^{\rm cr}=0$) and the field coupling coefficients satisfy \begin{equation} \widehat{\alpha}_{\rm em}^{\rm co}=-\widehat{\alpha}_{\rm me}^{\rm co},\qquad \widehat{\alpha}_{\rm em}^{\rm cr}=\widehat{\alpha}_{\rm me}^{\rm cr},\end{equation} corresponding to chiral and omega couplings \cite{basic}. Due to the reciprocity, the transmission coefficient is zero for waves incident from both sides.
The second equation in (\ref{eq:teem5}) is satisfied identically, and from the last one we see that the chirality parameter must vanish: $\widehat{\alpha}_{\rm me}^{\rm co}=0$. Thus, if the sheet is tuned to work as a perfect absorber from one side, the chirality parameter is zero, and there is no possibility to tune the reflection properties from the opposite side by introducing chirality. On the other hand, the omega coupling coefficient $\widehat{\alpha}_{\rm me}^{\rm cr}$ is not fixed by the total absorption condition on one side, because from the first and third equations in (\ref{eq:teem5}) we find \begin{equation} \widehat{\alpha}_{\rm ee}^{\rm co}={S\over j\omega \eta_0}\mp {1\over \eta_0}\widehat{\alpha}_{\rm me}^{\rm cr} \l{om1}\end{equation} \begin{equation} \widehat{\alpha}_{\rm mm}^{\rm co}={\eta_0 S\over j\omega }\pm \eta_0\widehat{\alpha}_{\rm me}^{\rm cr}.\l{om2} \end{equation} Comparing with \r{both_sides}, we see that by introducing omega coupling we can maintain the property of total absorption from one of the sides with relaxed requirements on the electric and magnetic polarizabilities. For instance, we can engineer the omega coupling parameter $\widehat{\alpha}_{\rm me}^{\rm cr}$ so that the required magnetic polarizability is much smaller than that dictated by \r{both_sides}. The reflection coefficient from the side opposite to the matched one is found from (\ref{eq:teem6}): \begin{equation} \mathbf{E}_{\rm r}=\pm \frac{2j\omega}{S} \widehat{\alpha}_{\rm em}^{\rm cr}\overline{\overline{I}}_{\rm t}\cdot\mathbf{E}_{\rm inc}. \l{omega_rotation}\end{equation} Thus, by varying the omega coupling parameter, we can control the co-polarized reflection from the opposite side of the sheet, maintaining the matching and total absorption properties from one side. More functionalities become available if we allow a nonreciprocal response of the particles.
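The one-sided omega absorber described above can be checked numerically: choosing an arbitrary (hypothetical) omega coupling amplitude and setting the electric and magnetic polarizabilities according to \r{om1} and \r{om2}, the sheet absorbs totally from the matched side, while the backside reflection follows \r{omega_rotation}. A minimal Python sketch, with assumed illustrative frequency and cell-size values:

```python
import math

eta0 = 376.730              # free-space wave impedance (ohm)
omega = 2 * math.pi * 10e9  # assumed angular frequency
S = (5e-3) ** 2             # assumed unit-cell area

# Assumed omega coupling amplitude: a_em^cr = a_me^cr = beta_om (reciprocal)
beta_om = 0.3 * S / (1j * omega)

# One-sided matching per Eqs. (om1)-(om2), upper signs:
aee = (S / (1j * omega) - beta_om) / eta0
amm = eta0 * (S / (1j * omega) + beta_om)

k = 1j * omega / (2 * S)
def r_t(s):
    # Co-polarized reflection/transmission of the omega array (Eqs. (q1), (s1))
    r = -k * (eta0 * aee + 2 * s * beta_om - amm / eta0)
    t = 1 - k * (eta0 * aee + amm / eta0)   # omega terms cancel in transmission
    return r, t

r_front, t_front = r_t(+1)   # matched side: total absorption
r_back, t_back = r_t(-1)     # opposite side
assert abs(r_front) < 1e-9 and abs(t_front) < 1e-9
assert abs(t_back) < 1e-9                              # reciprocity: t = 0 both ways
assert abs(r_back - 2j * omega / S * beta_om) < 1e-9   # Eq. (omega_rotation)
```

Varying `beta_om` changes only the co-polarized backside reflection, while the matched side remains perfectly absorbing.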
For simplicity, let us concentrate here on the cases where the magnetoelectric coupling is only due to nonreciprocity, assuming that the chirality and omega coupling coefficients are zero (the effects of chirality and omega coupling have been considered above). In these cases, the coupling coefficients satisfy \begin{equation} \widehat{\alpha}_{\rm em}^{\rm co}=\widehat{\alpha}_{\rm me}^{\rm co},\qquad \widehat{\alpha}_{\rm em}^{\rm cr}=-\widehat{\alpha}_{\rm me}^{\rm cr},\end{equation} corresponding to Tellegen and ``moving'' particles, respectively \cite{classes,basic}. From the first equation in (\ref{eq:teem5}), we find that $\eta_0\widehat{\alpha}_{\rm ee}^{\rm co}=\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm co}$ (Huygens' relation). From this and the third relation we get \begin{equation} \widehat{\alpha}_{\rm ee}^{\rm co}= {S\over j\omega \eta_0}\mp {1\over \eta_0}\widehat{\alpha}_{\rm em}^{\rm cr}.\l{mov_co_cr}\end{equation} The second and the last relations in (\ref{eq:teem5}) connect the anti-symmetric parts of the electric and magnetic polarizabilities with the Tellegen parameter: \begin{equation} \widehat{\alpha}_{\rm ee}^{\rm cr}=\pm {1\over \eta_0} \widehat{\alpha}_{\rm em}^{\rm co},\qquad \widehat{\alpha}_{\rm mm}^{\rm cr}=\mp \eta_0 \widehat{\alpha}_{\rm em}^{\rm co}.\end{equation} Thus, if the Tellegen coupling is present, its effects should be balanced with the nonreciprocity in both electric and magnetic polarizabilities. Tellegen coupling allows control of the reflection coefficient from the opposite side, since \begin{equation} \mathbf{E}_{\rm r}=\mp \frac{2j\omega}{S} \widehat{\alpha}_{\rm em}^{\rm co}\overline{\overline{J}}_{\rm t}\cdot\mathbf{E}_{\rm inc}.\end{equation} We see that the Tellegen sheet can be designed to work as a perfect absorber from one side and a twist polarizer in reflection from the other side. 
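This twist-polarizer behavior can be reproduced numerically with the balance relations above. In the sketch below, the Tellegen parameter and the operating frequency and cell size are assumed illustrative values:

```python
import math

eta0 = 376.730              # free-space wave impedance (ohm)
omega = 2 * math.pi * 10e9  # assumed angular frequency
S = (5e-3) ** 2             # assumed unit-cell area

tau = 0.25 * S / (1j * omega)   # assumed Tellegen parameter: a_em^co = a_me^co

# Balance required for total absorption from the side of the upper signs:
aee_co = S / (1j * omega * eta0)   # Huygens relation for the symmetric parts
amm_co = eta0 ** 2 * aee_co
aee_cr = tau / eta0                # antisymmetric parts tied to tau
amm_cr = -eta0 * tau

k = 1j * omega / (2 * S)
def coeffs(s):
    # Co/cross reflection and transmission (Eqs. (q1), (s1)), Tellegen coupling
    r_co = -k * (eta0 * aee_co - amm_co / eta0)
    r_cr = -k * (eta0 * aee_cr - 2 * s * tau - amm_cr / eta0)
    t_co = 1 - k * (eta0 * aee_co + amm_co / eta0)
    t_cr = -k * (eta0 * aee_cr + amm_cr / eta0)
    return r_co, r_cr, t_co, t_cr

assert max(abs(c) for c in coeffs(+1)) < 1e-9   # matched side: all absorbed
r_co, r_cr, t_co, t_cr = coeffs(-1)             # opposite side
assert abs(r_co) < 1e-9 and abs(t_co) < 1e-9 and abs(t_cr) < 1e-9
assert abs(r_cr + 2j * omega / S * tau) < 1e-9  # purely cross-polarized reflection
```

From the unmatched side, only the cross-polarized reflection survives: the sheet acts as a twist polarizer in reflection, with its magnitude set by the Tellegen parameter.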
Finally, the antisymmetric part of the nonreciprocal coupling coefficient makes it possible to control the transmission coefficient from the opposite side (for nonreciprocal sheets, the transmission coefficient is no longer necessarily symmetric): \begin{equation} \mathbf{E}_{\rm t}=\pm \frac{2j\omega}{S} \widehat{\alpha}_{\rm em}^{\rm cr}\overline{\overline{I}}_{\rm t}\cdot\mathbf{E}_{\rm inc}.\end{equation} This transmission coefficient equals unity if $\widehat{\alpha}_{\rm em}^{\rm cr}=\pm S/(2j\omega)$, in which case equation \r{mov_co_cr} shows that all the polarizabilities are in balance: \begin{equation} \eta_0\widehat{\alpha}_{\rm ee}^{\rm co}=\pm\widehat{\alpha}_{\rm em}^{\rm cr}=\frac{1}{\eta_0}\widehat{\alpha}_{\rm mm}^{\rm co}=\frac{S}{j2\omega}. \end{equation} Using (\ref{eq:l1}), it is easy to show that the polarizabilities of each individual particle should also be in balance and equal to \begin{equation} \displaystyle\eta_0\alpha_{\rm ee}^{\rm co}=\pm\alpha_{\rm em}^{\rm cr}=\frac{1}{\eta_0}\alpha_{\rm mm}^{\rm co}={\eta_0\over 2}\displaystyle\frac{1}{\displaystyle \frac{j\omega\eta_0}{S}+\beta_{\rm e}}. \end{equation} We see that the required electric and magnetic effective polarizabilities are one half of those in the simple case of isotropic dipole particles \r{both_sides}. However, the resulting amplitudes of the induced dipole moments and the amplitudes of the secondary plane waves are the same, because both moments are generated by both incident fields. If this nonreciprocal array is excited from the absorbing side, these secondary plane waves cancel the incident wave behind the sheet, and they cancel each other in the reflection direction, just as for the simple isotropic array. But for excitation from the opposite side, the induced dipole moments are zero, because the contributions due to the applied electric and magnetic fields cancel out, and the sheet is transparent.
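The isolator regime described above can be verified numerically. The following sketch uses assumed illustrative frequency and cell-size values and evaluates the co-polarized reflection and transmission of the fully balanced "moving" sheet for both incidence directions:

```python
import math

eta0 = 376.730              # free-space wave impedance (ohm)
omega = 2 * math.pi * 10e9  # assumed angular frequency
S = (5e-3) ** 2             # assumed unit-cell area

# Fully balanced "moving" sheet: eta0*aee = aem_cr = amm/eta0 = S/(2j*omega)
gamma = S / (2j * omega)    # aem_cr; ame_cr = -gamma (nonreciprocal coupling)
aee = gamma / eta0
amm = gamma * eta0

k = 1j * omega / (2 * S)
for s, t_expected in ((+1, 0.0), (-1, 1.0)):
    # ame_cr = -gamma, so the coupling terms drop out of the reflection ...
    r = -k * (eta0 * aee + s * gamma - s * gamma - amm / eta0)
    # ... but add up in the transmission (Eq. (s1))
    t = 1 - k * (eta0 * aee + s * gamma + s * gamma + amm / eta0)
    assert abs(r) < 1e-9                 # no reflection from either side
    assert abs(t - t_expected) < 1e-9    # absorber one way, transparent the other
```

The sheet absorbs totally for one incidence direction and is fully transparent for the other, with zero reflection in both cases: the single-layer isolator.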
We can conclude that this interesting structure realizes an ultimately thin isolator (a single layer of dipole particles): from one side it acts as a total absorber, while from the other side the sheet is transparent. Moreover, this appears to be the only possible configuration having this property. \section{Uniaxial bi-anisotropic particles as components of totally absorbing arrays} Next we will discuss some possible designs of bi-anisotropic particles with the properties required for single-layer perfect absorbers. From the reciprocal classes, the most interesting and practically useful property is the omega coupling, since this effect gives flexibility in the requirements on the electric and magnetic polarizabilities and allows control over the reflection coefficient from the back side of the absorbing sheet (see \r{om1}--\r{omega_rotation}). \subsection{Wire omega particles} The classical topology of bi-anisotropic particles with omega coupling is an $\Omega$-shaped particle \cite{Saad,proposed,basic}. For a single uniaxial omega particle made of a conducting wire (see the picture in Table~\ref{ta:load_values}), in the approximation of electrically small particles, the polarizabilities are such that \cite{basic} \begin{equation} \displaystyle \alpha_{\rm ee}^{\rm co}\alpha_{\rm mm}^{\rm co}=-\alpha_{\rm em}^{\rm cr}\alpha_{\rm me}^{\rm cr}=-(\alpha_{\rm em}^{\rm cr})^2.
\label{eq:om4} \end{equation} \begin{table*}[!t] \centering \caption{Conditions for perfect absorption} \begin{tabular}{|p{50mm}|p{50mm}|p{50mm}|} \hline \rowcolor[gray]{.9} \multicolumn{3}{|c|}{{\bf Condition for total absorption}} \\ \hline \rowcolor[gray]{.9} Wire Omega & Omega---Tellegen & Chiral---Moving \\ \hline \vspace{0.5mm} \includegraphics[width=0.3\textwidth]{omega} \hspace*{1.4cm} $ \displaystyle \widehat{\alpha}_{\rm ee}^{\rm co}=-\frac{1}{\eta_0^2}\widehat{\alpha}_{\rm mm}^{\rm co}$ & \vspace{0.5mm} \includegraphics[width=0.3\textwidth]{tellegen} \hspace*{.1cm} $ \begin{array}{c} \displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm co}\pm 2\widehat{\alpha}_{\rm em}^{\rm cr}-\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm co}=0 \vspace*{.2cm}\\\displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm cr}\mp 2\widehat{\alpha}_{\rm em}^{\rm co}-\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm cr}=0 \vspace*{.2cm}\\\displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm co}+\frac{1}{\eta_0}\widehat{\alpha}_{\rm mm}^{\rm co}=\frac{2S}{j\omega} \vspace*{.2cm}\\\displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm cr}+\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm cr}=0 \end{array}$ & \vspace{0.5mm} \includegraphics[width=0.3\textwidth]{moving} $ \begin{array}{c} \displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm co}-\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm co}=0 \vspace*{.2cm}\\\displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm cr}-\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm cr}=0 \vspace*{.2cm}\\\displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm co}\pm2\widehat{\alpha}_{\rm em}^{\rm cr}+\frac{1}{\eta_0}\widehat{\alpha}_{\rm mm}^{\rm co}=\frac{2S}{j\omega} \vspace*{.2cm}\\\displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm cr}\mp2\widehat{\alpha}_{\rm em}^{\rm co}+\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm cr}=0\vspace*{.5cm} \end{array}$ \\ \hline \multicolumn{3}{|c|}{{\bf Reflected and transmitted fields from the other side of
a single-sided perfect absorber}} \\ \hline \rowcolor[gray]{.9} Omega & Tellegen & Moving \\ \hline \vspace*{.1cm}\hspace*{.6cm} $ \begin{array}{c} \displaystyle\hspace*{-.2cm} \mathbf{E}_{\rm r}=\pm \frac{2j\omega}{S} \widehat{\alpha}_{\rm em}^{\rm cr}\overline{\overline{I}}_{\rm t}\cdot\mathbf{E}_{\rm inc} \vspace*{.5cm}\\\displaystyle\hspace*{-.2cm} \mathbf{E}_{\rm t}=0\vspace*{.5cm} \end{array}$ & \vspace*{.1cm}\hspace*{.6cm} $ \begin{array}{c} \displaystyle\hspace*{-.2cm} \mathbf{E}_{\rm r}=\mp \frac{2j\omega}{S} \widehat{\alpha}_{\rm em}^{\rm co}\overline{\overline{J}}_{\rm t}\cdot\mathbf{E}_{\rm inc} \vspace*{.5cm}\\\displaystyle\hspace*{-.2cm} \mathbf{E}_{\rm t}=0\vspace*{.5cm} \end{array} $ & \vspace*{.1cm}\hspace*{.6cm} $ \begin{array}{c} \displaystyle\hspace*{-.2cm} \mathbf{E}_{\rm r}=0\vspace*{.5cm}\\\displaystyle \mathbf{E}_{\rm t}=\pm \frac{2j\omega}{S} \widehat{\alpha}_{\rm em}^{\rm cr}\overline{\overline{I}}_{\rm t}\cdot\mathbf{E}_{\rm inc}\vspace*{.5cm} \end{array} $ \\ \hline \end{tabular} \label{ta:load_values} \end{table*} This condition is a limitation on the electromagnetic properties of a wire omega particle. Using (\ref{eq:l1}), we find that the effective polarizabilities of omega particles forming a periodic array satisfy the same relation \begin{equation} \displaystyle \widehat{\alpha}_{\rm ee}^{\rm co}\widehat{\alpha}_{\rm mm}^{\rm co}=-(\widehat{\alpha}_{\rm em}^{\rm cr})^2. \label{eq:om5} \end{equation} Let us consider the limitation (\ref{eq:om5}) together with the first condition for total absorption in (\ref{eq:teem5}) (which is the condition for zero reflection from an array of omega particles). Combining these two equations, we get \begin{equation} \displaystyle \widehat{\alpha}_{\rm ee}^{\rm co}\pm \frac{2 j}{\eta_0}\sqrt{\widehat{\alpha}_{\rm ee}^{\rm co} \widehat{\alpha}_{\rm mm}^{\rm co}}-\frac{1}{\eta_0^2}\widehat{\alpha}_{\rm mm}^{\rm co}=0.
\label{eq:om6} \end{equation} From this simple quadratic equation one can obtain the relation \begin{equation} \displaystyle \widehat{\alpha}_{\rm ee}^{\rm co}=-\frac{1}{\eta_0^2}\widehat{\alpha}_{\rm mm}^{\rm co} \label{eq:om7} \end{equation} and, using (\ref{eq:l1}), we get the same relation between the polarizabilities of individual particles in free space $\left(\alpha_{\rm ee}^{\rm co}=-\frac{1}{\eta_0^2}\alpha_{\rm mm}^{\rm co}\right)$. However, this relation cannot hold for passive omega particles, because the opposite signs required for the imaginary parts of the electric and magnetic polarizabilities would mean that the particle is active. Moreover, it is impossible to satisfy the third condition in (\ref{eq:teem5}) when the limitation (\ref{eq:om7}) is taken into account. Therefore, wire omega particles cannot be used for the design of perfect absorbers of this type. This is an interesting fact, because nearly total absorption was earlier predicted in structures which behave like omega particles \cite{mohammad}. However, there is a significant difference between the case studied in \cite{mohammad} and wire omega particles. For a wire omega particle, all the polarizabilities have the same resonance frequency, whereas for the structure in \cite{mohammad} one can tune the structural parameters so that different polarizabilities have different resonance frequencies. Thus, it appears possible to break the limitation (\ref{eq:om7}) using other kinds of omega particles and achieve total absorption with the help of the omega-coupling phenomenon. \subsection{Omega-Tellegen particles} Within the nonreciprocal classes, the most interesting properties are the possibilities offered by nonreciprocal field coupling phenomena in arrays of particles. The realization of such particles requires the inclusion of nonreciprocal elements.
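Returning briefly to the wire omega particle, the impossibility argument can be confirmed numerically: once the zero-reflection condition forces (\ref{eq:om7}), the wire-omega constraint (\ref{eq:om5}) leaves the sheet fully transparent, so no power can be absorbed. A short Python sketch with assumed illustrative values:

```python
import math

eta0 = 376.730              # free-space wave impedance (ohm)
omega = 2 * math.pi * 10e9  # assumed angular frequency
S = (5e-3) ** 2             # assumed unit-cell area

aee = (0.2 - 0.7j) * S / omega   # arbitrary test value for a_ee^co
amm = -eta0 ** 2 * aee           # relation (om7), forced by zero reflection
aem_cr = -eta0 * aee             # sign of the coupling chosen to null reflection
# Wire-omega constraint (om5): aee*amm = -(aem_cr)^2
assert abs(aee * amm + aem_cr ** 2) < 1e-12 * abs(aee * amm)

k = 1j * omega / (2 * S)
r = -k * (eta0 * aee + 2 * aem_cr - amm / eta0)          # upper signs
t = 1 - k * (eta0 * aee + aem_cr - aem_cr + amm / eta0)  # a_me^cr = a_em^cr

assert abs(r) < 1e-9       # reflection is nulled ...
assert abs(t - 1) < 1e-9   # ... but the sheet is then fully transparent
```

Whatever value of $\alpha_{\rm ee}^{\rm co}$ is assumed, nulling the reflection of a wire omega array drives the transmission to unity, so such an array cannot absorb totally.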
The known structures for the microwave frequency range \cite{basic} include magnetized ferrite spheres coupled to specially shaped metal elements; see the illustrations in Table~\ref{ta:load_values}. However, both of these structures also exhibit reciprocal field coupling effects in addition to the desired nonreciprocal ones. A single uniaxial Tellegen particle also shows some omega field coupling due to the asymmetric position of the metal strips with respect to the center of the ferrite sphere. For this reason, we call it an omega-Tellegen particle. Its polarizability dyadics have the form \begin{equation} \left\{\begin{array}{l} \displaystyle \={\widehat{\alpha}}_{\rm ee}=\widehat{\alpha}_{\rm ee}^{\rm co}\overline{\overline{I}}_{\rm t}+\widehat{\alpha}_{\rm ee}^{\rm cr}\overline{\overline{J}}_{\rm t} \vspace*{.2cm}\\\displaystyle \={\widehat{\alpha}}_{\rm mm}=\widehat{\alpha}_{\rm mm}^{\rm co}\overline{\overline{I}}_{\rm t}+\widehat{\alpha}_{\rm mm}^{\rm cr}\overline{\overline{J}}_{\rm t} \vspace*{.2cm}\\\displaystyle \={\widehat{\alpha}}_{\rm em}=\={\widehat{\alpha}}_{\rm me}=\widehat{\alpha}_{\rm em}^{\rm co}\overline{\overline{I}}_{\rm t}+\widehat{\alpha}_{\rm em}^{\rm cr}\overline{\overline{J}}_{\rm t}. \end{array}\right. \label{eq:te3} \end{equation} Using relations (\ref{eq:u1}) and (\ref{eq:te3}), we get the following conditions for total absorption in omega-Tellegen arrays: \begin{equation} \begin{array}{c} \displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm co}\pm 2\widehat{\alpha}_{\rm em}^{\rm cr}-\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm co}=0 \vspace*{.2cm}\\\displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm cr}\mp 2\widehat{\alpha}_{\rm em}^{\rm co}-\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm cr}=0 \vspace*{.2cm}\\\displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm co}+\frac{1}{\eta_0}\widehat{\alpha}_{\rm mm}^{\rm co}=\frac{2S}{j\omega} \vspace*{.2cm}\\\displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm cr}+\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm cr}=0.
\end{array}\label{eq:te4} \end{equation} This shows that if we want to use the advantages offered by Tellegen coupling, we need to design the particle so that the omega coupling coefficient $\widehat{\alpha}_{\rm em}^{\rm cr}$ is properly balanced with the electric and magnetic polarizabilities. \subsection{Chiral-moving particles} Likewise, the known artificial moving particle \cite{basic,mov1,mov2} (see the picture in Table~\ref{ta:load_values}) exhibits reciprocal magnetoelectric coupling because of its chiral shape. The properties of such a particle can be modeled by polarizability dyadics of the form \begin{equation} \left\{\begin{array}{l} \displaystyle \={\widehat{\alpha}}_{\rm ee}=\widehat{\alpha}_{\rm ee}^{\rm co}\overline{\overline{I}}_{\rm t}+\widehat{\alpha}_{\rm ee}^{\rm cr}\overline{\overline{J}}_{\rm t} \vspace*{.2cm}\\\displaystyle \={\widehat{\alpha}}_{\rm mm}=\widehat{\alpha}_{\rm mm}^{\rm co}\overline{\overline{I}}_{\rm t}+\widehat{\alpha}_{\rm mm}^{\rm cr}\overline{\overline{J}}_{\rm t} \vspace*{.2cm}\\\displaystyle \={\widehat{\alpha}}_{\rm em}=-\={\widehat{\alpha}}_{\rm me}=\widehat{\alpha}_{\rm em}^{\rm co}\overline{\overline{I}}_{\rm t}+\widehat{\alpha}_{\rm em}^{\rm cr}\overline{\overline{J}}_{\rm t}. \end{array}\right.
\label{eq:mo3} \end{equation} Using the relations (\ref{eq:u1}) and (\ref{eq:mo3}), the conditions for total absorption in the chiral-moving slab read \begin{equation} \begin{array}{c} \displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm co}-\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm co}=0 \vspace*{.2cm}\\\displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm cr}-\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm cr}=0 \vspace*{.2cm}\\\displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm co}\pm2\widehat{\alpha}_{\rm em}^{\rm cr}+\frac{1}{\eta_0}\widehat{\alpha}_{\rm mm}^{\rm co}=\frac{2S}{j\omega} \vspace*{.2cm}\\\displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm cr}\mp2\widehat{\alpha}_{\rm em}^{\rm co}+\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm cr}=0. \end{array}\label{eq:mo4} \end{equation} In this case, the chirality parameter $\widehat{\alpha}_{\rm em}^{\rm co}$ should be balanced with the anti-symmetric parts of the electric and magnetic polarizabilities. Implementation of total-absorption arrays using omega-Tellegen or chiral-moving particles presents significant difficulties. To the best of our knowledge, there are no analytical models to calculate the individual polarizabilities of these particles. The relations between the effective and individual polarizabilities for these particles also become more involved. Finally, the design presupposes the use of ferrites and a magnetic bias field, which presents practical difficulties. On the other hand, these topologies offer unique properties, such as a thin sheet operating as an isolator, and they clearly deserve further study. \section{Conclusions} We have considered possible approaches for the realization of perfect absorbers using ultimately thin structures (single layers of particles). The thickness cannot be strictly zero (in the electromagnetic sense) because we must allow a magnetic response in the layer.
We have demonstrated that, to realize total absorption from both sides of the sheet, one only needs balanced electric and magnetic polarizabilities, while all magnetoelectric polarizabilities must be zero. Further, we have considered single-layer sheets which operate as perfect absorbers only when illuminated from one of the two sides and studied what functionalities can be engineered for illumination from the opposite side. We have shown that introducing omega coupling in the constituent particles makes it possible to realize a layer which acts as a perfect absorber from one side with a controllable co-polarized reflection from the opposite side of the sheet. For reciprocal structures, it has been shown that tuning the layer to act as a perfect absorber from one side does not allow any chirality in the layer. Allowing nonreciprocity in the properties of the absorbing particles offers further functionalities. A Tellegen sheet can be designed to work as a perfect absorber from one side and as a twist polarizer in reflection from the other side. The antisymmetric part of the nonreciprocal coupling coefficient (i.e., a layer of particles with the constitutive parameters of an artificial moving medium) makes it possible to achieve total absorption from one side with a controlled transmission coefficient from the opposite side. In particular, a regime in which the layer acts as a perfect absorber from one side while being transparent from the other side is possible. This corresponds to an ultimately thin isolator. Finally, we have studied some particular examples of possible realizations of single-layer perfect absorbers with the use of omega, omega-Tellegen, and chiral-moving particles as canonical examples of uniaxial bi-anisotropic particles. The most interesting properties are offered by nonreciprocal bi-anisotropic particles.
They can be realized in practice as near-field coupled magnetized ferrite inclusions and metal strips of wires, as was proposed in \cite{classes} (see pictures in Table~\ref{ta:load_values}). We are currently developing analytical models of omega-Tellegen and chiral-moving particles, which are expected to enable the design and optimization of inclusions for the proposed nonreciprocal absorbing layers.
\section{Why remove classical time from quantum theory?} \noindent Dynamical evolution in quantum theory is described by the Schr\"{o}dinger equation. The time parameter which is used for describing this evolution is part of a classical spacetime. By classical spacetime we mean both the underlying spacetime manifold, as well as the gravitational field [equivalently the metric] which resides on it. As we know, the gravitational field is determined by the distribution of classical matter according to the laws of the general theory of relativity. What is perhaps not so well appreciated is that, in accordance with the Einstein hole argument, a physical meaning cannot be attached to the points of the underlying manifold unless a dynamically determined metric tensor field resides on it ~\cite{Christian:98, Singh:2009}. Thus one can reasonably assert that classical spacetime, and hence also the time parameter used to describe evolution in quantum theory, is determined by classical bodies and fields. Now, the dynamics of classical objects is itself a limiting case of quantum dynamics. We see here the circularity of time in quantum theory. Quantum theory depends on classical time. But classical time is well-defined only after one considers the classical limit of quantum theory (Fig. 1). \begin{figure} [ht] \centerline{\includegraphics{flowdiag.pdf}} \caption{The circularity of time in quantum theory} \end{figure}% We hence conclude that there should exist an equivalent new formulation of quantum theory which does not depend on classical time. We have argued elsewhere that such a new formulation is a limiting case of a stochastic non-linear theory. The non-linearity, which has to do with gravity, becomes significant in the approach to the Planck mass/energy scale and possibly plays a role in explaining the collapse of the wave-function during a quantum measurement ~\cite{Singh:2006, Singh:2009}. 
How should one go about constructing such a reformulation, which we will call Generalized Quantum Dynamics [GQD]? One is foregoing classical time, and along with it, the point structure of a spacetime manifold. A natural possibility is to replace the original spacetime by a non-commutative spacetime. Such a spacetime, and its associated dynamics, called Non-commutative Special Relativity [NSR], was proposed by us in a recent work ~\cite{Lochan-Singh:2011}. In NSR, evolution is described via a `proper time' constructed from taking the Trace over the non-commutative spacetime metric. As will be described in the next section, a GQD is arrived at by constructing the equilibrium statistical thermodynamics of the underlying non-commutative special relativity ~\cite{Lochan:2012}. Section IV then sketches ongoing work on how one possibly recovers classical spacetime and classical matter fields, from considerations of statistical fluctuations around a GQD. This work, when complete, would be central to achieving a fundamental understanding of why superpositions of position states are absent in the macroscopic, classical world (Fig. 2). \begin{figure} [ht] \centerline{\includegraphics{Figure2.pdf}} \caption{From a non-commutative spacetime to the classical world, via GQD} \end{figure}% One notices that in the transition from a GQD to the classical world, there is no sign of ordinary quantum theory [which depends on classical time]! That recovery must take place separately, and that is where the connection of the time problem with the measurement problem emerges. In Fig. 2, by classical world is meant a universe which is {\it dominated} by classical matter fields. Only when such a dominance is given, can one talk of the existence of a classical spacetime; otherwise the Einstein hole argument will again come into play and forbid the occurrence of the ordinary spacetime manifold. 
However, not all matter is classical; there is a sprinkling of `quantum' fields, whose dynamics must be derived from first principles, given a classical time. This is what is achieved by the theory of Trace Dynamics ~\cite{Adler:04, Adler:94, Adler-Millard:1996, RMP:2012}, which is the classical dynamics of non-commuting matrices on a background classical spacetime. The equilibrium statistical thermodynamics of this matrix dynamics is shown to be the ordinary quantum theory. Statistical fluctuations around equilibrium are shown to lead to non-linear modifications of the quantum theory, and this non-linearity is responsible for the collapse of the wave-function during a quantum measurement (Fig. 3). In the limit when the non-linearity becomes strongly dominant, the non-linear theory reduces to classical mechanics. \begin{figure} [ht] \centerline{\includegraphics{Figure3.pdf}} \caption{From Trace Dynamics to a nonlinear theory, via standard quantum theory} \end{figure}% The connection between the problem of time and the problem of measurement is the following. In our opinion, Trace Dynamics should perhaps not be treated as a stand-alone theory, because it gives a matrix (equivalently operator) status to matter degrees of freedom while retaining a point-like structure for spacetime. This again runs into the kind of difficulties implied by the Einstein hole argument: a non-commutative nature for matter degrees is not consistent with a commutative nature for spacetime degrees, unless a dominant classical matter background is available. Thus, a logical starting point for Fig. 3 is to place it at the top of Fig. 2. First one starts from a non-commutative special relativity and derives a GQD, and from there the classical world with a classical time. On this classical world one considers the matrix dynamics for select degrees of freedom (which are sub-dominant and not classical), and this eventually leads to a non-linear quantum theory. 
The physics which solves the problem of time in quantum theory is strongly correlated with the physics that solves the measurement problem in quantum theory (Fig. 4). Fig. 4 captures the philosophy of our approach, and the essence of this article. One starts from an NSR and arrives at a GQD. This is described in Section II. The transition from a GQD to the classical world is discussed in Section IV (the logical place would be Section III, but this work is as yet incomplete, and hence its discussion is left till the end). The derivation of ordinary quantum theory and the solution of the quantum measurement problem is discussed in Section III. \begin{figure} [ht] \centerline{\includegraphics{Figure4.pdf}} \caption{Solving the problem of time, and the problem of quantum measurement} \end{figure}% \section{A generalized quantum dynamics} The mathematical formulation leading up to a GQD ~\cite{Lochan:2012} is strongly motivated by and based on the theory of Trace Dynamics developed by Stephen Adler and collaborators ~\cite{Adler:04}. The new added element is the assumption of a non-commutative spacetime with operator (equivalently matrix) coordinates $(\hat{t}, \hat{x}, \hat{y}, \hat{z})$, for which a proper time is defined by taking a trace over a line-element: \begin{equation} ds^{2} = Trd\hat{s}^2\equiv Tr[d\hat{t}^2 - d\hat{x}^2 - d\hat{y}^2 - d\hat{z}^2]. \end{equation} This line element is invariant under coordinate transformations of the non-commuting coordinates, with their commutation relations being completely arbitrary. Fermionic / Bosonic matter degrees of freedom, described by non-commuting matrices, live on this spacetime, and are respectively characterized by whether they belong to odd / even sector of the graded Grassmann algebra. A classical dynamics of these non-commuting matrix degrees of freedom $\hat{q}^i$ can be constructed to describe evolution with respect to the proper time $s$: we call this a non-commutative special relativity [NSR]. 
Thus as in special relativity, a `particle' is assigned a set of four coordinates $(\hat{t}, \hat{x}, \hat{y}, \hat{z})$, a four velocity is defined by taking their derivative with respect to the proper time, and a canonically conjugate four momentum $\hat{p}^{i}$ is defined by taking the `trace derivative' of the Trace Lagrangian (trace of a polynomial function of coordinates and velocities) with respect to the four velocity. From the Trace Lagrangian, one derives Lagrange equations of motion, a Trace Hamiltonian, and Hamilton's equations, as in ordinary mechanics ~\cite{Lochan-Singh:2011}. The central feature of this matrix classical dynamics, which makes it different from point particle classical dynamics, is that it possesses a novel conserved charge: \begin{equation} \hat{Q} = \sum_{r\in B}[\hat{q}_r,\hat{p}_r] -\sum_{r\in F}\{\hat{q}_r,\hat{p}_r\}, \end{equation} where the commutators are for bosonic degrees of freedom, and anticommutators are for fermionic degrees. We note that the commutators/anti-commutators also include pairs such as $[\hat{E}^i,\hat{t}^i]$ and $\{\hat{E}^i,\hat{t}^i\}$, where $\hat{E}^i$ is the energy variable canonically conjugate to $\hat{t}^{i}$. This conserved charge $\hat{Q}$, which has the dimensions of action, is a consequence of the global unitary invariance of the Lagrangian and the Hamiltonian. It would be trivially zero in the case of point-particle mechanics, but that is not the case here, and its existence is all the more remarkable, because the individual $q-q$, $q-p$, and $p-p$ commutators / anti-commutators are non-zero and completely arbitrary. The existence of this charge plays a central role in the emergence of quantum theory from this underlying level, as we will see shortly. 
This matrix dynamics on a non-commutative `flat' space-time is, according to us, the fundamental dynamics, its symmetries being invariance of the operator spacetime metric under Lorentz transformations, and the global unitary invariance of the Lagrangian. However, this is not the dynamics we observe in our laboratory experiments. Hence one proposes that this dynamics must be coarse-grained over, in much the same way that coarse graining over the microscopic degrees of freedom reproduces the statistical thermodynamics of macroscopic systems. Thus we shall develop the statistical thermodynamics of the above classical matrix dynamics, employing entirely conventional methods and techniques of equilibrium statistical mechanics. The classical matrices are analogous to the atoms of a gas, and the coarse-graining is analogous to constructing the thermodynamics of the gas, leading to its approximate macroscopic thermodynamic description. It is remarkable that the thermodynamics of this matrix dynamics will be the sought-for GQD, which is a precursor to quantum theory, and in that sense quantum theory is an emergent phenomenon. One starts by showing that a measure $d\mu$ can be defined in the phase space of the matrix degrees of freedom, and Liouville's theorem holds, demonstrating the conservation of phase space volume. A probability density distribution $\rho(H,T;\hat{Q},\lambda)$ is defined in the phase space, where the `temperature' $T$ and the matrix $\lambda$ are respectively the Lagrange multipliers introduced to respect the conservation of the Hamiltonian and the charge $\hat{Q}$. A canonical ensemble is constructed and an equilibrium distribution is arrived at by maximizing the entropy \begin{equation} S = - \int d \mu \rho \log \rho \end{equation} subject to the conservation constraints. As anticipated, the equilibrium distribution is given by \begin{equation} \rho = Z^{-1} \exp(Tr\lambda \hat{Q} - HT) \end{equation} with $Z$ being the partition function. 
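Schematically, the maximization proceeds by the usual Lagrange-multiplier argument; in a sketch (notation as above, with the normalization constraint and the resulting constant absorbed into the partition function $Z$):

```latex
\delta\left[\, -\int d\mu\, \rho\log\rho
  \;+\; Tr\,\lambda \int d\mu\, \rho\, \hat{Q}
  \;-\; T \int d\mu\, \rho\, H \,\right] = 0
\;\Longrightarrow\;
-\log\rho - 1 + Tr\lambda\hat{Q} - HT = 0
\;\Longrightarrow\;
\rho = Z^{-1} \exp(Tr\lambda \hat{Q} - HT).
```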
An important result which can be proved is that the canonical ensemble average of $\hat{Q}$ is of the form \begin{equation} \langle\hat{Q}\rangle_{AV} = i_{eff}\hbar \end{equation} where $\hbar$ is a real positive constant of dimensions of action, and $i_{eff} = diag(i,-i,i,-i,...,i,-i)$ such that $Tri_{eff}=0$. Now, the phase space measure, as well as the canonical average of an observable ${\cal O}$ given by \begin{equation} \langle {\cal O} \rangle_{AV} = \int d\mu \rho {\cal O} \end{equation} are invariant under constant shifts of dynamical variables in phase space. This leads to an important Ward identity for a polynomial function $W(z)$ of the dynamical variables $z$ in phase space. Under the assumptions that (i) $T$ is identified with the Planck scale and we work far below that scale, and (ii) in the Ward identity the conserved charge $\hat{Q}$ can be replaced by its canonical average $i_{eff}\hbar$, the Ward identity simplifies greatly to the following \begin{equation} \langle {\cal D} z_{eff}\rangle_{AV}=0; \quad {\cal D}z_{r eff} =i_{eff}[W_{eff}, z_{r eff}]-\hslash\sum_s\omega_{rs}\left(\frac{\delta {\bf W}}{\delta z_s}\right)_{eff}. \label{ModWard} \end{equation} This equation contains the essence of the sought-for GQD! Here, $z_{eff}$ is that matrix component of the matrix dynamical variable $z$ which commutes with $i_{eff}$. Different choices of the polynomial $W$ lead to different important results which contain the mathematical essence of GQD. If $W$ is chosen to be the operator Hamiltonian $H$, this Ward identity becomes the Heisenberg equations of motion \begin{equation} \langle {\cal D} z_{eff}\rangle_{AV}=0; \quad {\cal D}z_{r eff} =i_{eff}[H_{eff}, z_{r eff}]-\hslash\dot{z}_{r eff}. \label{ModWard1} \end{equation} A dot denotes derivative with respect to the proper time $s$. We recall that the operator time $\hat{t}$ is one of the dynamical variables $z$. 
Next, if we choose $W=\sigma_vz_v$ we get \begin{equation} i_{eff}{\cal D}z_{r eff} =[z_{r eff}, \sigma_v z_{v eff}]-i_{eff}\hslash\omega_{rv}\sigma_v \label{ModWard2} \end{equation} which gives the emergent canonical commutation rules for the bosonic and fermionic degrees of freedom. Thus we obtain what we call effective canonical commutators of the canonically averaged matter degrees of freedom. For a bosonic pair \begin{equation} [q^{\mu},q'^{\nu}]=0; \quad [q^{\mu},p_{\nu}]=i_{eff}\hslash\delta^{\mu}_{\nu}, \end{equation} while for a fermionic pair \begin{equation} \{q^{\mu},q'^{\nu}\}=0; \quad \{q^{\mu},p_{\nu}\}=i_{eff}\hslash\delta^{\mu}_{\nu}. \end{equation} This leads to the desired non-commutativity amongst configuration variables and the corresponding momenta of matter degrees of freedom, at the emergent level. Evidently, an operator time--energy commutation relation is now included. In anticipation of the standard quantum theory that will eventually emerge from here, we identify the constant $\hbar$ with Planck's constant. In this sense, a Generalized Quantum Dynamics which does not refer to a classical time emerges from the underlying non-commutative special relativity in the statistical thermodynamic limit ~\cite{Lochan:2012}. One does have a concept of time-evolution, but this evolution is with respect to the proper time $s$ constructed from the trace of the operator spacetime line-element. In Section IV we will discuss how one possibly proceeds from this GQD to recover classical time. 
Furthermore, since, at the fundamental matrix level, the theory is Lorentz invariant as shown in ~\cite{Lochan-Singh:2011}, if we add another assumption of boundedness of $H_{eff}$ and the existence of a zero eigenvalue of $\vec{P}_{eff}$ corresponding to a unique eigenstate $\psi_0$, there exists a proposed correspondence between canonical ensemble average quantities and Lorentz-invariant Wightman functions in the emergent field theory, $$ \psi_0^{\dagger}\langle P({z_{eff}})\rangle_{\hat{AV}}\psi_0=\langle vac |P({{\cal X}_{eff}})|vac \rangle. $$ We can also obtain an equivalent Schr\"{o}dinger picture corresponding to the emergent Heisenberg picture of space-time dynamics. For that, we define $$U_{eff}(s)=\exp{(-i_{eff}\hslash^{-1} s H_{eff})},$$ such that $$\frac{d}{ds}U_{eff}(s)=-i_{eff}\hslash^{-1}H_{eff}U_{eff}(s).$$ Then, for a Heisenberg state vector $\psi$ we form the Schr\"{o}dinger picture state vector $\psi_{schr}(s)$, for space-time degrees of freedom $$ \psi_{schr}(s)=U_{eff}(s)\psi,$$ $$i_{eff}\hslash \frac{d}{ds}\psi_{schr}(s)=H_{eff}\psi_{schr}(s).$$ Thus we obtain Schr\"{o}dinger evolution for the phase-space variables at the canonical ensemble average level. We note that time and space continue to retain their operator status, although they now commute with each other. \section{Trace dynamics and the quantum measurement problem} Let us once again have a look at Fig. 4. We have thus far outlined how the lowermost arrow [NSR to GQD] is realized. In the next Section we will discuss the next arrow [GQD to the classical world]. For the purpose of the present section, let us assume the classical world as given: matter fields are classical and classical spacetime obeys the laws of general relativity. The universe is dominated by classical matter, which is responsible for the generation of a classical spacetime - in particular there exists a classical time with respect to which evolution can be defined. 
In such a classical world, how does one realize quantum theory, so essential to successfully describe the very large number of quantum phenomena observed in the laboratory? The traditional approach, of course, is to start from a classical dynamics for a system with given configuration variables and their canonical momenta, to replace Poisson brackets by commutation relations, hence introducing Planck's constant, and to replace Hamilton's equations of motion by Heisenberg equations of motion [equivalently the Schr\"{o}dinger equation]. This approach [and the equivalent path-integral formulation], although extremely successful, ought to be regarded as not completely satisfactory and `phenomenological' in nature, because it pre-assumes as given the knowledge of its own limiting case, namely classical dynamics. One should not have to `quantize' a classical theory; rather there should be some guiding symmetry principles for developing a quantum theory, and then deriving classical mechanics from quantum theory as a limiting case. This requirement is in the same spirit whereby one does not arrive at special relativity by `relativizing' Galilean mechanics, or one does not arrive at general relativity by `general relativizing' Newtonian gravitation. The more fundamental theory stands on its own feet, and the limiting case only arises as an approximation - the prior knowledge of the limiting case should not be essential for the construction of the fundamental theory. An offshoot of arriving at quantum theory by `quantization' is that this leaves us without an understanding of the absence of macroscopic superpositions [the Schr\"{o}dinger cat paradox] and of the quantum measurement problem. [Unless of course one accepts the many-worlds interpretation as an explanation, or one believes in Bohmian mechanics as being the correct mathematical formulation of quantum theory]. 
Trace Dynamics ~\cite{Adler:04} sets out to derive quantum theory from an underlying matrix dynamics where select matter degrees of freedom $\hat{q}^i$ are described by non-commuting matrices [whereas the rest of the matter fields, which dominate the Universe, continue to be treated as classical] and a classical [Minkowski] spacetime is a given. These matrices represent bosonic / fermionic degrees of freedom, depending on whether they belong to the even / odd sector of the graded Grassmann algebra. As in the previous section, a classical dynamics is constructed for these matrix degrees, with the difference that now time evolution is with respect to a classical time, as opposed to a proper time constructed from the operator spacetime line-element. Given a Trace Lagrangian, one derives Lagrange's equations of motion, a Hamiltonian, and Hamilton's equations of motion. Once again, as a consequence of global unitary invariance there is a conserved charge with dimensions of action, the Adler-Millard charge \begin{equation} \tilde{C} = \sum_{r\in B}[\hat{q}_r,\hat{p}_r] -\sum_{r\in F}\{\hat{q}_r,\hat{p}_r\}, \end{equation} where the commutators are for bosonic degrees of freedom, and anticommutators are for fermionic degrees. This time round, though, there is no pair such as $(E^i,t^i)$ in the commutators, because time is not an operator. In fact, it should be emphasized that the construction in this section proceeds in very much the same fashion as in the previous section, except that a classical spacetime is given. More precisely, the approach adopted in the previous section was developed by us completely following the work of Adler and collaborators as described in this section. This matrix dynamics is Lorentz invariant, under transformation of the ordinary space-time coordinates. 
An equilibrium statistical mechanics for this matrix dynamics is constructed, as before, by maximizing the entropy, and as before it can be shown that the canonical average of $\tilde{C}$ takes the form \begin{equation} \langle\tilde{C}\rangle_{AV}=i_{eff}\hbar. \end{equation} A Ward identity holds, from which one deduces, after replacing the Adler-Millard charge by its canonical average, the standard canonical commutation relations of quantum theory and the Heisenberg equations of motion; by taking the non-relativistic limit one can write the equivalent description of the dynamics in terms of the Schr\"{o}dinger equation. The correspondence between canonical ensemble averages and Wightman functions is proposed as before. In this way one recovers ordinary relativistic quantum field theory, and its non-relativistic limit, from the underlying classical matrix dynamics. This is the step described by the lower arrow in the upper half of Fig. 4. Something very remarkable is achieved next, by the upper arrow in the top half of Fig. 4. One examines the role played by the statistical fluctuations around equilibrium, for the case of the non-relativistic Schr\"{o}dinger equation. These are taken into account by revisiting the Ward identity, and instead of replacing $\tilde{C}$ by its canonical average, one replaces $\tilde{C}$ by the canonical average plus correction terms. These correction terms represent the ever-present statistical fluctuations around equilibrium, analogous to the Brownian motion corrections to equilibrium thermodynamics. These fluctuations induce a [linear] modification of the non-relativistic Schr\"{o}dinger equation, the modifications being caused by the stochastic fluctuations, and if one assumes the fluctuations to be of the white noise type, they can be described by the It\^o representation of Brownian motion. 
In order to make contact with the quantum measurement problem, one must now make a somewhat ad hoc assumption [which must eventually be justified from a deeper understanding of Trace Dynamics, and perhaps of the possible involvement of gravity]. The point is that the Schr\"{o}dinger equation, after including fluctuations, turns out not to be norm-preserving. Now one knows from particle number conservation in non-relativistic quantum theory that norm must be preserved during evolution. While norm-preservation must eventually be proved from deeper principles, for now one defines a new wave-function by dividing the original wave-function by its norm, so that the new wave-function preserves norm. This new wave-function obeys a {\it non-linear} Schr\"{o}dinger equation while continuing to depend on the statistical fluctuations. This non-linear Schr\"{o}dinger equation contains within itself a special class, which coincides with the so-called models of Continuous Spontaneous Localization [CSL] developed by Ghirardi, Rimini, Weber and Pearle ~\cite{Ghirardi:86, Pearle:76, Ghirardi2:90, Bassi:03} to explain the absence of macroscopic superpositions and to provide a dynamical explanation for the collapse of the wave-function during a quantum measurement. A prototype of such models is the one particle stochastic non-linear Schr\"{o}dinger equation ~\cite{Diosi:89} \begin{equation} \label{eq:qmupl1} d \psi_t = \left[ -\frac{i}{\hbar} H dt + \sqrt{\lambda} (q - \langle q \rangle_t) dW_t - \frac{\lambda}{2} (q - \langle q \rangle_t)^2 dt \right] \psi_t, \end{equation} where $q$ is the position operator of the particle, $\langle q \rangle_t \equiv \langle \psi_t | q | \psi_t \rangle$ is the quantum expectation, and $W_t$ is a standard Wiener process which encodes the stochastic effect. Evidently, the stochastic term is nonlinear and also nonunitary. 
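The collapse dynamics encoded in equation (\ref{eq:qmupl1}) can be illustrated numerically. The sketch below (not part of the original construction; all parameter values are illustrative) integrates the equation with $H=0$ for a particle restricted to two positions $q=\pm1$, using a simple Euler--Maruyama scheme with renormalization at each step:

```python
import numpy as np

# Toy Euler-Maruyama integration of the one-particle CSL-type equation,
# restricted to two "sites" q = +1 and q = -1 (a minimal illustrative
# discretization). H is set to zero, so only the stochastic terms act.
rng = np.random.default_rng(0)
lam, dt, steps, trials = 1.0, 5e-3, 4000, 2000
q = np.array([1.0, -1.0])                  # two-site position operator (diagonal)
p0 = 0.3                                   # initial Born weight of the q = +1 site
psi = np.tile([np.sqrt(p0), np.sqrt(1 - p0)], (trials, 1)).astype(complex)

for _ in range(steps):
    qexp = (np.abs(psi) ** 2 @ q)[:, None]           # <q>_t for each trajectory
    dW = rng.normal(0.0, np.sqrt(dt), (trials, 1))   # Wiener increments
    psi = psi + (np.sqrt(lam) * (q - qexp) * dW
                 - 0.5 * lam * (q - qexp) ** 2 * dt) * psi
    psi /= np.linalg.norm(psi, axis=1, keepdims=True)  # restore the norm

# Each trajectory collapses onto one of the two sites; the fraction
# ending on the q = +1 site tracks the initial Born weight p0.
frac = np.mean(np.abs(psi[:, 0]) ** 2 > 0.5)
print(frac)
```

By the end of the run essentially every trajectory has localized onto a single site, and the empirical frequency of the $q=+1$ outcome is close to $p_0$: the nonlinear, nonunitary term converts deterministic Schr\"{o}dinger evolution into random outcomes distributed according to the Born rule.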
The collapse constant $\lambda$ sets the strength of the collapse mechanism, and it is chosen proportional to the mass $m$ of the particle according to the formula $ \lambda = \frac{m}{m_0}\; \lambda_0, $ where $m_0$ is the nucleon's mass and $\lambda_0$ measures the collapse strength. This equation can be used to prove the absence of macroscopic superpositions and solve the quantum measurement problem, and furthermore its predictions for experiments in the mesoscopic regime differ from those of the standard linear Schr\"{o}dinger equation ~\cite{Bassi:03, RMP:2012, Essay:2012}. This allows the stochastic non-linear quantum dynamics, and hence Trace Dynamics, albeit indirectly, to be confirmed or ruled out by laboratory tests in the foreseeable future. The structure of the equation naturally provides an amplification mechanism - collapse becomes more and more important for larger systems. Furthermore, as can be anticipated by the very nature of its construction [norm-preservation], this non-linear equation dynamically reproduces the Born probability rule for the random outcomes of successive quantum measurements on an observable. Although more remains to be done [why fluctuations should preserve norm; whether the CSL model can be uniquely derived from trace dynamics; whether the collapse constant $\lambda$ is a new constant of nature, or is determined by already known fundamental constants via the involvement of gravity in collapse], it is unquestionably true that trace dynamics provides a very natural and attractive avenue for understanding the origin of probabilities during quantum measurement, although the Schr\"{o}dinger dynamics is by itself deterministic. 
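The amplification mechanism can be made quantitative by inserting the historical GRW benchmark $\lambda_0 \sim 10^{-16}\ {\rm s}^{-1}$ (a reference value from the collapse-model literature, not one fixed by the trace dynamics derivation). For the centre of mass of a macroscopic object of $N$ nucleons,

```latex
\lambda = \frac{m}{m_0}\,\lambda_0 = N\lambda_0, \qquad
N \sim 10^{23} \;\Longrightarrow\;
\lambda \sim 10^{23}\times 10^{-16}\ {\rm s}^{-1} = 10^{7}\ {\rm s}^{-1},
```

so a macroscopic superposition decays within $\sim 10^{-7}$ s, while for a single nucleon the collapse terms act only once every $\sim 10^{16}$ s and microscopic dynamics remains indistinguishable from ordinary quantum theory.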
It has to do with the universal presence of statistical fluctuations: if the Schr\"{o}dinger equation is a thermodynamic approximation to the underlying matrix dynamics, the stochastic non-linear corrections to the Schr\"{o}dinger equation which are responsible for dynamical collapse, and the origin of probabilities, are a consequence of the unavoidable presence of fluctuations around thermodynamic equilibrium. It should also be emphasized that the theory of wave-function collapse discussed here [CSL] is a non-relativistic theory, as also is the starting point wherein the connection between trace dynamics and CSL is developed. Despite several attempts, a relativistic theory of wave-function collapse does not yet exist ~\cite{RMP:2012}. One clear difficulty is that the norm-preservation condition, which permits the construction of the non-linear stochastic Schr\"{o}dinger equation, is no longer necessarily available. \section{From the generalized quantum dynamics to Trace Dynamics} The ideas discussed in this section are a report on work in progress, and hence have not yet taken final shape in terms of a mathematical formulation. Trace Dynamics takes a classical spacetime as given, and on this given spacetime it considers the matrix dynamics of selected degrees of freedom, for which quantum behaviour is derived. To our understanding, a fully consistent treatment of these select degrees, which is in accordance with the Einstein hole argument, should also associate an operator space-time with these degrees, as discussed in Section II. However, and this is crucial, one makes an {\it assumption} that this operator spacetime associated with these select degrees of freedom makes a very negligible impact on the classical spacetime produced by the dominant classical matter fields. This assumption is what allows one to proceed with a pre-given classical spacetime while developing trace dynamics. 
It is possible, however, as discussed towards the end of this section, that this assumption may have to be revisited, in order to understand better the fundamental nature of EPR quantum correlations [no signalling, but yet an `action at a distance', as during the collapse of the wave-function]. One must next face the hard problem of understanding the transition from a GQD to a classical world. At a simplistic level, one could take the following approach. One should consider the statistical fluctuations about the equilibrium, at which GQD holds. However, one knows how to do that only in the non-relativistic case. The non-relativistic limit of the GQD cannot be defined by ``going to speeds much less than the speed of light'', since time and space are still operators and there clearly is no classical notion of speed here. However, in the Lorentz transformations which define the invariance of the operator spacetime line-element, the one-parameter invariance along a given direction is defined by the parameter $\beta$ which in the classical limit is defined as $v/c$. A non-relativistic limit of GQD can hence be defined by taking the limit $\beta\ll 1$. In this limit one can demand that the fluctuations preserve norm in the Schr\"{o}dinger equation, in which case the Schr\"{o}dinger equation is transformed to a non-linear equation, of which the CSL-type stochastic equation is a special case. Evolution is described with respect to the proper time $s$ defined from the trace of the operator spacetime element, and the Hamiltonian depends on configuration degrees of freedom which include operator time. As before, one can consider the many-particle macroscopic limit and show that macroscopic superpositions are absent. However, something else extremely significant happens now. 
The absence of macroscopic superpositions in the matter sector implies the absence of superpositions of different spacetime quantum states corresponding to the operator status of space and time, thereby leading to the {\it emergence of a classical spacetime}. This is an important lesson, even though as yet understood only in the non-relativistic and flat case: the emergence of a classical macroscopic description for matter comes hand in hand with the emergence of classical spacetime - the two are inseparable, and this inseparability is entirely in accord with the Einstein hole argument. If quantum theory is an emergent phenomenon [emerging from trace dynamics], so is classical spacetime an emergent phenomenon [emerging again from the generalized trace dynamics]. The matrix degrees of freedom may well be called the `atoms of spacetime'. A greater challenge is to understand the relativistic case: how is the ordinary spacetime of special relativity to be recovered from GQD, when the norm-preservation condition is apparently not available. An even greater challenge is to recover classical gravity! When one proceeds from GQD to recover the classical world, not only should the classical spacetime manifold emerge, but classical gravity, which satisfies the Einstein equations, must also emerge. Only then can consistency with the Einstein hole argument be ensured. Now GQD by itself has no gravity. Thus it seems we must return again to the lowermost level, and propose that gravity be introduced at the level of matrix dynamics itself, possibly by going from the `flat' operator spacetime element to the `curved' operator spacetime element: \begin{equation} ds^{2} = Trd\hat{s}^2\equiv Tr[\hat{g}_{\mu\nu}d\hat{x}^{\mu}d\hat{x}^{\nu}]. 
\end{equation} The expectation is that operator Einstein equations can be assumed to hold at the matrix dynamics level, and coarse graining would lead to Einstein equations for the canonically averaged operator metric, self-consistently coupled with the `curved space' GQD which depends on the canonically averaged operator metric. [While of course this idea remains to be developed mathematically, one cannot help noticing the resemblance it bears to the Schr\"{o}dinger-Newton system studied by Di\'{o}si ~\cite{Diosi:84} and Penrose ~\cite{Penrose:96} and others ~\cite{RMP:2012} in the context of studying gravity-induced dynamical wavefunction collapse]. From here, one possibly proceeds to study the impact of statistical fluctuations on the equilibrium GQD and canonically averaged Einstein equations. This system is now non-linearly self-coupled, and it could be that one may not have to bring in the assumption of norm-preservation by hand to arrive at a stochastic non-linear CSL-type collapse model which obeys the Born rule. In the macroscopic limit, such a non-linear system could be responsible for making both macroscopic objects and the associated spacetime and gravity behave classically. Once such a classical world is recovered, one can implement the construction described in Section III, for arriving at quantum theory starting from trace dynamics for the select degrees of freedom. Our ideas may provide a useful way out for a better understanding of the apparent `action at a distance' which seems to prevail during the seemingly instantaneous collapse of the wave-function and in EPR-type quantum correlations. Perhaps one must not entirely disregard the implications of the operator space-time metric line-element associated with the [sub-dominant] quantum system, as was done in Section III while deriving quantum theory on a given classical space-time background. 
A quantum system always `carries' such a line-element with itself, in the sense that the most fundamental matrix level of description always exists, although we coarse grain it to arrive at what we observe at a higher level. Seen from the viewpoint of this operator line-element, which is non-commutative in nature, there is no point-structure to the spacetime associated with it, no definite light-cone structure, and no pre-given causal order, although it does have operator-level Lorentz invariance. Thus from the point of view of this line-element, `wave-function collapse' can well happen in an unsurprising manner, which otherwise appears as `instantaneous action at a distance' from the point of view of the externally given classical spacetime, because the latter possesses a causal structure. But this latter causal structure is not intrinsic to the quantum system under study - it is something we choose to employ for our convenience, and then we `cry foul'! Indeed, since there is no violation of special relativity in an EPR measurement, the apparent strangeness could simply be a case of trying to describe the process from an inaccurate perspective. Support for our idea also comes from an important recent paper ~\cite{Brukner:11}, where it has been shown that if one does not assume a predefined global causal order, there are multipartite quantum correlations which cannot be understood in terms of a definite causal order, and that spacetime may emerge from a more fundamental structure in a quantum-to-classical transition. In summary, in this work we have addressed the two key fundamental obstacles which still hold us back from a better understanding of quantum theory: the problem of time and the problem of quantum measurement. The problem of time suggests that a fundamental description of spacetime which is more compatible with quantum theory than the conventional one is a non-commutative spacetime.
The passage from a non-commutative spacetime to the commutative one that we see around us is through a coarse graining: akin to a passage from microscopic Newtonian mechanics to macroscopic thermodynamics via statistical mechanics. Quantum theory also emerges as the equilibrium description from the underlying level via a coarse graining. Statistical thermodynamics invariably implies Brownian motion fluctuations around equilibrium, and these are what result in quantum theory being an approximation to a stochastic non-linear theory, and dynamically explain the collapse of the wave-function and the emergence of probabilities during a quantum measurement. Thus the problem of time and the problem of quantum measurement are related to each other; their solution possibly springs from the same underlying source. Ongoing laboratory experiments are testing whether quantum theory is indeed an approximation to a non-linear theory, and these experiments also indirectly test the idea that the issues of time and measurement in quantum theory are related to each other. \bigskip \noindent{\bf Acknowledgements:} It is a pleasure to thank Angelo Bassi, Suratna Das, Kinjalk Lochan and Hendrik Ulbricht for collaboration and fruitful discussions. I would like to thank the organizers of the conference Quantum Malta 2012 for holding a very stimulating conference, and the conference participants for insightful discussions. I am grateful to Thomas Filk for illuminating conversations on quantum theory, and for encouraging me to write this article. I would also like to thank Albrecht von M\"{u}ller and the Parmenides Foundation for organizing the Parmenides Workshop: The present - perspectives from physics and philosophy (Wildbad Kreuth, Germany, October, 2006) where some early ideas leading to the present work were described ~\cite{Singh:2009}. This work was made possible through the support of a grant from the John Templeton Foundation. 
The opinions expressed in this publication are those of the author and do not necessarily reflect the views of the John Templeton Foundation. The support of the Foundational Questions Institute is also gratefully acknowledged. \smallskip A much more detailed bibliography of works relevant to this article can be found in ~\cite{RMP:2012}. \newpage
\section{Introduction} \label{sec:Introduction} The aim of this work is to investigate the compatibility of the BRST reduction in deformation quantization, as introduced in \cite{bordemann.herbig.waldmann:2000a}, with the Hermiticity of star products. Deformation quantization as introduced in \cite{bayen.et.al:1978a} by Bayen, Flato, Fronsdal, Lichnerowicz and Sternheimer relies on the idea that the quantization of a symplectic or Poisson manifold $M$ representing the phase space of a classical mechanical system is described by a formal deformation of the commutative algebra of smooth complex-valued functions $\Cinfty(M)$. Explicitly, one defines a \emph{star product} $\star$ on $M$ to be a $\mathbb{C}[[\lambda]]$-bilinear associative product on $\Cinfty(M)[[\lambda]]$ of the form \begin{equation} f\star g = f\cdot g + \sum_{r=1}^\infty \lambda^r C_r(f,g) \end{equation} for any $f,g\in\Cinfty(M)[[\lambda]]$, where $C_1(f,g) - C_1(g,f) = \I \{f,g\}$ and where all the terms $C_r$ are bidifferential operators vanishing on constants. Here the formal parameter $\lambda$ is supposed to be real. Thus the quantum observables are described by the non-commutative algebra $(\Cinfty(M)[[\lambda]], \star)$. In order to get a *-algebra structure on the quantum observables we need to consider a *-involution for the star product. One calls the star product \emph{Hermitian} if the complex conjugation is an involution, i.e. if $\cc{f \star g} = \cc{g} \star \cc{f}$ for all $f,g\in \Cinfty(M)[[\lambda]]$. The existence and classification of general star products on Poisson manifolds have been provided by Kontsevich's famous formality theorem \cite{kontsevich:2003a}, and the existence of Hermitian star products on symplectic manifolds was shown in \cite{neumaier:2001a,neumaier:2002a}.
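As an illustration of the first-order condition $C_1(f,g) - C_1(g,f) = \I \{f,g\}$, one may take the Moyal--Weyl product on $\mathbb{R}^2$ as a stand-in example (an assumption for illustration; the text above does not fix a particular star product), where $C_1(f,g) = \frac{\I}{2}(\partial_q f\,\partial_p g - \partial_p f\,\partial_q g)$. A minimal symbolic check that its antisymmetrization reproduces $\I$ times the canonical Poisson bracket:

```python
# Illustrative sketch (assumption: Moyal-Weyl star product on R^2, which
# the text does not fix): verify that the antisymmetric part of the
# first-order term C_1 reproduces I times the canonical Poisson bracket,
#   C_1(f, g) - C_1(g, f) = I * {f, g}.
import sympy as sp

q, p = sp.symbols('q p', real=True)

def C1(f, g):
    # First-order Moyal term: (I/2) (d_q f d_p g - d_p f d_q g)
    return sp.I / 2 * (sp.diff(f, q) * sp.diff(g, p)
                       - sp.diff(f, p) * sp.diff(g, q))

def poisson(f, g):
    # Canonical Poisson bracket on R^2
    return sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)

# Two arbitrary test observables
f = q**2 * p + sp.exp(q)
g = p**3 - q * p

lhs = sp.expand(C1(f, g) - C1(g, f))
rhs = sp.expand(sp.I * poisson(f, g))
assert sp.simplify(lhs - rhs) == 0
```

The symmetric part of $C_1$ drops out of the commutator, which is why only the antisymmetrization is constrained by the Poisson bracket.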
At the classical level it is possible to perform, under fairly general conditions, the phase space reduction which constructs from the original phase space $M$ one of a smaller dimension denoted by $M_\red$, see e.g. \cite{marsden.weinstein:1974a}. More precisely, suppose that a Lie group $\group{G}$ acts by symplectomorphisms resp. Poisson diffeomorphisms and that the action admits an $\Ad^*$-equivariant momentum map $J\colon M\longrightarrow \liealg{g}^*$ attaining $0\in \liealg{g}^*$ as a regular value, where $\liealg{g}$ denotes the Lie algebra of $\group{G}$. Then $C = J^{-1}(\{0\})$ is a closed embedded submanifold of $M$, called the regular constraint surface. If the action is in addition proper and free, then the reduced manifold $M_\red$ given by the orbit space $C / \group{G}$ is again a symplectic resp. Poisson manifold. In the setting of deformation quantization a quantum reduction scheme has been introduced in \cite{bordemann.herbig.waldmann:2000a}, see also \cite{dippell.esposito.waldmann:2019a} for a more categorical approach to reduction in both the quantum and classical setting. One of the crucial ingredients is the notion of quantum momentum maps \cite{xu:1998a}. Given a star product on $M$, a quantum momentum map is a formal series $\boldsymbol{J} = J + \sum_{r=1}^\infty \lambda^r \boldsymbol{J}_r \colon M \longrightarrow \liealg{g}^*[[\lambda]]$ of smooth $\liealg{g}^*$-valued functions on $M$ such that \begin{equation} \boldsymbol{J}(\xi) \star f - f \star \boldsymbol{J}(\xi) = \I \lambda \{J(\xi),f\} \quad \text{and} \quad \boldsymbol{J}(\xi) \star \boldsymbol{J}(\eta) -\boldsymbol{J}(\eta) \star \boldsymbol{J}(\xi) = \I \lambda \boldsymbol{J}([\xi,\eta]) \end{equation} for all $\xi,\eta\in\liealg{g}$ and $f\in \Cinfty(M)[[\lambda]]$. Here $\boldsymbol{J}(\xi) =\SP{\boldsymbol{J},\xi}$ denotes the pointwise dual pairing, see \cite{gutt.rawnsley:2003a,mueller-bahns.neumaier:2004a}.
The map $\boldsymbol{J}$ is called \emph{quantum momentum map} and the pairs $(\star,\boldsymbol{J})$ are called \emph{equivariant star products}, see also \cite{reichert.waldmann:2016a,reichert:2017a,reichert:2017b} for a classification in the symplectic setting. The BRST approach then provides a tool to construct a reduced star product $\star_\red$ on $M_\red$ that is induced by the equivariant star product $(\star,\boldsymbol{J})$ on $M$ and thus implies that the deformation quantization is compatible with the classical phase space reduction. Here the abbreviation BRST stands for the particle physicists Becchi, Rouet, Stora \cite{becchi.rouet.stora:1976a} and Tyutin \cite{tyutin:2008a}, who investigated gauge invariances by introducing new variables, the ``ghosts'' and ``antighosts''; see also \cite{henneaux.teitelboim:1992a} for further applications in physics. Kostant and Sternberg \cite{kostant.sternberg:1987a} transferred this idea to the setting of symplectic resp. Poisson geometry, introducing the classical BRST algebra $\mathcal{A}^{\bullet,\bullet} =\Anti^\bullet \liealg{g}^* \otimes\Anti^\bullet \liealg{g} \otimes \Cinfty(M)$ with ghost number grading $\mathcal{A}^{(n)} = \bigoplus_{n=k-\ell}\mathcal{A}^{k,\ell}$ and a corresponding super Poisson structure induced by the natural pairing of $\liealg{g}^*$ and $\liealg{g}$. The two characteristic features of the classical BRST algebra are the classical BRST operator $D \colon \mathcal{A}^{(\bullet)} \longrightarrow \mathcal{A}^{(\bullet+1)}$, satisfying $D^2=0$, and the ghost number derivation $\Gh$ inducing the ghost number grading.
With these notions it was shown that one has the following isomorphism of Poisson algebras \begin{equation} \label{eq:ZeroBRSTCohomologyCongMred} \HBRST^{(0)}(\mathcal{A}) \cong \Cinfty(M_\red), \end{equation} where the classical BRST cohomology $\HBRST^{(\bullet)}(\mathcal{A})$ is the cohomology of $(\mathcal{A}^{(\bullet)},\{\argument,\argument\},D)$, see also \cite{forger.kellendonk:1992a}. As mentioned above, Bordemann, Herbig and Waldmann \cite{bordemann.herbig.waldmann:2000a} transferred this result to the setting of deformation quantization and constructed the standard ordered quantum BRST algebra $(\mathcal{A}^{(\bullet)}[[\lambda]],\star_\std,\boldsymbol{D}_\std)$ as formal deformation of the classical BRST algebra. In particular, they proved the quantum analogue of \eqref{eq:ZeroBRSTCohomologyCongMred}, namely \begin{equation} \mathcal{A}_\red = \boldHBRST^{(0)}(\mathcal{A}[[\lambda]]) \cong \Cinfty(M_\red)[[\lambda]]. \end{equation} Here $\boldHBRST^{(\bullet)}(\mathcal{A}[[\lambda]])$ denotes the cohomology of the quantized BRST algebra, the so-called quantum BRST cohomology, and the ghost number zero part $\mathcal{A}_\red$ is called reduced quantum BRST algebra. The above construction induces a star product $\star_\red$ on the reduced manifold, but if the star product on $M$ is Hermitian, the construction does not yield an involution for it. The main problem here is that in general homological algebra is not compatible with involutions and positive definite inner products. Therefore, the new question addressed in this work is whether one can modify the BRST reduction in such a way that it gives in addition an induced *-involution for the reduced star product. Note that there is also a different way to construct involutions for $\star_\red$ via *-representations, see \cite{bordemann:2005a,gutt.waldmann:2010a}. A more general reduction scheme can also be found in \cite{cattaneo.felder:2007a}. 
To this end we introduce a notion of abstract BRST algebras and investigate various concepts of involutions that are compatible with the gradings. We show that graded *-involutions with imaginary ghost operator are the best suited involutions as they guarantee the existence of non-trivial *-representations on pre-Hilbert spaces, which is necessary from the physical point of view to encode for example the superposition principle. Applying these abstract results to the setting of deformation quantization, we construct such an involution for the quantum BRST algebra $\mathcal{A}[[\lambda]]$ by means of a positive definite inner product on the Lie algebra as additional information. In this case, we prove that the so constructed *-algebra has sufficiently many positive linear functionals in the sense of \cite{bursztyn.waldmann:2000a,bursztyn.waldmann:2001a, bursztyn.waldmann:2005a,bursztyn.waldmann:2005b}, guaranteeing a non-trivial *-representation theory via GNS representations, see also \cite{bordemann.waldmann:1998a}. Finally, we introduce the adjoint quantum BRST operator $\boldsymbol{D}_\std^*$ and the quantum BRST quotient \begin{equation} \boldHBRSTtilde^{(\bullet)} (\mathcal{A}[[\lambda]]) = \frac{\ker \boldsymbol{D}_\std\cap \ker \boldsymbol{D}_\std^*} {\image \boldsymbol{D}_\std \cap \image \boldsymbol{D}_\std^*}. \end{equation} We show for compact Lie groups that its zero-th order is isomorphic to the reduced BRST algebra, i.e. \begin{equation} \boldHBRSTtilde^{(0)} (\mathcal{A}[[\lambda]]) \cong \mathcal{A}_\red \cong \Cinfty(M_\red)[[\lambda]]. \end{equation} The crucial ingredient in the proof is a $\Cinfty(C)^\group{G}[[\lambda]]$-valued inner product, similarly to algebra-valued inner products on Hilbert-modules \cite{lance:1995a}, but over $\mathbb{C}[[\lambda]]$ as in \cite{gutt.waldmann:2010a}. 
In particular, this isomorphism induces the complex conjugation as involution for $\star_\red$, hence the BRST reduction of Hermitian star products yields in this setting Hermitian reduced star products. In other words, we show that $\boldHBRST^{(\bullet)}(\mathcal{A}[[\lambda]])$ and $\boldHBRSTtilde^{(\bullet)}(\mathcal{A}[[\lambda]])$ are isomorphic in ghost number zero if the Lie group acting on $M$ is compact, which provides a large class of examples for the physically relevant invariants. The paper is organized as follows: In Section~\ref{section:Preliminaries} we recall the basics concerning the classical BRST algebra and its counterpart in deformation quantization. Then we introduce in Section~\ref{section:AbstractBRSTalgebras} the notion of abstract BRST algebras and look for compatible involutions and their *-representation theory. Having found a suitable concept of involutions, we apply this idea in Section~\ref{section:InvolutionsforQuantumBRSTAlgebra} first to the Grassmann part and then finally to the quantum BRST algebra. The results of this paper are partially based on the master thesis \cite{kraft:2018a}. \section{Preliminaries} \label{section:Preliminaries} \subsection{The Classical BRST Complex and Cohomology} \label{subsection:ClassicalBRSTComplexandCohomology} In this section we recall the description of the classical Marsden-Weinstein reduction via the classical BRST cohomology in order to establish the notation. We refer to \cite{bordemann.herbig.waldmann:2000a,forger.kellendonk:1992a, kostant.sternberg:1987a}. Let us consider a Hamiltonian $\group{G}$-space $(M,\group{G},J)$ consisting of a symplectic or Poisson manifold $(M,\omega)$ resp. $(M,\pi)$ and a Hamiltonian action $\Phi \colon \group{G} \times M \longrightarrow M$ with momentum map $J$. It is well-known that the quotient $M_\red = C / \group{G}$, where $C= J^{-1}(\{0\})$ with $0$ being a regular value of $J$, inherits a symplectic resp.
Poisson structure from $M$ if the action is free and proper, see \cite{marsden.weinstein:1974a}. In addition, we can identify $\Cinfty(M_\red)$ with $\Cinfty(C)^\group{G}$. From now on we call $(M,\group{G},J,C)$ \emph{Hamiltonian $\group{G}$-space with regular constraint surface} and we denote by $\iota \colon C \rightarrow M$ the canonical embedding and by $\mathcal{I}_C = \ker \iota^*$ the \emph{vanishing ideal} of $C$. Using a tubular neighbourhood one can construct a \emph{prolongation map} \begin{equation} \prol \colon \Cinfty(C) \longrightarrow \Cinfty(M) \end{equation} with $\iota^* \prol = \id\at{\Cinfty(C)}$, see \cite[Lemma~2]{bordemann.herbig.waldmann:2000a}. This yields in particular $\Cinfty(C) \cong \Cinfty(M) /\mathcal{I}_C$. Note that if the action is proper on $M$, then the prolongation map can even be chosen to be $\group{G}$-equivariant. We aim to give another description of the Poisson algebra $\Cinfty(M_\red)$. Let us consider the $\mathbb{Z}\times\mathbb{Z}$-graded vector space \begin{equation} \mathcal{A}^{\bullet,\bullet} = \Anti^\bullet \liealg{g}^* \otimes \Anti^\bullet\liealg{g} \otimes \Cinfty(M), \end{equation} where the gradings are also called \emph{ghost} and \emph{antighost degree}. Then $\mathcal{A}$ carries a natural $\mathbb{Z}_2$-graded vector space structure $\mathcal{A} = \Anti^{\text{even}}(\liealg{g}^* \oplus \liealg{g})\otimes \Cinfty(M) \oplus \Anti^{\text{odd}}(\liealg{g}^*\oplus \liealg{g})\otimes \Cinfty(M)$. The $\mathbb{Z}$-grading \begin{equation} \mathcal{A}^{(n)} = \bigoplus_{n= k-\ell} \mathcal{A}^{k,\ell}, \end{equation} is called the \emph{ghost number} or \emph{total degree}. In particular, the ghost number grading and the $\mathbb{Z}\times \mathbb{Z}$-grading induce the same $\mathbb{Z}_2$-grading, so the notions of super derivations with respect to the $\mathbb{Z}_2$-grading and of graded derivations with respect to the ghost number grading coincide. 
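For orientation, consider the simplest illustrative case (not discussed explicitly in the original construction) of a one-dimensional Lie algebra $\liealg{g}$ with basis $\basis{e}_1$ and dual basis $\basis{e}^1$. Then the only nonzero bigraded components and their ghost numbers $n = k - \ell$ are
\begin{align*}
  \mathcal{A}^{0,0} &= \Cinfty(M), & n &= 0, \\
  \mathcal{A}^{1,0} &= \basis{e}^1 \otimes \Cinfty(M), & n &= +1, \\
  \mathcal{A}^{0,1} &= \basis{e}_1 \otimes \Cinfty(M), & n &= -1, \\
  \mathcal{A}^{1,1} &= \basis{e}^1 \wedge \basis{e}_1 \otimes \Cinfty(M), & n &= 0,
\end{align*}
so ghosts raise and antighosts lower the ghost number, and the $\mathbb{Z}_2$-degree is the parity of $k + \ell$.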
With the $\wedge$-product of forms $(\alpha \otimes \xi)\wedge (\beta \otimes \eta) = (-1)^{k\ell} (\alpha \wedge \beta) \otimes (\xi \wedge \eta)$ for $\alpha \in \Anti^\bullet \liealg{g}^*, \beta\in\Anti^k\liealg{g}^*, \xi \in\Anti^\ell\liealg{g}$ and $\eta\in \Anti^\bullet\liealg{g}$ and the pointwise product of functions, $\mathcal{A}$ becomes an associative, super-commutative algebra that is graded with respect to all the above mentioned degrees. The element $ 1 \otimes 1\otimes 1$ is a unit and one has the following differentials: \begin{itemize} \item The vertical differential is the Chevalley-Eilenberg differential \begin{equation} \label{eq:CEonBRST} \delta \colon \Anti^\bullet \liealg{g}^* \otimes \Anti^\bullet\liealg{g} \otimes \Cinfty(M) \longrightarrow \Anti^{\bullet+1} \liealg{g}^* \otimes \Anti^\bullet\liealg{g} \otimes \Cinfty(M), \end{equation} where the representation of $\liealg{g}$ on $\Anti^\bullet \liealg{g}\otimes \Cinfty(M)$ is defined by \begin{equation} \label{eq:ActionofgonKoszulComplex} \liealg{g}\ni \xi \mapsto \rho(\xi) = \ad(\xi) \otimes \id + \id \otimes \{J(\xi),\argument\} \in \End(\Anti^\bullet \liealg{g}\otimes \Cinfty(M)). \end{equation} The corresponding cohomology is denoted by $\HCE^\bullet(\mathcal{A})$. \item The horizontal differential $\del\colon\mathcal{A}^{\bullet,\bullet} \longrightarrow \mathcal{A}^{\bullet,\bullet-1}$ is the extended Koszul differential, explicitly given by $\del(\alpha\otimes x \otimes f) = (-1)^k \alpha \otimes \ins(J)(x\otimes f)$ for all $\alpha \in \Anti^k\liealg{g}^*, x\in\Anti^\bullet\liealg{g}$ and $f\in \Cinfty(M)$. Here $\ins(J)$ means the left insertion of $J$, i.e. the standard interior product. 
\end{itemize} One can show that $(\mathcal{A}^{\bullet,\bullet}, \del, \delta)$ is a double complex and that the total differential \begin{equation} \label{eq:TotalDifferential} D = \delta + 2 \del \colon \mathcal{A}^{(\bullet)} \longrightarrow \mathcal{A}^{(\bullet+1)} \end{equation} is a well-defined coboundary operator on the total complex $\mathcal{A}^{(\bullet)}$, the so-called \emph{classical BRST operator}, see \cite[Section 4]{bordemann.herbig.waldmann:2000a}. Note that the factor $2$ in front of the Koszul differential in \eqref{eq:TotalDifferential} is just a convention and that the ghost and antighost degrees are not respected by $D$, but the total degree is. It turns out that $\mathcal{A}^{(\bullet)}$ also has a natural super Poisson structure induced by the natural pairing of $\liealg{g}$ and $\liealg{g}^*$. Concerning the compatibility of this super Poisson structure with the grading and the BRST operator one finds the following properties: \begin{itemize} \item Let $2\gamma$ be the identity endomorphism of $\liealg{g}$, regarded as an element $\gamma = \frac{1}{2}\basis{e}^a \wedge \basis{e}_a \in \mathcal{A}^{1,1}$ in terms of a basis $\basis{e}_1,\dots, \basis{e}_n$ of $\liealg{g}$ with dual basis $\basis{e}^1,\dots, \basis{e}^n$. Then the ghost number grading of $\mathcal{A}^{(\bullet)}$ is induced by the \emph{ghost number derivation} $\Gh = \{\gamma,\argument\}$, i.e. $\phi \in \mathcal{A}^{(k)}$ if and only if $\Gh\phi = k \phi$. The element $\gamma\in \mathcal{A}^{(0)}$ is called \emph{ghost charge}. \item The total differential $D$ fulfils $D=\{\Theta,\argument\}$ with $\Theta = \Omega + J$ and $\Omega = -\frac{1}{2} [\argument,\argument] = -\frac{1}{4}f^i_{jk} \basis{e}^j\wedge\basis{e}^k\wedge \basis{e}_i$, where $f^i_{jk}$ are the structure constants of $\liealg{g}$.
In particular, the classical BRST operator is an inner Poisson derivation of degree $1$ and the odd element $\Theta \in \mathcal{A}^{(1)}$ is called \emph{classical BRST charge}. \end{itemize} Summarizing, one calls the differential $\mathbb{Z}$-graded super Poisson algebra $(\mathcal{A}^{(\bullet)},D,\{\argument,\argument\})$ \emph{classical BRST algebra}, and the corresponding cohomology $\HBRST^{(\bullet)}(\mathcal{A}) = \ker D/\image D$ \emph{classical BRST cohomology}. Since the classical BRST operator is an inner Poisson derivation, it immediately follows that $\HBRST^{(\bullet)}(\mathcal{A})$ inherits a $\mathbb{Z}$-graded super Poisson structure from the classical BRST algebra. Moreover, $[1] \in \HBRST^{(\bullet)}(\mathcal{A})$ is a unit with respect to the $\wedge$-product, see \cite[Lemma~9]{bordemann.herbig.waldmann:2000a}. It has been proved that in ghost number zero one has the isomorphism \begin{equation} \label{eq:ClassicalIsoBRSTCE} \HBRST^{(0)}(\mathcal{A}) \cong \HCE^0(\liealg{g}, \Cinfty(C)) \cong \Cinfty(M_\red) \end{equation} of Poisson algebras, inducing a Poisson structure on the reduced manifold, see \cite[Prop. 10]{bordemann.herbig.waldmann:2000a}. \subsection{The Quantum BRST Complex and Cohomology} \label{section:QuantumBRSTComplexandCohomology} In a similar fashion, one can now perform all the above constructions in the framework of deformation quantization, where we follow again \cite{bordemann.herbig.waldmann:2000a}. The underlying vector space of the quantum BRST algebra is the space of formal power series $\mathcal{A}^{(\bullet)}[[\lambda]]$ with coefficients in the classical BRST algebra, inheriting all the gradings of $\mathcal{A}$. Let $(M,\group{G},J)$ be a Hamiltonian $\group{G}$-space with star product $\star$.
A \emph{quantum momentum map} is a formal series $\boldsymbol{J} = \sum_{r=0}^\infty \lambda^r \boldsymbol{J}_r\colon M \longrightarrow \liealg{g}^*[[\lambda]]$ of smooth functions $\boldsymbol{J}_r\colon M\longrightarrow \liealg{g}^*$ such that $\boldsymbol{J}_0 = J$ and such that $\boldsymbol{J}$ satisfies \begin{equation} \label{eq:DefiEquivariantStarProduct} \boldsymbol{J}(\xi) \star \boldsymbol{J}(\eta) - \boldsymbol{J}(\eta) \star \boldsymbol{J}(\xi) = \I\lambda \boldsymbol{J}([\xi,\eta]) \quad \text{and} \quad \boldsymbol{J}(\xi)\star f - f \star \boldsymbol{J}(\xi) = \I \lambda \{J(\xi),f\} \end{equation} for all $\xi,\eta \in \liealg{g}$ and $f\in \Cinfty(M)[[\lambda]]$. The pair $(\star,\boldsymbol{J})$ is called \emph{equivariant star product}. The first property in \eqref{eq:DefiEquivariantStarProduct} is also called \emph{quantum covariance} and ensures that the quantum momentum map is a morphism of Lie algebras. Moreover, it implies that a Lie algebra representation $\boldsymbol{\rho}_M$ is given by \begin{equation} \label{eq:gRepQuantumAlgM} \boldsymbol{\rho}_M(\xi) = \frac{1}{\I \lambda} \ad(\boldsymbol{J}(\xi)) \end{equation} for $\xi\in\liealg{g}$. The second property implies that the star product is also $\group{G}$-\emph{invariant}, i.e. satisfies $\Phi_g^*(f\star h)=\Phi_g^*(f)\star \Phi_g^*(h)$ for all $g \in \group{G}, f,h \in \Cinfty(M)[[\lambda]]$, and that $\boldsymbol{\rho}_M$ coincides with the classical action $\rho_M(\xi) = - \Lie_{\xi_M}$. The quadruple $(M,\star,\group{G},\boldsymbol{J})$ is called \emph{Hamiltonian quantum $\group{G}$-space} if $(M,\group{G},J)$ is a Hamiltonian $\group{G}$-space with equivariant star product $(\star,\boldsymbol{J})$. Similarly, we call $(M,\star,\group{G},\boldsymbol{J},C)$ a \emph{Hamiltonian quantum $\group{G}$-space with regular constraint surface} if $(M,\group{G},J,C)$ is a Hamiltonian $\group{G}$-space with regular constraint surface and equivariant star product $(\star,\boldsymbol{J})$.
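To see the defining relations of a quantum momentum map at work in the simplest possible setting, one may take (as an illustrative assumption, not made in the text) the Moyal--Weyl product on $\mathbb{R}^2 \cong T^*\mathbb{R}$ with the translation action generated by $J = p$. Since $p$ is linear, the Moyal series for $p \star f - f \star p$ terminates at first order in $\lambda$, so the classical momentum map already satisfies the quantum momentum map condition without higher-order corrections:

```python
# Illustrative sketch (assumption: Moyal-Weyl product on R^2 = T*R and
# the translation action generated by J = p; neither is fixed by the
# text above).  Because p is linear, the Moyal series for p*f - f*p
# terminates at first order in lambda, so truncating there is exact and
#   J * f - f * J = I lambda {J, f}
# holds with J = p, i.e. no quantum corrections J_r (r >= 1) are needed.
import sympy as sp

q, p, lam = sp.symbols('q p lambda', real=True)

def star1(f, g):
    # Moyal-Weyl product truncated at first order in lambda
    return sp.expand(f * g + sp.I * lam / 2 * (sp.diff(f, q) * sp.diff(g, p)
                                               - sp.diff(f, p) * sp.diff(g, q)))

def poisson(f, g):
    # Canonical Poisson bracket on R^2
    return sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)

f = q**3 + q * p**2

commutator = sp.expand(star1(p, f) - star1(f, p))
assert sp.simplify(commutator - sp.I * lam * poisson(p, f)) == 0
```

For a non-linear classical momentum map the higher Moyal terms would survive, which is exactly the situation where the corrections $\boldsymbol{J}_r$ become necessary.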
Recall that in the symplectic case one can construct for every proper and strongly Hamiltonian group action an equivariant star product $(\star,J)$, i.e. with $\boldsymbol{J}=J$, see \cite[Sect.~5.8]{fedosov:1996a}. Such star products are also called \emph{strongly invariant}. Therefore, we assume from now on that $(M,\star,\group{G},\boldsymbol{J},C)$ is a Hamiltonian quantum $\group{G}$-space with regular constraint surface. A quantized version of the Grassmann part $\Anti^\bullet(\liealg{g}^* \oplus \liealg{g})$ can be constructed in the following way, see \cite[Sect.~5]{bordemann.herbig.waldmann:2000a}. Let $\mu$ denote the $\wedge$-product. The \emph{standard ordered star product} $\circ_\std$ for $ \Anti^\bullet ( \liealg{g}_\mathbb{C}^*\oplus \liealg{g}_\mathbb{C})[[\lambda]]$ is defined by \begin{equation} \label{eq:DefiStarStdGrassmann} a \circ_\std b = \mu \circ e^{2\I\lambda P^*} a \otimes b \end{equation} with $P^* = \jns(\basis{e}^k) \otimes \ins(\basis{e}_k)$, where $\jns$ denotes the right insertion and $\ins$ the left insertion. It is a formal deformation of the $\wedge$-product: it is a $\mathbb{C}[[\lambda]]$-bilinear, associative map such that for all homogeneous $a,b\in\Anti^\bullet ( \liealg{g}_\mathbb{C}^*\oplus \liealg{g}_\mathbb{C})[[\lambda]]$ one has $1\circ_\std a = a = a\circ_\std 1$ and \begin{equation} a \circ_\std b = a \wedge b + \sum_{r=1}^\infty \lambda^r C_r(a,b) \end{equation} with $C_1(a,b)-(-1)^{\abs{a}\abs{b}}C_1(b,a) = \I \{a,b\}$. Tensoring the standard ordered star product on the Grassmann part with the equivariant star product $(\star,\boldsymbol{J})$ for the functions, we obtain an associative product $\star_\std$ for $\mathcal{A}[[\lambda]]$. 
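For illustration, one can evaluate \eqref{eq:DefiStarStdGrassmann} in the simplest case of a one-dimensional $\liealg{g}$ with basis $\basis{e}_1$ and dual basis $\basis{e}^1$ (this example is not contained in the original text, and we use the straightforward evaluation of $P^*$ on homogeneous factors; Koszul sign conventions may alter the overall sign). Since $\basis{e}^1$ has no $\liealg{g}$-component and $\basis{e}_1$ has no $\liealg{g}^*$-component, the operator $P^*$ annihilates $\basis{e}^1 \otimes \basis{e}_1$, whereas $P^*(\basis{e}_1 \otimes \basis{e}^1) = 1 \otimes 1$ and $(P^*)^2(\basis{e}_1 \otimes \basis{e}^1) = 0$. Hence
\begin{equation*}
  \basis{e}^1 \circ_\std \basis{e}_1 = \basis{e}^1 \wedge \basis{e}_1
  \qquad \text{and} \qquad
  \basis{e}_1 \circ_\std \basis{e}^1 = \basis{e}_1 \wedge \basis{e}^1 + 2\I\lambda,
\end{equation*}
so the super commutator yields the CAR-type relation $\basis{e}^1 \circ_\std \basis{e}_1 + \basis{e}_1 \circ_\std \basis{e}^1 = 2\I\lambda$: all contractions act from one side only, which is the sense in which the product is ``standard ordered''.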
Explicitly, we have for $a, b\in \Anti^\bullet (\liealg{g}_\mathbb{C}^*\oplus \liealg{g}_\mathbb{C})[[\lambda]]$ and $f,g\in \Cinfty(M)[[\lambda]]$ \begin{equation} \label{eq:StarProductQuantumBRSTAlgebra} (a \otimes f) \star_\std (b \otimes g) = (a \circ_\std b) \otimes (f \star g). \end{equation} In analogy to the classical case one defines the \emph{standard ordered quantum BRST charge} by \begin{equation} \Theta_\std = \Omega + \boldsymbol{J} + \I \lambda \chi. \end{equation} Here $\chi\in \liealg{g}^* \subset \mathcal{A}^{(1)}[[\lambda]]$ is defined by $\chi(\xi)=\frac{1}{2}\tr(\ad(\xi))$ for all $\xi\in\liealg{g}$, whence $\Theta_\std$ coincides in the zero-th order of $\lambda$ with the classical $\Theta$. One can compute $\Theta_\std \star_\std \Theta_\std = 0$. Consequently, the \emph{standard ordered quantum BRST operator} is given by \begin{equation} \boldsymbol{D}_\std = \frac{1}{\I\lambda} \ad_\std(\Theta_\std), \end{equation} where $\ad_\std$ denotes the taking of the super commutator with respect to the standard ordered star product, and it is also a deformation of the classical BRST operator $D$. Then the \emph{standard ordered BRST algebra} $(\mathcal{A}^{(\bullet)}[[\lambda]], \star_\std, \boldsymbol{D}_\std)$ becomes a differential $\mathbb{Z}$-graded algebra with unit over $\mathbb{C}[[\lambda]]$. The standard ordered quantum BRST operator splits into two differentials: \begin{itemize} \item The \emph{quantized Chevalley-Eilenberg differential} $\boldsymbol{\delta}\colon \mathcal{A}^{\bullet,\bullet}[[\lambda]] \longrightarrow \mathcal{A}^{\bullet +1,\bullet}[[\lambda]]$, i.e. 
the Chevalley-Eilenberg differential on the quantum BRST complex induced by the quantum representation \begin{equation} \label{eq:gRepQuantumKoszul} \liealg{g} \ni \xi \longmapsto \boldsymbol{\rho}(\xi) = \ad(\xi)\otimes \id + \id \otimes \boldsymbol{\rho}_M(\xi), \end{equation} where $\boldsymbol{\rho}_M$ is the representation of $\liealg{g}$ on $\Cinfty(M)[[\lambda]]$ as in \eqref{eq:gRepQuantumAlgM}. \item The \emph{quantized Koszul differential} $\boldsymbol{\del} \colon \mathcal{A}^{\bullet,\bullet}[[\lambda]] \longrightarrow \mathcal{A}^{\bullet ,\bullet-1}[[\lambda]]$ defined by \begin{equation} \label{eq:QuantizedKoszulDifferential} \boldsymbol{\del}(x \otimes f) = \ins(\basis{e}^a) x \otimes f \star \boldsymbol{J}_a + \frac{\I \lambda}{2} \left( f^b_{ab} \ins(\basis{e}^a) + f^c_{ab} \basis{e}_c \wedge \ins(\basis{e}^a)\ins(\basis{e}^b) \right) \left(x\otimes f\right) \end{equation} for $x \in \Anti^\bullet (\liealg{g}_\mathbb{C}^*\oplus \liealg{g}_\mathbb{C})[[\lambda]]$ and $f \in \Cinfty(M)[[\lambda]]$. Note that the definition is independent of the basis. \end{itemize} By the equivariance of $\star$ we have $\boldsymbol{\rho} = \rho$ and hence the equality $\boldsymbol{\delta}=\delta$. Moreover, as in the classical case one has the splitting \begin{equation} \label{eq:StandardBRSTsplitting} \boldsymbol{D}_\std = \boldsymbol{\delta} + 2 \boldsymbol{\del}, \end{equation} see \cite[Thm.~17]{bordemann.herbig.waldmann:2000a} for further details. The corresponding cohomology $\ker\boldsymbol{D}_\std / \image \boldsymbol{D}_\std$ of the standard ordered quantum BRST algebra is denoted by $\boldHBRST^{(\bullet)}(\mathcal{A}[[\lambda]])$ and called \emph{quantum BRST cohomology}. Similarly to the classical setting, the quantum BRST cohomology is a $\mathbb{Z}$-graded associative algebra with $\mathbb{C}[[\lambda]]$-bilinear product $\star_\std$ induced by the associative multiplication $\star_\std$ of the quantum BRST algebra. 
One has $[a]\star_\std[b]=[a\star_\std b]$ for all $[a],[b]\in \boldHBRST^{(\bullet)}(\mathcal{A}[[\lambda]])$ with $\boldsymbol{D}_\std a = 0 = \boldsymbol{D}_\std b$ and $[1]\in \boldHBRST^{(\bullet)}(\mathcal{A}[[\lambda]])$ is a unit with respect to $\star_\std$. Finally we recall that there exists a \emph{deformed restriction map} \begin{equation} \boldsymbol{\iota^*} = \iota^* \circ S = \sum_{r=0}^\infty \lambda^r \boldsymbol{\iota^*}_r \colon \Anti^\bullet\liealg{g}^* \otimes \Cinfty(M)[[\lambda]] \longrightarrow \Anti^\bullet\liealg{g}^* \otimes \Cinfty(C)[[\lambda]], \end{equation} uniquely determined by the properties \begin{align} \boldsymbol{\iota^*}_0 = \iota^*, \quad \boldsymbol{\iota^*} \boldsymbol{\del}\at{\mathcal{A}^{\bullet,1}[[\lambda]]} = 0 \quad \text{and} \quad \boldsymbol{\iota^*} \prol = \id_{\Anti^\bullet\liealg{g}^* \otimes \Cinfty(C)[[\lambda]]}. \end{align} Here $S= \id_{\Cinfty(M)} + \sum_{r=1}^\infty \lambda^r S_r$ is a formal series of linear differential operators on $\Cinfty(M)$ with $S_r$ vanishing on constants. If the action of $\group{G}$ is in addition proper on $M$ then $S$ can be chosen to be $\group{G}$-equivariant. Extending $\boldsymbol{\iota^*}$ by zero to the whole BRST algebra $\mathcal{A}^{(\bullet)}[[\lambda]]$ one gets the following result, see \cite[Prop.~26, Thm.~29, Thm.~32]{bordemann.herbig.waldmann:2000a} for a proof and further details. \begin{proposition} \label{prop:QuantumBRSTCohomology} Let $(M,\star,\group{G},\boldsymbol{J},C)$ be a Hamiltonian quantum $\group{G}$-space with regular constraint surface and proper action on $M$.
\begin{enumerate2} \item There exists a $\group{G}$-equivariant chain homotopy $\widehat{\boldsymbol{h}}$ for the augmented standard ordered BRST operator \begin{equation} \label{eq:AugentedBRSTOperator} \widehat{\boldsymbol{D}}_\std = \boldsymbol{D}_\std + \boldsymbol{\delta}^c +2\boldsymbol{\iota^*} \in\End \left( \left(\Anti^\bullet\liealg{g}^*\otimes \Cinfty(C)[[\lambda]]\right) \oplus \mathcal{A}[[\lambda]] \right), \end{equation} with $\boldsymbol{\delta}^c$ being the Chevalley-Eilenberg differential on $\Anti^\bullet\liealg{g}^*\otimes \Cinfty(C)[[\lambda]]$, where all maps are defined to be zero on the domains on which they were previously not defined. In particular, one has $\widehat{\boldsymbol{D}}_\std \widehat{\boldsymbol{h}} +\widehat{\boldsymbol{h}}\widehat{\boldsymbol{D}}_\std = 2\id$ with $\widehat{\boldsymbol{h}} = \prol + \boldsymbol{h}$ and \begin{equation} \boldsymbol{h} \colon \Anti^\bullet \liealg{g}^* \otimes \Anti^\bullet \liealg{g} \otimes \Cinfty(M)[[\lambda]] \longrightarrow \Anti^\bullet \liealg{g}^* \otimes\Anti^{\bullet+1} \liealg{g} \otimes \Cinfty(M)[[\lambda]]. \end{equation} \item The $\mathbb{C}[[\lambda]]$-linear map \begin{equation} \label{eq:QuantumIsoBRSTCE} \Psi \colon \boldHBRST^{(\bullet)}(\mathcal{A}[[\lambda]]) \longrightarrow \boldHCE^\bullet(\liealg{g},\Cinfty(C)[[\lambda]]) \cong \HCE^\bullet(\liealg{g},\Cinfty(C))[[\lambda]], \quad [a] \longmapsto [\boldsymbol{\iota^*} a] \end{equation} is an isomorphism with inverse $ \Psi^{-1}([c])=\left[\widehat{\boldsymbol{h}}c\right]$ for $[c]\in \boldHCE^\bullet(\liealg{g},\Cinfty(C)[[\lambda]])$. 
\item If the action is in addition free on $C$, then $\boldHBRST^{(0)}(\mathcal{A}[[\lambda]]) \cong \Cinfty(M_\red)[[\lambda]]$ and this construction induces a star product $\star_\red$ on $M_\red$ via \begin{equation} \label{eq:ReducedStarProduct} \pi^*(u_1\star_\red u_2) = \boldsymbol{\iota^*} (\prol(\pi^*u_1)\star \prol(\pi^* u_2)) \end{equation} for all $u_1,u_2\in\Cinfty(M_\red)[[\lambda]]$, the so-called \emph{reduced star product}. \end{enumerate2} \end{proposition} To shorten the notation we call $\mathcal{A}_\red = \boldHBRST^{(0)}(\mathcal{A}[[\lambda]])$ the \emph{reduced quantum BRST algebra}. \section{Abstract BRST Algebras and Different Types of Involutions} \label{section:AbstractBRSTalgebras} \subsection{Abstract BRST Algebras} \label{subsection:AbstractBRSTalgebras} Let $\ring{R}$ be an ordered ring with $\mathbb{Q}\subseteq \ring{R}$ and $\ring{C}=\ring{R}(\I)$ its complexification with $\I^2=-1$. The main example is $\ring{R} = \mathbb{R}[[\lambda]]$; see \cite{bordemann.waldmann:1998a,bursztyn.waldmann:2001a,bursztyn.waldmann:2005b} for a detailed discussion of *-representations and the GNS construction in this abstract setting. In the following, $\mathcal{A}$ denotes a $\mathbb{Z}_2$-graded associative algebra over $\ring{C}$ and $\ad(a)=[a,\argument]$ the super commutator with respect to the $\mathbb{Z}_2$-grading. \begin{definition}[BRST algebra] \label{defi:AbstractBRSTalgebra} Let $\mathcal{A}=\mathcal{A}_0 \oplus \mathcal{A}_1$ be a $\mathbb{Z}_2$-graded associative algebra over $\ring{C}=\ring{R}(\I)$. \begin{enumerate2} \item An even element $\gamma \in \mathcal{A}_0$ such that the inner derivation $\Gh = \ad(\gamma)=[\gamma,\argument]$ induces a $\mathbb{Z}$-grading on $\mathcal{A}$ by \begin{equation} \mathcal{A}^{(\bullet)} = \bigoplus_{k\in \mathbb{Z}} \mathcal{A}^{(k)} \quad \text{with} \quad \mathcal{A}^{(k)} = \{a\in\mathcal{A}\mid \Gh a = ka \} \end{equation} is called \emph{ghost charge}.
The operator $\Gh$ is called \emph{ghost number operator} and the induced grading is called \emph{ghost number grading}. \item An odd element $\Theta$ with ghost number $+1$ and square zero, i.e. \begin{equation} \Theta \in \mathcal{A}^{(1)}_1 \quad \text{and} \quad \Theta^2 = 0, \end{equation} is called \emph{BRST charge}. The corresponding inner derivation $D =\ad(\Theta)$ is called \emph{BRST operator}. \end{enumerate2} The triple $(\mathcal{A},\gamma,\Theta)$ is then called \emph{BRST algebra} over $\ring{C}$. A \emph{morphism} $\Phi\colon (\mathcal{A},\gamma,\Theta) \longrightarrow (\mathcal{A}',\gamma',\Theta')$ of BRST algebras is an even morphism of $\mathbb{Z}_2$-graded associative algebras $\Phi\colon\mathcal{A}\longrightarrow \mathcal{A}'$ with \begin{equation} \label{eq:MorphismBRSTAlgebras} \Phi(\gamma) = \gamma' \quad \text{and} \quad \Phi(\Theta) = \Theta', \end{equation} and the category of BRST algebras is denoted by $\BRSTalgebras$. \end{definition} Note that these properties imply that $\Phi$ preserves the $\mathbb{Z}$-grading as well. We often encounter the setting in which the $\mathbb{Z}_2$-grading is induced by the $\mathbb{Z}$-grading. In addition, $D = \ad(\Theta) \colon \mathcal{A}^{(\bullet)} \longrightarrow \mathcal{A}^{(\bullet +1)}$ and $D^2=0$ imply that the BRST operator is a coboundary operator and thus defines a cohomology: \begin{definition}[BRST cohomology] \label{definition:AbstractBRSTCohomology} Let $(\mathcal{A},\gamma,\Theta)$ be a BRST algebra. Then \begin{equation} \HBRST^{(\bullet)} (\mathcal{A}) = \bigoplus_{k\in\mathbb{Z}} \HBRST^{(k)} (\mathcal{A}) \quad \text{with} \quad \HBRST^{(k)} (\mathcal{A}) = \frac{\ker D\at{\mathcal{A}^{(k)}}} {\image D\at{\mathcal{A}^{(k-1)}}} \end{equation} is called \emph{BRST cohomology} of $\mathcal{A}$. The \emph{reduced BRST algebra} is defined by \begin{equation} \label{eq:AbstractReducedBRSTAlgebra} \mathcal{A}_\red = \mathrm{H}^{(0)}_{\BRST,0}(\mathcal{A}).
\end{equation} \end{definition} Since $D$ is an odd inner derivation, the cohomology is again a $\mathbb{Z}\times\mathbb{Z}_2$-graded associative algebra and $\mathcal{A}_\red$ is a well-defined associative subalgebra. The ghost number operator acts on $\HBRST^{(\bullet)} (\mathcal{A})$ via \begin{equation} \Gh_\BRST [a] = [\Gh a], \end{equation} i.e. $\HBRST^{(k)} (\mathcal{A})= \{[a]\in \HBRST^{(\bullet)} (\mathcal{A}) \mid \Gh_\BRST[a] = k[a]\} $. However, it is no longer an inner derivation, since $\gamma$ is not a cocycle: \begin{equation} D \gamma = [\Theta,\gamma] = - [\gamma,\Theta] = - \Theta. \end{equation} If the $\mathbb{Z}_2$-grading is induced by the $\mathbb{Z}$-grading, then we have $\mathcal{A}_\red = \HBRST^{(0)}(\mathcal{A})$ as in the case of the quantum BRST cohomology. A straightforward computation shows that the assignment of a BRST algebra $(\mathcal{A},\gamma,\Theta)$ to its BRST cohomology $\HBRST^{(\bullet)}(\mathcal{A})$ and reduced BRST algebra $\mathcal{A}_\red$ is a functor from $\BRSTalgebras$ into the category of $\mathbb{Z}\times \mathbb{Z}_2$-graded algebras resp. algebras. Let us consider a *-involution for $\mathcal{A}$. Since we aim to get an induced involution on $\mathcal{A}_\red= \mathrm{H}^{(0)}_{\BRST,0}(\mathcal{A})$, the involution on the whole of $\mathcal{A}$ should respect the $\mathbb{Z}_2$-grading. We have two main possibilities for involutions on a $\mathbb{Z}_2$-graded algebra $\mathcal{A}$: \begin{itemize} \item \emph{Graded *-involutions} $I\colon \mathcal{A} \longrightarrow \mathcal{A}$, i.e. $\ring{C}$-antilinear involutive even maps with \begin{equation} \label{eq:DefiGradedInvolution} I(ab) = I(b)I(a) \end{equation} for all $a,b\in \mathcal{A}$. The pair $(\mathcal{A},I)$ is called \emph{graded *-algebra}. \item \emph{Super *-involutions} $S\colon \mathcal{A} \longrightarrow \mathcal{A}$, i.e.
$\ring{C}$-antilinear involutive even maps with \begin{equation} \label{eq:DefiSuperInvolution} S(ab) = (-1)^{\abs{a}\abs{b}} S(b) S(a) \end{equation} for all homogeneous elements $a,b\in \mathcal{A}$ with degrees $\abs{a},\abs{b}$. The pair $(\mathcal{A},S)$ is called \emph{super *-algebra}. \end{itemize} A short computation shows that applying a graded resp. super *-involution to an inner derivation $\ad(a)$ produces an additional minus sign. This motivates the following rescaling: From now on $\gamma\in\mathcal{A}^{(0)}_0$ and $\Theta\in\mathcal{A}^{(1)}_1$ are the elements such that \begin{equation} \label{eq:RescaledGhostBRST} \Gh = \I\ad(\gamma) \quad \text{and} \quad D = \I \ad(\Theta). \end{equation} Note that this normalization changes neither the cohomology of $D$ nor the grading induced by $\Gh$, and that in the case of the quantum BRST algebra we already have a corresponding factor $\frac{1}{\I}$ in front of the super commutator. One can show that the notions of super and graded *-involutions can be mutually exchanged by rescaling the odd component of the involution by $\pm \I$. Thus it only remains to investigate possible compatibilities of involutions with the ghost number grading. As we ultimately want an induced *-involution on the even ghost number zero part of the BRST cohomology, the ghost number zero part should be invariant under the involution, too. There are again \emph{two main possibilities}: an involution that leaves the ghost number grading invariant, or an involution that inverts the ghost number grading. \begin{remark}[Involution leaving ghost number invariant] A super *-involution that leaves the ghost number degree invariant and has a Hermitian BRST charge $\Theta^* = \Theta$ induces a super *-involution on the cohomology and a *-involution on $\mathcal{A}_\red$ in a functorial way.
However, this kind of involution has a big disadvantage in connection with *-representations $\pi$ on pre-Hilbert spaces over $\ring{C}$, see \cite{bursztyn.waldmann:2001a}: In this case $\pi(\Theta)$ would vanish. The induced inner product on the physical space is in general still not positive definite, which leads to so-called no-ghost theorems, compare e.g. \cite[Sect.~14.2]{henneaux.teitelboim:1992a}. \end{remark} The above remark is a consequence of a more general problem: \begin{remark} Homological algebra is in general not compatible with *-involutions resp. with the positive definiteness of inner products. In dimension three or higher there is in general no canonically induced inner product on the cohomology, as the following simple example shows: Consider $\mathbb{R}^3$ with Euclidean scalar product and differential \begin{equation*} \D = \left( \begin{smallmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{smallmatrix} \right). \end{equation*} Then $\ker \D / \image \D \cong \Span\{ ( 0 \; 1 \; 0)^\top\}$ and the quotient map is not compatible with the inner product. \end{remark} We are mainly interested in non-trivial *-representations of the reduced quantum BRST algebra with involution, one motivation being the implementation of the superposition principle. Therefore, the above lack of positivity leads us to the study of other possibilities for involutions $^*$ on the BRST algebra $\mathcal{A}$ such that $D$ and $\Theta$ are not Hermitian, i.e. $D \neq D^*$ and $\Theta \neq \Theta^*$. \subsection{Graded *-Involution with Imaginary Ghost Operator} \label{subsection:GradedInvolutionwithImaginaryGhost} Consider a BRST algebra $(\mathcal{A},\gamma,\Theta)$ which has an additional graded *-involution $a\mapsto a^*$. Since super and graded *-involutions can be mutually exchanged, this is merely a matter of convenience and not an essential choice.
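The exchange can be made explicit; the following short verification is ours and is not spelled out in the references. Given a super *-involution $S$, rescaling its odd component by $\I$ yields a graded *-involution $I$:

```latex
% Define I from S by rescaling the odd component:
I\at{\mathcal{A}_0} = S\at{\mathcal{A}_0},
\qquad
I\at{\mathcal{A}_1} = \I\, S\at{\mathcal{A}_1}.
% For odd homogeneous a, b the product ab is even, hence
I(ab) = S(ab) = (-1)^{\abs{a}\abs{b}} S(b)S(a) = - S(b)S(a)
      = (\I S(b))(\I S(a)) = I(b)I(a),
% and I is involutive by antilinearity:
I(I(a)) = I(\I\, S(a)) = -\I\, I(S(a)) = -\I \cdot \I\, S(S(a)) = a.
```

The remaining cases with at most one odd factor are checked in the same way, and the inverse passage from $I$ back to $S$ is the rescaling of the odd component by $-\I$.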
\begin{definition}[BRST *-algebra with imaginary ghost operator] \label{defi:BRSTstarAlgebrawithImaginaryGhost} Let $\mathcal{A}$ be a BRST algebra together with a graded *-involution $^*$ satisfying $\Gh^* =\I \ad(\gamma^*)= - \Gh$. Then one calls $(\mathcal{A},\gamma,\Theta,^*)$ \emph{BRST *-algebra with imaginary ghost operator}. A \emph{morphism} $\Phi\colon (\mathcal{A},^*)\longrightarrow (\mathcal{B},^*)$ of BRST *-algebras with imaginary ghost operators is a morphism $\Phi\colon\mathcal{A}\longrightarrow\mathcal{B}$ of BRST algebras that fulfils \begin{equation} \Phi(a^*) = \Phi(a)^* \end{equation} for all $a\in \mathcal{A}$. The corresponding category of BRST *-algebras with imaginary ghost operators is denoted by $\iBRSTalg$. \end{definition} Since the graded *-involution is compatible with the $\mathbb{Z}_2$-grading and since it inverts the ghost number grading, we directly see that $\mathcal{A}^{(0)}$ becomes a $\mathbb{Z}_2$-graded *-subalgebra of $\mathcal{A}$. Similarly, $\mathcal{A}^{(0)}_0$ becomes a *-subalgebra of $\mathcal{A}$. Moreover, we obtain the following behaviour of the ghost charge and the BRST operator under the graded *-involution. \begin{lemma} Let $(\mathcal{A},^*)$ be a BRST algebra with imaginary ghost operator. \begin{enumerate2} \item There exists a unique central Hermitian element $c\in \mathcal{A}^{(0)}_0$ such that one has \begin{equation} \gamma^* = -\gamma +c. \end{equation} \item Define the \emph{adjoint BRST operator} $D^* \colon \mathcal{A}^{(\bullet)}\longrightarrow \mathcal{A}^{(\bullet-1)}$ by $D^*=\I\ad(\Theta^*)$. Then one has $(D^*)^2=0$ and \begin{equation} \label{eq:Dstara} D^* a = (-1)^{\abs{a}}(Da^*)^* \end{equation} for homogeneous $a\in \mathcal{A}$ with degree $\abs{a}$. 
\end{enumerate2} \end{lemma} \begin{proof} Concerning the first point, we have $-\I\ad(\gamma) = \I \ad(\gamma^*)$, so $\gamma^* + \gamma$ lies in the center of $\mathcal{A}$; since moreover $\gamma,\gamma^* \in \mathcal{A}^{(0)}_0$, the statement follows. For the second point note that $\Theta\in\mathcal{A}^{(1)}_1$, so $\Theta^* \in \mathcal{A}^{(-1)}_1$, and \eqref{eq:Dstara} follows from a short computation. \end{proof} The element $\Delta = \Theta\Theta^* + \Theta^*\Theta = \Delta^* \in \mathcal{A}^{(0)}_0$ is called \emph{Laplacian} and will play an important role in the representation theory. It follows that $\Theta$ and $\Theta^*$ are either linearly independent or both equal to zero as $\mathcal{A}^{(1)} \cap \mathcal{A}^{(-1)} = \{0\}$. Thus the kernel of $D$ is not a *-subalgebra of $\mathcal{A}$ and there is no obvious way to obtain a *-structure on the BRST cohomology $\HBRST^{(\bullet)}(\mathcal{A}) = \ker D/\image D$ of $\mathcal{A}$ or at least on the reduced algebra. Therefore, the idea is to define a new quotient $(\ker D\cap \ker D^*) / (\image D\cap \image D^*)$ and to show that this construction yields a well-defined $\mathbb{Z}\times\mathbb{Z}_2$-graded algebra with graded *-involution. To this end we investigate the relation between the *-involution and the elements in $\ker D\cap \ker D^*$ and $\image D \cap \image D^*$. \begin{lemma} \label{lemma:SectionImagesStarSubideal} Let $(\mathcal{A},^*)$ be a BRST *-algebra with imaginary ghost operator. Then one has for all $a\in\mathcal{A}$: \begin{enumerate2} \item $ a\in \ker D\cap \ker D^* \quad \Longleftrightarrow \quad a,a^* \in \ker D \quad \Longleftrightarrow \quad a,a^* \in \ker D^*. $ \item $ a\in \image D\cap \image D^* \quad \; \, \Longleftrightarrow \quad a,a^* \in \image D \quad \; \Longleftrightarrow \quad a,a^* \in \image D^*$.
\end{enumerate2} Consequently, the intersection $\ker D\cap \ker D^*$ is a $\mathbb{Z}\times\mathbb{Z}_2$-graded *-subalgebra of $\mathcal{A}$ and the set $\image D\cap \image D^* \subseteq \ker D\cap \ker D^*$ is a $\mathbb{Z}\times\mathbb{Z}_2$-graded *-ideal therein. \end{lemma} \begin{proof} The first two parts follow directly from \eqref{eq:Dstara}. In addition, we have for all homogeneous elements $a\in \ker D\cap \ker D^*$ and $De = D^* f \in \image D\cap \image D^*$ \begin{align*} a De = (-1)^{\abs{a}} D(ae) = (-1)^{\abs{a}} D^*(af), \end{align*} thus $\image D\cap \image D^*$ is a *-ideal in $\ker D\cap \ker D^*$. \end{proof} Hence we know that $(\ker D\cap \ker D^*) / (\image D\cap \image D^*)$ becomes a $\mathbb{Z}\times \mathbb{Z}_2$-graded algebra as well. \begin{definition}[Reduced BRST *-algebra] Let $(\mathcal{A},^*)$ be a BRST *-algebra with imaginary ghost operator. The \emph{BRST quotient} is defined by \begin{equation} \label{eq:BRSTQuotient} \HBRSTtilde^{(\bullet)}(\mathcal{A}) = \frac{\ker D\cap \ker D^*}{\image D\cap \image D^*}, \end{equation} and by \begin{equation} \label{eq:ReducedBRSTstarAlgebra} \widetilde{\mathcal{A}}_\red = \widetilde{\mathrm{H}}^{(0)}_{\BRST,0}(\mathcal{A}) \end{equation} one denotes the corresponding \emph{reduced BRST *-algebra}. \end{definition} Note that $\HBRSTtilde^{(\bullet)}(\mathcal{A})$ can in general \emph{not} be expressed as the cohomology of a cochain complex, since it is merely the quotient of an algebra by an ideal. Nonetheless, we sometimes call it cohomology in analogy to $\HBRST^{(\bullet)}(\mathcal{A})$ and to simplify the notation. We have the following result.
\begin{lemma} \label{lemma:ReducedBRSTStarAlgebra} The BRST quotient $\HBRSTtilde^{(\bullet)}(\mathcal{A})$ is a $\mathbb{Z}\times \mathbb{Z}_2$-graded algebra with graded *-involution $^*$ defined by \begin{equation} [a]^* = [a^*], \end{equation} exchanging $\HBRSTtilde^{(k)}(\mathcal{A})$ with $\HBRSTtilde^{(-k)}(\mathcal{A})$ for all $k\in\mathbb{Z}$. The reduced BRST *-algebra $\widetilde{\mathcal{A}}_\red$ is a *-algebra. \end{lemma} \begin{proof} The properties follow directly from the above results and the compatibility of the *-involution with the grading. \end{proof} Just as for BRST algebras one shows that the passages from a BRST *-algebra with imaginary ghost operator to its BRST quotient and reduced BRST *-algebra are functorial: \begin{proposition} The assignment of a BRST *-algebra with imaginary ghost operator $(\mathcal{A},\gamma,\Theta,^*)$ to the BRST quotient $\HBRSTtilde^{(\bullet)}(\mathcal{A})$ is a functor into the category of $\mathbb{Z}\times\mathbb{Z}_2$-graded algebras with graded *-involution. Similarly, the assignment to the reduced BRST *-algebra $\widetilde{\mathcal{A}}_\red$ is a functor into the category of *-algebras. \end{proposition} Finally, we can prove that there is the following crucial relation between $\HBRSTtilde^{(\bullet)}(\mathcal{A})$ and $\HBRST^{(\bullet)}(\mathcal{A})$. \begin{proposition} \label{prop:InclBRSTQuotienttoCohomology} The map \begin{equation} I_\mathcal{A} \colon \HBRSTtilde^{(\bullet)}(\mathcal{A}) \longrightarrow \HBRST^{(\bullet)}(\mathcal{A}), \quad [a] \longmapsto I_\mathcal{A}([a]) = [a] \end{equation} is a well-defined morphism of $\mathbb{Z}\times\mathbb{Z}_2$-graded algebras. \end{proposition} \begin{proof} The well-definedness follows directly from the definitions of the quotients, and the compatibility with the grading is clear since both $\HBRSTtilde^{(\bullet)}(\mathcal{A})$ and $\HBRST^{(\bullet)}(\mathcal{A})$ inherit the $\mathbb{Z}\times\mathbb{Z}_2$-grading of $\mathcal{A}$.
\end{proof} \begin{remark} The important question is whether this canonical morphism $I_\mathcal{A}$ is an isomorphism, which would justify our construction and yield a canonical involution on the BRST cohomology. In general, there seems to be no way to decide whether $I_\mathcal{A}$ is injective or surjective, and one has to argue which reduction scheme fits the respective application better. In Section~\ref{subsection:ComparisonofReducedQuantumBRSTAlgebras} we show that in our example of the quantum BRST algebra $I_\mathcal{A}$ is an isomorphism when restricted to the physically most relevant zeroth degree. \end{remark} \begin{remark} The above considerations show that in some cases it might be useful to consider the BRST quotient from \eqref{eq:BRSTQuotient} instead of the usual cohomology. To further justify this proposal one has to transfer the concepts of quasi-isomorphisms and chain homotopies from homological algebra to our setting. For the notion of quasi-isomorphisms there is an obvious choice: We call a morphism $\Phi\colon\mathcal{A}\longrightarrow\mathcal{B}$ of BRST *-algebras a \emph{quasi-isomorphism} if it induces an isomorphism $\Phi \colon \HBRSTtilde^{(\bullet)}(\mathcal{A}) \rightarrow \HBRSTtilde^{(\bullet)}(\mathcal{B})$ on the BRST quotients. The case of chain homotopies is more subtle: One choice would be to consider a homotopy $h \colon \mathcal{A}^{(\bullet)} \rightarrow \mathcal{B}^{(\bullet -1)}$ between two morphisms $\Phi,\Psi \colon \mathcal{A} \rightarrow \mathcal{B}$ of BRST *-algebras with respect to the BRST operator $D$, i.e. a $\ring{C}$-linear map $h$ such that \begin{equation*} h D_\mathcal{A} + D_\mathcal{B} h = \Phi - \Psi. \end{equation*} Then the map $h^* \colon \mathcal{A}^{(\bullet)} \rightarrow \mathcal{B}^{(\bullet +1)}$, given on homogeneous elements $a\in \mathcal{A}$ by $h^*(a) = - (-1)^{\abs{a}} (h(a^*))^*$, turns out to be a chain homotopy with respect to $D^*$ between $\Phi$ and $\Psi$.
In particular, in this case $\Phi$ and $\Psi$ induce the same maps on the BRST quotients. However, it is not yet clear to us whether this is the compatibility we want, and we plan to investigate this in a forthcoming paper. \end{remark} In the remaining part of this section we want to show that *-involutions with imaginary ghost operators indeed lead to a non-trivial *-representation theory on pre-Hilbert spaces, in contrast to the involutions with Hermitian BRST charges and Hermitian ghost operators. \subsection{BRST *-Representations and GNS Construction} \label{subsection:BRSTStarRep} We introduce a *-representation theory of BRST *-algebras with imaginary ghost operator and show that the representations can be reduced to *-representations of the reduced BRST *-algebras. In addition, we sketch an adapted GNS construction. The notions are based on the theory of pre-Hilbert spaces $\prehilb{H}$ as in \cite[Chapter~7]{waldmann:2007a}. Recall that a pre-Hilbert space over $\ring{C}$ is a $\ring{C}$-module $\prehilb{H}$ with positive definite inner product $\SP{\argument,\argument}$. Note that positivity, i.e. $\SP{\phi,\phi} > 0$ for all $\phi \in \prehilb{H}\setminus \{0\}$, makes sense in our setting since $\ring{R} \subset \ring{C} = \ring{R}(\I)$ is ordered. A map $A\colon \prehilb{H} \rightarrow \prehilb{H}$ is called adjointable if there exists a map $A^* \colon \prehilb{H} \rightarrow \prehilb{H}$ such that $\SP{A\phi, \psi} = \SP{\phi,A^* \psi}$ for all $\phi,\psi \in \prehilb{H}$. The set of adjointable maps is denoted by $\Bounded(\prehilb{H})$.
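Before adapting these notions to the BRST setting, the three-dimensional example from the remark above can be checked numerically. The following NumPy sketch (our illustration, not part of the construction) verifies that for that differential $\D$ the harmonic space $\ker\Delta$ with $\Delta = \D\D^\top + \D^\top\D$ is one-dimensional, spanned by $(0\;1\;0)^\top$, and lies in $\ker\D\cap\ker\D^\top$, anticipating the Hodge-type statements below:

```python
import numpy as np

# Nilpotent differential from the R^3 example: D e3 = e1, D e1 = D e2 = 0
D = np.array([[0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
assert np.allclose(D @ D, 0.0)  # D^2 = 0

def kernel(A, tol=1e-10):
    """Orthonormal basis (as columns) of ker A, computed via SVD."""
    _, s, vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return vt[rank:].T

# With respect to the Euclidean product D* = D^T, so the Laplacian is
Delta = D @ D.T + D.T @ D  # equals diag(1, 0, 1)

harmonic = kernel(Delta)   # spanned by (0, 1, 0)
h = harmonic[:, 0]
assert np.allclose(D @ h, 0.0) and np.allclose(D.T @ h, 0.0)
print(harmonic.shape[1])   # prints 1
```

The harmonic vector is exactly the representative $(0\;1\;0)^\top$ of the cohomology class exhibited in the remark.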
These spaces can be adapted to our setting: \begin{definition}[BRST pre-Hilbert space] A \emph{BRST pre-Hilbert space} is a $\mathbb{Z}\times \mathbb{Z}_2$-graded $\ring{C}$-module $\prehilb{H}$ together with \begin{enumerate2} \item an odd endomorphism $\Theta_\prehilb{H} \in \End_1^{(1)}(\prehilb{H})$ of ghost number degree $+1$ with $\Theta_\prehilb{H}^2 = 0$, called \emph{BRST operator}, \item a \emph{ghost number operator} $\gamma_\prehilb{H}$ defined by \begin{equation} \I \gamma_\prehilb{H}\at{\prehilb{H}^{(k)}} = k \cdot\id\at{\prehilb{H}^{(k)}} \end{equation} and extended $\ring{C}$-linearly to all of $\prehilb{H}$, and \item an even graded positive definite inner product $\SP{\argument,\argument}$, \end{enumerate2} such that one has the compatibilities \begin{equation} \label{eq:CompatibilityGhosts} \gamma^*_\prehilb{H} = -\gamma_\prehilb{H} \quad \quad \text{ and } \quad \quad \Theta_\prehilb{H} \in \Bounded(\prehilb{H}). \end{equation} A morphism $T\colon \prehilb{H}\longrightarrow \prehilb{H}'$ of BRST pre-Hilbert spaces is an adjointable $\ring{C}$-linear even map intertwining the BRST and ghost operators. \end{definition} \begin{remark} Alternatively, one could also consider isometric maps as morphisms of BRST pre-Hilbert spaces, instead of adjointable ones. The isomorphisms in this category are then unitary intertwiners, not adjointable bijective intertwiners. In our general setting these choices indeed lead to different notions of equivalent representations; we favour adjointable morphisms because there might exist isometric maps that do not admit an adjoint. \end{remark} Note that the definition directly implies that $\Bounded^{(\bullet)}(\prehilb{H}) = \Bounded(\prehilb{H}) \cap \End^{(\bullet)}(\prehilb{H})$ is a well-defined BRST *-algebra with imaginary ghost operator.
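A minimal example, of our own making and not taken from the references, may help to fix ideas. Take $\prehilb{H} = \ring{C}^3$ with orthonormal basis $u,w$ of ghost degree $0$ (even) and $v$ of ghost degree $1$ (odd):

```latex
% BRST operator and ghost number operator in the basis (u, v, w):
\Theta_\prehilb{H} u = v, \quad \Theta_\prehilb{H} v = \Theta_\prehilb{H} w = 0,
\qquad
\gamma_\prehilb{H} = -\I \operatorname{diag}(0,1,0),
% so Theta^2 = 0 and gamma* = -gamma as required. The adjoint and the Laplacian are
\Theta_\prehilb{H}^* v = u, \quad \Theta_\prehilb{H}^* u = \Theta_\prehilb{H}^* w = 0,
\qquad
\Delta_\prehilb{H} = \Theta_\prehilb{H}\Theta_\prehilb{H}^*
  + \Theta_\prehilb{H}^*\Theta_\prehilb{H} = \operatorname{diag}(1,1,0).
% Hence ker Theta = span{v, w}, im Theta = span{v}, and ker Delta = span{w}
% carries the restricted positive definite inner product; it maps
% isomorphically onto the BRST cohomology ker Theta / im Theta.
```

In this example the vectors $u$ and $v$ are exact resp. co-exact and drop out of the quotient, while the harmonic vector $w$ survives.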
As in the case of BRST *-algebras one can construct the usual BRST cohomology $\HBRST^{(\bullet)}(\prehilb{H}) = \ker \Theta_\prehilb{H} / \image\Theta_\prehilb{H}$, but there exists no canonical inner product on this quotient. Therefore, we define the BRST quotient \begin{equation} \HBRSTtilde^{(\bullet)}(\prehilb{H}) = \frac{\ker \Theta_\prehilb{H} \cap \ker \Theta^*_\prehilb{H}} {\image\Theta_\prehilb{H}\cap \ker \Theta^*_\prehilb{H}}. \end{equation} One can directly check that this quotient is again $\mathbb{Z} \times \mathbb{Z}_2$-graded and noting $\ker \Theta_\prehilb{H}^* = (\image \Theta_\prehilb{H})^\bot$ we can even show more: \begin{proposition} For a BRST pre-Hilbert space $\prehilb{H}$ one has $\image \Theta_\prehilb{H} \cap (\image\Theta_\prehilb{H})^\bot = \{0\}$ and thus \begin{equation} \HBRSTtilde^{(\bullet)}(\prehilb{H}) = \ker \Theta_\prehilb{H} \cap \ker \Theta^*_\prehilb{H} = \ker\Delta_\prehilb{H}, \end{equation} where $\Delta_\prehilb{H} = \left[\Theta_\prehilb{H}, \Theta^*_\prehilb{H}\right]$ denotes the Laplacian of $\Bounded^{(\bullet)}(\prehilb{H})$. In particular, the inner product on $\prehilb{H}$ restricts to a positive definite and non-degenerate inner product on $\HBRSTtilde^{(\bullet)}(\prehilb{H})$. \end{proposition} \begin{proof} One has $\image \Theta_\prehilb{H} \cap \ker \Theta^*_\prehilb{H} = \image \Theta_\prehilb{H} \cap (\image\Theta_\prehilb{H})^\bot = \{0\}$ and thus $\HBRSTtilde^{(\bullet)}(\prehilb{H}) = \ker \Theta_\prehilb{H} \cap \ker \Theta^*_\prehilb{H}$. The positive definiteness of the inner product on $\prehilb{H}$ implies for all $\phi\in \prehilb{H}$ \begin{align*} \SP{\phi, \Delta_\prehilb{H} \phi } = \SP{\Theta_\prehilb{H}^* \phi,\Theta_\prehilb{H}^*\phi } + \SP{\Theta_\prehilb{H} \phi,\Theta_\prehilb{H}\phi} \geq 0, \end{align*} which entails $\ker \Theta_\prehilb{H} \cap \ker \Theta^*_\prehilb{H} =\ker\Delta_\prehilb{H}$. 
Hence the inner product on $\prehilb{H}$ restricts to $\HBRSTtilde^{(\bullet)}(\prehilb{H})$. It is positive definite and in particular non-degenerate. \end{proof} Elements in the kernel of $\Delta_\prehilb{H}$ are called \emph{harmonic}. The above proposition shows that in the case of a BRST pre-Hilbert space $\prehilb{H}$ the BRST quotient $\HBRSTtilde^{(\bullet)}(\prehilb{H})$ and the \emph{reduced BRST pre-Hilbert space} \begin{equation} \widetilde{\prehilb{H}}_\red = \widetilde{\mathrm{H}}^{(0)}_{\BRST,0}(\prehilb{H}) \end{equation} both inherit positive definite inner products, and one easily sees that all the passages are functorial. We have again a canonical inclusion of the BRST quotient into the BRST cohomology, compare Proposition~\ref{prop:InclBRSTQuotienttoCohomology}. By the positive definiteness it is now even injective: \begin{proposition} Let $\prehilb{H}$ be a BRST pre-Hilbert space. Then the canonical map $I_\prehilb{H}\colon \HBRSTtilde^{(\bullet)}(\prehilb{H}) \longrightarrow \HBRST^{(\bullet)}(\prehilb{H})$ is injective. \end{proposition} \begin{proof} Suppose $[\phi]= I_\prehilb{H}([\phi])= I_\prehilb{H}([\psi])=[\psi]$ for $[\phi],[\psi]\in \HBRSTtilde^{(\bullet)}(\prehilb{H})$. This implies $\phi-\psi = \Theta_\prehilb{H}\chi \in \image \Theta_\prehilb{H} \cap \ker \Theta_\prehilb{H} \cap \ker \Theta^*_\prehilb{H}$ and we can compute \begin{align*} \SP{\Theta_\prehilb{H}\chi,\Theta_\prehilb{H}\chi} = \SP{\chi, \Theta^*_\prehilb{H}\Theta_\prehilb{H}\chi} = 0. \end{align*} But by the positive definiteness of the inner product this implies $0 = \Theta_\prehilb{H}\chi = \phi -\psi$. \end{proof} \begin{remark} The above result is a more general version of the easy part of the Hodge theorem: the injectivity of the inclusion of the harmonic differential forms into the de Rham cohomology of a Riemannian manifold, see e.g. \cite[Lemma~4.15]{morita:2001a}.
The difficult part is the surjectivity, which does not always hold in our general situation. \end{remark} Now we can define BRST *-representations of BRST *-algebras with imaginary ghost operators and their reduction: \begin{definition}[BRST *-representation] \label{defi:brststarrep} Let $(\mathcal{A},\gamma,\Theta,^*)$ be a BRST *-algebra with imaginary ghost operator. A \emph{BRST *-representation} of $\mathcal{A}$ on a BRST pre-Hilbert space $\prehilb{H}$ is a morphism \begin{equation} \rho \colon \mathcal{A} \longrightarrow \Bounded^{(\bullet)}(\prehilb{H}) \end{equation} of BRST *-algebras with imaginary ghost operator. An \emph{intertwiner} $T$ between two such BRST *-representations $(\prehilb{H},\rho)$ and $(\prehilb{H}',\rho')$ of $\mathcal{A}$ is a morphism $T \colon \prehilb{H} \longrightarrow \prehilb{H}'$ of BRST pre-Hilbert spaces that satisfies in addition \begin{equation} T \circ \rho(a) = \rho'(a) \circ T \end{equation} for all $a \in \mathcal{A}$. \end{definition} Since all the passages from $\mathcal{A}$ to $\HBRSTtilde^{(\bullet)}(\mathcal{A})$ and $\widetilde{\mathcal{A}}_\red$ as well as from $\prehilb{H}$ to $\HBRSTtilde^{(\bullet)}(\prehilb{H})$ and $\widetilde{\prehilb{H}}_\red$ are functorial, we obtain the following behaviour of BRST *-representations under the BRST reduction: \begin{proposition} \label{lemma:ReductionofBRSTStarRep} Consider a BRST *-algebra $(\mathcal{A},\gamma,\Theta,^*)$ with imaginary ghost operator and a BRST *-representation $\rho$ of $\mathcal{A}$ on a BRST pre-Hilbert space $\prehilb{H}$. Then \begin{equation} \widetilde{\rho}_\BRST \colon \HBRSTtilde^{(\bullet)}(\mathcal{A}) \longrightarrow \Bounded^{(\bullet)}\left(\HBRSTtilde^{(\bullet)}(\prehilb{H})\right), \quad \quad \widetilde{\rho}_\BRST([a])[\phi] = [\rho(a)\phi] \end{equation} yields a *-representation of $\HBRSTtilde^{(\bullet)}(\mathcal{A})$ on $\HBRSTtilde^{(\bullet)}(\prehilb{H})$ which is compatible with all degrees.
Moreover, the restriction \begin{equation} \widetilde{\rho}_\red = \left(\widetilde{\rho}_\BRST\at{\widetilde{\mathrm{H}}^{(0)}_{\BRST,0} (\mathcal{A})}\right)\at{\widetilde{\mathrm{H}}^{(0)}_{\BRST,0}(\prehilb{H})} \end{equation} yields a *-representation of $\widetilde{\mathcal{A}}_\red$ on $\widetilde{\prehilb{H}}_\red$. All the assignments are functorial. \end{proposition} Finally, we apply the general formalism of the GNS construction to the case of a BRST *-algebra with imaginary ghost operator, i.e. we construct BRST *-representations out of suitable linear functionals. We recall at first the usual GNS construction from \cite[Section~7.2.2]{waldmann:2007a}: \begin{remark}[GNS representation] \label{remark:gnsrepresentation} Let $\mathcal{A}$ be a *-algebra over $\ring{C}$ and $\omega\colon \mathcal{A}\longrightarrow \ring{C}$ a positive linear functional, i.e. $\omega(a^*a)\geq 0$ for all $a\in \mathcal{A}$. Then one has $\omega(b^*a) = \cc{\omega(a^* b)}$ for all $a,b\in\mathcal{A}$ as well as the Cauchy-Schwarz inequality. The subset \begin{equation*} \mathcal{I}_\omega = \{ a\in\mathcal{A}\mid \omega(a^*a)=0\} = \{ a\in\mathcal{A} \mid \omega(b^*a)=0 \;\;\forall \; b\in \mathcal{A}\} = \{ a\in\mathcal{A}\mid \omega(a^*b)=0 \;\; \forall \; b\in \mathcal{A}\} \end{equation*} is a left ideal in $\mathcal{A}$, the so-called \emph{Gel'fand ideal}. The quotient $\prehilb{H}_\omega =\mathcal{A}/\mathcal{I}_\omega$ becomes a left $\mathcal{A}$-module in the canonical way by setting $\pi_\omega(a)\psi_b=\psi_{ab}$ for $a,b\in\mathcal{A}$, where $\psi_b\in\prehilb{H}_\omega$ denotes the equivalence class of $b$. One has a positive definite inner product $\SP{\psi_a,\psi_b}_\omega=\omega(a^*b)$ on $\prehilb{H}_\omega$ and $\pi_\omega$ turns out to be a *-representation of $\mathcal{A}$, the so-called \emph{GNS representation} with respect to $\omega$. 
\end{remark} The representations of BRST *-algebras $\mathcal{A}$ should be compatible with the $\mathbb{Z}\times\mathbb{Z}_2$-grading, whence we have to consider $\mathbb{Z}\times\mathbb{Z}_2$-homogeneous positive linear functionals $\omega \colon \mathcal{A} \rightarrow \ring{C}$, i.e. such positive linear functionals that vanish on all degrees except $\mathcal{A}^{(0)}_0$. In this case one easily sees that $\mathcal{I}_\omega$ and $\prehilb{H}_\omega$ are $\mathbb{Z}\times\mathbb{Z}_2$-graded and that the GNS representation is compatible with the degrees. Even more, vectors in $\prehilb{H}_\omega$ with different degrees are orthogonal. Since the $\mathbb{Z}$-grading of the BRST algebra is induced by $\gamma$, we require in addition $\pi_\omega(\gamma) = \gamma_{\prehilb{H}_\omega}$. Therefore, one needs a further condition on $\omega$ as a straightforward computation shows: \begin{proposition} Consider a BRST *-algebra $(\mathcal{A},\gamma,\Theta,^*)$ with imaginary ghost operator and an even, positive linear functional $\omega\colon \mathcal{A}\rightarrow \ring{C}$. If the ghost charge satisfies $\gamma\in \mathcal{I}_\omega$, i.e. $\omega(a\gamma)=\omega(\gamma^* a)=0$ for all $a\in \mathcal{A}$, then $\omega$ is homogeneous with respect to the $\mathbb{Z}\times\mathbb{Z}_2$-grading and one has \begin{equation} \pi_\omega(\gamma) = \gamma_{\prehilb{H}_\omega}. \end{equation} In particular, $(\prehilb{H}_\omega = \mathcal{A}/\mathcal{I}_\omega, \gamma_\omega = \pi_\omega(\gamma), \Theta_\omega = \pi_\omega(\Theta))$ is a BRST pre-Hilbert space and \begin{equation} \pi_\omega \colon \mathcal{A} \longrightarrow \Bounded^{(\bullet)}(\prehilb{H}_\omega) \end{equation} is a BRST *-representation of $\mathcal{A}$. \end{proposition} Thus we have found a way to generalize the GNS construction to BRST *-algebras with imaginary ghost operator, which gives us an explicit method to construct BRST *-representations. 
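To see the GNS construction recalled above at work in the simplest non-trivial case, the following toy computation (our choice of algebra and functional, assuming NumPy) takes $\mathcal{A} = M_2(\mathbb{C})$ with the vector state $\omega(a) = \SP{e_1, a e_1}$; then $\omega(a^*a) = \SP{a e_1, a e_1}$, the Gel'fand ideal consists of the matrices annihilating $e_1$, and the GNS representation is equivalent to the defining representation on $\mathbb{C}^2$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Vector state on A = M_2(C): omega(a) = <e1, a e1> = a[0, 0]
def omega(a):
    return a[0, 0]

e1 = np.array([1.0, 0.0])

# Positivity: omega(a* a) equals the squared norm of a e1
a = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
assert np.isclose(omega(a.conj().T @ a), np.vdot(a @ e1, a @ e1))

# Gel'fand ideal: a in I_omega iff omega(a* a) = 0 iff a e1 = 0,
# i.e. iff the first column of a vanishes
def in_gelfand_ideal(a, tol=1e-12):
    return abs(omega(a.conj().T @ a)) < tol

assert in_gelfand_ideal(np.array([[0.0, 1.0], [0.0, 2.0j]]))
assert not in_gelfand_ideal(np.eye(2))

# The class psi_b is determined by b e1, and the GNS inner product
# <psi_a, psi_b> = omega(a* b) is the C^2 inner product of a e1 and b e1
b = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
assert np.isclose(omega(a.conj().T @ b), np.vdot(a @ e1, b @ e1))

# pi_omega(a) psi_b = psi_{ab} corresponds to acting with a on b e1
assert np.allclose((a @ b) @ e1, a @ (b @ e1))
```

The quotient $\prehilb{H}_\omega = \mathcal{A}/\mathcal{I}_\omega$ is two-dimensional here, with $\psi_b \mapsto b\,e_1$ implementing the unitary equivalence.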
\begin{remark} The next question one could ask is whether for such a linear functional $\omega$ the reduced representations $\widetilde{(\pi_\omega)}_\BRST$ and $\widetilde{(\pi_\omega)}_\red$ are again GNS representations of some linear functionals on the quotients. In particular, it is interesting whether there is a canonical way to construct the corresponding linear functionals, provided they exist. It turns out that there is a positive answer to both questions if one requires in addition $\omega(\Delta)=0$. This reflects the compatibility of $\omega$ with the BRST charge $\Theta$ and its adjoint $\Theta^*$, which are responsible for the reduction. \end{remark} \begin{remark} Note that *-involutions and analogues of GNS representations are also studied in the more general context of involutive categories, see e.g. \cite{jacobs:2012a}. One can interpret the BRST *-algebras as special involutive monoids in the involutive monoidal category of mixed complexes: Recall that the objects $(K^\bullet,d,D)$ in this category are $\mathbb{Z}$-graded $\ring{C}$-modules $K^\bullet$ that are both a chain complex $(K^\bullet,d)$ and a cochain complex $(K^\bullet,D)$, where $dD + Dd$ may be non-zero. One can check that the functor $K^\bullet \mapsto \cc{K}^\bullet$ given by $\cc{K}^n = \cc{K^{-n}}$ turns the category into an involutive monoidal category. BRST *-algebras are then special involutive monoids, where the chain and cochain map as well as the grading are given by inner derivations. The above GNS construction turns out to be a special case of \cite[Thm.~7.1]{jacobs:2012a}. However, we want to stress that in our setting positivity plays a crucial role since we are interested in *-representations on pre-Hilbert spaces. In the general framework we are not aware of such an implementation of positivity.
\end{remark} \section{*-Involutions for the Quantum BRST Algebra} \label{section:InvolutionsforQuantumBRSTAlgebra} In this section we apply the above results and construct a *-involution for the quantum BRST algebra $\mathcal{A}^{(\bullet)}[[\lambda]] = \Anti^\bullet \liealg{g}^* \otimes \Anti^\bullet \liealg{g} \otimes \Cinfty(M)[[\lambda]]$ corresponding to a Hamiltonian quantum $\group{G}$-space $(M,\star,\group{G},\boldsymbol{J})$. \subsection{Graded *-Involutions on the Grassmann Algebra} \label{subsection:GradedInvolutiononGrassmann} Let us assume that we have an equivariant and Hermitian star product on $M$, so that the complex conjugation is an involution on $\Cinfty(M)[[\lambda]]$. Thus we only need to find a suitable *-involution for the Grassmann algebra leading to a quantum BRST algebra having sufficiently many positive functionals. A first possibility for an involution is the complex conjugation. Unfortunately, we can check that it is neither a graded nor a super *-involution with respect to the standard ordered star product. We define a standard ordered representation in analogy with the case of cotangent bundles \cite{neumaier:1998a}. Let $\iota^*$ be the restriction \begin{equation} \iota^*\colon \Anti^\bullet(\liealg{g}_\mathbb{C}^*\oplus \liealg{g}_\mathbb{C})[[\lambda]] \longrightarrow \Anti^\bullet \liealg{g}_\mathbb{C}^*[[\lambda]], \end{equation} i.e. $\iota^*$ sets all forms with a nontrivial $\liealg{g}$-part to zero. Moreover, we denote by \begin{equation} \pr^* \colon \Anti^\bullet \liealg{g}_\mathbb{C}^*[[\lambda]] \longrightarrow \Anti^\bullet (\liealg{g}_\mathbb{C}^*\oplus \liealg{g}_\mathbb{C})[[\lambda]] \end{equation} the inclusion map. It immediately follows $\iota^*\pr^* = \id_{\Anti^\bullet \liealg{g}_\mathbb{C}^*[[\lambda]]}$ and we can define the following representation. 
\begin{definition}[Standard ordered representation] \label{defi:StandardOrderedRepGrassmann} The \emph{standard ordered representation} \begin{equation} \rho_\std \colon \left( \Anti^\bullet (\liealg{g}_\mathbb{C}^*\oplus \liealg{g}_\mathbb{C})[[\lambda]], \circ_\std \right) \longrightarrow \End(\Anti^\bullet \liealg{g}_\mathbb{C}^*[[\lambda]]) \end{equation} of $\left(\Anti^\bullet (\liealg{g}_\mathbb{C}^*\oplus \liealg{g}_\mathbb{C})[[\lambda]], \circ_\std\right)$ is defined by \begin{equation} \label{eq:RhoStdDefinition} \rho_\std(a)\alpha = \iota^*\left(a \circ_\std \pr^*\alpha \right) \end{equation} for $a\in \Anti^\bullet(\liealg{g}_\mathbb{C}^*\oplus \liealg{g}_\mathbb{C})[[\lambda]]$ and $\alpha\in \Anti^\bullet \liealg{g}_\mathbb{C}^*[[\lambda]]$. \end{definition} Then we directly see that $\rho_\std$ is $\mathbb{C}[[\lambda]]$-linear and satisfies $\rho_\std(1) = \id_{\Anti^\bullet \liealg{g}_\mathbb{C}^*[[\lambda]]}$. \begin{remark} The idea comes from the theory of Clifford algebras, from which we know \begin{equation} \label{eq:CliffordIso} \Anti^\bullet\left( \liealg{g}_\mathbb{C}^*\oplus \liealg{g}_\mathbb{C} \right) \cong \Clifford( \liealg{g}_\mathbb{C}^* \oplus \liealg{g}_\mathbb{C}; \SP{\argument,\argument} ) \cong \Clifford(\liealg{g}_\mathbb{C}^* \oplus \liealg{g}_\mathbb{C}) \cong \End(\Anti^\bullet \liealg{g}_\mathbb{C}^*), \end{equation} since all non-degenerate symmetric bilinear inner products on $\mathbb{C}^{2n}$ are equivalent, see e.g. \cite[Prop.~2.4]{meinrenken:2013a}. Note that here the first isomorphism is an isomorphism of vector spaces, whereas the other two are isomorphisms of Clifford algebras. We transferred this idea to the quantized setting.
\end{remark} \begin{proposition} \label{prop:StandardOrderedRep} The standard ordered representation $\rho_\std$ defined by \eqref{eq:RhoStdDefinition} is a faithful representation of $\left(\Anti^\bullet (\liealg{g}_\mathbb{C}^*\oplus \liealg{g}_\mathbb{C})[[\lambda]],\circ_\std\right)$ on $\Anti^\bullet \liealg{g}_\mathbb{C}^*[[\lambda]]$; in particular, we have \begin{equation} \rho_\std(a) \rho_\std(\widetilde{a})\alpha = \rho_\std ( a \circ_\std \widetilde{a}) \alpha \end{equation} for all $a,\widetilde{a} \in \Anti^\bullet (\liealg{g}_\mathbb{C}^*\oplus \liealg{g}_\mathbb{C})[[\lambda]]$ and $\alpha \in \Anti^\bullet \liealg{g}_\mathbb{C}^*[[\lambda]]$. \end{proposition} \begin{proof} The properties follow from lengthy but straightforward computations. \end{proof} Using the definition of $\rho_\std$ we can immediately compute \begin{equation} \label{eq:RhoStdofBasis} \rho_\std(1) = \id, \quad \rho_\std(\basis{e}_i) = 2 \I\lambda \ins(\basis{e}_i) \quad \text{and} \quad \rho_\std(\basis{e}^i) = \basis{e}^i \wedge \argument \end{equation} for the elements $1,\basis{e}_1,\dots,\basis{e}_n,\basis{e}^1,\dots, \basis{e}^n$ that generate $(\Anti^\bullet(\liealg{g}_\mathbb{C}^*\oplus \liealg{g}_\mathbb{C})[[\lambda]],\circ_\std)$. It is known that $\Anti^\bullet \liealg{g}_\mathbb{C}^*$ has the structure of a pre-Hilbert space over $\mathbb{C}$, which extends to $\mathbb{C}[[\lambda]]$. \begin{lemma} Let $g$ be a positive definite symmetric bilinear inner product on $\liealg{g}$. Then it induces a positive definite sesquilinear product $\SP{\argument,\argument}_*$ on $\Anti^\bullet \liealg{g}_\mathbb{C}^*[[\lambda]]$ via \begin{equation} \label{eq:ProductonGrassmann} \SP{a_1\wedge \dots\wedge a_k, b_1\wedge \dots \wedge b_k}_* = \det\left(g^{-1}(\cc{a_i},b_j)\right) \end{equation} for all $a_1,\dots,a_k,b_1,\dots,b_k \in \liealg{g}_\mathbb{C}^*$.
In particular, $(\Anti^\bullet \liealg{g}_\mathbb{C}^*[[\lambda]], \SP{\argument,\argument}_*)$ is a pre-Hilbert space over $\mathbb{C}[[\lambda]]$. \end{lemma} In order to get an involution on $\Anti^\bullet(\liealg{g}_\mathbb{C}^*\oplus \liealg{g}_\mathbb{C})[[\lambda]]$ which is independent of $\lambda$, we define the rescaled inner product $\SP{\argument,\argument}$ for each $\Anti^k \liealg{g}_\mathbb{C}^*[[\lambda]]$ by \begin{equation} \label{eq:ProductonGrassmannScaled} \SP{a,b} = (2\lambda)^k \SP{a,b}_* , \quad \text{where} \quad a,b \in \Anti^k \liealg{g}_\mathbb{C}^*[[\lambda]]. \end{equation} Note that for $\lambda=0$, the inner product $\SP{\argument,\argument}$ on $\Anti^\bullet \liealg{g}_\mathbb{C}^*[[\lambda]]$ is degenerate, but the corresponding *-involution on $\Anti^\bullet(\liealg{g}_\mathbb{C}^*\oplus \liealg{g}_\mathbb{C})[[\lambda]]$ resp. $\Anti^\bullet(\liealg{g}_\mathbb{C}^*\oplus \liealg{g}_\mathbb{C})$ is still well-defined. \begin{proposition} \label{prop:GrassmannStarRep} The standard ordered representation $\rho_\std$ of $\Anti^\bullet(\liealg{g}_\mathbb{C}^*\oplus \liealg{g}_\mathbb{C})[[\lambda]]$ from Definition~\ref{defi:StandardOrderedRepGrassmann} is a *-representation with respect to the graded *-involution induced by \begin{equation} \label{eq:StarInvolutionGrassmann} \xi^* = -\I g^\flat(\xi) \quad \text{and} \quad \alpha^* = -\I g^\sharp(\alpha) \quad \forall \; \xi \in \liealg{g}_\mathbb{C}, \; \alpha\in \liealg{g}^*_\mathbb{C}. \end{equation} Moreover, $\gamma=\frac{1}{2}\basis{e}^k \wedge \basis{e}_k$ fulfils $\gamma^* = - \gamma$. 
\end{proposition} \begin{proof} For $c\in\mathbb{C}$ we have \begin{align*} \SP{\rho_\std(\basis{e}_\ell)^*c,\basis{e}^j} = \SP{c,\rho_\std(\basis{e}_\ell) \basis{e}^j} = 2\I\lambda \SP{c, \ins(\basis{e}_\ell) \basis{e}^j} = 2\I\lambda \cc{c} \delta^j_\ell = \I g_{\ell k}\cc{c}\SP{ \basis{e}^k,\basis{e}^j} = \SP{\rho_\std(-\I g_{\ell k}\basis{e}^k)c ,\basis{e}^j}, \end{align*} and analogously $\SP{\rho_\std(\basis{e}^\ell)^* \basis{e}^j , c} = \SP{\rho_\std(\frac{g^{\ell k}}{\I}\basis{e}_k)\basis{e}^j,c}$. In other words, we get $ \rho_\std(\basis{e}_\ell)^* = \rho_\std(-\I g^\flat(\basis{e}_\ell)) $ and $ \rho_\std(\basis{e}^\ell)^* = \rho_\std(-\I g^\sharp(\basis{e}^\ell)) $. Furthermore we have $\rho_\std(c)^*=\rho_\std(\cc{c})$, inducing the involution from \eqref{eq:StarInvolutionGrassmann}. Finally we compute \begin{align*} 2 \gamma^* = (\basis{e}^k \circ_\std \basis{e}_k)^* = - g^{km}g_{kn} \basis{e}^n \circ_\std \basis{e}_m = - 2\gamma. \end{align*} \end{proof} The above result shows that this graded *-involution is in some sense a ``natural'' one since it is induced by the above representation $\rho_\std$. Moreover, one can show that the standard ordered representation $\rho_\std$ from Definition~\ref{defi:StandardOrderedRepGrassmann} is even unitarily equivalent to a GNS representation. Finally, we want to show that we have sufficiently many positive linear functionals. We recall \cite[Def.~2.7]{bursztyn.waldmann:2001a}: Let $(\algebra{A},^*)$ be a *-algebra over $\ring{C}= \ring{R}(\I)$. Then $\algebra{A}$ has \emph{sufficiently many positive linear functionals} if for any non-zero Hermitian element $h=h^*\in \algebra{A}\setminus \{0\}$ there exists a positive linear functional $\omega \colon \algebra{A} \longrightarrow \ring{C}$ with $\omega(h) \neq 0$. 
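To illustrate this notion in a familiar undeformed example (not specific to the BRST setting): for the *-algebra $\Cinfty(M)$ with the complex conjugation as *-involution, every evaluation functional $\delta_p(f) = f(p)$ at a point $p \in M$ is positive, since
\begin{equation*}
 \delta_p\left( \cc{f} f \right) = |f(p)|^2 \geq 0,
\end{equation*}
and for every Hermitian $h = \cc{h} \in \Cinfty(M) \setminus \{0\}$ there is a point $p$ with $\delta_p(h) \neq 0$, so $\Cinfty(M)$ has sufficiently many positive linear functionals. In the formal setting, positivity of functionals with values in $\mathbb{C}[[\lambda]]$ refers to the ring ordering of $\mathbb{R}[[\lambda]]$, in which a non-zero formal series is positive if its lowest non-vanishing coefficient is positive.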
The non-deformed Grassmann algebra obviously does not have sufficiently many positive linear functionals, as $a^*\wedge a =0$ for all $a\in \Anti^k \liealg{g}^* \otimes \Anti^\ell \liealg{g}$ with $k+\ell > n$, hence the Cauchy-Schwarz inequality implies $\omega(a)=0$ for all such $a$ and all positive linear functionals $\omega$, in particular for the Hermitian ones among them. \begin{proposition} \label{prop:DeformedGrassmannSufficientlyPosFunc} Let $g$ be a positive definite and symmetric bilinear inner product, inducing the involution $^*$ via \eqref{eq:StarInvolutionGrassmann}. Then the deformed Grassmann algebra $(\Anti^\bullet(\liealg{g}_\mathbb{C}^*\oplus \liealg{g}_\mathbb{C})[[\lambda]], \circ_\std,^*)$ has sufficiently many positive linear functionals. \end{proposition} \begin{proof} First, note that if $\omega \colon \Anti^\bullet(\liealg{g}_\mathbb{C}^*\oplus \liealg{g}_\mathbb{C})[[\lambda]] \longrightarrow \mathbb{C}[[\lambda]]$ is a positive linear functional, then \begin{align*} \omega_b \colon \Anti^\bullet( \liealg{g}_\mathbb{C}^*\oplus \liealg{g}_\mathbb{C} )[[\lambda]] \ni a \longmapsto \omega_b(a)=\omega(b^* \circ_\std a \circ_\std b) \in \mathbb{C}[[\lambda]] \end{align*} is also positive and linear. Consider now the projection $\delta\colon \Anti^\bullet( \liealg{g}_\mathbb{C}^*\oplus \liealg{g}_\mathbb{C})[[\lambda]] \longrightarrow \mathbb{C}[[\lambda]]$ onto the scalar part, which is a positive linear functional. For elements $c \in \mathbb{R}[[\lambda]]\setminus \{0\}$ we have $\delta(c)\neq 0$, hence we can restrict ourselves to elements of non-trivial $\mathbb{Z}\times \mathbb{Z}$-degree. Take an orthonormal basis $\basis{e}_1, \dots, \basis{e}_n$ of $\liealg{g}$ with respect to $g$ with orthonormal dual basis $\basis{e}^1, \dots, \basis{e}^n$ of $\liealg{g}^*$ with respect to $g^{-1}$. In particular, we have $\basis{e}_j^* = -\I \basis{e}^j$.
Then every non-zero Hermitian element $h$ has to be the sum of elements of the form $ a = (-\I )^{i+j} \;\cc{c} \; \basis{e}^{k_1}\wedge \dots \wedge \basis{e}^{k_j}\wedge \basis{e}_{\ell_i}\wedge \dots \wedge \basis{e}_{\ell_1} + c \; \basis{e}^{\ell_1}\wedge \dots \wedge \basis{e}^{\ell_i}\wedge \basis{e}_{k_j} \wedge \dots \wedge \basis{e}_{k_1} $ with $i,j=0,\dots,n$ not both equal to zero and with $c\in \mathbb{C}[[\lambda]]\setminus \{0\}$. Choose now $ b = c_1 \; \basis{e}^{k_1}\wedge \dots \wedge \basis{e}^{k_j} + c_2 \; \basis{e}^{\ell_1}\wedge \dots \wedge \basis{e}^{\ell_i} $ with $c_1,c_2 \in \mathbb{C}[[\lambda]]\setminus \{0\}$. Since $a \circ_\std b = \mu \circ e^{2\I\lambda \jns(\basis{e}^k)\otimes \ins(\basis{e}_k)} (a\otimes b)$ we get \begin{align*} \delta_b(a) = \delta ( b^* \circ_\std a \circ_\std b) & = (2\I\lambda)^{i+j} (-\I)^i \left( (-1)^j \; \cc{c} \cc{c_1}c_2 + c c_1 \cc{c_2}\right). \end{align*} Choosing for example $c_1 = \cc{c}$ as well as $c_2$ such that $(-1)^j c_2 = \cc{c_2}$ yields $\delta_b(a) \neq 0$. The above procedure can be easily extended to a general Hermitian element. \end{proof} \begin{remark} Even though the complex conjugation yields no involution for the standard ordered star product, one can check that it is a well-defined super *-involution for the Weyl ordered one, see \cite{bordemann.herbig.waldmann:2000a} for a definition. However, in this setting one can show that there are no non-trivial positive linear functionals. \end{remark} \subsection{Comparison of the Reduced Quantum BRST Algebras} \label{subsection:ComparisonofReducedQuantumBRSTAlgebras} Throughout this section assume that $(M,\star,\group{G},\boldsymbol{J}, C)$ is a Hamiltonian quantum $\group{G}$-space with regular constraint surface, proper action on $M$ and Hermitian star product. 
Moreover, choose a positive definite inner product $g$ on the Lie algebra $\liealg{g}$ with induced involution $^*$ on $\left(\Anti^\bullet (\liealg{g}^*_\mathbb{C}\oplus\liealg{g}_\mathbb{C})[[\lambda]], \circ_\std\right)$ as in \eqref{eq:StarInvolutionGrassmann}. \begin{lemma} \label{lemma:QuantumBRSTStarAlgebrawithImaginaryGhost} The triple $(\mathcal{A}[[\lambda]],\star_\std,^*)$ is a BRST *-algebra with imaginary ghost operator and it has sufficiently many positive linear functionals. \end{lemma} \begin{proof} We immediately get a graded *-involution on the whole quantum BRST algebra $\mathcal{A}^{(\bullet)}[[\lambda]]$, again denoted by $^*$, via $(\alpha\otimes f)^* = \alpha^* \otimes \cc{f}$. In particular, it follows that \begin{equation*} \left( (\alpha\otimes f)\star_\std (\beta \otimes g)\right)^* = (\alpha \circ_\std \beta)^* \otimes \cc{f\star g} = (\beta^* \circ_\std \alpha^*) \otimes (\cc{g}\star \cc{f}) = (\beta \otimes g)^* \star_\std (\alpha \otimes f)^* \end{equation*} for all $\alpha,\beta \in \Anti^\bullet(\liealg{g}^*_\mathbb{C} \oplus \liealg{g}_\mathbb{C})[[\lambda]]$ and $f,g\in \Cinfty(M)[[\lambda]]$. For the ghost charge $\gamma = \frac{1}{2} \basis{e}^k \wedge \basis{e}_k$ we have already seen $\gamma^* = -\gamma$ in Proposition~\ref{prop:GrassmannStarRep}, thus the ghost number derivation $\Gh = \frac{1}{\I\lambda}\ad(\gamma)$ fulfils $ \Gh^* = - \Gh. $ Therefore, we have constructed a graded *-involution with imaginary ghost operator. It remains to show that the quantum BRST algebra has sufficiently many positive linear functionals. By Proposition~\ref{prop:DeformedGrassmannSufficientlyPosFunc} the Grassmann part has sufficiently many positive linear functionals and $(\Cinfty(M)[[\lambda]],\star)$ with complex conjugation as involution has sufficiently many positive linear functionals, see \cite[Prop.~5.3]{bursztyn.waldmann:2000a}.
Moreover, \cite[Prop.~2.8]{bursztyn.waldmann:2001a} states that a unital *-algebra has sufficiently many positive linear functionals if and only if it has a faithful *-representation on a pre-Hilbert space. Both the Grassmann algebra $\left(\Anti^\bullet(\liealg{g}^*_\mathbb{C}\oplus \liealg{g}_\mathbb{C})[[\lambda]],\circ_\std\right)$ and the functions are unital *-algebras, and the Grassmann part already has such a faithful *-representation, namely $\rho_\std$ as discussed in Proposition~\ref{prop:GrassmannStarRep}. In particular, we know \begin{equation} \label{eq:ProofGrassmannRep} \tag{$*$} \rho_\std( x_{ij}) \colon \Anti^k \liealg{g}_\mathbb{C}^*[[\lambda]] \longrightarrow \begin{cases} \Anti^{k + i - j} \liealg{g}_\mathbb{C}^*[[\lambda]] \quad&\text{for}\; k \geq j \\ \{0\} &\text{else} \end{cases} \end{equation} for all $x_{ij} \in \Anti^i \liealg{g}_\mathbb{C}^*[[\lambda]] \otimes \Anti^j \liealg{g}_\mathbb{C}[[\lambda]]$. Let $\pi \colon \Cinfty(M)[[\lambda]] \longrightarrow \Bounded(\prehilb{H})$ be a faithful *-representation of the functions on a pre-Hilbert space $\prehilb{H}$, which exists by the result cited above. It remains to show that \begin{equation*} \rho = \rho_\std \otimes \pi \colon \mathcal{A}[[\lambda]] \longrightarrow \Bounded(\Anti^\bullet \liealg{g}_\mathbb{C}^*[[\lambda]] \otimes \prehilb{H}) \end{equation*} is injective. Consider a general element $\sum_{i,j,\alpha_{ij}} x_{ij \alpha_{ij}} \otimes F^{ij \alpha_{ij}} \in \mathcal{A}[[\lambda]]$, where $\{x_{ij\alpha_{ij}} \}_{\alpha_{ij}}$ is a basis of $\Anti^i \liealg{g}_\mathbb{C}^*\otimes \Anti^j \liealg{g}_\mathbb{C}$, and where $F^{ij\alpha_{ij}}\in \Cinfty(M)[[\lambda]]$. Let $k$ be the minimal index such that $\sum_{i,\alpha_{i k}} x_{i k \alpha_{ik}}\otimes F^{i k \alpha_{ik}} \neq 0$.
Using \eqref{eq:ProofGrassmannRep} we have $\rho_\std(x_{ik\alpha_{ik}}) z \in \Anti^i \liealg{g}^*_\mathbb{C}[[\lambda]]$ for $z \in \Anti^k\liealg{g}^*_\mathbb{C}[[\lambda]]$, so the images for different $i=0,1,\dots,n$ are either zero or linearly independent, allowing us to fix the index $i$. A straightforward computation shows \begin{equation*} \rho_\std ( \basis{e}^{j_1}\wedge \dots \wedge \basis{e}^{j_i} \wedge \basis{e}_{\ell_k}\wedge \dots \wedge\basis{e}_{\ell_1} ) z = (2\I\lambda)^k \basis{e}^{j_1} \wedge \dots \wedge \basis{e}^{j_i} \wedge \ins(\basis{e}_{\ell_k}) \cdots \ins(\basis{e}_{\ell_1}) z. \end{equation*} Choosing now $\phi \in \prehilb{H}$ such that $\pi\left( F^{r_k \dots r_1}_{j_1\dots j_i}\right) \phi \neq 0$ for some sets of indices $\{r_1,\dots,r_k\}$ and $\{j_1,\dots,j_i\}$ yields \begin{align*} \rho &\left( \basis{e}^{j_1}\wedge \dots \wedge \basis{e}^{j_i} \wedge \basis{e}_{\ell_k}\wedge \dots \wedge\basis{e}_{\ell_1} \otimes F^{\ell_k \dots \ell_1}_{j_1\dots j_i} \right)\left( \basis{e}^{r_1}\wedge \cdots \wedge \basis{e}^{r_k} \otimes \phi \right) \\ & = k! \; (2\I\lambda)^k \basis{e}^{j_1}\wedge \dots \wedge \basis{e}^{j_i}\otimes\pi\left(F^{r_k\dots r_1}_{j_1\dots j_i}\right) \phi \neq 0. \end{align*} \end{proof} Now we can define as in the general setting from Section~\ref{section:AbstractBRSTalgebras} an \emph{adjoint standard ordered BRST operator} $\boldsymbol{D}_\std^*$ by \begin{equation} \label{eq:AdjointStandradOrderedBRSTOperator} \boldsymbol{D}_\std^* = \frac{1}{\I \lambda} \ad_\std ( \Theta_\std^*).
\end{equation} We get two different quantum BRST cohomologies: on one hand, the usual quantum BRST cohomology $\boldHBRST^{(\bullet)}(\mathcal{A}[[\lambda]])$ from Proposition~\ref{prop:QuantumBRSTCohomology} with corresponding reduced quantum BRST algebra \begin{equation} \mathcal{A}_\red = \boldHBRST^{(0)}(\mathcal{A}[[\lambda]]) = \frac{\ker\boldsymbol{D}_\std \cap \mathcal{A}^{(0)}[[\lambda]] } {\image \boldsymbol{D}_\std \cap \mathcal{A}^{(0)}[[\lambda]]}. \end{equation} On the other hand, we have the BRST quotient $\boldHBRSTtilde^{(\bullet)} (\mathcal{A}[[\lambda]])$ from \eqref{eq:BRSTQuotient} with corresponding \emph{reduced quantum BRST *-algebra} \begin{equation} \widetilde{\mathcal{A}}_\red = \boldHBRSTtilde^{(0)}(\mathcal{A}[[\lambda]]) = \frac{\ker\boldsymbol{D}_\std \cap \ker\boldsymbol{D}_\std^* \cap \mathcal{A}^{(0)}[[\lambda]] } {\image \boldsymbol{D}_\std \cap\image \boldsymbol{D}_\std^*\cap \mathcal{A}^{(0)}[[\lambda]]} \end{equation} that is indeed a *-algebra by Lemma~\ref{lemma:ReducedBRSTStarAlgebra}. Therefore, the natural question is whether we can compare $\boldHBRST^{(\bullet)}(\mathcal{A}[[\lambda]])$ with $\boldHBRSTtilde^{(\bullet)}(\mathcal{A}[[\lambda]])$, in particular in ghost number zero. The rest of the section consists in the proof of the following main result. \begin{theorem} \label{thm:AredtildeIsomorphicAred} Let $(M,\star,\group{G},\boldsymbol{J},C)$ be a Hamiltonian quantum $\group{G}$-space with regular constraint surface, compact Lie group and Hermitian star product $\star$. Moreover, choose a positive definite inner product on the corresponding Lie algebra $\liealg{g}$, inducing the involution $^*$. Then one has \begin{equation} \label{eq:IsomorphismReducedBRSTAlg} \widetilde{\mathcal{A}}_\red \cong \Cinfty(C)^\group{G}[[\lambda]] \cong \mathcal{A}_\red \end{equation} with isomorphism $\boldsymbol{\iota^*}\colon \widetilde{\mathcal{A}}_\red \longrightarrow \Cinfty(C)^\group{G}[[\lambda]]$ and inverse $\prol$. 
\end{theorem} We already know from Proposition~\ref{prop:QuantumBRSTCohomology} that there is an isomorphism \begin{equation*} \boldsymbol{\iota^*} \colon \mathcal{A}_\red = \frac{\ker \boldsymbol{D}_\std \cap \mathcal{A}^{(0)}[[\lambda]]} {\image \boldsymbol{D}_\std \cap \mathcal{A}^{(0)}[[\lambda]] } \longrightarrow \Cinfty(C)^\group{G}[[\lambda]] \end{equation*} with inverse $\widehat{\boldsymbol{h}}\at{\Cinfty(C)^\group{G}[[\lambda]]} =\prol$, where we understand $\boldsymbol{\iota^*}$ to act on the representatives of the equivalence classes and $\prol$ to map into the corresponding equivalence class. We directly observe the following: \begin{lemma} The map \begin{equation} \label{eq:IotaonReducedBRSTStarAlg} \boldsymbol{\iota^*} \colon \widetilde{\mathcal{A}}_\red \longrightarrow \Cinfty(C)^\group{G}[[\lambda]] \end{equation} is well-defined with right inverse $\prol$. Moreover, $\prol$ is also a left inverse if \begin{equation} \label{eq:DifferenceInIntersectionImages} \sum_{i,\alpha_i } \prol\boldsymbol{\iota^*} \left( x_{i\alpha_i }\otimes F^{i\alpha_i } \right) - \sum_{i,\alpha_i } x_{i\alpha_i }\otimes F^{i\alpha_i } \in \image \boldsymbol{D}_\std \cap \image \boldsymbol{D}_\std^* \end{equation} holds for any element $\sum_{i,\alpha_i } x_{i\alpha_i }\otimes F^{i\alpha_i } \in\ker \boldsymbol{D}_\std \cap \ker \boldsymbol{D}_\std^* \cap \mathcal{A}^{(0)}[[\lambda]]$, where $\{x_{i\alpha_i }\}_{\alpha_i }$ denotes a basis of $\Anti^i\liealg{g}^* \otimes \Anti^i \liealg{g}$, and $F^{i\alpha_i }\in \Cinfty(M)[[\lambda]]$ for any $i=0,1,\dots,n$ and $\alpha_i $. \end{lemma} \begin{proof} The map~\eqref{eq:IotaonReducedBRSTStarAlg} is well-defined as $\boldsymbol{\iota^*}$ vanishes on $ \image \boldsymbol{D}_\std \cap \image \boldsymbol{D}^*_\std \cap \mathcal{A}^{(0)}[[\lambda]]$. 
Moreover, since $\phi \in \Cinfty(C)^\group{G}[[\lambda]]$ implies $\cc{\phi}\in \Cinfty(C)^\group{G}[[\lambda]]$ and since $\boldsymbol{\delta}=\delta$, we have $\boldsymbol{\iota^*}\prol =\id_{\Cinfty(C)[[\lambda]]}$ and \begin{align*} \boldsymbol{D}_\std (\prol (\phi)) & = (\boldsymbol{\delta}+2\boldsymbol{\del})(\prol (\phi)) = \prol \delta^c \phi = 0, \\ \boldsymbol{D}_\std^* (\prol (\phi )) & = \left((\boldsymbol{\delta}+2\boldsymbol{\del}) \left(\prol\left( \cc{\phi}\right)\right)\right)^* = \left(\prol \delta^c \cc{\phi}\right)^* = 0. \end{align*} So $\prol$ is still a well-defined right inverse of $\boldsymbol{\iota^*}$. \end{proof} This condition can be simplified further by exploiting the chain homotopy from Proposition~\ref{prop:QuantumBRSTCohomology}. \begin{proposition} \label{prop:ConditionsforRedAlgIso} Let $(M,\star,\group{G},\boldsymbol{J},C)$ be a Hamiltonian quantum $\group{G}$-space with regular constraint surface, proper action on $M$ and Hermitian star product $\star$. Moreover, let $g$ be a positive definite inner product on $\liealg{g}$ inducing the involution $^*$ via \eqref{eq:StarInvolutionGrassmann}. Then $\boldsymbol{\iota^*} \colon \widetilde{\mathcal{A}}_\red \longrightarrow \Cinfty(C)^\group{G}[[\lambda]]$ is an isomorphism with inverse $\prol$ if \begin{equation} \label{eq:IotaCommutesWithCConFzero} \cc{\boldsymbol{\iota^*}F^0} = \boldsymbol{\iota^*}\cc{F^0} \end{equation} for $\sum_{i,\alpha_i } x_{i\alpha_i }\otimes F^{i\alpha_i } \in\ker \boldsymbol{D}_\std \cap \ker \boldsymbol{D}_\std^* \cap \mathcal{A}^{(0)}[[\lambda]]$ with $F^0 = x_{0\alpha_0} \otimes F^{0\alpha_0} \in \Cinfty(M)[[\lambda]]$ as above. \end{proposition} \begin{proof} We first consider the augmented standard ordered BRST operator $\widehat{\boldsymbol{D}}_\std = \boldsymbol{D}_\std + \boldsymbol{\del}^c + 2 \boldsymbol{\iota^*}$.
We know from Proposition~\ref{prop:QuantumBRSTCohomology} that $ \widehat{\boldsymbol{D}}_\std\widehat{\boldsymbol{h}} + \widehat{\boldsymbol{h}}\widehat{\boldsymbol{D}}_\std = 2\id$, which entails \begin{align*} \sum_{i,\alpha_i }\prol \boldsymbol{\iota^*} \left( x_{i\alpha_i }\otimes F^{i\alpha_i }\right) = \frac{1}{2}\sum_{i,\alpha_i }\widehat{\boldsymbol{h}} \widehat{\boldsymbol{D}}_\std \left( x_{i\alpha_i }\otimes F^{i\alpha_i } \right) = \sum_{i,\alpha_i } \left( x_{i\alpha_i }\otimes F^{i\alpha_i } \right) - \frac{1}{2}\sum_{i,\alpha_i }\boldsymbol{D}_\std \widehat{\boldsymbol{h}} \left( x_{i\alpha_i }\otimes F^{i\alpha_i } \right) \end{align*} as $\widehat{\boldsymbol{D}}_\std \widehat{\boldsymbol{h}} \left(\sum_{i,\alpha_i } x_{i\alpha_i }\otimes F^{i\alpha_i }\right) = \boldsymbol{D}_\std \widehat{\boldsymbol{h}} \left( \sum_{i,\alpha_i }x_{i\alpha_i }\otimes F^{i\alpha_i }\right)$. Applying the *-involution yields \begin{align*} \sum_{i,\alpha_i }\cc{\prol\boldsymbol{\iota^*} \left( x_{i\alpha_i }\otimes F^{i\alpha_i }\right)} &= \sum_{i,\alpha_i }\left( x_{i\alpha_i }\otimes F^{i\alpha_i }\right)^* + \frac{1}{2}\sum_{i,\alpha_i }\boldsymbol{D}^*_\std \left( \widehat{\boldsymbol{h}} \left( x_{i\alpha_i }\otimes F^{i\alpha_i } \right) \right)^* \end{align*} by the definition of $\boldsymbol{D}^*_\std$. Because of $\left(\sum_{i,\alpha_i }x_{i\alpha_i }\otimes F^{i\alpha_i }\right)^* \in\ker \boldsymbol{D}_\std \cap \ker \boldsymbol{D}_\std^* \cap \mathcal{A}^{(0)}[[\lambda]]$ we also get \begin{align*} \sum_{i,\alpha_i }\left( x_{i\alpha_i }\otimes F^{i\alpha_i }\right)^* & = \sum_{i,\alpha_i }\prol\boldsymbol{\iota^*} \left( x_{i\alpha_i }\otimes F^{i\alpha_i }\right)^* + \frac{1}{2}\sum_{i,\alpha_i }\boldsymbol{D}_\std \widehat{\boldsymbol{h}} \left( x_{i\alpha_i }\otimes F^{i\alpha_i }\right)^*. 
\end{align*} Thus to prove the desired \eqref{eq:DifferenceInIntersectionImages} it suffices to show \begin{equation*} \sum_{i,\alpha_i }\cc{\prol\boldsymbol{\iota^*} \left( x_{i\alpha_i }\otimes F^{i\alpha_i }\right)} - \sum_{i,\alpha_i }\prol\boldsymbol{\iota^*} \left( x_{i\alpha_i }\otimes F^{i\alpha_i }\right)^* \in \image \boldsymbol{D}_\std^*, \end{equation*} which is fulfilled if $\cc{\boldsymbol{\iota^*}F^0} = \boldsymbol{\iota^*}\cc{F^0}$. \end{proof} In general we do not know if $F^0$ is $\group{G}$-invariant, i.e. $\boldsymbol{\delta}F^0=\delta F^0=0$, as the terms of higher degree in $\sum_{i, \alpha_i }x_{i\alpha_i } \otimes F^{i\alpha_i }$ could cancel this term under $\boldsymbol{D}_\std$. Hence we cannot apply \cite[Cor.~4.6]{gutt.waldmann:2010a}, which gives exactly the property $\boldsymbol{\iota^*}\cc{f}=\cc{\boldsymbol{\iota^*}f}$ for invariant functions $f\in \Cinfty(M)[[\lambda]]$. The idea for verifying \eqref{eq:IotaCommutesWithCConFzero} is to construct an inner product on $\Cinfty(C)[[\lambda]]$ with values in $\Cinfty(C)^\group{G}[[\lambda]]$ and a corresponding *-representation of $(\Cinfty(M)[[\lambda]],\star)$, see \cite[Def.~5.4]{gutt.waldmann:2010a}. \begin{definition}[Algebra-valued inner product] \label{definition:algebravaluedinnerproduct} Let $\group{G}$ be a compact Lie group. The $\Cinfty(C)^\group{G}[[\lambda]]$-\emph{valued inner product} on $\Cinfty(C)[[\lambda]]$ is defined pointwise by \begin{equation} \label{eq:SPred} \SP{\phi, \psi}_\red (c) = \int_\group{G} \left( \boldsymbol{\iota^*} \left( \cc{\prol(\phi)} \star \prol(\psi) \right) \right) (\Phi_{g^{-1}}(c)) \D^\lefttriv g \end{equation} for all $\phi, \psi \in \Cinfty(C)[[\lambda]]$ and $c\in C$, where $\D^\lefttriv g$ denotes the left invariant Haar measure. \end{definition} Then $\SP{\argument,\argument}_\red$ is well-defined, $\mathbb{C}[[\lambda]]$-sesquilinear and can be rewritten in the following way, see \cite[Lemmas~5.6, 5.8 and 5.9]{gutt.waldmann:2010a}.
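As a quick sketch of why \eqref{eq:SPred} indeed takes values in the invariant functions (assuming $\Phi$ is a left action, so that $\Phi_{g^{-1}} \circ \Phi_h = \Phi_{g^{-1}h}$): for $h \in \group{G}$ and $c \in C$ the substitution $g = hg'$, for which $g^{-1}h = g'^{-1}$, together with the left invariance of $\D^\lefttriv g$ yields
\begin{align*}
 \SP{\phi,\psi}_\red(\Phi_h(c)) = \int_\group{G} \left( \boldsymbol{\iota^*} \left( \cc{\prol(\phi)} \star \prol(\psi) \right) \right) (\Phi_{g'^{-1}}(c)) \D^\lefttriv g' = \SP{\phi,\psi}_\red(c).
\end{align*}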
\begin{proposition} \label{prop:AlgebraValuedInnerProduct} The map $\SP{\argument, \argument}_\red$ defines a non-degenerate inner product on $\Cinfty(C)[[\lambda]]$ with values in the invariant functions $\Cinfty(C)^\group{G}[[\lambda]]$. It can be rewritten in the following alternative ways: \begin{align} \label{eq:SPredsimple} \SP{\phi, \psi}_\red = \boldsymbol{\iota^*} \int_{\group{G}} \Phi^*_{g^{-1}} \left( \cc{\prol(\phi)} \star \prol(\psi) \right) \D^\lefttriv g = \boldsymbol{\iota^*} \int_{ \group{G}} \cc{\prol(\Phi^*_{g^{-1}}\phi)} \star \prol(\Phi^*_{g^{-1}} \psi) \D^\lefttriv g. \end{align} In particular, one has for all $\phi,\psi \in \Cinfty(C)[[\lambda]]$ \begin{equation} \label{eq:SPredHermitian} \cc{\SP{\phi,\psi}_\red} = \SP{\psi,\phi}_\red. \end{equation} \end{proposition} Recall the left action $\bullet$ of $(\Cinfty(M)[[\lambda]],\star)$ on $\Cinfty(C)[[\lambda]]$ from \cite[Def.~3.7]{gutt.waldmann:2010a}, which is given by \begin{equation} \label{eq:LeftRepofMonC} \Cinfty(M)[[\lambda]] \times \Cinfty(C)[[\lambda]] \ni (f,\phi) \longmapsto f \bullet \phi = \boldsymbol{\iota^*} (f \star \prol (\phi)) \in \Cinfty(C)[[\lambda]]. \end{equation} The key point is now that this action yields a $^*$-representation, see \cite[Prop.~5.11]{gutt.waldmann:2010a}. \begin{proposition} \label{proposition:LeftModuleIsRepresentation} The action $\bullet$ is a $^*$-representation of $(\Cinfty(M)[[\lambda]], \star)$ on $\Cinfty(C)[[\lambda]]$ with respect to the inner product $\SP{\argument,\argument}_\red$, i.e. we have for all $\phi, \psi \in \Cinfty(C)[[\lambda]]$ and $f \in \Cinfty(M)[[\lambda]]$ \begin{equation} \label{eq:fbulletIsAdjointable} \SP{\phi, f \bullet \psi}_\red = \SP{\cc{f} \bullet \phi, \psi}_\red. \end{equation} \end{proposition} Now we can finally prove Theorem~\ref{thm:AredtildeIsomorphicAred}.
\begin{proof}[of Theorem~\ref{thm:AredtildeIsomorphicAred}] By Proposition~\ref{prop:ConditionsforRedAlgIso} it suffices to show \begin{equation*} \cc{\boldsymbol{\iota^*}F^0} = \boldsymbol{\iota^*}\cc{F^0}, \end{equation*} where $F^0 = 1 \otimes F^0$ is again the lowest order of some $\sum_{i,\alpha_i } x_{i\alpha_i }\otimes F^{i\alpha_i } \in\ker \boldsymbol{D}_\std \cap \ker \boldsymbol{D}_\std^* \cap \mathcal{A}^{(0)}[[\lambda]]$. Just as above, $\{x_{i\alpha_i }\}_{\alpha_i }$ is a basis of $\Anti^i\liealg{g}^* \otimes \Anti^i \liealg{g}$ and $F^{i\alpha_i }\in \Cinfty(M)[[\lambda]]$ for all $i=0,1,\dots,n$ and all $\alpha_i $. By the construction of $\prol$ and assuming $M=M_\nice$ in the notation of \cite[Sec.~2.2]{gutt.waldmann:2010a} we know $\prol(1)=1\in \Cinfty(M)[[\lambda]]$ for the constant function $1\in\Cinfty(C)[[\lambda]]$. Thus we get \begin{align*} F^0 \bullet 1 = \boldsymbol{\iota^*}( F^0 \star 1) = \boldsymbol{\iota^*} F^0 \in \Cinfty(C)^\group{G}[[\lambda]] \end{align*} as well as $\cc{F^0} \bullet 1 = \boldsymbol{\iota^*} \cc{F^0} \in \Cinfty(C)^\group{G}[[\lambda]]$, which implies $\Phi^*_{g^{-1}} \boldsymbol{\iota^*}F^0= \boldsymbol{\iota^*}F^0$ and $\Phi^*_{g^{-1}}\boldsymbol{\iota^*}\cc{F^0}= \boldsymbol{\iota^*}\cc{F^0}$. 
Using \eqref{eq:SPredsimple} we can compute \begin{align*} \SP{1, F^0 \bullet 1}_\red = \int_\group{G} \boldsymbol{\iota^*} \left( \cc{\prol\left(\Phi^*_{g^{-1}}1\right)} \star \prol\left(\Phi^*_{g^{-1}} \boldsymbol{\iota^*}F^0\right) \right) \D^\lefttriv g = \boldsymbol{\iota^*}F^0 \int_\group{G}\D^\lefttriv g, \end{align*} and analogously \begin{align*} \SP{ \cc{F^0} \bullet 1, 1}_\red = \int_\group{G} \boldsymbol{\iota^*} \left( \cc{\prol\left(\Phi^*_{g^{-1}}\boldsymbol{\iota^*}\cc{F^0}\right)} \star \prol\left(\Phi^*_{g^{-1}} 1\right) \right) \D^\lefttriv g = \cc{\boldsymbol{\iota^*} \cc{F^0}}\int_\group{G} \D^\lefttriv g , \end{align*} which together with \eqref{eq:fbulletIsAdjointable} implies the desired $\boldsymbol{\iota^*}F^0= \cc{\boldsymbol{\iota^*} \cc{F^0}}$. \end{proof} If the action is in addition free on $C$, we know that $M_\red = C/\group{G}$ is a smooth manifold and with Proposition~\ref{prop:QuantumBRSTCohomology} we have an induced star product $\star_\red$ on $\Cinfty(M_\red)[[\lambda]]\cong \Cinfty(C)^\group{G}[[\lambda]]$ given by \begin{equation} \pi^*(u_1\star_\red u_2) = \boldsymbol{\iota^*} (\prol(\pi^*u_1)\star\prol(\pi^* u_2)) \end{equation} for $u_1,u_2\in\Cinfty(M_\red)[[\lambda]]$. Thus we immediately get: \begin{corollary} \label{cor:CCasInvolutiononMred} Let $(M,\star,\group{G},\boldsymbol{J},C)$ be a Hamiltonian quantum $\group{G}$-space with regular constraint surface, positive definite inner product on $\liealg{g}$ and Hermitian star product $\star$. In addition, let the compact Lie group $\group{G}$ act freely on $C$. Then one has \begin{equation} \label{eq:IsomorphismReducedBRSTAlgMred} \widetilde{\mathcal{A}}_\red \cong \Cinfty(M_\red)[[\lambda]] \cong \mathcal{A}_\red. \end{equation} Moreover, $\boldsymbol{\iota^*}$ induces the complex conjugation as involution on $(\Cinfty(M_\red)[[\lambda]],\star_\red)$. 
\end{corollary} \begin{proof} The fact that this construction induces the complex conjugation as involution for the reduced star product follows as in \cite[Prop.~4.7]{gutt.waldmann:2010a}. Explicitly, we have \begin{align*} \pi^*\cc{(u_1\star_\red u_2)} &= \cc{\boldsymbol{\iota^*} \left( \prol (\pi^* u_1)\star \prol( \pi^* u_2)\right)} = \boldsymbol{\iota^*}\left(\cc{ \prol (\pi^* u_1) \star\prol (\pi^* u_2)}\right) \\ &= \boldsymbol{\iota^*} \left( \prol (\pi^* \cc{u_2}) \star \prol( \pi^*\cc{ u_1})\right) = \pi^*(\cc{u_2} \star_\red \cc{u_1}) \end{align*} for all $u_1,u_2 \in \Cinfty(M_\red)[[\lambda]]$. \end{proof} There exists another construction of a *-involution for $\star_\red$ via the GNS representation for a suitably chosen positive functional depending on a density, see \cite[Thm.~4.17]{gutt.waldmann:2010a}. In contrast to that construction, we always obtain the same *-involution for the reduced star product, independently of the inner product $g$ on $\liealg{g}$. This is due to the fact that the choice of a density for the positive functional is a non-canonical one, whereas all inner products on the Lie algebra lead to isomorphic reduced *-algebras. Also, in the only order where $\boldsymbol{\iota^*}\at{\mathcal{A}^{(0)}[[\lambda]]}$ does not vanish identically, i.e. on the functions $\mathcal{A}^{0,0}[[\lambda]] = \Cinfty(M)[[\lambda]]$, the induced involution is always just the complex conjugation. \begin{remark} From \cite[Cor.~4.6]{gutt.waldmann:2010a} we know that one has $\boldsymbol{\iota^*}\cc{f} = \cc{\boldsymbol{\iota^*}f}$ for $\group{G}$-invariant functions $f\in \Cinfty(M)[[\lambda]]$, which implies that the complex conjugation is an involution for $\star_\red$.
Furthermore, as one needs this identity to show that $\SP{\argument,\argument}_\red$ satisfies, for all $\phi,\psi \in \Cinfty(C)[[\lambda]]$, \begin{align*} \cc{\SP{\phi,\psi}_\red} = \SP{\psi,\phi}_\red \quad \text{and} \quad \SP{\phi, f \bullet \psi}_\red = \SP{\cc{f} \bullet \phi, \psi}_\red, \end{align*} it is not surprising that the construction via the algebra-valued inner product again yields the complex conjugation as the involution on $(\Cinfty(M_\red)[[\lambda]],\star_\red)$. Therefore, it is important to remark that the complex conjugation is induced by an isomorphism with the reduced quantum BRST *-algebra $\widetilde{\mathcal{A}}_\red \cong \Cinfty(M_\red)[[\lambda]]$ if $M_\red$ exists as a smooth manifold. \end{remark}
\section*{Introduction} Gene expression is the process by which the genetic code inscribed in the DNA is transformed into proteins. The process consists of four main steps: \emph{transcription} of a DNA gene into an mRNA molecule, \emph{translation} of the mRNA molecule to a protein, degradation of mRNA molecules, and degradation of proteins. During mRNA translation, macromolecules called ribosomes move unidirectionally along the mRNA molecule, decoding it codon by codon into a corresponding chain of amino acids that is folded to become a functional protein. Translation is a fundamental biological process, and understanding and re-engineering this process is important in many scientific disciplines including medicine, evolutionary biology, and synthetic biology~\cite{Alberts2002}. New methods that measure gene-specific translation activity at the whole-genome scale, like polysome profiling \cite{Arava2003} and ribosome profiling \cite{Ingolia2009}, have led to a growing interest in mathematical models for translation. Such models can be used to integrate and explain the rapidly accumulating biological data as well as to predict the outcome of various manipulations of the genetic machinery. Recent methods that allow \emph{real-time imaging} of translation on a single mRNA transcript in vivo (see, e.g.~ \cite{Yan2016,Wu2016,Morisaki2016,ChongWang2016}) are expected to provide even more motivation for developing and analyzing powerful dynamical models of translation. Down-regulation of translation is important in cell biology, medicine, and biotechnology. For example, in many organisms small RNA genes, such as microRNAs, hybridize to the mRNA in specific locations~\cite{Ghildiyal2009,Inui2010} in order to down-regulate translation initiation or elongation \cite{Fabian2010,Filipowicz2008} and/or promote mRNA degradation. 
Alterations in the expression of microRNA genes contribute to the pathogenesis of most, if not all, human malignancies \cite{Croce2009}, and cancer cells are often targeted by generating tumor-specific RNA interference (RNAi) genes that down-regulate the oncogenes~\cite{Tavazoie2008,Zhang2003,Devi2006}. Furthermore, many viral therapeutic treatments and viral vaccines are based on the attenuation of mRNA translation in the viral genes \cite{Ben-Yehezkel2015,Goz2015,Kaspar2005,Coleman2008,Perez2009}. Down-regulation of mRNA translation in an \emph{optimal} manner is also related to fundamental biomedical topics such as molecular evolution and functional genomics \cite{Tuller2010c,Forman2008,Fang2004}. Here we study, for the first time, optimal down-regulation of translation in a dynamical model of translation. A standard model for translation is the \emph{totally asymmetric simple exclusion process}~(TASEP) \cite{Shaw2003,TASEP_tutorial_2011}. In this model, particles hop randomly along an ordered lattice of sites. Simple exclusion means that a particle cannot hop into a site that is occupied by another particle. This models hard exclusion between the particles, and creates an indirect coupling between them. Indeed, if a particle remains in the same site for a long time then all the particles preceding this site cannot move forward, leading to a ``traffic jam''. In the context of translation, the lattice is the mRNA molecule; the particles are the ribosomes; and hard exclusion means that a ribosome cannot move forward if the codon in front of it is covered by another ribosome. In the \emph{homogeneous} TASEP~(HTASEP) all the transition rates within the lattice are assumed to be equal and normalized to $1$, and thus the model is specified by an input rate $\alpha$, an exit rate $\beta$, and an order $N$ denoting the number of sites in the lattice.
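To make the HTASEP concrete, the following sketch simulates it directly. This is an illustration, not part of the paper; the random-sequential update scheme, the rate values, and the step counts are assumptions made for this example.

```python
import random

def htasep_current(alpha, beta, N, picks=400_000, burn_in=100_000, seed=1):
    """Estimate the steady-state current of the homogeneous TASEP with entry
    rate alpha, exit rate beta, unit internal hopping rates, and N sites,
    using random-sequential updates (one bond chosen uniformly per pick)."""
    rng = random.Random(seed)
    tau = [0] * N                      # occupancies: 1 = particle, 0 = hole
    exits = 0
    for pick in range(picks):
        i = rng.randrange(N + 1)       # bond 0 = entry, bond N = exit
        if i == 0:                     # entry attempt with probability alpha
            if tau[0] == 0 and rng.random() < alpha:
                tau[0] = 1
        elif i == N:                   # exit attempt with probability beta
            if tau[N - 1] == 1 and rng.random() < beta:
                tau[N - 1] = 0
                if pick >= burn_in:
                    exits += 1
        elif tau[i - 1] == 1 and tau[i] == 0:
            tau[i - 1], tau[i] = 0, 1  # internal hop with rate 1
    # each pick advances time by 1/(N+1) on average
    return exits * (N + 1) / (picks - burn_in)

J = htasep_current(alpha=0.3, beta=0.9, N=50)
```

In the low-density phase ($\alpha<\beta$ and $\alpha<1/2$), the estimated current should be close to the well-known value $\alpha(1-\alpha)$ for large $N$.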
TASEP is a fundamental model in non-equilibrium statistical mechanics that has been used to model numerous natural and artificial processes including traffic flow, surface growth, communication networks, evacuation dynamics and more~\cite{TASEP_book,tasep_ad_hoc_nets}. The \emph{ribosome flow model}~(RFM)~\cite{reuveni} is a nonlinear, continuous-time compartmental model for the unidirectional flow of ``material'' along a chain of $n$ consecutive compartments (or sites). It can be derived via a mean-field approximation of TASEP~\cite{TASEP_book,solvers_guide}. In the~RFM, the state variable $x_i: \R_+ \to [0,1]$, $i=1,\dots,n$, describes the normalized amount (or density) of ``material'' in site~$i$ at time~$t$, where $x_i(t)=1$ [$x_i(t)=0$] indicates that site $i$ is completely full [completely empty] at time $t$. Thus, the vector~$x(t):=\begin{bmatrix}x_1(t)&\dots&x_n(t)\end{bmatrix}'$ describes the density profile along the chain at time~$t$. A parameter~$\lambda_i>0$, $i=0,\dots,n$, controls the transition rate from site~$i$ to site~$i+1$, where~$\lambda_0$ [$\lambda_n$] is the initiation [exit] rate (see Fig.~\ref{fig:rfm}). The output rate at time~$t$ is~$R(t)=\lambda_n x_n(t)$. In the context of translation, the ``material'' consists of the moving ribosomes, and each site represents a group of codons, i.e. the mRNA is coarse-grained into~$n$ consecutive sites of codons. Thus,~$R(t)$, the output flow of ribosomes at time~$t$, is the \emph{protein production rate} at time $t$. It is known that the RFM admits a unique \emph{steady-state} production rate denoted by $R=R(\lambda)$~\cite{RFM_stability}, where~$\lambda:=\begin{bmatrix}\lambda_0&\dots&\lambda_n\end{bmatrix}'$. \begin{figure} \begin{center} \includegraphics[width= 14cm,height=3cm]{rfm_sys_gen.eps} \caption{The RFM models unidirectional flow along a chain of $n$ sites. The state variable~$x_i(t)\in[0,1]$ represents the density of site $i$ at time $t$.
The parameter $\lambda_i>0$ controls the transition rate from site~$i$ to site~$i+1$, with~$\lambda_0>0$ [$\lambda_n>0$] controlling the initiation [exit] rate. The output rate at time $t$ is~$R(t) =\lambda_n x_n(t)$. } \label{fig:rfm} \end{center} \end{figure} Here, we use the RFM to analyze how to maximally down-regulate mRNA translation. To do this, we formulate the following general optimization problem. Given an mRNA molecule with~$n$ sites, and a convex and compact region of feasible transition rates $\Omega^{n+1}$, find a vector~$\lambda^* \in\Omega^{n+1} $ such that~$R(\lambda^*)=\min_{\lambda\in\Omega^{n+1} }R(\lambda)$. In other words, the problem is how to select transition rates, within a feasible region, such that the production rate is minimized (see Fig.~\ref{fig:Figure_RFM_blocking}). To the best of our knowledge, this is the first time that such a problem is analyzed in a dynamical model of mRNA translation. \begin{figure} \begin{center} \includegraphics[width= 8cm,height=7cm]{Figure_RFM_blocking.eps} \caption{ The problem we consider is how to efficiently select transition rates along the mRNA molecule, within a given set of possible rates, such that the protein production rate is minimized. In practice, translation rate modification can be done by introducing mutations into the gene or by designing a corresponding RNAi molecule. } \label{fig:Figure_RFM_blocking} \end{center} \end{figure} As a concrete example, consider an RFM with dimension $n$ and rates~$\BLMD$. Given a ``total reduction budget''~$b\in[0,\min\{\bar \lambda_i\}]$, define the feasible set~$\Omega^{n+1}\subset \R^{n+1}_+$ by \[ \left \{ \begin{bmatrix} \bar \lambda_0-\varepsilon_0 &\dots& \bar \lambda_n-\varepsilon_n \end{bmatrix} :\varepsilon_i\geq 0, \;\varepsilon_0+\dots+\varepsilon_n=b \right \}. \] In other words, the feasible set is the set of all the rates obtained by applying a ``total reduction budget''~$b$ in the rates of the given mRNA molecule. 
The question is how to distribute the total reduction budget over the rates so as to obtain the minimal possible protein production rate. We prove that: \begin{itemize} \item If some rate~$\bar \lambda_k$ is a ``bottleneck'' rate, in a sense that will be made precise below, then an optimal reduction in protein production rate is obtained by using all the reduction budget~$b$ to further decrease~$\bar \lambda_k$; \item If all the given rates are equal, i.e.~$\bar \lambda_0=\dots=\bar \lambda_n$, then the transition rate at the middle of the mRNA molecule is the bottleneck rate, and thus an optimal reduction in protein production rate is obtained by using all the reduction budget to reduce this transition rate. \end{itemize} Thus, in this case there exists a single site such that mutating it yields the maximal inhibition of translation. Our results allow us to determine where this site is located. The remainder of this paper is organized as follows. We first briefly review some known results on the RFM that are needed for our purposes. The following section poses the problem of down-regulating the steady-state protein production rate in the RFM in an optimal manner, and then describes our main results. Analysis of the RFM is non-trivial, as this is a nonlinear dynamical model. In particular, the mapping from~$\lambda$ to~$R(\lambda)$ is nonlinear and does not admit a closed-form expression. Nevertheless, by combining tools from convex optimization and eigenvalue sensitivity theory, we show that this optimization problem is tractable in some cases, and rigorously prove several results that have interesting biological implications. The final section summarizes and describes several directions for further research. To increase the readability of this paper, all the proofs are placed in the Appendix.
\section*{Ribosome Flow Model} The dynamics of the RFM with $n$ sites is given by $n$ nonlinear first-order ordinary differential equations: \begin{align}\label{eq:rfm} \dot{x}_1&=\lambda_0 (1-x_1) -\lambda_1 x_1(1-x_2), \nonumber \\ \dot{x}_2&=\lambda_{1} x_{1} (1-x_{2}) -\lambda_{2} x_{2} (1-x_3) , \nonumber \\ \dot{x}_3&=\lambda_{2} x_{ 2} (1-x_{3}) -\lambda_{3} x_{3} (1-x_4) , \nonumber \\ &\vdots \nonumber \\ \dot{x}_{n-1}&=\lambda_{n-2} x_{n-2} (1-x_{n-1}) -\lambda_{n-1} x_{n-1} (1-x_n), \nonumber \\ \dot{x}_n&=\lambda_{n-1}x_{n-1} (1-x_n) -\lambda_n x_n. \end{align} If we define~$x_0(t):=1$ and $x_{n+1}(t):=0$ then~\eqref{eq:rfm} can be written more succinctly as \be\label{eq:rfm_all} \dot{x}_i=\lambda_{i-1}x_{i-1}(1-x_i)-\lambda_i x_i(1-x_{i+1}),\quad i=1,\dots,n. \ee Eq.~\eqref{eq:rfm_all} can be explained as follows. The flow of material from site~$i$ to site~$i+1$ at time~$t$ is~$\lambda_{i} x_{i}(t)(1 - x_{i+1}(t) )$. This flow is proportional to $x_i(t)$, i.e. it increases with the density at site~$i$, and to $(1-x_{i+1}(t))$, i.e. it decreases as site~$i+1$ becomes fuller. This corresponds to a ``soft'' version of a simple exclusion principle. Note that the maximal possible flow from site~$i$ to site~$i+1$ is the transition rate~$\lambda_i$. Let~$x(t,a)$ denote the solution of~\eqref{eq:rfm} at time~$t \ge 0$ for the initial condition~$x(0)=a$. Since the state-variables correspond to normalized density levels, with~$x_i(t)=0$ [$x_i(t)=1$] representing that site~$i$ is completely empty [full] at time~$t$, we always assume that~$a$ belongs to the closed $n$-dimensional unit cube: $ C^n:=\{x \in \R^n: x_i \in [0,1] , i=1,\dots,n\}. $ Let $\Int(C^n)$ [$\partial C^n$] denote the interior [boundary] of $C^n$. It is straightforward to verify that~$\partial C^n$ is repelling, i.e. if $a\in\partial C^n$ then $x(t,a)\in\Int(C^n)$ for all $t>0$, so~$C^n$ and also~$\Int(C^n)$ are invariant sets for the dynamics. 
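As a numerical illustration of these dynamics (not from the paper; the forward-Euler scheme and the particular rates are assumptions made for this sketch), one can integrate~\eqref{eq:rfm_all} directly and watch the density profile settle:

```python
import numpy as np

def rfm_rhs(x, lam):
    """Right-hand side of the RFM equations, with the conventions
    x_0 := 1 and x_{n+1} := 0 built in; lam has length n+1."""
    xe = np.concatenate(([1.0], x, [0.0]))
    flow = lam * xe[:-1] * (1.0 - xe[1:])    # flow from site i into site i+1
    return flow[:-1] - flow[1:]

def rfm_steady_state(lam, x0, dt=0.005, T=500.0):
    """Forward-Euler integration of the RFM until (numerical) steady state."""
    x = np.asarray(x0, dtype=float)
    for _ in range(int(T / dt)):
        x = x + dt * rfm_rhs(x, lam)
    return x

lam = np.array([0.85, 0.92, 0.78, 0.57, 0.88])   # an illustrative choice of rates
e = rfm_steady_state(lam, [0.5, 0.5, 0.5, 0.5])
R = lam[-1] * e[-1]                              # steady-state production rate
```

Starting from a different initial profile in $C^n$ yields the same limit point, and at the limit all the site-to-site flows are equal to~$R$.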
An important property of the~RFM is the symmetry between the ``particles'' (i.e. ribosomes) moving from left to right and ``holes'' (i.e. the ``lack'' of ribosomes) moving from right to left (in the TASEP literature, this property is sometimes referred to as the ``particle-hole'' symmetry). Indeed, let~$q_j(t):=1-x_{n+1-j}(t)$, $j=1,\dots,n$. Then \begin{align*} \dot q_1&= \lambda_n(1-q_1) -\lambda_{n-1}q_1(1-q_2), \\ \dot q_2&= \lambda_{n-1}q_1(1-q_2) -\lambda_{n-2}q_2(1-q_3) , \\ &\vdots\\ \dot q_n&= \lambda_{1}q_{n-1}(1-q_n) -\lambda_{0}q_n. \end{align*} This is another RFM, but now with rates $\lambda_n,\dots,\lambda_0$. The RFM has been used to model and analyze the flow of ribosomes along the mRNA molecule during the process of mRNA translation. The (soft) simple exclusion principle corresponds to the fact that ribosomes have volume and cannot overtake one another. It has been shown in \cite{reuveni} that the correlation between the production rates predicted by the RFM and by TASEP over all {\em S. cerevisiae} endogenous genes is~$0.96$. It has also been shown there that the RFM agrees well with biological measurements of ribosome densities. Furthermore, the RFM predictions correlate well (correlations up to~$0.6$) with protein levels in various organisms (e.g. {\em E. coli}, {\em S. pombe}, {\em S. cerevisiae}). Given the high levels of bias and noise in measurements related to gene expression and the inherent stochasticity of intracellular biological processes (see e.g. \cite{Diament2016,Kaern2005}), these correlation values demonstrate the relevance of the RFM in this context.
\subsection{Steady-State Spectral Representation} Ref.~\cite{RFM_stability} has shown that the RFM is a \emph{tridiagonal cooperative dynamical system}~\cite{hlsmith}, and that~\eqref{eq:rfm} admits a \emph{unique} steady-state point~$e=e(\LMD) \in \Int(C^n)$ that is globally asymptotically stable, that is, $\lim_{t\to\infty} x(t,a)=e$ for all $a\in C^n$ (see also~\cite{RFM_entrain}). This means that the ribosomal density profile always converges to a steady-state profile that depends on the rates, but not on the initial condition. In particular, the output rate~$R(t)=\lambda_n x_n(t)$ converges to a steady-state value~$R:=\lambda_n {e}_n$. At steady state (i.e., for~$x=e$), the left-hand side of all the equations in~\eqref{eq:rfm} is zero, so \begin{align} \label{eq:ep} \lambda_0 (1- {e}_1) & = \lambda_1 {e}_1(1- {e}_2)\nonumber \\& = \lambda_2 {e}_2(1- {e}_3) \nonumber \\ & \vdots \nonumber \\ &= \lambda_{n-1} {e}_{n-1} (1- {e}_n) \nonumber \\& =\lambda_n {e}_n\nonumber\\&=R. \end{align} This yields \begin{align}\label{eq:rall} R=\lambda_i e_i(1-e_{i+1}), \quad i=0,\dots,n, \end{align} where $e_0:=1$ and $e_{n+1}:=0$. Ref.~\cite{RFM_max} used these expressions to provide a \emph{spectral representation} of the mapping from the set of rates~$\lambda$ to the steady-state output rate~$R$. Let $\R^n_+:=\{y\in \R^n: y_i \geq 0,\; i=1,\dots,n\}$ and $\R^n_{++}:=\{y\in \R^n: y_i> 0,\; i=1,\dots,n\}$. \begin{Theorem}\cite{RFM_max}\label{thm:spect} Given an RFM with rates~$\lambda=\begin{bmatrix}\lambda_0&\dots&\lambda_n \end{bmatrix}'$, let~$R=R(\lambda)$ denote its steady-state production rate.
Define an $(n+2)\times(n+2)$ Jacobi matrix~$A=A(\lambda)$ by \be\label{eq:bmatrox} A:= \begin{bmatrix} 0 & \lambda_0^{-1/2} & 0 &0 & \dots &0&0 \\ \lambda_0^{-1/2} & 0 & \lambda_1^{-1/2} & 0 & \dots &0&0 \\ 0& \lambda_1^{-1/2} & 0 & \lambda_2^{-1/2} & \dots &0&0 \\ & &&\vdots \\ 0& 0 & 0 & \dots &\lambda_{n-1}^{-1/2} & 0& \lambda_{n }^{-1/2} \\ 0& 0 & 0 & \dots &0 & \lambda_{n }^{-1/2} & 0 \end{bmatrix} . \ee Then: \begin{enumerate} \item The eigenvalues of~$A$ are real and distinct, and if we order them as~$\zeta_1<\dots<\zeta_{n+2}$ then~$\zeta_{n+2}=(R(\lambda))^{-1/2}$. \item Let~$s_i(\lambda) :=\frac{\partial }{\partial \lambda_i} R(\lambda)$, i.e. the sensitivity of~$R$ with respect to (w.r.t.) the rate~$\lambda_i$. Let $v\in\R^{n+2}_{++}$ denote an eigenvector of~$A$ corresponding to the eigenvalue $\zeta_{n+2}$. Then \be\label{eq:sense} s_i(\lambda) = \frac{2 R^{3/2}}{\lambda_i^{3/2} v'v} v_{i+1} v_{i+2} ,\quad i=0,\dots,n. \ee \end{enumerate} \end{Theorem} This means that the steady-state production rate, and its sensitivity with respect to the transition rates, can be computed efficiently using numerical algorithms for computing the eigenvalues and eigenvectors of tridiagonal matrices. Theorem~\ref{thm:spect} also implies that \be\label{eq:rhom} R(c\lambda_0,\dots,c\lambda_n)=cR( \lambda_0,\dots, \lambda_n),\quad\text{for all }c>0, \ee i.e. $R(\lambda)$ is homogeneous of degree one. Another important implication of Theorem~\ref{thm:spect} is that~$R$ is a \emph{strictly concave} function of the transition rates~$\{\lambda_0,\dots,\lambda_n\}$ over~$\R^{n+1}_{++}$~\cite{RFM_max}. Also, it implies that~$\frac{\partial }{\partial \lambda_i} R>0$ for all~$i$, that is, an increase in any of the rates yields an increase in the steady-state production rate. 
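Theorem~\ref{thm:spect} reduces the computation of~$R$ and of the sensitivities~$s_i$ to a symmetric tridiagonal eigenvalue problem. The following sketch illustrates this (it is not the authors' code; the choice of \texttt{numpy} routines and of the test rates is ours):

```python
import numpy as np

def jacobi_matrix(lam):
    """The (n+2)x(n+2) Jacobi matrix A of the theorem, with off-diagonal
    entries lambda_i^{-1/2}."""
    off = np.asarray(lam, dtype=float) ** -0.5
    return np.diag(off, 1) + np.diag(off, -1)

def production_rate(lam):
    """Steady-state production rate: R = zeta_max^(-2), where zeta_max is
    the largest eigenvalue of A."""
    return np.linalg.eigvalsh(jacobi_matrix(lam))[-1] ** -2.0

def sensitivities(lam):
    """s_i = dR/dlambda_i via the eigenvector formula of the theorem."""
    lam = np.asarray(lam, dtype=float)
    w, V = np.linalg.eigh(jacobi_matrix(lam))
    v = V[:, -1]                       # eigenvector for the largest eigenvalue
    R = w[-1] ** -2.0
    # v_{i+1} v_{i+2} in 1-based indexing is v[i] * v[i+1] in 0-based indexing
    return 2.0 * R ** 1.5 / (lam ** 1.5 * (v @ v)) * v[:-1] * v[1:]

lam = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
R = production_rate(lam)
s = sensitivities(lam)
```

Homogeneity of degree one, the particle-hole symmetry $R(\lambda_0,\dots,\lambda_n)=R(\lambda_n,\dots,\lambda_0)$, and the positivity of all the sensitivities can all be checked directly on this implementation.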
For more on the analysis of the RFM, and also networks of interconnected~RFMs, using tools from systems and control theory, see e.g.~\cite{zarai_infi,RFM_max,RFM_sense,RFM_feedback,RFMR,rfm_control,RFM_model_compete_J}. \section*{Main Results}\label{sec:main} We begin by posing a general minimization problem for the steady-state production rate in the~RFM. \begin{Problem}\label{prob:min} Given a convex and compact feasible set of transition rates~$\Omega^{n+1} \subset \R^{n+1}_{++}$, find $\lambda^*\in\Omega^{n+1}$ such that~$R(\lambda^*)= \min_{\lambda\in\Omega^{n+1}}R(\lambda). $ \end{Problem} From the biological point of view, the feasible set of transition rates~$\Omega^{n+1}$ depends on all the biophysical constraints on the transition rates along the coding sequence. These constraints include the maximal/minimal decoding rate of a codon (e.g. via its adaptation to the tRNA pool) \cite{Dana2014B}, the maximal possible effect of mRNA folding (after codon substitution) on each codon \cite{TullerGB2011}, the maximal possible effect (after amino acid substitution) of the interaction of the ribosome with amino acids of the nascent peptide~\cite{Sabi2015}, and the maximal elongation slow-down due to interaction with microRNAs~\cite{Ghildiyal2009,Inui2010}. Below we explain how to pose various interesting biological problems in the framework of Problem~\ref{prob:min}. Examples include finding the minimal number of mutations that down-regulate translation of a gene/mRNA under a certain ``total reduction budget''. This is practically important when we use costly (in terms of time and money) gene editing approaches. Another related question is how to down-regulate translation of a gene/mRNA with a maximal number of mutations. This is important when attenuating the viral replication rate for generating a safe live attenuated vaccine. A large number of mutations reduces the probability of reversion.
One may also define the feasible set in Problem~\ref{prob:min} in such a way that some rates cannot be changed. This is relevant for example when some codons along the mRNA cannot be modified. Indeed, various positions along the mRNA affect regulatory mechanisms that we may not want to alter (e.g. co-translational folding, splicing, translation). It is well-known (see, e.g.~\cite[Thm.~7.42]{beck2014}) that if~$f:\Omega^{n+1} \to\R$ is a continuous and {strictly} convex function defined over a convex and compact set~$\Omega^{n+1}$ then all the maximizers of~$f$ over~$\Omega^{n+1}$ are extreme points of~$\Omega^{n+1}$ (for more on the problem of maximizing a convex function, or equivalently, minimizing a concave function, see e.g.~\cite{concave_prog}). Combining this with the fact that~$R$ is a strictly concave function of the transition rates over~$\R^{n+1}_{++}$ implies the following. \begin{Proposition}\label{prop:main_sol} Every solution of Problem~\ref{prob:min} is an extreme point of~$\Omega^{n+1}$. \end{Proposition} In particular, if the set of extreme points of~$\Omega^{n+1}$ is finite then one can always solve Problem~\ref{prob:min} by simply calculating~$R(\lambda)$ for all~$\lambda$ that are extreme points of~$\Omega^{n+1}$, and then finding the minimum of these values. In particular, if~$\Omega^{n+1}$ is a convex polytope then the extreme points are just the vertices of~$\Omega^{n+1}$. Thus, when the biophysical constraints lead to a feasible set of rates that is a convex polytope then it is computationally straightforward to determine how to modify the rates so as to obtain the largest decrease in translation rate under reasonable biophysical constraints. In the remainder of this section, we consider three special cases of Problem~\ref{prob:min} for which it is also possible to obtain analytic results. 
\begin{Problem}\label{prob:sd} Given an RFM with $n$ sites, rates~$\bar \lambda_0,\dots, \bar \lambda_n$, and a ``total reduction budget''~$b\in[0,\min\{\bar \lambda_i\}]$, let~$\Omega^{n+1}=\Omega^{n+1}(\bar \lambda,b) $ be the set \be\label{eq:omeg_cov} \left \{\begin{bmatrix} \bar \lambda_0-\varepsilon_0,\dots,\bar \lambda_n-\varepsilon_n\end{bmatrix} : \varepsilon_i \geq 0, \sum_{i=0}^n \varepsilon_i=b \right \}. \ee Find~$\lambda^* \in \Omega^{n+1} $ such that~$R(\lambda^*)=\min_{\lambda\in\Omega^{n+1} } R(\lambda)$. \end{Problem} In other words, $\Omega^{n+1} $ is the set of all the rates that can be obtained by applying a total reduction~$b$ to the given rates~$\bar \lambda_i$. From a mathematical point of view, $b$ provides a bound on the total possible rate reduction. It also couples the reduction in different rates, as a larger reduction in one rate must be compensated by smaller reductions in other rates so that the total reduction will not exceed~$b$. From a synthetic biology point of view,~$b$ can be used to capture the idea of maximally inhibiting the production rate while minimizing the side-effects of this down-regulation. For example, a very small value of~$b$ forces a solution with small modifications in all the rates. This is, of course, expected to minimize the effect of the mutations on the fitness of the cell/organism. For example, since co-translational folding \cite{Zhang2009,Pechmann2013,Tuller2015} is related to the ribosome transition rates along the mRNA, smaller changes in the rates are expected to have a smaller effect on protein folding (and thus on the functionality of the protein and the overall organismal fitness). Smaller changes in the transition rates are also related to a ``simpler'' biological solution in the sense of fewer mutations, fewer miRNAs, etc. The next example demonstrates Problem~\ref{prob:sd}.
\begin{Example}\label{exa:firdem} Consider an RFM with dimension $n=4$ and transition rates \[ \bar \lambda_0=0.85,\; \bar \lambda_1=0.92, \;\bar \lambda_2=0.78,\;\bar \lambda_3=0.57, \; \bar \lambda_4=0.88. \] The steady-state production rate is $R(\bar \lambda_0,\dots,\bar \lambda_4) = 0.2308$ (all numbers are to four digit accuracy). Suppose that the total reduction budget is $b=0.1$. Then, for example, the vector \[ \lambda:=\bar \lambda-\begin{bmatrix} 0.05 & 0 & 0 & 0.02 & 0.03 \end{bmatrix}', \] belongs to~$\Omega^5$, and~$R(\lambda)=0.2260$. An optimal solution is $\lambda^*:=\begin{bmatrix} 0.85 & 0.92 & 0.78 & 0.47 & 0.88 \end{bmatrix}'\in\Omega^5$, with $R(\lambda^*)=0.2140$. Note that this corresponds to reducing~$b$ from the rate~$\bar \lambda_3$, which is the minimum of all the rates~$\bar \lambda_i$, leaving all the other rates unchanged. \hfill{$\square$} \end{Example} Let~$d^i\in\R^{n+1}$ denote the~$(i+1)$'th column of the~$(n+1)\times(n+1)$ identity matrix. The set~$\Omega^{n+1}(\bar \lambda,b)$ is a convex polytope with vertices: \[ v^i : = \begin{bmatrix} \bar \lambda_0&\dots&\bar \lambda_n \end{bmatrix}' -b d^i ,\quad i=0,\dots,n. \] If there exists an index~$i$ such that~$\bar \lambda_i=b$ then it is clear that an optimal solution is to reduce~$\bar \lambda_i$ to~$0$, as then the steady-state production rate will be zero. So we always assume that~$b$ takes values in the set~$[0,\min\{\bar \lambda_i\}-\rho]$, for some~$\rho>0$. This means that Problem~\ref{prob:sd} is a special case of Problem~\ref{prob:min}, as $\Omega^{n+1}(\bar \lambda,b)$ is a convex polytope contained in~$\R^{n+1}_{++}$. By Prop.~\ref{prop:main_sol}, every solution of Problem~\ref{prob:sd} is contained in the set~$\{v^0,\dots, v^n\}$. In other words, every minimizer corresponds to reducing \emph{all} the available budget~$b$ from a single rate.
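A vertex-enumeration solver built on this observation can be sketched as follows (an illustration, not the authors' script; the spectral formula of Theorem~\ref{thm:spect} is re-implemented inline so that the snippet is self-contained):

```python
import numpy as np

def production_rate(lam):
    """R via the spectral representation: the inverse square of the largest
    eigenvalue of the Jacobi matrix with off-diagonal entries lambda_i^(-1/2)."""
    off = np.asarray(lam, dtype=float) ** -0.5
    A = np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(A)[-1] ** -2.0

def solve_total_budget(lam_bar, b):
    """Minimize R over the budget polytope by checking the n+1 vertices
    v^i = lam_bar - b*d^i (every minimizer is one of them)."""
    lam_bar = np.asarray(lam_bar, dtype=float)
    best_i, best_R = None, np.inf
    for i in range(len(lam_bar)):
        lam = lam_bar.copy()
        lam[i] -= b                    # vertex v^i: all the budget on rate i
        R = production_rate(lam)
        if R < best_R:
            best_i, best_R = i, R
    return best_i, best_R

# the rates and budget from the example above
i_star, R_star = solve_total_budget([0.85, 0.92, 0.78, 0.57, 0.88], 0.1)
```

For these rates the solver selects the vertex that reduces~$\bar\lambda_3$, as in Example~\ref{exa:firdem}.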
This immediately yields a simple and efficient algorithm for solving Problem~\ref{prob:sd}: use the spectral representation of~$R$ to compute~$R(v^i)$, $i=0,\dots,n$, and then find the minimum of all these values. Since the matrix~$A$ in~\eqref{eq:bmatrox} is symmetric and tridiagonal, calculating~$R(v^i)$ can be done efficiently even for large values of~$n$. We wrote a simple (and unoptimized) MATLAB script for solving Problem~\ref{prob:sd}, and ran it on a MAC laptop with a~$2.6$ GHz Intel core $i7$ processor. For an RFM with $n=500$ (a typical coding region includes a few hundred codons \cite{Zhang2000}), rates $\bar \lambda_i=1$, $i=0,\dots,500$, and~$b=0.1$, the optimal solution is found in~$3.14$ seconds. \begin{comment} \begin{Example}\label{exp:n2l1} Consider an RFM with dimension $n=2$, and rates~$\bar \lambda_i=1.0$, $i=0,1,2$. Here,~$R(\bar \lambda)= 0.3820$ (all numbers are to four digit accuracy). Let $b=0.5$. Fig.~\ref{fig:poly_n2} depicts the feasible set $\Omega^3$. The three extreme points of $\Omega^3$ are: \begin{align*} v^1&:=\begin{bmatrix} 0.5 & 1.0 & 1.0 \end{bmatrix}',\\ v^2&:=\begin{bmatrix} 1.0 & 0.5 & 1.0 \end{bmatrix}', \\%\quad \text{(blue circle)},\\ v^3&:=\begin{bmatrix} 1.0 & 1.0 & 0.5 \end{bmatrix}'. \end{align*} Since $R(v^1)=0.2929$, $R(v^2)=0.2679$, and $R(v^3)=0.2929$, it follows that $\lambda^*=v^2$. In other words, the optimal knock down is obtained by using all the reduction budget to reduce the rate~$\bar \lambda_1$. \end{Example} \begin{figure}[t] \begin{center} \includegraphics[width= 9cm,height=8cm]{poly_n2.eps} \caption{Feasible set~$\Omega^3$ for the RFM with rates $\bar \lambda_i=1 $, $i=0,1,2$ and~$b=0.5$ in Example~\ref{exp:n2l1}. 
Shown also are the three extreme points $v^1$ (red diamond), $v^2$ (blue square), and $v^3$ (black circle).} \label{fig:poly_n2} \end{center} \end{figure} \end{comment} Example~\ref{exa:firdem} may suggest that reducing the slowest transition rate by~$b$ always yields an optimal solution, but in general this is not true (see Example \ref{ex:bn} below). One may also consider a different feasible set in Problem~\ref{prob:sd}, namely, \[ \left \{\begin{bmatrix} \bar \lambda_0-\varepsilon_0,\dots,\bar \lambda_n-\varepsilon_n\end{bmatrix} : \varepsilon_i \geq 0, \sum_{i=0}^n \varepsilon_i\leq b \right \} , \] i.e. here the total reduction is \emph{up to}~$b$. However, by Theorem~\ref{thm:spect} $\frac{\partial }{\partial \lambda_i}R(\lambda) >0$ for all $i$, and thus an optimal solution for this problem is guaranteed to agree with an optimal solution of Problem~\ref{prob:sd}. The next example demonstrates the effect of increasing the total reduction rate~$b$ on the optimal solution of Problem~\ref{prob:sd}. \begin{Example} Consider an RFM with dimension $n=10$, and rates~$\bar \lambda_i=1$, $i=0,\dots,n$. Here $R(\bar \lambda)=0.2652$. We calculated the optimal solution~$\lambda^*$ for different values of~$b$, and also the value $\Delta R(b):=R(\bar \lambda)-R(\lambda^*)$, that is, the optimal reduction in protein rate that can be obtained for various values of~$b$. Figure~\ref{fig:n10_dR} depicts~$\Delta R$ as a function of~$b$. 
It may be seen that~$\Delta R$ increases quickly with~$b$ (specifically, the relation is superlinear).\hfill{$\square$} \end{Example} \begin{figure} \begin{center} \includegraphics[width= 8cm,height=7cm]{n10_dR.eps} \caption{$\Delta R$ as a function of $b$ for an RFM with dimension $n=10$ and rates~$\bar \lambda_i=1$, $i=0,\dots,10$.} \label{fig:n10_dR} \end{center} \end{figure} \subsection{Optimal reduction and sensitivities} It is also possible to derive theoretical results on the structure of an optimal solution~$\lambda^*$ in Problem~\ref{prob:sd} using the sensitivities $s_i(\lambda) :=\frac{\partial}{\partial \lambda_i} R(\lambda)$. Note that these can be computed efficiently using~\eqref{eq:sense}. \begin{Proposition}\label{prop:sense} Consider Problem~\ref{prob:sd}. If there exist~$i,j\in\{0,\dots,n\}$ such that \be\label{eq:sencom} s_i(\bar \lambda)<s_j(\bar \lambda ) \ee then any optimal solution~$\lambda^*$ satisfies~$\lambda^*_i=\bar \lambda_i$. \end{Proposition} In other words, if the sensitivity of the steady-state production rate to rate~$\lambda_i$ at~$\bar \lambda$ is lower than some other sensitivity then an optimal solution will \emph{not} include a reduction in~$\bar \lambda_i$. Indeed, it is better to distribute the reduction budget over some other, more sensitive, rates. \begin{Remark}\label{rem:sens} Note that since $R $ is a strictly concave function of the rates, \begin{align*} \frac{\partial }{\partial \lambda_i}s_i(\bar\lambda) &=\frac{\partial^2}{\partial \lambda_i^2} R(\bar \lambda) <0, \end{align*} for any~$\bar \lambda\in\R^{n+1}_{++}$ and any~$ i\in\{0,\dots,n\}$. In other words, a decrease in $\bar \lambda_i$ increases the sensitivity w.r.t. this rate. \end{Remark} Proposition~\ref{prop:sense} leads to the following definition. \begin{Definition}\label{def:btl} Given an RFM with rates~$\bar \lambda$, a transition rate $\bar \lambda_j$ is called a \emph{bottleneck rate} if $s_j(\bar \lambda) > s_i(\bar \lambda)$, for all $i\ne j $. 
\end{Definition} In other words, a bottleneck rate is one with a maximal sensitivity. Combining this with Proposition~\ref{prop:sense} immediately yields the following result. \begin{Corollary}\label{coro:optsimp} Given an RFM with rates~$\bar \lambda$, suppose that~$\bar \lambda_j$ is a bottleneck rate. Then the unique optimal solution to Problem~\ref{prob:sd} is obtained by reducing~$\bar \lambda_j$ by~$b$. \end{Corollary} An important observation is that the slowest rate along the~mRNA molecule and the bottleneck rate may be different. The next example demonstrates this. \begin{Example}\label{ex:bn} Consider an RFM with dimension $n=4$, and rates~$\bar \lambda_3=1.85$, $\bar \lambda_i=2.0$, $i=0,1,2,4$. In this case, $s_0(\bar \lambda)=0.0297 $, $s_1(\bar \lambda)=0.0687$, $s_2(\bar \lambda)=0.0901$, $s_3(\bar \lambda)= 0.0856$, and~$s_4(\bar \lambda)=0.0343 $. Thus, although the minimal rate is~$\bar \lambda_3$, the bottleneck rate is~$\bar \lambda_2$. In particular, the optimal solution will be to reduce~$\bar \lambda_2$ by~$b$, and not~$\bar \lambda_3$, even though~$\bar \lambda_3$ is the minimal rate.\hfill{$\square$} \end{Example} However, note that Remark~\ref{rem:sens} implies that if some rate~$\lambda_i$ is decreased enough then it will eventually become a bottleneck rate. Proposition~\ref{prop:sense} can be used to derive analytic results in cases where we can obtain explicit information on the sensitivities at a point~$\bar\lambda \in \R^{n+1}_+$. The next two results demonstrate this. \begin{Proposition}\label{prop:cases} Consider an RFM with dimension~$n$ and with equal rates, i.e.~$\bar \lambda_0=\dots=\bar \lambda_n$. If~$n$ is even then the unique optimal solution to Problem~\ref{prob:sd} is:~$\lambda^*=\bar \lambda-bd^{ n/2 }$. If~$n$ is odd then there are two optimal solutions:~$\lambda^*=\bar \lambda-bd^{\lfloor n/2 \rfloor}$ and~$\lambda^*=\bar \lambda-bd^{\lfloor n/2 \rfloor+1}$.
\end{Proposition} In other words, in the case where all the rates are equal, the bottleneck is at the center of the chain. These results are closely related to the fact that in a dynamic model for phosphorelay~\cite{phos_relays}, which is very similar to the~RFM, the middle layer in the model is the most sensitive to changes in the input. This also agrees with the so-called ``edge-effect'' in the HTASEP~\cite{toward_prod_rates,edge_tasep_2009,PhysRevE.76.051113}, i.e. the fact that the steady-state output rate is less sensitive to the rates that are close to the edges of the chain. For more on the sensitivity of TASEP to manipulations in the initiation, hopping, and exit rates, see~\cite{PhysRevE.76.051113,foulaadvand2008asymmetric,Chou2004,PhysRevE.58.1911}. Another case where analytic results can be derived is when the rates in the RFM lead to equal steady-state occupancies along the mRNA molecule. This happens when $\lambda_1=\lambda_2=\cdots=\lambda_{n-1}=\lambda_0+\lambda_n$ (see~\eqref{eq:ep}). \begin{Proposition}\label{prop:e_uniform} Consider an RFM with dimension~$n$ and rates~$\bar \lambda$ such that $\bar e_1=\cdots=\bar e_n:=e_c$, i.e. all the steady-state occupancies are equal, and $e_c$ denotes their common value. \begin{enumerate} \item If $e_c < 1/2$ then the unique optimal solution to Problem~\ref{prob:sd} is \be\label{eq:opt_un1} \lambda^*= \bar \lambda - bd^0. \ee \item If $e_c > 1/2$ then the unique optimal solution to Problem~\ref{prob:sd} is \be\label{eq:opt_un2} \lambda^*= \bar \lambda - bd^n. \ee \item If $e_c=1/2$ then~\eqref{eq:opt_un1} and~\eqref{eq:opt_un2} are the optimal solutions. \end{enumerate} \end{Proposition} In other words, if the equal occupancy is relatively low [high] then maximal inhibition of the production rate is obtained by subtracting the total reduction budget from the initiation [exit] rate, leaving all the other rates unchanged.
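The sensitivity computations above can be checked numerically. The following sketch (Python with NumPy; an illustrative check only, not part of the formal development) computes $R(\lambda)$ via the spectral representation $R=1/\sigma^2$, where $\sigma$ is the Perron root of the tridiagonal matrix $A(\lambda)$ with zero main diagonal and off-diagonal entries $\lambda_i^{-1/2}$ (see the Appendix), and approximates the sensitivities $s_i$ by central finite differences, using the rates of Example~\ref{ex:bn}.

```python
import numpy as np

def rfm_rate(lam):
    # Steady-state production rate R(lam): R = 1/sigma^2, where sigma is
    # the Perron root of the (n+2)x(n+2) tridiagonal matrix with zero main
    # diagonal and off-diagonal entries lam_i^(-1/2) (the matrix A(lam)
    # used in the Appendix).
    off = 1.0 / np.sqrt(np.asarray(lam, dtype=float))
    A = np.diag(off, 1) + np.diag(off, -1)
    return 1.0 / np.linalg.eigvalsh(A)[-1] ** 2

def sensitivities(lam, h=1e-6):
    # s_i = dR/dlam_i, approximated by central finite differences.
    s = []
    for i in range(len(lam)):
        up, dn = list(lam), list(lam)
        up[i] += h
        dn[i] -= h
        s.append((rfm_rate(up) - rfm_rate(dn)) / (2 * h))
    return s

# rates of Example ex:bn: n = 4, lam_3 = 1.85, all other rates 2.0
bar_lam = [2.0, 2.0, 2.0, 1.85, 2.0]
print([round(x, 4) for x in sensitivities(bar_lam)])
```

For these rates the computation recovers the sensitivities listed in Example~\ref{ex:bn} and identifies~$\bar \lambda_2$, rather than the minimal rate~$\bar \lambda_3$, as the bottleneck.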
\begin{Example} Consider Problem~\ref{prob:sd} for an RFM with~$n=5$, rates~$\bar\lambda=\begin{bmatrix} 1 & 5/2 & 5/2 & 5/2 & 5/2 & 3/2 \end{bmatrix}'$, and~$b=1/2$. Note that in this case~$\bar e_1=\cdots=\bar e_5=2/5$. A calculation yields $R(\bar \lambda - bd^0)=0.3999$, $R(\bar \lambda - bd^1)=0.5651$, $R(\bar \lambda - bd^2)=0.5762$, $R(\bar \lambda - bd^3)=0.5829$, $R(\bar \lambda - bd^4)=0.5874$, and $R(\bar \lambda - bd^5)=0.5746$, so the optimal solution is~$\lambda^*=\bar \lambda - bd^0$. Since $e_c<1/2$, this agrees with Proposition~\ref{prop:e_uniform}.\hfill{$\square$} \end{Example} In some cases it may be more natural to define the transition rate reduction in relative rather than absolute terms. This is captured by the following optimization problem. \begin{Problem}\label{prob:sd_rel} Given an RFM with $n$ sites, rates~$\bar \lambda_0,\dots, \bar \lambda_n$, and a total reduction budget~$q\in[0,1)$, let~$\Gamma^{n+1}=\Gamma^{n+1}(\bar \lambda,q) \subset \R^{n+1}_{++}$ be the set \be\label{eq:omeg_cov_rel} \left \{\begin{bmatrix} \bar \lambda_0(1-\delta_0),\dots,\bar \lambda_n(1-\delta_n)\end{bmatrix} : \delta_i \geq 0, \sum_{i=0}^n \delta_i=q \right \}. \ee Find~$\lambda^* \in \Gamma^{n+1} $ such that~$R(\lambda^*)=\min_{\lambda\in\Gamma^{n+1} } R(\lambda)$. \end{Problem} For~$i\in \{0,\dots,n \}$, let $D^i\in\R^{(n+1)\times(n+1)}$ denote the $(n+1)\times(n+1)$ identity matrix, but with entry~$(i+1,i+1)$ changed to~$1-q$. The set $\Gamma^{n+1} $ is a convex polytope with vertices $ u^i : = D^i \bar \lambda $, $i=0,\dots,n$. Thus, Problem~\ref{prob:sd_rel} is also a special case of Problem~\ref{prob:min}, and so the minimizer~$\lambda^*$ satisfies~$\lambda^* \in \{u^0,\dots,u^n\}$. In practice, each codon (or coding region) admits a minimal and a maximal possible decoding rate. There are also minimal and maximal values for the initiation rate. 
These bounds are determined by the biophysical properties of the transcript and the intracellular environment. To model this, we can modify the optimization problems described above to include a bound~$\ell_i$ on the maximal allowed reduction of rate~$i$, for~$i=0,\dots,n$. The next problem demonstrates such a modification for Problem~\ref{prob:sd}. \begin{Problem}\label{prob:sdg} Consider an RFM with $n$ sites and rates~$\bar \lambda_0,\dots, \bar \lambda_n$. Given a total reduction budget~$b\in[0,\min\{\bar \lambda_i\}-\rho]$, for some~$\rho>0$, and also bounds~$0<\ell_i<\bar \lambda_i$, $i=0,\dots,n$, with~$\sum_{i=0}^n \ell_i > b$, let~$\Omega^{n+1}$ be as defined in Problem~\ref{prob:sd}, and let \begin{align}\label{eq:sdg_const} \Psi^{n+1}&:=\{\lambda\in\R^{n+1}_{++} : \lambda_i\in[\bar \lambda_i - \ell_i, \bar \lambda_i],\; i=0,\dots,n\}, \nonumber \\ \Phi^{n+1}&:=\Omega^{n+1} \cap \Psi^{n+1}. \end{align} Find~$\lambda^* \in \Phi^{n+1}$ such that~$R(\lambda^*)=\min_{\lambda\in\Phi^{n+1}} R(\lambda)$. \end{Problem} In other words, the feasible set~$\Phi^{n+1}$ in Problem~\ref{prob:sdg} is the intersection of the set $\Omega^{n+1}$ (defined in Problem~\ref{prob:sd}), and the closed $(n+1)$-dimensional cube $\Psi^{n+1}$ that models constraints on the maximal possible reduction of each rate. Since $\Phi^{n+1}$ is compact and convex (being the intersection of two compact and convex sets), Problem~\ref{prob:sdg} admits a solution that is an extreme point of $\Phi^{n+1}$. In general, not all the rates can be reduced by $b$, and thus an optimal solution may include a reduction of \emph{several} rates. \begin{Example}\label{exp:sdg} Consider Problem~\ref{prob:sdg} for an RFM with dimension $n=2$, rates~$\bar \lambda_i=1.0$, $i=0,1,2$, and parameters $b=0.85$, and $\ell_i=0.4$, $i=0,1,2$. In other words, the total possible reduction is~$0.85$, but any rate can be reduced by no more than~$0.4$. 
Fig.~\ref{fig:poly_inter_n2} depicts the feasible set~$\Phi^{3}$ (blue polytope) that is the intersection of the set~$\Omega^3$ (gray polytope) and the set~$\Psi^3$ (green cube). Shown also are the three extreme points of $\Phi^3$: \begin{align*} v^1&:=\begin{bmatrix} 0.95 & 0.6 & 0.6 \end{bmatrix}' \text{ (red circle) },\\ v^2&:=\begin{bmatrix} 0.6 & 0.6 & 0.95 \end{bmatrix}' \text{ (blue circle)}, \\ v^3&:=\begin{bmatrix} 0.6 & 0.95 & 0.6 \end{bmatrix}' \text{ (magenta circle)}. \end{align*} A calculation yields $R(v^1)=R(v^2)=0.2538$, whereas~$R(v^3)=0.2764$. It follows that $\lambda^*=v^1$ and $\lambda^*=v^2$ are optimal solutions. Note that these solutions correspond to reducing several rates along the mRNA molecule. Note also that $s(\bar\lambda)=\begin{bmatrix} 0.1056 & 0.1708 & 0.1056 \end{bmatrix}'$, so both optimal solutions include the maximal possible reduction of the most sensitive rate~$\bar \lambda_1$, together with the maximal possible reduction of one of the two remaining, equally sensitive, rates.\hfill{$\square$} \end{Example} \begin{figure} \begin{center} \includegraphics[width= 9cm,height=8cm]{poly_inter_n2.eps} \caption{The sets $\Omega^3$ (gray polytope), $\Psi^3$ (green cube), and $\Phi^3$ (blue polytope) in Example~\ref{exp:sdg}.} \label{fig:poly_inter_n2} \end{center} \end{figure} In some cases, there may be positions along the coding region that we cannot modify due to their potential effect on various intracellular processes. An important advantage of Problem~\ref{prob:sdg} is that it allows capturing this by simply setting some of the~$\ell_i$s to zero. On the other hand, in down regulation of a viral gene it may be desirable to distribute the synonymous codon modifications over many mRNA sites in order to reduce the chance of spontaneous mutations yielding the original wild type. This is captured by Problem~\ref{prob:sdg} when we set the~$\ell_i$s to small non-zero values, as then an optimal solution will include a transition rate reduction in many sites.
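The numbers in Example~\ref{exp:sdg} can be reproduced as follows (Python with NumPy; an illustrative check, with $R$ evaluated via the spectral representation $R=1/\sigma^2$ from the Appendix):

```python
import numpy as np

def rfm_rate(lam):
    # Steady-state production rate: R = 1/sigma^2, where sigma is the
    # Perron root of the tridiagonal matrix with zero main diagonal and
    # off-diagonal entries lam_i^(-1/2) (the matrix A(lam) of the Appendix).
    off = 1.0 / np.sqrt(np.asarray(lam, dtype=float))
    A = np.diag(off, 1) + np.diag(off, -1)
    return 1.0 / np.linalg.eigvalsh(A)[-1] ** 2

# extreme points of Phi^3 in Example exp:sdg (n = 2, b = 0.85, l_i = 0.4)
v1 = [0.95, 0.60, 0.60]
v2 = [0.60, 0.60, 0.95]
v3 = [0.60, 0.95, 0.60]
for v in (v1, v2, v3):
    print(v, round(rfm_rate(v), 4))
```

Reversing the order of the rates leaves $R$ unchanged (the particle-hole symmetry mentioned in the Appendix), which is why $v^1$ and $v^2$ yield exactly the same production rate.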
\subsection{A biological example} To demonstrate how the results above can be used to analyze translation and provide guidelines for re-engineering the mRNA, we consider the {\em S. cerevisiae} gene {\em YBL025W} that encodes the protein {\em RRN10} which is related to regulation of RNA polymerase I. This gene has $145$ codons (excluding the stop codon). Similarly to the approach used in \cite{reuveni}, we divided this mRNA into $6$ consecutive pieces: the first piece includes the first $24$ codons (that are also related to later stages of initiation~\cite{Tuller2015}). The other pieces include~$25$ non-overlapping codons each, except for the last one that includes~$21$ codons. To model this using an RFM with~$n=5$ sites, we first estimated the elongation rates~$\lambda_1,\dots,\lambda_5$ using ribo-seq data for the codon decoding rates~\cite{Dana2014B}, normalized so that the median elongation rate of all {\em S. cerevisiae} mRNAs becomes~$6.4$ codons per second \cite{Karpinets2006}. The site rate is~$(\text{site time})^{-1}$, where site time is the sum over the decoding times of all the codons in this site. These rates thus depend on various factors including availability of tRNA molecules, amino acids, Aminoacyl tRNA synthetase activity and concentration, and local mRNA folding~\cite{Dana2014B,Alberts2002,Tuller2015}. Note that if we replace a codon in a site of mRNA by a synonymous slower codon then the decoding time increases and thus the rate associated with this site decreases. The initiation rate (that corresponds to the first piece) was estimated based on the ribosome density per mRNA levels, as this value is expected to be approximately proportional to the initiation rate when initiation is rate limiting \cite{reuveni,HRFM_steady_state}. Again we applied a normalization that brings the median initiation rate of all {\em S. cerevisiae} mRNAs to be $0.8$ \cite{Chu2014}. 
Adding the initiation time~($1/0.4482$) to the site time of the first piece yields an RFM model with~$n=5$ and parameters: \[\begin{bmatrix}\bar \lambda_0 & \dots &\bar \lambda_5 \end{bmatrix} = \begin{bmatrix} 0.1678 & 0.2572 & 0.2758 & 0.2514 & 0.2612 & 0.3002 \end{bmatrix}. \] A calculation yields that the steady-state production rate in this RFM is~$R = 0.0732$. In order to analyze the solution of Problem~\ref{prob:sd} for this RFM we calculated the sensitivities using~\eqref{eq:sense}. This yields: $ s(\bar \lambda)=\begin{bmatrix} 0.0795 & 0.0669 & 0.0611 & 0.0578 & 0.0328 & 0.0092 \end{bmatrix} $, so~$\bar \lambda_0$ is a bottleneck rate. This means that the solution for Problem~\ref{prob:sd} is to subtract the entire reduction budget~$b$ from~$\bar\lambda_0$. In biological terms, this suggests that maximal inhibition of production should be based on replacing some (or all) of the first~$24$ codons with slower synonymous codons. For comparison with the optimization scenarios described below, consider the total budget~$b=0.0089$. The solution for Problem~\ref{prob:sd} is then to reduce~$\lambda_0$ by~$b$, and this yields \be\label{eq:prodff} R^*=0.0725. \ee Reducing~$\lambda_0$ by~$b$ in the model is possible by substituting codons in the first site with slower synonymous codons (for example, replacing the third codon AGA by the synonymous codon~CGG increases the codon decoding time from~$0.1128$ seconds to~$0.2246$ seconds). Now suppose that we are not interested in modifying these codons because in this region there are various regulatory signals that we may not want to change (see, for example,~\cite{Tuller2015}). To maximize inhibition of the production rate under this constraint, we apply Problem~\ref{prob:sdg}, with $\ell_0 = 0$, and $\ell_i >b$ for all~$i\not =0$. Now the optimal solution is to subtract~$b$ from~$\bar \lambda_1$. Note that $\bar \lambda_1$ has the second largest sensitivity.
This yields $R^*=0.0726$, and is, as expected, higher than the value in~\eqref{eq:prodff}. Again, the biological data shows that such a reduction can be done by synonymously replacing codons $34$ (GCT with GCA), $35$ (GTT with GTA), $36$ (CCT with CCC), $38$ (CCG with CCC), $39$ (TTC with TTT), and $49$ (GTG with GTA). Finally, to demonstrate mutations in multiple sites, we used the data to find a scenario where a set of mutations yields the same total decrease in the rates. This can be done by synonymously replacing codons $21$ (GTG with GTA), $29$ (GAA with GAG), $58$ (TTC with TTT), $82$ (AAG with AAA), $110$ (CTA with CTG), and $141$ (GCG with GCA), leading to \[ \lambda=\begin{bmatrix} 0.1677 & 0.2557 & 0.2733 & 0.2489 & 0.2599 & 0.2991 \end{bmatrix}'. \] Note that all the rates are reduced and that the total reduction is $b$. This yields $R=0.0727$, which is again higher than the value in~\eqref{eq:prodff}. \section*{Discussion} There are several approaches for effectively down-regulating translation. Global down-regulation can be achieved by controlling basic translation factors or by using drugs that induce ribosome stalling \cite{Greenberg1986,Clemens2000,Kozak1992}. Here we consider down regulation of specific genes via targeting specific codons/regions in these genes. This leads to the problem of finding the codon regions that have the most effect on the steady-state production rate. We study this problem of optimal down regulation of mRNA translation using a mathematical model for ribosome flow, the RFM. All possible modifications of the rates define a feasible set of rates, and, under certain conditions, we give a simple algorithm for finding the optimal solution, that is, the rates that lead to a maximal decrease in the protein production rate. For some specific cases, we also derive theoretical results on the optimal solution. 
Our results show that the solution must focus on the positions along the mRNA molecule where the transition rate has the strongest effect on the protein production rate. However, this position is not necessarily the one with the minimal rate (though in many cases there are correlations between the two notions). Many previous studies in the field emphasized the importance of the translation bottleneck \cite{Zhang1994,Tuller2010c,Chou2004}; however, there the bottleneck is always defined as the minimal rate. We believe that the sensitivity of the coding region sites should be further studied in order to better understand the evolution of transcripts and their design. The optimization problems posed here are flexible enough to capture various scenarios. For example, in some cases it may be desirable to introduce a \emph{minimal} number of changes in the transcript to obtain the desired decrease in the translation rate. Indeed, generating mutations and using suitable RNAi molecules is costly in time and money. Also, any change in the translation rates can affect various important phenomena such as co-translational folding \cite{Zhang2009,Pechmann2013,Tuller2015}, as well as other properties that are encoded in the coding region \cite{Tuller2015,Cartegni2002,Stergachis2013}. In other cases, such as generating a down-regulated virus strain, it may be desirable to introduce as many mutations as possible. There are various approaches for synthesizing molecules that block mRNA translation (see e.g. \url{http://www.gene-tools.com/choosing_the_optimal_target}). In practice, when determining an optimal position to target (e.g. with RNAi molecules), one must take into account additional biophysical aspects. These include, for example, the~GC content at the different regions along the mRNA, the folding of the mRNA, the potential binding affinity of the RNAi and the mRNA, and potential undesired binding of the RNAi to additional mRNAs or regions within the mRNA.
Nevertheless, we feel that our results can be integrated to improve the design of such tools. In practice, there are many mRNA molecules in the cell and they all compete for the finite pool of free ribosomes. In particular, if more ribosomes are stuck in a traffic jam on a certain mRNA molecule then the pool of free ribosomes is depleted, yielding a reduction in the production rates of other mRNA molecules. The RFM is a model for ribosome flow along a single isolated mRNA molecule. This is a reasonable model when the expression levels (e.g. the mRNA levels and the total number of ribosomes on the mRNA molecules related to the gene) are relatively low, so that changes in the translation dynamics on one mRNA have a negligible effect on the pool of ribosomes and thus on the other mRNAs. A model for a network of~RFMs, interconnected via a dynamic pool of free ribosomes, has been studied in~\cite{RFM_model_compete_J}. It may be of interest to study the problem of down regulation of a specific mRNA molecule within this framework. In this case, one can also down-regulate the mRNA indirectly by affecting the ribosomal pool. However, the tools used here do not directly apply, as the convexity results for a single chain do not necessarily carry over to the case of a network of RFMs. The results here suggest several biological experiments for studying the problem of optimal down regulation and, in particular, validating the theoretical predictions derived using the~RFM. Libraries encoding the same protein using mRNAs with different codons (but similar mRNA levels and translation initiation rates) can be generated as was done in \cite{Ben-Yehezkel2015}. For each variant, the protein levels, which are expected to increase monotonically with the production rate~\cite{reuveni}, can be measured either via a reporter protein \cite{Ben-Yehezkel2015} or directly \cite{Schwan2011}. The codon decoding rates can be estimated based on ribo-seq experiments \cite{Ben-Yehezkel2015,Dana2014B}.
Such an experimental testbed can be used to validate the results reported in this study. \section*{Appendix: Proofs} {\sl Proof of Proposition~\ref{prop:sense}.} Consider Problem~\ref{prob:sd}, and suppose that~\eqref{eq:sencom} holds. We need to show that~$\lambda^*_i=\bar \lambda_i$. Seeking a contradiction, assume that~$\lambda^*_i<\bar \lambda_i$. By Prop.~\ref{prop:main_sol},~$\lambda^*=\bar \lambda -bd^i$, so in particular $ R(\bar \lambda -bd^i)\leq R( \bar \lambda -bd^j). $ Since~$R$ is a homogeneous function of the rates, we conclude that $ R(c \bar \lambda - c bd^i)\leq R( c \bar \lambda - c bd^j) $ for any~$c>0$. Now taking~$c>0$ sufficiently small yields~$ \frac{\partial R(\bar \lambda)}{\partial \lambda_i} \geq\frac{\partial R(\bar \lambda)}{\partial \lambda_j}$. This contradicts~\eqref{eq:sencom}.~\IEEEQED {\sl Proof of Proposition~\ref{prop:cases}. } In the case where all the rates are equal there exists a closed-form expression for the sensitivities~\cite{RFM_sense}, namely, \[ s_i=\frac{\sin\left(\frac{i+1}{n+3}\pi\right) \sin\left(\frac{i+2}{n+3}\pi\right)}{2(n+3)\cos^3\left(\frac{\pi}{n+3}\right)}, \quad i=0,\dots,n. \] This means that \be\label{eq:cosk} s_i= \frac{a -\cos \left(\frac{2i+3 }{n+3}\pi \right )}{b}, \ee where~$a,b>0$ are constants that do not depend on~$i$. If~$n$ is even then the cosine function in~\eqref{eq:cosk} admits a unique minimum at~$i=n/2$, and combining this with Proposition~\ref{prop:sense} completes the proof. If~$n$ is odd then the cosine function in~\eqref{eq:cosk} admits two minima: at~$\lfloor n/2 \rfloor$ and at~$\lfloor n/2 \rfloor+1$. Now arguing as in the proof of Proposition~\ref{prop:sense} and using the particle-hole symmetry of the RFM completes the proof.~\IEEEQED {\sl Proof of Proposition~\ref{prop:e_uniform}.
} If $\bar e_1=\cdots=\bar e_n:=e_c$, then~\eqref{eq:ep} yields \be\label{eq:l_vals_un} \bar \lambda_i=\begin{cases} 1, & i=0,\\ e_c^{-1}, & i=1,\dots,n-1,\\ e_c^{-1}-1, & i=n, \end{cases} \ee where we scaled~$\bar \lambda_0$ to one w.l.o.g. In this case, the Perron eigenvector $v\in\R^{n+2}_{++}$ of the matrix $A(\bar \lambda)$ is given by (see also~\cite{RFM_sense}): \be\label{eq:vi_uni_e} v_i=\begin{cases} 1, & i=1, \\ \mu^{(i-1)/2} e_c^{-1/2}, & 2 \le i \le n+1, \\ \mu^{n/2}, & i=n+2, \end{cases} \ee where $\mu:=e_c/(1-e_c)$. We consider two cases. If~$e_c=1/2$ then $v'v=2(n+1)$ and applying Theorem~\ref{thm:spect} yields the sensitivities: \be\label{eq:si_un1} s_i= \begin{cases} \frac{1}{2(n+1)}, & i=0, \\ \frac{1}{4(n+1)}, & 1\le i \le n-1,\\ \frac{1}{2(n+1)}, & i=n. \end{cases} \ee Thus, $s_0=s_n>s_j$, for all~$j\not \in\{0,n\}$, and arguing as in the proof of Proposition~\ref{prop:sense} and using the particle-hole symmetry implies that the two optimal solutions are~$\bar \lambda-bd^0$ and~$\bar \lambda-bd^n$. If $e_c\ne 1/2$ then Theorem~\ref{thm:spect} yields \be\label{eq:si_un2} s_i= \begin{cases} \frac{1-2e_c}{1-\mu^{n+1}}, & i=0, \\ \frac{e_c(1-2e_c)}{1-\mu^{n+1}}\mu^i, & 1\le i \le n-1,\\ \frac{\mu^{n+1}(1-2e_c)}{1-\mu^{n+1}}, & i=n. \end{cases} \ee When $e_c<1/2$ [$e_c>1/2$], \eqref{eq:si_un2} yields $s_0>s_j$, for all~$j\ne 0$ [$s_n>s_j$, for all~$j\ne n$]. Combining this with Proposition~\ref{prop:sense} completes the proof.~\IEEEQED
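As a numerical cross-check of the closed-form sensitivities used in the proof of Proposition~\ref{prop:cases}, the following sketch (Python with NumPy; an illustration only) compares that expression, for an RFM with equal rates and $n=4$, against central finite differences of $R(\lambda)$ computed via the spectral representation $R=1/\sigma^2$:

```python
import numpy as np

def rfm_rate(lam):
    # R = 1/sigma^2, sigma the Perron root of the tridiagonal matrix A(lam)
    # with zero main diagonal and off-diagonal entries lam_i^(-1/2).
    off = 1.0 / np.sqrt(np.asarray(lam, dtype=float))
    A = np.diag(off, 1) + np.diag(off, -1)
    return 1.0 / np.linalg.eigvalsh(A)[-1] ** 2

def s_closed(n):
    # closed-form sensitivities for an RFM with equal rates (lam_i = 1)
    c = 2 * (n + 3) * np.cos(np.pi / (n + 3)) ** 3
    return [np.sin((i + 1) * np.pi / (n + 3)) * np.sin((i + 2) * np.pi / (n + 3)) / c
            for i in range(n + 1)]

n, h = 4, 1e-6
lam = [1.0] * (n + 1)
for i in range(n + 1):
    up, dn = list(lam), list(lam)
    up[i] += h
    dn[i] -= h
    fd = (rfm_rate(up) - rfm_rate(dn)) / (2 * h)
    print(i, round(fd, 6), round(s_closed(n)[i], 6))
```

The two columns agree; the maximal sensitivity is attained at $i=n/2=2$, and, since $R$ is homogeneous of degree one, the sensitivities sum to $R(\bar\lambda)$ when all rates equal one.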
\section{Introduction} \noindent We discuss loopless and coloopless $p$-matroids; by a $p$-matroid we mean a vector matroid $M\cong M[A]$ for some matrix $A$ of size $m \times n$ over the field $F = GF(p),$ for a prime $p$. We denote the set of column labels of $M$ (viz. the ground set of $M$) by $E$, the set of circuits of $M$ by $\mathcal {C}(M),$ and the set of independent sets of $M$ by $\mathcal {I}(M).$ For undefined, standard terminology in graphs and matroids, see Oxley \cite{ox}.\\ \noindent Malavadkar et al. \cite{mjg} defined the splitting operation for $p$-matroids as follows: \begin{definition}\label{def0} Let $M\cong M[A]$ be a $p$-matroid on ground set $E,$ let $\{a,b\} \subset E,$ and let $\alpha\neq 0$ in $ GF(p)$. The matrix $A_{a,b}$ is constructed from $A$ by appending an extra row to $A$ which has coordinates equal to $\alpha$ in the columns corresponding to the elements $a$ and $b,$ and zero elsewhere. Define the splitting matroid $M_{a,b}$ to be the vector matroid $M[A_{a,b}].$ The transformation of $M$ to $M_{a,b}$ is called the splitting operation. \end{definition} \noindent A circuit $C \in \mathcal{C}(M)$ containing $\{a,b\}$ is said to be a $p$-circuit of $M$ if $C \in \mathcal{C}(M_{a,b}).$ And if $C$ is a circuit of $M$ containing either $a$ or $b,$ but it is not a circuit of $M_{a,b},$ then we say $C$ is an $np$-circuit of $M.$ For $a,b\in E,$ if the matroid $M$ contains no $np$-circuit, then the splitting operation on $M$ with respect to $a,b$ is called a trivial splitting. \\ Note that the class of connected $p$-matroids is not closed under the splitting operation. \begin{example}\label{ex2} The vector matroid $M \cong M[A]$ represented by the matrix $A$ over the field $GF(3)$ is connected, whereas the splitting matroid $M_{1,4}\cong M[A_{1,4}] $ is not connected.
\begin{center} $\mathbf{A} = \begin{pNiceMatrix}[first-col, first-row, code-for-first-col = \color{black}, code-for-first-row = \color{black}] & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 2 \\ & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 \\ & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 1 \\ & 0 & 0 & 0 & 1 & 2 & 1 & 1 & 0 \\ & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 \end{pNiceMatrix} \qquad \mathbf{A_{1,4}} = \begin{pNiceMatrix}[first-col, first-row, code-for-first-col = \color{black}, code-for-first-row = \color{black}] & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 2 \\ & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 \\ & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 1 \\ & 0 & 0 & 0 & 1 & 2 & 1 & 1 & 0 \\ & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 \\ & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \end{pNiceMatrix}$ \end{center} \end{example} It is interesting to see that the vector matroid $M'_{1,4} \cong M[A'_{1,4}],$ which is a single element extension of $M_{1,4},$ is connected.\\ \begin{center} $\mathbf{A'_{1,4}} = \begin{pNiceMatrix}[first-col, first-row, code-for-first-col = \color{black}, code-for-first-row = \color{black}] & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9\\ & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 2 & 0\\ & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 0\\ & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 0\\ & 0 & 0 & 0 & 1 & 2 & 1 & 1 & 0 & 0\\ & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 \end{pNiceMatrix}$ \end{center} \noindent This example motivates us to investigate the question: If $M$ is a connected $p$-matroid and $M_{a,b}$ is the splitting matroid of $M,$ then does there exist a single element extension of the splitting matroid that is connected? In the next section, we answer this question by defining element splitting operation on a $p$-matroid $M$ which is splitting operation on $M$ followed by a single element extension. \section{Element Splitting Operation} In this section, we define element splitting operation on a $p$-matroid $M$ and characterize its circuits.
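The connectivity claims in Example~\ref{ex2} can be verified computationally. The following sketch (Python; an illustrative check, not part of the formal development) computes ranks over $GF(p)$ by Gaussian elimination and searches exhaustively for a $1$-separation, i.e. a partition $\{S,T\}$ of the ground set with $r(S)+r(T)=r(E)$:

```python
from itertools import combinations

def rank_mod_p(cols, p):
    # Rank over GF(p) of the matrix whose columns are `cols`,
    # via Gauss-Jordan elimination on the transposed rows.
    M = [list(c) for c in cols]
    if not M:
        return 0
    rank = 0
    for j in range(len(M[0])):
        piv = next((i for i in range(rank, len(M)) if M[i][j] % p), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][j], p - 2, p)           # inverse in GF(p), p prime
        M[rank] = [(x * inv) % p for x in M[rank]]
        for i in range(len(M)):
            if i != rank and M[i][j] % p:
                f = M[i][j]
                M[i] = [(x - f * y) % p for x, y in zip(M[i], M[rank])]
        rank += 1
        if rank == len(M):
            break
    return rank

def is_connected(cols, p):
    # M[A] is disconnected iff some partition {S, T} with S, T nonempty
    # satisfies r(S) + r(T) = r(E), i.e. a 1-separation exists.
    rE = rank_mod_p(cols, p)
    E = range(len(cols))
    for k in range(1, len(cols) // 2 + 1):
        for S in combinations(E, k):
            T = [e for e in E if e not in S]
            if rank_mod_p([cols[e] for e in S], p) + rank_mod_p([cols[e] for e in T], p) == rE:
                return False
    return True

# columns of A, A_{1,4}, and A'_{1,4} from Example ex2
A = [(1,0,0,0,0), (0,1,0,0,1), (0,0,1,0,1), (0,0,0,1,0),
     (0,1,1,2,0), (1,0,1,1,0), (1,1,0,1,0), (2,1,1,0,0)]
split_row = (1, 0, 0, 1, 0, 0, 0, 0)              # a = 1, b = 4, alpha = 1
A14 = [c + (split_row[i],) for i, c in enumerate(A)]
A14z = A14 + [(0, 0, 0, 0, 0, 1)]                 # extension by the column z
print(is_connected(A, 3), is_connected(A14, 3), is_connected(A14z, 3))
```

Here the search finds a $1$-separation of $M_{1,4}$ (for instance $\{2,3,6,7\}$ against $\{1,4,5,8\}$, each of rank~$3$, with $r(M_{1,4})=6$), but none for $M$ or for the extension $M'_{1,4}$.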
\begin{definition}\label{def1} Let $M\cong M[A]$ be a $p$-matroid on ground set $E,$ let $\{a,b\} \subset E,$ and let $M_{a,b}$ be the corresponding splitting matroid. Let the matrix $A_{a,b}$ represent $M_{a,b}$ over $GF(p).$ Construct the matrix $A'_{a,b}$ from $A_{a,b}$ by adding an extra column to $A_{a,b},$ labeled $z,$ which has the last coordinate equal to $\alpha\neq 0$ and the rest equal to zero. Define the element splitting matroid $M'_{a,b}$ to be the vector matroid $M[A'_{a,b}]$. The transformation of $M$ to $M'_{a,b}$ is called the element splitting operation. \end{definition} \begin{remark} rank$(A)<$ rank$(A'_{a,b})=$ rank$(A)+1.$ If the rank functions of $M$ and $M'_{a,b}$ are denoted by $r$ and $r',$ respectively, then $r(M) < r'(M'_{a,b})= r(M) + 1.$ \end{remark} \noindent Let $C=\{v_1,v_2,\dots,v_k\},$ where $v_i,$ $i= 1,2, \ldots,k,$ are column vectors of the matrix $A,$ be an $np$-circuit of $M$ containing only $a.$ Without loss of generality, assume $v_1=a.$ Then there exist non-zero scalars $\alpha_1,\alpha_2,\dots,\alpha_k \in GF(p)$ such that $\alpha_1 v_1+ \alpha_2 v_2+ \cdots+\alpha_k v_k \equiv 0 \pmod p.$ Let $\alpha_z \in GF(p)$ be such that $\alpha_z + \alpha_1 \equiv 0 \pmod p.$ Note that $\alpha_z \neq 0.$ Then in the matrix $A'_{a,b},$ we have $\alpha_1 v_1+ \alpha_2 v_2+ \cdots+\alpha_k v_k + \alpha_z z \equiv 0 \pmod p.$ Therefore the set $C \cup z=\{v_1,v_2,\dots,v_k, z\}$ is a dependent set of $M'_{a,b}.$ If both $a,b\in C,$ then by similar arguments, we can show that $C \cup z$ is a dependent set of $M'_{a,b}.$ \noindent In the next lemma, we characterize the circuits of $M'_{a,b}$ containing the element $z.$ \begin{lemma}\label{L1} Let $C$ be a circuit of a $p$-matroid $M.$ Then $C \cup z$ is a circuit of $M'_{a,b}$ if and only if $C$ is an $np$-circuit of $M.$ \end{lemma} \begin{proof} First assume that $C \cup z$ is a circuit of $M'_{a,b}.$ If $C$ is not an $np$-circuit of $M,$ then it is a $p$-circuit of $M,$ and hence it is also
a circuit of $M_{a,b},$ and of $M'_{a,b}$ as well. Thus we get a circuit $C$ properly contained in the circuit $C \cup z,$ a contradiction. \noindent Conversely, suppose $C$ is an $np$-circuit of $M.$ Then $C$ is an independent set of $M'_{a,b}.$ As noted earlier, $C \cup z$ is a dependent set of $M'_{a,b}.$ On the contrary, assume that $C \cup z$ is not a circuit of $M'_{a,b},$ and let $C_1 \subset C \cup z $ be a circuit of $M'_{a,b}.$\\ \textbf{Case 1}: $z \notin C_1.$ Then $C_1$ is a circuit contained in $C,$ contradicting the fact that $C$ is independent in $M'_{a,b}.$\\ \textbf{Case 2}: $z \in C_1.$ Since the column $z$ is zero in all rows of $A,$ restricting the corresponding linear dependence to these rows shows that $C_1 \setminus z$ is a dependent set of $M$ properly contained in the circuit $C,$ which is not possible. Thus $ C \cup z$ is a circuit of $M'_{a,b}.$ \end{proof} \noindent We denote the collection of circuits described in Lemma \ref{L1} by $\mathcal {C}_z.$ \begin{theorem} Let $M$ be a $p$-matroid on ground set $E$ and $\{a,b\}\subset E.$ Then $\mathcal{C}(M'_{a,b})= \mathcal{C}(M_{a,b})\cup\mathcal {C}_z.$ \end{theorem} \begin{proof} The inclusion $\mathcal{C}(M_{a,b})\cup\mathcal {C}_z \subset \mathcal{C}(M'_{a,b})$ follows from Definition \ref{def1} and Lemma \ref{L1}. For the other inclusion, let $C \in \mathcal{C}(M'_{a,b}).$ If $z \notin C,$ then $C \in \mathcal{C}(M_{a,b}).$ Otherwise, $C \in \mathcal {C}_z.$ \end{proof} \begin{example}\label{ex1} Consider the matroid $R_8,$ the vector matroid of the following matrix $A$ over the field $GF(3)$.
\begin{center} $\mathbf{A} = \begin{pNiceMatrix}[first-col, first-row, code-for-first-col = \color{blue}, code-for-first-row = \color{blue}] & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ & 1 & 0 & 0 & 0 & 2 & 1 & 1 & 1 \\ & 0 & 1 & 0 & 0 & 1 & 2 & 1 & 1 \\ & 0 & 0 & 1 & 0 & 1 & 1 & 2 & 1 \\ & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 2 \\ \end{pNiceMatrix} \qquad \mathbf{A'_{3,5}} = \begin{pNiceMatrix}[first-col, first-row, code-for-first-col = \color{blue}, code-for-first-row = \color{blue}] & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ & 1 & 0 & 0 & 0 & 2 & 1 & 1 & 1 & 0 \\ & 0 & 1 & 0 & 0 & 1 & 2 & 1 & 1 & 0\\ & 0 & 0 & 1 & 0 & 1 & 1 & 2 & 1 & 0 \\ & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 2 & 0 \\ & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 1 \\ \end{pNiceMatrix}$ \end{center} \noindent For $a=3$, $b=5$ and $\alpha=1$ the representation of the element splitting matroid $M'_{3,5}$ over $GF(3)$ is given by the matrix $A'_{3,5}$. The collection of circuits of $M,$ $M_{3,5}$ and $M'_{3,5}$ is given in the following table. \begin{center} \begin{tabular}{ | m{4cm}| m{4cm} | m{5cm} |} \hline \textbf{~~~~~~~~~Circuits of $M$} & \textbf{~~~~~~~~Circuits of $M_{3,5}$} & \textbf{~~~~~~~~Circuits of $M'_{3,5}$}\\ \hline $\{1, 2, 3, 4,5\}$, $\{1, 2, 7, 8\}$, $\{1, 4, 6, 7\}$ & $\{1, 2, 3, 4,5\}$, $\{1, 2, 7, 8\}$, $\{1, 4, 6, 7\}$ & $\{1, 2, 3, 4,5\}$, $\{1, 2, 7, 8\}$, $\{1, 4, 6, 7\}$ \\ \hline $ \{2, 4, 6, 8\}$,$\{3, 5, 6, 7, 8\}$ & $\{2, 4, 6, 8\}$,$\{3, 5, 6, 7, 8\}$ & $\{2, 4, 6, 8\}$,$\{3, 5, 6, 7, 8\}$ \\ \hline - & $\{1, 2, 3, 5, 6, 7\}$, $\{1, 2, 3, 5, 6, 8\}$ & $\{1, 2, 3, 5, 6, 7\}$, $\{1, 2, 3, 5, 6, 8\}$\\ \hline -& $\{1, 3, 4, 5, 6, 8\}$, $\{1, 3, 4, 5, 7, 8\}$ & $\{1, 3, 4, 5, 6, 8\}$, $\{1, 3, 4, 5, 7, 8\}$\\ \hline -& $\{2, 3, 4, 5, 6, 7\}$, $\{2, 3, 4, 5, 7, 8\}$ & $\{2, 3, 4, 5, 6, 7\}$, $\{2, 3, 4, 5, 7, 8\}$ \\ \hline $\{1, 2, 3, 4, 6\}$, $\{1, 2, 3, 4, 7\}$ & - & $\{1, 2, 3, 4, 6, 9\}$, $\{1, 2, 3, 4, 7, 9\}$ \\ \hline $ \{1, 2, 3, 4, 8\}$, $\{1, 2, 5, 6\} $ & - & $\{1, 2, 3, 4, 8, 9\}$, $\{1, 2, 5, 6, 9\}$
\\ \hline $\{1, 3, 5, 7\}$, $\{1, 3, 6, 8\}$ & - & $\{1, 3, 5, 7, 9\}$, $\{1, 3, 6, 8, 9\}$ \\ \hline $\{1, 4, 5, 8\}$, $\{1, 5, 6, 7, 8\} $ & - & $\{1, 4, 5, 8, 9\}$, $\{1, 5, 6, 7, 8, 9\} $ \\ \hline $\{2, 3, 5, 8\}$, $\{2, 3, 6, 7\}$ & - & $\{2, 3, 5, 8, 9\}$, $\{2, 3, 6, 7, 9\}$ \\ \hline $\{2, 4, 5, 7\}$, $\{2, 5, 6, 7, 8\}$ & - & $\{2, 4, 5, 7, 9\}$, $\{2, 5, 6, 7, 8, 9\}$ \\ \hline $\{3, 4, 5, 6\}$, $\{3, 4, 7, 8\}$ & - & $\{3, 4, 5, 6, 9\}$, $\{3, 4, 7, 8, 9\}$ \\ \hline $\{4, 5, 6, 7, 8\}$ & - & $\{4, 5, 6, 7, 8, 9\}$ \\ \hline \end{tabular} \end{center} \end{example} \subsection{Independent sets, Bases and Rank function of $M'_{a,b}$} In this section, we describe the independent sets, bases, and rank function of $M'_{a,b}.$ Denote $\mathcal{I}_z=\{I\cup z : I\in \mathcal{I}(M)\}.$ \begin{lemma}\label{Indep} Let $M\cong M[A]$ be a $p$-matroid with ground set $E$ and let $M'_{a,b}$ be its element splitting matroid. Then $\mathcal{I}(M'_{a,b})=\mathcal{I}(M_{a,b})\cup \mathcal{I}_z.$ \end{lemma} \begin{proof} Notice that $\mathcal{I}(M_{a,b})\cup \mathcal{I}_z \subseteq \mathcal{I}(M'_{a,b}).$ For the other inclusion, assume $ T \in \mathcal{I}(M'_{a,b}).$ If $z\notin T,$ then $T\in \mathcal{I}(M_{a,b}).$ And if $z\in T,$ then $T\setminus\{z\} \in \mathcal{I}(M_{a,b}).$ That is, $T=I\cup z$ for some $I\in\mathcal{I}(M_{a,b}).$ \begin{description} \item \textbf{Case 1} : $I\in \mathcal{I}(M).$ Then $T\in \mathcal{I}_z.$ \item \textbf{Case 2} : $I=C\cup I',$ where $C$ is an $np$-circuit of $M$ and $I' \in \mathcal {I}(M).$ Then by Lemma \ref{L1}, $C\cup z$ is a circuit of $M'_{a,b}$ contained in $T,$ a contradiction.
\end{description} \end{proof} \begin{lemma}\label{k} Let $M$ be a $p$-matroid and $\{a,b\} \subset E.$ Then $\mathcal{B}(M'_{a,b})=\mathcal{B}(M_{a,b}) \cup \mathcal{B}_z,$ where $\mathcal{B}_z = \{ B \cup z : B \in \mathcal{B}(M)\}.$ \end{lemma} \begin{proof} It is easy to observe that $\mathcal{B}(M_{a,b})\cup \mathcal{B}_z \subseteq \mathcal{B}(M'_{a,b}).$ Next assume that $B \in \mathcal{B}(M'_{a,b}).$ Then $rank(B)=rank(M)+1.$ If $B$ contains $z,$ then $B\setminus z$ is an independent set of $M_{a,b}$ of size $rank(M).$ Then, by arguments similar to those in the proof of Lemma \ref{Indep}, $B = I \cup z$ for some $I \in \mathcal{I}(M)$. Therefore $B\setminus z$ is a basis of $M$ and $B \in \mathcal{B}_z.$ If $z\notin B,$ then $B$ is an independent set of size $rank(M)+1.$ Therefore $B \in \mathcal{B}(M_{a,b}).$ \end{proof} \noindent In the following lemma, we provide the rank function of $M'_{a,b}$ in terms of the rank function of $M.$ \begin{lemma}\label{s} Let $r$ and $r'$ be the rank functions of the matroids $M$ and $M'_{a,b},$ respectively. Suppose $S\subseteq E(M).$ Then $r'(S \cup z) = r(S)+1, $ and \begin{equation} \begin{split} r'(S) &= r(S), \text{\quad if $S$ contains no $np$-circuit of $M$; and}\\ &= r(S)+1, \text{\ if $S$ contains an $np$-circuit of $M$.} \end{split} \end{equation} \end{lemma} \begin{proof} The equality $r'(S \cup z) = r(S)+1$ follows from the definition. The proof of Equation (1) is given in Corollary 2.13 of \cite{mjg}. \end{proof} \section{Connectivity of element splitting $p$-matroids} Let $M$ be a matroid having ground set $E,$ and let $k$ be a positive integer.
A $k$-separation of $M$ is a partition $\{S, T\}$ of $E$ such that $|S|, |T|\geq k$ and $r(S) + r(T)- r(M) <k.$ For an integer $n \geq 2,$ we say $M$ is $n$-connected if $M$ has no $k$-separation for $1 \leq k \leq n-1.$ \noindent In the following theorem, we provide a necessary and sufficient condition for the connectedness of a $p$-matroid to be preserved under the element splitting operation. \begin{theorem}\label{con1} Let $M$ be a connected $p$-matroid on ground set $E.$ Then $M'_{a,b}$ is a connected $p$-matroid on ground set $E\cup \{z\}$ if and only if $M_{a,b}$ is the splitting matroid obtained by applying a non-trivial splitting operation on $M.$ \end{theorem} \begin{proof} First assume that $M'_{a,b}$ is a connected $p$-matroid on ground set $E\cup \{z\}.$ On the contrary, suppose $M_{a,b}$ is obtained by applying a trivial splitting operation. Then $M$ contains no $np$-circuits with respect to the splitting by the elements $a,b.$ Now, let $S=\{z\}$ and $T=E.$ Then $r'(S) + r'(T) -r'(M'_{a,b}) = 1 + r (E)- (r(M)+1) = 0 < 1$ gives a $1$-separation of $M'_{a,b},$ which is a contradiction. \noindent For the converse part, assume that $M_{a,b}$ is the splitting matroid obtained by applying a non-trivial splitting operation on $M.$ Suppose that $M'_{a,b}$ is not connected. Then $M'_{a,b}$ has a $1$-separation, say $\{S,T\}.$ Then $|S|, |T|\geq 1$ and \begin{equation}\label{eq2} r'(S) + r'(T) -r'(M'_{a,b}) < 1. \end{equation} $\mathbf{Case~ 1:}$ Assume $S =\{z\}.$ Then $T=E$ contains an $np$-circuit, since the splitting is non-trivial. Equation \ref{eq2} then gives $1+(1+r(T))-r(M)-1 < 1 \implies$ $r(T)<r(M),$ which is not possible.\\ $\mathbf{Case~ 2:}$ Assume $|S| \geq 2,$ $ z\in S.$ If $T$ contains no $np$-circuit, then Equation \ref{eq2} yields $(r(S\setminus z)+1)+r(T)-r(M)-1 < 1,$ that is, $r(S\setminus z)+r(T)-r(M) < 1.$ Therefore $\{S\setminus z, T\}$ gives a $1$-separation of $M,$ a contradiction.
Further, if $T$ contains an $np$-circuit, then $r'(S)=r(S\setminus z)+1$ and $r'(T)=r(T)+1.$ By Equation \ref{eq2}, we get $(r(S\setminus z)+1)+(r(T)+1)-r(M)-1 < 1 ,$ which gives $r(S\setminus z)+r(T)-r(M) < 0 ,$ which is not possible. So in neither case does such a separation exist. Therefore $M'_{a,b}$ is connected. \end{proof} \noindent In Example \ref{ex1}, the $p$-matroid $R_8 \cong M[A]$ and its element splitting $p$-matroid $M'_{3,5} \cong M[A'_{3,5}]$ are both connected. In the next result we give a necessary and sufficient condition for $3$-connectedness of a $p$-matroid to be preserved under the element splitting operation. \begin{theorem} Let $M$ be a $3$-connected $p$-matroid. Then $M'_{a,b}$ is a $3$-connected $p$-matroid if and only if for every $t\in E(M)$ there is an $np$-circuit of $M$ not containing $t.$ \end{theorem} \begin{proof} Let $M'_{a,b}$ be a $3$-connected $p$-matroid. Suppose, on the contrary, that there is an element $t\in E(M)$ contained in every $np$-circuit of $M.$ Take $S=\{z, t\}$ and $T=(E\cup \{z\})\setminus S.$ Since every $np$-circuit of $M$ contains $t,$ the set $T$ contains no $np$-circuit, so $r'(T)=r(T).$ Then $r'(S)+r'(T)-r'(M'_{a,b})=r(\{t\})+1+r(T)-r(M)-1=r(\{t\})+r(T)-r(M)=1<2,$ since in this case $t\in cl(T)$ and hence $r(T)=r(M).$ That is, $\{S, T\}$ forms a $2$-separation of $M'_{a,b},$ a contradiction.\\ For the converse part, suppose that for every $t\in E(M)$ there is an $np$-circuit of $M$ not containing $t.$ On the contrary, assume that $M'_{a,b}$ is not a $3$-connected matroid. Then there exists a $k$-separation of $M'_{a,b}$ for some $k \leq 2.$ By Theorem \ref{con1}, $k$ cannot be equal to $1$. For $k=2,$ let $\{S,T\}$ be a $2$-separation of $M'_{a,b}.$ Then $\{S,T\}$ is a partition of $E\cup \{z\}$ such that $|S|, |T|\geq 2$ and \begin{equation}\label{eq3} r'(S) + r'(T)-r'(M'_{a,b}) <2.
\end{equation} $\mathbf{Case~1:}$ Suppose $S=\{z,t\},$ $t\in E(M).$ By hypothesis, $T$ contains an $np$-circuit not containing $t.$ Then Equation \ref{eq3} gives $(r(\{t\})+1)+(1+r(T))-r(M)-1 < 2$ $\implies$ $r(\{t\})+r(T)-r(M)< 1.$ Thus $\{\{t\}, T\}$ forms a $1$-separation of $M,$ which is a contradiction.\\ $\mathbf{Case~2:}$ Suppose $ z\in S$ and $|S|\geq 3.$ If $T$ contains no $np$-circuit, then Equation \ref{eq3} yields $(r(S\setminus z)+1)+r(T)-r(M)-1 < 2 \implies$ $r(S\setminus z)+r(T)-r(M) < 2.$ Therefore $\{S\setminus z, T\}$ gives a $2$-separation of $M,$ a contradiction.\\ Further, if $T$ contains an $np$-circuit, then $r'(S)=r(S\setminus z)+1$ and $r'(T)=r(T)+1.$ By Equation \ref{eq3}, we get $(r(S\setminus z)+1)+(r(T)+1)-r(M)-1 < 2 \implies$ $r(S\setminus z)+r(T)-r(M) < 1.$ Thus $\{S\setminus z, T\}$ gives a $1$-separation of $M,$ a contradiction. So in neither case does such a partition exist. Therefore $M'_{a,b}$ is $3$-connected. \end{proof} \section{Applications} \noindent For an Eulerian matroid $M$ on ground set $E$ there exist disjoint circuits $C_1, C_2, \ldots, C_k$ of $M$ such that $E= C_1 \cup C_2 \cup \ldots \cup C_k.$ \noindent We call the collection $\{C_1,C_2,\ldots,C_k\}$ a circuit decomposition of $M.$\\ \noindent Let $\{a,b\} \subset E$. We call a circuit decomposition \~{C} $=\{C_1,C_2,\ldots,C_k\}$ of $M$ an \textit{$ep$-decomposition} of $M$ if it contains exactly one $np$-circuit with respect to the $a,b$ splitting of $M.$ In the next proposition, we give a sufficient condition for the element splitting operation to yield an Eulerian $p$-matroid from an Eulerian $p$-matroid. \begin{proposition} Let $M$ be an Eulerian $p$-matroid and $a,b\in E$. If $M$ has an \textit{$ep$-decomposition}, then $M'_{a,b}$ is an Eulerian $p$-matroid. \end{proposition} \begin{proof} Let \~{C} $=\{C_1,C_2,\ldots,C_k\}$ be an \textit{$ep$-decomposition} of $M$ and let $C_1$ be the $np$-circuit in it.
Then $C_1 \cup z$ is a circuit of $M'_{a,b}.$ Thus $\{C_1 \cup z, C_2, \ldots ,C_k\}$ is the desired circuit decomposition of $M'_{a,b}.$ \end{proof} \begin{proposition} Let $M'_{a,b}$ be an Eulerian $p$-matroid and let \~{C}$=\{C_1, C_2, \ldots ,C_k\}$ be a circuit decomposition of $M'_{a,b}$. If \~{C} contains no member which is a union of an $np$-circuit and an independent set of $M,$ then $M$ is Eulerian and has an $ep$-decomposition. \end{proposition} \begin{proof} Assume, without loss of generality, that $z \in C_1.$ Then $C_1 \in \mathcal{C}_z$ and $C_1 \setminus z$ is an $np$-circuit of $M.$ We show that $C_1 \setminus z$ contains both $a$ and $b.$ On the contrary, assume that $C_1 \setminus z$ contains only $a.$ Then $b \in C_i$ for some $i \in \{2,3,\ldots,k\}.$ Since $C_i$ is also a circuit of $M_{a,b}$ containing only $b,$ by Theorem 2.10 of \cite{mjg} it must be a union of an $np$-circuit and an independent set of $M,$ which contradicts the hypothesis. Therefore $C_1 \setminus z$ contains both $a$ and $b,$ and the collection $\{C_1\setminus z, C_2, \ldots ,C_k\}$ forms an $ep$-decomposition of $M.$ \end{proof} \noindent In Example \ref{ex1}, the matroid $R_8$ is Eulerian with $ep$-decomposition $E=C_1 \cup C_2,$ where $C_1 =\{2,4,6,8\}$ is a $p$-circuit and $C_2=\{1,3,5,7\}$ is an $np$-circuit. The element splitting matroid $M'_{3,5}$ is also Eulerian, with circuit decomposition $E \cup z=C_1 \cup (C_2 \cup z).$ \noindent M. Borowiecki \cite{bro} defined a hamiltonian matroid as a matroid containing a circuit of size $r(M)+1;$ such a circuit is called a hamiltonian circuit of the matroid $M.$ In the next corollary, we give a sufficient condition for the element splitting operation to yield a hamiltonian matroid from a hamiltonian matroid. \begin{corollary} If $M$ is a hamiltonian matroid with an $np$-circuit of size $r(M)+1,$ then $M'_{a,b}$ is hamiltonian.
\end{corollary} \begin{proof} Let $C$ be an $np$-circuit of $M$ of size $r(M)+1.$ Then by Lemma \ref{L1}, $C\cup z$ is a circuit in $M'_{a,b}$ of size $r(M)+2=r(M'_{a,b})+1.$ Hence $M'_{a,b}$ is hamiltonian. \end{proof} \noindent In Example \ref{ex1}, the matroid $R_8 \cong M[A]$ is hamiltonian and its element splitting matroid $M'_{3,5} \cong M[A'_{3,5}]$ is also hamiltonian.
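The rank identity of Lemma \ref{s} can also be checked computationally for binary matroids. The following Python sketch assumes the standard binary construction of the element splitting matroid (append to $A$ a row with $1$s in the columns $a$ and $b$, then a column $z$ whose only $1$ lies in that new row); the $3\times 5$ matrix used here is a toy example, not $R_8$.

```python
from itertools import combinations

def gf2_rank(vectors):
    """Rank over GF(2) of integer bitmask vectors (Gaussian elimination)."""
    basis = {}  # pivot bit -> reduced vector
    for v in vectors:
        while v:
            p = v.bit_length() - 1
            if p not in basis:
                basis[p] = v
                break
            v ^= basis[p]
    return len(basis)

def col_rank(matrix, cols):
    """GF(2) rank of the submatrix formed by the chosen columns."""
    vecs = []
    for row in matrix:
        v = 0
        for i, c in enumerate(cols):
            if row[c]:
                v |= 1 << i
        vecs.append(v)
    return gf2_rank(vecs)

def element_splitting(A, a, b):
    """A'_{a,b}: append the row with 1s in columns a, b and a new
    column z whose single 1 lies in that row (assumed construction)."""
    n = len(A[0])
    rows = [list(r) + [0] for r in A]
    rows.append([1 if j in (a, b) else 0 for j in range(n)] + [1])
    return rows

# toy binary matrix (columns = ground set E = {0, ..., 4})
A = [[1, 0, 0, 1, 1],
     [0, 1, 0, 1, 0],
     [0, 0, 1, 0, 1]]
Ap = element_splitting(A, 0, 1)
z = 5  # index of the new element

# Lemma: r'(S u z) = r(S) + 1 for every subset S of E(M)
for k in range(6):
    for S in combinations(range(5), k):
        assert col_rank(Ap, list(S) + [z]) == col_rank(A, list(S)) + 1
```

In particular, taking $S=E$ recovers $r(M'_{a,b})=r(M)+1$, in agreement with the basis description of Lemma \ref{k}.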
\section{Introduction} One of the fundamental principles of synthetic biology is the construction of standardized biological parts and devices which are interchangeable. A proper characterization of these parts and devices appears to be a key issue in order to make them reusable in a predictive way. In the recent past, scientists have witnessed several initiatives towards the design and fabrication of synthetic biological components and systems as a promising way to explore, understand and obtain beneficial applications from nature. For instance, in the post-genomic era one of the most fascinating challenges scientists are facing is to understand how the phenotypic behaviour of living cells arises out of the properties of their complex network of signalling proteins. While the interacting biomolecules perform many essential functions in these systems, the underlying design principles behind the functioning of such intracellular networks still remain poorly understood \cite{elowitz,becskei}. Several initiatives have been reported along this line of thought to uncover some key working principles of such genetic regulatory networks via quantitative analysis of relatively simple, experimentally well characterized, artificial genetic circuits. It has been shown that custom-made gene-regulatory circuits with any desired property can be constructed from simple regulatory elements \cite{monod}. These properties include bistability, multistability or oscillatory behaviour of genetic circuits in various microorganisms, such as the bacteriophage switch \cite{ptashne} or the cyanobacterium circadian oscillator \cite{ishiura}. As one example, the genetic {\it toggle switch}, a synthetic, bistable gene-regulatory network in {\it Escherichia coli}, was constructed together with a simple theory that predicts the conditions necessary for bistability \cite{gardner,stricker}.
Further, artificial positive feedback loops (PFLs) have been used as genetic amplifiers in order to enhance the responses of weak promoters and in the creation of eukaryotic gene switches \cite{becskei2}. Sayut et al. demonstrated the construction and directed evolution of two PFLs based on the LuxR transcriptional activator and its cognate promoter, Pluxl \cite{sayut}. These circuits may have applications in metabolic engineering or gene therapy that require inducible gene expression \cite{weber,walz}. The desired performance of these synthetic networks, and in turn the resultant phenotype, is strongly dependent on the expression level of the corresponding genes, which is further controlled by several factors such as promoter strength, cis- and trans-acting factors, cell growth stage, the expression level of various RNA polymerase-associated factors and other gene-level regulation characteristics \cite{gardner,becskei}. Thus, one important ingredient to elucidate gene function and genetic control of phenotype would be to have access to well-characterized promoter libraries. These promoter libraries would in turn be useful for the design and construction of novel biological systems. There have been several initiatives to control gene expression through the creation of promoter libraries \cite{kumar,santos}. Alper et al. \cite{alper} have reported a methodology to develop a completely characterized, homogeneous, broad-range, functional promoter library, and demonstrated its applicability to the analysis of genetic control. Since Miller \cite{miller} published a proposal for a measurement standard for $\beta$-galactosidase assays, much work has been done, yet no conclusive standard has been established \cite{liang,smolke,khlebnikov}. The main goal in calibration is measuring a {\it query} value against an established {\it standard}.
A good {\it device} should be unique, reliable and easy to use; additionally, it should circumvent, to the greatest possible extent, any noise that could alter the measurement. Recently, a methodology \cite{kelly} has been reported to characterize the activity of promoters by using two different cell strains. In the present study we propose the use of a synthetic gene regulatory network as a framework to characterize different promoter specifications by using a single-cell strategy. In this context, characterization stands for evaluating the parameters of a {\it query} promoter as compared to a standard promoter acting as a ``scale''. The proposed device, the promoter ``calibrator'', works on the following principle: a specific input signal is sensed by promoters of different sensing strengths and, as an output, fluorescence of specific colours is produced, which allows the relative strength of the promoters to be quantified. Analyses were carried out in order to identify the relevant model parameters and the corresponding range of parameter values which are compatible with the performance of this calibrating biological design over a spectrum of given inputs. This contribution is organized as follows: in the first part, ``Design'', the structure and working principle are explained and the mathematical model resulting from the construction is established. In Section \ref{sec:numana}, ``Numerical Analysis of the System'', we analyze the dynamics of the model equations with regard to stability, functional parameter regions and sensitivity or robustness against changes in certain key parameter values. In the following section, a proof-of-concept design is proposed in order to choose the right parameters to actually perform the experimental validation of our concepts and obtain a system that gives a clear, stable and interpretable signal. Finally, the conclusions of our work are presented.
\section{Design} \subsection{Biological principles} Our promoter calibrator is composed of two promoters (each with two parts: a sensing and a repressed domain), two repressor proteins and two fluorescent protein outputs (see Fig. \ref{fig:diagr}). Each promoter is inhibited by the repressor whose transcription is promoted by the opposing promoter. Fluorescent protein levels will be directly related to repressor protein levels, activated in turn by their sensing promoters. Hence, different sensing strengths will cause a difference in the expression of the fluorescent proteins, detectable by means of single-cell fluorescence as changes in the colour patterns of the individual cell or cell sample. \begin{figure} \begin{center} \includegraphics[width=7cm]{diagr2.eps} \caption{Design of the proposed promoter calibrator. It is composed of two promoters (with two parts each: a sensing and a repressed domain); one of the sensing promoters is the {\it device} promoter and the other is the {\it query} promoter. The repressed domains are controlled by the two repressor proteins ($x$ and $y$). Each promoter is inhibited by the repressor which is transcribed from the opposing promoter. Fluorescent protein levels will be proportional to repressor protein levels, which, in turn, will be promoted by the sensing promoters.} \label{fig:diagr} \end{center} \end{figure} In our scheme, one of the sensing promoters acts as the {\it device} promoter to which the strength of a given {\it query} promoter is quantitatively compared. The main use of this device is to characterize different promoter specifications (sensing affinities and cooperativities) with respect to some standard. One of the main strengths of this design lies in the potential modularity of the system: by changing the sensing part of the promoters, other sensing promoters can be calibrated; this change can be carried out by a simple, straightforward cloning step.
Modularity also broadens the potential of this device, as it can be implemented in a potentially unlimited set of systems. \subsection{Mathematical model} The behaviour of the proposed promoter calibrator can be understood via an effective mathematical model. The model is effective in the sense that transcription and translation are modeled as a single lumped reaction; treating them separately would otherwise introduce a response delay. We seek to classify the dynamic behaviours as model parameters change, and to determine which experimental parameters should be fine-tuned in order to obtain a satisfactory performance of our device. The time-dependent changes in repressor and sensing protein (input) concentrations are given by Equations (\ref{eq1}-\ref{eq3}). According to the biological design, reporter protein concentrations are directly related to repressor protein concentrations. \begin{eqnarray} \frac{dx}{dt}&=&\alpha_1\frac{\left(\frac{p_s}{k_1}\right)^{n_1}}{1+\left(\frac{p_s}{k_1}\right)^{n_1}} \frac{1}{1+\left(\frac{y}{k_y}\right)^{n_y}}-\beta_x x +\gamma_x \label{eq1}, \\ \frac{dy}{dt}&=&\alpha_2\frac{\left(\frac{p_s}{k_2}\right)^{n_2}}{1+\left(\frac{p_s}{k_2}\right)^{n_2}} \frac{1}{1+\left(\frac{x}{k_x}\right)^{n_x}}-\beta_y y +\gamma_y \label{eq2}, \\ \frac{dp_s}{dt}&=&-\beta_{p_s} p_s\label{eq3}. \end{eqnarray} The {\it device} and {\it query} promoters activate the production of the repressor proteins $x$ and $y$, respectively, whose concentrations are directly related to the concentrations of the fluorescent proteins. Thus these variables will be treated as equivalent from the modelling point of view.
Parameters $\alpha_1$ and $\alpha_2$ represent the effective rates of synthesis of the repressor proteins $x$ and $y$, respectively; each $\alpha$ is a lumped parameter that takes into account the net effect of various activities, such as RNA polymerase binding, RNA elongation and termination of the transcript, ribosome binding and polypeptide elongation, and is modified by repression and sensing effects. The constants $\beta_x$, $\beta_y$ and $\beta_{p_s}$ are the degradation constants of repressor protein $x$, repressor protein $y$ and the sensing protein $p_s$, respectively. The sensing protein concentration $p_s$ depends on the sensed input, is easy to change in a given experiment, and is used as the main input variable in our calibrator experiments. It is important to note that a slow rate of degradation is assumed for the sensing protein, implying a nearly constant level over a reasonable experimental time interval. Basal rates of synthesis of proteins $x$ and $y$ are denoted by $\gamma_x$ and $\gamma_y$, respectively. Repressor and sensing responses are assumed to follow Hill equation dynamics: promoter-binding monomers form multimers by positive allosterism and attach to their cognate promoters with saturating behaviour. Binding cooperativities are described by the Hill coefficients $n_x$ and $n_y$ for the repressed domains corresponding to $x$ and $y$, respectively, and $n_1$ and $n_2$ for the sensing domains corresponding to the {\it device} and {\it query} promoters, respectively. The extent of saturation is described by half-saturation (Michaelis) constants, denoted by $k_x$ and $k_y$ for the repressed domains corresponding to $x$ and $y$, respectively, and by $k_1$ and $k_2$ for the sensing domains corresponding to the {\it device} and {\it query} promoters, respectively. The total number of promoter sites is assumed to be conserved and the total concentration of both promoters is chosen to be identical.
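As a sanity check of the comparator behaviour encoded in Equations (\ref{eq1}-\ref{eq3}), the model can be integrated numerically. The sketch below uses a simple forward Euler scheme; all parameter values are illustrative assumptions chosen for demonstration, not fitted values:

```python
def simulate(alpha1, alpha2, ps0,
             k1=1.0, k2=1.0, n1=2, n2=2,
             kx=1.0, ky=1.0, nx=2, ny=2,
             beta_x=1.0, beta_y=1.0, beta_ps=1e-4,
             gamma_x=0.01, gamma_y=0.01,
             dt=1e-3, t_end=50.0):
    """Forward-Euler integration of Eqs. (1)-(3); x(0) = y(0) = 0."""
    x = y = 0.0
    ps = ps0
    t = 0.0
    while t < t_end:
        # Hill-type sensing terms for the device and query promoters
        sens1 = (ps / k1) ** n1 / (1 + (ps / k1) ** n1)
        sens2 = (ps / k2) ** n2 / (1 + (ps / k2) ** n2)
        dx = alpha1 * sens1 / (1 + (y / ky) ** ny) - beta_x * x + gamma_x
        dy = alpha2 * sens2 / (1 + (x / kx) ** nx) - beta_y * y + gamma_y
        dps = -beta_ps * ps  # slow decay of the sensing protein
        x, y, ps = x + dt * dx, y + dt * dy, ps + dt * dps
        t += dt
    return x, y

# the comparator selects the promoter with the larger effective strength
x_hi, y_hi = simulate(alpha1=5.0, alpha2=3.0, ps0=2.0)  # x dominates
x_lo, y_lo = simulate(alpha1=3.0, alpha2=5.0, ps0=2.0)  # y dominates
```

With otherwise symmetric parameters, the repressor driven by the stronger promoter reaches a high steady level and keeps the other one repressed, as described in the next section.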
In our construction, the crossrepressing part will be kept unchanged while different sensing domains may be attached to it. The aim is to establish a protocol to accurately quantify differences between the sensing promoter parameters ($\alpha_{1,2}$, $k_{1,2}$). The crossrepression parameters ($k_{x,y}$, $\beta_{x,y}$ and $n_{x,y}$) are structural parameters that must be chosen in such a way that the fluorescence response of the system gives a stable, sensitive and robust indication of the quantitative relations between the sensing promoter parameters. The dynamic analysis of the system will help us make the right decisions about the most appropriate values for these structural parameters. The next sections are devoted to this dynamical analysis, in order to determine the sensitivity and robustness of the system over different ranges of the structural parameters. The commercial software package Mathematica (Wolfram) was used for model development and simulation. In the numerical calculations we have used the following dimensionless variables: \begin{eqnarray} X&=&\frac{x}{k_x} \\ Y&=&\frac{y}{k_y} \\ \tau&=&t\beta_x \\ \bar{\alpha}_{1,2}&=&\frac{\alpha_{1,2}}{\beta_x k_{x,y}}\\ \bar{\gamma}_{x,y}&=&\frac{\gamma_{x,y}}{\beta_x k_{x,y}} \end{eqnarray} Therefore, in the plots of this work, the concentrations of the $x$ and $y$ repressor proteins are given in units of $k_x$ and $k_y$, respectively, and time is given in units of $\frac{1}{\beta_x}$. In terms of the dimensionless variables, Equations (\ref{eq1}-\ref{eq2}) take the form: \begin{eqnarray} \frac{dX}{d\tau}&=&\bar{\alpha}_{1}\frac{\left(\frac{p_s}{k_1}\right)^{n_1}}{1+\left(\frac{p_s}{k_1}\right)^{n_1}} \frac{1}{1+Y^{n_y}}- X +\bar{\gamma}_{x} \label{eq1ad}, \\ \frac{dY}{d\tau}&=&\bar{\alpha}_{2}\frac{\left(\frac{p_s}{k_2}\right)^{n_2}}{1+\left(\frac{p_s}{k_2}\right)^{n_2}} \frac{1}{1+X^{n_x}}-R Y +\bar{\gamma}_{y} \label{eq2ad}, \end{eqnarray} where $R$ is the ratio $\frac{\beta_y}{\beta_x}$.
\section{Numerical analysis of the system}\label{sec:numana} In order to classify the possible dynamic scenarios of our model, we make the simplifying assumption that the degradation constant of the sensing protein is much smaller than the others ($\beta_{p_s}\ll \beta_x,\beta_y$). Given this assumption, to first order of approximation we have, \begin{eqnarray} \frac{dp_s}{dt}&=&-\beta_{p_s}p_s\approx 0. \end{eqnarray} In this approximation, the concentration of sensing protein $p_s$ is constant during the evolution of the remaining internal variables of the system. This leads to a system of two autonomous coupled non-linear ordinary differential equations in the variables $x$ and $y$, Eqs. (\ref{eq1ad}-\ref{eq2ad}), in which $p_s$ is fixed, although it can easily be changed within a given experiment. This is not true for the rest of the parameters, which are more difficult to modify in a given experiment. This approximation transforms the system into: \begin{eqnarray} \frac{dX}{d\tau}&=&\bar{\alpha}^\prime_1\frac{1}{1+Y^{n_y}}-X +\bar{\gamma}_x \label{eq5}, \\ \frac{dY}{d\tau}&=&\bar{\alpha}^\prime_2\frac{1}{1+X^{n_x}}-R Y +\bar{\gamma}_y \label{eq6}. \end{eqnarray} where the new parameters $\bar{\alpha}^\prime_i$ (effective transcription factors) are given by the following expression: \begin{eqnarray} \bar{\alpha}^\prime&=&\alpha \frac{\left(\frac{p_s}{k}\right)^{n}}{1+\left(\frac{p_s}{k}\right)^{n}} \label{eq7}. \end{eqnarray} In the limit in which the constants $k_{x,y}$, $\beta_{x,y}$, $\gamma_{x,y}$ are equal, these equations describe the biological equivalent of an electronic {\it comparator}, that is, a device which compares two voltages or currents and switches its output to indicate the larger signal. In the biological equivalent, our comparator selects the larger of the two $\bar{\alpha}^\prime$'s, as exemplified in Fig.
\ref{fig:concentr}, which represents the evolution of the system for the cases in which the {\it query} promoter has a higher or a lower effective transcription factor than the {\it device} promoter, respectively. \begin{figure} \begin{center} \includegraphics[width=7cm]{concentr1.eps} \\ \includegraphics[width=7cm]{concentr2.eps} \caption{Typical response of the proposed promoter calibrator. In the upper panel the steady state concentration of the $x$ protein (solid line) is higher, while in the lower panel the concentration of the $y$ protein (dashed line) is higher.} \label{fig:concentr} \end{center} \end{figure} In any case, our aim is to construct a device, termed a {\it calibrator}, which not only selects the stronger affinity but also allows the relative strength of both promoters to be quantified. Although the comparator is a fundamental part of this device, a deeper understanding of the dynamics of the system is required for its application as a calibrator in real biological environments. \subsection{Dynamic analysis of the calibrator} The dynamical analysis of the system given by Eqs. (\ref{eq5}-\ref{eq6}) requires the determination of its steady state solutions and their linear stability. The steady states $(X_{ss},Y_{ss})$ are given by the intersection of the nullclines: \begin{eqnarray} F_1(X,Y)&=&\bar{\alpha}^\prime_1\frac{1}{1+Y^{n_y}}-X +\bar{\gamma}_x=0 \textrm{, } \label{eq8} \end{eqnarray} and \begin{eqnarray} F_2(X,Y)&=&\bar{\alpha}^\prime_2\frac{1}{1+X^{n_x}}-R Y +\bar{\gamma}_y=0 . \label{eq9} \end{eqnarray} The solution of Eqs. (\ref{eq8}-\ref{eq9}) cannot be obtained in closed form; hence numerical methods must be used.
The linear stability of the steady states is determined by the sign of the eigenvalues of the Jacobian matrix, \begin{eqnarray} M&=&\left(\begin{array}{cc} \frac{\partial F_1}{\partial X} & \frac{\partial F_1}{\partial Y} \\ \frac{\partial F_2}{\partial X} & \frac{\partial F_2}{\partial Y} \end{array}\right)_{X=X_{ss},Y=Y_{ss}}\label{eq10} \end{eqnarray} which are given by \begin{eqnarray} \lambda_\pm&=&-\frac{1+R}{2}\pm\frac{1}{2}\sqrt{(R-1)^2+4\Delta} \textrm{, } \label{eq11}\\ \Delta&=&\frac{n_x n_y(X_{ss}-\bar{\gamma}_x)(\bar{\alpha}^\prime_1+\bar{\gamma}_x-X_{ss})(Y_{ss}R-\bar{\gamma}_y)(\bar{\alpha}^\prime_2+\bar{\gamma}_y-Y_{ss}R)}{\bar{\alpha}^\prime_1\bar{\alpha}^\prime_2X_{ss}Y_{ss}}. \label{eq12} \end{eqnarray} From the analysis of Eqs. (\ref{eq8}-\ref{eq9}) we deduce that, for the positive steady state solutions ($X_{ss}>0$ and $Y_{ss}>0$), the following constraints hold: $\bar{\alpha}^\prime_1>X_{ss}-\bar{\gamma}_x>0$ and $\bar{\alpha}^\prime_2>Y_{ss}R-\bar{\gamma}_y>0$, respectively. Thus, taking into account Eqs. (\ref{eq11}-\ref{eq12}), we observe that $\Delta>0$ and $\lambda_-$ is always negative. However, $\lambda_+$ can be either negative, for $\Delta<R$, or positive, for $\Delta>R$, resulting in either stable nodes (sinks) or unstable saddles, respectively. The condition $\Delta=R$ is satisfied at certain critical values of the parameters at which precisely one of the steady state solutions of the system changes its stability. In order to highlight the specific aspects of the calibrator dynamics, we will in the following sections consider a number of special cases.
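The numerical determination of the fixed points and their stability can be sketched as follows for the symmetrical case ($R=1$, $n_x=n_y=2$, $\bar{\gamma}=0.1$, $\bar{\alpha}^\prime=3$; values chosen for illustration). The routine eliminates $Y$ through the $Y$-nullcline, brackets the roots of the remaining one-dimensional equation on a grid, refines them by bisection, and classifies each fixed point through the sign of $\lambda_+$ from Eqs. (\ref{eq11}-\ref{eq12}):

```python
ALPHA, N, GAMMA, R = 3.0, 2, 0.1, 1.0  # illustrative parameter values

def Y_of_X(X):
    # Y-nullcline, Eq. (F2 = 0), solved for Y
    return (GAMMA + ALPHA / (1.0 + X ** N)) / R

def g(X):
    # X-nullcline residual after eliminating Y; roots are fixed points
    return GAMMA + ALPHA / (1.0 + Y_of_X(X) ** N) - X

def bisect(f, a, b, tol=1e-12):
    for _ in range(200):
        m = 0.5 * (a + b)
        if b - a < tol:
            return m
        if (f(m) < 0) == (f(a) < 0):
            a = m
        else:
            b = m
    return 0.5 * (a + b)

# bracket every fixed point by scanning for sign changes of g
roots = []
xs = [i * 0.01 for i in range(1, 500)]
for lo, hi in zip(xs, xs[1:]):
    if g(lo) * g(hi) < 0:
        roots.append(bisect(g, lo, hi))

def lambda_plus(X, Y):
    # largest eigenvalue of the Jacobian at a fixed point (text formulas)
    delta = (N * N * (X - GAMMA) * (ALPHA + GAMMA - X)
             * (Y * R - GAMMA) * (ALPHA + GAMMA - Y * R)
             / (ALPHA * ALPHA * X * Y))
    return -(1 + R) / 2 + ((R - 1) ** 2 + 4 * delta) ** 0.5 / 2

states = [(X, Y_of_X(X)) for X in roots]
stability = [lambda_plus(X, Y) < 0 for X, Y in states]
```

For these parameters three fixed points are found: two stable sinks and, between them, the unstable symmetric saddle with $\Delta>R$.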
Specifically, we will examine the (fully) symmetrical calibrator, $\bar{\alpha}^\prime_1=\bar{\alpha}^\prime_2=\bar{\alpha}^\prime$, $n_x = n_y = n$, $k_x = k_y = k$, $\beta_x=\beta_y\Rightarrow R=1$ and $\bar{\gamma}_x=\bar{\gamma}_y=\bar{\gamma}$, and the partially symmetrical calibrator, with the same specifications except that $\bar{\alpha}^\prime_1$ and $\bar{\alpha}^\prime_2$ may differ. At the end of the section some general considerations about the dynamics of the system in the most general case will be made. \subsection{The fully symmetrical calibrator ($\bar{\alpha}_1^\prime=\bar{\alpha}_2^\prime=\bar{\alpha}^\prime$)} From the analysis of Eqs. (\ref{eq8}-\ref{eq9}) it follows that there is always a fixed point with $Y_{ss}=X_{ss}$, and that there exists a minimum value $X_m$ such that for parameters resulting in $X_{ss}>X_m$ three steady states exist, and otherwise only one. Using $\bar{\alpha}^\prime$ as the free parameter and taking fixed values for the rest, i.e., $n$, $R$ and $\bar{\gamma}$, the condition $\Delta=R=1$, together with Eq. (\ref{eq8}), allows us to obtain the critical values $\bar{\alpha}^\prime_m$ and $X_m$ that characterize the appearance of the bifurcation, namely: \begin{eqnarray} 1&=&\frac{n^2(\bar{\gamma}-X_m)^2(\bar{\gamma}-X_m+\bar{\alpha}^\prime_m)^2}{X_m^2\bar{\alpha}^{\prime 2}_m} \end{eqnarray} whose solution can be obtained by numerical methods. For example, $n=2$, $k=80$, $\beta=0.069$ and $\gamma=0.1$ yield $\alpha^\prime_m=11.24$ and $x_m=81.46$ or, in the dimensionless variables, $X_m=1.018$ and $\bar{\alpha}^\prime_m=2.036$. Figure \ref{fig:bifurc} shows the bifurcation diagram for $X_{ss}$ as a function of $\bar{\alpha}^\prime$: for $\bar{\alpha}^\prime>\bar{\alpha}^\prime_m$ there are three steady states.
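The quoted critical values can be reproduced numerically. Using the symmetric steady state relation $X-\bar{\gamma}=\bar{\alpha}^\prime/(1+X^n)$, the threshold condition above reduces to $n(X-\bar{\gamma})X^{n-1}=1+X^n$, which a simple bisection solves; the sketch below (with $\bar{\gamma}=\gamma/(\beta k)$) is a minimal verification under the stated parameter values:

```python
N = 2
K, BETA, GAMMA_DIM = 80.0, 0.069, 0.1
GAMMA = GAMMA_DIM / (BETA * K)  # dimensionless basal synthesis rate

def h(X):
    # threshold condition Delta = R = 1 at the symmetric fixed point,
    # reduced using X - GAMMA = alpha' / (1 + X**N)
    return N * (X - GAMMA) * X ** (N - 1) - (1.0 + X ** N)

lo, hi = 0.1, 10.0  # h(lo) < 0 < h(hi) brackets the root
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if (h(mid) < 0) == (h(lo) < 0):
        lo = mid
    else:
        hi = mid
X_m = 0.5 * (lo + hi)
alpha_m = (X_m - GAMMA) * (1.0 + X_m ** N)  # critical effective strength
```

This recovers $X_m\approx 1.018$ and $\bar{\alpha}^\prime_m\approx 2.037$, in agreement with the values quoted in the text to within rounding.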
\begin{figure} \begin{center} \includegraphics[width=7cm]{bifurc.eps} \caption{Bifurcation diagram for $X_{ss}$.} \label{fig:bifurc} \end{center} \end{figure} This analysis shows that the (fully) symmetrical calibrator possesses three fixed points for $\bar{\alpha}^\prime>\bar{\alpha}^\prime_m$: a saddle ($\vec{x}_M$) with $X_{ss}=Y_{ss}$, and two sinks, one with $X_{ss}>Y_{ss}$ and another with $X_{ss}<Y_{ss}$, referred to as $\vec{x}_R$ and $\vec{x}_L$, respectively. This behaviour is typical of a (supercritical) pitchfork bifurcation and of bistability. Regarding the possible trajectories of the dynamic variables, Figure \ref{fig:vectorfi} illustrates the phase plane of Eqs. (\ref{eq5}-\ref{eq6}), where the steady states are located at the intersections of the nullclines, Eqs. (\ref{eq8}-\ref{eq9}), represented by dashed lines. The solid lines are the stable ($W^S$) and unstable ($W^U$) manifolds of the saddle fixed point $\vec{x}_M$. The stable manifold $W^S$ divides the phase plane into two regions, below and above it, corresponding to the attraction basins of the sinks $\vec{x}_R$ and $\vec{x}_L$, respectively. Different possible trajectories in the phase plane are depicted for a number of initial conditions, where the arrows indicate the flow direction. \begin{figure} \begin{center} \includegraphics[width=7cm]{grafcal.ps} \caption{Phase plane, showing the unstable equilibrium point (where the two dashed lines intersect in the center) and the two stable steady state solutions (where the dashed lines intersect close to each axis). The arrows show the path the system would follow starting from a given point in phase space.} \label{fig:vectorfi} \end{center} \end{figure} In a calibrator experiment the initial values of the repressor protein concentrations $x$ and $y$ would be zero, and hence the phase plane trajectories would depart from the origin of Figure \ref{fig:vectorfi}.
For values of $\bar{\alpha}^\prime$ larger than $\bar{\alpha}^\prime_m$, the system becomes unpredictable, as small perturbations in the trajectories could push the system into either of the attraction basins of the sinks $\vec{x}_R$ and $\vec{x}_L$. \subsection{The partially symmetrical calibrator} We now consider the more general scenario in which $\bar{\alpha}^\prime_1$ and $\bar{\alpha}^\prime_2$ may differ, with the rest of the parameters equal ($n_x = n_y = n$, $k_x = k_y = k$, $\beta_x=\beta_y=\beta$ and $\bar{\gamma}_x=\bar{\gamma}_y=\bar{\gamma}$). The condition $\Delta=R$, which characterizes the occurrence of the pitchfork bifurcations, now reads: \begin{eqnarray} 1&=&\frac{n^2(\bar{\gamma}-X_{ss})(\bar{\gamma}-Y_{ss})(\bar{\gamma}-X_{ss}+\bar{\alpha}^\prime_1)(\bar{\gamma}-Y_{ss}+\bar{\alpha}^\prime_2)}{X_{ss}Y_{ss}\bar{\alpha}^\prime_1\bar{\alpha}^\prime_2}\label{eq:bifurc} \end{eqnarray} which must be solved together with Eqs. (\ref{eq8}-\ref{eq9}) for the fixed points of the system. Fig. \ref{fig:diffal} shows the results of numerical simulations of the resulting system of equations (with initial conditions $X=Y=0$) obtained by slightly changing the value of $\bar{\alpha}_2^\prime$ with respect to $\bar{\alpha}_1^\prime$. The figure shows the results of different simulations for $\bar{\alpha}_1^\prime=$3.0, $n=3$ and $\bar{\alpha}_2^\prime=\epsilon\bar{\alpha}_1^\prime$ with $\epsilon=$0.5, 0.6, 0.7, ..., 1.0, ..., 1.5. The results for $\epsilon<1$ are the points in the lower right corner of the plot. One can see that the positions of these points are very insensitive to the value of $\bar{\alpha}_2^\prime$. The single point in the center of the plot corresponds to $\bar{\alpha}_1^\prime=\bar{\alpha}_2^\prime$; it is the unstable saddle, and small perturbations will drive the system away from this solution towards either of the other two steady state solutions.
Once $\epsilon>1$, the system goes to the solutions with $Y_{ss}>X_{ss}$, represented by the points in the upper left corner. For these points the larger of the two $\bar{\alpha}^\prime$ values grows, and one can observe that the solution is sensitive to this value. Thus the steady state solution into which the system falls is sensitive only to the larger of $\bar{\alpha}_1^\prime$ and $\bar{\alpha}_2^\prime$; changes in the smaller of these two parameters have no appreciable effect on the final solution. \begin{figure} \begin{center} \ig{0.65}{difaln2.eps} \caption{Results of the simulation of the partially symmetric calibrator. On the right, close to the x-axis ($\bar{\alpha}^\prime_2<\bar{\alpha}^\prime_1$ for these points), there are many points at the same position, showing that, in the parameter region where the bifurcation occurs, the solution is insensitive to the value of the weaker of the two $\bar{\alpha}^\prime$s.}\label{fig:diffal} \end{center} \end{figure} When the calibrator falls within the region of bistability, if $\bar{\alpha}^\prime_2<\bar{\alpha}^\prime_1$ the orbits departing from the origin of Fig. \ref{fig:diffal} would fall within the attraction basin of solution $\vec{x}_R$. It is nevertheless observed that $\vec{x}_R$ is quite insensitive to the actual $\bar{\alpha}^\prime_2 / \bar{\alpha}^\prime_1$ ratio. In consequence, the system would show a stable but rather insensitive response to different {\it query} promoters. On the other hand, if $\bar{\alpha}^\prime_1<\bar{\alpha}^\prime_2$, the orbits departing from the origin would fall within the attraction basin of solution $\vec{x}_L$, which changes appreciably as a function of the $\bar{\alpha}^\prime_2 / \bar{\alpha}^\prime_1$ ratio. Thus the system would not only be stable, but also rather sensitive to changes in the effective {\it query} promoter affinity.
It should be kept in mind that the sensing protein concentration, $p_s$, modifies $\bar{\alpha}^\prime_1$ and $\bar{\alpha}^\prime_2$, which change from unity to $\bar{\alpha}_{1,2}$ as $p_s$ goes from zero to infinity; therefore the ratio $\bar{\alpha}^\prime_2 / \bar{\alpha}^\prime_1$ changes with $p_s$. We also define the fluorescence ratio as the ratio of the smaller to the larger of $X$ and $Y$, i.e. $X/Y$ if $X<Y$ and $Y/X$ if $X>Y$. This is the intensity ratio of the two fluorescences once the system reaches its steady state. In Fig. \ref{fig:ratio1} we plot this ratio for different values of $\bar{\alpha}^\prime_2 / \bar{\alpha}^\prime_1$. The ratio grows until it reaches its maximum at $\bar{\alpha}^\prime_2=\bar{\alpha}^\prime_1$ and then decreases. Note also that the larger $\bar{\alpha}^\prime_1$ is, the less sensitive the fluorescence ratio is to the ratio $\bar{\alpha}^\prime_2/\bar{\alpha}^\prime_1$. \begin{figure} \begin{center} \begin{tabular}{c} \ig{0.65}{fluorn2.eps}\\ \ig{0.65}{fluorn1.eps} \end{tabular} \caption{Upper plot: fluorescence ratio for different values of $\bar{\alpha}^\prime_y/\bar{\alpha}^\prime_x$ ($\bar{\alpha}_x^\prime=$2.5). Blue points are solutions with $X>Y$; red points, solutions with $Y>X$. Lower plot: fluorescence ratio for different values of $\bar{\alpha}^\prime_y/\bar{\alpha}^\prime_x$ and of $\bar{\alpha}_x^\prime$ (solid line: $\bar{\alpha}_x^\prime=$2, dashed line: $\bar{\alpha}_x^\prime=$3, dotted line: $\bar{\alpha}_x^\prime=$4).}\label{fig:ratio1} \end{center} \end{figure} \subsection{The calibrator dynamics in the general case} The theorem of Andronov and Pontryagin \cite{guckenheimer} states that Eqs. (\ref{eq5}-\ref{eq6}) in the symmetrical case are structurally stable, since every fixed point is hyperbolic (its eigenvalues have non-null real parts) and there are no orbits connecting two saddles (there being only one). 
Structural stability implies that the phase-plane topology is preserved under small perturbations of the parameters. Hence the phase plane of Eqs. (\ref{eq5}-\ref{eq6}) in the case $\bar{\alpha}^\prime_x\approx\bar{\alpha}^\prime_y$, $n_x\approx n_y$, $k_x\approx k_y$, $\beta_x\approx\beta_y$ and $\bar{\gamma}_x\approx\bar{\gamma}_y$ is topologically equivalent to that shown in Fig. 4, meaning that there is a continuous function (homeomorphism) between the two phase planes. Changing the ratio of the other structural parameters of the calibrator has effects similar to those found in the partially symmetrical case: the bifurcation appears when each parameter ratio ($n_x/n_y$, $\beta_x/\beta_y$, ...) lies within a certain range around the value 1, and disappears far from 1. This range generally widens as the values of $\bar{\alpha}^\prime_{1,2}$ increase. In Fig. \ref{fig:chabet} we show, as an example, the range where the bifurcation appears for different values of $\beta_x/\beta_y$. \begin{figure} \begin{center} \ig{0.65}{betaa.eps} \caption{(Color online) Position of $X_{ss}$ for different values of $\beta_y/\beta_x$. Black curve: $\bar{\alpha}_x^\prime=\bar{\alpha}_y^\prime=$2; blue curve: $\bar{\alpha}_x^\prime=\bar{\alpha}_y^\prime=$3; red curve: $\bar{\alpha}_x^\prime=\bar{\alpha}_y^\prime=$4.}\label{fig:chabet} \end{center} \end{figure} If $R<1$ the orbits departing from the origin ($X=Y=0$) fall within the attraction basin of the solution $\vec{x}_L$; if $R>1$ they fall within the attraction basin of the solution $\vec{x}_R$. \subsection{Calibrator performance analysis: robustness and response time} In order to use this system to measure the relative strength of two promoters, one should keep in mind two factors. 
The first is the right choice of the parameters of the repressor proteins and of the {\it device} promoter, so as to obtain a robust system that gives a stable, easily interpreted response. The second is the response time of the device, that is, how long the system needs to reach its steady-state solution. When the equations are written in dimensionless form, the parameters $k_x$ and $k_y$ do not appear explicitly, see Eqs. (\ref{eq1ad}-\ref{eq2ad}); they appear implicitly in the definition of the variables $X$ and $Y$ and in the $\bar{\gamma}$ parameters (which have little influence on the dynamics of the system). By choosing $k_x=k_y$ the results are easier to interpret, since the fluorescence is directly related to the concentrations of the proteins $x$ and $y$, and the fluorescence intensity ratio ($X/Y$ or $Y/X$) and the fluorescence intensity difference ($|X-Y|$) become directly proportional to the same quantities computed with the real protein concentrations. An experiment with the calibrator would consist of cloning a plasmid carrying the calibrator genetic circuit, assembled with a {\it device} promoter chosen among known ones and with a {\it query} promoter whose parameters are unknown. The plasmid would then be inserted into cells in solutions of the signaling protein at different concentrations $p_s$. Each promoter is modeled through two parameters, $\bar{\alpha}_{1/2}$ and $k_{1/2}$, where the subscripts $1$ and $2$ stand for the {\it device} and {\it query} promoters, respectively. While at low $p_s$ concentrations both promoters are weak and give a weak fluorescence response, at high $p_s$ concentrations both promoters are saturated and their strength is maximal. From the fluorescence intensities at these high concentrations of the signaling protein it is possible to establish the relative strength of the two promoters, $\bar{\alpha}_2/\bar{\alpha}_1$. 
In figures \ref{fig:diff} and \ref{fig:rat} we plot the fluorescence difference, defined as $|X-Y|$, and the fluorescence ratio $X/Y$ for three different values of $\bar{\alpha}_1$ and varying $\bar{\alpha}_2$ at high signaling protein concentrations (where the effective strength of both promoters is maximal). \begin{figure} \begin{center} \ig{0.65}{distch.eps} \caption{(Color online) The fluorescence difference for different values of $\bar{\alpha}_1$ as a function of $\bar{\alpha}_2$. Note that for values of $\bar{\alpha}_2$ sufficiently higher than $\bar{\alpha}_1$ the fluorescence difference increases linearly with $\bar{\alpha}_2$.}\label{fig:diff} \end{center} \end{figure} \begin{figure} \begin{center} \begin{tabular}{c} \ig{0.65}{ratio1.eps} \\ \ig{0.65}{ratio2.eps} \end{tabular} \caption{(Color online) Upper plot: the fluorescence ratio $X/Y$ as a function of $\bar{\alpha}_2/\bar{\alpha}_1$ for different values of $\bar{\alpha}_1$. One can clearly see that for similar values of $\bar{\alpha}_1$ and $\bar{\alpha}_2$, when the bifurcation occurs, the system goes to a state where the repressor protein of the stronger promoter completely dominates the system. Lower plot: detail of the region where $\bar{\alpha}_2>\bar{\alpha}_1$.} \label{fig:rat} \end{center} \end{figure} The first thing to note from figures \ref{fig:diff} and \ref{fig:rat} is that, if the {\it query} promoter is stronger than the {\it device} one, the {\it device} fluorescence ($X$) is strongly suppressed, and the fluorescence intensity coming from the {\it query} promoter is proportional to its strength (the response of the system is linear). This means that, by choosing a weak {\it device} promoter, one can establish the relative strength of other promoters through the simple proportionality law given by the linear response plotted in figure \ref{fig:diff}. At each $p_s$ concentration, the effective strengths of the {\it device} and {\it query} promoters are different, see eq. 
(\ref{eq7}). The parameter that distinguishes two promoters with respect to the $p_s$ concentration is their Michaelis constant, $k_{1,2}$. The parameters $k_{1,2}$ set the rate at which the effective strength of each promoter grows. If a promoter has a small value of $k$, it already acts at full strength at low concentrations of the signaling protein, while for high values of $k$ the promoter saturates only at high values of $p_s$. We have already argued for choosing a small value of $\bar{\alpha}_1$ for the {\it device} promoter, so we expect the {\it query} promoters to have $\bar{\alpha}_2>\bar{\alpha}_1$. If $k_2<k_1$, the effective strength of the {\it query} promoter is always larger than that of the {\it device} one, and in the experiment the luminosity associated with the {\it query} promoter is stronger for any value of the signaling protein concentration $p_s$. On the other hand, if one chooses a small value for $k_1$, the strength of the {\it device} promoter saturates already at low $p_s$ concentrations, and if $k_1$ is small enough it saturates before the effective strength of the {\it query} promoter exceeds $\bar{\alpha}_1$. In this situation, at low concentrations of $p_s$ the luminosity of the {\it device} promoter is stronger than the one coming from the {\it query} promoter; then, at some critical value $p_{s}=p_{sc}$ both strengths are equal, and for $p_s>p_{sc}$ the stronger fluorescence is the one from the {\it query} promoter. 
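For concreteness, assuming the effective strengths take the Hill form $\bar{\alpha}^\prime_i = \bar{\alpha}_i\, p_s^{n}/(k_i^{n} + p_s^{n})$ (our reading of Eq. (\ref{eq7}), which is not reproduced in this excerpt), the crossing point $p_{sc}$ can be located numerically by bisection:

```python
def eff_strength(alpha, k, p, n=2):
    """Assumed Hill-type effective promoter strength as a function of p_s."""
    return alpha * p**n / (k**n + p**n)

def find_crossing(a1, k1, a2, k2, lo, hi, n=2, tol=1e-9):
    """Bisection on the strength difference; assumes a sign change in [lo, hi]."""
    f = lambda p: eff_strength(a1, k1, p, n) - eff_strength(a2, k2, p, n)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# device: alpha1 = k1 = 1; query: alpha2 = 2, k2 = 2.
# The analytical crossing for these values is p_sc = sqrt(2).
psc = find_crossing(1.0, 1.0, 2.0, 2.0, lo=1.0, hi=2.0)
print(psc)
```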
For $n_1=n_2$, the value of $k_2$ in units of $k_1$ as a function of $p_{sc}$ (also in units of $k_1$) is given by: \begin{eqnarray} k_2&=&\sqrt[n]{p_{sc}^n\left(\frac{\bar{\alpha}_2}{\bar{\alpha}_1}-1\right)+\frac{\bar{\alpha}_2}{\bar{\alpha}_1}},\label{eq:k2}\\ p_{sc}&=&\sqrt[n]{\left(k_2^n-\frac{\bar{\alpha}_2}{\bar{\alpha}_1}\right)\left(\frac{\bar{\alpha}_1}{\bar{\alpha}_2-\bar{\alpha}_1}\right)}.\label{eq:psc} \end{eqnarray} In figure \ref{fig:fluors} we show a few examples of the results one might expect for different values of $k_2$. \begin{figure}[h] \begin{center} \begin{tabular}{c} \ig{0.55}{fluor1.eps} \\\ig{0.55}{fluor2.eps} \\\ig{0.55}{fluor3.eps} \end{tabular} \caption{In all plots $\bar{\alpha}_1$=$k_1$=1 and $\bar{\alpha}_2$=2. In the upper plot $k_2$=2, in the central plot $k_2$=3 and in the bottom plot $k_2$=4. The corresponding values of $p_{sc}$ are $\sqrt{2}$, $\sqrt{7}$ and $\sqrt{14}$, respectively.}\label{fig:fluors} \end{center} \end{figure} \begin{figure} \begin{center} \ig{0.65}{tempresp.eps} \caption{Response time of the system for different values of $\bar{\alpha}_1$ and $\bar{\alpha}_2$. For values of the $\bar{\alpha}^\prime$s for which the system presents a bifurcation, the response time can be large because the system spends time near its unstable solution.}\label{fig:temp} \end{center} \end{figure} The construction of the calibrator device, as we present it, would then be the following. First, one chooses a very weak promoter with a small Michaelis constant to act as the {\it device} promoter in the calibrator. The second step is to define a standard: one chooses a known promoter, clones the calibrator device with it as the {\it query} promoter, and measures the fluorescence intensity of this standard promoter at high $p_s$ concentrations. This fluorescence intensity is the standard one, to which other promoters can be compared. 
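The relation between $k_2$ and $p_{sc}$ can be checked numerically; the following sketch evaluates Eq. (\ref{eq:psc}) for the parameter values used in Fig. \ref{fig:fluors}.

```python
def p_sc_of_k2(alpha_ratio, k2, n=2):
    """Critical signaling-protein concentration (in units of k1) at which
    the effective strengths of the device and query promoters coincide,
    following Eq. (psc) with alpha_ratio = alpha2/alpha1 > 1."""
    return ((k2**n - alpha_ratio) / (alpha_ratio - 1.0)) ** (1.0 / n)

# Values used in the figure: alpha1 = k1 = 1, alpha2 = 2, n = 2.
for k2 in (2.0, 3.0, 4.0):
    print(k2, p_sc_of_k2(2.0, k2))  # -> sqrt(2), sqrt(7), sqrt(14)
```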
Now, performing the experiment with another promoter acting as the {\it query} promoter, one obtains another value of the luminosity that can be compared with the standard one. The higher or lower this luminosity is with respect to the standard, the stronger or weaker the promoter is compared with the standard, so one can establish the value of $\bar{\alpha}_2$. Knowing $\bar{\alpha}_2$, one can perform the same measurement at different $p_s$ concentrations in order to establish the critical value of $p_s$ at which the {\it query} fluorescence becomes higher than the {\it device} one. Knowing the value of $p_{sc}$, it is possible to establish the value of $k_2$ by means of Eq. (\ref{eq:k2}) (assuming both promoters have the same $n$). Having established the ideal parameters for the {\it device} promoter (weak strength and small Michaelis constant) and set $k_x=k_y$ and $\beta_x=\beta_y$, the last important factor is the response time of the system. In figure \ref{fig:temp} we show plots of $t_f$, the time the system needs to reach its steady state\footnote{The system actually approaches its steady state asymptotically without ever reaching it. What we have calculated is the time needed for the sum of the absolute values of the derivatives of $X$ and $Y$ to reach a small value (0.01).}, for different values of $\bar{\alpha}_1$. One observes that the response time of the system has a peak, with a maximum around 30$\beta_x^{-1}$ when the effective strengths of both promoters are equal, and that it then settles to a rather stable value close to 7$\beta_x^{-1}$. For a realistic value of $\beta_x$ such as 0.069 min$^{-1}$, the peak value of $t_f$ is about 7 hours, while for most of the measurements (the calibrator at different $p_s$ concentrations) this time should be around two hours. 
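The quoted times follow from a simple unit conversion of the dimensionless response times:

```python
beta_x = 0.069            # protein degradation/dilution rate, in min^-1
t_peak = 30.0 / beta_x    # peak response time (in minutes), at the bifurcation
t_typical = 7.0 / beta_x  # typical response time away from the peak

print(f"peak:    {t_peak / 60.0:.1f} h")     # about 7.2 h
print(f"typical: {t_typical / 60.0:.1f} h")  # about 1.7 h
```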
\section{Conclusions} In the present study we have proposed a biological device that works as a promoter calibrator, in which the strength of a collection of {\it query} promoters can be measured against the strength of a {\it device} promoter. Some of the key features of the proposed design are its single-cell character, high modularity and easy construction: a single molecular cloning step permits changing the promoter to be calibrated. The designed performance of the proposed biological device has been demonstrated by means of an effective mathematical model. The sensitivity analysis of the model shows that there is a sensitive relation between the relative promoter strengths and the final steady-state fluorescences measured by the system. Furthermore, a response-time analysis shows that the device can produce a large difference in the repressor protein concentrations, and in turn in the corresponding fluorescences, in approximately two hours. Finally, our promoter calibrator principle may lead to an improvement in the modeling and characterization of systems in Synthetic Biology, which frequently rely on arbitrarily characterized, or even non-characterized, promoters. \section*{Acknowledgements} This work has been funded by MICINN TIN2009-12359 project ArtBioCom, the Spanish Ministerio de Educación y Ciencia through the program Juan de la Cierva, the FPI grant program of the Generalitat Valenciana and the Beca de recerca predoctoral from the Universitat Rovira i Virgili. The authors would also like to thank the Valencia iGEM 2007 team and Enrique O'Connor for useful discussions. \bibliographystyle{spbasic}
\section{\textbf{New Model of Boundary Coupled Neuron Network}} In this paper, we present a new model of a boundary coupled neuron network in terms of the following system of partly diffusive Hindmarsh-Rose equations, \begin{equation} \label{cHR} \begin{split} \frac{\partial u}{\partial t} & = d \Delta u + au^2 - bu^3 + v - w + J, \\ \frac{\partial v}{\partial t} & = \alpha - v - \beta u^2, \\ \frac{\partial w}{\partial t} & = q (u - c) - rw, \\ \frac{\partial u_i}{\partial t} & = d \Delta u_i + au_i^2 - bu_i^3 + v_i - w_i + J, \quad 1 \leq i \leq m, \\ \frac{\partial v_i}{\partial t} & = \alpha - v_i - \beta u_i^2, \quad 1 \leq i \leq m, \\ \frac{\partial w_i}{\partial t} & = q (u_i - c) - rw_i, \quad 1 \leq i \leq m, \end{split} \end{equation} for $t > 0,\; x \in \Omega \subset \mathbb{R}^{n}$ ($n \leq 3$), where $\Omega$ is a bounded domain whose boundary $$ \partial \Omega = \Gamma = \bigcup_{i = 0}^m \Gamma_i $$ is locally Lipschitz continuous, and where the boundary pieces $\Gamma_i, i = 0, 1, \cdots, m,$ are measurable and mutually non-overlapping. Here $(u_i, v_i, w_i), \,i = 1, \cdots, m,$ are the state variables of the \emph{neighbor neurons}, denoted by $\mathcal{N}_i, i = 1, \cdots, m$, which are coupled with the \emph{central neuron}, denoted by $\mathcal{N}_c$, whose state variables are $(u, v, w)$. In the system \eqref{cHR}, the variables $u(t, x)$ and $u_i(t,x)$ refer to the membrane electrical potential of a neuron cell; the variables $v(t, x)$ and $v_i(t, x)$, called the spiking variables, represent the transport rate of sodium and potassium ions through the fast ion channels; and the variables $w(t, x)$ and $w_i(t, x)$, called the bursting variables, represent the transport rate across the neuron cell membrane through the slow channels of calcium and other ions. 
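To give a feeling for the dynamics hidden in system \eqref{cHR}, the following sketch integrates its spatially homogeneous reduction (diffusion and coupling dropped) for a single neuron. The parameter values $a=3$, $b=1$, $\alpha=1$, $\beta=5$, $q=0.024$, $r=0.006$, $c=-1.6$, $J=3$ are the classical Hindmarsh-Rose choices and are an assumption for illustration only; with them the membrane potential $u$ exhibits the bounded bursting oscillations the model is known for.

```python
import numpy as np

# Spatially homogeneous reduction of system (cHR), single neuron.
# Classical Hindmarsh-Rose parameter values (an illustrative assumption).
a, b, alpha, beta = 3.0, 1.0, 1.0, 5.0
q, r, c, J = 0.024, 0.006, -1.6, 3.0

def rhs(s):
    u, v, w = s
    return np.array([a * u**2 - b * u**3 + v - w + J,
                     alpha - v - beta * u**2,
                     q * (u - c) - r * w])

def integrate(s0, dt=0.01, steps=100_000):
    """Classical fixed-step RK4; returns the trajectory of u."""
    s = np.array(s0, dtype=float)
    us = np.empty(steps)
    for i in range(steps):
        k1 = rhs(s)
        k2 = rhs(s + 0.5 * dt * k1)
        k3 = rhs(s + 0.5 * dt * k2)
        k4 = rhs(s + dt * k3)
        s += (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        us[i] = s[0]
    return us

u = integrate([0.0, 0.0, 0.0])
print(u.min(), u.max())  # bounded trajectory with sustained spiking
```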
The coupling boundary conditions affiliated with the system \eqref{cHR} are given by \begin{equation} \label{nbc} \begin{split} &\frac{\partial u}{\partial \nu} (t, x) = 0, \;\; \text{for} \; x \in \Gamma_0, \;\; \frac{\partial u}{\partial \nu} (t, x) + pu = pu_i, \;\, \text{for} \; x \in \Gamma_i, \; 1 \leq i \leq m; \\ &\frac{\partial u_i}{\partial \nu} (t, x) = 0, \;\, \text{for} \; x \in \Gamma \backslash \Gamma_i, \;\, \frac{\partial u_i}{\partial \nu} (t, x) + p u_i = pu, \;\, \text{for} \; x \in \Gamma_i, \; 1 \leq i \leq m, \end{split} \end{equation} where $\partial/\partial \nu$ stands for the outward normal derivative, $p > 0$ is the coupling strength constant, and the switch functions of the neurons $\mathcal{N}_i, 1 \leq i \leq m$, are $$ \xi_i (x) = \begin{cases} 1, \vspace{5pt} & \text{if \, $x \in \Gamma_i$}; \\[3pt] 0, \vspace{5pt} & \text{if \, $x \in \Gamma \backslash \Gamma_i$}. \end{cases} $$ The initial conditions to be specified are denoted by \begin{equation} \label{inc} \begin{split} &u(0, x) = u^0 (x), \;\quad v(0, x) = v^0 (x), \quad \, w(0, x) = w^0 (x), \quad \; x \in \Omega, \\[3pt] &u_i(0, x) = u_i^0 (x), \quad v_i(0, x) = v_i^0 (x), \quad w_i (0, x) = w_i^0 (x), \quad x \in \Omega, \end{split} \end{equation} for $1 \leq i \leq m$. All the parameters in the system \eqref{cHR}, including the input electrical current $J$, are positive constants, except the reference value of the membrane potential of the neuron cells, $c = u_R \in \mathbb{R}$. In this study of the neuron network \eqref{cHR}-\eqref{inc}, we shall work with the following Hilbert spaces for the subsystem of three equations for each involved single neuron: $$ H = L^2 (\Omega, \mathbb{R}^3), \quad \text{and} \quad E = H^1 (\Omega) \times L^2 (\Omega, \mathbb{R}^2). $$ Also define the product spaces $$ \mathbb{H} = [L^2 (\Omega, \mathbb{R}^3)]^{1+m} \quad \text{and} \quad \mathbb{E} = [H^1 (\Omega) \times L^2 (\Omega, \mathbb{R}^2)]^{1+m} $$ for the entire system \eqref{cHR}-\eqref{inc}. 
The norm and inner-product of the Hilbert space $\mathbb{H}, \, H$ or $L^2(\Omega)$ will be denoted by $\| \, \cdot \, \|$ and $\inpt{\,\cdot , \cdot\,}$, respectively. The norm of $\mathbb{E}$ or $E$ will be denoted by $\| \, \cdot \, \|_E$. We use $| \, \cdot \, |$ to denote a vector norm or the measure of a set in $\mathbb{R}^n$. The initial-boundary value problem \eqref{cHR}-\eqref{inc} can be formulated as an initial value problem of the evolutionary equation: \begin{equation} \label{pb} \begin{split} \frac{\partial}{\partial t} \begin{pmatrix} g\\ g_i \end{pmatrix} = \begin{pmatrix} A & 0 \\ 0 & A_i \end{pmatrix} & \begin{pmatrix} g\\ g_i \end{pmatrix} + \begin{pmatrix} f (g) \\ f (g_i) \end{pmatrix}, \;\; 1 \leq i \leq m, \;\; t > 0, \\[4pt] &g(0) = g^0 \in H, \quad g_i (0) = g_i^0 \in H. \end{split} \end{equation} Here $g(t) = \text{col}\, (u(t, \cdot), v(t, \cdot ), w(t, \cdot))$ and $g_i(t) = \text{col}\, (u_i(t, \cdot), v_i (t, \cdot ), w_i(t, \cdot))$. The initial data functions are $g^0 = \text{col}\,(u^0, v^0, w^0)$ and $g_i^0 = \text{col}\, (u_i^0, v_i^0, w_i^0),$ for $1 \leq i \leq m$. The nonpositive, self-adjoint and diagonal operator $\mathcal{A} = \text{diag}\, (A, A_1, \cdots, A_m)$ is defined by the block operators \begin{equation} \label{opA} A = A_i = \begin{bmatrix} d \Delta \quad & 0 \quad & 0 \\[3pt] 0 \quad & - I \quad & 0 \\[3pt] 0 \quad & 0 \quad & - r I \end{bmatrix}, \quad 1 \leq i \leq m, \end{equation} with the domain $$ D(\mathcal{A}) = \{ \text{col}\, (h, h_1, \cdots, h_m) \in [H^2(\Omega) \times L^2 (\Omega, \mathbb{R}^2)]^{1+m}: \text{\eqref{nbc} satisfied}\}. 
$$ Due to the continuous Sobolev imbedding $H^{1}(\Omega) \hookrightarrow L^6(\Omega)$ for space dimension $n \leq 3$ and the H\"{o}lder inequality, the nonlinear mapping \begin{equation} \label{opf} \begin{pmatrix} f(g) \\[3pt] f(g_i) \end{pmatrix} = \begin{pmatrix} au^2 - bu^3 + v - w + J \\[3pt] \alpha - \beta u^2 \\[3pt] q (u - c) \\[3pt] au_i^2 - bu_i^3 + v_i - w_i + J \\[3pt] \alpha - \beta u_i^2 \\[3pt] q (u_i - c) \end{pmatrix} : E \times E \longrightarrow H \times H \end{equation} is locally Lipschitz continuous for $1 \leq i \leq m$. We shall consider the weak solution of this initial value problem \eqref{pb}, cf. \cite[Section XV.3]{CV} and the corresponding definition presented in \cite{PY, PYS}. The following proposition can be proved by the Galerkin approximation method. \begin{proposition} \label{pps} For any given initial state $(g^0, g_1^0, \cdots , g_m^0) \in \mathbb{H}$, there exists a unique local weak solution $(g(t, g^0), g_1 (t, g_1^0), \cdots, g_m (t, g_m^0)), \, t \in [0, \tau]$, for some $\tau > 0$, of the initial value problem \eqref{pb} formulated from the problem \eqref{cHR}-\eqref{inc}. The weak solution continuously depends on the initial data and satisfies \begin{equation} \label{soln} (g, g_1, \cdots, g_m) \in C([0, \tau]; \,\mathbb{H}) \cap C^1 ((0, \tau); \,\mathbb{H}) \cap L^2 ([0, \tau]; \,\mathbb{E}). \end{equation} If the initial state is in $\mathbb{E}$, then the solution is a strong solution with the regularity \begin{equation} \label{ss} (g, g_1, \cdots, g_m) \in C([0, \tau]; \,\mathbb{E}) \cap C^1 ((0, \tau); \,\mathbb{E}) \cap L^2 ([0, \tau]; \,D(A) \times D(A_i)^m). \end{equation} \end{proposition} For the basics of infinite-dimensional dynamical systems (also called semiflows) generated by parabolic partial differential equations, we refer to \cite{CV, SY, Tm}. \begin{definition} \label{Dabsb} Let $\{S(t)\}_{t \geq 0}$ be a semiflow on a Banach space $\mathscr{X}$. 
A bounded set $B^*$ of $\mathscr{X}$ is called an absorbing set of this semiflow if, for any given bounded set $B \subset \mathscr{X}$, there exists a finite time $T_B \geq 0$ depending on $B$ such that $S(t)B \subset B^*$ permanently for all $t \geq T_B$. \end{definition} \section{\textbf{Global Existence of Solutions and Absorbing Semiflow}} First we prove the global existence in time of weak solutions of the initial value problem \eqref{pb} for the boundary coupled partly diffusive Hindmarsh-Rose equations. \begin{theorem} \label{Tm} For any given initial state $(g^0, g_1^0, \cdots , g_m^0) \in \mathbb{H}$, there exists a unique global weak solution in time, $(g(t), g_1 (t), \cdots, g_m(t)), \, t \in [0, \infty)$, of the initial value problem \eqref{pb} formulated from the original initial-boundary value problem \eqref{cHR}-\eqref{inc}. \end{theorem} \begin{proof} Summing up the $L^2$ inner-products of the $u$-equation with $C_1 u(t)$ and of the $u_i$-equations with $C_1 u_i(t)$ for $1 \leq i \leq m$, where the constant $C_1 > 0$ is to be chosen later, we get \begin{equation*} \begin{split} &\frac{C_1}{2} \frac{d}{dt} \left(\|u \|^2 + \sum_{i = 1}^m \|u_i\|^2 \right) + C_1 d \left(\| \nabla u \|^2 + \sum_{i=1}^m \|\nabla u_i\|^2 \right) \\ = &\, C_1 \int_\Omega \left[(au^3 -bu^4 + u v - u w +J u) + \sum_{i=1}^m (au_i^3 -bu_i^4 + u_i v_i - u_i w_i +Ju_i)\right] dx \\ + &\,d C_1\, \sum_{i=1}^m \int_{\Gamma_i} \left(p(u_i - u)u + p(u - u_i) u_i \right) \, dx \\ = &\, \int_\Omega C_1 (au^3 -bu^4 + u v - u w +J u)\, dx \\ + &\, \sum_{i=1}^m \int_\Omega C_1 (au_i^3 -bu_i^4 + u_i v_i - u_i w_i +Ju_i) \, dx - d C_1p \, \sum_{i=1}^m \int_{\Gamma_i} ( u - u_i)^2\, dx \\ \leq &\, C_1 \int_\Omega \left[(au^3 -bu^4 + u v - u w +J u) + \sum_{i=1}^m (au_i^3 -bu_i^4 + u_i v_i - u_i w_i +Ju_i)\right] dx, \end{split} \end{equation*} by the coupling boundary condition \eqref{nbc}. 
Then, summing up the $L^2$ inner-products of the $v$-equation with $v (t)$ and of the $v_i$-equations with $v_i(t)$ for $1 \leq i \leq m$, we obtain \begin{equation*} \begin{split} &\frac{1}{2} \frac{d}{dt} (\|v \|^2 + \sum_{i=1}^m \| v_i\|^2) = \int_\Omega \left[(\alpha v - \beta u^2 v - v^2) + \sum_{i=1}^m (\alpha v_i - \beta u_i^2 v_i - v_i^2) \right]dx \\ \leq &\int_\Omega \left[\alpha v +\frac{1}{2} (\beta^2 u^4 + v^2) - v^2 + \sum_{i=1}^m (\alpha v_i +\frac{1}{2} (\beta^2 u_i^4 + v_i^2) - v_i^2)\right] dx \\ \leq &\int_\Omega \left[(1 + m)\alpha^2 +\frac{1}{2} \beta^2 (u^4 + \sum_{i=1}^m u_i^4) - \frac{3}{8} (v^2 + \sum_{i=1}^m v_i^2) \right] dx, \end{split} \end{equation*} and similarly for the $w$-equation and the $w_i$-equations, $1 \leq i \leq m$, we have \begin{equation*} \begin{split} &\frac{1}{2} \frac{d}{dt} (\|w \|^2 + \sum_{i=1}^m \| w_i \|^2) = \int_\Omega \left[(q (u - c)w - rw^2) + \sum_{i=1}^m (q (u_i - c)w_i - rw_i^2) \right] dx \\ \leq & \int_\Omega \left[\frac{q^2}{2r} (u - c)^2 + \frac{1}{2} r w^2 - r w^2 + \sum_{i=1}^m \left(\frac{q^2}{2r} (u_i - c)^2 + \frac{1}{2} r w_i^2 - r w_i^2\right)\right] dx \\ \leq & \int_\Omega \left[\frac{q^2}{r} \left(u^2 + \sum_{i=1}^m u_i^2 + (1 + m)c^2\right) - \frac{r}{2} \left(w^2 + \sum_{i=1}^m w_i^2\right)\right] dx. \end{split} \end{equation*} To treat the nonlinear integral terms on the right-hand side of the first inequality above, we choose the positive constant to be $C_1 = \frac{1}{b} (\beta^2 + 4)$. Then \begin{equation} \label{C1u} \begin{split} &\int_\Omega (- C_1 b u^4)\, dx + \int_\Omega (\beta^2 u^4)\, dx \leq \int_\Omega (-4 u^4)\, dx, \\ &\int_\Omega (- C_1 b u_i^4)\, dx + \int_\Omega (\beta^2 u_i^4)\, dx \leq \int_\Omega (-4 u_i^4)\, dx, \quad i= 1, \cdots, m. 
\end{split} \end{equation} Using Young's inequality in an appropriate way, we deduce that \begin{equation} \label{C3u} \begin{split} &\int_\Omega C_1 au^3\, dx \leq \frac{3}{4} \int_\Omega u^4\, dx + \frac{1}{4}\int_\Omega (C_1 a)^4 \, dx \leq \int_\Omega u^4\, dx + (C_1 a)^4 |\Omega|, \\ &\int_\Omega C_1 au_i^3\, dx \leq \frac{3}{4} \int_\Omega u_i^4\, dx + \frac{1}{4}\int_\Omega (C_1 a)^4 \, dx \leq \int_\Omega u_i^4\, dx + (C_1 a)^4 |\Omega|, \end{split} \end{equation} for $i = 1, \cdots, m$. Moreover, we have \begin{equation} \label{uvw} \begin{split} &C_1 \int_\Omega \left( (uv -uw + Ju) + \sum_{i=1}^m (u_i v_i - u_i w_i + Ju_i)\right) dx \\[2pt] \leq &\, \int_\Omega \left(2(C_1 u)^2 + \frac{1}{8} v^2 + \frac{(C_1 u)^2}{r} + \frac{1}{4} r w^2 + C_1 u^2 + C_1J^2 \right) dx \\ + &\, \int_\Omega \sum_{i=1}^m \left(2(C_1 u_i)^2 + \frac{1}{8} v_i^2 + \frac{(C_1 u_i)^2}{r} + \frac{1}{4} r w_i^2 + C_1 u_i^2 + C_1J^2 \right) dx, \end{split} \end{equation} where on the right-hand side of the inequality \eqref{uvw} we can further treat the terms involving $u^2$ and $u_i^2$ as follows, \begin{equation} \label{ur} \begin{split} &\int_\Omega \left(2(C_1 u)^2 + \frac{(C_1 u)^2}{r} + C_1 u^2 + \sum_{i=1}^m \left[2(C_1 u_i)^2 + \frac{(C_1 u_i)^2}{r} + C_1 u_i^2\right] \right) dx \\ \leq &\, \int_\Omega \left(u^4 + \sum_{i=1}^m u_i^4 \right) dx + (1 + m)\left[C_1^2 \left(2 +\frac{1}{r}\right) + C_1\right]^2 |\Omega |. \end{split} \end{equation} Besides, we have \begin{equation} \label{uq} \int_\Omega \frac{1}{r} q^2 \left(u^2 + \sum_{i=1}^m u_i^2\right) dx \leq \int_\Omega \left(u^4 + \sum_{i=1}^m u_i^4\right) dx + \frac{q^4}{r^2}(1 + m) |\Omega|. 
\end{equation} Substitute the estimates \eqref{C1u} -\eqref{uq} into the first three differential inequalities in this proof and then sum them up to obtain \begin{equation} \label{g2} \begin{split} &\frac{1}{2} \frac{d}{dt} \left[C_1 \left(\|u\|^2 + \sum_{i = 1}^m \|u_i\|^2\right) + \left(\|v\|^2 + \sum_{i=1}^m \| v_i\|^2\right) + \left(\|w\|^2 + \sum_{i=1}^m \| w_i \|^2\right) \right] \\ &\; + C_1 d\, \left(\|\nabla u \|^2 + \sum_{i=1}^m \|\nabla u_i \|^2\right) \\[2pt] \leq &\, C_1 \int_\Omega \left[(au^3 -bu^4 + u v - u w +J u) + \sum_{i=1}^m (au_i^3 -bu_i^4 + u_i v_i - u_i w_i +Ju_i)\right] dx \\ &+ \int_\Omega \left[(1 + m)\alpha^2 +\frac{1}{2} \beta^2 (u^4 + \sum_{i=1}^m u_i^4) - \frac{3}{8} (v^2 + \sum_{i=1}^m v_i^2) \right] dx \\ &+ \int_\Omega \left[\frac{q^2}{r} \left(u^2 + \sum_{i=1}^m u_i^2 + (1 + m)c^2\right) - \frac{r}{2} \left(w^2 + \sum_{i=1}^m w_i^2\right)\right] dx \\ \leq & \int_\Omega (3 - 4)\left(u^4 + \sum_{i=1}^m u_i^4 \right) dx + \int_\Omega \left(\frac{1}{8} - \frac{3}{8}\right) \left(v^2 + \sum_{i=1}^m v_i^2\right) dx \\ &+ \int_\Omega \left(\frac{1}{4} - \frac{1}{2} \right) r \left(w^2 + \sum_{i=1}^m w_i^2 \right) dx \\ &+ \, (1 + m)|\Omega | \left( (C_1 a)^4 + C_1 J^2 + \left[C_1^2 \left(2 +\frac{1}{r}\right) + C_1\right]^2 + 2\alpha^2 + \frac{q^2 c^2}{r} + \frac{q^4}{r^2} \right) \\ = &\, - \int_\Omega \left(\left[u^4 + \sum_{i=1}^m u_i^4 \right] + \frac{1}{4} \left[v^2 + \sum_{i=1}^m v_i^2\right] + \frac{r}{4} \left[w^2 + \sum_{i=1}^m w_i^2 \right] \right) dx + C_2 (1 + m)|\Omega |, \end{split} \end{equation} where $C_2 = 2(C_1 a)^4 + 2C_1 J^2 + 2\left[C_1^2 \left(2 +\frac{1}{r}\right) + C_1\right]^2 + 4\alpha^2 + \frac{2q^2 c^2}{r} + \frac{2q^4}{r^2}$ is a constant. 
From \eqref{g2} it follows that \begin{equation} \label{E1} \begin{split} &\frac{d}{dt} \left[C_1 \left(\|u\|^2 + \sum_{i = 1}^m \|u_i\|^2\right) + \left(\|v\|^2 + \sum_{i=1}^m \| v_i\|^2\right) + \left(\|w\|^2 + \sum_{i=1}^m \| w_i \|^2\right) \right] \\ + \,2& \int_\Omega \left(\left[u^4 + \sum_{i=1}^m u_i^4 \right] + \frac{1}{4} \left[v^2 + \sum_{i=1}^m v_i^2\right] + \frac{r}{4} \left[w^2 + \sum_{i=1}^m w_i^2 \right] \right) dx \leq C_2 (1 + m)|\Omega|, \end{split} \end{equation} for $t \in I_{max} = [0, T_{max})$, the maximal time interval of solution existence. Note that in the first part of the integral term of \eqref{E1} we have $$ \frac{1}{4} \left(C_1 u^2 - \frac{C_1^2}{16}\right) \leq u^4 \quad \text{and} \quad \frac{1}{4} \left(C_1 u_i^2 - \frac{C_1^2}{16}\right) \leq u_i^4, \quad 1 \leq i \leq m. $$ Then \eqref{E1} yields the following differential inequality \begin{equation} \label{E2} \begin{split} &\frac{d}{dt} \left[C_1 \left(\|u\|^2 + \sum_{i = 1}^m \|u_i\|^2\right) + \left(\|v\|^2 + \sum_{i=1}^m \| v_i\|^2\right) + \left(\|w\|^2 + \sum_{i=1}^m \| w_i \|^2\right) \right] \\ &+ \, r^* \left[C_1 \left(\|u\|^2 + \sum_{i = 1}^m \|u_i\|^2\right) + \left(\|v\|^2 + \sum_{i=1}^m \| v_i\|^2\right) + \left(\|w\|^2 + \sum_{i=1}^m \| w_i \|^2\right) \right] \\ &\leq \frac{d}{dt} \left[C_1 \left(\|u\|^2 + \sum_{i = 1}^m \|u_i\|^2\right) + \left(\|v\|^2 + \sum_{i=1}^m \| v_i\|^2\right) + \left(\|w\|^2 + \sum_{i=1}^m \| w_i \|^2\right) \right] \\ &+ \, \frac{1}{2} \int_\Omega \left(C_1 \left[u^2 + \sum_{i=1}^m u_i^2 \right] + \left[v^2 + \sum_{i=1}^m v_i^2\right] + r \left[w^2 + \sum_{i=1}^m w_i^2 \right] \right) dx \\ &\leq \left(C_2 + \frac{C_1^2}{32}\right)(1 + m) |\Omega |, \end{split} \end{equation} where $r^* = \frac{1}{2} \min \{1, r\}$, so that $r^* C_1 \leq \frac{1}{2} C_1$ and $r^* \leq \frac{r}{2}$. Apply the Gronwall inequality to \eqref{E2}. 
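For the reader's convenience, the elementary form of the Gronwall inequality used here is the following (a sketch of the step leading to the bounding estimate):

```latex
% Elementary Gronwall lemma: if y'(t) + r^* y(t) \le K for t \ge 0, then
\begin{equation*}
  y(t) \;\le\; e^{-r^* t}\, y(0) \;+\; \frac{K}{r^*}\left(1 - e^{-r^* t}\right)
  \;\le\; e^{-r^* t}\, y(0) \;+\; \frac{K}{r^*}, \qquad t \ge 0.
\end{equation*}
% Here it is applied with
% y(t) = C_1(\|u\|^2 + \sum_{i=1}^m \|u_i\|^2) + (\|v\|^2 + \sum_{i=1}^m \|v_i\|^2)
%        + (\|w\|^2 + \sum_{i=1}^m \|w_i\|^2)
% and K = (C_2 + C_1^2/32)(1+m)|\Omega|, so that K/r^* = M|\Omega|.
```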
Then we obtain the following bounding estimate of the weak solutions: \begin{equation} \label{dse} \begin{split} &\|g(t, g^0)\|^2 + \sum_{i=1}^m \|g_i (t, g_i^0)\|^2 \\ \leq &\, \frac{\max \{C_1, 1\}}{\min \{C_1, 1\}}e^{- r^* t} \left(\|g^0\|^2 + \sum_{i=1}^m \|g_i^0\|^2\right) + \frac{M}{\min \{C_1, 1\}} |\Omega | \\ \leq &\, \frac{\max \{C_1, 1\}}{\min \{C_1, 1\}} \left(\|g^0\|^2 + \sum_{i=1}^m \|g_i^0\|^2\right) + \frac{M}{\min \{C_1, 1\}} |\Omega | \end{split} \end{equation} for $t \in I_{max} = [0, T_{max})$, where \begin{equation} \label{M} M = \frac{1 + m}{r^*}\left(C_2 + \frac{C_1^2}{32}\right). \end{equation} The estimate \eqref{dse} shows that the weak solution $g(t, x)$ never blows up in finite time, since it is uniformly bounded on the maximal existence interval; hence $T_{max} = \infty$. Therefore, for any initial data in $\mathbb{H}$, the unique weak solution of the initial value problem \eqref{pb} of the boundary coupled neuron network \eqref{cHR}-\eqref{inc} exists in $\mathbb{H}$ globally in time. \end{proof} The global existence and uniqueness of the weak solutions and their continuous dependence on the initial data enable us to define the solution semiflow $\{S(t): \mathbb{H} \to \mathbb{H}\}_{t \geq 0}$ of the boundary coupled Hindmarsh-Rose neuron network system \eqref{cHR}-\eqref{inc} on the space $\mathbb{H}$ as follows: \begin{equation} \label{HRS} S(t): (g^0, g_1^0, \cdots, g_m^0) \longmapsto (g(t, g^0), g_1(t, g_1^0), \cdots, g_m(t, g_m^0)), \quad t \geq 0. \end{equation} We call this semiflow $\{S(t)\}_{t \geq 0}$ the \emph{boundary coupling Hindmarsh-Rose semiflow}. \begin{theorem} \label{Hab} There exists an absorbing set for the boundary coupling Hindmarsh-Rose semiflow $\{S(t)\}_{t \geq 0}$ in the space $\mathbb{H}$, namely the bounded ball \begin{equation} \label{abs} B^* = \{ h \in \mathbb{H}: \| h \|^2 \leq Q\} \end{equation} where $Q = \frac{M |\Omega |}{\min \{C_1, 1\}} + 1$. 
\end{theorem} \begin{proof} This is the consequence of the uniform estimate \eqref{dse} in Theorem \ref{Tm} because \begin{equation} \label{lsp} \limsup_{t \to \infty} \, \left(\|g(t)\|^2 + \sum_{i=1}^m \|g_i(t)\|^2\right) < Q = \frac{M |\Omega |}{\min \{C_1, 1\}} + 1 \end{equation} for all weak solutions of \eqref{pb} with any initial data in $\mathbb{H}$. Moreover, for any given bounded set $B = \{h \in \mathbb{H}: \|h \|^2 \leq \rho \}$ in $\mathbb{H}$, there exists a finite time \begin{equation} \label{T0B} T_0 (B) = \frac{1}{r^*} \log^+ \left(\rho \, \frac{\max \{C_1, 1\}}{\min \{C_1, 1\}}\right) \end{equation} such that all the solution trajectories starting from the set $B$ permanently enter the bounded ball $B^*$ shown in \eqref{abs} for $t \geq T_0(B)$. \end{proof} \section{\textbf{Synchronization of the Boundary Coupled Neuron Network}} Synchronization of neuron ensembles, of complex neuron networks, and of artificial neural networks is one of the central topics in neuroscience and in the theory of artificial intelligence. We introduce a new concept of synchronization dynamics for a neuron network. \begin{definition} \label{DaD} For the dynamical system generated by a model differential equation such as \eqref{pb} of multiple neurons with whatever type of coupling, define the \emph{asynchronous degree} in a state space $\mathscr{X}$ to be $$ deg_s (\mathscr{X})= \sum_{j} \sum_{k} \, \sup_{g_j^0, \, g_k^0 \in \mathscr{X}} \, \left\{\limsup_{t \to \infty} \, \|g_j (t) - g_k(t)\|_{\mathscr{X}}\right\}, $$ where $g_j(t)$ and $g_k(t)$ are any two solutions of the model differential equation with the initial states $g_j^0$ and $g_k^0$, respectively. Then the coupled neuron network is said to be asymptotically synchronized in the space $\mathscr{X}$ if $deg_s (\mathscr{X}) = 0$. 
\end{definition} In this section, we shall prove the main result of this work on the asymptotic synchronization of the boundary coupled Hindmarsh-Rose neuron network described by \eqref{cHR}-\eqref{inc} in the space $H$. This result provides a quantitative threshold for the coupling strength and the stimulation signals to reach the asymptotic synchronization. To address this synchronization problem mathematically for the neuron network specified in Section 1, denote by $U_i(t) = u(t) - u_i (t), V_i(t) = v(t) - v_i(t), W_i(t) = w(t) - w_i(t)$, for $i = 1, \cdots, m$. Then for any given initial states $g^0$ and $g_1^0, \cdots, g_m^0$ in the space $H$, the difference between the solutions associated with the neuron $\mathcal{N}_c$ and the neuron $\mathcal{N}_i$ is $$ g (t, g^0) - g_i (t, g_i^0) = \text{col}\, (U_i(t), V_i(t), W_i(t)), \quad t \geq 0. $$ Subtracting the three equations of the $i$-th neuron from the corresponding equations of the central neuron in \eqref{cHR}, we obtain the differencing Hindmarsh-Rose equations as follows. For $i = 1, \cdots, m$, \begin{equation} \label{dHR} \begin{split} \frac{\partial U_i}{\partial t} & = d \Delta U_i + a(u + u_i)U_i - b(u^2 + u u_i + u_i^2)U_i + V_i - W_i, \\ \frac{\partial V_i}{\partial t} & = - V_i - \beta (u + u_i)U_i, \\ \frac{\partial W_i}{\partial t} & = q U_i - r W_i. \end{split} \end{equation} Here is the main result on the synchronization of the boundary coupled Hindmarsh-Rose neuron network. 
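Before stating the main theorem, the synchronization mechanism can be illustrated numerically. The sketch below is a space-clamped (ODE) caricature of two identical Hindmarsh-Rose neurons with diffusive coupling in the membrane potential; the parameter values are standard textbook choices and the pointwise coupling term $k(u_{\text{other}} - u)$ is only a stand-in for the boundary coupling of the PDE model in this paper, not a discretization of it.

```python
# Illustrative ODE caricature (NOT the boundary-coupled PDE of this paper):
# two identical Hindmarsh-Rose neurons, diffusively coupled through the
# membrane potential u.  Parameters are standard textbook values.

def hr_rhs(state, u_other, k, I=3.25):
    """Right-hand side of one Hindmarsh-Rose neuron with input current I
    and membrane-potential coupling of strength k."""
    u, v, w = state
    du = v + 3.0 * u * u - u ** 3 - w + I + k * (u_other - u)
    dv = 1.0 - 5.0 * u * u - v
    dw = 0.005 * (4.0 * (u + 1.6) - w)
    return du, dv, dw

def euler_step(state, u_other, k, dt):
    d = hr_rhs(state, u_other, k)
    return tuple(x + dt * dx for x, dx in zip(state, d))

def sync_error(k, T=1500.0, dt=0.005):
    """Integrate two coupled neurons from different initial states and
    return the final trajectory difference |U| + |V| + |W|."""
    s1, s2 = (0.1, 0.2, 0.3), (-0.5, 0.1, 0.2)
    for _ in range(int(T / dt)):
        s1, s2 = (euler_step(s1, s2[0], k, dt),
                  euler_step(s2, s1[0], k, dt))
    return sum(abs(a - b) for a, b in zip(s1, s2))
```

With a sufficiently strong coupling (e.g. $k = 3$) the trajectory difference decays essentially to zero, mirroring the role of the coupling-strength threshold in the main theorem below; without coupling the two chaotic trajectories typically stay apart.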
\begin{theorem} \label{ThM} Suppose that the stimulation signal strength of the boundary coupled Hindmarsh-Rose neuron network satisfies the threshold condition that, for any given initial conditions $g^0, g_i^0 \in H$, \begin{equation} \label{SC} p\, \liminf_{t \to \infty} \,\int_{\Gamma_i} U_i^2(t, x)\, dx > R \, |\Omega|, \quad i = 1, \cdots, m, \end{equation} where \begin{equation} \label{R} R = \frac{1 + m}{r^* \min \{C_1, 1\}}\left[\frac{C_1^2}{32} + C_2\right] \left[\eta_2\, d \,|\Omega | + \left[\frac{8\beta^2}{b} + \frac{a^2}{b} + \frac{b}{16\beta^2 r} \left[q - \frac{8\beta^2}{b}\right]^2\right] \right] \end{equation} with $C_1 = \frac{1}{b} (\beta^2 + 4)$, $\eta_2 > 0$ being the constant in the Poincar\'{e} inequality \eqref{Pcr}, and \begin{equation} \label{C2} C_2 = 2(C_1 a)^4 + 2C_1 J^2 + 2\left[C_1^2 \left(2 +\frac{1}{r}\right) + C_1\right]^2 + 4\alpha^2 + \frac{2q^2 c^2}{r} + \frac{2q^4}{r^2}. \end{equation} Then the boundary coupled Hindmarsh-Rose neuron network generated by \eqref{pb} is asymptotically synchronized in the space $H$ at a uniform exponential rate. \end{theorem} \begin{proof} Step 1. Take the $L^2$ inner-products of the first equation in \eqref{dHR} with $KU_i(t)$, the second equation in \eqref{dHR} with $V_i(t)$, and the third equation in \eqref{dHR} with $W_i(t)$, where $K > 0$ is a constant to be chosen. 
Then sum them up and use Young's inequalities to get \begin{equation} \label{eG} \begin{split} &\frac{1}{2} \frac{d}{dt} (K\|U_i (t)\|^2 + \|V_i (t)\|^2 + \|W_i (t)\|^2) + d K \|\nabla U_i (t)\|^2 + \|V_i (t)\|^2 + r\, \|W_i (t)\|^2 \\[3pt] = &\,\int_{\Gamma} K\frac{\partial U_i}{\partial \nu}\, U_i \, dx + \int_\Omega K (a(u + u_i)U_i^2 - b(u^2 + u u_i + u_i^2) U_i^2 )\, dx \\[2pt] + &\, \int_\Omega \left(K U_i V_i -\beta (u + u_i)U_i V_i + (q - K)U_i W_i \right) dx \\[2pt] \leq &\,\int_{\Gamma} K\frac{\partial U_i}{\partial \nu}\, U_i \, dx + \int_\Omega \left(K a(u + u_i)U_i^2 -\beta (u + u_i)U_i V_i - K b\,(u^2 + u u_i + u_i^2) U_i^2 \right) dx \\ + &\, \left(K^2 + \frac{1}{2r} (q - K)^2\right) \|U_i (t)\|^2 + \frac{1}{4}\|V_i (t)\|^2 + \frac{r}{2}\|W_i (t)\|^2, \quad t > 0. \end{split} \end{equation} By the boundary coupling condition \eqref{nbc}, the boundary integral in \eqref{eG} yields \begin{equation} \label{bdt} \begin{split} &\int_{\Gamma} K\frac{\partial U_i}{\partial \nu}\, U_i \, dx = K\int_{\Gamma}\, \sum_{i = 1}^m p [(u_i - u) - (u - u_i)]\, U_i \, dx \\[3pt] = &\, - 2K p \int_{\Gamma_i} U_i^2(t, x)\, dx - 2Kp \int_{\Gamma \backslash (\Gamma_0 \cup \Gamma_i)} u^2(t, x)\, dx \end{split} \end{equation} for $1 \leq i \leq m$. We estimate another integral term on the right-hand side of \eqref{eG}, \begin{equation} \label{nlt} \begin{split} &\int_\Omega \left(K a(u + u_i)U_i^2 -\beta (u + u_i)U_i V_i - K b\,(u^2 + u u_i + u_i^2) U_i^2 \right) dx \\[3pt] \leq &\, \int_\Omega \left(K a(u + u_i)U_i^2 - \beta (u + u_i)U_i V_i - \frac{K b}{2}(u^2 + u_i^2) U_i^2 \right) dx \\[3pt] \leq &\, \int_\Omega \left(K a(u + u_i)U_i^2 + 2\beta^2 (u^2 + u_i^2)U_i^2 + \frac{1}{4} V_i^2 - \frac{K b}{2}(u^2 + u_i^2) U_i^2 \right) dx. \end{split} \end{equation} Now we choose the constant multiplier $K$ to be \begin{equation} \label{lbd} K = \frac{8 \beta^2}{b} > 0. 
\end{equation} Then \eqref{nlt} is reduced to \begin{equation} \label{me} \begin{split} &\int_\Omega \left(K a\,(u + u_i)U_i^2 -\beta (u + u_i)U_i V_i - K b\,(u^2 + u u_i + u_i^2) U_i^2 \right) dx \\[5pt] \leq &\, \int_\Omega \left(K a(u + u_i)U_i^2 + \frac{1}{4} V_i^2 - \frac{K b}{4}(u^2 + u_i^2) U_i^2 \right) dx \\[3pt] = &\, \frac{1}{4} \|V_i(t)\|^2 + \int_\Omega \left( a(u + u_i) - \frac{b}{4}(u^2 + u_i^2) \right) KU_i^2\, dx \\ = &\, \frac{1}{4} \|V_i(t)\|^2 + \int_\Omega \left[\frac{2a^2}{b} - \left(\frac{a}{b^{1/2}} - \frac{b^{1/2}}{2}\, u \right)^2 - \left(\frac{a}{b^{1/2}} - \frac{b^{1/2}}{2}\, u_i \right)^2 \right] KU_i^2\, dx \\ \leq &\, \frac{1}{4} \|V_i(t)\|^2 + \frac{2K a^2}{b} \|U_i (t)\|^2 . \end{split} \end{equation} Substitute \eqref{bdt} and \eqref{me} into \eqref{eG}. Then for $1 \leq i \leq m$ it holds that \begin{equation} \label{meq} \begin{split} &\frac{1}{2} \frac{d}{dt} (K \|U_i (t)\|^2 + \|V_i (t)\|^2 + \|W_i (t)\|^2) + 2Kp \int_{\Gamma_i} U_i^2(t, x)\, dx \\ &+ 2Kp \int_{\Gamma \backslash (\Gamma_0 \cup \Gamma_i)} u^2(t, x)\, dx + dK \, \|\nabla U_i (t)\|^2 + \|V_i (t)\|^2 + r\, \|W_i (t)\|^2 \\ \leq &\, \left(K^2 + \frac{K a^2}{b} + \frac{1}{2r} (q - K)^2\right) \|U_i (t)\|^2, \quad t > 0. \end{split} \end{equation} Step 2. By Poincar\'{e} inequality, there exist positive constants $\eta_1$ and $\eta_2$ depending only on the spatial domain $\Omega$ and its dimension such that \begin{equation} \label{Pcr} \eta_1 \|U_i (t)\|^2 \leq \|\nabla U_i (t)\|^2 + \eta_2 \left(\int_\Omega U_i (t, x)\, dx\right)^2, \quad 1 \leq i \leq m. \end{equation} On the other hand, Theorem \ref{Hab} with \eqref{M} and \eqref{lsp} confirm that \begin{equation} \label{Lsup} \limsup_{t \to \infty}\, \left[\|g(t)\|^2 + \sum_{i=1}^m \|g_i(t)\|^2\right] \leq \frac{1 + m}{r^* \min \{C_1, 1\}}\left(C_2 + \frac{C_1^2}{32}\right) |\Omega |. 
\end{equation} Note that $$ \|U_i (t)\|^2 \leq 2 (\|u(t)\|^2 + \|u_i(t)\|^2) \leq 2 \left(\|g(t)\|^2 + \sum_{i=1}^m \|g_i(t)\|^2\right). $$ Then it follows from \eqref{Pcr} and \eqref{Lsup} that, for any given bounded set $B \subset H$ and any initial data $g^0, g_i^0 \in B$, we have \begin{equation} \label{Keq} \begin{split} & \frac{d}{dt} (K \|U_i (t)\|^2 + \|V_i (t)\|^2 + \|W_i (t)\|^2) + 4Kp \int_{\Gamma_i} U_i^2(t, x)\, dx \\[5pt] & + 2\,\eta_1 dK\, \|U_i (t)\|^2 + \|V_i (t)\|^2 + r \|W_i (t)\|^2 \\[3pt] \leq &\, 2\eta_2\, dK \left(\int_\Omega U_i (t, x)\, dx\right)^2 + \left(K^2 + \frac{K a^2}{b} + \frac{1}{2r} (q - K)^2\right) \|U_i (t)\|^2 \\ \leq &\, 2\eta_2\, dK |\Omega | \|U_i (t)\|^2 + 2\left(K^2 + \frac{K a^2}{b} + \frac{1}{2r} (q - K)^2\right) \|U_i (t)\|^2 \\ \leq &\, \frac{4(1 + m)}{r^* \min \{C_1, 1\}}\left(C_2 + \frac{C_1^2}{32}\right) |\Omega | \left[\eta_2\, dK |\Omega | + \left(K^2 + \frac{K a^2}{b} + \frac{1}{2r} (q - K)^2\right) \right] \end{split} \end{equation} for $t > T_B$, where $T_B > 0$ is a sufficiently large time, depending only on the bounded set $B$, after which the limsup bound \eqref{Lsup} applies. The differential inequality \eqref{Keq} can be written as \begin{equation} \label{Synq} \begin{split} & \frac{d}{dt} (K \|U_i (t)\|^2 + \|V_i (t)\|^2 + \|W_i (t)\|^2) + 4Kp \int_{\Gamma_i} U_i^2(t, x)\, dx \\[3pt] & + 2\,\eta_1 dK\, \|U_i (t)\|^2 + \|V_i (t)\|^2 + r \|W_i (t)\|^2 < 4KR\,|\Omega |, \quad t > T_B. \end{split} \end{equation} The constants $K = 8\beta^2/b$ in \eqref{lbd} and $R > 0$ in \eqref{R} are independent of initial data. 
Under the condition that the stimulation signal strength of the boundary coupling $p\int_{\Gamma_i} U_i^2(t, x)\, dx, 1 \leq i \leq m,$ satisfies \eqref{SC}, there exists a sufficiently large $\tau (g^0, g_i^0) > 0$ depending on the initial data such that the following two inequalities are satisfied: \begin{equation} \label{QQ} \|g (\tau, g^0)\|^2 \leq Q, \quad \|g_i (\tau, g_i^0)\|^2 \leq Q, \;\; 1 \leq i \leq m, \end{equation} where the constant $Q$ is in \eqref{abs}, and the threshold crossing inequality \begin{equation} \label{pl} 4Kp \int_{\Gamma_i} U_i^2(t, x)\, dx > 4KR\,|\Omega |, \quad t > \tau. \end{equation} The inequality \eqref{pl} signifies that the boundary coupling effect $p \int_{\Gamma_i} U_i^2(t, x)\, dx$ exceeds the synchronization threshold $R |\Omega|$. Therefore, from \eqref{Synq} we have the differential inequalities: For $i = 1, \cdots, m$, \begin{equation} \label{Gwq} \begin{split} & \frac{d}{dt} (K \|U_i (t)\|^2 + \|V_i (t)\|^2 + \|W_i (t)\|^2) \\[5pt] &+ \min \{2\eta_1 d, 1, r\} (K\, \|U_i (t)\|^2 + \|V_i (t)\|^2 + \|W_i (t)\|^2) \\ \leq & \frac{d}{dt} (K \|U_i (t)\|^2 + \|V_i (t)\|^2 + \|W_i (t)\|^2) \\[5pt] &+ 2\,\eta_1 dK\, \|U_i (\tau)\|^2 + \|V_i (\tau)\|^2 + r \|W_i (\tau)\|^2 < 0, \quad t > \tau. \end{split} \end{equation} Finally we apply the Gronwall inequality to \eqref{Gwq} and reach the conclusion that for all $i = 1, \cdots, m$, \begin{equation} \label{dsyn} \begin{split} &K \|U_i (t)\|^2 + \|V_i (t)\|^2 + \|W_i (t)\|^2 \\[5pt] \leq &\, e^{- \mu (t - \tau)} (K \|U_i (\tau)\|^2 + \|V_i (\tau)\|^2 + \|W_i (\tau)\|^2) \\[5pt] \leq &\, 2e^{- \mu (t - \tau)} \max \{K, 1\} Q \to 0, \quad \text{as} \;\; t \to \infty, \end{split} \end{equation} where $\mu = \min \{2\eta_1 d, 1, r\}$ is the uniform rate. 
Thus for any $j, k = 1, \cdots, m$, we have \begin{equation} \label{gjk} \begin{split} &\sup_{g_j^0, g_k^0 \in H} \left\{ \limsup_{t \to \infty} \|g_j (t, g_j^0) - g_k (t, g_k^0)\|_H \right\} \\ \leq &\,\sup_{g_j^0, g^0 \in H} \left\{ \limsup_{t \to \infty} \|g_j (t, g_j^0) - g (t, g^0)\|_H \right\} \\ + &\,\sup_{g_k^0, g^0 \in H} \left\{ \limsup_{t \to \infty} \|g (t, g^0) - g_k (t, g_k^0)\|_H \right\} \to 0, \quad \text{as}\;\; t \to \infty. \end{split} \end{equation} Therefore it is proved that $$ deg_s (H)= \sum_{j = 0}^m \sum_{k=0}^m \, \sup_{g_j^0, g_k^0 \in L^2(\Omega, \mathbb{R}^3)} \, \left\{\limsup_{t \to \infty} \|g_j (t) -g_k(t)\|_{L^2(\Omega, \mathbb{R}^3)}\right\} = 0. $$ Here $g_0 (t, g_0^0) = g(t, g^0)$ denotes the solution of the central neuron, corresponding to the index $j = 0$ or $k = 0$. It shows that the boundary coupled Hindmarsh-Rose neuron network generated by \eqref{pb} is asymptotically synchronized in the space $H = L^2 (\Omega, \mathbb{R}^3)$ at a uniform exponential rate. The proof is completed. \end{proof} Remark 1. This paper establishes the asymptotic synchronization of a boundary coupled Hindmarsh-Rose neuron network consisting of multiple neurons around a central neuron. The theory can be directly extended to a large-scale neuron network in the sense that each involved neuron is viewed as a central neuron with its own neighbor neurons. Remark 2. Although there are mathematical models and many studies of neuron networks in terms of ordinary differential equations, the partly diffusive partial-ordinary differential equations are biologically more realistic for modeling the dynamics of a neuron network, because neuron coupling and neuronal signal transmission usually take place on the boundary of the cell domain through bio-electrical potential stimulation signals, which are related only to the first component $u$-equations. The main theorem in this paper provides a sufficient condition for realization of the asymptotic synchronization of this kind of boundary coupled neuron network. 
The threshold for triggering the synchronization may be reduced through further investigation. \bibliographystyle{amsplain}
\section{Introduction} \renewcommand{\headrulewidth}{0pt} \cfoot{In \textit{ACL}, pages 7916-7929, 2022} \thispagestyle{fancy} Text summarization is an important natural language processing (NLP) task, aiming at generating concise summaries for given texts while preserving the key information. It has extensive real-world applications such as headline generation~\cite{nenkova2011automatic}. {In this paper, we focus on the setting of sentence summarization~\cite{Rush_2015,filippova-etal-2015-sentence}}. State-of-the-art text summarization models are typically trained in a supervised way with large training corpora, comprising pairs of long texts and their summaries \cite{zhang2020pegasus, aghajanyan2020better, aghajanyan2021muppet}. However, such parallel data are expensive to obtain, preventing the applications to less popular domains and less spoken languages. Unsupervised text generation has been attracting increasing interest, because it does not require parallel data for training. One widely used approach is to compress a long text into a short one, and to reconstruct it to the long text by a cycle consistency loss \cite{miao2016language,wang-lee-2018-learning,baziotis-etal-2019-seq}. Due to the indifferentiability of the compressed sentence space, such an approach requires reinforcement learning (or its variants), which makes the training difficult \cite{kreutzer-etal-2021-offline}. Recently, \newcite{schumann-etal-2020-discrete} propose an edit-based approach for unsupervised summarization. Their model maximizes a heuristically defined scoring function that evaluates the quality (fluency and semantics) of the generated summary, achieving higher performance than cycle-consistency methods. However, the search approach is slow in inference because hundreds of search steps are needed for each data sample. Moreover, their approach can only select words from the input sentence with the word order preserved. 
Thus, it is restricted and may generate noisy summaries due to the local optimality of search algorithms. To address the above drawbacks, we propose a Non-Autoregressive approach to Unsupervised Summarization (NAUS). The idea is to perform search as in \newcite{schumann-etal-2020-discrete} and, inspired by \newcite{NEURIPS2020_7a677bb4}, to train a machine learning model to smooth out such noise and to speed up the inference process. Different from \newcite{NEURIPS2020_7a677bb4}, we propose to utilize \textit{non-autoregressive} decoders, which generate all output tokens in parallel, due to the following observations: $\bullet$ Non-autoregressive models are several times faster than autoregressive generation, which is important when the system is deployed. $\bullet$ The input and output of the summarization task have a strong correspondence. Non-autoregressive generation supports encoder-only architectures, which can better utilize such input--output correspondence and even outperform autoregressive models for summarization. $\bullet$ For non-autoregressive models, we can design a length-control algorithm based on dynamic programming to satisfy the constraint of output lengths, which is typical in summarization applications but cannot be easily achieved with autoregressive models. We conducted experiments on Gigaword headline generation \cite{graff2003english} and DUC2004 \cite{duc2004} datasets. Experiments show that our NAUS achieves state-of-the-art performance on unsupervised summarization; especially, it outperforms its teacher (i.e., the search approach), confirming that NAUS can indeed smooth out the search noise. Regarding inference efficiency, our NAUS with truncating is 1000 times more efficient than the search approach; even with dynamic programming for length control, NAUS is still 100 times more efficient than search and several times more efficient than autoregressive models. 
Our NAUS is also able to perform length-transfer summary generation, i.e., generating summaries of different lengths from training. \section{Approach} \begin{figure*}[!t] \centering \includegraphics[width=\linewidth]{diagram.pdf} \caption{The overview of our NAUS approach. In each search step, input words corresponding to grey cells are selected. Besides, the blue arrow refers to the training process, and the green arrow refers to inference.} \end{figure*} In our approach, we first follow \newcite{schumann-etal-2020-discrete} and obtain a summary by discrete search towards a heuristically defined objective function (\S\ref{ss:search}). Then, we propose a non-autoregressive model for the summarization task (\S\ref{ss:NAG}). We present the training strategy and the proposed length-control algorithm in \S\ref{ss:DP}. \subsection{Search-Based Summarization}\label{ss:search} Consider a given source text $\mathbf{x}=\left(\mathrm x_{1},\mathrm x_{2}, \ldots, \mathrm x_{n}\right)$. The goal of summarization is to find a shorter text $\mathbf{y}=\left(\mathrm y_{1}, \mathrm y_{2}, \ldots,\mathrm y_{m}\right)$ as the summary. Our work on unsupervised summarization follows the recent progress of search-based text generation~\cite{liu-etal-2020-unsupervised,liu2021simulated,kumar2020iterative}. \newcite{schumann-etal-2020-discrete} formulate summarization as word-level extraction (with order preserved), and apply edit-based discrete local search to maximize a heuristically designed objective. Specifically, the objective function considers two aspects: (1) a language fluency score $f_{{\mathrm{LM}}}(\mathbf{y})$, given by the reciprocal of a language model's perplexity; and (2) a semantic similarity score $f_{\mathrm{SIM}}(\mathbf{y}; \mathbf{x})$, given by the cosine similarity of the sentence embeddings of $\mathbf y$ and $\mathbf x$. 
The overall objective combines the two aspects as \begin{align}\label{eqn:score} f(\mathbf{y} ; \mathbf{x})=f_{{\mathrm{LM}}}(\mathbf{y}) \cdot f_{\mathrm{SIM}}(\mathbf{y} ; \mathbf{x})^{\gamma} \end{align} where $\gamma$ is a weighting hyperparameter. Interested readers are referred to \newcite{schumann-etal-2020-discrete} for the details of the scoring function. Further, the desired summary length can be specified as a hard constraint, achieved by searching only among sentences of the correct length. Suppose the desired summary length is $T$; the approach selects $T$ random words from the input, and maximizes the scoring function~\eqref{eqn:score} by swapping a selected word with a non-selected one. A greedy hill-climbing algorithm determines whether the change is accepted or not. In other words, a change is accepted if the score improves, and rejected otherwise. Such a process continues until a (possibly local) optimum is found. A pilot analysis in \newcite{schumann-etal-2020-discrete} shows that words largely overlap between a source text and its reference summary. This explains the high performance of such a word extraction approach, which is a state-of-the-art unsupervised summarization system, outperforming strong competitors, e.g., cycle consistency \cite{wang-lee-2018-learning, baziotis-etal-2019-seq}. \subsection{Non-Autoregressive Model for Summarization}\label{ss:NAG} Despite the high performance, such edit-based search has several drawbacks. First, the search process is slow because hundreds of local search steps are needed to obtain a high-quality summary. Second, their approach only extracts the original words with order preserved. Therefore, the generated summary is restricted and may be noisy. To this end, we propose a Non-Autoregressive approach to Unsupervised Summarization (NAUS) by learning from the search results. 
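As a concrete illustration of the search procedure that produces these training targets, the hill-climbing word extraction can be sketched in a few lines. The scoring function below is a toy stand-in for $f_{\mathrm{LM}}(\mathbf{y}) \cdot f_{\mathrm{SIM}}(\mathbf{y};\mathbf{x})^{\gamma}$ (a keyword-overlap heuristic, purely illustrative), not the language-model objective of \newcite{schumann-etal-2020-discrete}:

```python
import random

def hill_climb(source, T, score, iters=2000, seed=0):
    """Greedy hill-climbing word extraction (order preserved): keep a set of
    T selected input positions; repeatedly propose swapping one selected
    position with an unselected one and accept iff the score improves."""
    rng = random.Random(seed)
    n = len(source)
    selected = sorted(rng.sample(range(n), T))
    best = score([source[i] for i in selected])
    for _ in range(iters):
        drop = rng.choice(selected)
        add = rng.choice([i for i in range(n) if i not in selected])
        candidate = sorted([i for i in selected if i != drop] + [add])
        s = score([source[i] for i in candidate])
        if s > best:  # accept only strict improvements (greedy hill climbing)
            selected, best = candidate, s
    return [source[i] for i in selected], best

# Toy scoring function standing in for f_LM(y) * f_SIM(y; x)^gamma --
# purely illustrative, not the paper's objective.
KEYWORDS = {"storm", "hits", "coast", "thousands", "evacuated"}
def toy_score(words):
    return sum(w in KEYWORDS for w in words) / len(words)

src = ("a powerful storm hits the gulf coast and thousands "
       "of residents are evacuated").split()
summary, s = hill_climb(src, T=5, score=toy_score)
# summary == ['storm', 'hits', 'coast', 'thousands', 'evacuated'], s == 1.0
```

Note that the output preserves the input word order, exactly as in the word-extraction formulation above; only which positions are selected changes during search.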
In this way, the machine learning model can smooth out the search noise and is much faster, largely alleviating the drawbacks of search-based summarization. Compared with training an autoregressive model from search~\cite{NEURIPS2020_7a677bb4}, non-autoregressive generation predicts all the words in parallel, further improving inference efficiency by several times. Moreover, a non-autoregressive model enables us to design an encoder-only architecture, which is more suited to the summarization task due to the strong correspondence between input and output, which cannot be fully utilized by encoder--decoder models, especially autoregressive ones. Specifically, we propose to use multi-layer Transformer~\cite{attentionisallyouneed} as the non-autoregressive architecture for summarization. Each Transformer layer is composed of a multi-head attention sublayer and a feed-forward sublayer. Additionally, there is a residual connection in each sublayer, followed by layer normalization. Let $X^{(n)} \in \mathbb{R}^{T \times d}$ be the representation at the $n$th layer, where $T$ is the number of words and $d$ is the dimension. In particular, the input layer $X^{(0)}$ is the embeddings of words. Suppose we have $h$ attention heads. The output of the $i\text{th}$ head in the $n$th attention sublayer is $A^{(n)}_i = \operatorname{softmax} \Big(\tfrac{Q_i K_i^{\top}}{\sqrt{d_{k}}}\Big) V_i $, where $Q_i$, $K_i$, and $V_i$ are matrices calculated by three distinct multi-layer perceptrons (MLPs) from $X^{(n-1)}$; $d_{k}$ is the attention dimension. Multiple attention heads are then concatenated: \begin{equation}\nonumber A^{(n)} = \operatorname { Concat }\big(A^{(n)}_1, \ldots,A^{(n)}_h\big) W_{O} \end{equation} where $W_{O} \in \mathbb{R}^{d \times d}$ is a weight matrix. 
Then, we have a residual connection and layer normalization by \begin{equation} \bar{A}^{(n)} = \operatorname{LayerNorm}\big(X^{(n-1)} + A^{(n)}\big) \label{eqn:res1} \end{equation} Further, an MLP sublayer processes $\bar{A}^{(n)}$, followed by residual connection and layer normalization, yielding the $n$th layer's representation \begin{equation} X^{(n)} = \operatorname{LayerNorm}\big(\bar{A}^{(n)} + \operatorname{MLP}(\bar{A}^{(n)})\big) \label{eqn:res2} \end{equation} The last Transformer layer $X^{(N)}$ is fed to $\operatorname{softmax}$ to predict the {words of the} summary in a non-autoregressive manner, that is, the probability at the $t$th step is given by $\operatorname{softmax}(W \bm x_t^{(N)})$, where $\bm x_t^{(N)}$ is the $t$th row of the matrix $X^{(N)}$ and $W$ is the {weight matrix}. It is emphasized that, in the vocabulary, we include a special blank token $\epsilon$, which is handled by dynamic programming during both training and inference (\S\ref{ss:DP}). This enables us to generate a shorter summary than the input with such a multi-layer Transformer. Our model can be thought of as an encoder-only architecture, differing from a typical encoder--decoder model with cross attention~\cite{attentionisallyouneed,baziotis-etal-2019-seq,zhou-rush-2019-simple}. Previously, \newcite{sunon} propose a seemingly similar model to us, but put multiple end-of-sequence (EOS) tokens at the end of the generation; thus, they are unable to maintain the correspondence between input and output. Instead, we allow blank tokens scattering over the entire sentence; the residual connections in Eqns~\eqref{eqn:res1} and~\eqref{eqn:res2} can better utilize such input--output correspondence for summarization. \subsection{Training and Inference}\label{ss:DP} In this section, we first introduce the Connectionist Temporal Classification (CTC) training. Then, we propose a length-control decoding approach for summary generation. 
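For reference, the per-layer computation just described (scaled dot-product attention followed by the residual-plus-LayerNorm step of Eqn.~\eqref{eqn:res1}) can be sketched for a single head in plain Python. The tiny dimensions and the projection matrices used in the example are illustrative only, not the model's actual configuration:

```python
import math

def softmax(row):
    m = max(row)
    e = [math.exp(x - m) for x in row]
    s = sum(e)
    return [x / s for x in e]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def layer_norm(x, eps=1e-5):
    mu = sum(x) / len(x)
    var = sum((v - mu) ** 2 for v in x) / len(x)
    return [(v - mu) / math.sqrt(var + eps) for v in x]

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention: softmax(QK^T/sqrt(dk))V."""
    Q, K, V = matmul(X, Wq), matmul(X, Wk), matmul(X, Wv)
    dk = len(Wq[0])
    scores = [[sum(q * k for q, k in zip(qr, kr)) / math.sqrt(dk) for kr in K]
              for qr in Q]
    A = [softmax(r) for r in scores]
    return matmul(A, V)

def encoder_sublayer(X, Wq, Wk, Wv):
    """Attention sublayer with residual connection and layer normalization."""
    A = self_attention(X, Wq, Wk, Wv)
    return [layer_norm([x + a for x, a in zip(xr, ar)])
            for xr, ar in zip(X, A)]
```

Because the output at position $t$ is computed from the same row index as the input, the residual connection directly carries the input--output correspondence that the encoder-only design exploits.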
\textbf{CTC Training.} The Connectionist Temporal Classification \cite[CTC,][]{10.1145/1143844.1143891} algorithm allows a special blank token $\epsilon$ in the vocabulary, and uses dynamic programming to marginalize out such blank tokens{, known as \textit{latent alignment}~\cite{saharia-etal-2020-non}}. In addition, non-autoregressive generation suffers from a common problem that words may be repeated in consecutive steps~\cite{gu2017non,leedeterministic}; thus, CTC merges repeated words unless separated by $\epsilon$. For example, the sequence of tokens $a\epsilon\epsilon aabb\epsilon$ is reduced to the text $aab$, denoted by $\Gamma(a\epsilon\epsilon aabb\epsilon)=aab$. Concretely, the predicted likelihood is marginalized over all possible fillings of $\epsilon$, i.e., all possible token sequences that are reduced to the groundtruth text: \begin{equation}\label{eqn:marginal} P(\mathbf{y}|\mathbf{x})=\sum\nolimits_{\mathbf{w} : \Gamma (\mathbf w)=\mathbf y} P(\mathbf{w}|\mathbf{x}) \end{equation} where $P(\mathbf w|\mathbf x)$ is the probability of generating a sequence of tokens $\mathbf w$. Although enumerating every candidate in $\{\mathbf w:\Gamma(\mathbf w)= \mathbf y\}$ is intractable, such marginalization fortunately can be computed by dynamic programming in an efficient way. Let $\alpha_{s,t}=\sum_{\mathbf w_{1:s}:\Gamma(\mathbf w_{1:s})=\mathbf y_{1:t}}P(\mathbf w_{1:s}|\mathbf x)$ be the marginal probability of generating $\mathbf y_{1:t}$ up to the $s$th decoding slot. Moreover, $\alpha_{s,0}$ is defined to be the probability that $\mathbf w_{1:s}$ is all $\epsilon$, thus not having matched any word in $\mathbf y$. The $\alpha_{s,t}$ variable can be further decomposed into two terms $\alpha_{s,t}=\alpha_{s,t}^{\epsilon}+ \alpha_{s,t}^{\neg\epsilon}$, where the first term is such probability with $\mathrm w_s=\epsilon$, and the second term $\mathrm w_s\ne\epsilon$. 
Clearly, the initialization of $\alpha$ variables is \begin{align} &\alpha^{\epsilon}_{1,0}= P(\mathrm {w}_1=\epsilon|\mathbf x) \\\label{eqn:init2} &\alpha^{\neg\epsilon}_{1,1}= P(\mathrm {w}_1 = \mathrm{y}_1|\mathbf x)\\\label{eqn:init3} &\alpha^{\epsilon}_{1,t}=0, \forall t\geq1\\\label{eqn:init4} &\alpha^{\neg\epsilon}_{1,t}=0, \forall t>1 \text{ or } t=0 \end{align} Eqn.~\eqref{eqn:init3} is because, at the first prediction slot, the empty token $\epsilon$ does not match any target words; Eqn.~\eqref{eqn:init4} is because the predicted non-$\epsilon$ first token must match exactly the first target word. The recursion formula for $\alpha^{\epsilon}_{s,t}$ is \begin{equation} \nonumber \alpha^{\epsilon}_{s,t}=\alpha_{s-1,t}P(\mathrm{w}_s=\epsilon|\mathbf{x}) \end{equation} since the newly predicted token $\epsilon$ with probability $P(\mathrm w_s=\epsilon|\mathbf x)$ does not match any target word, inheriting $\alpha_{s-1,t}$. The recursion formula for $\alpha^{\neg\epsilon}_{s,t}$ is \begin{equation} \nonumber \alpha^{\neg\epsilon}_{s,t}= \left\{\begin{array}{lc} \left( \alpha^{\epsilon}_{s-1,t-1} + \alpha^{\neg\epsilon}_{s-1,t} \right) P(\mathrm{w}_s=\mathrm{y}_t|\mathbf{x}), &\\ \quad\quad\quad\quad\quad\quad\quad\quad\quad\ \text { if } \mathrm{y}_t = \mathrm{y}_{t-1} \\ \left( \alpha_{s-1,t-1}+\alpha^{\neg \epsilon}_{s-1,t} \right) P(\mathrm{w}_s=\mathrm{y}_t|\mathbf{x}), &\\ \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \text{ otherwise} \end{array}\right . \label{eqn:CTC_recursion_2} \end{equation} Here, $\mathrm w_s$ is not $\epsilon$, so we must have $\mathrm w_s=\mathrm y_t$, having the predicted probability $P(\mathrm w_s=\mathrm y_t|\mathbf x)$. 
If $\mathrm y_t=\mathrm y_{t-1}$, then we have two sub-cases: first, $\mathbf{w}_{1:s-1}$ is reduced to $\mathbf{y}_{1:t-1}$ with $\mathrm w_{s-1}=\epsilon$ separating two repeating words in $\mathbf y$, having probability $\alpha_{s-1,t-1}^\epsilon$; or second, $\mathbf{w}_{1:s-1}$ is reduced to $\mathbf{y}_{1:t}$ with $\mathrm w_{s-1}=\mathrm y_t \ne \epsilon$, having probability $\alpha_{s-1,t}^{\neg\epsilon}$, which implies we are merging $\mathrm w_{s-1}$ and $\mathrm w_s$. If $\mathrm y_t\ne \mathrm y_{t-1}$, $\mathbf{w}_{1:s-1}$ is reduced to either $\mathbf{y}_{1:t-1}$ or $\mathbf{y}_{1:t}$. In the first case, $\mathrm{w}_{s-1}$ can be either $\epsilon$ or non-$\epsilon$, given by $\alpha_{s-1,t-1}=\alpha_{s-1,t-1}^\epsilon+\alpha_{s-1,t-1}^{\neg\epsilon}$. In the second case, we must have $\mathrm{w}_{s-1} \neq \epsilon$, which has a probability of $\alpha^{\neg \epsilon}_{s-1,t}$. Finally, $\alpha_{|\mathbf{w}|,|\mathbf{y}|}$ is the marginal probability in Eqn.~\eqref{eqn:marginal}, as it is the probability that the entire generated sequence matches the entire target text. The CTC maximum likelihood estimation is to maximize the marginal probability, which is equivalent to minimizing the loss $-\alpha_{|\mathbf{w}|,|\mathbf{y}|}$. Since the dynamic programming formulas are differentiable, the entire model can be trained by backpropagation in an end-to-end manner with auto-differentiation tools (such as PyTorch). \textbf{Length-Control Inference.} {Controlling output length is the nature of the summarization task, for example, displaying a short news headline on a mobile device.} Moreover, \newcite{schumann-etal-2020-discrete} show that the main evaluation metric ROUGE \cite{lin2004rouge} is sensitive to the summary length, and longer summaries tend to achieve higher ROUGE scores. Thus, it is crucial to control the summary length for fair comparison. We propose a length-control algorithm by dynamic programming (DP), following the nature of CTC training. 
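Before turning to decoding, the CTC marginalization above can be condensed into a short reference implementation. The vocabulary and probability tables below are illustrative (index $0$ plays the role of $\epsilon$), and a brute-force enumeration over all token sequences verifies the dynamic program on a tiny example:

```python
from itertools import product

def ctc_marginal(probs, y, blank=0):
    """CTC forward algorithm: probs[s][v] = P(w_s = v | x) for S slots.
    Returns the sum of P(w | x) over all token sequences w that collapse
    to y (remove blanks; merge repeats not separated by a blank)."""
    S, T = len(probs), len(y)
    # a_eps[t] / a_tok[t]: marginal prob. of matching y[:t] after the current
    # slot, with the last emitted token being blank / non-blank respectively.
    a_eps = [0.0] * (T + 1)
    a_tok = [0.0] * (T + 1)
    a_eps[0] = probs[0][blank]          # slot 1 emits a blank
    if T >= 1:
        a_tok[1] = probs[0][y[0]]       # slot 1 emits the first target word
    for s in range(1, S):
        b_eps = [0.0] * (T + 1)
        b_tok = [0.0] * (T + 1)
        for t in range(T + 1):
            b_eps[t] = (a_eps[t] + a_tok[t]) * probs[s][blank]
            if t >= 1:
                p = probs[s][y[t - 1]]
                if t >= 2 and y[t - 1] == y[t - 2]:
                    # repeated target word: a blank must separate the copies
                    b_tok[t] = (a_eps[t - 1] + a_tok[t]) * p
                else:
                    b_tok[t] = (a_eps[t - 1] + a_tok[t - 1] + a_tok[t]) * p
        a_eps, a_tok = b_eps, b_tok
    return a_eps[T] + a_tok[T]

def collapse(w, blank=0):
    """Gamma: drop blanks and merge consecutive repeats, e.g. [a,e,e,a,a,b,b,e] -> [a,a,b]."""
    out, prev = [], None
    for tok in w:
        if tok != blank and tok != prev:
            out.append(tok)
        prev = tok
    return out

def brute_force(probs, y, blank=0):
    """Enumerate all V^S sequences and sum the probabilities of those reducing to y."""
    V, total = len(probs[0]), 0.0
    for w in product(range(V), repeat=len(probs)):
        if collapse(list(w), blank) == list(y):
            p = 1.0
            for s, tok in enumerate(w):
                p *= probs[s][tok]
            total += p
    return total
```

The two-array (`a_eps`, `a_tok`) layout mirrors the split $\alpha_{s,t}=\alpha^{\epsilon}_{s,t}+\alpha^{\neg\epsilon}_{s,t}$ used in the derivation.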
However, our DP is an approximate algorithm because of the dependencies introduced by removing consecutive repeated tokens. Thus, we equip our DP with a beam search mechanism. We define $\mathscr B_{s,t}$ to be a set of top-$B$ sequences with $s$ predicted tokens that are reduced to $t$ words. $\mathscr B_{s,t}$ is constructed from three scenarios. \begin{figure}[!t] \centering \includegraphics[width=.7\linewidth]{DP.pdf} \caption{Illustration of our length-control algorithm.} \label{fig:length_control} \end{figure} First, the blank token $\epsilon$ is predicted for the $s$th generation slot, and thus the summary length $t$ remains the same, shown by the blue arrow in Figure~\ref{fig:length_control}. This yields a set of candidates \begin{align}\label{eqn:rec1} &\mathscr B_{s,t}^{(1)} = \big\{ \mathbf{b} \oplus \epsilon \, : \, \mathbf{b} \in \mathscr B_{s-1,t}\big\} \end{align} where $\oplus$ refers to string/token concatenation. Second, a repeated word is predicted for the $s$th generation slot, i.e., $\mathrm b_{s-1}$ for a subsequence $\mathbf b$ of length $s-1$. In this case, the summary length $t$ also remains the same, also shown in the blue arrow in Figure~\ref{fig:length_control}. This gives a candidate set \begin{align}\label{eqn:rec2} &\mathscr B_{s,t}^{(2)} = \big\{ \mathbf{b}\oplus \mathrm{b}_{s-1} \, : \, \mathbf{b} \in \mathscr B_{s-1,t}\big\} \end{align} Third, a non-$\epsilon$, non-repeating word $\mathrm w_s$ is generated, increasing the summary length from $t-1$ to $t$, shown by the red arrow in Figure~\ref{fig:length_control}. This gives \begin{align}\nonumber \mathscr B_{s,t}^{(3)} = \operatorname{top}_B\big\{ \mathbf{b}\oplus \mathrm{w}_s \, : \, & \mathbf{b} \in \mathscr B_{s-1,t-1},\mathrm{w}_s \neq \epsilon,\\ &\mathrm{w}_s \neq\mathrm{b}_{s-1}\big\} \label{eqn:rec3} \end{align} where $\operatorname{top}_B$ selects the best $B$ elements by the probability $P(\mathrm{w}_s|\mathbf x)$. 
Based on the three candidate sets, we select the top-$B$ sequences to keep the beam size fixed: \begin{align} \label{eqn:rec4} \mathscr B_{s,t} =\operatorname{top}_B (\mathscr B_{s,t}^{(1)}\cup \mathscr B_{s,t}^{(2)} \cup \mathscr B_{s,t}^{(3)} ) \end{align} where the sequences are ranked by their predicted joint probabilities. \begin{theorem} \label{theorem:dp} (1) If repeating tokens are not merged, then the proposed length-control algorithm with beam size $B=1$ finds the exact optimum, i.e., $\mathscr B_{S,T}$ contains the most probable length-$T$ sentence given $S$ prediction slots. (2) If we merge repeating tokens predicted by CTC-trained models, the above algorithm may not be exact. \end{theorem} Appendix~\ref{app:proof} presents the proof of the theorem and provides a more detailed analysis, showing that our length-control algorithm, although an approximate inference procedure, generates summaries of the desired length properly. Compared with truncating an overlength output, our approach generates more fluent and complete sentences. Also, our length-control algorithm differs from conventional beam search, as shown in Appendix~\ref{app:beam}. \section{Experiments} \label{sec:experiment} \subsection{Setup} \textbf{Datasets.} We evaluated our NAUS model on the Gigaword headline generation and DUC2004 datasets. The headline generation dataset~\cite{Rush_2015} is constructed from the Gigaword news corpus \cite{graff2003english}, where the first sentence of a news article is taken as the input text and the news title as the summary. The dataset contains 3.8M/198K/1951 samples for training/validation/test. Based on the analysis of training-set size in Appendix~\ref{app:details}, we used 3M samples for training NAUS. It should be emphasized that, when NAUS learns from search, we only use the input side of the training corpus: we perform search~\cite{schumann-etal-2020-discrete} for each input and train our NAUS on the search results.
Therefore, we do not utilize any labeled parallel data, and our approach is unsupervised. Moreover, we considered two settings with desired summary lengths of 8 and 10, following~\newcite{schumann-etal-2020-discrete}. Our NAUS is trained from the respective search results. The DUC2004 dataset \cite{duc2004} is designed for evaluation only and contains 500 samples, where we also take the first sentence of an article as the input text. Our NAUS is transferred from the above headline generation corpus. Based on the length of DUC2004 summaries, we trained NAUS from search results with 13 words, also following \newcite{schumann-etal-2020-discrete} for fair comparison. \textbf{Evaluation Metrics.} We evaluated the quality of predicted summaries by ROUGE scores~\cite{lin2004rouge},\footnote{https://github.com/tagucci/pythonrouge} which are the most widely used metrics in previous work~\cite{wang-lee-2018-learning, baziotis-etal-2019-seq, zhou-rush-2019-simple}. Specifically, ROUGE-$n$ evaluates the $n$-gram overlap between a predicted summary and its reference summary; ROUGE-L, instead, measures the longest common subsequence between the predicted and reference summaries. Different ROUGE variants are adopted in previous work, depending on the dataset. We followed the standard evaluation scripts and evaluated headline generation by ROUGE F1~\cite{wang-lee-2018-learning,baziotis-etal-2019-seq,schumann-etal-2020-discrete} and DUC2004 by Truncate ROUGE Recall~\cite{dorr-etal-2003-hedge,west-etal-2019-bottlesum}. In addition to summary quality, we also evaluated the inference efficiency of different methods, as it is important for the deployment of deep learning models in real-time applications. We report the average inference time in seconds for each data sample and compare the speedup against \newcite{schumann-etal-2020-discrete}'s search approach, which achieved the previous state-of-the-art ROUGE scores. Our experiments were conducted on an i9-9940X CPU and an RTX6000 graphics card.
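For intuition, the ROUGE-$n$ F1 described above can be sketched in a few lines of Python. This is a toy version with invented names, not the standard evaluation scripts (which additionally handle stemming, truncation, and multiple references):

```python
from collections import Counter

def rouge_n_f1(pred, ref, n=1):
    """Toy ROUGE-n F1: clipped n-gram overlap between two whitespace-tokenized strings."""
    def ngrams(tokens):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    p, r = ngrams(pred.split()), ngrams(ref.split())
    overlap = sum((p & r).values())  # Counter intersection clips repeated n-grams
    if overlap == 0:
        return 0.0
    precision = overlap / sum(p.values())
    recall = overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)
```

For example, comparing "the cat sat" against "the cat ran" gives a unigram overlap of 2, hence precision = recall = 2/3 and an F1 of 2/3.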
Appendix~\ref{app:details} presents additional implementation details. \begin{table*}[!t] \centering \resizebox{\textwidth}{!}{% \begin{tabular}{|c|c|c|c|r|rrrr|rr|} \hline \multirow{2}{*}{Group} & \multirow{2}{*}{\#} & \multicolumn{2}{c|}{\multirow{2}{*}{Approach}} & \multirow{2}{*}{Len} & \multicolumn{4}{c|}{ROUGE F1} & \multirow{2}{*}{Inf.Time} & \multirow{2}{*}{Speedup} \\ \cline{6-9} & & \multicolumn{2}{c|}{} & & R-1 & R-2 & R-L & $\Delta$R & & \\ \hline\hline \multirow{5}{*}{\begin{tabular}[c]{@{}c@{}}A\\ (desired\\ length 8)\end{tabular}} & 1 & \multirow{1}{*}{Baseline} & $\text{Lead (8 words)}^\dagger$ & 7.9 & 21.39 & 7.42 & 20.03 & -11.12 & -- & -- \\\cline{2-11} & 2 & \multirow{2}{*}{Search} & $\text{\newcite{schumann-etal-2020-discrete}}^\dagger$ & 7.9 & 26.32 & 9.63 & 24.19 & 0.18 & -- & -- \\ & 3 & & Our replication & 7.9 & 26.17 & \textbf{9.69} & 24.10 & 0 & 6.846 & 1x \\ \cline{2-11} & 4 & \multicolumn{1}{c|}{\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Learn from \\ search\end{tabular}}} & \newcite{sunon} & 7.7 & 26.88 & 9.37 & 24.54 & 0.83 & 0.017 & 403x \\ & 5 & & NAUS (truncate) & 7.8 & 27.27 & 9.49 & 24.96 & 1.76 & \textbf{0.005} & \textbf{1369x} \\ & 6 & & NAUS (length control) & 7.8 & \textbf{27.94} & 9.24 & \textbf{25.51} & \textbf{2.73} & 0.041 & 167x \\ \hline\hline \multirow{8}{*}{\begin{tabular}[c]{@{}c@{}}B\\ (desired \\ length 10)\end{tabular}} & 7 & \multirow{3}{*}{Baseline} & $\text{Lead (10 words)}^\dagger$ & 9.8 & 23.03 & 7.95 & 21.29 & -10.2 & -- & -- \\ & 8 & & $\text{\newcite{wang-lee-2018-learning}}^\dagger$ & 10.8 & 27.29 & 10.01 & 24.59 & -0.58 & -- & -- \\ & 9 & & $\text{\newcite{zhou-rush-2019-simple}}^\dagger$ & 9.3 & 26.48 & 10.05 & 24.41 & -1.53 & -- & -- \\ \cline{2-11} & 10 & \multirow{2}{*}{Search} & $\text{\newcite{schumann-etal-2020-discrete}}^\dagger$ & 9.8 & 27.52 & \textbf{10.27} & 24.91 & 0.23 & -- & -- \\ & 11 & & Our replication & 9.8 & 27.35 & 10.25 & 24.87 & 0 & 9.217 & 1x \\ \cline{2-11} & 12 & 
\multicolumn{1}{c|}{\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Learn from \\ search\end{tabular}}} & \newcite{sunon} & 9.4 & 27.86 & 9.88 & 25.51 & 0.78 & 0.020 & 461x\\ & 13& & NAUS (truncate) & 9.8 & 28.24 & 10.04 & 25.40 & 1.21 & \textbf{0.005} & \textbf{1843x} \\ & 14 & & NAUS (length control) & 9.8 & \textbf{28.55} & 9.97 & \textbf{25.78} & \textbf{1.83} & 0.044& 210x \\\hline \end{tabular} } \caption{Results on the Gigaword headline generation test set. \textbf{Len:} Average length of predicted summaries. \textbf{R-1, R-2, R-L:} ROUGE-1, ROUGE-2, ROUGE-L. \textbf{$\Delta$R:} The difference in total ROUGE (sum of R-1, R-2, and R-L) in comparison with the (previous) state-of-the-art search method under replication. \textbf{Inf.Time:} Average inference time in seconds for one sample on an i9-9940X CPU and an RTX6000 GPU. \textbf{Speedup:} Relative to \newcite{schumann-etal-2020-discrete}. $^\dagger$Results quoted from previous papers; others are given by our experiments. } \label{table:giga_performance} \end{table*} \subsection{Results and Analyses} \textbf{Main Results.} Table \ref{table:giga_performance} presents the performance of our model and baselines on the Gigaword headline test set. For a fair comparison, we categorize all approaches by average summary lengths of \textasciitilde8 and \textasciitilde10 into Groups A and B, respectively. The Lead baseline extracts the first several words of the input sentence. Despite its simplicity, the Lead approach is a strong summarization baseline adopted in most previous work \cite{fevry-phang-2018-unsupervised,baziotis-etal-2019-seq}.
\begin{table}[!t] \centering \resizebox{0.5\textwidth}{!}{% \begin{tabular}{|c|rrrr|rr|} \hline \multirow{2}{*}{Model} & \multicolumn{4}{c|}{ROUGE Recall} & \multirow{2}{*}{\!\!\!\!Time} & \multicolumn{1}{l|}{\multirow{2}{*}{\!\!\!\!Speedup}\!\!} \\ \cline{2-5} & \multicolumn{1}{c}{R-1} & \multicolumn{1}{c}{R-2} & \multicolumn{1}{c}{R-L} & \multicolumn{1}{c|}{$\Delta$R} & & \\ \hline\hline $\text{Lead (75 characters)}^\dagger$ & 22.50 & 6.49 & 19.72 & -8.34 & -- & -- \\ $\text{\newcite{zajic2004bbn}}^\dagger$ & 25.12 & 6.46 & 20.12 & -5.35 & -- & -- \\ $\text{\newcite{baziotis-etal-2019-seq}}^\dagger$ & 22.13 & 6.18 & 19.30 & -9.44 & -- & -- \\ $\text{\newcite{west-etal-2019-bottlesum}}^\dagger$ & 22.85 & 5.71 & 19.87 & -8.62 & -- & -- \\ \hline $\!\!\text{\newcite{schumann-etal-2020-discrete}}^\dagger$\!\!\!\! & 26.04 & \textbf{8.06} & 22.90 & -0.05 & -- & -- \\ \!\!Our replication\!\!\! &26.14 &8.03 &22.88 & 0 &\!\!\!\!12.314 & 1x \\ \hline \newcite{sunon} & 26.25 & 7.66 & 22.83 & -0.31 & 0.022 & 559x \\ NAUS (truncate) & 26.52 & 7.88 & 22.91 & 0.26 & \textbf{0.005} & \textbf{2463x} \\ NAUS (length control) & \textbf{26.71} & 7.68 & \textbf{23.06} & \textbf{0.40} & 0.048 & 257x \\\hline \end{tabular}% } \caption{Results on the DUC2004 dataset. $^\dagger$Quoted from previous papers.} \label{table:duc2004_performance} \end{table} \newcite{wang-lee-2018-learning} utilize cycle consistency~\cite{miao2016language} for unsupervised summarization; {the performance is relatively low, because the cycle consistency loss cannot ensure the generated text is a valid summary.} \newcite{zhou-rush-2019-simple} perform beam search towards a step-by-step decomposable score of fluency and contextual matching. 
Both are unable to explicitly control the summary length: in a fair comparison of length 10 (Group~B, Table~\ref{table:giga_performance}), their performance is worse than the (previous) state-of-the-art approach \cite{schumann-etal-2020-discrete},\footnote{\newcite{schumann-etal-2020-discrete} present a few variants that use additional datasets for training language models (in an unsupervised way). In our study, we focus on the setting without data augmentation, i.e., the language model is trained on the non-parallel Gigaword corpus.} which performs edit-based local search. Our NAUS approach follows \newcite{schumann-etal-2020-discrete}, but trains a non-autoregressive model from search results. We consider two settings for controlling the summary length: truncating longer summaries and decoding with our proposed length-control algorithm. Both of our variants outperform \newcite{schumann-etal-2020-discrete} by 1.21--2.73 in terms of the total ROUGE score (Rows~5--6 \& 13--14, Table~\ref{table:giga_performance}). As mentioned, \newcite{schumann-etal-2020-discrete} only extract original words with order preserved, yielding noisy sentences. Our NAUS, as a student, learns from the search-based teacher model and is able to smooth out its noise. This is a compelling result, as our student model outperforms its teacher. Regarding inference efficiency, our NAUS method with truncating is more than 1300 times faster than \newcite{schumann-etal-2020-discrete}, because we do not need iterative search. Even with dynamic programming and beam search for length control, NAUS is still over 100 times faster. This shows that NAUS is extremely efficient at inference, which is important for real-time applications. Although the efficiency of \newcite{wang-lee-2018-learning} and \newcite{zhou-rush-2019-simple} is not available, we still expect our approach to be a few times faster (despite our higher ROUGE scores) because their models are autoregressive.
By contrast, our NAUS is non-autoregressive, meaning that it predicts all words simultaneously. We will provide a controlled comparison between autoregressive and non-autoregressive models in Table~\ref{table:giga_ablation_result}. Table~\ref{table:duc2004_performance} shows the results on the DUC2004 dataset. The cycle-consistency approach \cite{baziotis-etal-2019-seq,west-etal-2019-bottlesum} does not perform well on this dataset, outperformed by an early rule-based syntax-tree trimming approach \cite{zajic2004bbn} and the state-of-the-art edit-based search~\cite{schumann-etal-2020-discrete}. The performance of our NAUS model is consistent with Table \ref{table:giga_performance}, outperforming all previous methods in terms of the total ROUGE score while being 100--1000 times faster than the search approach \cite{schumann-etal-2020-discrete}. In general, the proposed NAUS not only achieves state-of-the-art ROUGE scores for unsupervised summarization, but is also more efficient to deploy. Results are consistent on both datasets, demonstrating the generality of our NAUS. \textbf{In-Depth Analyses.} We conduct in-depth analyses on the proposed NAUS model in Table~\ref{table:giga_ablation_result}. Due to time and space constraints, we chose Gigaword headline generation as our testbed. All the autoregressive (AR) and non-autoregressive (NAR) variants learn from the search output of our replication (Rows~2~\&~11), where we achieve results very close to those reported in \newcite{schumann-etal-2020-discrete}.
\begin{table}[!t] \centering \resizebox{0.5\textwidth}{!}{% \begin{tabular}{|c|ccrrrrr|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\#}} & \multicolumn{2}{|c|}{\multirow{2}{*}{Approach}} & \multicolumn{4}{c|}{ROUGE Recall} & \multicolumn{1}{l|}{\multirow{2}{*}{\!\!\!Speedup}\!\!\!} \\ \cline{4-7} \multicolumn{1}{|c|}{} & \multicolumn{2}{|c|}{} & \multicolumn{1}{c}{R-1} & \multicolumn{1}{c}{R-2} & \multicolumn{1}{c}{R-L} & \multicolumn{1}{c|}{$\Delta$R} & \multicolumn{1}{l|}{} \\ \hline\hline\multicolumn{8}{|c|}{Group A (desired length 8)} \\ \hline 1& \multicolumn{1}{|c|}{\multirow{2}{*}{Search}} & \multicolumn{1}{c|}{\!\!\text{\citeauthor{schumann-etal-2020-discrete}}\!\!} & 26.32 & 9.63 & 24.19 & \multicolumn{1}{r|}{0.18} & -- \\ 2 & \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{\!\!Our replication\!\!} & 26.17 & \textbf{9.69} & 24.10 & \multicolumn{1}{r|}{0} & 1x \\ \hline 3 & \multicolumn{1}{|c|}{AR} & \multicolumn{1}{c|}{Transformer (T)} & 26.65 & 9.51 & 24.67 & \multicolumn{1}{r|}{0.87} & 58x \\ \hline 4& \multicolumn{1}{|c|}{\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}NAR\\ enc-dec\end{tabular}}} & \multicolumn{1}{c|}{Vanilla} & 24.87 & 8.33 & 22.74 & \multicolumn{1}{r|}{\!\!\!\!-4.02} & 571x \\ 5 & \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{CTC (T)} & 27.30 & 9.20 & 24.96 & \multicolumn{1}{r|}{1.5} & 571x \\ 6 & \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{CTC (LC)} & 27.76 & 9.13 & 25.33 & \multicolumn{1}{r|}{2.26} & 149x \\ \hline 7 & \multicolumn{1}{|c|}{\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}NAR\\ \!\!\!enc-only\!\!\!\end{tabular}}} & \multicolumn{1}{c|}{\newcite{sunon}} & 26.88 & 9.37 & 24.54 & 0.83 & \multicolumn{1}{|c|}{403x}\\ 8& & \multicolumn{1}{|c|}{Our NAUS (T)} & 27.27 & 9.49 & 24.96 & \multicolumn{1}{r|}{1.76} & \!\!\textbf{1396x} \\ 9 & \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{Our NAUS (LC)} & \textbf{27.94} & 9.24 & \textbf{25.51} & \multicolumn{1}{r|}{2.73} & 167x \\ \hline\hline \multicolumn{8}{|c|}{Group B (desired length 10)} \\ 
\hline 10 & \multicolumn{1}{|c|}{\multirow{2}{*}{Search}} & \multicolumn{1}{c|}{\!\!\!\text{\citeauthor{schumann-etal-2020-discrete}}\!\!\!} & 27.52 & \textbf{10.27} & 24.91 & \multicolumn{1}{r|}{0.23} & -- \\ \!\!11\!\! & \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{\!\!\!Our replication\!\!\!} & 27.35 & 10.25 & 24.87 & \multicolumn{1}{r|}{0} & 1x \\ \hline \!\!12\!\! & \multicolumn{1}{|c|}{AR} & \multicolumn{1}{c|}{Transformer (T)} & 27.06 & 9.63 & 24.55 & \multicolumn{1}{r|}{\!\!\!\!-1.23} & 66x \\ \hline \!\!13\!\! & \multicolumn{1}{|c|}{\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}NAR\\ enc-dec\end{tabular}}} & \multicolumn{1}{c|}{Vanilla} & 25.77 & 8.69 & 23.52 & \multicolumn{1}{r|}{\!\!\!\!-4.49} & 709x \\ \!\!14\!\! & \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{CTC (T)} & 28.14 & 10.07 & 25.37 & \multicolumn{1}{r|}{1.11} & 709x \\ \!\!15\!\! & \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{CTC (LC)} & 28.45 & 9.81 & 25.63 & \multicolumn{1}{r|}{1.42} & 192x \\ \hline \!\!16\!\! & \multicolumn{1}{|c|}{\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}NAR\\ \!\!\!enc-only\!\!\!\end{tabular}}} & \multicolumn{1}{|c|}{\newcite{sunon}} & 27.86 & 9.88 & 25.51 & 0.78 & \multicolumn{1}{|c|}{461x} \\ 17& & \multicolumn{1}{|c|}{Our NAUS (T)} & 28.24 & 10.04 & 25.40 & \multicolumn{1}{r|}{1.21} & \!\!\textbf{1843x} \\ \!\!18\!\! & \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{Our NAUS (LC)} & \textbf{28.55} & 9.97 & \textbf{25.78} & \multicolumn{1}{r|}{\textbf{1.83}} & 210x \\ \hline \end{tabular}% } \caption{Model analysis on headline generation. \textbf{AR:} Autoregressive models. \textbf{NAR enc-dec:} Non-autoregressive encoder--decoder. \textbf{NAR enc-only:} Non-autoregressive encoder-only. \textbf{T:} Truncating. \textbf{LC:} Length control. 
All AR and NAR models use the Transformer architecture.} \label{table:giga_ablation_result} \end{table} We first tried the vanilla encoder--decoder NAR Transformer \cite[Rows~4~\&~13,][]{gu2017non}, where we set the number of decoding slots as the desired summary length; thus, {the blank token} and the length-control algorithm are not needed. As seen, a vanilla NAR model does not perform well, and CTC largely outperforms vanilla NAR in both groups (Rows 5--6 \& 14--15). Such results are highly consistent with the translation literature~\cite{saharia-etal-2020-non,imputer,gu-kong-2021-fully,qian2020glancing,huang2021non}. The proposed encoder-only NAUS model outperforms encoder--decoder ones in both groups in terms of the total ROUGE score, when the summary length is controlled by either truncating or length-control decoding (Rows 8--9 \& 17--18). Remarkably, our non-autoregressive NAUS is even better than the autoregressive Transformer (Rows~3 \&~12). We also experimented with previous non-autoregressive work for supervised summarization~\cite{sunon}\footnote{To the best of our knowledge, the other two non-autoregressive supervised summarization models are \newcite{yang-etal-2021-pos} and \newcite{pmlr-v139-qi21a}. Their code and pretrained models are not available, making replication difficult.} in our learning-from-search setting. Although their approach appears to be encoder-only, it adds end-of-sequence (EOS) tokens at the end of the generation, and thus is unable to utilize the input--output correspondence. Their performance is higher than vanilla NAR models, but lower than ours. By contrast, NAUS is able to capture such correspondence with the residual connections, i.e., Eqns.~\eqref{eqn:res1} and~\eqref{eqn:res2}, in its encoder-only architecture. Generally, the efficiency of encoder-only NAR\footnote{The standard minimal encoder--decoder NAR model has 6 layers for the encoder and another 6 layers for the decoder~\cite{attentionisallyouneed}.
Our NAUS only has a 6-layer encoder. Our pilot study shows that more layers do not further improve performance in our encoder-only architecture.} (without length-control decoding) is \textasciitilde2 times faster than encoder--decoder NAR and \textasciitilde20 times faster than the AR Transformer. Further, our length-control decoding improves the total ROUGE score, compared with truncating, for both encoder--decoder CTC and encoder-only NAUS models (Rows~6, 9, 15, \& 18), although its dynamic programming is slower. Nevertheless, our non-autoregressive NAUS with length control is \textasciitilde200 times faster than search and \textasciitilde3 times faster than the AR Transformer. \textbf{Additional Results.} We present additional results in our appendices: analysis of beam search (Appendix~\ref{app:beam}), a case study (Appendix~\ref{app:case}), human evaluation (Appendix~\ref{app:human}), and length-transfer summarization (Appendix~\ref{app:transfer}). \section{Related Work} Summarization systems can be generally categorized into two paradigms: extractive and abstractive. Extractive systems extract certain sentences and clauses from the input, for example, based on salient features~\cite{zhou-rush-2019-simple} or feature construction~\cite{he2012document}. Abstractive systems generate new utterances as the summary, e.g., by sequence-to-sequence models trained in a supervised way~\cite{zhang2020pegasus,liurefsum}. Recently, unsupervised abstractive summarization has been attracting increasing attention. \newcite{yang-etal-2020-ted} propose to use the Lead baseline (first several sentences) as the pseudo-groundtruth. However, such an approach only works with well-structured articles (such as CNN/DailyMail). \newcite{wang-lee-2018-learning} and \newcite{baziotis-etal-2019-seq} use cycle consistency for unsupervised summarization. \newcite{zhou-rush-2019-simple} propose a step-by-step decomposable scoring function and perform beam search for summary generation.
\newcite{schumann-etal-2020-discrete} propose an edit-based local search approach, which allows a more comprehensive scoring function and outperforms cycle consistency and beam search. Our paper follows \newcite{schumann-etal-2020-discrete} but trains a machine learning model to improve efficiency and smooth out search noise. Previously, \newcite{NEURIPS2020_7a677bb4} fine-tune a GPT-2 model based on search results for unsupervised paraphrasing; \newcite{jolly2021search} adopt the search-and-learning framework to improve the semantic coverage for few-shot data-to-text generation. We extend previous work in a non-trivial way by designing a non-autoregressive generator and further proposing a length-control decoding algorithm. The importance of controlling the output length has recently been recognized in the summarization community. \newcite{baziotis-etal-2019-seq} and \newcite{sunon} adopt a soft penalty to encourage shorter sentences; \newcite{yang-etal-2021-pos} and \newcite{pmlr-v139-qi21a} control the summary length through POS-tag and EOS predictions. None of these studies can control the length explicitly. \newcite{song-etal-2021-new} are able to precisely control the length by progressively filling a pre-determined number of decoding slots, analogous to the vanilla NAR model in our non-autoregressive setting. Non-autoregressive generation was originally proposed for machine translation~\cite{gu2017non, guo2020fine, saharia-etal-2020-non} and was later extended to other text generation tasks. \newcite{wiseman-etal-2018-learning} address the table-to-text generation task and model output segments by a hidden semi-Markov model \cite{ostendorf1996hmm}, simultaneously generating tokens for all segments. \newcite{jia2021flexible} apply non-autoregressive models to extractive document-level summarization.
\newcite{sunon} stack a non-autoregressive BERT model with a conditional random field (CRF) for abstractive summarization; since the summary is shorter than the input text, their approach puts multiple end-of-sequence (EOS) tokens at the end of the sentence, and thus is unable to utilize the strong input--output correspondence in the summarization task. \newcite{yang-etal-2021-pos} apply an auxiliary part-of-speech (POS) loss and \newcite{pmlr-v139-qi21a} explore pretraining strategies for encoder--decoder non-autoregressive summarization. All these studies concern supervised summarization, while our paper focuses on unsupervised summarization. We adopt CTC training in our encoder-only architecture, allowing blank tokens to better align input and output words, which is more appropriate for summarization. \section{Conclusion} In this work, we propose a non-autoregressive unsupervised summarization model (NAUS), for which we further propose a length-control decoding algorithm based on dynamic programming. Experiments show that NAUS not only achieves state-of-the-art unsupervised performance on the Gigaword headline generation and DUC2004 datasets, but is also much more efficient than search methods and autoregressive models. Appendices present additional analyses and length-transfer experiments. \textbf{Limitation and Future Work.} Our paper focuses on unsupervised summarization due to the importance of low-data applications. One limitation is that we have not obtained rigorous empirical results for supervised summarization, where the developed model may also work. This is because previous supervised summarization studies lack explicit categorization of summary lengths~\cite{yang-etal-2020-ted,pmlr-v139-qi21a}, making comparisons unfair and problematic \cite{schumann-etal-2020-discrete}. Such an observation is also evidenced by \newcite{sunon}, where the same model may differ by a few ROUGE points when generating summaries of different lengths.
Nevertheless, we have compared with \newcite{sunon} in our setting and shown the superiority of NAUS under fair comparison. We plan to explore supervised summarization in future work after we establish a rigorous experimental setup, which is beyond the scope of this paper. \section{Acknowledgments} We thank Raphael Schumann for providing valuable suggestions on the work. We also thank the Action Editor and reviewers for their comments during ACL Rolling Review. The research is supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) under grant No.~RGPIN2020-04465, the Amii Fellow Program, the Canada CIFAR AI Chair Program, a UAHJIC project, a donation from DeepMind, and Compute Canada (www.computecanada.ca).
\section{Introduction} \label{intro} Modern astronomy rests upon highly technological ground and space-based observatories that are capable of probing the sky with high sensitivity in almost all bands of the electromagnetic spectrum. As a consequence, extremely large and rapidly growing amounts of high-quality digital data are being accumulated. Archive data centers, which often openly provide ready-to-use data products based on consolidated data formats like FITS, together with the rapid increase of computing power and network communication speed, and the existence of world-wide initiatives like the Virtual Observatory (VO) \citep[see e.g.][]{VO}, are providing unprecedented opportunities to obtain high-quality multi-frequency data. A new era of scientific discovery, based on large amounts of archival and fresh data covering the entire electromagnetic spectrum and accumulated over a very wide time interval, has started. Existing digital archives typically include astronomical data of one of the following types: \begin{enumerate} \item data produced as part of diverse and unrelated scientific projects proposed by single observers to one specific astronomical facility \item data from large surveys of the sky carried out in different energy bands \item data from short or long-term monitoring of specific sources \item data taken as part of large multi-observatory programs to simultaneously observe specific targets in specific energy bands. \end{enumerate} In this contribution I use blazars, a special type of extragalactic source emitting highly variable radiation across the electromagnetic spectrum, to illustrate how this new opportunity of accessing large amounts of spectral and timing data is currently exploited in terms of techniques for visualization and analysis. I also briefly describe some new software tools, developed within the VO or related activities, that can be used to efficiently retrieve and analyze multi-frequency multi-temporal archival data.
\section{Blazars as an example of multi-frequency multi-temporal data analysis} \label{blazars} Blazars are a special type of Active Galactic Nuclei (AGN) that are known to be strong emitters in all bands of the electromagnetic spectrum. These peculiar sources, known since the discovery of AGN fifty years ago \cite{Schmidt}, display very unusual properties like superluminal motion and are the most variable persistent sources in the extragalactic sky. The extreme properties of blazars are thought to be the result of emission from charged particles interacting with a magnetic field in a jet of plasma that moves at relativistic speeds and happens to point very close to the line of sight \citep[see][for a review]{UP95}. These are conditions that can occur only rarely, which is why only about 3,000 blazars are known \citep{bzcat}, compared to over one million AGN (http://quasars.org/milliquas.htm). \begin{figure}[h!] \includegraphics[width=19pc]{3C454SED.png} \caption{The SED of the blazar 3C454.3 built from over 30,000 independent multi-frequency measurements. Note the extremely large variability at optical, UV, X-ray and especially at gamma-ray energies, where the brightest measurement is about 10,000 times larger than the weakest detection.} \label{SED3c454.3} \end{figure} Over the past several years blazars have been observed, often repeatedly, at all frequencies; in some cases, especially in the radio and optical bands, some of the brighter ones have been monitored for long periods. Consequently there are many databases and catalogs that include measurements of blazars at all frequencies (e.g. radio, mm, IR, optical, UV, X-ray and gamma-ray). The broad-band emission in blazars is traditionally represented as Spectral Energy Distributions (SEDs), that is, plots of intensity (usually flux density multiplied by frequency, $\nu f(\nu)$, or luminosity, $\nu L(\nu)$) versus energy or, equivalently, frequency, $\nu$, of the emitted radiation.
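To make the quantities on the SED axes concrete, a flux density measured in Jansky at frequency $\nu$ converts to the plotted $\nu f(\nu)$ (and, given a luminosity distance, to $\nu L(\nu)$) as in the following sketch; the constants are standard cgs values, and the function names are illustrative:

```python
import math

JY_CGS = 1e-23  # 1 Jansky in erg s^-1 cm^-2 Hz^-1

def nu_f_nu(freq_hz, flux_jy):
    """nu * f(nu) in erg s^-1 cm^-2: the quantity on the y-axis of an SED plot."""
    return freq_hz * flux_jy * JY_CGS

def nu_l_nu(freq_hz, flux_jy, dist_lum_cm):
    """nu * L(nu) in erg s^-1, assuming isotropic emission at
    luminosity distance dist_lum_cm (in cm)."""
    return 4.0 * math.pi * dist_lum_cm**2 * nu_f_nu(freq_hz, flux_jy)
```

For example, a 1 Jy detection at 37 GHz (the Mets\"{a}hovi band) corresponds to $\nu f(\nu) = 3.7\times10^{-13}$ erg s$^{-1}$ cm$^{-2}$.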
As an example, figure \ref{SED3c454.3} shows the SED of the blazar 3C454.3, currently one of the most densely populated SEDs in existence, as it includes approximately 30,000 independent flux measurements collected over a time period of more than thirty years. A large fraction of the data shown in Fig. \ref{SED3c454.3} comes from monitoring programs and from on-line databases like UMRAO (dept.astro.lsa.umich.edu/datasets/umrao.php) at 5, 8 and 14.5 GHz, OVRO (www.astro.caltech.edu/ovroblazars) \citep{Richards} at 15~GHz, Mets\"{a}hovi (metsahovi.aalto.fi/en/research/projects/quasar/) at 37~GHz, SMARTS (www.astro.yale.edu/smarts) \citep{Bonning} in the optical and infrared bands, WEBT (www.aoto.inaf.it/blazars/webt) \citep{Villata} at optical, IR and radio frequencies, the BeppoSAX and Swift databases in the X-rays, and Fermi in the gamma-ray band (www.asdc.asi.it/mmia). The 1~GeV light curve (which appears as a vertical line at 2.4$\times$10$^{23}$~Hz in Fig. \ref{SED3c454.3}) was built with Fermi-LAT data using the adaptive-bin method developed by \cite{Lott}. Another important example of multi-frequency data acquisition is the organization of campaigns of simultaneous observations of one or more sources involving several different facilities. The data collected in these cases are more homogeneous. An example of this approach is the Planck, Swift, Fermi and ground-based simultaneous observations of a large sample of blazars, including 175 sources selected according to four different criteria in the radio, X-ray and gamma-ray bands \citep{GiommiPlanck}. Figure \ref{SED3C279} shows the SED of the source 3C279 taken from \cite{GiommiPlanck}, which includes {\it simultaneous data} covering a spectral range of 15 orders of magnitude. The simultaneous measurements from the instruments of Planck (LFI, HFI) \cite{Tauber}, Swift (UVOT, XRT) \cite{Gehrels} and Fermi (LAT) \cite{Atwood} are shown as red points, while quasi-simultaneous data (i.e.
observations carried out within two months of each other) are plotted as orange points. Archival data taken at different random times appear as gray points. Simultaneous data is clearly crucial for measuring the parameters related to the emission process, like the energy where the emitted power peaks and the intensity level of the peak. \begin{figure}[h!] \includegraphics[width=19pc]{3C279SED.png} \caption{The SED of the blazar 3C279 built with simultaneous Planck, Swift and Fermi data \citep{GiommiPlanck}, shown as red symbols, and with non-simultaneous archival data appearing as gray points.} \label{SED3C279} \end{figure} As Figs. \ref{SED3c454.3} and \ref{SED3C279} demonstrate, the amplitude of variability in blazars is a strong function of the energy where the emission occurs, ranging from a factor of a few in the radio band up to a factor of 10,000 at 1~GeV. This dependence requires that the time scale of variability in each energy band be properly taken into consideration when dealing with multi-frequency data that is not simultaneous. Clearly, observation times must be visualized with methods that go beyond simple spectral distributions like that of Fig. \ref{SED3c454.3}. \begin{figure}[h!] \includegraphics[width=20pc]{3c454_3_3Dsed.png} \caption{The multi-frequency emission of the blazar 3C454.3 covering the period 2000--2013 is represented as a 3D plot generated using TOPCAT, a popular application developed within Virtual Observatory projects.} \label{TSED3c454.3} \end{figure} \begin{figure}[h!] \includegraphics[width=19pc]{MWLightcurves.png} \caption{The time evolution of the emission from 3C454.3 at different frequencies is shown as a 2D plot.
Only a limited number of energy bands can be shown this way.} \label{MWlightcurves} \end{figure} \section{SED software tools} \label{tools} As described above, the remarkable number of measurements now accessible and the very large intensity variations observed in several well-known sources require that the interpretation of the physical emission processes be carried out by analyzing the SEDs in the time domain. Until recently, however, existing software packages for building SEDs did not take time into account. This is rapidly changing, and new tools capable of handling the time variable are appearing or are planned for the near future. Figure \ref{TSED3c454.3} is an example of how the time dependence of the energy distribution of a source (3C454.3 in this case) can be visualized as a 3D plot, illustrating in a single picture how time scales, flare details, variability amplitudes and time lags vary across the electromagnetic spectrum. This plot was generated using the TOPCAT application (http://www.star.bris.ac.uk/$\sim$mbt/topcat), a widely used interactive graphical viewer developed as part of several UK and Euro-VO projects. In the following I briefly describe two new software tools that can be used to download multi-frequency measurements from many different catalogs, databases and sky surveys, and to build and analyze blazar SEDs. \subsection{The IRIS SED analysis tool} IRIS is a JAVA application developed as part of the activities of the Virtual Astronomical Observatory (VAO), the US contribution to the world-wide VO initiative. IRIS can retrieve data using VO protocols from the NASA/IPAC Extragalactic Database (NED) and from the ASI Science Data Center (ASDC). It can be used to plot and fit spectral energy distributions in a number of ways. As an example, Fig. \ref{iris} shows a session of IRIS displaying the SED of the blazar MKN421 as a $\nu f(\nu)$ vs $\nu$ plot.
The current version of this desktop application (V2.0) only allows limited control of the time variable, and visualization must be done using energy or frequency (in various units) on the X-axis. The application can be downloaded from the VAO web pages at http://www.usvao.org/science-tools-services/iris-sed-analysis-tool/ \begin{figure}[h!] \includegraphics[width=19pc]{IRIS.png} \caption{The IRIS application (V2.0) showing the SED of the blazar MKN421 built with flux measurements taken from NED and ASDC.} \label{iris} \end{figure} \subsection{The ASDC SED builder} The ASDC SED builder is a web-based application developed at the ASDC (www.asdc.asi.it). \begin{figure}[h!] \includegraphics[width=19pc]{SEDtool.png} \caption{The ASDC SED builder V3.0, available on the web at http://tools.asdc.asi.it/SED} \label{TEDTool} \end{figure} The current version (V3.0, see Fig. \ref{TEDTool}) allows users to build SEDs using data from a large number of catalogs and on-line services, also in combination with personal data. This version of the tool can handle time-resolved SEDs and multi-frequency light curves. The service can be accessed at tools.asdc.asi.it/SED. \subsection{SEDs and the time domain} \begin{figure}[h!] \includegraphics[width=19pc]{DiscrCorrFunction.png} \caption{The Z-transformed discrete cross-correlation function of the emission from 3C454.3 in different energy bands, compared to the flux emitted at 1 GeV. Significant correlation is clearly present, with time lags that range from nearly zero to several weeks, depending on the frequency considered.} \label{zdcf3c454.3} \end{figure} \begin{figure}[h!] \includegraphics[width=21pc]{ZDCF_Mkn421.png} \caption{The Z-transformed discrete cross-correlation function of the X-ray (1 keV) and gamma-ray (1 GeV) emission of the blazar MKN 421.
No correlation is present between the two energy bands.} \label{zdcfmkn421} \end{figure} One important question in the analysis of the multi-frequency emission of cosmic sources is whether the emission in different energy bands is correlated. In almost all cases the available measurements are sparse and not uniformly sampled. An efficient method of measuring the amount of correlation between the emission in two energy bands with sparse data is the Z-transformed Discrete Correlation Function \citep[ZDCF, see e.g.][]{alexander}. Fig. \ref{zdcf3c454.3} shows the ZDCF of the emission from 3C454.3 at 1 mm, 37 GHz and 15 GHz compared to the gamma-ray emission at 1 GeV. The fluxes are clearly correlated, but with time lags that range from approximately zero in the mm band to several weeks in the radio band, depending on the frequency. Note from Fig. \ref{MWlightcurves} that, although the black (1 GeV) and green (1 mm) light curves show the same overall behavior in terms of peaks and minima occurring at approximately the same time, the emission at 1 GeV displays more structured variability, reflecting different details of the emission mechanism. The optical light curve (red points) follows the 1 GeV light curve even in fine detail, although the data in this band are certainly sparser than at other frequencies. Figure \ref{zdcfmkn421} gives the discrete correlation function of the X-ray (1~keV from Swift-XRT data) and gamma-ray (1~GeV from Fermi-LAT data) emission from MKN421 recorded between summer 2008 and spring 2012. No correlation is observed in this case. This is interesting since MKN 421 is a blazar of the HBL type, that is, a blazar that radiates up to the TeV band, and the radiation from this type of source is often interpreted as due to a single homogeneous component. The complete lack of correlation between the flux emitted in the X-ray and gamma-ray bands challenges this simple interpretation.
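The idea behind these discrete correlation methods can be sketched in a few lines: correlate all pairs of points from two unevenly sampled light curves and bin the products by time separation. The sketch below (in Python, an illustration rather than code used in this work) implements a plain discrete correlation function in the spirit of the estimator the ZDCF of \cite{alexander} builds on; it omits the z-transform, error estimate and equal-population binning of the full method.

```python
import numpy as np

def discrete_correlation(t1, f1, t2, f2, lag_bins):
    """Plain discrete correlation function for two unevenly sampled
    light curves (t1, f1) and (t2, f2): correlate every pair of
    points and average the products in bins of the time lag t2 - t1."""
    f1 = (f1 - f1.mean()) / f1.std()   # standardize both series
    f2 = (f2 - f2.mean()) / f2.std()
    lags = t2[None, :] - t1[:, None]   # all pairwise lags
    udcf = f1[:, None] * f2[None, :]   # unbinned correlation values
    dcf = np.full(len(lag_bins) - 1, np.nan)
    for k in range(len(lag_bins) - 1):
        in_bin = (lags >= lag_bins[k]) & (lags < lag_bins[k + 1])
        if in_bin.any():
            dcf[k] = udcf[in_bin].mean()
    return dcf
```

A peak at positive lag $\tau$ indicates that the second band lags the first by approximately $\tau$, which is the kind of signature seen for the radio bands in Fig. \ref{zdcf3c454.3}.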
An efficient and novel way of representing fast and large variations in the energy distribution of cosmic sources is to run a set of frames in sequence, each representing the status of the SED in a particular time interval, as in a movie. This, of course, is possible only if a sufficiently large number of measurements across the electromagnetic spectrum is available at all times. This requirement is already satisfied for a small number of bright sources today; the very rapid increase in data production that we are experiencing ensures that many others will follow in the future. \section{Conclusions and prospects for the future} \label{conclusion} Significant progress in the visualization and analysis of multi-frequency, multi-temporal astronomical data has been made recently, mostly as part of the world-wide VO initiative and of related activities. This is true both in terms of the availability of data from catalogs, databases and surveys, and in the development of new software tools. The discovery potential offered by the exponential increase of available data, computer power and communication speed is extremely large. In this contribution I described some examples of the multi-frequency data sets and tools that are available today. Of course there is ample room for improvement, and much more progress is expected in the near future. \begin{figure}[h!] \includegraphics[width=18pc]{MovieFrame1.png} \caption{A frame from a SED movie of the blazar 3C454.3 covering the period 2008-2013, built using the prototype described in the text. The top panel shows how the 1 GeV flux varies as a function of time. The red dashed vertical lines mark the time interval considered in this frame (between 2010.880 and 2010.890). Spectral data taken during the same time interval, when the source showed the maximum flux in the gamma-ray and in most other energy bands, are shown in the bottom panel as red symbols.} \label{MovieFrame1} \end{figure} \begin{figure}[h!]
\includegraphics[width=18pc]{MovieFrame2.png} \caption{A second frame from a SED movie of 3C454.3. The time interval corresponding to the red points in this case is 2010.825 - 2010.830, just after the maximum emission in the gamma-ray band.} \label{MovieFrame2} \end{figure} With this motivation, and in an attempt to implement new methods of visualizing the variability of the emission across the electromagnetic spectrum, I developed a prototype software tool to run in sequence SEDs corresponding to different time slices, that is, to produce SED movies. Figures \ref{MovieFrame1} and \ref{MovieFrame2} show two frames (stills) taken from the SED movie of 3C454.3 made with this prototype, corresponding to a period of approximately three days in late 2010 when the blazar underwent a very large gamma-ray flare. The data taken during the frame period are shown in red, while all the remaining data are plotted in light gold. The top panel shows the intensity of 3C454.3 in the gamma-ray band (1 GeV) as a function of time and is used to illustrate the passing of time. The bottom panel shows the full SED of the source using the same color coding for the data. This prototype will be further developed in the near future, likely as a collaborative effort among international institutions, and made openly accessible within the ASDC SED builder. A sample of a full SED movie of the blazar 3C454.3 is currently available on-line and can be seen on the main page of the current version of the ASDC SED tool (V3.0). Usually blazar SED data are compared to theoretical models that are based on the current best understanding of the physical processes responsible for the emission of the multi-frequency radiation. When possible, this is done using simultaneous data gathered through observational campaigns involving ground-based and satellite observatories. However, as shown in Figs.
\ref{MWlightcurves} and \ref{zdcf3c454.3}, variability in different bands follows different dynamical timescales and may show significant time lags, implying that fitting simultaneous data may not be sufficient for a full understanding of the physics at work in the source. New methods of analyzing data, which go beyond fitting simultaneous SEDs and fully take into account the dynamical time scales of the emission processes in different energy bands, require the development of new analysis tools. This is a challenge for both theoreticians and scientific software developers, to be addressed in the near future. The advent of advanced visualization and fitting techniques such as SED movies and dynamical model fitting will likely happen soon, providing more diagnostic and interpretation power. \bigskip \noindent {\bf Acknowledgments} I acknowledge the use of archival data and software tools from the ASDC, a facility managed by the Italian Space Agency (ASI). Part of this work is based on archival data from the NASA/IPAC Extragalactic Database (NED). \nocite{*} \bibliographystyle{elsarticle-num}
\section{Introduction} \vspace*{-1ex} Darwinian evolution incorporates three basic processes: mutation, selection and random drift. In evolving populations mutations create variation, which produce fitness differences for selection to act upon and random drift accounts for stochastic effects arising due to the finite size of the population. The consideration of finite populations has a long tradition in population genetics \cite{fisher:1930,wright:1931,moran:1962ef, kimura:1968aa} with frequency independent selection. Often, finite size effects can be viewed as corrections to a continuum theory based on an infinite population \cite{peng:2004aa,cohen:2005aa}. Here, we analyze such finite population effects for frequency dependent selection. Under frequency dependent selection, or co-evolutionary dynamics, the fitness of each individual is affected by interactions with other members of the population. The corresponding mathematical framework is evolutionary game theory \cite{maynard-smith:1982to,nowak:2004aa}. In the absence of mutations and in the deterministic limit of large well-mixed populations where individuals interact randomly, the replicator equation \cite{taylor:1978wv,hofbauer:1998mm} describes the change in frequency of the different strategic types in the population. Including mutations in this limit leads to the replicator-mutator equation \cite{bomze:1995rm,page:2002an}. Interestingly, results can be quite different when turning to coevolutionary processes in finite populations based on the Moran process \cite{moran:1962ef,nowak:2004pw,taylor:2004wv}. The frequency dependent Moran process is a stochastic birth-death process where an individual is randomly selected for reproduction with a probability proportional to its fitness. The clonal offspring then replaces a randomly chosen individual in the population such that the population size remains constant. 
If there are only two types of individuals in the population, the evolutionary process can be mapped onto a random walk in one dimension \cite{antal:2005aa}. For more than two types the dynamics becomes significantly more difficult \cite{imhof:2005oz}. Further complications arise when individuals no longer interact randomly but, instead, spatial structure leads to limited local interactions. Traditionally, this is modeled by arranging individuals on a spatial lattice \cite{nowak:1992pw,lindgren:1994to,szabo:1998wv} or on more general networks \cite{ebel:2002aa,szabo:2002mf,vukov:2005fa,santos:2005bb,santos:2006pn,ohtsuki:2006na} and leads to new phenomena, which include critical phase transitions \cite{szabo:2002te}, and can affect the evolutionary process even in frequency independent settings \cite{lieberman:2005qx,antal:2006pr}. \\ \indent For two strategic types in well-mixed populations we established an explicit connection between microscopic stochastic processes in finite populations and the deterministic limit of infinite populations \cite{traulsen:2005hp}. The finite size effects are captured in the drift and diffusion terms of a Fokker-Planck equation, where the diffusion term vanishes with $1/N$ for increasing population sizes. In the present paper the Fokker-Planck equation is generalized to an arbitrary number of strategies as well as to cover mutational changes of the strategic type. \\ \indent In Sec.\ \ref{gmps}, the frequency dependent Moran process is generalized to $d$ strategies including mutations. The derivation of the corresponding Fokker-Planck equation is recalled in Sec.\ \ref{fpes} and the stationary distribution is derived in Sec.\ \ref{sds}. This covers frequency independent and frequency dependent selection. As applications, we consider the Prisoner's Dilemma and the Snowdrift game. The relation to dynamics in infinite populations is derived in Sec.\ \ref{repmuts}.
An example for an alternative microscopic process is provided in Sec.\ \ref{lurs}. \vspace*{-2ex} \section{Generalized Moran process} \label{gmps} \vspace*{-1ex} The frequency dependent Moran process has been introduced as a stochastic process among two types \cite{nowak:2004pw}. Although the extension to more strategies is straightforward \cite{imhof:2005oz}, the dynamics becomes significantly more difficult. In complete analogy to the Moran process among two types, an individual is chosen at random proportional to fitness in every time step. This individual produces offspring which replaces a randomly chosen individual. In addition to the original frequency dependent Moran process, we include mutations, i.e.\ an individual of type $k$ produces offspring of type $j$ with probability $q_{jk}$, where $\sum_{j=1}^d q_{jk}=1$ and $d$ is the number of different types in the population. The special case of vanishing mutations is recovered by setting $q_{jk}= \delta_{jk}$. Note that there are no restrictions on the frequency of mutations or on the states in which mutations occur. For $d>2$, the state space of the system is no longer a simple one-dimensional chain, but the discretized simplex $S^N_d=\lbrace(i_1,\ldots,i_d) \in N_0^d: \sum_{j=1}^d i_j = N\rbrace$, where $i_j$ is the number of individuals of type $j$ and $N$ is the total size of the population. The fitness $\pi_j$ of an individual of type $j$ is a linear combination of the payoff from interactions given by the entries of the $d \times d$ payoff matrix $M$ with elements $m_{jk}$ and a baseline fitness, which is set to one for convenience. Hence, \begin{equation} \pi_j(i_1,\ldots,i_d) = 1-w+w\frac{\sum_{k=1}^d m_{jk} i_k-m_{jj}}{N-1}, \end{equation} where self interactions are excluded by subtracting the payoff $m_{jj}$. The selection intensity $w$ determines the relative contributions of the baseline fitness and the interactions to the total fitness of an individual.
For $w \to 0$ selection is weak and payoffs from the game represent small perturbations to the constant background fitness. For strong selection, $w \to 1$, the influence of the background fitness vanishes and fitness is determined entirely by interactions. Since individuals are selected {\em proportional} to fitness, the fitness has to be a positive number. This can be guaranteed by payoff matrices with positive entries only, $m_{jk} \geq 0$ for all $j$ and $k$, or by a sufficiently small $w$. Based on this fitness definition, the probability $T_{k j}$ that an individual of type $k$ is replaced by an individual of type $j$ can be calculated. $T_{k j}$ is the product of the probability that the individual chosen at random for elimination is of type $k$ and the probability that an offspring of type $j$ is produced. There are two possibilities to produce such an offspring: An individual of type $j$ is chosen for reproduction with probability $i_j \pi_j/(N \phi)$, where $\phi = \sum_{m=1}^d \pi_m i_m/N$ is the average payoff in the population, and produces identical offspring (with probability $q_{jj}$). The second possibility is that an individual of type $l$ produces an offspring of type $j$ due to mutations (with probability $q_{lj}$). Hence, the probability $T_{k j}$ is given by \begin{equation} T_{k j}(i_1,...,i_d) = \frac{1}{ N \phi } \sum_{l=1}^d i_l \pi_l(i_1,\ldots,i_d) \; q_{lj} \frac{i_k}{N}. \label{Morantransprob} \end{equation} These transition probabilities describe the effect of selection and mutation in finite populations. The probabilistic nature of this coevolutionary process accounts for random drift. Since the state space of this system is no longer a one-dimensional chain for $d>2$, many standard methods can no longer be applied. However, as in the case $d=2$, a Fokker-Planck equation for the dynamics of the system can be derived for large populations.
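For concreteness, Eq.\ (\ref{Morantransprob}) can be transcribed numerically. The following sketch (in Python; an illustration, not code from the original analysis) computes the full matrix of transition probabilities $T_{kj}$ from the counts of each type, a payoff matrix and a mutation matrix:

```python
import numpy as np

def transition_probabilities(i, M, Q, w):
    """Transition probabilities T[k, j] of Eq. (2): the probability
    that a randomly chosen individual of type k is replaced by an
    offspring of type j in one step of the generalized Moran process.
    i : array of counts of each type (summing to N)
    M : d x d payoff matrix
    Q : mutation matrix, Q[j, l] = prob. that a type-l parent
        produces a type-j offspring (columns sum to one)
    w : selection intensity."""
    i = np.asarray(i, dtype=float)
    N = i.sum()
    # fitness of Eq. (1), with self-interactions excluded
    pi = 1.0 - w + w * (M @ i - np.diag(M)) / (N - 1.0)
    phi = (pi * i).sum() / N                 # population average
    # probability that the offspring produced is of type j
    birth = Q @ (i * pi) / (N * phi)
    # T[k, j]: a type-k individual dies (prob. i_k / N), type j is born
    return np.outer(i / N, birth)
```

Summing $T$ over both indices gives one, and for $w=0$ without mutations the matrix reduces to $T_{kj} = i_k i_j / N^2$, i.e.\ pure neutral drift.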
Although we are motivated by the generalized frequency dependent Moran process, the calculations in Secs.\ \ref{fpes} and \ref{sds} are valid in a more general sense. In particular, they apply to any coevolutionary birth-death process, such as the local update rule discussed in \cite{traulsen:2005hp}, see Sec.\ \ref{lurs}. \section{Fokker-Planck equation} \label{fpes} \vspace*{-1ex} In analogy to Ref.\ \cite{traulsen:2005hp} for $d=2$, a general Fokker-Planck equation for the probability density of strategies $\rho({\boldsymbol x})$ in an evolutionary game among $d$ types can be derived. The constant population size of the generalized Moran process leads to a normalization of ${\boldsymbol x}$, $\sum_{j=1}^d x_j =1$. Thus, we have $d-1$ independent variables. As shown in the Appendix, a Kramers-Moyal expansion yields the Fokker-Planck equation: \begin{equation} \label{FPE} \dot \rho({\boldsymbol x}) = - \sum_{k=1}^{d-1} \frac{\partial}{\partial x_k} \rho({\boldsymbol x}) a_{k}({\boldsymbol x}) +\frac{1}{2} \! \sum_{j,k=1}^{d-1} \frac{\partial^2}{\partial x_k \partial x_j}\rho({\boldsymbol x}) b_{jk}({\boldsymbol x}), \end{equation} where ${\boldsymbol x} = (x_1,\ldots, x_d)$ with $x_j = i_j/N$. The drift coefficients are given by \begin{equation} a_{k}({\boldsymbol x}) = \sum_{j=1}^d \left[ T_{j k}({\boldsymbol x}) - T_{k j}({\boldsymbol x}) \right] \label{drift} \end{equation} and can be interpreted as an effective flow into state $k$: The first term is the probability that the number of $k$ individuals increases, whereas the second term describes the probability for transitions from $k$ to other states. For the diffusion matrix, we find \begin{equation} b_{jk}({\boldsymbol x}) \!\! = \!\! \frac{1}{N} \!\!\left[ -T_{j k}({\boldsymbol x}) - T_{k j}({\boldsymbol x}) +\delta_{jk} \sum_{l=1}^d \left( T_{j l}({\boldsymbol x})+T_{l j}({\boldsymbol x}) \right) \! \right].
\end{equation} For two types, $d=2$, and without mutations, the Fokker-Planck equation from \cite{traulsen:2005hp} is recovered. Eq.\ (\ref{FPE}) describes the dynamics on the basis of changes in the probability distribution. Traditionally, coevolutionary systems are often described by considering dynamical equations for the state of the system. General conclusions are then based on averages over several realizations of the process. Here, the correspondence between both descriptions becomes clear, since the Fokker-Planck equation (\ref{FPE}) corresponds to a stochastic differential equation \cite{gardiner:1985bv,honerkamp:1994mm,kampen:1997xg}. The noise arises only from the stochastic updating and is therefore not correlated in time. Hence, the It{\^o} calculus can be applied to derive a Langevin equation describing the time evolution of ${\boldsymbol x}$. Equation (\ref{FPE}) corresponds to the stochastic replicator-mutator equation \begin{equation} \dot x_k = a_k({\boldsymbol x}) + \sum_{j=1}^{d-1} {c_{kj}({\boldsymbol x})} \xi_j(t) \label{langevin} \end{equation} where $ {c_{kj}}$ is defined by $\sum_{l=1}^{d-1} {c_{kl}({\boldsymbol x})} {c_{lj}({\boldsymbol x})} = b_{kj}({\boldsymbol x})$. Each element of the vector ${\boldsymbol \xi}$ is Gaussian white noise with unit variance. The different elements are uncorrelated, $\langle \xi_k(t) \xi_j(s) \rangle = \delta_{kj} \delta(t-s)$. Equation (\ref{langevin}) makes it possible to approximate fluctuations arising from finite populations by Langevin terms appearing in a replicator equation. For any given payoff matrix, selection pressure, population size and mutation rate, it provides a quantitative description of the fluctuations introduced by stochastic microscopic update processes.
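As an illustration of how Eq.\ (\ref{langevin}) can be used in practice, the following sketch (in Python; a simplified example, not code from this work) integrates the one-dimensional case $d=2$ with an It{\^o}/Euler-Maruyama scheme, using the neutral drift and diffusion terms $a(x)=u(1-2x)$ and $b(x)=(u(2x-1)^2+2x(1-x))/N$ derived in Sec.\ \ref{sds}:

```python
import numpy as np

def langevin_trajectory(N, u, x0=0.5, dt=0.1, steps=20000, seed=0):
    """Euler-Maruyama integration of the d = 2 Langevin equation
    dx = a(x) dt + sqrt(b(x)) dW for neutral selection (w = 0),
    with a(x) = u (1 - 2x) and b(x) = (u (2x-1)^2 + 2x(1-x)) / N."""
    rng = np.random.default_rng(seed)
    x = x0
    traj = np.empty(steps + 1)
    traj[0] = x
    for n in range(steps):
        a = u * (1.0 - 2.0 * x)
        b = (u * (2.0 * x - 1.0) ** 2 + 2.0 * x * (1.0 - x)) / N
        x += a * dt + np.sqrt(max(b, 0.0) * dt) * rng.normal()
        x = min(max(x, 0.0), 1.0)    # keep the frequency in [0, 1]
        traj[n + 1] = x
    return traj
```

For $u$ above the critical mutation rate of Sec.\ \ref{sds}, a single long trajectory fluctuates around $x=1/2$ with an amplitude that shrinks as $N$ grows, reflecting the $1/N$ scaling of the diffusion term.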
\section{Stationary distribution} \label{sds} \vspace*{-1ex} The stationary distribution of strategies $\rho^{\ast}({\boldsymbol x})$ can be derived from the Fokker-Planck equation (\ref{FPE}), which can be written in the form \begin{equation} \label{flowequation} \dot \rho({\boldsymbol x}) = -\nabla \cdot {\boldsymbol J}({\boldsymbol x}), \end{equation} where the $d-1$ elements of the probability current ${\boldsymbol J}({\boldsymbol x})$ are \begin{equation} J_k({\boldsymbol x}) = \rho({\boldsymbol x}) a_{k}({\boldsymbol x}) - \frac{1}{2} \sum_{j=1}^{d-1} \frac{\partial}{\partial x_j}\rho({\boldsymbol x}) b_{jk}({\boldsymbol x}). \end{equation} In general, the stationary solution $\dot \rho({\boldsymbol x})= 0$ is equivalent to a probability current without sources. A special case of this is a probability current vanishing everywhere, i.e.\ $J_k({\boldsymbol x})=0$ for all ${\boldsymbol x}$. This leads to \begin{equation} \ \sum_{k=1}^{d-1} b_{jk}({\boldsymbol x}) \frac{\partial }{\partial x_k} \rho^{\ast}({\boldsymbol x}) = \rho^{\ast}({\boldsymbol x}) \! \left[2 a_j({\boldsymbol x}) \!-\! \sum_{k=1}^{d-1} \frac{\partial}{\partial x_k} b_{jk}({\boldsymbol x})\right]. \label{rhoeq} \end{equation} If we exclude the degenerate cases with $\det b_{jk} ({\boldsymbol x})= 0$, the matrix $b_{jk}({\boldsymbol x})$ can be inverted and Eq.~(\ref{rhoeq}) reduces to \begin{equation} \frac{\partial}{\partial x_i} \! \ln \rho^{\ast}({\boldsymbol x}) \!\!=\!\! \sum_{j=1}^{d-1} b^{-1}_{ij}({\boldsymbol x}) \!\left[2 a_j({\boldsymbol x}) \! - \! \sum_{k=1}^{d-1} \frac{\partial}{\partial x_k} b_{jk}({\boldsymbol x})\right] \!\!=\! \Gamma_i({\boldsymbol x}) \end{equation} A solution of this equation only exists if ${\boldsymbol \Gamma} ({\boldsymbol x})$ is a gradient \cite{kampen:1997xg,gardiner:1985bv}.
In that case, the stationary solution of Eq.\ (\ref{FPE}) can be obtained from a line integral between ${\boldsymbol x_0}$ and ${\boldsymbol x}$ (which is independent of the path) \begin{equation} \rho^{\ast}({\boldsymbol x}) ={\cal N} \exp\left[ \int_{\boldsymbol x_0}^{\boldsymbol x} {\boldsymbol \Gamma} \left({\boldsymbol y} \right) \cdot d{\boldsymbol y} \right], \label{stathighd} \end{equation} where $\cal N$ is a normalization constant. However, in co-evolutionary systems the requirement that ${\boldsymbol \Gamma}({\boldsymbol x})$ is a gradient is often not fulfilled. In that case, a stationary solution may still exist in which the ansatz of a vanishing probability current, ${\boldsymbol J}({\boldsymbol x}) ={\boldsymbol 0}$, is not valid although the divergence of ${\boldsymbol J}({\boldsymbol x})$ remains zero, $\nabla \cdot {\boldsymbol J}({\boldsymbol x})=0$. In these cases the stationary distribution has to be derived by different means. A particularly simple case where these problems do not arise is $d=2$, where only two types $A$ and $B$ are present. The state of the system can be described by the fraction $x$ of $A$ individuals. The remaining fraction $1-x$ consists of $B$ individuals. Assuming a mutation rate $u$ from $A$ to $B$ as well as from $B$ to $A$, the transition probabilities read \begin{eqnarray} T_{BA}({x}) & = & \frac{ x \pi_A(x) (1-u)+(1-x) \pi_B(x) u}{\phi} (1-x),\nonumber \\ T_{AB}({x}) & = & \frac{ x \pi_A(x) u + (1-x) \pi_B(x) (1-u)}{\phi} x. \end{eqnarray} The drift and diffusion terms in the Fokker-Planck equation become \begin{eqnarray} a(x) & = & x (1-x) \frac{\pi_A(x)-\pi_B(x)}{\phi} \\ \nonumber && + u \frac{ (1-x) \pi_B(x)- x \pi_A(x)}{\phi} \\ b(x) & = & \frac{1}{N} x (1-x) \frac{\pi_A(x)+\pi_B(x)}{\phi} \\ \nonumber && + \frac{u}{N} \frac{ x \pi_A(x) +(x-1) \pi_B(x)}{\phi}(2x-1).
\end{eqnarray} The stationary solution of Eq.\ (\ref{FPE}) can now be computed as outlined above and is given by \begin{equation} \label{statdist} \rho^{\ast}(x) = {\cal N} \exp \left[\int_0^x \Gamma(y) dy \right], \end{equation} where $\Gamma(y)=b^{-1}(y)(2 a(y)- b'(y))$, $b'(y) = \frac{d}{dy} b(y)$ and ${\cal N}= \int_0^1 \exp \left[\int_0^x \Gamma(y) dy \right] dx$. In general, the stationary distribution of strategies under the effect of mutations cannot be derived analytically. Earlier approaches have approximated such distributions either by assuming weak mutations \cite{imhof:2006ee}, where only transition probabilities between absorbing states have to be considered, or by neglecting all mutations except those in the absorbing states \cite{claussen:2005eh}. The distribution given by Eq.\ (\ref{statdist}) for large $N$ is valid for arbitrary selection intensity $w$, mutation rate $u$, and any payoff matrix. \subsection{Neutral dynamics} \vspace*{-1ex} In the frequency dependent Moran process, weak selection, $w \ll 1$, is often considered \cite{nowak:2004pw,ohtsuki:2006na}. For weak selection, the interactions are only corrections to a background of neutral dynamics \cite{kimura:1968aa,crow:1970ck}. Therefore, we first discuss the stationary distribution for $w=0$, where the game has no influence on the fitness. In this case, ${\boldsymbol \Gamma}({\boldsymbol x})$ is a gradient, which is equivalent to the assumption that the probability current vanishes everywhere, and the stationary distribution can be computed directly from Eq.~(\ref{stathighd}). For $d=2$, we find the drift term $a(x) = u(1-2x)$ and the diffusion term $b(x) = {(u(2x-1)^2+2x(1-x))}/{N}$. Both results are valid for the case in which the mutation rates from $A$ to $B$ and vice versa are equal to $u$. Hence, $\Gamma(x)$ is given by \begin{equation} \Gamma(x)=-2 \frac{(u (2+N) -1) (2 x-1)}{u (2 x-1)^2+2x (1-x) }.
\end{equation} \begin{figure}[thbp] \includegraphics[totalheight=9cm,angle=270]{./fig1.eps} \caption{ For neutral selection, $w=0$, three different scenarios occur for the stationary distribution of the system: For $u<u_c=1/(2+N)$, the system spends most of the time near the absorbing states, leading to maxima of the stationary distribution at $i=0$ and $i=N$, as shown for $u=0.005$. For $u=u_c \approx 0.01$, the probability to leave the absorbing states due to mutations and the probability to reach them by random drift are approximately equal, leading to a uniform distribution. For $u>u_c$, there is on average more than one mutation per generation and the stationary distribution is centered around $i=N/2$. In all three cases, numerical simulations of the stationary distribution depicted by symbols agree very well with the theoretical result Eq.~(\ref{statdist}) shown as lines (simulations are averages over $10^8$ time steps, $N=100$). } \label{neutral} \end{figure} $\Gamma(x)$ vanishes for the critical mutation rate $u_c = 1/(2+N)$. For $u<u_c$, the exponent $\int_0^x \Gamma(y) dy$ has a minimum at $x=1/2$. Accordingly, the stationary probability distribution has maxima at the boundaries when only few mutations per generation occur. On the other hand, for $u>u_c$ there is a maximum at $x=1/2$. Mutations lead the system away from the states $x=0$ and $x=1$ and the distribution is centered around the middle, where both strategies have equal frequencies, see Fig. \ref{neutral}. For $u=u_c$ we have $\Gamma(x)=0$ and hence Eq.\ (\ref{statdist}) predicts a uniform distribution. However, simulations show a distribution that decays slightly toward the boundaries for small $N$ (cf.\ Fig.\ \ref{neutral}). Strictly speaking, a uniform distribution is observed only in the limit $N \to \infty$. Critical mutation rates can also be computed for general $d$ in the case of symmetric mutations, in which the mutation rate between two different states is $u$, i.e.\ $q_{lj} = u + \delta_{lj}(1-du)$.
All elements of ${\boldsymbol \Gamma}({\boldsymbol x})$ vanish for $u=u_c=1/(d+N)$, because $2 a_j({\boldsymbol x}) = \sum_{k=1}^{d-1} \frac{\partial}{\partial x_k} b_{jk}({\boldsymbol x})$. Hence, a uniform distribution is expected for $u=u_c$ in the limit of $N \to \infty$. For smaller mutation rates $u<u_c$, the distribution has maxima at the corners of the simplex where only one strategy is present. For larger $u>u_c$, the distribution has one maximum in the interior of the simplex. Fig.~\ref{neutrald} illustrates these two different cases for neutral selection with three different types, $d=3$. \begin{figure}[htbp] \includegraphics[totalheight=4.2cm]{./fiA2.eps} \caption{ (Color online) For $d=3$ strategies and neutral selection ($w=0$), the stationary distribution in the strategy space spanned by the simplex $S_3$ is shown encoded by a color scale, where bright colors indicate high values. (a) For mutation rates higher than the critical mutation rate $u_c = 1/(3+N)$, mutations drive the system away from the absorbing states at the corners of the simplex. Due to the symmetry of the system, the stationary distribution has a maximum centered in the middle of the simplex where $x=y=z$. (b) If the mutation rates are smaller than the critical mutation rate $u_c$, the system spends considerable time in the absorbing states at the corners. Occasionally, the edges are reached by mutations. However, since the system typically reaches the corners again before the next mutation occurs, the stationary probability density in the interior is very small. In both cases, the simulations of the stationary probability distribution shown here (averages over $10^8$ time steps) do not deviate significantly from the analytical result Eq.\ (\ref{stathighd}): for $u=0.05$ the maximal deviation is $\approx 2 \%$ and for $u=0.005$ it is $\approx 6\%$ (population size $N=60$).
} \label{neutrald} \end{figure} \subsection{Frequency dependent selection} \vspace*{-2ex} For $w>0$, interactions lead to frequency dependent transition probabilities and to corrections of the stationary distribution. Depending on the nature of the game and the strength of selection, significant differences from neutral evolution are possible. As examples, we consider two generic cases of $2 \times 2$ games, the Prisoner's Dilemma and the Snowdrift game. In the Prisoner's Dilemma \cite{rapoport:1965pd,axelrod:1984yo}, two players choose simultaneously whether to cooperate or to defect. The cost of cooperation is $c$, while the benefit from cooperation is $b$. The interaction of a cooperator with a defector leads to a payoff of $-c$ for the cooperator and $b$ for the defector. When two cooperators interact, they both get the payoff $b-c$, whereas the interaction of two defectors leads to a payoff of $0$. Thus, irrespective of the other player's move, it is always better to defect. However, mutual cooperation would be the mutually preferred outcome, hence the dilemma. The game is described by the payoff matrix \begin{equation} M = \left( \begin{array}{cc} m_{11} & m_{12} \\ m_{21} & m_{22} \end{array} \right) = \left( \begin{array}{cc} b-c & -c \\ b & 0 \end{array} \right), \end{equation} \begin{figure}[thbp] \includegraphics[totalheight=9cm,angle=270]{./fig3.eps} \caption{ Stationary distribution of strategies in a Prisoner's Dilemma under the influence of mutations and selection in a finite population. The distribution undergoes a qualitative change with increasing population size. Main figure: For $N=50$, the stationary distribution for $w=0.2$ has a maximum at the Nash equilibrium $x=0$, where only defectors are present, as $u$ is below the critical mutation rate $u_c =0.02$. When selection is weak, it resembles the result from neutral selection ($w=0.01$).
Inset: When the population size is increased to $N=10000$, the mutation rate is above the critical value $u_c \approx 10^{-4}$ and the situation resembles the result expected from the deterministic description of the replicator-mutator equation: The stationary distribution has a maximum at $x^{\ast} \approx 0.14$ for $w=0.2$, which is the stable fixed point of the replicator-mutator equation. For weak selection, $w=0.01$, the stable fixed point is located at $x^{\ast} \approx 0.47$ and the distribution around this fixed point is wider due to stronger fluctuations. In all cases, numerical simulations of the frequency dependent Moran process (symbols) agree well with the analytical solution Eq.\ (\ref{statdist}) for the stationary distribution depicted by lines ($b=1$, $c=0.25$, mutation rate $u=0.01$, average over $10^8$ time steps). } \label{pd} \end{figure} The fitness is now a linear combination of the background fitness and the average payoff from interactions. For cooperators, the fitness becomes $\pi_C = 1-w+w ((b-c) (i-1) -c\,(N-i))/N$, whereas defectors have fitness $\pi_D = 1-w+w \, b \, i /N$. Since the fitness of defectors is always higher than the fitness of cooperators, $\pi_D>\pi_C $, any individual is better off not cooperating, despite the fact that mutual cooperation leads to a higher average payoff. Without mutations, the majority of stochastic trajectories would therefore end up in the absorbing state with $100\%$ defectors, and only a few trajectories would end up in pure cooperation. For $u <u_c= 1/(N+2)$, the stationary distribution keeps a maximum at $x=0$, but also a second (smaller) maximum at the other absorbing state $x=1$, see Fig.~\ref{pd}. For $u > u_c$, mutations drive the system away from these states and a maximum appears for intermediate values of $x$. The position of this maximum is determined by the intensity of selection $w$ and the payoff matrix. For increasing $N$, the situation may change from $u<u_c$ to $u>u_c$. 
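To make this qualitative change concrete, the following sketch simulates the frequency dependent Moran process with mutation for the Prisoner's Dilemma and estimates the stationary distribution of cooperators (parameter names follow the text; the run is far shorter than the $10^8$ steps used for the figures, and the birth-death bookkeeping is an illustrative implementation, not the authors' code):

```python
import random

def moran_pd(N=50, b=1.0, c=0.25, w=0.2, u=0.01, steps=200_000, seed=1):
    """Frequency dependent Moran process with mutation for the
    Prisoner's Dilemma.  Returns the estimated stationary distribution
    over i = number of cooperators (illustrative sketch)."""
    rng = random.Random(seed)
    i = N // 2                       # start from half cooperators
    counts = [0] * (N + 1)
    for _ in range(steps):
        # fitnesses as in the text: background 1 - w plus average payoff
        pi_C = 1 - w + w * ((b - c) * (i - 1) - c * (N - i)) / N
        pi_D = 1 - w + w * b * i / N
        total = i * pi_C + (N - i) * pi_D
        # parent chosen proportional to fitness; offspring mutates w.p. u
        offspring_C = rng.random() < i * pi_C / total
        if rng.random() < u:
            offspring_C = not offspring_C
        # offspring replaces a uniformly chosen individual
        dies_C = rng.random() < i / N
        i += int(offspring_C) - int(dies_C)
        counts[i] += 1
    return [k / steps for k in counts]

dist = moran_pd()
# here u = 0.01 lies below u_c = 1/(N + 2) ~ 0.019, so most probability
# mass accumulates at the all-defector state i = 0
```

Raising $N$ at fixed $u$ moves the system into the $u>u_c$ regime and reproduces the qualitative change shown in Fig.~\ref{pd}.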
Therefore, the stochastic effects arising from the finite population size are clearly visible for small $N$, whereas for large $N$ the stationary distribution resembles the one expected from the deterministic replicator-mutator equation, see Sec.~\ref{repmuts}. As a second example, we consider the Snowdrift game \cite{hauert:2004bo}, also known as ``Hawk-Dove game'' or ``Chicken''. In this game, two players again choose between cooperation and defection. While the payoffs for defectors are as in the Prisoner's Dilemma, cooperation becomes more favorable: The payoff of a cooperator against a defector is now $b-c$ and therefore higher than the payoff of a defector against a defector. On the other hand, it is still lower than the payoff for mutual cooperation, $b-c/2$. The payoff matrix is given by \begin{equation} M = \left( \begin{array}{cc} m_{11} & m_{12} \\ m_{21} & m_{22} \end{array} \right) = \left( \begin{array}{cc} b-\frac{c}{2}& b-c \\ b & 0 \end{array} \right). \end{equation} \begin{figure}[thbp] \includegraphics[totalheight=9cm,angle=270]{./fig4.eps} \caption{ The Snowdrift game exhibits a stable interior equilibrium and typically coexistence of the two different types is observed. The absorbing states are reached only for small mutation rates and weak selection. Therefore, the stationary distribution has a maximum in the interior. For weak selection ($w=0.01$) this maximum is close to $x=0.5$. With stronger selection, the maximum moves toward the mixed equilibrium at $x^{\ast} \approx 0.856$. Numerical simulations (symbols) are in good agreement with the theoretical result depicted by full lines ($b=1$, $c=0.25$, mutation rate $u=0.01$, averages over $10^8$ time steps, $N=200$).} \label{sd} \end{figure} Again, the payoff is a linear combination of the background fitness and the average payoff from interactions. In the Snowdrift game, rare strategies are always favored, resulting in a drift away from the absorbing states $x=0$ and $x=1$. 
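The mixed equilibrium quoted in the caption of Fig.~\ref{sd} can be checked directly from the payoff matrix by solving $\pi_C(x)=\pi_D(x)$ (a minimal sketch; the bare replicator fixed point neglects the small shift toward $x=0.5$ induced by mutations):

```python
def snowdrift_payoffs(x, b=1.0, c=0.25):
    """Mean-field payoffs in the Snowdrift game with payoff matrix
    ((b - c/2, b - c), (b, 0))."""
    pi_C = x * (b - c / 2) + (1 - x) * (b - c)
    pi_D = x * b
    return pi_C, pi_D

def interior_equilibrium(b=1.0, c=0.25):
    """Solve pi_C(x) = pi_D(x), which gives x* = 2(b - c)/(2b - c)."""
    return 2 * (b - c) / (2 * b - c)

x_star = interior_equilibrium()
pi_C, pi_D = snowdrift_payoffs(x_star)
print(round(x_star, 3))  # -> 0.857
```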
Instead, the distribution is centered around a stable equilibrium in the interior. With decreasing intensity of selection, the distribution around this equilibrium becomes wider. In the limit $N \to \infty$ and without mutations ($u=0$), the distribution narrows and converges to $x^{\ast} = 2(b-c)/(2b-c)$, the only stable fixed point of the replicator dynamics. Finite size fluctuations lead to a distribution around this deterministic fixed point while mutations move the maximum of this distribution toward $x=0.5$. As for the Prisoner's Dilemma, simulations of the stationary distribution agree well with the analytical result Eq.\ (\ref{statdist}), cf.\ Fig. \ref{sd}. \section{Replicator-Mutator equation} \label{repmuts} \vspace*{-1ex} In the limit of infinite population size, our framework establishes a natural connection to the traditional deterministic description of coevolutionary dynamics. The deterministic nature of this limit arises from the fact that the diffusion matrix ${b_{kj}({\boldsymbol x})}$ and thus the finite size fluctuations vanish with $1/\sqrt{N}$ as $N \to \infty$. Hence, the Langevin equation becomes deterministic and the dynamics is given by the drift term, $\dot x_k = \sum_{j=1}^d \left( T_{jk} - T_{kj} \right)$. Since the mutation rate per individual is fixed, the effect of mutations prevails in the limit of $N \to \infty$. In this case, Eq.\ (\ref{langevin}) incorporating mutation and selection can be written as \begin{eqnarray} \label{adrepmut} \dot x_k & = & \frac{1}{\phi} \sum_{j=1}^d x_j \pi_j({\boldsymbol x}) q_{jk}-x_k. \end{eqnarray} This is the replicator-mutator equation corresponding to the adjusted replicator dynamics \cite{maynard-smith:1982to}. So far, replicator-mutator equations \cite{bomze:1995rm,page:2002an} have not been derived from a microscopic birth-death process. 
They are the natural extension of the deterministic replicator equation \cite{taylor:1978wv,hofbauer:1998mm} that takes into account mutations between strategies. Without mutations, $q_{jk}= \delta_{jk}$, types that have a fitness above the average fitness in the population increase in abundance, while types with a fitness below average decrease in abundance. Mutations interfere with this selection dynamics: Only types that have a fitness above the average {\em and} that have a sufficiently small mutation rate increase in frequency. Hence, the stable fixed points of the system represent a balance between selection dynamics and mutation dynamics. \section{Local update process} \label{lurs} \vspace*{-1ex} So far, we have only considered the frequency dependent Moran process. However, our framework can also be applied to other coevolutionary processes. As an example, we generalize the local update process \cite{traulsen:2005hp} to an arbitrary number of strategies. In the local update process, an individual $1$ is chosen at random and compares its payoff to another randomly chosen individual $2$. With probability \begin{equation} p = \frac{1}{2} + \frac{w}{2} \frac{\pi_2-\pi_1}{\Delta \pi} \label{localtrans} \end{equation} $1$ adopts the strategy of $2$ (where the constant $\Delta \pi$ is the maximal possible payoff difference and $w$ denotes the intensity of selection) \cite{fn01}. For $d$ different types, the transition probabilities are given by \begin{equation} T_{kj}({\boldsymbol x}) = {x_k x_j}\left(\frac{1}{2} + \frac{w}{2} \frac{\pi_j ({\boldsymbol x}) -\pi_k ({\boldsymbol x})}{\Delta \pi} \right). \end{equation} From the transition probabilities, we can calculate the drift term $a_{k}({\boldsymbol x}) = (w/\Delta \pi) \; x_k \left( \pi_k({\boldsymbol x})- \phi \right)$, which is independent of $N$. The diffusion term is independent of the payoffs and simplifies to $b_{jk}({\boldsymbol x}) =x_j \left( \delta_{jk}- x_k \right)/N$. 
Thus, up to a constant rescaling of time, we recover the standard replicator dynamics for $d$ types in the limit $N \to \infty$, \begin{equation} \dot x_k = x_k \left( \pi_k({\boldsymbol x})- \phi \right). \label{repmut} \end{equation} Up to a {\em dynamical} rescaling of time, Eq.\ (\ref{repmut}) is identical to Eq.\ (\ref{adrepmut}) without mutations. The natural extension of Eq.\ (\ref{repmut}) that includes mutations (while retaining the time scale) is the standard replicator-mutator equation \cite{page:2002an}, \begin{equation} \dot x_k = \sum_{j=1}^d x_j \pi_j({\boldsymbol x}) q_{jk} -x_k \phi = \sum_{j,l=1}^d x_j \pi_j({\boldsymbol x}) (x_l q_{jk} - x_k q_{kl}). \end{equation} For general $q_{jk}$, it is not possible to define a microscopic process in which the transition probabilities only depend on payoff differences that leads to this differential equation. Only for the special case of vanishing mutations $q_{jk} = \delta_{jk}$, this is possible. A microscopic mechanism involving spontaneous mutations has been proposed in \cite{helbing:1996aa}. This yields the transition rates \cite{fn02} \begin{equation} T_{kj}({\boldsymbol x}) = {x_k x_j}\left(\frac{1}{2} + \frac{w}{2} \frac{\pi_j ({\boldsymbol x}) -\pi_k ({\boldsymbol x})}{\Delta \pi} \right) + x_k {q_{kj}}. \label{lurtransprob} \end{equation} As shown in \cite{helbing:1996aa}, the limit $N \to \infty$ results in the differential equation \begin{equation} \dot x_k = \frac{w}{\Delta \pi} x_k \left(\pi_k({\boldsymbol x}) - \phi \right) + \sum_{j=1}^d \left( x_j q_{jk} - x_k q_{kj} \right). \end{equation} This selection mutation equation reduces to the standard replicator dynamics for $q_{jk} = \delta_{jk}$ \cite{hofbauer:1985jm,hofbauer:1998mm}. It describes selection under the influence of {\em spontaneous} mutations. Such spontaneous mutations can also be incorporated into other coevolutionary processes, leading to the same additive mutation term $\sum_{j=1}^d \left( x_j q_{jk} - x_k q_{kj} \right)$. 
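As a numerical cross-check of the deterministic limit, Eq.\ (\ref{adrepmut}) can be integrated for the two-strategy Prisoner's Dilemma with uniform mutation; for the parameters of the inset of Fig.~\ref{pd}, the trajectory settles at the stable fixed point $x^{\ast} \approx 0.14$ (simple forward-Euler integration, used here purely for illustration):

```python
def adjusted_replicator_mutator(x, b=1.0, c=0.25, w=0.2, u=0.01):
    """Right-hand side of the adjusted replicator-mutator equation for
    the cooperator frequency x in the Prisoner's Dilemma, with uniform
    mutation q_CD = q_DC = u."""
    pi_C = 1 - w + w * ((b - c) * x - c * (1 - x))
    pi_D = 1 - w + w * b * x
    phi = x * pi_C + (1 - x) * pi_D
    inflow = x * pi_C * (1 - u) + (1 - x) * pi_D * u
    return inflow / phi - x

x, dt = 0.5, 0.1                     # forward-Euler integration
for _ in range(20_000):
    x += dt * adjusted_replicator_mutator(x)
print(round(x, 2))  # -> 0.14
```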
\section{Discussion} \vspace*{-1ex} Traditional descriptions of co-evolutionary systems rely on the assumption that populations are sufficiently large such that they can safely be considered infinite. In this case, accidental fixation of types with low fitness is not possible. For the more realistic case of finite populations, only a few analytical methods exist, and these are often restricted to two types. Hence, studies of these systems often rely on simulations, which are very illuminating for special cases but give little insight into the general dynamics. Here, we have shown how to extend the replicator dynamics in order to account for fluctuations arising from finite population sizes for any number of strategies and arbitrary mutation rates. When allowing for mutations, homogeneous states in which only one type is present are no longer absorbing boundaries and, instead, the system converges to a stationary distribution. This stationary distribution is derived using standard methods from statistical physics and depends on the population size, the mutation rate and the interaction parameters, i.e., the game. The fluctuation term allows for the calculation of corrections to the replicator dynamics for finite populations and is an important step in comparing individual based coevolutionary systems with deterministic dynamics. Mutations are not a source of fluctuations, but provide a mechanism that leads to a mixing of different types. However, fluctuations do not arise solely from finite populations. Instead, there are additional sources of noise that might have to be incorporated in other ways, such as noise from heterogeneity of agents or external noise that can influence interactions. Our approach can be applied to any number of strategies $d$, but is based on the assumption that the population size is sufficiently large such that population densities can be approximated by smooth functions. 
In high dimensional spaces, such a continuum description may not always be appropriate \cite{tsimring:1996aa}. Starting from microscopic descriptions for individual based coevolutionary processes with an arbitrary number of strategies, we have derived a replicator-mutator equation with additional first-order corrections that describe stochastic effects arising from finite populations. \clearpage
\section{Introduction} Over 360 million people worldwide are Deaf or Hard of Hearing (DHH) \cite{mitchell2006many, blanchfield2001severely}. In the U.S. alone, over 15\% of people are DHH, and regularly rely on captioning while watching videos to perceive salient auditory information \cite{larwan2019}. To provide quality captioning services to this group, it is essential to monitor the quality of captioning regularly. Regulators, e.g., the Federal Communications Commission (FCC) in the U.S. \cite{fcc2014}, are entrusted with regularly checking the quality of caption transcription generated by different broadcasters. However, given the abundant production of captioned live TV broadcasts, caption evaluation is a tedious and costly task. DHH viewers are often dissatisfied with the quality of captioning provided in live contexts, which provide less time for caption production than pre-recorded contexts \cite{metric2021akhter, Kushalnagar2018}. If regulatory organizations that measure the quality of captions used quality metrics that better reflect the DHH users' preferences, DHH viewers' experience may improve. Existing metrics used in transcription or captioning include Word Error Rate (WER) \cite{WER_2018} or Number of Errors in Recognition (NER) \cite{NER_2015}. As noted by \citet{highlight_kafle_2019}, a major shortcoming of these metrics is that they do not consider the importance of individual words when measuring the accuracy of captioned transcripts (compared to the reference transcript), and most metrics assign equal weights to each word. DHH viewers rely more heavily on important keywords while skimming through caption text \cite{highlight_kafle_2019}. Motivated by these shortcomings, prior work has proposed metrics that assign differential importance weights to individual words in captioned text when calculating an evaluation score \cite{Kafle_ACE, Kafle2019}. 
Specifically, this prior work leveraged word2vec-based word embeddings to generate and propagate features to another layer of the network \cite{kafle-huenerfauth-2018-corpus}. We build on this prior work and propose an updated approach. The feature space we use contains both contextual and semantic information about the captioned text, which is crucial in conversational settings, common on TV, and may better capture long-distance semantic and syntactic relationships. Thus, in this work, we contribute more current strategies for calculating the importance of words in transcript text, toward a metric that takes word-importance into account when evaluating captions. Our contributions in this paper include: \begin{enumerate} \item \textbf{We conducted a comparative correlation analysis between human-annotated importance scores for words in conversational transcripts and aggregated lexical semantic scores generated from: (a) word2vec-based word embeddings as in prior work contrasted with (b) BERT-based contextualized embeddings}. Our findings revealed that scores generated from contextualized embeddings had a higher correlation with the human-annotated word-importance scores. \item \textbf{We contribute data consisting of BERT contextualized word embeddings, paired with their word-importance scores, to augment a prior dataset of human-assigned importance scores for words in conversational transcripts \cite{kafle-huenerfauth-2018-corpus}.} This enhanced data can be used by researchers for constructing improved caption-evaluation metrics or by researchers studying conversational discourse. \item \textbf{To illustrate the use of this dataset, we show how interpretable classical machine-learning models can be trained to determine the importance of words using these contextualized word embedding vectors from our data.} In this proof-of-concept study, we show how these data can be used in training models. We leave detailed evaluation and comparison of models for future work. 
\end{enumerate} \section{Related Work} \subsection{Word Importance Prediction} NLP researchers have explored approaches to determine word-importance for various downstream tasks, e.g. term weight determination when querying text \cite{dai2020importance}, for text summarization \cite{hong-nenkova-2014-improving} or text classification \cite{sheikh:hal-01331720}. Prior research on identifying and scoring important words in a text has largely focused on the task of keyword or important-term extraction \cite{dai2020importance,sheikh:hal-01331720}. This task involves identifying words in a document that densely summarize it. Several automatic keyword-extraction techniques have been investigated, including unsupervised methods such as interpolation of Term Frequency and Inverse Document Frequency (TF-IDF) weighting \cite{tf_idf_2010}, Positive Pointwise Mutual Information (PPMI) \cite{PPMI_Bouma}, word2vec embedding \cite{sheikh:hal-01331720}, and supervised methods that leverage linguistic features from text for word importance estimation \cite{dai2020importance, kafle-huenerfauth-2018-corpus}. While the conceptualization of word importance as a keyword-extraction problem has enabled retrieving relevant information from large textual or multimedia datasets \cite{dai2020importance, Shah02astudy}, this approach may not generalize across domains and functional, situational contexts of language use. For instance, given the meandering nature of topic transitions in television news broadcasts or talk shows \cite{Kafle_ACE}, when processing caption transcripts, a model of word importance that is more local may be more successful, rather than considering the entire transcript of the broadcast or show. \subsection{Caption Evaluation Methods} Several caption evaluation approaches have been proposed \cite{WER_2018, NCAM}, with some approaches specifically taking into account the perspective of DHH participants \cite{kafle-huenerfauth-2018-corpus, metric2021akhter}. 
The most common caption evaluation metric used by different regulatory organizations is Word Error Rate (WER) \cite{WER_2018}. While WER penalizes insertion, deletion, and substitution errors in transcripts, a limitation is that it weights every word token equally. To address this, \citet{NCAM} proposed a metric that assigns weights to words in a text, but the weights in this probabilistic approach were not trained to reflect the priorities of actual caption users. In the most closely related work, \citet{kafle-huenerfauth-2018-corpus} investigated models for predicting word-importance during captioned one-on-one conversations. Their Automatic Caption Evaluation (ACE) framework utilized a variety of linguistic features to predict which words in a caption text were most important to its meaning, and which would be most problematic if incorrectly transcribed in a caption. Prior research on determining the importance of a word in a document had shown that an embedding can characterize a word's syntactic (e.g., word dependencies) and semantic character (e.g., named entity labeling), which in turn can help estimate a word's importance \cite{sheikh:hal-01331720}. Thus, \citet{kafle-huenerfauth-2018-corpus} used word2vec embeddings of words in the transcript. In this paper, we examine whether an alternative embedding, based on BERT, would lead to superior models of word-importance. \subsection{Annotation of Word Importance Scores} In this work, we contribute a dataset that augments a previously-released dataset from \citet{kafle-huenerfauth-2018-corpus}, consisting of a 25,000-token subset of the Switchboard corpus of conversational transcripts \cite{SWITCHBOARD_1992}. \citet{kafle-huenerfauth-2018-corpus} asked a pair of human annotators to assign word-importance scores to each word within these transcripts, on a range from 0.0 to 1.0, where 1.0 was most important. 
After partitioning scores into 6 discrete categories: [0-0.1), [0.1-0.3), [0.3-0.5), [0.5-0.7), [0.7-0.9), and [0.9 - 1], they trained a Neural Network-based classifier, using Long Short Term Memory (LSTM), to predict the importance category of each word in these transcripts. We augment this annotated corpus with recent contextualized word embeddings from BERT \cite{devlin-etal-2019-bert}, pairing up the embeddings with the hand-annotated word importance data. \section{Corpus Augmentation} \subsection{Extracting Word Embedding Vectors} We have augmented the dataset described above, and will be releasing the version that includes two embeddings per word token: BERT contextualized word embeddings and word2vec embeddings. With this paper, we will be releasing the BERT-generated contextualized word embeddings\footnote[1]{\url{https://nyu.databrary.org/volume/1447}} of 25,000 tokens, each with a feature vector of length 768, augmented with the human-annotated word-importance scores\footnote[2]{\url{http://latlab.ist.rit.edu/lrec2018/}}. To enable comparison with the work of \citet{kafle-huenerfauth-2018-corpus}, we extracted a word2vec \cite{rehurek2011gensim} embedding vector of length 100 for each word that occurred at least twice within each transcript. Next, we employed the pre-trained BERT model named \emph{bert-base-uncased} \cite{devlin-etal-2019-bert} to generate a contextualized word-embedding vector for each word within transcripts. For each word within each sentence, using BERT, we generated a three-dimensional embedding of shape $32 \times 12 \times 768$. These embeddings were created based upon the architecture of the pre-trained BERT model that included 32 transformer blocks, 12 attention heads, and a hidden size of 768. We follow prior work that has reshaped or composed the three dimensions into a one-dimensional vector while retaining similar semantic information \cite{turton2020deriving}. 
After performing these operations, for each word we obtained a contextualized embedding vector of length $768$. \begin{figure} \small \centering \begin{minipage}{0.23\textwidth} \includegraphics[width=0.95\textwidth]{bert.png} \caption*{(a)} \end{minipage} \begin{minipage}{0.23\textwidth} \includegraphics[width=0.97\textwidth, height=3.4cm]{embed.png} \caption*{(b)} \end{minipage} \caption{Scatter plots for (a) the human-annotated score vs. BERT embedding-based semantic score, and (b) the human-annotated score vs. the word2vec embedding-based semantic score. The first 1200 words from the dataset are shown. } \label{fig:correlation} \end{figure} \begin{table}[h] \small \centering \begin{tabular}{|l|l|l|l|} \hline \diag{.1em}{2.1cm}{Method}{Word} & \emph{sunday} & \emph{noise} & \emph{plan} \\ \hline Human-assigned score & 0.60 & 0.40 & 0.70 \\ \hline BERT & 0.10 & 0.42 & 0.61 \\ \hline word2vec & 0.35 & 0.17 & 0.18 \\ \hline \end{tabular} \caption{Three sample words, \emph{sunday}, \emph{noise}, and \emph{plan}, excerpted from one transcript. The human-assigned importance scores are 0.60, 0.40, and 0.70, respectively. For \emph{noise} and \emph{plan}, the aggregated scores generated from the word2vec-based embeddings are 0.17 and 0.18, which do not fall into the importance categories chosen by the annotators. In contrast, the BERT-based embeddings yield scores that align with the human-assigned importance for \emph{noise} and \emph{plan}. However, for \emph{sunday}, the word2vec-based semantic score is closer to the actual importance score than the BERT-based one; notably, \emph{sunday} appears as an isolated response to someone's question in the transcript.} \label{tab:model_parameters_9} \end{table} \subsection{Correlation Analysis to Assess Fit with Word Importance Scores} After calculating two types of embeddings for each word in this dataset, we asked which one would be more useful within a model to predict word importance. 
Prior work on the state-of-the-art word-importance learning algorithm Neural Bag-of-Words (NBOW) has revealed that the mean of each word-embedding vector is an effective feature for learning the importance of words within a sentence \cite{sheikh:hal-01331720}. Following this common practice for determining word importance \cite{kalchbrenner-etal-2014-convolutional, dai2020importance}, we calculated the mean of each word-embedding vector to represent that word's semantic score \cite{sheikh:hal-01331720}. For both the word2vec and BERT-based embeddings, for each sentence in the transcript, we normalized word-semantic scores within the sentence, to obtain a value in a [0,1] range for each word. BERT tokenization can split a word into sub-word tokens; in such cases, we averaged the sub-word embeddings to obtain the word's final composite semantic score. After performing this operation across sentences in the transcripts, we conducted an analysis to determine which form of pre-trained embedding (word2vec or BERT) better correlated with human-produced annotations of word importance in the original dataset. The values based on word2vec were correlated with human annotations with a Pearson correlation coefficient of $r=0.30$, and for the BERT-based scores, the coefficient was $r=0.41$. A Fisher $z$-transformation \cite{upton2014dictionary} revealed that word semantic scores generated using BERT contextualized word embeddings were significantly better correlated ($z = -3.05, p<0.001$) with human-assigned scores than their word2vec counterparts. Based on these findings, we decided to use BERT contextualized embeddings in continued analysis. We also tried TF-IDF, another traditional approach, to calculate a semantic score for words. A correlation analysis between the scores generated by TF-IDF and human annotations resulted in a Pearson correlation coefficient of $r=0.25$, which was lower than the coefficient obtained using word2vec word embeddings. 
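A minimal sketch of this scoring pipeline (array shapes, helper names, and the sub-word bookkeeping are illustrative assumptions, not the authors' released code):

```python
import numpy as np

def merge_subwords(subword_vecs, word_ids):
    """Average the sub-word vectors belonging to each word, as done for
    BERT word-piece tokens.  subword_vecs has shape (n_subwords, dim);
    word_ids[i] is the word index of sub-token i."""
    n_words = max(word_ids) + 1
    sums = np.zeros((n_words, subword_vecs.shape[1]))
    counts = np.zeros(n_words)
    for vec, wid in zip(subword_vecs, word_ids):
        sums[wid] += vec
        counts[wid] += 1
    return sums / counts[:, None]

def sentence_semantic_scores(word_vecs):
    """Mean of each word's embedding vector, min-max normalized within
    the sentence to the [0, 1] range."""
    raw = word_vecs.mean(axis=1)
    lo, hi = raw.min(), raw.max()
    return (raw - lo) / (hi - lo) if hi > lo else np.zeros_like(raw)

def fisher_z(r1, n1, r2, n2):
    """Standard Fisher z statistic for comparing two Pearson r values."""
    se = np.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    return (np.arctanh(r1) - np.arctanh(r2)) / se
```

The effective sample sizes entering the Fisher test are not stated above, so no numerical value of $z$ is asserted here.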
\section{Predicting Word Importance \vspace{-0.4em}} To demonstrate how to use our dataset to predict the importance of each word, we have begun to investigate several supervised learning methods. The independent variable is the processed $ 768 \times 1$ BERT-embedding vector of each word, and the output variable is the human-labeled importance score, discretized into six classes, for each word in the dataset. \begin{table}[t] \small \centering \begin{tabular}{|l|l|l|} \hline Method & F1 Score & RMSE \\ \hline \begin{tabular}[l]{@{}l@{}}Multi-layer Perceptron\end{tabular} & $0.10$ & $1.29$ \\ \hline \begin{tabular}[l]{@{}l@{}}Random-Forest\end{tabular} & $0.25$ & $1.02$ \\ \hline \begin{tabular}[l]{@{}l@{}}Linear Support Vector \end{tabular} & $0.51$ & $0.99$ \\ \hline \textbf{Logistic Regression} & \textbf{$0.57$} & $0.92$ \\ \hline \end{tabular} \caption{ Supervised classification performance showing macro-averaged F1 score and Root Mean Squared Error.} \label{tab:model_parameters_2} \end{table} This classification experiment partitioned the corpus into 80\% training, 10\% development, and 10\% test set. This partition has been directly adapted from \cite{kafle-huenerfauth-2018-corpus}. We evaluated the model using two measures: (i) Root Mean Square Error (RMSE) - the deviation of the model predictions from the human-assigned categories, and (ii) the F1 measure for classification performance. For classification, we categorized annotation scores into the 6 levels, as described above: [0-0.1), [0.1-0.3), [0.3-0.5), [0.5-0.7), [0.7-0.9), and [0.9 - 1]. Table \ref{tab:model_parameters_2} illustrates that the best-performing supervised model (of the four traditional approaches) in predicting the importance class is Logistic Regression, with an F1-score of $0.57$ and an RMSE of $0.92$. Although the class labels are discrete, both the human annotators and the models produce scores on an ordinal scale, so we also report the RMSE between predicted and human-assigned classes. 
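The six-way discretization and the RMSE over class indices can be sketched as follows (helper names are illustrative; the classifiers themselves are standard, e.g.\ scikit-learn implementations, and are omitted):

```python
import numpy as np

BIN_EDGES = [0.1, 0.3, 0.5, 0.7, 0.9]   # six classes: [0,0.1), ..., [0.9,1]

def score_to_class(score):
    """Map a [0, 1] importance score to its class index 0..5."""
    return int(np.searchsorted(BIN_EDGES, score, side="right"))

def rmse(predicted, true):
    """Root mean squared error over the (ordinal) class indices."""
    d = np.asarray(predicted, float) - np.asarray(true, float)
    return float(np.sqrt(np.mean(d ** 2)))

print(score_to_class(0.05), score_to_class(0.3), score_to_class(0.95))  # -> 0 2 5
```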
Among other approaches, the Linear Support Vector Classifier achieves an F1-score of 0.51, Random-Forest achieves 0.25, and the Multi-layer Perceptron achieves 0.10. \begin{table}[t] \small \begin{center} \begin{tabular}{l|l|l|l|l|l|l|l|} \multicolumn{8}{c}{Predicted Label}\\ \cline{2-8} & & 1 & 2 & 3 & 4 & 5 & 6 \\ \cline{2-8} \cline{2-8} \multirow{5}{*}{\rotatebox{90}{True Label}} & 1 & \cellcolor[HTML]{BBBCCF} 0.69 & 0.21 & 0.18 & 0.15 & 0.18 & 0.00 \\ \cline{2-8} & 2 & 0.22 & \cellcolor[HTML]{CBBCCF}0.64 & 0.25 & 0.26 & 0.13 & 0.33 \\ \cline{2-8} & 3 & 0.05 & 0.12 & \cellcolor[HTML]{CCBCCF}0.48 & 0.11 & 0.18 & 0.00 \\ \cline{2-8} & 4 & 0.02 & 0.02 & 0.03 & \cellcolor[HTML]{CCBCCF}0.48 & 0.06 & 0.11 \\ \cline{2-8} & 5 & 0.01 & 0.01 & 0.04 & 0.00 & \cellcolor[HTML]{CCBCCF}0.40 & 0.00 \\ \cline{2-8} & 6 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & \cellcolor[HTML]{CBBCCF}0.56 \\ \cline{2-8} \end{tabular} \caption{Normalized confusion matrix for Logistic Regression for classification into six word-importance classes, using the BERT-generated embedding-based scores.} \label{tab:model_parameters_3} \end{center} \end{table} \section{Limitations and Future Work} There are several limitations of this ongoing research that we intend to address in future work. \begin{itemize} \item In our current research, we have determined a semantic score for each word using three methods. Future research can use other methods to generate semantic scores and retrospectively compare them with the scores assigned by the human annotators. \item The findings from this analysis leave room for future improvements, since we did not tune the models' hyperparameters. Therefore, future research can explore variations of these models. \item Future directions may include collecting additional data to balance the distribution of importance classes. 
In addition, given the role of part of speech (POS) for word importance in texts \cite{Shah02astudy}, a next step could be to investigate POS with contextual word embedding for predicting word importance. Since TV captions often represent conversational speech with filler words, e.g., \emph{hmm} or \emph{yeah}, future research could consider alternative strategies to score the importance of such words. \item \citet{hutchinson-etal-2020-social} and \citet{hassan-etal-2021-unpacking-interdependent} demonstrate that a large language model like BERT can introduce bias relating to people with disabilities into a task. Therefore, future work can investigate whether BERT is introducing any latent bias in predicting importance of words from DHH viewers' perspective. \end{itemize} \section{Conclusion} The analysis presented above has revealed that BERT contextualized word-embeddings can better represent the importance of words compared to word2vec embeddings, which had been used in prior work on word-importance prediction \cite{Kafle_ACE}. Research indicates that DHH viewers often follow key terms while skimming through captions, and researchers have proposed approaches to guide DHH readers to quickly identify keywords in caption text through visual highlighting \cite{highlight_kafle_2019}. Our findings may allow broadcasters to use embeddings to determine the important words within a sentence and to highlight those words in captions, to support DHH viewers' ability to read \cite{amin-preference-2020} the captions effectively. In this study, a traditional Logistic Regression algorithm performed best among the models tested at predicting importance classes. We are also broadly investigating how to accurately measure the quality of caption transcriptions that are broadcast during live TV programs from the perspective of DHH viewers. We plan to incorporate predictive models into new word-importance weighted metrics, to better capture the usability of live captioning from DHH users' perspective. 
\section{Ethics Statement} This work advocates for improved inclusion of DHH individuals. A risk of the study is that results may not generalize across conversational corpora. \section*{Acknowledgments} This material is based on work supported by the Department of Health and Human Services under Award No.\ 90DPCP0002-0100, and by the National Science Foundation under Award No. DGE-2125362. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Department of Health and Human Services or National Science Foundation.
\section{Introduction} Geometry, and in particular differential geometry, is now considered a powerful tool to study statistical systems. Indeed, information geometry \cite{info1,info2,info3}, which started with the seminal paper by Rao~\cite{Rao}, has emerged from studies of the invariant geometrical structure involved in statistical inference. It defines a Riemannian metric together with dually coupled affine connections in a manifold of probability distributions. These geometric structures play important roles not only in statistical inference but also in wider areas of information science, such as machine learning, signal processing, optimization, neuroscience, mathematics and, of course, physics \cite{info1,info2,info3}. Thermodynamic geometry (TG), a specific application of information geometry methods to equilibrium thermodynamics, started with an initial \cite{Rao,Wein1975} definition of a metric for statistical systems, i.e. a measure of the ``distance'' between different thermal equilibrium configurations, later refined in ref.~\cite{Ruppeiner:1979} by determining the metric tensor, $g_{\mu \nu}$, through the Hessian of the entropy density. This definition of $g_{\mu \nu}$ is crucial since the resulting distance is in inverse relation with the fluctuation probability between equilibrium states and, moreover, it leads to the ``interaction hypothesis'', i.e. the correspondence between the absolute value of the scalar curvature $R$ (an intensive variable, with units of a volume, evaluated from the metric) and $\xi^3$, the cube of the correlation length, $\xi$, of the thermodynamic system. Indeed, a covariant and consistent thermodynamic fluctuation theory can be developed~\cite{Ruppeiner:1995zz}, which generalizes the classical fluctuation theory and offers a theoretical justification for the physical meaning of $R$. 
TG has been tested in many different systems: in phase coexistence for Helium, Hydrogen, Neon and Argon~\cite{Ruppeiner:2011gm}, for the Lennard-Jones fluids~\cite{May:2012,May:2013}, for ferromagnetic systems and liquid-liquid phase transitions~\cite{Dey:2011cs}; in the liquid-gas-like first order phase transition in the dyonic charged AdS black hole~\cite{Chaturvedi:2014vpa}; in the Hawking-Page transitions in Gauss-Bonnet-AdS black holes~\cite{Sahay:2017hlq}. More recently~\cite{Castorina:2018ayy,Castorina:2018gsx}, TG has been applied to field theories and, in particular, to Quantum Chromodynamics (QCD) at large temperature and low baryon density, to evaluate the \mbox{(pseudo-)~critical} deconfinement temperature $T_c$ and to compare the results with Hadron Resonance Gas models. In this paper a systematic application of TG to the Nambu-Jona-Lasinio (NJL) model is carried out. This study is interesting not only per se, since the NJL model gives clear indications on some dynamical mechanisms of low energy QCD, such as chiral symmetry breaking, but also because a fundamental property of QCD, quark confinement, is missing in the NJL model, with interesting consequences for the geometrical description. The TG approach is recalled in Sec.~\ref{sec:TG} and in Sec.~\ref{sec:NJL} the phase diagram of the Nambu-Jona-Lasinio model is discussed. Sec.~\ref{sec:NJLG} is devoted to the thermodynamic geometry description of chiral symmetry restoration in the NJL model, in the chiral limit and for finite fermion masses. The geometrical difference in describing QCD and NJL phase transitions is considered in Sec.~\ref{sec:QCD} and Sec.~\ref{sec:CC} contains our comments and conclusions. \section{Thermodynamic Geometry \label{sec:TG}} In this section the procedure to define the thermodynamic metric is briefly recalled (the details are in refs. 
\cite{Ruppeiner:1995zz,Ruppeiner:1998}) and the description of phase transitions by the scalar curvature, $R$, is discussed, making use also of the application to real fluids. \subsection{Thermodynamic metric} Let $A_U$ be a large thermodynamic system (universe) and let us consider an open subsystem $A$ with thermodynamic coordinates $a^0$, the internal energy density, and $a^i$, the number densities of particles of different species. The probability density to find $A$ in the ``point'' $a=(a^0,a^1,\cdots)$ is given by \begin{equation}\label{eq:Probability} P(a,a_U)\,d^na =C\;e^{S_U(a,a_U)}\;d^na\;, \end{equation} where $C$ is a normalization constant, \mbox{$a_U=(a^0_U,a^1_U,\cdots)$} denotes the state of the universe and $S_U$ its total entropy, formally regarded as an exact function of the parameters of $A$ and $A_U$. On the basis of the maximum entropy principle and in the framework of the Consistent and Covariant Fluctuation Theory (CCFT)~\cite{Ruppeiner:1995zz}, the thermodynamic properties of $A$ can be studied through the introduction of a quadratic form, \begin{equation}\label{eq:dist} \left(\Delta \ell\right)^2 = g_{\mu\nu} \;\Delta a^\mu \; \Delta a^\nu \;, \end{equation} where $\Delta a^\mu = a^\mu - a^\mu_U$ and \begin{equation}\label{eq:g2} g_{\mu\nu} = - \frac{\partial^2 s}{\partial a^\mu \partial a^\nu}\Bigg|_{a=a_U } \end{equation} defines a positive-definite Riemannian metric on the space of thermodynamic states as the Hessian of the entropy density, $s$, with respect to its natural variables $a^\mu$. One can show~\cite{Ruppeiner:1995zz} that the previous formulas give the probability of spontaneous fluctuations between equilibrium states. 
Indeed, by expanding eq.~\eqref{eq:Probability} up to second order for $a \simeq a_U$, the maximum entropy state, one finds the classical gaussian normalized fluctuation probability density: \begin{equation}\label{eq:PG} \begin{split} P(a,a_U)\,d^na = &\left(\frac{V}{2\,\pi} \right)^{\frac{n}{2}} \sqrt{g_U}\;\times\\ &\times\;\exp\left\{-\frac{V}{2}\,g_{\mu\nu}\,\Delta a^\mu \,\Delta a^\nu\right\}\,d^na\;, \end{split} \end{equation} where $g$ is the determinant of $g_{\mu\nu}$ and $\sqrt{g}\,d^n x$ the usual invariant volume on a Riemannian manifold. In the analysis of the phase transitions in the NJL model by thermodynamic geometry we shall consider a two-dimensional manifold, where the intensive coordinates are $\beta=1/T$ and $\gamma = -\mu/T$, with $\mu$ the chemical potential. Moreover, the metric~\eqref{eq:g2} turns out to be related to the derivatives of the potential $\phi=p/T$, where $p$ is the pressure~\cite{Ruppeiner:1998}: \begin{equation} g_{\mu\nu} = \left(\begin{array}{cc} \phi_{,\beta\beta} & \phi_{,\beta\gamma}\\ \phi_{,\beta\gamma} & \phi_{,\gamma\gamma} \end{array}\right) \;, \end{equation} with the usual comma notation for derivatives. The scalar curvature $R$ simply becomes \begin{equation} R = \frac{1}{2\,g^2}\; \left|\begin{array}{ccc} \phi_{,\beta\beta} & \phi_{,\beta\gamma} & \phi_{,\gamma\gamma}\\ \phi_{,\beta\beta\beta} & \phi_{,\beta\beta\gamma} & \phi_{,\beta\gamma\gamma}\\ \phi_{,\beta\beta\gamma} & \phi_{,\beta\gamma\gamma} & \phi_{,\gamma\gamma\gamma} \end{array}\right| \;. \end{equation} \subsection{Phase transition in thermodynamic geometry} The main results of thermodynamic geometry within Ruppeiner's formulation~\cite{Ruppeiner:1995zz} are: 1)~the (inverse) relation between the line element and the fluctuation probability between equilibrium states; 2)~the so-called \textit{interaction hypothesis}: the absolute value of the scalar curvature $R$ is proportional to a power of the correlation length, i.e. 
$|R|\sim\xi^d$, where $d$ is the effective spatial dimension of the underlying thermodynamic system. The meaning of the correlation length and of the scalar curvature can be represented as in Fig.~\ref{fig:xi} (a schematic picture due to Widom~\cite{Widom:1974}): the intricate line represents what the surface of density $\rho(r)=\rho_0$ might look like at any instant. This surface separates two sides with local mean densities $\rho>\rho_0$ and $\rho<\rho_0$. By tracing any straight line, the intersection points with the surface $\rho_0$ are separated by an average distance equal to $\xi$. Because such points are separated by the same mean distance $\xi$, whatever the direction of the line, it is convenient to think of these regions as volume elements (``droplets'') of size $|R|\sim \xi^d$. Figure~\ref{fig:Rvari} shows a schematic summary of different configurations. \begin{figure} \centering \includegraphics[width=0.6\columnwidth]{sez1fig1} \caption{Schematic picture of the meaning of $\xi$: the intricate line represents the surface $\rho(r)=\rho_0$, i.e. the one separating two sides with local mean densities $\rho>\rho_0$ and $\rho<\rho_0$. By tracing any straight line, the intersection points are separated by an average distance equal to $\xi$. Figure~from~\cite{Ruppeiner:2012}.} \label{fig:xi} \end{figure} \begin{figure} \centering \includegraphics[width=0.75\columnwidth]{sez1fig2} \caption{Schematic pictures of different possible particle arrangements: (a) a cluster of particles with volume $|R|$ pulled together by the attractive part of the interparticle interaction ($R<0$); (b) a repulsive solid-like cluster held up by hard-core particle repulsion ($R>0$); (c-d) a fluid in two phases near the critical point: the bottom half is a liquid phase containing vapor droplets with volume $|R_l|$. The top half is a coexisting vapor phase containing liquid droplets with volume $|R_v|$. 
In (c) $|R_v|=|R_l|$ and the droplets are commensurate, in (d) liquid and vapor phases have incommensurate droplets; (e) liquid phase; (f) solid phase with $R>0$. Figure~from~\cite{Ruppeiner:2012}.} \label{fig:Rvari} \end{figure} The interaction hypothesis has been confirmed by the study of the classical ideal gas ($R=0$~\cite{Ruppeiner:1979}) and of the van der Waals gas~\cite{Ruppeiner:1995zz}, for which, near the liquid-vapor critical point, $T_c$, the curvature is $R\sim \left|\left(T-T_c\right)/T_c\right|^{-2}$. Other confirmations come from the study of the Takahashi gas~\cite{Ruppeiner:1995zz}, the Curie-Weiss model~\cite{Janyszek:1989} and the one-dimensional ferromagnetic Ising model~\cite{Janyszek:1990}. For a more complete list of applications see Tab.~I of Ref.~\cite{Ruppeiner:2010}. The relation between $|R|$ and $\xi^d$ is easy to verify for second order phase transitions, since $R$ diverges, but the criterion to define a new phase in terms of the curvature $R$ for a first order phase transition or a crossover is less clear. The approach called \textit{$R$-Crossing Method} (RCM)~\cite{Ruppeiner:2011gm} is often applied to define first order phase transitions. It is based on the continuity of the scalar curvature: knowing $R$ in the two phases, one can build up the transition curve by imposing its continuity. The RCM, which is consistent with Widom's microscopic description of the liquid-gas coexistence region (i.e. 
with the idea that the correlation lengths of the two phases must be the same at the transition), has been tested in systems with different features: the vapor-liquid coexistence line for the Lennard-Jones fluids~\cite{May:2012,May:2013}, first and second order phase transitions of the mean-field Curie-Weiss model (ferromagnetic systems), liquid-liquid phase transitions~\cite{Dey:2011cs}, and phase transitions of cosmological interest such as the liquid-gas-like first order phase transition in the dyonic charged AdS black hole~\cite{Chaturvedi:2014vpa}. Another criterion, applied in the study of first order phase transitions in real fluids~\cite{Ruppeiner:2012} and Lennard-Jones systems~\cite{May:2013}, is a discontinuity of the first kind in $R$. Finally, two different phases can be linked by a crossover, as for the QCD deconfinement transition. Also in this case there is no definitive conclusion on the behavior of $R$, although it has been recently shown~\cite{Castorina:2018ayy} that the condition $R = 0$ predicts a temperature for the transition from QCD to the Hadron Resonance Gas at low baryon density in agreement with the freeze-out curve~\cite{Floris:2014pta,Das:2014qca,Adamczyk:2017iwn} and (within $10\%$) with lattice data \cite{Steinbrecher:2018phh,Bazavov:2017dus}. Another interesting aspect of the geometrical approach to phase transitions is that the sign of the scalar curvature carries information about the microscopic interactions, since $R$ turns out to be positive for Fermi statistics and negative in the bosonic case~\cite{Janyszek:1990b,Ubriaco:2016}. Therefore a change in sign of $R$ is an indication of the balance between effective interactions, even when no transition occurs, and theoretical curves with $R=0$ in pure fluids identify some anomalous behaviors observed in the experimental data of several substances (in particular, water)~\cite{Ruppeiner:2017,Ruppeiner:2012}. 
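As a consistency check of the determinant formula for $R$ in the $(\beta,\gamma)$ coordinates given above, the classical ideal gas result $R=0$~\cite{Ruppeiner:1979} can be reproduced symbolically. The sketch below (Python with sympy, not part of the original analysis) uses $\phi=p/T=c\,\beta^{-3/2}e^{-\gamma}$ for a classical monoatomic gas, where the prefactor $c$ collects the mass and $\hbar$ factors and drops out of the result:

```python
import sympy as sp

beta, gamma, c = sp.symbols('beta gamma c', positive=True)

# Potential phi = p/T for a classical monoatomic ideal gas, with
# gamma = -mu/T; the constant c collects mass and hbar factors.
phi = c * beta**sp.Rational(-3, 2) * sp.exp(-gamma)

def d(*vs):
    return sp.diff(phi, *vs)

# Metric determinant: Hessian of phi in the (beta, gamma) coordinates
g = d(beta, beta)*d(gamma, gamma) - d(beta, gamma)**2

# Scalar curvature from the 3x3 determinant formula quoted in the text
mat = sp.Matrix([
    [d(beta, beta),        d(beta, gamma),        d(gamma, gamma)],
    [d(beta, beta, beta),  d(beta, beta, gamma),  d(beta, gamma, gamma)],
    [d(beta, beta, gamma), d(beta, gamma, gamma), d(gamma, gamma, gamma)],
])
R = sp.simplify(mat.det() / (2*g**2))
print(R)   # -> 0: flat geometry, no interactions
```

The determinant vanishes identically because each $\gamma$-derivative of this $\phi$ only flips its sign, so the first and third rows are proportional.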
A transition from $R > 0$ to $R < 0$ has also been shown for the Lennard-Jones system~\cite{May:2013,May:2012} and the anyon gas~\cite{Mirza:2008fy,Ubriaco:2013}. For black holes~\cite{Sahay:2010tx}, the change in sign of the curvature occurs at the Hawking-Page transition temperature, which is therefore associated with the condition $R = 0$. In the next sections we shall apply the thermodynamic geometry approach to the NJL phase diagram, both in the chiral limit and for finite fermion mass. The behavior of the scalar curvature in the quantitative description of the critical line in the $T - \mu$ plane will be pointed out. \subsection{An example: real fluids} The geometrical study of fluids is based on the Helmholtz free energy per volume, $f$, in terms of the $(T,\rho)$ coordinates ($T$ is the temperature, $\rho = N/V$ is the particle density) and the corresponding thermodynamic line element is given by~\cite{Ruppeiner:2012} \begin{equation} \Delta \ell^2 = -\frac{1}{T}\left(\frac{\partial^2 f}{\partial T^2}\right)_\rho \Delta T^2 + \frac{1}{T}\left(\frac{\partial^2 f}{\partial \rho^2}\right)_T \Delta \rho^2 \;. \end{equation} The scalar curvature turns out to be \begin{equation} R = \frac{1}{\sqrt{g}}\;\left[ \frac{\partial }{\partial T}\left(\frac{1}{\sqrt{g}}\;\frac{\partial g_{\rho\rho}}{\partial T}\right) + \frac{\partial }{\partial \rho}\left(\frac{1}{\sqrt{g}}\;\frac{\partial g_{TT}}{\partial \rho}\right) \right] \;, \end{equation} with \begin{equation} g_{TT} = -\frac{1}{T}\left(\frac{\partial^2 f }{\partial T^2} \right)_{\rho} \;,\qquad g_{\rho\rho} = \frac{1}{T}\left(\frac{\partial^2 f }{\partial \rho^2} \right)_{T} \;, \end{equation} and $g=g_{TT}\,g_{\rho\rho}$. In Refs.~\cite{Ruppeiner:2012,Ruppeiner:2015,Ruppeiner:2017} the real fluid free energy is modeled on the NIST Chemistry WebBook and $R$ is evaluated in the liquid and vapor phases and along the liquid-vapor coexistence curve ending at the critical point $T_c$. 
At the critical point $R \rightarrow - \infty$ with a power law behavior and in the asymptotic critical region, i.e. very close to the critical temperature, the values of the scalar curvature evaluated in the two phases coincide. However, in other regions of the thermodynamic parameter space the values of $R$ in the liquid and the vapor phases~\cite{Ruppeiner:2012} are quite different and mesoscopic fluctuating structures of different sizes occur in the two phases (see fig.~\ref{fig:Rvari}.d). In the phase diagram of fluids, $R$ is generally found to be negative since the average molecular distances are such that the attractive part of the intermolecular potential dominates. However, different anomalous regions, i.e. with $R>0$, exist (see fig.~4 in ref.~\cite{Ruppeiner:2017}). They are localized: (a) in the supercritical liquid region, near the melting line; (b) in the liquid phase near the triple point (for water); (c) in the vapor phase, in some regions called ``repulsive clusters''~\cite{Ruppeiner:2017}. The thermodynamic states for cases (a) and (b), named solid-like-liquid states, emerge when the liquid organizes into solid-like structures at large densities, with a small intermolecular average separation. The states in the ``repulsive cluster'' areas (case c) are characterized by values of $|R|$ much larger than the volume of a single molecule and by low density, and have been observed in 97 different fluids (except those consisting of the simplest molecules) along the saturated vapor phase curve. 
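The divergence of $R$ at the critical point can be checked explicitly for the van der Waals fluid, for which $R\sim\left|(T-T_c)/T_c\right|^{-2}$ near criticality. The sketch below evaluates the $(T,\rho)$ curvature formula above along the critical isochore in reduced units ($a=b=1$, so $T_c=8/27$ and $\rho_c=1/3$); the mean-field free energy model and the chosen sample points are illustrative assumptions, not taken from the refs.:

```python
import sympy as sp

T, rho = sp.symbols('T rho', positive=True)
a, b = 1, 1                                        # van der Waals constants
Tc, rhoc = sp.Rational(8, 27), sp.Rational(1, 3)   # critical point for a=b=1

# Helmholtz free energy per volume for a van der Waals fluid;
# terms linear in T or rho drop out of the metric and are omitted.
f = -rho*T*(sp.log(1/rho - b) + sp.Rational(3, 2)*sp.log(T)) - a*rho**2

gTT = -sp.diff(f, T, 2)/T
grr = sp.diff(f, rho, 2)/T
g = gTT*grr

# Scalar curvature for the diagonal (T, rho) metric, as in the text
R = (sp.diff(sp.diff(grr, T)/sp.sqrt(g), T)
     + sp.diff(sp.diff(gTT, rho)/sp.sqrt(g), rho))/sp.sqrt(g)
Rn = sp.lambdify((T, rho), R)

# Approach Tc from above along the critical isochore: |R| blows up ~ eps^-2
vals = [abs(Rn(float(Tc)*(1 + eps), float(rhoc))) for eps in (0.04, 0.02, 0.01)]
print(vals)
```

Halving the distance from $T_c$ roughly quadruples $|R|$, consistent with the quoted exponent $-2$.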
\section{Nambu-Jona-Lasinio Model}\label{sec:NJL} In the Nambu-Jona-Lasinio (NJL) model with two flavors ($f=u,\;d$), the $SU(2)$ lagrangian~\cite{Klevansky:1994,Klevansky:1999,Buballa:2003qv} is given by \begin{equation} \begin{split} \mathcal L_{SU(2)}=&\overline \psi_f \left(i\,\partial\!\!\!/-m\right)\psi_f +\\ & + G\,\left[\left(\overline \psi_f \psi_f\right)^2+\left(\overline \psi_f i \gamma_5 \overrightarrow \tau\psi_f\right)^2\right]\;, \end{split} \end{equation} where $G$ is a dimensionful coupling, $m$ the current quark mass ($m=0$ is the chiral limit) and $\overrightarrow \tau$ are the Pauli matrices. In mean-field approximation the thermodynamic potential, $\Omega$, at finite temperature and chemical potential turns out to be~\cite{Buballa:2003qv} \begin{equation}\label{eq:Omega} \Omega(M_f) = \frac{\left(M_f-m\right)^2}{4\,G} +N_f\;\Omega_f \;, \end{equation} with \begin{equation}\label{eq:OmegaF} \begin{split} \Omega_f =& - 2\,N_c\int \frac{d^3 p}{(2\,\pi)^3}\;E_f -\\ & - 2\,N_c\,T\int \frac{d^3 p}{(2\,\pi)^3}\;\ln\left[1+e^{-\textstyle{\frac{E_f+\mu_f}{T}}}\right] -\\ &- 2\,N_c\,T\int \frac{d^3 p}{(2\,\pi)^3}\;\ln \left[1+e^{-\textstyle{\frac{E_f-\mu_f}{T}}}\right] \;, \end{split} \end{equation} where $M_f$ is the dynamically generated mass, $E_f=\sqrt{p^2+M_f^2}$, $N_c$ and $N_f$ are the numbers of colors and flavors respectively, $\mu_f$ is the chemical potential of the quark flavor $f$ and the integrals are regulated by a cutoff $\Lambda$. For $m_u=m_d$ and $\mu=\mu_u=\mu_d$, the generated quark mass is $M=M_u=M_d$. To find the minimum of $\Omega$ in eq.~\eqref{eq:Omega}, one has to solve the self-consistent gap equation \begin{equation}\label{eq:GAP} M = m-2\,G\,\left<\overline \psi \psi\right>\;, \end{equation} where $\left<\overline \psi \psi\right>$ is the quark-antiquark condensate: \begin{equation} \begin{split} \left<\overline \psi \psi\right>= -2\,N_c\,N_f\!\!\int\!\!\! 
\frac{d^3p}{(2\,\pi)^3}\,\frac{M}{E} \; \Psi(T,\mu) \;, \end{split} \end{equation} with \begin{equation}\label{eq:Psi} \Psi(T,\mu)= 1- n_+(\mu) - n_-(\mu) \end{equation} and \begin{equation}\label{eq:n} n_\pm(\mu) = \frac{1}{1+\exp\left\{\frac{E\pm\mu}{T} \right\}} \;. \end{equation} For three flavors with $m_u=m_d=m$ and $m_s\neq m$, one has $M_u=M_d\neq M_s$, and the $SU(3)$ lagrangian is~\cite{Buballa:2003qv} \begin{equation} \mathcal L_{SU(3)}=\overline \psi \left(i\,\partial\!\!\!/-\widehat m\right)\psi + \mathcal L_4+\mathcal L_6\;, \end{equation} where \begin{equation}\label{eq:l4} \mathcal L_4 = G\,\sum_a\left[\left(\overline \psi \lambda_a\,\psi\right)^2+\left(\overline \psi i \gamma_5 \lambda_a\psi\right)^2\right] \end{equation} and the 't Hooft interaction, $\mathcal L_6$, is given by \begin{equation} \mathcal L_6 = -K\Bigg[\det \overline \psi \left(1+\gamma_5\right)\psi +\det \overline \psi \left(1-\gamma_5\right)\psi\Bigg]\;, \end{equation} with $\psi = \left(u,d,s\right)^T$, $\widehat m=\text{diag}\left(m,m,m_s\right)$ and $\lambda_0 = \sqrt{2/3}\;\mathbb{1}_{3\times 3}$, where $\mathbb{1}_{3\times 3}$ is the $3\times3$ identity matrix, $\lambda_a$ ($a=1,\;\ldots,\;8$) are the Gell-Mann matrices and $K$ and $G$ are dimensionful couplings. 
The gap equations, \begin{equation}\label{eq:GAP3q} \begin{split} M_i = m_i -4\,G\,\left<\overline \psi_i\psi_i\right> +2\,K\,&\left<\overline \psi_j\psi_j\right>\left<\overline \psi_k\psi_k\right>\\ &\qquad (j,\;k\neq i) \;, \end{split} \end{equation} are coupled with the quark condensates \begin{equation}\label{eq:c} \begin{split} \left<\overline \psi_i \psi_i\right>= -2\,N_c\int \frac{d^3p}{(2\,\pi)^3}\,\frac{M_i}{E_i} \; \Psi_i \;, \end{split} \end{equation} where \begin{equation} \Psi_i =1 - \frac{1}{1+e^{\frac{E_i+\mu_i}{T}}} - \frac{1}{1+e^{\frac{E_i-\mu_i}{T} }} \;, \end{equation} and the mean-field thermodynamic potential $\Omega$ turns out to be~\cite{Buballa:2003qv} \begin{equation}\label{eq:Omega3q} \begin{split} \Omega = \sum_{f=u,d,s}\Omega_f &+ 2\,G\,\sum_{f=u,d,s}\left<\overline \psi_f \psi_f\right>^2-\\ & -4\,K\,\left<\overline u u\right>\left<\overline d d\right>\left<\overline s s\right>\;, \end{split} \end{equation} with $\Omega_f$ given in eq.~\eqref{eq:OmegaF}. Finally, the potential needed for the thermodynamic geometry calculations is \begin{equation}\label{eq:phi} \phi(\beta,\gamma) = \frac{P}{T} = -\Omega(\beta,\gamma)\;\beta \;, \end{equation} where $P=-\Omega$ is the pressure. \section{Thermodynamic geometry of chiral symmetry restoration in the NJL model}\label{sec:NJLG} \subsection{Two flavors in the chiral limit} \label{sec:NJL2chi} Let us first discuss the chiral limit ($m=0$) for two flavors, starting from the breaking of chiral symmetry at $T=\mu=0$, with the value of the dynamical mass $M_0(0,0)=300$~MeV, corresponding to $\Lambda=650$~MeV and $G=5.01\times 10^{-6}\;\text{MeV}^{-2}$~\cite{Klevansky:1994,Klevansky:1999}. The well-known solution $M(T,\mu)$ of the gap equation~\eqref{eq:GAP}, for different values of the temperature and of the quark chemical potential, is plotted in Figs.~\ref{fig:M2fChi}.a and \ref{fig:M2fChi}.b. 
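The quoted parameter set can be checked directly: at $T=\mu=0$ the momentum integral in the gap equation has a closed form, and a simple fixed-point iteration (an implementation choice for this sketch, not taken from the refs.) reproduces $M_0(0,0)\simeq 300$~MeV:

```python
import math

# Two-flavor NJL gap equation at T = mu = 0 in the chiral limit (m = 0),
# with the parameter set quoted in the text.
LAMBDA = 650.0   # three-momentum cutoff [MeV]
G = 5.01e-6      # coupling [MeV^-2]
NC, NF = 3, 2    # numbers of colors and flavors

def gap_rhs(M):
    """RHS of M = (2 G Nc Nf / pi^2) * M * int_0^Lambda dp p^2/sqrt(p^2+M^2),
    using the closed form of the vacuum momentum integral."""
    s = math.sqrt(LAMBDA**2 + M**2)
    integral = 0.5*(LAMBDA*s - M**2*math.log((LAMBDA + s)/M))
    return 2.0*G*NC*NF/math.pi**2 * M * integral

# Fixed-point iteration from a chirally broken seed value
M = 100.0
for _ in range(200):
    M = gap_rhs(M)

print(f"M(0,0) = {M:.1f} MeV")   # ~300 MeV, as quoted in the text
```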
The restoration of chiral symmetry is a first order phase transition at large chemical potential and a second order one at low $\mu$. \begin{figure} \centering \includegraphics[width=\columnwidth]{M2fChiT} \includegraphics[width=1.02\columnwidth]{M2fChiMu} \caption{a) The dynamically generated mass, $M$, in the NJL model with two flavors in the chiral limit ($m_u = m_d = 0$ MeV), as a function of the temperature. The black line is for $\mu = 0$ MeV; the others are for growing $\mu$, up to $\mu = 300$ MeV in steps of $\Delta \mu = 20$ MeV. b) $M$ as a function of the chemical potential $\mu$. The black line is for $T =10$ MeV; the others are for growing $T$, up to $T = 170$ MeV in steps of $\Delta T = 20$ MeV.} \label{fig:M2fChi} \end{figure} The study of the critical line of the symmetry restoration, $T(\mu)$, by thermodynamic geometry requires the straightforward but laborious calculation of the scalar curvature $R$, reported in appendix~\ref{app:NJL2f}. It turns out that $|R|$ diverges at the critical temperature, i.e. there is a second order phase transition, for $\mu <\mu^\star \simeq 290$ MeV, as shown in fig.~\ref{fig:R2fChiII} for $\mu=0$. For $\mu>\mu^\star$ there is, instead, a first order phase transition. The dynamically generated mass, $M$, now takes the characteristic behavior plotted in figure~\ref{fig:MChiT30}, where the black curves (both the continuous and the dotted one) are for $T=30$~MeV and the two light-gray lines define the spinodal points. Between the two spinodal (light-gray) lines one can evaluate three different scalar curvatures: the first one for the higher-mass branch (black curve in figure~\ref{fig:MChiT30}); the second one for $M=0$~MeV; and the last one for the $M$-branch that interpolates between $M=0$ and the upper $M$-curve (dotted curve in figure~\ref{fig:MChiT30}). 
At fixed temperature and between the spinodal lines (see fig.~\ref{fig:MChiT30}), there is a discontinuity in $|R|$ which identifies the two dashed curves in fig.~\ref{fig:T2fChi}. The crossing temperature from the first order to the second order phase transition turns out to be about $58$~MeV. For small $\mu$ and near the transition the curvature is negative, i.e. the interaction is mostly attractive, suggesting that the chiral symmetry restoration is due to thermal fluctuations. On the other hand, at large chemical potential $R$ turns out to be positive, indicating a screening of the potential and an increase of the repulsive interaction at large density. The complete critical line obtained by thermodynamic geometry is depicted in Figure~\ref{fig:T2fChi}, where the continuous line shows the second order phase transition and the dashed lines the spinodal curves of the first order one. The green band is the region of negative $R$. \begin{figure} \centering \includegraphics[width=1\columnwidth]{FigNJL1} \caption{$R$ for $\mu=0$~MeV: second order phase transition.} \label{fig:R2fChiII} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{MchiT30} \caption{The dynamically generated mass $M$ in the two-flavor NJL model in the chiral limit, at temperature $T=30$~MeV.} \label{fig:MChiT30} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{T2fChiC} \caption{The transition temperature: the continuous line is for the second order phase transition and the dashed ones for the first order one. The transition point is at $\mu_\chi^\star = 290$~MeV and $T_\chi^\star=58$~MeV. The green band is the region of $R<0$.} \label{fig:T2fChi} \end{figure} \subsection{Two flavors with finite quark masses} With finite current quark masses, at high temperature and low chemical potential, there is a smooth crossover rather than a second-order phase transition. Moreover, the first-order phase boundary ends in a second-order endpoint~\cite{Buballa:2003qv}. 
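The smooth melting of $M$ with temperature at $\mu=0$ can be illustrated by iterating the finite-temperature gap equation, using $1-n_+-n_-=\tanh\left(E/2T\right)$ at $\mu=0$. In the sketch below the Simpson integration and the fixed-point iteration are implementation choices, not taken from the refs.; the parameters are the two-flavor values used in the text:

```python
import math

# mu = 0 gap equation for two flavors with a finite current quark mass:
#   M = m0 + (2 G Nc Nf / pi^2) * M * int_0^Lambda dp p^2/E * tanh(E/2T),
# where 1 - n_+ - n_- = tanh(E/2T) at mu = 0.
LAMBDA, G, M0 = 650.0, 5.01e-6, 5.5   # MeV, MeV^-2, MeV
NC, NF = 3, 2

def thermal_integral(M, T, n=400):
    """Simpson rule for int_0^Lambda dp p^2/E * tanh(E/2T)."""
    h = LAMBDA/n
    total = 0.0
    for i in range(n + 1):
        p = i*h
        E = math.sqrt(p*p + M*M)
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        total += w * p*p/E * math.tanh(E/(2.0*T))
    return total*h/3.0

def mass(T):
    """Fixed-point iteration of the gap equation at temperature T [MeV]."""
    M = 300.0
    for _ in range(200):
        M = M0 + 2.0*G*NC*NF/math.pi**2 * M * thermal_integral(M, T)
    return M

for T in (10.0, 150.0, 300.0):
    print(f"T = {T:5.1f} MeV  ->  M = {mass(T):6.1f} MeV")
```

With a finite $m_0$ the constituent mass decreases smoothly with $T$ and never reaches zero, the crossover behavior described above.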
The solution of the gap equation~\eqref{eq:GAP} (with $\Lambda=650$~MeV, $G=5.01\times 10^{-6}\;\text{MeV}^{-2}$ and $m_0=5.5$~MeV) as a function of $T$ and $\mu$ is shown in Figs.~\ref{fig:M2f}.a and \ref{fig:M2f}.b. To clarify the effect of the current mass in the calculation of the scalar curvature, Fig.~\ref{fig:R2f} shows that $R$ diverges in the chiral limit but, for $m_0 \neq 0$, near the transition temperature it has a minimum, corresponding to a maximum of $|R|$, i.e. to a finite correlation length. Therefore, $m_0 \neq 0$ changes the behavior of $R$ near the critical temperature: the divergence of the second order phase transition turns into a minimum in the negative-$R$ region, and the transition temperature evaluated by the maximum of $|R|$ is in complete agreement with the one obtained from the chiral susceptibility (see eq.~\eqref{eq:chi} in appendix~\ref{app:NJL2f}). For low temperature and large chemical potential, the scalar curvature $R$ has the same behavior previously discussed in the chiral limit, i.e. a first order phase transition. The critical point, $\left(T^\star, \mu^\star\right)$, between the crossover and the first order phase transition depends on $m_0$ and for (the generally accepted value) $m_0=5.5$~MeV one has $\mu^\star\simeq 329$~MeV and $T^\star\sim 32$~MeV. \begin{figure} \centering \includegraphics[width=\columnwidth]{M2fT} \includegraphics[width=0.96\columnwidth]{M2fMu} \caption{a) The dynamically generated mass, $M$, in the NJL model with two flavors of identical mass ($m_u = m_d= 5.5$ MeV), as a function of the temperature. The black line is for $\mu = 0$ MeV; the others are for growing $\mu$, up to $\mu = 340$ MeV in steps of $\Delta \mu = 20$ MeV. b) $M$ as a function of the chemical potential $\mu$. The black line is for $T =10$ MeV; the others are for growing $T$, up to $T = 400$ MeV in steps of $\Delta T = 20$ MeV. 
} \label{fig:M2f} \end{figure} \begin{figure} \centering \includegraphics[width=1\columnwidth]{FigNJL3} \caption{$R$ for $\mu=0$~MeV and different values of the bare mass $m_0$: the continuous line is for $m_0=0$~MeV (the chiral limit) and $R$ shows a negative divergence. The dashed line is for $m_0=2.5$~MeV and the dotted one for $m_0=5.5$~MeV; both show a finite region with negative $R$ around the transition temperature, which corresponds to the local maximum of $|R|$. } \label{fig:R2f} \end{figure} Figure~\ref{fig:T2f} shows the critical line for $m_0=5.5$~MeV: the continuous line is obtained by the maximum of $|R|$ and the dashed ones are the spinodal curves. The black circle is at $\mu^\star= 329$~MeV and $T^\star=32$~MeV. The green band is the region of $R<0$. \begin{figure} \centering \includegraphics[width=\columnwidth]{T2f} \caption{The transition temperature from the $R$ conditions for $m_0=5.5$~MeV: the continuous line is obtained by the local maximum of $|R|$, the dashed ones indicate the spinodal lines. The circle is at $\mu^\star= 329$~MeV and $T^\star=32$~MeV. The green band is the region of $R<0$. } \label{fig:T2f} \end{figure} \subsection{Three flavors} The three-flavor NJL model is studied with the parameter values~\cite{Casalbuoni2005} \begin{equation} \begin{split} \Lambda = 631.4\;\text{MeV} &\quad\quad G\,\Lambda^2=1.835 \\ &K\,\Lambda^5 = 9.29 \\ m=5.5\;\text{MeV} &\quad\quad m_s=135.7\;\text{MeV} \end{split} \end{equation} and only one chemical potential ($\mu=\mu_d=\mu_u$, $\mu_s=0$). The dynamically generated masses $M_u=M_d$ and $M_s$ are now solutions of the system of eqs.~\eqref{eq:GAP3q} and \eqref{eq:c}. Their behavior is similar to that depicted in fig.~\ref{fig:M2f}, but with different values for the light and strange quarks. Also in this case there is a crossover at low chemical potential and large $T$ and a first order phase transition at low temperature and large $\mu$. 
The behavior of the scalar curvature is essentially the same as in the previous case with two flavors and physical masses. In figure~\ref{fig:NJLRchi} the ratios $\chi_s/\chi_{s max}$ (dashed line), $\chi_u/\chi_{u max}$ (dotted line) and $|R|/|R|_{max}$ (continuous line) are depicted to visualize that the maximum of $|R|$ corresponds to the peak of the chiral susceptibilities. Figure~\ref{fig:T3f} shows the transition temperature obtained by the evaluation of $R$: the continuous line is again obtained by the maximum of $|R|$ and the dashed ones are the spinodal curves. The black circle is at $\mu^\star\sim335$~MeV and $T^\star\sim35$~MeV. The green band is the region of negative $R$. \begin{figure} \centering \includegraphics[width=\columnwidth]{NJLRchi} \caption{The ratios $\chi_s/\chi_{s max}$ (dashed line), $\chi_u/\chi_{u max}$ (dotted line) and $|R|/|R|_{max}$ (continuous line) at $\mu=0$~MeV.} \label{fig:NJLRchi} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{T3f} \caption{The transition temperature from the $R$ conditions: the continuous line is obtained by the local maximum of $|R|$, the dashed ones indicate the spinodal lines. The circle is at $\mu^\star\sim 335$~MeV and $T^\star \sim 35$~MeV. The green band is the region of $R<0$.} \label{fig:T3f} \end{figure} \subsection{Thermal geometric definition of the phase transitions in the NJL model: summary} It is useful to conclude this section by summarizing the geometrical definition of the phase transitions: \begin{itemize} \item a second order phase transition occurs for two flavors in the chiral limit ($m=0$) at low chemical potential. This transition is characterized by a divergent scalar curvature; \item for finite quark masses, there is a crossover, both for two and three flavors, at low chemical potential and large $T$. 
The transition temperature is defined as the maximum of $|R|$ in the negative-$R$ region and it is in agreement with the chiral susceptibility analysis~\cite{Zhao:2008} (eqs.~\eqref{eq:chi}, \eqref{eq:chiu} and \eqref{eq:chis} in the appendix); \item a first order phase transition exists at low temperature and large $\mu$, both with two and three flavors and both in the chiral limit and with finite quark masses. This transition is related to a discontinuity in $R$. \end{itemize} \section{NJL model and QCD crossover}\label{sec:QCD} As seen in Sec.~\ref{sec:NJLG}, the NJL crossover can be identified by a local maximum of $|R|$. However, the NJL model lacks color confinement and therefore there is no a priori reason to apply the same geometric criterion to non perturbative QCD dynamics. Indeed, in the thermodynamic geometry description of the QCD deconfinement transition in ref.~\cite{Castorina:2018ayy} the criterion $R=0$ has been used. It indicates the transition from a mostly fermionic system (the quark-gluon plasma) to an essentially bosonic one (the hadron resonance gas) but, as shown in fig.~\ref{fig:ChiVsR}, it exactly corresponds to the maximum of the chiral susceptibility, confirming the well-known \cite{Casher:1979vw,Banks:1979yr,Castorina:1981iy,Digal:2000ar} interplay between confinement and chiral symmetry breaking. \begin{figure} \centering \includegraphics[width=\columnwidth]{LchiVsR} \caption{The chiral susceptibility $\chi$ at $\mu=0$~MeV as a function of the scalar curvature $R$, for the physical value of the strange quark mass, $m_s$, and $m_s/m_\ell = 20$ (dotted line) or $m_s/m_\ell = 27$ (continuous line).} \label{fig:ChiVsR} \end{figure} The transition temperature evaluated by \mbox{$R=0$} is in agreement with the freeze-out hadronization curve and with the pseudo-critical temperature from lattice data within $10\%$ \cite{Castorina:2018ayy}. 
\section{Comments and Conclusions}\label{sec:CC} Thermodynamic geometry reliably describes the phase diagram of the NJL model, both in the chiral limit and at finite mass, and indicates a geometrical interplay between the chiral symmetry restoration/breaking and deconfinement/confinement regimes. Moreover, in a very recent paper \cite{Ding:2018auz} the chiral phase transition temperature $T^0_c$, corresponding to a ``true'' chiral transition in the limit \mbox{$m_s/m_\ell \gg 1$}, turns out to be about $25$ MeV below the pseudo-critical temperature. Fig.~\ref{fig:ChiVsR} suggests that a small variation from $m_s/m_\ell = 20$ to $m_s/m_\ell = 27$ shifts the maximum of the chiral susceptibility from $R=0$ to a finite value of $|R|$, as in the NJL model. It is possible that in the effective chiral limit, i.e.\ $m_s/m_\ell \gg 1$, thermodynamic geometry recovers a ``true'' chiral phase transition at lower temperature, with the typical scaling laws. The role of color confinement in QCD in terms of thermodynamic geometry will be discussed in different models in a forthcoming paper. \vskip 10pt {\bf Acknowledgements} The authors thank H.~Satz for useful comments.
\section{Introduction} This paper is concerned with computing quantum sheaf cohomology rings, an analogue of quantum cohomology rings for heterotic strings. Quantum cohomology describes the operator product rings in A model topological field theories. Those operator product rings are deformations of the classical cohomology rings, and so are called `quantum cohomology' rings. The deformations encode information about minimal-area surfaces, and so quantum cohomology played an important role in the enumerative geometry revolution that swept through algebraic geometry starting in the early 1990s, and continues in various forms to this day. Quantum sheaf cohomology computes analogous invariants of pairs consisting of spaces $X$ together with vector bundles ${\cal E} \rightarrow X$ satisfying the conditions \begin{displaymath} \Lambda^{\rm top} {\cal E}^* \: \cong \: K_X, \: \: \: {\rm ch}_2({\cal E}) \: = \: {\rm ch}_2(TX). \end{displaymath} Such pairs define the `A/2 model,' a heterotic generalization of the A model. An analogue of quantum cohomology for the A/2 model was originally defined in \cite{ks} (motivated by physics considerations in \cite{abs}), and describes a deformation of the product structure on sheaf cohomology, for which reason this deformation has been named `quantum sheaf cohomology.' Much as in ordinary quantum cohomology, the deformation in question revolves around enumerative properties of $X$ -- specifically, one computes sheaf cohomology of induced sheaves over a moduli space of curves in $X$, corresponding physically to nonperturbative corrections to correlation functions of charged fields. Quantum sheaf cohomology and related notions have been further developed in a variety of recent papers including {\it e.g.} \cite{gk,mcom,s2,s3,ade,tan1,tan2,ms,mm2,m1,kmmp,gs2,mp1,pmp,mcorev,g1,mm3}. 
In this paper we shall outline general results for quantum sheaf cohomology for $X$ a compact toric variety and ${\cal E}$ a deformation of the tangent bundle of $X$, described as a deformation of the toric Euler sequence. In particular, in the past such computations have been done with either physics-based GLSM techniques (which so far have not been amenable to studying nonlinear deformations), or math-based computation-intensive brute-force Cech cohomology computations. One of the innovations of this paper and \cite{dgks} is a set of new ideas to radically simplify mathematics computations, which we use to obtain results of greater generality than previously obtainable with GLSM techniques. Utilizing those methods, we find, for example, that quantum sheaf cohomology rings, at least in these cases, are independent of nonlinear deformations, a result previously conjectured in \cite{mm2,kmmp}. Detailed proofs are left to \cite{dgks}. We begin in section~\ref{sect:genlprocedure} by describing the A/2 model (a holomorphic field theory), and outlining the correlation function computations in that theory, first at a formal level, then describing generalities of linear sigma model (LSM) compactifications and induced sheaves over moduli spaces of curves. In section~\ref{sect:oneproj} we begin by computing the quantum sheaf cohomology of a projective space. Since the tangent bundle of a projective space is rigid, the result will automatically match the ordinary quantum cohomology ring, but this is a useful warm-up exercise and demonstration of some of the technology we are introducing that simplifies general quantum sheaf cohomology computations. In section~\ref{sect:ex:proj}, we apply these ideas to compute quantum sheaf cohomology on a product of projective spaces.
Briefly, quantum sheaf cohomology reduces to a classical sheaf cohomology computation over the LSM moduli spaces, and for a product of projective spaces, the LSM moduli spaces are again a product of projective spaces, so we work through classical sheaf cohomology for products of general projective spaces, then apply those results to quickly compute quantum sheaf cohomology for a deformation of the tangent bundle on ${\bf P}^1 \times {\bf P}^1$. Projective spaces are a bit simple, so in section~\ref{sect:ex:hirz} we compute quantum sheaf cohomology for a deformation of the tangent bundle on a Hirzebruch surface, which allows us to tackle issues such as nonlinear deformations and four-fermi interaction terms. In section~\ref{sect:genl} we describe general results (derived in detail in \cite{dgks}). In appendix~\ref{sect:glsm-4fermi} we derive an ansatz for four-fermi terms from GLSM's, which is used both in this paper and in \cite{dgks}. \section{General procedure and definitions} \label{sect:genlprocedure} First, let us briefly review the A/2 model. Recall that on the $(2,2)$ locus, the A model topological field theory is a twist of the $(2,2)$ supersymmetric nonlinear sigma model \begin{displaymath} \frac{1}{\alpha'} \int_{\Sigma} d^2z \left( \left( g_{\mu \nu} + i B_{\mu \nu} \right) \partial \phi^{\mu} \overline{\partial} \phi^{\nu} \: + \: \frac{i}{2} g_{\mu \nu} \psi_+^{\mu} D_{\overline{z}} \psi_+^{\nu} \: + \: \frac{i}{2} g_{\mu \nu} \psi_-^{\mu} D_z \psi_-^{\nu} \: + \: R_{i \overline{\jmath} k \overline{l}} \psi_+^i \psi_+^{\overline{\jmath}} \psi_-^k \psi_-^{\overline{l}} \right), \end{displaymath} which is amenable to rational curve counting.
Specifically, the A model is defined by twisting worldsheet fermions into worldsheet scalars and vectors as follows \cite{edtft}: \begin{displaymath} \begin{array}{lcl} \psi_+^i \: \in \: \Gamma_{ C^{\infty} }\left( \phi^* T^{1,0}X \right), & \: \: \: & \psi_-^i \: \in \: \Gamma_{ C^{\infty} }\left( \overline{K}_{\Sigma} \otimes \left( \phi^* T^{0,1} X \right)^{*} \right), \\ \psi_+^{\overline{\imath}} \: \in \: \Gamma_{ C^{\infty} }\left( K_{\Sigma} \otimes \left( \phi^* T^{1,0} X \right)^{*} \right), & \: \: \: & \psi_-^{\overline{\imath}} \: \in \: \Gamma_{ C^{\infty} }\left( \phi^* T^{0,1} X \right). \end{array} \end{displaymath} The heterotic analogue of the A model, known as the A/2 model, is a twist of the $(0,2)$ nonlinear sigma model \begin{displaymath} \frac{1}{\alpha'} \int_{\Sigma} d^2z \left( \left(g_{\mu \nu} + i B_{\mu \nu} \right) \partial \phi^{\mu} \overline{\partial} \phi^{\nu} \:+ \: \frac{i}{2} g_{\mu \nu} \psi_+^{\mu} D_{\overline{z}} \psi_+^{\nu} \: + \: \frac{i}{2}h_{\alpha \beta} \lambda_-^{\alpha} D_{z} \lambda_-^{\beta} \: + \: F_{i \overline{\jmath} a \overline{b}} \psi_+^i \psi_+^{\overline{\jmath}} \lambda_-^a \lambda_-^{\overline{b}} \right), \end{displaymath} in which the fermions couple to bundles as follows: \begin{displaymath} \begin{array}{lcl} \psi_+^i \: \in \: \Gamma_{ C^{\infty} }\left( \phi^* T^{1,0} X \right), & \: \: \: & \lambda_-^a \: \in \: \Gamma_{ C^{\infty} }\left( \overline{K}_{\Sigma} \otimes \phi^* \overline{ \mathcal{E} }^* \right), \\ \psi_+^{\overline{\imath}} \: \in \: \Gamma_{ C^{\infty} } \left( K_{\Sigma} \otimes \left( \phi^* T^{1,0}X \right)^{*} \right), & \: \: \: & \lambda_-^{\overline{a}} \: \in \: \Gamma_{ C^{\infty} }\left( \phi^* \overline{ \mathcal{E} } \right), \end{array} \end{displaymath} where $\mathcal{E}$ is a holomorphic vector bundle on $X$. Anomaly cancellation requires \begin{displaymath} \Lambda^{\rm top} {\cal E}^* \: \cong \: K_X, \: \: \: {\rm ch}_2({\cal E}) \: = \: {\rm ch}_2(TX). 
\end{displaymath} (The second statement is the Green-Schwarz anomaly cancellation condition generic to all heterotic theories; the first is a condition specific to the A/2 twist, an analogue of the condition that the closed string B model can only propagate on spaces $X$ such that $K_X^{\otimes 2}$ is trivial \cite{s3,edtft}.) In fact, a specific choice of isomorphism $\Lambda^{\rm top} {\cal E}^* \cong K_X$ is part of the data needed to define the path integral. Although both left- and right-movers have been twisted, the theory defined by the twisting above is not a topological field theory, since the worldsheet theory has no supersymmetry acting on the left-movers. Nevertheless, it is sufficiently close to a true topological field theory to enable mathematical computations. The RR states of the A/2 model that generalize the A model states are counted by the sheaf cohomology groups $H^q(X, \Lambda^p {\cal E}^*)$. In general terms, we understand correlation functions in the A/2 model as follows (see \cite{ks} for a more complete discussion). For a space $X$ with holomorphic vector bundle ${\cal E} \rightarrow X$ satisfying \begin{displaymath} \det {\cal E}^* \: \cong \: K_X, \: \: \: {\rm ch}_2({\cal E}) \: = \: {\rm ch}_2(TX), \end{displaymath} the classical contribution to a correlation function is \begin{displaymath} \langle {\cal O}_1 \cdots {\cal O}_n \rangle \: = \: \int_X \omega_1 \wedge \cdots \wedge \omega_n, \end{displaymath} where each $\omega_i$ is an element of $H^*(X, \Lambda^* {\cal E}^*)$, and corresponds to an operator ${\cal O}_i$. The correlation function can only be nonzero if \begin{displaymath} \omega_1 \wedge \cdots \wedge \omega_n \: \in \: H^{\rm top}\left(X, \Lambda^{\rm top} {\cal E}^* \right) \end{displaymath} and we get a number from this because of the isomorphism \begin{displaymath} \det {\cal E}^* \: \cong \: K_X \end{displaymath} and the fact that $H^{\rm top}(X, K_X) \cong {\bf C}$.
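The requirement that the wedge product land in $H^{\rm top}(X, \Lambda^{\rm top} {\cal E}^*)$ amounts to simple degree bookkeeping on the operators. The following is our own minimal sketch of that selection rule (the function and its inputs are illustrative, not from the text):

```python
# Selection-rule bookkeeping for classical A/2 correlators: each
# operator O_i lives in H^{q_i}(X, Lambda^{p_i} E^*), and the wedge
# product can only land in H^top(X, Lambda^top E^*) when the degrees
# sum to (rank E, dim X).

def can_be_nonzero(ops, dim_X, rank_E):
    """ops: list of (p_i, q_i) bidegrees of the inserted operators."""
    p = sum(p for p, _ in ops)
    q = sum(q for _, q in ops)
    return (p, q) == (rank_E, dim_X)
```

For instance, on a surface with a rank-2 bundle, two operators of bidegree $(1,1)$ can pair into the top cohomology, while a single one cannot.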
In sectors of nonzero instanton degree, each ${\cal O}_i$ induces an element of $H^*({\cal M}, \Lambda^* {\cal F}^*)$, where ${\cal M}$ is the moduli space and ${\cal F}$ a sheaf on ${\cal M}$ induced by ${\cal E}$, as described in \cite{ks}. For example, if the moduli space ${\cal M}$ admitted a universal instanton $\alpha$, then ${\cal F} = R^0 \pi_* \alpha^* {\cal E}$. Schematically, if there are no $\psi_+^{\overline{\imath}}$, $\lambda_-^a$ zero modes, then the contribution to a correlation function in a sector of nonzero instanton degree will be of the form \begin{displaymath} \int_{\cal M} \tilde{\omega}_1 \wedge \cdots \wedge \tilde{\omega}_n, \end{displaymath} where each $\tilde{\omega}_i$ is an element of $H^*({\cal M}, \Lambda^* {\cal F}^*)$, and corresponds to an operator ${\cal O}_i$. In close analogy with the classical case, this contribution will be nonzero if \begin{displaymath} \tilde{\omega}_1 \wedge \cdots \wedge \tilde{\omega}_n \: \in \: H^{\rm top}\left({\cal M}, \Lambda^{\rm top} {\cal F}^* \right) \end{displaymath} and we get a number from this because the conditions \begin{displaymath} \det {\cal E}^* \: \cong \: K_X, \: \: \: {\rm ch}_2({\cal E}) \: = \: {\rm ch}_2(TX) \end{displaymath} imply (via Grothendieck-Riemann-Roch) that \begin{displaymath} \det {\cal F}^* \: \cong \: K_{\cal M}. \end{displaymath} If there are $\psi_+^{\overline{\imath}}$, $\lambda_-^a$ zero modes, then we have to make use of the four-fermi terms, as described in \cite{ks}. Define \begin{displaymath} {\cal F}_1 \: \equiv \: R^1 \pi_* \alpha^* {\cal E}, \: \: \: {\rm Obs} \: \equiv \: R^1 \pi_* \alpha^* TX, \end{displaymath} then one can formally identify the contribution of each four-fermi term with an insertion of \begin{displaymath} H^1\left( \mathcal{M}, \mathcal{F}^* \otimes \mathcal{F}_1 \otimes \left( \mbox{Obs} \right)^* \right). 
\end{displaymath} Assuming equal numbers of $\psi_+^{\overline{\imath}}$, $\lambda_-^a$ zero modes, correlation functions in such a sector will have the form \begin{displaymath} \int_{\cal M} \tilde{\omega}_1 \wedge \cdots \wedge \tilde{\omega}_n \wedge \alpha, \end{displaymath} where the $\tilde{\omega}_i$ are as before, and $\alpha$ is a wedge product of cohomology classes associated with four-fermi terms. Altogether the contribution can only be nonzero if \begin{displaymath} \tilde{\omega}_1 \wedge \cdots \wedge \tilde{\omega}_n \wedge \alpha \: \in \: H^{\rm top}\left( {\cal M}, \Lambda^{\rm top} {\cal F}^* \otimes \Lambda^{\rm top} {\cal F}_1 \otimes \Lambda^{\rm top} {\rm Obs}^* \right) \end{displaymath} and we get a number from this because in these circumstances the conditions \begin{displaymath} \det {\cal E}^* \: \cong \: K_X, \: \: \: {\rm ch}_2({\cal E}) \: = \: {\rm ch}_2(TX) \end{displaymath} imply (via Grothendieck-Riemann-Roch) that \begin{displaymath} \det {\cal F}^* \otimes \det {\cal F}_1 \otimes \det {\rm Obs}^* \: \cong \: K_{\cal M}. \end{displaymath} Now, let us begin to specialize to examples of the form we shall discuss in this paper. Consider a projective toric variety $X = X_{\Sigma}$ over ${\bf C}$ with fan $\Sigma$, presented by $n$ homogeneous coordinates. The tangent bundle $TX$ is defined by a cokernel of the form \begin{displaymath} 0 \: \longrightarrow \: {\cal O}^{\oplus r} \: \stackrel{E}{\longrightarrow} \: \bigoplus_{i=1}^n {\cal O}\left(\vec{q}_i\right) \: \longrightarrow \: TX \: \longrightarrow \: 0, \end{displaymath} where $r$ is the rank of the Picard lattice (so that ${\rm dim}\, X = n - r$), whose complexification we denote as \begin{displaymath} W = {\rm Pic}(X) \otimes_{\bf Z} {\bf C}. \end{displaymath} We will often denote ${\cal O}_X^{\oplus r}$ by $W \otimes {\cal O}_X$.
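As a quick consistency check on Euler-sequence presentations of this sort, Whitney's formula applied to the sequence above, specialized to ${\bf P}^n$ ($r=1$, with $n+1$ copies of ${\cal O}(1)$), gives $c(T{\bf P}^n) = (1+H)^{n+1}$ truncated at $H^{n+1}$. The snippet below is our own illustration (not part of the text's derivation); it multiplies out the total Chern class with integer coefficient lists and recovers the top Chern class $c_n = n+1 = \chi({\bf P}^n)$.

```python
# Whitney's formula on the Euler sequence 0 -> O -> O(1)^{n+1} -> T P^n -> 0:
# c(T P^n) = (1+H)^{n+1} mod H^{n+1}.  A class sum_k a_k H^k is stored
# as its coefficient list [a_0, ..., a_n].

def chern_euler(n):
    """Coefficients of c(T P^n) = (1+H)^(n+1) working mod H^(n+1)."""
    c = [1] + [0] * n                  # start from the class 1
    for _ in range(n + 1):             # n+1 line-bundle factors (1+H)
        c = [c[k] + (c[k - 1] if k > 0 else 0) for k in range(n + 1)]
    return c

# top coefficient c_n(T P^n) = binomial(n+1, n) = n+1 = chi(P^n)
```

The same coefficient-list bookkeeping applies verbatim to any of the toric Euler sequences above, with one factor per homogeneous coordinate.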
The map $E$ acts by mapping the $a^{\rm th}$ ${\cal O}$ as \begin{displaymath} {\cal O} \: \stackrel{ q_{ai} \phi_i }{\longrightarrow} \: {\cal O}\left( \vec{q}_i \right), \end{displaymath} where $\phi_i \in \Gamma\left( {\cal O}\left( \vec{q}_i \right) \right)$ is a homogeneous coordinate on the toric variety (see for example \cite{cls}). Now, we shall consider deformations ${\cal E}$ of the tangent bundle above, defined by cokernels \begin{displaymath} 0 \: \longrightarrow \: {\cal O}^{\oplus r} \: \stackrel{E}{\longrightarrow} \: \bigoplus_{i=1}^n {\cal O}\left(\vec{q}_i\right) \: \longrightarrow \: {\cal E} \: \longrightarrow \: 0 \end{displaymath} for more general maps $E$. Each element of $E$ will be a polynomial. We will distinguish two types of contributions to $E$: ``linear'' and ``nonlinear'' deformations. Linear deformations involve monomials containing a single homogeneous coordinate (as in all of the maps defining the tangent bundle). Nonlinear deformations involve monomials containing a product of more than one homogeneous coordinate. We will use the `linear sigma model' moduli space ${\cal M}$. As explained in {\it e.g.} \cite{dr}, for the case above, this is constructed by expanding each of the homogeneous coordinates on $X$ in a basis of zero modes on ${\bf P}^1$, and interpreting the coefficients in the expansion as homogeneous coordinates on the moduli space. If \begin{displaymath} X \: = \: {\bf C}^n // \left( {\bf C}^{\times} \right)^r, \end{displaymath} where $({\bf C}^{\times})^r$ acts on ${\bf C}^n$ with weights $\vec{q}_i$, then the linear sigma model moduli space of maps of degree $\vec{d}$ is \begin{displaymath} {\cal M} \: = \: \left( \oplus_{i=1}^n H^0\left( {\bf P}^1, {\cal O}(\vec{q}_i \cdot \vec{d}) \right) \right) // \left( {\bf C}^{\times} \right)^r. \end{displaymath} It can be shown that the LSM moduli space ${\cal M}$ is smooth whenever the original toric variety is. 
(The basic point is that if we describe the toric variety as $({\bf C}^n - E)/G$, then singularities are at fixed points of $G$. See \cite{dgks}[section 4.1] for further details.) The induced sheaves ${\cal F}$, ${\cal F}_1$ can be constructed in an analogous fashion \cite{ks}, by expanding worldsheet GLSM fermions in a basis of zero modes and interpreting the coefficients as line bundles over the moduli space. Specifically, following the methods of \cite{ks}, one finds for the present case that \begin{displaymath} 0 \: \longrightarrow \: {\cal O}^{\oplus r} \: \stackrel{E'^T}{\longrightarrow} \: \bigoplus_{i=1}^n H^0\left( {\bf P}^1, {\cal O}(\vec{q}_i \cdot \vec{d}) \right) \otimes_{ {\bf C} } {\cal O}(\vec{q}_i ) \: \longrightarrow \: {\cal F} \: \longrightarrow \: 0, \end{displaymath} \begin{displaymath} {\cal F}_1 \: \cong \: \bigoplus_{i=1}^n H^1\left( {\bf P}^1, {\cal O}(\vec{q}_i \cdot \vec{d} ) \right) \otimes_{ {\bf C} } {\cal O}(\vec{q}_i). \end{displaymath} The map $E'$ in the definition of ${\cal F}$ is induced from the corresponding map in the definition of ${\cal E}$. It is constructed by taking the map $E$ in ${\cal E}$ (a polynomial in homogeneous coordinates) and expanding in terms of homogeneous coordinates on the worldsheet ${\bf P}^1$. The components of the induced map $E'$ are then the coefficients of various monomials in the homogeneous coordinates on ${\bf P}^1$. To explain how the map is induced in more detail, let us consider the example of a Hirzebruch surface ${\bf F}_n$. To set notation, describe the Hirzebruch surface by the toric fan in Figure \ref{fig:FnFan}.
\begin{figure}[h] \begin{center} \begin{picture}(150,150) \Line(75,75)(150,75) \Line(75,0)(75,150) \Line(0,110)(75,75) \Text(145,80)[b]{$u$} \Text(80,145)[l]{$s$} \Text(80,5)[l]{$t$} \Text(5,110)[b]{$v$} \Text(5,100)[t]{$(-1,n)$} \end{picture} \end{center} \caption{The fan for ${\bf F}_n$} \label{fig:FnFan} \end{figure} From the fan, we read off the relations between toric divisors \begin{displaymath} D_u \: = \: D_v, \: \: \: D_t \: = \: D_s \: + \: n D_v \end{displaymath} and the Stanley-Reisner ideal \begin{displaymath} D_u \cdot D_v \: = \: 0 \: = \: D_s \cdot D_t. \end{displaymath} The homogeneous coordinates $u$, $v$, $s$, $t$ (corresponding to the four toric divisors) have the following weights under two ${\bf C}^{\times}$ actions: \begin{center} \begin{tabular}{cccc} $u$ & $v$ & $s$ & $t$ \\ \hline $1$ & $1$ & $0$ & $n$ \\ $0$ & $0$ & $1$ & $1$ \end{tabular} \end{center} We describe a deformation ${\cal E}^*$ of the cotangent bundle as the kernel \begin{displaymath} 0 \: \longrightarrow \: {\cal E}^* \: \longrightarrow \: {\mathcal O}(-1,0)^{\oplus 2} \oplus {\mathcal O}(0,-1) \oplus {\mathcal O}(-n,-1) \: \stackrel{E}{\longrightarrow} \: W \otimes {\mathcal O} \: \longrightarrow \: 0, \end{displaymath} where $W$ is a two-dimensional vector space, \begin{equation} \label{hirz-genl-map} E \: = \: \left[ \begin{array}{cc} A x & B x \\ \gamma_1 s & \gamma_2 s \\ \alpha_1 t + s f_1(u,v) & \alpha_2 t + s f_2(u,v) \end{array} \right], \end{equation} with \begin{displaymath} x \: \equiv \: \left[ \begin{array}{c} u \\ v \end{array} \right], \end{displaymath} $A$, $B$ constant $2 \times 2$ matrices, $\gamma_1$, $\gamma_2$, $\alpha_1$, $\alpha_2$ constants, and $f_{1,2}(u,v)$ homogeneous polynomials of degree $n$. (The matrices $A$, $B$, and $\gamma_{1,2}$ define linear deformations of the tangent bundle; the functions $sf_{1,2}(u,v)$ define nonlinear deformations.) To demonstrate the technology, consider for a moment maps of degree $(1,0)$.
In this case, we get the induced sheaf \begin{displaymath} 0 \: \longrightarrow \: {\cal F}^* \: \longrightarrow \: {\cal O}(-1,0)^4 \oplus {\cal O}(0,-1) \oplus {\cal O}(-n,-1)^{n+1} \: \stackrel{E'}{\longrightarrow} \: W \otimes {\cal O} \: \longrightarrow \: 0, \end{displaymath} where the map $E'$ is induced from the map $E$ by expanding fields in zero modes and picking off terms with the same homogeneous coordinates on ${\bf P}^1$. Let us work through that in detail to illustrate the result. In the degree $(1,0)$ sector, we expand \begin{eqnarray*} u & = & u_0 a \: + \: u_1 b, \\ v & = & v_0 a \: + \: v_1 b, \\ s & = & s_0, \\ t & = & t_0 a^n \: + \: t_1 a^{n-1} b \: + \: \cdots \: + \: t_n b^n, \end{eqnarray*} where $a$, $b$ are homogeneous coordinates on ${\bf P}^1$. Then, in the original map $E$, we replace each field $u$, $v$, $s$, $t$ by its expansion in zero modes above, and pick off terms with the same homogeneous coordinates. In this fashion, we find \begin{equation} \label{fullzeromodeexp} E' \: = \: \left[ \begin{array}{cc} \left[ \begin{array}{cc} A & 0 \\ 0 & A \end{array} \right] x' & \left[ \begin{array}{cc} B & 0 \\ 0 & B \end{array} \right] x' \\[1em] \gamma_1 s_0 & \gamma_2 s_0 \\[1em] \alpha_1 t_0 \: + \: s_0 f_{10} u_0^n \: + \: s_0 f_{11} u_0^{n-1} v_0 & \alpha_2 t_0 \: + \: s_0 f_{20} u_0^n \: + \: s_0 f_{21} u_0^{n-1} v_0 \\ \: + \: \cdots \: + \: s_0 f_{1n} v_0^n & \: + \: \cdots \: + \: s_0 f_{2n} v_0^n \\[1em] \alpha_1 t_1 \: + \: s_0 f_{10} (n u_0^{n-1} u_1) \: + \: s_0 f_{11} u_0^{n-1} v_1 & \alpha_2 t_1 \: + \: s_0 f_{20} (n u_0^{n-1} u_1) \: + \: s_0 f_{21} u_0^{n-1} v_1 \\ \: + \: (n-1) s_0 f_{11} u_0^{n-2} u_1 v_0 \: + \: \cdots & \: + \: (n-1) s_0 f_{21} u_0^{n-2} u_1 v_0 \: + \: \cdots \\[1em] \cdots & \cdots \end{array} \right], \end{equation} where \begin{displaymath} x' \: = \: [ u_0, v_0, u_1, v_1 ]^T \end{displaymath} and \begin{displaymath} f_i(u,v) \: = \: f_{i0} u^n \: + \: f_{i1} u^{n-1} v \: + \: \cdots \: + \: f_{in} 
v^n. \end{displaymath} In $E'$, the lines with $t_0$, for example, correspond to coefficients of $a^n$, the lines with $t_1$ correspond to coefficients of $a^{n-1} b$, and so forth. It can be shown in general that ${\cal F}$ is locally-free whenever ${\cal E}$ is locally-free \cite{dgks}. Briefly, ${\cal F}$ will be locally-free whenever $E'$ is surjective. At any point on the GLSM moduli space, pick a point on ${\bf P}^1$ at which the corresponding map is nondegenerate, then the image of $E'$ is the image of $E$, hence surjectivity of $E$ implies surjectivity of $E'$. \section{Example: projective space} \label{sect:oneproj} Let us begin with an extremely simple example, namely ${\bf P}^n$. We will consider what appears formally to be a deformation of the tangent bundle of ${\bf P}^n$, defined by ${\cal E}$ below: \begin{displaymath} 0 \: \longrightarrow \: {\cal E}^* \: \longrightarrow \: Z_0 \: \stackrel{E}{\longrightarrow} \: W \otimes {\cal O} \: \longrightarrow \: 0, \end{displaymath} where \begin{displaymath} Z_0 \: = \: {\cal O}(-1)^{\oplus n+1}, \: \: \: E \: = \: A x, \end{displaymath} where $W$ is a one-dimensional vector space, $x$ is a vector of homogeneous coordinates on ${\bf P}^n$, and $A$ is a constant $(n+1) \times (n+1)$ matrix. We say this appears to be a deformation; however, the tangent bundle of ${\bf P}^n$ admits no deformations, hence the matrix $A$ encodes, for nondegenerate $A$, mere reparametrizations. By contrast, for ${\bf P}^1\times {\bf P}^1$, which we shall study in the next section, generic deformations of the tangent bundle yield bundles which are not isomorphic to the original tangent bundle. Since we are simply giving a more complicated description of ${\bf P}^n$ with its tangent bundle, the quantum sheaf cohomology ring we compute should exactly match the ordinary quantum cohomology ring, which is what we shall find.
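Underlying both the Hirzebruch expansion above and the projective-space computations to follow is elementary zero-mode counting: in an instanton sector of degree $\vec{d}$, a homogeneous coordinate of multidegree $\vec{q}$ contributes $h^0({\bf P}^1, {\cal O}(\vec{q}\cdot\vec{d})) = \vec{q}\cdot\vec{d} + 1$ zero modes when $\vec{q}\cdot\vec{d} \geq 0$, and $h^1 = -\vec{q}\cdot\vec{d} - 1$ modes (entering ${\cal F}_1$) when $\vec{q}\cdot\vec{d} \leq -2$. The sketch below is our own bookkeeping, purely illustrative:

```python
# Zero-mode tallies behind the LSM moduli-space constructions.

def zero_modes(weights, degree):
    """Return total (h0, h1) over all homogeneous coordinates."""
    h0 = h1 = 0
    for q in weights:
        k = sum(qi * di for qi, di in zip(q, degree))
        h0 += max(k + 1, 0)   # h^0(P^1, O(k)) = k+1 for k >= 0
        h1 += max(-k - 1, 0)  # h^1(P^1, O(k)) = -k-1 for k <= -2
    return h0, h1

# Hirzebruch surface F_2 (u, v, s, t with the weights tabulated above),
# maps of degree (1,0): u, v give 2 modes each, s gives 1, t gives 3,
# matching the rank-8 bundle in the degree-(1,0) induced-sheaf sequence.
f2 = [(1, 0), (1, 0), (0, 1), (2, 1)]
h0, h1 = zero_modes(f2, (1, 0))
```

For ${\bf P}^n$ in degree $d$, the $n+1$ weight-one coordinates give $(n+1)(d+1)$ modes, so $\dim {\cal M} = (n+1)(d+1) - 1$ after the single ${\bf C}^{\times}$ quotient.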
This example will serve as a useful computational exercise, but we will not start generating new results until the next section. First, let us consider the classical cohomology ring. A nonzero correlation function arises from correlators of total degree $n$, equal to the dimension of ${\bf P}^n$. Classical correlation functions are then a map \begin{displaymath} {\rm Sym}^n W \: = \: H^0\left( {\rm Sym}^n W \otimes {\cal O} \right) \: \longrightarrow \: H^n\left(\Lambda^n {\cal E}^* \right). \end{displaymath} To determine the map, we use the generalized Koszul complex associated to $\Lambda^n {\cal E}^*$: \begin{displaymath} 0 \: \longrightarrow \: \Lambda^n {\cal E}^* \: \longrightarrow \: \Lambda^n Z_0 \: \longrightarrow \: \Lambda^{n-1} Z_0 \otimes W \: \longrightarrow \: \cdots \: \longrightarrow \: {\rm Sym}^n W \otimes {\cal O} \: \longrightarrow \: 0, \end{displaymath} which factorizes into a series of maps \begin{equation} \label{pn:1} 0 \: \longrightarrow \: \Lambda^n {\cal E}^* \: \longrightarrow \: \Lambda^n Z_0 \: \longrightarrow \: S_{n-1} \: \longrightarrow \: 0, \end{equation} \begin{equation} \label{pn:2} 0 \: \longrightarrow \: S_i \: \longrightarrow \: \Lambda^i Z_0 \otimes {\rm Sym}^{n-i} W \: \longrightarrow \: S_{i-1} \: \longrightarrow \: 0, \end{equation} \begin{equation} \label{pn:3} 0 \: \longrightarrow \: S_1 \: \longrightarrow \: Z_0 \otimes {\rm Sym}^{n-1} W \: \longrightarrow \: {\rm Sym}^n W \otimes {\cal O} \: \longrightarrow \: 0. \end{equation} Now, $H^j(\Lambda^i Z_0)$ will vanish unless $j=n$, $i=n+1$ (or $i=j=0$, but we shall suppress that case as it will not be pertinent for our computations).
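The vanishing statement just quoted follows from the cohomology of line bundles on ${\bf P}^n$, where only $H^0$ and $H^n$ can be nonzero: $\Lambda^i Z_0 = {\cal O}(-i)^{\oplus \binom{n+1}{i}}$, and ${\cal O}(-i)$ has no cohomology at all for $1 \leq i \leq n$. A minimal sketch of this standard dimension count (our own code, not the text's):

```python
from math import comb

# dim H^j(P^n, O(k)): only H^0 (k >= 0) and H^n (k <= -(n+1)) survive.
def h(j, n, k):
    if j == 0 and k >= 0:
        return comb(n + k, n)
    if j == n and k <= -(n + 1):
        return comb(-k - 1, n)
    return 0

# Lambda^i Z_0 = O(-i)^C(n+1, i): for n = 3, every group with
# 0 < i <= n vanishes, while i = n+1 leaves a single H^n.
n = 3
inter = [h(j, n, -i) for i in range(1, n + 1) for j in range(n + 1)]
top = comb(n + 1, n + 1) * h(n, n, -(n + 1))   # one-dimensional
```

This is exactly why the factorized Koszul complex yields a chain of isomorphisms until $\Lambda^{n+1} Z_0$ appears.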
Thus, from~(\ref{pn:1}) we find \begin{displaymath} H^n\left( \Lambda^n {\cal E}^* \right) \: \stackrel{\sim}{\longrightarrow} \: H^{n-1}(S_{n-1}), \end{displaymath} from~(\ref{pn:2}) we find \begin{displaymath} H^{i-1}(S_{i-1}) \: \stackrel{\sim}{\longrightarrow} \: H^i(S_i) \end{displaymath} for $2 \leq i \leq n-1$, and from~(\ref{pn:3}) we find \begin{displaymath} H^0({\rm Sym}^n W \otimes {\cal O}) \: \stackrel{\sim}{\longrightarrow} \: H^1(S_1), \end{displaymath} which implies that the map \begin{displaymath} {\rm Sym}^n W \: \stackrel{\sim}{\longrightarrow} \: H^n(\Lambda^n {\cal E}^*) \end{displaymath} is an isomorphism, as indicated. (As a consistency check, note that since $W$ is one-dimensional, ${\rm Sym}^n W$ is also one-dimensional.) Now, in principle the factorization of the generalized Koszul complex will stop giving isomorphisms if ever we need to compute $H^n(\Lambda^{n+1} Z_0)$. This will happen if we consider correlation functions with correlators of degree greater than $n$. For example, if we have degree $n+1$ correlators, then the correlation function computes a map \begin{displaymath} {\rm Sym}^{n+1} W \: \longrightarrow \: H^{n+1}(\Lambda^{n+1} {\cal E}^*) \: = \: 0. 
\end{displaymath} In this (trivial) case, we have the generalized Koszul complex \begin{displaymath} 0 \: \longrightarrow \: \Lambda^{n+1} {\cal E}^* (=0) \: \longrightarrow \: \Lambda^{n+1} Z_0 \: \longrightarrow \: \Lambda^n Z_0 \otimes W \: \longrightarrow \: \cdots \: \longrightarrow \: {\rm Sym}^{n+1} W \otimes {\cal O} \: \longrightarrow \: 0, \end{displaymath} which factorizes as \begin{equation} \label{pna:1} 0 \: \longrightarrow \: \Lambda^{n+1} {\cal E}^* (=0) \: \longrightarrow \: \Lambda^{n+1} Z_0 \: \longrightarrow \: S_n \: \longrightarrow \: 0, \end{equation} \begin{equation} \label{pna:2} 0 \: \longrightarrow \: S_i \: \longrightarrow \: \Lambda^i Z_0 \otimes {\rm Sym}^{n+1-i} W \: \longrightarrow \: S_{i-1} \: \longrightarrow \: 0, \end{equation} \begin{equation} \label{pna:3} 0 \: \longrightarrow \: S_1 \: \longrightarrow \: Z_0 \otimes {\rm Sym}^n W \: \longrightarrow \: {\rm Sym}^{n+1} W \otimes {\cal O} \: \longrightarrow \: 0. \end{equation} As before, from~(\ref{pna:3}) we have \begin{displaymath} H^0\left( {\rm Sym}^{n+1} W \otimes {\cal O} \right) \: \stackrel{\sim}{\longrightarrow} \: H^1(S_1) \end{displaymath} and from~(\ref{pna:2}) we have \begin{displaymath} H^{i-1}(S_{i-1}) \: \stackrel{\sim}{\longrightarrow} \: H^i(S_i) \end{displaymath} for $2 \leq i \leq n$. Finally from~(\ref{pna:1}) we have \begin{displaymath} H^n(S_n) \: \stackrel{\sim}{\longrightarrow} \: H^n(\Lambda^{n+1} Z_0). \end{displaymath} Thus, the original correlation function necessarily vanishes: \begin{displaymath} {\rm Sym}^{n+1} W \: \longrightarrow \: H^{n+1}(\Lambda^{n+1} {\cal E}^*) = 0 \end{displaymath} but \begin{displaymath} {\rm Sym}^{n+1} W \: \stackrel{\sim}{\longrightarrow} \: H^n(\Lambda^{n+1} Z_0).
\end{displaymath} From \cite{dgks}[section 3.3], the group $H^n(\Lambda^{n+1} Z_0)$ is a one-dimensional vector space generated by \begin{displaymath} \det (A \psi) \: = \: \left( \det A \right) \psi^{n+1}, \end{displaymath} where $\psi$ is a basis element for $W$. Thus, we find the classical sheaf cohomology ring is of the form \begin{displaymath} {\bf C}[\psi] / \left( \det(A \psi) \right) \: \cong \: {\bf C}[\psi] / \left( \psi^{n+1} \right). \end{displaymath} Now, let us turn to the quantum sheaf cohomology ring. We shall compute this by first computing the (classical) sheaf cohomology ring in any sector of fixed instanton degree, then relating sectors of different instanton number. In a sector of instanton number $d$, the linear sigma model moduli space of ${\bf P}^n$ is easily computed to be ${\bf P}^{(n+1)(d+1) -1}$. The induced bundle over the LSM moduli space is ${\cal F}$, where \begin{displaymath} 0 \: \longrightarrow \: {\cal F}^* \: \longrightarrow \: Z \: \stackrel{E'}{\longrightarrow} \: W \otimes {\cal O} \: \longrightarrow \: 0, \end{displaymath} where \begin{displaymath} Z \: = \: {\cal O}(-1)^{\oplus (n+1)(d+1)}, \end{displaymath} $W$ is the same one-dimensional vector space as before, and \begin{displaymath} E' \: = \: \left[ \begin{array}{c} A x_0 \\ A x_1 \\ \vdots \\ A x_d \end{array} \right], \end{displaymath} where $x_i$ is an $(n+1)$-element vector of coefficients of fixed degree in the expansion of homogeneous coordinates of ${\bf P}^n$ in zero modes. We can now re-use the classical results above. For fixed instanton degree $d$, the sheaf cohomology ring is \begin{displaymath} {\bf C}[\psi] / \left( ( \det (A \psi) )^{d+1} \right) \: \cong \: {\bf C}[\psi] / \left( \psi^{(n+1)(d+1)} \right).
\end{displaymath} Therefore, to preserve kernels, any relation between correlation functions in different sectors of fixed degree must be generated by \begin{displaymath} \langle {\cal O} \rangle_{0} \: \propto \: \langle {\cal O} \left( \det( A \psi) \right)^{d} \rangle_d. \end{displaymath} (This ensures that if ${\cal O}$ is an element of the quotiented ideal in the zero-degree sector, so that $\langle {\cal O} \rangle_0$ vanishes, then its image ${\cal O} ( \det (A \psi) )^d$ will be an element of the quotiented ideal in the sector of degree $d$, so that the corresponding correlation function also vanishes.) The relation above then implies that \begin{displaymath} \det (A \psi) \: = \: q \end{displaymath} for some constant $q$. (For example, this follows immediately for $d=1$; the relations at higher degrees must then just be powers.) Since $\det(A \psi) = ( \det A ) \psi^{n+1}$, this is equivalent to the relation \begin{displaymath} \psi^{n+1} \: = \: q', \end{displaymath} which is the standard quantum cohomology relation for a projective space ${\bf P}^n$. In the next sections, our tangent bundle deformations will, in general, yield bundles that are not isomorphic to the tangent bundle, so the quantum sheaf cohomology relations will be nontrivial. \section{Example: product of projective spaces} \label{sect:ex:proj} Mathematical computations of quantum sheaf cohomology have previously \cite{ks,gk,s2,g1} relied on brute-force Cech cohomology representations. One of the advances of this paper and \cite{dgks} is the use of purely analytic methods to derive quantum sheaf cohomology. We will illustrate these advances through an explicit computation for general deformations of the tangent bundle of ${\bf P}^1 \times {\bf P}^1$. In particular, special deformations of the tangent bundle of ${\bf P}^1 \times {\bf P}^1$ have previously been computed with brute-force Cech techniques, so this seems an appropriate example to generalize here.
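Before developing the ${\bf P}^1 \times {\bf P}^1$ example, it is worth noting that the ${\bf P}^n$ relation $\psi^{n+1} = q'$ derived above already reduces the evaluation of any correlator to bookkeeping. A minimal sketch (our own illustration, not from the text):

```python
# Correlators on P^n from the quantum relation psi^(n+1) = q':
# write m = (n+1) d + r; the class psi^m reduces to q'^d psi^r, and
# only r = n pairs with the top cohomology of the degree-d sector.

def correlator(m, n, qp):
    """<psi^m> on P^n subject to psi^(n+1) = qp."""
    d, r = divmod(m, n + 1)
    return qp ** d if r == n else 0
```

For instance, the classical sector gives $\langle \psi^n \rangle = 1$, while the one-instanton sector gives $\langle \psi^{2n+1} \rangle = q'$.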
We will begin by examining classical cup products for ${\bf P}^1 \times {\bf P}^1$, then classical cup products for ${\bf P}^n \times {\bf P}^m$, and then we will describe the quantum sheaf cohomology ring for ${\bf P}^1 \times {\bf P}^1$, which will ultimately be determined by the classical computations on products of more general projective spaces. \subsection{Classical cup products on ${\bf P}^1 \times {\bf P}^1$} \label{classical-p12-sect} In this section we will discuss how to compute classical cup products in the sheaf cohomology, without having to work through a Cech cohomology computation. Define $V = \Gamma( {\cal O}(1,0) )$, $\tilde{V} = \Gamma( {\cal O}(0,1) )$, $W = {\bf C}^2$. Define \begin{displaymath} Z_0 \: \equiv \: \left( V \otimes {\cal O}(-1,0) \right) \oplus \left( \tilde{V} \otimes {\cal O}(0,-1) \right). \end{displaymath} Then, the cotangent bundle deformation ${\cal E}^*$ is the kernel \begin{displaymath} 0 \: \longrightarrow \: {\cal E}^* \: \longrightarrow \: Z_0 \: \stackrel{E}{\longrightarrow} \: W \otimes {\cal O} \: \longrightarrow \: 0, \end{displaymath} where \begin{displaymath} E \: = \: \left[ \begin{array}{cc} Ax & Bx \\ C\tilde{x} & D\tilde{x} \end{array} \right], \end{displaymath} where $x$, $\tilde{x}$ are vectors of homogeneous coordinates on each ${\bf P}^1$ factor. First, let us compute the classical sheaf cohomology ring. Classical correlation functions are a map\footnote{ More formally, we could think of classical correlation functions and the map above as an element of \begin{displaymath} {\rm Ext}^2\left( {\rm Sym}^2 \left( W \otimes {\cal O} \right), \Lambda^2 {\cal E}^* \right), \end{displaymath} which corresponds to the exact sequence~(\ref{longex-l2e*}). 
Breaking that long sequence into two short exact sequences along $Q$ corresponds to writing the Ext element above as a product of elements of \begin{displaymath} {\rm Ext}^1\left( Q, \Lambda^2 {\cal E}^* \right), \: \: \: {\rm Ext}^1\left( {\rm Sym}^2 \left( W \otimes {\cal O} \right), Q \right), \end{displaymath} which correspond to the short exact sequences~(\ref{first-break}), (\ref{second-break}). } \begin{displaymath} {\rm Sym}^2 W \: = \: H^0\left( {\rm Sym}^2 W \otimes {\cal O} \right) \: \longrightarrow \: H^2\left( \Lambda^2 {\cal E}^* \right) \end{displaymath} and we will determine the ring structure by computing the kernel of that map. We will use the generalized Koszul complex of $\Lambda^2 {\cal E}^*$: \begin{equation} \label{longex-l2e*} 0 \: \longrightarrow \: \Lambda^2 {\cal E}^* \: \longrightarrow \: \Lambda^2 Z_0 \: \longrightarrow \: Z_0 \otimes \left( W \otimes {\cal O} \right) \: \longrightarrow \: {\rm Sym}^2 \left( W \otimes {\cal O} \right) \: \longrightarrow \: 0. \end{equation} It remains to compute the cup product above. First, split the long exact sequence~(\ref{longex-l2e*}) into a pair of short exact sequences: \begin{equation} \label{first-break} 0 \: \longrightarrow \: \Lambda^2 {\cal E}^* \: \longrightarrow \: \Lambda^2 Z_0 \: \longrightarrow \: Q \: \longrightarrow \: 0, \end{equation} \begin{equation} \label{second-break} 0 \: \longrightarrow \: Q \: \longrightarrow \: Z_0 \otimes \left( W \otimes {\cal O} \right) \: \longrightarrow \: {\rm Sym}^2\left( W \otimes {\cal O} \right) \: \longrightarrow \: 0, \end{equation} which define $Q$. Next, we shall evaluate the kernel of that cup product map, which will give us the classical sheaf cohomology ring structure. The short exact sequence~(\ref{second-break}) induces a map \begin{displaymath} \delta_1: \: H^0\left( {\rm Sym}^2 W \otimes {\cal O} \right) \: \longrightarrow \: H^1\left( Q \right) \end{displaymath} (from the associated long exact sequence).
Moreover, because \begin{displaymath} H^*\left(Z_0 \otimes W \right) \: = \: 0 \end{displaymath} the map $\delta_1$ above is an isomorphism. The other short exact sequence, (\ref{first-break}), induces \begin{displaymath} 0 \: \longrightarrow \: H^1\left( \Lambda^2 Z_0 \right) \: \longrightarrow \: H^1(Q) \: \stackrel{\delta_2}{\longrightarrow} \: H^2\left( \Lambda^2 {\cal E}^* \right) \: \longrightarrow \: 0, \end{displaymath} using the fact that \begin{displaymath} H^1\left(\Lambda^2 {\cal E}^* = K_{ {\bf P}^1 \times {\bf P}^1 } \right) \: = \: 0, \: \: \: H^2\left( \Lambda^2 Z_0 \right) \: = \: 0. \end{displaymath} The classical cup product is then the composition \begin{equation} \label{classical-cup-product-map} H^0\left( {\rm Sym}^2 W \otimes {\cal O} \right) \: \stackrel{\delta_1}{\longrightarrow} \: H^1(Q) \: \stackrel{\delta_2}{\longrightarrow} \: H^2\left( \Lambda^2 {\cal E}^* \right). \end{equation} We have seen that $\delta_1$ is an isomorphism, but $\delta_2$ has a nontrivial kernel. Specifically, since \begin{displaymath} \Lambda^2 Z_0 \: = \: \left( \Lambda^2 V \otimes {\cal O}(-2,0) \right) \oplus \left( \Lambda^2 \tilde{V} \otimes {\cal O}(0,-2) \right) \oplus \left( V \otimes \tilde{V} \otimes {\cal O}(-1,-1) \right), \end{displaymath} we see that the kernel of the classical cup product is two-dimensional: \begin{displaymath} H^1\left( \Lambda^2 Z_0 \right) \: = \: \Lambda^2 V \oplus \Lambda^2 \tilde{V}. \end{displaymath} In fact, it can be shown \cite{dgks}[section 3.3] that the kernel of the cup product map~(\ref{classical-cup-product-map}) is defined by the relations \begin{eqnarray} \det\left( \psi A \: + \: \tilde{\psi}B \right) & = & 0, \label{classical-p12-a}\\ \det\left( \psi C \: + \: \tilde{\psi}D \right) & = & 0. \label{classical-p12-b} \end{eqnarray} These are the classical sheaf cohomology ring relations. Let us check that this correctly reproduces the results of \cite{gk}. 
In that paper, $A = D = I$, \begin{displaymath} B \: = \: \left[ \begin{array}{cc} \epsilon_1 & \epsilon_2 \\ \epsilon_3 & 0 \end{array} \right], \: \: \: C \: = \: \left[ \begin{array}{cc} \gamma_1 & \gamma_2 \\ \gamma_3 & 0 \end{array} \right]. \end{displaymath} There, the classical cohomology ring is given by \begin{eqnarray*} \psi^2 \: + \: \epsilon_1 \psi \tilde{\psi} \: - \: \epsilon_2 \epsilon_3 \tilde{\psi}^2 & = & 0, \\ \tilde{\psi}^2 \: + \: \gamma_1 \psi \tilde{\psi} \: - \: \gamma_2 \gamma_3 \psi^2 & = & 0. \end{eqnarray*} Applying the general methods above to the matrices $A$, $B$, $C$, $D$ here, we find that \begin{eqnarray*} \det\left( \psi A \: + \: \tilde{\psi}B \right) & = & \psi^2 \: + \: \epsilon_1 \psi \tilde{\psi} \: - \: \epsilon_2 \epsilon_3 \tilde{\psi}^2, \\ \det\left( \psi C \: + \: \tilde{\psi}D \right) & = & \tilde{\psi}^2 \: + \: \gamma_1 \psi \tilde{\psi} \: - \: \gamma_2 \gamma_3 \psi^2, \end{eqnarray*} and so we recover the results of \cite{gk} for the classical cohomology ring as a special case. Similarly, it is straightforward to check that this also agrees with the general results of \cite{mcom}, as we shall review later in section~\ref{sect:genl}. \subsection{Classical cup products on ${\bf P}^n \times {\bf P}^m$} Let us now quickly repeat the analysis of the previous subsection for a more general product of projective spaces, ${\bf P}^n \times {\bf P}^m$. In the next section, we will compute the quantum sheaf cohomology ring for ${\bf P}^1 \times {\bf P}^1$, which will be determined by classical computations on ${\bf P}^n \times {\bf P}^m$. As before, define $V = \Gamma({\cal O}(1,0))$, $\tilde{V} = \Gamma({\cal O}(0,1))$, $W = {\bf C}^2$. Define \begin{displaymath} Z \: = \: \left( V \otimes {\cal O}(-1,0) \right) \oplus \left( \tilde{V} \otimes {\cal O}(0,-1) \right). 
\end{displaymath} Then, as before, the cotangent bundle deformation ${\cal E}^*$ is the kernel \begin{displaymath} 0 \: \longrightarrow \: {\cal E}^* \: \longrightarrow \: Z \: \stackrel{E}{\longrightarrow} \: W \otimes {\cal O} \: \longrightarrow \: 0, \end{displaymath} where \begin{displaymath} E \: = \: \left[ \begin{array}{cc} \tilde{A} x & \tilde{B} x \\ \tilde{C} \tilde{x} & \tilde{D} \tilde{x} \end{array} \right], \end{displaymath} where $x$, $\tilde{x}$ are vectors of homogeneous coordinates on ${\bf P}^n$, ${\bf P}^m$, respectively, $\tilde{A}$, $\tilde{B}$ are $(n+1)\times(n+1)$ matrices, and $\tilde{C}$, $\tilde{D}$ are $(m+1)\times(m+1)$ matrices. As before, we think of classical correlation functions in this theory as maps \begin{displaymath} {\rm Sym}^{n+m} W \: \longrightarrow \: H^{n+m}\left( \Lambda^{\rm top} {\cal E}^* \right) \end{displaymath} and we compute the kernel, using the generalized Koszul complex associated to $\Lambda^{\rm top} {\cal E}^*$: \begin{displaymath} 0 \: \longrightarrow \: \Lambda^{n+m} {\cal E}^* \: \longrightarrow \: \Lambda^{n+m} Z \: \longrightarrow \: \Lambda^{n+m-1} Z \otimes W \: \longrightarrow \cdots \longrightarrow \: {\rm Sym}^{n+m} W \otimes {\cal O} \: \longrightarrow \: 0. \end{displaymath} To do computations, we split this into short exact sequences: \begin{equation} \label{pnpma} 0 \: \longrightarrow \: \Lambda^{n+m}{\cal E}^* \: \longrightarrow \: \Lambda^{n+m}Z \: \longrightarrow \: S_{n+m-1} \: \longrightarrow \: 0, \end{equation} \begin{equation} \label{pnpmb} 0 \: \longrightarrow \: S_i \: \longrightarrow \: \Lambda^i Z \otimes {\rm Sym}^{n+m-i} W \: \longrightarrow \: S_{i-1} \: \longrightarrow \: 0, \end{equation} \begin{equation} \label{pnpmc} 0 \: \longrightarrow \: S_1 \: \longrightarrow \: Z \otimes {\rm Sym}^{n+m-1} W \: \longrightarrow \: {\rm Sym}^{n+m} W \otimes {\cal O} \: \longrightarrow \: 0. 
\end{equation} Now, $H^j(\Lambda^i Z)$ will vanish unless $j=i-1=n,m$ (see for example \cite{dgks}) (or, alternatively, if $i=j=0$, but we shall suppress that case as it will not be pertinent for our computations). Thus, from~(\ref{pnpma}), we find \begin{displaymath} H^{n+m-1}(S_{n+m-1}) \: \stackrel{\sim}{\longrightarrow} \: H^{n+m}(\Lambda^{n+m}{\cal E}^*), \end{displaymath} from~(\ref{pnpmc}) we find \begin{displaymath} H^0({\rm Sym}^{n+m} W \otimes {\cal O}) \: \stackrel{\sim}{\longrightarrow} \: H^1(S_1), \end{displaymath} and from~(\ref{pnpmb}) we find a surjective map \begin{displaymath} H^{i-1}(S_{i-1}) \: \longrightarrow \: H^i(S_i) \end{displaymath} for $2 \leq i \leq n+m-1$. If $i-1 \neq n,m$, then the surjective map above is an isomorphism. If $i-1$ is either $n$ or $m$, then it has a nontrivial kernel, given by $H^{i-1}(\Lambda^i Z \otimes {\rm Sym}^{n+m-i}W)$. If $i-1=n\neq m$, then $H^{i-1}(\Lambda^i Z) \cong \Lambda^{\rm top}V$, and it can be shown \cite{dgks}[section 3.3] that this is generated by \begin{displaymath} \det\left( \psi \tilde{A} \: + \: \tilde{\psi} \tilde{B} \right), \end{displaymath} where $\{ \psi, \tilde{\psi} \}$ is a basis for $W$. Thus, the kernel of $H^{n}(S_{n}) \rightarrow H^{n+1}(S_{n+1})$ is generated by \begin{displaymath} \det\left( \psi \tilde{A} \: + \: \tilde{\psi} \tilde{B} \right). \end{displaymath} The case $i-1=m\neq n$ is nearly identical, so we omit its description. If $i-1=n=m$, the result is very similar. In this case, \begin{displaymath} H^{n}(\Lambda^{n+1} Z) \: = \: \Lambda^{n+1} V \oplus \Lambda^{n+1} \tilde{V} \end{displaymath} and the kernel of $H^{n}(S_{n}) \rightarrow H^{n+1}(S_{n+1})$ is generated by \begin{displaymath} \det\left( \psi \tilde{A} \: + \: \tilde{\psi} \tilde{B} \right), \: \: \: \det\left( \psi \tilde{C} \: + \: \tilde{\psi} \tilde{D} \right). 
\end{displaymath} Putting this together, we find that the classical sheaf cohomology ring of ${\bf P}^n \times {\bf P}^m$ with bundle ${\cal E}$ is generated by $\psi$, $\tilde{\psi}$ with relations \begin{displaymath} \det\left( \psi \tilde{A} \: + \: \tilde{\psi} \tilde{B} \right) \: = \: 0 \: = \: \det\left( \psi \tilde{C} \: + \: \tilde{\psi} \tilde{D} \right). \end{displaymath} \subsection{Quantum sheaf cohomology ring on ${\bf P}^1 \times {\bf P}^1$} \label{sect:qsc-p1p1} Define \begin{eqnarray*} Q & \equiv & \det( \psi A \: + \: \tilde{\psi} B ), \\ \tilde{Q} & \equiv & \det( \psi C \: + \: \tilde{\psi} D ). \end{eqnarray*} In this section, we will show that the quantum sheaf cohomology ring of ${\bf P}^1 \times {\bf P}^1$, with bundle ${\cal E}$ defined earlier, is given by \begin{displaymath} {\bf C}[\psi, \tilde{\psi} ] / (Q - q, \tilde{Q} - \tilde{q} ). \end{displaymath} First, we shall derive the form of the cohomology ring in each fixed instanton sector; then we shall find relations between the sectors. We begin by deriving the ring for fixed instanton degree $(d,e)$. As outlined earlier, the linear sigma model moduli space is computed to be \begin{displaymath} {\cal M} \: = \: {\bf P}^{2d+1} \times {\bf P}^{2e+1}. \end{displaymath} Define $Z$ to be the following sheaf on ${\cal M}$: \begin{displaymath} Z \: \equiv \: \left( {\rm Sym}^d U \otimes V \otimes {\cal O}(-1,0) \right) \oplus \left( {\rm Sym}^e U \otimes \tilde{V} \otimes {\cal O}(0,-1) \right).
\end{displaymath} The induced sheaf ${\cal F}^*$ is the kernel \begin{displaymath} 0 \: \longrightarrow \: {\cal F}^* \: \longrightarrow \: Z \: \longrightarrow \: W \otimes {\cal O}_{\cal M} \: \longrightarrow \: 0, \end{displaymath} where \begin{displaymath} U \: = \: \Gamma({\bf P}^1, {\cal O}(1)), \: \: \: V \: = \: \Gamma({\bf P}^1 \times {\bf P}^1, {\cal O}(1,0)), \: \: \: \tilde{V} \: = \: \Gamma({\bf P}^1 \times {\bf P}^1, {\cal O}(0,1)), \: \: \: W \: = \: {\bf C}^2, \end{displaymath} which is naturally induced from the short exact sequence defining ${\cal E}^*$, as discussed in section~\ref{sect:genlprocedure}. The desired correlation function in sector $(d,e)$ can be computed as a classical sheaf cohomology cup product on ${\cal M} = {\bf P}^{2d+1} \times {\bf P}^{2e+1}$. As we have already computed classical sheaf cohomology on a product of projective spaces, we can apply our results from the previous subsection. The induced maps are such that, for example, \begin{displaymath} \tilde{A} \: = \: {\rm diag}(A, A, \cdots, A) \end{displaymath} ($d+1$ copies), hence the classical sheaf cohomology ring relations are \begin{displaymath} \det\left( \psi \tilde{A} \: + \: \tilde{\psi} \tilde{B} \right) \: = \: \det\left( \psi A \: + \: \tilde{\psi} B \right)^{d+1} \: = \: 0, \end{displaymath} \begin{displaymath} \det\left( \psi \tilde{C} \: + \: \tilde{\psi} \tilde{D} \right) \: = \: \det\left( \psi C \: + \: \tilde{\psi} D \right)^{e+1} \: = \: 0, \end{displaymath} and so we immediately find that for fixed degree $(d,e)$, the sheaf cohomology groups $$H^*\left({\cal M}, \Lambda^* {\cal F}^*\right)$$ live in the polynomial ring \begin{displaymath} {\rm Sym}^{\cdot} W \, / \, (Q^{d+1}, \tilde{Q}^{e+1}). 
\end{displaymath} For example, for degree $(d,e) = (1,0)$, the kernel is spanned by the four polynomials \begin{displaymath} Q^2, \tilde{Q} \psi^2, \tilde{Q} \psi \tilde{\psi}, \tilde{Q} \tilde{\psi}^2 \end{displaymath} and it is straightforward to check that this is a correct property of the correlation functions given in {\it e.g.} \cite{ks}[equ'ns (21)-(30)]. It remains to derive the operator product ring, {\it i.e.} the quantum sheaf cohomology ring, utilizing the structure derived above. As there are no four-fermi contributions (${\cal F}_1 = {\rm Obs} = 0$), we expect from the existence of operator products that there should be relations between correlation functions in different instanton sectors, of the form \begin{equation} \label{rreln} \langle {\cal O} \rangle_{d,e} \: \propto \: \langle {\cal O} R_{d,e,d',e'} \rangle_{d',e'} \end{equation} for all ${\cal O}$ and some fixed operator $R_{d,e,d',e'}$. For example, \begin{displaymath} \langle {\cal O} \rangle_{0,0} \: \propto \: \langle {\cal O} Q \rangle_{1,0}, \end{displaymath} which suggests $Q = q$ for some proportionality constant $q$, and \begin{displaymath} \langle {\cal O} \rangle_{0,0} \: \propto \: \langle {\cal O} \tilde{Q} \rangle_{0,1} \end{displaymath} which suggests $\tilde{Q} = \tilde{q}$ for some proportionality constant $\tilde{q}$. Equation~(\ref{rreln}) is merely the generalization to arbitrary instanton degrees. Because of compatibility with the kernels above ({\it i.e.} maps must send kernels to (subsets of) kernels, and must map top-forms to top-forms), the relations~(\ref{rreln}) should be of the form \begin{displaymath} \langle {\cal O} \rangle_{d,e} \: \propto \: \langle {\cal O} Q^{d'-d} \tilde{Q}^{e'-e} \rangle_{d',e'} \end{displaymath} hence \begin{displaymath} \langle {\cal O} \rangle_{d,e} \: = \: A_{d,e,d',e'} \langle {\cal O} Q^{d'-d} \tilde{Q}^{e'-e} \rangle_{d',e'} \end{displaymath} for some constant $A_{d,e,d',e'}$.
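The $(d+1)$-st and $(e+1)$-st powers in the fixed-degree kernels described above rest on the block-diagonal determinant identity $\det(\psi\,{\rm diag}(A,\ldots,A) + \tilde{\psi}\,{\rm diag}(B,\ldots,B)) = \det(\psi A + \tilde{\psi} B)^{d+1}$. A short symbolic check (using sympy; illustrative only, not part of the original text) for $d = 1$ with generic $2\times 2$ matrices:

```python
import sympy as sp

psi, tpsi = sp.symbols('psi tpsi')  # tpsi plays the role of psi-tilde

def generic(name):
    # generic symbolic 2x2 matrix
    return sp.Matrix(2, 2, lambda i, j: sp.Symbol(f'{name}{i}{j}'))

A, B = generic('a'), generic('b')
M = psi * A + tpsi * B

# diag(M, M) = psi*diag(A, A) + tpsi*diag(B, B), the induced map
# for instanton degree d = 1; its determinant is det(M)**2.
Mbig = sp.diag(M, M)
assert sp.expand(Mbig.det() - M.det()**2) == 0
```

The same block-diagonal structure applies verbatim to the second factor, giving the $(e+1)$-st power of $\det(\psi C + \tilde{\psi} D)$.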
We assume that the constant $A_{d,e,d',e'}$ has the form \begin{displaymath} A_{d,e,d',e'} \: = \: q^{d'-d} \tilde{q}^{e'-e} \end{displaymath} for some constants $q$, $\tilde{q}$. Note that mathematically this is an assumption, not a derivation; we justify this assumption by the fact that this is the standard form of nonperturbative corrections to operator products, and so we recover standard physics results. Thus, \begin{displaymath} \langle {\cal O} \rangle_{d,e} \: = \: q^{d'-d} \tilde{q}^{e'-e} \langle {\cal O} Q^{d'-d} \tilde{Q}^{e'-e} \rangle_{d',e'}, \end{displaymath} and in particular, \begin{displaymath} \langle \psi \tilde{\psi} Q^d \tilde{Q}^e \rangle_{d,e} \: = \: q^d \tilde{q}^e \langle \psi \tilde{\psi} \rangle_{0,0}, \end{displaymath} from which we derive the quantum sheaf cohomology relations \begin{displaymath} Q \: \sim \: q, \: \: \: \tilde{Q} \: \sim \: \tilde{q}, \end{displaymath} so that the quantum sheaf cohomology ring is given by \begin{displaymath} {\bf C}[\psi, \tilde{\psi} ] / (Q - q, \tilde{Q} - \tilde{q} ). \end{displaymath} This matches the prediction of \cite{mcom}, and also specializes to the results in \cite{ks,gk}. As a consistency check, let us quickly observe how bundle isomorphisms preserve the ring above. Let $R \in GL(W)$, $P_1 \in GL(V)$, $P_2 \in GL(\tilde{V})$. Under the action of this $GL(2)^3$, \begin{displaymath} E \: \mapsto \: \left[ \begin{array}{cc} P_1 A x & P_1 B x \\ P_2 C \tilde{x} & P_2 D \tilde{x} \end{array} \right] R. 
\end{displaymath} As $R$ also acts on $\psi$, $\tilde{\psi}$, its action falls out of the ring relations, and we are left with \begin{eqnarray*} \det\left( A \psi \: + \: B \tilde{\psi} \right) & \mapsto & \det \left( P_1 \left( A \psi \: + \: B \tilde{\psi} \right) \right) \: = \: \det P_1 \det\left( A \psi \: + \: B \tilde{\psi} \right), \\ \det \left( C \psi \: + \: D \tilde{\psi} \right) & \mapsto & \det \left( P_2 \left( C \psi \: + \: D \tilde{\psi} \right) \right) \: = \: \det P_2 \det \left( C \psi \: + \: D \tilde{\psi} \right), \end{eqnarray*} and so we see that by absorbing $\det P_i$ into $q$, $\tilde{q}$, the ring is preserved. \section{Example: Hirzebruch surface} \label{sect:ex:hirz} Next, we shall compute quantum sheaf cohomology for a deformation of the tangent bundle of the Hirzebruch surface ${\bf F}_n$. We will use the same notation as earlier in section~\ref{sect:genlprocedure}. As in that section, the homogeneous coordinates $u$, $v$, $s$, $t$ (corresponding to the four toric divisors) have the following weights under two ${\bf C}^{\times}$ actions: \begin{center} \begin{tabular}{cccc} $u$ & $v$ & $s$ & $t$ \\ \hline $1$ & $1$ & $0$ & $n$ \\ $0$ & $0$ & $1$ & $1$ \end{tabular} \end{center} We describe a deformation $\cE^*$ of the cotangent bundle as the kernel \begin{displaymath} 0 \: \longrightarrow \: \cE^* \: \longrightarrow \: Z \: \stackrel{E}{\longrightarrow} \: W \otimes {\mathcal O} \: \longrightarrow \: 0, \end{displaymath} where \begin{displaymath} Z \: = \: {\mathcal O}(-1,0)^{\oplus 2} \oplus {\mathcal O}(0,-1) \oplus {\mathcal O}(-n,-1), \end{displaymath} $W$ is a two-dimensional vector space, \begin{displaymath} E \: = \: \left[ \begin{array}{cc} A x & B x \\ \gamma_1 s & \gamma_2 s \\ \alpha_1 t + s f_1(u,v) & \alpha_2 t + s f_2(u,v) \end{array} \right], \end{displaymath} with \begin{displaymath} x \: \equiv \: \left[ \begin{array}{c} u \\ v \end{array} \right], \end{displaymath} $A$, $B$ constant $2 \times 2$ matrices, 
$\gamma_1$, $\gamma_2$, $\alpha_1$, $\alpha_2$ constants, and $f_{1,2}(u,v)$ homogeneous polynomials of degree $n$. First, we shall outline the classical cohomology ring. As before, we use the generalized Koszul complex associated to $\Lambda^2 {\cal E}^*$, split it into two short exact sequences, and compute the kernel of the map ${\rm Sym}^2 W \rightarrow H^2(\Lambda^2 {\cal E}^*)$. The kernel arises from $H^1(\Lambda^2 Z)$, which is two-dimensional. It can be shown \cite{dgks}[section 3.3] that the kernel is generated by \begin{displaymath} \det\left( \psi A \: + \: \tilde{\psi} B \right), \: \: \: \left( \psi \gamma_1 \: + \: \tilde{\psi} \gamma_2 \right) \left( \psi \alpha_1 \: + \: \tilde{\psi} \alpha_2 \right). \end{displaymath} Because we will be encountering these polynomials often, we shall assign them names as follows: \begin{eqnarray*} Q_{K1} & = & \det \left( \psi A \: + \: \tilde{\psi} B \right), \\ Q_s & = & \psi \gamma_1 \: + \: \tilde{\psi} \gamma_2, \\ Q_t & = & \psi \alpha_1 \: + \: \tilde{\psi} \alpha_2. \end{eqnarray*} (This nomenclature is used in the companion paper \cite{dgks}.) Thus, the kernel in the degree $\vec{d}=0$ sector is generated by $Q_{K1}$, $Q_s Q_t$. Next, consider the sector of instanton degree $\vec{d}=(1,0)$. The linear sigma model moduli space has homogeneous coordinates $u_{0,1}$, $v_{0,1}$, $s$, $t_{0, \cdots, n}$, with weights \begin{center} \begin{tabular}{cccc} $u_{0,1}$ & $v_{0,1}$ & $s$ & $t_{0,\cdots,n}$ \\ \hline $1$ & $1$ & $0$ & $n$ \\ $0$ & $0$ & $1$ & $1$ \end{tabular} \end{center} with exceptional set \begin{displaymath} \{ u_0 = u_1 = v_0 = v_1 = 0, \: \: \: s = t_0 = t_1 = \cdots = t_n = 0 \}. 
\end{displaymath} The induced bundle ${\cal F}$ is given by \begin{displaymath} 0 \: \longrightarrow \: {\cal F}^* \: \longrightarrow \: Z \: \stackrel{E'}{\longrightarrow} \: W \otimes {\cal O} \: \longrightarrow \: 0, \end{displaymath} where \begin{displaymath} Z \: = \: {\cal O}(-1,0)^{\oplus 4} \oplus {\cal O}(0,-1) \oplus {\cal O}(-n,-1)^{\oplus \, n+1} \end{displaymath} and the map $E'$, induced from $E$, is given by \begin{displaymath} E' \: = \: \left[ \begin{array}{cc} A x_0 & B x_0 \\ A x_1 & B x_1 \\ \gamma_1 s & \gamma_2 s \\ \alpha_1 t_0 \: + \: s f_1(u_0, v_0) & \alpha_2 t_0 \: + \: s f_2(u_0, v_0) \\ \alpha_1 t_1 \: + \: s \cdots & \alpha_2 t_1 \: + \: s \cdots \\ \cdots & \cdots \\ \alpha_1 t_n \: + \: s \cdots & \alpha_2 t_n \: + \: s \cdots \end{array} \right], \end{displaymath} where we are using $\cdots$ to abbreviate full zero mode expansions of $f_1$, $f_2$, as described earlier in {\it e.g.} the analogous case of equation~(\ref{fullzeromodeexp}). We use $s \cdots$ merely to denote a series of terms which have $s$ as a common factor. We want to compute the kernel of the map ${\rm Sym}^{n+4}W \rightarrow H^{n+4}(\Lambda^{n+4} {\cal F}^*)$, which we do using the generalized Koszul complex associated to $\Lambda^{n+4} {\cal F}^*$. Following the usual pattern, and using the exceptional set described above (and the primitive collection it determines as in \cite{dgks}), we find that the map $H^3(S_3) \rightarrow H^4(S_4)$ fails to be an isomorphism (because $H^3(\Lambda^4 Z)$ is nonzero) and $H^{n+1}(S_{n+1}) \rightarrow H^{n+2}(S_{n+2})$ fails to be an isomorphism (because $H^{n+1}(\Lambda^{n+2} Z)$ is nonzero).
The kernel arising from the first is generated by \cite{dgks}[section 3.3] \begin{displaymath} Q_{K1}^2 \: = \: \left( \det \left( \psi A \: + \: \tilde{\psi} B \right) \right)^2 \end{displaymath} and the kernel arising from the second is generated by \cite{dgks}[section 3.3] \begin{displaymath} Q_s Q_t^{n+1} \: = \: \left( \psi \gamma_1 \: + \: \tilde{\psi} \gamma_2 \right) \left( \psi \alpha_1 \: + \: \tilde{\psi} \alpha_2 \right)^{n+1}. \end{displaymath} In terms of correlation functions, the result above implies that \begin{displaymath} \langle {\cal O} \rangle_{\vec{d} = 0} \: \propto \: \langle {\cal O} Q_{K1} Q_t^n \rangle_{\vec{d} = (1,0)}, \end{displaymath} which suggests that the OPE ring has the (partial) form \begin{equation} \label{hirzreln1} Q_{K1} Q_t^n \: = \: q_1 \end{equation} for some parameter $q_1$. Next, consider the degree $\vec{d} = (0,1)$ sector. The linear sigma model moduli space has homogeneous coordinates $u$, $v$, $s_{0,1}$, $t_{0,1}$ (where the $s_i$ and $t_i$ are the coefficients in the zero-mode expansion of $s$, $t$). These coordinates have weights: \begin{center} \begin{tabular}{cccccc} $u$ & $v$ & $s_0$ & $s_1$ & $t_0$ & $t_1$ \\ \hline $1$ & $1$ & $0$ & $0$ & $n$ & $n$ \\ $0$ & $0$ & $1$ & $1$ & $1$ & $1$ \end{tabular} \end{center} and the exceptional set is given by \begin{displaymath} \{ u=v=0, \: \: \: s_0 = s_1 = t_0 = t_1 = 0 \}. 
\end{displaymath} The induced bundle ${\cal F}$ is now given by \begin{displaymath} 0 \: \longrightarrow \: {\cal F}^* \: \longrightarrow \: Z \: \stackrel{E'}{\longrightarrow} \: W \otimes {\cal O} \: \longrightarrow \: 0, \end{displaymath} where \begin{displaymath} Z \: = \: {\cal O}(-1,0)^{\oplus 2} \oplus {\cal O}(0,-1)^{\oplus 2} \oplus {\cal O}(-n,-1)^{\oplus 2}, \end{displaymath} and the map $E'$, induced from $E$, is given by \begin{displaymath} E' \: = \: \left[ \begin{array}{cc} A x & B x \\ \gamma_1 s_0 & \gamma_2 s_0 \\ \gamma_1 s_1 & \gamma_2 s_1 \\ \alpha_1 t_0 \: + \: s_0 f_1(u,v) & \alpha_2 t_0 \: + \: s_0 f_2(u,v) \\ \alpha_1 t_1 \: + \: s_1 f_1(u,v) & \alpha_2 t_1 \: + \: s_1 f_2(u,v) \end{array} \right]. \end{displaymath} As before, we want to compute the kernel of the map ${\rm Sym}^4 W \rightarrow H^4(\Lambda^4 {\cal F}^*)$, which we do using the generalized Koszul complex associated to $\Lambda^4 {\cal F}^*$. Following the usual pattern, and using the exceptional set described above, we find that the map $H^1(S_1) \rightarrow H^2(S_2)$ fails to be an isomorphism (because $H^1(\Lambda^2 Z)$ is nonzero) and $H^3(S_3) \rightarrow H^4(\Lambda^4 {\cal F}^*)$ fails to be an isomorphism (because $H^3(\Lambda^4 Z)$ is nonzero). The kernel arising from the first is generated by \begin{displaymath} Q_{K1} \: = \: \det\left( \psi A \: + \: \tilde{\psi} B \right) \end{displaymath} and the kernel arising from the second is generated by \cite{dgks}[section 3.3] \begin{displaymath} Q_s^2 Q_t^2 \: = \: \left( \psi \gamma_1 \: + \: \tilde{\psi} \gamma_2 \right)^2 \left( \psi \alpha_1 \: + \: \tilde{\psi} \alpha_2 \right)^2.
\end{displaymath} In terms of correlation functions, the result above implies that \begin{displaymath} \langle {\cal O} \rangle_{\vec{d} = 0} \: \propto \: \langle {\cal O} Q_s Q_t \rangle_{\vec{d} = (0,1) } \end{displaymath} which suggests that the OPE ring has the (partial) form \begin{equation} \label{hirzreln2} Q_s Q_t \: = \: q_2 \end{equation} for some parameter $q_2$. Now, consider the sector of instanton degree $\vec{d}=(1,-n)$. In this sector, we need to take into account contributions from four-fermi terms, something we have not needed to do previously. The linear sigma model moduli space has homogeneous coordinates $u_{0,1}$, $v_{0,1}$, $t$ (where the $u_i$, $v_i$ are the coefficients in the zero mode expansion of $u$, $v$, and $s$ does not contribute because it has no zero modes in this sector). These coordinates have weights \begin{center} \begin{tabular}{ccccc} $u_0$ & $u_1$ & $v_0$ & $v_1$ & $t$ \\ \hline $1$ & $1$ & $1$ & $1$ & $n$ \\ $0$ & $0$ & $0$ & $0$ & $1$ \end{tabular} \end{center} and the exceptional set is given by \begin{displaymath} \{ u_0 = u_1 = v_0 = v_1 = 0, \: \: \: t = 0 \}. \end{displaymath} The induced bundle ${\cal F}$ is given by \begin{displaymath} 0 \: \longrightarrow \: {\cal F}^* \: \longrightarrow \: Z \: \stackrel{E'}{\longrightarrow} \: W \otimes {\cal O} \: \longrightarrow \: 0, \end{displaymath} where \begin{displaymath} Z \: = \: {\cal O}(-1,0)^{\oplus 4} \oplus {\cal O}(-n,-1), \end{displaymath} and the map $E'$, induced from $E$, is given by \begin{displaymath} E' \: = \: \left[ \begin{array}{cc} A x_0 & B x_0 \\ A x_1 & B x_1 \\ \alpha_1 t & \alpha_2 t \end{array} \right], \end{displaymath} where $x_i = [u_i, v_i]^T$. Furthermore, the second $U(1)$ effectively removes $t$ from the moduli space, so the linear sigma model moduli space is effectively ${\bf P}^3$, and then\footnote{ The reader might ask why the last factor is ${\cal O}$ instead of ${\cal O}(-n)$, since it arises from ${\cal O}(-n,-1)$. 
The answer is that the ${\bf C}^{\times}$ action describing the ${\bf P}^3$ must leave the $t$ coordinate neutral. If we label the two ${\bf C}^{\times}$ actions defining ${\bf F}_n$ as $\lambda$, $\mu$, then the ${\bf C}^{\times}$ action defining ${\cal M} = {\bf P}^3$ is $\lambda - n \mu$, so that over that ${\cal M}$, $t$ is a smooth section of ${\cal O}$ and $s$ is a smooth section of ${\cal O}(-n)$. } $Z = {\cal O}(-1)^{\oplus 4} \oplus {\cal O}$. In this example, ${\cal F}_1$ will be nonzero, as we will discuss momentarily, but first let us compute the cohomology ring structure in this instanton sector. Proceeding based on previous experience, the kernel will have two components. One component will arise from $H^3(\Lambda^4 Z) \neq 0$. This kernel will be proportional to \begin{displaymath} Q_{K1}^2 \: = \: \left( \det \left( \psi A \: + \: \tilde{\psi} B \right) \right)^2. \end{displaymath} The second component will arise from $H^0(Z) \neq 0$. This kernel will be proportional to \begin{displaymath} Q_t \: = \: \alpha_1 \psi \: + \: \alpha_2 \tilde{\psi}. \end{displaymath} Now, let us compute ${\cal F}_1$. We will find that four-fermi terms will contribute, something that has not been true in previous cases. (As a result, the interpretation of the kernels computed above as kernels of correlation functions is more subtle than before -- in some ways, this case is more closely parallel to the details of a single projective space and the kernels computed there.) Here, \begin{displaymath} {\cal F}_1 \: = \: H^1\left( {\bf P}^1, {\cal O}(-n) \right) \otimes {\cal O}(0,1) \: = \: \oplus_1^{n-1} {\cal O}(0,1) \end{displaymath} (for $n \geq 1$; we omit $n=0$ as we have already studied ${\bf P}^1 \times {\bf P}^1$). If we describe the moduli space as ${\bf P}^3$, then ${\cal F}_1 = {\cal O}(-n)^{\oplus n-1}$. (In previous cases, ${\cal F}_1$ vanished; we only mention it when it is nonzero.)
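The rank $n-1$ of ${\cal F}_1$ quoted above comes from $\dim H^1({\bf P}^1, {\cal O}(-n)) = n-1$, which can be sanity-checked by counting Cech representatives $u^a v^b$ with $a, b \leq -1$ and $a + b = -n$. A small sketch (in Python; illustrative only, not part of the original text):

```python
# dim H^1(P^1, O(-n)) via a Cech-style count: monomials u^a v^b
# with a <= -1, b <= -1, and a + b = -n.  There are n - 1 of them.
def h1_p1_dim(n):
    return sum(1 for a in range(-n + 1, 0) if -n - a <= -1)

# n = 1, ..., 6 gives dimensions 0, 1, 2, 3, 4, 5
assert [h1_p1_dim(n) for n in range(1, 7)] == [0, 1, 2, 3, 4, 5]
```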
Since ${\cal F}_1$ is nonzero (and of the same rank as the obstruction bundle, which in fact is identical), in each correlation function in this sector we need to insert \begin{displaymath} Q_s^{n-1} \: = \: \left( \psi \gamma_1 \: + \: \tilde{\psi} \gamma_2 \right)^{n-1} \end{displaymath} (following appendix~\ref{sect:glsm-4fermi}). Now, let us find some relations between correlation functions. First, let us relate correlation functions in degree $(1,-n)$ to those in degree $(1,0)$. In both degrees, $Q_{K1}^2$ partially generates the kernel, but in the former case, the rest arises from $Q_t$, whereas in the latter case, $Q_s Q_t^{n+1}$ is a generator, so to account for the difference, to map kernels to kernels, correlators in the degree $(1,0)$ sector must be multiplied by $Q_s Q_t^{n+1}/Q_t = Q_s Q_t^n$. Furthermore, because in the degree $(1,-n)$ sector, four-fermi terms add a factor of $Q_s^{n-1}$, we must also add that same factor to correlators in degree $(1,0)$. Thus, we find that \begin{displaymath} \langle {\cal O} \rangle_{\vec{d} = (1,-n)} \: \propto \: \langle {\cal O} \left( Q_s Q_t^n \right) \left( Q_s^{n-1} \right) \rangle_{\vec{d} = (1,0)} \: = \: \langle {\cal O} \left( Q_s Q_t \right)^n \rangle_{\vec{d}=(1,0)}. \end{displaymath} Note that this result is compatible with the earlier relation~(\ref{hirzreln2}), namely \begin{displaymath} Q_s Q_t \: = \: q_2 \end{displaymath} for some constant $q_2$; furthermore, to achieve that compatibility required both matching kernels and also utilizing four-fermi terms. As one more consistency check, let us now work out the relation between correlation functions in degree $\vec{d} = 0$ and those in degree $\vec{d} = (1,-n)$.
In the former case, the kernel is generated by $Q_{K1}$, $Q_s Q_t$, whereas in the latter case, the kernel is generated by $Q_{K1}^2$, $Q_t$, so if we ignore four-fermi terms, then to match kernels, correlation functions would be related by \begin{displaymath} \langle {\cal O} Q_s \rangle_{\vec{d}=0} \: \propto \: \langle {\cal O} Q_{K1} \rangle_{\vec{d}=(1,-n)}. \end{displaymath} Because in degree $(1,-n)$ we also have four-fermi terms, generating factors of $Q_s^{n-1}$, the correct relation between correlation functions is \begin{displaymath} \langle {\cal O} Q_s Q_s^{n-1} \rangle_{\vec{d}=0} \: \propto \: \langle {\cal O} Q_{K1} \rangle_{\vec{d}=(1,-n)}. \end{displaymath} In terms of our previous relations, this suggests that \begin{displaymath} Q_{K1} \: = \: q_1 q_2^{-n} Q_s^n, \end{displaymath} which is indeed an algebraic consequence of~(\ref{hirzreln1}), (\ref{hirzreln2}). (Simply multiply both sides by either $Q_s^n$ or $Q_t^n$ and apply (\ref{hirzreln2}) to turn one into the other.) Again, note we need both kernels and four-fermi terms to derive consistent relations. This last example also illustrates a technical point regarding OPE computations that will arise in \cite{dgks}. There, we will derive OPE's by giving relations between correlation functions of the form \begin{displaymath} \langle {\cal O} \rangle_{\beta} \: \propto \: \langle {\cal O} R_{\beta, \beta'} \rangle_{\beta'}. \end{displaymath} Except for the last case above, in the examples we have studied it has been possible to find an $R_{\beta,\beta'}$ putting relations in the form above. However, the last example illustrates that this cannot always be done. Technically, in \cite{dgks} we deal with this issue through the introduction of `direct systems' to describe relations between correlation functions of different degrees. Let us also take a moment to discuss the interpretation of the $q$'s. 
In this text, we have been using them merely as placeholders for unspecified constants; in particular, classical limits do not necessarily correspond to the case that all $q_i \rightarrow 0$. To clarify this, let us consider the (2,2) limit of the relations we have been deriving. In this limit, \begin{displaymath} A \: = \: I, \: \: \: B \: = \: 0, \: \: \: \gamma_1 \: = \: 0, \: \: \: \gamma_2 \: = \: 1, \: \: \: \alpha_1 \: = \: n, \: \: \: \alpha_2 \: = \: 1, \: \: \: f_1 \: = \: f_2 \: = \: 0. \end{displaymath} As a result, \begin{displaymath} Q_{K1} \: = \: \psi^2, \: \: \: Q_s \: = \: \tilde{\psi}, \: \: \: Q_t \: = \: n \psi \: + \: \tilde{\psi}. \end{displaymath} The classical cohomology ring of the Hirzebruch surface can be described by (toric) generators $D_u$, $D_v$, $D_s$, $D_t$ in degree 2, obeying \begin{displaymath} D_u \: \sim \: D_v, \: \: \: D_t \: \sim \: D_s \: + \: n D_v, \end{displaymath} \begin{displaymath} D_u^2 \: = \: 0, \: \: \: D_s( n D_u \: + \: D_s) \: = \: 0. \end{displaymath} If we identify $D_u = \psi$, $D_s = \tilde{\psi}$, then the relations~(\ref{hirzreln1}), (\ref{hirzreln2}), namely, \begin{displaymath} Q_{K1} Q_t^n \: = \: q_1, \: \: \: Q_s Q_t \: = \: q_2, \end{displaymath} become \begin{displaymath} D_u^2 \left( n D_u \: + \: D_s \right)^n \: = \: q_1, \: \: \: D_s( n D_u \: + \: D_s) \: = \: q_2, \end{displaymath} which clearly do not have the correct classical limit when $q_1 \rightarrow 0$. On the other hand, the equivalent relations \begin{displaymath} Q_{K1} \: = \: q' Q_s^n, \: \: \: Q_s Q_t \: = \: q_2 \end{displaymath} (where $q' = q_1 q_2^{-n}$) become \begin{displaymath} D_u^2 \: = \: q' D_s^n, \: \: \: D_s( n D_u \: + \: D_s ) \: = \: q_2, \end{displaymath} which does reduce to the classical cohomology ring relations when $q' = q_1 q_2^{-n} \rightarrow 0$ and $q_2 \rightarrow 0$. 
This also illuminates the issue with the previous presentation: taking $q_1 \rightarrow 0$, $q_2 \rightarrow 0$ independently does not give the classical limit; one must also demand $q_1 q_2^{-n} \rightarrow 0$. Thus, we see that only certain presentations of the ring will give the classical cohomology ring on the (2,2) locus when all $q \rightarrow 0$. For other presentations, more complicated limits must be taken\footnote{ We would like to thank I.~Melnikov for illuminating discussions of this point. }. The particular presentation given in \cite{dgks} does have the property that on the (2,2) locus, one recovers the correct classical limit as all $q \rightarrow 0$ independently. In this paper we shall not belabor this point. Now, let us summarize. We have not described an exhaustive survey of all possibilities (see instead \cite{dgks}), but based on the computations performed, it would seem that the OPE ring in this example is defined by \begin{eqnarray*} Q_{K1} Q_t^n & = & q_1, \\ Q_s Q_t & = & q_2, \end{eqnarray*} which are relations~(\ref{hirzreln1}), (\ref{hirzreln2}). We will see in section~\ref{sect:genl} that this is a correct specialization of the general results of \cite{dgks}. \section{General result} \label{sect:genl} \subsection{Result} First, we shall outline the result from \cite{dgks}, and then compute it in examples. Let $\{\rho_i\}$ denote the (one-dimensional) edges of the fan, {\it i.e.} the toric divisors, and let $K_i$ denote `primitive collections' of edges, that is, maximal collections of edges not contained in any single cone. (These collections define the Stanley-Reisner ideal, through the statement that the toric divisors do not all intersect.) To each primitive collection $K$, we can associate a unique divisor class $\beta_K$, as follows. 
Let the vector generating the edge of the fan corresponding to $\rho$ be denoted $v_{\rho}$; then, for $K = \{ \rho_1, \cdots, \rho_k \}$, we can write \begin{equation} \label{cdefn} v_{\rho_1} \: + \: \cdots \: + \: v_{\rho_k} \: = \: \sum_{\rho} c_{\rho} v_{\rho} \end{equation} for some integers $c_{\rho} > 0$, with the sum on the right running over toric divisors not necessarily in $K$. By moving the right-hand-side to the left, we can write this as \begin{equation} \label{adefn} \sum_{\rho} a_{\rho} v_{\rho} \: = \: 0 \end{equation} for some integers $a_{\rho}$. Then, it can be shown \cite{batyrev} that there is a unique curve class $\beta_K$ such that $\beta_K \cdot \rho = a_{\rho}$ for all $\rho$. Now, for each divisor class $c = [\rho]$, we define a $|c|\times|c|$ matrix $A_c$, where $|c|$ is the number of toric divisors linearly equivalent to $\rho$. The matrix $A_c$ is given by the rows of the map $E$ appearing in the definition of the deformation ${\cal E}^*$, the rows corresponding to representatives of $c$, and with nonlinear terms omitted. Define \begin{displaymath} Q_c \: = \: \det A_c. \end{displaymath} The quantum sheaf cohomology ring is then given by polynomials in the elements of a basis for $W$, modulo the relations \begin{equation} \label{qsc-general} \prod_{c \in [K]} Q_c \: = \: q^{\beta_K} \prod_{c \in [K^-]} Q_c^{-d_c^{\beta_K}} \end{equation} for each primitive collection $K$, where $[K^-]$ denotes the set of linear equivalence classes of edges appearing in the right-hand-side of~(\ref{cdefn}) with nonzero coefficients $c_{\rho}$, and $d_c^{\beta_K} \equiv c \cdot \beta_K$. (Note that for $c \in [K^-]$, the exponent $-d_c^{\beta_K}$ is nonnegative.) The formula above gives a canonical presentation of the quantum sheaf cohomology ring for each toric variety, dependent only upon the bundle and toric variety and not the details of any particular presentation such as ${\bf C}^{\times}$ weights or $U(1)$ charges in a quotient. 
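Before turning to the examples, it may help to see the recipe carried out symbolically. The following is a minimal sketch (in Python with sympy; the matrix $A$ is an arbitrary invertible stand-in for the deformation data, not taken from the text) of the single relation for a projective space:

```python
import sympy as sp

# Illustrative sketch of the recipe for a single projective space P^n
# (here n = 2).  There is one primitive collection, one linear
# equivalence class of toric divisors, and hence one relation
# det(A psi) = q.  A below is an arbitrary invertible example.
n = 2
psi = sp.symbols('psi')
A = sp.Matrix([[1, 2, 0],
               [0, 1, 3],
               [0, 0, 1]])
Q = (psi * A).det()                       # Q_c = det(A psi)
# det(A psi) = psi^(n+1) det(A): a single degree-(n+1) relation in psi
assert sp.expand(Q - psi**(n + 1) * A.det()) == 0
# on the (2,2) locus (A = identity) the relation collapses to psi^(n+1) = q
assert (psi * sp.eye(n + 1)).det() == psi**(n + 1)
```

The examples below carry out the same computation by hand for ${\bf P}^n$, ${\bf P}^1\times{\bf P}^1$, and Hirzebruch surfaces.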
Let us work through a few examples of this formalism, beginning with a projective space ${\bf P}^n$, as described in section~\ref{sect:oneproj}. Here, there are $n+1$ toric divisors $\rho_0, \cdots, \rho_{n}$. There is only one primitive collection, \begin{displaymath} K \: = \: \{ \rho_0, \cdots, \rho_n \}, \end{displaymath} and for any fan, the vectors generating the edges obey \begin{displaymath} v_{\rho_0} \: + \: \cdots \: + \: v_{\rho_n} \: = \: 0. \end{displaymath} The unique divisor class $\beta$ such that $\beta \cdot \rho = 1$ for all $\rho$ is represented by any of the toric divisors $\rho$, since they are all linearly equivalent. As there is a single linear equivalence class of toric divisors, there is one matrix $A_c = A \psi$, derived from the map $E$ defining ${\cal E}^*$, and one $Q = \det A_c = \det (A \psi)$. The quantum sheaf cohomology ring is then ${\bf C}[\psi]$ modulo the relation \begin{displaymath} \det( A \psi) \: = \: q, \end{displaymath} matching what was found in section~\ref{sect:oneproj}, and for that matter matching the (2,2) locus (since on a single projective space, all toric Euler deformations return the tangent bundle itself). A slightly more interesting example is ${\bf P}^1 \times {\bf P}^1$, as discussed in section~\ref{sect:ex:proj}. Here, let $D_{x_{0,1}}$, $D_{\tilde{x}_{0,1}}$ denote the four toric divisors. (We are using here nearly the same notation for the toric divisors that we used for the corresponding homogeneous coordinates in section~\ref{sect:ex:proj}.) There are two primitive collections: \begin{displaymath} K_1 \: = \: \{ D_{x_0}, D_{x_1} \}, \: \: \: K_2 \: = \: \{ D_{\tilde{x}_0}, D_{\tilde{x}_1} \}. \end{displaymath} In each primitive collection, the constituent divisors are all linearly equivalent to one another. Moreover, \begin{displaymath} v_{x_0} \: + \: v_{x_1} \: = \: 0, \: \: \: v_{\tilde{x}_0} \: + \: v_{\tilde{x}_1} \: = \: 0. 
\end{displaymath} It is easy to check that $\beta_1$ is represented by $D_{\tilde{x}_0}$, $D_{\tilde{x}_1}$, and $\beta_2$ is represented by $D_{x_0}$, $D_{x_1}$, since $D_{x_i} \cdot D_{\tilde{x}_j} = 1$. Following the notation of section~\ref{sect:ex:proj}, we find \begin{eqnarray*} A_1 \: = \: \psi A \: + \: \tilde{\psi} B, \\ A_2 \: = \: \psi C \: + \: \tilde{\psi} D, \end{eqnarray*} and from~(\ref{qsc-general}) the quantum sheaf cohomology ring is ${\bf C}[\psi,\tilde{\psi}]$ modulo the relations \begin{eqnarray*} \det\left( \psi A \: + \: \tilde{\psi} B \right) \: = \: q^{\beta_1}, \\ \det\left( \psi C \: + \: \tilde{\psi} D \right) \: = \: q^{\beta_2}, \end{eqnarray*} matching the results of section~\ref{sect:ex:proj}. Now, let us specialize the general result to Hirzebruch surfaces, and compare to the results we obtained previously. To that end, we describe the Hirzebruch surface classically with four (toric) divisors $D_u$, $D_v$, $D_s$, $D_t$, where \begin{displaymath} D_u \: = \: D_v, \: \: \: D_t \: = \: D_s \: + \: n D_v \end{displaymath} and \begin{displaymath} D_u \cdot D_v \: = \: 0 \: = \: D_s \cdot D_t. \end{displaymath} There are two `primitive collections' of divisors, defined by the Stanley-Reisner ideal above: \begin{displaymath} K_1 \: = \: \{ D_u, D_v \}, \: \: \: K_2 \: = \: \{ D_s, D_t \}. \end{displaymath} For the first primitive collection, \begin{displaymath} v_u \: + \: v_v \: = \: n v_s, \end{displaymath} and the unique divisor class $\beta_{1}$ such that \begin{displaymath} [D_u] \cdot \beta_{1} \: = \: 1 \: = \: [D_v] \cdot \beta_{1}, \: \: \: [D_s] \cdot \beta_{1} \: = \: - n, \: \: \: [D_t] \cdot \beta_{1} \: = \: 0 \end{displaymath} is represented by $D_s$, {\it i.e.} $\beta_{1} = [D_s]$. 
Similarly, \begin{displaymath} v_s \: + \: v_t \: = \: 0 \end{displaymath} and the unique divisor class $\beta_{2}$ such that \begin{displaymath} [D_u] \cdot \beta_{2} \: = \: 0 \: = \: [D_v] \cdot \beta_{2}, \: \: \: [D_s] \cdot \beta_{2} \: = \: 1 \: = \: [D_t] \cdot \beta_{2} \end{displaymath} is represented by $D_u$, $D_v$, {\it i.e.} $\beta_{2} = [D_u] = [D_v]$. Then, to each primitive collection $K$ is associated a polynomial in the generators of $W$. In this case, these polynomials are: \begin{eqnarray*} \prod_{c \in [K_1]} Q_c & = & Q_{K1}, \\ \prod_{c \in [K_2]} Q_c & = & Q_{s} Q_t. \end{eqnarray*} In the expressions above, note that $D_u$ and $D_v$ are linearly equivalent, so there is only one linear equivalence class in $[K_1]$, but $D_s$ and $D_t$ are not linearly equivalent, so there are two linear equivalence classes in $[K_2]$. Then, putting this together, the quantum sheaf cohomology relations in~(\ref{qsc-general}) are \begin{eqnarray*} \prod_{c \in [K_1]} Q_c & = & Q_{K1} \: = \: q^{\beta_1} Q_{s}^{-d^{\beta_1}_s} \\ & = & q^{\beta_1} Q_{s}^{n}, \\ \prod_{c \in [K_2]} Q_c & = & Q_{s} Q_t \: = \: q^{\beta_2}, \end{eqnarray*} where \begin{displaymath} d^{\beta_K}_{\rho} \: \equiv \: [D_{\rho}] \cdot \beta_K \end{displaymath} (here, $d^{\beta_1}_s = [D_s] \cdot \beta_{1} = -n$) and $q^{\beta_1}$, $q^{\beta_2}$ are the two quantum parameters. Note that by multiplying both sides by $Q_t^n$ and using the second relation, we can turn these two relations into \begin{eqnarray*} Q_{K1} Q_t^n & = & q', \\ Q_s Q_t & = & q^{\beta_2}, \end{eqnarray*} where $q' = q^{\beta_1} \left( q^{\beta_2} \right)^n$. It is this latter form in which the OPE rings for the Hirzebruch surface appear earlier in section~\ref{sect:ex:hirz}, where $q_1 = q'$, $q_2 = q^{\beta_2}$. \subsection{Comparison to McOrist-Melnikov's results} Let us now compare to the one-loop Coulomb branch results for the quantum sheaf cohomology ring given in \cite{mcom}. 
Implicitly, the relations derived in all one-loop Coulomb branch computations are not the relations of the ring at any single large-radius limit of the GLSM, but rather live in a `localization' of the ring, in which operators have been inverted. Physically, this arises because the Coulomb branch computations take place in a regime where $\sigma$ vevs are large, and so can be assumed nonzero and invertible; mathematically, this makes it possible for the one-loop Coulomb branch relations to be equally applicable to all large-radius phases. Thus, to compare the results of the last subsection, derived in a single large-radius phase, we must descend to a localization of the ring in which operator invertibility is allowed, and make the comparison in that localization. We will find that, after implicitly descending to that localization, the results of the last subsection do indeed match the predictions of \cite{mcom}. Partition the line bundle factors into collections $\left\{ {\cal O}\left( \vec{q}_i \right) \right\}$ with matching $\vec{q}_i$. (We can think of this equivalently as partitioning the chiral superfields, indexed by $i$, into collections consisting of matching $U(1)$ charges $\vec{q}_i$.) Index such collections by $\alpha$. (There is a one-to-one correspondence between such collections and linear equivalence classes of toric divisors.) Let \begin{displaymath} E_i: \: {\cal O}^{\oplus r} \: \longrightarrow \: {\cal O}\left(\vec{q}_i\right) \end{displaymath} denote the maps in the short exact sequence defining ${\cal E}$. Define \begin{displaymath} A_{(\alpha)i}^{j a} \: \equiv \: \left. \frac{\partial}{\partial \phi_j} E_i^a \right|_{\phi \equiv 0} \end{displaymath} for $i$, $j$ in the collection $\alpha$. 
For example, the tangent bundle of a toric variety is described by \begin{displaymath} E_i^a \: = \: Q_i^a \phi_i \end{displaymath} hence \begin{displaymath} A_{(\alpha)i}^{j a} \: = \: \delta^j_i Q_{(\alpha)}^a, \end{displaymath} where $\vec{q}_\alpha = (Q^a_{\alpha})$ denotes the $U(1)$ charges of all fields in the collection $\alpha$. In this language, if we define $V_{\alpha}$ to be a vector space of the same dimension as the number of line bundles in the collection $\alpha$ (the number of chiral superfields with matching charges $\vec{q}_{\alpha}$), and let $W = {\bf C}^r$, then we can describe the deformation of the tangent bundle as the cokernel \begin{displaymath} 0 \: \longrightarrow \: W \otimes {\cal O} \: \longrightarrow \: \bigoplus_{\alpha} V_{\alpha} \otimes {\cal O}\left( \vec{q}_{\alpha} \right) \: \longrightarrow \: {\cal E} \: \longrightarrow \: 0. \end{displaymath} Define \begin{displaymath} M_{(\alpha) i}^j \: = \: A_{(\alpha) i}^{ja} \psi_a. \end{displaymath} This is the same matrix that was denoted $A_c$ in the previous section, but we have adapted our notation to more closely resemble that of \cite{mcom}. In this notation, the result of \cite{dgks} is that the quantum sheaf cohomology ring relations descend to \begin{equation} \label{mm-general} \prod_{\alpha} \left( \det M_{(\alpha)} \right)^{Q^a_{\alpha}} \: = \: q_{a} \end{equation} for each $a$, where $\vec{q}_{\alpha} = (Q^a_{\alpha})$, and $q_a$ is the quantum parameter, modulo inversion of operators. The ring above is specified in terms of the $U(1)$ charges of the toric homogeneous coordinates, whereas in the previous section we gave a canonical representation that was independent of such choices. Specifically, the canonical representative was described in terms of $a_{\rho}$ defined in~(\ref{adefn}). 
However, the charges $Q^a_{\alpha}$ are also defined as the kernel of a matrix formed from the $v_{\rho}$'s, as in equation~(\ref{adefn}); thus, the $a_{\rho} = D_{\rho} \cdot \beta$ defined there are precisely one set of charges. With that in mind, the quantum sheaf cohomology relations~(\ref{qsc-general}) can be written in the form \begin{displaymath} \prod_c Q_c^{D_c \cdot \beta} \: = \: q^{\beta} \end{displaymath} for the $\beta$ associated to each primitive collection, which is the same as \begin{displaymath} \prod_c \left( \det A_c \right)^{Q_c^{\beta}} \: = \: q^{\beta} \end{displaymath} for $Q_c^{\beta} \equiv D_c \cdot \beta$. Thus, we see that the relations~(\ref{qsc-general}) specified in \cite{dgks} really do descend to the relations~(\ref{mm-general}) in a localization\footnote{ We are implicitly performing this comparison in the localization mentioned earlier, as we have not specified whether the charges $Q^a_{\alpha}$ are positive or negative. } of the ring, written in a form closer to that of reference~\cite{mcom}, for a particular choice of charges $Q^a_{\alpha}$. We should emphasize again that this result is independent of nonlinear deformations (meaning, terms in $E_i^a$ nonlinear in $\phi$'s), as conjectured in {\it e.g.} \cite{mm2}[section 3.5]. This result also nicely meshes with previous physics results. For example, \cite{kmmp}[section A.3] conjectured that A/2 correlation functions should be independent of nonlinear deformations, based on the fact that the discriminant locus in gauged linear sigma models does not depend on such nonlinear deformations. In the special case of linear deformations, {\it i.e.} when \begin{displaymath} E_i^a \: = \: \sum_j A_{ (\alpha) i}^{j a} \phi_j, \end{displaymath} the result above specializes to the result of \cite{mcom}, computed with Coulomb branch techniques in gauged linear sigma models. Now, let us compare to particular examples discussed earlier in this paper. 
In the case of deformations of the tangent bundle of ${\bf P}^1 \times {\bf P}^1$ discussed in section~\ref{sect:qsc-p1p1}, it is straightforward to check that there are two $M_{(\alpha)}$, given by \begin{eqnarray*} M_{(1)} & = & \psi_1 A \: + \: \psi_2 B, \\ M_{(2)} & = & \psi_1 C \: + \: \psi_2 D, \end{eqnarray*} and so we have the relations \begin{displaymath} \det M_{(1)} \: = \: q_1, \: \: \: \det M_{(2)} \: = \: q_2, \end{displaymath} which matches our previous computation. Next, let us describe the example of a Hirzebruch surface ${\bf F}_n$. Consider a fan with edges $(1,0)$, $(0,1)$, $(-1,n)$, $(0,-1)$, defined by the charges $(1,0)$, $(0,1)$, $(1,0)$, $(n,1)$ and homogeneous coordinates $u$, $s$, $v$, $t$, respectively, as in figure~\ref{fig:FnFan}. For a deformation of the tangent bundle of ${\bf F}_n$ as defined in~(\ref{hirz-genl-map}), we compute \begin{eqnarray*} M_{(1)} & = & A \psi_1 \: + \: B \psi_2, \\ M_{(2)} & = & \gamma_1 \psi_1 \: + \: \gamma_2 \psi_2, \\ M_{(3)} & = & \alpha_1 \psi_1 \: + \: \alpha_2 \psi_2, \end{eqnarray*} and so we have the quantum sheaf cohomology relations \begin{eqnarray*} \left( \det M_{(1)} \right) \left( M_{(3)} \right)^n & = & q_1, \\ \left( M_{(2)} \right) \left( M_{(3)} \right) & = & q_2, \end{eqnarray*} which are precisely (\ref{hirzreln1}), (\ref{hirzreln2}) computed earlier, identifying \begin{displaymath} Q_{K1} \: = \: \det M_{(1)}, \: \: \: Q_s \: = \: M_{(2)}, \: \: \: Q_t \: = \: M_{(3)}. \end{displaymath} In the special case that ${\cal E} = TX$, the ring above reduces to \begin{displaymath} \prod_i \left( \sum_b Q_i^b \psi_b \right)^{Q^a_i} \: = \: q_a, \end{displaymath} or equivalently, \begin{eqnarray*} \psi_1^2 \left( n \psi_1 \: + \: \psi_2 \right)^n & = & q_1, \\ \psi_2 \left( n \psi_1 \: + \: \psi_2 \right) & = & q_2, \end{eqnarray*} which is a standard result in (2,2) GLSM's \cite{dr}[equ'n (3.44)]. 
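As a quick symbolic cross-check of these (2,2) relations (an illustrative sketch for a single value of $n$, not part of the derivation), one can verify by a Gr\"obner-basis reduction that they imply $\psi_1^2 \, q_2^n = q_1 \, \psi_2^n$, the (2,2) specialization of the relation $Q_{K1} = q_1 q_2^{-n} Q_s^n$ noted earlier:

```python
import sympy as sp

# (2,2) quantum cohomology relations of F_n (n = 3 for concreteness):
#   psi1^2 (n psi1 + psi2)^n = q1,   psi2 (n psi1 + psi2) = q2
n = 3
p1, p2, q1, q2 = sp.symbols('psi1 psi2 q1 q2')
r1 = sp.expand(p1**2 * (n * p1 + p2)**n - q1)
r2 = sp.expand(p2 * (n * p1 + p2) - q2)

# Claim: psi1^2 q2^n - q1 psi2^n lies in the ideal (r1, r2), i.e. the
# relations imply psi1^2 = q1 q2^(-n) psi2^n once q2 is inverted.
G = sp.groebner([r1, r2], p1, p2, q1, q2, order='grevlex')
claim = sp.expand(p1**2 * q2**n - q1 * p2**n)
_, remainder = G.reduce(claim)
assert remainder == 0     # claim is in the ideal, as expected
```

The same reduction with other small values of $n$ behaves identically; the check only confirms the algebra, of course, not the underlying physics.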
If we identify the toric divisors $D_i$ as \begin{displaymath} D_i \: = \: \sum_a Q_i^a \psi_a \end{displaymath} (as a consistency check, note that since $\sum_i Q_i^a \vec{v}_i = 0$, it is necessarily the case that \begin{displaymath} \sum_i \langle m, \vec{v}_i \rangle D_i \: = \: 0, \end{displaymath} hence the description above encodes the linear relations on the Chow ring) then the GLSM ring can be written as \begin{displaymath} \prod_i D_i^{Q_i^a} \: = \: q_a, \end{displaymath} or equivalently, \begin{eqnarray*} D_u D_v D_t^n & = & q_1, \\ D_s D_t & = & q_2, \end{eqnarray*} where \begin{displaymath} D_u \: \sim \: D_v, \: \: \: D_t \: \sim \: D_s \: + \: n D_v. \end{displaymath} There is an issue with classical limits, previously noted in section~\ref{sect:ex:hirz}. Let us outline the analysis here, in the language of GLSM's. The classical cohomology ring of ${\bf F}_n$ can be described by the relations \begin{displaymath} D_u^2 \: = \: 0, \: \: \: D_s^2 \: = \: -n D_u D_s, \end{displaymath} and if we identify $D_u = \psi_1$, $D_s = \psi_2$, then we almost recover this in the limit $q_a \rightarrow 0$, except for an extra factor of $D_t^n$ modifying the relation $D_u^2=0$. In order to make the relation with the classical cohomology ring more clear, we should work in a different basis, one in which the fields have charges $(1,0)$, $(1,0)$, $(0,1)$, $(-n,1)$. (Note this is achieved by an $SL(2,{\bf Z})$ transformation.) In this basis, the quantum cohomology ring becomes \begin{displaymath} \psi_1^2 \: = \: q_1 \left( -n \psi_1 \: + \: \psi_2 \right)^n, \: \: \: \psi_2\left( -n \psi_1 \: + \: \psi_2 \right) \: = \: q_2 \end{displaymath} and so when we set $q_a \rightarrow 0$, and identify $D_u = \psi_1$, $D_s = - \psi_2$, we recover the classical cohomology ring without extraneous factors. (Alternatively, the canonical presentation of the previous section avoids this problem.) 
More invariantly, to cleanly recover the classical cohomology relations from the GLSM relations, one wants to work in a basis such that the smooth phase of the GLSM corresponds to the positive orthant of the secondary fan; this is a property of the canonical presentation of the previous section. \section{Conclusions} In this paper we have outlined the mathematical computation of quantum sheaf cohomology rings for deformations of tangent bundles of toric varieties, emphasizing physics aspects of the computation. Our new methods allow for much more efficient mathematical computations than were previously possible. We have also seen in examples that in these cases (toric varieties, deformations of the tangent bundle), quantum sheaf cohomology is independent of nonlinear deformations, as conjectured elsewhere (see {\it e.g.} \cite{mm2,kmmp}). Rigorous general proofs will appear in \cite{dgks}. Extensions of the results of this paper to Grassmannians and flag manifolds are under discussion \cite{dgksnext}. Extensions to hypersurfaces would also be extremely useful. In this paper and \cite{dgks} we compute kernels of correlation functions in order to compute operator products; it would also be interesting to work out complete expressions for the correlation functions themselves. \section{Acknowledgements} We would like to thank M.~Ballard, J.~McOrist, and I.~Melnikov for useful conversations. This work was presented in part at the conferences `String-Math 2011' at the University of Pennsylvania, and `(0,2) mirror symmetry and heterotic topological strings' at the Erwin Schr\"odinger Institute in Vienna, Austria, and we thank both institutions for hospitality while part of this work was completed. R.D. was partially supported by NSF grants DMS-0908487, DMS-0636606. J.G. was partially supported by NSF grant DMS-0636606. S.K. was partially supported by NSF grant DMS-0555678. E.S. was partially supported by NSF grants DMS-0705381, PHY-0755614.
\subsubsection*{\bibname}} \title{Tight Lower Complexity Bounds for Strongly Convex \\ Finite-Sum Optimization} \author{Min Zhang$^1$ \qquad Yao Shu$^2$ \qquad Kun He$^1$\thanks{Corresponding author.} \vspace{0.6em}\\ $^1$School of Computer Science and Technology, \\ Huazhong University of Science and Technology, China\\ $^2$School of Computing, National University of Singapore, Singapore \vspace{0.6em}\\ \{m\_zhang, brooklet60\}@hust.edu.cn, shuyao@comp.nus.edu.sg } \begin{document} \maketitle \begin{abstract} Finite-sum optimization plays an important role in the area of machine learning, and hence has triggered a surge of interest in recent years. To address this optimization problem, various randomized incremental gradient methods have been proposed with guaranteed upper and lower complexity bounds for their convergence. Nonetheless, these lower bounds rely on certain conditions: a deterministic optimization algorithm, or a fixed probability distribution for the selection of component functions. Meanwhile, some lower bounds do not even match the upper bounds of the best known methods in certain cases. To break these limitations, we derive tight lower complexity bounds of randomized incremental gradient methods, including SAG, SAGA, SVRG, and SARAH, for two typical cases of finite-sum optimization. Specifically, our results tightly match the upper complexity of Katyusha or VRADA when each component function is strongly convex and smooth, and tightly match the upper complexity of SDCA without duality and of KatyushaX when the finite-sum function is strongly convex and the component functions are average smooth. \end{abstract} \section{INTRODUCTION} Finite-sum optimization, also known as Empirical Risk Minimization, is a key problem in the area of machine learning, since it underlies the training of most learning models, and it therefore requires a thorough study. 
Specifically, we consider minimizing the following finite-sum optimization problem: \begin{equation} \label{obj} \min_{x\in\mathbb{R}^d} F(x)=\frac{1}{n}\sum_{i=1}^{n}f_i(x). \end{equation} In this work, we mainly focus on the study of randomized incremental gradient methods with access to an Incremental First-order Oracle (IFO) for the component functions \citep{agarwal2015lower}. Formally, for $x\in\mathbb{R}^d$ and index $i\in\{1,2,\ldots,n\}$, the IFO returns \begin{equation} h_F(x,i)=[f_i(x),\nabla f_i(x)]. \end{equation} Note that while SAG \citep{schmidt2017minimizing}, SAGA \citep{defazio2014saga}, SVRG \citep{johnson2013accelerating,zhang2013linear} and Katyusha \citep{allen2017natasha} are included in randomized incremental gradient methods, certain dual coordinate methods such as SDCA \citep{shalev2013stochastic} are excluded and therefore are outside the scope of this paper. The complexity of randomized incremental gradient algorithms for strongly convex functions is defined as the number of IFO queries required to find an $\epsilon$-suboptimal solution $\hat{x}$, i.e., to achieve \begin{equation} F(\hat{x})-\min_{x\in\mathbb{R}^d}F(x)\leq\epsilon. \end{equation} For consistency, we follow this definition in the following analysis of the complexity of randomized incremental gradient algorithms. In the literature, great efforts have been devoted to devising randomized incremental gradient algorithms under various conditions, and to analyzing their upper complexity bounds. Nevertheless, it is still important to figure out whether these methods can enjoy a smaller complexity in certain cases and whether there exist other methods achieving higher performance. To answer these questions, we attempt to derive tight lower complexity bounds for the problem defined in \autoref{obj}. There are many incremental methods and lower bounds for convex finite-sum optimization, such as SVRG, SAGA and SAG. 
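To make the oracle model concrete, here is a minimal sketch (with hypothetical least-squares components $f_i(x) = \tfrac{1}{2}(a_i^\top x - b_i)^2$; the data, step size, and iteration count are arbitrary) of an IFO and of an algorithm, plain SGD, that interacts with $F$ only through IFO queries:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 8, 3
A = rng.standard_normal((n, d))    # rows a_i of hypothetical data
b = rng.standard_normal(n)

def ifo(x, i):
    """Incremental First-order Oracle for f_i(x) = 0.5 (a_i^T x - b_i)^2."""
    r = A[i] @ x - b[i]
    return 0.5 * r**2, r * A[i]    # returns [f_i(x), grad f_i(x)]

# a randomized incremental method spends exactly one IFO call per step;
# the complexity discussed in the text counts these calls
x = np.zeros(d)
for _ in range(200):
    i = rng.integers(n)
    _, g = ifo(x, i)
    x -= 0.05 * g
```

All of the methods covered by our lower bounds (SAG, SAGA, SVRG, SARAH, Katyusha, KatyushaX) access $F$ only through such queries, differing in how they combine and reuse the returned gradients.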
Algorithms based on variance-reduced stochastic gradients can be accelerated using Nesterov's momentum method \citep{nesterov2013introductory}. When $f_i(x)$ is convex and $L$-smooth, Katyusha, an acceleration of SVRG via a modified Nesterov momentum, converges to an $\epsilon$-suboptimal solution in $O(n\log(1/\epsilon)+\sqrt{nL/\epsilon})$ gradient evaluations. The lower complexity bound provided by \citet{woodworth2016tight} is $\Omega(n+\sqrt{nL/\epsilon})$. When $F(x)$ is convex and $\{f_i\}_{i=1}^{n}$ is $L$-average smooth (see Definition \ref{def2}), KatyushaX, another accelerated variant of SVRG, can achieve an $\epsilon$-suboptimal solution in $O(n+n^{3/4}\sqrt{L/\epsilon})$ gradient computations. A lower bound is then established by \citet{zhou2019lower} and matches the upper complexity of KatyushaX under this condition. In this work, we focus on two important cases, as described in the following. \subsection{Two Cases to Study} Two cases of finite-sum optimization are investigated: (1) $f_i(x)$ is $\mu$-strongly convex and $L$-smooth, i.e., $f_i(x)$ is $(\mu,L)$-smooth (see Definition \ref{def1}); (2) $F(x)$ is $\mu$-strongly convex and $\{f_i\}_{i=1}^n$ is $L$-average smooth. \textbf{Case (1):} When $f_i(x)$ is $(\mu,L)$-smooth, SVRG \citep{johnson2013accelerating}, SAGA \citep{defazio2014saga}, SAG \citep{schmidt2017minimizing}, SDCA without duality \citep{shalev2016sdca} and SARAH \citep{nguyen2017sarah} can find an $\epsilon$-suboptimal solution in $O((n+L/\mu)\log(1/\epsilon))$ IFO queries, while Gradient Descent (GD) and AGD \citep{nesterov2013introductory} require $O(nL/\mu\log(1/\epsilon))$ and $O(n\sqrt{L/\mu}\log(1/\epsilon))$ IFO queries respectively. APPA \citep{frostig2015regularizing} and Catalyst \citep{lin2015universal} further reduce the IFO calls to $O((n+\sqrt{nL/\mu})\log(L/\mu)\log(1/\epsilon))$ but involve a factor $\log(L/\mu)$. 
By eliminating this log factor, Katyusha enjoys an upper complexity bound of $O((n+\sqrt{nL/\mu})\log(1/\epsilon))$. Regarding the lower complexity bound, \citet{agarwal2015lower} prove that achieving an $\epsilon$-suboptimal solution requires at least $\Omega(n+\sqrt{nL/\mu}\log(1/\epsilon))$ IFO calls. However, this lower bound relies on deterministic optimization algorithms and cannot be directly applied to randomized incremental gradient methods. Recently, \citet{lan2018optimal} derive a similar lower bound $\Omega(n+\sqrt{nL/\mu}\log(1/\epsilon))$ for a class of randomized algorithms where component functions are selected from a predefined probability distribution. A lower bound $\Omega(n+\sqrt{nL/\mu}\log(1/\epsilon))$ is then proposed by \citet{woodworth2016tight} and \citet{arjevani2016dimension} in a more general setting. \citet{hannah2018breaking} further improve upon the bounds in \citet{arjevani2016dimension} by deriving a lower bound $\Omega(n+\frac{n}{1+(\log(n/\kappa))_+}\log(1/\epsilon))$ for $\kappa=O(n)$ where $\kappa=L/\mu$, which matches the upper bound of VRADA \citep{SongJM20}. \textbf{Case (2):} When $F(x)$ is $\mu$-strongly convex, most of the existing works, including the algorithms mentioned below, assume that $f_i(x)$ is $L$-smooth. This assumption is stronger than the average smoothness assumption (see Section \ref{Pre}) and consequently can be replaced with the latter one while maintaining the upper complexity bounds, as shown by \citet{zhou2019lower}. The classical SVRG method converges to an $\epsilon$-suboptimal solution with $O((n+\sqrt{n}L/\mu)\log(1/\epsilon))$ IFO calls \citep{allen2018katyusha}, while Catalyst on SVRG requires $O((n+n^{3/4}\sqrt{L/\mu})\log^2(1/\epsilon))$ IFO calls. KatyushaX \citep{allen2018katyusha} and SDCA without duality attain an upper bound of $O((n+n^{3/4}\sqrt{L/\mu})\log(1/\epsilon))$. 
In terms of the lower complexity bound, \citet{zhou2019lower} reveal that any linear-span first-order randomized algorithm needs at least $\Omega(n+n^{3/4}\sqrt{L/\mu}\log(1/\epsilon))$ IFO calls. To bridge the gap between the upper complexity bound of existing algorithms and the aforementioned lower bounds, we construct adversarial functions and provide tight lower bounds for general randomized incremental gradient methods. Therefore, our tight lower bounds apply to a substantial number of optimization algorithms, e.g., SVRG, SAGA, SAG, SDCA without duality, Katyusha and KatyushaX. The upper and our improved lower complexity bounds are provided in Table \ref{bounds}. Notably, when $n=O(L/\mu)$ in \textbf{Case (1)}, the second term $\sqrt{nL/\mu}\log(1/\epsilon)$ dominates the complexity not only in our results, but also in the lower bound derived by \citet{woodworth2016tight,arjevani2016dimension} and the upper bound by \citet{katyusha}. Nonetheless, randomized incremental gradient algorithms, e.g., SVRG, SARAH and Katyusha, enjoy a superior complexity when $n=O(L/\mu)$ and do not ignore the term $n\log(1/\epsilon)$, and our proof is more straightforward compared with previous works \citep{woodworth2016tight,zhou2019lower}. For $n \gg L/\mu$, \citet{hannah2018breaking} have proved a lower complexity bound, which matches the upper bound of their modified SVRG. We obtain this lower bound via a different approach. Moreover, both of our analyses are carried out in a unified framework. Similar statements also hold for \textbf{Case (2)}. 
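The regime distinction above can be illustrated numerically (a sketch with arbitrary sample values of $n$, $\kappa = L/\mu$, and $\epsilon$; the constants hidden by $\Omega(\cdot)$ are ignored):

```python
import math

eps = 1e-12  # arbitrary target accuracy

# Regime n = O(kappa): sqrt(n kappa) log(1/eps) dominates n log(1/eps)
n, kappa = 10**3, 10**5
t_n = n * math.log(1 / eps)
t_acc = math.sqrt(n * kappa) * math.log(1 / eps)
assert t_acc > t_n

# Regime n >> kappa: n + (n / log(n/kappa)) log(1/(n eps)) is nearly
# linear in n, well below the (n + sqrt(n kappa)) log(1/eps) scaling
n, kappa = 10**7, 10
t_lin = n + n / math.log(n / kappa) * math.log(1 / (n * eps))
assert t_lin < (n + math.sqrt(n * kappa)) * math.log(1 / eps)
```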
\begin{table*}[t] \caption{Comparison between Lower Bounds and Upper Bounds on the Number of IFO Queries.} \label{bounds} \vspace{0.2em} \begin{center} \renewcommand\arraystretch{1.5} \begin{tabular}{|c|c|c|} \hline & $f_i$ is $(\mu,L)$-smooth & \begin{tabular}{c}$F(x)$ is $\mu$-strongly convex\\ and $\{f_i\}_{i=1}^n$ is $L$-average smooth\end{tabular} \\ \hline Lower bounds & \begin{tabular}{c} $n = O(\frac{L}{\mu}): \Omega((n+\sqrt{\frac{nL}{\mu}})\log\frac{1}{\epsilon})$ \\ $n \gg \frac{L}{\mu}: \Omega(n+\frac{n}{\log(n\mu/L)}\log\frac{1}{n\epsilon})$ \\ Theorem \ref{res1}\end{tabular} & \begin{tabular}{c}$\Omega((n+n^{3/4}\sqrt{\frac{L}{\mu}})\log\frac{1}{\epsilon})$\\ Theorem \ref{res2} \end{tabular} \\ \hline Upper bounds & \begin{tabular}{c} $n = O(\frac{L}{\mu}): O((n+\sqrt{\frac{nL}{\mu}})\log\frac{1}{\epsilon})$ \\ \citet{katyusha} \\ $n \gg \frac{L}{\mu}: O(n+\frac{n}{\log(n\mu/L)}\log\frac{1}{n\epsilon})$ \\ \citet{SongJM20,hannah2018breaking} \end{tabular} & \begin{tabular}{c} $O((n+n^{3/4}\sqrt{\frac{L}{\mu}})\log\frac{1}{\epsilon})$ \\ \citet{shalev2016sdca}\\\citet{allen2018katyusha}\end{tabular} \\ \hline \end{tabular} \end{center} \end{table*} \subsection{Our Contributions} We focus on the above two cases and our contributions are summarized as follows: \begin{itemize} \item When $f_i(x)$ is $(\mu,L)$-smooth and $n = O(L/\mu)$, a tight lower bound $\Omega((n+\sqrt{nL/\mu})\log(1/\epsilon))$ is derived, closely matching the upper complexity of Katyusha. Compared with the lower bound $\Omega(n+\sqrt{nL/\mu}\log(1/\epsilon))$ provided by \citet{arjevani2016dimension} and \citet{woodworth2016tight}, our lower bound reveals an optimal dependency on $n$, $L/\mu$ and $\epsilon$. In the case $n \gg L/\mu$, we obtain a lower bound $\Omega(n+\frac{n}{\log(n\mu/L)}\log(1/n\epsilon))$, which matches the upper bound of \citet{SongJM20}. 
\item When $F(x)$ is $\mu$-strongly convex and $\{f_i\}_{i=1}^n$ is $L$-average smooth, a tight lower bound $\Omega((n+n^{3/4}\sqrt{L/\mu})\log(1/\epsilon))$ is obtained for any randomized incremental gradient method, tightly matching the upper bound of SDCA without duality and of KatyushaX. Compared with the lower bound $\Omega(n+n^{3/4}\sqrt{L/\mu}\log(1/\epsilon))$ derived by \citet{zhou2019lower}, our results also establish the optimal dependency on $n$, $L/\mu$ and $\epsilon$. \item We show that the accuracy of the solution $\epsilon$ is independent of $n$ and $L/\mu$, while \citet{woodworth2016tight} require $\epsilon=O(\sqrt{nL/\mu})$ for Case (1) and \citet{zhou2019lower} demand $\epsilon=O(n^{7/4}(L/\mu)^{-3/2})$ for Case (2). \item Concerning the analytical techniques, the adversarial functions we construct are straightforward and our analysis is built on the distance between $\hat{x}$ and the optimum $x^*$, i.e.\ $\|\hat{x}-x^*\|$. We are therefore also able to obtain a lower complexity bound for finding $\hat{x}$ such that $\|\hat{x}-x^*\|\leq\epsilon$. Furthermore, since we neither assume a predefined distribution for the selection of component functions nor limit the number of calls to each component function, our lower bounds hold in a more general setting, as revealed in the proof. \end{itemize} \section{PRELIMINARIES} \label{Pre} We begin our analysis with the definitions of convexity and smoothness of functions, and with Nesterov's chain-like quadratic function, which is the foundation of our adversarial functions. We first recall the definitions of convexity, strong convexity and smoothness. \begin{definition} \label{def1} For a differentiable function $f$: $\mathbb{R}^d \rightarrow \mathbb{R}$, \begin{itemize} \item $f$ is convex if $\forall x, y \in \mathbb{R}^d$, it satisfies \[ f(y) \geq f(x) +\langle \nabla f(x), y-x \rangle.
\] \item $f$ is $\mu$-strongly convex if for some $\mu>0$, $\forall x, y \in \mathbb{R}^d$, it satisfies \[ f(y) \geq f(x) +\langle \nabla f(x), y-x \rangle + \frac{\mu}{2}\|y-x\|_2^2.\] \item $f$ is $L$-smooth if for some $L>0$, $\forall x, y \in \mathbb{R}^d$, it satisfies \[\|\nabla f(x)-\nabla f(y)\|_2 \leq L\|x-y\|_2.\] \item $f$ is $(\mu, L)$-smooth if for some $\mu, L>0$, $\forall x, y \in \mathbb{R}^d$, it satisfies \[ \begin{aligned} \frac{\mu}{2}\|y-x\|_2^2 &\leq f(y)-f(x)-\langle \nabla f(x), y-x \rangle \leq \frac{L}{2}\|y-x\|_2^2. \end{aligned} \] \end{itemize} \end{definition} Note that for a twice differentiable function $f$, it is convex if and only if all eigenvalues of $\nabla^2f(x)$ are non-negative; $f$ is $\mu$-strongly convex if and only if all eigenvalues are at least $\mu$; $f$ is $L$-smooth if and only if all eigenvalues are no more than $L$; $f$ is $(\mu, L)$-smooth if and only if all eigenvalues are no less than $\mu$ and no more than $L$. \begin{definition} \label{def2} For any sequence of differentiable functions $\{f_i\}_{i=1}^n$:$f_i: \mathbb{R}^d\rightarrow\mathbb{R} $, we say $\{f_i\}_{i=1}^n$ is $L$-average smooth if $\forall x, y \in \mathbb{R}^d$, it satisfies \begin{equation} \frac{1}{n}\sum_{i=1}^{n}\|\nabla f_i(x)-\nabla f_i(y)\|_2^2 \leq L^2\|x-y\|_2^2. \end{equation} \end{definition} The average smoothness assumption appears in previous works \citep{zhou2018stochastic,fang2018spider,zhou2019lower,xie2019general}, which is adopted by \citet{zhou2019lower} and \cite{xie2019general} to prove lower bounds for finite-sum optimization. It can be easily verified that if each $f_i$ is $L$-smooth, then $\{f_i\}_{i=1}^n$ is $L$-average smooth. 
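To make the relation between the two smoothness notions concrete, the following numerical sketch (Python, with illustrative values of $n$, $d$ and $L$; all names are ours) checks that a family of individually $L$-smooth quadratics is $L$-average smooth, and that a family can be $L$-average smooth even though one of its components is only $\sqrt{n}L$-smooth.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, L = 4, 6, 1.0

# Family 1: f_i(x) = 0.5 * x^T H_i x with spectral norm ||H_i|| <= L,
# so each f_i is L-smooth and hence the family is L-average smooth.
Hs = []
for _ in range(n):
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    Hs.append(Q @ np.diag(rng.uniform(0, L, d)) @ Q.T)

def avg_smooth_ratio(mats, trials=100):
    # sampled sup over z = x - y of  (1/n) sum_i ||H_i z||^2 / ||z||^2
    worst = 0.0
    for _ in range(trials):
        z = rng.standard_normal(d)
        worst = max(worst, np.mean([np.linalg.norm(H @ z) ** 2 for H in mats]) / (z @ z))
    return worst

assert avg_smooth_ratio(Hs) <= L**2 + 1e-12

# Family 2: one steep component, the rest flat.  f_1 is only sqrt(n)*L-smooth,
# yet the family is still L-average smooth: (1/n) * (sqrt(n) L)^2 = L^2.
Hs2 = [np.sqrt(n) * L * np.eye(d)] + [np.zeros((d, d))] * (n - 1)
assert avg_smooth_ratio(Hs2) <= L**2 + 1e-12
```

The second family illustrates why average smoothness is the weaker assumption: the steep component is averaged against the flat ones.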
If $\{f_i\}_{i=1}^n$ is $L$-average smooth, with the convexity of $\|\bullet\|_2^2$ we can derive that $F(x)$ defined in \autoref{obj} is $L$-smooth \footnote{The convexity of $\|\bullet\|_2^2$ implies that $\|\nabla F(x)-\nabla F(y)\|_2^2=\|\frac{1}{n}\sum_{i=1}^{n}\nabla f_i(x)-\frac{1}{n}\sum_{i=1}^{n}\nabla f_i(y)\|_2^2=\|\frac{1}{n}\sum_{i=1}^{n} \left(\nabla f_i(x)-\nabla f_i(y) \right)\|_2^2 \leq \frac{1}{n}\sum_{i=1}^{n}\|\nabla f_i(x)-\nabla f_i(y)\|_2^2 \leq L^2\|x-y\|_2^2$, and so we have $\|\nabla F(x)-\nabla F(y)\|_2 \leq L\|x-y\|_2$.}. In order to construct adversarial functions to help introduce our lower bounds, we first introduce the following classes of quadratic functions \citep{nesterov2013introductory}, which were used to prove lower bounds for smooth strongly convex optimization problems. Let us choose $\mu>0$ and $L\geq\mu$. The function defined in the infinite-dimensional space $\mathbb{R}^\infty$ is given by \begin{equation} f(x)=\frac{L-\mu}{4}\left(\frac{1}{2}\langle x,Ax\rangle-\langle e_1,x\rangle \right)+\frac{1}{2}\mu\|x\|_2^2, \end{equation} where \begin{equation} A:= \begin{pmatrix} 2 & -1 & 0 & \\ -1 & 2 & -1 & 0 \\ 0 & -1 & 2 & \\ & 0 & &\cdots \end{pmatrix} \end{equation} and $e_1$ is the unit vector in which the first entry is $1$. Note that for any $s=(s_1,s_2, \ldots)^T\in \mathbb{R}^\infty$, we have \[ \langle s,As\rangle = s_1^2+\sum_{i=1}^{\infty}(s_i-s_{i+1})^2 \geq 0, \] and \[ \begin{aligned} \langle s,As\rangle &\leq s_1^2+\sum_{i=1}^{\infty}2(s_i^2+s_{i+1}^2)\\ &= 3s_1^2+\sum_{i=2}^{\infty}4s_i^2 \leq 4\sum_{i=1}^{\infty}s_i^2. \end{aligned} \] Thus, $0 \preceq A \preceq 4I_\infty$, where $I_\infty$ is the unit matrix in $\mathbb{R}^\infty$. We can see that $\nabla^2f(x)=\frac{L-\mu}{4}A+\mu I_\infty$. Therefore, $f(x)$ is $L$-smooth and $\mu$-strongly convex.
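Both properties of this chain function can be checked on a finite truncation (a Python sketch with illustrative dimensions; a finite $d\times d$ section of $A$ stands in for the infinite matrix). The sketch also verifies the chain structure: the gradient of $f$ at a vector supported on the first $k-1$ coordinates is supported on the first $k$ coordinates.

```python
import numpy as np

d, L, mu = 50, 10.0, 1.0

# finite section of the tridiagonal chain matrix A
A = 2 * np.eye(d) - np.eye(d, k=1) - np.eye(d, k=-1)
e1 = np.zeros(d); e1[0] = 1.0

# the spectrum of A lies in [0, 4], so mu I <= Hessian <= L I
eig = np.linalg.eigvalsh(A)
assert eig.min() > -1e-10 and eig.max() < 4 + 1e-10

def grad_f(x):
    # gradient of f(x) = (L-mu)/4 * (0.5 <x, A x> - <e1, x>) + mu/2 ||x||^2
    return (L - mu) / 4 * (A @ x - e1) + mu * x

# if x_k = x_{k+1} = ... = 0, then the gradient vanishes beyond coordinate k
k = 7
x = np.zeros(d)
x[:k - 1] = np.random.default_rng(1).standard_normal(k - 1)
assert np.allclose(grad_f(x)[k:], 0.0)
```

In other words, one gradient query can reveal at most one new coordinate, which is the mechanism the lower-bound construction exploits.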
Applying the first-order optimality condition $\nabla f(x)=0$, we get $x^*=(q,q^2,q^3,\ldots)$, where $q=\frac{\sqrt L-\sqrt\mu}{\sqrt L+\sqrt\mu}$ \citep{nesterov2013introductory}. In fact, the function $f(x)$ is a first-order zero-chain \citep{carmon2019lower}, which implies that for any vector $x=(x_1,x_2,\ldots)\in\mathbb{R}^\infty$ in which only the first $k-1$ entries are nonzero, i.e., $x_{k}=x_{k+1}=\ldots=0$, we have $\nabla_i f(x)=0$ for any $i\geq k+1$. In other words, the information brought by one query to the IFO can increase the number of nonzero elements by at most one. We leverage this chain structure to prove lower bounds in our framework. Meanwhile, note that this function can also be defined in a finite-dimensional space with some modifications, which is reflected in the construction of the adversarial functions in Section \ref{sec:Proof}. \section{MAIN RESULTS} \label{sec_results} Here we present our lower bounds for randomized incremental gradient algorithms. We first establish a lower bound for the sum of $(\mu,L)$-smooth functions, then turn to the case where $\{f_i\}_{i=1}^n$ is $L$-average smooth and $F(x)$ is $\mu$-strongly convex. \begin{theorem} \label{res1} For any $L, \mu >0$, $n\geq2$, $0<\epsilon<\frac{B^2\mu}{4}$, and any randomized incremental gradient method $\mathcal{M}$, there exist a dimension $d=O((n+\sqrt{n\frac{L}{\mu}}) \log \frac{1}{\epsilon})$ in the case with $\frac{L}{\mu} > \frac{7}{2}n$ or $d=O(n+\frac{n}{\log\frac{n\mu}{L}} \log \frac{1}{n\epsilon})$ in the case with $n \gg L/\mu$, a point $x^0\in \mathbb{R}^d$ and $n$ $(\mu,L)$-smooth functions $\{f_i\}_{i=1}^n$: $\mathbb{R}^d\rightarrow \mathbb{R}$, such that $\|x^0-x^*\|=B$ where $x^*=\mathop{\arg\min}_{x \in\mathbb{R}^d}F(x)$.
In order to find $F(\hat{x})-F(x^*)\leq\epsilon$, $\mathcal{M}$ needs at least \begin{equation} \Omega((n+\sqrt{\frac{nL}{\mu}}) \log \frac{B^2\mu}{4\epsilon}) \end{equation} IFO queries when $\frac{L}{\mu} > \frac{7}{2}n$, and needs at least \begin{equation} \Omega(n+\frac{n}{\log\frac{n\mu}{L}} \log \frac{1}{n\epsilon}) \end{equation} IFO queries when $n \gg \frac{L}{\mu}$. \end{theorem} The lower bound in Theorem \ref{res1} tightly matches the upper bound of Katyusha \citep{katyusha} when each component function is $(\mu,L)$-smooth. Compared with the complexity bound for strongly convex and smooth finite-sum optimization derived by \citet{woodworth2016tight}, who prove an $\Omega(n+\sqrt{n\kappa}\log\frac{1}{\epsilon})$ lower bound, our result provides the optimal dependency on $n$, $\kappa$ and $\epsilon$. As a side result, the accuracy of the solution $\epsilon$ is independent of $n$, in contrast to their work. Under the second condition, our lower bound also matches the upper bounds of VRADA \citep{SongJM20} and of the modified SVRG \citep{hannah2018breaking}. Then we give the lower bound when $F(x)$ is strongly convex and $\{f_i\}_{i=1}^n$ is average smooth. \begin{theorem} \label{res2} For any $L, \mu >0$, $n\geq2$ such that $\frac{L}{\mu}\geq \frac{9}{4}\sqrt{n}$, any $0<\epsilon<\frac{B^2\mu}{4}$, and any randomized incremental gradient method $\mathcal{M}$, there exist a dimension $d=O((n+n^{\frac{3}{4}}\sqrt{\frac{L}{\mu}}) \log \frac{1}{\epsilon})$, a point $x^0\in\mathbb{R}^d$ and functions $\{f_i\}_{i=1}^n$: $\mathbb{R}^d\rightarrow \mathbb{R}$ where $\{f_i\}_{i=1}^n$ is $L$-average smooth and $F(x)$ is $\mu$-strongly convex, such that $\|x^0-x^*\|=B$ where $x^*=\mathop{\arg\min}_{x \in\mathbb{R}^d}F(x)$. In order to find $F(\hat{x})-F(x^*)\leq\epsilon$, $\mathcal{M}$ needs at least \begin{equation} \Omega ((n+n^\frac{3}{4}\sqrt{\frac{L}{\mu}}) \log \frac{B^2\mu}{4\epsilon}) \end{equation} IFO queries.
\end{theorem} The lower bound in Theorem \ref{res2} under the first condition also tightly matches the upper bound of SDCA without duality \citep{shalev2016sdca} and of KatyushaX \citep{allen2018katyusha} when $F(x)$ is $\mu$-strongly convex. Compared with the $\Omega(n+n^\frac{3}{4}\sqrt{\kappa}\log\frac{1}{\epsilon})$ lower bound provided by \citet{zhou2019lower}, our result provides the optimal dependency on $n$, $\kappa$ and $\epsilon$. Furthermore, in our result, the accuracy of the solution $\epsilon$ does not depend on $n$. \textbf{Remark} The lower complexity bound in Theorem \ref{res2} relies on the average smoothness assumption. In fact, many existing finite-sum optimization algorithms such as SDCA without duality \citep{shalev2016sdca}, Natasha \citep{allen2017natasha}, KatyushaX \citep{allen2018katyusha}, RapGrad \citep{lan2019accelerated} and StagewiseKatyusha \citep{chen2018variance} assume that $f_i$ is $L$-smooth, and we have claimed that this assumption is stronger than average smoothness. Previous work \citep{zhou2019lower} indicates that we can replace the smoothness assumption of these algorithms with average smoothness without affecting their upper complexity bounds; therefore, our average smoothness assumption is reasonable. \section{PROOFS OF MAIN RESULTS} \label{sec:Proof} In this section, the proofs of our main results are provided; the proofs of the lemmas are given in the Appendix. \subsection{Proof of Theorem \ref{res1}} \label{subsec:proof_thm1} We construct a special class of finite-sum optimization problems of the form \begin{align} \label{obj1} \min_{x \in \mathbb{R}^d} F(x) =&\frac{1}{n}\sum_{i=1}^{n}f_i(x) \nonumber \\ =&\frac{1}{n}\sum_{i=1}^{n}\left[\frac{L-\mu}{4}\left(\frac{1}{2}\langle x,A_i x\rangle- \langle e_{(i-1)p+1},x\rangle\right)\right. +\left.\frac{1}{2}\mu\|x\|_2^2\right], \end{align} where $p=\frac{d}{n}\in\{1,2,3\ldots\}$ and $e_{k}$ is a unit vector in which the $k$-th element is 1.
Let $A_i$ be a symmetric matrix in $\mathbb{R}^{d \times d}$ defined as \begin{equation*} \label{matrix} \begin{pmatrix} \begin{smallmatrix} 0_{p(i-1),p(i-1)} & 0_{p(i-1),p} & 0_{p(i-1),d-pi} \\[1.5em] & \left.\begin{matrix} \begin{smallmatrix} \begin{smallmatrix}2 & -1 & 0 &\\ -1 & 2 & -1 \\0 &-1& 2 \end{smallmatrix} & \\ \cdots & \cdots \\ & \begin{smallmatrix} -1 & 2 & -1\\ 0 & -1 & \xi \end{smallmatrix} \end{smallmatrix} \end{matrix}\right\}p \times p & \\[1.5em] 0_{d-pi,p(i-1)} & 0_{d-pi,p} & 0_{d-pi,d-pi} \end{smallmatrix} \end{pmatrix} , \end{equation*} where $0_{k,p}$ is a $(k\times p)$ zero matrix and $\xi=\frac{\left(\frac{\kappa-1}{n}+1\right)^{\frac{1}{2}}+3}{\left(\frac{\kappa-1}{n}+1\right)^{\frac{1}{2}}+1}$ in which $\kappa=\frac{L}{\mu}$. It is easy to see that $\nabla^2f_i(x)=\frac{L-\mu}{4}A_i + \mu I$, where $I$ is the unit matrix in $\mathbb{R}^{d \times d}$. Since $0 \preceq A_i \preceq 4I$, we have $\mu I \preceq \nabla^2f_i(x) \preceq LI$. Thus, $f_i(x)$ is $(\mu,L)$-smooth. The following lemma gives an explicit expression for the minimizer in \autoref{obj1}. \begin{lemma} \label{lemma:minimizer_case1} Let $x^*$ be the minimizer of function $F(x)$ in \autoref{obj1}, and \begin{equation} q_1=q_2\ldots=q_n=\frac{\left(\frac{\kappa-1}{n}+1\right)^{\frac{1}{2}}-1}{\left(\frac{\kappa-1}{n}+1\right)^{\frac{1}{2}}+1}, \end{equation} where $\kappa=\frac{L}{\mu}$, then $x^*=(q_1,q_1^2, \ldots,q_1^p,q_2,q_2^2,\ldots,q_2^p, \ldots,q_n,q_n^2,\ldots, q_n^p)^T$. \end{lemma} Without loss of generality, we assume that the initial point is $x^0=0$; otherwise we can take $\hat{F}(x)=F(x-x^0)$. The property of the functions $f_i$ implies that every query to the gradient of a component function $f_i(x)$ can ``discover'' at most one new coordinate.
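Lemma \ref{lemma:minimizer_case1} can be verified numerically for small $n$ and $p$ by solving the stationarity condition $\nabla F(x)=0$ directly (a Python sketch; the parameter values are illustrative, and we use the fact that $\sum_i A_i$ is block diagonal with identical $p\times p$ blocks):

```python
import numpy as np

n, p, L, mu = 3, 5, 50.0, 1.0
d = n * p
kappa = L / mu
kt = (kappa - 1) / n + 1                      # effective per-block condition number
xi = (np.sqrt(kt) + 3) / (np.sqrt(kt) + 1)    # corner entry of each A_i

# one p x p diagonal block of A_i: tridiagonal with last diagonal entry xi
blk = 2 * np.eye(p) - np.eye(p, k=1) - np.eye(p, k=-1)
blk[-1, -1] = xi

# stationarity: ((L-mu)/(4n) sum_i A_i + mu I) x = (L-mu)/(4n) sum_i e_{(i-1)p+1}
H = (L - mu) / (4 * n) * np.kron(np.eye(n), blk) + mu * np.eye(d)
rhs = np.zeros(d); rhs[::p] = (L - mu) / (4 * n)
x_star = np.linalg.solve(H, rhs)

# the lemma predicts the geometric pattern (q, q^2, ..., q^p) in every block
q = (np.sqrt(kt) - 1) / (np.sqrt(kt) + 1)
pred = np.tile(q ** np.arange(1, p + 1), n)
assert np.allclose(x_star, pred)
```

The geometric decay $q_1^j$ inside each block is what forces any method to uncover the solution coordinate by coordinate.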
Let $x^K$ be the point generated after $K$ gradient evaluations (e.g., the point generated by one full gradient iteration of $F(x)$ is denoted as $x^n$), and let $K_i \in\{0,1,\ldots,K\}$ be the number of queries to $\nabla f_i$, with $K=K_1+K_2+\ldots+K_n$. Note that $K_i$ queries to $\nabla f_i$ result in at most $K_i$ nonzero coordinates in the corresponding block of $x^K$. For simplicity, we divide $x^K$ into $n$ parts and let $x^K=(x_{1}^{K},x_{2}^{K},\ldots, x_{n}^{K})$, in which each $x_i^K$ ($i \leq n$) has $p$ elements. Let $S=\{i \in \{1,2,\ldots,n\}|K_i<p\}$ and $S^c=\{i \in \{1,2,\ldots,n\}|K_i\geq p\}$. This means that our proof also covers the case where the algorithm queries some component functions more than $p$ times. Assuming $K \leq \frac{d}{2}=\frac{np}{2}$, we can establish the lower bound on how close $x^K$ is to $x^*$: \begin{align} \label{xkx*} \frac{\|x^K-x^*\|_2^2}{\|x^0-x^*\|_2^2} &=\frac{\sum_{i=1}^{n}\|x_i^K-(q_i,q_i^2,\ldots,q_i^p)\|_2^2}{n \|(q_1,q_1^2,\ldots,q_1^p)\|_2^2} \nonumber \\ &\geq \frac{\sum_{i\in S}\sum_{j=K_i+1}^{p} q_1^{2j}}{\frac{nq_1^2(1-q_1^{2p})}{1-q_1^2}} \nonumber \\ &=\frac{\sum_{i\in S}(q_1^{2K_i}-q_1^{2p})}{n(1-q_1^{2p})} \nonumber \\ &\geq \frac{\sum_{i\in S}(q_1^{2K_i}-q_1^{2p})+\sum_{i\in S^c}(q_1^{2K_i}-q_1^{2p})}{n(1-q_1^{2p})} \nonumber \\ &=\frac{\sum_{i=1}^{n}(q_1^{2K_i}-q_1^{2p})}{n(1-q_1^{2p})}. \end{align} The first inequality in \autoref{xkx*} follows from the property that $x_i^K$ has at most $\min(K_i,p)$ nonzero elements. The second inequality uses the fact that for any $i\in S^c$, $q_1^{2K_i}\leq q_1^{2p}$, so the added terms are non-positive. Since $q_1^x$ is convex with respect to $x$, Jensen's inequality yields \begin{align} \frac{\|x^K-x^*\|_2^2}{\|x^0-x^*\|_2^2} &\geq \frac{\frac{1}{n}\sum_{i=1}^{n}q_1^{2K_i}-q_1^{2p}}{1-q_1^{2p}} \nonumber\\ &\geq \frac{q_1^{\frac{1}{n}\sum_{i=1}^{n}2K_i}-q_1^{2p}}{1-q_1^{2p}} \nonumber\\ &= \frac{q_1^{\frac{2K}{n}}-q_1^{2p}}{1-q_1^{2p}}.
\end{align} Since $\frac{2K}{n}\leq p$, it follows that \begin{align} \frac{\|x^K-x^*\|_2^2}{\|x^0-x^*\|_2^2} &\geq \frac{q_1^{\frac{2K}{n}}-q_1^{2p}}{1-q_1^{2p}} \geq \frac{q_1^{\frac{2K}{n}}-q_1^{p+\frac{2K}{n}}}{1-q_1^{2p}} \nonumber\\ &=\frac{q_1^{\frac{2K}{n}}(1-q_1^p)}{1-q_1^{2p}} \nonumber\\ &=\frac{q_1^{\frac{2K}{n}}}{1+q_1^p}. \end{align} Since $q_1<1$, we have \begin{align} \|x^K-x^*\|_2^2 &\geq\frac{q_1^{\frac{2K}{n}}}{1+q_1^p}\|x^0-x^*\|_2^2 \geq \frac{1}{2}q_1^{\frac{2K}{n}}\|x^0-x^*\|_2^2 = \frac{B^2}{2}q_1^{\frac{2K}{n}}. \end{align} The above inequality implies that, in addition to $F(x^K)-F(x^*)\leq\epsilon$, our proof can also yield a lower bound for reaching $\|x^K-x^*\|\leq\epsilon\|x^0-x^*\|$ \footnote{Letting $\|x^K-x^*\|\leq\epsilon\|x^0-x^*\|$, we can also establish a lower bound with respect to the number of queries $K$, which can be regarded as a side result.}. Noting that $F(x)$ is $\mu$-strongly convex, by strong convexity it is easy to get \begin{align} F(x^K)-F(x^*) &\geq \frac{\mu}{2}\|x^K-x^*\|_2^2 \geq \frac{B^2\mu}{4}q_1^{\frac{2K}{n}}. \end{align} In order to get $F(x^K)-F(x^*) \leq \epsilon$, $K$ must satisfy \begin{equation} K \geq n\frac{\log \frac{B^2\mu}{4\epsilon}}{\log \frac{1}{q_1}}. \end{equation} We now provide an estimate of $\frac{1}{\log\frac{1}{q_1}}$ for $n=O(\kappa)$. \begin{lemma} \label{lemma:log_case1} Given $\kappa>1$, $n\geq2$ and $q_1=\frac{\left(\frac{\kappa-1}{n}+1\right)^{\frac{1}{2}}-1}{\left(\frac{\kappa-1}{n}+1\right)^{\frac{1}{2}}+1}$, if $\kappa \geq \frac{7}{2}n$, we have \begin{equation} \frac{1}{\log\frac{1}{q_1}} \geq \frac{1}{2\sqrt{14}}\sqrt{\frac{\kappa}{n}}+\frac{1}{4}. \end{equation} \end{lemma} In the case with $n \gg \kappa$, we provide the following lemma.
\begin{lemma} \label{lemma:log_case11} Given $B, L,\mu>0, \kappa=L/\mu>1$, $n\geq2$ and $q_1=\frac{\left(\frac{\kappa-1}{n}+1\right)^{\frac{1}{2}}-1}{\left(\frac{\kappa-1}{n}+1\right)^{\frac{1}{2}}+1}$, if $n \gg \kappa$, we have \begin{equation} \frac{\log \frac{B^2\mu}{4\epsilon}}{\log \frac{1}{q_1}} \geq c + \frac{1}{\log\frac{n\mu}{L}}\log\frac{1}{n\epsilon}, \end{equation} where $c$ is some constant. \end{lemma} Thus, it can be derived that for any $\epsilon <\frac{B^2\mu}{4}$ and $\kappa \geq \frac{7}{2}n$, the number of gradient evaluations needed is at least \begin{equation} K = \Omega ((n+\sqrt{n\kappa}) \log \frac{B^2\mu}{4\epsilon}). \end{equation} Noting that $K\leq\frac{d}{2}$, we can see that there must exist a dimension $d=O((n+\sqrt{n\kappa}) \log \frac{1}{\epsilon})$ that satisfies the above statement~\footnote{We hide unimportant factors in the logarithmic term.}. In the case with $n \gg \kappa$, we can obtain \begin{equation} K = \Omega(n+\frac{n}{\log\frac{n\mu}{L}} \log \frac{1}{n\epsilon}), \end{equation} and $d=O(n+\frac{n}{\log\frac{n\mu}{L}} \log \frac{1}{n\epsilon})$. \subsection{Proof of Theorem \ref{res2}} \label{subsec:proof_thm2} We also construct a special class of finite-sum optimization problems of the form \begin{align} \label{obj2} \min_{x \in \mathbb{R}^d} F(x) = &\frac{1}{n}\sum_{i=1}^{n}f_i(x) \nonumber\\ =&\frac{1}{n}\sum_{i=1}^{n}\left[\frac{\sqrt{n}L-n\mu}{4}\right.
\left.\left(\frac{1}{2}\langle x,A_i x\rangle-\langle e_{(i-1)p+1},x\rangle\right) + \frac{n\mu}{2}\langle x,B_ix\rangle\right], \end{align} where $A_i$ has the same form as in Section \ref{subsec:proof_thm1} with $\xi=\frac{\sqrt{\kappa}+3n^{\frac{1}{4}}}{\sqrt{\kappa}+n^{\frac{1}{4}}}$, in which $\kappa=\frac{L}{\mu}>\sqrt{n}$ to guarantee $\sqrt{n}L-n\mu >0 $, and $B_i$ is a symmetric matrix in $\mathbb{R}^{d \times d}$ defined as \begin{equation*} \begin{pmatrix} 0_{p(i-1),p(i-1)} & \\[1em] &I_{p,p} & \\[1em] & & 0_{d-pi,d-pi} \end{pmatrix}, \end{equation*} where $0_{m, n}$ is an $(m\times n)$ zero matrix and $I_{p,p}$ is the unit matrix in $\mathbb{R}^{p\times p}$. Likewise, $p=\frac{d}{n}\in\{1,2,3\ldots\}$ and $e_{k}$ is a unit vector in which the $k$-th element is 1. The lemma below describes the average smoothness of $\{f_i\}_{i=1}^n$. \begin{lemma} \label{lemma:smoothness_case2} For \autoref{obj2}, $\{f_i\}_{i=1}^n$ is $L$-average smooth. \end{lemma} Observing that \begin{equation*} \begin{aligned} \nabla^2 F(x) &=\frac{\sqrt{n}L-n\mu}{4n}\sum_{i=1}^{n}A_i+\mu\sum_{i=1}^{n}B_i \\ &=\frac{\sqrt{n}L-n\mu}{4n}\sum_{i=1}^{n}A_i+\mu I, \end{aligned} \end{equation*} where $I$ is the unit matrix in $\mathbb{R}^{d \times d}$ and each $A_i \succeq 0$, we conclude that $F(x)$ is $\mu$-strongly convex. The following lemma gives the optimal solution in \autoref{obj2}. \begin{lemma} \label{lemma:minimizer_case2} Let $x^*$ be the minimizer of function $F(x)$ in \autoref{obj2}, and \begin{equation} q_1=q_2\ldots=q_n=\frac{\sqrt{\kappa}-n^{\frac{1}{4}}}{\sqrt{\kappa}+n^{\frac{1}{4}}}, \end{equation} where $\kappa=\frac{L}{\mu}$, then $x^*=(q_1, q_1^2, \ldots, q_1^p, q_2, q_2^2, \ldots, q_2^p, \ldots, q_n, q_n^2, \ldots, q_n^p)^T$. \end{lemma} Let $x^K$ be the point generated after $K$ gradient evaluations and $K_i \in\{0,1,\ldots,K\}$ be the number of queries to $\nabla f_i$.
Using the same notations and methods as in Section \ref{subsec:proof_thm1}, and assuming $K \leq \frac{d}{2}$, we establish the lower bound on $\|x^K-x^*\|_2^2$: \begin{equation} \|x^K-x^*\|_2^2 \geq \frac{B^2}{2}q_1^{\frac{2K}{n}}. \end{equation} By the $\mu$-strong convexity of $F(x)$, we have \begin{align} F(x^K)-F(x^*) &\geq \frac{\mu}{2}\|x^K-x^*\|_2^2 \geq \frac{B^2\mu}{4}q_1^{\frac{2K}{n}}. \end{align} In order to get $F(x^K)-F(x^*) \leq \epsilon$, $K$ must satisfy \begin{equation} K \geq n\frac{\log \frac{B^2\mu}{4\epsilon}}{\log \frac{1}{q_1}}. \end{equation} The following lemma gives an estimate of $\frac{1}{\log\frac{1}{q_1}}$. \begin{lemma} \label{lemma:log_case2} Given $q_1=\frac{\sqrt{\kappa}-n^{\frac{1}{4}}}{\sqrt{\kappa}+n^{\frac{1}{4}}}$, if $\kappa \geq \frac{9}{4}\sqrt{n}$, then we have \begin{equation} \frac{1}{\log\frac{1}{q_1}} \geq \frac{1}{12}n^{-\frac{1}{4}}\sqrt{\kappa}+\frac{1}{8}. \end{equation} \end{lemma} As a result, for any $\epsilon <\frac{B^2\mu}{4}$ and $\kappa \geq \frac{9}{4}\sqrt{n}$, the number of gradient evaluations needed is at least \begin{equation} K = \Omega ((n+n^\frac{3}{4}\sqrt{\kappa}) \log \frac{B^2\mu}{4\epsilon}). \end{equation} Meanwhile, we can also derive that there must exist a function $F(x)$ with dimension $d= O((n+n^\frac{3}{4}\sqrt{\kappa}) \log \frac{1}{\epsilon})$. \section{CONCLUSION} We have established tight lower bounds on the Incremental First-order Oracle complexity of randomized incremental gradient methods for solving two important cases of finite-sum optimization problems. To some extent, our results are general for first-order primal methods. When each component function is strongly convex and smooth, a tight lower bound is obtained. For the more general setting in which the finite-sum function is strongly convex and the component functions are average smooth, our lower bound also tightly matches the upper complexity of existing algorithms.
We should point out that in the latter case, the condition $L/\mu= \Omega(\sqrt{n})$ must be satisfied in our analytical framework. Thus, how to obtain the lower bound when $L/\mu = O(\sqrt{n})$ is left for future work.
\section{Introduction} The adiabatic theorem of quantum mechanics implies that the final state of a particle that moves slowly along a closed path is identical to the initial eigenstate --- up to a phase factor. The Berry phase is a time-independent contribution to this phase, depending only on the geometry of the path.\cite{berry} A simple example is a spin-$1/2$ in a rotating magnetic field $\bf B$, where the Berry phase equals half the solid angle swept by $\bf B$. It was proposed by Stern \cite{stern} to measure the Berry phase in the conductance $G$ of a mesoscopic ring in a spatially rotating magnetic field. Oscillations of $G$ as a function of the swept solid angle were predicted, similar to the Aharonov-Bohm oscillations as a function of the enclosed flux.\cite{ab} An important practical difference between the two effects is that the Aharonov-Bohm oscillations exist at arbitrarily small magnetic fields, whereas for the oscillations due to the Berry phase the magnetic field should be sufficiently strong to allow the spin to adiabatically follow the changing direction. Generally speaking, adiabaticity requires that the precession frequency $\omega_{\rm B}$ is large compared to the reciprocal of the characteristic timescale $t_{\rm c}$ on which $\bf B$ changes direction. We know that $\omega_{\rm B}=g\mu_{\rm B}B/2\hbar$, with $g$ the Land{\'e}-factor and $\mu_{\rm B}$ the Bohr magneton. The question is, what is $t_{\rm c}$? In a ballistic ring there is only one candidate, the circumference $L$ of the ring divided by the Fermi velocity $v$. (For simplicity we assume that $L$ is also the scale on which the field direction changes.) In a diffusive ring there are two candidates: the elastic scattering time $\tau$ and the diffusion time $\tau_{\rm d}$ around the ring. They differ by a factor $\tau_{\rm d}/\tau \simeq (L/\ell)^2$, where $\ell=v\tau$ is the mean free path. Since, by definition, $L\gg\ell$ in a diffusive system, the two time scales are far apart. 
Which of the two time scales is the relevant one is still under debate.\cite{sternrev} Stern's original proposal \cite{stern} was that \begin{equation} \label{critstern} \omega_{\rm B}\gg \frac{1}{\tau} \end{equation} is necessary to observe the Berry-phase oscillations. For realistic values of $g$ this requires magnetic fields in the quantum Hall regime, outside the range of validity of the semiclassical theory. We call Eq.\ (\ref{critstern}) the ``pessimistic criterion''. In a later work, \cite{lsg} Loss, Schoeller, and Goldbart (LSG) concluded that adiabaticity is reached already at much weaker magnetic fields, when \begin{equation} \label{critloss} \omega_{\rm B}\gg\frac{1}{\tau_{\rm d}}\simeq\frac{1}{\tau}\left(\frac{\ell}{L}\right)^2. \end{equation} This ``optimistic criterion'' has motivated experimentalists to search for the Berry-phase oscillations in disordered conductors, \cite{experiments} and was invoked in a recent study of the conductivity of mesoscopic ferromagnets.\cite{lyanda} In this paper, we re-examine the semiclassical theory of LSG to resolve the controversy. The Berry-phase oscillations in the conductance result from a periodic modulation of the weak-localization correction, and require the solution of a diffusion equation for the Cooperon propagator. To solve this problem we need to consider the coupled dynamics of four spin-degrees of freedom. (The Cooperon has four spin indices.) To gain insight we first examine in Sec.\ \ref{transmission} the simpler problem of the dynamics of a single spin variable, by studying the randomization of a spin-polarized electron gas by a non-uniform magnetic field. We start at the level of the Boltzmann equation and then make the diffusion approximation. We show how the diffusion equation can be solved exactly for the first two moments of the polarization. The same procedure is used in Sec.\ \ref{localization} to arrive at a diffusion equation for the Cooperon. 
This equation coincides with the equation derived by LSG in the weak-field regime $\omega_{\rm B}\tau\ll 1$, but is different in the strong-field regime $\omega_{\rm B}\tau\gtrsim 1$. We present an exact solution for the weak-localization correction and compare with the findings of LSG. Our conclusion both for the polarization and for the weak-localization correction is that adiabaticity requires $\omega_{\rm B}\tau\gg 1$. Regrettably, the pessimistic criterion (\ref{critstern}) is correct, in agreement with Stern's original conclusion. The optimistic criterion (\ref{critloss}) advocated by LSG turns out to be the criterion for maximal randomization of the spin by the magnetic field, and not the criterion for adiabaticity. \section{Spin-resolved transmission} \label{transmission} \subsection{Formulation of the problem} Consider a conductor in a magnetic field $\bf B$, containing a disordered segment (length $L$, mean free path $\ell$ at Fermi velocity $v$) in which the magnetic field changes its direction. An electron at the Fermi level with spin up (relative to the local magnetic field) is injected at one end and reaches the other end. What is the probability that its spin is up? \begin{figure}[ht] \epsfxsize=0.7\hsize \hspace*{\fill} \vspace*{-0ex}\epsffile{fig1.eps}\vspace*{0ex} \hspace*{\fill} \medskip \caption[]{Schematic drawing of a two-dimensional electron gas in the spatially rotating magnetic field of Eq.\ (\ref{field}), with $f=1$.} \label{fig1} \end{figure} For simplicity we take for the conductor a two-dimensional electron gas (in the $x$-$y$ plane, with the disordered region between $x=0$ and $x=L$), and we ignore the curvature of the electron trajectories by the Lorentz force. The problem becomes effectively one-dimensional by assuming that $\bf B$ depends on $x$ only. 
We choose a rotation of $\bf B$ in the $x$-$y$-plane, according to \begin{equation} \label{field} {\bf B}(x,y,z=0)= \left(B\sin\eta\cos\case{2\pi fx}{L},B\sin\eta\sin\case{2\pi fx}{L},B\cos\eta\right), \end{equation} with $\eta$ and $f$ arbitrary parameters. The geometry is sketched in Fig.\ \ref{fig1}. We treat the orbital motion semiclassically, within the framework of the Boltzmann equation. (This is justified if the Fermi wavelength is much smaller than $\ell$.) The spin dynamics requires a fully quantum mechanical treatment. We assume that the Zeeman energy $g\mu_{\rm B}B$ is much smaller than the Fermi energy $\frac{1}{2}mv^2$, so that the orbital motion is independent of the spin. We introduce the probability density $P(x,\phi,\xi,t)$ for the electron to be at time $t$ at position $x$ with velocity ${\bf v}=(v\cos\phi,v\sin\phi,0)$, in the spin state with spinor $\xi =(\xi_1,\xi_2)$. The dynamics of $\xi$ depends on the local magnetic field according to \begin{equation} \label{schroedinger} \frac{d\xi}{dt}=\frac{ig\mu_{\rm B}}{2\hbar}{\bf B}\cdot\mbox{\boldmath$\sigma$}\,\xi, \end{equation} where $\mbox{\boldmath$\sigma$}=(\sigma_x,\sigma_y,\sigma_z)$ is the vector of Pauli matrices. It is convenient to decompose $\xi=\chi_1\xi_\uparrow +\chi_2\xi_\downarrow$ into the local eigenstates $\xi_\uparrow,\xi_\downarrow$ of ${\bf B}\cdot\mbox{\boldmath$\sigma$}$, \begin{mathletters} \label{localbasis} \begin{equation} \xi_\uparrow = \pmatrix{\cos\frac{\eta}{2}\,{\rm e}^{-i\pi fx/L} \cr \sin\frac{\eta}{2}\,{\rm e}^{i\pi fx/L}},~~~~ \xi_\downarrow = \pmatrix{-\sin\frac{\eta}{2}\,{\rm e}^{-i\pi fx/L} \cr \cos\frac{\eta}{2}\,{\rm e}^{i\pi fx/L}}, \end{equation} \begin{equation} {\bf B}\cdot\mbox{\boldmath$\sigma$}\,\xi_\uparrow = B\xi_\uparrow,~~~~ {\bf B}\cdot\mbox{\boldmath$\sigma$}\,\xi_\downarrow = -B\xi_\downarrow, \end{equation} \end{mathletters} and use the real and imaginary parts of the coefficients $\chi_1,\chi_2$ as variables in the Boltzmann equation. 
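As a quick consistency check of Eq.\ (\ref{localbasis}), the following sketch (Python, with illustrative parameter values; all names are ours) confirms that $\xi_\uparrow$ and $\xi_\downarrow$ are eigenspinors of ${\bf B}\cdot\mbox{\boldmath$\sigma$}$ with eigenvalues $\pm B$ at an arbitrary position $x$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def B_dot_sigma(x, B, eta, f, Lx):
    # B(x) . sigma for the rotating field of Eq. (4)
    phi = 2 * np.pi * f * x / Lx
    Bv = B * np.array([np.sin(eta) * np.cos(phi),
                       np.sin(eta) * np.sin(phi),
                       np.cos(eta)])
    return Bv[0] * sx + Bv[1] * sy + Bv[2] * sz

x, B, eta, f, Lx = 0.3, 1.0, 0.6, 1.0, 1.0
up = np.array([np.cos(eta / 2) * np.exp(-1j * np.pi * f * x / Lx),
               np.sin(eta / 2) * np.exp(+1j * np.pi * f * x / Lx)])
dn = np.array([-np.sin(eta / 2) * np.exp(-1j * np.pi * f * x / Lx),
                np.cos(eta / 2) * np.exp(+1j * np.pi * f * x / Lx)])
H = B_dot_sigma(x, B, eta, f, Lx)
assert np.allclose(H @ up, +B * up)   # B.sigma xi_up  = +B xi_up
assert np.allclose(H @ dn, -B * dn)   # B.sigma xi_dn  = -B xi_dn
```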
The dynamics of the vector of coefficients $c=(c_1,c_2,c_3,c_4)=({\rm Re}\,\chi_1,{\rm Im}\,\chi_1,{\rm Re}\,\chi_2,{\rm Im}\,\chi_2)$ is given by \begin{mathletters} \begin{equation} \frac{dc}{dt} = \frac{1}{\tau} M c,~~M=M_0+M_1\cos\phi, \end{equation} \begin{equation} \label{defm} M_0 = \omega_{\rm B}\tau \pmatrix{ 0&-1&0&0 \cr 1&0&0&0 \cr 0&0&0&1 \cr 0&0&-1&0 },~~~~~ M_1=\frac{\pi f\ell}{L} \pmatrix{ 0 & -\cos\eta & 0 & \sin\eta \cr \cos\eta & 0 & -\sin\eta & 0 \cr 0 & \sin\eta & 0 & \cos\eta \cr -\sin\eta & 0 & -\cos\eta & 0 }, \end{equation} \end{mathletters} where $\omega_{\rm B}=g\mu_{\rm B}B/2\hbar$ is the precession frequency of the spin. The Boltzmann equation takes the form \begin{equation} \label{boltzmann} \tau\frac{\partial}{\partial t} P(x,\phi,c,t) = -\ell\cos\phi\frac{\partial P}{\partial x} -\sum_{i,j}\frac{\partial}{\partial c_i} \left(M_{ij}c_j P\right) -P + \int_0^{2\pi}\frac{d\phi'}{2\pi} P(x,\phi',c,t), \end{equation} where we have assumed isotropic scattering (rate $1/\tau=v/\ell$). We look for a stationary solution to the Boltzmann equation, so the left-hand-side of Eq.\ (\ref{boltzmann}) is zero and we omit the argument $t$ of $P$. A stationary flux of particles with an isotropic velocity distribution is injected at $x=0$, their spins all aligned with the local magnetic field (so $\chi_2=0$ at $x=0$). Without loss of generality we may assume that $\chi_1=1$ at $x=0$. No particles are incident from the other end, at $x=L$. Thus the boundary conditions are \begin{mathletters} \label{beecee} \begin{eqnarray} \label{beginwire} && P(x=0,\phi,c)=\delta (c_1-1)\delta (c_2)\delta (c_3)\delta (c_4)~{\rm if}~\cos\phi > 0, \\ \label{endwire} && P(x=L,\phi,c) = 0~{\rm if}~\cos\phi < 0. \end{eqnarray} \end{mathletters} This completes the formulation of the problem. We compare two methods of solution. The first is an exact numerical solution of the Boltzmann equation using the Monte Carlo method. 
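A minimal sketch of such a Monte Carlo solution is given below (Python; we set $L=1$ and $v=1$, use a fixed-substep integrator for the spin precession, and all parameter values are illustrative). Each walker is injected at $x=0$ with a flux-weighted angle, moves ballistically between isotropic scattering events with exponentially distributed free paths, and carries a spinor that precesses in the local field; transmitted walkers are projected onto the local eigenbasis at $x=L$.

```python
import numpy as np

rng = np.random.default_rng(7)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def field_dir(x, eta, f):
    # unit vector along B(x) of Eq. (4), with L = 1
    phi = 2 * np.pi * f * x
    return np.array([np.sin(eta) * np.cos(phi),
                     np.sin(eta) * np.sin(phi), np.cos(eta)])

def local_up(x, eta, f):
    # local spin-up eigenspinor xi_up of Eq. (6)
    return np.array([np.cos(eta / 2) * np.exp(-1j * np.pi * f * x),
                     np.sin(eta / 2) * np.exp(+1j * np.pi * f * x)])

def transmitted_polarization(ell, omega_tau, eta, f, walkers=150, ds=0.02):
    omega = omega_tau / ell                     # precession angle per unit path (v = 1)
    ps = []
    for _ in range(walkers):
        x, xi = 0.0, local_up(0.0, eta, f)      # spin up along the local field at x = 0
        phi_v = np.arcsin(rng.uniform(-1, 1))   # flux-weighted injection angle
        while 0.0 <= x <= 1.0:
            for _ in range(max(1, int(rng.exponential(ell) / ds))):
                n = field_dir(x, eta, f)
                ns = n[0] * sx + n[1] * sy + n[2] * sz
                # spin rotation over path ds: exp(i omega ds n.sigma)
                xi = (np.cos(omega * ds) * np.eye(2) + 1j * np.sin(omega * ds) * ns) @ xi
                x += np.cos(phi_v) * ds
                if not 0.0 <= x <= 1.0:
                    break
            phi_v = rng.uniform(0, 2 * np.pi)   # isotropic scattering
        if x > 1.0:                             # transmitted: project on basis at x = L
            p_up = abs(np.vdot(local_up(1.0, eta, f), xi)) ** 2 / np.vdot(xi, xi).real
            ps.append(2 * p_up - 1)
    return float(np.mean(ps))
```

For $f=0$ the field direction is uniform, the injected spinor remains an exact eigenstate, and the transmitted polarization equals one, which provides a built-in sanity check.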
The second is an approximate analytical solution using the diffusion approximation, valid for $L\gg \ell$. We begin with the latter. \subsection{Diffusion approximation} The diffusion approximation amounts to the assumption that $P$ has a simple cosine-dependence on $\phi$, \begin{equation} \label{diffansatz} P(x,\phi,c)=N(x,c)+J(x,c)\cos\phi. \end{equation} To determine the density $N$ and current $J$ we substitute Eq.\ (\ref{diffansatz}) into Eq.\ (\ref{boltzmann}) and integrate over $\phi$. This gives \begin{eqnarray} \ell\frac{\partial J}{\partial x} &=& -\frac{\partial}{\partial c}\left( 2 M_0 c N+M_1 c J\right). \label{diffpde1} \end{eqnarray} Similarly, multiplication with $\cos\phi$ before integration gives \begin{eqnarray} \ell\frac{\partial N}{\partial x} &=& -\frac{\partial}{\partial c} \left( M_0 c J+ M_1 c N\right) - J. \label{diffpde2} \end{eqnarray} Thus we have a closed set of partial differential equations for the unknown functions $N(x, c)$ and $J(x, c)$. Boundary conditions are obtained by multiplying Eq.\ (\ref{beecee}) with $\cos\phi$ and integrating over $\phi$: \begin{mathletters} \label{diffbc} \begin{eqnarray} && N(x=0,c)+\frac{\pi}{4}J(x=0,c)=\delta (c_1-1)\delta (c_2)\delta (c_3)\delta (c_4),\\ && N(x=L,c)-\frac{\pi}{4}J(x=L,c)=0. \end{eqnarray} \end{mathletters} We seek the spin polarization $p=c_1^2+c_2^2-c_3^2-c_4^2$ of the transmitted electrons, characterized by the distribution \begin{equation} \label{goal} P(p)=\frac{\int\!dc\,J(x=L,c)\delta(c_1^2+c_2^2-c_3^2-c_4^2-p)}{\int\!dc\, J(x=L,c)}. \end{equation} (The notation $\int\!dc\,\equiv\int\!dc_1\,\int\!dc_2\,\int\!dc_3\,\int\!dc_4$ indicates an integration over the spin variables.) We compute the first two moments of $P(p)$. The first moment $\overline{p}$ is the fraction of transmitted electrons with spin up minus the fraction with spin down, averaged quantum mechanically over the spin state and statistically over the disorder. 
The variance Var $p=\overline{p^2}-\overline{p}^2$ gives an indication of the magnitude of the statistical fluctuations. Integration of Eqs.\ (\ref{diffpde1})--(\ref{diffbc}) over the spin variables yields the equations and boundary conditions for the functions $N(x)=\int\!dc\,N(x,c)$ and $J(x)=\int\!dc\,J(x,c)$: \begin{mathletters} \label{momzero} \begin{eqnarray} &&\ell\frac{dN}{dx}=-J,~~\frac{dJ}{dx} = 0, \label{momzerodiff} \\ && N(0)+\frac{\pi}{4}J(0)=1,~~N(L)-\frac{\pi}{4}J(L)=0. \end{eqnarray} \end{mathletters} The solution \begin{equation} \label{solnorm} J(x) = \left( \frac{\pi}{2} + \frac{L}{\ell} \right)^{-1} \end{equation} determines the denominator of Eq.\ (\ref{goal}). To determine $\overline{p}$ we multiply Eqs.\ (\ref{diffpde1}) and (\ref{diffpde2}) with $\chi_\alpha\chi_\beta^\ast$ and integrate over $c$. (Recall that $\chi_1=c_1+ic_2,\chi_2=c_3+ic_4$.) It follows upon partial integration that \begin{mathletters} \begin{eqnarray} &&\int\! dc\, \chi_\alpha\chi_\beta^\ast \frac{\partial}{\partial c}\left(M_0 c f\right)= -\sum_{\rho,\sigma} \left(S_{\alpha\rho}\delta_{\beta\sigma}-\delta_{\alpha\rho}S_{\beta\sigma}\right) \int\! dc\, \chi_\rho \chi_\sigma^\ast f, \\ &&\int\! dc\, \chi_\alpha\chi_\beta^\ast \frac{\partial}{\partial c}\left(M_1 c f\right) = -\sum_{\rho,\sigma} \left(T_{\alpha\rho}\delta_{\beta\sigma}-\delta_{\alpha\rho}T_{\beta\sigma}\right) \int\! dc\, \chi_\rho \chi_\sigma^\ast f, \end{eqnarray} \end{mathletters} for arbitrary functions $f(x,c)$. The $2\times 2$ matrices $S,T$ are defined by \begin{equation} S=i\omega_{\rm B}\tau\sigma_z, ~~T=\frac{i\pi f\ell}{L}\left(\sigma_z\cos\eta-\sigma_x\sin\eta\right). 
\end{equation} In this way we find that the moments \begin{mathletters} \begin{eqnarray} N_{\alpha\beta}(x)&=&\int\!dc\,\chi_\alpha\chi^\ast_\beta N(x,c), \\ J_{\alpha\beta}(x)&=&\int\!dc\,\chi_\alpha\chi^\ast_\beta J(x,c), \end{eqnarray} \end{mathletters} satisfy the ordinary differential equations \begin{mathletters} \label{momdiff} \begin{eqnarray} \label{momdiffa} && \ell\frac{dN_{\alpha\beta}}{dx} = \sum_{\rho,\sigma} \left(T_{\alpha\rho}\delta_{\beta\sigma}-\delta_{\alpha\rho}T_{\beta\sigma}\right) N_{\rho\sigma} +\sum_{\rho,\sigma} \left(S_{\alpha\rho}\delta_{\beta\sigma}-\delta_{\alpha\rho}S_{\beta\sigma}\right) J_{\rho\sigma} -J_{\alpha\beta}, \\ && \ell\frac{dJ_{\alpha\beta}}{dx} = 2\sum_{\rho,\sigma} \left(S_{\alpha\rho}\delta_{\beta\sigma}-\delta_{\alpha\rho}S_{\beta\sigma}\right) N_{\rho\sigma} +\sum_{\rho,\sigma} \left(T_{\alpha\rho}\delta_{\beta\sigma}-\delta_{\alpha\rho}T_{\beta\sigma}\right) J_{\rho\sigma}, \end{eqnarray} \end{mathletters} with boundary conditions \begin{mathletters} \begin{eqnarray} && N_{\alpha\beta}(x=0)+\frac{\pi}{4}J_{\alpha\beta}(x=0)=\delta_{\alpha 1}\delta_{\beta 1}, \\ && N_{\alpha\beta}(x=L)-\frac{\pi}{4}J_{\alpha\beta}(x=L)= 0. \end{eqnarray} \end{mathletters} The mean polarization $\overline{p}$ is determined by $J_{\alpha\beta}$ according to \begin{equation} \label{avpdef} \overline{p}=\frac{J_{11}(L)-J_{22}(L)}{J(L)}= \left(\frac{\pi}{2}+\frac{L}{\ell}\right)\left[J_{11}(L)-J_{22}(L)\right]. \end{equation} Since Eq.\ (\ref{momdiff}) is linear in the $8$ functions $N_{\alpha\beta}(x),J_{\alpha\beta}(x)$ ($\alpha,\beta=1,2$), a solution requires the eigenvalues and right eigenvectors of the $8\times 8$ matrix of coefficients. These can be readily computed numerically for any values of $L/\ell$ and $\omega_{\rm B}\tau$. 
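Such a numerical solution is easy to script. The sketch below (Python with numpy; the code and its tolerances are ours, purely illustrative) assembles the $8\times 8$ coefficient matrix of Eq.\ (\ref{momdiff}) using a row-major vectorization of the $2\times 2$ moments (note $S^{\rm T}=S$ and $T^{\rm T}=T$, so the coefficient combinations reduce to commutators), solves the two-point boundary value problem with a matrix exponential, and evaluates $\overline{p}$ from Eq.\ (\ref{avpdef}); for the parameters of Fig.\ \ref{fig2} the result lies close to the asymptotic formula (\ref{avp}) quoted below.

```python
import numpy as np

# Parameters of Fig. 2: L/ell = 25, eta = pi/3, f = 1; pick one field strength.
sL, eta, f, wt = 25.0, np.pi / 3, 1.0, 1.0        # wt = omega_B * tau

sz = np.array([[1, 0], [0, -1]], complex)
sx = np.array([[0, 1], [1, 0]], complex)
S = 1j * wt * sz                                            # S = i omega_B tau sigma_z
T = 1j * np.pi * f / sL * (sz * np.cos(eta) - sx * np.sin(eta))

I2, I4 = np.eye(2), np.eye(4)

def comm(M):
    # superoperator X -> M X - X M on row-major vec(X); valid since S^T = S, T^T = T
    return np.kron(M, I2) - np.kron(I2, M.T)

# d/d(x/ell) (vecN, vecJ) = A (vecN, vecJ), from Eqs. (momdiffa) and (momdiffb)
A = np.block([[comm(T), comm(S) - I4],
              [2 * comm(S), comm(T)]])

def expm(M):
    # matrix exponential by scaling-and-squaring with a Taylor series (numpy-only)
    s = max(0, int(np.ceil(np.log2(np.linalg.norm(M, 1)))) + 1)
    X = M / 2 ** s
    term = np.eye(8, dtype=complex)
    E = np.eye(8, dtype=complex)
    for k in range(1, 25):
        term = term @ X / k
        E = E + term
    for _ in range(s):
        E = E @ E
    return E

E = expm(A * sL)
# boundary conditions: N + (pi/4) J = e11 at x = 0,  N - (pi/4) J = 0 at x = L
B0 = np.hstack([I4, np.pi / 4 * I4])
BL = np.hstack([I4, -np.pi / 4 * I4]) @ E
rhs = np.zeros(8, complex)
rhs[0] = 1.0                                       # vec of the matrix e11
y0 = np.linalg.solve(np.vstack([B0, BL]), rhs)
JL = (E @ y0)[4:].reshape(2, 2)                    # the moments J_{alpha beta}(L)
pbar = ((np.pi / 2 + sL) * (JL[0, 0] - JL[1, 1])).real

k = 2 * np.pi * f * np.sin(eta) / np.sqrt(1 + (2 * wt) ** 2)
print(pbar, k / np.sinh(k))
```

As a built-in consistency check, the trace sector of the solution reproduces the spinless result (\ref{solnorm}): $J_{11}(L)+J_{22}(L)=(\pi/2+L/\ell)^{-1}$.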
We have found an analytic asymptotic solution for $L/\ell\gg 1$ and $\omega_{\rm B}\tau\gg (f\ell/L)^2$, given by \begin{equation} \label{avp} \overline{p}=\frac{k}{\sinh k},~~~k=\frac{2\pi f\sin\eta}{\sqrt{1+(2\omega_{\rm B}\tau)^2}}. \end{equation} In Fig.\ \ref{fig2} we compare the numerical solution (solid curve) with Eq.\ (\ref{avp}) (dashed curve) for $L/\ell=25$ and $\eta=\pi/3,f=1$. The two curves are almost indistinguishable, except for the smallest values of $\omega_{\rm B}\tau$. \begin{figure}[ht] \epsfxsize=0.6\hsize \hspace*{\fill} \vspace*{-0ex}\epsffile{fig2.eps}\vspace*{0ex} \hspace*{\fill} \medskip \caption[]{Average and variance of the spin polarization $p$ of the current transmitted through a two-dimensional region of length $L=25\,\ell$, as a function of $\omega_{\rm B}\tau$, for a magnetic field given by Eq.\ (\ref{field}) with $\eta=\pi/3$ and $f=1$. The data points result from Monte Carlo simulations of the Boltzmann equation (\ref{boltzmann}), the solid curves result from the diffusion approximation (\ref{diffansatz}), and the dashed curves are the asymptotic formulas (\ref{avp}) and (\ref{varp}). Notice the transient regime (A), the randomized regime (B), and the adiabatic regime (C). } \label{fig2} \end{figure} In a similar way, we compute the second moment of $P(p)$ by multiplying Eqs.\ (\ref{diffpde1}) and (\ref{diffpde2}) with $\chi_\alpha\chi_\beta^\ast\chi_\gamma\chi_\delta^\ast$ and integrating over $c$. 
The result is a closed set of equations \begin{mathletters} \label{vardiff} \begin{eqnarray} \label{vardiffa} && \ell \frac{d}{dx} N_{\alpha\beta\gamma\delta} = \sum_{\mu,\nu,\rho,\sigma} \left( L^{\mu\nu\rho\sigma}_{\alpha\beta\gamma\delta} N_{\mu\nu\rho\sigma} + K^{\mu\nu\rho\sigma}_{\alpha\beta\gamma\delta} J_{\mu\nu\rho\sigma} \right) - J_{\alpha\beta\gamma\delta}, \\ && \ell \frac{d}{dx} J_{\alpha\beta\gamma\delta} = \sum_{\mu,\nu,\rho,\sigma} \left( 2K^{\mu\nu\rho\sigma}_{\alpha\beta\gamma\delta} N_{\mu\nu\rho\sigma} + L^{\mu\nu\rho\sigma}_{\alpha\beta\gamma\delta} J_{\mu\nu\rho\sigma} \right) , \end{eqnarray} \end{mathletters} where we have defined \begin{mathletters} \begin{eqnarray} && K_{\alpha\beta\gamma\delta}^{\mu\nu\rho\sigma} = S_{\alpha\mu}\delta_{\beta\nu}\delta_{\gamma\rho}\delta_{\delta\sigma} -\delta_{\alpha\mu}S_{\beta\nu}\delta_{\gamma\rho}\delta_{\delta\sigma} +\delta_{\alpha\mu}\delta_{\beta\nu}S_{\gamma\rho}\delta_{\delta\sigma} -\delta_{\alpha\mu}\delta_{\beta\nu}\delta_{\gamma\rho}S_{\delta\sigma}, \\ && L_{\alpha\beta\gamma\delta}^{\mu\nu\rho\sigma} = T_{\alpha\mu}\delta_{\beta\nu}\delta_{\gamma\rho}\delta_{\delta\sigma} -\delta_{\alpha\mu}T_{\beta\nu}\delta_{\gamma\rho}\delta_{\delta\sigma} +\delta_{\alpha\mu}\delta_{\beta\nu}T_{\gamma\rho}\delta_{\delta\sigma} -\delta_{\alpha\mu}\delta_{\beta\nu}\delta_{\gamma\rho}T_{\delta\sigma}, \end{eqnarray} \end{mathletters} \begin{mathletters} \begin{eqnarray} N_{\alpha\beta\gamma\delta}(x) &=& \int\!dc\, \chi_\alpha\chi^\ast_\beta\chi_\gamma\chi^\ast_\delta N(x,c), \\ J_{\alpha\beta\gamma\delta}(x) &=& \int\!dc\, \chi_\alpha\chi^\ast_\beta\chi_\gamma\chi^\ast_\delta J(x,c). 
\end{eqnarray} \end{mathletters} The boundary conditions on the functions $N_{\alpha\beta\gamma\delta}$ and $J_{\alpha\beta\gamma\delta}$ are \begin{eqnarray} && N_{\alpha\beta\gamma\delta}(x=0)+\frac{\pi}{4}J_{\alpha\beta\gamma\delta}(x=0) =\delta_{\alpha 1}\delta_{\beta 1}\delta_{\gamma 1}\delta_{\delta 1}, \\ && N_{\alpha\beta\gamma\delta}(x=L)-\frac{\pi}{4}J_{\alpha\beta\gamma\delta}(x=L)= 0. \end{eqnarray} The second moment $\overline{p^2}$ is determined by \begin{equation} \overline{p^2}=\left(\frac{\pi}{2}+\frac{L}{\ell}\right) \left[ J_{1111}(x=L)-J_{1122}(x=L)-J_{2211}(x=L)+J_{2222}(x=L) \right]. \end{equation} The numerical solution is plotted also in Fig.\ \ref{fig2}, together with the asymptotic expression \begin{equation} \label{varp} {\rm Var}\, p = \frac{1}{3}+\frac{2k\sqrt{3}}{3\sinh\left(k\sqrt{3}\right)} -\frac{k^2}{\sinh^2 k}. \end{equation} It is evident from Eqs.\ (\ref{avp}) and (\ref{varp}), and from Fig.\ \ref{fig2}, that the regime with $\overline{p}=1$, ${\rm Var}\,p=0$ is entered for $\omega_{\rm B}\tau \gtrsim f$ [for $\sin\eta ={\cal O} (1)$], in agreement with Stern's criterion (\ref{critstern}) for adiabaticity. For smaller $\omega_{\rm B}\tau$ adiabaticity is lost. There is a transient regime $\omega_{\rm B}\tau \ll (f\ell/L)^2$, in which the precession frequency is so low that the spin remains in the same state during the entire diffusion process. For $(f\ell/L)^2 \ll \omega_{\rm B}\tau \ll f$ the average polarization reaches a plateau value close to zero with a finite variance. For a sufficiently non-uniform field, $f\sin\eta\gg 1$, we find in this regime $\overline{p}=0$ and ${\rm Var}\, p=1/3$, which means that the spin state is completely randomized. The transient regime, the randomized regime, and the adiabatic regime are indicated in Fig.\ \ref{fig2} by the letters A, B, and C. 
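The regime boundaries can be read off numerically from the asymptotic formulas (\ref{avp}) and (\ref{varp}); a few-line check (Python with numpy, illustrative only): for $\omega_{\rm B}\tau\gg f$ one has $k\to 0$, hence $\overline{p}\to 1$ and ${\rm Var}\,p\to 0$ (adiabatic regime C), while for $k\gg 1$ one has $\overline{p}\to 0$ and ${\rm Var}\,p\to 1/3$ (randomized regime B).

```python
import numpy as np

def pbar(k):
    # Eq. (avp): mean polarization of the transmitted current
    return k / np.sinh(k)

def varp(k):
    # Eq. (varp): variance of the polarization
    return 1 / 3 + 2 * k * np.sqrt(3) / (3 * np.sinh(k * np.sqrt(3))) - k ** 2 / np.sinh(k) ** 2

def k_of(wt, f=1.0, eta=np.pi / 3):
    # k as defined below Eq. (avp), with wt = omega_B * tau
    return 2 * np.pi * f * np.sin(eta) / np.sqrt(1 + (2 * wt) ** 2)

# adiabatic regime C, omega_B tau >> f: k -> 0, full polarization survives
print(pbar(k_of(100.0)), varp(k_of(100.0)))
# randomized regime B with f sin(eta) >> 1: k large, polarization is scrambled
print(pbar(20.0), varp(20.0))
```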
\subsection{Comparison with Monte Carlo simulations} In order to check the diffusion approximation we solved the full Boltzmann equation by means of a Monte Carlo simulation. A particle is moved from $x=0$ over a distance $\ell_1$ in the direction $\phi_1$, then over a distance $\ell_2$ in the direction $\phi_2$, and so on, until it is reflected back to $x=0$ or transmitted to $x=L$. The step lengths $\ell_i$ are chosen randomly from an exponential distribution with mean $\ell$. The directions $\phi_i$ are chosen uniformly from $[0,2\pi]$, except for the initial direction $\phi_1$, which is distributed $\propto\cos\phi_1$. The spin components are given by \begin{equation} \pmatrix{\chi_1 \cr \chi_2} = \prod_i {\rm e}^{\left(S + T \cos\phi_i \right)\ell_i /\ell} \pmatrix{ 1 \cr 0 }. \end{equation} To find $\overline{p^n}$, one has to average $\left(|\chi_1|^2-|\chi_2|^2\right)^n$ over the transmitted particles. The results for $L/\ell=25$ are shown in Fig.\ \ref{fig2} (data points). They agree very well with the results of the previous subsection, thus confirming the validity of the diffusion approximation for $L/\ell\gg 1$. \section{Weak localization} \label{localization} \subsection{Formulation of the problem} We turn to the effect of the non-uniform magnetic field on the weak-localization correction of a multiply-connected system. We consider the same geometry as in Fig.\ \ref{fig1}, but now with periodic boundary conditions --- to model a ring of circumference $L$. Only the effects of the magnetic field on the spin are included, to isolate the Berry phase from the conventional Aharonov-Bohm phase. As in the previous subsection, we assume that the orbital motion is independent of the spin dynamics.
We follow LSG in applying the semiclassical theory of Chakravarty and Schmidt \cite{chakra} to the problem; however, we start at the level of the Boltzmann equation --- rather than at the level of the diffusion equation --- and make the diffusion approximation at a later stage of the calculation. The weak-localization correction $\Delta G$ to the conductance is given by \begin{equation} \label{wl1} \Delta G = - \frac{e^2D}{\pi\hbar L} \int_0^\infty\!\!dt\, {\rm e}^{-t/\tau_\varphi} C(t), \end{equation} where $\tau_\varphi$ is the phase coherence time and the diffusion coefficient $D=v\ell/d$ in $d$ dimensions. (In our geometry $d=2$.) The ``return quasi-probability'' $C(t)$ is expressed as a sum over ``Boltzmannian walks'' ${\bf R}(t)$ with ${\bf R}(0)={\bf R}(t)$, \begin{equation} \label{returnprob} C(t)= \sum_{\left\{{\bf R}(t)\right\}} W\, {\rm Tr}\, (U^+U^-). \end{equation} Here $W[{\bf R}(t)]$ is the weight of the Boltzmannian walk for a spinless particle. The $2\times 2$ matrices $U^\pm[{\bf R}(t)]$ are defined by \begin{equation} U^\pm = {\cal T}\exp\left\{\pm\frac{ig\mu_{\rm B}}{2\hbar}\int_0^t\!dt'\, {\bf B}\biglb({\bf R}(t')\bigrb) \cdot \mbox{\boldmath$\sigma$}\right\}, \end{equation} where $\cal T$ denotes a time ordering. The factor ${\rm Tr}\, (U^+U^-)$ in Eq.\ (\ref{returnprob}) accounts for the phase difference of time-reversed paths. The Cooperon can be written in terms of a propagator $\chi$, \begin{equation} \label{cooperon} C(t)=\frac{1}{2\pi}\int_0^{2\pi}\!\! d\phi \int_0^{2\pi}\!\!
d\phi_{\rm i}\sum_{\alpha,\beta} \chi_{\alpha\beta\beta\alpha} (x_{\rm i},x_{\rm i};\phi,\phi_{\rm i};t), \end{equation} that satisfies the kinetic equation \begin{eqnarray} \label{kinetic} \left( \frac{\partial}{\partial t} + {\cal B} \right) \chi_{\alpha\beta\gamma\delta} (x,x_{\rm i};\phi,\phi_{\rm i};t) - \frac{ig\mu_{\rm B}}{2\hbar} \sum_{\alpha',\gamma'} \left[ \biglb({\bf B}(x)\cdot\mbox{\boldmath$\sigma$}\bigrb)_{\alpha\alpha'}\delta_{\gamma\gamma'}- \delta_{\alpha\alpha'}\biglb({\bf B}(x)\cdot\mbox{\boldmath$\sigma$}\bigrb)_{\gamma\gamma'} \right] \chi_{\alpha'\beta\gamma'\delta} \nonumber \\ = \delta (t) \delta (x-x_{\rm i}) \delta (\phi-\phi_{\rm i}) \delta_{\alpha\beta} \delta_{\gamma\delta}. \end{eqnarray} The Boltzmann operator $\cal B$ is given by \begin{equation} {\cal B} = v\cos\phi\frac{\partial}{\partial x} + \frac{1}{\tau} - \frac{1}{\tau}\int_0^{2\pi}\!\frac{d\phi}{2\pi}. \end{equation} The propagator $\chi$ is a moment of the probability distribution $P(x,\phi,U^+,U^-,t)$, \begin{equation} \label{propagator} \chi_{\alpha\beta\gamma\delta} =\int\! dU^+ \int\! dU^- \, U^+_{\alpha\beta} U^-_{\gamma\delta} P, \end{equation} that satisfies the Boltzmann equation \begin{equation} \label{boeq} \left[\frac{\partial}{\partial t} + {\cal B} +\frac{\partial}{\partial U^+} \left(\frac{dU^+}{dt}\right) +\frac{\partial}{\partial U^-} \left(\frac{dU^-}{dt}\right) \right] P(x,\phi,U^+,U^-,t) = 0, \end{equation} with initial condition \begin{equation} P(x,\phi,U^+,U^-,0) = \delta (x-x_{\rm i}) \delta (\phi-\phi_{\rm i}) \delta (U^+-\openone ) \delta (U^--\openone ). \end{equation} The notation $dU^+$ or $dU^-$ indicates the differential of the real and imaginary parts of the elements of the $2\times 2$ matrix $U^+$ or $U^-$. We will write this in a more explicit way in the next subsection. The Boltzmann equation (\ref{boeq}) has the same form as that which we studied in Sec.\ \ref{transmission}. 
The difference is that we have four times as many internal degrees of freedom. Instead of a single spinor $\xi$ we now have two spinor matrices $U^+$ and $U^-$. A first doubling of the number of degrees of freedom occurs because we have to follow the evolution of both spin up and spin down. A second doubling occurs because we have to follow both the normal and the time-reversed evolution. \subsection{Diffusion approximation.} We make the diffusion approximation to the Boltzmann equation (\ref{boeq}), by following the steps outlined in Sec.\ \ref{transmission}. The $4 \times 2$ matrix $u^\pm$ containing the real and imaginary parts of $U^\pm$, \begin{equation} u^\pm = \pmatrix{ {\rm Re}\, U^\pm_{11} & {\rm Re}\, U^\pm_{12} \cr {\rm Im}\, U^\pm_{11} & {\rm Im}\, U^\pm_{12} \cr {\rm Re}\, U^\pm_{21} & {\rm Re}\, U^\pm_{22} \cr {\rm Im}\, U^\pm_{21} & {\rm Im}\, U^\pm_{22} }, \end{equation} has a time evolution governed by \begin{mathletters} \begin{equation} \tau\frac{du^\pm}{dt}= \pm Z(x) u^\pm, \end{equation} \begin{equation} Z(x)=\omega_{\rm B}\tau \pmatrix{ 0 & -\cos\eta & \sin\eta\sin\frac{2\pi fx}{L} & -\sin\eta\cos\frac{2\pi fx}{L} \cr \cos\eta & 0 & \sin\eta\cos\frac{2\pi fx}{L} & \hphantom{-}\sin\eta\sin\frac{2\pi fx}{L} \cr -\sin\eta\sin\frac{2\pi fx}{L} & -\sin\eta\cos\frac{2\pi fx}{L} & 0 & \cos\eta \cr \hphantom{-}\sin\eta\cos\frac{2\pi fx}{L} & -\sin\eta\sin\frac{2\pi fx}{L} & -\cos\eta & 0 }. \end{equation} \end{mathletters} The Boltzmann equation (\ref{boeq}) becomes, in a more explicit notation, \begin{eqnarray} \label{freqbe} \tau \frac{\partial}{\partial t} P(x,\phi,u^+,u^-,t) &=& -\ell\cos\phi\frac{\partial P}{\partial x} - \sum_{i,j,k} \frac{\partial}{\partial u^+_{ij}} Z_{ik}(x) u^+_{kj} P + \sum_{i,j,k} \frac{\partial}{\partial u^-_{ij}} Z_{ik}(x) u^-_{kj} P \nonumber \\ && - P + \int_0^{2\pi}\! \frac{d\phi'}{2\pi} P(x,\phi',u^+,u^-,t). \end{eqnarray} We now make the diffusion ansatz in the form \begin{equation} \int_0^\infty\! 
dt\, {\rm e}^{-t/\tau_\varphi}\int_0^{2\pi}\! d\phi_{\rm i}\, P = N + J\cos\phi. \end{equation} By integrating the Boltzmann equation over $\phi$, once with weight $1$ and once with weight $\cos\phi$, we obtain two coupled equations for the functions $N(x,u^+,u^-)$ and $J(x,u^+,u^-)$. Next we multiply both equations with $U^+_{\alpha\beta}U^-_{\gamma\delta}$ and integrate over the real and imaginary parts of the matrix elements. The moments $N_{\alpha\beta\gamma\delta}$ and $J_{\alpha\beta\gamma\delta}$ defined by \begin{mathletters} \begin{eqnarray} \label{wlmom} N_{\alpha\beta\gamma\delta} (x) &=& \int\! dU^+\int\! dU^-\, U^+_{\alpha\beta} U^-_{\gamma\delta} N, \\ J_{\alpha\beta\gamma\delta} (x) &=& \int\! dU^+\int\! dU^-\, U^+_{\alpha\beta} U^-_{\gamma\delta} J, \end{eqnarray} \end{mathletters} are found to obey the ordinary differential equations \begin{mathletters} \label{odiff} \begin{eqnarray} \label{coopdiffa} \ell\frac{dN_{\alpha\beta\gamma\delta}}{dx} &=& \frac{ig\mu_{\rm B}\tau}{2\hbar} \sum_{\alpha',\gamma'} \left[\biglb({\bf B}(x)\cdot\mbox{\boldmath$\sigma$}\bigrb)_{\alpha\alpha'} \delta_{\gamma\gamma'}- \delta_{\alpha\alpha'} \biglb({\bf B}(x)\cdot\mbox{\boldmath$\sigma$} \bigrb)_{\gamma\gamma'}\right] J_{\alpha'\beta\gamma'\delta} \nonumber \\ && -\left(1+\tau/\tau_\varphi\right)J_{\alpha\beta\gamma\delta}, \\ \label{coopdiffb} \ell\frac{dJ_{\alpha\beta\gamma\delta}}{dx} &=& \frac{ig\mu_{\rm B}\tau}{\hbar} \sum_{\alpha',\gamma'} \left[\biglb({\bf B}(x)\cdot\mbox{\boldmath$\sigma$}\bigrb)_{\alpha\alpha'}\delta_{\gamma\gamma'}- \delta_{\alpha\alpha'} \biglb({\bf B}(x)\cdot \mbox{\boldmath$\sigma$}\bigrb)_{\gamma\gamma'} \right] N_{\alpha'\beta\gamma'\delta} \nonumber \\ && -(2\tau/\tau_\varphi) N_{\alpha\beta\gamma\delta} +2\tau\delta_{\alpha\beta}\delta_{\gamma\delta}\delta (x-x_{\rm i}). 
\end{eqnarray} \end{mathletters} The periodic boundary conditions are \begin{equation} \label{ringbc} N_{\alpha\beta\gamma\delta}(0) = N_{\alpha\beta\gamma\delta}(L),~~~ J_{\alpha\beta\gamma\delta}(0) = J_{\alpha\beta\gamma\delta}(L). \end{equation} The Cooperon $C$ and the propagator $\chi$ of Eqs.\ (\ref{cooperon}) and (\ref{propagator}) are related to the density $N$ by \begin{eqnarray} && N_{\alpha\beta\gamma\delta} (x) = \int_0^\infty \! dt \, {\rm e}^{-t/\tau_\varphi} \frac{1}{2\pi} \int_0^{2\pi} d\phi \int_0^{2\pi} d\phi_{\rm i} \, \chi_{\alpha\beta\gamma\delta} (x,x_{\rm i};\phi,\phi_{\rm i};t), \\ && \sum_{\alpha,\beta} N_{\alpha\beta\beta\alpha} (x_{\rm i}) = \int_0^\infty \! dt \, {\rm e}^{-t/\tau_\varphi} C(t). \end{eqnarray} Hence the weak-localization correction (\ref{wl1}) is obtained from $N$ by \begin{equation} \label{wl2} \Delta G = -\frac{e^2D}{\pi\hbar L} \sum_{\alpha,\beta} N_{\alpha\beta\beta\alpha}(x_{\rm i}). \label{wlresult} \end{equation} The transformation to the local basis of spin states (\ref{localbasis}) takes the form of a unitary transformation of the moments $N$ and $J$, \begin{mathletters} \begin{eqnarray} && \tilde{N}_{\alpha\beta\gamma\delta} = \sum_{\alpha',\beta',\gamma',\delta'} Q^{\vphantom{\dagger}}_{\alpha\alpha'} Q^{\vphantom{\dagger}}_{\gamma\gamma'} N^{\vphantom{\dagger}}_{\alpha'\beta'\gamma'\delta'} Q^\dagger_{\beta'\beta} Q^\dagger_{\delta'\delta}, \\ && \tilde{J}_{\alpha\beta\gamma\delta} = \sum_{\alpha',\beta',\gamma',\delta'} Q^{\vphantom{\dagger}}_{\alpha\alpha'} Q^{\vphantom{\dagger}}_{\gamma\gamma'} J^{\vphantom{\dagger}}_{\alpha'\beta'\gamma'\delta'} Q^\dagger_{\beta'\beta} Q^\dagger_{\delta'\delta}, \\ && Q(x)= \pmatrix{ \hphantom{-} {\rm e}^{i\pi fx/L}\,\cos\frac{\eta}{2} & {\rm e}^{-i\pi fx/L} \, \sin\frac{\eta}{2} \cr - {\rm e}^{i\pi fx/L} \, \sin\frac{\eta}{2} & {\rm e}^{-i\pi fx/L} \, \cos\frac{\eta}{2} }. 
\end{eqnarray} \end{mathletters} The transformed moments obey \begin{mathletters} \label{lodiff} \begin{eqnarray} \label{lcoopdiffa} \ell\frac{d\tilde{N}_{\alpha\beta\gamma\delta}}{dx} &=& \sum_{\alpha',\gamma'} \left(T_{\alpha\alpha'}\delta_{\gamma\gamma'}+\delta_{\alpha\alpha'}T_{\gamma\gamma'}\right) \tilde{N}_{\alpha'\beta\gamma'\delta} +\sum_{\alpha',\gamma'} \left(S_{\alpha\alpha'}\delta_{\gamma\gamma'}-\delta_{\alpha\alpha'}S_{\gamma\gamma'}\right) \tilde{J}_{\alpha'\beta\gamma'\delta} \nonumber \\ && -\left(1+\tau/\tau_\varphi\right) \tilde{J}_{\alpha\beta\gamma\delta}, \\ \label{lcoopdiffb} \ell\frac{d\tilde{J}_{\alpha\beta\gamma\delta}}{dx} &=& 2\sum_{\alpha',\gamma'} \left(S_{\alpha\alpha'}\delta_{\gamma\gamma'}- \delta_{\alpha\alpha'}S_{\gamma\gamma'}\right) \tilde{N}_{\alpha'\beta\gamma'\delta} +\sum_{\alpha',\gamma'} \left(T_{\alpha\alpha'}\delta_{\gamma\gamma'}+\delta_{\alpha\alpha'}T_{\gamma\gamma'}\right) \tilde{J}_{\alpha'\beta\gamma'\delta} \nonumber \\ && -(2\tau/\tau_\varphi) \tilde{N}_{\alpha\beta\gamma\delta} + 2\tau \delta_{\alpha\beta} \delta_{\gamma\delta} \delta (x-x_{\rm i}), \end{eqnarray} \end{mathletters} with the same $2\times 2$ matrices $S$ and $T$ as in Sec.\ \ref{transmission}. Because the transformation from $N$ to $\tilde{N}$ is unitary, the weak-localization correction is still given by $\Delta G=-(e^2D/\pi\hbar L)\sum_{\alpha,\beta} \tilde{N}_{\alpha\beta\beta\alpha}(x_{\rm i})$, as in Eq.\ (\ref{wl2}). \begin{figure}[ht] \epsfxsize=0.7\hsize \hspace*{\fill} \vspace*{-0ex}\epsffile{fig3.eps}\vspace*{0ex} \hspace*{\fill} \medskip \caption[]{Weak-localization correction $\Delta G$ of a ring in a spatially rotating magnetic field, as a function of the tilt angle $\eta$. Plotted is the result of Eq.\ (\ref{lodiff}) for $f=5$, $L=500\,\ell$, $L_\varphi=125\,\ell$. The upper panel is for $\omega_{\rm B}\tau\ll 1$. 
From top to bottom: $\omega_{\rm B}\tau = 10^{-5}$, $10^{-4}$, $2\cdot 10^{-4}$, $3\cdot 10^{-4}$, $5 \cdot 10^{-4}$, $10^{-3}$, $10^{-2}$. At $\omega_{\rm B}\tau\simeq (f\ell/L)^2$, the weak-localization correction crosses over from the transient regime A of Eq.\ (\ref{zerofield}) to the randomized regime B of Eq.\ (\ref{exactsol}). The lower panel is for $\omega_{\rm B}\tau\gtrsim 1$. From bottom to top: $\omega_{\rm B}\tau =0.1$, $1$, $2$, $5$, $10$, $100$. Here the weak-localization correction reaches the adiabatic regime C of Eq.\ (\ref{adiabatic}). } \label{fig3} \end{figure} We have solved Eq.\ (\ref{lodiff}) with periodic boundary conditions by numerically computing the eigenvalues and (right) eigenvectors of the $8\times 8$ matrix of coefficients. The resulting $\Delta G$ is plotted in Fig.\ \ref{fig3} as a function of the tilt angle $\eta$. In the adiabatic regime $\omega_{\rm B}\tau\gg f$ we find the conductance oscillations due to the Berry phase. These are given by \cite{lsg} \begin{eqnarray} \label{adiabatic} &&\Delta G = -\frac{e^2}{\pi\hbar} \frac{L_\varphi}{L} \frac{\sinh (L/L_\varphi)}{\cosh (L/L_\varphi)-\cos\left(2\pi f\cos\eta\right)} \end{eqnarray} analogously to the Aharonov-Bohm oscillations.\cite{ab} (The length $L_\varphi =\sqrt{D\tau_\varphi}$ is the phase-coherence length.) In the randomized regime $\left(f\ell/L\right)^2 \ll \omega_{\rm B}\tau \ll f$ there are no conductance oscillations. Instead we find a reduction of the weak-localization correction, due to dephasing by spin scattering. In the transient regime $\omega_{\rm B}\tau \ll \left(f\ell/L\right)^2$ the effect of the field on the spin can be ignored,\cite{noot} and the weak-localization correction remains at its zero-field value \begin{eqnarray} \label{zerofield} &&\Delta G = -\frac{e^2}{\pi\hbar}\frac{L_\varphi}{L}\,{\rm cotanh}\left(\frac{L}{2L_\varphi}\right). 
\end{eqnarray} \subsection{Comparison with Loss, Schoeller, and Goldbart} \label{sollsg} If we replace the Boltzmann operator $\cal B$ in Eq.\ (\ref{kinetic}) by the diffusion operator $-D\partial^2/\partial x^2$ and integrate over $\phi$ and $\phi_{\rm i}$, we end up with the diffusion equation studied by LSG, \begin{mathletters} \label{lsgkinetic} \begin{eqnarray} && \left(\frac{\partial}{\partial t}-{\cal H}\right) \chi_{\alpha\beta\gamma\delta} (x,x_{\rm i};t) = \delta (t) \delta (x-x_{\rm i}) \delta_{\alpha\beta}\delta_{\gamma\delta}, \\ && {\cal H} = D \frac{\partial^2}{\partial x^2} +\frac{ig\mu_{\rm B}}{2\hbar} \left[{\bf B}(x)\cdot \mbox{\boldmath$\sigma$}_1- {\bf B}(x)\cdot \mbox{\boldmath$\sigma$}_2 \right], \\ && \chi_{\alpha\beta\gamma\delta} (x,x_{\rm i};t) = \frac{1}{2\pi} \int_0^{2\pi}\! d\phi \int_0^{2\pi}\! d\phi_{\rm i}\, \chi_{\alpha\beta\gamma\delta} (x,x_{\rm i};\phi,\phi_{\rm i};t). \end{eqnarray} \end{mathletters} Here $\mbox{\boldmath$\sigma$}_1$ and $\mbox{\boldmath$\sigma$}_2$ act, respectively, on the first and third indices of $\chi_{\alpha\beta\gamma\delta}$. The difference between the diffusion equation (\ref{lsgkinetic}) and the diffusion equation (\ref{odiff}) is that (\ref{lsgkinetic}) holds only if $\omega_{\rm B}\tau\ll 1$, while (\ref{odiff}) holds for any value of $\omega_{\rm B}\tau$. LSG used Eq.\ (\ref{lsgkinetic}) to argue that there exists an adiabatic region within the regime $\omega_{\rm B}\tau \ll 1$. In contrast, our analysis of Eq.\ (\ref{odiff}) shows that adiabaticity is not possible if $\omega_{\rm B}\tau\ll 1$. The argument of LSG is based on a mapping of the diffusion equation (\ref{lsgkinetic}) onto the Schr{\"o}dinger equation studied in Ref.\ \onlinecite{lg}. However, the mapping is not carried out explicitly. In this subsection we will solve Eq.\ (\ref{lsgkinetic}) exactly using this mapping, to demonstrate that the adiabatic regime of LSG is in fact the randomized regime B. 
This mis-identification perhaps occurred because both regimes are stationary with respect to the magnetic field strength (cf.\ Fig.\ \ref{fig2}). However, Berry-phase oscillations of the conductance are only supported in the adiabatic regime C, not in the randomized regime B (cf.\ Fig.\ \ref{fig3}). We solve Eq.\ (\ref{lsgkinetic}) for the weak-localization correction \begin{eqnarray} \label{wl3} \Delta G &=& -\frac{e^2D}{\pi\hbar L} \sum_{\alpha,\beta} \left\langle x,\alpha,\beta\left| \left(\tau_\varphi^{-1}-{\cal H}\right)^{-1} \right|x,\beta,\alpha\right\rangle, \end{eqnarray} where we introduced the basis of eigenstates $|x,\alpha,\beta\rangle$ (with $\alpha,\beta=\pm 1$) of the position operator $x$ and the spin operators $\sigma_{1z}$ and $\sigma_{2z}$. The operator ${\cal H}$ commutes with \begin{equation} J=\frac{L}{2\pi i}\frac{\partial}{\partial x}+\case{1}{2}f\left(\sigma_{1z}+\sigma_{2z}\right). \end{equation} It is therefore convenient to use as a basis, instead of the eigenstates $|x,\alpha,\beta\rangle$, the eigenstates $|j,\alpha,\beta\rangle$ of $J$, $\sigma_{1z}$, and $\sigma_{2z}$. The eigenvalue $j$ of $J$ is an integer because of the periodic boundary conditions. The eigenfunctions are given by \begin{equation} \left\langle x,\alpha',\beta'| j,\alpha,\beta\right\rangle = \frac{1}{\sqrt{L}} \delta_{\alpha'\alpha} \delta_{\beta'\beta} \exp\left[\case{2\pi i x}{L} (j-\case{1}{2}f\alpha-\case{1}{2}f\beta)\right] . 
\end{equation} In the basis $\{|j,1,1\rangle,|j,1,-1\rangle, |j,-1,1\rangle, |j,-1,-1\rangle \}$ the operator $\cal H$ has matrix elements \begin{eqnarray} \label{diagham} \langle j',\alpha',\beta' | {\cal H} | j,\alpha,\beta\rangle &=& -D \left(\frac{2\pi}{L}\right)^2 \delta_{j'j} \pmatrix{ (j-f)^2 & 0 & 0 & 0 \cr 0 & j^2 & 0 & 0 \cr 0 & 0 & j^2 & 0 \cr 0 & 0 & 0 & (j+f)^2 \cr } \nonumber \\ && -i\omega_{\rm B} \delta_{j'j} \pmatrix{ 0 & \hphantom{-}\sin\eta & -\sin\eta & 0 \cr \hphantom{-}\sin\eta & -2\cos\eta & 0 & -\sin\eta \cr -\sin\eta & 0 & 2\cos\eta & \hphantom{-}\sin\eta \cr 0 & -\sin\eta & \hphantom{-}\sin\eta & 0 \cr }. \end{eqnarray} Substitution into Eq.\ (\ref{wl3}) yields \begin{eqnarray} \label{solwl} \Delta G &=& -\frac{e^2D}{\pi\hbar}\frac{1}{L^2}\sum_{\alpha,\beta} \sum_{j=-\infty}^\infty \left\langle j,\alpha,\beta \left| \left(\tau_\varphi^{-1} -{\cal H}\right)^{-1} \right|j,\beta,\alpha \right\rangle \nonumber \\ &=& -\frac{e^2}{\pi\hbar}\frac{1}{2\pi^2}\sum_{j=-\infty}^\infty \left[(\gamma + j^2)^2 (f^2 + \gamma + j^2) + \kappa^2 (3 f^2 + 4 \gamma + 4 j^2 + f^2 \cos 2 \eta ) \right] \nonumber \\ && \mbox{}\times\left[(\gamma +j^2)^2 (f^4+2 f^2\gamma +\gamma^2-2 f^2j^2+2\gamma j^2 + j^4) \right. \nonumber \\ && \mbox{}+\left. 2 \kappa^2 \biglb( f^4 + 3 f^2 \gamma + 2 \gamma^2 - f^2 j^2 + 4 \gamma j^2 + 2 j^4 + f^2 (f^2 + \gamma - 3 j^2 ) \cos 2\eta \bigrb) \right]^{-1}. \end{eqnarray} We abbreviated $\kappa=2\omega_{\rm B}\tau (L/2\pi\ell)^2$ and $\gamma=(L/2\pi L_\varphi)^2$. 
The sum over $j$ can be done analytically for $\kappa\gg 1$, with the result \begin{mathletters} \label{exactsol} \begin{eqnarray} &&\Delta G = -\frac{e^2}{\pi\hbar} \frac{1}{4\pi Q} \left[ \frac{4a_-+4\gamma+(3+\cos 2\eta )f^2}{\sqrt{a_-}\tan\pi \sqrt{a_-}} - \frac{4a_++4\gamma+(3+\cos 2\eta )f^2}{\sqrt{a_+}\tan\pi \sqrt{a_+}} \right], \\ && Q=\left[f^4 (9\cos^2 2\eta -2\cos 2\eta-7)-32\gamma f^2 (1+\cos 2\eta)\right]^{1/2}, \\ &&a_\pm = -\gamma + \case{1}{4}(1+3\cos 2 \eta ) f^2 \pm\case{1}{4}Q. \end{eqnarray} \end{mathletters} We have checked that our solution (\ref{solwl}) of Eq.\ (\ref{lsgkinetic}) coincides with the solution of Eq.\ (\ref{odiff}) in the regime $\omega_{\rm B}\tau\ll 1$. (The two sets of curves are indistinguishable on the scale of Fig.\ \ref{fig3}.) In particular, Eq.\ (\ref{exactsol}) coincides with the curves labeled B in Fig.\ \ref{fig3}, demonstrating that it represents the randomized regime --- without Berry-phase oscillations. \section{Conclusions} In conclusion, we have computed the effect of a non-uniform magnetic field on the spin polarization (Sec.\ \ref{transmission}) and weak-localization correction (Sec.\ \ref{localization}) in a disordered conductor. We have identified three regimes of magnetic field strength: the transient regime $\omega_{\rm B}\tau\ll (f\ell/L)^2$, the randomized regime $(f\ell/L)^2 \ll \omega_{\rm B}\tau \ll f$, and the adiabatic regime $\omega_{\rm B}\tau\gg f$. In the transient regime (labeled A in Figs.\ \ref{fig2} and \ref{fig3}), the effect of the magnetic field can be neglected. In the randomized regime (labeled B), the depolarization and the suppression of the weak-localization correction are maximal. In the adiabatic regime (labeled C), the polarization is restored and the weak-localization correction exhibits oscillations due to the Berry phase.
The criterion for adiabaticity is $\omega_{\rm B}t_{\rm c}\gg 1$, with $\omega_{\rm B}$ the spin-precession frequency and $t_{\rm c}$ a characteristic timescale of the orbital motion. We find $t_{\rm c} =\tau$, in agreement with Stern,\cite{stern} but in contradiction with the result $t_{\rm c}=\tau (L/\ell)^2$ of Loss, Schoeller, and Goldbart. \cite{lsg} By solving exactly the diffusion equation for the Cooperon derived in Ref.\ \onlinecite{lsg}, we have demonstrated unambiguously that the regime which in that paper was identified as the adiabatic regime, is in fact the randomized regime B --- without Berry-phase oscillations. We have focused on transport properties, such as conductance and spin-resolved transmission. Thermodynamic properties, such as the persistent current, in a non-uniform magnetic field have been studied by Loss, Goldbart, and Balatsky \cite{lg,lgb} in connection with Berry-phase oscillations. These papers assumed ballistic systems. We believe that the adiabaticity criterion $\omega_{\rm B}\tau\gg 1$ for disordered systems should apply to thermodynamic properties as well as transport properties. This strong-field criterion presents a pessimistic outlook for the prospect of experiments on the Berry phase in disordered systems. \acknowledgements We are indebted to L. P. Kouwenhoven for bringing this problem to our attention, and to P. W. Brouwer, D. Loss, and A. Stern for useful discussions. This research was supported by the ``Ne\-der\-land\-se or\-ga\-ni\-sa\-tie voor We\-ten\-schap\-pe\-lijk On\-der\-zoek'' (NWO) and by the ``Stich\-ting voor Fun\-da\-men\-teel On\-der\-zoek der Ma\-te\-rie'' (FOM).
\section{Introduction and Preliminaries} In \cite{Br}, Bregman introduced an iterative procedure to find points in an intersection of convex sets. At each step, the next point in the sequence is obtained by minimizing an objective function that can be described as the vertical distance of the graph of the function to the tangent plane through the previous point. If $\cM$ is a convex set in some $\mathbb{R}^K,$ and $\Phi:\cM\to\mathbb{R}$ is a strictly convex, continuously differentiable function, the {\it divergence function} that it defines is specified by \begin{equation}\label{breg} \bdelta_\Phi(\bx,\by)^2 = \Phi(\bx)-\Phi(\by)-\langle(\bx-\by),\nabla\Phi(\by)\rangle. \end{equation} In Bregman's work, $\Phi(\bx)$ was taken to be the Euclidean square norm $\|\bx\|^2.$ The concept was eventually extended, even to the infinite-dimensional case, and now plays an important role in many applications: for example, in clustering, classification analysis and machine learning, as in Banerjee et al. \cite{BGW}, Boisonnat et al. \cite{BNN}, Banerjee et al. \cite{BDGMM}, and Fisher \cite{F}. It plays a role in optimization theory, as in Baushke and Borwein \cite{BB}, Baushke and Lewis \cite{BL}, Baushke and Combettes \cite{BC}, Censor and Reich \cite{CR}, Baushke et al. \cite{BBC} and Censor and Zaknoon \cite{CZ}; in the solution of operator equations, as in Butnariu and Resmerita \cite{BR}; and in approximation theory in Banach spaces, as in Baushke and Combettes \cite{BC} or Li et al. \cite{LSY}. It also appears in applications of geometry to statistics and information theory, as in Amari and Nagaoka \cite{AN}, Csisz\'ar \cite{Cs}, Amari and Cichoski \cite{AC}, Calin and Urdiste \cite{CU} or Nielsen \cite{N}. These are just a small sample of the many references to applications of Bregman functions, and the list cascades rapidly.
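To make Eq.\ (\ref{breg}) concrete, here is a small numerical illustration (Python with numpy; the function names are ours). For $\Phi(\bx)=\|\bx\|^2$, Bregman's original choice, the divergence reduces to the squared Euclidean distance $\|\bx-\by\|^2$; for the negative entropy $\Phi(\bx)=\sum_i x_i\log x_i$ on the positive orthant it becomes the generalized Kullback--Leibler divergence $\sum_i[x_i\log(x_i/y_i)-x_i+y_i]$, which is nonnegative and vanishes only at $\bx=\by$, but is visibly asymmetric.

```python
import numpy as np

def bregman(phi, grad_phi, x, y):
    # delta_Phi(x, y)^2 = Phi(x) - Phi(y) - <x - y, grad Phi(y)>, Eq. (breg)
    return phi(x) - phi(y) - np.dot(x - y, grad_phi(y))

# Phi(x) = ||x||^2: Bregman's original choice, gives the squared Euclidean distance
sq, sq_grad = lambda x: np.dot(x, x), lambda x: 2 * x

# Phi(x) = sum_i x_i log x_i: negative entropy on the positive orthant,
# gives the generalized Kullback-Leibler divergence
ent, ent_grad = lambda x: np.sum(x * np.log(x)), lambda x: np.log(x) + 1.0

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 1.0, 2.5])
print(bregman(sq, sq_grad, x, y))                                  # equals ||x - y||^2
print(bregman(ent, ent_grad, x, y), bregman(ent, ent_grad, y, x))  # not symmetric
```

Exchanging the arguments of the entropy-based divergence changes its value: the divergence is a penalty function, not a metric.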
It is a well known, and easy to verify, fact that $$\bdelta_\Phi(\bx,\by)^2 \geq 0,\;\;\mbox{and}\;\; \bdelta_\Phi(\bx,\by)^2=0 \Leftrightarrow \bx=\by.$$ Thus our choice of notation is consistent. But as $\bdelta$ is neither symmetric nor satisfies the triangular inequality, it cannot be a distance on $\cM.$ Let now $(\Omega,\cF,\bbP)$ be a probability space such that $\cF$ is complete (contains all sets of zero $\bbP$ measure). By $\cL_p,$ $p=1,2,$ we shall denote the usual classes of $\bbP$-integrable or square integrable functions, identified up to sets of measure zero. The notion of divergence can be extended to random variables as follows. \begin{definition}\label{divrv} Let $\bX,\bY$ be $\cM$-valued random variables such that $\Phi(\bX),$ $\Phi(\bY)$ and $\nabla\Phi(\bY)$ are in $\cL_2.$ The divergence between $\bX$ and $\bY$ is defined by $$\Delta_\Phi(\bX,\bY)^2 = E[\bdelta_\Phi(\bX,\bY)^2] = \int_\Omega\bdelta_\Phi(\bX(\omega),\bY(\omega))^2d\bbP(\omega).$$ \end{definition} Clearly, $\Delta_\Phi(\bX,\bY)$ is neither symmetric nor satisfies the triangle inequality. But since, as above, $$\Delta_\Phi(\bX,\bY)^2 \geq 0\;\;\mbox{and}\;\;\Delta_\Phi(\bX,\bY)^2=0 \Leftrightarrow \bX=\bY\;\;{a.s}\;\bbP,$$ we can think of it as a pseudo-distance, cost or penalty function on $\cL_p.$ The motivation for this work comes from two directions. On the one hand, there is the fact that for Bregman divergences there is a notion of best predictor, and this best predictor happens to be the usual conditional expectation. To put it in symbols: \begin{theorem}\label{bpdiv} Let $\bX\in\cL_2$ and let $\cG\subset\cF$ be a sub-$\sigma$-algebra. Then the solution to the problem $$\inf\{\Delta_\Phi(\bX,\bY)^2\,|\,\bY\in\cL_2(\cG)\}$$ is given by $E[\bX | \cG].$ \end{theorem} For the proof the reader can consult Banerjee et al. \cite{BGW} or Fisher \cite{F}. The other thread comes from Gzyl \cite{GH}, where a geometry on the convex cone of strictly positive vectors is considered. 
That geometry happens to be derivable from a divergence function, and it leads to a host of curious variations on the theme of best predictor, estimation, laws of large numbers and central limit theorems. The geometry considered there is that induced by the logarithmic distance, which makes $(0,\infty)^d$ a Tits-Bruhat space; this happens to be a special commutative version of the theory explained in Lang \cite{L}, Lawson and Lin \cite{LL}, Mohaker \cite{Moh} and Schwartzman \cite{Sch}. We should mention that the use of differential geometric methods in \cite{AN}, or \cite{CU} and the many references cited therein, is different from the one described below. They consider geometric structures either on the class of probabilities on a finite set, or on the space of parameters characterizing a (usually exponential) family of distributions. Here we analyze how the geometry on the set in which the random variables take values determines the nature of the standard estimation and prediction process. From now on we shall suppose that $\cM = \cJ^K,$ where $\cJ$ is a bounded or unbounded interval in $\mathbb{R}.$ We shall denote by $\phi:\cJ\to\mathbb{R}$ a strictly convex, three times continuously differentiable function, and define \begin{equation}\label{convf} \Phi(\bx) = \sum_{i=1}^K \phi(x_i). \end{equation} \subsection{Some standard examples} In the next table we list five standard examples. The list could be much longer, but these examples were chosen because in some of the cases the distance associated to the divergence bounds the divergence from above, whereas in the others the divergence bounds the distance from above. The examples are displayed in Table \ref{tab1}. \begin{table}[h!] 
\centering \begin{tabular}{|c|c|}\hline Domain & $\phi$ \\\hline $\mathbb{R}$ & $x^2$\\\hline $\mathbb{R}$ & $e^x$ \\\hline $\mathbb{R}$ & $e^{-x}$ \\\hline $(0,\infty)$ & $x\ln x$ \\\hline $(0,\infty)$ & $-\ln x$\\ \hline \end{tabular} \caption{Standard convex functions used to generate Bregman divergences} \label{tab1} \end{table} \subsection{Organization of the paper} We have established enough notation to describe the contents of the paper. In Section 2 we start from the divergence function on $\cM$ and derive a metric tensor $g_{i,j}$ from it. We then solve the geodesic equations to compute the geodesic distance $d_\phi(\bx,\by)$ between any two points $\bx,\by\in\cM,$ and we compare it with the divergence $\bdelta_\phi(\bx,\by)$ between the two points. We shall see that there are cases in which one of them dominates the other for any pair of points. The Riemannian distance between points in $\cM$ induces a distance between random variables taking values there. In Section 3 we come to the main theme of this work, that is, the computation of best predictors when the distance between random variables is measured in the induced Riemannian distance. We shall call such best predictors the $d$-mean and the $d$-conditional expectation, and denote them by $E_d[\bX]$ and $E_d[\bX|\cG],$ respectively. In order to compare these to the best predictor in divergence, we use the prediction error as a comparison criterion. It is at this point that the comparison results established in Section 2 come in. In Section 4 we take up the issue of sample estimation and its properties. We shall see that the standard results hold for the $d$-conditional expectation as well. That is, we shall see that the estimators of the $d$-mean and of the $d$-variance are unbiased and converge to their true values as the size of the sample becomes infinitely large. 
In Section 5 we shall consider the arithmetic properties of the $d$-conditional expectation when there is a commutative group structure on $\cM.$ In Section 6 we collect a few final comments, and in Appendix 7 we present one more derivation of the geodesic equations. \section{Riemannian metric induced by $\phi$} The direct connection between $\Phi$-divergences and Riemannian metrics stems from the fact that a strictly convex, at least twice differentiable function has a positive definite Hessian matrix. Even more, the metric derived from a ``separable'' $\Phi$ is diagonal, that is \begin{equation}\label{metric} g_{i,j} = \frac{\partial^2 \Phi(\bx)}{\partial x_i\partial x_j} = \phi''(x_i)\delta_{i,j}. \end{equation} \noindent Here we use $\delta_{i,j}$ for the standard Kronecker delta, and we shall not distinguish between covariant and contravariant coordinates. This may make the description of standard symbols in differential geometry a bit more awkward. All these examples have an interesting feature in common. The convex function defining the Bregman divergence is three times continuously differentiable, and defines a Riemannian metric in its domain by $g_{i,j}(\bx)=\phi''(x_i)\delta_{i,j}.$ The equations for the geodesics in this metric decouple. It is actually easy to see that for each $1\leq i \leq K,$ the geodesic which at time $t=0$ starts from $x_i$ and ends at $y_i$ at time $t=1$ is the solution to \begin{equation}\label{geoeq} \phi''(x_i(t))\ddot{x}_i(t) + \frac{1}{2}\phi'''(x_i(t))\dot{x}_i^2(t) = 0,\;\;x_i(0)=x_i,\;x_i(1)=y_i. \end{equation} Although this equation is easy to integrate, we show how to do so in a short appendix at the end. Now denote by $h(x)$ a primitive of $(\phi''(x))^{1/2},$ that is \begin{equation}\label{chanvar} h(x) =\int^x\big(\phi''(t)\big)^{1/2}dt. \end{equation} Since $\phi''$ is strictly positive by assumption, $h$ is strictly increasing and therefore invertible. 
If we put $H=h^{-1}$ for the compositional inverse of $h,$ we can write the solution to (\ref{geoeq}) as \begin{equation}\label{solgeo} x_i(t) = H\Big(h(x_i) + k_it\Big)\;\;\;0\leq t \leq 1,\;\;i=1,...,K. \end{equation} The $k_i$ are integration constants, which, using the conditions $x_i(0)=x_i,$ $x_i(1)=y_i,$ turn out to be $k_i=h(y_i)- h(x_i).$ Notice now that the distance between $\bx$ and $\by$ is given by \begin{equation}\label{geodis} d_\phi(\bx,\by) = \int_0^1\Big(\sum_{i=1}^K\phi''(x_i(t))\dot{x}_i^2(t)\Big)^{1/2}dt. \end{equation} It takes a simple computation to verify that \begin{equation}\label{geodes2} d_\phi(\bx,\by) = \Big(\sum_{i=1}^K k_i^2\Big)^{1/2} = \Big(\sum_{i=1}^K \big(h(y_i) - h(x_i)\big)^2\Big)^{1/2}. \end{equation} So as not to introduce more notation, we shall use the symbol $h$ to denote the map $h:\cM \to \mathbb{R}^K$ defined by $h(\bx)_i=h(x_i).$ Notice that $h$ is an isometry between $\cM$ and its image in $\mathbb{R}^K,$ when the distance in the former is $d_\phi$ and in the latter is the Euclidean distance. Therefore geometric properties in $\mathbb{R}^K$ have a counterpart in $\cM.$ Observe as well that the special form of (\ref{solgeo}) and (\ref{geodes2}) allows us to represent the midpoint between $\bx$ and $\by$ easily. 
As a matter of fact, we have \begin{lemma}\label{midpt} With the notations introduced above, observe that if we put $z_i=\zeta(\bx,\by)_i = H\Big(\frac{1}{2}\big(h(x_i)+h(y_i)\big)\Big),$ then $$d_\phi(\bx,\bz) = d_\phi(\by,\bz) = \frac{1}{2}d_\phi(\bx,\by) = \frac{1}{2}\Big(\sum_{i=1}^K\big(h(y_i) - h(x_i)\big)^2\Big)^{1/2}.$$ \end{lemma} \subsection{Comparison of Bregman and Geodesic distances} Here we shall examine the relationship between the $\phi$-divergence and the distance induced by $\phi.$ Observe, to begin with, that for any three times continuously differentiable function we have $\phi(y)-\phi(x)=\int_x^y\phi'(u)du.$ Applying this once more under the integral sign, and rearranging a bit, we obtain \begin{equation}\label{basic} \phi(y) - \phi(x) -(y - x)\phi'(x) = \int_x^y\phi''(u)(y - u)du. \end{equation} Notice that the left hand side is the building block of the $\phi$-divergence. To make the distance (\ref{geodes2}) appear on the right hand side of (\ref{basic}), we rewrite it as follows. Use the fact that $h'(x)=(\phi''(x))^{1/2},$ and invoke the previous identity applied to $h$ to obtain $$\int_x^y h'(u)(y-u)dh(u) = \int_x^y \Big(h(y) - h(u) -\int_u^y h''(\xi)(y-\xi)d\xi\Big)dh(u).$$ Notice now that $$\int_x^y \Big(h(y) - h(u)\Big)dh(u) = \frac{1}{2}\big(h(y) - h(x)\big)^2.$$ With this, it is clear that $$\phi(y)-\phi(x)-(y-x)\phi'(x) = \frac{1}{2}\big(h(y) - h(x)\big)^2 - \int_x^y\Big(\int_u^y h''(\xi)(y-\xi)d\xi\Big)dh(u).$$ We can use the previous comments to complete the proof of the following result. \begin{theorem}\label{compare} With the notations introduced above, suppose furthermore that $\phi'''(x)$ (and therefore $h''(x)$) has a constant sign. Then \begin{eqnarray}\label{compare2} \bdelta_\phi(\by,\bx)^2 \leq \frac{1}{2}d_\phi(\by,\bx)^2,\;\;\;\mbox{if}\;\;\;\phi''' > 0.\\\label{compare2.1} \bdelta_\phi(\by,\bx)^2 \geq \frac{1}{2}d_\phi(\by,\bx)^2,\;\;\;\mbox{if}\;\;\;\phi''' < 0. 
\end{eqnarray} \end{theorem} This means, for example, that when (\ref{compare2.1}) holds, a minimizer with respect to the geodesic distance yields a smaller approximation error than the corresponding minimizer with respect to the divergence. The inequalities in Theorem \ref{compare} lead to the following result. \begin{theorem}\label{compmeans1} Let $\{\bx_1,...,\bx_n\}$ be a set of points in $\cM,$ and let $\bx_\phi^*$ and $\bx_d^*$ respectively denote the points in $\cM$ closest to that set in $\phi$-divergence and in geodesic distance. Then, for example, when (\ref{compare2.1}) holds, $$\sum_{i=1}^n \bdelta_\phi(\bx_i,\bx_\phi^*)^2 \geq \frac{1}{2}\sum_{i=1}^n d_\phi(\bx_i,\bx_d^*)^2.$$ \end{theorem} \begin{proof}$\,$ If (\ref{compare2.1}) holds, then $\sum_{i=1}^n \bdelta_\phi(\bx_i,\bx)^2 \geq \frac{1}{2}\sum_{i=1}^n d_\phi(\bx_i,\bx)^2$ for any $\bx\in\cM.$ Therefore, to begin with, since $\bx_d^*$ minimizes the right hand side, we have $\sum_{i=1}^n \bdelta_\phi(\bx_i,\bx)^2 \geq \frac{1}{2}\sum_{i=1}^n d_\phi(\bx_i,\bx_d^*)^2$ for any $\bx\in\cM.$ Now minimizing with respect to $\bx$ on the left hand side of this inequality we obtain the desired result. \end{proof} \noindent That is, the approximation error is smaller for the minimizer computed with the geodesic distance than for that computed with the divergence. We postpone the explicit computation of $\bx_d^*$ to Section 4, where we show how to compute sample estimators. {\bf Comment} Note that we can think of (\ref{basic}) as a way to construct a convex function starting from its second derivative. What the previous result asserts is that if we start from a positive but strictly decreasing function, we generate a divergence satisfying (\ref{compare2.1}), whereas if we start from a positive and strictly increasing function, we generate a divergence satisfying (\ref{compare2}). This is why we included the second and third examples. 
Even though they would seem to be related by a simple reflection at the origin, their predictive properties are different. Note that when $\phi'''$ is identically zero, as in the first example of the list in Table \ref{tab1}, the two distances coincide. This example is the first case treated in the examples described below. The other examples are standard examples used to define Bregman divergences. Note as well that when $\phi(x)=x^p$ with $1<p<2,$ the derived distance has a smaller prediction error than the prediction error in divergence, whereas when $p>2$ the prediction error in divergence is smaller than the prediction error in its derived distance. And we already noted that for $p=2$ both coincide. But comparing the $d$-metric with the Euclidean metric does not seem to be an easy task. \subsection{Examples of distances related to a Bregman divergence} \subsubsection{Case 1: $\phi = x^2/2$} In this case $\phi''(x)=1$ and $\phi'''(x)=0.$ The geodesics are the straight lines in $\mathbb{R}^K$ and the induced distance is the standard Euclidean distance $$d_\phi(\bx,\by)^2 = \sum_{i=1}^K (x_i - y_i)^2.$$ \subsubsection{Case 2: $\phi(x)=e^x$} Now $\phi''(x)=\phi'''(x)=e^x.$ The solution to the geodesic equation (\ref{geoeq}) is given by $x_i(t)=2\ln\Big(e^{x_i/2} + k_it\Big),\;i=1,...,K,$ and therefore $k_i=e^{y_i/2}-e^{x_i/2}.$ The geodesic distance between $\bx$ and $\by$ is given by $$d_\phi(\bx,\by)^2 = \sum_{i=1}^K (e^{y_i/2}-e^{x_i/2})^2.$$ \subsubsection{Case 3: $\phi(x)=e^{-x}$} Now $\phi''(x)=e^{-x}$ but $\phi'''(x)=-e^{-x}.$ The solution to the geodesic equation (\ref{geoeq}) is given by $x_i(t)=-2\ln\Big(e^{-x_i/2} + k_it\Big),\;i=1,...,K,$ and therefore $k_i=e^{-y_i/2}-e^{-x_i/2}.$ The geodesic distance between $\bx$ and $\by$ is given by $$d_\phi(\bx,\by)^2 = \sum_{i=1}^K (e^{-y_i/2}-e^{-x_i/2})^2.$$ \subsubsection{Case 4: $\phi(x)=x\ln x$} This time our domain is $\cM=(0,\infty)^K$ and $\phi''(x)=1/x$ whereas $\phi'''(x)=-1/x^2.$ The solution to the 
geodesic equation (\ref{geoeq}) is given by $x_i(t) =\Big(\sqrt{x_i} + k_it\Big)^2,\;i=1,...,K,$ where $k_i=\sqrt{y_i}-\sqrt{x_i}.$ Therefore, the geodesic distance between $\bx$ and $\by$ is $$d_\phi(\bx,\by)^2 = \sum_{i=1}^K (\sqrt{y_i} - \sqrt{x_i})^2.$$ This looks similar to the Hellinger distance used in probability theory; see Pollard \cite{P}. \subsubsection{Case 5: $\phi(x)=-\ln x$} To finish, we shall consider another example on $\cM=(0,\infty)^K.$ Now, $\phi''(x)=1/x^2$ and $\phi'''(x)=-2/x^3.$ The geodesics turn out to be given by $x_i(t)=x_ie^{k_it},$ where $k_i=\ln\big(y_i/x_i\big),$ which yields the representation $\bx(t)=\bx^{(1-t)}\by^{t}.$ Recall that all operations are to be understood componentwise (vectors are functions on $\{1,...,K\}$). The distance between $\bx$ and $\by$ is now given by $$d_\phi(\bx,\by)^2 = \sum_{i=1}^K (\ln y_i - \ln x_i)^2.$$ \subsection{The semi-parallelogram law of the geodesic distances} As a consequence of Lemma \ref{midpt} and the way the geodesic distances are related to the Euclidean distance through a bijection, we have the following result: \begin{theorem}\label{spl} With the notations introduced in the examples listed above, the sets $\cM$ with the corresponding geodesic distances satisfy the semi-parallelogram law. That is, in all the cases considered, for any $\bx,\by\in\cM$ there exists a $\bz,$ obtained as in Lemma \ref{midpt}, such that for any $\bv\in\cM$ we have $$d_\phi(\bx,\by)^2 + 4d_\phi(\bv,\bz)^2 \leq 2d_\phi(\bv,\bx)^2 + 2d_\phi(\bv,\by)^2.$$ \end{theorem} That is, for separable Bregman divergences, the induced Riemannian geometry is a Tits-Bruhat geometry. The semi-parallelogram property is handy in proofs of uniqueness. 
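The map $h$ reduces each of these geodesic distances to a Euclidean one, which makes them easy to evaluate and test numerically. The following sketch (ours; the dictionary of maps is our own bookkeeping, and in the exponential cases $h$ is written only up to a constant factor) checks the midpoint property of Lemma \ref{midpt} in the logarithmic case, where the midpoint is the componentwise geometric mean.

```python
import numpy as np

# Primitives h of sqrt(phi'') for the five cases of Table 1 (our labels;
# for e^x and e^{-x} we drop an irrelevant constant factor in h):
H_MAPS = {
    "x^2/2":  lambda x: x,                # Euclidean case
    "e^x":    lambda x: np.exp(x / 2.0),
    "e^{-x}": lambda x: -np.exp(-x / 2.0),
    "x ln x": lambda x: np.sqrt(x),
    "-ln x":  lambda x: np.log(x),
}

def d_phi(h, x, y):
    """Geodesic distance d_phi(x, y) = ||h(y) - h(x)||."""
    return np.linalg.norm(h(y) - h(x))

# Midpoint check for phi = -ln x: z = H((h(x) + h(y)) / 2) = sqrt(x * y),
# the componentwise geometric mean, equidistant from x and y.
h = H_MAPS["-ln x"]
x = np.array([1.0, 4.0])
y = np.array([9.0, 1.0])
z = np.sqrt(x * y)
print(d_phi(h, x, z), d_phi(h, y, z), 0.5 * d_phi(h, x, y))  # all three agree
```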
\section{$\cL_2$-conditional expectations related to Riemannian metrics derived from a Bregman divergence} As we do not have a distinguished point in $\cM$ which is the identity with respect to a commutative operation on $\cM,$ in order to define a squared norm for $\cM$-valued random variables we begin by introducing the following notation. \begin{definition}\label{norm} We shall say that a $\cM$-valued random variable is integrable or square integrable, and write $\bX\in\cL^\phi_p$ (for $p=1,2$), whenever $$D_\phi(\bX,\bx_0)^p = E\Big[\big(d_\phi(\bX,\bx_0)\big)^p\Big] < \infty$$ \noindent for some $\bx_0\in\cM.$ It is clear from the triangular inequality that this definition is independent of $\bx_0.$ \end{definition} More important is the following simple result. \begin{lemma}\label{simple2} With the notations introduced above, from (\ref{geodes2}) it follows that $\bX\in\cL^\phi_p$ is equivalent to $h(\bX)\in\cL_p.$ \end{lemma} With identity (\ref{geodes2}) in mind, it is clear that for $\bX,\bY\in\cL^\phi_2$ the distance on $\cM$ extends to a distance between random variables by \begin{equation}\label{dist3} D_\phi(\bX,\bY)^2 = E\Big[\big(d_\phi(\bX,\bY)\big)^2\Big] = E[\|h(\bX) - h(\bY)\|^2]. \end{equation} Now that we have this definition in place, the extension of Theorem \ref{compare} to this case can be stated as follows. \begin{theorem}\label{compare3} For any pair $\bX,\bY$ of $\cM$-valued random variables such that the quantities written below are finite, we have \begin{eqnarray}\label{compare3.1} \Delta_\phi(\bY,\bX)^2 \leq \frac{1}{2}D_\phi(\bY,\bX)^2,\;\;\;\mbox{if}\;\;\;\phi''' > 0.\\\label{compare3.2} \Delta_\phi(\bY,\bX)^2 \geq \frac{1}{2}D_\phi(\bY,\bX)^2,\;\;\;\mbox{if}\;\;\;\phi''' < 0.\\\nonumber \end{eqnarray} \end{theorem} We can now move on to the determination of best predictors in the $D_\phi$ distance. 
\begin{theorem}\label{dcondexp} Let $(\Omega,\cF,\bbP)$ be a probability space and let $\cG$ be a sub-$\sigma$-algebra of $\cF.$ Let $\bX$ be a $\cM$-valued random variable such that $h(\bX)$ is $\bbP$-square integrable. Then $$E_d[\bX|\cG] = H\Big(E[h(\bX)|\cG]\Big).$$ \end{theorem} Keep in mind that both $h$ and its inverse $H$ act componentwise. This theorem has a curious corollary, to wit: \begin{corollary}[Intertwining] With the notations in the statement of the last theorem, we have $$h\Big(E_d[\bX|\cG]\Big) = E[h(\bX)|\cG].$$ \end{corollary} \subsection{Comparison of prediction errors} As a corollary of Theorem \ref{compare3} we can compare the prediction errors in the $d$-metric and in divergence. \begin{theorem}\label{prederror} With the notations of Theorem \ref{compare3}, we have \begin{eqnarray}\label{prederror1} \Delta_\phi(\bX,E[\bX|\cG])^2 \leq \frac{1}{2}D_\phi(\bX,E_d[\bX|\cG])^2,\;\;\;\mbox{if}\;\;\;\phi''' > 0.\\\label{prederror2} \Delta_\phi(\bX,E[\bX|\cG])^2 \geq \frac{1}{2}D_\phi(\bX,E_d[\bX|\cG])^2,\;\;\;\mbox{if}\;\;\;\phi''' < 0. \end{eqnarray} \end{theorem} The proof is simple. For the second case, say, begin with (\ref{compare3.2}): since the right hand side decreases when $\bY$ is replaced by $E_d[\bX|\cG],$ we have $\Delta_\phi(\bX,\bY)^2 \geq \frac{1}{2}D_\phi(\bX,E_d[\bX|\cG])^2$ for any $\bY$ with the appropriate integrability. Now minimize the left hand side of the last inequality with respect to $\bY$ to obtain the desired conclusion. \subsection{Examples of conditional expectations} Even though the contents of the next table are obvious, they are worth recording. There we display the conditional expectations of a $\cM$-valued random variable $\bX$ in the metrics derived from the divergences listed in Table \ref{tab1}. 
\begin{table}[h] \centering \begin{tabular}{|c|c|c|c|}\hline Domain & $\phi$ & $h$ & Conditional Expectation\\\hline $\mathbb{R}$ & $x^2$ & $x$ & $E[\bX|\cG]$\\\hline $\mathbb{R}$ & $e^x$ & $e^{x/2}$ & $2\ln\Big(E[e^{\bX/2}|\cG]\Big)$\\\hline $\mathbb{R}$ & $e^{-x}$ & $e^{-x/2}$ & $2\ln\Big(\frac{1}{E[e^{-\bX/2}|\cG]}\Big)$\\\hline $(0,\infty)$ & $x\ln x$ & $\sqrt{x}$ & $\Big(E[\sqrt{\bX}|\cG]\Big)^2$ \\\hline $(0,\infty)$ & $-\ln x$ & $\ln(x)$ & $\exp\Big(E[\ln(\bX)|\cG]\Big)$\\ \hline \end{tabular} \caption{Conditional expected values in the $d_\phi$ metric} \label{tab2} \end{table} The only other information that we have about $\cM$ in this context is that it is a convex set in $\mathbb{R}^K.$ But we do not know if it is closed with respect to any group operation. In this regard, see Section 5.1. Thus the only properties of the conditional expectations that we can verify at this point are those that depend only on their definition, and on the corresponding property of $h(\bX)$ with respect to $\bbP.$ \begin{theorem}\label{dcondexp1} With the notations introduced in the previous result, and assuming that all variables mentioned are $D$-integrable, we have\\ {\bf 1)} Let $\cF_0=\{\emptyset,\Omega\}$ be the trivial $\sigma$-algebra; then $E_d[\bX|\cF_0]=E_d[\bX].$\\ {\bf 2)} Let $\cG_1\subset\cG_2$ be two sub-$\sigma$-algebras of $\cF;$ then $E_d[E_d[\bX|\cG_2]|\cG_1]=E_d[\bX|\cG_1].$\\ {\bf 3)} If $\bX$ is $\cG$-measurable, then $E_d[\bX|\cG]=\bX .$\\ As both $h$ and $H$ are defined componentwise, and are increasing, we can also verify the monotonicity properties of the conditional expectations. 
\\ {\bf 4)} Let $\bX,\bY\in\cM$ with $\bX\leq\bY;$ then $E_d[\bX|\cG] \leq E_d[\bY|\cG].$\\ We do not necessarily have a $\mathbf{0}$ vector in $\cM,$ but a monotone convergence property may be stated as\\ {\bf 5)} Let $\{\bX_n:n\geq 1\}$ be a sequence in $\cM$ increasing to $\bX\in\cM,$ and suppose that there exist $\cF$-measurable $\bY_1 \leq\bX_n\leq\bY_2$ with $h(\bY_i)\in\cL_1.$ Then $E_d[\bX_n|\cG]\uparrow E_d[\bX|\cG].$ \end{theorem} \subsection{A simple application} Let us consider the following two strictly positive random variables (that is, $K=1$ and $\cM=(0,\infty)$): $$S(2) = S(0)e^{X+Y}\;\;\mbox{and}\;\;S(1)=S(0)e^{X},$$ where $X\sim N(\mu_1,\sigma_1^2)$ and $Y\sim N(\mu_2,\sigma^2_2)$ are two Gaussian, $\rho$-correlated random variables, with $-1<\rho<1.$ It is a textbook exercise to verify that $$Y_{|X} \sim N(\mu_2+\rho\frac{\sigma_2}{\sigma_1}(X-\mu_1),\sigma_2^2(1-\rho^2)).$$ If we consider the logarithmic distance on $(0,\infty),$ an application of the results in the previous section, taking into account that $S(1)$ and $X$ generate the same $\sigma$-algebra (call it $\cG$), yields $$E_d[S(2)|\cG] = e^{E[\ln(S(2))|\cG]} = S(1)e^{E[Y_{|X}]} = S(1)e^{m},$$ \noindent where we put $m=\mu_2+\rho\frac{\sigma_2}{\sigma_1}(X-\mu_1).$ For comparison, note that the predictor in the Euclidean distance is given by $$E[S(2)|\cG] = S(1)E[e^{Y}|X] = S(1)e^{m+(1-\rho^2)\sigma_2^2/2}.$$ According to Theorem \ref{prederror}, the first predictor is better than the second because its prediction error is smaller. A possible interpretation of this example goes as follows. We might think of $S(0), S(1)$ and $S(2)$ as the price of an asset today, tomorrow and the day after tomorrow. $X$ and $Y$ might be thought of as the daily logarithmic returns. We want a predictor of the price of the asset $2$ days from now, given that we observe the price tomorrow. Then $E[S(2)|S(1)]$ gives us the standard estimator, whereas $E_d[S(2)|S(1)]$ gives us the estimator in logarithmic distance. 
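The two predictors of $S(2)$ can also be compared by simulation. The sketch below (ours; the parameter values are arbitrary) generates correlated Gaussian log-returns and checks that the logarithmic-distance predictor has the smaller error when the error is itself measured in the logarithmic metric.

```python
import numpy as np

rng = np.random.default_rng(0)
mu1, mu2, s1, s2, rho = 0.01, 0.02, 0.2, 0.3, 0.5
n = 200_000

# Correlated Gaussian daily log-returns (X, Y).
cov = [[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]]
X, Y = rng.multivariate_normal([mu1, mu2], cov, size=n).T

S0 = 100.0
S1 = S0 * np.exp(X)
S2 = S0 * np.exp(X + Y)

m = mu2 + rho * (s2 / s1) * (X - mu1)                   # E[Y | X]
pred_log = S1 * np.exp(m)                               # E_d[S(2) | S(1)]
pred_euc = S1 * np.exp(m + (1 - rho**2) * s2**2 / 2.0)  # E[S(2) | S(1)]

# Mean squared error in the logarithmic metric d(a, b) = |ln a - ln b|.
err_log = np.mean((np.log(S2) - np.log(pred_log)) ** 2)
err_euc = np.mean((np.log(S2) - np.log(pred_euc)) ** 2)
print(err_log < err_euc)  # True: E_d minimizes the logarithmic error
```

In the logarithmic metric the error of $E_d[S(2)|S(1)]$ concentrates around the conditional variance $(1-\rho^2)\sigma_2^2$ of $Y$ given $X.$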
\section{Sample estimation in the Riemannian metric derived from a Bregman divergence}\label{estimation2} In this section we address the issue of sample estimation of expected values in the $d_\phi$ metric. That is, how to estimate \begin{equation}\label{dmean} E_d[\bX] = H\Big(E[h(\bX)]\Big) \end{equation} \noindent when all that we have is a sample $\{\bx_1,...,\bx_n\}$ of $\bX.$ The sample estimator is defined to be the point $\bS_n(\{\bx_k\}) \in \cM$ that minimizes the aggregate distance (``cost'' function) $$\sum_{k=1}^n d_\phi(\bx_k,\bv)^2 = \sum_{k=1}^n \|h(\bx_k) - h(\bv)\|^2$$ \noindent when $\bv$ ranges over $\cM.$ Clearly, for the geodesic distance computed in (\ref{geodes2}) the minimizer is easy to compute. Again, as $h$ and $H$ are bijections, we have \begin{equation}\label{est2} \bS_n(\{\bx_k\}) = H\Big(\frac{1}{n}\sum_{k=1}^n h(\bx_k)\Big). \end{equation} Recall that this identity is to be understood componentwise, that is, both sides are vectors in $\cM.$\\ Certainly, $d_\phi$-mean of the set $\{\bx_1,...,\bx_n\}$ is a good name for $\bS_n.$ Given the special form (\ref{dmean}) of $E_d[\bX],$ it is clear that (\ref{est2}) defines an unbiased estimator of the $d_\phi$-mean. At this point we mention that we leave it as an exercise for the reader to use the semi-parallelogram law to verify the uniqueness of the minimizer (\ref{est2}) of the distance to a set of points. The worth of (\ref{est2}) shows in the proof of the law of large numbers. But first we need to note that the error in estimating $\bX$ by its expected value, that is, the variance of $\bX,$ is \begin{equation}\label{dvar} \sigma_d^2(\bX) = E[\big(h(\bX) - h(E_d[\bX])\big)^2]. \end{equation} In this case, as with the standard proof of the weak law of large numbers, we have \begin{theorem}\label{lln} Suppose that $\{\bX_k : k\geq 1\}$ is a sequence of i.i.d. $\cM$-valued random variables with finite variance $\sigma^2_d.$ Then $\bS_n(\{\bX_k\})\to E_d[\bX]$ in probability. 
\end{theorem} Proceeding as in the case of Euclidean geometry, we have \begin{theorem}\label{estvar} With the same notations and assumptions as in the previous result, define the estimator of the variance by $$\hat{\sigma}_d^2 = \frac{1}{(n-1)}\sum_{k=1}^n \big(h(\bX_k) - h(\bS_n(\{\bX_k\}))\big)^2.$$ Then $\hat{\sigma}_d^2$ is an unbiased estimator of $\sigma_d^2(\bX).$ \end{theorem} Comment: Observe that $\hat{\sigma}_d^2$ is a positive, real random variable, so its expected value is the standard expected value. \section{Arithmetic properties of the expectation operation}\label{group} When there is a commutative group operation on $\cM$ that leaves the metric invariant, the best predictors have additional properties. The two standard examples that we have in mind are $\cM=\mathbb{R}^d$ with the standard addition of vectors, and $\cM=(0,\infty)^d$ with componentwise multiplication. For definiteness, let us denote the group operation by $\bx_1\circ\bx_2,$ the inverse of $\bx$ with respect to that operation by $\bx^{-1},$ and the identity for that operation by $\be.$ We suppose that the distance is invariant (that is, the group operation is an isometry): \begin{equation}\label{ginv} d(\bx\circ\bv,\by\circ\bv)=d(\bx,\by)\;\;\mbox{for any}\;\;\bx,\,\by,\,\bv \in \cM. \end{equation} Some simple consequences of this fact are the following. To begin with, we can define a norm derived from the distance by $|\bx|_d=d(\be,\bx).$ We leave it up to the reader to verify that in this notation the triangle inequality for $d$ becomes $|\bx\circ\by^{-1}|_d=d(\bx,\by) \leq |\bx|_d + |\by|_d,$ and that this implies that $|\bx|_d = |\bx^{-1}|_d.$ Let us now examine two examples of the situation described above. 
For the first example in Table \ref{tab1}, in which the conditional expectation in divergence and in the distance derived from it coincide, we know that the conditional expectation is linear. In the last example in Table \ref{tab1}, the analogue of multiplication by a scalar is (componentwise) exponentiation. In this case, we saw that the conditional expectation of a strictly positive random variable $\bX$ with respect to a $\sigma$-algebra $\cG$ is $$E_d[\bX|\cG] = e^{E[\ln(\bX)|\cG]}.$$ It is easy to verify, and it is proved in \cite{GH}, that \begin{theorem}\label{lin} Let $\bX_1$ and $\bX_2$ be two $(0,\infty)^d$-valued random variables which are $\bbP$-integrable in the logarithmic metric. Let $a_1$ and $a_2$ be two real numbers; then $$E_d[\bX_1^{a_1}\bX_2^{a_2}|\cG] = \Big(E_d[\bX_1|\cG]\Big)^{a_1}\Big(E_d[\bX_2|\cG]\Big)^{a_2}.$$ \end{theorem} \section{Concluding comments} \subsection{General comments about prediction} A predictive procedure involves several aspects. To begin with, we have to specify the nature of the set in which the random variables of interest take values and the class of predictors that we are interested in. Next comes the criterion, cost function or error function used to quantify the ``betterness'' of a predictor, and finally, we need some way to decide on the uniqueness of the best predictor. We mentioned at the outset that, using the notion of divergence function, there exists a notion of best predictor for random variables taking values in convex subsets $\cM$ of some $\mathbb{R}^d,$ which, somewhat surprisingly, coincides with the standard least squares best predictor. The fact that in the Riemannian metric on $\cM$ derived from a divergence function a notion of best predictor exists suggests the possibility of extending the notion of best predictor to Tits-Bruhat spaces. These are complete metric spaces whose metric satisfies the semi-parallelogram law stated in Theorem \ref{spl}. 
Using the completeness of the space, the notion of the ``mean'' of a finite set, as the point that minimizes the sum of the squares of the distances to the points of the set, or that of best predictor, is easy to establish. And using the semi-parallelogram law, the uniqueness of the best predictor can be established. The best predictors can be seen to have some of the properties of the conditional expectation, except those that depend on the underlying vector space structure of $\cM,$ like Jensen's inequality and the ``linearity'' of the best predictor. \subsection{Other remarks} In some cases it is interesting to consider the Legendre-Fenchel duals of the convex function generating the divergence; see \cite{BDGMM}, \cite{BNN} or \cite{N} for example. The Bregman divergences induce a dually flat space, and conversely, we can associate a canonical Bregman divergence to a dually flat space.\footnote{Thanks to Frank Nielsen for the remark} The derived metric in this case is the (algebraic) inverse of the original metric, and it generates the same distance; see \cite{AC} for this. Therefore the same comparison results hold true in this case as well. As remarked at the end of Section 2.1, to compare the derived metrics to the standard Euclidean metric, and therefore to compare the prediction errors (or the $d$-variance to the standard variance of a $\cM$-valued random variable), does not seem to be an easy task. This is a pending issue to be settled. We saw that the set $\cM$ in which the random variables of interest take values may be equipped with more than one distance. The results presented above open up the door to the following conceptual (or methodological) question: Which is the correct distance to be used to make predictions about $\cM$-valued random variables? Another pending issue concerns the general case in which $\Phi(\bx)$ is not of the type (\ref{convf}). In this case, by suitable localization we might reproduce the results of Section 2 locally. 
The problem is then to paste together the local representations of the geodesics and of the rest of the construction. We saw as well that when there is no algebraic structure on $\cM,$ some properties of the estimators are related only to the metric properties of the space, while when there is a commutative operation on $\cM,$ the best estimators have further algebraic properties. In reference to the examples in Section 2, an interesting question is which metrics admit a commutative group operation that leaves them invariant. \section{Appendix: Integration of the geodesic equations} Consider (\ref{geoeq}), that is $$\phi''(x_i)\ddot{x}_i + \frac{1}{2}\phi'''(x_i)\dot{x}_i^2 = 0.$$ This is the Euler-Lagrange equation of the Lagrangian $L(x,\dot{x})=g\dot{x}^2/2,$ where we put $\phi''(x)=g(x).$ Notice now that if we make the change of variables $y=h(x),$ where $h'(x)=g^{1/2}(x),$ in the new coordinates we can write the Lagrangian function as $L(y,\dot{y})=\dot{y}^2/2.$ In these new coordinates the geodesics are straight lines $$y(t) = y(0) + kt.$$ If at $t=0$ the geodesic starts at $x_0$ (or at $y_0=h(x_0)$) and at $t=1$ it is at $x_1$ (or at $y_1=h(x_1)$), we obtain $k=h(x_1)-h(x_0).$ {\bf Acknowledgment} I want to thank Frank Nielsen for his comments and suggestions on the first draft of this note.
\section{Social choice and Boolean functions} \label{sec:arrow} We begin by discussing a problem concerning voting. This will motivate for us certain definitions involving \emph{Boolean functions}; i.e.,~functions $f \btb$ (or more generally, $f\btR$) whose domain consists of \emph{$n$-bit strings}. Suppose we have an election with~$n$ voters and $2$~candidates, named~$-1$ and~$1$. A \emph{voting rule} is simply any Boolean function $f \btb$, mapping the voters' votes to the winner of the election. The \emph{majority rule} $\Maj_n \btb$, defined (for $n$~odd) by $\Maj_n(x) = \sgn(x_1 + x_2 + \cdots + x_n)$, is perhaps the most natural and mathematically elegant voting rule, but a variety of others are used in practice. Several countries (the US and the UK, for example) elect their head of state via a two-level (weighted-)majority scheme. Other countries, unfortunately, have been known to use a \emph{dictator} rule: $f(x) = x_i$ for some dictator $i \in [n]$. The mathematical field of \emph{social choice} is concerned with the properties of various voting rules; for a survey, see e.g.~\cite{BGR09}. Let's now imagine a twist on the scenario: The $n$ voters decide on their votes, $x = (x_1, \dots, x_n) \in \bn$. However, due to faulty voting machines, each vote is independently \emph{misrecorded} with probability~$\delta \in [0,1]$. We denote the resulting list of votes by $\by \in \bn$, and call it a \emph{noisy copy} of the original votes~$x$. We now ask: \emph{What is the probability that the noise affects the outcome of the election? How does this probability depend on the voting rule~$f$?} To answer this question we also need a probabilistic model for how the original votes are cast. We make the simplest possible assumption --- that they are uniformly random, denoted $\bx \sim \bn$. In the social choice literature this is called the Impartial Culture Assumption~\cite{GK68}. 
Let's introduce some mathematical notation for our scenario, using the more convenient parameter $\rho = 1-2\delta \in [-1,1]$: \begin{definition} Given $x \in \bn$ and $\rho \in [-1,1]$, we say that the random vector $\by$ is a \emph{$\rho$-correlated copy} of~$x$ if each coordinate $\by_i$ is independently set to~$x_i$ with probability $\half(1+\rho)$ and set to $-x_i$ with probability $\half(1-\rho)$. (For the more common case of $\rho \geq 0$, this is equivalent to setting $\by_i = x_i$ with probability~$\rho$ and making~$\by_i$ uniformly random with probability $1-\rho$.) When $\bx \sim \bn$ is uniformly random and $\by$ is a $\rho$-correlated copy of~$\bx$, we call $(\bx, \by)$ a \emph{$\rho$-correlated random pair of strings}. Note that this is actually symmetric in $\bx$ and $\by$; an alternative definition is that each pair $(\bx_i, \by_i) \in \bits^2$ is chosen independently with $\E[\bx_i] = \E[\by_i] = 0$ and $\E[\bx_i\by_i] = \rho$. \end{definition} \begin{definition} For $\rho \in [-1,1]$, the operator $\T_\rho$ acts on Boolean functions ${f \btR}$ via \[ \T_\rho f(x) = \E_{\by \text{ a $\rho$-correlated copy of~$x$}}[f(\by)]. \] We also define the \emph{noise stability of~$f$ at~$\rho$} to be \[ \Stab_\rho[f] = \E_{\bx \sim \bn}[f(\bx) \cdot \T_\rho f(\bx)] = \E_{\substack{(\bx, \by) \textnormal{ $\rho$-correlated} \\ \textnormal{strings}}}[f(\bx)f(\by)]. \] Note that in the special case $f \btb$, \[ \Stab_\rho[f] = 1 - 2\Pr_{\substack{(\bx, \by) \textnormal{ $\rho$-correlated} \\ \textnormal{strings}}}[f(\bx) \neq f(\by)]. \] \end{definition} Returning to the election scenario in which the voters' votes are misrecorded with probability~$\delta$, we see that the probability this affects the outcome of the election is precisely $\half - \half \Stab_{1-2\delta}[f]$. Thus the voting rules that minimize this probability are precisely those which maximize the noise stability $\Stab_{1-2\delta}[f]$. 
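These definitions are easy to check by simulation. The following is a minimal Monte Carlo sketch (the function names, seed, and sample sizes are our own illustrative choices): it estimates the probability $\half - \half \Stab_{1-2\delta}[f]$ that misrecorded votes flip the outcome, for majority and for a dictator rule.

```python
import random

def maj(x):
    # majority rule for odd n: the sign of the sum of the votes
    return 1 if sum(x) > 0 else -1

def noisy_copy(x, delta, rng):
    # independently misrecord each vote with probability delta
    return [-v if rng.random() < delta else v for v in x]

def flip_probability(f, n, delta, trials, seed=1):
    # estimate Pr[f(x) != f(y)] = 1/2 - 1/2 * Stab_{1-2*delta}[f]
    rng = random.Random(seed)
    flips = 0
    for _ in range(trials):
        x = [rng.choice((-1, 1)) for _ in range(n)]
        if f(x) != f(noisy_copy(x, delta, rng)):
            flips += 1
    return flips / trials

p_dict = flip_probability(lambda x: x[0], 11, 0.10, 20000)  # equals delta in expectation
p_maj = flip_probability(maj, 11, 0.10, 20000)
```

With $\delta = 0.1$ the dictator's outcome flips with probability exactly $\delta$, while $\Maj_{11}$ flips noticeably more often, in line with the maximality of dictators discussed next.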
Let's focus on the more natural case of $0 < \rho < 1$, i.e., $0 < \delta < \half$. It's obvious that the Boolean functions $f \btb$ that maximize $\Stab_\rho[f]$ are precisely the two constant functions $f(x) = \pm 1$. These functions are highly unfair as voting rules, so it's natural to make an assumption that rules them out. One common such assumption is that $f$ is \emph{unbiased}, meaning $\E[f(\bx)] = 0$; in other words, the two outcomes $\pm 1$ are equally likely when the voters vote uniformly at random. A stronger, but still very natural, assumption is that~$f$ is \emph{odd}, meaning $f(-x) = -f(x)$. In the social choice literature this is called \emph{neutrality}, meaning that the voting rule is not affected by changing the names of the candidates. We might now ask which \emph{unbiased} functions $f \btb$ maximize $\Stab_\rho[f]$. This problem can be solved easily using \emph{Fourier analysis of Boolean functions}, the basic facts of which we now recall: \begin{fact} Any $f \btR$ can be uniquely expressed as a multilinear polynomial, \[ f(x) = \sum_{S \subseteq [n]} \wh{f}(S) \prod_{i \in S} x_i. \] This is called the \emph{Fourier expansion} of $f$, and the coefficients $\wh{f}(S) \in \R$ are called the \emph{Fourier coefficients} of~$f$. We have \emph{Parseval's formula}, \[ \E_{\bx \sim \bn} [f(\bx) g(\bx)] = \sum_{S \subseteq [n]} \wh{f}(S)\wh{g}(S). \] In particular, if $f \btb$ then $\sum_S \wh{f}(S)^2 = 1$. \end{fact} \begin{fact} \label{fact:Trho-fourier} The Fourier expansion of $\T_\rho f$ is \[ \T_\rho f(x) = \sum_{S \subseteq [n]} \rho^{|S|} \wh{f}(S) \prod_{i \in S} x_i \] and hence $\Stab_\rho[f] = \sum_{S} \rho^{|S|} \wh{f}(S)^2$. \end{fact} Using these facts, the following is an exercise: \begin{fact} \label{fact:stab-max} Assume $0 < \rho < 1$. Then $\Stab_\rho[f] \leq \rho$ holds for all unbiased ${f \btb}$, with equality iff $f$ is a (possibly negated) dictator function, $f(x) = \pm x_i$.
Furthermore, $\Stab_{-\rho}[f] \geq -\rho$ holds for \emph{all} ${f \btb}$, not necessarily unbiased, with the same equality conditions. \end{fact} This conclusion is somewhat disappointing from the standpoint of election fairness; it says that if our goal is to choose a voting rule that minimizes the effect of misrecorded votes (assuming $0 < \delta < \frac12$), the ``best'' choice is dictatorship (or negated-dictatorship). Incidentally, this is precisely the disappointment that occurs in \emph{Arrow's Theorem}~\cite{Arr50}, the seminal result in social choice theory. In brief, Arrow's Theorem is concerned with what happens when~$n$ voters try to rank \emph{three} candidates by means of holding three pairwise elections using Boolean voting rule~$f$. The well-known \emph{Condorcet Paradox}~\cite{dC85} is that for some~$f$ --- including $f = \Maj_n$ --- it is possible to get an ``irrational'' outcome in which the electorate prefers Candidate~$A$ to Candidate~$B$, prefers Candidate~$B$ to Candidate~$C$, and prefers Candidate~$C$ to Candidate~$A$. Arrow showed that the only~$f$'s which \emph{always} yield ``rational'' outcomes are dictators and negated-dictators. Kalai~\cite{Kal02} gave a very elegant Fourier-analytic proof of Arrow's Theorem by noting that when the voters' individual rankings are uniformly random, the probability of a rational outcome is precisely $\frac34 - \frac34 \Stab_{-\frac13}[f]$ (which also equals $\frac34 + \frac34 \Stab_{\frac13}[f]$ for odd~$f$). Then Arrow's conclusion follows from Fact~\ref{fact:stab-max}. Kalai also obtained a robust version of Arrow's Theorem by using the \emph{FKN Theorem}~\cite{FKN02} from the analysis of Boolean functions: Any $f$ that achieves a rational outcome with probability at least $1-\delta$ must agree with some (negated-)dictator on all but an $O(\delta)$-fraction of inputs. 
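The Fourier facts above can be checked by brute-force enumeration for small~$n$. A sketch (our own illustrative code, exponential in~$n$): it computes the Fourier coefficients of $\Maj_3 = \tfrac12(x_1+x_2+x_3) - \tfrac12 x_1x_2x_3$, verifies Parseval, and compares noise stabilities of majority and dictator at $\rho = \half$.

```python
from itertools import combinations, product

def fourier_coefficients(f, n):
    # \hat f(S) = E over uniform x in {-1,1}^n of f(x) * prod_{i in S} x_i
    inputs = list(product((-1, 1), repeat=n))
    coeffs = {}
    for r in range(n + 1):
        for S in combinations(range(n), r):
            total = 0
            for x in inputs:
                chi = 1
                for i in S:
                    chi *= x[i]
                total += f(x) * chi
            coeffs[S] = total / len(inputs)
    return coeffs

def noise_stability(coeffs, rho):
    # Stab_rho[f] = sum_S rho^{|S|} * \hat f(S)^2
    return sum(rho ** len(S) * c * c for S, c in coeffs.items())

maj3 = lambda x: 1 if sum(x) > 0 else -1
c_maj = fourier_coefficients(maj3, 3)
parseval = sum(v * v for v in c_maj.values())   # 1, since maj3 is {-1,1}-valued
stab_maj = noise_stability(c_maj, 0.5)          # (3*rho + rho**3)/4 = 0.40625
c_dict = fourier_coefficients(lambda x: x[0], 3)
stab_dict = noise_stability(c_dict, 0.5)        # rho itself, the unbiased maximum
```

The dictator attains $\Stab_\rho[f] = \rho$ exactly, and majority falls strictly below it, as Fact~\ref{fact:stab-max} requires.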
Just as we ruled out constant functions~$f$ by insisting on unbiasedness, we might also try to rule out dictatorships (and similar functions) by insisting that~$f$ give only negligible \emph{influence} to each individual voter. Here we refer to the following definitions: \begin{definition} Let $f \btR$. For $i \in [n]$, the \emph{(discrete) $i$th derivative} is \[ \D_i f(x) = \tfrac{f(x_1, \dots, x_{i-1}, 1, x_{i+1}, \dots, x_n) - f(x_1, \dots, x_{i-1}, -1, x_{i+1}, \dots, x_n)}{2} = \sum_{S \ni i} \wh{f}(S) \prod_{j \in S \setminus \{i\}} x_j. \] The \emph{$i$th influence of~$f$} is \[ \Inf_i[f] = \E_{\bx \sim \bn}[\D_if(\bx)^2] = \sum_{S \ni i} \wh{f}(S)^2. \] Note that when $f \btb$ we also have \[ \Inf_i[f] = \Pr_{\bx \sim \bn}[f(\bx) \neq f(\bx_1, \dots, \bx_{i-1}, -\bx_i, \bx_{i+1}, \dots, \bx_n)]. \] \end{definition} If $f \btb$ is a voting rule, $\Inf_i[f]$ represents the probability that the $i$th voter's vote is pivotal for the outcome. (This notion was originally introduced by the geneticist Penrose~\cite{Pen46}; it was independently popularized in the social choice literature by the lawyer Banzhaf~\cite{Ban65}.) The $i$th influence also has an interpretation in terms of the ``geometry'' of the discrete cube graph: if we think of $f \btb$ as the indicator of a vertex set $A \subseteq \bn$, then $\Inf_i[f]$ is the fraction of edges in the $i$th coordinate direction that are on $A$'s boundary. In the interest of fairness, one might want to disallow voting rules $f \btb$ that give unusually large influence to any one voter. This would disqualify a dictator voting rule like $f(x) = x_i$ since it has $\Inf_i[f] = 1$ (which is the maximum possible). On the other hand, the majority voting rule is quite fair in this regard, since all of its influences are quite small: using Stirling's formula one can compute $\Inf_i[\maj_n] \sim \sqrt{\frac{2}{\pi}} \frac{1}{\sqrt{n}} \xrightarrow{n \to \infty} 0$ for all $i \in [n]$.
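The pivotality interpretation of $\Inf_i[f]$ can be computed directly by enumeration. A small sketch (our own, exponential in~$n$): for $\Maj_5$, voter~$i$ is pivotal exactly when the other four votes split $2$-$2$, giving $\Inf_i[\Maj_5] = \binom{4}{2}/2^4 = 0.375$.

```python
from itertools import product

def influence(f, n, i):
    # Inf_i[f] = Pr over uniform x in {-1,1}^n that flipping x_i changes f(x)
    pivotal = 0
    for x in product((-1, 1), repeat=n):
        y = list(x)
        y[i] = -y[i]
        if f(x) != f(tuple(y)):
            pivotal += 1
    return pivotal / 2 ** n

maj = lambda x: 1 if sum(x) > 0 else -1
inf_maj5 = [influence(maj, 5, i) for i in range(5)]             # each 0.375
inf_dict = [influence(lambda x: x[0], 5, i) for i in range(5)]  # [1, 0, 0, 0, 0]
```

The dictator has one influence equal to the maximum value~$1$ and the rest zero, while majority spreads small equal influences across all voters.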
We can now ask a question that will occupy us for a significant portion of this survey: \begin{question} \label{ques:misc} Let ${0 < \rho < 1}$. Assume $f \btb$ is unbiased and satisfies $\max_i \{\Inf_i[f]\} \leq o_n(1)$. How large can $\Stab_\rho[f]$ be? \end{question} We can think of this question as asking for the ``fair'' voting rule that minimizes the effect of misrecorded votes in a noisy election. Alternatively, the case of $\rho = \frac13$ corresponds to asking for the ``fair'' odd voting rule which maximizes the probability of a ``rational'' outcome in the context of Arrow's Theorem. Since majority rule seems like a fair voting scheme, it's natural to ask how well it does. For $n \to \infty$, this can be estimated using the Central Limit Theorem: \begin{align*} \Stab_\rho[\Maj_n] =& \E_{\substack{(\bx, \by) \textnormal{ $\rho$-correlated} \\ \textnormal{strings}}}\left[\sgn\left(\tfrac{\bx_1 + \cdots + \bx_n}{{\sqrt{n}}}\right)\sgn\left(\tfrac{\by_1 + \cdots + \by_n}{{\sqrt{n}}}\right)\right] \\ \xrightarrow{n \to \infty}& \E_{\substack{(\bz, \bz') \textnormal{ $\rho$-correlated} \\ \textnormal{Gaussians}}}[\sgn(\bz)\sgn(\bz')] = 1 - 2\Pr[\sgn(\bz) \neq \sgn(\bz')], \end{align*} where we say $(\bz, \bz')$ is a \emph{$\rho$-correlated pair of Gaussians} if the random variables $\bz,\bz'$ are joint standard normals with $\E[\bz \bz'] = \rho$. An equivalent definition is that ${\bz = \la \vec{u}, \vec{\bg}\ra}$ and $\bz' = \la \vec{v}, \vec{\bg} \ra$, where $\vec{\bg}$ is drawn from the standard $d$-dimensional Gaussian distribution~$\gamma_d$ and $\vec{u}, \vec{v} \in \R^d$ are any two unit vectors satisfying ${\la \vec{u}, \vec{v} \ra = \rho}$. (In particular, we can take $\bz = \vec{\bg}_1$, $\bz' = \rho \vec{\bg}_1 + \sqrt{1-\rho^2} \vec{\bg}_2$.) 
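This construction of a $\rho$-correlated Gaussian pair is easy to simulate. The following Monte Carlo sketch (names, seed, and sample size are our own choices) estimates the sign-disagreement probability $\Pr[\sgn(\bz) \neq \sgn(\bz')]$, which works out to $\frac{1}{\pi}\arccos\rho$, as stated next.

```python
import math
import random

def correlated_pair(rho, rng):
    # z = g1, z' = rho*g1 + sqrt(1 - rho^2)*g2, with g1, g2 independent N(0,1)
    g1, g2 = rng.gauss(0, 1), rng.gauss(0, 1)
    return g1, rho * g1 + math.sqrt(1 - rho * rho) * g2

def sign_disagreement(rho, trials=200000, seed=1):
    # estimate Pr[sgn(z) != sgn(z')] for a rho-correlated Gaussian pair
    rng = random.Random(seed)
    bad = 0
    for _ in range(trials):
        z, zp = correlated_pair(rho, rng)
        if (z > 0) != (zp > 0):
            bad += 1
    return bad / trials

rho = 1 / 3
est = sign_disagreement(rho)
exact = math.acos(rho) / math.pi   # arccos(1/3)/pi, about 0.392
```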
Using this latter definition, it's not hard to verify the following old~\cite{She99} fact: \begin{proposition}[Sheppard's Formula] \label{prop:sheppard} If $(\bz, \bz')$ are $\rho$-correlated Gaussians, $-1 \leq \rho \leq 1$, then $\Pr[\sgn(\bz) \neq \sgn(\bz')] = \tfrac{1}{\pi} \arccos \rho$. \end{proposition} Taking care with the error term in the Central Limit Theorem, one may deduce: \begin{proposition} \label{prop:maj-stab} For fixed $-1 < \rho < 1$, \[ \Stab_\rho[\Maj_n] = 1 - \tfrac{2}{\pi} \arccos \rho + O(\tfrac{1}{\sqrt{n}}). \] \end{proposition} \noindent As a corollary, the probability of a ``rational'' outcome when using $\maj_n$ in a three-way election tends to $\frac{3}{2\pi} \arccos(-\frac13) \approx 91\%$, a fact known as \emph{Guilbaud's Theorem}~\cite{Gui52}.\\ Is there a ``fair'' voting rule with even higher noise stability? In 2004, Khot et~al.~\cite{KKMO04,KKMO07} conjectured the result below, stating that majority essentially gives the best possible answer to Question~\ref{ques:misc}. A year later their conjecture was proven by Mossel et~al.~\cite{MOO05,MOO10}: \begin{theorem}[``Majority Is Stablest Theorem''] \label{thm:mist} Fix $0 < \rho < 1$. Assume ${f \btI}$ satisfies $\E[f(\bx)] = 0$ and $\max_i \{\Inf_i[f]\} \leq \eps$. Then \[ \Stab_\rho[f] \leq 1 - \tfrac{2}{\pi} \arccos \rho + o_\eps(1). \] (Furthermore, for $-1 < \rho < 0$ the inequality holds in reverse and the hypothesis $\E[f(\bx)] = 0$ is unnecessary.) \end{theorem} Peculiarly, the motivation in Khot et~al.~\cite{KKMO04} for conjecturing the above had nothing to do with social choice and voting. Instead, the conjecture was precisely what was needed to establish the computational complexity of finding approximately maximum \emph{cuts} in graphs. We discuss this motivation next. \section{The computational complexity of Max-Cut} \label{sec:max-cut} The \emph{Max-Cut} problem is the following fundamental algorithmic task: Given as input is an undirected graph $G = (V,E)$. 
The goal is to find a partition ${V = V^+ \cup V^-}$ so as to maximize the fraction of \emph{cut} edges. Here we say $e \in E$ is ``cut'' if it has one endpoint in each of $V^\pm$. We write $\Opt(G)$ to denote the value of the best possible solution; i.e., the maximum fraction of edges in~$G$ that can be cut. For example, $\Opt(G) = 1$ iff $G$ is bipartite. Unfortunately, the Max-Cut problem is known to be \emph{$\NP$-hard}~\cite{Kar72}. This means that there is no efficient (i.e., $\poly(|V|)$-time) algorithm for determining $\Opt(G)$, assuming the well-believed $\PTIME \neq \NP$ Conjecture. Under the closely related \linebreak $\coNP \neq \NP$ Conjecture, we can also state this difficulty as follows: It is \emph{not} true that whenever~$G$ is a graph satisfying $\Opt(G) \leq \beta$, there is a short (i.e., $\poly(|V|)$-length) proof of the statement ``$\Opt(G) \leq \beta$''. Max-Cut is perhaps the simplest nontrivial \emph{constraint satisfaction problem (CSP)}. Rather than formally defining this class of problems, we'll simply give two more examples. In the \emph{Max-3Lin} problem, given is a system of equations over~$\F_2$, each of the form ``$x_{i_1} + x_{i_2} + x_{i_3} = b$''; the task is to find an assignment to the variables $x_1, \dots, x_n$ so as to maximize the fraction of satisfied equations. In the \emph{Max-3Coloring} problem, given is an undirected graph; the task is to color the vertices using~$3$ colors so as to maximize the fraction of bichromatic edges. For all of these CSPs the task of determining $\Opt(\cdot)$ is $\NP$-hard. One way to cope with this difficulty is to seek \emph{approximation algorithms}: \begin{definition} Let $0 \leq \alpha \leq \beta \leq 1$. Algorithm $\calA$ is said to be \emph{$(\alpha,\beta)$-approximating} for a certain CSP (e.g., Max-Cut) if it has the following guarantee: For every input~$G$ satisfying $\Opt(G) \geq \beta$, the algorithm finds a solution of value at least~$\alpha$. 
If $\calA$ is a randomized algorithm, we allow it to achieve value at least~$\alpha$ \emph{in expectation}. Note that a fixed~$\calA$ may be $(\alpha,\beta)$-approximating for many pairs $(\alpha, \beta)$ simultaneously. \end{definition} \begin{example} There is a simple greedy algorithm that is $(1,1)$-approximating for Max-Cut; i.e., given a bipartite~$G$, it finds a bipartition. Similarly, one can efficiently $(1,1)$-approximate Max-3Lin using Gaussian elimination. On the other hand, $(1,1)$-approximating Max-3Coloring --- i.e., validly $3$-coloring $3$-colorable graphs --- is $\NP$-hard. For Max-3Lin the near-trivial algorithm of outputting $x_1 = \cdots = x_n = B$, where $B$ is the more common ``right-hand side'' of the system, is a $(\half, \beta)$-approximation for every~$\half \leq \beta \leq 1$. One can also get an efficient $(\half, \beta)$-approximation for Max-Cut (for any~$\beta$) either by a simple greedy algorithm, or by outputting a \emph{random} partition $V = V^+ \cup V^-$. The classical statement that ``Max-Cut is $\NP$-hard'' is equivalent to stating that \emph{there exists} $\half < \beta < 1$ such that $(\beta, \beta)$-approximating Max-Cut is $\NP$-hard (in fact, this is true for all $\half < \beta < 1$). \end{example} In the case of Max-3Lin, it is a rather astonishing fact that the trivial approximation algorithms mentioned above are best possible assuming $\PTIME \neq \NP$; this is a celebrated result of H{\aa}stad~\cite{Has97,Has01} combining ``PCP technology''~\cite{FGL+96,AS98,ALM+98,BGS98} and Fourier analysis of Boolean functions: \begin{theorem} \label{thm:hastad} For any $\delta > 0$, it's $\NP$-hard to $(\half + \delta, 1 - \delta)$-approximate Max-3Lin. \end{theorem} For quite a long time, it was not known how to do any better even for the much simpler problem of Max-Cut. 
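The trivial $(\half,\beta)$-approximations just described are easy to see concretely on a toy instance. A brute-force sketch (our own code, feasible only for tiny graphs): on the $5$-cycle, $\Opt = \frac45$ since an odd cycle is not bipartite, while a uniformly random bipartition cuts each edge with probability $\half$ and hence cuts half the edges in expectation.

```python
from itertools import product

def cut_fraction(edges, side):
    # fraction of edges whose endpoints get opposite signs under `side`
    return sum(side[u] != side[v] for u, v in edges) / len(edges)

def max_cut(n, edges):
    # brute force over all 2^n bipartitions -- fine for toy instances only
    return max(cut_fraction(edges, dict(enumerate(assign)))
               for assign in product((-1, 1), repeat=n))

cycle5 = [(i, (i + 1) % 5) for i in range(5)]
opt_c5 = max_cut(5, cycle5)   # 4/5

# averaging over all bipartitions gives the random-partition guarantee of 1/2
avg_c5 = sum(cut_fraction(cycle5, dict(enumerate(a)))
             for a in product((-1, 1), repeat=5)) / 2 ** 5
```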
This changed in 1994 with the famous and sophisticated result of Goemans and Williamson~\cite{GW94,GW95} (see also~\cite{DP93}): \begin{theorem} \label{thm:gw} There is an efficient algorithm that $(\frac{\theta}{\pi}, \half - \half \cos \theta)$-approximates Max-Cut for every $\theta \in [\theta_{\mathrm{GW}}, \pi]$, where $\theta_{\mathrm{GW}} \approx .74\pi$ is the positive solution of ${\tan(\frac{\theta}{2}) = \theta}$. E.g., the Goemans--Williamson algorithm simultaneously $(\frac34, \frac12 + \frac{1}{2\sqrt{2}})$-approximates, $(\frac45, \frac58 + \frac{\sqrt{5}}{8})$-approximates, and $(1-\frac2{\pi}\sqrt{\eps} -o(\sqrt{\eps}), 1-\eps)$-approximates Max-Cut. \end{theorem} \noindent (Variants of the Goemans--Williamson algorithm that perform well for $\theta < \theta_{\mathrm{GW}}$ are also known.) \vspace{-1.4in} \begin{center} \includegraphics[width=.75\textwidth]{gw-plot.pdf} \end{center} \vspace{-1.2in} Briefly, the algorithm works as follows: Given a graph $G = (V,E)$, one considers the following \emph{semidefinite programming} optimization problem: \begin{equation} \label{eqn:gw-sdp} \tag{SDP} \begin{aligned} \SDPOpt(G)\ \ =\ \ \text{max}& \quad \avg_{(v,w) \in E} \left[\half - \half \la \vec{U}(v), \vec{U}(w) \ra\right] \\ \text{subject to}& \quad \vec{U} \co V \to S^{d-1}. \end{aligned} \end{equation} Here one also maximizes over all $d \in \Z^+$, although one can show that it suffices to take $d = |V|$. Essentially, the optimization problem~\eqref{eqn:gw-sdp} seeks to assign a unit vector to each vertex in~$V$ so that edges in~$G$ are spread as far apart as possible. It's easy to see that if $d$ is fixed to~$1$ (so that $\vec{U} \co V \to \{-1,1\}$) then~\eqref{eqn:gw-sdp} is identical to the Max-Cut problem; therefore $\Opt(G) \leq \SDPOpt(G)$ always. Surprisingly, although computing $\Opt(G)$ is intractable, one can efficiently compute $\SDPOpt(G)$. 
(Roughly speaking, the reason is that if we introduce real variables $\rho_{vw} = \la \vec{U}(v), \vec{U}(w) \ra$, then~\eqref{eqn:gw-sdp} is equivalent to maximizing a linear function of the $\rho_{vw}$'s over an explicit convex subset of $\R^{|V| \times |V|}$, namely the set of all positive semidefinite matrices $R = (\rho_{vw})_{v,w \in V}$ with $1$'s on the diagonal.) Thus~\eqref{eqn:gw-sdp} gives us an efficiently-computable upper bound on~$\Opt(G)$. One may hope that it is a relatively ``good'' upper bound, and that furthermore one can prove this constructively by providing an efficient algorithm which converts the optimum ``vector solution'' $(\vec{U}^*(v))_{v \in V}$ to a good ``$\pm 1$ solution'' $(U^*(v))_{v \in V}$ --- i.e., a good bipartition of~$V$. Goemans and Williamson fulfilled this hope, as follows: Their algorithm first chooses $\vec{\bg}$ to be a standard $d$-dimensional Gaussian and then it outputs the bipartition of~$G$ defined by $U^*(v) = \la \vec{U}^*(v), \vec{\bg} \ra$. Using Sheppard's Formula, it's not hard to show that this establishes Theorem~\ref{thm:gw}. The Goemans--Williamson algorithm was originally considered to be quite complex for such a simple CSP as Max-Cut; furthermore, its approximation guarantee seemed quite peculiar. More than one paper~\cite{Fei99,FS02} suggested the research goal of improving this approximation guarantee. Furthermore, the best known $\NP$-hardness result for the problem (from~\cite{Has97,TSSW00}) does not match the algorithm. For example, it's known that $(.875 + \delta, .9)$-approximating Max-Cut is $\NP$-hard for all $\delta > 0$, and the Goemans--Williamson algorithm achieves $(\alpha, .9)$-approximation for $\alpha = 1-\frac{1}{\pi} \arccos\frac{4}{5} \approx .795$. But whether cutting $80\%$ of the edges in a graph $G$ with $\Opt(G) = 90\%$ is polynomial-time solvable or is $\NP$-hard is unknown. 
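To make the rounding step concrete, here is a sketch on the $5$-cycle, a standard small example whose optimal SDP embedding is known in closed form (vertex~$j$ at angle $4\pi j/5$ on the unit circle); we hard-code these vectors rather than solve the SDP, so everything here is illustrative rather than a general implementation.

```python
import math
import random

# Optimal SDP embedding of the 5-cycle: vertex j -> (cos(4*pi*j/5), sin(4*pi*j/5)).
# Adjacent vertices then have inner product cos(4*pi/5) ~ -0.809.
vecs = [(math.cos(4 * math.pi * j / 5), math.sin(4 * math.pi * j / 5))
        for j in range(5)]
edges = [(j, (j + 1) % 5) for j in range(5)]

sdp_val = sum(0.5 - 0.5 * (vecs[u][0] * vecs[v][0] + vecs[u][1] * vecs[v][1])
              for u, v in edges) / len(edges)   # ~0.9045, versus Opt(C5) = 0.8

def gw_round(trials=5000, seed=1):
    # random-hyperplane rounding: put v on the side given by sgn(<U(v), g>)
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        gx, gy = rng.gauss(0, 1), rng.gauss(0, 1)
        side = [vx * gx + vy * gy > 0 for vx, vy in vecs]
        total += sum(side[u] != side[v] for u, v in edges) / len(edges)
    return total / trials

avg_cut = gw_round()   # each edge is cut with probability (4*pi/5)/pi = 0.8
```

Here each edge spans angle $\theta = 4\pi/5$, so by Sheppard's Formula the rounding cuts it with probability $\theta/\pi = 0.8$; the rounded value matches $\Opt(C_5)$ exactly while $\SDPOpt \approx 0.905$, illustrating the gap between the SDP bound and the true optimum.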
Nevertheless, in 2004 Khot et al.~\cite{KKMO04} obtained the following ``surprising''~\cite{Joh06} result: Under the \emph{Unique Games Conjecture}~\cite{Kho02} (a notorious conjecture in computational complexity not related to Max-Cut), the Majority Is Stablest Theorem implies that there is no efficient algorithm beating the Goemans--Williamson approximation guarantee (at least for $\theta \in [\theta_{\mathrm{GW}}, \pi]$; see~\cite{OW08} for optimal results when $\theta < \theta_{\mathrm{GW}}$). We remark that while the Unique Games Conjecture is believable, its status is vastly more uncertain than the $\PTIME \neq \NP$ conjecture. Let us briefly explain what the Majority Is Stablest Theorem has to do with the complexity of the Max-Cut problem. As shown in~\cite{KKMO04}, the advantage of the Unique Games Conjecture (as opposed to just the $\PTIME \neq \NP$ assumption) is that it makes the ``H{\aa}stad PCP technology'' much easier to use. Very roughly speaking, it implies that to establish intractability of beating $(\frac{\theta}{\pi}, \half - \half \cos \theta)$-approximation, it suffices to find certain so-called ``gadget graphs'' for the Max-Cut problem. Precisely speaking, these gadget graphs need to have the following properties: \begin{itemize} \item The vertex set $V$ should be $\bits^n$. (As a consequence, bipartitions of $V$ correspond to Boolean functions $f \btb$.) \item The bipartitions given by the $n$ ``dictators'' $f(x) = x_i$ should each cut at least a $\half - \half \cos \theta$ fraction of the edges. \item Any bipartition which is not ``noticeably correlated'' with a dictator partition should not cut ``essentially more'' than a $\frac{\theta}{\pi}$ fraction of the edges. More precisely, if $f \btb$ is any bipartition of~$V$ with $\max_i \{\Inf_i[f]\} \leq \eps$, then the fraction of edges it cuts is at most $\frac{\theta}{\pi} + o_\eps(1)$.
\end{itemize} Actually, it's also acceptable for these gadgets to be \emph{edge-weighted} graphs, with nonnegative edge-weights summing to~$1$. Khot et al.~suggested using the \emph{noisy hypercube} graph on vertex set $\{-1,1\}^n$, in which the weight on edge $(u,v) \in \bn \times \bn$ is precisely $\Pr[\bx = u, \by = v]$ when $(\bx,\by)$ is a $(\cos \theta)$-correlated pair of random strings (note that $\rho = \cos \theta < 0$ for $\theta \in [\theta_{\mathrm{GW}}, \pi]$). Such gadget graphs have the first two properties above, and the Majority Is Stablest Theorem precisely implies that they also have the third property. It's somewhat surprising that the technical properties required for this Unique Games/PCP-based hardness result correspond so perfectly to a natural problem about voting theory. Thus subject to the Unique Games Conjecture, no efficient algorithm can improve on the Goemans--Williamson Max-Cut approximation guarantee. In particular, this means that there must be infinite families of graphs on which the Goemans--Williamson algorithm performs no better than the guarantee established in Theorem~\ref{thm:gw}. As first shown by Karloff~\cite{Kar99}, the noisy hypercube graphs~$G$ also serve as examples here: Though they have $\Opt(G) = \half - \half \cos \theta$, one optimal solution of~\eqref{eqn:gw-sdp} for these graphs is $\vec{U}^*(v) = v/\sqrt{d}$, and applying the Goemans--Williamson algorithm to these vectors will indeed give a bipartition cutting only a $\frac{\theta}{\pi}$ fraction of edges in expectation. Before turning our attention more fully to the Majority Is Stablest Theorem, we should mention a far-reaching generalization of the above-described work in complexity theory, namely the Raghavendra Theory of CSP approximation.
Raghavendra~\cite{Rag08} showed that for \emph{all} CSPs (not just Max-Cut), the natural analogue of the Goemans--Williamson SDP algorithm has optimal approximation guarantee among all efficient algorithms, subject to the Unique Games Conjecture. This theory will be discussed further in our concluding Section~\ref{sec:conclusion}. \section{Borell's Isoperimetric Inequality} \label{sec:borell} The Majority Is Stablest Theorem concerns Boolean functions, but thanks to the Central Limit Theorem it includes as a ``special case'' a certain inequality concerning \emph{Gaussian geometry} first proved by Borell~\cite{Bor85}. (In this field, the idea that Boolean inequalities imply Gaussian inequalities dates back to the work of Gross~\cite{Gro75} on the Log-Sobolev Inequality.) To state this Gaussian inequality we first make some definitions: \begin{definition} Let $\vphi$ and $\Phi$ denote the standard Gaussian pdf and cdf, respectively. Given $z \in \R^d$ and $\rho \in [-1,1]$, we say that the random vector~$\bz'$ is a \emph{$\rho$-correlated Gaussian copy} of~$z$ if $\bz'$ has the distribution $\rho z + \sqrt{1-\rho^2} \bg$, where~$\bg$ is a standard $d$-dimensional Gaussian random vector. When~$\bz$ is itself a standard $d$-dimensional Gaussian and $\bz'$ is a $\rho$-correlated Gaussian copy, we call $(\bz, \bz')$ a \emph{$\rho$-correlated $d$-dimensional Gaussian pair}. An equivalent definition is that each pair of random variables $(\bz_i, \bz'_i)$ is a $\rho$-correlated pair of Gaussians (as defined in Section~\ref{sec:arrow}) and the pairs are independent for $i \in [d]$. Note that $(\bz,\bz')$ has the same distribution as $(\bz', \bz)$. \end{definition} \begin{remark} \label{rem:sphere-idea} The distribution of a $\rho$-correlated $d$-dimensional Gaussian pair $(\bz, \bz')$ is also rotationally symmetric in $\R^d$. Note that for large~$d$ we'll have $\|\bz\|, \|\bz'\| \sim \sqrt{d}$ and $\la \bz, \bz' \ra \sim \rho d$. 
Thus an intuitive picture to keep in mind when~$d$ is large is that $(\bz, \bz')$ is roughly distributed as a uniformly random pair of vectors of length $\sqrt{d}$ and angle $\arccos \rho$. \end{remark} \begin{definition} The \emph{Ornstein--Uhlenbeck semigroup} of operators is defined as follows: For $\rho \in [-1,1]$, the operator $\U_\rho$ acts on functions $f \co \R^d \to \R$ by \[ \U_\rho f(z) = \E_{\bz' \text{ a $\rho$-correlated Gaussian copy of~$z$}}[f(\bz')]. \] We also define the \emph{Gaussian noise stability of~$f$ at~$\rho$} to be \[ \Stab_\rho[f] = \E_{\substack{(\bz, \bz') \textnormal{ $\rho$-correlated} \\ \textnormal{ $d$-dimensional Gaussian pair}}}[f(\bz)f(\bz')]. \] \end{definition} We can now state the ``Gaussian special case'' of Majority Is Stablest: \begin{theorem} \label{thm:gaussian-mist} Fix $0 < \rho < 1$. Assume $h \co \R^d \to [-1,1]$ satisfies $\E_{\bz \sim \gamma_d}[h(\bz)] = 0$. Then its Gaussian noise stability satisfies \[ \Stab_\rho[h] \leq 1 - \tfrac{2}{\pi} \arccos \rho. \] (Furthermore, for $-1 < \rho < 0$ the inequality holds in reverse and the hypothesis $\E[h] = 0$ is unnecessary.) \end{theorem} To obtain Theorem~\ref{thm:gaussian-mist} from the Majority Is Stablest Theorem (at least for ``nice enough''~$h$), we use the fact that Gaussian random variables can be ``simulated'' by sums of many independent $\pm 1$ random bits. More precisely, we can apply Majority Is Stablest to $f \co \bits^{dn} \to [-1,1]$ defined by \[ f(x_{1,1}, \dots, x_{d,n}) = h\left(\tfrac{x_{1,1} + \cdots + x_{1,n}}{\sqrt{n}}, \dots, \tfrac{x_{d,1} + \cdots + x_{d,n}}{\sqrt{n}}\right) \] and then take $n \to \infty$ and use a $d$-dimensional Central Limit Theorem. (The small-influence assumption and the error dependence on the bound~$\eps$ disappear, because we have $\eps \to 0$ as $n \to \infty$.)
Note that in Section~\ref{sec:arrow} we saw exactly this limiting procedure in the case of $h = \sgn : \R^1 \to \{-1,1\}$ when we computed the limiting (Boolean) noise stability of~$\Maj_n$. Theorem~\ref{thm:gaussian-mist} was first proved by Borell in 1985~\cite{Bor85}. (In fact, Borell proved significant generalizations of the theorem, as discussed below.) In 2005, Mossel et al.~\cite{MOO10} used it to prove the Majority Is Stablest Theorem by reducing the Boolean setting to the Gaussian setting. The key technical tool here was a ``nonlinear'' version of the Central Limit Theorem called the \emph{Invariance Principle} (see also~\cite{Rot75}). Briefly, the Invariance Principle implies that if $f \btR$ is a low-degree multilinear polynomial with small influences then the distributions of $f(\bx_1, \dots, \bx_n)$ and $f(\bg_1, \dots, \bg_n)$ are ``close'', where $\bx_1, \dots, \bx_n$ are independent $\pm 1$ random variables and $\bg_1, \dots, \bg_n$ are independent Gaussians. The Invariance Principle has had many applications (e.g., in combinatorics~\cite{DFR08}, learning theory~\cite{Kan12a}, pseudorandomness~\cite{MZ10}, social choice~\cite{Mos10}, sublinear algorithms~\cite{BO10}, and the Raghavendra Theory of CSPs mentioned at the end of Section~\ref{sec:max-cut}) but we won't discuss it further here. Instead, we'll outline in Section~\ref{sec:DMN} an alternative, ``purely discrete'' proof of the Majority Is Stablest Theorem due to De, Mossel, and Neeman~\cite{DMN13}. Let's now look more carefully at the geometric content of Theorem~\ref{thm:gaussian-mist}. Suppose $A \subset \R^d$ is a set with \emph{Gaussian volume} $\gamma_d(A) = \half$. 
Applying Theorem~\ref{thm:gaussian-mist} with $h = 1 - 2 \cdot 1_A$, and also writing $\theta = \arccos \rho \in (0,\frac{\pi}{2})$, one obtains the following: \begin{corollary} \label{cor:easy-borell} For $0 \leq \theta \leq \frac{\pi}{2}$ and $A \subseteq \R^d$, define the \emph{rotation sensitivity} \[ \RS_A(\theta) = \Pr_{\substack{(\bz, \bz') \textnormal{ $\cos \theta$-correlated} \\ \textnormal{$d$-dimensional Gaussian pair}}}[1_A(\bz) \neq 1_A(\bz')]. \] Then if $\gamma_d(A) = \half$, we have $\RS_A(\theta) \geq \frac{\theta}{\pi}$. \end{corollary} By Sheppard's Formula, equality is obtained if $d = 1$ and $A = (-\infty, 0]$. In fact, by rotational symmetry of correlated Gaussians, equality is obtained when~$A$ is any halfspace through the origin in~$\R^d$. (Geometrically, it's natural to guess that halfspaces minimize $\RS_A(\theta)$ among sets~$A$ of fixed Gaussian volume, using the intuition from Remark~\ref{rem:sphere-idea}.) As shown in~\cite{KO12}, this corollary is quite easy to prove for ``many'' values of~$\theta$: \begin{proof}[Proof of Corollary~\ref{cor:easy-borell} for $\theta = \frac{\pi}{2\ell}$, $\ell \in \Z^+$] Let $\bg, \bg'$ be independent {$d$-dimensional} Gaussians and define $\bz^{(j)} = \cos(j\theta) \bg + \sin(j\theta) \bg'$ for $0 \leq j \leq \ell$. Then it's easy to check that $(\bz^{(i)}, \bz^{(j)})$ is a $\cos((j-i)\theta)$-correlated Gaussian pair. In particular, $\bz^{(0)}$ and $\bz^{(\ell)}$ are independent. Now using $\gamma_d(A) = \half$ and a union bound we get \[ \half = \Pr[1_A(\bz^{(0)}) \neq 1_A(\bz^{(\ell)})] \leq \sum_{j = 1}^\ell \Pr[1_A(\bz^{(j-1)}) \neq 1_A(\bz^{(j)})] = \ell \cdot \RS_{A}(\theta), \] which is the desired inequality. \end{proof} Returning to Theorem~\ref{thm:gaussian-mist}, it states that if $(\bz, \bz')$ are $\rho$-correlated $d$-dimensional Gaussians ($0 < \rho < 1$) then halfspaces are the volume-$\half$ sets which maximize $\Pr[\bz, \bz' \in A]$. 
In fact, halfspaces are also the optimizers at \emph{any} fixed volume. Furthermore, if we generalize by looking for sets $A, B$ of fixed volume maximizing $\Pr[\bz \in A, \bz' \in B]$, parallel halfspaces are again best. These isoperimetric facts (and more) were all originally proved by Borell~\cite{Bor85}: \begin{theorem}[``Borell Isoperimetric Inequality''] \label{thm:borell} Fix $0 < \rho < 1$ and ${0 \leq \alpha, \beta \leq 1}$. Suppose $A, B \subseteq \R^d$ satisfy $\gamma_d(A) = \alpha$, $\gamma_d(B) = \beta$. Then if $(\bz, \bz')$ is a $\rho$-correlated $d$-dimensional Gaussian pair, \[ \Pr[\bz \in A, \bz' \in B] \leq \Pr[\bz \in H, \bz' \in H'] \] where $H$ and $H'$ are (any) parallel halfspaces satisfying $\gamma_d(H) = \alpha$, $\gamma_d(H') = \beta$. (If $-1 < \rho < 0$ then the inequality is reversed.) By rotational symmetry we may assume $H = (-\infty, \Phi^{-1}(\alpha)],\ H' = (-\infty, \Phi^{-1}(\beta)] \subseteq \R$ and thus write the above as \[ \Pr[\bz \in A, \bz' \in B] \leq \Lambda_\rho(\alpha,\beta) \coloneqq \Pr_{\substack{(\bw, \bw') \textnormal{ $\rho$-correlated} \\ \textnormal{Gaussians}}}[\bw \leq \Phi^{-1}(\alpha), \bw' \leq \Phi^{-1}(\beta)]. \] In case $\alpha = \beta = \frac12$, Sheppard's Formula implies \[ \Pr[\bz \in A, \bz' \in B] \leq \Lambda_\rho(\tfrac12,\tfrac12) = \tfrac12 - \tfrac{1}{2\pi} \arccos \rho. \] \end{theorem} Borell's original proof of this theorem used the Gaussian symmetrization method due to Ehrhard~\cite{Ehr83} and was quite technical. Four other proofs are known. Beckner~\cite{Bec92} pointed out that the analogous isoperimetric inequality on the sphere is easy to prove by two-point symmetrization~\cite{BT76}, and the Gaussian result can then be deduced via ``Poincar\'{e}'s limit'' (see~\cite{CL90}). Mossel and Neeman~\cite{MN12} recently gave a slick proof using semigroup methods, and together with De~\cite{DMN13} they gave another proof via Boolean functions. 
Finally, Eldan~\cite{Eld13} gave the most recent new proof, using stochastic calculus. We will describe De, Mossel, and Neeman's Boolean proof of Borell's Isoperimetric Inequality in Section~\ref{sec:DMN}. It has the advantage that it can be used to prove the Majority Is Stablest Theorem ``at the same time'' (using a few technical tricks from the original Invariance Principle-based proof, including \emph{hypercontractivity}). But first, we'll spend some time discussing further special cases of Borell's Isoperimetric Inequality. \section{Hypercontractivity} \label{sec:hypercon} Borell's Isoperimetric Inequality is very precise, giving the exact maximal value of $\Pr[\bz \in A, \bz' \in B]$ (when $(\bz,\bz')$ are $\rho$-correlated) for any fixed Gaussian volumes $\gamma_d(A) = \alpha$, $\gamma_d(B) = \beta$. A small downside is that this maximum value, $\Lambda_\rho(\alpha, \beta)$, does not have a nice closed-form expression except when $\alpha = \beta = \half$. In the interesting regime of $\alpha, \beta \to 0$, however, we can get a closed form for its asymptotics. Let's do a rough ``heuristic'' estimation. Suppose $H, H'$ are parallel halfspaces of ``small'' Gaussian volume $\alpha, \beta$, with $\alpha \leq \beta$. By rotational symmetry we can assume $H = [a, \infty), H' = [b, \infty) \subset \R$ for some ``large'' values $a \geq b > 0$. Precisely, we have $a = -\Phi^{-1}(\alpha)$, but speaking roughly we'll express this as $\alpha \approx \exp(-\frac{a^2}{2})$, as this is asymptotically correct up to lower-order factors. Similarly we'll write $\beta \approx \exp(-\frac{b^2}{2})$. We are interested in estimating $\Pr[\bg \in H, \bg' \in H']$, where $(\bg, \bg')$ are a $\rho$-correlated Gaussian pair. We'll actually take $\bg' = \rho \bg + \sqrt{1-\rho^2} \bh$, where $\bh$ is a standard Gaussian independent of~$\bg$. To start the estimation, by definition we have $\Pr[\bg \in H] \approx \exp(-\frac{a^2}{2})$. 
Further, conditioned on $\bg \in H$ we will almost surely have that $\bg$ is only ``barely'' larger than~$a$. Thus we expect $\bg'$ to be conditionally distributed roughly as $\rho a + \sqrt{1-\rho^2} \bh$. In this case, $\bg'$ will be in $H'$ if and only if $\bh \geq (b - \rho a)/\sqrt{1-\rho^2}$. Under the assumption that $b - \rho a \geq 0$, the probability of this is, roughly again, $\exp(-\frac{(b-\rho a)^2}{2 (1-\rho^2)})$. All in all, these calculations ``suggest'' that \[ \Lambda_\rho(\alpha, \beta) = \Pr[\bg \in H, \bg' \in H'] \approx \exp(-\tfrac{a^2}{2})\exp(-\tfrac{(b-\rho a)^2}{2 (1-\rho^2)}) = \exp\left(-\tfrac12 \tfrac{a^2 - 2\rho a b + b^2}{1-\rho^2}\right) \] (under the assumption that $\alpha \approx \exp(-\frac{a^2}{2}) \leq \exp(-\frac{b^2}{2}) \approx \beta$ are ``small'', with $b \geq \rho a$). Since Borell's Isoperimetric Inequality tells us that parallel halfspaces are maximizers, we might optimistically guess the following: \begin{theorem}[``Gaussian Small-Set Expansion Theorem''] \label{thm:gaussian-sse} Let $0 < \rho < 1$. Let $A, B \subseteq \R^d$ have Gaussian volumes $\exp(-\frac{a^2}{2}), \exp(-\frac{b^2}{2})$, respectively, and assume $0 \leq \rho a \leq b \leq a$. Then \[ \Pr_{\substack{(\bz, \bz') \textnormal{ $\rho$-correlated} \\ \textnormal{$d$-dimensional Gaussian pair}}}[\bz \in A, \bz' \in B] \leq \exp\left(-\tfrac12 \tfrac{a^2 - 2\rho a b + b^2}{1-\rho^2}\right). \] In particular, if $A \subseteq \R^d$ has $\gamma_d(A) = \alpha$ then \begin{equation} \label{eqn:g-sse} \Stab_\rho[1_A] \leq \alpha^{\frac{2}{1+\rho}} \iff \Pr_{\substack{(\bz, \bz') \textnormal{ $\rho$-correlated} \\ \textnormal{$d$-dimensional Gaussian pair}}}[\bz' \in A \mid \bz \in A] \leq \alpha^{\frac{1-\rho}{1+\rho}}. \end{equation} \end{theorem} Indeed this theorem is correct, and it can be formally deduced from Borell's Isoperimetric Inequality. We'll outline a more direct proof shortly, but first let's discuss its content. 
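Before discussing the content of the theorem, here is a quick Monte Carlo sketch of the one-set bound~\eqref{eqn:g-sse} (illustration only; the parameters $\alpha = 0.1$ and $\rho = 0.5$ are arbitrary), taking $A$ to be a one-dimensional halfspace of Gaussian volume~$\alpha$:

```python
import random
from math import sqrt
from statistics import NormalDist

random.seed(0)
N = NormalDist()

alpha, rho = 0.1, 0.5
t = N.inv_cdf(alpha)                     # A = (-inf, t] has gamma_1(A) = alpha

n_samples = 400_000
hits = 0
for _ in range(n_samples):
    g, h = random.gauss(0, 1), random.gauss(0, 1)
    z, zp = g, rho * g + sqrt(1 - rho**2) * h   # (z, z') is a rho-correlated pair
    hits += (z <= t) and (zp <= t)

stab = hits / n_samples                  # estimate of Stab_rho[1_A] = Pr[z, z' in A]
bound = alpha ** (2 / (1 + rho))         # the small-set expansion bound
```

The estimate of $\Stab_\rho[1_A]$ comes out around $0.03$ here, comfortably below the bound $\alpha^{2/(1+\rho)} = 0.1^{4/3} \approx 0.046$.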
The one-set statement~\eqref{eqn:g-sse} says that if~$A$ is any ``small'' subset of Gaussian space (think of $\alpha$ as tending to~$0$) and $\rho$ is bounded away from~$1$ (say $\rho = 1 - \delta$), then a $\rho$-noisy copy of a random point in~$A$ will almost certainly (i.e., except with probability $\alpha^{\delta/(2-\delta)}$) be outside~$A$. One might ask whether a similar statement is true for subsets of the discrete cube~$\bn$. As we saw with Majority Is Stablest implying Theorem~\ref{thm:gaussian-mist}, isoperimetric inequalities on the discrete cube typically imply the analogous statement in Gaussian space, by the Central Limit Theorem. On the other hand, the converse does not generally hold; this is because there are subsets of $\bn$ like the dictators $\{x : x_i = 1\}$, or more generally ``subcubes'' ${\{x : x_{i_1} = \cdots = x_{i_k} = 1\}}$, which have no analogue in Gaussian space. In particular, one has to rule out dictators using the ``small-influences'' condition in order for the Boolean analogue of Borell's theorem, namely the Majority Is Stablest Theorem, to be true. However it \emph{is} often true that asymptotic isoperimetric inequalities for ``small'' subsets of Gaussian space also hold in the Boolean setting with no influences assumption; this is because \emph{small} subcubes and \emph{small} Hamming balls (the Boolean analogue of Gaussian halfspaces) have similar isoperimetric properties in $\bn$. In particular, it turns out that Theorem~\ref{thm:gaussian-sse} holds identically in $\bn$: \begin{theorem}[``Boolean Small-Set Expansion Theorem''] \label{thm:boolean-sse} Let $0 < \rho < 1$. Let $A, B \subseteq \bn$ have volumes $\frac{|A|}{2^n} = \exp(-\frac{a^2}{2})$, $\frac{|B|}{2^n} = \exp(-\frac{b^2}{2})$, and assume $0 \leq \rho a \leq b \leq a$. Then \[ \Pr_{\substack{(\bx, \by) \textnormal{ $\rho$-correlated strings}}}[\bx \in A, \by \in B] \leq \exp\left(-\tfrac12 \tfrac{a^2 - 2\rho a b + b^2}{1-\rho^2}\right).
\] In particular, if $\frac{|A|}{2^n} = \alpha$ then \begin{equation} \label{eqn:b-sse} \Stab_\rho[1_A] \leq \alpha^{\frac{2}{1+\rho}} \iff \Pr_{\substack{(\bx, \by) \textnormal{ $\rho$-correlated strings} \\ }}[\bx \in A \mid \by \in A] \leq \alpha^{\frac{1-\rho}{1+\rho}}. \end{equation} \end{theorem} This theorem is formally stronger than its Gaussian counterpart Theorem~\ref{thm:gaussian-sse}, by virtue of the Central Limit Theorem. In fact, there is a related \emph{functional inequality} which is even stronger; this is the crucial \emph{Hypercontractive Inequality} first proved by Bonami~\cite{Bon70}. \begin{theorem}[``Boolean Hypercontractive Inequality''] \label{thm:hypercon} Let $f, g \co \bn \to \R$, let $r, s \geq 0$, and assume $0 \leq \rho \leq \sqrt{rs} \leq 1$. Then \[ \E_{\substack{(\bx, \by) \textnormal{ $\rho$-correlated}}}[f(\bx) g(\by)] \leq \|f\|_{1+r} \|g\|_{1+s}. \] (Here we are using $L^p$-norm notation, $\|f\|_p = \E_{\bx \sim \bn} [|f(\bx)|^p]^{1/p}$.) \end{theorem} To recover Theorem~\ref{thm:boolean-sse}, one simply applies the Hypercontractive Inequality with $f = 1_A$, $g = 1_B$ and optimizes the choice of $r, s$. (We mention that this deduction was first noted, in its ``reverse'' form, by Mossel et al.~\cite{MOR+06}.) The Gaussian analogue of the Boolean Hypercontractive Inequality also holds; indeed, the traditional proof of it (say, in~\cite{Jan97}) involves first proving the Boolean inequality and then applying the Central Limit Theorem. Another interpretation of the Hypercontractive Inequality is as a ``generalized \Holder's inequality''. In fact, its $\rho = 1$ case (corresponding to $\by \equiv \bx$) is \emph{identical} to \Holder's inequality (since the hypothesis $\sqrt{rs} = 1$ is identical to $(1+s)' = 1+r$). The Hypercontractive Inequality shows that as $\bx$ and $\by$ become less and less correlated, one can put smaller and smaller norms of~$f$ and~$g$ on the right-hand side. 
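To see the two-function inequality in action, here is a brute-force sketch on the two-dimensional cube (illustration only; the boundary choice $\rho = \sqrt{rs}$ and the random nonnegative test functions are arbitrary). Each coordinate pair $(\bx_i, \by_i)$ of a $\rho$-correlated pair of strings agrees with probability $(1+\rho)/2$, so the expectation can be computed exactly by summing over all pairs of points:

```python
import itertools
import random

random.seed(0)

n = 2
cube = list(itertools.product([-1, 1], repeat=n))

def corr_weight(x, y, rho):
    """Probability of the pair (x, y) under rho-correlation, coordinatewise."""
    w = 1.0
    for xi, yi in zip(x, y):
        w *= (1 + rho) / 4 if xi == yi else (1 - rho) / 4
    return w

def pnorm(f, p):
    """L^p norm with respect to the uniform measure on the cube."""
    return (sum(abs(f[x]) ** p for x in cube) / len(cube)) ** (1 / p)

r = s = 0.5
rho = (r * s) ** 0.5            # boundary case rho = sqrt(r*s)

violations = 0
for _ in range(100):            # random nonnegative test functions f, g
    f = {x: random.random() for x in cube}
    g = {x: random.random() for x in cube}
    lhs = sum(corr_weight(x, y, rho) * f[x] * g[y] for x in cube for y in cube)
    rhs = pnorm(f, 1 + r) * pnorm(g, 1 + s)
    violations += lhs > rhs + 1e-12
```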
(In the ultimate case of $\rho = 0$, meaning $\bx$ and $\by$ are independent, one gets the trivial inequality $\E[f(\bx)g(\by)] \leq \|f\|_1 \|g\|_1$.) Speaking of \Holder's inequality, we should mention that it can be used to show that Theorem~\ref{thm:hypercon} is equivalent to the following more traditional formulation of the Hypercontractive Inequality: \begin{equation} \label{eqn:holder-hypercon} \text{For $f \btR$, $1 \leq p \leq q \leq \infty$: }\ \ \|\T_\rho f\|_q \leq \|f\|_p \text{ provided $0 \leq \rho \leq \sqrt{\tfrac{p-1}{q-1}}$.} \end{equation} Writing $p = 1+r$, $q = 1+1/s$, one uses the fact that $\|\T_\rho f\|_q = \sup\{\E[g \cdot \T_\rho f] : {\|g\|_{q'} = 1}\}$, and that the quantity inside the $\sup$ is the same as the left-hand side in Theorem~\ref{thm:hypercon}. Here we see an explanation for the name of the inequality --- it shows that $\T_\rho$ is not just a contraction in~$L^p$ but in fact is a ``hypercontraction'' from $L^p$ to $L^q$. In this formulation, the inequality can be viewed as quantifying the ``smoothing'' effect of the~$\T_\rho$ operator. By virtue of Fact~\ref{fact:Trho-fourier} one can use this formulation to show that low-degree polynomials of independent $\pm 1$ random variables are ``reasonable'', in the sense that their high norms are comparable to their $2$-norm. However we won't pursue this interpretation any further here. A wonderful fact about the Boolean Hypercontractive Inequality is that the $n = 1$ case implies the general~$n$ case by induction. Indeed, for the two-function form given in Theorem~\ref{thm:hypercon}, the induction is almost trivial. 
If $(\bx, \by)$ are $\rho$-correlated and we write $\bx = (\bx_1, \bx')$ for $\bx' \in \bits^{n-1}$ (and similarly for~$\by$), then \[ \E[f(\bx) g(\by)] = \E_{(\bx_1, \by_1)} \E_{(\bx', \by')}[f_{\bx_1}(\bx') g_{\by_1}(\by')] \leq \E_{(\bx_1, \by_1)} [\|f_{\bx_1}\|_{1+r} \|g_{\by_1}\|_{1+s}], \] by induction, where $f_{x_1}$ denotes the restriction of $f$ gotten by fixing the first coordinate to be~$x_1$ (and similarly for $g_{y_1}$). Then defining the $1$-bit functions $F(x_1) = \|f_{x_1}\|_{1+r}$ and $G(y_1) = \|g_{y_1}\|_{1+s}$ we have \[ \E_{(\bx_1, \by_1)} [\|f_{\bx_1}\|_{1+r} \|g_{\by_1}\|_{1+s}] = \E_{(\bx_1, \by_1)} [F(\bx_1)G(\by_1)] \leq \|F\|_{1+r} \|G\|_{1+s} = \|f\|_{1+r} \|g\|_{1+s}, \] where we used the $n = 1$ case of the Hypercontractive Inequality. Thus to prove all of the Boolean and Gaussian Hypercontractivity and Small-Set Expansion theorems, it suffices to prove the $n = 1$ case of the Boolean Hypercontractive Inequality. In fact, by the \Holder trick we just need to prove~\eqref{eqn:holder-hypercon} in the case $n = 1$. It's also easy to show that we can assume $f \co \bits \to \R$ is nonnegative, and by homogeneity we can also assume $f$ has mean~$1$. Thus everything boils down to proving the following: If $0 \leq \rho \leq \sqrt{\frac{p-1}{q-1}} \leq 1$ and $0 \leq \delta \leq 1$ then \begin{equation} \label{eqn:two-point} \left(\tfrac12 (1+ \rho \delta)^q + \tfrac12 (1- \rho \delta)^q\right)^{1/q} \leq \left(\tfrac12 (1+ \delta)^p + \tfrac12 (1- \delta)^p\right)^{1/p}. \end{equation} Note that if we think of~$\delta$ as very small and perform a Taylor expansion, the above becomes \[ 1 + \tfrac12 \rho^2 (q-1)\delta^2 + \cdots \leq 1 + \tfrac12 (p-1)\delta^2 + \cdots. \] This shows that the $\rho \leq \sqrt{\frac{p-1}{q-1}}$ condition is necessary, and also that it's ``essentially'' sufficient assuming~$\delta$ is small. However, we need to actually verify~\eqref{eqn:two-point} for all $0 \leq \delta \leq 1$. 
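Such a verification is at least easy to automate. The following sketch (not a proof: it checks only a few illustrative $(p,q)$ pairs, at the boundary value $\rho = \sqrt{\frac{p-1}{q-1}}$, on a finite grid of $\delta$'s) finds no violations of~\eqref{eqn:two-point}:

```python
import math

def lhs(delta, rho, q):
    return (0.5 * (1 + rho * delta) ** q + 0.5 * (1 - rho * delta) ** q) ** (1 / q)

def rhs(delta, p):
    return (0.5 * (1 + delta) ** p + 0.5 * (1 - delta) ** p) ** (1 / p)

violations = 0
for p, q in [(1.5, 2.0), (2.0, 4.0), (2.0, 9.0), (3.0, 5.0)]:
    rho = math.sqrt((p - 1) / (q - 1))   # the boundary case rho = sqrt((p-1)/(q-1))
    for k in range(101):
        delta = k / 100
        if lhs(delta, rho, q) > rhs(delta, p) + 1e-12:
            violations += 1
```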
For some simple values of~$p$ and~$q$, this is easy. For example, if $p = 2$ and $q = 4$, establishing~\eqref{eqn:two-point} amounts to noting that $1 + 2\delta^2 + \frac19 \delta^4 \leq 1 + 2\delta^2 + \delta^4$. This is already enough to prove, say, the Boolean Small-Set Expansion statement~\eqref{eqn:b-sse} with parameter $\rho = \frac13$. On the other hand, establishing~\eqref{eqn:two-point} for all $p, q$ and all~$\delta$ is a little bit painful (albeit elementary). In the next section, we'll see a similar problem where this pain can be circumvented. \section{Bobkov's Inequality and Gaussian Isoperimetry} \label{sec:bobkov} Let's now look at a different special case of Borell's Isoperimetric Inequality, namely the case where $B = A$ and $\rho \to 1^{-}$. Using the rotation sensitivity definition from Corollary~\ref{cor:easy-borell}, Borell's inequality tells us that if $A \subseteq \R^d$, and $H \subseteq \R^d$ is a halfspace of the same Gaussian volume, then $\RS_A(\delta) \geq \RS_H(\delta)$. Since we also have $\RS_A(0) = \RS_H(0) = 0$, it follows that $\RS_A'(0^+) \geq \RS_H'(0^+)$. (It can be shown that this derivative $\RS'_A(0^+)$ is always well-defined, though it may be~$\infty$.) As we'll explain shortly, the derivative $\RS_A'(0^+)$ has a very simple meaning; up to a factor of~$\sqrt{\frac{\pi}{2}}$, it is the \emph{Gaussian surface area} of the set~$A$. Thus Borell's Isoperimetric Inequality implies the following well-known result \begin{theorem}[``Gaussian Isoperimetric Inequality''] \label{thm:giso} Let $A \subseteq \R^d$ have Gaussian volume $\gamma_d(A) = \alpha$, and let $H \subseteq \R^d$ be any halfspace with $\gamma_d(H) = \alpha$. Then \begin{equation} \label{eqn:giso} \surf_d(A) \geq \surf_d(H). 
\end{equation} \end{theorem} Here we are using the following definition: \begin{definition} \label{def:giso} The \emph{Gaussian surface area} of $A \subseteq \R^d$ is \[ \surf_d(A) = \sqrt{\frac{\pi}{2}} \cdot \RS_A'(0^+) = \lim_{\delta \to 0^+}\frac{\gamma_d((\bdry A)^{+\delta/2})}{\delta} = \E_{\bz \sim \gamma_d}[\|\grad 1_A(\bz)\|] = \int_{\bdry A} \vphi(x)\,dx. \] The first equation may be taken as the definition, and the remaining equations hold assuming $A$ is ``nice enough'' (for technical details, see~\cite{AMMP10,AFR13,Hin10,AF11,MNP12}). \end{definition} To get a feel for the definition, let's ``heuristically justify'' the second equality above, which relates the derivative of rotation sensitivity to the more natural-looking Gaussian Minkowski content of $\bdry A$. We can think of \begin{equation} \label{eqn:rsa0} \RS_A'(0^+) = \frac{\RS_A(\delta)}{\delta} = \frac{1}{\delta} \Pr_{\substack{(\bz, \bz') \textnormal{ $\cos \delta$-correlated} \\ \textnormal{$d$-dimensional Gaussian pair}}}[1_A(\bz) \neq 1_A(\bz')] \end{equation} for ``infinitesimal''~$\delta$. The last expression here can be thought of as $\frac{1}{\delta}$ times the probability that the line segment $\bell$ joining $\bz, \bz'$ crosses $\bdry A$. Now for infinitesimal~$\delta$ we have $\cos \delta \approx 1$ and $\sin \delta \approx \delta$; thus the distribution of $(\bz, \bz')$ is essentially that of $(\bg, \bg + \delta \bg')$ for $\bg$, $\bg'$ independent $d$-dimensional Gaussians. When $\bg$ lands near~$\bdry A$, the length of the segment $\bell$ in the direction of the nearby unit normal~$\bv$ to~$\bdry A$ will have expectation $\E[|\la \delta \bg', \bv \ra|] = \delta \E[|\normal(0,1)|] = \sqrt{2/\pi} \cdot \delta$. Thus the crossing probability should essentially be $\sqrt{2/\pi} \cdot \delta$ times the Gaussian measure of a unit-width neighborhood of $\bdry A$; i.e., \eqref{eqn:rsa0} should essentially be $\sqrt{2/\pi} \cdot \frac{1}{\delta} \cdot \gamma_d(\{z : \dist(z, \bdry A) < \delta/2\})$, completing the heuristic justification of the second equality in Definition~\ref{def:giso}.
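The definition can also be checked numerically. The following Monte Carlo sketch (illustration only; $\delta$, the sample size, and the tolerance are arbitrary) estimates $\sqrt{\pi/2} \cdot \RS_A(\delta)/\delta$ at a small fixed $\delta$ for the halfspace $A = (-\infty, 0] \subseteq \R$, whose surface integral $\int_{\bdry A} \vphi$ equals $\vphi(0) = 1/\sqrt{2\pi} \approx 0.399$:

```python
import math
import random

random.seed(1)

delta = 0.05
c, s = math.cos(delta), math.sin(delta)

n_samples = 400_000
crossings = 0
for _ in range(n_samples):
    g, gp = random.gauss(0, 1), random.gauss(0, 1)
    z, zp = g, c * g + s * gp            # cos(delta)-correlated Gaussian pair
    crossings += (z <= 0) != (zp <= 0)

surf_est = math.sqrt(math.pi / 2) * (crossings / n_samples) / delta
target = 1 / math.sqrt(2 * math.pi)      # phi(0), the Gaussian surface area of A
```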
Incidentally, it's easy to see that the Gaussian surface area of the one-dimensional halfspace $(-\infty, a] \subseteq \R$ is $\vphi(a)$; thus we can give an explicit formula for the right-hand side of~\eqref{eqn:giso}: \begin{fact} \label{fact:giso} The right-hand side of~\eqref{eqn:giso} is the \emph{Gaussian isoperimetric function}, \[ \giso(\alpha) = \vphi \circ \Phi^{-1}(\alpha) \in [0, \tfrac{1}{\sqrt{2\pi}}]. \] A remark: One easily checks that it satisfies the differential equation ${\giso \giso'' + 1 = 0}$, with boundary conditions $\giso(0) = \giso(1) = 0$. \end{fact} The Gaussian Isoperimetric Inequality was originally independently proven by Borell~\cite{Bor75} and by Sudakov and Tsirel'son~\cite{ST78}. Both proofs deduced it via Poincar\'e's limit from L\'evy's Spherical Isoperimetric Inequality~\cite{Lev22,Sch48}. (This is the statement that the fixed-volume subsets of a sphere's surface which minimize perimeter are caps --- i.e., intersections of the sphere with a halfspace.) Ehrhard~\cite{Ehr83} subsequently developed his Gaussian symmetrization method to give a different proof. In 1997, Bobkov gave a surprising new proof by the same technique we saw in the last section: establishing a functional Boolean analogue by induction. We'll now outline this proof. We start with the following equivalent functional form of the Gaussian Isoperimetric Inequality (first noted by Ehrhard~\cite{Ehr84}): For locally Lipschitz ${f \co \R^d \to [0,1]}$, \begin{equation} \label{eqn:gaussian-bobkov} \giso(\E[f(\bz)]) \leq \E[\|(\grad f(\bz), \giso(f(\bz)))\|_2], \end{equation} where $\bz \sim \gamma_d$ and $\|\cdot\|_2$ denotes the usual Euclidean norm in $d+1$ dimensions. The Gaussian Isoperimetric Inequality for~$A$ can be deduced by taking $f = 1_A$; conversely, inequality~\eqref{eqn:gaussian-bobkov} can be deduced from the Gaussian Isoperimetric Inequality by taking $A = \{(x, a) : f(x) \geq \Phi(a)\} \subseteq \R^{d+1}$. 
In turn, Bobkov showed that the above inequality can be deduced (by the usual Central Limit Theorem argument) from the analogous Boolean inequality: \begin{theorem}[``Bobkov's Inequality''] \label{thm:bobkov} For any $f \co \bn \to [0,1]$, \[ \giso(\E[f]) \leq \E[\|(\grad f, \giso(f))\|_2]. \] Here the expectation is with respect to the uniform distribution on $\bn$, and $\grad f = (\D_1 f, \dots, \D_n f)$. \end{theorem} Just as with the Hypercontractive Inequality, this inequality has the property that the $n = 1$ case implies the general~$n$ case by a fairly easy induction. Indeed, this induction uses no special property of $\giso$ or the $2$-norm: \begin{fact} \label{fact:bobkov-induction} Let $J \co [0,1] \to \R^{\geq 0}$, and let $\| \cdot \|$ denote a fixed $L^p$-norm. Consider, for $f \co \bn \to [0,1]$, the following inequality: \begin{equation} \label{eqn:bobkov1} J(\E[f]) \leq \E[\|(\grad f, J(f))\|]. \end{equation} If this inequality holds for $n = 1$ then it holds for general~$n$. \end{fact} Now given a norm $\| \cdot \|$ we can seek the ``largest'' function~$J$ for which~\eqref{eqn:bobkov1} holds when $n = 1$. As an aside, for the $1$-norm $\| \cdot \|_1$ we may take $J(\alpha) = \alpha \log_2(1/\alpha)$, and this yields a form of the classic Edge Isoperimetric Inequality for the discrete cube~\cite{Har64}, sharp for all $\alpha = 2^{-k}, k \in \Z^+$. Returning to Bobkov's Inequality, the $n = 1$ case we need to verify is that \begin{equation} \label{eqn:bobkov-key} J(\alpha) \leq \half \sqrt{\delta^2 + J(\alpha + \delta)^2} + \half \sqrt{\delta^2 + J(\alpha - \delta)^2} \end{equation} when $J = \giso$ and $\alpha \pm \delta \in [0,1]$. Bobkov used some (elementary) labor to show that this inequality indeed holds when $J = \giso$.
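For illustration, the two-point inequality~\eqref{eqn:bobkov-key} with $J = \giso$ is easy to confirm on a grid (a numerical sketch, not a substitute for the proof; the grid resolution is an arbitrary choice):

```python
from math import sqrt
from statistics import NormalDist

N = NormalDist()

def giso(a):
    """The Gaussian isoperimetric function phi(Phi^{-1}(alpha)),
    with the boundary values giso(0) = giso(1) = 0."""
    if a <= 0.0 or a >= 1.0:
        return 0.0
    return N.pdf(N.inv_cdf(a))

worst = 0.0   # most negative slack (rhs - lhs) seen over the grid
for i in range(1, 100):
    alpha = i / 100
    for j in range(1, 100):
        delta = j / 100
        if 0 <= alpha - delta and alpha + delta <= 1:
            rhs = 0.5 * sqrt(delta**2 + giso(alpha + delta)**2) \
                + 0.5 * sqrt(delta**2 + giso(alpha - delta)**2)
            worst = min(worst, rhs - giso(alpha))
```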
To see how the Gaussian isoperimetric function arises, we Taylor-expand the right-hand side in~$\delta$, getting: \begin{equation} \label{eqn:bobkov-taylor} J(\alpha) + \frac{1}{2J(\alpha)}(J(\alpha) J''(\alpha)+1) \delta^2 \pm O(\delta^4). \end{equation} Thus if we take $J = \giso$, which satisfies $\giso \giso'' + 1 = 0$, then the needed inequality~\eqref{eqn:bobkov-key} will at least be satisfied ``for small $\delta$, up to an additive $o(\delta^2)$''. Perhaps surprisingly, this is enough to deduce that~\eqref{eqn:bobkov-key} holds exactly, for all~$\delta$. This was (in a sense) first established by Barthe and Maurey, who used stochastic calculus and It\^o's Formula to prove that~\eqref{eqn:bobkov-key} holds with $J = \giso$. Let us present here a sketch of an elementary, discrete version of the Barthe--Maurey argument. We wish to show that Theorem~\ref{thm:bobkov} holds in the $n = 1$ case; say, for the function $f(\by) = \alpha + \beta \by$, where $\by \sim \bits$. Let's take a random walk on the line, starting from~$0$, with independent increments $\bx_1, \bx_2, \bx_3, \dots$ of $\pm \delta$, and stopping when the walk reaches $\pm 1$ (we assume $1/\delta \in \Z^+$). We let $\by \in \{-1,1\}$ be the stopping point of this walk (which is equally likely to be~$\pm 1$). Now proving Bobkov's inequality for $f(\by) = \alpha + \beta(\bx_1 + \bx_2 + \bx_3 + \cdots)$ can be reduced to proving Bobkov's inequality just for $f(\bx_1) = \alpha + \beta \bx_1$, essentially by the same easy induction used to derive Theorem~\ref{thm:bobkov} from its $n = 1$ case. This puts us back in the same position as before: we need to show that \[ \giso(\alpha) \leq \half \sqrt{(\beta \delta)^2 + \giso(\alpha + \beta\delta)^2} + \half \sqrt{(\beta\delta)^2 + \giso(\alpha - \beta\delta)^2}. \] However we now have the advantage that the quantity $\beta \delta$ is indeed ``small''; we can make it as small as we please.
By the Taylor expansion~\eqref{eqn:bobkov-taylor}, the above inequality indeed holds up to an additive $o(\delta^2)$ error. Furthermore, if we simply let this error accumulate in the induction, it costs us almost nothing. It's well known (and simple to show) that if $\bT$ is the number of steps the random walk takes before stopping, then $\E[\bT] = 1/\delta^2$. Thus we can afford to let an~$o(\delta^2)$ error accumulate for $1/\delta^2$ steps, since~$\delta$ can be made arbitrarily small. The Barthe--Maurey version of the above argument replaces the random walk with Brownian motion; this is arguably more elegant, but less elementary. An amusing aspect of all this is the following: We first saw in Section~\ref{sec:borell} that statements about Gaussian geometry can be proven by ``simulating'' Gaussian random variables by sums of many random $\pm 1$ bits (scaled down); the above argument shows that it can also be effective to simulate a single $\pm 1$ random bit by the sum of many small Gaussians (i.e., with Brownian motion). We end this section by mentioning that Bobkov's approach to the Gaussian Isoperimetric Inequality inspired Bakry and Ledoux~\cite{BL96b,Led98} to give a ``semigroup proof'' of the Gaussian version of Bobkov's inequality~\eqref{eqn:gaussian-bobkov} (\`a~la~\cite{BE85a,Led94}). Specifically, if one defines \[ F(\rho) = \E_{\gamma_d}[\|(\grad \U_\rho f,\ \giso(\U_\rho f))\|_2], \] then they showed that~$F$ is a nondecreasing function of $\rho \in [0,1]$ just by differentiation (though the computations are a bit cumbersome). This immediately implies~\eqref{eqn:gaussian-bobkov} by taking $\rho = 0, 1$. Mossel and Neeman~\cite{MN12} proved the more general Borell Isoperimetric Inequality using a very similar semigroup technique, and Ledoux~\cite{Led13} generalized their methodology to include the Hypercontractive Inequality, Brascamp--Lieb inequalities, and some forms of the Slepian inequalities.
However, it was by returning to discrete methods --- i.e., proving a statement about Boolean functions by induction --- that De, Mossel, and Neeman~\cite{DMN13} were able to simultaneously establish the Majority Is Stablest Theorem and Borell's theorem. \section{The De--Mossel--Neeman proof of the MIST} \label{sec:DMN} Mossel and Neeman actually proved the following functional version of Borell's Isoperimetric Inequality: \begin{theorem} \label{thm:mn} Fix $0 < \rho < 1$ and let $f, g \co \R^d \to [0,1]$. Then if $(\bz, \bz')$ is a $\rho$-correlated $d$-dimensional Gaussian pair, \begin{equation} \label{eqn:mn} \E[\Lambda_{\rho}(f(\bz),g(\bz'))] \leq \Lambda_{\rho}(\E[f(\bz)], \E[g(\bz')]). \end{equation} (If $-1 < \rho < 0$ then the inequality is reversed.) \end{theorem} This is equivalent to Borell's inequality in the same way that~\eqref{eqn:gaussian-bobkov} is equivalent to the Gaussian Isoperimetric Inequality (note in particular that $\Lambda_\rho(\alpha,\beta) = \alpha \beta$ when $\alpha,\beta \in \{0,1\}$). This inequality also has the property that the general-$d$ case follows from the $d = 1$ case by a completely trivial induction, using no special property of $\Lambda_\rho$ or the Gaussian distribution; it only uses that the~$d$ pairs $(\bz_i, \bz'_i)$ are independent. In particular, \emph{if}~\eqref{eqn:mn} were to hold for one-bit functions $f, g \co \{-1,1\} \to [0,1]$ then we could deduce it for general $f, g \co \{-1,1\}^n \to [0,1]$ by induction, then for Gaussian $f, g \co \R \to [0,1]$ by the Central Limit Theorem, and finally for Gaussian $f, g \co \R^d \to [0,1]$ by induction again. Unfortunately, the inequality~\eqref{eqn:mn} does \emph{not} hold for $f, g \co \bits \to [0,1]$. It's clear that it can't, because otherwise we would obtain the Majority Is Stablest Theorem with no hypothesis about small influences (which is false). 
Indeed, the ``dictator'' functions $f, g \co \bits \to [0,1]$, $f(x) = g(x) = \half + \half x$ provide an immediate counterexample; inequality~\eqref{eqn:mn} becomes the false statement $\frac14 + \frac14\rho \leq \frac12 - \frac{1}{2\pi}\arccos \rho$. Nevertheless, as noted by De, Mossel, and Neeman~\cite{DMN13} we are back in the situation wherein~\eqref{eqn:mn} ``essentially'' holds for one-bit functions ``with small influences''; i.e., for $f(x) = \alpha + \delta_1 x$, $g(x) = \beta + \delta_2 x$ with $\delta_1, \delta_2$ ``small''. To see this, Taylor-expand the left-hand side of~\eqref{eqn:mn} around~$(\alpha,\beta)$: \begin{align} \label{eqn:dmn-1} \E_{\substack{(\bx, \bx') \\ \textnormal{$\rho$-correlated}}}[\Lambda_{\rho}(f(\bx),g(\bx'))] &= \Lambda_\rho(\alpha,\beta) + \E[\delta_1 \bx \cdot \D_1 \Lambda_\rho(\alpha,\beta)] + \E[\delta_2 \bx' \cdot \D_2 \Lambda_\rho(\alpha,\beta)] \nonumber\\ &+\ \tfrac12 \E\left[\begin{bmatrix} \delta_1 \bx & \delta_2 \bx' \end{bmatrix} \cdot H\Lambda_\rho(\alpha,\beta) \cdot \begin{bmatrix} \delta_1 \bx \\ \delta_2 \bx' \end{bmatrix}\right] + \cdots \end{align} (Here $H \Lambda_\rho$ denotes the Hessian of $\Lambda_\rho$.) The first term here matches the right-hand side of~\eqref{eqn:mn}. The second and third terms vanish, since $\E[\bx] = \E[\bx'] = 0$. Finally, since $\E[\bx\bx'] = \rho$ the fourth term is \begin{equation} \label{eqn:dmn-4} \tfrac12 \begin{bmatrix} \delta_1 & \delta_2 \end{bmatrix} \cdot H_\rho \Lambda_\rho(\alpha,\beta) \cdot \begin{bmatrix} \delta_1 \\ \delta_2 \end{bmatrix}, \quad \text{where the notation } H_\rho F \text{ means } \begin{bmatrix} 1 & \rho \\ \rho & 1 \end{bmatrix} \circ HF. \end{equation} One can show by a relatively short calculation that $\det(H_\rho \Lambda_\rho)$ is identically~$0$ and that the diagonal entries of $H_\rho \Lambda_\rho$ always have opposite sign to~$\rho$.
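This calculation is easy to spot-check. The sketch below relies on the standard identity $\frac{\partial \Lambda_\rho}{\partial \alpha} = \Phi\bigl(\frac{b - \rho a}{\sqrt{1-\rho^2}}\bigr)$ for the bivariate Gaussian CDF, where $a = \Phi^{-1}(\alpha)$, $b = \Phi^{-1}(\beta)$, differentiated once more to get the Hessian entries; the grid of test points is an arbitrary illustrative choice:

```python
from math import sqrt
from statistics import NormalDist

N = NormalDist()

def hessian_rho(alpha, beta, rho):
    """Entries (H11, H22, rho*H12) of H_rho Lambda_rho(alpha, beta), using
    dLambda/dalpha = Phi(u) with u = (b - rho*a)/sqrt(1 - rho^2),
    a = Phi^{-1}(alpha), b = Phi^{-1}(beta)."""
    a, b = N.inv_cdf(alpha), N.inv_cdf(beta)
    t = sqrt(1 - rho**2)
    u, v = (b - rho * a) / t, (a - rho * b) / t
    h11 = -rho * N.pdf(u) / (t * N.pdf(a))    # d^2 Lambda / d alpha^2
    h22 = -rho * N.pdf(v) / (t * N.pdf(b))    # d^2 Lambda / d beta^2
    h12 = N.pdf(u) / (t * N.pdf(b))           # mixed partial
    return h11, h22, rho * h12                # H_rho scales off-diagonals by rho

max_det = 0.0
diag_ok = True
for alpha in [0.2, 0.5, 0.8]:
    for beta in [0.3, 0.6]:
        for rho in [0.25, 0.5, 0.75]:
            h11, h22, h12 = hessian_rho(alpha, beta, rho)
            max_det = max(max_det, abs(h11 * h22 - h12 * h12))
            diag_ok = diag_ok and h11 < 0 and h22 < 0
```

Up to floating-point error the determinant vanishes at every grid point, and the diagonal entries are negative, matching the claim for $\rho > 0$.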
Thus for $0 < \rho < 1$, the matrix $H_\rho \Lambda_\rho$ is everywhere negative semidefinite and hence~\eqref{eqn:dmn-4} is always nonpositive. (The reverse happens for $-1 < \rho < 0$.) Ledoux~\cite{Led13} introduced the terminology \emph{$\rho$-concavity of~$F$} for the condition $H_\rho F \preccurlyeq 0$. It follows that~\eqref{eqn:mn} indeed holds for one-bit Boolean functions $f, g$, up to the ``cubic error term'' elided in~\eqref{eqn:dmn-1}. If one now does the induction while keeping these cubic error terms around, the result is the following: \begin{theorem}[``De--Mossel--Neeman Theorem''] \label{thm:dmn} Fix $0 < \rho < 1$ and any small $\eta > 0$. Then for $f, g \co \bits^n \to [\eta, 1-\eta]$, \begin{equation} \label{eqn:mn2} \E_{\substack{(\bx, \by) \\ \textnormal{$\rho$-correlated}}}[\Lambda_{\rho}(f(\bx),g(\by))] \leq \Lambda_{\rho}(\E[f(\bx)], \E[g(\by)]) + O_{\rho, \eta}(1) \cdot \sum_{i=1}^n (\|\mathrm{d}_i f\|_3^3 + \|\mathrm{d}_i g\|_3^3), \end{equation} where $\mathrm{d}_i h$ denotes the \emph{$i$th martingale difference} for~$h$, \[ (\bx_1, \dots, \bx_i) \mapsto \E[h \mid \bx_1, \dots, \bx_i] - \E[h \mid \bx_1, \dots, \bx_{i-1}]. \] (For $-1 < \rho < 0$, the inequality~\eqref{eqn:mn2} is reversed.) \end{theorem} With this theorem in hand, Borell's Isoperimetric Inequality for Gaussian functions $f, g \co \R \to [\eta,1-\eta]$ is easily deduced by the standard Central Limit Theorem argument: one only needs to check that the cubic error term is~$O(\frac{1}{\sqrt{n}})$, and~$n$ may be taken arbitrarily large. Then one immediately deduces the full Borell theorem by taking $\eta \to 0$ and doing another induction on the Gaussian dimension~$d$. On top of this, De, Mossel, and Neeman showed how to deduce Majority Is Stablest from Theorem~\ref{thm:dmn}, using a small collection of analytical tricks appearing in the original proof.
The key trick is to use hypercontractivity to bound $\|\mathrm{d}_i f\|_3^3$ in terms of \[ (\|\D_i f\|_2^2)^{1+\delta} = \Inf_i[f]^{1+\delta} \] for some small $\delta \approx \frac{\log \log(1/\eps)}{\log(1/\eps)} > 0$. The fact that we get the nontrivial extra factor $\Inf_i[f]^{\delta}$, which is at most $\eps^{\delta} \approx \frac{1}{\log(1/\eps)}$ by assumption, is the key to finishing the proof. \section{Conclusions: proof complexity} \label{sec:conclusion} As mentioned, there are two known proofs of the Majority Is Stablest Theorem: the original one, which used the Invariance Principle to reduce the problem to Borell's Isoperimetric Inequality; and, the elegant one due to De, Mossel, and Neeman, which is a completely ``discrete proof'', as befits a purely discrete problem like Majority Is Stablest. Esthetics is not the only merit of the latter proof, however; as we describe in this section, the fact that the De--Mossel--Neeman proof is simpler and more discrete leads to new technical results concerning the computational complexity of Max-Cut. Regarding Max-Cut, let's consider the closely related problem of \emph{certifying} that a given graph has no large cut. As we saw in Section~\ref{sec:max-cut}, for any graph~$G$ we can use semidefinite programming to efficiently compute a value $\beta = \SDPOpt(G)$ such that the maximum cut in $G$ satisfies $\Opt(G) \leq \beta$. We think of this algorithm as producing a \emph{proof} of the statement ``$\Opt(G) \leq \beta$''. Furthermore, the (analysis of the) Goemans--Williamson algorithm implies that the bound found by this algorithm is fairly good; whenever $G$ truly satisfies $\Opt(G) \leq \frac{\theta}{\pi}$ (for $\theta \in [\theta_{\mathrm{GW}}, \pi]$), we will efficiently obtain a proof of ``$\Opt(G) \leq \half - \half \cos \theta$''. For example, if $\Opt(G) \leq \frac34$ then there is an efficiently-obtainable ``SDP proof'' of the statement ``$\Opt(G) \leq \half + \frac{1}{2\sqrt{2}} \approx .854$''. 
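For concreteness, the certified bound $\half - \half \cos \theta$ corresponding to $\Opt(G) \leq \frac{\theta}{\pi}$ is simple arithmetic (a sketch of the running example $\Opt(G) \leq \frac34$):

```python
import math

opt = 3 / 4                                # assume Opt(G) <= 3/4, i.e. theta = 3*pi/4
theta = opt * math.pi
certified = 0.5 - 0.5 * math.cos(theta)    # SDP-certifiable upper bound on Opt(G)
# equals 1/2 + 1/(2*sqrt(2)), about 0.854
```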
Assuming the Unique Games Conjecture (and $\PTIME \neq \NP$), the works~\cite{KKMO04,MOO05} imply that there is no efficient algorithm that can in general find better proofs; e.g., that can certify ``$\Opt(G) \leq .853$'' whenever $\Opt(G) \leq \frac34$. In fact, under the additional standard assumption of $\coNP \neq \NP$, the implication is simply that no short proofs \emph{exist}; i.e., there are infinite families of graphs $G = (V,E)$ with $\Opt(G) \leq \frac34$ but no $\poly(|V|)$-length proof of the statement ``$\Opt(G) \leq .853$'' (say, in some textbook formalization of mathematical reasoning). In other words: \paragraph{Unique Games \& $\PTIME \neq \NP$ Prediction about Max-Cut:} Let $\theta \in [\theta_{\mathrm{GW}}, \pi]$ and $\delta > 0$. There is no polynomial-time algorithm that, given a Max-Cut instance~$G$ with $\Opt(G) \leq \frac{\theta}{\pi}$, outputs a proof of ``$\Opt(G) \leq \half - \half \cos \theta - \delta$''. \paragraph{Unique Games \& $\coNP \neq \NP$ Prediction about Max-Cut:} In fact, there are infinitely many graphs~$G$ with $\Opt(G) \leq \frac{\theta}{\pi}$, yet for which no polynomial-length proof of ``$\Opt(G) \leq \half - \half \cos \theta - \delta$'' exists.\\ As mentioned, the Unique Games Conjecture is quite contentious, so it's important to seek additional evidence concerning the above predictions. For example, to support the first prediction one should at a minimum show that the semidefinite program~\eqref{eqn:gw-sdp} fails to provide such proofs. That is, one should find graphs~$G$ with $\Opt(G) \leq \frac{\theta}{\pi}$ yet $\SDPOpt(G) \geq \half - \half \cos \theta$. Such graphs are called \emph{SDP integrality gap instances}, as they exhibit a large gap between their true optimal Max-Cut and the upper-bound certified by the SDP. 
Borell's Isoperimetric Inequality precisely provides such graphs, at least if ``weighted continuous graphs'' are allowed: One takes the ``graph'' $G$ whose vertex set is~$\R^d$ and whose ``edge measure'' is given by choosing a $(\cos \theta)$-correlated pair of Gaussians. The fact that $\Opt(G) \leq \frac{\theta}{\pi}$ is immediate from Borell's Theorem~\ref{thm:gaussian-mist}; further, it's not hard to show (using the idea of Remark~\ref{rem:sphere-idea}) that choosing $\vec{U}(v) = v/\sqrt{d}$ in~\eqref{eqn:gw-sdp} establishes $\SDPOpt(G) \geq \half - \half \cos \theta - o_d(1)$. These facts were essentially established originally by Feige and Schechtman~\cite{FS02}, who also showed how to discretize the construction so as to provide finite integrality gap graphs. (Incidentally, we may now explain that the Raghavendra Theory mentioned at the end of Section~\ref{sec:max-cut} significantly generalizes the work of Khot et al.~\cite{KKMO04} by showing how to transform an SDP integrality gap instance for \emph{any} CSP into a matching computational hardness-of-approximation result, assuming the Unique Games Conjecture.) Although the semidefinite program~\eqref{eqn:gw-sdp} fails to certify $\Opt(G) \leq \frac{\theta}{\pi}$ for the ``correlated Gaussian graphs'' described above, a great deal of recent research has gone into developing stronger ``proof systems'' for reasoning about Max-Cut and other CSPs. (See, e.g.,~\cite{Geo10} for a survey.) Actually, until recently this research was viewed not in terms of proof complexity but in terms of analyzing ``tighter'' SDP relaxations that can still be solved efficiently. For example, one can still solve the optimization problem~\eqref{eqn:gw-sdp} in polynomial time with the following ``triangle inequality'' constraint added in: \[ \la U(v), U(w) \ra + \la U(w), U(x) \ra - \la U(v), U(x)\ra \leq 1 \quad \forall v,w,x \in V. 
\] Note that with this additional constraint we still have $\Opt(G) \leq \SDPOpt(G)$ for all~$G$, because the constraint is satisfied by any genuine bipartition $U \co V \to \bits$. As noted by Feige and Schechtman~\cite{FS02}, adding this constraint gives a certification better than ``$\Opt(G) \leq \half - \half \cos \theta$'' for the Gaussian correlation graphs, though it's not clear by how much. Although this stronger ``SDP + triangle inequality'' proof system does better on Gaussian correlation graphs, a breakthrough work of Khot and Vishnoi~\cite{KV05} showed that it still suffers from the same integrality gap for a different infinite family of graphs. In other words, even when the SDP includes the triangle inequalities, these \emph{Khot--Vishnoi graphs}~$G = (V,E)$ have $\SDPOpt(G) \geq \half - \half \cos \theta$ yet $\Opt(G) \leq \frac{\theta}{\pi} + o_{|V|}(1)$. The second fact, the upper bound on the true Max-Cut value, relies directly on the Majority Is Stablest Theorem. Subsequent works~\cite{KS09,RS09b} significantly generalized this result by showing that even much tighter ``SDP hierarchies'' still fail to certify anything better than ``$\Opt(G) \leq \half - \half \cos \theta$'' for the Khot--Vishnoi graphs~$G$. This could be considered additional evidence in favor of the Unique Games \& $\PTIME \neq \NP$ Prediction concerning Max-Cut. A recent work by Barak et al.~\cite{BBH+12} cast some doubt on this prediction, however. Their work showed that the especially strong ``Lasserre/Parrilo SDP hierarchy''~\cite{Sho87,Las00,Par00} succeeds in finding some good CSP bounds which weaker SDP hierarchies are unable to obtain. Specifically, they showed it provides good upper bounds on the optimal value of the Khot--Vishnoi ``Unique Games instances'' (which are, in some sense, subcomponents of the Khot--Vishnoi Max-Cut graphs). 
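Incidentally, the earlier claim that any genuine bipartition $U \co V \to \bits$ satisfies the added triangle inequality constraint is a two-line brute-force check (ours):

```python
from itertools import product

# For a true bipartition U : V -> {-1, +1}, the vector inner products are just
# products of signs, and the triangle inequality constraint
#   <U(v),U(w)> + <U(w),U(x)> - <U(v),U(x)> <= 1
# holds for every triple, as brute force over all sign patterns confirms.
ok = all(u * v + v * w - u * w <= 1
         for u, v, w in product((-1, 1), repeat=3))
assert ok
```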
Subsequent work of O'Donnell and Zhou~\cite{OZ13} further emphasized the equivalence of the Lasserre/Parrilo SDP hierarchy and the \emph{Sum-of-Squares (SOS) proof system}, invented by Grigoriev and Vorobjov~\cite{GV01}. In the context of the Max-Cut CSP, this proof system (inspired by Hilbert's~17th Problem~\cite{Hil02} and the \emph{Positivstellensatz} of Krivine~\cite{Kri64} and Stengle~\cite{Ste73}) seeks to establish the statement ``$\Opt(G) \leq \beta$'' for a graph $G = (V,E)$ by expressing \begin{equation} \label{eqn:SOS} \beta - \left(\avg_{(v,w) \in E} \half - \half X_v X_w\right) = \sum_{i=1}^s P_i^2 \quad \text{within the ring } \R[(X_v)_{v \in V}]/(X_v^2 - 1)_{v \in V}, \end{equation} for some formal polynomials $P_1, \dots, P_s$ of degree at most some constant~$C$. Somewhat remarkably, there is an efficient ($|V|^{O(C)}$-time) algorithm for finding such $P_i$'s whenever they exist. As mentioned, for the Khot--Vishnoi Max-Cut graphs~$G$, the fact that $\Opt(G) \leq \frac{\theta}{\pi} + o(1)$ follows directly from the Majority Is Stablest Theorem. To show that the SOS proof system can also certify this fact (thereby casting some doubt on the Unique Games \& $\PTIME \neq \NP$ Prediction about Max-Cut), one needs to show that not only is the Majority Is Stablest Theorem true, but that it can be proved within the extremely constrained SOS proof system, \`a~la~\eqref{eqn:SOS}. The original proof of the Majority Is Stablest Theorem was quite complicated, using the Invariance Principle from~\cite{MOO10} to reduce to Borell's Isoperimetric Inequality, and then relying on the known geometric proofs~\cite{Bor85,Bec92} of the latter. The prospect for converting this proof into an SOS format seemed quite daunting (although a partial result was established in~\cite{OZ13}, showing that the SOS proof system can establish ``$\Opt(G) \leq \half - \frac{\cos\theta}{\pi} - (\half - \frac{1}{\pi})\cos^3 \theta$'').
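To illustrate what an SOS certificate~\eqref{eqn:SOS} looks like, here is a degree-2 example, verified mechanically (the example is ours, not from the source): for the triangle $K_3$, whose true Max-Cut value is $2/3$, degree-2 SOS --- like the basic SDP --- certifies only ``$\Opt(K_3) \leq 3/4$'', via the identity $3/4 - \avg_{(v,w)}(\half - \half X_v X_w) = \frac{1}{12}(X_1+X_2+X_3)^2$ in the quotient ring:

```python
from fractions import Fraction

# Polynomials over R[x_0,x_1,x_2]/(x_i^2 - 1): since x_i^2 = 1, every monomial
# is a subset of the variables, and multiplication is symmetric difference.
def mul(p, q):
    out = {}
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            m = m1 ^ m2  # symmetric difference of frozensets
            out[m] = out.get(m, Fraction(0)) + c1 * c2
    return {m: c for m, c in out.items() if c != 0}

def add(p, q):
    out = dict(p)
    for m, c in q.items():
        out[m] = out.get(m, Fraction(0)) + c
    return {m: c for m, c in out.items() if c != 0}

def scale(p, s):
    return {m: c * s for m, c in p.items()}

x = [{frozenset([i]): Fraction(1)} for i in range(3)]
one = {frozenset(): Fraction(1)}

# LHS of the SOS identity: 3/4 - avg over the triangle's edges of (1/2 - 1/2 x_v x_w)
lhs = scale(one, Fraction(3, 4))
for v, w in [(0, 1), (1, 2), (0, 2)]:
    term = add(scale(one, Fraction(1, 2)), scale(mul(x[v], x[w]), Fraction(-1, 2)))
    lhs = add(lhs, scale(term, Fraction(-1, 3)))

# Candidate certificate: (1/12) * (x_0 + x_1 + x_2)^2
s = add(add(x[0], x[1]), x[2])
sos = scale(mul(s, s), Fraction(1, 12))

assert lhs == sos  # a genuine degree-2 SOS proof of "Opt(K3) <= 3/4"
```

Higher-degree certificates for statements like ``$\Opt(G) \leq \frac{\theta}{\pi} + o(1)$'' are what the De--Mossel--Neeman argument is used to construct.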
However, the simplicity and discrete nature of the new De--Mossel--Neeman proof of the Majority Is Stablest Theorem allowed them to show that the SOS proof system \emph{can} establish the truth about the Khot--Vishnoi graphs, $\Opt(G) \leq \frac{\theta}{\pi} + o(1)$. It is to be hoped that this result can be extended to the entire Raghavendra Theory, thereby showing that the SOS proof system can certify the optimal value of the analogue of the Khot--Vishnoi instances for \emph{all} CSPs. However, as the Raghavendra Theory still relies on the Invariance Principle, it is unclear whether this is possible. Finally, in light of the De--Mossel--Neeman result, the following interesting question is open: Are there (infinite families of) instances of the Max-Cut problem~$G$ such that $\Opt(G) \leq \frac{\theta}{\pi}$, yet such that any mathematical proof of this statement is so complicated that the SOS proof system cannot establish anything better than ``$\Opt(G) \leq \half - \half \cos \theta$''? If such graphs were found, this might tilt the weight of evidence back in favor of the Unique Games \& $\PTIME \neq \NP$ Prediction. Of course, if human mathematicians explicitly construct the proof of $\Opt(G) \leq \frac{\theta}{\pi}$, presumably it will have polynomial length, and therefore not provide any evidence in favor of the Unique Games \& $\coNP \neq \NP$ Prediction. To provide evidence for this stronger prediction, one presumably needs to give a \emph{probabilistic} construction of graphs~$G$ such that both of the following happen with high probability: (i)~$\Opt(G) \leq \frac{\theta}{\pi}$; and, (ii)~there is no polynomial-length proof even of ``$\Opt(G) \leq \half - \half \cos \theta$''. \frenchspacing \vspace{-.04in}
\section{Introduction} In recent years, there has been growing interest and a great deal of activity (see e.g. $[1],[2],[3],[4],[5]$) in multidimensional cosmology. A feature common to many of those works is to assume that the Universe is a $(4+d)$-dimensional manifold where, due to its evolution, only three spatial dimensions are actually observable, while the remaining $d$ have curled up into compact spaces of unobservable small radii. This point of view is also apparent in the Kaluza-Klein spacetime of multidimensional supergravity$[6],[7],[8],[9]$.\par\noindent In this letter we consider dynamical compactification of a different sort. The $(4+d)$-dimensional space is supposed to break up into a $(4-n)$-dimensional Minkowski space and a compact $(n+d)$-dimensional manifold whose compactification radii are governed by Einstein's field equations. Here the integer $n$, ranging from $1$ to $3$, is the number of usual spatial dimensions that are, by hypothesis, compactified on a circle like the extra $d$ dimensions, but with different radii. Moreover we require that at the initial time $t=0$ the compactification radii of the $(n+d)$ spatial dimensions are all the same, and that each radius of the $d$ extra dimensions at the present time equals the Planck length, while each radius of the usual $n$ dimensions is comparable to the size of the Universe. In the simplified model we propose, only the cosmological constant $\Lambda$ will be retained in Einstein's equations, thus neglecting matter contributions as well as scalar field terms appropriate to inflationary cosmology and a scale factor for the flat dimensions. This model is admittedly not realistic, but it can prove to be useful for future developments.\par\noindent The letter is organised as follows: taking as guidelines the work of Chodos and Detweiler $[10]$, we find the solutions to the field equations and generalize those already found by Kasner when $\Lambda = 0$ $[11]$.
Successively we consider eleven dimensional cosmological models with different values of $n$ and $\Lambda$ and give numerical estimates of some quantities of interest such as the Universe age and the evolution of the compactification radii.\par\noindent \section{The line element} The metric suitable to our problem has the form \begin{equation} ds^{2} = dt^{2} - \sum_{\imath = 1}^{3-n}\, dx^{\imath} dx^{\imath} - a^{2}(t)\, \sum_{\imath = 4-n}^{3}\, d\varphi^{\imath}d\varphi^{\imath} - l^{2}(t)\, \sum_{\imath = 1}^{d}\, d\psi^{\imath}d\psi^{\imath} \end{equation} where $a(t)$ and $l(t)$ are respectively the compactification radii of each one of the $n$ and of the $d$ spatial dimensions and $\varphi^{\imath}$ and $\psi^{\imath}$ have a $2\pi$ period. \par\noindent Einstein's equations, with cosmological term only, can be written as \begin{equation} R_{MN} = \dfrac{2 \Lambda}{n+d-1}\, g_{MN}\, , \hspace{1in}{\textsl {\scriptsize {M,N = 1,2,\ldots ,4+d}}} \end{equation} and the relevant ones are given explicitly by: \begin{align*} &n\, \dfrac{\ddot{a}}{a} + d\, \dfrac{\ddot{l}}{l} = \dfrac{2 \Lambda}{n+d-1} \tag{3a}\\{}\\&\dfrac{\ddot{a}}{a} + (n-1)\,\left(\dfrac{\dot{a}}{a}\right)^{2} +d\, \dfrac{\dot{a}}{a}\, \dfrac{\dot{l}}{l} = \dfrac{2 \Lambda}{n+d-1} \tag{3b}\\{}\\&\dfrac{\ddot{l}}{l} + (d-1)\,\left(\dfrac{\dot{l}}{l}\right)^{2} +n\, \dfrac{\dot{a}}{a}\, \frac{\dot{l}}{l}= \dfrac{2 \Lambda}{n+d -1} \tag{3c} \end{align*} where a dot means derivative with respect to the time. 
\par\noindent The system $(3)$ can be solved with the conditions that at the present time $t=t_{0}$ (age of the Universe) one has \setcounter{equation}{3}\begin{equation} \begin{split}a(t_{0}) = a_{0}\, , &\qquad\Dot{a}(t_{0}) = H_{0}\, a_{0} \\l(t_{0}) = l_{0}\, , &\qquad\Dot{l}(t_{0}) = h_{0}\, l_{0} \end{split}\end{equation} The Hubble constant $H_{0}$ and the new constant $h_{0}$ which appear in the above conditions are not independent as one can see from Eqs.(3) rewritten at $t=t_{0}$ with the introduction of the deceleration parameter $q_{0}= -\, (\Ddot{a} a/\Dot{a}^{2})_{0}$ and of its analogous $Q_{0}= -\, (\Ddot{l} l/\Dot{l}^{2})_{0}$: \begin{align*} &n q_{0} H_{0}^{2} + d Q_{0} h_{0}^{2} = -\, \dfrac{2 \Lambda} {n+d-1} \tag{5a} \\&(n-1-q_{0})\, H_{0}^{2} + d H_{0} h_{0} = \dfrac{2 \Lambda}{n+d-1} \tag{5b} \\&(d-1-Q_{0})\, h_{0}^{2} + n H_{0} h_{0} = \dfrac{2 \Lambda}{n+d-1} \tag{5c} \end{align*} It is in fact straightforward to obtain: \setcounter{equation}{5}\begin{equation} \hspace{-0.4in} \dfrac{h_{0}}{H_{0}}= \begin{cases}-\, \dfrac{n}{d-1} + \sqrt{\dfrac{n(n+d-1)} {d(d-1)^{2}} + \dfrac{2 \lambda}{d(d-1)}} &\qquad {\text {if}} \quad d \neq 1 \\ {} \\-\, \dfrac{n-1}{2} +\dfrac{\lambda}{n}\, &\qquad {\text {if}} \quad d = 1 \end{cases}\end{equation} with \begin{equation}\lambda = \dfrac{\Lambda}{H_{0}^{2}} \end{equation} When $d\neq 1$ it must be $\lambda > - \, n(n+d-1)/2(d-1)$ for reality.\par\noindent For future convenience we define the dimensionless quantities: \begin{align} &\tau = H_{0}t \, , \qquad \tau_{0} = H_{0}t_{0} \\&\omega = \sqrt{\dfrac {n+d}{2\, (n+d-1)}\, |\lambda |} \\&\delta_{>} = \dfrac{1}{2}\, {\text {arctanh}}\, \dfrac{2 \omega}{n + d\, \dfrac{h_{0}}{H_{0}}} \\&\delta_{<} = \dfrac{1}{2}\, {\text {arctan}}\, \dfrac{2 \omega}{n + d\, \dfrac{h_{0}}{H_{0}}} \end{align} The solutions to system $(3)$ can then be written as \begin{equation}\hspace{-1in} \dfrac{a(\tau)}{a_{0}}=\begin{cases}&\left[ \dfrac{\sinh (\omega (\tau-\tau_{0}) + 
\delta_{>})} {\sinh \delta_{>}}\right]^{\beta_{+}}\, \left[ \dfrac{\cosh (\omega (\tau-\tau_{0}) + \delta_{>})} {\cosh \delta_{>}}\right]^{\beta_{-}} \hfill {\text {if}} \quad \lambda > 0 \\{}\\&\left[ 1 + (n + d\, \dfrac{h_{0}} {H_{0}})(\tau - \tau_{0}) \right]^{\beta_{+}} \hfill {\text {if}} \quad \lambda = 0 \\ {} \\&\left[ \dfrac{\sin (\omega (\tau-\tau_{0}) + \delta_{<})} {\sin \delta_{<}}\right]^ {\beta_{+}}\, \left[ \dfrac{\cos (\omega (\tau-\tau_{0}) + \delta_{<})} {\cos \delta_{<}}\right]^{\beta_{-}} \hfill {\text {if}} \quad \lambda < 0 \end{cases}\end{equation} \begin{equation}\hspace{-1in} \dfrac{l(\tau)}{l_{0}}=\begin{cases}&\left[ \dfrac{\sinh (\omega (\tau-\tau_{0}) + \delta_{>})} {\sinh \delta_{>}}\right]^ {\gamma_{-}}\, \left[ \dfrac{\cosh (\omega (\tau-\tau_{0}) + \delta_{>})} {\cosh \delta_{>}}\right]^{\gamma_{+}} \hfill {\text {if}} \quad \lambda > 0 \\{}\\&\left[ 1 + (n + d\, \dfrac{h_{0}}{H_{0}})(\tau - \tau_{0}) \right]^{\gamma_{-}} \hfill {\text {if}} \quad \lambda = 0 \\ {} \\&\left[ \dfrac {\sin (\omega (\tau-\tau_{0}) + \delta_{<})} {\sin \delta_{<}} \right]^{\gamma_{-}}\, \left[ \dfrac{\cos (\omega (\tau-\tau_{0}) + \delta_{<})} {\cos \delta_{<}}\right]^{\gamma_{+}} \hfill {\text {if}} \quad \lambda < 0 \end{cases}\end{equation} Here \begin{align} \beta_{+} & = \dfrac{1 + \sqrt {d\, (n+d-1)/n}}{n+d} \; ,&\beta_{-} & = \dfrac{1 - \sqrt {d\, (n+d-1)/n}}{n+d} \\{}\nonumber \\\gamma_{+} & = \dfrac{1 + \sqrt {n\, (n+d-1)/d}}{n+d} \; ,&\gamma_{-} & = \dfrac{1 - \sqrt {n\, (n+d-1)/d}}{n+d} \end{align} while $l_{0}$ is assumed to be of the order of the Planck length and $a_{0}$ is the radius of the circle of the actually macroscopic dimensions.\par\noindent Once $n$ and $d$ are fixed and $H_{0}$ is taken as known, the ratios $a(\tau)/a_{0}$ and $l(\tau)/l_{0}$ turn out to depend only on $\lambda$ and $\tau_{0}$.
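Two consistency checks on the formulas above can be run numerically (a sketch; ours, not from the source). First, eliminating $q_{0}$ and $Q_{0}$ from Eqs.(5) (multiply (5b) by $n$, (5c) by $d$, add, and use (5a)) gives the constraint $n(n-1)H_{0}^{2} + 2nd\, H_{0}h_{0} + d(d-1)h_{0}^{2} = 2\Lambda$, which both branches of Eq.(6) must satisfy. Second, for $\lambda = 0$ the exponents of Eqs.(14)-(15) obey the generalized Kasner conditions $n\beta_{+} + d\gamma_{-} = 1$ and $n\beta_{+}^{2} + d\gamma_{-}^{2} = 1$, which is what makes the power-law solution solve Eqs.(3) with $\Lambda = 0$:

```python
import math

def h_ratio(n, d, lam):
    """h0/H0 from Eq.(6), covering both the d != 1 and d == 1 branches."""
    if d != 1:
        return (-n / (d - 1)
                + math.sqrt(n * (n + d - 1) / (d * (d - 1) ** 2)
                            + 2 * lam / (d * (d - 1))))
    return -(n - 1) / 2 + lam / n

def exponents(n, d):
    """beta_plus and gamma_minus from Eqs.(14)-(15)."""
    bp = (1 + math.sqrt(d * (n + d - 1) / n)) / (n + d)
    gm = (1 - math.sqrt(n * (n + d - 1) / d)) / (n + d)
    return bp, gm

for n in (1, 2, 3):
    for d in (1, 7):
        # Constraint implied by Eqs.(5): n(n-1) + 2nd x + d(d-1) x^2 = 2*lambda
        for lam in (-0.3, 0.0, 0.5, 1.0):
            if d != 1 and lam <= -n * (n + d - 1) / (2 * (d - 1)):
                continue  # outside the reality bound quoted in the text
            x = h_ratio(n, d, lam)
            assert abs(n * (n - 1) + 2 * n * d * x + d * (d - 1) * x ** 2
                       - 2 * lam) < 1e-9
        # Generalized Kasner conditions for the lambda = 0 exponents
        bp, gm = exponents(n, d)
        assert abs(n * bp + d * gm - 1) < 1e-12
        assert abs(n * bp ** 2 + d * gm ** 2 - 1) < 1e-12
```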
Then, due to the fact that these two quantities are not sufficiently well established, we might vary them step by step in a neighborhood, say, of $\lambda = 0$ and of $\tau_{0}=1$ to obtain numerical estimates of $a(\tau)/a_0$ and $l(\tau)/l_0$.\par\noindent We find it more convenient, however, to proceed in a different manner. Noticing that $\beta_{+} - \, \gamma_{-} = -\, (\beta_{-} - \, \gamma_{+}) \equiv 1/\alpha$ and defining \begin{equation} \rho \equiv \left( \dfrac {l_{0}\, a(0)}{a_{0}\, l(0)} \right) ^{\alpha} \end{equation} which is expected to be a quantity much less than unity, one easily obtains from Equations $(12)$ and $(13)$ written at $\tau =0$: \begin{equation}\hspace{-0.4in} \tau_{0} =\begin{cases}\dfrac{ \delta_{>} - {\text {arctanh}} (\rho\, \tanh \delta_{>})}{\omega} &\qquad {\text {if}}\quad \lambda > 0 \\{}\\ \dfrac {1 -\, \rho}{n+d\, h_{0}/H_ {0}} &\qquad {\text {if}}\quad \lambda = 0 \\{}\\ \dfrac{ \delta_{<} - {\text {arctan}}(\rho\, \tan \delta_{<})} {\omega} &\qquad {\text {if}}\quad \lambda < 0\end{cases} \end{equation} In this way we can calculate $\tau_{0}$ for a given $\lambda$ if we properly choose the parameter $\rho$ or, otherwise stated, the ratio $a(0)/l(0)$. As an example, for $\lambda =0$ we can recover Kasner's solution by choosing $a(0)$ equal to zero, or equivalently $l(0)$ equal to infinity, and therefore $\rho = 0$.\par\noindent In view of the smallness of the ratio $l_0/a_0$, initial values of $a(0)$ and $l(0)$ of not too different orders of magnitude do not appreciably influence the results at later times. Our choice is therefore to have at the initial time the same compactification radii for the $(n+d)$ spatial dimensions and so we put $a(0) = l(0)$. As a consequence, for a given pair $(n,d)$, we have only $\lambda$ as the parameter left free to evaluate both the actual age of the Universe $\tau_0$ and the ratios $a(\tau )/a_0$ and $l(\tau )/l_0$.
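As a numerical illustration of the $\lambda = 0$ branch of the last formula (a sketch; ours, with the magnitudes of $l_0$ and $a_0$ only rough orders of magnitude), take $n=1$, $d=7$ and $a(0)=l(0)$: then $h_0/H_0 = 0$ by Eq.(6), $\alpha = 1$, $\rho = l_0/a_0$ is tiny, and $\tau_0 = (1-\rho)/n$ is essentially one Hubble time:

```python
import math

n, d, lam = 1, 7, 0.0

# h0/H0 from Eq.(6) with d != 1; it vanishes for n = 1, d = 7, lambda = 0.
h_ratio = (-n / (d - 1)
           + math.sqrt(n * (n + d - 1) / (d * (d - 1) ** 2)
                       + 2 * lam / (d * (d - 1))))

beta_p = (1 + math.sqrt(d * (n + d - 1) / n)) / (n + d)   # Eq.(14)
gamma_m = (1 - math.sqrt(n * (n + d - 1) / d)) / (n + d)  # Eq.(15)
alpha = 1 / (beta_p - gamma_m)

# rho = (l0 a(0) / (a0 l(0)))^alpha with a(0) = l(0); l0 of order the Planck
# length and a0 of order the present size of the Universe (rough magnitudes).
l0, a0 = 1.6e-35, 1.0e26
rho = (l0 / a0) ** alpha

tau0 = (1 - rho) / (n + d * h_ratio)  # Eq.(17), lambda = 0 branch
```

The vanishing of $h_0/H_0$ here is consistent with the behaviour reported below, where $l(\tau)$ stays constant for $n=1$, $\lambda=0$.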
\section{Numerical results for eleven dimensions} We shall limit ourselves to the most popular choice of eleven dimensions and so fix the value $d = 7$. \par\noindent To begin with, let us examine the values of $\tau_0 = H_0 t_0$ which can be obtained by our model when the dimensionless cosmological constant $\lambda$ varies in the interval $(-\, 1,1)$. As to the Hubble constant $H_0$, it is common practice to write $H_0 = 100\eta\, km\, s^{-\, 1}\, Mpc^{-\, 1}$ where the uncertainty on it is put into the constant $\eta$, whose present value ranges from $0.50$ to $0.85$. Thus a characteristic time scale for the expansion of the Universe is the Hubble time $1/H_0 = (9.8/\eta)\, Gyr$.\par\noindent If we look at the graph of Figure $1$, where the above-stated restriction on the negative values of $\lambda$ is apparent only for $n=1$, we can see that $\tau_0$ can exceed unity, as one expects, only if $n=1$ and $\lambda < 0$. Due however to the simplicity of our model, this fact does not seem a serious drawback and, in our opinion, all other combinations of $n$ and $\lambda$ cannot be ruled out. \par\noindent The sign of $\lambda$ and the values of $n$ appear to be of great importance for the time evolution of the radii $a(\tau )$ and $l(\tau )$, as shown in Figures $2$~to~$7$. \par\noindent We can summarize the various behaviours as follows: \par\noindent 1) When $\lambda >0$, $a(\tau )$ always increases to infinity with time, as does $l(\tau )$, except in the cases $n=2,3$, where $l(\tau )$ initially decreases during a finite time interval. \par\noindent 2) When $\lambda = 0$, $a(\tau )$ always increases to infinity with time, while $l(\tau )$ is constant if $n=1$ and decreases to zero if $n=2,3$.
\par\noindent 3) When $\lambda < 0$\,, $a(\tau )$ increases to infinity and $l(\tau )$ decreases to zero until $\tau$ reaches the finite value $\tau = \tau_0 + (\pi /2 - \delta_< )/\omega $\, ; whether this is a final state or a new initial state of the Universe is a question we leave open. \par\noindent Let us notice, as one can see from Figures $3$, $5$ and $7$, that the rate of variation of the Planck length is not so dramatic in the range of times considered, and in any case still compatible with the experimental bounds due to the possible time variation of the fundamental constants involved in its definition. \section{Conclusions} The widespread belief in existing multidimensional cosmological models is that three spatial dimensions expand isotropically while the remaining $d$ are actually curled up into spaces of dimensions comparable to the Planck length. Such a behaviour is exhibited also by the $(4+d)$-dimensional Kaluza-Klein spacetime derived from M-theory. \par\noindent We instead propose that at least one of the three spatial macroscopic dimensions can undergo a compactification process with a consequent loss of isotropy. This fact would lead to important experimental consequences, for instance with respect to the cosmic microwave background anisotropy. When all the three usual spatial dimensions compactify, that space becomes like a flat three-dimensional one with the scale factor $a(\tau )/a_0$ describing its expansion. Of course, in all the cases we have considered, the expansion in the distant future is driven by the cosmological constant.\par\noindent Our model is admittedly greatly simplified, but it seems worth exploring the possibility of a compactification process also on the large scale. \newpage
\section{Introduction} \label{Introduction} N\'eel\cite{Neel} predicted the first-order transition of anisotropic antiferromagnets in the presence of a magnetic field in 1936. He pointed out that the spins will abruptly change directions from parallel to perpendicular with respect to the $c$-axis (easy axis of sublattice magnetization) at some value of the external magnetic field, when the magnetic field is applied in the direction parallel to the $c$-axis. His prediction was confirmed experimentally\cite{experiment}, and this first-order phase transition is now known as the spin-flopping process. Thirty years after the discovery of the spin-flopping process, C.N. Yang and C.P. Yang showed by the Bethe ansatz that the one-dimensional (1d) spin-1/2 $XXZ$ model with Ising-like anisotropy exhibits a second-order transition in the presence of a magnetic field\cite{YY}. Thus, for one-dimensional Ising-like antiferromagnets, quantum fluctuations, which are neglected in the mean-field approximation, play an essential role. In this way, quantum fluctuations may drastically modify the classical behavior depending on the dimensionality. Hence, we investigate the magnetization process of the spin-1/2 Ising-like $XXZ$ (I-$XXZ$) models in two and three dimensions in order to see how quantum fluctuations modify the classical behavior of the magnetization process of Ising-like antiferromagnets. \par Also, the spin-1/2 $XXZ$ model can be translated into the hard-core boson model with nearest neighbor repulsion\cite{matsu2}. This model corresponds to a special case of the extended Bose-Hubbard model which is considered to be relevant for low-temperature properties of liquid helium on a periodic substrate and also for Josephson junction arrays \cite{bose_Hub_scal1,bose_Hub_scal2,bose_Hub_mf1}.
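For concreteness, the translation referenced here is the standard hard-core boson mapping; in our rendering (a sketch whose constant term and sign conventions depend on the chosen definitions and may differ from \cite{matsu2}), one sets $S^{+}_{i}=b^{\dagger}_{i}$ and $S^{z}_{i}=n_{i}-1/2$:

```latex
% Hard-core boson form of the XXZ model in a field (our rendering).
\begin{align*}
{\cal H}_{XXZ} - H\sum_{i} S^{z}_{i}
 &= \frac{J}{2}\sum_{\langle i,j\rangle}
    \bigl(b^{\dagger}_{i}b_{j} + b^{\dagger}_{j}b_{i}\bigr)
  + J\lambda \sum_{\langle i,j\rangle} n_{i}n_{j}
  - \mu \sum_{i} n_{i} + {\rm const.}, \\
 \mu &= H + \frac{J\lambda z}{2},
 \qquad n_{i}=b^{\dagger}_{i}b_{i}\in\{0,1\},
\end{align*}
```

so the magnetic field plays the role of a chemical potential, the $XY$ part that of hopping, and the Ising part that of a nearest-neighbor repulsion.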
From the theoretical point of view, a lot of attention has been paid to the Bose-Hubbard model as the simplest model to describe the superfluid-insulator transition \cite{bose_Hub_scal1,bose_Hub_scal2,bose_Hub_mf1,bose_Hub_QMC2,bose_Hub_QMC3}. We can obtain information about the superfluid-insulator transition occurring in the extended Bose-Hubbard model through the investigation of the spin-1/2 I-$XXZ$ model. \par This paper is organized as follows: In Sec. \ref{Model}, the $XXZ$ model is defined and the details of the numerical calculations are presented. In Sec. \ref{Classical}, we review the classical Ising-like $XXZ$ model. Numerical results on the magnetization curve of the spin-1/2 I-$XXZ$ models in two and three dimensions are shown in Sec. \ref{Spin-1/2}. In Sec. \ref{Ising-limit}, the behavior of the magnetization curve in the Ising-limit is investigated by means of perturbation theory. In Sec. \ref{Bose-Hubbard}, we briefly discuss the superfluid-insulator transition in the extended Bose-Hubbard model in the hard-core limit based on the numerical results of the spin-1/2 I-$XXZ$ model. Section \ref{Summary} is devoted to a summary. \section{Model and Method} \label{Model} In the present paper, we consider the $XXZ$ model defined by the following Hamiltonian: \begin{equation} {\cal H}_{XXZ} = J\sum_{\langle i,j\rangle} (S^{x}_{i}S^{x}_{j}+S^{y}_{i}S^{y}_{j} + \lambda S^{z}_{i}S^{z}_{j}), \end{equation} where $S^{x(y,z)}_{i}$ denote the $x(y,z)$ components of the spin operator at site $i$. Here $\langle i,j\rangle$ denotes nearest neighbors. The anisotropic coupling constant is denoted by $\lambda$. For $\lambda=1$, the isotropic Heisenberg model is recovered. We investigate the spin-1/2 $XXZ$ models on square and cubic lattices in the ground state in the canonical ensemble. Namely, we measure the energy $E$ within the subspace of fixed magnetization $M$ (=$\sum_i S^{z}_i$).
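As a toy illustration of this fixed-$M$ measurement (ours, not one of the clusters used in the paper: a 4-site ring, fully diagonalized rather than treated with the Lanczos or quantum Monte Carlo machinery described below), one can build the Hamiltonian in each magnetization sector and read off the sector ground-state energies $E(M)$:

```python
import numpy as np

def xxz_sector_energies(L, lam, J=1.0):
    """Ground-state energy E(M) of a spin-1/2 XXZ ring in each fixed-M sector."""
    bonds = [(i, (i + 1) % L) for i in range(L)]
    energies = {}
    for n_up in range(L + 1):
        states = [s for s in range(2 ** L) if bin(s).count("1") == n_up]
        index = {s: k for k, s in enumerate(states)}
        H = np.zeros((len(states), len(states)))
        for k, s in enumerate(states):
            for i, j in bonds:
                bi, bj = (s >> i) & 1, (s >> j) & 1
                H[k, k] += J * lam * (0.25 if bi == bj else -0.25)  # lambda SzSz
                if bi != bj:  # SxSx + SySy flips an antiparallel pair
                    H[index[s ^ (1 << i) ^ (1 << j)], k] += 0.5 * J
        energies[n_up - L // 2] = float(np.linalg.eigvalsh(H)[0])
    return energies

E = xxz_sector_energies(L=4, lam=2.0)
# For this Ising-like ring the sector energies increase with M >= 0,
# e.g. E(0) = -1 - sqrt(3), E(1) = -1, E(2) = +2 at lambda = 2.
assert E[0] < E[1] < E[2]
```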
The magnetic field in a finite-size cluster is defined as $H(\bar M)\equiv (E(M_1)-E(M_2))/(M_1-M_2)$, where $\bar M$ is defined by $(M_1+M_2)/2$. In the thermodynamic limit, this definition of the magnetic field reduces to the normal one: $H\equiv \partial{E}/\partial{M}$. The maximum magnetization and the saturation field are denoted by $M_{\rm max}$(=$N_{\rm s}/2$) and $H_{\rm max}$(=$J(\lambda+1)d$), respectively. Here $N_{\rm s}$ and $d$ are the system size and the spatial dimensionality. \par We use the Lanczos algorithm (exact diagonalization) for clusters up to 32 sites and the cluster algorithm\cite{clst_alg} (quantum Monte Carlo) for larger clusters up to 100 sites. For the cluster algorithm, the measurements have been performed at the inverse temperature $\beta J=16$. The width of the Trotter slice is chosen as $\Delta\tau J=0.04$ for two dimensions (2d) and $\Delta\tau J=0.053$ for three dimensions (3d). The simulation has been performed in the canonical ensemble. In the small $\lambda$ regime, the number of points near $M=0$ obtained by exact diagonalization is too small to use the Maxwell construction. On the other hand, in the large $\lambda$ regime, statistical errors in the quantum Monte Carlo calculation become large, because the spin configurations are almost frozen. Hence, we have investigated the small $\lambda$ regime by quantum Monte Carlo and the large $\lambda$ regime by exact diagonalization. \par In order to see finite-size effects, we show the size-dependence of the energy gap $\Delta_{\rm g}(\equiv E(M=1)-E(M=0))$, the critical field $H_{\rm c}$ and the magnetization jump $M_{\rm s}$ in Fig.\ref{size}. As shown in this figure, the size-dependence is very small. As a check of numerical accuracy, we compare the energy gap $\Delta_{\rm g}$ obtained by this method with those obtained by other methods as shown in Fig.\ref{gap}. 
Our result is quite consistent with those of third-order spin-wave theory\cite{3OSW} and the series expansion around the Ising-limit\cite{expansion}. Hence, we consider that the inverse temperature $\beta$ and the width of the Trotter slice $\Delta\tau$ are sufficient. [See also Fig.\ref{mh}.] For three dimensions, we report the 64-site results. \section{Review of the Classical Spin Case} \label{Classical} Before investigating the spin-1/2 $XXZ$ models, we briefly review the magnetization process of the classical I-$XXZ$ model in the ground state\cite{Neel,Yosida}. The ground state energy of the classical I-$XXZ$ model may be written in the following form: \begin{eqnarray} E_{{\rm C}-XXZ} &=& \frac{JN_{\rm s}z}{2} (S^{x}_{A}S^{x}_{B}+S^{y}_{A}S^{y}_{B} + \lambda S^{z}_{A}S^{z}_{B}) \nonumber\\ &=& -\frac{JN_{\rm s}zS^2}{2} (\sin(\theta+\phi)\sin(\theta-\phi) \nonumber\\ &&+ \lambda \cos(\theta+\phi)\cos(\theta-\phi)), \end{eqnarray} where $S^{x(y,z)}_{A(B)}$ represent the $x(y,z)$ components of the spin operators at a site in the A(B) sublattices. The length of the spin and the coordination number are denoted by $S$ and $z(=2d)$, respectively. The angles $\theta$ and $\phi$ are defined as in Fig.\ref{class}(a) ($0\le\theta\le\pi/2$, $0\le\phi\le\pi/2$). The Zeeman term $E_{\rm Z}$ is written in the following form: \begin{equation} E_{\rm Z} = -\frac{HN_{\rm s}}{2}(S^{z}_{A}+S^{z}_{B}) = \frac{HN_{\rm s}S}{2}(\cos(\theta+\phi)-\cos(\theta-\phi)). 
\end{equation} By minimizing the total energy ($E_{\rm tot}\equiv E_{{\rm C}-XXZ}+E_{\rm Z}$) with respect to $\theta$ and $\phi$, one finds the following stable states (Fig.\ref{class}(b)):\\ \begin{tabular}{clllcl} (i) &$\theta=0$,&$\phi=0$,& $E_{\rm tot}=-{\tilde J}\lambda$ &for& ${\tilde H}<{\tilde J}\sqrt{\lambda^2-1}$\\ (ii) &$\theta=\pi/2$,&$\phi=\arcsin\frac{{\tilde H}}{{\tilde J}(\lambda+1)}$,& $E_{\rm tot}=-{\tilde J}-\frac{{\tilde H}^2}{{\tilde J}(\lambda+1)}$ &for& ${\tilde J}(\lambda+1)>{\tilde H}>{\tilde J}\sqrt{\lambda^2-1}$\\ (iii) &$\theta=\pi/2$,&$\phi=\pi/2$,& $E_{\rm tot}={\tilde J}\lambda-2{\tilde H}$ &for& ${\tilde H}>{\tilde J}(\lambda+1)$, \end{tabular}\\ where ${\tilde J}$ and ${\tilde H}$ are defined as ${\tilde J}\equiv JN_{\rm s}zS^2/2$ and ${\tilde H}\equiv HN_{\rm s}S/2$. The magnetization curve of the classical I-$XXZ$ model is shown in Fig.\ref{class}(c). The transition from the state (i) to the state (ii) is known as the spin-flopping process\cite{Neel,Yosida}. The critical field $H_{\rm c}$ is defined as the magnetic field above which the ground state has non-zero magnetization. The $\lambda$ dependence of the critical field $H_{\rm c}$ and that of the magnetization jump $M_{\rm s}$ are obtained as \begin{equation} H_{\rm c}/H_{\rm max} = M_{\rm s}/M_{\rm max} = \sqrt{(\lambda-1)/(\lambda+1)}, \label{Hc_classical} \end{equation} where $H_{\rm max}=JSz(\lambda+1)$ and $M_{\rm max}=N_{\rm s}S$. \section{Magnetization Curve of the Spin-1/2 $XXZ$ Model} \label{Spin-1/2} In this section, we present the numerical results on the magnetization curve of the spin-1/2 I-$XXZ$ models on square and cubic lattices. As an example, we show the magnetization curve of the spin-1/2 $XXZ$ model for $\lambda=2$ on a square lattice in Fig. \ref{mh}. The critical field $H_{\rm c}$ and the magnetization jump $M_{\rm s}$ are determined on the basis of the Maxwell construction as follows. [See also Fig.\ref{Ising}(a).] 
We fit the energy as a function of magnetization by a polynomial. The tangent from the point at $M=0$ to the fitting curve gives a lower energy than the numerical data in the region of $0<M<M_{\rm s}$. Here $M_{\rm s}$ is the magnetization at the point of contact between the fitting curve and the tangent. Hence, we can identify the region of phase separation as $0<M<M_{\rm s}$ on the basis of the Maxwell construction \cite{ene_size_correction}. The magnetic field of the phase-separated state ($H_{\rm c}$) is given as the slope of the tangent. In practice, we have determined the phase-separation boundary as the point where the following condition is satisfied: $\partial{E}/\partial{M}=(E(M)-E(M=0))/M$ as in Fig.\ref{det}. The size-dependence of the critical field $H_{\rm c}$ and that of the magnetization jump $M_{\rm s}$ determined by the Maxwell construction are very small as discussed in Sec.\ref{Model} (Fig.\ref{size}). \par We show the $\lambda$ dependence of the critical field $H_{\rm c}$ in Fig.\ref{hc}. The critical field $H_{\rm c}$ is suppressed by quantum fluctuations. In order to see how strongly the critical field $H_{\rm c}$ is suppressed by quantum fluctuations, we have tried to fit the numerical data as $H_{\rm c}/H_{\rm max}=(\frac{\lambda-1}{\lambda+1})^{\alpha}$, in analogy with the classical result $\alpha=0.5$ (eq.(\ref{Hc_classical})). We estimate $\alpha$ to be $\alpha=0.64\pm0.01$ for 2d and $\alpha=0.57\pm0.01$ for 3d. Note that the $\lambda$ dependence of the critical field $H_{\rm c}$ in two and three dimensions is quite different from the one-dimensional case, where the gap (=$H_{\rm c}\equiv\partial{E}/\partial{M}|_{M/M_{\rm max}\rightarrow+0}$) opens exponentially: $H_{\rm c}\propto\exp[-\pi^2/2\sqrt{2(\lambda-1)}]$\cite{dG}. \par Here, we mention the relation between the critical field $H_{\rm c}$ and the energy gap $\Delta_{\rm g}$.
It is expected that the energy gap $\Delta_{\rm g}$ is larger than the critical field $H_{\rm c}$, if a first-order transition occurs in the presence of a magnetic field. The reason is as follows. The ground state of $M=1$ is considered to be the one-magnon state, which may be described by spin-wave theory. Hence, the gap $\Delta_{\rm g}$ corresponds to the excitation energy of one magnon from the ground state of $M=0$. On the other hand, phase separation occurs because magnons lower their energy by interacting attractively with each other. The critical field $H_{\rm c}$ would be determined by the effective attractive interactions among a macroscopic number of magnons. As a result, if phase separation occurs, the gap $\Delta_{\rm g}$ is expected to be larger than the critical field $H_{\rm c}$, i.e. \begin{equation} \Delta_{\rm g}>H_{\rm c}\equiv\partial{E}/\partial{M}|_{M/M_{\rm max}\rightarrow+0}, \end{equation} where $M$ is assumed to be a macroscopic number when the limit $M/M_{\rm max}\rightarrow+0$ is taken. We compare the gap $\Delta_{\rm g}$ and the critical field $H_{\rm c}$ of the spin-1/2 I-$XXZ$ model on a square lattice in Fig.\ref{gap_Hc}. The gap $\Delta_{\rm g}$ is always larger than the critical field $H_{\rm c}$, as expected. It is interesting to contrast this behavior with the one-dimensional result. In one dimension, the transition is of second order\cite{YY}, and the following relation is satisfied: $\partial{E}/\partial{M}|_{M/M_{\rm max}\rightarrow+0}=E(M=1)-E(M=0)$. This is considered to be due to effective repulsive interactions. \par The $\lambda$ dependence of the magnetization jump $M_{\rm s}$ is shown in Fig.\ref{ms}. We estimate the critical value of $\lambda$, where $M_{\rm s}$ vanishes, as $\lambda_{\rm c}=1.00\pm 0.02$ by extrapolating the data in Fig.\ref{ms}. 
This confirms that the spin-1/2 I-$XXZ$ models on square and cubic lattices show a first-order transition at some critical field for any value of the anisotropic coupling constant larger than one ($\lambda>1$). The $\lambda$ dependence of the magnetization jump $M_{\rm s}$ is remarkably different from the classical result, especially in the large-$\lambda$ regime. \section{Ising-limit} \label{Ising-limit} In this section, we discuss the magnetization process of the spin-1/2 $XXZ$ model in the Ising-limit. In Fig.\ref{ms}, the value of the magnetization jump $M_{\rm s}$ in the Ising-limit ($\lambda\rightarrow\infty$) does not coincide with that of the Ising model ($\lambda=\infty$), for which $M_{\rm s}(\lambda=\infty)=M_{\rm max}$: \begin{equation} M_{\rm s}(\lambda\rightarrow\infty)\ne M_{\rm s}(\lambda=\infty). \end{equation} This can be explained by means of perturbation theory as follows. We rewrite the spin-1/2 $XXZ$ model as \begin{equation} {\cal H}_{XXZ} = {\bar J}\sum_{\langle i,j\rangle}S^{z}_{i}S^{z}_{j} + \epsilon{\bar J}\sum_{\langle i,j\rangle} (S^{x}_{i}S^{x}_{j}+S^{y}_{i}S^{y}_{j}), \end{equation} where ${\bar J}$ and $\epsilon$ are defined as ${\bar J}\equiv J\lambda$ and $\epsilon\equiv 1/\lambda$. We treat the $XY$-term as the perturbation. At $M=0$, the unperturbed ground states are the two degenerate N\'eel states. The leading perturbation energy is of order $\epsilon^2$. On the other hand, in the limit of $M\rightarrow M_{\rm max}$, the leading perturbation energy is of order $\epsilon$ and proportional to $M-M_{\rm max}$. Hence it is expected that phase separation occurs for magnetizations smaller than some value $M_{\rm s}$ ($<M_{\rm max}$) in the Ising-limit. We numerically estimate the value of $M_{\rm s}$ in the Ising-limit with first-order perturbation theory in the following way. 
The first-order perturbation energy $E_1$ is obtained as \begin{equation} E_1 = \frac{\epsilon{\bar J}}{2}\frac{\sum_{\alpha,\beta}\langle \beta| \sum_{\langle i,j\rangle}(S^{+}_{i}S^{-}_{j}+S^{-}_{i}S^{+}_{j})|\alpha \rangle} {\sum_{\alpha}\langle \alpha|\alpha \rangle}, \end{equation} where $|\alpha\rangle$ and $|\beta\rangle$ denote unperturbed ground states of the Ising model in the subspace of fixed magnetization. We generate $|\alpha\rangle$'s randomly and measure $E_1$ using a Monte Carlo technique. The value of $M_{\rm s}$ is determined on the basis of the Maxwell construction. Figure \ref{Ising} shows the first-order perturbation energy $E_1$ and the value of $M_{\rm s}$ in the Ising-limit on hyper-cubic lattices in dimensions up to six. We extrapolate the data in Fig.\ref{Ising}(b) and estimate the inverse dimensionality at which $M_{\rm s}$ coincides with $M_{\rm max}$ as $1/d=0.01\pm 0.02$. This confirms that the value of $M_{\rm s}$ in the Ising-limit ($\lambda\rightarrow\infty$) does not coincide with that of the Ising model ($\lambda=\infty$) in finite spatial dimensions. \section{Relation to the Bose-Hubbard Model} \label{Bose-Hubbard} In this section, we briefly discuss the phase diagram of the hard-core boson model with nearest-neighbor repulsion based on the numerical results for the spin-1/2 $XXZ$ models. The hard-core boson model with nearest-neighbor repulsion can be obtained from the following extended Bose-Hubbard model by taking the on-site repulsion $U$ to infinity: \begin{equation} {\cal H}_{\rm BH} = t \sum_{\langle i,j\rangle}(b^{\dag}_{i}b_{j} +b^{\dag}_{j}b_{i}) + U \sum_{i}n_{i}(n_{i}-1) + V \sum_{\langle i,j\rangle}(n_{i}-1/2)(n_{j}-1/2), \end{equation} where $b^{\dag}_{i}$ ($b_{i}$) creates (annihilates) a boson on site $i$ and $n_{i}=b^{\dag}_{i}b_{i}$. Here $\langle i,j\rangle$ denotes nearest neighbors. 
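In the hard-core limit this Hamiltonian is equivalent to the spin-1/2 $XXZ$ model, as discussed next; the dictionary between magnetization and boson filling amounts to two one-line conversions. A minimal sketch (the function name is ours):

```python
def xxz_to_boson(m_over_mmax, lam):
    """Translate a point of the spin-1/2 XXZ phase diagram into the
    hard-core boson language using t <-> J/2 and V <-> J*lambda/4:
    filling rho = (1 - M/M_max)/2 and interaction strength V/t = lambda/2."""
    rho = (1.0 - m_over_mmax) / 2.0
    v_over_t = lam / 2.0
    return rho, v_over_t

# zero magnetization maps to half filling; lambda = 2 maps to V/t = 1
print(xxz_to_boson(0.0, 2.0))   # -> (0.5, 1.0)
```

In particular, the isotropic point $\lambda=1$ corresponds to $V/t=1/2$, which is why the superfluid-insulator transition discussed below sets in for $V>t/2$.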
The spin-1/2 $XXZ$ model can be translated into the hard-core boson model with nearest-neighbor repulsion ($t\leftrightarrow J/2$, $V\leftrightarrow J\lambda/4$)\cite{matsu2}. Figure \ref{ms} corresponds to the phase diagram of this model by relating the filling $\rho$ and the interaction strength $V/t$ to $M/M_{\rm max}$ and $\lambda$ according to $\rho=(1-M/M_{\rm max})/2$ and $V/t=\lambda/2$. The numerical results in the previous section are translated as follows: The superfluid-insulator transition occurs in the region of $V>t/2$. This transition is a first-order transition, which is consistent with a recent investigation of the Bose-Hubbard model\cite{bose_Hub_QMC3}. For finite $V/t$, phase separation does not occur for densities $\rho$ smaller than $\rho_{\rm c}$, where $\rho_{\rm c}\equiv (1-M_{\rm s}(\lambda\rightarrow\infty)/M_{\rm max})/2>0$. This $\rho_{\rm c}$ approaches zero as the spatial dimensionality $d$ goes to infinity (Fig.\ref{Ising}(b)). \section{Summary} \label{Summary} In summary, numerical results on the magnetization process of the spin-1/2 Ising-like $XXZ$ models have been reported. The spin-1/2 $XXZ$ models on square and cubic lattices show a first-order phase transition at some critical magnetic field for anisotropic coupling constants larger than one ($\lambda>1$). The critical field $H_{\rm c}$ and the magnetization jump $M_{\rm s}$ are estimated on the basis of the Maxwell construction. The critical field $H_{\rm c}$ is suppressed by quantum fluctuations (Fig.\ref{hc}). We have demonstrated that the energy gap $\Delta_{\rm g}$ is larger than the critical field $H_{\rm c}$ (Fig.\ref{gap_Hc}). The anisotropy $\lambda$ dependence of the magnetization jump $M_{\rm s}$ is remarkably different from the classical result (Fig.\ref{ms}). 
It is strongly suggested that the value of $M_{\rm s}$ in the Ising-limit ($\lambda\rightarrow \infty$) does not coincide with that of the Ising model ($\lambda=\infty$) in finite spatial dimensions due to quantum effects (Fig.\ref{Ising}). \narrowtext \acknowledgments The authors would like to thank H. Shiba for helpful discussions and useful comments. One of the authors (M.K.) thanks T. Kawarabayashi, E. Williams, and D. Lidsky for reading the manuscript. The exact diagonalization program is based on the subroutine package ``TITPACK Ver.2'' coded by H. Nishimori. Part of the calculations were performed on the Fujitsu VPP500 and on the Intel Japan PARAGON at the Institute for Solid State Physics, Univ. of Tokyo.
\section{Motivation and introduction} The Standard-Model Extension (SME) is an effective field theory that contains all the possible Lorentz-violating deviations from the Standard Model and General Relativity.\cite{SME} A reason for the creation of the SME is to facilitate the systematic search for Lorentz and CPT violation by classifying all Lorentz-violating operators in terms of measurable SME coefficients. We can quantify the reliability of Lorentz symmetry by measuring how consistent with zero the SME coefficients are. The first models for searching for Lorentz violation in atomic experiments, referred to in this presentation as atomic SME models, were limited to Lorentz-violating operators of mass dimensions three and four, called minimal operators.\cite{Prev} Lorentz-violating operators of mass dimension five or higher are called nonminimal Lorentz-violating operators. The minimal atomic SME models, i.e., the models limited to minimal operators, were used to identify signals for Lorentz violation that were targeted by clock-comparison experiments with ordinary matter and exotic atoms.\cite{AtomSpec} Some of the historical and current best bounds on many SME coefficients are the results of these experimental studies.\cite{datatables} Any systematic search for Lorentz violation must consider both minimal and nonminimal Lorentz-violating operators. The restriction of the first SME models to minimal Lorentz-violating operators was made for practical reasons. For example, it took more than a decade after the creation of the SME for all the fermion Lorentz-violating operators of arbitrary mass dimension to be classified.\cite{KoMe13} After the theoretical advancements in recent years, models for testing Lorentz symmetry with atomic experiments that include nonminimal operators have been obtained.\cite{GoKo14, KoVa15, KoVa18} This presentation is based on SME nonminimal atomic models introduced in three publications. 
The first publication considers atomic spectroscopy experiments with muonium and muonic atoms.\cite{GoKo14} The second publication\cite{KoVa15} considers two-fermion atoms such as hydrogen, antihydrogen, and positronium, as well as deuterium and some simple molecules. The third publication\cite{KoVa18} extends the analyses to atoms with multiple fermions, with applications to microwave and optical clocks. \section{NR coefficients and atomic energy levels} The nonrelativistic (NR) coefficients $\anr{kjm}$, $\cnr{kjm}$, $\HzBnr{kjm}$, $\HoBnr{kjm}$, $\gzBnr{kjm}$, and $\goBnr{kjm}$, which are defined in Ref.~\refcite{KoMe13}, are the effective SME coefficients used in the models discussed in this presentation.\cite{GoKo14, KoVa15, KoVa18} The Lorentz-violating operators associated with the NR coefficients are expressed as components of spherical tensors.\cite{KoMe13} A component of a spherical tensor is an object that rotates in the same way as an angular momentum eigenstate $|jm\rangle$. In this notation, the $j$ index of an NR coefficient specifies the rank of the spherical tensor $T^{(j)}_m$ associated with the coefficient, and the $m$ index specifies the component. The meaning of the other indices of the NR coefficients is explained in Ref.~\refcite{KoMe13}. The leading-order Lorentz-violating atomic energy shift is obtained from the expectation value of the Lorentz-violating perturbation. The perturbation can be expressed as\cite{GoKo14, KoVa15, KoVa18} \begin{equation} \delta h=\sum_j\sum^j_{m=-j} \mathcal K_{jm} T^{(j)}_m, \end{equation} where $\mathcal K_{jm}$ represents a generic NR coefficient and $T^{(j)}_m$ represents a generic Lorentz-violating operator expressed as the component of a spherical tensor. 
Representing the atomic state as $|F\rangle$, where we are suppressing all the quantum numbers except the total angular momentum quantum number $F$, the leading-order energy shift is given by\cite{GoKo14, KoVa15, KoVa18} \begin{equation} \delta \epsilon=\sum_j\sum^j_{m=-j} \mathcal K_{jm}\langle F| T^{(j)}_m|F\rangle.\label{eshift} \end{equation} An advantage of expressing the Lorentz-violating operators in terms of spherical tensors is that we can use the Wigner-Eckart theorem.\cite{WiEc} This theorem implies that $\langle F| T^{(j)}_m|F\rangle=0$ if $j>2F$, and therefore only the NR coefficients with index $j\leq 2F$ can contribute to the Lorentz-violating shift of an atomic state with quantum number $F$. In the case of the $nS_{1/2}$ atomic states of hydrogen, the maximum possible value of $F$ is $F=1$, and therefore only NR coefficients with $j\leq 2$ contribute to the energy shift. Evidently, it is impossible to measure all the NR coefficients by studying only transitions between $nS_{1/2}$ states, as the NR coefficients with $j>2$ do not contribute to the energy of these states. The situation is different in the minimal SME, as all coefficients in the minimal SME atomic models contribute to the energy of the $nS_{1/2}$ states. Therefore, in the context of the minimal SME it is enough to study transitions between $nS_{1/2}$ states. In the nonminimal SME models, transitions between states with higher values of $F$ are necessary to measure all the NR coefficients. For instance, only transitions involving states with $F\geq2$ are sensitive to NR coefficients with $j=4$. \section{NR coefficients and harmonics of the sidereal frequency} A possible signal for Lorentz violation is a dependence of the atomic transition frequency on the orientation of the atom. We can control this orientation by fixing the atomic state relative to the quantization axis defined by an external uniform magnetic field. In this scenario, we can rotate the atomic states by rotating the magnetic field. 
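The $j\leq 2F$ selection rule invoked above can be verified directly from 3j symbols, since by the Wigner-Eckart theorem the diagonal element $\langle F m|T^{(j)}_q|F m'\rangle$ is proportional to the 3j symbol $(F\,j\,F;-m\,q\,m')$. A minimal sketch using SymPy (the helper name is ours):

```python
from sympy.physics.wigner import wigner_3j

def sensitive(F, j):
    """Return True if a state with total angular momentum F can have a
    nonzero diagonal matrix element of a rank-j spherical tensor.  The
    element is proportional to the 3j symbol (F j F; -m q m'), which
    vanishes unless the triangle condition j <= 2F holds."""
    for q in range(-j, j + 1):
        for m in range(-F, F + 1):
            mp = m - q                      # m-sum rule: -m + q + m' = 0
            if abs(mp) <= F and wigner_3j(F, j, F, -m, q, mp) != 0:
                return True
    return False

# hydrogen nS_1/2 states have F <= 1, so only j <= 2 coefficients contribute
print([j for j in range(5) if sensitive(1, j)])   # -> [0, 1, 2]
# reaching the j = 4 NR coefficients requires a state with F >= 2
print(sensitive(2, 4))                            # -> True
```

The sketch uses integer $F$ for simplicity; SymPy's `wigner_3j` also accepts half-integer arguments.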
The most common situation is when the orientation of the magnetic field is fixed relative to a laboratory on the surface of the Earth. In this case, the magnetic field and the atomic state rotate with the Earth, and the signal for Lorentz violation is a sidereal variation of the transition frequency.\cite{GoKo14, KoVa15, KoVa18} For now, we will consider only the variations in the transition frequency due to the Earth's rotation and ignore the effect of the motion of the Earth around the Sun. In general, the sidereal variation of the Lorentz-violating shift of the transition frequency can be expressed as \begin{equation} \delta \nu=A_{0}+\sum_{q}(A_q\cos{(q\omega_\oplus T)}+B_q \sin{(q\omega_\oplus T)}) \label{sid}, \end{equation} where $\omega_\oplus\simeq 2\pi/({\rm 23\,h\,56\,min})$ is the sidereal frequency, $T$ is the time in the Sun-centered frame,\cite{datatables} and $q\geq 1$. The NR coefficients that contribute to the amplitudes $A_q$ and $B_q$ of the harmonics of the sidereal frequency are determined by the absolute value $|m|$ of the index $m$. If $q\ne|m|$, then the NR coefficient $\mathcal K_{jm}$ cannot contribute to the amplitudes $A_q$ or $B_q$.\cite{KoVa15, KoVa18} For example, the coefficient $\HoBnr{432}$ has $m=2$ and therefore can only contribute to $A_2$, $B_2$, or both. Consequently, the goal of any sidereal-variation study of the transition frequency $\nu$ is to measure the amplitudes of the harmonics of the sidereal frequency that contribute to $\delta \nu$. The highest harmonic of the sidereal frequency that contributes to $\delta \nu$ is determined by the angular momentum quantum numbers of the states involved in the transition.\cite{KoVa15, KoVa18} It is worth contrasting this with the minimal SME case, where the highest harmonic that contributes to $\delta \nu$ is the first harmonic of the sidereal frequency in the absence of boost corrections. 
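The harmonic amplitudes in the expression above can be extracted from a time record by linear least squares. A minimal sketch on synthetic data (the names `OMEGA` and `fit_sidereal`, and all numerical values, are illustrative):

```python
import numpy as np

OMEGA = 2 * np.pi / (23 * 3600 + 56 * 60)   # sidereal angular frequency, rad/s

def fit_sidereal(T, nu, qmax):
    """Least-squares fit of a frequency record nu(T) to
    A_0 + sum_q [A_q cos(q w T) + B_q sin(q w T)].
    Returns A_0 and the arrays A[q-1], B[q-1] for q = 1..qmax."""
    cols = [np.ones_like(T)]
    for q in range(1, qmax + 1):
        cols += [np.cos(q * OMEGA * T), np.sin(q * OMEGA * T)]
    X = np.stack(cols, axis=1)
    coef, *_ = np.linalg.lstsq(X, nu, rcond=None)
    return coef[0], coef[1::2], coef[2::2]

# synthetic record: a pure second harmonic, as would be produced by an
# |m| = 2 coefficient; amplitude 0.3 in arbitrary units plus white noise
rng = np.random.default_rng(0)
T = np.linspace(0.0, 5 * 86164.0, 500)       # about five sidereal days
nu = 10.0 + 0.3 * np.cos(2 * OMEGA * T) + 0.01 * rng.standard_normal(T.size)
A0, A, B = fit_sidereal(T, nu, qmax=3)
```

The fit recovers the injected second-harmonic amplitude while the other harmonics come out consistent with zero, illustrating the $q=|m|$ rule.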
Assuming that $F_{\rm max}$ is the highest value of $F$ among the states involved in the transition, the highest harmonic of the sidereal frequency capable of contributing to $\delta \nu$ is $q=2F_{\rm max}$. As an illustration, if $F_{\rm max}=3$, the highest harmonic of the sidereal frequency that can contribute to $\delta \nu$ is the sixth harmonic. Based on the previous discussion, we might conclude that the transition frequency between atomic states with $F=0$ has no sidereal variation. In general, this is false. As mentioned, we have been ignoring the contribution from the boost transformation between the laboratory and the Sun-centered frame. This is justified by the small relative speed between the frames, of order $10^{-4}$ in units of the speed of light. However, even the transition frequency between atomic states with $F=0$ can have a sidereal variation if we include the boost corrections. Additionally, the transition frequency can gain an annual variation due to the Earth's orbital motion.\cite{KoVa15, KoVa18} This annual variation is important, as it is the dominant signal for Lorentz violation in many potential studies of Lorentz violation with optical clocks.\cite{KoVa18} \section{NR coefficients and antihydrogen} CPT violation implies Lorentz violation in any realistic interacting field theory.\cite{Gr02} Hence, the SME can be used to facilitate the systematic search for CPT violation by classifying all the CPT-violating terms. In the models discussed in this presentation, the operators associated with the $a$-type and $g$-type NR coefficients are CPT-violating operators. A signal for CPT violation produced by these CPT-violating operators is a discrepancy between the spectra of hydrogen and antihydrogen.\cite{KoVa15} Several groups are measuring or planning to measure antihydrogen transition frequencies to compare them to their hydrogen counterparts.\cite{Alpha, ASACUSA} All the antihydrogen transitions considered by these collaborations are between states with $F\leq 1$. 
Based on the previous discussion, these measurements will be sensitive to only a small subset of the CPT-violating operators in our models. Therefore, any systematic search for CPT violation must measure antihydrogen transitions involving a state with $F\geq 2$. \section*{Acknowledgments} This work was supported by Department of Energy grant {DE}-SC0010120 and by the Indiana University Center for Spacetime Symmetries.
\section{ Introduction } Our present knowledge of strong interactions indicates that QCD (Quantum Chromodynamics) is the quantum field theory that models the physics of this fundamental interaction. However, many important properties of the strongly interacting particles, the hadrons, cannot be described using only perturbative QCD. This happens because the QCD coupling is large at low energies. Thus, in order to calculate static properties of hadrons, or to describe their structure, one needs additional tools. In recent years, many interesting models to study nonperturbative aspects of hadronic physics were developed based on the idea of gauge/string duality. The main source of inspiration for this kind of model was the discovery of the AdS/CFT correspondence,\cite{Maldacena:1997re,Gubser:1998bc,Witten:1998qj} which is an exact duality between string theory in certain ten-dimensional geometries and supersymmetric $SU(N)$ gauge theories with large $N$ on the corresponding boundary. In particular, string theory in $AdS_5 \times S^5$ space is dual to a four-dimensional gauge theory. In the AdS/CFT correspondence, the gauge theory is conformal. The idea of breaking this conformal invariance by an infrared (IR) cut off, represented in the dual AdS geometry by a geometrical cut off in the radial coordinate of the space, was introduced in \cite{Polchinski:2001tt} as a tool to reproduce the high energy scaling of hadronic scattering amplitudes for processes at fixed angles. This scaling had been found in QCD a long time before,\cite{Matveev:1973ra,Brodsky:1973kr} but the corresponding string theory description had long been lacking. This approach of considering a maximum radial size of AdS space as the dual of an infrared cut off in the gauge theory was then used in \cite{BoschiFilho:2002ta,BoschiFilho:2002vd} to calculate the mass spectrum of glueballs. 
In these articles, boundary conditions were imposed on fields living in an AdS slice and the corresponding normal modes were associated with hadronic states. This kind of model, later called AdS/QCD hard wall, was then applied to other hadrons, as, for example, in \cite{deTeramond:2005su}. It is also worth mentioning the important earlier works \cite{Csaki:1998qr,Hashimoto:1998if,Csaki:1998cb,Minahan:1998tm,Brower:2000rp}, where glueball masses were calculated by considering an AdS Schwarzschild black hole as dual to a non-supersymmetric Yang Mills theory. The hadronic masses calculated using the hard wall model, for particles with a given spin, do not exhibit a linear relation between mass squared and excitation number. This motivated a different AdS/QCD approach consisting of a background involving AdS space plus a field that acts effectively as a smooth infrared cut off \cite{Karch:2006pv}. This so-called soft wall model leads to a linear relation between the mass squared and the radial excitation number for vector mesons (see also \cite{Colangelo:2007pt} for scalar glueballs). However, the soft wall model, as originally formulated\cite{Karch:2006pv}, does not work for fermions. That is, it does not lead to a discrete mass spectrum for fermions, because the dilaton introduced in the action factorizes out of their equations of motion. In other words, the fermions do not feel the smooth cut off of the soft wall model. Alternative versions of the soft wall AdS/QCD model for fermions were then developed, with the purpose of reproducing the observed baryon mass spectrum. One example can be found in \cite{Forkel:2007cm,dePaula:2008fp}, where the authors consider models in asymptotically AdS space including a warp factor in the metric. Another possibility, studied in \cite{Vega:2008te,Abidin:2009hr,Gutsche:2011vb}, is to consider a $z$ dependent (or dressed) mass for the fermionic modes propagating in AdS space.
This last approach has been considered in \cite{Abidin:2009hr} to study nucleon form factors, and in \cite{Vega:2010ns} to obtain some generalized parton distributions (GPD) for nucleons. It is important to note that experimental data for the mass spectrum of spin 1/2 baryons show that the squared masses of the excited states are almost equally spaced. This can be seen, for example, in \cite{Klempt:2002vp} (see especially table II in this reference). That is, there is an approximately linear relation between the mass squared and the excitation level quantum number for these spin 1/2 baryons. The study of deep inelastic scattering (DIS) using the AdS/CFT correspondence appeared first in \cite{Polchinski:2002jw}. Then other authors considered the description of DIS using various AdS/QCD models, as, for example, in \cite{BallonBayona:2007qr,BallonBayona:2007rs,Cornalba:2008sp,Pire:2008zf,Albacete:2008ze,Gao:2009ze, Hatta:2009ra,BallonBayona:2008zi,Cornalba:2009ax,Hatta:2007cs,Bu:2011my}. In \cite{BallonBayona:2007qr} the soft wall AdS/QCD model was considered for scalar hadrons, and a hybrid model involving a soft wall cut off for the photons and a hard wall cut off for the fermions was also discussed. However, in none of these articles was DIS studied for baryons satisfying the experimentally observed mass spectrum for nucleons. So, the important case of the determination of the structure functions for baryons with a mass spectrum consistent with the physical observations was still lacking. The purpose of this paper is to fill this gap by considering baryons in an AdS/QCD soft wall model with dressed mass, in order to describe DIS in the AdS/QCD context with baryons that present a spectrum similar to the physically observed one. We will use the modified soft wall model studied in \cite{Abidin:2009hr,Gutsche:2011vb} to describe the baryons and then calculate their DIS structure functions.
Once we have the structure functions for these baryons, we will compare our $F_2$ with experimental results for the proton in the regime of large values of the Bjorken parameter $x$, where the supergravity approximation that we use is more reliable. The present paper focuses on the large $x$ limit, but it is important to say that bottom-up holographic models have also been used successfully in the small $x$ limit, as can be seen, for example, in \cite{Polchinski:2002jw, BallonBayona:2007rs, Cornalba:2010vk, Brower:2010wf}. In section {\bf 2} we present a brief review of DIS and hadronic structure functions. In section {\bf 3} we show our AdS/QCD calculation of the structure functions. Then in section {\bf 4} we analyze our results, discussing the possible choices of the parameters of the model, plotting the structure function $F_2 $ for some kinematical regimes and comparing with experimental results. \section{ Deep Inelastic Scattering and Structure Functions } Deep inelastic scattering (DIS) is a process where a highly energetic lepton, an electron in general, interacts with a hadronic target through the exchange of a virtual photon, as shown in the diagram of figure {\bf 1}. The momenta of the photon and of the initial hadron are respectively $q^\mu$ and $P^\mu $. After the interaction there is a final hadronic state represented by $X$ with momentum $P_X^\mu$. The experimental measurement of the inclusive cross section of DIS corresponds to detecting the final lepton, thus determining the momentum transfer $q^\mu$, but not the final hadronic state $X$. That means summing over all possible final hadronic states $X$. One usually parametrizes DIS using as dynamical variables the photon virtuality $q^2$ and the Bjorken parameter \begin{equation} x \equiv -q^2 /2P\cdot q \,. \label{Bjorkenx} \end{equation} \noindent Deep inelastic scattering, in the strict sense, corresponds to the limit $q^2\to\infty$, with $x$ fixed.
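As a small numerical illustration of the definition (\ref{Bjorkenx}), the sketch below (not part of the original analysis) evaluates the Bjorken parameter with the mostly-plus metric $\eta = {\rm diag}(-1,+1,+1,+1)$ used in this paper; the kinematic values chosen are arbitrary assumptions.

```python
# Illustrative evaluation of x = -q^2 / (2 P.q), eq. (1) of the text,
# with signature (-,+,+,+).  Kinematics below are assumed sample values.

def dot(u, v):
    """Minkowski product with mostly-plus signature (-,+,+,+)."""
    return -u[0] * v[0] + sum(a * b for a, b in zip(u[1:], v[1:]))

M = 0.938                          # proton mass in GeV, target at rest
P = (M, 0.0, 0.0, 0.0)             # initial hadron momentum
q = (6.0, 0.0, 0.0, 45.0 ** 0.5)   # space-like photon: q^2 = 45 - 36 = 9 GeV^2

q2 = dot(q, q)                     # photon virtuality (positive = space-like here)
x = -q2 / (2.0 * dot(P, q))        # Bjorken parameter

print(q2, x)                       # q2 = 9.0, x ~ 0.80 (inelastic: 0 < x < 1)
```

For this space-like photon the virtuality is positive and $x$ falls in the physical inelastic range $0 < x < 1$; the elastic case $x = 1$ would correspond to $q^2 = 2 M \nu$.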
\begin{figure}\begin{center} \setlength{\unitlength}{0.1in} \vskip 3.cm \begin{picture}(0,0)(15,0) \rm \thicklines \put(1,14.5){$\ell$} \put(3,15){\line(2,-1){7}} \put(3,15){\vector(2,-1){4}} \put(18,14.5){$\ell$} \put(17,15){\line(-2,-1){7}} \put(10,11.5){\vector(2,1){4.2}} \put(9.5,8){$q$} \bezier{300}(10,11.5)(10.2,10.7)(11,10.5) \bezier{300}(11,10.5)(11.8,10.3)(12,9.5) \bezier{300}(12,9.5)(12.2,8.7)(13,8.5) \bezier{300}(13,8.5)(13.8,8.3)(14,7.5) \put(0,-2){$P$} \put(3,0){\line(2,1){10.5}} \put(3,0){\vector(2,1){6}} \put(16,6){\circle{5}} \put(27,-2){$X$} \put(18.5,5.5){\line(3,-1){8}} \put(18.3,5){\line(2,-1){8}} \put(18,4.5){\line(3,-2){7.5}} \put(17.5,3.8){\line(1,-1){6}} \end{picture} \vskip 1.cm \parbox{4.1 in}{\caption{Diagram of a deep inelastic scattering.} } \end{center} \end{figure} \vskip .5cm The inclusive cross section for DIS can be calculated from the hadronic tensor that is defined as \begin{equation} W^{\mu\nu} \, = \frac{1}{8 \pi} \sum_s \, \int d^4y\, e^{iq\cdot y} \langle P, s \vert \, \Big[ J^\mu (y) , J^\nu (0) \Big] \, \vert P, s \rangle \,, \label{HadronicTensor} \end{equation} \noindent where $ J^\mu(y)$ is the electromagnetic hadronic current and $s$ the spin of the initial hadron. 
For an unpolarized scattering (spin independent) this tensor can be decomposed in terms of the structure functions $F_1 (x,q^2) $ and $F_2 (x,q^2) $ as \cite{Manohar:1992tz} \begin{equation} W^{\mu\nu} \, = \, F_1 (x,q^2) \Big( \eta^{\mu\nu} \,-\, \frac{q^\mu q^\nu}{q^2} \, \Big) \,+\,\frac{2x}{q^2} F_2 (x,q^2) \Big( P^\mu \,+ \, \frac{q^\mu}{2x} \, \Big) \Big( P^\nu \,+ \, \frac{q^\nu}{2x} \, \Big)\, .\label{Structure} \end{equation} Considering only final states with just one baryon with mass $M_X$, we can introduce a basis of such one particle states and write the hadronic tensor as \begin{eqnarray} \label{hadronic2} W^{\mu\nu} & = & \frac{1}{8 \pi} \sum_{s,s_X} \sum_{M_X} \int \frac{d^4P_X}{(2\pi)^3} \theta (P^0_X ) \delta \Big( P^2_X + M_X^2 \, \Big) (2\pi )^4 \delta^4 (P +q - P_X ) \cr &\times& \langle P,s \vert J^\nu ( 0 ) \vert P_X , s_X \rangle\, \langle P_X ,\, s_X \vert J^\mu ( 0 ) \vert P,\, s \rangle\, \cr & =& \frac{1}{4} \sum_{s,s_X} \sum_{M_X} \,\delta \Big( M_X^2 + (P+q)^2 \, \Big) \langle P,s \vert J^\nu ( 0 ) \vert P + q, s_X \rangle\, \langle P + q,\, s_X \vert J^\mu ( 0 ) \vert P,\, s \rangle\,\,.\cr & & \end{eqnarray} So, in order to calculate the hadronic structure functions one needs to find the matrix elements of the current and also the spectrum of hadronic masses $M_X\,$. We will present in the next section the calculation of these quantities using the modified soft wall model. \section{ DIS in the soft wall model with fermionic dressed bulk mass} In the soft wall model, a four dimensional gauge theory is represented by a gravity dual consisting of fields living in anti-de Sitter space $AdS_5 $, whose metric can be written as \begin{equation} \label{AdS} ds^2 \equiv g_{mn} \,dy^m dy^n \,= \, \frac{R^2}{z^2}( dz^2 + \eta_{\mu\nu} dy^\mu dy^\nu )\,\,= \, \frac{R^2}{z^2}( dz^2 - dt^2 + (d\vec y )^2 )\,\,, \end{equation} \noindent with the presence of an additional background of the form $e^{-\kappa^2 z^2}$.
This factor plays the role of an infrared cut off, with the energy parameter $\kappa$ representing the corresponding scale. The general form of an action integral corresponding to a given Lagrangian density ${\cal L}$ is \begin{equation} I \,=\, \int dz d^{4}y \, \sqrt{-g} \,\, e^{-\kappa^2 z^2} \, {\cal L}\,\,. \label{action} \end{equation} For the case of scalar and vector fields, this kind of action integral leads to dual four dimensional theories with infrared cut off and discrete mass spectra. For fermions one must make some modification in order to find a dual theory with an infrared cut off. This happens because, using a standard fermionic Lagrangian in the action above, the background $e^{-\kappa^2 z^2}$ factors out of the equations of motion. So, the fermions are not affected by the energy scale $\kappa$ and get no discrete mass spectrum. This problem can be solved by introducing a $z$ dependent mass, as studied in \cite{Abidin:2009hr,Gutsche:2011vb}. Following this approach, the appropriate action that describes the dynamics of the fermionic and gauge fields and their interaction is \begin{eqnarray} \label{Action1} I &=& \int dz d^{4}y \, \sqrt{-g} \, e^{- \kappa^2 z^2 } \,\biggl[ - \frac{1}{4} F_{mn} F^{mn} \, + \, \frac{i}{2} \bar\Psi \epsilon_a^m \Gamma^a {\cal D}_m \Psi \cr &-& \frac{i}{2} ({\cal D}_m\Psi)^\dagger \Gamma^0 \epsilon_a^m \Gamma^a \Psi \,- \, \bar\Psi \Big(\mu + V_F(z)\Big) \Psi \biggr] \,, \end{eqnarray} \noindent where the field strength is $ F^{mn} = \partial^m A^n - \partial^n A^m \,$, $\epsilon_a^m = \delta_a^m \frac{z}{R} $, and the covariant derivative is \begin{equation} {\cal D}_m = \partial_m - \frac{1}{8} \omega_m^{ab} \, [\Gamma_a, \Gamma_b] - i g_{_{V}} A_m\,, \end{equation} \noindent with $ \omega_m^{ab} = - \frac{1}{z} (\delta^a_z \delta^b_m - \delta^b_z \delta^a_m)\,$. The coupling $g_{_{V}}$ will be associated with the electric charge $e$, and $\Gamma^a=(\gamma^\mu, - i\gamma^5)$ are the Dirac matrices.
In order to produce the appropriate discrete spectrum for fermions, we introduced the effective fermionic potential $V_F(z) = \kappa^2 z^2 /R$, depending on the $z$ coordinate (associated with the boundary energy scale), to dress the fermionic bulk mass. Note that we are using here the same parameter $\kappa$ that appears in eq. (\ref{action}) in the dilaton background. We do this because in both cases the parameter $\kappa$ has the role of an infrared regulator that dictates the slope in a plot of $m^{2}$ as a function of $n$ (the radial quantum number), as the hadronic mass spectrum suggests and as is widely accepted. The factor $\kappa$ in the potential will appear, as we will see, in the spectrum of the fermions, while the factor $\kappa$ in the dilaton background appears in the gauge field solution and would also appear in the spectrum of vector mesons, as studied in \cite{Karch:2006pv}. So we take $\kappa$ as a universal infrared mass scale of the model. The physical motivation for using a potential depending on the $z$ coordinate is the following. On one hand, according to the AdS/CFT dictionary, the mass of a supergravity bulk field is related to the dimension of the corresponding boundary operator. On the other hand, in general the dimensions of quantum operators receive anomalous contributions, depending on the energy scale. Since the energy scale of the boundary theory is holographically related to the localization in the $z$ coordinate, the possibility of anomalous contributions to the dimension of the operators can be translated into $z$ dependent masses for the dual bulk modes \cite{Vega:2008te,Cherman:2008eh}. This kind of procedure makes it possible to introduce an important ingredient of QCD that is not considered in most AdS/QCD models. Other related works that consider masses varying in the bulk can be found, for example, in \cite{Forkel:2008un,Forkel:2010gu,Vega:2010ne,Vega:2011tg}.
For the gauge fields it is convenient \cite{Polchinski:2002jw} to impose the gauge condition \begin{equation} \label{gaugechoice} \partial_\mu A^\mu \,+\, z e^{ \kappa^2 z^2} \partial_z \Big( e^{-\kappa^2 z^2} \frac{1}{z} A_z \Big) \,=\,0 \,. \end{equation} Note that from now on we use the notation of raising and lowering the four dimensional indices with the Minkowski metric: $A^\mu \equiv \eta^{\mu\nu} A_\nu\,$ and $\Box \equiv \eta^{\mu\nu}\partial_\mu \partial_\nu\,$. With the gauge choice (\ref{gaugechoice}) the equations of motion that emerge from the action (\ref{Action1}) are \begin{eqnarray} \label{Solutionsgauge} \Box A^\mu &+& z e^{\kappa^2 z^2} \partial_z \Big( e^{-\kappa^2 z^2} \frac{1}{z} \partial_z A^\mu \Big) \,=\, 0\nonumber\\ \Box A_z &-& \partial_z \Big( \partial_\mu A^\mu \Big) \,=\, 0\,. \end{eqnarray} \noindent We impose the condition that the boundary value of the gauge field represents a virtual photon with polarization $\eta^\mu$ and space-like momentum $ q^\mu$ \begin{equation} A_\mu (z, y) \vert_{z\to 0} \,=\, \eta_{\mu} \, e^{iq\cdot y} \,. \end{equation} \noindent The corresponding solutions are \begin{eqnarray} A_\mu (z, y) &=& \eta_\mu \, e^{iq\cdot y} \,\kappa^2 \,\, \Gamma (1+\frac{q^2}{4\kappa^2} )\,\, z^2 \,\,{\cal U} (1+\frac{q^2}{4 \kappa^2} ; 2 ; \kappa^2 z^2 ) \nonumber\\ A_z (z, y) &=& \frac{i}{2} \, \eta \cdot q \, e^{iq\cdot y} \,\, \Gamma (1+\frac{q^2}{4\kappa^2 } )\,\, z \,\, {\cal U} (1+\frac{q^2}{4 \kappa^2} ; 1 ; \kappa^2 z^2 )\,, \label{Gauge} \end{eqnarray} \noindent where $\,{\cal U} (a;b;w) \,$ are the confluent hypergeometric functions of the second kind. Now, regarding the fermionic fields, it is convenient to make the following field redefinition \begin{equation} \Psi (y,z) = e^{ + \kappa^2 z^2/2} \psi(y,z).
\end{equation} \noindent in such a way that the equations of motion take the form \begin{equation} \biggl[ i\not\!\partial + \gamma^5\partial_z - \frac{2}{z} \gamma^5 - \frac{1}{z} \Big(m + \kappa^2 z^2 \Big) \biggr] \psi(y,z) = 0 \,, \end{equation} \noindent where $ \not\!\partial = \gamma^\mu \, \partial_\mu$. The bulk fermion mass $\mu$ was replaced by the dimensionless parameter $m = \mu R $. In order to solve the equation of motion, we decompose the fermionic field into chiral components \begin{equation} \psi(y,z) = \psi_L(y,z) + \psi_R(y,z)\,, \quad \psi_{L/R} = \frac{1 \mp \gamma^5}{2} \psi \,. \end{equation} \noindent with $(\gamma^5)^2 = 1 $. We will consider solutions of the form \begin{equation} \psi_{L/R}(y,z) = e^{iP\cdot y } \frac12 \Big( 1 \mp \gamma^5 \Big) u_s(P) \, \frac{z^2}{R^2} f_{L/R}(z) \,, \end{equation} \noindent which contain the plane wave factor and the four component spinor $ u_s(P) $ corresponding to a four dimensional free fermion with momentum $P^\mu $ and spin $s$. Then the $z$ dependent parts of the chiral components of the fields, $f_{L/R}(z)$, must satisfy \begin{equation} \label{EqzLR} \biggl[ -\partial_z^2 + \kappa^4 z^2 + 2 \kappa^2 \Big(m \mp \frac{1}{2} \Big) + \frac{m (m \pm 1)}{z^2} \biggr] f_{L/R}(z) = - P^2 \, f_{L/R}(z) \,. \end{equation} Following the usual prescription of gauge/gravity dualities, normalizable solutions for fields on the gravity side are dual to states in the boundary four dimensional theory. Equations (\ref{EqzLR}) have normalizable solutions only when $- P^2$, the four dimensional mass squared, has the discrete values \begin{equation} \label{Masses} - P^2_n = M_n^2 = 4 \kappa^2 \Big( n + m + \frac{1}{2} \Big) \,, \end{equation} \noindent with $ n= 0,1,2,\ldots $.
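As a quick numerical illustration of the linear spectrum (\ref{Masses}), the sketch below (not part of the derivation) evaluates $M_n^2 = 4\kappa^2(n+m+1/2)$ using the illustrative values $\kappa = 0.474$ GeV and $m = 0.477$ discussed in the analysis section, and checks that the squared masses are equally spaced with slope $4\kappa^2$:

```python
from math import sqrt

# Sketch of the discrete spectrum M_n^2 = 4 kappa^2 (n + m + 1/2).
# kappa and m are illustrative values quoted in the analysis section:
# m = 0.477 reproduces the proton mass for the ground state n = 0.
kappa = 0.474   # infrared scale in GeV
m = 0.477       # dimensionless bulk mass parameter m = mu R

def M2(n):
    """Squared mass of the n-th radial excitation, in GeV^2."""
    return 4.0 * kappa ** 2 * (n + m + 0.5)

masses = [sqrt(M2(n)) for n in range(4)]
print(masses[0])                              # ground state: ~0.938 GeV

# the squared masses are linear in n: constant spacing 4 kappa^2
spacings = [M2(n + 1) - M2(n) for n in range(3)]
print(spacings)
```

The constant spacing $4\kappa^2$ between consecutive squared masses is the approximately linear behavior seen in the baryon data of table II of \cite{Klempt:2002vp}.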
The corresponding discrete set of normalizable solutions $f^n_{L/R}(z)$ is \begin{eqnarray} f^n_{L}(z) &=& \sqrt{\frac{2\Gamma(n+1)}{\Gamma(n+m+3/2)}} \ \kappa^{m+3/2} \ z^{m+1} \ e^{-\kappa^2 z^2/2} \ L_n^{m+1/2}(\kappa^2z^2) \,, \\ f^n_{R}(z) &=& \sqrt{\frac{2\Gamma(n+1)}{\Gamma(n+m+1/2)}} \ \kappa^{m+1/2} \ z^{m} \ e^{-\kappa^2 z^2/2} \ L_n^{m-1/2}(\kappa^2z^2) \end{eqnarray} \noindent with normalization condition $\int\limits_0^\infty dz \, f^{n^\prime}_{L/R}(z) f^n_{L/R}(z) = \delta_{n^{\prime}n}\,$. We want to describe processes where the initial state is a proton that absorbs a virtual photon, transforming into a final state corresponding to an excited hadronic state of spin $1/2$. So we consider the interaction action \begin{eqnarray} \label{InteractionAction} S_{int} [i,X] = g_{_{V}}\, \int dz d^{4}y \sqrt{-g} e^{- \kappa^2 z^2} \,\frac{z}{R} \, A_m \bar\Psi_X \, \gamma^m \, \Psi_i \,. \end{eqnarray} \noindent where $\Psi_i$ represents the initial proton, which we take as the state with lowest mass level, corresponding to $ n=0$. So the initial momentum $P_i \equiv p $ satisfies $-p^2 = M_0^2 = 4 \kappa^2 ( m + 1/2 )$. The fermionic field $\Psi_X$ represents a final state with (higher) mass $M_X$ and momentum $P_X = p + q $ satisfying $ - P_X^2 = M_X^2 = 4 \kappa^2 ( n_X + m + 1/2 ) $, where $n_X$ denotes the integer associated with the excitation level of the final state.
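The orthonormality of the modes $f^n_{L/R}(z)$ quoted above can be checked numerically. The sketch below (an illustration, not the authors' code) builds $f^n_R(z)$ from its closed form, with the generalized Laguerre polynomials evaluated by their standard three-term recurrence, and integrates with a simple trapezoidal rule; the values $\kappa = 0.474$ and $m = 0.7$ are assumptions taken from the parameter discussion in the analysis section.

```python
from math import gamma, exp, sqrt

def genlaguerre(n, alpha, x):
    """Generalized Laguerre polynomial L_n^alpha(x) via the recurrence
    (k+1) L_{k+1} = (2k+1+alpha-x) L_k - (k+alpha) L_{k-1}."""
    if n == 0:
        return 1.0
    prev, cur = 1.0, 1.0 + alpha - x
    for k in range(1, n):
        prev, cur = cur, ((2 * k + 1 + alpha - x) * cur
                          - (k + alpha) * prev) / (k + 1)
    return cur

kappa, m = 0.474, 0.7          # illustrative parameter choices

def f_R(n, z):
    """Right-handed normalizable mode f_R^n(z) of the dressed-mass model."""
    norm = sqrt(2.0 * gamma(n + 1) / gamma(n + m + 0.5))
    return (norm * kappa ** (m + 0.5) * z ** m
            * exp(-kappa ** 2 * z ** 2 / 2.0)
            * genlaguerre(n, m - 0.5, kappa ** 2 * z ** 2))

def overlap(n1, n2, zmax=40.0, steps=20000):
    """Trapezoidal approximation to int_0^zmax f_R^{n1} f_R^{n2} dz."""
    h = zmax / steps
    s = 0.5 * (f_R(n1, 0.0) * f_R(n2, 0.0) + f_R(n1, zmax) * f_R(n2, zmax))
    for i in range(1, steps):
        z = i * h
        s += f_R(n1, z) * f_R(n2, z)
    return s * h

print(overlap(0, 0))   # ~1 (unit norm)
print(overlap(0, 1))   # ~0 (orthogonality of different levels)
```

The unit norm and the vanishing cross overlap follow analytically from the orthogonality of the Laguerre polynomials with weight $w^{m-1/2}e^{-w}$, $w=\kappa^2 z^2$.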
The corresponding solutions have the form \begin{eqnarray} \label{SolutionFermions} \Psi_i &=& e^{ip\cdot y } e^{ + \kappa^2 z^2/2} \, \frac{z^2}{R^2} \Big[ \Big(\frac{1 - \gamma^5}{2} \Big) u_{s_{i}}(p) \, f^0_{L}(z) \,+\, \Big( \frac{1 + \gamma^5}{2} \Big) u_{s_{i}} (p) \,f^0_{R}(z) \Big] \cr \cr \Psi_X &=& e^{iP_X \cdot y } e^{ + \kappa^2 z^2/2} \, \frac{z^2}{R^2} \Big[ \Big(\frac{1 - \gamma^5}{2} \Big) u_{s_{X}}(P_X) \, f^{n_X}_{L}(z) \,+\, \Big( \frac{1 + \gamma^5}{2} \Big) u_{s_{X}} (P_X) \,f^{n_X}_{R}(z) \Big]\,\,, \cr & & \end{eqnarray} \noindent where $s_i $ and $s_X$ are the spins of the initial and final fermionic states. In order to find the structure functions for the hadrons, we have to calculate the interaction action (\ref{InteractionAction}) with the solutions for the fields. We can simplify the calculations by considering, as was done in \cite{Polchinski:2002jw,BallonBayona:2007qr}, that we are probing the hadron with a particular photon with polarization $\eta^\mu $ satisfying $\eta \cdot q = 0$. In this situation the $z$ component of the gauge field does not contribute and, substituting the solutions (\ref{Gauge}) and (\ref{SolutionFermions}) in (\ref{InteractionAction}), the interaction action, {\it for these initial and final states $(i,X)$}, takes the on shell form \begin{eqnarray} \label{InteractionAction2} S_{int} [i,X] &=& \frac{g_{_{V}}}{2}\, \, (2\pi )^4 \delta^4 ( P_X -p - q ) \eta_\mu \Big[ \bar{u}_{s_{X}}(P_X) \gamma^\mu \Big(\frac{1 - \gamma^5}{2} \Big) u_{s_{i}}(p) {\cal I}_L (n_x ) \cr & & + \,\bar{u}_{s_{X}}(P_X) \gamma^\mu \Big(\frac{1 + \gamma^5}{2} \Big) u_{s_{i}}(p) {\cal I}_R (n_x ) \Big] \,, \end{eqnarray} \noindent where $ {\cal I}_L (n_x ) $ and ${\cal I}_R (n_x ) $ are integrals involving the two fermionic solutions with different chiralities.
They have a similar structure and can be written, in terms of the variable $ w \equiv \kappa^2 z^2 $, in the general form: \begin{equation} \label{generalintegral} {\cal I} ( {\bar m}, n_X ) = C ( {\bar m}, n_X ) \, \Gamma( 1 + \frac{q^2}{4 \kappa^2} ) \, \int_0^\infty dw w^{{\bar m} -1} e^{-w} {\cal U} (1+\frac{q^2}{4\kappa^2} ; 2 ; w ) \, L_{n_X}^{{\bar m} - 2}( w )\,, \end{equation} \noindent where \begin{equation} C ( {\bar m}, n_X ) = \sqrt{\frac{4 \Gamma(n_X +1)}{\Gamma({\bar m} -1 )\Gamma(n_X + {\bar m} -1 )}}\,. \end{equation} The integrals $ {\cal I}_L (n_x ) $ and ${\cal I}_R (n_x ) $ correspond to ${\cal I} ( {\bar m}, n_X )$ with $ {\bar m} = m + 5/2 $ and $ {\bar m} = m + 3/2 $, respectively. Performing the integral (\ref{generalintegral}) we get \begin{equation} \label{I} {\cal I} (\bar{m},n_{X})=\frac{q^2 \Gamma \left(\bar{m}\right) \sqrt{\frac{\Gamma \left(\bar{m}-1\right) \Gamma \left(n_X +\bar{m}-1\right)}{\Gamma (n_X +1)}} \Gamma \left(\frac{q^2}{4 \kappa ^2}+n_X\right)}{2 \kappa ^2 \Gamma \left(\bar{m}-1\right) \Gamma \left(\frac{q^2}{4 \kappa ^2}+n_X+\bar{m}\right)}\,. \end{equation} Now that we have calculated the action in the soft wall background for photons interacting with fermions with dressed mass, we connect this result with the four dimensional boundary theory using a proposal similar to the one used in refs. \cite{Polchinski:2002jw,BallonBayona:2007qr}. In these references the matrix element of the fermionic electromagnetic current in the boundary four dimensional theory was considered to be equal to the corresponding bulk interaction action. Here we will take a different point of view and consider that bulk/boundary duality implies that these quantities are proportional, rather than necessarily equal.
So we assume the relations: \begin{eqnarray} \eta_\mu \langle P_X \vert {\tilde J}^\mu ( q ) \vert P \rangle\, &=& (2 \pi)^4 \, \delta^4 ( P_X - P - q ) \,\eta_\mu \, \langle P + q \vert J^\mu ( 0 ) \vert P \rangle \,=\,{\cal K}_{_{eff}} \, S_{int} [i,X]\, \cr \cr \eta_\mu \langle P \vert {\tilde J}^\mu ( q ) \vert P_X \rangle\, &=& (2 \pi)^4 \, \delta^4 ( P_X - P - q ) \,\eta_\mu \, \langle P \vert J^\mu ( 0 ) \vert P + q \rangle \,=\,\,{\cal K}_{_{eff}} \, S_{int} [X,i]\,,\cr & & \label{Prescription} \end{eqnarray} \noindent where ${\cal K}_{_{eff}}$ plays the role of a bulk/boundary effective factor that phenomenologically adjusts the bulk supergravity quantities to the boundary observed ones. Following this prescription, the hadronic tensor in eq. (\ref{hadronic2}), contracted with the photon polarization $\eta $, can then be written in terms of our interaction action of eq. (\ref{InteractionAction2}) as \begin{eqnarray} \label{Amplitudewithspinors} & & \eta_\mu \eta_\nu W^{\mu\nu} \, = \,\frac14 \,\sum_{M_X} \delta ( M_X^2 + (p+q)^2 ) \,\frac{g_{eff}^2}{4} \times \cr & & \sum_{ s_i } \sum_{s_X} \, \Big\{ {\cal I}_L {\cal I}_R \Big( \bar{u}_{s_{X}} \gamma^\mu \Gamma^{(- )} u_{s_{i}} \bar{u}_{s_{i}} \gamma^\nu \Gamma^{(+ )} u_{s_{X}} + \bar{u}_{s_{X}} \gamma^\mu \Gamma^{(+)} u_{s_{i}} \bar{u}_{s_{i}} \gamma^\nu \Gamma^{(- )} u_{s_{X}} \Big) \cr \cr & & + {\cal I}^2_L \bar{u}_{s_{X}} \gamma^\mu \Gamma^{(-)} u_{s_{i}} \bar{u}_{s_{i}} \gamma^\nu \Gamma^{(-)} u_{s_{X}} \, + {\cal I}^2_R \bar{u}_{s_{X}}\gamma^\mu \Gamma^{(+)} u_{s_{i}} \bar{u}_{s_{i}} \gamma^\nu \Gamma^{(+)} u_{s_{X}}\Big\} \end{eqnarray} \noindent where we defined $ g_{eff} \equiv {\cal K}_{_{eff}} \, g_{_{V}} $ and $ \Gamma^{(\pm )} \equiv \frac{1 \pm \gamma^5}{2}$, and omitted the dependence of the spinors on the momenta. Note that we are summing over the final spins and averaging over the initial ones.
Using the property $$ \sum_s (u_s)_{_\alpha} (p) (\bar{u}_s)_{_\beta} (p) = (\gamma^\mu p_\mu + M )_{_{\alpha\beta}} \,,$$ \noindent satisfied by both the initial and final state (on shell) spinors, with the corresponding masses and momenta, we find \begin{eqnarray} \label{Amplitude} \eta_\mu \eta_\nu W^{\mu\nu} &=& \frac14 \, \sum_{M_X} \delta ( M_X^2 + (p+q)^2 ) \, \, g_{eff}^2 \Big\{ - {\cal I}_L (n_x ) {\cal I}_R (n_x ) M_X M_0 \,\eta \cdot \eta \cr \cr & & + \Big( {\cal I}^2_L (n_x ) + {\cal I}^2_R (n_x ) \Big) \Big( ( p\cdot \eta )^2 - \frac{1}{2} ( p^2 + p \cdot q )\, \eta \cdot \eta \Big) \Big\} \end{eqnarray} The sum over the final states $X$ can be approximated by an integral over a continuum of states, as was done in refs. \cite{Polchinski:2002jw,BallonBayona:2007qr}. In the present case this corresponds to replacing the sum over $X$ of the delta functions by the factor $ \frac{1}{4 \kappa^2} $. That means: \begin{eqnarray} \label{ApproximateAmplitude} \eta_\mu \eta_\nu W^{\mu\nu} &\approx & \frac{ g_{eff}^2}{16 \kappa^2} \Big\{ - {\cal I}_L (n_x ) {\cal I}_R (n_x ) M_X M_0 \,\eta \cdot \eta \cr \cr & & + \Big( {\cal I}^2_L (n_x ) + {\cal I}^2_R (n_x ) \Big) \Big( ( p\cdot \eta )^2 - \frac{1}{2} ( p^2 + p \cdot q )\, \eta \cdot \eta \Big) \Big\} \end{eqnarray} For the particular photon that we are considering, with $\eta \cdot q = 0$, we get from eq. (\ref{Structure}) \begin{equation} \eta_\mu \eta_\nu W^{\mu\nu} \,=\, \eta^2 F_1 \,+\, \frac{2 x}{q^2 } ( \eta \cdot p )^2 \, F_2 \end{equation} Comparing this with our expression for the hadronic tensor eq.
(\ref{ApproximateAmplitude}) we find our results for the fermionic structure functions \begin{eqnarray} \label{Result1} F_1 &=& \frac{ g_{eff}^2 }{16 \kappa^2 } \Big\{ \Big( {\cal I}^2_L (n_x ) + {\cal I}^2_R (n_x ) \Big) \Big( \frac{M_0^2}{2 } + \frac{q^2}{4 x} \Big) \cr \cr & & - {\cal I}_L (n_x ) {\cal I}_R (n_x ) M_0 \sqrt{ M_0^2 + \frac{q^2(1-x)}{x} } \Big\}\cr \cr F_2 & = & \frac { g_{eff}^2 \,}{32 \kappa^2 } \Big( {\cal I}^2_L (n_x ) + {\cal I}^2_R (n_x ) \Big) \frac{ q^2 }{ x} \end{eqnarray} The excitation level $n_x$ of the final state is not an independent variable. It can be expressed in terms of the DIS variables $q^2$ and $x$. Using eqs. (\ref{Bjorkenx}) and (\ref{Masses}) one finds \begin{equation} \label{nx1} - (p + q)^2 = M_0^2 + q^2 \Big[ \frac{1}{x} - 1 \Big]\,= M_X^2 \,= 4 \kappa^2 \Big( n_X + m + \frac{1}{2} \Big) \,, \end{equation} \noindent so \begin{equation} \label{nx2} n_X \,=\, \frac{q^2}{4 \kappa^2 } \Big[ \frac{1}{x} - 1 \Big] \,. \end{equation} \begin{center} \begin{figure}[ht] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~\includegraphics[width=2.8 in]{F2x085FixParameters.pdf} \caption{$F_{2}$ as a function of $ q^2$, where $q^2$ is in units of (GeV)$^2$. The plot considers $x = 0.85$. The upper curve is for $m = 0.6$, the middle one for $m = 0.7$, and the lower one for $m = 0.8$. Dots correspond to experimental data \cite{Nakamura:2010zzi, Whitlow:1991uw}.} \end{figure} \end{center} \section{ Analysis of the results } In order to calculate the structure functions and analyze their dependence on $x$ and $q^2 $, first we have to fix the parameters of the model. The effective coupling $ g_{eff} \equiv {\cal K}_{_{eff}} \,g_{V}$ contains the electric charge $g_{_{V}}$, which satisfies $ g_{_{V}}^2 = 1/137 $, multiplied by the yet undetermined parameter $ {\cal K}_{_{eff}} $, inserted in the model in order to phenomenologically adjust the relation between bulk and boundary quantities and essentially fix the size of the structure functions.
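The relations (\ref{nx1}) and (\ref{nx2}) can be verified with a few lines of arithmetic. The sketch below (illustrative only; $4\kappa^2 = 0.9$ GeV$^2$ and $m = 0.7$ are assumed sample parameter values, and the kinematics are arbitrary) computes the excitation level of the final state from the DIS variables and checks that the two expressions for $M_X^2$ agree:

```python
# Consistency check of the final-state excitation level,
# n_X = (q^2 / 4 kappa^2)(1/x - 1), against the mass formula
# M_X^2 = M_0^2 + q^2 (1/x - 1) = 4 kappa^2 (n_X + m + 1/2).
# All parameter and kinematic values below are illustrative assumptions.
four_kappa2 = 0.9          # 4 kappa^2 in GeV^2
m = 0.7                    # dimensionless bulk mass parameter
q2, x = 9.0, 0.85          # sample DIS kinematics

M0_sq = four_kappa2 * (m + 0.5)                 # initial proton mass squared
n_X = (q2 / four_kappa2) * (1.0 / x - 1.0)      # excitation level of X

MX_sq_kinematic = M0_sq + q2 * (1.0 / x - 1.0)  # from the Bjorken variable
MX_sq_spectrum = four_kappa2 * (n_X + m + 0.5)  # from the discrete spectrum
print(n_X, MX_sq_kinematic, MX_sq_spectrum)     # the two M_X^2 values agree
```

Note that for generic $(x, q^2)$ the value of $n_X$ obtained this way is not an integer; treating it as a continuous variable is the continuum approximation used for the sum over final states.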
We found that the choice $g_{eff}^{2} = 0.66$ leads to structure functions $F_{2}$ with values compatible with experimental data, so we used this value in the calculations. The infrared energy scale $\kappa$ is related to the slope in a plot of $m^{2}$ as a function of $n$. In fact, in the present holographic model this slope is $4 \kappa^{2}$. We choose $4 \kappa^{2} = 0.9 \, GeV^{2}$, which was found in refs. \cite{Forkel:2007cm,Vega:2008te} to give a good adjustment for the nucleon masses. So we use $\kappa = 0.474 \, GeV$. The other parameter of the model, $m$, is considered as a free parameter, associated, as we discussed before, with the anomalous dimension of the operator that creates the baryonic states. This parameter appears in the mass spectrum and can be fixed in this way. For example, requiring the proton mass to be equal to 0.938 GeV gives $m = 0.477$, but we prefer to consider values that give the best fit of the shape of the structure function $F_2 $ to the experimental data, and that produce values for the proton mass close to the experimental one. The values of $m$ considered are 0.6, 0.7 and 0.8, which produce 0.995, 1.039 and 1.081 GeV, respectively, for the proton mass; they were adjusted using data for $x=0.85$, see Fig. 2. It is important to remark that the supergravity approximation used in the model is, in principle, valid only for large values of $x$. This happens because, as discussed in \cite{Polchinski:2002jw}, for low values of $x$ string theory corrections would become relevant. So we analyzed the region $ 0.8 < x < 1.0 $. We show in Fig. 2 the structure functions found with the holographic model for $x = 0.85$, compared with the corresponding experimental data that appear in the PDG \cite{Nakamura:2010zzi, Whitlow:1991uw}, considering the values of $m$ that led to the best fits, that is, $m$ close to 0.7.
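Putting the pieces together, the sketch below evaluates $F_2$ from eq. (\ref{Result1}), using the closed form (\ref{I}) for the overlap integrals and the level formula (\ref{nx2}), with the parameter values quoted above ($g_{eff}^2 = 0.66$, $4\kappa^2 = 0.9$ GeV$^2$, $m = 0.7$). Treating $n_X$ as continuous follows the continuum approximation of the text; this is an illustrative re-implementation, not the authors' code.

```python
from math import gamma, sqrt

g_eff2 = 0.66              # effective coupling squared, the paper's choice
kappa2 = 0.9 / 4.0         # kappa^2 in GeV^2, from 4 kappa^2 = 0.9 GeV^2
m = 0.7                    # dimensionless bulk mass parameter

def I_cal(mbar, q2, nX):
    """Closed form of the overlap integral I(mbar, n_X), eq. (I)."""
    a = q2 / (4.0 * kappa2)
    pref = q2 * gamma(mbar) / (2.0 * kappa2 * gamma(mbar - 1.0))
    root = sqrt(gamma(mbar - 1.0) * gamma(nX + mbar - 1.0) / gamma(nX + 1.0))
    return pref * root * gamma(a + nX) / gamma(a + nX + mbar)

def F2(x, q2):
    """Structure function F_2(x, q^2) of the dressed-mass soft wall model."""
    nX = (q2 / (4.0 * kappa2)) * (1.0 / x - 1.0)   # continuum approximation
    IL = I_cal(m + 2.5, q2, nX)                    # mbar = m + 5/2
    IR = I_cal(m + 1.5, q2, nX)                    # mbar = m + 3/2
    return g_eff2 / (32.0 * kappa2) * (IL ** 2 + IR ** 2) * q2 / x

for q2 in (4.0, 9.0, 30.0):
    print(q2, F2(0.85, q2))    # F_2 decreases with q^2 at fixed large x
```

At fixed $x = 0.85$ the output decreases monotonically with $q^2$, reproducing the qualitative behavior of Fig. 2.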
\begin{center} \begin{figure}[ht] \begin{tabular}{c c} \includegraphics[width=2.8 in]{F2x08New.pdf} & \includegraphics[width=2.8 in]{F2x09New.pdf} \end{tabular} \caption{$F_{2}$ as a function of $ q^2$. The plot on the left considers $x = 0.8$ and the one on the right $x = 0.9$. In both plots the upper curve is for $m = 0.6$, the middle one for $m = 0.7$, and the lower one for $m = 0.8$.} \end{figure} \end{center} In order to check whether the dependence of $F_{2}$ on $ q^2 $ has a similar form for other values of $x$ in the range considered, we show in Fig. 3 this structure function for $x =0.8$ and $ x= 0.9$. In both cases we included plots for three different choices of the parameter $m$. We see that $F_{2}$ decreases with $q^2$ in a way that is compatible with the experimental results shown in figure (16.7) of \cite{Nakamura:2010zzi} or in \cite{Whitlow:1991uw} for $x=0.85$. It is important to stress the fact that we are only considering a region of high values of the Bjorken parameter $x$ (close to the elastic case $ x = 1$). For the range considered, $ 0.8 \le x < 1 $, the experimental results show a strong dependence of the structure function $F_2$ on $q^2$. That is, there is no Bjorken scaling in this region. So, the picture of the virtual photon interacting with just one parton carrying a fraction $x$ of the hadron momentum, which corresponds to structure functions depending only on $x$ and not on $q^2$, does not hold in this region. It would be valid for smaller values of $x$, as can be seen in ref. \cite{Nakamura:2010zzi}. This is consistent with our approximation of representing the final states by just a single hadron carrying the total momentum. Let us now consider the dependence of $F_{2}$ on the Bjorken parameter $x$. There are not many experimental results for this structure function for $x$ close to one, but one finds in ref.
\cite{Malace:2009kw} an interesting investigation of $F_{2}$ at large $x$, for low values of $q^2$, using some parametrizations obtained from experimental data. In particular, they show results for $F_2$ as a function of $x$ for the cases $q^2 = 4 $ GeV$^2$ and 9 GeV$^2$. Their results show a decrease in this structure function as $x$ increases in the interval $ 0.9 < x < 1$. We show in Fig. 4 the structure function $F_2 $ obtained using our holographic model with $z$ dependent mass in the range $0.8 < x < 1.0 $, for these two values of $q^{2}$ analyzed in \cite{Malace:2009kw}, using values of $m$ around 0.7. The order of magnitude of our results was adjusted by the choice of the effective coupling $g_{eff}$ to be consistent with the experimental values, so they are of the same order as those found in \cite{Malace:2009kw}; however, in contrast to the results of \cite{Malace:2009kw}, we found an increase in the structure functions when $ x \to 1 $. So, our model for DIS of baryons does not give a good description of the dependence of $F_2$ on $x$ at low $q^{2}$. This may be a consequence of the fact that in this simple model the final hadronic states contain just one baryon with spin $1/2$. In a general non elastic process the final state may include more than one hadron and also hadrons with higher spins. A description of such processes, with multiple hadrons and also higher spins, is outside the scope of the present kind of model. Nevertheless it is interesting to see that the model is able to reproduce, by adjusting the free parameters, the experimental dependence of $F_2 $ on $q^2 $. It is also interesting to consider the high $q$ limit of the structure functions, eq. (\ref{Result1}). Using the property \begin{center} \begin{figure}[ht] \begin{tabular}{c c} \includegraphics[width=2.8 in]{F2Q4x08a1New.pdf} & \includegraphics[width=2.8 in]{F2Q8x08a1New.pdf} \end{tabular} \caption{$F_{2}$ as a function of $x$.
The plot on the left corresponds to $q^{2} = 4~$GeV$^{2}$ and the one on the right to $q^{2} = 8~$GeV$^{2}$. In both plots the upper curve is for $m = 0.6$, the middle one for $m = 0.7$, and the lower one for $m = 0.8$.} \end{figure} \end{center} \begin{equation} \label{Aprox} \frac{\Gamma (a + y)}{\Gamma (b + y)} = \biggl( \frac{1}{y} \biggr)^{b-a} \biggl( 1 + O (y^{-1}) \biggr);~~~~~y\rightarrow \infty \,, \end{equation} \noindent we can expand the integrals $I_{L,R}$ defined in eq. (\ref{I}), which appear in eq. (\ref{Result1}). In this way we find properties shared with other AdS/QCD models at leading order in $q$, such as \begin{equation} F_{2} = 2 F_{1} \end{equation} \noindent which implies that for $ x \to 1 $ we approach the Callan-Gross relation. We also find \begin{equation} F_{2} \sim \biggl( \frac{q^{2}}{4 \kappa^{2}} \biggr)^{-m - \frac{1}{2}} x^{m + \frac{5}{2}} (1 - x)^{m - \frac{1}{2}}. \end{equation} \begin{center} \begin{figure}[ht] \begin{tabular}{c c} \includegraphics[width=2.8 in]{F2Q30x08a1New.pdf} & \includegraphics[width=2.8 in]{F2Q1000x08a1New.pdf} \end{tabular} \caption{$F_{2}$ vs. $x$. The plot on the left corresponds to $q^{2} = 30~$GeV$^{2}$ and the one on the right to $q^{2} = 1000~$GeV$^{2}$. In both plots the upper curve is for $m = 0.6$, the middle one for $m = 0.7$, and the lower one for $m = 0.8$. It can be seen that at high $q$ the structure functions decrease as $x \to 1$.} \end{figure} \end{center} Notice that when $m = \tau - 3/2$, where $\tau$ is the twist (the dimension of the operator that creates the state minus its spin), our model reproduces results found in previous works that consider DIS in holographic models \cite{Polchinski:2002jw, BallonBayona:2007qr}. This is not surprising, because the dilaton decouples in this limit, a property discussed in \cite{Brodsky:2007hb, Vega:2012iz}. Additionally, at high values of $q^{2}$, we find a decreasing behavior of the structure function when $x$ is close to one, as can be seen in Fig. 5.
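The asymptotic property (\ref{Aprox}) is easy to verify numerically. The following minimal sketch (an illustration added here, not part of the original analysis; the numerical values of $a$, $b$, $y$ are arbitrary) works with $\log\Gamma$ via the standard-library \texttt{lgamma} to avoid overflow of the Gamma function at large argument:

```python
import math

# Check Gamma(a+y)/Gamma(b+y) ~ y^(a-b) for large y, in log space:
# log Gamma(a+y) - log Gamma(b+y) ~ (a-b) * log(y) + O(1/y).
def log_ratio(a, b, y):
    return math.lgamma(a + y) - math.lgamma(b + y)

a, b, y = 1.2, 2.9, 1.0e6
exact = log_ratio(a, b, y)
approx = (a - b) * math.log(y)
# the leading-order approximation is accurate up to O(1/y) corrections
assert abs(exact - approx) < 1e-4
```

The correction term is of order $1/y$, consistent with the $O(y^{-1})$ factor in (\ref{Aprox}).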
Finally, it is interesting to discuss the role of the parameter $m$ ($\equiv \mu R$) that was used to adjust the form of the structure functions and find a good fit to the experimental data. This parameter was introduced as a five-dimensional mass term of the fermionic field (in units of the inverse of the AdS radius). The five-dimensional mass in the gauge/string correspondence is related to the scaling dimension of the dual boundary operator. In the standard approach, based on AdS/CFT, a fermionic field with a five-dimensional constant mass $\mu$ is dual to a boundary operator with a (constant) scaling dimension \cite{Polchinski:2002jw} \begin{equation} \label{Dimension} \Delta = \mu R + 2 \,. \end{equation} \noindent Note that in a conformal field theory, which is the case in AdS/CFT, the dimensions of the operators do not vary with the energy. Here, we followed a phenomenological AdS/QCD approach and described the fermionic fields as in refs. \cite{Vega:2008te,Cherman:2008eh}. That is, since in a non-conformal theory the scaling dimensions of the operators in general vary with the energy scale, we introduced an effective mass for the fermionic field of the form $( m + \kappa^2 z^2 )/R$. Since the bulk coordinate $z$ is related to the energy scale of the boundary theory, this $z$-dependent mass term represents an effective way of incorporating the anomalous dimension of the boundary fermionic operators into the model. The asymptotic behaviour of our fermionic solutions when $z \to 0$, which determines the dimension of the dual boundary operator, is not affected by the presence of the $\kappa^2 z^2$ term. So, we can assume relation (\ref{Dimension}) to hold in our model. Thus, changing the value of $m$ corresponds to changing the dimension of the baryon operator. We can illustrate this relation by taking two limiting cases.
First, if we consider a baryon as being built from three fermionic operators (the valence quarks), we should have $\Delta = 9/2$ and then $m = 5/2$. On the other hand, if we consider the baryon operator as just one fermionic operator (like a particle without any internal structure), we have $\Delta = 1/2$, leading to $m = -1/2$. Note that these two values of $\Delta$ are just classical scaling dimensions. The values of $m$ that we found to give a good fit for the structure function $F_2$ lie in a region near $m \approx 0.7$. We may interpret this result as indicating that the hadron behaves, in the range of $x$ and $q^2$ that we considered, as having an effective scaling dimension $\Delta_{effective} \approx 2.7$. \noindent {\bf Acknowledgments:} We would like to thank Carlos Alfonso Ballon Bayona for important discussions. N.B. is partially supported by CNPq and Capes (Brazil) and A.V. is supported by Fondecyt (Chile) under Grant No. 3100028. A.V. is grateful for the hospitality of the Instituto de F\'{\i}sica of Universidade Federal do Rio de Janeiro, where this work started.
\section{Introduction} Wave motion is prevalent in many applications and has a great impact on our daily lives. Examples include the use of seismic waves \cite{Aki_and_Richards_1981,10.1785/BSSA0530010217B} to image natural resources in the Earth's subsurface, to detect cracks and faults in structures, to monitor underground explosions, and to investigate strong ground motions from earthquakes. Most wave propagation problems are formulated in large or infinite domains. However, because of limited computational resources, numerical simulations must be restricted to smaller computational domains by introducing artificial boundaries. Therefore, reliable and efficient domain truncation techniques that significantly minimise artificial reflections are important for the development of effective numerical wave solvers. A straightforward approach to construct a domain truncation procedure is to surround the computational domain with an absorbing layer of finite thickness such that outgoing waves are absorbed. For this approach to be effective, all outgoing waves entering the layer should decay without reflections, regardless of the frequency and angle of incidence. An absorbing layer with this desirable property is called a perfectly matched layer (PML) \cite{BERENGER1994185,Chew_and_Weedon,Kuzuoglu_and_Mittra,Duru2014K,Duru2012JSC,BECACHE2003399,APPELO2006642,Baffet2019,BecaheKachanovska2021}. The PML was first derived for electromagnetic waves in the pioneering work \cite{BERENGER1994185,Chew_and_Weedon} but has since been extended to other applications, for example acoustic and elastic waves \cite{Duru2014K,Duru2012JSC,BECACHE2003399,APPELO2006642,Baffet2019,BecaheKachanovska2021}. The PML has gained popularity because of its effective absorption properties, versatility, simplicity, and ease of derivation and implementation using standard numerical methods.
A stable PML model, when effectively implemented in a numerical solver, can yield a domain truncation scheme that ensures the convergence of the numerical solution to the solution of the unbounded problem \cite{Baffet2019,DURU2019898,ElasticDG_PML2019}. However, the PML is also notorious for supporting fatal instabilities which can destroy the accuracy of numerical solutions. These undesirable exponentially growing modes can be present in either the PML model at the continuous level or the numerical method at the discrete level. The stability analysis of the PML has attracted substantial attention in the literature, see for example \cite{Duru2014K,Duru2012JSC,BECACHE2003399,APPELO2006642,Baffet2019,BecaheKachanovska2021}, and \cite{KDuru_and_GKreiss_Review_2022} for a recent review. For hyperbolic PDEs, mode analysis for PML initial value problems (IVP) with constant damping and constant material properties yields a necessary geometric stability condition \cite{BECACHE2003399}. When this condition is violated, modes that grow exponentially in time are present, rendering the PML model useless. In certain cases, for example the acoustic wave equation with constant coefficients, analytical solutions can be derived \cite{BecaheKachanovska2021,DIAZ20063820}. In addition, energy estimates for the PML have recently been derived in physical space \cite{Baffet2019} and Laplace space \cite{DURU2019898,ElasticDG_PML2019}, which can be useful for deriving stable numerical methods. However, in general, even if the PML IVP does not support growing modes, there can still be stability issues when boundaries and material interfaces are introduced. For the extension of mode stability analysis to boundary and guided waves in homogeneous media, see \cite{Duru2014K,DURU2015372,Duru2012JSC, DURU2014445}. The stability analysis of the PML in discontinuous acoustic media was presented in \cite{DURU2014757}.
To the best of our knowledge, the stability analysis of the PML for more general wave media, such as discontinuous or layered elastic solids, has not been reported in the literature. In geophysical and seismological applications, the wave media can be composed of layers of rocks, soft and hard sediments, bedrock layers, water and possibly oil. In layered elastic media, the presence of interface wave modes such as Stoneley waves \cite{doi:10.1098/rspa.1924.0079,10.1785/BSSA0530010217B} makes the stability analysis of the PML more challenging. Numerical experiments have also reported PML instabilities and poor performance for problems with material boundaries entering into the layer and problems with strong evanescent waves \cite{APPELO20094200}. These existing results motivated this study to investigate where the inadequacies of the PML arise. The main objective of this study is to analyse the stability of interface wave modes for the PML in discontinuous elastic solids. Using normal mode analysis, we prove that if the PML IVP has no temporally growing modes, then all interface wave modes present at a planar interface of bi-material elastic solids are dissipated by the PML. The analysis closely follows the steps taken in \cite{Duru2014K} for boundary wave modes, but here we apply the techniques to investigate the stability of interface wave modes in the PML. Numerical experiments in two-layered isotropic and anisotropic elastic solids, and in a multi-layered isotropic elastic solid, corroborate the theoretical analysis. The remainder of the paper proceeds as follows. In the next section, we present the elastic wave equation in discontinuous media, define interface conditions and discuss energy stability for the model problem. In section 3, we introduce the mode analysis for body and interface wave modes, and formulate the determinant condition that is necessary for stability. The PML model is derived in section 4.
In section 5, we present the stability analysis of the PML in a piecewise constant elastic medium and formulate the main results. Numerical examples are given in section 6, corroborating the theoretical analysis. In section 7, we draw conclusions. \section{The elastic wave equation in discontinuous media} Consider the 2D elastic wave equation in the two half-planes, $\Omega_1 = (-\infty, \infty)\times(0, \infty)$ and $\Omega_2 = (-\infty, \infty)\times(-\infty, 0)$ \begin{align} \rho_i\frac{\partial^2 \mathbf{u}_i}{\partial t^2} &= \frac{\partial}{\partial x}\left(A_i\frac{\partial\mathbf{u}_i}{\partial x} + C_i\frac{\partial\mathbf{u}_i}{\partial y}\right)+\frac{\partial}{\partial y}\left(B_i\frac{\partial\mathbf{u}_i}{\partial y} + C^T_i\frac{\partial\mathbf{u}_i}{\partial x}\right), \quad (x,y) \in \Omega_i, \quad i = 1, 2, \label{pde1} \end{align} with a planar interface at $y = 0$ and subject to the smooth initial conditions $$ \mathbf{u}_i(x, y, 0) = \mathbf{f_i}(x,y), \quad \frac{\partial \mathbf{u}_i}{\partial t}(x, y, 0) = \mathbf{g_i}(x,y). $$ At the material interface $y=0$, we impose the physical interface conditions \begin{equation}\label{int_con} \mathbf{u}_1=\mathbf{u}_2,\quad B_1\frac{\partial\mathbf{u}_1}{\partial y}+C_1^T\frac{\partial\mathbf{u}_1}{\partial x}=B_2\frac{\partial\mathbf{u}_2}{\partial y}+C_2^T\frac{\partial\mathbf{u}_2}{\partial x}, \end{equation} which correspond to continuity of displacement and continuity of traction. For $i = 1,2$, we have the unknown displacement vectors $\mathbf{u}_i=[u_{i1}, u_{i2}]^T$. The medium parameters are described by the densities $\rho_i >0$ and the coefficient matrices $A_i$, $B_i$, $C_i$ of elastic constants.
In 2D orthotropic elastic media, the elastic coefficients are described by four independent parameters $c_{11_i}$, $c_{22_i}$, $c_{33_i}$, $c_{12_i}$ and the coefficient matrices are given by \begin{equation}\label{mp} A_i = \begin{bmatrix} c_{11_i} & 0 \\ 0 & c_{33_i} \end{bmatrix}, \quad B_i = \begin{bmatrix} c_{33_i} & 0 \\ 0 & c_{22_i} \end{bmatrix}, \quad C_i = \begin{bmatrix} 0 & c_{12_i} \\ c_{33_i} & 0 \end{bmatrix},\quad i = 1,2. \end{equation} Here, the material coefficients $c_{11_i}$, $c_{22_i}$, $c_{33_i}$ are always positive, but $c_{12_i}$ may be negative for certain materials. In general, for stability, we require \begin{align}\label{elastic_contants_ellipticity} c_{11_i} > 0, \quad c_{22_i} >0, \quad c_{33_i} >0, \quad c_{11_i}c_{22_i}-c_{12_i}^2>0. \end{align} For planar waves propagating along the $x$-direction and $y$-direction, the $p$-wave speed and $s$-wave speed are given by \begin{align}\label{elastic_wave_speeds_an} c_{px_i}:= \sqrt{\frac{c_{11_i}}{\rho_i}}, \quad c_{sx_i}:= \sqrt{\frac{c_{33_i}}{\rho_i}}, \quad c_{py_i}:= \sqrt{\frac{c_{22_i}}{\rho_i}}, \quad c_{sy_i}:= \sqrt{\frac{c_{33_i}}{\rho_i}}. \end{align} In the case of isotropic media, the material properties can be described by using only two Lam\'{e} parameters, $\mu_i >0$ and $\lambda_i$, such that $c_{11_i}=c_{22_i}=2\mu_i+\lambda_i$, $c_{33_i}=\mu_i>0$, $c_{12_i}=\lambda_i > - \mu_i$, yielding \begin{equation}\label{mp_isotropic} A_i = \begin{bmatrix} 2\mu_i+\lambda_i & 0 \\ 0 & \mu_i \end{bmatrix}, \quad B_i = \begin{bmatrix} \mu_i & 0 \\ 0 & 2\mu_i+\lambda_i \end{bmatrix}, \quad C_i = \begin{bmatrix} 0 & \lambda_i \\ \mu_i & 0 \end{bmatrix},\quad i=1,2, \end{equation} with the wave speeds \begin{align}\label{elastic_wave_speeds_iso} c_{pi}:= \sqrt{\frac{2\mu_i+\lambda_i}{\rho_i}}, \quad c_{si}:= \sqrt{\frac{\mu_i}{\rho_i}}. \end{align} In isotropic media, a wave mode propagates with the same wave speed in all directions.
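As a quick numerical sanity check of the stability conditions \eqref{elastic_contants_ellipticity} and the isotropic wave speeds \eqref{elastic_wave_speeds_iso}, the following sketch (illustrative only; the granite-like material values are hypothetical and not taken from this paper) builds the isotropic constants from the Lam\'{e} parameters:

```python
import math

# Hypothetical isotropic material (granite-like values; illustrative only)
rho, mu, lam = 2700.0, 30.0e9, 20.0e9   # kg/m^3, Pa, Pa

# Isotropic elastic constants: c11 = c22 = 2*mu + lam, c33 = mu, c12 = lam
c11 = c22 = 2.0 * mu + lam
c33 = mu
c12 = lam

# Stability (ellipticity) conditions from the text
assert c11 > 0 and c22 > 0 and c33 > 0
assert c11 * c22 - c12**2 > 0

# P- and S-wave speeds
cp = math.sqrt((2.0 * mu + lam) / rho)
cs = math.sqrt(mu / rho)
assert cp > cs   # the P wave is always faster than the S wave
```

For these values one obtains $c_p \approx 5.4$ km/s and $c_s \approx 3.3$ km/s, both independent of the propagation direction, as expected in an isotropic medium.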
We introduce the strain-energy matrix \begin{align}\label{eq:energy_matrix} P_i = \begin{bmatrix} A_i & C_i\\ C^T_i & B_i \end{bmatrix}. \end{align} The symmetric strain-energy matrix $P_i$ is positive semi-definite \cite{Lai1993}. For $i=1,2$, we define the mechanical energy in the medium $\Omega_i$ by \begin{align}\label{eq:energy_continuous} E_i(t) = \frac{1}{2}\int_{\Omega_i}\left( \rho_i\left(\frac{\partial\mathbf{u}_i}{\partial t}\right)^T\left(\frac{\partial\mathbf{u}_i}{\partial t}\right) + \begin{bmatrix} \frac{\partial\mathbf{u}_i}{\partial x}\\ \frac{\partial\mathbf{u}_i}{\partial y} \end{bmatrix}^T P_i \begin{bmatrix} \frac{\partial\mathbf{u}_i}{\partial x}\\ \frac{\partial\mathbf{u}_i}{\partial y} \end{bmatrix}\right)dxdy. \end{align} The following theorem states a stability result for the coupled problem, \eqref{pde1} with \eqref{int_con}. \begin{theorem}\label{Thm_energy} The elastic wave equation \eqref{pde1}, for $i = 1, 2$, subject to the interface condition \eqref{int_con} is energy conserving, that is $E_1(t)+E_2(t)=E_1(0)+E_2(0)$ for all $t\ge 0$, where the energy $E_i(t)$ is defined in \eqref{eq:energy_continuous}.
\end{theorem} \begin{proof} Multiply \eqref{pde1} by $\left(\frac{\partial\mathbf{u}_i}{\partial t}\right)^T$ and integrate over the spatial domain $\Omega_i$, yielding \begin{align*} & \int_{\Omega_i}\rho_i\left(\frac{\partial\mathbf{u}_i}{\partial t}\right)^T\left(\frac{\partial^2\mathbf{u}_i}{\partial t^2}\right) dxdy=\int_{\Omega_i}\left[ \left(\frac{\partial\mathbf{u}_i}{\partial t}\right)^T \frac{\partial}{\partial x}\left(A_i\frac{\partial\mathbf{u}_i}{\partial x}\right)+\left(\frac{\partial\mathbf{u}_i}{\partial t}\right)^T \frac{\partial}{\partial y}\left(B_i\frac{\partial\mathbf{u}_i}{\partial y}\right)\right.\\&+\left.\left(\frac{\partial\mathbf{u}_i}{\partial t}\right)^T \frac{\partial}{\partial x}\left(C_i\frac{\partial\mathbf{u}_i}{\partial y}\right)+\left(\frac{\partial\mathbf{u}_i}{\partial t}\right)^T \frac{\partial}{\partial y}\left(C_i^T\frac{\partial\mathbf{u}_i}{\partial x}\right)\right]dxdy. \end{align*} Integrating by parts and summing contributions from both half-planes, $i = 1, 2$, we obtain \begin{align*} \sum_{i=1}^2 & \int_{\Omega_i}\rho_i\left(\frac{\partial\mathbf{u}_i}{\partial t}\right)^T\left(\frac{\partial^2\mathbf{u}_i}{\partial t^2}\right) dxdy =-\sum_{i=1}^2\int_{\Omega_i} \left[\left(\frac{\partial^2\mathbf{u}_i}{\partial t\partial x}\right)^T \left(A_i\frac{\partial\mathbf{u}_i}{\partial x}\right)+\left(\frac{\partial^2\mathbf{u}_i}{\partial t\partial y}\right)^T \left(B_i\frac{\partial\mathbf{u}_i}{\partial y}\right)\right.\\&+ \left.\left(\frac{\partial^2\mathbf{u}_i}{\partial t\partial x}\right)^T \left(C_i\frac{\partial\mathbf{u}_i}{\partial y}\right)+\left(\frac{\partial^2\mathbf{u}_i}{\partial t\partial y}\right)^T \left(C_i^T\frac{\partial\mathbf{u}_i}{\partial x}\right)\right]dxdy\\ &-\int_{-\infty}^{\infty}\left(\frac{\partial\mathbf{u}_1}{\partial t}\right)^T\left(B_1\frac{\partial\mathbf{u}_1}{\partial y}+C_1^T\frac{\partial\mathbf{u}_1}{\partial x}\right)\bigg|_{y=0}dx + 
\int_{-\infty}^{\infty}\left(\frac{\partial\mathbf{u}_2}{\partial t}\right)^T\left(B_2\frac{\partial\mathbf{u}_2}{\partial y}+C_2^T\frac{\partial\mathbf{u}_2}{\partial x}\right)\bigg|_{y=0}dx. \end{align*} The interface terms at $y=0$ vanish because of \eqref{int_con}. The relation can then be rewritten as \[ \frac{1}{2}\frac{d}{dt}\sum_{i=1}^2\int_{\Omega_i}\rho_i\left(\frac{\partial\mathbf{u}_i}{\partial t}\right)^T\left(\frac{\partial\mathbf{u}_i}{\partial t}\right)dxdy = -\frac{1}{2}\frac{d}{dt}\sum_{i=1}^2\int_{\Omega_i}\begin{bmatrix} \frac{\partial\mathbf{u}_i}{\partial x}\\ \frac{\partial\mathbf{u}_i}{\partial y} \end{bmatrix}^T \begin{bmatrix} A_i & C_i\\ C^T_i & B_i \end{bmatrix} \begin{bmatrix} \frac{\partial\mathbf{u}_i}{\partial x}\\ \frac{\partial\mathbf{u}_i}{\partial y} \end{bmatrix}dxdy . \] Moving all terms to the left-hand side and identifying the energy gives $$ \frac{d}{dt}\left(E_1(t)+E_2(t)\right) =0. $$ The time derivative of the energy vanishes, thus $E_1(t)+E_2(t) = E_1(0)+E_2(0)$ for all $t \ge 0$. The proof is complete. \end{proof} We say that the problem is energy-stable if the energy is conserved or dissipated. \section{Mode analysis}\label{sec:mode_analysis} Theorem \ref{Thm_energy} proves energy stability of the elastic wave equation \eqref{pde1} in general media $\Omega_i$, for $i = 1, 2$, subject to the interface condition \eqref{int_con}. However, the theorem does not provide information about the wave modes that may exist in the medium. In this section, we use mode analysis to gain insight into the existence of possible wave modes. More precisely, we start by considering a constant-coefficient problem for the existence of body waves. After that, we analyse interface waves in media with piecewise constant material properties and formulate a stability result in the framework of normal mode analysis.
\subsection{Plane waves and dispersion relations}\label{sec:dispersion_relation} To study the existence of body wave modes, we consider the problem \eqref{pde1} in the whole real plane $(x,y) \in \mathbb{R}^2$ with constant medium parameters, $$ \rho_i = \rho, \quad A_i = A, \quad B_{i} = B, \quad C_i = C, $$ for $\Omega_i$, $i = 1, 2$. In this case, there is no interface condition at $y=0$, and the material parameters are constant in the entire domain $\Omega_1\cup\Omega_2$. Consider the wave-like solution \begin{align}\label{eq:plane_wave} \mathbf{u}\left(x, y, t\right) = \mathbf{u}_0 e^{st+\bm{i}\left( k_x x + k_y y \right)}, \quad \mathbf{u}_0 \in \mathbb{R}^2, \quad k_x, k_y, x, y \in \mathbb{R}, \quad t\ge 0, \quad \bm{i} = \sqrt{-1}. \end{align} In \eqref{eq:plane_wave}, $\mathbf{k}=\left(k_x, k_y\right) \in \mathbb{R}^2$ is the wave vector, and $ \mathbf{u}_0\in \mathbb{R}^2$ is a vector of constant amplitude called the polarisation vector. By inserting \eqref{eq:plane_wave} into \eqref{pde1}, we have the eigenvalue problem \begin{align}\label{body_wave_speeds_eig} -s^2\mathbf{u}_0 = \mathcal{P}(\mathbf{k})\mathbf{u}_0, \quad \mathcal{P}(\mathbf{k}) = \frac{k_x^2 A + k_y^2 B + k_xk_y(C + C^T)}{\rho}. \end{align} The polarisation vector $\mathbf{u}_0 \in \mathbb{R}^2$ is an eigenvector of the matrix $\mathcal{P}(\mathbf{k})$ and $-s^2$ is the corresponding eigenvalue. For problems that are energy conserving, the matrix $\mathcal{P}(\mathbf{k})$ is symmetric positive definite for all nonzero $\mathbf{k} \in \mathbb{R}^2$. Thus, the eigenvectors $\mathbf{u}_0$ of $\mathcal{P}(\mathbf{k})$ are orthogonal and the eigenvalues are real and positive, $-s^2 >0$.
The wave-mode \eqref{eq:plane_wave} is a solution of the elastic wave equation \eqref{pde1} in the whole plane $(x,y) \in \mathbb{R}^2$ if $s$ and $\mathbf{k}$ satisfy the dispersion relation \begin{align}\label{eq:dispersion_relation_f} &F\left(s, \mathbf{k}\right):= \det\left({s^2} I + \mathcal{P}(\mathbf{k})\right) = 0. \end{align} Evaluating the determinant and simplifying further, we obtain { \footnotesize \begin{align}\label{eq:dispersion_orthotropic} F\left(s, \mathbf{k}\right) = s^4 + \frac{\left(c_{11} + c_{33}\right)k_x^2+\left(c_{22} + c_{33}\right)k_y^2}{\rho} s^2 + \frac{c_{11}c_{33}k_x^4 + c_{22}c_{33}k_y^4 + \left(c_{11}c_{22} + c_{33}^2 - \left(c_{33}+c_{12}\right)^2\right)k_x^2k_y^2}{\rho^2} = 0. \end{align} } In an isotropic medium, with $c_{11}=c_{22}=2\mu+\lambda$, $c_{33}=\mu>0$, $c_{12}=\lambda > - \mu$, the dispersion relation simplifies to \begin{align}\label{eq:dispersion_isotropic} F\left(s, \mathbf{k}\right)=\left({s^2} + c_p^2|\mathbf{k}|^2\right)\left({s^2}+ c_s^2|\mathbf{k}|^2\right)=0, \quad c_p= \sqrt{\frac{2\mu + \lambda}{\rho}}, \quad c_s= \sqrt{\frac{\mu}{\rho}}, \quad |\mathbf{k}| = \sqrt{k_x^2 + k_y^2}. \end{align} Then, the eigenvalues are given by \begin{equation}\label{eq:eigenvalues_isotropic} \begin{split} -s_1^2 = c_p^2|\mathbf{k}|^2, \quad -s_2^2= c_s^2|\mathbf{k}|^2, \end{split} \end{equation} which correspond to the P-wave and S-wave propagating in the medium. 
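The eigenvalue computation \eqref{body_wave_speeds_eig}--\eqref{eq:eigenvalues_isotropic} can be verified numerically. The sketch below (an illustration with arbitrary parameter values, not taken from the paper) assembles the entries of the symmetric matrix $\mathcal{P}(\mathbf{k})$ for an isotropic medium and checks that its eigenvalues equal $c_p^2|\mathbf{k}|^2$ and $c_s^2|\mathbf{k}|^2$:

```python
import math

# Arbitrary illustrative parameters for an isotropic medium
rho, mu, lam = 1.0, 1.0, 2.0
kx, ky = 0.7, -1.3

# Entries of the symmetric 2x2 matrix
# P(k) = (kx^2 A + ky^2 B + kx*ky*(C + C^T)) / rho, with the isotropic
# coefficient matrices A, B, C defined in the text.
p = ((2 * mu + lam) * kx**2 + mu * ky**2) / rho
r = (mu * kx**2 + (2 * mu + lam) * ky**2) / rho
q = (lam + mu) * kx * ky / rho

# Eigenvalues of [[p, q], [q, r]] in closed form
mean = 0.5 * (p + r)
disc = math.sqrt((0.5 * (p - r))**2 + q**2)
eig_hi, eig_lo = mean + disc, mean - disc

# Compare with -s^2 = cp^2 |k|^2 and cs^2 |k|^2
k2 = kx**2 + ky**2
cp2, cs2 = (2 * mu + lam) / rho, mu / rho
assert math.isclose(eig_hi, cp2 * k2)
assert math.isclose(eig_lo, cs2 * k2)
```

The larger eigenvalue corresponds to the P-wave branch and the smaller one to the S-wave branch, in agreement with \eqref{eq:eigenvalues_isotropic}.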
In linear orthotropic elastic media, the eigenvalues $-s^2$ also have closed form expressions { \footnotesize \begin{equation}\label{eq:eigenvalues_orthotropic} \begin{split} -s_1^2 &= \frac{1}{2{\rho}}\left({(c_{11}+c_{33})k_x^2 + (c_{22}+c_{33})k_y^2}\right)\\ &+ \frac{1}{2{\rho}}\sqrt{\left({(c_{11}+c_{33})k_x^2 + (c_{22}+c_{33})k_y^2}\right)^2 -4\left(\left(c_{11}c_{33}k_x^4 +c_{22}c_{33}k_y^4\right)+ \left(c_{11}c_{22} +c_{33}^2-\left(c_{12} +c_{33}\right)^2\right)k_x^2k_y^2\right)},\\ -s_2^2 &= \frac{1}{2{\rho}}\left({(c_{11}+c_{33})k_x^2 + (c_{22}+c_{33})k_y^2}\right)\\ &- \frac{1}{2{\rho}}\sqrt{\left({(c_{11}+c_{33})k_x^2 + (c_{22}+c_{33})k_y^2}\right)^2 -4\left(\left(c_{11}c_{33}k_x^4 +c_{22}c_{33}k_y^4\right)+ \left(c_{11}c_{22} +c_{33}^2-\left(c_{12} +c_{33}\right)^2\right)k_x^2k_y^2\right)}. \end{split} \end{equation} } Using the stability conditions \eqref{elastic_contants_ellipticity}, it is easy to check that the two eigenvalues are strictly positive, i.e. $-s_j^2 > 0$ for $j = 1,2$. These two eigenvalues again indicate two body-wave modes, corresponding to the quasi-P waves and the quasi-S waves. \begin{remark}\label{remark:imaginaryroots} The complex variable $s\in \mathbb{C}$ that solves the dispersion relation \eqref{eq:dispersion_relation_f} is related to the temporal frequency. Since Theorem \ref{Thm_energy} holds for all stable medium parameters, the whole-plane problem \eqref{pde1} conserves energy. Thus, the real part of the roots $s$ must be zero, that is $s\in \mathbb{C}$ with $\Re{s} =0$. Otherwise, if the roots $s$ had non-zero real parts then the energy would grow or decay, contradicting Theorem \ref{Thm_energy}.
\end{remark} We write $s = \bm{i}\omega$, where $\omega \in \mathbb{R}$ is called the temporal frequency, and introduce \begin{align}\label{KVSV} \begin{split} &\mathbf{K} = \left(\frac{k_x}{|\mathbf{k}|}, \frac{k_y}{|\mathbf{k}|}\right), \quad \text{normalized propagation direction},\\ &\mathbf{V}_p = \left(\frac{\omega}{k_x}, \frac{\omega}{k_y}\right), \quad \text{phase velocity},\\ &\mathbf{S} = \left(\frac{k_x}{\omega}, \frac{k_y}{\omega}\right), \quad \text{slowness vector},\\ &\mathbf{V}_g = \left(\frac{\partial \omega}{\partial k_x}, \frac{\partial \omega}{\partial k_y}\right), \quad \text{group velocity}. \end{split} \end{align} For the Cauchy problem in a constant coefficient medium, the dispersion relation $F\left(\bm{i}\omega, \mathbf{k}\right)=0$ and the quantities $\mathbf{K}$, $\mathbf{V}_p $, $\mathbf{S}$, $\mathbf{V}_g$, defined above, give a detailed description of the wave propagation properties in the medium. In addition, they determine a stability property for the corresponding PML model, which is discussed in section 5.1. In Figure \ref{fig:dispersion_relation}, we plot the dispersion relations of two different elastic solids, showing the slowness diagrams. \begin{figure}[ht] \centering \subfloat[Isotropic elastic solid]{\includegraphics[width=0.48\linewidth]{dispersion_isotropy_stable-eps-converted-to.pdf}} \subfloat[Anisotropic elastic solid]{\includegraphics[width=0.48\linewidth]{anisotropy_2-eps-converted-to.pdf}} \label{fig:dispersion_relation} \end{figure} When boundaries and interfaces are present, additional boundary and interface wave modes, such as Rayleigh \cite{https://doi.org/10.1112/plms/s1-17.1.4} and Stoneley waves \cite{doi:10.1098/rspa.1924.0079,10.1785/BSSA0530010217B}, are introduced. In the following, we consider the problem in two half-planes coupled together at a planar interface and formulate an alternative procedure to characterise the stability property of interface wave modes.
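For the isotropic branch $\omega(\mathbf{k}) = c|\mathbf{k}|$, the quantities in \eqref{KVSV} satisfy $\mathbf{V}_g = c\,\mathbf{K}$, i.e. the group velocity points along the normalized propagation direction with speed $c$; in anisotropic media the two directions generally differ. A small finite-difference illustration of this fact (hypothetical parameter values, not taken from the paper):

```python
import math

# Isotropic dispersion branch w(k) = c * |k|
def omega(kx, ky, c):
    return c * math.hypot(kx, ky)

# Central finite differences approximate V_g = (dw/dkx, dw/dky)
c, kx, ky, h = 2.0, 0.6, 0.8, 1e-6
vg_x = (omega(kx + h, ky, c) - omega(kx - h, ky, c)) / (2 * h)
vg_y = (omega(kx, ky + h, c) - omega(kx, ky - h, c)) / (2 * h)

# For an isotropic medium, V_g = c * K (wave speed times the
# normalized propagation direction)
norm = math.hypot(kx, ky)
assert math.isclose(vg_x, c * kx / norm, rel_tol=1e-6)
assert math.isclose(vg_y, c * ky / norm, rel_tol=1e-6)
```

The same finite-difference probe applied to the anisotropic roots \eqref{eq:eigenvalues_orthotropic} would show $\mathbf{V}_g$ deviating from $\mathbf{K}$, which is the behaviour visualised in the slowness diagrams of Figure \ref{fig:dispersion_relation}.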
\subsection{Normal modes analysis and the determinant condition}\label{sec:normal_modes_analysis_interface} Here, we present the normal modes analysis for interface wave modes in discontinuous media, which is tightly connected to the analysis of the PML in the next section. To begin, we consider piecewise constant media parameters $$ \rho_i >0, \quad A_i, \quad B_{i}, \quad C_i, $$ for $\Omega_i$, $i = 1, 2$, where $A_i, B_{i}, C_i$ are given in \eqref{mp}. The material parameters are constant in each half-plane, but are discontinuous at the interface $y =0$, where the equations \eqref{pde1} are coupled by the interface condition \eqref{int_con}. We look for wave solutions of the form \begin{align}\label{eq:simple_wave} \mathbf{u}_i\left(x, y, t\right) = \boldsymbol{\phi}_i(y) e^{st+ \bm{i} k_x x }, \quad \|\boldsymbol{\phi}_i\| < \infty, \quad k_x\in \mathbb{R}, \quad (x, y) \in \Omega_i, \quad t\ge 0. \end{align} The variable $s$ is related to the stability property of the model, which is characterised in the following lemma. \begin{lemma}\label{def:well-posedness} The elastic wave equations \eqref{pde1} with piecewise constant media parameters \eqref{mp} and the interface condition \eqref{int_con} are not stable in any sense if there are nontrivial solutions of the form \eqref{eq:simple_wave} with $\Re{s} >0$. \end{lemma} If there are nontrivial solutions of the form \eqref{eq:simple_wave}, we can always construct solutions that grow arbitrarily fast \cite{Duru2014K}, which is not supported by a stable system. Now, we reformulate Lemma \ref{def:well-posedness} as an algebraic condition, i.e. the so-called determinant condition in Laplace space \cite{Gustafsson2013}. For a complex number $z = a + \bm{i}b$, we define the branch of $\sqrt{z}$ by $$ -\pi < \arg{(a + \bm{i}b)} \le \pi, \quad \arg{(\sqrt{a + \bm{i}b})} = \frac{1}{2} \arg{(a + \bm{i}b)}.
$$ We insert \eqref{eq:simple_wave} in the equation \eqref{pde1} and the interface condition \eqref{int_con}, and obtain \begin{align} s^2 \rho_i{\boldsymbol{\phi}_i} = -k_x^2A_i \boldsymbol{\phi}_i +B_i\frac{d^2\boldsymbol{\phi}_i}{dy^2} +\bm{i}k_x \left(C_i + C^T_i\right) \frac{d\boldsymbol{\phi}_i}{dy},\quad i=1,2,\label{pde1_lap} \\ \boldsymbol{\phi}_1 =\boldsymbol{\phi}_2,\quad B_1\frac{d{\boldsymbol{\phi}_1}}{d y}+\bm{i}k_xC_1^T{{\boldsymbol{\phi}_1}}=B_2\frac{d\boldsymbol{\phi}_2}{d y}+\bm{i}k_xC_2^T{\boldsymbol{\phi}_2}.\label{int_con_Lap} \end{align} For $\boldsymbol{\phi}_i$, we seek the modal solution \begin{equation}\label{eq:modal_solution} \boldsymbol{\phi}_i = \boldsymbol{\Phi}_i e^{\kappa y}, \quad \boldsymbol{\Phi}_i \in \mathbb{C}^2,\quad i=1,2. \end{equation} Inserting the modal solution \eqref{eq:modal_solution} in \eqref{pde1_lap}, we have the eigenvalue problem \begin{align}\label{interface_wave_speeds_eig} -s^2\boldsymbol{\Phi}_i = \mathcal{P}_i(k_x, \kappa)\boldsymbol{\Phi}_i, \quad \mathcal{P}_i(k_x, \kappa) = \frac{k_x^2 A_i - \kappa^2 B_i - \bm{i}k_x\kappa(C_i + C_i^T)}{\rho_i},\quad i=1,2. \end{align} The solutions satisfy the condition \begin{align}\label{eq:characteristics_f} &F_i\left(s, k_x, \kappa\right):= \det\left({s^2} I + \mathcal{P}_i(k_x, \kappa)\right) = 0. \end{align} We note that if we set $\kappa = \bm{i}k_y$ in $F_i\left(s, k_x, \kappa\right)$, we get exactly the same dispersion relation \eqref{eq:dispersion_relation_f} as for the Cauchy problem. For $s$ with large $\Re{s} > 0$, the roots $\kappa_{ij}$ come in pairs and have non-vanishing real parts \cite{Duru2014K}, with \begin{align}\label{eq:kappa_roots} \kappa_{ij}^{-}(s, k_x), \quad \kappa_{ij}^{+}(s, k_x), \quad j = 1, 2. \end{align} The following lemma states an important property of the roots. \begin{lemma} The real parts of the roots $\kappa_{ij}^{\pm},\ i,j=1,2$ in \eqref{eq:kappa_roots} do not change sign for all $s$ with $\Re{s} >0$.
\end{lemma} \begin{proof} We note that the roots vary continuously with $s$. Thus, if the real part of a root $\kappa_{ij}$ changes sign, then for some $s$ with $\Re{s} >0$ the root is purely imaginary, $\kappa_{ij} = \bm{i}k_y$. Because of the equivalence between the dispersion relations \eqref{eq:characteristics_f} and \eqref{eq:dispersion_relation_f} when $\kappa_{ij} = \bm{i}k_y$, a purely imaginary root corresponds to an exponentially growing mode for the Cauchy problem, which contradicts Theorem \ref{Thm_energy}. \end{proof} We use the notation \eqref{eq:kappa_roots} to denote the roots with the stated sign convention for all $s$ with $\Re{s} >0$. That is, the superscript ($+$) denotes the root with positive real part and the superscript ($-$) denotes the root with negative real part. Because of the condition $\|\boldsymbol{\phi}_i\|<\infty$, the general solution of \eqref{pde1_lap} takes the form \begin{equation}\label{gen_sol_ped1_lap} \boldsymbol{\phi}_1(y)=\delta_{11} e^{\kappa_{11}^- y}\Phi_{11}+\delta_{12} e^{\kappa_{12}^- y}\Phi_{12}, \quad \boldsymbol{\phi}_2(y) =\delta_{21} e^{\kappa_{21}^+ y}\Phi_{21}+\delta_{22} e^{\kappa_{22}^+ y}\Phi_{22}, \end{equation} where $\Phi_{ij},\ i,j=1,2$ are the corresponding eigenvectors. As an example, in isotropic linear elastic media, the analytical expressions of the roots and eigenvectors are \begin{align*} \kappa_{i1}^{\pm}=\pm \sqrt{ k_x^2+\frac{ s^2}{c_{si}^2}}, \quad \kappa_{i2}^{\pm}=\pm \sqrt{k_x^2 + \frac{s^2}{c_{pi}^2}},\quad i=1,2, \end{align*} and \begin{align*} \Phi_{11}=\begin{bmatrix} \frac{\bm{i}}{k_x}\kappa_{11}^-\\ 1 \end{bmatrix},\quad \Phi_{12}=\begin{bmatrix} \frac{-\bm{i} k_x}{\kappa_{12}^-}\\ 1 \end{bmatrix},\quad \Phi_{21}=\begin{bmatrix} \frac{\bm{i}}{ k_x}\kappa_{21}^+\\ 1 \end{bmatrix},\quad \Phi_{22}=\begin{bmatrix} \frac{-\bm{i} k_x}{\kappa_{22}^+}\\ 1 \end{bmatrix}.
\end{align*} For orthotropic elastic media, the roots can also be expressed in closed form, but the expressions are much more complicated. We refer the reader to \cite{Duru2014K} for more details. The coefficients $\boldsymbol{\delta}=[\delta_{11},\delta_{12},\delta_{21},\delta_{22}]^T$ are determined by inserting \eqref{gen_sol_ped1_lap} into the interface conditions \eqref{int_con_Lap}, yielding the following equation \begin{equation}\label{Fs_pml} \mathcal{C}(s, k_x)\boldsymbol{\delta}=\mathbf{0}, \end{equation} where the $4\times 4$ boundary matrix $\mathcal{C}$ takes the form { \footnotesize \begin{equation}\label{F_0} \mathcal{C}(s, k_x)=\begin{bmatrix} \Phi_{11} &\Phi_{12} & -\Phi_{21} & -\Phi_{22} \\ (\kappa_{11}^-B_1+\bm{i} k_x C_1^T)\Phi_{11} &(\kappa_{12}^-B_1+\bm{i} k_x C_1^T)\Phi_{12} & -(\kappa_{21}^+B_2+ \bm{i} k_x C_2^T)\Phi_{21}& -(\kappa_{22}^+B_2+\bm{i} k_x C_2^T)\Phi_{22} \end{bmatrix}. \end{equation} } To ensure only trivial solutions for $\Re{s} >0$, the coefficients $\boldsymbol{\delta}$ must vanish, and thus we require the {\it determinant condition} \begin{align}\label{eq:determinat_conditon} \mathcal{F}(s, k_x):=\det\left(\mathcal{C}(s, k_x)\right) \ne 0, \quad \forall\Re{s} >0. \end{align} We will now formulate an algebraic definition of stability equivalent to Lemma \ref{def:well-posedness}, for the coupled problem, \eqref{pde1} and \eqref{int_con}, with piecewise constant media parameters \eqref{mp}. \begin{lemma}\label{def:determinant_condition} The solutions of the elastic wave equation \eqref{pde1} with piecewise constant media parameters \eqref{mp} and the interface condition \eqref{int_con} are not stable in any sense if for some $k_x \in \mathbb{R}$ and $s \in \mathbb{C}$ with $\Re{s} >0$, we have $$ \mathcal{F}(s, k_x):=\det\left(\mathcal{C}(s, k_x)\right) =0. $$ \end{lemma} The {\it determinant condition} is defined for all $s$ with $\Re{s} >0$.
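The sign convention for the isotropic roots $\kappa_{ij}^{\pm}$ can be illustrated numerically: with the branch of the square root defined above (the principal branch, as implemented by \texttt{cmath.sqrt}), one finds $\Re\,\kappa^{+} > 0$ for every $s$ with $\Re\,s > 0$. A minimal sketch (illustrative parameter values only, not from the paper):

```python
import cmath

# Isotropic root kappa^+ = sqrt(kx^2 + s^2/c^2), principal branch.
# kappa^- = -kappa^+ is the other member of the pair.
def kappa_plus(s, kx, c):
    return cmath.sqrt(kx**2 + (s / c)**2)

kx, c = 1.5, 3.0
for s in (0.1 + 5j, 2.0 - 7j, 0.01 + 100j, 4.0 + 0j):
    k = kappa_plus(s, kx, c)
    assert k.real > 0       # kappa^+: mode decaying as y -> -infinity
    assert (-k).real < 0    # kappa^-: mode decaying as y -> +infinity
```

This is exactly the property used in \eqref{gen_sol_ped1_lap}: the bounded solution in the upper half-plane $\Omega_1$ is built from the $\kappa^{-}$ roots and the one in the lower half-plane $\Omega_2$ from the $\kappa^{+}$ roots.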
The case $\Re{s} =0$ would correspond to time-harmonic solutions, including important interface wave modes such as Stoneley waves \cite{Gustafsson2013,Kreiss2012}. The energy stability in Theorem \ref{Thm_energy} states that the coupled problem, \eqref{pde1} and \eqref{int_con}, with piecewise constant media parameters \eqref{mp} conserves energy. Therefore, similar to Remark \ref{remark:imaginaryroots}, the roots $s$ of $\mathcal{F}(s, k_x)$ must be zero or purely imaginary, i.e. $s\in \mathbb{C}$ with $\Re{s} =0$. We conclude that all nontrivial and stable interface wave modes, such as Stoneley waves, that solve $\mathcal{F}(s, k_x)=0$ must have purely imaginary roots, $s = \bm{i}\xi$ with $\xi \in \mathbb{R}$. A main objective of the present work is to determine how the purely imaginary roots $s = \bm{i}\xi$ move in the complex plane when the PML is introduced. \section{The perfectly matched layer} We consider the elastic wave equation \eqref{pde1} with the interface conditions \eqref{int_con}. Let the Laplace transform, in time, of $\mathbf{u}\left(x,y, t\right)$ be defined by { \begin{equation} \widehat{\mathbf{u}}(x,y,s) = \int_0^{\infty}e^{-st}{\mathbf{u}}\left(x,y,t\right)\text{dt}, \quad s = a + \bm{i}b, \quad \Re{s} = a > 0. \end{equation} } We consider a setup where the PML is included in the $x$-direction only. Without loss of generality, we assume that we are only interested in the solution in the left half-plane $x\leq 0$. To absorb outgoing waves, we introduce a PML outside the left half-plane and require that the material properties are invariant in $x$ inside the PML.
To derive the PML model, we Laplace transform \eqref{pde1} in time and obtain \begin{equation}\label{lap1} \rho_i s^2 \mathbf{\widehat u_i} = \frac{\partial}{\partial x}\left(A_i\frac{\partial{\mathbf{\widehat u}_i}}{\partial x}\right)+\frac{\partial}{\partial y}\left(B_i\frac{\partial\mathbf{\widehat u}_i}{\partial y}\right)+\frac{\partial}{\partial x}\left(C_i\frac{\partial\mathbf{\widehat u}_i}{\partial y}\right)+\frac{\partial}{\partial y}\left(C^T_i\frac{\partial\mathbf{\widehat u}_i}{\partial x}\right),\quad (x,y) \in \Omega_i, \quad \Re{s} >0. \end{equation} Note that we have tacitly assumed homogeneous initial data. Next, we consider \eqref{lap1} in the transformed coordinate $(\widetilde x, y)$, such that \begin{equation}\label{eqn_Sx} \frac{d\widetilde x}{dx}=1 + \frac{\sigma(x)}{\alpha+s}=:S_x. \end{equation} Here, $\sigma(x)\geq 0$ is the damping function and $\alpha\geq 0$ is the complex frequency shift (CFS) \cite{Kuzuoglu_and_Mittra}. For all $s$ with $\Re{s} > 0$, we have $S_x \ne 0$ and $1/S_x \ne 0$, and we obtain the smooth complex coordinate transformation \cite{Chew_and_Weedon} \begin{equation}\label{complex_scaling} \frac{\partial}{\partial x} \to \frac{1}{S_x}\frac{\partial}{\partial x}.
\end{equation} The PML model in Laplace space is \begin{equation}\label{lap2} s^2 \rho_i\mathbf{\widehat u}_i = \frac{1}{S_x}\frac{\partial}{\partial x}\left(\frac{1}{S_x}A_i\frac{\partial\mathbf{\widehat u}_i}{\partial x}\right)+\frac{\partial}{\partial y}\left(B_i\frac{\partial\mathbf{\widehat u}_i}{\partial y}\right)+\frac{1}{S_x}\frac{\partial}{\partial x}\left(C_i\frac{\partial\mathbf{\widehat u}_i}{\partial y}\right)+\frac{\partial}{\partial y}\left(C^T_i\frac{1}{S_x}\frac{\partial\mathbf{\widehat u}_i}{\partial x}\right),\quad i=1,2, \end{equation} with interface conditions \begin{equation}\label{int_lap_pml} \mathbf{\widehat u}_1=\mathbf{\widehat u}_2,\quad B_1\frac{\partial\mathbf{\widehat u}_1}{\partial y}+C_1^T\frac{1}{S_x}\frac{\partial\mathbf{\widehat u}_1}{\partial x}=B_2\frac{\partial\mathbf{\widehat u}_2}{\partial y}+C_2^T\frac{1}{S_x}\frac{\partial\mathbf{\widehat u}_2}{\partial x}. \end{equation} Choosing the auxiliary variables \begin{align*} \mathbf{\widehat v}_i=\frac{1}{s+\sigma+\alpha}\frac{\partial \mathbf{\widehat u}_i}{\partial x}, \quad \mathbf{\widehat w}_i=\frac{1}{s+\alpha}\frac{\partial \mathbf{\widehat u}_i}{\partial y},\quad \mathbf{\widehat q}_i=\frac{\alpha}{s+\alpha} \mathbf{\widehat u}_i, \end{align*} we invert the Laplace transformed equation \eqref{lap2} and obtain the PML model in physical space, { \small \begin{equation}\label{eq:PML_physical_space} \begin{split} \rho_i\left(\frac{\partial^2\mathbf{u}_i}{\partial t^2}+ \sigma\frac{\partial \mathbf{u}_i}{\partial t} -\sigma\alpha (\mathbf{u}_i-\mathbf{q}_i)\right)&=\frac{\partial}{\partial x}\left(A_i\frac{\partial\mathbf{u}_i}{\partial x}+C_i\frac{\partial\mathbf{u}_i}{\partial y} - \sigma A_i\mathbf{v}_i\right)+\frac{\partial}{\partial y}\left(B_i\frac{\partial\mathbf{u}_i}{\partial y}+C_i^T\frac{\partial\mathbf{u}_i}{\partial x} + \sigma B_i\mathbf{w}_i\right),\\ \frac{\partial \mathbf{v}_i}{\partial t}&=-(\sigma+\alpha)\mathbf{v}_i+\frac{\partial\mathbf{u}_i}{\partial
x}, \\ \frac{\partial \mathbf{w}_i}{\partial t}&=-\alpha\mathbf{w}_i+\frac{\partial\mathbf{u}_i}{\partial y}, \\ \frac{\partial \mathbf{q}_i}{\partial t}&=\alpha(\mathbf{u}_i-\mathbf{q}_i). \end{split} \end{equation} } Similarly, inverting the Laplace transformed interface conditions \eqref{int_lap_pml} for the PML model gives \begin{equation}\label{int_pml} \mathbf{u_1}=\mathbf{u_2}, \quad B_1\frac{\partial\mathbf{u_1}}{\partial y}+C_1^T\frac{\partial\mathbf{u_1}}{\partial x}+\sigma B_1\mathbf{w_1}=B_2\frac{\partial\mathbf{u_2}}{\partial y}+C_2^T\frac{\partial\mathbf{u_2}}{\partial x}+\sigma B_2\mathbf{w_2}. \end{equation} In the absence of the PML, $\sigma =0$, the above model problem is energy-stable in the sense of Theorem \ref{Thm_energy} for all elastic material parameters. When $\sigma >0$, however, the coupled PML model \eqref{eq:PML_physical_space}-\eqref{int_pml} is asymmetric, with auxiliary differential equations, and a similar energy estimate cannot be established in general. To analyse the stability of the PML model in a piecewise constant elastic medium, we use the mode analysis discussed in Section \ref{sec:mode_analysis} to prove that exponentially growing wave modes are not supported. \section{Stability analysis of the PML model} The stability analysis of the PML will directly mirror the mode analysis described in Section \ref{sec:mode_analysis}. We will split the analysis into two parts: plane wave analysis for the Cauchy PML problem and normal mode analysis for the interface wave modes. \subsection{Plane wave analysis}\label{sec:planewave_pml} We now investigate the stability of body wave modes in the PML in the whole real plane $(x,y) \in \mathbb{R}^2$ with constant medium parameters.
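The auxiliary variables above can be checked against the Laplace-space model directly. The short numerical sketch below (our own check, not part of the derivation) verifies the two identities used in the inversion: $1/S_x = 1 - \sigma/(s+\sigma+\alpha)$, which yields the flux term $A_i(\partial_x \mathbf{u}_i - \sigma\mathbf{v}_i)$, and $s^2 S_x = s^2 + \sigma s - \sigma\alpha\left(1 - \alpha/(s+\alpha)\right)$, which yields the damped terms on the left-hand side of \eqref{eq:PML_physical_space}.

```python
# Numerical check of the Laplace-space identities behind the auxiliary variables.
sigma, alpha = 3.7, 0.9
s = 0.8 + 2.4j                      # any s with Re(s) > 0
Sx = 1 + sigma/(alpha + s)          # PML metric S_x

# v-hat identity: (1/Sx)*du/dx = du/dx - sigma*v, with v = (1/(s+sigma+alpha))*du/dx
assert abs(1/Sx - (1 - sigma/(s + sigma + alpha))) < 1e-13

# q-hat identity: s^2*Sx matches u_tt + sigma*u_t - sigma*alpha*(u - q),
# since (u - q) transforms to (1 - alpha/(s+alpha))*u
assert abs(s**2*Sx - (s**2 + sigma*s - sigma*alpha*(1 - alpha/(s + alpha)))) < 1e-12

print("Laplace-space identities verified")
```

Both identities are exact rational identities in $s$, so the check passes at any sample point with $\Re{s}>0$.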
As before, we consider constant PML damping $\sigma >0$ and uniformly constant medium parameters $$ \rho_i = \rho, \quad A_i = A, \quad B_{i} = B, \quad C_i = C, $$ in $\Omega_i$, $i = 1, 2$; that is, there is no discontinuity of the material parameters at the interface $y =0$. Consider the wave-like solution \begin{align}\label{eq:plane_wave_pml} \mathbf{u}\left(x, y, t\right) = \mathbf{u}_0 e^{st+\bm{i}\left( k_x x + k_y y \right)}, \quad \mathbf{u}_0 \in \mathbb{R}^8, \quad k_x, k_y, x, y \in \mathbb{R}, \quad t\ge 0, \end{align} where $s \in \mathbb{C}$ is to be determined and relates to the stability of the PML model. \begin{lemma} The PML model \eqref{eq:PML_physical_space} is not stable if there are nontrivial solutions $\mathbf{u}$ of the form \eqref{eq:plane_wave_pml} with $\Re{s} > 0$. \end{lemma} An $s$ with a positive real part, $\Re{s} >0$, corresponds to a plane wave solution with exponentially growing amplitude. A stable system does not admit such wave modes. We consider the normalised wave vector $\mathbf{K} = (k_1, k_2)$, with $\sqrt{k_1^2 + k_2^2} = 1$, and the normalised variables $$ \lambda = \frac{s}{|\mathbf{k}|}, \quad \epsilon = \frac{\sigma}{|\mathbf{k}|}, \quad \nu = \frac{\alpha}{|\mathbf{k}|}, \quad S_x\left(\lambda, \epsilon, \nu\right)= 1 +\frac{\epsilon}{\lambda + \nu}. $$ Thus, if there are roots $\lambda$ with $\Re{\lambda} > 0$, the PML is unstable. We insert the plane wave solution \eqref{eq:plane_wave_pml} in the PML and obtain the dispersion relation { \begin{align}\label{eq:PML_dispersion_relation} &F_\epsilon(\lambda, \mathbf{K}):=F\left(\lambda, \frac{1}{S_x\left(\lambda, \epsilon, \nu\right)}k_1, k_2\right) = 0, \end{align} } where the function $F(\lambda, \mathbf{K})$ is defined by \eqref{eq:dispersion_orthotropic} and \eqref{eq:dispersion_isotropic}.
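For isotropic media the PML dispersion relation can be examined concretely. With $\mathbf{K} = (1, 0)$ it reduces, for each wave speed $c \in \{c_s, c_p\}$, to $\lambda S_x = \pm\bm{i}c$, i.e. the quadratic $\lambda^2 + (\epsilon + \nu \mp \bm{i}c)\lambda \mp \bm{i}c\nu = 0$; this reduction is our own rewriting for illustration. A small numerical sketch confirms that its roots satisfy $\Re{\lambda} \le 0$ for sample parameters:

```python
import numpy as np

def pml_body_wave_roots(c, eps, nu):
    # Roots of lambda*(1 + eps/(lambda + nu)) = +/- i*c, i.e. of
    # lambda^2 + (eps + nu -/+ i*c)*lambda -/+ i*c*nu = 0.
    roots = []
    for sgn in (+1.0, -1.0):
        roots.extend(np.roots([1.0, eps + nu - 1j*sgn*c, -1j*sgn*c*nu]))
    return np.array(roots)

for c in (1.8, 3.118):                  # sample shear/pressure speeds
    lam = pml_body_wave_roots(c, eps=0.5, nu=0.1)
    assert lam.real.max() < 1e-12       # all body wave modes decay in the PML
print("no growing body wave modes in the PML")
```

Sweeping $\epsilon$ and $\nu$ over positive values leaves the roots in the closed left half-plane, in line with the stability results below.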
The scaled eigenvalue $\lambda$ is a root of the nonlinear PML dispersion relation $F_\epsilon(\lambda, \mathbf{K})$ defined in \eqref{eq:PML_dispersion_relation}. When the PML damping vanishes, $\epsilon =0$, we have $S_x =1$, and $F_0(\lambda, \mathbf{K}) \equiv F(\lambda, \mathbf{K})$. As shown in Section \ref{sec:dispersion_relation}, the roots of $F(\lambda, \mathbf{K})$ are purely imaginary and correspond to the body wave modes propagating in a homogeneous elastic medium. When the PML damping is present, $\epsilon >0$, the roots $\lambda$ can be difficult to determine. However, standard perturbation arguments yield the following well-known result \cite{duru_kreiss_2012,BECACHE2003399,APPELO2006642}. \begin{theorem}[Necessary condition for stability]\label{theo:high_frequency_stability} Consider the constant coefficient PML, with $\epsilon >0$, $\nu \ge 0$. Let the elastic medium be described by the phase velocity $\mathbf{V}_p= ({V}_{px}, {V}_{py})$ and the group velocity $\mathbf{V}_g = ({V}_{gx}, {V}_{gy})$ defined in \eqref{KVSV}. If ${V}_{px}{V}_{gx} < 0$, then at all sufficiently high frequencies, $|\mathbf{k}| \to \infty$, there are corresponding unstable wave modes with $\Re{\lambda} > 0$. \end{theorem} For the elastic subdomains $\Omega_i$, $i = 1, 2$, we will consider only media parameters for which the \emph{geometric stability condition}, ${V}_{px}{V}_{gx} > 0$, is satisfied, so that there are no growing modes for the Cauchy PML problem. In particular, it can be shown for isotropic elastic materials that body wave modes inside the PML are asymptotically stable for all frequencies \cite{Duru2012JSC,Duru2014K}. In many anisotropic elastic materials the geometric stability condition and the complex frequency shift $\alpha > 0$ will ensure the stability of plane wave modes for all frequencies \cite{APPELO2006642}. Next, we will characterise the stability of interface wave modes in the PML.
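The geometric stability condition is straightforward to check numerically for a given medium. The sketch below does so for an isotropic material, where $\omega(\mathbf{k}) = c|\mathbf{k}|$; taking $\mathbf{V}_p = \omega\mathbf{k}/|\mathbf{k}|^2$ and $\mathbf{V}_g = \nabla_{\mathbf{k}}\omega$ is our assumed reading of the definitions in \eqref{KVSV}, and the group velocity is approximated by finite differences:

```python
import numpy as np

def omega(kx, ky, c):
    # Isotropic dispersion relation: omega(k) = c*|k|.
    return c*np.hypot(kx, ky)

def geometric_stability_condition(c, n_dirs=64, h=1e-6):
    # Check V_px * V_gx >= 0 over a fan of propagation directions.
    ok = True
    for theta in np.linspace(0.05, np.pi - 0.05, n_dirs):
        kx, ky = np.cos(theta), np.sin(theta)       # unit wave vector
        w = omega(kx, ky, c)
        Vpx = w*kx/(kx**2 + ky**2)                  # x-component of phase velocity
        Vgx = (omega(kx + h, ky, c) - omega(kx - h, ky, c))/(2*h)  # FD group velocity
        ok = ok and (Vpx*Vgx > -1e-10)
    return ok

assert geometric_stability_condition(c=3.118)
print("geometric stability condition holds for the isotropic medium")
```

For isotropic media $V_{px}V_{gx} = c^2 k_x^2/|\mathbf{k}|^2 \ge 0$, so the condition always holds; for anisotropic media the same finite-difference check would use the orthotropic dispersion relation instead.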
\subsection{Stability analysis of interface wave modes} As above, we assume constant PML damping $\sigma \ge 0$ and piecewise constant elastic media parameters with a planar interface at $y = 0$. We Laplace transform \eqref{eq:PML_physical_space}--\eqref{int_pml} in time, Fourier transform in the spatial variable $x$, and eliminate all PML auxiliary variables. We have \begin{equation}\label{fou1} \rho_i s^2 \mathbf{\widetilde u}_i = -\widetilde k_x^2A_i\mathbf{\widetilde u}_i+B_i\frac{\mathrm{d}^2\mathbf{\widetilde u}_i}{\mathrm{d} y^2}+\bm{i}\widetilde k_x\left(C_i+C^T_i\right)\frac{\mathrm{d}\mathbf{\widetilde u}_i}{\mathrm{d} y},\quad i=1,2, \end{equation} where $\widetilde k_x=k_x/S_x$. The Laplace-Fourier transformed interface conditions are \begin{equation}\label{int_fou} \mathbf{\widetilde u_1}=\mathbf{\widetilde u_2},\quad B_1\frac{\mathrm{d}\mathbf{\widetilde u_1}}{\mathrm{d} y}+\bm{i}\widetilde k_xC_1^T\mathbf{\widetilde u_1}=B_2\frac{\mathrm{d} \mathbf{\widetilde u_2}}{\mathrm{d} y}+\bm{i}\widetilde k_x C_2^T\mathbf{\widetilde u_2},\quad y=0. \end{equation} Note the similarity between \eqref{fou1}--\eqref{int_fou} and \eqref{pde1_lap}--\eqref{int_con_Lap}; the only difference is that we have replaced $k_x$ with $\widetilde k_x$ and $\boldsymbol{\phi}_i$ with $\mathbf{\widetilde u}_i$. When the PML damping vanishes, $\sigma =0$, we have $S_x\equiv 1$ and $\widetilde k_x \equiv k_x$. In this case, the PML model \eqref{fou1}--\eqref{int_fou} is equivalent to the original equations \eqref{pde1_lap}--\eqref{int_con_Lap}, and \eqref{fou1} is the Laplace-Fourier transformation of equation \eqref{pde1}. We seek modal solutions to \eqref{fou1} in the form \begin{equation}\label{gel_sol} \mathbf{\widetilde u}_i=\boldsymbol{\Phi}_i e^{\kappa y}, \quad \boldsymbol{\Phi}_i \in \mathbb{C}^2,\quad i=1,2.
\end{equation} Substituting \eqref{gel_sol} into \eqref{fou1}, we obtain \begin{equation}\label{int_sys} \left(s^2 I + \mathcal{P}_i(\widetilde k_x, \kappa)\right)\boldsymbol{\Phi}_i = 0,\quad i=1,2, \end{equation} where \begin{align*} \mathcal{P}_i(\widetilde k_x, \kappa)&=\widetilde k_x^2 A_i - \kappa^2 B_i-\bm{i}\widetilde k_x \kappa (C_i+C_i^T),\quad i=1,2. \end{align*} The existence of nontrivial solutions to \eqref{int_sys} requires that \begin{align}\label{eq:characteristics_f_pml} &F_i\left(s, \widetilde k_x, \kappa\right):= \det\left({s^2} I + \mathcal{P}_i(\widetilde k_x, \kappa)\right) = 0,\quad i=1,2. \end{align} As above, we note that if we set $\kappa = \bm{i}k_y$ in $F_i\left(s, \widetilde k_x, \kappa\right)$, we get exactly the same PML dispersion relation \eqref{eq:PML_dispersion_relation} as for the Cauchy problem. Again, note also the close similarity between \eqref{eq:characteristics_f} and \eqref{eq:characteristics_f_pml}. The roots, $\kappa = \widetilde{\kappa}_{ij}^{\pm}$, of the characteristic function $F_i\left(s, \widetilde{k}_x, \kappa\right)$ are $$ \widetilde{\kappa}_{ij}^{-}(s, k_x)=\kappa_{ij}^{-}(s, \widetilde{k}_x), \quad \widetilde{\kappa}_{ij}^{+}(s, {k}_x) = \kappa_{ij}^{+}(s, \widetilde{k}_x), \quad j = 1, 2. $$ For the following analysis to directly mirror the mode analysis discussed in Section \ref{sec:normal_modes_analysis_interface}, we will need sign consistency between $\Re{\kappa_{ij}^{\pm}}$ and $\Re{\widetilde{\kappa}_{ij}^{\pm}}$. That is, for $\Re{s} > 0$, $\sigma \ge 0$ and $\alpha\ge 0$ we must have \begin{align} \text{sign}{\left(\Re{\kappa_{ij}^{\pm}}\right)}= \text{sign}{\left(\Re{\widetilde{\kappa}_{ij}^{\pm}}\right)}. \end{align} The following lemma, which uses a standard continuity argument, was first proven in \cite{Duru2014K}.
\begin{lemma} If the PML Cauchy problem has no temporally growing modes, then for all $k_x \in \mathbb{R}$ and all $s \in \mathbb{C}$ with $\Re{s} > 0$ the PML characteristic equation has roots $\widetilde{\kappa}_{ij}^{\pm}(s, k_x)$ with $$ \text{sign}{\left(\Re{\kappa_{ij}^{\pm}}\right)}= \text{sign}{\left(\Re{\widetilde{\kappa}_{ij}^{\pm}}\right)}. $$ \end{lemma} \begin{proof} As above, we note that the roots vary continuously with $s$. Thus, if the real part of a root $\widetilde{\kappa}_{ij}$ changes sign, then for some $s$ with $\Re{s} >0$ the root must be purely imaginary, $\widetilde{\kappa}_{ij} = \bm{i}k_y$. When $\widetilde{\kappa}_{ij} = \bm{i}k_y$, the PML dispersion relation \eqref{eq:PML_dispersion_relation} for the Cauchy problem and the characteristic equation \eqref{eq:characteristics_f_pml} are equivalent. Therefore a purely imaginary root $\widetilde{\kappa}_{ij} = \bm{i}k_y$ with $\Re{s} >0$ corresponds to an exponentially growing mode for the Cauchy PML problem, which contradicts the assumption that the Cauchy PML problem has no growing wave modes. \end{proof} Taking into account the boundedness condition, the general solution of \eqref{fou1} is \begin{equation}\label{gen_sol_ped1_lap_pml} \mathbf{\widetilde u_{1}}(y)=\delta_{11} e^{\widetilde{\kappa}_{11}^- y}\Phi_{11}+\delta_{12} e^{\widetilde{\kappa}_{12}^- y}\Phi_{12}, \quad \mathbf{\widetilde u_{2}}(y) =\delta_{21} e^{\widetilde{\kappa}_{21}^+ y}\Phi_{21}+\delta_{22} e^{\widetilde{\kappa}_{22}^+ y}\Phi_{22}. \end{equation} The coefficients $\boldsymbol{\delta}=[\delta_{11},\delta_{12},\delta_{21},\delta_{22}]^T$ are determined by inserting \eqref{gen_sol_ped1_lap_pml} into the interface conditions \eqref{int_fou}.
We obtain the equation \begin{equation}\label{Fs_pml_9} \mathcal{C}(s, \widetilde{k}_x)\boldsymbol{\delta}=\mathbf{0}, \end{equation} where { \footnotesize \begin{equation}\label{F} \mathcal{C}(s, \widetilde{k}_x)=\begin{bmatrix} \Phi_{11} &\Phi_{12} & -\Phi_{21} & -\Phi_{22} \\ (\widetilde{\kappa}_{11}^-B_1+\bm{i}\widetilde k_x C_1^T)\Phi_{11} &(\widetilde{\kappa}_{12}^-B_1+ \bm{i} \widetilde k_x C_1^T)\Phi_{12} & -(\widetilde{\kappa}_{21}^+B_2+ \bm{i}\widetilde k_x C_2^T)\Phi_{21}& -(\widetilde{\kappa}_{22}^+B_2+ \bm{i} \widetilde k_x C_2^T)\Phi_{22} \end{bmatrix}. \end{equation} } Using the determinant condition given in Lemma \ref{def:determinant_condition}, we formulate a stability condition for the PML in a piecewise constant elastic medium. \begin{lemma}[Stability condition]\label{lem:determinant_condition} The solution to the PML model \eqref{fou1} with piecewise constant material parameters \eqref{mp} and interface condition \eqref{int_fou} is not stable in any sense if for some $k_x \in \mathbb{R}$ and $s \in \mathbb{C}$ with $\Re{s} >0$, the determinant vanishes, $$ \mathcal{F}(s, \widetilde{k}_x):=\det\left(\mathcal{C}(s, \widetilde{k}_x)\right) =0. $$ \end{lemma} The roots of $\mathcal{F}(s, \widetilde{k}_x)$ are tightly connected to the roots of $\mathcal{F}(s, {k}_x)$ by the homogeneity of $\mathcal{F}$. As a consequence, the stability of the PML model can be inferred from the roots of $\mathcal{F}(s, {k}_x)$. Below, we define homogeneity, followed by a theorem for the PML stability. \begin{definition}\label{def_homogeneous} Let $\bm{f}(\bm{v})$ be a function with the vector argument $\bm{v}$. If $\bm{f}(\gamma \bm{v}) = \gamma^n\bm{f}(\bm{v})$ for all $\gamma \ne 0$ and some $n \in \mathbb{Z}$, then $\bm{f}(\bm{v})$ is homogeneous of degree $n$. \end{definition} \begin{theorem}\label{s0} Let $\mathcal{F}(s, k_x)$ be a homogeneous function of degree $n$.
Assume that $\mathcal{F}(s, k_x) \ne 0$ for all $\Re{s} >0$ and $k_x \in \mathbb{R}$. Let $\widetilde{k}_x = k_x/S_x$, where $S_x$ is the PML metric \eqref{eqn_Sx}. Then the function $\mathcal{F}(s, \widetilde{k}_x)$ has no root $s$ with positive real part, $\Re{s} > 0$. \end{theorem} \begin{proof} Since $\mathcal{F}(s, k_x)$ is homogeneous of degree $n$, we have \begin{align*} \mathcal{F}(s, \widetilde k_x)=\mathcal{F}\left(s, \frac{k_x}{S_x}\right)=\left(\frac{1}{S_x}\right)^n\mathcal{F}\left(sS_x, k_x\right). \end{align*} Since ${S_x} \ne 0$ and ${1}/{S_x} \ne 0$, we must have \begin{align*} \mathcal{F}(s, \widetilde k_x)=0 \iff \mathcal{F}\left(\widetilde{s}, k_x\right) =0, \quad \widetilde{s} = sS_x. \end{align*} For $s = a + \bm{i}b$ with $a > 0$, we have \begin{align*} &\Re{\widetilde{s}} = \left(a + \left( \frac{a\left(a+\alpha\right) + b^2}{{|s + \alpha|^2}}\right)\sigma\right) \ge a>0. \end{align*} Thus, if $\mathcal{F}\left(\widetilde{s}, k_x\right) =0$, then $\widetilde{s}$, with $\Re{\widetilde{s}} > 0$, is a root, which contradicts the assumption that $\mathcal{F}\left({s}, k_x\right) \ne 0 $ for all $\Re{s} >0$. We conclude that for $s = a + \bm{i}b$ with $a > 0$, we must have $\mathcal{F}\left(\widetilde{s}, k_x\right) \ne 0$ for all $\sigma \ge 0$ and $\alpha \ge 0$. \end{proof} To determine the homogeneity of $\mathcal{F}(s, k_x) = \det(\mathcal{C}(s, k_x))$, we may evaluate the corresponding determinant of $\mathcal{C}(s, k_x)$. We have the following result. \begin{theorem}\label{thm_homogeneous} In a piecewise isotropic medium, the determinant $\mathcal{F}(s, k_x) = \det\left(\mathcal{C}\left(s, k_x\right)\right)$ given in \eqref{eq:determinat_conditon} is homogeneous of degree two.
\end{theorem} \begin{proof} Consider the modified boundary matrix $\mathcal{C}_1(s, k_x)$ where we have multiplied the first two rows of $\mathcal{C}(s, k_x)$ by $s \ne 0$, that is { \footnotesize \begin{equation}\label{F_0_C1} \mathcal{C}_1(s, k_x)=\begin{bmatrix} s\Phi_{11} &s\Phi_{12} & -s\Phi_{21} & -s\Phi_{22} \\ (\kappa_{11}^-B_1+\bm{i} k_x C_1^T)\Phi_{11} &(\kappa_{12}^-B_1+\bm{i} k_x C_1^T)\Phi_{12} & -(\kappa_{21}^+B_2+ \bm{i} k_x C_2^T)\Phi_{21}& -(\kappa_{22}^+B_2+\bm{i} k_x C_2^T)\Phi_{22} \end{bmatrix}. \end{equation} } By inspection, every element of $\mathcal{C}_1(s, k_x)$ is homogeneous of degree one. Therefore the determinant $\det(\mathcal{C}_1(s, k_x))$ of the $4\times4$ matrix $\mathcal{C}_1(s, k_x)$, using cofactor expansion, must be homogeneous of degree four. Note that $$ \mathcal{C}_1(s, k_x)= \mathcal{K}(s)\mathcal{C}(s, k_x), \quad \mathcal{K}(s) = \begin{bmatrix} s & 0 &0 & 0\\ 0 & s &0 & 0\\ 0 & 0 &1 & 0\\ 0 & 0 &0 & 1 \end{bmatrix}. $$ Using the properties of determinants of products of matrices, we have $$ \det(\mathcal{C}_1(s, k_x)) = \det\left( \mathcal{K}(s) \right)\det(\mathcal{C}(s, k_x))= s^2 \det(\mathcal{C}(s, k_x)) = s^2\mathcal{F}(s, k_x). $$ Since $\det(\mathcal{C}_1(s, k_x))$ is homogeneous of degree four, the determinant $\mathcal{F}(s, k_x)$ is homogeneous of degree two. \end{proof} We can now state the result that shows that exponentially growing wave modes are not supported by the PML in a discontinuous elastic medium. \begin{theorem} Consider the PML \eqref{fou1} in a discontinuous elastic medium with the interface condition \eqref{int_fou} at $y = 0$. Let $\mathcal{F}(s, k_x)$ be the homogeneous function given in \eqref{eq:determinat_conditon}. If $\mathcal{F}(s, k_x) \ne 0$ for all $\Re{s} >0$ and $k_x \in \mathbb{R}$ and the PML Cauchy problem has no temporally growing modes, then there are no growing interface wave modes in the PML.
That is, $\mathcal{F}(s, \widetilde{k}_x) \ne 0$ for all $\Re{s} >0$ and $k_x \in \mathbb{R}$. \end{theorem} \begin{proof} The proof is identical to the proof of Theorem \ref{s0} with degree of homogeneity $n = 2$. \end{proof} The following theorem states that interface wave modes are dissipated by the PML. \begin{theorem}\label{theorem:dissipation_of_interface_waves} Consider the PML \eqref{fou1} in a discontinuous elastic medium with the interface condition \eqref{int_fou} at $y = 0$. If the PML Cauchy problem has no temporally growing modes, then all stable interface wave modes, that is, modes with $s = \bm{i}\xi$ solving $\mathcal{F}(s, k_x) =0$ for $k_x \in \mathbb{R}$, are dissipated by the PML. \end{theorem} \begin{proof} It suffices to prove that $\mathcal{F}(s, \widetilde{k}_x) =0$ implies $\Re{s} \le 0$ for all $k_x \in \mathbb{R}$, $\alpha \ge 0$ and $\sigma \ge 0$. We split the proof into the two cases $\alpha = 0$ and $\alpha >0$. As in the proof of Theorem \ref{s0}, we have \begin{align*} \mathcal{F}(s, \widetilde k_x)=0 \iff \mathcal{F}\left(\widetilde{s}, k_x\right) =0, \quad \widetilde{s} = sS_x=\frac{\alpha+s+\sigma}{\alpha+s}s. \end{align*} Since the roots ${s}_0$ of $\mathcal{F}\left({s}_0, k_x\right)=0$ are purely imaginary, ${s}_0=\bm{i}\xi$, we must have \begin{equation}\label{eqni} \frac{\alpha+s+\sigma}{\alpha+s}s=\bm{i}\xi, \end{equation} for some $\xi\in\mathbb{R}$. Thus, if $\alpha=0$, then $s=-\sigma+\bm{i}\xi$ and $\Re{s}=-\sigma<0$. When $\alpha > 0$, we consider \begin{equation*} \frac{\alpha+s+\sigma}{\alpha+s}s=\bm{i}\xi \iff s^2 + (\alpha + \sigma -\bm{i}\xi)s - \bm{i}\alpha\xi = 0. \end{equation*} If $\xi =0$, then the roots are $s =0$ and $s = -(\alpha + \sigma) < 0$; clearly the real parts of the roots are non-positive. If $\xi \ne 0$, then the roots are given by $$ s = -\frac{(\alpha + \sigma -\bm{i}\xi)}{2} \pm \frac{1}{2}\sqrt{(\alpha + \sigma -\bm{i}\xi)^2 + \bm{i}4\alpha\xi}.
$$ The real parts of the two roots are $$ \Re{s} = -\frac{(\alpha + \sigma)}{2}\pm \frac{1}{2\sqrt{2}}\sqrt{(\alpha + \sigma)^2 - \xi^2 + \sqrt{((\alpha + \sigma)^2 - \xi^2)^2 + 4 \xi^2(\alpha - \sigma)^2}}. $$ The root with the negative sign has a negative real part, $$ \Re{s} = -\frac{(\alpha + \sigma)}{2}- \frac{1}{2\sqrt{2}}\sqrt{(\alpha + \sigma)^2 - \xi^2 + \sqrt{((\alpha + \sigma)^2 - \xi^2)^2 + 4 \xi^2(\alpha - \sigma)^2}} < 0. $$ For the root with the positive sign, $$ \Re{s} = -\frac{(\alpha + \sigma)}{2}+ \frac{1}{2\sqrt{2}}\sqrt{(\alpha + \sigma)^2 - \xi^2 + \sqrt{((\alpha + \sigma)^2 - \xi^2)^2 + 4 \xi^2(\alpha - \sigma)^2}}, $$ assume, for contradiction, that $\Re{s} > 0$ for some $\alpha > 0$, $\sigma >0$ and $\xi \in \mathbb{R}$. This implies that $$ (\alpha + \sigma) < \frac{1}{\sqrt{2}}\sqrt{(\alpha + \sigma)^2 - \xi^2 + \sqrt{((\alpha + \sigma)^2 - \xi^2)^2 + 4 \xi^2(\alpha - \sigma)^2}}. $$ Squaring both sides of the inequality gives $$ (\alpha + \sigma)^2 + \xi^2 < \sqrt{((\alpha + \sigma)^2 - \xi^2)^2 + 4 \xi^2(\alpha - \sigma)^2}. $$ Squaring both sides again and simplifying further yields $$ (\alpha+\sigma)^2 < (\alpha-\sigma)^2. $$ This is a contradiction, since $\alpha > 0$ and $\sigma > 0$. Thus, for $\alpha > 0$ and $\sigma > 0$, we must have $\Re{s} < 0$: the PML moves the roots further into the stable half of the complex plane. \end{proof} \section{Numerical experiments} In this section, we present numerical examples to verify the stability analysis of the previous sections and to demonstrate the absorption properties of the PML model for the elastic wave equation with piecewise constant material parameters. \subsection{Two layers} We consider the elastic wave equation in a two-layered medium $\Omega_1\cup\Omega_2$, where $\Omega_1=[0,4\pi]^2$ and $\Omega_2=[0,4\pi]\times[-4\pi,0]$. Each layer is either an isotropic or an orthotropic (anisotropic) elastic solid.
For the isotropic case, we use the material parameters $\rho_1=1.5$, $\mu_1=4.86$, $\lambda_1=4.8629$ in $\Omega_1$, and $\rho_2=3$, $\mu_2=27$, $\lambda_2=26.9952$ in $\Omega_2$. For the anisotropic case, we choose $\rho_1=1$, $c_{{11}_1}=4$, $c_{{12}_1}=3.8$, $c_{{22}_1}=20$ and $c_{{33}_1}=2$ in $\Omega_1$, and the material parameters in $\Omega_2$ are chosen as $\rho_2=0.25$ and $c_{{ij}_2}=4c_{{ij}_1}$ for $i,j=1,2$. As initial condition, we set the displacement fields to \[ \mathbf{u_1}=\mathbf{u_2}=e^{-20((x-2\pi)^2+(y-1.6\pi)^2)}, \] and use zero initial data for the velocity field and all auxiliary variables. We impose the transformed interface conditions \eqref{int_pml} at the material interface $y=0$. We impose characteristic boundary conditions at the left boundary $x=0$, the bottom boundary $y=-4\pi$, and the top boundary $y=4\pi$. Outside the right boundary $x=4\pi$, we use a PML in $[4\pi, 4.4 \pi]\times [-4\pi,4\pi]$, closed by characteristic boundary conditions at the PML boundaries.
Because of the PML, the boundary conditions must be modified as \begin{align} Z_{1y}\frac{\partial\mathbf{u_1}}{\partial t}+B_1\frac{\partial\mathbf{u_1}}{\partial y}+C_1^T\frac{\partial\mathbf{u_1}}{\partial x}+B_1\sigma\mathbf{w_1}+\sigma Z_{1y}(\mathbf{u_1}-\mathbf{q_1})&=0, \quad y=4\pi,\label{TwoLayerBCtop}\\ Z_{ix}\frac{\partial\mathbf{u}_i}{\partial t}-A_i\frac{\partial\mathbf{u}_i}{\partial x}-C_i\frac{\partial\mathbf{u}_i}{\partial y}+A_i\sigma\mathbf{v_i}&=0, \quad x=0,\ i=1,2,\label{TwoLayerBCleft}\\ Z_{ix}\frac{\partial\mathbf{u}_i}{\partial t}+A_i\frac{\partial\mathbf{u}_i}{\partial x}+C_i\frac{\partial\mathbf{u}_i}{\partial y}-A_i\sigma\mathbf{v_i}&=0, \quad x=4.4\pi,\ i=1,2,\label{TwoLayerBCright}\\ Z_{2y}\frac{\partial\mathbf{u_2}}{\partial t}-B_2\frac{\partial\mathbf{u_2}}{\partial y}-C_2^T\frac{\partial\mathbf{u_2}}{\partial x}-B_2\sigma\mathbf{w_2}+\sigma Z_{2y}(\mathbf{u_2}-\mathbf{q_2})&=0, \quad y=-4\pi.\label{TwoLayerBCbottom} \end{align} The impedance matrices $Z_{ix}$ and $Z_{iy}$ are given by $$ Z_{ix} = \begin{bmatrix} \rho_ic_{px i} & 0\\ 0 & \rho_ic_{sx i} \\ \end{bmatrix}, \quad Z_{iy} = \begin{bmatrix} \rho_ic_{sy i} & 0\\ 0 & \rho_ic_{py i} \\ \end{bmatrix}, $$ and the wave speeds $c_{px i}, c_{py i}, c_{sx i}, c_{sy i}$ are defined in \eqref{elastic_wave_speeds_an}. The PML damping function is \begin{equation}\label{eq:damping_func} \begin{split} &\sigma\left(x\right) = \left \{ \begin{array}{rl} 0 \quad {} \quad {}& \text{if} \quad x \le L_x,\\ \sigma_0\Big(\frac{x-L_x}{\delta}\Big)^3 & \text{if} \quad x \ge L_x , \end{array} \right. \end{split} \end{equation} where the damping strength is \begin{equation}\label{eq:damping_strength} \sigma_0=\frac{4c_{p,\max}}{2\delta}\log\left(\frac{1}{Ref}\right). \end{equation} Here, $c_{p,\max}=\max(c_{p1},c_{p2})$, where $c_{p1} = \max(c_{px 1}, c_{py 1})$ and $c_{p2}= \max(c_{px 2}, c_{py 2})$ are the maximum pressure wave speeds in $\Omega_1$ and $\Omega_2$, respectively.
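For reference, the damping profile \eqref{eq:damping_func} and strength \eqref{eq:damping_strength} are straightforward to implement; a minimal sketch with the parameter values of this example (variable names are ours):

```python
import numpy as np

L_x = 4*np.pi                 # length of the physical domain in x
delta = 0.1*L_x               # PML width
Ref = 1e-4                    # relative PML modeling error
cp_max = 5.196                # max pressure wave speed over the two layers (approx.)

# Damping strength sigma_0 = 4*cp_max/(2*delta)*log(1/Ref)
sigma0 = 4*cp_max/(2*delta)*np.log(1/Ref)

def sigma(x):
    # Cubic ramp: zero in the physical domain, grows monotonically in the PML.
    return np.where(x <= L_x, 0.0, sigma0*((x - L_x)/delta)**3)

assert sigma(L_x) == 0.0
assert np.isclose(sigma(L_x + delta), sigma0)   # sigma reaches sigma0 at the outer edge
print(f"sigma0 = {sigma0:.2f}")
```

The cubic ramp keeps the damping zero, with two vanishing derivatives, at the PML interface $x = L_x$, which limits the numerical reflection off the layer.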
The parameter $L_x = 4\pi$ is the length of the domain, $\delta=0.1 L_x$ is the width of the PML and $Ref=10^{-4}$ is the relative PML modeling error. Additionally, we choose the CFS $\alpha=0.05\sigma_0$ in both subdomains. For the spatial discretisation, we use the SBP finite difference operators with the fourth order accurate interior stencil \cite{Mattsson2004}. The boundary conditions and material interface conditions are imposed weakly by the penalty technique \cite{Duru2014,Carpenter1994}, such that a discrete energy estimate is obtained when the damping vanishes. For details on the SBP discretisation and stability for the undamped problem, we refer the reader to \cite{Duru2014, Duru2014V}. We discretise in time using the classical Runge-Kutta method with the time step $\Delta t=0.2h/\sqrt{\max_i(c_{pi}^2+c_{si}^2)}$. As above, $c_{pi} = \max(c_{px i}, c_{py i})$ are the maximum pressure wave speeds in $\Omega_i$ and $c_{si} = \max(c_{sx i}, c_{sy i})$ are the maximum shear wave speeds in $\Omega_i$. In Figures \ref{fig_TwoDomainsIso} and \ref{fig_TwoDomains}, we plot the numerical solutions at four time points for the isotropic and anisotropic media, respectively. In both cases, the initial data is a Gaussian in the top layer. At $t=1$, we observe that a wave mode propagates at the same speed in the two spatial directions in the isotropic medium but at different speeds in the anisotropic medium. The elastic waves have propagated into the bottom layer at $t=2$, where the effects of the discontinuous material property are clearly observed. At $t=5$, it is clear that waves entering the PML are absorbed. In the last panels, we plot the solution after a long time, $t=100$. Note that the largest amplitude is about $10^{-5}$, demonstrating numerical stability and the effectiveness of the PML.
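The time-step restriction above can be made concrete for the isotropic two-layer parameters; a small sketch (the grid spacing $h$ is illustrative):

```python
import numpy as np

def max_speeds(rho, mu, lam):
    # Isotropic pressure and shear wave speeds.
    return np.sqrt((2*mu + lam)/rho), np.sqrt(mu/rho)

layers = [(1.5, 4.86, 4.8629), (3.0, 27.0, 26.9952)]   # (rho, mu, lambda)
h = 0.05                                               # grid spacing (illustrative)

# dt = 0.2*h/sqrt(max_i(c_pi^2 + c_si^2))
dt = 0.2*h/np.sqrt(max(cp**2 + cs**2 for cp, cs in (max_speeds(*l) for l in layers)))
print(dt)   # time step used with the classical Runge-Kutta method
```

The stiffest layer ($\Omega_2$, with the largest wave speeds) dictates the time step for the whole coupled problem.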
\begin{figure}[H] \centering \includegraphics[width=0.24\textwidth,trim={5cm 0 4cm 0},clip]{TwoDomainIso_t_1.png} \includegraphics[width=0.24\textwidth,trim={5cm 0 4cm 0},clip]{TwoDomainIso_t_2.png} \includegraphics[width=0.24\textwidth,trim={5cm 0 4cm 0},clip]{TwoDomainIso_t_3.png} \includegraphics[width=0.24\textwidth,trim={5cm 0 4cm 0},clip]{TwoDomainIso_t_100.png} \caption{The solution at four time points $t=1,2,3,100$ in a piecewise isotropic medium. } \label{fig_TwoDomainsIso} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.24\textwidth,trim={5cm 0 4cm 0},clip]{TwoDomain_t_1.png} \includegraphics[width=0.24\textwidth,trim={5cm 0 4cm 0},clip]{TwoDomain_t_2.png} \includegraphics[width=0.24\textwidth,trim={5cm 0 4cm 0},clip]{TwoDomain_t_5.png} \includegraphics[width=0.24\textwidth,trim={5cm 0 4cm 0},clip]{TwoDomain_t_100.png} \caption{The solution at four time points $t=1,2,5,100$ in a piecewise orthotropic medium. } \label{fig_TwoDomains} \end{figure} In Figure \ref{fig_TwoDomainsSolEnergy}, we plot the evolution in time of the discrete $l_2$-norm $\|\mathbf{u}\|_H=\sqrt{\sum_{i=1}^{2} \mathbf{u}_{i}^T \mathbf{H}\mathbf{u}_{i}}$, computed from the numerical solution, where $i$ corresponds to the two layers and $\mathbf{H}$ is the discrete norm associated with the SBP operator. We observe that $\|\mathbf{u}\|_H$ decays monotonically in both the isotropic and anisotropic media. \begin{figure}[H] \centering \includegraphics[width=0.48\textwidth,trim={0cm 0 0cm 0},clip]{TwoDomainsIso_m111_energy.png} \includegraphics[width=0.48\textwidth,trim={0cm 0 0cm 0},clip]{TwoDomainsOrt_m111_energy.png} \caption{The quantity $\|\mathbf{u}\|_H$ for the isotropic (left) and the orthotropic (right) media.} \label{fig_TwoDomainsSolEnergy} \end{figure} Finally, we compare the absorbing property of the PML model with the first order absorbing boundary conditions (ABC) \cite{doi:10.1061/JMCEA3.0001144}.
We compute a solution in the large domain $[0,12\pi]\times[-4\pi,4\pi]$, which is the original domain extended three times in the positive $x$ direction, and regard the part of the solution in $[0,4\pi]\times[-4\pi,4\pi]$ as a reference solution. In Figure \ref{fig_PMLError}, we plot the PML error, defined as the maximum norm of the difference between the PML solution and the reference solution, and the ABC error, defined analogously as the maximum norm of the difference between the solution computed using the ABC on all boundaries and the reference solution. We observe that the PML error is about two orders of magnitude smaller than the ABC error in both isotropic and anisotropic media. \begin{figure} \centering \includegraphics[width=0.48\textwidth,trim={0cm 0 0cm 0},clip]{PML_error_iso.png} \includegraphics[width=0.48\textwidth,trim={0cm 0 0cm 0},clip]{PML_error_ort.png} \caption{The maximum error for the isotropic (left) and the orthotropic (right) media. ABC is the absorbing boundary condition.} \label{fig_PMLError} \end{figure} \subsection{Four layers} Next, we demonstrate the extension of the results to multiple elastic layers. We consider the elastic wave equation in the domain $\Omega=[0,40]\times[-40,0]$. The medium has a four-layered structure and the material parameters are summarized in Table \ref{tab_mp}. In each layer, the material property is homogeneous and isotropic, and the governing equation is \eqref{eq:PML_physical_space}. At the interface between two adjacent layers, the material property is discontinuous, and the equations are coupled by imposing continuity of displacement and traction in the form of \eqref{int_pml}. At time $t=0$, we initialise the displacement fields as \[ \mathbf{u}_i = e^{-5((x-20)^2+(y+15)^2)},\quad {i}=1,2,3,4, \] that is, a Gaussian centred in the middle of Layer 2.
\begin{table}[] \centering \begin{tabular}{ccccc} Layer & $\rho$ & $c_s$ & $c_p$ & Domain \\ \hline 1 & 1.5 & 1.8 & 3.118 & $[0,40] \times [-10,0]$ \\ 2 & 1.9 & 2.3 & 3.984 & $[0,40] \times [-20,-10]$ \\ 3 & 2.1 & 2.7 & 4.667 & $[0,40] \times [-30,-20]$ \\ 4 & 3 & 3 & 5.196 & $[0,40] \times [-40,-30]$ \\ \hline \end{tabular} \caption{Material properties in the four layers.} \label{tab_mp} \end{table} We impose a traction free boundary condition at the top boundary $y=0$, and the characteristic boundary condition at the bottom boundary $y=-40$ and the left boundary $x=0$. At the right boundary $x=40$, we add a PML in $[40,44]\times[-40,0]$, where the width $\delta=4$ is 10\% of the computational domain in $x$. At the boundaries of the PML, we impose the characteristic boundary condition. Because of the PML, the boundary conditions must be modified as \begin{align} B_1\frac{\partial\mathbf{u_1}}{\partial y}+C_1^T\frac{\partial\mathbf{u_1}}{\partial x}+B_1\sigma\mathbf{w_1}&=0, \quad y=0,\label{FourLayerBCtop}\\ Z_{ix}\frac{\partial\mathbf{u}_i}{\partial t}-A_i\frac{\partial\mathbf{u}_i}{\partial x}-C_i\frac{\partial\mathbf{u}_i}{\partial y}+A_i\sigma\mathbf{v_i}&=0, \quad x=0,\ {i}=1,2,3,4,\label{FourLayerBCleft}\\ Z_{ix}\frac{\partial\mathbf{u}_i}{\partial t}+A_i\frac{\partial\mathbf{u}_i}{\partial x}+C_i\frac{\partial\mathbf{u}_i}{\partial y}-A_i\sigma\mathbf{v_i}&=0, \quad x=44,\ {i}=1,2,3,4,\label{FourLayerBCright}\\ Z_{4y}\frac{\partial\mathbf{u_4}}{\partial t}-B_4\frac{\partial\mathbf{u_4}}{\partial y}-C_4^T\frac{\partial\mathbf{u_4}}{\partial x}-B_4\sigma\mathbf{w_4}+\sigma Z_{4y}(\mathbf{u_4}-\mathbf{q_4})&=0, \quad y=-40.\label{FourLayerBCbottom} \end{align} More precisely, on the $y$-boundaries the modified traction includes the auxiliary variable $\mathbf{w}$. In addition, the time derivative in the characteristic boundary condition introduces a lower order term, see \eqref{FourLayerBCbottom}. 
Similarly, on the $x$-boundaries, the modified traction includes the auxiliary variable $\mathbf{v}$. Inside the PML of all four layers, we choose the damping function $\sigma(x)$ given by \eqref{eq:damping_func}, where the damping strength $\sigma_0 > 0$ is given by \eqref{eq:damping_strength}. Here, $c_{p,max}=\max_i{c_{pi}}$ is the largest pressure wave speed $c_{pi}$ in $\Omega_i$, $i=1,2,3,4$, $L_x = 40$, $\delta=0.1 L_x$, and $Ref=10^{-4}$ is the relative modeling error. Additionally, we choose the CFS $\alpha=0.05\sigma_0$. Numerically, we use the same spatial and temporal discretisation as in the previous numerical example. In Figure \ref{fig_FourDomainsSol441}, we plot the solutions at four time points with grid size $h=0.1$. We observe that at $t=3$, the Gaussian has expanded from its centre to the top three layers and the reflections at the material interfaces are clearly visible. At $t=5$, the wave has propagated to all four layers, and has interacted with the free surface, at $y=0$, and the characteristic boundary condition, at $x=0$. The plot at $t=9$ shows that the surface wave entering the PML is effectively absorbed. By the final time $t=1000$, most waves have left the computational domain and the largest remaining amplitude is only $10^{-6}$. \begin{figure}[H] \centering \includegraphics[width=0.24\textwidth,trim={1cm 0 1cm 0},clip]{FourDomains_m441_t_3.png} \includegraphics[width=0.24\textwidth,trim={1cm 0 1cm 0},clip]{FourDomains_m441_t_5.png} \includegraphics[width=0.24\textwidth,trim={1cm 0 1cm 0},clip]{FourDomains_m441_t_9.png} \includegraphics[width=0.24\textwidth,trim={1cm 0 1cm 0},clip]{FourDomains_m441_t_1000.png} \caption{The solution at four time points $t=3,5,9$ and 1000 with Gaussian initial data and grid size $h=0.1$.} \label{fig_FourDomainsSol441} \end{figure} Next, we consider an example driven by a seismological source: an explosive moment tensor point source $F=g M_0 \nabla f_{\delta}$ as the forcing in the governing equation.
The moment time function $g$ and the approximated delta function $f_{\delta}$ take the form \begin{align*} g=e^{-\frac{(t-0.215)^2}{0.15}},\quad f_{\delta}=\frac{1}{2\pi\sqrt{s_1s_2}}e^{-\left(\frac{(x-20)^2}{2s_1}+\frac{(y+15)^2}{2s_2}\right)}, \end{align*} where $s_1=s_2=0.5h$ and $M_0=1000$. We note that the peak amplitude of $F$ is located in the middle of Layer 2. With zero initial data for all variables, we run the simulation until $t=1000$ using the same numerical method, and plot the solutions with $h=0.1$ in Figure \ref{fig_FourDomainsSol441Forcing}. We make similar observations as in the case with Gaussian initial data. \begin{figure}[H] \centering \includegraphics[width=0.24\textwidth,trim={1cm 0 1cm 0},clip]{FourDomainsForcing_m441_t_3.png} \includegraphics[width=0.24\textwidth,trim={1cm 0 1cm 0},clip]{FourDomainsForcing_m441_t_5.png} \includegraphics[width=0.24\textwidth,trim={1cm 0 1cm 0},clip]{FourDomainsForcing_m441_t_9.png} \includegraphics[width=0.24\textwidth,trim={1cm 0 1cm 0},clip]{FourDomainsForcing_m441_t_1000.png} \caption{The solution at four time points $t=3,5,9$ and 1000 with a single point moment source and grid size $h=0.1$.} \label{fig_FourDomainsSol441Forcing} \end{figure} To see the stability property of the PML, we plot $\|\mathbf{u}\|_H$ in time in Figure \ref{fig_FourDomainsSolEnergy}. The first plot corresponds to the case with Gaussian initial data, and the second plot corresponds to the case with the single point moment source. It is clear that the PML remains stable after a long time.
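As an illustrative Python sketch of the moment source (variable names and the evaluation points are ours), note that $g$ peaks at $t=0.215$ and that the gradient of $f_{\delta}$ vanishes at the centre $(20,-15)$, so the forcing is dipolar around the centre of Layer 2:

```python
import numpy as np

# Sketch of the explosive moment tensor point source F = g * M0 * grad(f_delta).
def g(t):
    # Moment time function, peaking at t = 0.215.
    return np.exp(-(t - 0.215)**2 / 0.15)

def grad_f_delta(x, y, s1, s2, x0=20.0, y0=-15.0):
    # Gradient of the mollified delta function f_delta centred at (x0, y0).
    f = np.exp(-((x - x0)**2 / (2*s1) + (y - y0)**2 / (2*s2))) \
        / (2*np.pi*np.sqrt(s1*s2))
    return np.array([-(x - x0)/s1 * f, -(y - y0)/s2 * f])

h = 0.1
s1 = s2 = 0.5*h   # mollifier widths tied to the grid size
M0 = 1000.0       # moment magnitude

def source(x, y, t):
    return M0 * g(t) * grad_f_delta(x, y, s1, s2)

print(g(0.215))                    # 1.0: the peak of the time function
print(source(20.0, -15.0, 0.215))  # both components vanish at the centre
```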
\begin{figure}[H] \centering \includegraphics[width=0.48\textwidth,trim={0cm 0 0cm 0},clip]{FourDomains_m441_energy.png} \includegraphics[width=0.48\textwidth,trim={0cm 0 0cm 0},clip]{FourDomainsForcing_m441_energy.png} \caption{The quantity $\|\mathbf{u}\|_H$ with $h=0.1$ for the Gaussian initial data (left) and the single point moment source (right).} \label{fig_FourDomainsSolEnergy} \end{figure} As before, we compare the absorbing property of the PML model with the ABC. Similar to the last section, we compute a reference solution in a larger domain that is extended in the positive $x$ direction three times the length of the original domain. In all computations, we have used a spatial mesh size $h=0.2$. In Figure \ref{fig_FourDomainsPMLerr}, we plot the PML error and the ABC error, and observe again that the PML error is about two orders of magnitude smaller than the ABC error for both cases. \begin{figure}[H] \centering \includegraphics[width=0.48\textwidth,trim={0cm 0 0cm 0},clip]{Elastic2D_FourDomain_err1.png} \includegraphics[width=0.48\textwidth,trim={0cm 0 0cm 0},clip]{Elastic2D_FourDomain_err2.png} \caption{The maximum error for the case with Gaussian initial data (left) and with the single point moment source (right). ABC is absorbing boundary condition.} \label{fig_FourDomainsPMLerr} \end{figure} \section{Conclusion} We have analysed the stability of the PML for the elastic wave equation with piecewise constant material parameters and interface conditions at material interfaces. The elastic wave equation and the interface conditions, without the PML, satisfy an energy estimate in physical space. Alternatively, a mode analysis can also be used to prove that exponentially growing modes are not supported by the elastic wave equation subject to the interface conditions. 
In particular, the normal mode analysis in Laplace space for interface waves gives a boundary matrix $\mathcal{C}(s, k_x)$ whose determinant is a homogeneous function $\mathcal{F}(s, k_x)$ of $(s, k_x)$ that does not have any roots $s$ with positive real part $\Re{s} >0$ in the complex plane. When the PML is present, the energy method is in general not applicable, but the normal mode analysis can be used to investigate the existence of exponentially growing modes in the PML. The normal mode analysis, when applied to the PML in a discontinuous elastic medium, yields a similar boundary matrix perturbed by the PML. Our analysis shows that if the PML IVP does not support growing modes, then the PML moves the roots of the determinant $\mathcal{F}(s, k_x)$ further into the stable complex plane. This proves that interface wave modes present at layered material interfaces in elastic solids are dissipated by the PML. We have presented numerical examples for both isotropic and anisotropic elastic solids verifying the analysis, and demonstrating that interface wave modes decay in the PML. \section*{Acknowledgement} Part of the work was carried out during S. Wang's research visit at Australian National University (ANU). The financial support from SVeFUM and ANU is greatly appreciated. \bibliographystyle{siamplain}
\section{Introduction} Deep neural networks have shown remarkable performance on various computer vision tasks in recent years. There has been much research on network architectures that extract discriminative features to achieve better performance. In the early years, most works focused on designing deeper and/or wider networks to enhance the capacity of deep neural networks. ResNet \cite{he2016deep} brought in the concept of residual learning to efficiently increase the depth of the network as well as the accuracy. On the other hand, Wide Residual Networks (WRN) \cite{zagoruyko2016wide} showed that the model can be improved by increasing the width of the network rather than its depth. Besides developing network architectures, there have been attempts to move away from modifying the network architecture itself and to develop new training mechanisms. The first approach is the feature fusion method, which combines different feature maps obtained from multiple sub-networks. DualNet \cite{how2017dual} coordinated two parallel sub-networks and trained them iteratively to learn complementary features, then fused the two-stream features and passed them into the fused classifier. They showed that the ensemble of the fused classifier and the two classifiers of the sub-networks outperforms an independently trained network. However, this approach only focuses on the performance of the fused classifier. The performance of the sub-networks is significantly lower than that of a network independently trained with the same architecture. Another approach is Knowledge Transfer, which improves the performance of a smaller student network by transferring the knowledge of a teacher network. Knowledge Distillation \cite{hinton2015distilling}, one of the popular methods of Knowledge Transfer, starts with training a powerful teacher model followed by encouraging the student model to mimic the teacher model's softened distribution.
Besides the probability distribution, other studies have tried to distill the attention or factors extracted from the features to a smaller model \cite{zagoruyko2016paying,kim2018paraphrasing}. \begin{figure*}[t] \centering \includegraphics[width = 0.62\linewidth]{overallprocess.png}\\ \caption{The overall process of our method, called Feature Fusion Learning (FFL). The sub-networks create an ensemble classifier for training the fusion module. Then, the ensemble classifier transfers its knowledge to the fusion module. Similarly, the fusion module transfers its knowledge back to each sub-network. This online mutual knowledge distillation helps to obtain better performance in the fused classifier as well as the sub-networks. More details are explained in Sec. \ref{method}.} \label{fig:FFL_overall} \end{figure*} Online and offline methods are the two ways of distilling knowledge. Offline distillation is the conventional way of distilling the softened distribution or feature map information of a pre-trained teacher model to a smaller target model. On the other hand, online distillation removes the stage of pre-training the teacher model and trains both the teacher model and the target model simultaneously. There is also another online distillation method which trains an ensemble of student models to learn collaboratively and mutually teach one another without a particular teacher model \cite{zhang2018deep}. However, this method may only provide limited information to the target because it does not utilize the rich information of a teacher model for distillation. The On-the-fly Native Ensemble (ONE) \cite{lan2018knowledge} is one of the online distillation methods; it trains only a single multi-branch network while concurrently building a strong teacher model by gating the branch logits to enhance the learning of the student network. This method distills the knowledge of the teacher network to the student network in one way.
It uses a gating module located on the shared layer, thus it is applicable only when the branches have the same architecture. Also, this type of logit-based distillation method cannot make good use of feature maps, which are useful in many vision tasks. In this work, we propose a solution for efficiently fusing the features of sub-networks. Contrary to the existing feature fusion methods, we adopt an online mutual knowledge distillation method to enhance the performance of both the sub-networks and the fusion module. The overall process of our method is described in Figure \ref{fig:FFL_overall}. When the same network architecture is employed for the sub-networks, we can share the low-level layers and take a multi-branch network similar to \cite{lan2018knowledge}. However, when different network architectures are used as sub-networks, the sub-networks are trained in different streams analogous to \cite{zhang2018deep}. Here we have two important classifiers, the ensemble classifier and the fused classifier. The ensemble classifier uses the ensemble logit produced from the sub-networks, and the fused classifier uses the feature map generated from the fusion module. The fusion module receives feature maps from each sub-network and fuses them using depthwise convolution and pointwise convolution. The fused feature map is then forwarded to the fused classifier for class prediction. When both the ensemble classifier and the fused classifier yield logits, the model performs knowledge distillation from the ensemble classifier to the fused classifier. At the same time, another knowledge distillation is carried out from the fused classifier to each sub-network classifier. This eventually creates a loop between the sub-networks and the fusion module. The sub-networks and the fusion module learn by mutually teaching each other via knowledge distillation.
When the training is completed, the performances of the sub-networks as well as the fusion module are greatly improved due to the online mutual knowledge distillation between the sub-networks and the fusion module. \section{Related Work} \subsection{Feature Fusion} Feature fusion methods have been used in many previous deep learning studies. In deep convolutional network models, different types of features are extracted from each layer \cite{goodfellow2016deep}. From this fact, researchers found that combining the features of each layer increases the performance of the model and showed the effectiveness of this method in various computer vision tasks such as detection, semantic segmentation and gesture classification \cite{neverova2015multi, hariharan2015hyper, fan2018multi, chang2018multi}. The studies in \cite{lin2015bilinear, how2017dual} applied feature fusion to dual learning. In the bilinear CNN \cite{lin2015bilinear}, outputs from two different networks are fused and mapped into a bilinear vector. DualNet \cite{how2017dual} trains two parallel networks with the same structure and uses the `SUM' operation to combine the features of those networks so as to build a fused classifier. In addition, it applies iterative training, which alternately updates the weights of the sub-networks to learn complementary features. Our Feature Fusion Learning (FFL) has three distinct differences compared to DualNet. First, DualNet is designed to work only for the same architectures of sub-networks, whereas FFL is applicable to any network architecture. Second, FFL concatenates the features of the sub-networks and forwards them to the fusion module. We intend the trainable fusion module to be more effective than simple feature fusion methods.
Finally, the main difference is that DualNet is only focused on improving the performance of the fused classifier, while FFL focuses on improving the performances of both the fused classifier and the sub-networks through an online mutual knowledge distillation which will be described later. \begin{figure*}[t] \centering \begin{subfigure}[t]{0.23\textwidth} \includegraphics[width=1\textwidth]{DML.png} \caption{DML \cite{zhang2018deep}} \label{fig:a} \end{subfigure} \begin{subfigure}[t]{0.23\textwidth} \includegraphics[width=1\textwidth]{ONE.png} \caption{ONE \cite{lan2018knowledge}} \label{fig:b} \end{subfigure} \begin{subfigure}[t]{0.23\textwidth} \includegraphics[width=1\textwidth]{DN.png} \caption{DualNet \cite{how2017dual}} \label{fig:c} \end{subfigure} \begin{subfigure}[t]{0.23\textwidth} \includegraphics[width=1\textwidth]{FFL.png} \caption{FFL (proposed)} \label{fig:d} \end{subfigure} \caption{(a) and (b) are online knowledge distillation methods which focus on the training of sub-networks. (a) uses the knowledge of the students mutually for the training. (b) builds a teacher by gating the students' logits. (c) and (d) are feature fusion methods which generate useful feature maps. Unlike (c), (d) uses online mutual knowledge distillation between the sub-networks and the fused classifier. Therefore, (d) enhances the performance of both the sub-networks and the fused classifier. Also, (a) and (d) can use sub-networks with different architectures. However, (b) and (c) are only applicable to sub-networks with the same architecture. } \label{fig:arch-comparison} \end{figure*} \subsection{Knowledge Transfer} Knowledge Transfer (KT) is a model compression method proposed to deliver the performance of a larger model to smaller and lighter ones \cite{cheng2017survey}. It is basically composed of a teacher network and a student network, and it transfers the knowledge of the teacher network to the student network in various ways.
This scheme was first applied in an offline manner \cite{bucilua2006model}. After that, online KT was developed to enhance the performance of a student network which learns without a pre-trained teacher network \cite{zhang2018deep,lan2018knowledge,song2018collaborative,anil2018large}. This online learning method is related to our work in this paper. \noindent \textbf{Offline KT} is a way of training a student network from scratch by transferring the knowledge of a pre-trained teacher network. In \cite{ba2014deep, hinton2015distilling}, the information of the teacher network is distilled to the student network through the L2-norm or Kullback-Leibler divergence (KLD) loss on logit values. Consequently, the student network mimics the outputs of the teacher network. There are some other studies of offline KT which directly or indirectly pass the features of convolution layers from the teacher to the student \cite{romero2014fitnets, zagoruyko2016paying, yim2017gift,furlanello2018born,kim2018paraphrasing,heo2018knowledge}. \noindent \textbf{Online KT} trains a student network without a pre-trained model, unlike offline KT. In this method, the student network imitates a teacher network which is trained in an online manner instead of imitating a pre-trained teacher network. Deep Mutual Learning (DML) \cite{zhang2018deep} suggested a method which trains student networks to exchange information mutually through the KLD loss and can achieve better performance than the original network. In this framework, each student network plays the role of a teacher network to the other student networks. One advantage of this method is that any kind of different network architectures can be flexibly applied. Codistillation \cite{anil2018large} is similar to DML, but it forces student networks to maintain diversity longer by adding the distillation loss after enough \textit{burn in steps}.
The On-the-fly Native Ensemble (ONE) \cite{lan2018knowledge} method transfers knowledge using a gated logit ensemble of student networks which is trained simultaneously. Our FFL method, which will be described later, can also be categorized as an online KT method. While the aforementioned methods transfer knowledge one-way from the teacher to the student or have the students mutually transfer their knowledge to each other, FFL improves the performance of both the sub-networks and the fused classifier by performing bidirectional KT. More specifically, the fused classifier created by the fusion module distills information to the sub-networks acting as a teacher, and the logit ensemble of the sub-networks, working as another teacher, distills information into the fused classifier. Figure \ref{fig:arch-comparison} shows the difference between DML, ONE, DualNet and the proposed FFL method. \begin{figure}[t] \centering \includegraphics[width = 0.88\linewidth]{FusionModule.png}\\ \caption{The architecture of the fusion module. The depthwise convolution is applied to the concatenated feature maps of the sub-networks with $M$ filters. Then, the pointwise convolution is applied with $N$ filters. } \label{fig:fusion} \end{figure} \section{Proposed Method} In this section, we describe how to effectively fuse the features of sub-networks. The proposed method is called Feature Fusion Learning (FFL). Unlike the existing fusion methods, FFL is a learning method that takes care of not only the performance of the fused classifier but also the performance of the sub-networks. In the overall process, the features of parallel sub-networks are fused through a fusion module, and then the final classification result is obtained through the fused classifier. During training, an ensemble of the sub-networks distills its knowledge to the fused classifier, and the fused classifier distills its knowledge to each sub-network mutually.
\subsection{Fusion Module} Different from DualNet \cite{how2017dual}, our method does not make use of a simple sum or average operation when fusing features. Instead, we concatenate the features of the sub-networks and then perform the convolution operation through the fusion module. To reduce the number of parameters, we use a simple depthwise convolution and a $1 \times 1$ convolution, called pointwise convolution, as used in MobileNet \cite{howard2017mobilenets}. We use the feature map of the last layer for fusion because it is specific to the task and has sufficient expressive power of the network. Let $ C_1 $ and $ C_2 $ be the numbers of channels of the feature maps in the last layers of networks 1 and 2, respectively; then the number of channels of the concatenated feature map, $M$, will be $ C_1 + C_2 $. The number of output channels of the fusion module, $N$, can be manipulated as needed. As shown in Figure \ref{fig:fusion}, we first perform a $3 \times 3$ depthwise convolution, which applies a single filter per input channel, and then apply a pointwise convolution to create a linear combination of the slices of the feature map in order to combine them well. In DualNet, there is a problem that the number of output channels of the sub-networks must be the same because the feature maps are simply added and averaged element-wise. On the other hand, in our fusion module, since the feature maps of the sub-networks are concatenated, FFL can use different networks having different output channels as its sub-networks. If the resolutions of the final feature maps differ between the sub-networks, a simple convolution operation can make the spatial resolutions identical through a module similar to the regressor in FitNets \cite{romero2014fitnets}.
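As a minimal numerical sketch of the fusion module's structure (the weights here are random stand-ins for learned parameters; in practice the module is implemented with standard trainable convolution layers), the channel bookkeeping $M=C_1+C_2$ and the depthwise-then-pointwise pipeline look as follows:

```python
import numpy as np

# Concatenate two feature maps channel-wise, apply a 3x3 depthwise convolution
# (one filter per channel), then a 1x1 pointwise convolution mixing the
# M = C1 + C2 channels into N output channels.
def depthwise_conv3x3(x, dw):            # x: (M, H, W), dw: (M, 3, 3)
    M, H, W = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))  # zero padding keeps H, W
    out = np.zeros_like(x)
    for c in range(M):                   # each channel gets its own filter
        for i in range(H):
            for j in range(W):
                out[c, i, j] = np.sum(xp[c, i:i+3, j:j+3] * dw[c])
    return out

def pointwise_conv(x, pw):               # pw: (N, M) mixes channels per pixel
    return np.einsum('nm,mhw->nhw', pw, x)

def fusion_module(f1, f2, dw, pw):
    x = np.concatenate([f1, f2], axis=0) # M = C1 + C2 channels
    return pointwise_conv(depthwise_conv3x3(x, dw), pw)

rng = np.random.default_rng(0)
C1, C2, H, W = 4, 6, 8, 8                # different channel counts are allowed
f1 = rng.standard_normal((C1, H, W))
f2 = rng.standard_normal((C2, H, W))
M, N = C1 + C2, min(C1, C2)              # N chosen as the smaller channel count
out = fusion_module(f1, f2, rng.standard_normal((M, 3, 3)),
                    rng.standard_normal((N, M)))
print(out.shape)                         # N output channels, spatial size kept
```

Note that, unlike an element-wise sum, nothing in this pipeline requires $C_1 = C_2$.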
\subsection{Feature Fusion Learning} \label{method} In terms of sub-network architectures, ONE \cite{lan2018knowledge} is not flexible in that it cannot be applied to sub-networks with different architectures, because it creates a teacher by gating logits based on a shared feature map. Similarly, DualNet \cite{how2017dual} should also be applied to the same sub-network architecture because it simply combines features through the channel-wise sum. To overcome this problem, we designed two types of FFL depending on the architectures of the sub-networks in the training process: \begin{itemize} \item Case 1: If the sub-networks have the same architecture, the low-level layers of the sub-networks are shared and the high-level layers are separated into multiple branches, similar to ONE \cite{lan2018knowledge}. \item Case 2: If the sub-networks have different architectures, the sub-networks are trained independently since they cannot share layers. \end{itemize} In this work, we handle the multi-class classification task. Assuming that there are $m$ classes, the logit forwarded by the $k$-th network is defined as $\mathbf{z}_k=\{z_k^1,z_k^2,...,z_k^m\}$. In the training process, we use the softened probability for model generalization. Given $\mathbf{z}_k$, the softened probability is defined as \begin{equation} \sigma_i(\mathbf{z}_k;T) = \frac{ e^{z^i_k/T}}{\sum_{j=1}^{m}e^{z^j_k/T}} \end{equation} When $T = 1$, it is the same as the original softmax. If the one-hot ground-truth is given as $\mathbf{y} = \{y^1,y^2,..,y^m\}$, the cross-entropy loss of the $k$-th network is defined as \begin{equation} \mathcal{L}_{ce}^k = -\sum_{i=1}^{m}y^{i}\log(\sigma_i(\mathbf{z}_k;1)) \end{equation} The overall process is shown in Figure \ref{fig:FFL_overall}. For illustration, we have chosen a scenario that uses different sub-network architectures (case 2). The sub-networks create an ensemble classifier through an ensemble of logits to train the fusion module.
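The softened probability and the cross-entropy loss above can be sketched in Python as follows (the max-shift is a standard numerical-stability trick that leaves the value unchanged; it is not part of the formula):

```python
import numpy as np

def softened(z, T=1.0):
    # sigma_i(z; T) = exp(z_i / T) / sum_j exp(z_j / T)
    e = np.exp((z - z.max()) / T)   # max-shift avoids overflow
    return e / e.sum()

def cross_entropy(z, y_onehot):
    # L_ce = -sum_i y_i * log(sigma_i(z; 1))
    return -np.sum(y_onehot * np.log(softened(z, T=1.0)))

z = np.array([2.0, 1.0, 0.1])
y = np.array([1.0, 0.0, 0.0])
p1, p3 = softened(z, 1.0), softened(z, 3.0)
print(p1, p3)                # T = 3 gives a flatter, "softer" distribution
print(cross_entropy(z, y))
```

At $T=1$ the distribution reduces to the ordinary softmax, while a larger $T$ reveals the relative similarities between the non-target classes, which is what the distillation losses exploit.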
Assuming that there are $n$ sub-networks, the ensemble of logits is computed as follows: \begin{equation} z_e=\frac{1}{n}\sum_{k=1}^{n}z_k \end{equation} To train the fusion module, the ensemble classifier distills its knowledge to the fused classifier. This is called \textit{ensemble knowledge distillation} (EKD). The EKD loss is defined as the KL-divergence between the softened distribution of the ensemble classifier and the softened distribution of the fused classifier. If the logit of the fused classifier is denoted as $ z_f $, the EKD loss is as follows: \begin{equation} \mathcal{L}_{kl}^e = \sum_{i=1}^{m}\sigma_i(z_e;T)\log(\frac{\sigma_i(z_e;T)}{\sigma_i(z_f;T)}) \end{equation} The feature maps from the last layers of the sub-networks are concatenated and put into the fusion module. To train each sub-network, the fused classifier in the fusion module distills its knowledge to each sub-network. This is called \textit{fusion knowledge distillation} (FKD). The FKD loss for distilling the softened distribution of the fused classifier into each sub-network is defined as follows: \begin{equation} \mathcal{L}_{kl}^f = \sum_{k=1}^{n}\sum_{i=1}^{m}\sigma_i(z_f;T)\log(\frac{\sigma_i(z_f;T)}{\sigma_i(z_k;T)}) \end{equation} In addition to the distillation losses, each sub-network and the fused classifier learns the true label through cross-entropy, and the total loss becomes \begin{equation} \mathcal{L}_{total} = \sum_{k=1}^{n}\mathcal{L}_{ce}^k+\mathcal{L}_{ce}^f+ T^2\times(\mathcal{L}_{kl}^e + \mathcal{L}_{kl}^f) \end{equation} In our FFL, each sub-network and the fused classifier learns through the ground-truth with the cross-entropy loss. At the same time, the ensemble classifier distills its knowledge to the fused classifier with $\mathcal{L}_{kl}^e$ and, in return, the fused classifier distills its knowledge to each sub-network. Through such \textit{mutual knowledge distillation} (MKD), the fusion module generates meaningful features for classification.
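Putting the pieces together, the total objective can be sketched as follows (a hedged illustration with our own variable names; the softened distributions follow the definitions above, and the KD terms vanish when all classifiers agree):

```python
import numpy as np

def softened(z, T):
    e = np.exp((z - z.max()) / T)
    return e / e.sum()

def kl(p, q):
    # KL divergence: sum_i p_i * log(p_i / q_i)
    return float(np.sum(p * np.log(p / q)))

def ffl_total_loss(sub_logits, z_f, y_onehot, T=3.0):
    z_e = np.mean(sub_logits, axis=0)                 # ensemble logit z_e
    ekd = kl(softened(z_e, T), softened(z_f, T))      # L_kl^e: ensemble -> fused
    fkd = sum(kl(softened(z_f, T), softened(z_k, T))  # L_kl^f: fused -> each sub
              for z_k in sub_logits)
    ce = sum(-np.sum(y_onehot * np.log(softened(z, 1.0)))
             for z in [*sub_logits, z_f])             # all cross-entropy terms
    return ce + T**2 * (ekd + fkd)                    # T^2 restores gradient scale

sub_logits = np.array([[2.0, 0.5, 0.1], [1.5, 0.7, 0.2]])  # n = 2 sub-networks
z_f = np.array([1.8, 0.6, 0.15])                            # fused classifier
y = np.array([1.0, 0.0, 0.0])
print(ffl_total_loss(sub_logits, z_f, y))
```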
Since the scale of the gradients produced by the softened distribution is $1/T^2$, we multiply the distillation losses by $T^2$, following the KD recommendation \cite{hinton2015distilling}. The sub-networks and the fusion module in FFL are trained simultaneously. Generally, in the training process, the number of sub-networks is set to two ($n=2$). However, in some cases, FFL can increase the number of branches (case 1) or sub-networks (case 2). After training, our method performs classification through the fused classifier. However, if there is a constraint on the memory, as with ONE, we can remove the other branches and deploy the original network, provided that the sub-networks have the same architecture (case 1). If the sub-networks have different architectures (case 2), we can deploy the one that fits the memory budget as needed. \section{Experiments} To verify our method, we compare FFL with various other methods on image classification datasets. In Sec. \ref{ex1}, we compare our method with DualNet \cite{how2017dual}, a feature fusion method which has the same purpose as ours, and show an ablation study of the proposed mutual knowledge distillation method and the fusion module. Then, in Sec. \ref{ex2}, we compare FFL with ONE \cite{lan2018knowledge}, which is an online ensemble distillation method using sub-networks with the same architecture. In Sec. \ref{ex3}, we also compare FFL with DML \cite{zhang2018deep}, which distills knowledge mutually between students with different architectures. Finally, we deal with qualitative analysis in terms of the feature maps and generalization in Sec. \ref{ex4}. \noindent \textbf{Dataset:} We evaluate our method on several benchmark datasets: CIFAR-10 \cite{cifar10}, CIFAR-100 \cite{cifar100} and ImageNet LSVRC 2015 \cite{ILSVRC15}. The CIFAR-10 dataset contains 50k training images and 10k test images with 10 classes. Each class has 6,000 images.
The CIFAR-100 dataset has the same number of images as the CIFAR-10 dataset, 50k (train) and 10k (test), but it has 100 classes, so each class is assigned only 600 images. The ImageNet dataset consists of 1.2M training images and 50K validation images with 1,000 classes. \noindent \textbf{Experiment setting:} In most experiments, we set the number of sub-networks to two, and $T=3$. In case 1, we separate the last block of a backbone network from parameter sharing, and the number of output channels $N$ of the fusion module is designed to match the smaller of $C_1$ and $C_2$. For ImageNet, we set $N$ to $C_1 + C_2$, and separate the last two blocks to give more learning capacity, the same as in \cite{lan2018knowledge}. (Sec. \ref{ex1}): We reimplemented DualNet based on the original paper and experimented by setting FFL under the same conditions as DualNet. (Sec. \ref{ex2}): We use the same learning schedule and hyper-parameters as in ONE. (Sec. \ref{ex3}): For a fair comparison, DML and FFL use the same learning schedule as used in Sec. \ref{ex2}. Other details of the experiments are described in the supplementary material. \subsection{Comparison with Feature Fusion Method} \label{ex1} In this section, we compare DualNet and FFL in terms of feature fusion. Each model consists of two sub-networks with the same architecture. DualNet first trains the model with the iterative training that updates the sub-networks alternately, and then goes through the joint training process which updates only the sub-network classifiers and the fused classifier. On the other hand, FFL simultaneously learns the two sub-networks and the fused classifier during the entire learning process. All experiments were repeated $10$ times on the CIFAR-10 and CIFAR-100 datasets. \noindent \textbf{Fused Classifier:} Table \ref{table:compare_FM_fuse} represents the top-1 error rate of the fused classifier on the test set.
The performance of DualNet is that of the average classifier, an ensemble of the sub-networks and the fused classifier, as described in the original paper. The performance of FFL is the prediction result of the fused classifier. On CIFAR-10, FFL has slightly better performance than DualNet within the error range. Overall, as the depth of the network increases, the performance gap decreases. However, on the CIFAR-100 dataset, which is a more difficult problem, FFL is clearly superior to DualNet. The performance difference for ResNet-56 reaches $2.34\%p$. \noindent \textbf{Sub-network Classifier:} Table \ref{table:compare_FM_sub} shows the top-1 error rate of all the sub-network classifiers. In this case, there are two sub-network classifiers. FFL shows better performance than DualNet, and the difference is larger than that of the fused classifier experiment because DualNet is not designed to improve the performance of the sub-networks. The difference in the error rate between the two methods is around $2\%p$ on CIFAR-10, whereas the difference increases up to $7.85\%p$ in the CIFAR-100 experiment. The experiments in Table \ref{table:compare_FM_fuse} show that our proposed fusion module fuses features more effectively than DualNet. We also found that FFL even improves the performance of the sub-networks, which DualNet overlooks, in the experiments of Table \ref{table:compare_FM_sub}. Furthermore, when using the same sub-network architecture as DualNet, FFL learns efficiently in terms of memory consumption because it uses a shared network, as shown in Table \ref{table:Memory}. \begin{table}[t] \caption{Performance comparison of two feature fusion methods, FFL and DualNet, with four different network architectures. Table \ref{table:compare_FM_fuse} shows the performance of the fused classifiers. FFL is slightly better than DualNet on the CIFAR-10 dataset and at least around $1\%p$ better on CIFAR-100. This indicates that FFL is a more effective feature fusion method.
Table \ref{table:compare_FM_sub} shows the performance of the sub-network classifiers. Due to the effect of mutual knowledge distillation, the error rate of FFL is clearly lower than that of DualNet in all experiments.} \centering \begin{subtable}{.5\textwidth} \centering \caption{Top-1 classification error rate of fused classifiers. DualNet outputs results from the average classifier and FFL uses the fusion module for classification.} \begin{adjustbox}{width=0.95\linewidth} \begin{tabular}{ l | c c | c c} \toprule & \multicolumn{2}{|c|}{CIFAR-10} & \multicolumn{2}{|c}{CIFAR-100} \\ $(\%)$ & DualNet & FFL & DualNet & FFL \\ \midrule ResNet-32 & 6.21$\pm$0.20 & 5.78$\pm$0.13 & 27.49$\pm$0.31 & 25.56$\pm$0.32\\ ResNet-56 & 5.67$\pm$0.12 & 5.26$\pm$0.17 & 25.87$\pm$0.29 & 23.53$\pm$0.25 \\ WRN-16-2 & 5.92$\pm$0.16 & 5.97$\pm$0.13 & 25.71$\pm$0.20 & 24.74$\pm$0.31 \\ WRN-40-2 & 4.94$\pm$0.10 & 4.6$\pm$0.13 & 23.22$\pm$0.25 & 21.05$\pm$0.25 \\ \bottomrule \end{tabular} \end{adjustbox} \label{table:compare_FM_fuse} \end{subtable} \begin{subtable}{.5\textwidth} \centering \caption{Top-1 classification error rate of sub-network classifiers.} \begin{adjustbox}{width=0.95\linewidth} \begin{tabular}{ l | c c | c c} \toprule & \multicolumn{2}{|c|}{CIFAR-10} & \multicolumn{2}{|c}{CIFAR-100} \\ $(\%)$ & DualNet & FFL & DualNet & FFL \\ \midrule ResNet-32 & 8.23$\pm$0.31 & 6.06$\pm$0.15 & 34.91$\pm$1.23 & 27.06$\pm$0.34 \\ ResNet-56 & 7.34$\pm$0.25 & 5.58$\pm$0.13 & 32.67$\pm$1.14 & 24.85$\pm$0.30 \\ WRN-16-2 & 7.53$\pm$0.20 & 6.09$\pm$0.09 & 31.7$\pm$1.00 & 25.72$\pm$0.28 \\ WRN-40-2 & 6.25$\pm$0.14 & 4.75$\pm$0.16 & 28.4$\pm$0.61 & 22.06$\pm$0.20 \\ \bottomrule \end{tabular} \end{adjustbox} \label{table:compare_FM_sub} \end{subtable} \label{table:compare_FM} \end{table} \begin{table}[ht] \centering \caption{Ablation study of FFL. All models were trained on ResNet-32 and we evaluated each experiment with the top-1 error rate on the CIFAR-100 dataset.
We compare our proposed method (case A) to the case without the fusion module (case B), without the logit-ensemble KD (case C), and additionally without the fusion KD (case D).} \begin{adjustbox}{width=0.8\linewidth} \begin{tabular}{c| c c c | c c} \toprule & & & & \multicolumn{2}{|c}{CIFAR-100} \\ case & FM & EKD & FKD & Fused & Sub-network \\ \midrule A& $\Large{\checkmark}$ &$\Large{\checkmark}$ &$\Large{\checkmark}$ & 25.56$\pm$0.32 & 27.06$\pm$0.34 \\ B& {$\Large{\text{\ding{55}}}$} &$\Large{\checkmark}$ &$\Large{\checkmark}$ & 26.1$\pm$0.36 & 27.46$\pm$0.31 \\ C& $\Large{\checkmark}$ & {$\Large\text{\ding{55}}$}&$\Large{\checkmark}$ & 27.03$\pm$0.31 & 28.36$\pm$0.44 \\ D& $\Large{\checkmark}$ & {$\Large\text{\ding{55}}$}&{$\Large\text{\ding{55}}$} & 27.29$\pm$0.24 & 31.04$\pm$0.31 \\ \bottomrule \end{tabular} \end{adjustbox} \label{table:FFL_ablation} \end{table} \noindent \textbf{Ablation Study:} In FFL, we have taken a step forward from previous research by introducing the fusion module (FM) and mutual knowledge distillation (MKD), which is composed of the ensemble KD (EKD) and the fusion KD (FKD). In this part, we show the efficacy of our proposed methodology through an ablation study. Experiments were repeated 10 times on the CIFAR-100 dataset with two sub-networks based on the ResNet-32 architecture. The numbers in Table \ref{table:FFL_ablation} represent the top-1 test error rate. In the table, case A corresponds to our full FFL model, while case B is the case where the features are averaged, as in DualNet, instead of using our fusion module (FM). As expected, the error rates of the fused classifier and the sub-network classifiers increase by around $0.5\%p$ and $0.4\%p$ respectively. The next two rows, cases C and D, remove the effect of EKD and FKD sequentially. Without EKD (case C), the error rates of the fused and the sub-network classifiers increase by around $1.5\%p$ and $1.3\%p$ respectively, so EKD seems to have more influence on the fused one.
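The EKD and FKD terms ablated above can be written down concretely. With the softening temperature $T=3$ used in the experiments, each term is a standard softened-softmax KL distillation loss; in the sketch below the logit ensemble of the sub-networks teaches the fused classifier (EKD) and the fused classifier teaches each sub-network (FKD), matching which classifier each term influences most in the ablation. The plain mean over logits and the equal weighting of the terms are illustrative assumptions.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-softened softmax along the last (class) axis.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=3.0):
    # Standard softened-KL distillation term; the T**2 factor keeps
    # gradient magnitudes comparable across temperatures.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * (p * (np.log(p) - np.log(q))).sum(axis=-1).mean()

def mkd_losses(sub_logits, fused_logits, T=3.0):
    # EKD: the logit ensemble of the sub-networks teaches the fused
    # classifier.  FKD: the fused classifier teaches each sub-network.
    ensemble = sub_logits.mean(axis=0)
    ekd = kd_loss(fused_logits, ensemble, T)
    fkd = sum(kd_loss(s, fused_logits, T) for s in sub_logits)
    return ekd, fkd

rng = np.random.default_rng(0)
subs = rng.standard_normal((2, 4, 10))   # two sub-networks, batch 4, 10 classes
fused = rng.standard_normal((4, 10))
ekd, fkd = mkd_losses(subs, fused)
```

In training, these terms would be added to the ordinary cross-entropy losses of the sub-network and fused classifiers.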
When we additionally remove FKD (case D), the performance of the sub-network classifiers shows a sharp decline compared to that of the fused classifier. This can be interpreted as meaning that FKD has a significant impact on the performance of the sub-networks. \subsection{Comparison with Online ensemble Distillation} \label{ex2} Since ONE \cite{lan2018knowledge} cannot be applied to sub-networks with different architectures, we consider case 1, which uses sub-networks having the same architecture. \begin{table}[t] \centering \caption{Top-1 classification error rate on CIFAR-10 and CIFAR-100. (Mean classification error (\%) of 10 runs).} \begin{adjustbox}{width=0.75\linewidth} \begin{tabular}{ l |c | c | c } \hline & Method & CIFAR-10 & CIFAR-100 \\ \hline &vanilla & 6.93$\pm$0.17 & 30.95$\pm$0.43 \\ & ONE & 6.24$\pm$0.12 & 27.43$\pm$0.22 \\ & ONE+ & 6.20$\pm$0.12 & 27.45$\pm$0.30 \\ ResNet-32 & FFL-S & 6.19$\pm$0.12 & 27.03$\pm$0.14 \\\cline{2-4} & ONE-E & 6.07$\pm$0.17 & 25.84$\pm$0.27 \\ & ONE-E+ & 5.98$\pm$0.09 & 25.92$\pm$0.33 \\ & FFL & 5.98$\pm$0.12 & 25.45$\pm$0.28 \\ \hline &vanilla & 6.20$\pm$0.17 & 28.63$\pm$0.42 \\ & ONE & 5.62$\pm$0.13 & 25.42$\pm$0.17 \\ & ONE+ & 5.69$\pm$0.17 & 25.57$\pm$0.33 \\ ResNet-56 & FFL-S & 5.57$\pm$0.17 & 25.22$\pm$0.20 \\\cline{2-4} & ONE-E & 5.37$\pm$0.13 & 24.31$\pm$0.13 \\ & ONE-E+ & 5.40$\pm$0.17 & 24.36$\pm$0.35 \\ & FFL & 5.35$\pm$0.14 & 24.04$\pm$0.28 \\ \hline &vanilla & 6.45$\pm$0.11 & 28.79$\pm$0.29 \\ & ONE & 6.24$\pm$0.16 & 26.05$\pm$0.28 \\ & ONE+ & 6.30$\pm$0.06 & 26.23$\pm$0.24 \\ WRN-16-2 & FFL-S & 6.21$\pm$0.12 & 25.83$\pm$0.31 \\ \cline{2-4} & ONE-E & 6.16$\pm$0.20 & 25.07$\pm$0.30 \\ & ONE-E+ & 6.23$\pm$0.06 & 25.23$\pm$0.23 \\ & FFL & 6.14$\pm$0.11 & 24.70$\pm$0.33 \\ \hline &vanilla & 5.30$\pm$0.15 & 25.65$\pm$0.31 \\ & ONE & 4.94$\pm$0.13 & 22.37$\pm$0.17 \\ & ONE+ & 4.89$\pm$0.19 & 22.34$\pm$0.18 \\ WRN-40-2 & FFL-S & 4.83$\pm$0.11 & 22.23$\pm$0.28 \\ \cline{2-4} & ONE-E & 4.82$\pm$0.13 & 21.62$\pm$0.25 \\ & ONE-E+ & 4.75$\pm$0.18 & 21.64$\pm$0.14 \\ & FFL & 4.74$\pm$0.11 & 21.35$\pm$0.40 \\ \hline \end{tabular} \end{adjustbox} \label{table:one_cifar} \end{table} \noindent \textbf{CIFAR DataSet:} In this section, all experiments were performed on the CIFAR datasets. Only two branches were used to compare the performance of ONE and FFL. FFL needs a fusion module to combine features, while ONE needs a gate module. Since the fusion module requires more parameters than the gate module, for fairness we matched the number of parameters by stacking residual blocks in front of the gate module. ONE in Table \ref{table:one_cifar} shows the average performance of the two branches, and ONE-E is the performance of the gated ensemble teacher. ONE-E+ is the performance of the gated ensemble teacher with increased parameters, which has a number of parameters similar to that of FFL. FFL-S represents the average value of the sub-networks, and FFL indicates the performance of the fused classifier. Vanilla shows the performance of the original network trained only with cross-entropy. In the case of FFL-S, since the other branch and the fusion module are removed, the number of parameters is equal to that of ONE and Vanilla. Table \ref{table:Memory} shows the number of parameters used in the experiment, and the FM ratio is the rate of increase in the number of parameters due to the fusion module compared to ONE-E. The table shows that FM increases the number of parameters by up to 4\%. In both the ResNet and WRN series, ONE, ONE+, and FFL-S have better performance than the Vanilla network, as shown in Table \ref{table:one_cifar}. Unlike DualNet, FFL improves the performance of the sub-networks, so it has many advantages similar to ONE. On CIFAR-10, all three methods show similar performance improvements over Vanilla. The comparison of ONE-E and ONE-E+ shows that increasing the number of parameters of the gate module does not improve the performance.
On CIFAR-100, too, no performance improvement from increasing the parameters of the gate module can be seen. On the other hand, the performance of FFL-S and FFL improves by an average of around $0.24\%p$ and $0.33\%p$ compared with ONE and ONE-E. \begin{table}[t] \centering \caption{The number of parameters in Millions for each method on CIFAR-100. FM Ratio shows the relative increase in the number of parameters in FFL compared to that of ONE-E.} \begin{adjustbox}{width=0.9\linewidth} \begin{tabular}{ c | c c c c} \toprule Method & \multicolumn{4}{c}{Net Types} \\ & ResNet-32 & ResNet-56 & WRN-16-2 & WRN-40-2 \\ \midrule Vanilla & 0.47M & 0.86M & 0.70M &2.26M \\ DualNet & 0.97M & 1.75M & 1.45M & 4.55M \\ ONE-E & 0.83M & 1.52M & 1.24M & 3.98M \\ FFL & 0.85M & 1.54M & 1.29M & 4.03M \\ FM Ratio & 2.4\% & 1.3\% & 4.0\% & 1.3\% \\ ONE-E+ & 0.85M & 1.54M & 1.32M & 4.05M \\ \bottomrule \end{tabular} \end{adjustbox} \label{table:Memory} \end{table} \noindent \textbf{Branch Expansion:} FFL generally learns with two branches, like DualNet. Since the fusion module concatenates feature maps, FFL can also be trained with more branches, like ONE. In this experiment, we use three branches for FFL to show the possibility of expanding the branches. The experiments are conducted with ResNet-32 and ResNet-56 on CIFAR-100. All conditions were the same as for ONE. Table \ref{table:branch} shows results with 3 branches similar to those of the 2-branch experiments. We can confirm that the feature fusion method improves the performance even when the number of branches is increased. \begin{table}[t] \centering \caption{Top-1 classification error rate with 3 branches. The numbers are from 10 runs of experiments and show the best values as in \cite{srivastava2015training}.
``*'' represents reported results in \cite{lan2018knowledge}.} \begin{adjustbox}{width=0.9\linewidth} \begin{tabular}{ l | c | c } \toprule & ResNet-32 & ResNet-56 \\ \midrule ONE & 26.64 (26.94$\pm$0.21) \{26.61*\} & 24.63 (25.10$\pm$0.29) \\ FFL-S & 26.3 (26.66$\pm$0.21) & 24.51 (24.85$\pm$0.31) \\ \cline{1-3} ONE-E & 24.75 (25.19$\pm$0.20) \{24.63*\} & 23.27 (23.59$\pm$0.24) \\ FFL & 24.31 (24.82$\pm$0.33) & 23.20 (23.43$\pm$0.19) \\ \bottomrule \end{tabular} \end{adjustbox} \label{table:branch} \end{table} \noindent \textbf{ImageNet DataSet:} The experiments on ImageNet with ResNet-34 show a tendency similar to those on the CIFAR datasets. Both ONE and FFL have better performance than Vanilla, as shown in Table \ref{table:imagenet}. ONE and FFL-S have quite similar performance. Regarding the fused classifiers, the feature-based teacher shows better performance than the logit-based teacher. This indicates that our method can also be applied to a large-scale image dataset. \begin{table}[t] \centering \caption{Top-1 and Top-5 classification error rate on ImageNet. We report the average performance of two branch outputs with standard deviation as in \cite{lan2018knowledge}.} \begin{adjustbox}{width=0.7\linewidth} \begin{tabular}{ l| c | c | c } \toprule &Method & Top-1 & Top-5 \\ \hline &vanilla &26.69 &8.58 \\ &ONE &25.61$\pm$0.02 & 7.96$\pm$0.02 \\ ResNet-34&FFL-S & 25.58$\pm$0.06 &7.95$\pm$0.06 \\\cline{2-4} &ONE-E & 24.48 &7.31 \\ &FFL & 23.91 &7.17 \\ \bottomrule \end{tabular} \end{adjustbox} \label{table:imagenet} \end{table} \begin{table}[t] \centering \caption{Top-1 classification error rate on CIFAR-100.
(Mean classification error (\%) of 10 runs).} \begin{adjustbox}{width=1\linewidth} \begin{tabular}{ c c | c c | c c } \toprule \multicolumn{2}{c|}{Net Types} & \multicolumn{2}{|c|}{DML} & \multicolumn{2}{|c}{FFL} \\ Net 1 & Net 2 & Net 1 & Net 2 & Net 1 & Net 2 \\ \midrule ResNet-32 & WRN-16-2 & 28.31$\pm$0.28 & 26.45$\pm$0.30 & 27.06$\pm$0.26& 25.93$\pm$0.30 \\ ResNet-56 & WRN-40-2 & 26.75$\pm$0.21 & 23.33$\pm$0.27 & 26.23$\pm$0.30 & 23.06$\pm$0.43 \\ \bottomrule \end{tabular} \end{adjustbox} \label{table:MLM} \end{table} \begin{figure*}[t] \centering \includegraphics[width = 0.88\linewidth]{qa.jpg}\\ \caption{We compare the Grad-CAM \cite{selvaraju2017grad} visualizations of the fusion module and the two sub-networks with the vanilla network (ResNet-34) using the ImageNet dataset.} \label{fig:visual} \end{figure*} \subsection{Comparison with Mutual Learning Method} \label{ex3} In the previous experiments, the sub-networks had to have the same architecture due to the architectures of the compared methods. In the case of DML, it is advantageous to be able to train sub-networks having different architectures. In this experiment, we compare the performance on the CIFAR-100 dataset with combinations of two sub-networks having different architectures (case 2). The first combination, ResNet-32 and WRN-16-2, has a relatively low depth, and the second, ResNet-56 and WRN-40-2, is deeper. Table \ref{table:MLM} shows that all networks of the two combinations using the FFL method are better than those of DML. FFL also obtains a stronger teacher (the fused classifier), whose feature maps require less than 4\% additional parameters compared to the parameters used in DML. In FFL, the errors of the fused classifier for the first and second combinations are 24.23$\pm$0.25 and 22.20$\pm$0.21 respectively. This experiment shows that the FFL method can be applied even when the sub-networks have different architectures.
\subsection{Qualitative analysis} \label{ex4} We aim to give insight into how our FFL method contributes to the performance of our model by analyzing the feature-map outputs. We have created heatmaps of features from four different networks: the fusion module, the two sub-networks, and an independently trained ResNet-34 network. We applied the Grad-CAM \cite{selvaraju2017grad} algorithm, which visualizes the regions that the network considers important, to discover how our model makes use of the features. Figure \ref{fig:visual} shows the Grad-CAM visualizations from each network for the class with the highest probability, together with that class. Columns 1-2 show cases in which both the networks of our model and ResNet-34 predict the correct class. Columns 3-6 are cases where ours gets the correct answer but the vanilla network does not. Columns 7-9 show that the feature maps of the fusion module and the sub-networks are very similar and predict the same class even when they give a wrong answer. We have observed that the networks of our model detect the correct object better than ResNet-34. Even when both ResNet-34 and our three networks predict the same correct answer, ours have higher confidence (first two columns of Figure \ref{fig:visual}). Also, we have discovered that the heatmaps of the sub-networks have a tendency to mimic the heatmap of the fusion module. This implies that the sub-networks are greatly influenced by the fusion module and vice versa. This is mainly due to the mutual knowledge distillation between the sub-networks and the fusion module, which transfers softened probabilities that carry rich information about the relative probabilities of incorrect answers. \section{Discussion} \noindent \textbf{Applicability for other tasks:} In addition to image classification, various other vision tasks use feature maps in various ways.
For example, in the object detection task, Faster R-CNN learns a region proposal network (RPN) and a recognition classifier on the feature maps of a pre-trained backbone network \cite{ren2015faster}. In the case of image segmentation, \cite{chen2017rethinking} uses the feature maps of a pre-trained network, applying atrous convolution to extract dense features. Also, in the image style transfer task, the perceptual loss uses the feature maps of a pre-trained network \cite{johnson2016perceptual}. In this respect, a teacher which can generate meaningful feature maps has more applicability to other tasks than a teacher that consists of gated logits. \section{Conclusion} In this work, we propose a feature fusion method using online mutual knowledge distillation. Unlike existing feature fusion methods, it focuses not only on the performance of the fused classifier but also on the performance of the sub-networks, and the sub-networks can be deployed as needed. Moreover, there is no constraint on the architecture of the sub-networks, so the features of different sub-networks can be fused. The fusion module generates meaningful features while adding less than 4\% of additional parameters. From various perspectives, we demonstrated the effectiveness of FFL through experiments on three datasets. {\small \bibliographystyle{ieee}
\section{Introduction} \label{sec:f1} The structure of the Donaldson invariants of $4$-manifolds has been worked out by Kronheimer and Mrowka~\cite{KM} and Fintushel and Stern~\cite{FS} for a large class of $4$-manifolds (those of simple type with $b_1=0$, $b^+>1$), making use of universal relations coming from embedded surfaces. In order to analyse general $4$-manifolds, we first need to set up the right framework for getting enough universal relations. It is the purpose of this work to do so by using the Fukaya-Floer homology of the three-manifold $Y=\Sigma \times {{\Bbb S}}^1$, the product of a surface and a circle. This is obviously not the only way, but it already gives new results. Donaldson invariants for a $4$-manifold $X$ with $b^+>1$ are defined as linear functionals $$ D^w_X: \AA(X)= \text{Sym}^*(H_0(X) \oplus H_2(X)) \otimes \L^* H_1(X) \rightarrow {\Bbb C}, $$ where $w \in H^2(X;{\Bbb Z})$. For the homology $H_*(X)$ we shall always understand complex coefficients. $\AA(X)$ is graded by giving degree $4-i$ to the elements in $H_i(X)$. There is a slight difference between our definition of $\AA(X)$ and that of Kronheimer and Mrowka~\cite{KM}, as we do not consider $3$-homology classes (the techniques contained here are not well suited to deal with these classes). We say that $X$ is of {\bf $w$-simple type} when $D^w_X((x^2-4)z)=0$, for any $z\in \AA(X)$. If $X$ has $b_1=0$ and is of $w$-simple type, then it is of $w'$-simple type for any other $w'$, and it is said to be of simple type, for brevity. Analogously, we say that $X$ is of {\bf $w$-finite type} when there is some $n \geq 0$ such that $D^w_X((x^2-4)^nz)=0$, for any $z\in \AA(X)$. The order is the minimum such $n$, so order $1$ means simple type and order $0$ means that the Donaldson invariants are identically zero. $X$ is of finite type if it is of $w$-finite type for any $w$.
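As a point of reference, for a $4$-manifold $X$ of simple type with $b_1=0$ and $b^+>1$, the structure theorem of~\cite{KM} asserts that there are finitely many {\em basic classes} $K_1, \ldots, K_r \in H^2(X;{\Bbb Z})$ and non-zero rational numbers $a_1, \ldots, a_r$ such that $$ D^w_X\left(\left(1+{\textstyle{x\over 2}}\right)e^{\a}\right)= \exp\left({Q(\a) \over 2}\right) \sum_{s=1}^{r} (-1)^{(w^2+K_s\cdot w)/2}\, a_s\, e^{K_s \cdot \a}, \qquad \a \in H_2(X), $$ where $Q$ denotes the intersection form of $X$. The framework developed here is intended to produce universal relations refining expressions of this kind, for manifolds not necessarily of simple type.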
For completeness, we introduce the notion of $X$ being of {\bf $w$-strong simple type} when $D^w_X((x^2-4)z)=0$, for any $z\in \AA(X)$, and $D^w_X(\gamma z)=0$, for any $\gamma \in H_1(X)$ and any $z\in \AA(X)$. This condition is the right one for extending the concept of simple type from the case $b_1=0$ to the case $b_1>0$: it gives the same structure theorem for the invariants as presented in~\cite{KM}. If we introduced $3$-homology classes, we would have (potentially) different definitions, but we shall not deal with that issue here. Also, the order of $w$-finite type of $X$ does not depend on $w$ (see~\cite{otro}). Let $\S=\S_g$ be a Riemann surface of genus $g\geq 1$ and consider the three-manifold $Y=\Sigma \times {{\Bbb S}}^1$. In~\cite{floer} we computed the ring structure of the (instanton) Floer (co)homology of $Y$, together with the $SO(3)$-bundle with $w_2=\text{P.D.}[{\Bbb S}^1]$. This gadget encodes all the relations $R\in \AA(\S)$ satisfied by all $4$-manifolds $X$ containing an embedded surface $\S$ representing an odd homology class and with $\S^2=0$. More accurately, for such $X$, $D^w_X(R z)=0$, for any $z\in \AA(\S^{\perp})$ and $w \in H^2(X;{\Bbb Z})$ with $w \cdot\S \equiv 1\pmod 2$. This is so since we have a decomposition $X=X_1\cup_Y A$, where $A$ is a tubular neighbourhood of $\S$, and we can consider $R \in \AA(A)$ and $z \in \AA(X_1)$. Then the (relative) Donaldson invariants for $A$ corresponding to $R$ already vanish. In order to drop the condition $z\in \AA(\S^{\perp})$, the useful space to work in is no longer the Floer homology, but the extension developed by Fukaya~\cite{Fukaya}~\cite{HFF} and known as Fukaya-Floer homology. The latter deals with $2$-cycles in $X$ cutting $Y$ non-trivially. In our case, as $X=X_1\cup_Y A$, the only possibility for the intersection of a $2$-cycle of $X$ with $Y$ is $n{\Bbb S}^1 \subset Y=\Sigma \times {{\Bbb S}}^1$.
Here we extend the arguments of~\cite{floer} to find (to a large extent) the structure of the Fukaya-Floer (co)homology $HFF^*(Y,n{\Bbb S}^1)$ (with the same $SO(3)$-bundle as above). This will set up the background work necessary to give a structure theorem for the Donaldson invariants of manifolds not of simple type~\cite{Kr}, work which will be carried out in the future. Such a structure theorem was conjectured in~\cite{Kr} and, presumably, it might follow from the arguments given in~\cite{KM}~\cite{FS}. The first result in this direction is the finite type condition for all $4$-manifolds with $b^+>1$, which we prove. Fr{\o}yshov~\cite{froyshov} and Wieczorek~\cite{wieczorek} have given alternative proofs valid only for simply connected $4$-manifolds. On the other hand, there is another possibility for the Fukaya-Floer homology of $Y$, namely cutting with $\d \subset \S \subset Y$, $\d$ primitive in homology. For completeness, we also determine the structure of $HFF_*(Y,\d)$ and, as an application, we show that a connected sum of two $4$-manifolds with $b_1=0$ along a surface (which represents an odd homology class and has self-intersection zero) is of simple type, and we give serious constraints on its basic classes, along the lines of~\cite{genus2}~\cite{genusg}. Finally, we prove that the product of two Riemann surfaces is of strong simple type using these techniques and give its Donaldson invariants. The basic classes coincide with the Seiberg-Witten basic classes, as expected. The paper is organised as follows. In sections~\ref{sec:f2} and~\ref{sec:f3} we review the construction of the Floer homology and Fukaya-Floer homology of a three-manifold with $b_1 \neq 0$. Then in section~\ref{sec:f4} we recall, for the convenience of the reader, the structure of the Floer cohomology $HF^*(\Sigma \times {{\Bbb S}}^1)$.
Section~\ref{sec:f5} is devoted to determining the Fukaya-Floer cohomology corresponding to the $1$-cycle ${\Bbb S}^1\subset Y$, $HFF^*(\Sigma \times {{\Bbb S}}^1,{\Bbb S}^1)$, which is finally given in terms of the relations (which are partially determined) satisfied by the generators. On the other hand, the Fukaya-Floer cohomology $HFF^*(\Sigma \times {{\Bbb S}}^1,\d)$ corresponding to $\d \subset \S \subset \Sigma \times {{\Bbb S}}^1$ is given in section~\ref{sec:f6}. The applications already mentioned are collected in section~\ref{sec:f7}. \section{Review of Floer homology} \label{sec:f2} In this section we are going to review the construction of the Floer homology groups of a $3$-manifold $Y$ with $b_1 >0$, endowed with an $SO(3)$-bundle $P$ with second Stiefel-Whitney class $w_2 =w_2(P) \neq 0 \in H^2(Y;{\Bbb Z}/2{\Bbb Z})$. Recall that $w_2$ determines $P$ uniquely. To be more precise, we are going to suppose that $w_2$ has an integral lift (i.e.\ that the $SO(3)$-bundle lifts to a $U(2)$-bundle). This case is in contrast with the case of the Floer homology of rational homology spheres. All the facts stated here are well known. For a full treatment and proofs see~\cite{Fl}~\cite{Don1}~\cite{Furuta} (for the case of the Floer homology of rational homology spheres see~\cite{Braam}). We shall use complex coefficients for the Floer homology (although it is usually developed over the integers). \subsection{Floer homology} As $w_2 \neq 0 \in H^2(Y;{\Bbb Z}/2{\Bbb Z})$, there are no reducible flat connections on $P$. We say that $P$ is free of flat reductions. Possibly after a small perturbation of the flatness equations, there will be finitely many flat connections $\rho_j$, and they will all be non-degenerate. The Floer complex $CF_*(Y)$ is the complex vector space with basis given by the $\rho_j$, with a ${\Bbb Z}/4{\Bbb Z}$-grading given by the index~\cite{Don1}~\cite{Don2}. Actually, this grading is only defined up to the addition of a constant.
The complex $CF_*(Y)$ depends on $w_2$, but in general we will not express this in the notation. Now, for every two flat connections $\rho_k$ and $\rho_l$ there is a moduli space ${\cal M}(\rho_k, \rho_l)$ of (perturbed) ASD connections on the tube $Y \times {\Bbb R}$ with limits $\rho_k$ and $\rho_l$. There is an ${\Bbb R}$-action by translations and ${\cal M}_0(\rho_k, \rho_l)$ shall stand for the quotient ${\cal M}(\rho_k, \rho_l)/{\Bbb R}$. This space has components ${\cal M}_0^D(\rho_k, \rho_l)$ of dimensions $D \equiv \text{ind}(\rho_k)-\text{ind}(\rho_l)-1 \pmod 4$, and can be oriented in a compatible way~\cite{Fl}. When $\text{ind}(\rho_l)=\text{ind}(\rho_k)-1$, there is a compact zero-dimensional moduli space ${\cal M}_0^0(\rho_k, \rho_l)$, for which the algebraic number of points $\# {\cal M}_0^0(\rho_k, \rho_l)$ is defined. The boundary map of the Floer complex is then \begin{eqnarray*} \partial: CF_i(Y) & \rightarrow & CF_{i-1}(Y) \\ \rho_k & \mapsto & \hspace{-5mm} \sum_{\rho_l \atop \text{ind}(\rho_l)=\text{ind}(\rho_k)-1}\hspace{-5mm} \# {\cal M}_0^0(\rho_k, \rho_l) \rho_l \end{eqnarray*} To see that $(CF_*(Y), \partial)$ defines a complex we need to check that $\partial^2=0$ (see~\cite{Don1}~\cite{Furuta}). For that, consider flat connections $\rho_k$ and $\rho_l$ such that $\text{ind}(\rho_l)=\text{ind}(\rho_k)-2$. Then the moduli space ${\cal M}_0^1(\rho_k, \rho_l)$ is a smooth one-dimensional manifold which can be compactified by adding the broken instantons in \begin{equation} \bigcup_{\rho_m \atop \text{ind}(\rho_m)=\text{ind}(\rho_k)-1}\hspace{-5mm} {\cal M}_0^0(\rho_k, \rho_m) \times {\cal M}_0^0(\rho_m, \rho_l). \label{eqn:f2.1} \end{equation} So this compactification, $\overline{{\cal M}}_0^1(\rho_k, \rho_l)$, is a manifold with boundary given by~\eqref{eqn:f2.1}. The homology class of the boundary is zero, i.e.
$$ \sum_{\rho_m \atop \text{ind}(\rho_m)=\text{ind}(\rho_k)-1}\hspace{-5mm} \# {\cal M}_0^0(\rho_k, \rho_m) \cdot \# {\cal M}_0^0(\rho_m, \rho_l) =0, $$ or equivalently, $\partial^2=0$. We define the Floer homology $HF_*(Y)$ as the homology of this complex $(CF_*(Y),\partial)$ (see~\cite{Fl}). It can be proved that these groups do not depend on the metric of $Y$ or on the chosen perturbation of the ASD equations. The groups $HF_*(Y)$ are natural under diffeomorphisms of the pair $(Y,P)$. The Floer cohomology $HF^*(Y)$ is defined analogously out of the dual complex $CF^*(Y)$ and it is naturally isomorphic to $HF_{c-*}(\overline Y)$, where $\overline Y$ denotes $Y$ with reversed orientation ($c$ is a constant that we need to introduce due to the indeterminacy of the grading). The natural pairing $HF_*(Y) \otimes HF^*(Y) \to {\Bbb C}$ yields the pairing $\langle,\rangle: HF_*(Y) \otimes HF_{c-*} (\overline Y) \to {\Bbb C}$. It is worth noticing that when $Y$ has an orientation-reversing diffeomorphism, i.e. $Y\cong \overline Y$, we have a pairing \begin{equation} \langle,\rangle: HF_*(Y) \otimes HF_{c-*} (Y) \to {\Bbb C}. \label{eqn:f2.2} \end{equation} \subsection{Action of $H_*(Y)$ on $HF_*(Y)$} Let $\a \in H_{3-i}(Y)$. We have cycles $V_{\a}$, in the moduli spaces ${\cal M}(\rho_k, \rho_l)$, of codimension $i+1$, representing $\mu(\a \times \text{pt})$, for $\a\times\text{pt} \subset Y\times{\Bbb R}$, much in the same way as in the case of a closed manifold~\cite{DK}~\cite{KM}. Using them, we construct a map \begin{eqnarray*} \mu(\a): CF_j(Y) & \rightarrow & CF_{j-i-1}(Y) \\ \rho_k & \mapsto & \hspace{-5mm} \sum_{\rho_l \atop \text{ind}(\rho_l)=\text{ind}(\rho_k)-i-1} \hspace{-5mm}(\# {\cal M}^{i+1}(\rho_k, \rho_l) \cap V_{\a}) \, \rho_l \end{eqnarray*} (note that this time we do not quotient by the translations as the cycles $V_{\a}$ are not translation invariant).
For $\text{ind}(\rho_l)=\text{ind}(\rho_k)-i-2$ consider the $1$-dimensional moduli space ${\cal M}^{i+2}(\rho_k, \rho_l) \cap V_{\a}$, and looking at the number of points in the boundary of its compactification, as we did before, we get that $$ \sum_{\rho_s \atop \text{ind}(\rho_s)=\text{ind}(\rho_l)+1} \hspace{-7mm} (\# {\cal M}^{i+1}(\rho_k, \rho_s) \cap V_{\a}) \cdot \# {\cal M}^0_0(\rho_s,\rho_l) + \hspace{-5mm} \sum_{\rho_s \atop \text{ind}(\rho_s)=\text{ind}(\rho_k)-1} \hspace{-7mm} \# {\cal M}^0_0(\rho_k, \rho_s) \cdot ( \# {\cal M}^{i+1}(\rho_s,\rho_l) \cap V_{\a}) $$ vanishes, or equivalently, $\partial \circ \mu(\a) +\mu(\a)\circ \partial=0$. So $\mu(\a)$ descends to a map $$ \mu(\a) : HF_*(Y) \to HF_{*-i-1}(Y). $$ \subsection{Products in Floer homology} Another useful construction, which is outlined in~\cite{Don1}, is the following. Suppose that we have an (oriented) four-dimensional cobordism $X$ between two closed oriented $3$-manifolds $Y_1$ and $Y_2$, i.e. $X$ is a $4$-manifold with boundary $\partial X=Y_1 \sqcup \overline Y_2$. Suppose that we have an $SO(3)$-bundle $P_X$ over $X$ such that $P_1=P_X|_{Y_1}$ and $P_2=P_X|_{Y_2}$ satisfy $w_2(P_i)\neq 0$, $i=1,2$, so that we have defined the Floer homologies of $(Y_1,P_1)$ and $(Y_2,P_2)$. Furnishing $X$ with two cylindrical ends, the cobordism $X$ gives a map \begin{eqnarray*} \P_X: CF_*(Y_1) &\rightarrow & CF_*(Y_2) \\ \rho_k & \mapsto & \sum_{\rho'_l} \# {\cal M}^0(X,\rho_k, \rho'_l) \rho'_l, \end{eqnarray*} where ${\cal M}(X,\rho_k, \rho'_l)$ is the moduli space of (perturbed) ASD connections on $X$ with flat limits $\rho_k$ on the $Y_1$ side and $\rho'_l$ on the $Y_2$ side. We can check along the lines above that $\partial \circ \P_X + \P_X \circ \partial =0$, so that $\P_X$ defines a map $\P_X:HF_*(Y_1) \rightarrow HF_*(Y_2)$. Also note that if $\a_1 \in H_*(Y_1)$ and $\a_2 \in H_*(Y_2)$ define the same homology class in $X$, then $\mu(\a_2) \circ \P_X=\P_X\circ \mu(\a_1)$. 
On the other hand, suppose that $Y_1$ and $Y_2$ are oriented $3$-manifolds and $P_1$ and $P_2$ are $SO(3)$-bundles with $w_2(P_i)\neq 0$, $i=1,2$. Consider the $SO(3)$-bundle $P=P_1\sqcup P_2$ over $Y=Y_1 \sqcup Y_2$. Every flat connection on $P$ is of the form $\t_{ll'}=(\rho_l,\rho'_{l'})$ and $\text{ind} (\t_{ll'})= \text{ind} (\rho_l) +\text{ind}(\rho'_{l'})$. So we have naturally $CF_*(Y)=CF_*(Y_1) \otimes CF_*(Y_2)$. For $\text{ind} (\t_{kk'})=\text{ind} (\t_{ll'})+1$, if ${\cal M}^0_0(\t_{kk'},\t_{ll'})$ is not empty, then either $\rho_k=\rho_l$ and $$ {\cal M}^0_0(\t_{kk'},\t_{ll'})={\cal M}^0_0((\rho_k,\rho'_{k'}),(\rho_l,\rho'_{l'}))= \{\rho_k\} \times {\cal M}^0_0(\rho'_{k'},\rho'_{l'}) $$ or $\rho'_{l'}=\rho'_{k'}$ and ${\cal M}^0_0(\t_{kk'},\t_{ll'})={\cal M}^0_0(\rho_k,\rho_l) \times\{\rho'_{k'}\}$. This proves that $\partial_{CF_*(Y)}=\partial_{CF_*(Y_1)}+\partial_{CF_*(Y_2)}$ and therefore $HF_*(Y)= HF_*(Y_1) \otimes HF_*(Y_2)$. Putting the above together, a product for $HF_*(Y)$ might arise as follows. Suppose that there is a cobordism between $Y \sqcup \, Y$ and $Y$, i.e. an oriented $4$-manifold $X$ with boundary $\partial X= Y\sqcup Y \sqcup \overline Y$. Then there is a map $$ HF_*(Y) \otimes HF_*(Y) \rightarrow HF_*(Y). $$ In some particular cases, this gives an associative and graded commutative ring structure on $HF_*(Y)$. We shall prove it for the particular $3$-manifold $Y=\Sigma \times {{\Bbb S}}^1$ using an argument along different lines (see section~\ref{sec:f4}). \subsection{Relative invariants of $4$-manifolds} Our purpose now is to recall the definition of Donaldson invariants of an (oriented) $4$-manifold $X$ with boundary $\partial X=Y$, for any $w \in H^2(X;{\Bbb Z})$ such that $w|_Y=w_2 \in H^2(Y;{\Bbb Z}/2{\Bbb Z})$. These invariants will not be numerical (in contrast with the case of a closed $4$-manifold), instead they live in the Floer homology $HF_*(Y)$. 
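Before passing to relative invariants, a purely algebraic toy example may help fix ideas: the identities $\partial^2=0$ and $\partial \circ \mu(\a) +\mu(\a)\circ \partial=0$ established above are exactly what make the homology and the induced maps well defined. The following finite-dimensional sketch checks both identities for a small complex and computes the dimension of its homology; the matrices are illustrative numbers only, with none of the gauge-theoretic analysis that produces the actual counts.

```python
import numpy as np

# A toy finite-dimensional analogue: a "boundary" D with D^2 = 0 and an
# operator M anticommuting with D, so that M descends to homology.
# In Floer theory the entries would be signed counts of points in
# zero-dimensional moduli spaces.
D = np.zeros((4, 4))
D[0, 2] = 1.0                        # a single pair of generators is "connected"
M = np.diag([1.0, 0.0, -1.0, 0.0])   # plays the role of mu(alpha)

assert np.allclose(D @ D, 0)             # boundary squared is zero
assert np.allclose(D @ M + M @ D, 0)     # mu anticommutes with the boundary

def homology_dim(d):
    # dim(ker d) - dim(im d) = (n - rank d) - rank d
    n = d.shape[1]
    r = np.linalg.matrix_rank(d)
    return n - 2 * r

print(homology_dim(D))  # -> 2
```

Since $M$ anticommutes with $D$, it maps $\ker D$ to $\ker D$ and $\operatorname{im} D$ to $\operatorname{im} D$, so it induces a well-defined operator on the two-dimensional homology, just as $\mu(\a)$ does on $HF_*(Y)$.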
We give $X$ a cylindrical end modelled on $Y \times [0,\infty)$ and denote it by $X$ again (no confusion should arise from this). We have moduli spaces ${\cal M}(X, \rho_l)$ of (perturbed) ASD connections with finite action and asymptotic to $\rho_l$. ${\cal M}(X, \rho_l)$ has components ${\cal M}^D(X, \rho_l)$ of dimensions $D \equiv \text{ind}(\rho_l)+ C \pmod 4$, for some fixed constant $C$ depending only on $X$. The spaces ${\cal M}(X, \rho_l)$ can be oriented coherently and, for $z=\a_1 \a_2 \cdots \a_r \in \AA(X)$ of degree $d$, we can choose (generic) cycles $V_{\a_i} \subset {\cal M}(X, \rho_l)$ representing $\mu(\a_i)$, so that we have a well-defined element $$ \phi^w(X,z)=\sum_{\rho_l \atop \text{ind}(\rho_l)+C=d} (\# {\cal M}^d(X, \rho_l) \cap V_{\a_1} \cap \cdots \cap V_{\a_r}) \,\rho_l \in CF_*(Y). $$ This element has boundary zero (we encourage the reader to verify this fact) and hence it defines a homology class, called the {\bf relative invariants} of $X$ and denoted again by $\phi^w(X,z)$, in $HF_*(Y)$ (see~\cite{Don1}~\cite{Furuta}). Once the relative invariants are defined, we have a gluing theorem for them. Suppose we are in the situation of a closed $4$-manifold $X=X_1 \cup_Y X_2$, obtained as the union of two $4$-manifolds with boundary, where $\partial X_1=Y$ and $\partial X_2=\overline Y$. Let $w \in H^2(X; {\Bbb Z})$ with $w|_Y=w_2 \in H^2(Y;{\Bbb Z}/2{\Bbb Z})$ as above (this implies in particular $b^+(X)>0$, so the Donaldson invariants of $X$ are defined; in the case $b^+=1$, relative to chambers~\cite{Kotschick}~\cite{wall}). We need another bit of terminology from~\cite{genus2}. \begin{defn} \label{def:f2.allowable} $(w,\S)$ is an {\bf allowable} pair if $w, \S \in H^2(X; {\Bbb Z})$, $w \cdot \S \equiv 1\pmod 2$ and $\S^2 =0$. Then we define $D^{(w,\S)}_X=D^w_X +D^{w+\S}_X$.
\end{defn} Usually, for $X=X_1\cup_Y X_2$, we have $w \in H^2(X;{\Bbb Z})$ with $w|_Y=w_2$ as above, and $\S \in H^2(X;{\Bbb Z})$ whose Poincar\'e dual lies in the image of $H_2(Y;{\Bbb Z}) \rightarrow H_2(X;{\Bbb Z})$, and satisfies $w \cdot \S \equiv 1\pmod 2$. Then $(w,\S)$ is an allowable pair. The series $D^{(w,\Sigma)}_X$ behaves in much the same way as the Kronheimer-Mrowka~\cite{KM} series ${\Bbb D}^w_X(\a)=D^w_X((1+{x\over 2})e^{\a})$ (they are equivalent for manifolds of simple type with $b_1=0$ and $b^+>1$, see~\cite{genusg} for an explicit formula), but it is a more efficient way of packaging the information in general. When $b^+=1$, the Donaldson invariants depend on the choice of metric for $X$. In general, we shall consider a family of metrics $g_R$, $R>1$, giving a neck of length $R$, i.e. $X=X_1 \cup (Y\times [0,R]) \cup X_2$, where the metrics on $X_1$ and $X_2$ are fixed and the metric on $Y\times [0,R]$ is of the form $g_Y +dt\otimes dt$, for a fixed metric $g_Y$ on $Y$. Then for large enough $R$ (depending on the degree of $z\in\AA(X)$), the metrics $g_R$ stay within a fixed chamber and $D^w_X(z)$ is well defined. We shall refer to these metrics as {\em metrics on $X$ giving a long neck}. Note that in this case $\phi^w(X_i,z_i)$ also depends on the metric on $X_i$. \begin{thm} \label{thm:f2.Floer} Let $X=X_1 \cup_Y X_2$ be as above and $w \in H^2(X;{\Bbb Z})$ with $w|_Y=w_2$. Take $\S \in H^2(X;{\Bbb Z})$ whose Poincar\'e dual lies in the image of $H_2(Y;{\Bbb Z}) \rightarrow H_2(X;{\Bbb Z})$, and satisfies $w \cdot \S \equiv 1\pmod 2$. Put $w_i =w|_{X_i} \in H^2(X_i;{\Bbb Z})$. For $z_i \in \AA(X_i)$, $i=1,2$, we have $$ D^{(w,\Sigma)}_X(z_1\,z_2)=\langle\phi^{w_1}(X_1,z_1), \phi^{w_2}(X_2,z_2)\rangle. $$ When $b^+=1$ the invariants are calculated for metrics on $X$ giving a long neck. \end{thm} This is a standard and well-known fact~\cite{Don2}. The only not-so-standard fact is the appearance of $(w,\S)$.
This is so because we are working with $SO(3)$-Floer theory instead of $U(2)$-Floer theory, which would give Floer groups graded modulo $8$. When we glue the $SO(3)$-bundles over $X_1$ and $X_2$ with second Stiefel-Whitney classes $w_1$ and $w_2$ we can do it in different ways, as there is a choice of gluing automorphism of the bundles along $Y$; both $w$ and $w+\S$ are possibilities for the resulting $SO(3)$-bundle, and the indices of the two differ by $4$ (see~\cite{Don1}~\cite{Don2}). In general we shall write $$ \phi^w(X,e^{t\a}) = \sum_d {\phi^w(X,\a^d) \over d!}t^d, $$ as an element living in $HF_*(Y)\otimes {\Bbb C}[[t]]$. Theorem~\ref{thm:f2.Floer} can be rewritten as \begin{thm} \label{thm:f2.Floer.series} Let $X=X_1 \cup_Y X_2$ be as above and $w \in H^2(X;{\Bbb Z})$ with $w|_Y=w_2$. Take $\S \in H^2(X;{\Bbb Z})$ whose Poincar\'e dual lies in the image of $H_2(Y;{\Bbb Z}) \rightarrow H_2(X;{\Bbb Z})$, and satisfies $w \cdot \S \equiv 1\pmod 2$. Put $w_i =w|_{X_i} \in H^2(X_i;{\Bbb Z})$. Then for $\a_i \in H_2(X_i)$, $i=1,2$, $$ D^{(w,\Sigma)}_X(e^{t(\a_1+\a_2)}) =\langle\phi^{w_1}(X_1,e^{t\a_1}),\phi^{w_2}(X_2,e^{t\a_2})\rangle. $$ When $b^+=1$ the invariants are calculated for metrics on $X$ giving a long neck. \end{thm} \section{Review of Fukaya-Floer homology} \label{sec:f3} Now we pass on to the definition of the Fukaya-Floer homology groups, which are a refinement of the Floer homology groups of a $3$-manifold $Y$ with $b_1>0$. The construction was initially given by Fukaya in~\cite{HFF} and is explained in the paper by Braam and Donaldson~\cite{Fukaya}, which is well worth reading. The origin of the Fukaya-Floer homology is the need to define relative invariants (and to establish the appropriate gluing theorem) for $2$-homology classes crossing the neck in a splitting $X=X_1\cup_Y X_2$. They are in some sense more natural than the Floer homology from the point of view of the Donaldson invariants of $4$-manifolds.
\subsection{Fukaya-Floer homology} The input is a triple $(Y, P, \d)$, where $P$ is an $SO(3)$-bundle with $w_2 \neq 0$ over an oriented $3$-manifold $Y$ and $\d$ is a loop in $Y$, i.e.\ an (oriented) embedded $\d \cong {\Bbb S}^1 \hookrightarrow Y$. The complex $CFF_*(Y,\d)$ will be the total complex of the double complex $$ CF_*(Y) \otimes \hat{H}_*({\Bbb C \Bbb P}^{\infty}), $$ where $\hat{H}_*({\Bbb C \Bbb P}^{\infty})$ is the completion of $H_*({\Bbb C \Bbb P}^{\infty})$, i.e.\ the ring of formal power series. Recall that $H_i({\Bbb C \Bbb P}^{\infty}) = 0$ for $i$ odd and ${\Bbb C}$ for $i$ even (we are using complex coefficients). Therefore \begin{equation} CFF_i(Y,\d)=CF_i(Y)\times CF_{i-2}(Y) t \times CF_{i-4}(Y) {t^2 \over 2!} \times CF_{i-6}(Y) {t^3 \over 3!} \times \cdots \label{eqn:f3.1} \end{equation} The labels ${t^k \over k!}$ must be understood as the generators of $H_{2k} ({\Bbb C \Bbb P}^{\infty})$ and have an assigned (homological) degree $2k$. So $CFF_*(Y,\d)=CF_*(Y) \otimes {\Bbb C}[[t]]$, i.e. Fukaya-Floer chains are infinite sequences of (possibly non-zero) Floer chains. This complex is also graded over ${\Bbb Z}/4{\Bbb Z}$. To construct the boundary $\partial$ we work as follows. For every pair of flat connections $\rho_k$ and $\rho_l$ we have the moduli space ${\cal M}_0(\rho_k,\rho_l)$ of section~\ref{sec:f2} and we can construct generic cycles $V_{\d \times{\Bbb R}}$ representing $\mu(\d\times{\Bbb R})$, for $\d \times{\Bbb R}\subset Y\times{\Bbb R}$, which intersect transversely in the top stratum of the compactification of ${\cal M}_0(\rho_k, \rho_l)$. The boundary of $CFF_*(Y)$ will be defined as (see~\cite{Fukaya}) \begin{eqnarray*} \partial: CFF_i(Y) & \rightarrow & CFF_{i-1}(Y) \\ \rho_k {t^a \over a!}& \mapsto & \sum_{\rho_l\atop b\geq a} {b \choose a} (\# {\cal M}^{2(b-a)}_0(\rho_k, \rho_l)\cap V_{\d \times{\Bbb R}}^{b-a}) \rho_l {t^b \over b!} \end{eqnarray*} for $\rho_k \in CF_{i-2a}$, $\rho_l \in CF_{i-1-2b}$. 
Here $V_{\d \times{\Bbb R}}^{b-a}$ means the intersection of $b-a$ different generic cycles (we have only added the labels to the formula in~\cite{Fukaya}). The proof of $\partial^2=0$ is given in~\cite{Fukaya} and runs as follows. Consider two flat connections $\rho_k$ and $\rho_l$, such that $\text{ind}(\rho_l)=\text{ind}(\rho_k)-2-2e$. Then the moduli space ${\cal M}^{2e+1}_0(\rho_k, \rho_l) \cap V_{\d\times{\Bbb R}}^e$ is a one dimensional manifold. We compactify it and count the boundary points in the same way as in the case of Floer homology to get $$ \sum_{\rho_m \atop \text{ind}(\rho_m)=\text{ind}(\rho_k)-1-2f}\hspace{-5mm} {e \choose f} \# {\cal M}^{2f}_0(\rho_k, \rho_m) \cap V_{\d\times{\Bbb R}}^f\cdot \# {\cal M}^{2(e-f)}_0(\rho_m, \rho_l) \cap V_{\d\times{\Bbb R}}^{e-f} =0, $$ equivalently $\partial^2 \rho_k= 0$. We have thus defined the Fukaya-Floer homology $HFF_*(Y,\d)$ as the homology of the complex $(CFF_*(Y,\d),\partial)$. These groups are independent of the metric and of the perturbations of the equations~\cite{HFF}. For the effective computation of $HFF_*(Y,\d)$, we next construct a spectral sequence. There is a filtration $(K^{(i)})_* = CF_*(Y) \otimes (\prod_{* \geq i} \hat{H}_*({\Bbb C \Bbb P}^{\infty}))$ of $CFF_*(Y,\d)$ inducing a spectral sequence whose $E_3$ term is $HF_*(Y) \otimes \hat{H}_*({\Bbb C \Bbb P}^{\infty})$ and converging to the Fukaya-Floer groups (there is no problem of convergence because of the periodicity of the spectral sequence). The boundary $d_3$ turns out to be $$ \mu(\d): HF_i(Y) \otimes H_{2j}({\Bbb C \Bbb P}^{\infty}) \rightarrow HF_{i-3}(Y) \otimes H_{2j+2} ({\Bbb C \Bbb P}^{\infty}). $$ The obvious ${\Bbb C}[[t]]$-module structure of $CFF_*(Y,\d)= CF_*(Y)\otimes{\Bbb C}[[t]]$ descends to give a ${\Bbb C}[[t]]$-module structure for $HFF_*(Y,\d)$ (the boundary $\partial$ is ${\Bbb C}[[t]]$-linear thanks to the choice of denominators in~\eqref{eqn:f3.1}).
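As a check on this ${\Bbb C}[[t]]$-linearity, one can compare $\partial$ applied to $t\cdot \rho_k {t^a \over a!}$ with $t$ times $\partial$ applied to $\rho_k {t^a \over a!}$; the following routine verification is ours (it is not spelled out in~\cite{Fukaya}), with the shorthand $n_e=\# {\cal M}^{2e}_0(\rho_k, \rho_l)\cap V_{\d \times{\Bbb R}}^{e}$:

```latex
% Both computations land on the same coefficient of \rho_l t^{b+1}/(b+1)!:
\begin{eqnarray*}
\partial\left(t\cdot \rho_k {t^a \over a!}\right)
 &=& (a+1)\,\partial\left(\rho_k {t^{a+1} \over (a+1)!}\right)
  = \sum_{\rho_l\atop b\geq a} (a+1){b+1 \choose a+1}\, n_{b-a}\,
    \rho_l {t^{b+1} \over (b+1)!}, \\
t\cdot\partial\left(\rho_k {t^a \over a!}\right)
 &=& \sum_{\rho_l\atop b\geq a} (b+1){b \choose a}\, n_{b-a}\,
    \rho_l {t^{b+1} \over (b+1)!}.
\end{eqnarray*}
% These agree because (a+1){b+1 \choose a+1} = (b+1){b \choose a}.
```

The binomial identity $(a+1){b+1 \choose a+1}=(b+1){b \choose a}$ is exactly what the denominators ${t^k \over k!}$ buy us.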
The Fukaya-Floer cohomology will be defined as the homology of the dual complex $CFF^*(Y,\d)=\text{Hom}_{{\Bbb C}[[t]]}(CFF_*(Y,\d),{\Bbb C}[[t]])$. We remark that this is a different definition from that of~\cite{Fukaya}. There is a pairing $\langle,\rangle:HFF_*(Y,\d) \otimes HFF^*(Y, \d) \rightarrow {\Bbb C}[[t]]$ and an isomorphism $HFF_*(\overline Y, -\d) \cong HFF^*(Y,\d)$, where $-\d$ is $\d$ with reversed orientation, hence a pairing for the Fukaya-Floer homology groups $$ \langle,\rangle:HFF_*(Y,\d) \otimes HFF_*(\overline Y, -\d) \rightarrow {\Bbb C}[[t]]. $$ This can be defined through the spectral sequence from the natural pairing in $HF_*(Y)$. It is also a convenient way of collecting all the pairings $\sigma_m$ of~\cite{Fukaya}. The Fukaya-Floer homology may also be defined for $(Y,P,\d)$ where $\d \cong {\Bbb S}^1 \sqcup \cdots \sqcup {\Bbb S}^1 \hookrightarrow Y$ is a collection of finitely many disjoint loops (possibly none). In particular, for $\d=\emptyset$, $HFF_*(Y,\emptyset)=HF_*(Y)\otimes {\Bbb C}[[t]]$ naturally. \subsection{Action of $H_*(Y)$ on $HFF_*(Y,\d)$} This is explained in~\cite[section 5.3]{thesis}. Let $\a \in H_{3-i}(Y)$. We define $\mu(\a)$ at the level of chains as \begin{eqnarray*} \mu(\a): CFF_j(Y) & \rightarrow & CFF_{j-i-1}(Y) \\ \rho_k {t^a \over a!} & \mapsto & \sum_{\rho_l\atop b\geq a} {b \choose a} (\# {\cal M}^{2(b-a)+i+1}(\rho_k, \rho_l)\cap V_{\d \times{\Bbb R}}^{b-a} \cap V_{\a \times \text{pt}}) \rho_l {t^b \over b!} \end{eqnarray*} for $\rho_k \in CF_{j-2a}$, $\rho_l \in CF_{j-i-1-2b}$. Again $\partial \circ \mu(\a) +\mu(\a)\circ \partial=0$ and $\mu(\a)$ descends to a map $\mu(\a):HFF_*(Y,\d)\rightarrow HFF_{*-i-1}(Y,\d)$. For instance, for $HFF_*(Y,\emptyset)=HF_*(Y)\otimes {\Bbb C}[[t]]$, the map $\mu(\a)$ is the one induced from $HF_*(Y)$. In general, the induced map in the term $E_3=HF_*(Y)\otimes {\Bbb C}[[t]]$ of the spectral sequence computing $HFF_*(Y,\d)$ is $\mu(\a)$ in Floer homology.
The structure of the map $\mu(\a)$ is the cornerstone of the analysis in~\cite[chapter 5]{thesis} and the seed of this work. \subsection{Products in Fukaya-Floer homology} We can extend the arguments of section~\ref{sec:f2}. Suppose that we have an (oriented) four-dimensional cobordism $(X,D,P)$ between two triples $(Y_1,\d_1,P_1)$ and $(Y_2,\d_2,P_2)$ as above. Then $\P_X$ is defined at the level of chains by \begin{eqnarray*} \P_X: CFF_*(Y_1) &\rightarrow & CFF_*(Y_2) \\ \rho_k {t^a \over a!} & \mapsto & \sum_{\rho'_l\atop b\geq a} {b \choose a} (\# {\cal M}^{2(b-a)}(X,\rho_k, \rho'_l) \cap V_D^{b-a} ) \rho'_l {t^b \over b!}. \end{eqnarray*} As $\partial \circ \P_X + \P_X \circ \partial =0$, $\P_X$ defines a map $\P_X:HFF_*(Y_1,\d_1) \rightarrow HFF_*(Y_2,\d_2)$. Again if $\a_1 \in H_*(Y_1)$ and $\a_2 \in H_*(Y_2)$ define the same homology class in $X$, then $\mu(\a_2) \circ \P_X=\P_X\circ \mu(\a_1)$. On the other hand, suppose that we have $(Y_1,\d_1,P_1)$ and $(Y_2,\d_2,P_2)$ and consider $(Y,\d,P)$ with $Y=Y_1\sqcup Y_2$, $P=P_1\sqcup P_2$ and $\d=\d_1 \sqcup \d_2$. One can easily prove that $HFF_*(Y,\d)= HFF_*(Y_1,\d_1) \otimes_{{\Bbb C}[[t]]} HFF_*(Y_2,\d_2)$. Finally, if there is a cobordism between $(Y,\d) \sqcup \, (Y,\d)$ and $(Y,\d)$ we have a map \begin{equation} HFF_*(Y,\d) \otimes_{{\Bbb C}[[t]]} HFF_*(Y,\d) \rightarrow HFF_*(Y,\d), \label{eqn:f3.2} \end{equation} which in some cases may give an associative and graded commutative ring structure on $HFF_*(Y,\d)$. Also note that if there is a cobordism between $(Y,\d) \sqcup \, (Y,\emptyset)$ and $(Y,\d)$ then there will be a map \begin{equation} HF_*(Y) \otimes HFF_*(Y,\d) \rightarrow HFF_*(Y,\d), \label{eqn:f3.3} \end{equation} which may lead to a module structure of $HFF_*(Y,\d)$ over $HF_*(Y)$. \subsection{Relative invariants of $4$-manifolds} To define relative invariants, let $X$ be a $4$-manifold with $\partial X=Y$ and $w \in H^2(X;{\Bbb Z})$ such that $w|_Y=w_2\in H^2(Y;{\Bbb Z}/2{\Bbb Z})$.
We give $X$ a cylindrical end. Let $D \subset X$ be a $2$-cycle such that $\partial D=D\cap Y =\d$ (more accurately, $D \cap (Y \times [0,\infty)) = \d \times [0,\infty)$). One has the moduli spaces ${\cal M}(X,\rho_k)$ and we can choose generic cycles $V_D^{(i)}$ representing $\mu(D)$ and intersecting transversely in the top stratum of the compactification of ${\cal M}(X,\rho_k)$ (see~\cite{Fukaya}). Then we have an element $$ \phi^w(X,D^d)= \sum_{\rho_k} (\# {\cal M}^{2d}(X, \rho_k) \cap V_D^{(1)} \cap \cdots \cap V_D^{(d)}) \rho_k $$ in $CF_*(Y) \otimes H_{2d}({\Bbb C \Bbb P}^{\infty}) \subset CFF_*(Y,\d)$. We remark that this is {\bf not} a cycle. Then we set $\phi^w(X,D)=\prod_d \phi^w(X,D^d)$, which is a cycle. We also denote by $\phi^w(X,D)\in HFF_*(Y,\d)$ the Fukaya-Floer homology class it represents. Alternatively, we denote this same element as $$ \phi^w(X,e^{tD}) = \phi^w(X,D)=\sum_d \phi^w(X,D^d) {t^d \over d!}. $$ Formally this element lives in $HF_*(Y) \otimes \hat{H}_*({\Bbb C \Bbb P}^{\infty})$, the $E_3$ term of the spectral sequence alluded to above, but it represents the same Fukaya-Floer homology class. The definition of $\phi^w(X,D^d)$ depends on some choices~\cite{Fukaya}, but the homology class $\phi^w(X,D)$ only depends on $(X,D)$. Moreover, if we change $D$ by a homology which is the identity on the cylindrical end of $X$, then $\phi^w(X,D)$ remains unchanged. Analogously, for any $z \in \AA(X)$, we define $\phi^w(X,z\, D^d) \in CF_*(Y) \otimes H_{2d}({\Bbb C \Bbb P}^{\infty}) \subset CFF_*(Y,\d)$ and $\phi^w(X,z\, e^{tD})$. The relevant gluing theorem is~\cite{Fukaya}~\cite{thesis}: \begin{thm} \label{thm:f3.fukaya} Let $X=X_1 \cup_Y X_2$ and $w \in H^2(X;{\Bbb Z})$ with $w|_Y=w_2$. Take $\S \in H^2(X;{\Bbb Z})$ whose Poincar\'e dual lies in the image of $H_2(Y;{\Bbb Z}) \rightarrow H_2(X;{\Bbb Z})$, and satisfies $w \cdot \S \equiv 1\pmod 2$. Put $w_i =w|_{X_i} \in H^2(X_i;{\Bbb Z})$.
Let $D \in H_2(X)$ be decomposed as $D=D_1 +D_2$ with $D_i \subset X_i$, $i=1,2$, $2$-cycles with $\partial D_1=\d$, $\partial D_2=-\d$. Choose $z_i \in \AA(X_i)$, $i=1,2$. Then $$ D^{(w,\Sigma)}_X(z_1z_2e^{tD})= \langle\phi^{w_1}(X_1,z_1e^{tD_1}),\phi^{w_2}(X_2,z_2e^{tD_2})\rangle. $$ When $b^+=1$ the invariants are calculated for metrics on $X$ giving a long neck. \end{thm} \section{Floer homology of $\Sigma \times {{\Bbb S}}^1$} \label{sec:f4} We want to specialise to the case relevant to us. Let $\S=\S_g$ be a Riemann surface of genus $g \geq 1$ and let $Y=\Sigma \times {{\Bbb S}}^1$ be the trivial circle bundle over $\S$. Over this $3$-manifold, we fix the $SO(3)$-bundle with $w_2=\text{P.D.}[{\Bbb S}^1]\in H^2(Y;{\Bbb Z}/2{\Bbb Z})$, which satisfies the hypothesis of section~\ref{sec:f2}. Therefore the instanton Floer homology $HF_*(Y)$ is well-defined. As $Y=\Sigma \times {{\Bbb S}}^1$ admits an orientation reversing self-diffeomorphism, given by conjugation on the ${\Bbb S}^1$ factor, there is a Poincar\'e duality isomorphism of $HF^*(Y)$ with the dual of $HF_*(Y)$ (this identification will be made systematically and without further notice) and a pairing $\langle,\rangle :HF^*(Y) \otimes HF^*(Y) \rightarrow {\Bbb C}$. We introduce a multiplication on $HF^*(Y)$ using the cobordism between $Y \sqcup \, Y$ and $Y$ given by the $4$-manifold which is a pair of pants times $\S$. This gives a map $HF^*(Y) \otimes HF^*(Y) \rightarrow HF^*(Y)$. We shall later prove explicitly that this is an associative and graded commutative ring structure on $HF^*(Y)$. As a shorthand notation, we shall write henceforth $HF^*_g=HF^*(Y)$, making explicit the dependence on the genus $g$ of the Riemann surface $\S$.
The Floer cohomology of $Y=\Sigma \times {{\Bbb S}}^1$ has been completely computed thanks to the work of Dostoglou and Salamon~\cite{DS}, and its ring structure has been determined by the author in~\cite{floer}. It turns out to be isomorphic to the quantum cohomology of the moduli space ${\cal N}_g$ of stable bundles of odd degree and rank two over $\S$ (with fixed determinant), i.e. $QH^*({\cal N}_g) \cong HF^*(\S_g \times {\Bbb S}^1)$, as the author has proved in~\cite{quantum}. Here we shall recall the result stated in~\cite{floer}. We fix some notation. Let $\{\seq{\gamma}{1}{2g}\}$ be a symplectic basis of $H_1(\S;{\Bbb Z})$ with $\gamma_i \gamma_{i+g}=\text{pt}$, for $1 \leq i \leq g$. Also $x$ will stand for the generator of $H_0(\S;{\Bbb Z})$. First we recall the usual cohomology ring of ${\cal N}_g$, because of its similarity with the Floer cohomology $HF^*_g$ and for later use in section~\ref{sec:f5}. \subsection{Cohomology ring of ${\cal N}_g$} (see~\cite{King}~\cite{ST}~\cite{quantum}) The ring $H^*({\cal N}_g)$ is generated by the elements $$ \left\{ \begin{array}{l} a= 2\, \mu(\S) \in H^2({\cal N}_g) \\ c_i= \mu (\gamma_i) \in H^3({\cal N}_g), \qquad 1 \leq i \leq 2g \\ b= - 4 \, \mu(x) \in H^4 ({\cal N}_g) \end{array} \right. $$ where the map $\mu: H_*(\S) \to H^{4-*}({\cal N}_g)$ is, as usual, given by $-{1 \over 4}$ times slanting with the first Pontrjagin class of the universal $SO(3)$-bundle over $\S\times{\cal N}_g$. Thus there is a basis $\{f_s\}_{s \in {\cal S}}$ for $H^*({\cal N}_g)$ with elements of the form \begin{equation} f_s=a^nb^mc_{i_1}\cdots c_{i_r}, \label{eqn:f5.0} \end{equation} for a finite set ${\cal S}$ of multi-indices of the form $s=(n,m; i_1,\ldots,i_r)$, $n,m \geq 0$, $r \geq 0$, $1 \leq i_1 < \cdots < i_r \leq 2g$. There is an epimorphism of rings $\AA(\S) \twoheadrightarrow H^*({\cal N}_g)$.
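As a small illustration of this notation (the particular multi-index below is ours, chosen only as an example), the degrees $\deg a=2$, $\deg c_i=3$, $\deg b=4$ give, for $s=(1,1;1,3)$,

```latex
f_s = a\, b\, c_1 c_3, \qquad \deg f_s = 2+4+3+3 = 12,
```

so $f_s$ can be non-zero only when $12 \leq \dim_{{\Bbb R}} {\cal N}_g = 6g-6$, i.e. for $g \geq 3$.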
The mapping class group $\text{Diff}(\S)$ acts on $H^*({\cal N}_g)$, with the action factoring through the action of the symplectic group $\text{Sp}\, (2g,{\Bbb Z})$ on $\{c_i\}$. The invariant part, $H_I^*({\cal N}_g)$, is generated by $a$, $b$ and $c=-2 \sum_{i=1}^g c_ic_{i+g}$. Then \begin{equation} {\Bbb C}[a,b,c] \twoheadrightarrow H^*_I({\cal N}_g) \label{eqn:f4.qu4} \end{equation} which allows us to write $$ H_I^*({\cal N}_g)= {\Bbb C} [a,b,c]/I_g, $$ where $I_g$ is the ideal of relations satisfied by $a$, $b$ and $c$. The space $H^3=H^3({\cal N}_g)$ has a basis $\seq{c}{1}{2g}$, so $\mu:H_1(\S) \stackrel{\simeq}{\ar} H^3$. For $0 \leq k \leq g$, the primitive component of $\L^k H^3$ is $$ \L_0^k H^3 = \ker (c^{g-k+1} : \L^k H^3 \rightarrow \L^{2g-k+2} H^3). $$ The spaces $\L^k_0 H^3$ are irreducible $\text{Sp}\, (2g,{\Bbb Z})$-representations~\cite[theorem 17.5]{Fulton}. The description of the cohomology ring $H^*({\cal N}_g)$ is given in the following \begin{prop}[\cite{ST}~\cite{King}] \label{prop:f4.homology} The cohomology ring of the moduli space ${\cal N}_g$ of stable bundles of odd degree and rank two over $\S$ with fixed determinant has a presentation $$ H^*({\cal N}_g)= \bigoplus_{k=0}^{g} \L_0^k H^3 \otimes {\Bbb C} [a, b, c]/I_{g-k}, $$ where $I_r=(q^1_r,q^2_r,q^3_r)$ and $q^i_r$ are defined recursively by setting $q^1_0=1$, $q^2_0=0$, $q^3_0=0$ and then for all $r \geq 0$ $$ \left\{ \begin{array}{l} q_{r+1}^1 = a q_r^1 + r^2 q_r^2 \\ q_{r+1}^2 = b q_r^1 + {2r \over r+1} q_r^3 \\ q_{r+1}^3 = c q_r^1 \end{array} \right. $$ \end{prop} The basis $\{f_s\}_{s\in{\cal S}}$ of $H^*({\cal N}_g)$ can be chosen to be as follows. Choose, for every $0 \leq k \leq g-1$, a basis $\{x^{(k)}_i\}_{i \in B_k}$ for $\L^k_0 H^3$. Then \begin{equation} \{x^{(k)}_i a^n b^m c^r/ k=0,1,\ldots, g-1, \> n+m+r < g-k, \> i \in B_k \} \label{eqn:f4.an1} \end{equation} is a basis for $H^*({\cal N}_g)$, as proved in~\cite{ST}.
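To illustrate the recursion, the first two ideals can be worked out by hand; the following check is ours and merely recovers facts contained in~\cite{ST}~\cite{King}:

```latex
% Step r=0 -> r=1, starting from q^1_0=1, q^2_0=0, q^3_0=0:
q^1_1=a, \qquad q^2_1=b, \qquad q^3_1=c,
\qquad \text{so } I_1=(a,b,c), \quad {\Bbb C}[a,b,c]/I_1={\Bbb C}.
% Step r=1 -> r=2 (here {2r \over r+1}=1):
q^1_2=a^2+b, \qquad q^2_2=ab+c, \qquad q^3_2=ac,
\qquad \text{so } I_2=(a^2+b,\, ab+c,\, ac).
% Modulo I_2: b=-a^2, c=-ab=a^3 and a^4=ac=0, hence
% {\Bbb C}[a,b,c]/I_2 \cong {\Bbb C}[a]/(a^4), of dimension 4.
```

Note also that $I_0=(1)$, so the $k=g$ summand of the presentation never contributes; in particular $H^*({\cal N}_1)={\Bbb C}[a,b,c]/I_1={\Bbb C}$.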
Also proposition~\ref{prop:f4.homology} gives us the relations for $H^*({\cal N}_g)$. If we set $x^{(k)}_0= c_1 c_2 \cdots c_k \in\L_0^k H^3$, then the relations are given by $$ x_0^{(k)}q_{g-k}^i, \qquad 1 \leq i\leq 3, \quad 0\leq k \leq g, $$ and their transforms under the $\text{Sp}\, (2g,{\Bbb Z})$-action. \subsection{Floer cohomology $HF^*_g$} The description of the Floer cohomology $HF_g^*=HF^*(Y)$ of $Y=\Sigma \times {{\Bbb S}}^1$, where $\S=\S_g$ is a Riemann surface of genus $g$, is given in~\cite{floer}. Consider the manifold $A=\S \times D^2$, $\S$ times a disc, with boundary $Y=\Sigma \times {{\Bbb S}}^1$, and let $\Delta= \text{pt} \times D^2 \subset A$ be the horizontal slice. Let $w\in H^2(A;{\Bbb Z})$ be any odd multiple of $\text{P.D.} [\Delta]$, so that $w|_Y =w_2$. Clearly $\AA(A)= \AA(\S)= \text{Sym}^*(H_0(\S) \oplus H_2(\S)) \otimes \bigwedge^* H_1(\S)$. Define the following elements of $HF^*(Y)$ as in~\cite{floer} \begin{equation} \left\{ \begin{array}{l} \a= 2 \, \phi^w(A,\S) \in HF^2_g \\ \psi_i= \phi^w(A,\gamma_i) \in HF^3_g, \qquad 1\leq i \leq 2g \\ \b= - 4 \, \phi^w(A,x) \in HF^4_g \end{array} \right. \label{eqn:f4.1} \end{equation} The relative invariants of section~\ref{sec:f2} give a map \begin{equation} \begin{array}{ccc} \AA(\S) & \rightarrow & HF^*_g \\ z & \mapsto & \phi^w(A,z) \end{array} \label{eqn:f4.2} \end{equation} For every $s\in {\cal S}$ and $f_s$ as in~\eqref{eqn:f5.0}, we define \begin{equation} \begin{array}{lcl} z_s&=& \S^n x^m \gamma_{i_1} \cdots \gamma_{i_r} \in \AA(\S), \\ e_s&=& \phi^w (A, z_s ) \in HF^*_g. \end{array} \label{eqn:f4.1.5} \end{equation} As a consequence of~\cite[lemma 21]{genusg}, $\{e_s\}_{s\in{\cal S}}$ is a basis for $HF^*_g$. Hence~\eqref{eqn:f4.2} is surjective.
Now it is easy to check that $\phi^w(A,z)\phi^w(A,z')=\phi^w(A,zz')$, as for any $s\in{\cal S}$, the gluing theorem~\ref{thm:f2.Floer} implies $$ \langle\phi^w(A,z)\phi^w(A,z'),\phi^w(A,z_s)\rangle =D^{(w,\Sigma)}_{\S\times{\Bbb C \Bbb P}^1}(zz'z_s)= \langle\phi^w(A,zz'),\phi^w(A,z_s)\rangle . $$ In particular this implies that the product of $HF^*_g$ is graded commutative and associative and that~\eqref{eqn:f4.2} is an epimorphism of rings. The neutral element of the product is ${\bf 1}=\phi^w(A,1)$. The mapping class group $\text{Diff}(\S)$ acts on $HF^*_g$, with the action factoring through the action of $\text{Sp}\, (2g,{\Bbb Z})$ on $\{\psi_i\}$. It also acts on $\AA(\S)$ and~\eqref{eqn:f4.2} is $\text{Sp}\, (2g,{\Bbb Z})$-equivariant. The invariant part, $(HF_g^*)_I=HF_I^*(Y)$, is generated by $\a$, $\b$ and $\gamma=-2 \sum_{i=1}^g \psi_i\psi_{i+g}$, so that there is an epimorphism $$ {\Bbb C}[\a,\b,\gamma] \twoheadrightarrow (HF_g^*)_I, $$ which allows us to write $$ (HF_g^*)_I= {\Bbb C}[\a,\b,\gamma]/J_g, $$ where $J_g$ is the ideal of relations satisfied by $\a$, $\b$ and $\gamma$. As a matter of notation, let $H^3$ denote the $2g$-dimensional vector space generated by $\seq{\psi}{1}{2g}$ in $HF^3_g$. Then $H^3 \cong H^3({\cal N}_g)$ and $\phi^w(A,\cdot):H_1(\S) \stackrel{\simeq}{\ar} H^3$. No confusion should arise from this multiple use of $H^3$. Then from~\cite{floer}, a basis for $HF_g^*$ is given by $$ \{x^{(k)}_i\a^n\b^m\gamma^r/ k=0,1,\ldots, g-1, \> n+m+r < g-k, \> i \in B_k \}, $$ where $x^{(k)}_i \in \L^k_0 H^3$ are now interpreted as Floer products. The explicit description of $HF^*_g$ is given in~\cite[theorem 16]{floer}. \begin{prop} \label{prop:f4.floer} The Floer cohomology of $Y=\Sigma \times {{\Bbb S}}^1$, for $\S=\S_g$ a Riemann surface of genus $g$, and $w_2=\text{P.D.}[{\Bbb S}^1] \in H^2(Y;{\Bbb Z}/2{\Bbb Z})$, has a presentation $$ HF^*(\Sigma \times {{\Bbb S}}^1)= \bigoplus_{k=0}^{g} \L_0^k H^3 \otimes {\Bbb C}[\a,\b,\gamma] /J_{g-k}.
$$ where $J_r=(R^1_r, R^2_r,R^3_r)$ and $R^i_r$ are defined recursively by setting $R^1_0=1$, $R^2_0=0$, $R^3_0=0$ and putting for all $r \geq 0$ $$ \left\{ \begin{array}{l} R_{r+1}^1 = \a R_r^1 + r^2 R_r^2 \\ R_{r+1}^2 = (\b+(-1)^{r+1}8) R_r^1 + {2r \over r+1} R_r^3 \\ R_{r+1}^3 = \gamma R_r^1 \end{array} \right. $$ \end{prop} The meaning of this proposition is the following. The Floer (co)homology $HF_g^*$ is generated as a ring by $\a$, $\b$ and $\psi_i$, $1\leq i\leq 2g$, and the relations are $$ x_0^{(k)}R_{g-k}^i, \qquad 1 \leq i\leq 3, \quad 0\leq k \leq g, $$ where $x_0^{(k)}=\psi_1 \psi_2 \cdots \psi_k \in \L^k_0 H^3$, and the $\text{Sp}\, (2g,{\Bbb Z})$-transforms of these. Also if we write $$ F_r={\Bbb C}[\a,\b,\gamma]/J_r= (HF^*_r)_I $$ then $HF^*_g=\oplus \L^k_0 H^3 \otimes F_{g-k}$. The quotient $\overline{F}_r={F_r/\gamma F_r}$ is easily described. \begin{prop} \label{prop:f4.fij} Let $\overline{F}_r=F_r/\gamma F_r$, $r\geq 0$. Then $\overline{F}_r$ has basis $\a^a\b^b$, $a+b<r$. We have $\overline{F}_r ={\Bbb C}[\a,\b]/ \bar{J}_r$, where $\bar{J}_r= (\bar{R}^1_r, \bar{R}^2_r)$, and $\bar{R}^i_r$ are determined by $\bar{R}_{0}^1=1$, $\bar{R}_{0}^2=0$ and then recursively for all $r\geq 0$, $$ \left\{ \begin{array}{l}\bar{R}_{r+1}^1=\a\bar{R}_r^1 +r^2 \bar{R}_r^2 \\ \bar{R}_{r+1}^2 = (\b+(-1)^{r+1} 8) \bar{R}_r^1 \end{array}\right. $$ \end{prop} \begin{pf} The $r+1 \choose 2$ elements $\a^a\b^b$, $a+b<r$, generate $\overline{F}_r$. Also Poincar\'e duality identifies $\overline{F}_r= F_r/\gamma F_r$ with $\ker (\gamma:F_r \rightarrow F_r)$ which equals $J_{r-1}/J_r$, by~\cite[corollary 18]{floer}. So $$ \dim\overline{F}_r=\dim ({\Bbb C}[\a,\b,\gamma]/J_r) -\dim ({\Bbb C}[\a,\b,\gamma]/J_{r-1})= \dim F_r-\dim F_{r-1}={r+1 \choose 2}, $$ and $\a^a\b^b$, $a+b<r$, form a basis for $\overline{F}_r$. \end{pf} In subsection~\ref{subsec:effective} we shall need the following technical lemma. 
\begin{lem} \label{lem:f4.redi} We have $\bar{J}_r / \bar{J}_{r+1}= \ker (\overline{F}_{r+1}\twoheadrightarrow \overline{F}_r) = \bigoplus\limits_{-r\leq i\leq r\atop i\equiv r\pmod 2} R_{r+1,i}$, where $R_{r+1,i}$ is a $1$-dimensional vector space such that $R_{r+1,i}={\Bbb C}[\a,\b]/(\a- 4i\sqrt{-1},\b-8)$ for $r$ even, $R_{r+1,i}={\Bbb C}[\a,\b]/(\a- 4i,\b+8)$ for $r$ odd. \end{lem} \begin{pf} The first equality follows from the exact sequence $$ {\bar{J}_r \over \bar{J}_{r+1}} \hookrightarrow \overline{F}_{r+1}= {{\Bbb C}[\a,\b]\over\bar{J}_{r+1}} \twoheadrightarrow \overline{F}_r= {{\Bbb C}[\a,\b] \over \bar{J}_r}. $$ Next we claim that $(\b+(-1)^{r+1} 8) \bar{J}_r \subset \bar{J}_{r+1} \subset \bar{J}_r$, $r\geq 0$. The second inclusion is obvious as $\bar{R}^i_{r+1}$ are written in terms of $\bar{R}^i_r$ by proposition~\ref{prop:f4.fij}. The first inclusion follows from $(\b +(-1)^{r+1}8) \bar{R}^1_{r} =\bar{R}^2_{r+1} \in \bar{J}_{r+1}$ and then multiplying the first equation in proposition~\ref{prop:f4.fij} by $(\b +(-1)^{r+1} 8)$ to get $(\b +(-1)^{r+1}8) \bar{R}^2_{r} \in \bar{J}_{r+1}$. Now $\bar{J}_{r}/\bar{J}_{r+1} = \ker (\b +(-1)^{r+1}8: \overline{F}_{r+1} \rightarrow \overline{F}_{r+1})$. This follows from factoring the map $\b +(-1)^{r+1} 8$ as $$ {\Bbb C}[\a,\b] /\bar{J}_{r+1} \twoheadrightarrow {\Bbb C}[\a,\b] /\bar{J}_{r} \stackrel{\b+(-1)^{r+1} 8}{\hookrightarrow} {\Bbb C}[\a,\b] /\bar{J}_{r+1}. $$ The second map is well defined by the claim above and it is a monomorphism since $\a^a\b^b$, $a+b<r$, form a basis for ${\Bbb C}[\a,\b] /\bar{J}_{r}$, and their images under $\b +(-1)^{r+1} 8$ are linearly independent in ${\Bbb C}[\a,\b] /\bar{J}_{r+1}$. As $\overline{F}_{r+1}$ is a Poincar\'e duality algebra (being a complete intersection algebra), $\ker (\b +(-1)^{r+1}8:\overline{F}_{r+1} \rightarrow \overline{F}_{r+1})$ is dual to $\overline{F}_{r+1}/(\b +(-1)^{r+1}8)= F_{r+1}/(\b +(-1)^{r+1}8,\gamma)$.
Using the computations in the proof of~\cite[proposition 20]{floer}, we get finally $$ \bar{J}_{r}/\bar{J}_{r+1}=F_{r+1}/(\b +(-1)^{r+1}8,\gamma)= \left\{ \begin{array}{ll} {\Bbb C} [\a]/\left( (\a^2 +r^2 16) \cdots (\a^2 + 2^2 16)\a \right), \qquad & \text{$r$ even} \\ {\Bbb C} [\a]/\left( (\a^2 -r^2 16) \cdots (\a^2 - 1^2 16) \right), \qquad & \text{$r$ odd} \end{array} \right. $$ as required. \end{pf} \section{Fukaya-Floer homology $HFF_*(\Sigma \times {{\Bbb S}}^1,{\Bbb S}^1)$} \label{sec:f5} In this section we are going to describe the Fukaya-Floer (co)homology of the $3$-manifold $Y=\Sigma \times {{\Bbb S}}^1$ with the $SO(3)$-bundle with $w_2=\text{P.D.}[{\Bbb S}^1] \in H^2(Y;{\Bbb Z}/2{\Bbb Z})$ and loop $\d=\text{pt}\times{\Bbb S}^1 \subset Y=\Sigma \times {{\Bbb S}}^1$, together with its ring structure. As $Y$ admits an orientation reversing self-diffeomorphism, we can identify its Fukaya-Floer homology and Fukaya-Floer cohomology through Poincar\'e duality, which we will do. From now on we shall fix the genus $g\geq 1$ of $\S$ and denote $HFF_g^*=HFF^*(\Sigma \times {{\Bbb S}}^1,{\Bbb S}^1)$. \subsection{The vector space $HFF_g^*$} The following argument is taken from~\cite{genusg}. The spectral sequence computing $HFF^*_g$ has $E_3$ term $HF^*_g \otimes {\Bbb C}[[t]]$. All the differentials in this $E_3$ term are of the form $HF^{\hbox{\scriptsize odd}}_g \rightarrow HF^{\hbox{\scriptsize even}}_g$ and $HF^{\hbox{\scriptsize even}}_g \rightarrow HF^{\hbox{\scriptsize odd}}_g$. As ${\Bbb S}^1$ is invariant under the action of the mapping class group $\text{Diff} (\S)$ on $Y=\Sigma \times {{\Bbb S}}^1$, the differentials commute with the action of $\text{Diff} (\S)$. Since there are elements $f \in \text{Diff} (\S)$ acting as $-1$ on $H_1(\S)$, we have that $f$ acts as $-1$ on $HF^{\hbox{\scriptsize odd}}_g$ and as $1$ on $HF^{\hbox{\scriptsize even}}_g$. Therefore the differentials are zero. Analogously for the higher differentials. 
So the spectral sequence degenerates in the third term and $$ HFF^*_g= HF^*_g \otimes {\Bbb C}[[t]] =HF^*_g[[t]]. $$ The pairing in $HFF^*_g$ is induced from that of $HF^*_g$ by coefficient extension to ${\Bbb C}[[t]]$. For a $4$-manifold $X$ with boundary $\partial X=Y$, $w \in H^2(X;{\Bbb Z})$ with $w|_Y=w_2$ and $D \in H_2(X)$ with $\partial D={\Bbb S}^1$, the relative invariants will be $\phi^w(X,e^{t D}) \in HF^*_g[[t]]$, i.e. formal power series with coefficients in the Floer cohomology $HF^*_g$. Recall the manifold $A=\S \times D^2$, with boundary $Y=\Sigma \times {{\Bbb S}}^1$, and let $\Delta= \text{pt} \times D^2 \subset A$ be the horizontal slice with $\partial\Delta={\Bbb S}^1$. Let $w \in H^2(A;{\Bbb Z})$ be any odd multiple of $\text{P.D.}[\Delta]$, so that $w|_Y =w_2 \in H^2(Y;{\Bbb Z}/2{\Bbb Z})$. The elements \begin{equation} \hat{e}_s=\phi^w(A, z_s\, e^{t\Delta}) \in HFF_g^* \label{eqn:f5.nopuse} \end{equation} analogous to the elements $e_s$ of~\eqref{eqn:f4.1.5}, for $s \in{\cal S}$, are a basis of $HFF^*_g$ as a ${\Bbb C}[[t]]$-module (see~\cite[lemma 21]{genusg}). There is a well defined map $HFF_g^*=HF_g^*\otimes{\Bbb C}[[t]] \twoheadrightarrow HF_g^*$ formally obtained by setting $t=0$. It takes $\phi^w(A, z\, e^{t\Delta}) \mapsto \phi^w(A, z)$, for any $z\in\AA(\S)$. This map intertwines the $\mu$ actions on $HFF^*_g$ and $HF^*_g$, and it respects the pairings. \subsection{The ring $HFF^*_g$} The ring structure of $HFF^*_g$ comes from the cobordism between $(Y,{\Bbb S}^1) \sqcup (Y,{\Bbb S}^1)$ and $(Y,{\Bbb S}^1)$, given by the pair of pants times $(\S,\text{pt})$. This yields $$ HFF^*_g \otimes HFF^*_g \rightarrow HFF^*_g $$ which is an associative and graded commutative ring structure on $HFF^*_g$.
We prove this as in the case of Floer homology, by first showing that $\phi^w(A, z \, e^{t\Delta})\phi^w(A, z' e^{t\Delta})=\phi^w(A, zz' \, e^{t\Delta})$, so that \begin{equation} \label{eqn:eio} \begin{array}{ccl} \AA(\S) \otimes {\Bbb C}[[t]] &\rightarrow& HFF^*_g \\ z &\mapsto& \phi^w(A, z \, e^{t\Delta}) \end{array} \end{equation} is a ${\Bbb C}[[t]]$-linear epimorphism of rings. The map $HFF^*_g \twoheadrightarrow HF^*_g$ mentioned above is a ring epimorphism. The action of $\mu(\S)$ is Fukaya-Floer multiplication by $\phi^w(A,\S \, e^{t\Delta})$, etc. We define the following elements of $HFF^*_g$ which are generators as a ${\Bbb C}[[t]]$-algebra, \begin{equation} \left\{ \begin{array}{l} \hat{\a}= 2 \, \phi^w(A,\S\,e^{t\Delta}) \in HFF^2_g \\ \hat{\psi}_i= \phi^w(A,\gamma_i\,e^{t\Delta}) \in HFF^3_g, \qquad 1\leq i \leq 2g \\ \hat{\b}= - 4 \, \phi^w(A,x\,e^{t\Delta}) \in HFF^4_g \end{array} \right. \label{eqn:f5.1} \end{equation} The mapping class group $\text{Diff}(\S)$ acts on both sides of~\eqref{eqn:eio} with the action factoring through an action of $\text{Sp}\, (2g,{\Bbb Z})$. The invariant parts surject \begin{equation} {\Bbb C}[\hat{\a},\hat{\b},\hat{\gamma}]\otimes{\Bbb C}[[t]]= {\Bbb C}[[t]][\hat{\a},\hat{\b},\hat{\gamma}] \twoheadrightarrow (HFF_g^*)_I \label{eqn:f5.2} \end{equation} where $\hat{\gamma} =-2 \sum_{i=1}^g \hat{\psi}_i\hat{\psi}_{i+g}$. Thus we can write \begin{equation} (HFF^*_g)_I={\Bbb C}[[t]][\hat{\a},\hat{\b},\hat{\gamma}] /{\cal J}_g, \label{eqn:f5.3} \end{equation} where ${\cal J}_g$ is the ideal of relations of the generators $\hat{\a}$, $\hat{\b}$ and $\hat{\gamma}$. Recall that $t$ has homological degree $2$ and hence cohomological degree $-2$. The other cohomological degrees are $\deg\hat{\a}=2$, $\deg\hat{\psi}_i=3$, $\deg\hat{\b}=4$ and $\deg\hat{\gamma}=6$.
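As a piece of degree bookkeeping (this observation is just a restatement of the degrees above), note that since $\deg t=-2$, every fixed cohomological degree receives contributions from monomials of arbitrarily high order in the generators. For instance, before imposing relations and disregarding the reduction of degrees modulo $4$, an invariant element of cohomological degree $2$ is a series of the shape

```latex
\lambda_0\, \hat{\a}
 \;+\; t\left(\lambda_1\, \hat{\a}^2 + \lambda_2\, \hat{\b}\right)
 \;+\; t^2\left(\lambda_3\, \hat{\a}^3 + \lambda_4\, \hat{\a}\hat{\b}
       + \lambda_5\, \hat{\gamma}\right)
 \;+\; \cdots, \qquad \lambda_i \in {\Bbb C},
```

in agreement with the description~\eqref{eqn:f3.1} of Fukaya-Floer chains as infinite sequences of Floer chains.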
The determination of the ring structure of $HFF^*_g$, which is in some sense equivalent to the determination of the kernel of~\eqref{eqn:f5.2}, runs closely parallel to the arguments in~\cite{floer} used to find the ring structure of $HF^*_g=HF^*(Y)$. We recommend that the reader keep~\cite{floer} at hand. Consider the ring $H^*({\cal N}_g)[[t]]$, where $t$ is given degree $-2$. The elements in $H^i({\cal N}_g)[[t]]$ are thus sums $\sum_{n\geq 0} s_{i+2n} t^n$, where $\deg(s_{i+2n})= i+2n$. Note that all such elements are finite sums, although $H^i({\cal N}_g)[[t]]\neq 0$ for arbitrarily negative $i$. The analogue of~\cite[theorem 5]{floer} is \begin{prop} \label{prop:f5.deform} Denote by $*$ the product induced in $H^*({\cal N}_g)[[t]]$ by the product in $HFF^*_g$ under the ${\Bbb C}[[t]]$-linear isomorphism $H^*({\cal N}_g)[[t]] \stackrel{\simeq}{\ar} HFF^*_g$ given by $f_s\mapsto \hat{e}_s$, $s \in {\cal S}$. Then $*$ is a deformation of the cup-product graded modulo $4$, i.e. for $f_1 \in H^i ({\cal N}_g)[[t]]$, $f_2 \in H^j({\cal N}_g)[[t]]$, it is $f_1 * f_2 = \sum_{r \geq 0} \P_r(f_1,f_2)$, where $\P_r \in H^{i+j-4r}({\cal N}_g)[[t]]$ and $\P_0= f_1 \cup f_2$. \end{prop} \begin{pf} To start with, let us fix some notation. The choice of basis~\eqref{eqn:f5.nopuse} gives a splitting $\imath: H^*({\cal N}_g) \rightarrow \AA(\S)$, $f_s \mapsto z_s$, satisfying the property that $f \mapsto \phi^w(A, \imath (f)\, e^{t\Delta})$ is the isomorphism of the statement. Now we claim that for any $s, s' \in {\cal S}$ it is \begin{equation} \langle\hat{e}_s, \hat{e}_{s'}\rangle = D^{(w,\Sigma)}_{\S \times {\Bbb C \Bbb P}^1}(z_sz_{s'}e^{t{\Bbb C \Bbb P}^1}) = -\langle f_s, f_{s'}\rangle + O(t^{(6g-6-(\deg(f_s) + \deg(f_{s'})))/2+1}), \label{eqn:f5.claim} \end{equation} where $O(t^r)$ means any element in $t^r{\Bbb C}[[t]]$ (note that~\eqref{eqn:f5.claim} vanishes when $\deg(f_s) + \deg(f_{s'}) \not\equiv 0 \pmod 2$). If $\deg(f_s) + \deg(f_{s'}) > 6g-6$ then the statement is vacuous. 
For $\deg(f_s) + \deg(f_{s'}) \leq 6g-6$ it follows from the fact that the dimensions of the moduli spaces of anti-self-dual connections on $\S \times {\Bbb C \Bbb P}^1$ are $6g-6+4r$, $r \geq 0$, and the $(6g-6)$-dimensional moduli space is ${\cal N}_g$, as remarked in~\cite{floer}, so that for $\deg(f_s) + \deg(f_{s'}) +2d=6g-6$, it is $D^{(w,\Sigma)}_{\S \times {\Bbb C \Bbb P}^1}(z_sz_{s'}({\Bbb C \Bbb P}^1)^{d})=0$, unless $d=0$, and in that case it gives $-\langle f_s, f_{s'}\rangle $ (the minus sign is due to the different orientation convention for Donaldson invariants). We shall check the statement of the proposition on basis elements $f_s$ and $f_{s'}$ of degrees $i$ and $j$ respectively. Put $f_s * f_{s'}= \sum_{m \leq M} g_m$, where $g_m \in H^m({\cal N}_g)[[t]]$ and $g_M \neq 0$ is the leading term. By definition, $\hat{e}_s\hat{e}_{s'}= \sum_{m \leq M} \hat g_m$ (with $\hat g_m \in HFF^*_g$ corresponding to $g_m$ under the isomorphism of the statement). Suppose $M > i+j$. Then let $f t^r$, $f\in H^*({\cal N}_g)$, be the non-zero monomial in $g_M$ with minimum $r$. So $f$ has degree $M+2r$. Pick $f' \in H^{6g-6-(M+2r)}({\cal N}_g)$ with $\langle f,f'\rangle=-1$ in $H^*({\cal N}_g)$. Let $z,z' \in \AA(\S)$ be the elements corresponding to $f , f' \in H^*({\cal N}_g)$ under the splitting $\imath$. Then by~\eqref{eqn:f5.claim} $$ \langle t^r\phi^w(A,ze^{t\Delta}), \phi^w(A, z'e^{t\Delta})\rangle = t^r + O(t^{r+1}), $$ so $\langle\hat g_M, \phi^w(A, z'e^{t\Delta})\rangle =t^r +O(t^{r+1})$. For $m <M$, it must be $\langle\hat g_m, \phi^w(A, z'e^{t\Delta})\rangle =O(t^{r+1})$ by~\eqref{eqn:f5.claim} again, so finally $$ \langle\hat{e}_s\hat{e}_{s'}, \phi^w(A, z'e^{t\Delta})\rangle = t^r + O(t^{r+1}). $$ On the other hand, as $\deg(f_s)+\deg(f_{s'})+\deg(f') <6g-6-2r$, it is $$ \langle\hat{e}_s\hat{e}_{s'}, \phi^w(A, z'e^{t\Delta})\rangle =D^{(w,\Sigma)}_{\S \times {\Bbb C \Bbb P}^1}(z_sz_{s'}z'e^{t{\Bbb C \Bbb P}^1})=O(t^{r+1}), $$ which is a contradiction. 
It must be $M \leq i+j$. For $m=i+j$, put $g_m=G_{i+j} +tG_{i+j+2} + \cdots$, where $G_{i+j+2r} \in H^{i+j+2r}({\cal N}_g)$. Pick any $f_{s''}$ of degree $6g-6-m$. Clearly $D^{(w,\Sigma)}_{\S \times {\Bbb C \Bbb P}^1}(z_sz_{s'}z_{s''}e^{t{\Bbb C \Bbb P}^1})=-\langle f_s f_{s'}, f_{s''}\rangle +O(t)$. Also $$ D^{(w,\Sigma)}_{\S\times {\Bbb C \Bbb P}^1}(z_sz_{s'}z_{s''}e^{t{\Bbb C \Bbb P}^1})= \langle\hat{e}_{s}\hat{e}_{s'},\hat{e}_{s''}\rangle =\langle\hat g_m, \hat{e}_{s''}\rangle +O(t)=-\langle g_m, f_{s''}\rangle +O(t). $$ So $\langle G_{i+j},f_{s''}\rangle =\langle f_sf_{s'},f_{s''}\rangle $, for arbitrary $f_{s''}$, and hence $G_{i+j}=f_sf_{s'}$. To check that $G_{i+j+2r}=0$ for $r>0$, pick any $f_{s''}$ of degree $6g-6-(m+2r)$. By~\eqref{eqn:f5.claim} it is $D^{(w,\Sigma)}_{\S \times {\Bbb C \Bbb P}^1}(z_sz_{s'}z_{s''}e^{t{\Bbb C \Bbb P}^1})=O(t^{r+1})$ and $$ D^{(w,\Sigma)}_{\S \times {\Bbb C \Bbb P}^1}(z_sz_{s'}z_{s''}e^{t{\Bbb C \Bbb P}^1})=\langle\hat{e}_{s}\hat{e}_{s'},\hat{e}_{s''}\rangle = \langle\hat g_m, \hat{e}_{s''}\rangle +O(t^{r+1})=- \langle G_{i+j+2r}, f_{s''}\rangle t^r+O(t^{r+1}). $$ So $\langle G_{i+j+2r},f_{s''}\rangle =0$, i.e. $G_{i+j+2r}=0$. \end{pf} From~\eqref{eqn:f4.an1} a basis of $H^*({\cal N}_g)[[t]]$ as ${\Bbb C}[[t]]$-module is given by $$ \{x^{(k)}_ia^nb^mc^r/ k=0,1,\ldots, g-1, \> n+m+r < g-k, \> i \in B_k \}. $$ Recalling $x^{(k)}_0=c_1c_2 \cdots c_k \in\L_0^k H^3$, a complete set of relations satisfied in $H^*({\cal N}_g)$ is given by $x_0^{(k)} q^i_{g-k}$, $i=1,2,3$, $0 \leq k \leq g$, together with the $\text{Sp}\, (2g,{\Bbb Z})$-transforms of these. 
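As a quick check of this description, for $g=1$ the basis reduces to the single element $1$ (only $k=0$ and $n=m=r=0$ occur), in agreement with ${\cal N}_1$ being a single point. For $g=2$ we get the elements $1$, $a$, $b$, $c$ (for $k=0$) together with the four elements $x^{(1)}_i$, $i \in B_1$ (for $k=1$, since $\L^1_0 H^3=H^3$ has dimension $2g=4$), so $H^*({\cal N}_2)$ has total dimension $8$, in agreement with the known Betti numbers of ${\cal N}_2$.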
Now identifying $H^3$ with the $2g$-dimensional subspace of $HFF^3_g$ generated by $\seq{\hat{\psi}}{1}{2g}$, proposition~\ref{prop:f5.deform} implies that the set $$ \{x^{(k)}_i\hat{\a}^a\hat{\b}^b\hat{\gamma}^c/ k=0,1,\ldots, g-1, \> a+b+c < g-k, \> i \in B_k \}, $$ where $x^{(k)}_i \in \L^k_0 H^3 \subset HFF_g^*$, is a basis for $HFF_g^*$ as ${\Bbb C}[[t]]$-module, where Fukaya-Floer multiplication is understood. Also from proposition~\ref{prop:f4.homology}, we can write $$ H^*({\cal N}_g)[[t]]=\bigoplus_{k=0}^g \L^k_0 H^3 \otimes {{\Bbb C}[[t]][a,b,c] \over (q_{g-k}^1,q_{g-k}^2,q_{g-k}^3)}. $$ The products in both $H^*({\cal N}_g)[[t]]$ and $HFF_g^*$ are $\text{Sp}\, (2g,{\Bbb Z})$-equivariant and the isomorphism in the statement of proposition~\ref{prop:f5.deform} is also $\text{Sp}\, (2g,{\Bbb Z})$-equivariant. Then we can use the arguments in the proof of~\cite[proposition 16]{quantum} to write $$ HFF_g^*= \bigoplus_{k=0}^g \L^k_0 H^3 \otimes {{\Bbb C}[[t]][\hat{\a},\hat{\b},\hat{\gamma}] \over ({\cal R}_{g-k}^1,{\cal R}_{g-k}^2,{\cal R}_{g-k}^3)}, $$ where if we put $x^{(k)}_0=\hat{\psi}_1\hat{\psi}_2 \cdots \hat{\psi}_k \in\L_0^k H^3$, then $x^{(k)}_0 {\cal R}_{g-k}^i$, $i=1,2,3$, $0 \leq k \leq g$, and their $\text{Sp}\, (2g,{\Bbb Z})$-transforms, are a complete set of relations for $HFF_g^*$. More explicitly, we decompose $HFF_g^*=\bigoplus\limits_{k=0}^g {\cal V}_k$, where ${\cal V}_k=\L^k_0 H^3 \otimes {\cal F}_{g-k}$ is the image of $$ \L^k_0 H^3 \otimes {\Bbb C}[[t]][\hat{\a},\hat{\b},\hat{\gamma}] \rightarrow HFF_g^*, $$ so in particular, the invariant part is ${\cal V}_0=(HFF_g^*)_I$ and ${\cal V}_g=0$. 
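Note that ${\cal R}^1_0=1$ (see below), so that ${\cal J}_0$ is the whole ring and $$ {\cal F}_0={\Bbb C}[[t]][\hat{\a},\hat{\b},\hat{\gamma}]/{\cal J}_0=0, $$ which explains the vanishing ${\cal V}_g=\L^g_0 H^3 \otimes {\cal F}_0=0$ above.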
Then $$ HFF_g^*= \bigoplus_{k=0}^g \L^k_0 H^3 \otimes {\cal F}_{g-k}, $$ where \begin{equation} {\cal F}_{g-k}={{\Bbb C}[[t]][\hat{\a},\hat{\b},\hat{\gamma}] \over {\cal J}_{g-k}}, \label{eqn:cFr} \end{equation} for $0\leq k\leq g$, where the generators of the ideal ${\cal J}_{g-k} \subset {\Bbb C}[[t]][\hat{\a},\hat{\b},\hat{\gamma}]$ are obtained by writing $q^1_{g-k}$, $q^2_{g-k}$, $q^3_{g-k}$ in terms of the Fukaya-Floer product (see~\cite{ST2} for an analogous argument in the study of quantum cohomology) $$ q^i_{g-k} = \sum c^i_{abcd} \hat{\a}^a\hat{\b}^b\hat{\gamma}^c t^d, $$ where the sum runs over $a+b+c<g-k$, $d\geq 0$, $2a+4b+6c-2d =\deg q^i_{g-k}-4r$, $r>0$ and $c^i_{abcd} \in {\Bbb C}$. So ${\cal J}_{g-k}= ({\cal R}_{g-k}^1,{\cal R}_{g-k}^2,{\cal R}_{g-k}^3)$ with $$ {\cal R}^i_{g-k}= q^i_{g-k} -\sum c^i_{abcd} \hat{\a}^a\hat{\b}^b\hat{\gamma}^c t^d. $$ The elements ${\cal R}^i_{g-k}$ are uniquely defined by the following two conditions: $x_0^{(k)}{\cal R}^i_{g-k}=0 \in HFF^*_g$ and ${\cal R}^1_{g-k}$ (respectively ${\cal R}^2_{g-k}$, ${\cal R}^3_{g-k}$) equals $\hat{\a}^{g-k}$ (respectively $\hat{\a}^{g-k-1}\hat{\b}$, $\hat{\a}^{g-k-1}\hat{\gamma}$) plus terms of the form $\hat{\a}^a\hat{\b}^b\hat{\gamma}^c t^d$ with $a+b+c <g-k$. Note that they might depend, in principle, not only on $g-k$ but also on the genus $g$ (which is fixed throughout this section). In particular ${\cal R}^1_0=1$, ${\cal R}^2_0=0$ and ${\cal R}^3_0=0$. \begin{lem} \label{lem:f5.pr} For $0 \leq r \leq g-1$, it is $\hat{\gamma} {\cal J}_r \subset {\cal J}_{r+1}\subset {\cal J}_r$. \end{lem} \begin{pf} Analogous to~\cite[lemma 17]{quantum}. \end{pf} \begin{thm} \label{thm:f5.main} Fix $g \geq 1$. Let $\S=\S_g$ be a Riemann surface of genus $g$. The Fukaya-Floer cohomology $HFF_g^*=HFF^*(\Sigma \times {{\Bbb S}}^1,{\Bbb S}^1)$ has a presentation $$ HFF^*_g= \bigoplus_{k=0}^g \L_0^k H^3 \otimes {\Bbb C}[[t]][\hat{\a},\hat{\b},\hat{\gamma}] /{\cal J}_{g-k}. 
$$ where ${\cal J}_r=({\cal R}^1_r, {\cal R}^2_r,{\cal R}^3_r)$ and ${\cal R}^i_r$ are defined recursively by setting ${\cal R}^1_0=1$, ${\cal R}^2_0=0$, ${\cal R}^3_0=0$ and putting, for all $0\leq r \leq g-1$, \begin{equation} \left\{ \begin{array}{l} {\cal R}_{r+1}^1 = (\hat{\a}+f_{11}(t)) {\cal R}_r^1 + r^2 (1+f_{12}(t)) {\cal R}_r^2 +f_{13}(t){\cal R}_r^3 \\ {\cal R}_{r+1}^2 = (\hat{\b}+(-1)^{r+1} 8+f_{21}(t)) {\cal R}_r^1 +f_{22}(t){\cal R}_r^2+ ({2r \over r+1}+f_{23}(t)) {\cal R}_r^3 \\ {\cal R}_{r+1}^3 = \hat{\gamma} {\cal R}_r^1 \end{array} \right. \label{eqn:f5.tex} \end{equation} for some (unknown) functions $f_{ij}^{r,g}(t) \in t{\Bbb C}[[t]][\hat{\a},\hat{\b},\hat{\gamma}]$, depending on $r$ and $g$. Moreover $f_{ij}$ are such that $f_{11}{\cal R}^1_r+f_{12}{\cal R}^2_r+f_{13}{\cal R}^3_r$ and $f_{21}{\cal R}^1_r+f_{22}{\cal R}^2_r+f_{23}{\cal R}^3_r$ are both ${\Bbb C}[[t]]$-linear combinations of the monomials $\hat{\a}^a\hat{\b}^b\hat{\gamma}^c$, $a+b+c< r+1$. \end{thm} \begin{pf} We only have to prove the recurrence stated in~\eqref{eqn:f5.tex}, which is similar to~\cite[theorem 10]{floer}. The inclusion $\hat{\gamma} {\cal J}_{r} \subset {\cal J}_{r+1}$ says that $\hat{\gamma} {\cal R}_r^1$ must be in ${\cal J}_{r+1}$, so it must coincide with ${\cal R}_{r+1}^3$. Now the inclusion ${\cal J}_{r+1}\subset {\cal J}_r$ implies the recurrence as written in~\eqref{eqn:f5.tex} with $f_{ij} \in {\Bbb C}[[t]][\hat{\a},\hat{\b},\hat{\gamma}]$. Lastly, the $\text{Sp}\, (2g,{\Bbb Z})$-equivariant epimorphism $HFF_g^* \twoheadrightarrow HF_g^*$ yields that ${\cal R}^i_r$ reduces to $R^i_r$ when we set $t=0$. Thus the functions $f_{ij}$ are multiples of $t$. 
The last sentence follows from the fact that ${\cal R}_{r+1}^1$ is written as a series with leading term $\hat{\a}^{r+1}$ plus terms of the form $\hat{\a}^a\hat{\b}^b\hat{\gamma}^c t^d$, $a+b+c< r+1$, and that ${\cal R}_{r+1}^2$ is written as a series with leading term $\hat{\a}^{r}\hat{\b}$ plus terms of the form $\hat{\a}^a\hat{\b}^b\hat{\gamma}^c t^d$, $a+b+c< r+1$. \end{pf} We give the following two results, whose proofs are left to the reader, for completeness. \begin{cor} \label{cor:f5.nSS1} Fix $g \geq 1$. Let $\S=\S_g$ be a Riemann surface of genus $g$. Let $n \in{\Bbb Z}$. The Fukaya-Floer cohomology $HFF^*(\Sigma \times {{\Bbb S}}^1,n {\Bbb S}^1)$ has a presentation $$ HFF^*(\Sigma \times {{\Bbb S}}^1,n {\Bbb S}^1)= \bigoplus_{k=0}^g \L_0^k H^3 \otimes {\Bbb C}[[t]][\hat{\a},\hat{\b},\hat{\gamma}] /{\cal J}_{g-k}. $$ where ${\cal J}_r=({\cal R}^1_r, {\cal R}^2_r,{\cal R}^3_r)$ and ${\cal R}^i_r$ are defined recursively by setting ${\cal R}^1_0=1$, ${\cal R}^2_0=0$, ${\cal R}^3_0=0$ and putting, for all $0 \leq r\leq g-1$, $$ \left\{ \begin{array}{l} {\cal R}_{r+1}^1 = (\hat{\a}+f_{11}(nt)) {\cal R}_r^1 + r^2 (1+f_{12}(nt)) {\cal R}_r^2 +f_{13}(nt){\cal R}_r^3 \\ {\cal R}_{r+1}^2 = (\hat{\b}+(-1)^{r+1} 8+f_{21}(nt)) {\cal R}_r^1 +f_{22}(nt){\cal R}_r^2+ ({2r \over r+1}+f_{23}(nt)) {\cal R}_r^3 \\ {\cal R}_{r+1}^3 = \hat{\gamma} {\cal R}_r^1 \end{array} \right. $$ \end{cor} In particular, for $n=0$ we recover proposition~\ref{prop:f4.floer}. \begin{prop} \label{prop:f5.reader} Let $n \in {\Bbb Z}$ and consider $HFF^*(\Sigma \times {{\Bbb S}}^1,n {\Bbb S}^1)$. Then for any $\a \in H_0(\S)$ or $\a\in H_1(\S)$, the action of $\mu (\a \times{\Bbb S}^1)$ in $HFF^*(\Sigma \times {{\Bbb S}}^1,n {\Bbb S}^1)$ is zero. \end{prop} \subsection{Reduced Fukaya-Floer homology} \label{subsec:rHFF} We introduce the following space associated to $HFF_g^*$ in order to understand the gluing theory of $4$-manifolds with $b^+>1$ which are of strong simple type. 
\begin{defn} \label{def:rHFF} We define the {\em reduced Fukaya-Floer homology of $\Sigma \times {{\Bbb S}}^1$\/} to be $$ \overline{HFF}{}^*_g= HFF^*_g / (\hat{\b}^2-64, \seq{\hat{\psi}}{1}{2g}) \cong \ker(\hat{\b}^2-64) \cap \bigcap_{1\leq i\leq 2g} \ker \hat{\psi}_i, $$ where the last isomorphism is Poincar\'e duality. Note that $\overline{HFF}{}^*_g= (HFF^*_g)_I / (\hat{\b}^2-64,\hat{\gamma})$. \end{defn} Suppose that $X_1$ is a $4$-manifold with boundary $\partial X_1=Y$ (and $w\in H^2(X_1;{\Bbb Z})$ satisfies $w|_Y=w_2=\text{P.D.} [{\Bbb S}^1]$) such that $X=X_1 \cup_Y A$ is a closed $4$-manifold with $b^+>1$ and of strong simple type. Then $$ \phi^w(X_1, z_1 e^{tD_1}) \in \ker(\hat{\b}^2-64) \cap \bigcap_{1\leq i\leq 2g} \ker \hat{\psi}_i \cong \overline{HFF}{}^*_g, $$ for any $z_1 \in \AA(X_1)$ and any $D_1\subset X_1$ with $\partial D_1={\Bbb S}^1$. Indeed $\langle(\hat{\b}^2-64)\phi^w(X_1, z_1e^{tD_1}), \hat{e}_s\rangle = D^{(w,\Sigma)}_X(16 z_1z_s (x^2-4) e^{tD})=0$, for any $s\in{\cal S}$, where $D=D_1+\Delta \in H_2(X)$. Then $(\hat{\b}^2-64)\phi^w(X_1, z_1e^{tD_1})=0$. Analogously $\hat{\psi}_i \phi^w(X_1, z_1e^{tD_1})=0$, for $1\leq i \leq 2g$. \begin{prop} \label{prop:f5.10} For $0\leq k \leq g-1$, there exists a non-zero vector $v \in (HFF^*_g)_I$ such that \begin{eqnarray*} \hat{\a} v &=& \left\{ \begin{array}{ll} (\pm 4(g-k-1)+2t) \, v & \text{$g-k$ even} \\ (\pm 4(g-k-1) \sqrt{-1}-2t) \, v \qquad & \text{$g-k$ odd} \end{array} \right. \\ \hat{\b} v &=& (-1)^{g-k-1} 8 \, v \\ \hat{\gamma} v & =& 0 \end{eqnarray*} \end{prop} \begin{pf} This is an extension of~\cite[proposition 12]{floer}. We shall construct the vector corresponding to the plus sign, the other being analogous. We have the following cases: \begin{itemize} \item $0=k<g-1$. We shall look for $v \in (HFF_g^*)_I$ constructed as the relative invariants of a particular $4$-manifold (see section~\ref{sec:f3}). 
To find such a vector $v$ we use the same manifold as in the proof of~\cite[proposition 12]{floer}. This is a $4$-manifold $X=C_g$ with an embedded Riemann surface $\S$ of genus $g$ and self-intersection zero, and $w \in H^2(X;{\Bbb Z})$ with $w\cdot \S \equiv 1 \pmod 2$. Such an $X$ is of simple type, with $b_1=0$, $b^+>1$. Suppose for simplicity that $g-k=g$ is even (the other case is analogous). Then the Donaldson invariants of $X$ are \begin{equation} D^{(w,\Sigma)}_{X}(e^{\a})= -2^{3g-5} e^{Q(\a)/2} e^{K \cdot \a} + (-1)^g 2^{3g-5} e^{Q(\a)/2} e^{-K \cdot \a}, \label{eqn:f5.nose} \end{equation} for a single basic class $K\in H^2(X;{\Bbb Z})$ with $K\cdot \S=2g-2$. Let $X_1$ be $X$ with a small open tubular neighbourhood of $\S$ removed, so that $X=X_1 \cup_Y A$. Consider $D\subset X$ intersecting $\S$ transversely in just one positive point. Let $D_1=X_1 \cap D\subset X_1$, so that $\partial D_1={\Bbb S}^1$ and $D=D_1 +\Delta$. Then set $$ v = \phi^w(X_1,(\S + 2g-2-t)e^{tD_1}) \in HFF^*(\Sigma \times {{\Bbb S}}^1,{\Bbb S}^1)_I. $$ Let us prove that this $v$ has the required properties. For any $z_s=\S^n x^m \gamma_{i_1} \cdots \gamma_{i_r}$, we compute from~\eqref{eqn:f5.nose} that $\langle v, \hat{e}_s\rangle =\langle \phi^w(X_1,(\S + 2g-2-t)e^{tD_1}), \phi^w(A, z_s e^{t\Delta})\rangle = D^{(w,\Sigma)}_{X}((\S+2g-2-t) z_s e^{tD})$ is equal to $$ \left\{ \begin{array}{ll} 0, & r >0 \\ -2^{3g-4}(2g-2)(2g-2+t)^n 2^m\, e^{Q(tD)/2+tK\cdot D}, \qquad & r=0 \end{array} \right. $$ Then $\langle \hat{\a} v , \hat{e}_s\rangle = \langle \phi^w(X_1,2 \,\S(\S + 2g-2-t)e^{tD_1}), \phi^w(A, z_s e^{t\Delta})\rangle = D^{(w,\Sigma)}_X((\S+2g-2-t) 2 \,\S\, z_s e^{t D})= (4g-4+2t) \langle v, \hat{e}_s\rangle $, for all $s \in {\cal S}$. Then $\hat{\a} v = (4g-4+2t) v$. Analogously, $\hat{\gamma} v =0$ and $\hat{\b} v= -8 v$. \item $0<k<g-1$. 
The same argument above for genus $g-k$ produces a $4$-manifold $C_{g-k}$ with an embedded Riemann surface $\S_{g-k}$ of genus $g-k$ and self-intersection zero with a single basic class $K\in H^2(C_{g-k};{\Bbb Z})$ with $K\cdot \S_{g-k}=2(g-k)-2$. Let now $X=C_{g-k} \# k\, {\Bbb S}^1 \times {\Bbb S}^3$ (performing the connected sum well apart from $D$). Consider the torus ${\Bbb S}^1\times{\Bbb S}^1 \subset {\Bbb S}^1\times {\Bbb S}^3$ and the internal connected sum $\S=\S_g =\S_{g-k} \# k\, {\Bbb S}^1\times{\Bbb S}^1 \subset X=C_{g-k} \# k\, {\Bbb S}^1\times{\Bbb S}^3$. When choosing the basis of $H_1(\S;{\Bbb Z})$, we arrange $\gamma_1,\ldots, \gamma_k$ such that $\gamma_i={\Bbb S}^1 \times \text{pt}$ in the $i$-th copy of ${\Bbb S}^1\times{\Bbb S}^3$. Suppose for instance that $g-k$ is even. Then by lemma~\ref{lem:f5.S1xS3} below it is, for any $\a \in H_2(C_{g-k})=H_2(X)$, $$ D^{(w,\Sigma)}_{X}(\gamma_1\cdots\gamma_k e^{\a})= c^k \,D^{(w,\Sigma)}_{C_{g-k}}(e^{\a}) = c^k \left( -2^{3(g-k)-5} e^{Q(\a)/2} e^{K \cdot \a} + (-1)^{g-k} 2^{3(g-k)-5} e^{Q(\a)/2} e^{-K \cdot \a} \right), $$ with $w \in H^2(C_{g-k};{\Bbb Z})$ as in the first case. Write again $X=X_1 \cup_Y A$ and consider $D\subset X$ intersecting $\S$ transversely in one point with $D\cdot \S=1$, so that $D=D_1 +\Delta$ with $\partial D_1={\Bbb S}^1$. Then the element $$ v=\phi^w(X_1, (\S+2(g-k)-2-t)\gamma_1\cdots \gamma_k e^{tD_1})\in (HFF^*_g)_I $$ satisfies the required properties. Note that $v$ is invariant since it has non-zero pairing only with elements in $(HFF_g^*)_I \subset HFF_g^*$. \item $k=0$ and $g=1$. Let $S$ be the $K3$ surface and fix an elliptic fibration for $S$, whose generic fibre is an embedded torus $\S={\Bbb T}^2$. The Donaldson invariants are, for $w \in H^2(S; {\Bbb Z})$ with $w \cdot \S \equiv 1 \pmod 2$, $D^{(w,\Sigma)}_S (e^{tD})=-e^{-Q(tD)/2}$. Fix $D \subset S$ cutting $\S$ transversely in one point such that $\S\cdot D=1$. Then $D^{(w,\Sigma)}_S (\S \, e^{tD})= te^{-Q(tD)/2}$. 
Let $S_1$ be the complement of a small open tubular neighbourhood of $\S$ in $S$ and $D_1=S_1\cap D \subset S_1$, so that $\partial D_1={\Bbb S}^1$. Then $v=\phi^w(S_1, e^{tD_1})$ generates $HFF_1^*$ and $\phi^w(S_1, \S \, e^{tD_1})= -t \phi^w(S_1, e^{tD_1})$, so that $\hat{\a} v =-2t v$. Analogously $\hat{\b} v=8 v$ and $\hat{\gamma} v=0$. \item $0 <k=g-1$. We use the same trick as in the second case, considering the connected sum of the $K3$ surface with $k$ copies of ${\Bbb S}^1\times{\Bbb S}^3$. \end{itemize} \end{pf} \begin{lem} \label{lem:f5.S1xS3} Let $X$ be a $4$-manifold with $b^+> 1$, and $z \in \AA(X)$. Then consider $\tilde X=X\# {\Bbb S}^1\times{\Bbb S}^3$ and $\gamma={\Bbb S}^1\times\text{pt} \subset {\Bbb S}^1\times{\Bbb S}^3$ the natural generator of the fundamental group of ${\Bbb S}^1\times{\Bbb S}^3$. We can view $\gamma$ as an element of $\AA(\tilde X)$. For any $w \in H^2(X;{\Bbb Z})=H^2(\tilde X;{\Bbb Z})$, it is $D^w_{\tilde X}(\gamma \, z)=c \, D^w_X (z)$, where $c$ is a universal constant. \end{lem} \begin{pf} Consider the moduli space ${\cal M}^{w,\kappa}_{\tilde X}$ of ASD connections over $\tilde X$ of dimension $d=3+\deg(z)$, $\kappa$ denoting the charge~\cite{KM}. Then there is a choice of generic cycles $V_z$, $V_{\gamma}$ in ${\cal M}^{w,\kappa}_{\tilde X}$ such that $$ M= {\cal M}^{w,\kappa}_{\tilde X} \cap V_z $$ is a smooth compact $3$-dimensional manifold, and $D^w_{\tilde X}(\gamma \, z)=\# (M\cap V_{\gamma})$. For metrics giving a long neck to the connected sum $\tilde X=X\# {\Bbb S}^1\times{\Bbb S}^3$, the usual counting arguments give that the only possible distribution of charges of limiting connections is $\kappa$ on the $X$ side and $0$ on the ${\Bbb S}^1\times{\Bbb S}^3$ side. 
Now recall that the moduli space of flat $SO(3)$-connections on ${\Bbb S}^1\times{\Bbb S}^3$ is $\text{Hom}(\pi_1({\Bbb S}^1\times{\Bbb S}^3), SO(3))=SO(3)$, hence $$ M= \left({\cal M}^{w,\kappa}_{X} \cap V_z\right) \times SO(3), $$ where ${\cal M}^{w,\kappa}_{X} \cap V_z$ consists of $D^w_X (z)$ points (counted with signs). The description of the cycle $V_{\gamma}$ given in~\cite{KM} implies that there is a universal constant $c=\#\left( SO(3)\cap V_{\gamma}\right)$ yielding the statement of the lemma. \end{pf} \begin{thm} \label{thm:f5.rHFF} $\overline{HFF}{}^*_g$ is a free ${\Bbb C}[[t]]$-module of rank $2g-1$. Moreover $\overline{HFF}{}^*_g=\bigoplus\limits_{i=-(g-1)}^{g-1} R_{g,i}$, where $R_{g,i}$ are free ${\Bbb C}[[t]]$-modules of rank $1$ such that for $i$ odd, $\hat{\a}=4i +2t$ and $\hat{\b}=-8$ in $R_{g,i}$. For $i$ even, $\hat{\a}=4i\sqrt{-1} -2t$ and $\hat{\b}=8$ in $R_{g,i}$. \end{thm} \begin{pf} Since $\hat{\a}^a\hat{\b}^b\hat{\gamma}^c$, $a+b+c<g$, form a basis for $(HFF^*_g)_I$, we have that $\hat{\a}^a\hat{\b}^b$, $a+b<g$, $b=0,1$, generate $\overline{HFF}{}^*_g$ as ${\Bbb C}[[t]]$-module. Therefore the rank of $\overline{HFF}{}^*_g$ is at most $2g-1$. Now using Poincar\'e duality in $HFF^*_g$, we have that the dual of $\overline{HFF}{}^*_g$ is \begin{equation} \ker (\hat{\b}^2-64) \cap \ker \hat{\psi}_1 \cap \cdots \cap \ker \hat{\psi}_{2g} \subset HFF^*_g. \label{eqn:f5.k} \end{equation} By proposition~\ref{prop:f5.10} there are at least $2g-1$ independent vectors in~\eqref{eqn:f5.k}, so the rank of $\overline{HFF}{}^*_g$ is exactly $2g-1$ and it must be a free ${\Bbb C}[[t]]$-module. The $2g-1$ eigenvalues of $(\hat{\a},\hat{\b})$ given by proposition~\ref{prop:f5.10} provide the decomposition in the statement. \end{pf} \subsection{Effective Fukaya-Floer homology} \label{subsec:effective} We introduce the following space associated to $HFF_g^*$ in order to understand the gluing theory of general $4$-manifolds which have $b^+>1$. 
\begin{defn} \label{def:f5.eHFF} We define the effective Fukaya-Floer homology as the sub-${\Bbb C}[[t]]$-module $\widetilde{HFF}{}_g^* \subset HFF^*_g$ generated by all $\phi^w(X_1, z_1 e^{tD_1})$, for all $4$-manifolds $X_1$ with boundary $\partial X_1=Y$ such that $X=X_1\cup_Y A$ has $b^+>1$, $z_1 \in \AA(X_1)$, $D_1 \subset X_1$ with $\partial D_1={\Bbb S}^1$ and $w\in H^2(X_1;{\Bbb Z})$ with $w|_Y=w_2=\text{P.D.}[{\Bbb S}^1]$. \end{defn} The action of $\text{Sp}\, (2g,{\Bbb Z})$ on $HFF_g^*$ restricts to an action on $\widetilde{HFF}{}_g^*$. Also $\hat{\a}$, $\hat{\b}$, $\seq{\hat{\psi}}{1}{2g}$ (and hence $\hat{\gamma}$) act on $\widetilde{HFF}{}_g^*$ by multiplication. Now $$ \widetilde{HFF}{}_g^*=\bigoplus_{k=0}^g \L^k_0 H^3 \otimes \widetilde{{\cal F}}_{g-k} \subset \bigoplus_{k=0}^g \L^k_0 H^3 \otimes {\cal F}_{g-k}, $$ with $\widetilde{{\cal F}}_r \subset {\cal F}_r$, $0\leq r\leq g$. We aim to find the eigenvalues of $(\hat{\a},\hat{\b},\hat{\gamma})$ in $\widetilde{{\cal F}}_r$. Some preliminary information on ${\cal F}_r$ is needed. The following result is proved analogously to proposition~\ref{prop:f4.fij}, \begin{prop} \label{prop:f5.redi} Let $\overline{{\cal F}}_r={\cal F}_r/\hat{\gamma} {\cal F}_r$, $0\leq r\leq g$. Then $\overline{{\cal F}}_r$ is a free ${\Bbb C}[[t]]$-module with basis $\hat{\a}^a\hat{\b}^b$, $a+b<r$. We have $\overline{{\cal F}}_r ={\Bbb C}[[t]][\hat{\a},\hat{\b}]/ \bar{{\cal J}}_r$, where $\bar{{\cal J}}_r= (\bar{{\cal R}}^1_r, \bar{{\cal R}}^2_r)$, and $\bar{{\cal R}}^i_r$ are determined by $\bar{{\cal R}}_{0}^1=1$, $\bar{{\cal R}}_{0}^2=0$ and for $0 \leq r \leq g-1$, $$ \left\{ \begin{array}{l} \bar{{\cal R}}_{r+1}^1 = (\hat{\a}+\bar{f}_{11}) \bar{{\cal R}}_r^1 + r^2 (1+\bar{f}_{12}) \bar{{\cal R}}_r^2 \\ \bar{{\cal R}}_{r+1}^2 = (\hat{\b}+(-1)^{r+1} 8+\bar{f}_{21}) \bar{{\cal R}}_r^1 + \bar{f}_{22}\bar{{\cal R}}_r^2 \end{array}\right. $$ for some functions $\bar{f}_{ij}(t)\in t{\Bbb C}[[t]][\hat{\a},\hat{\b}]$. 
Moreover $\bar f_{ij}$ are such that $\bar f_{11}\bar{\cal R}^1_r+\bar f_{12}\bar{\cal R}^2_r$ and $\bar f_{21}\bar{\cal R}^1_r+\bar f_{22}\bar{\cal R}^2_r$ are both ${\Bbb C}[[t]]$-linear combinations of the monomials $\hat{\a}^a\hat{\b}^b$, $a+b< r+1$. \hfill $\Box$ \end{prop} \begin{lem} \label{lem:f5.redi} We have $\bar{{\cal J}}_r / \bar{{\cal J}}_{r+1}= \ker (\overline{{\cal F}}_{r+1}\twoheadrightarrow \overline{{\cal F}}_r) = \bigoplus\limits_{-r\leq i\leq r\atop i\equiv r\pmod 2} R_{r+1,i}$, where $R_{r+1,i}$ is a free ${\Bbb C}[[t]]$-module of rank $1$. For $r$ even, $\hat{\a}=4i\sqrt{-1}+O(t)$ and $\hat{\b}=8+O(t)$ in $R_{r+1,i}$. For $r$ odd, $\hat{\a}=4i+O(t)$, $\hat{\b}=-8+O(t)$ in $R_{r+1,i}$. \end{lem} \begin{pf} The natural map $HFF^*_g \twoheadrightarrow HF^*_g$ given by equating $t=0$ and lemma~\ref{lem:f4.redi} give the following commutative diagram with exact rows \begin{equation*} \begin{array}{ccccc} {\bar{{\cal J}}_r / \bar{{\cal J}}_{r+1}} &\hookrightarrow& \overline{{\cal F}}_{r+1} &\twoheadrightarrow &\overline{{\cal F}}_r \\ \downarrow & & \downarrow & &\downarrow \\ {\bar{J}_r / \bar{J}_{r+1}} &\hookrightarrow &\overline{F}_{r+1}& \twoheadrightarrow& \overline{F}_r \end{array} \end{equation*} where $\text{rk}_{{\Bbb C}[[t]]}(\bar{{\cal J}}_r /\bar{{\cal J}}_{r+1})=\dim (\bar{J}_r /\bar{J}_{r+1})={r+2\choose 2}-{r+1 \choose 2} =r+1$. Suppose for instance that $r$ is odd. Then lemma~\ref{lem:f4.redi} implies that $P(\a)= \prod\limits_{-r\leq i\leq r\atop i\equiv r\pmod 2} (\a-4i)$ is the characteristic polynomial of the action of $\a$ on ${\bar{J}_r / \bar{J}_{r+1}}$. Therefore (and since all the roots are simple) the characteristic polynomial of the action of $\hat{\a}$ on ${\bar{{\cal J}}_r / \bar{{\cal J}}_{r+1}}$ is $P_t(\a)= \prod\limits_{-r\leq i\leq r\atop i\equiv r\pmod 2} (\a-4i-f_i(t))$, for some $f_i(t)\in t{\Bbb C}[[t]]$. 
This implies that $\bar{{\cal J}}_r / \bar{{\cal J}}_{r+1}= \bigoplus\limits_{-r\leq i\leq r\atop i\equiv r\pmod 2} R_{r+1,i}$, where $R_{r+1,i}$ is a free ${\Bbb C}[[t]]$-module of rank $1$ with $\hat{\a}=4i+f_i(t)$. The eigenvalue of $\hat{\b}$ on $R_{r+1,i}$ must be of the form $-8+O(t)$. The case $r$ even is analogous. \end{pf} \begin{thm} \label{thm:f5.eHFF} The eigenvalues of $(\hat{\a}, \hat{\b},\hat{\gamma})$ in $\widetilde{HFF}{}_g^*$ are $(-2t,8,0)$, $(\pm 4+2t, -8 ,0)$, $(\pm 8 \sqrt{-1}-2t, 8,0)$, $\ldots$, $(\pm 4(g-1) \sqrt{-1}^g +(-1)^{g} 2t, (-1)^{g-1} 8 , 0)$. \end{thm} \begin{pf} As $\hat{\gamma} {\cal J}_{r-1} \subset {\cal J}_r$, one has $\hat{\gamma}^g \in {\cal J}_g$, i.e. $\hat{\gamma}^g=0$ in $HFF_g^*$, so the only eigenvalue of $\hat{\gamma}$ in $HFF_g^*$, and hence in $\widetilde{HFF}{}_g^*$, is zero. To compute the eigenvalues of $\hat{\b}$ in $HFF^*_g$ we may restrict to $HFF^*_g/(\hat{\gamma})$, i.e.\ to every $\overline{{\cal F}}_{g-k}$. Using lemma~\ref{lem:f5.redi} recursively we find that all the eigenvalues of $\hat{\b}$ in $\overline{{\cal F}}_r$ are of the form $\pm 8+O(t)$. Thus all the eigenvalues of $\hat{\b}$ in $HFF^*_g$ are of the form $\pm 8+O(t)$. To get the eigenvalues of $\hat{\b}$ in $\widetilde{HFF}{}_g^*$, let us argue by contradiction. Suppose that there is an eigenvalue different from $\pm 8$. By definition of $\widetilde{HFF}{}_g^*$, there exists a vector $v=\phi^w(X_1,z_1e^{tD_1}) \in \widetilde{HFF}{}_g^*$ such that $X=X_1\cup_Y A$ is a $4$-manifold with $b^+>1$, $z_1 \in \AA(X_1)$, $D_1\subset X_1$ with $\partial D_1={\Bbb S}^1$, $w \in H^2(X;{\Bbb Z})$ with $w\cdot \S \equiv 1\pmod 2$, satisfying $(\hat{\b}^2-64)^N v \neq 0$, for arbitrarily large $N$. Then there is a polynomial $P(\hat{\b},t)=\prod (\hat{\b} +(-1)^{\varepsilon_i} 8-f_i(t))$ with $f_i(t)\in t{\Bbb C}[[t]]$, $f_i(t)\neq 0$, $\varepsilon_i=0,1$, such that $$ P(\hat{\b}, t) (\hat{\b}^2-64)^N v=0, $$ for some $N\geq 0$. 
Replacing $v$ by $(\hat{\b}^2-64)^N v$, for some large $N$, we can suppose that $N=0$. Therefore $D^{(w,\Sigma)}_X(z_1e^{tD+s\S}) \neq 0$, $D^{(w,\Sigma)}_X(P(-4x,t) z_1e^{tD+s\S}) =0$, with $D=D_1+\Delta$. As $$ D^w_X(z_1e^{tD+s\S}) =\frac{1}{2} \left( D^{(w,\Sigma)}_X(z_1e^{tD+s\S}) +\sqrt{-1}^{d_0-\deg z_1/2} D^{(w,\Sigma)}_X(z_1e^{\sqrt{-1}(tD+s\S)}) \right), $$ for $d_0=d_0(X,w)=-w^2-\frac{3}{2}(1-b_1+b^+)$, we have $D^w_X(z_1e^{tD+s\S}) \neq 0$ and $D^{(w,\Sigma)}_X(Q(x,t) z_1e^{tD+s\S}) =0$, with $Q(x,t)=P(-4x,t)P(4x,\sqrt{-1}t)$. Moreover we can suppose that $z_1\in \AA(X)$ is homogeneous and that all its components are orthogonal to $D$ as well as to $\S$, which we express as $z_1 \in\AA(<\S,D>^{\perp})$. Replacing $D$ by a linear combination $aD+b\S$, $a\neq 0$, we can suppose that $D^2=0$, that $D\in H_2(X;{\Bbb Z})\subset H_2(X)$ is primitive, and that $D\cdot \S \neq 0$. Then $D^w_X(Q(x,at) z_1e^{tD+s\S}) =0$. Also replacing $w$ by $w+\S$ if necessary we can assume that $w\cdot D\equiv 1\pmod 2$. At this stage, we represent $D$ by an embedded surface and invert the roles of $D$ and $\S$. This corresponds to changing the metric: we go from metrics giving a long neck when pulling $\S$ apart to metrics giving a long neck when pulling $D$ apart. The Donaldson invariants of $X$ do not change since $b^+>1$. Arguing as above, $D^w_X(Q'(x,s) z_1e^{tD+s\S}) =0$ for some polynomial $Q'(x,s)$. This time we do not need to worry about whether $Q'$ is independent of $s$; we can take it to be just the characteristic polynomial of $\hat{\b}$ acting on $HFF^*(D\times{\Bbb S}^1, {\Bbb S}^1)$. Now take the resultant of $Q(x,at)$ and $Q'(x,s)$, which is a series $R(s,t) \neq 0$. Then $D^w_X(R(s,t) z_1e^{tD+s\S}) =0$ implies $D^w_X(z_1e^{tD+s\S})=0$, which is a contradiction. This proves that the only eigenvalues of $\hat{\b}$ are $\pm 8$. Finally, to compute the eigenvalues of $\hat{\a}$ we can restrict to $\widetilde{HFF}{}_g^*/(\hat{\b}^2-64,\seq{\hat{\psi}}{1}{2g})$. 
This is a subset of $\overline{HFF}{}^*_g= (HFF^*_g)_I/(\hat{\gamma},\hat{\b}^2-64)$, which is computed in theorem~\ref{thm:f5.rHFF}. Moreover all the eigenvalues in $\overline{HFF}{}^*_g$ are indeed eigenvalues of $\widetilde{HFF}{}_g^*$ as all the vectors constructed in proposition~\ref{prop:f5.10} come from $4$-manifolds with $b^+>1$. This completes the proof. \end{pf} \begin{rem} The author believes that the eigenvalues of $\widetilde{HFF}{}_g^*$ are indeed all the eigenvalues of $HFF^*_g$. \end{rem} \section{Fukaya-Floer homology $HFF_*(\Sigma \times {{\Bbb S}}^1,\d)$} \label{sec:f6} Now we deal with the Fukaya-Floer (co)homology of the $3$-manifold $Y=\Sigma \times {{\Bbb S}}^1$ with the $SO(3)$-bundle with $w_2=\text{P.D.}[{\Bbb S}^1] \in H^2(Y;{\Bbb Z}/2{\Bbb Z})$ and loop $\d \subset \S \subset \Sigma \times {{\Bbb S}}^1$ representing a primitive homology class, and its $\AA(\S)$-module structure. Poincar\'e duality identifies the Fukaya-Floer homology $HFF_*( Y,\d)$ with the Fukaya-Floer cohomology $HFF^*(Y,-\d)$. The $\mu$ map gives an action of $\AA(\S)$ on $HFF_*( Y,\d)$. We shall see later that this in fact gives $HFF_*(Y,\d)$ the structure of a module over $HF_*(Y)$. \subsection{The vector space $HFF_*(\Sigma \times {{\Bbb S}}^1,\d)$} We can suppose that the basis $\{\seq{\gamma}{1}{2g}\}$ of $H_1(\S;{\Bbb Z})$ is chosen so that $[\d]=\gamma_1$ (recall that $\gamma_i \gamma_{i+g}=\text{pt}$ for $1 \leq i \leq g$). The action of $\text{Sp}\, (2g,{\Bbb Z})$ on $\{\gamma_i\}$ restricts to an action of the subgroup $\text{Sp}\, (2g-2,{\Bbb Z})$ on $\gamma_2,\ldots, \gamma_g,\gamma_{g+2},\ldots, \gamma_{2g}$. Any element of $\text{Sp}\, (2g-2,{\Bbb Z})$ can be realized by a diffeomorphism of $\Sigma \times {{\Bbb S}}^1$ fixing $\d$, hence it induces an automorphism of $HFF_*(Y,\d)$. This gives an action of $\text{Sp}\, (2g-2,{\Bbb Z})$ on $HFF_*(Y,\d)$. 
We recall that for computing $HFF_*(Y,\d)$ there is a spectral sequence whose $E_3$ term is $HF_*(Y) \otimes \hat{H}_*({\Bbb C \Bbb P}^{\infty})$, with differential $d_3$ given by $$ \mu(\d):HF_i(Y) \otimes H_{2j}({\Bbb C \Bbb P}^{\infty}) \rightarrow HF_{i-3}(Y) \otimes H_{2j+2}({\Bbb C \Bbb P}^{\infty}) $$ and converging to $HFF_*(Y,\d)$. The $\text{Sp}\, (2g-2,{\Bbb Z})$ action on this $E_3$ term gives the action on $HFF_*(Y,\d)$. Now we can use the description of $HF_g^*=HF^*(Y)$ gathered in proposition~\ref{prop:f4.floer} and the fact that $\mu(\d)$ is multiplication by $\psi_1=\phi^w(A,\gamma_1)$ to get a description of the $E_4$ term of the spectral sequence. \begin{prop} \label{prop:f6.q1} Consider $\psi_1:HF^*_g \rightarrow HF_g^*$. Then $\ker\psi_1/\text{\rm im\,} \psi_1= \bigoplus\limits_{k=0}^{g-1} \L^k_0 H^3_{\text{red}} \otimes K_{g-k}$, where $H^3_{\text{red}}=<\psi_2,\ldots, \psi_g,\psi_{g+2},\ldots,\psi_{2g}>$ and $K_r={J_{r-1}/ (J_r +\gamma J_{r-2})}$. \end{prop} \begin{pf} The space $H^3$ has basis $\psi_1, \psi_2,\ldots, \psi_{2g}$, so we can write $H^3=<\psi_1,\psi_{g+1}>\oplus H^3_{\text{red}}$, where $H^3_{\text{red}}$ is generated by $\psi_2,\ldots, \psi_g,\psi_{g+2},\ldots,\psi_{2g}$ and it is the standard representation of $\text{Sp}\, (2g-2,{\Bbb Z})$ (`$\text{red}$' stands for reduced and follows the notation of~\cite{genus2}). More intrinsically, we can identify $H^3_{\text{red}} \cong <\psi_1>^{\perp}/<\psi_1>$. It is easy to check that $\L_0^k H^3$ decomposes as $$ \L_0^k H^3=\gamma'\L_0^{k-2}H^3_{\text{red}} \oplus \left( <\psi_1,\psi_{g+1}>\otimes \L_0^{k-1} H^3_{\text{red}} \right) \oplus \L_0^k H^3_{\text{red}} $$ as $\text{Sp}\, (2g-2,{\Bbb Z})$-representations, where $\gamma'=-g\psi_1\wedge\psi_{g+1}+\gamma$. The reader can check this directly, noting that $\gamma' \in \L_0^2 H^3$, or otherwise see~\cite[formula (25.36)]{Fulton}. As a shorthand, write $F_r={\Bbb C}[\a,\b,\gamma]/J_r=(HF_r^*)_I$. 
Then proposition~\ref{prop:f4.floer} says that \begin{equation} \begin{array}{rcl} HF_g^*= \bigoplus\limits_{k=0}^{g-1} (\L_0^k H^3 \otimes F_{g-k}) & = &\bigoplus\limits_{k=0}^{g-1} \L_0^k H^3_{\text{red}} \otimes (F_{g-k}\oplus \gamma'F_{g-k-2}) \oplus \\ & & \bigoplus\limits_{k=0}^{g-1}\L_0^k H^3_{\text{red}} \otimes (<\psi_1,\psi_{g+1}> \otimes F_{g-k-1}), \end{array}\label{eqn:f6.dia} \end{equation} as $\text{Sp}\, (2g-2,{\Bbb Z})$-representations. Now multiplication by $\psi_1$ is $\text{Sp}\, (2g-2,{\Bbb Z})$-equivariant and intertwines the two summands in~\eqref{eqn:f6.dia}, i.e. \begin{equation} \begin{array}{ccc} F_{g-k}\oplus \gamma'F_{g-k-2} &\stackrel{\psi_1}{\rightarrow} &<\psi_1,\psi_{g+1}> \otimes F_{g-k-1} \\ x \oplus \gamma' y &\mapsto & \psi_1 \otimes (x +\gamma y) \end{array} \label{eqn:f6.uno} \end{equation} and \begin{equation} \begin{array}{ccc} <\psi_1,\psi_{g+1}> \otimes F_{g-k-1} &\stackrel{\psi_1}{\rightarrow} & F_{g-k}\oplus \gamma'F_{g-k-2} \\ \psi_1 \otimes z &\mapsto & 0\qquad \\ \psi_{g+1} \otimes z &\mapsto & {1\over g}\gamma z \oplus (-{1\over g})\gamma' z \end{array} \label{eqn:f6.dos} \end{equation} In~\eqref{eqn:f6.uno}, $\ker \psi_1= \{x \oplus \gamma'y \in F_{g-k}\oplus \gamma'F_{g-k-2} / x+\gamma y =0 \in F_{g-k-1}\}$ and $\text{\rm im\,} \psi_1= \psi_1 \otimes F_{g-k-1}$. In~\eqref{eqn:f6.dos}, $\ker \psi_1= \psi_1 \otimes F_{g-k-1}$ and $\text{\rm im\,}\psi_1= \{ \gamma y\oplus (-\gamma' y) \in F_{g-k}\oplus \gamma'F_{g-k-2}\}$, so $$ \ker \psi_1/\text{\rm im\,} \psi_1= \bigoplus_{k=0}^{g-1} \L_0^k H^3_{\text{red}} \otimes K_{g-k}, $$ where \begin{eqnarray*} K_{r} &=& { \{x \oplus \gamma'y \in F_{r}\oplus \gamma'F_{r-2} / x+\gamma y =0 \in F_{r-1}\} \over \{ \gamma y\oplus (-\gamma' y) \in F_{r}\oplus \gamma'F_{r-2}\} } \cong \\ & \cong & { \{ x \in F_{r} / x=0 \in F_{r-1}\} \over \{ \gamma y / y=0 \in F_{r-2}\} }= {J_{r-1}/J_r \over \gamma (J_{r-2}/J_r)} = {J_{r-1} \over J_r +\gamma J_{r-2}}. 
\end{eqnarray*} \end{pf} \begin{lem} \label{lem:f6.Kr} As a ${\Bbb C}[\a,\b,\gamma]$-module, $K_r=\bigoplus\limits_{-(r-1) \leq i \leq r-1 \atop i \equiv r-1 \pmod 2} R_i$, where $R_i$ is $1$-dimensional, $\a$ acts as $4i\sqrt{-1}$ if $i$ is even and as $4i$ if $i$ is odd, $\b$ as $(-1)^i8$ and $\gamma$ as zero on $R_i$. \end{lem} \begin{pf} $K_r$ is generated, as ${\Bbb C}[\a,\b,\gamma]$-module, by three elements $R_{r-1}^1$, $R_{r-1}^2$ and $R_{r-1}^3$, which satisfy six relations $R_{r}^1=0$, $R_{r}^2=0$, $R_{r}^3=0$, $\gamma R_{r-2}^1=0$, $\gamma R_{r-2}^2=0$ and $\gamma R_{r-2}^3=0$. Therefore $$ \left\{ \begin{array}{l} 0 = \a R_{r-1}^1 + (r-1)^2 R_{r-1}^2 \\ 0= (\b+(-1)^{r}8) R_{r-1}^1 + {2(r-1) \over r} R_{r-1}^3 \\ 0 = \gamma R_{r-1}^1 \end{array} \right. $$ Also $R_{r-1}^3=\gamma R_{r-2}^1=0$. The first line allows us to write $R_{r-1}^2$ in terms of $R_{r-1}^1$, so $K_r$ is generated by an element $k_r=R_{r-1}^1$, which satisfies $\gamma k_r=0$ and $(\b+(-1)^{r}8) k_r=0$. Therefore $K_r$ is a module over ${\Bbb C}[\a,\b,\gamma]/((\gamma,\b+(-1)^{r}8)+J_r)$. This is a quotient of $(HF_r^*)_I$ which has been computed in~\cite[proposition 20]{floer} to be $$ S_r= \left\{ \begin{array}{ll} {\Bbb C}[\a]/((\a-16(r-1)^2)(\a-16(r-3)^2) \cdots (\a-16\cdot 1^2) ) & \text{$r$ even}\\ {\Bbb C}[\a]/((\a+16(r-1)^2)(\a+16(r-3)^2) \cdots (\a+16\cdot 2^2)\a ) & \text{$r$ odd} \end{array} \right. $$ So $K_r$ is a quotient of $S_r$, being a cyclic module over this ring. In particular $\dim K_r \leq r$. On the other hand, if we consider the action of $\gamma$ on $F_r$,~\cite[corollary 18]{floer} says that $\ker \gamma=J_{r-1}/J_r$. Moreover $\ker \gamma^2=J_{r-2}/J_r$, which is proved in the same fashion. So we can write $K_r={\ker \gamma \over \gamma \ker \gamma^2}$. Now $\dim \ker \gamma={r+1 \choose 2}$, $\dim \ker \gamma^2={r+1 \choose 2} + {r \choose 2}$. 
As the action of (multiplication by) $\gamma$ vanishes on $\ker \gamma \subset \ker \gamma^2$, we have that $\dim (\gamma\ker \gamma^2) \leq {r \choose 2}$. So $\dim (\ker \gamma/(\gamma\ker\gamma^2)) \geq {r+1\choose 2}-{r\choose 2}=r$, and thus $K_r$ must equal $S_r$. \end{pf} Now we are able to write down the $E_4$ term of the spectral sequence. Decompose $\ker \psi_1=\text{\rm im\,} \psi_1 \oplus (\ker \psi_1/\text{\rm im\,} \psi_1)$, where $\text{\rm im\,} \psi_1 \subset \ker \psi_1$ is the null part for the intersection pairing on $\ker \psi_1$. Then $$ E_4= (\text{\rm im\,} \psi_1 \oplus (\ker \psi_1/\text{\rm im\,} \psi_1)) \times (\ker \psi_1/\text{\rm im\,} \psi_1) t \times (\ker \psi_1/\text{\rm im\,} \psi_1) {t^2 \over 2!} \times \cdots $$ So lemma~\ref{lem:f6.Kr} gives $$ E_4= \text{\rm im\,} \psi_1 \oplus \bigoplus_{i,k} \L^k_0 H^3_{\text{red}} \otimes R_i \otimes {\Bbb C}[[t]], $$ where $0 \leq k \leq g-1$, $-(g-k-1)\leq i \leq g-k-1$ and $i \equiv g-k-1 \pmod 2$. We can write $E_4=\text{\rm im\,} \psi_1 \oplus \tilde E_4$, where the intersection pairing vanishes on the first summand. In order to compute Donaldson invariants, this first summand is ineffective, so we will ignore its behaviour through the spectral sequence, and look henceforth at the spectral sequence given by $\tilde E_4$. \begin{prop} \label{prop:f6.E4} The spectral sequence $\tilde E_n$, $n \geq 4$, collapses at the fourth stage, i.e. $d_n=0$, for all $n \geq 4$. \end{prop} \begin{pf} There is a well-defined $\AA(\S)$-module structure on the spectral sequence, since it is defined at the chain level in section~\ref{sec:f3}. Also any $f \in \text{Sp}\, (2g-2,{\Bbb Z})$ induces $f: HFF_*(\Sigma \times {{\Bbb S}}^1, \d) \rightarrow HFF_*(\Sigma \times {{\Bbb S}}^1,\d)$ which can be defined at the chain level and therefore also appears through the spectral sequence. 
Therefore every differential $d_n$ is $\text{Sp}\, (2g-2,{\Bbb Z})$-equivariant, ${\Bbb C}[\a,\b,\gamma]$-linear and ${\Bbb C}[[t]]$-linear. Now $$ \tilde E_4= \bigoplus_{i,k} \L_0^k H^3_{\text{red}} \otimes R_i \otimes {\Bbb C}[[t]] $$ is a direct sum of inequivalent irreducible representations of $\text{Sp}\, (2g-2,{\Bbb Z}) \times {\Bbb C}[\a,\b,\gamma]\otimes {\Bbb C}[[t]]$. So $d_n$ has to send every summand to itself, and $d_n^2=0$ on it implies $d_n=0$. The proposition follows. \end{pf} Henceforth we will only consider \begin{equation} \overline{HFF}_*(Y,\d)= \bigoplus_{i,k} \L_0^k H^3_{\text{red}} \otimes R_i \otimes {\Bbb C}[[t]] \subset HFF_*(Y,\d), \label{eqn:f6.lineHFF} \end{equation} which coincides with $HFF_*(Y,\d)/\text{null part}$. \subsection{The $HF^*_g$-module $HFF_*(\Sigma \times {{\Bbb S}}^1,\d)$} In order to determine the $\AA(\S)$-module structure on $\overline{HFF}_*(Y,\d)$, we consider the natural cobordism between $(Y,\d) \sqcup \, (Y,\o)$ and $(Y,\d)$. It gives the map~\eqref{eqn:f3.3} $$ \cdot : HF_*(Y) \otimes HFF_*(Y,\d) \rightarrow HFF_*(Y,\d). $$ Now for any $\phi \in HFF_*(Y,\d)$, it is $\a \cdot \phi=\phi^w(A,2\,\S)\cdot \phi= 2 \mu(\S)({\bf 1})\cdot \phi={\bf 1}\cdot 2\mu(\S)(\phi) =2\mu(\S) (\phi)$, with ${\bf 1}= \phi^w(A, 1)$. Therefore the action of $2\mu(\S)$ is multiplication by $\a$. Analogously for $\mu(\text{pt})$ and $\mu(\gamma_j)$. Therefore the $\AA(\S)$-module structure reduces to an $HF^*(Y)$-module structure on $HFF_*(Y,\d)$, and hence on $\overline{HFF}_*(Y,\d)$. In~\eqref{eqn:f6.lineHFF}, it is $i \equiv g-k-1 \pmod 2$, so the action of $\mu(\gamma_j)$ vanishes. Therefore we have proved \begin{thm} \label{thm:f6.main} Let $Y=\Sigma \times {{\Bbb S}}^1$ and $\d\subset \S\subset Y$ a loop representing a primitive homology class. Let $\overline{HFF}_*(Y,\d)$ be $HFF_*(Y,\d)$ modulo its null part under the intersection pairing. 
Then $\overline{HFF}_*(Y,\d)$ is an $HF^*(Y)$-module and \begin{equation} \overline{HFF}_*(Y,\d)= \bigoplus_{i,k} \L_0^k H^3_{\text{red}} \otimes R_i \otimes {\Bbb C}[[t]], \label{eqn:f6.yava2} \end{equation} where $0 \leq k \leq g-1$, $-(g-k-1)\leq i \leq g-k-1$ and $i \equiv g-k-1 \pmod 2$. The $R_i$ are $1$-dimensional, $\a$ acts as $4i\sqrt{-1}$ if $i$ is even and as $4i$ if $i$ is odd and $\b$ as $(-1)^i8$. The action of $\psi_j$ is zero. \end{thm} Theorem~\ref{thm:f6.main} gives us the action of $H_*(\S)$ on $\overline{HFF}_*(Y,\d)$, but to get a more intrinsic picture which does not need explicitly the isomorphism $Y\cong\Sigma \times {{\Bbb S}}^1$, we need to give the action of the full $H_*(Y)$ on the Fukaya-Floer cohomology. This is provided by the following \begin{prop} \label{prop:f6.act} Consider $\overline{HFF}_*(Y,\d)$ as given in~\eqref{eqn:f6.yava2}. Then on $R_i\otimes{\Bbb C}[[t]]$, $-4\mu(\text{pt})$ acts as $(-1)^i8$, $\mu(a)=0$ for any $a \in H_1(Y)$ and, for $a \in H_2(Y)$, $2\mu(a)$ is $4(a\cdot{\Bbb S}^1)i\sqrt{-1}- 2(a \cdot \d) t$ if $i$ is even and $4(a\cdot{\Bbb S}^1)i+2(a \cdot \d) t$ if $i$ is odd. \end{prop} \begin{pf} As $Y=\Sigma \times {{\Bbb S}}^1$ is a (trivial) circle bundle over $\S$, we may consider an automorphism of $Y$ as a circle bundle. This is classified by an element $f \in H^1(\S;{\Bbb Z})$, so we shall put $\v_f : Y \rightarrow Y$. The action in homology $\v_f: H_*(Y) \rightarrow H_*(Y)$ is $\v_f(\text{pt})=\text{pt}$, $\v_f(\gamma_j )=\gamma_j+(f[\gamma_j]) {\Bbb S}^1$, $\v_f(\S )= \S + \text{P.D.}[f]\times{\Bbb S}^1$ and $\v_f(\a\times{\Bbb S}^1 )= \a\times{\Bbb S}^1$, for any $\a \in H_*(\S)$. In particular, $$ \d_f=\v_f(\d) =\d +n {\Bbb S}^1, \qquad \text{where $n=f[\d]$.} $$ So $\v_f:HFF_*(Y,\d) \stackrel{\simeq}{\ar} HFF_*(Y,\d_f)$ and hence \begin{equation} \overline{HFF}_*(Y,\d+n{\Bbb S}^1)= \bigoplus_{i,k} \L_0^k H^3_{\text{red}} \otimes R_i \otimes {\Bbb C}[[t]]. 
\label{eqn:f6.i} \end{equation} Now there is a natural cobordism between $(Y,\d_f) \sqcup (Y, n{\Bbb S}^1)$ and $(Y,\d_f)$, which gives, in the same fashion as above, an $HFF^*(Y,n{\Bbb S}^1)$-module structure to $HFF_*(Y,\d_f)$. This goes down to a module structure over the reduced Fukaya-Floer homology $\overline{HFF}{}^*(Y,n{\Bbb S}^1)=HFF^*(Y,n{\Bbb S}^1)/(\hat{\b}^2 -64,\seq{\hat{\psi}}{1}{2g})$. Corollary~\ref{cor:f5.nSS1} (and the description of the eigenvalues of $\overline{HFF}{}^*_g$ given in theorem~\ref{thm:f5.rHFF}) yields that on the summand $R_i \otimes {\Bbb C}[[t]]$ of $\overline{HFF}_*(Y,\d+n{\Bbb S}^1)$, $2\mu(\S)$ must act as $4i\sqrt{-1} -2nt$ if $i$ is even and as $4i+2nt$ if $i$ is odd, $-4\mu(\text{pt})$ as $(-1)^i8$ and $\mu(\gamma_j)$ as zero. Finally we go back under the isomorphism $\v_f : Y \rightarrow Y$. So on the summand $R_i \otimes {\Bbb C}[[t]]$ of~\eqref{eqn:f6.yava2}, the $\mu$-actions are as follows $$ \left\{ \begin{array}{l} 2\mu(\v_f^{-1}(\S))= 2\mu(\S - \text{P.D.}[f]\times{\Bbb S}^1)= \left\{ \begin{array}{ll} 4i\sqrt{-1} -2nt \qquad & i \text{ even}\\ 4i+2nt & i \text{ odd} \end{array} \right. \\ -4\mu(\v_f^{-1}(\text{pt}))= -4\mu(\text{pt})= (-1)^i 8 \vspace{2mm}\\ \mu(\v_f^{-1}(\gamma_j))= \mu(\gamma_j -(f[\gamma_j]) {\Bbb S}^1)=0 \end{array} \right. $$ This implies that $\mu({\Bbb S}^1)$ acts as zero and $\mu(\gamma_j\times{\Bbb S}^1)$ acts as $(-1)^{i} (\gamma_j\cdot \d) t$. The proposition follows. \end{pf} \section{Applications of Fukaya-Floer homology} \label{sec:f7} In this section we are going to give a number of remarkable applications of the knowledge of the structure of the Fukaya-Floer homology groups of $\Sigma \times {{\Bbb S}}^1$. The author expects to extend the techniques to be able to get the general shape of the Donaldson invariants of $4$-manifolds not of simple type with $b^+>1$. 
\subsection{$4$-manifolds are of finite type} In~\cite{Kr} it is conjectured that any $4$-manifold with $b^+>1$ is of finite type. In~\cite{froyshov}, Fr{\o}yshov gives a proof of the finite type condition for any simply connected $4$-manifold by studying the general properties of the map $\mu(\text{pt})$ on the Floer homology of $3$-manifolds. In~\cite{wieczorek}, Wieczorek also proves the finite type condition for simply connected $4$-manifolds by studying configurations of embedded spheres of negative self-intersection. Here we give a proof of the finite type condition for arbitrary $4$-manifolds with $b^+>1$ by using the effective Fukaya-Floer homology $\widetilde{HFF}{}_g^*$. \begin{prop} \label{prop:f7.finite} Let $X$ be a $4$-manifold with $b^+> 1$ and $\S \hookrightarrow X$ an embedded surface of self-intersection zero. Suppose there is $w \in H^2(X;{\Bbb Z})$ with $w \cdot \S \equiv 1 \pmod 2$. Then there exists $n \geq 0$ such that $D^w_X((x^2-4)^n z)=0$ for any $z \in \AA(X)$. \end{prop} \begin{pf} If the genus $g$ of $\S$ is zero then the Donaldson invariants vanish identically, so the statement is true with $n=0$. Suppose then that $g \geq 1$. Then we split $X=X_1 \cup_Y A$, where $A$ is a small tubular neighbourhood of $\S$. Let $D \in H_2(X)$ be such that $D\cdot \S=1$. Represent $D$ by a $2$-cycle intersecting $\S$ transversely in one positive point and put $D=D_1+\Delta$, with $D_1 \subset X_1$ and $\partial D_1 ={\Bbb S}^1$. Then for any $z \in \AA(X_1)$ it is $\phi^w(X_1,z e^{tD_1})\in \widetilde{HFF}{}_g^*$ by definition~\ref{def:f5.eHFF}. By theorem~\ref{thm:f5.eHFF} there is some $n>0$ such that $(\hat{\b}^2-64)^n=0$ in $\widetilde{HFF}{}_g^*$. Hence $$ D^{(w,\Sigma)}_X(z (x^2-4)^n e^{tD})=\langle {1\over 16^n}(\hat{\b}^2-64)^n \phi^w(X_1, z e^{tD_1}), \phi^w(A, e^{t\Delta})\rangle=0. $$ So $D^w_X(z (x^2-4)^n D^m)=0$ for all $m\geq 0$. This is equivalent to the statement. 
\end{pf} \begin{thm} \label{thm:f7.finite} Let $X$ be a $4$-manifold with $b^+>1$. Then $X$ is of $w$-finite type, for any $w \in H^2(X;{\Bbb Z})$. \end{thm} \begin{pf} First note that if $\tilde X=X\# \overline{{\Bbb C}{\Bbb P}}^2$ is the blow-up of $X$ with exceptional divisor $E$, then $X$ is of $w$-finite type if and only if $\tilde X$ is of $w$-finite type if and only if $\tilde X$ is of $(w+E)$-finite type. This is a consequence of the general blow-up formula~\cite{FS-bl}. This means that, after possibly blowing-up, we can suppose $w$ is odd. Then there exists $x \in H_2(X;{\Bbb Z})$ with $w\cdot x \equiv 1 \pmod 2$. As $b^+>0$, there is $y \in H_2(X;{\Bbb Z})$ with $y \cdot y>0$. Consider $x'=x+2n y $ for $n$ large. Then $x'\cdot x'>0$ and $w \cdot x' \equiv 1 \pmod 2$. Represent $x'$ by an embedded surface $\S'$ and blow-up $X$ at $N=x'\cdot x'$ points in $\S'$ to get a $4$-manifold $\tilde X=X\# N\overline{{\Bbb C}{\Bbb P}}^2$ with an embedded surface $\S\subset \tilde X$ such that $\S\cdot \S=0$ and $w \in H^2(X;{\Bbb Z})\subset H^2(\tilde X;{\Bbb Z})$ with $w\cdot\S \equiv 1 \pmod 2$. Then proposition~\ref{prop:f7.finite} implies that $\tilde X$ is of $w$-finite type and hence $X$ is of $w$-finite type. \end{pf} \begin{prop} \label{prop:f7.orderft} Let $X$ be a $4$-manifold with $b^+>1$ and containing an embedded surface $\S$ of genus $g$ and self-intersection zero such that there is $w \in H^2(X;{\Bbb Z})$ with $w\cdot \S\equiv 1\pmod 2$. Then $X$ is of $w$-finite type of order less than or equal to $$ \sum_{i=1}^g \left(\left[{2g-2i \over 4}\right] +1\right), $$ where $[x]$ denotes the integer part of $x$. If furthermore $X$ has $b_1=0$, then $X$ is of $w$-finite type of order less than or equal to \begin{equation} \left[{2g-2 \over 4}\right] +1. \label{eqn:f7.orderft} \end{equation} \end{prop} \begin{pf} The result is obvious for $g=0$. We can thus suppose $g\geq 1$. 
We only need to find the minimum $n \geq 0$ such that $(\hat{\b}^2-64)^n=0$ in $\widetilde{HFF}{}_g^*$ (see definition~\ref{def:f5.eHFF}). Consider the element $$ e_r=(\hat{\b}+(-1)^r 8)(\hat{\b}+(-1)^{r-1} 8)\stackrel{(r)}{\cdots} (\hat{\b}-8), $$ for $1\leq r\leq g$. Using lemma~\ref{lem:f5.redi} we prove by induction that there are polynomials $P_r(\hat{\b},t) \in {\Bbb C}[[t]][\hat{\b}]$ such that $e_rP_r =0\in \overline{{\cal F}}_r$ and $P_r(\pm 8,t)\neq 0$ (indeed $P_r$ collects all the eigenvalues of $\hat{\b}$ different from $\pm 8$). Thus $e_rP_r$ is a multiple of $\hat{\gamma}$ in ${\cal F}_r$. Now the inclusion $\hat{\gamma} {\cal J}_r \subset {\cal J}_{r+1}$ yields that $e_rP_r {\cal J}_r \subset {\cal J}_{r+1}$ and, by recurrence, that $\prod_{r=1}^g e_rP_r \in {\cal J}_g$. We conclude that $\prod_{r=1}^g e_r P_r=0$ in $HFF^*_g$. As $P_r$ are isomorphisms over $\widetilde{HFF}{}_g^*$ by theorem~\ref{thm:f5.eHFF}, we have that $\prod_{r=1}^g e_r=0$ in $\widetilde{HFF}{}_g^*$. This means that we may take $n=\sum_{i=1}^g (\left[{2g-2i \over 4}\right] +1)$ to get $(\hat{\b}^2-64)^n=0$ in $\widetilde{HFF}{}_g^*$. In the case $b_1=0$, we use that $e_gP_g$ is a multiple of $\hat{\gamma}$ in ${\cal F}_g$. As $P_g$ is an isomorphism over $\tilde{{\cal F}}_g$, $e_g$ is a multiple of $\hat{\gamma}$ on $\tilde{{\cal F}}_g$. The result follows easily. \end{pf} \begin{rem} \label{rem:f7.orderft} The bound in~\eqref{eqn:f7.orderft} is in agreement with the conjecture in~\cite{Kr}. Let us check some simple cases in which proposition~\ref{prop:f7.orderft} was already known to hold. For $g=0$, we get that $X$ is of zeroth-order finite type, i.e. that the Donaldson invariants vanish identically. For $g=1$, we get that $X$ is of simple type~\cite{thesis}~\cite{MSz}. For $g=2$ we get that $X$ is of second order finite type~\cite[theorem 5.16]{thesis}. If $b_1=0$ and $g=2$, $X$ is again of simple type. 
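Explicitly, the bound of proposition~\ref{prop:f7.orderft} in these low genus cases is $$ \left[{0\over 4}\right]+1=1 \quad (g=1), \qquad \left(\left[{2\over 4}\right]+1\right)+\left(\left[{0\over 4}\right]+1\right)=2 \quad (g=2), $$ while for $b_1=0$ and $g=2$ the bound~\eqref{eqn:f7.orderft} equals $\left[{2\over 4}\right]+1=1$, in agreement with the simple type statements above. 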
\end{rem} \subsection{Connected sums along surfaces of $4$-manifolds with $b_1=0$} We are going to apply the description of the Fukaya-Floer homology of section~\ref{sec:f6} to the problem of determining the Donaldson invariants of a connected sum along a Riemann surface of $4$-manifolds with $b_1=0$ (but not necessarily of simple type). This has been extensively studied in~\cite{genusg}. Let $\bar X_1$ and $\bar X_2$ be $4$-manifolds with $b_1=0$ and containing embedded Riemann surfaces $\S=\S_i \hookrightarrow \bar X_i$ of the same genus $g \geq 1$, self-intersection zero and representing odd homology classes. Put $X_i$ for the complement of a small open tubular neighbourhood of $\S_i$ in $\bar X_i$ so that $\bar X_i=X_i \cup_Y A$, $X_i$ is a $4$-manifold with boundary $\partial X_i=Y=\Sigma \times {{\Bbb S}}^1$. Let $\phi:\partial X_1 \rightarrow \overline{\partial X_2}$ be an identification (i.e. a bundle isomorphism) and put $X=X(\phi) = X_1 \cup_{\phi} X_2 = \bar X_1 \#_{\S} \bar X_2$ for the connected sum of $\bar X_1$ and $\bar X_2$ along $\S$. As we are only dealing with one identification, we may well suppose that $\phi=\text{id}$. Recall~\cite[remark 8]{genusg} that homology orientations of both $\bar X_i$ induce a homology orientation of $X$. Also choose $w_i \in H^2(X_i;{\Bbb Z})$, $i=1,2$, and $w \in H^2(X;{\Bbb Z})$ such that $w_i \cdot \S_i \equiv 1 \pmod 2$, $w \cdot \S \equiv 1 \pmod 2$, in a compatible way (i.e. the restriction of $w$ to $X_i \subset X$ coincides with the restriction of $w_i$ to $X_i \subset \bar X_i$). Also as $b_1(X_1)=b_1(X_2)=0$ it is $b_1(X)=0$ and $b^+(X)>1$. 
Moreover there is an exact sequence~\cite[subsection 2.3.1]{thesis} \begin{equation} 0 \rightarrow H_2(Y) \rightarrow H_2(X) \rightarrow H_2(X_1, \partial X_1) \oplus H_2(X_2, \partial X_2) \rightarrow H_1(Y) \rightarrow 0 \label{eqn:f7.harto} \end{equation} The following result gives a strong restriction on the invariants of $X$ and complements the results of~\cite{genusg}. It is also in accordance with the case $g=2$ studied in~\cite{genus2}. \begin{thm} \label{thm:f7.harto} The $4$-manifold $X=\bar X_1\#_{\S} \bar X_2$ is of simple type with $b_1=0$ and $b^+>1$. Let ${\Bbb D}_X=e^{Q/2}\sum a_i e^{K_i}$ be its Donaldson series. Then for all basic classes $K_i$, we have $K_i \cdot \S \equiv 2g-2 \pmod 4$. \end{thm} \begin{pf} Fix $D_S \in H_2(X)$ with $D_S|_Y=[{\Bbb S}^1]\in H_1(Y)$. Now for any $\d \in H_1(\S;{\Bbb Z})$ which is primitive we consider any $D\in H_2(X)$ with $D|_Y=\d$. Represent $D+nD_S$ as $D_1 +D_2$, with $D_i\subset X_i$ and $\partial D_1=\d+n{\Bbb S}^1$, where $n\in{\Bbb Z}$. The Fukaya-Floer homology $\overline{HFF}_*(Y,\d+n{\Bbb S}^1)$ has been determined in~\eqref{eqn:f6.i} and in particular $\hat{\b}^2-64=0$. So for any $z_1 \in \AA(X_1)$ and $z_2 \in \AA(X_2)$ $$ D^{(w,\Sigma)}_X (z_1z_2(x^2-4) e^{t(D+nD_S)})=\langle \phi^w(X_1, (x^2-4)z_1 e^{tD_1}),\phi^w(X_2, z_2 e^{tD_2})\rangle= $$ $$ \qquad\qquad =\langle {1\over 16}(\hat{\b}^2-64)\phi^w(X_1, z_1 e^{tD_1}),\phi^w(X_2, z_2 e^{tD_2})\rangle=0. $$ By continuity this implies that $D^{(w,\Sigma)}_X (z (x^2-4) e^{tD})=0$ for any $D\in H_2(X)$. So $X$ is of $w$-simple type, and hence of simple type. Now $X$ has $b_1=0$ and $b^+>1$, so we have ${\Bbb D}_X=e^{Q/2}\sum a_i e^{K_i}$. Also $$ \phi^w(X_1, e^{tD_1}) \in \overline{HFF}_*(Y,\d+n{\Bbb S}^1)_I= \bigoplus_{ -(g-1)\leq i \leq g-1 \atop i \equiv g-1 \pmod 2} R_i \otimes {\Bbb C}[[t]]. 
$$ Put $$ p(\S)=\left\{\begin{array}{ll} (\S^2-(2g-2)^2)(\S^2-(2g-6)^2) \cdots (\S^2-2^2) & \text{$g$ even} \\ (\S^2+(2g-2)^2)(\S^2+(2g-6)^2) \cdots (\S^2+4^2)\,\S \qquad & \text{$g$ odd} \end{array}\right. $$ so that in $\overline{HFF}_*(Y,\d+n{\Bbb S}^1)_I$ it is $p(\a/2+nt)=0$ if $g$ odd and $p(\a/2-nt)=0$ if $g$ even (see proposition~\ref{prop:f6.act}). Suppose for concreteness that $g$ is even (the other case is analogous). Then $$ p({\partial\over \partial s}-nt)D^{(w,\Sigma)}_X (e^{t(D+nD_S)+s\S})=D^{(w,\Sigma)}_X (p(\S-nt) e^{t(D+nD_S)+s\S})= $$ $$ \qquad\qquad =\langle p(\a/2-nt) \phi^w(X_1, e^{tD_1}),\phi^w(X_2,e^{tD_2+s\S})\rangle=0. $$ On the other hand, as $Q(t(D+nD_S)+s\S)=Q(t(D+nD_S)) +2nts$,~\cite[proposition 12]{genus2} implies $$ D^{(w,\Sigma)}_X(e^{t(D+nD_S)+s\S})=e^{Q(t(D+nD_S))/2 +nts}\sum_{K_i\cdot\S \equiv 2\pmod 4} a_{i,w} e^{K_i\cdot (D+nD_S)t +(K_i\cdot \S)s} + $$ $$ \qquad \qquad+e^{-Q(t(D+nD_S))/2 -nts} \sum_{K_i\cdot\S \equiv 0\pmod 4} a_{i,w}e^{\sqrt{-1}K_i\cdot (D+nD_S)t +\sqrt{-1}(K_i\cdot \S)s}, $$ which is a sum (over ${\Bbb C}[[t]]$) of exponentials of the form $e^{nts+2rs}$, $-(g-1) \leq r \leq g-1$, $r \equiv 1 \pmod 2$, and $e^{-nts+2r\sqrt{-1}s}$, $-(g-1) \leq r \leq g-1$, $r \equiv 0 \pmod 2$. So for $D^{(w,\Sigma)}_X(e^{t(D+nD_S)+s\S})$ to be annihilated by the ordinary differential operator $p({\partial\over \partial s}-nt)$, the only exponentials appearing should be $e^{nts+2rs}$, with $-(g-1) \leq r \leq g-1$, $r \equiv 1 \equiv g-1 \pmod 2$. The result follows. \end{pf} From~\cite[corollary 13]{genusg}, the sum of the coefficients of all basic classes $K_i$ of $X$ with $K_i\cdot \S=2r$ is zero whenever $|r|<g-1$. It is natural to expect that actually these basic classes do not appear. Theorem~\ref{thm:f7.harto} shows that this is in fact true for $r \not{\!\! \equiv} g-1 \pmod 2$. 
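For the lowest genera the polynomial $p(\S)$ used in the proof above takes a very simple form, namely $$ p(\S)=\S^2-2^2 \quad (g=2), \qquad p(\S)=(\S^2+4^2)\,\S \quad (g=3), $$ and its roots are precisely the values by which $\a/2$ acts on the summands $R_i$ of $\overline{HFF}_*(Y,\d+n{\Bbb S}^1)_I$, that is, $2i$ for $i$ odd and $2i\sqrt{-1}$ for $i$ even, with $|i|\leq g-1$ and $i\equiv g-1 \pmod 2$. 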
\subsection{Donaldson invariants of $\S_g \times \S_h$} Our final intention is to give the Donaldson invariants of the $4$-manifold which is given as the product of two Riemann surfaces of genus $g\geq 1$ and $h\geq 1$. Let $S=\S_g \times \S_h$. Then $b^+=1+ 2 g h >2$, so the Donaldson invariants are well-defined. Recall that a $4$-manifold $X$ is of $w$-strong simple type if $D^w_X(\gamma z)=0$ for any $\gamma \in H_1(X)$, $z \in \AA(X)$, and also $D^w_X((x^2-4) z)=0$ for any $z \in \AA(X)$. The structure theorem of~\cite{KM} is also valid in this case (see~\cite{otro} for a proof using Fukaya-Floer homology groups). \begin{prop}[\cite{KM}~\cite{otro}] \label{prop:f7.KM} Let $X$ be a manifold of $w$-strong simple type for some $w$ and $b^+ >1$. Then $X$ is of strong simple type and we have ${\Bbb D}_X^{w}= e^{Q /2} \sum (-1)^{{K_i \cdot {w} +{w}^2} \over 2} a_i \, e^{K_i}$, for finitely many $K_i \in H^2(X;{\Bbb Z})$ (called basic classes) and rational numbers $a_i$ (the collection is empty when the invariants all vanish). These classes are lifts to integral cohomology of ${w}_2(X)$. Moreover, for any embedded surface $S \hookrightarrow X$ of genus $g$, with $S^2 \geq 0$ and representing a non-torsion homology class, one has $2g-2 \geq S^2 +|K_i \cdot S|$. \end{prop} Now suppose we are in the following situation: $\bar X_1$ and $\bar X_2$ are $4$-manifolds containing embedded Riemann surfaces $\S=\S_i \hookrightarrow \bar X_i$ of the same genus $g \geq 1$, self-intersection zero and representing odd elements in homology. Consider $X=\bar X_1 \#_{\S} \bar X_2$, the connected sum along $\S$ (for some identification). Suppose that $\bar X_i$ are of strong simple type and moreover that there is an injective map \begin{eqnarray*} H_2(X) &\rightarrow& H_2(\bar X_1) \oplus H_2(\bar X_2)\\ D &\mapsto & (\bar D_1, \bar D_2) \end{eqnarray*} satisfying $D^2=\bar D_1^2+\bar D_2^2$ and $D|_{X_i}=\bar D_i|_{X_i}$, $i=1,2$. 
Then \begin{prop} \label{prop:f7.nolanum} In the situation above $X$ is of strong simple type. Write ${\Bbb D}_{X_1}=e^{Q/2}\sum a_j e^{K_j}$ and\/ ${\Bbb D}_{X_2}=e^{Q/2}\sum b_k e^{L_k}$ for the Donaldson series for $X_1$ and $X_2$, respectively. If $g\geq 2$ then $$ {\Bbb D}_X(e^{tD}) = e^{Q(tD)/2}(\hspace{-5mm} \sum_{K_j \cdot \S=L_k \cdot \S = 2g-2}\hspace{-5mm} 2^{7g-9} a_{j}b_{k} \, e^{(K_j \cdot \bar D_1 + L_k \cdot \bar D_2 +2\S\cdot D)t} + $$ $$ + \hspace{-5mm}\sum_{K_j \cdot \S=L_k \cdot \S= -(2g-2)} \hspace{-5mm} (-1)^{g-1} \,2^{7g-9} a_{j}b_{k} \, e^{(K_j \cdot \bar D_1 + L_k \cdot \bar D_2-2\S\cdot D)t}). $$ If $g=1$ then $$ {\Bbb D}_X(e^{tD}) = e^{Q(tD)/2} \sum_{K_j,L_k} a_{j}b_{k} \, e^{(K_j \cdot \bar D_1 + L_k \cdot \bar D_2)t}( \sinh (\S\cdot D)t)^2. $$ \end{prop} \begin{pf} Let us see first that $X$ is of strong simple type. Choose $w_i \in H^2(X_i;{\Bbb Z})$, $i=1,2$, and $w \in H^2(X;{\Bbb Z})$ such that $w_i \cdot \S_i \equiv 1 \pmod 2$, $w \cdot \S \equiv 1 \pmod 2$, in a compatible way. For any $D \in H_2(X)$ with $D\cdot \S=1$, put $D= D_1+D_2$ with $\bar D_i=D_i+\Delta \subset X_i$. As $\bar X_1$ is of strong simple type, $D^{(w,\Sigma)}_{\bar X_1}((x^2-4)e^{t\bar D_1} z_s)=0$, for any $s \in {\cal S}$, so $\phi^w(X_1,e^{tD_1})$ is killed by $\hat{\b}^2-64$. Analogously $\phi^w(X_1, e^{tD_1})$ is killed by $\psi_i$, for $1\leq i \leq 2g$. Therefore $D^{(w,\Sigma)}_X((x^2-4)e^{tD})=\langle \phi^w(X_1,(x^2-4)e^{tD_1}),\phi^w(X_2,e^{tD_2})\rangle=0$. Analogously we see $D^{(w,\Sigma)}_X(\gamma_i e^{tD})=0$, $1\leq i\leq 2g$. We leave to the reader the other $\gamma \in H_1(X)$ not in the image of $H_1(\S)\rightarrow H_1(X)$. For the second assertion, suppose now $g\geq 2$. Then $\phi^w(X_1, e^{tD_1})$ lives in the reduced Fukaya-Floer homology $\overline{HFF}{}^*_g$ of subsection~\ref{subsec:rHFF}, which is found in theorem~\ref{thm:f5.rHFF} to be isomorphic to ${\Bbb C}^{2g-1}[[t]]$. 
Actually it is the space ${\Bbb C}^{2g-1}[[t]] \subset V[[t]]$ of~\cite[page 794]{genusg}. In~\cite{genusg} the intersection pairing restricted to ${\Bbb C}^{2g-1}[[t]]$ is computed and then $$ D^{(w,\Sigma)}_X(e^{tD})=\langle \phi^w(X_1,e^{tD_1}),\phi^w(X_2,e^{tD_2})\rangle $$ is found. So the arguments in~\cite{genusg} carry over to our situation and the result in~\cite[theorem 9]{genusg} is true for $X$. The statement follows. The result for $g=1$ is in~\cite[theorem 4.13]{thesis} and~\cite{MSz}. \end{pf} \begin{thm} Let $S=E \times F$ be the product of two Riemann surfaces of genus $g,h\geq 1$, i.e. $E=\S_g$ and $F=\S_h$. Arrange so that $h \leq g$. Then $S$ is of strong simple type and the Donaldson series are as follows. \begin{equation*} \left\{ \begin{array}{ll} {\Bbb D}_S= 4^g e^{Q/2} \sinh^{2g-2} F \qquad & \text{if $h=1$} \\ {\Bbb D}_S= 2^{7(g-1)(h-1)+3} e^{Q/2} \sinh K & \text{if $g,h>1$, both even} \\ {\Bbb D}_S= 2^{7(g-1)(h-1)+3} e^{Q/2} \cosh K & \text{if $g,h>1$, at least one odd} \end{array} \right. \end{equation*} where $K=K_S= (2g-2) F +(2h-2) E$ is the canonical class. \end{thm} \begin{pf} The result is a simple consequence of proposition~\ref{prop:f7.nolanum} noting that $S=\S_1 \times\S_1$ is of strong simple type (we leave the proof of this to the reader using the description of $HFF^*_1$) and also making use of the Donaldson series ${\Bbb D}_S= 4 e^{Q/2}$ given in~\cite{Stipsicz}. \end{pf} {\em Acknowledgements.\/} I am grateful to the Mathematics Department in Universidad de M\'alaga for their hospitality and support. Conversations with Marcos Mari\~no, Tom Mrowka, Cliff Taubes, Bernd Siebert, Gang Tian, Dietmar Salamon and Rogier Brussee have been very helpful. Special thanks to Simon Donaldson and Ron Stern for their encouragement.
\section{Preliminaries} Let $D$ be an open domain in $\mathbb{R}^d$ with its boundary $\partial D$. We denote by $C_0^\infty(D)$ the space of all infinitely differentiable real functions on $D$ with compact support. Consider the {\it Schr\"odinger operator} ${\cal A}=-\frac{\Delta}{2}+V$ acting on the space $C_0^\infty(D)$, where $\Delta$ is the Laplace operator and $V:\mathbb{R}^d\longrightarrow\mathbb{R}$ is a Borel measurable potential.\\ The {\it essential self-adjointness} of the Schr\"odinger operator in $L^2\left(\mathbb{R}^d,dx\right)$, equivalent to the unique solvability of the Schr\"odinger equation in $L^2\left(\mathbb{R}^d,dx\right)$, has been studied by {\sc Kato} \cite{kato'84}, {\sc Reed} and {\sc Simon} \cite{reed-simon'75}, {\sc Simon} \cite{simon'82} and others because of its importance in Quantum Mechanics. In the case where $V$ is bounded, it is not difficult to prove that $\left({\cal A},C_0^\infty(\mathbb{R}^d)\right)$ is essentially self-adjoint in $L^2(\mathbb{R}^d,dx)$. But in almost all interesting situations in quantum physics, the potential $V$ is unbounded. In this situation we need to consider the {\it Kato class}, used first by {\sc Schechter} \cite{schechter'71} and {\sc Kato} \cite{kato'72}. A real-valued measurable function $V$ is said to be in the {\it Kato class} ${\cal{K}}^d$ on $\mathbb{R}^d$ if $$ \lim_{\delta\searrow 0}\sup_{x\in\mathbb{R}^d}\int\limits_{|x-y|\leq\delta}\left|g(x-y)V(y)\right|dy=0 $$ where $$ g(x)=\left\{\begin{array}{lcl} \frac{1}{|x|^{d-2}}&,&\mbox{ if }d\geq 3\\ \ln\frac{1}{|x|}&,&\mbox{ if }d=2\\ 1&,&\mbox{ if }d=1. \end{array} \right. 
$$ If $V\in L^2_{loc}\left(\mathbb{R}^d,dx\right)$ is such that $V^{-}$ belongs to the Kato class on $\mathbb{R}^d$, it is well known that the Schr\"odinger operator $({\cal A},C_0^\infty(\mathbb{R}^d))$ is essentially self-adjoint and the unique solution in $L^2$ of the heat equation is given by the famous {\it Feynman-Kac semigroup} $\left\{P^V_t\right\}_{t\geq 0}$ $$ P^V_tf(x):=\mathbb{E}^xf(B_t)\exp\left(-\int\limits_0^t\!V(B_s)\:ds\right) $$ where $f$ is a nonnegative measurable function, $\left(B_t\right)_{t\geq 0}$ is the Brownian motion in $\mathbb{R}^d$ defined on some filtered probability space $\left(\Omega,{\cal F},\left({\cal F}_t\right)_{t\geq 0},\left(\mathbb{P}_x\right)_{x\in\mathbb{R}^d}\right)$ with $\mathbb{P}_x\left(B_0=x\right)=1$ for any initial point $x\in\mathbb{R}^d$ and $\mathbb{E}^x$ means the expectation with respect to $\mathbb{P}_x$.\\ In the case where $D$ is a strict sub-domain, sharp results are known only when $d=1$ or, in the multidimensional case, only in some special situations.\\ As a consequence of an intuitive probabilistic interpretation of uniqueness, {\sc Wu} \cite{wu'98} introduced and studied the uniqueness of Schr\"odinger operators in $L^1\left(D,dx\right)$. One says that $\left({\cal A}, C^\infty_0\left(D\right)\right)$ is $L^1\left(D,dx\right)$-unique if $\cal A$ is closable and its closure is the generator of some $C_0$-semigroup on $L^1\left(D,dx\right)$. This uniqueness notion was also studied in {\sc Arendt} \cite{arendt'86}, {\sc Eberle} \cite{eberle'97}, {\sc Djellout} \cite{djellout'97}, {\sc R\"ockner} \cite{rockner'98}, {\sc Wu} \cite{wu'98} and \cite{wu'99} and others in the Banach space setting. \section{$L^\infty(D,dx)$-uniqueness of Schr\"odinger operators} Our purpose is to study the $L^\infty\left(D,dx\right)$-uniqueness of the Schr\"odinger operator $\left({\cal A}, C^\infty_0\left(D\right)\right)$ in the case where $D$ is a strict sub-domain of $\mathbb{R}^d$. 
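Before proceeding, let us illustrate the Kato condition recalled in the previous section with a standard family of examples. For $d\geq 3$ and $0<\alpha<2$ the inverse-power potential $V(y)=|y|^{-\alpha}$ belongs to ${\cal{K}}^d$, since a direct computation gives $$ \sup_{x\in\mathbb{R}^d}\int\limits_{|x-y|\leq\delta}\frac{dy}{|x-y|^{d-2}|y|^{\alpha}}\leq C_{d,\alpha}\,\delta^{2-\alpha}\longrightarrow 0 \qquad (\delta\searrow 0). $$ In particular the Coulomb potential $V(y)=1/|y|$ in dimension $d=3$ is in the Kato class, whereas $V(y)=|y|^{-2}$ is not, as the integral above already diverges at $x=0$. 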
But how can we define uniqueness in $L^\infty(D,dx)$? One can prove rather easily that the {\it killed Feynman-Kac semigroup} $\left\{P^{D,V}_t\right\}_{t\geq 0}$ $$ P^{D,V}_tf(x):=\mathbb{E}^x1_{[t<\tau_D]}f(B_t)\exp\left(-\int\limits_0^t\!V(B_s)\:ds\right) $$ where $\tau_D:=\inf\{t>0\::\:B_t\notin D\}$ is the first exit time from $D$, is a semigroup of bounded operators on $L^p(D,dx)$ for any $1\leq p\leq\infty$, which is strongly continuous for $1\leq p<\infty$, but never strongly continuous in $(L^\infty(D,dx),\|\:.\:\|_\infty)$. Moreover, a well known result of {\sc Lotz} \cite[Theorem 3.6, p. 57]{lotz'86} says that the generator of any strongly continuous semigroup on $(L^\infty(D,dx),\|\:.\:\|_\infty)$ must be bounded.\\ To obtain a correct definition of $L^\infty(D,dx)$-uniqueness, we should introduce a weaker topology on $L^\infty(D,dx)$ such that $\left\{P^{D,V}_t\right\}_{t\geq 0}$ becomes a strongly continuous semigroup with respect to this new topology. Remark that the natural topology for studying $C_0$-semigroups on $L^\infty\left(D,dx\right)$, first used by {\sc Wu} and {\sc Zhang} \cite{wu-zhang'06}, is {\it the topology of uniform convergence on compact subsets of $L^1\left(D,dx\right)$}, denoted by ${\cal C}\left(L^\infty,L^1\right)$. 
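To make the killed semigroup concrete, here is a small numerical illustration (not used in the sequel) for $d=1$, $D=(0,1)$ and $V=0$, where $P^{D,0}_t\mathbf{1}(x)=\mathbb{P}_x\left(\tau_D>t\right)$ is the survival probability of the killed Brownian motion; for moderate $t$ the Dirichlet eigenfunction expansion gives $\mathbb{P}_{1/2}(\tau_D>t)\approx\frac{4}{\pi}e^{-\pi^2 t/2}$. All parameters below are arbitrary.

```python
import math
import random

def survival_mc(x0=0.5, t=0.5, n_paths=5_000, n_steps=500, seed=0):
    """Monte Carlo estimate of P^{D,0}_t 1(x0) = P_{x0}(tau_D > t) for
    Brownian motion killed on leaving D = (0, 1), using Euler steps.
    Discrete monitoring misses excursions between grid points, so the
    estimate is slightly biased upward."""
    rng = random.Random(seed)
    dt = t / n_steps
    sqdt = math.sqrt(dt)
    alive = 0
    for _ in range(n_paths):
        b = x0
        for _ in range(n_steps):
            b += sqdt * rng.gauss(0.0, 1.0)
            if b <= 0.0 or b >= 1.0:
                break          # path killed at the boundary
        else:
            alive += 1         # survived the whole horizon [0, t]
    return alive / n_paths

# leading Dirichlet mode: (4/pi) * exp(-pi**2 * t / 2) with t = 0.5,
# roughly 0.108; the Monte Carlo estimate lands close to this value
print(survival_mc())
```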
More precisely, if we denote $$ \left\langle f,g\right\rangle:=\int\limits_D\!f(x)g(x)dx $$ for all $f\in L^1(D,dx)$ and $g\in L^\infty(D,dx)$, then for an arbitrary point $g_0\in L^\infty(D,dx)$, a basis of neighborhoods with respect to ${\cal C}\left(L^\infty,L^1\right)$ is given by $$ N(g_0;K,\varepsilon):=\left\{g\in L^\infty(D,dx)\::\:\sup_{f\in K}\left|\left\langle f,g\right\rangle-\left\langle f,g_0\right\rangle\right|<\varepsilon\right\} $$ where $K$ runs over all compact subsets of $L^1(D,dx)$ and $\varepsilon>0$.\\ Remark that $(L^\infty(D,dx),{\cal C}\left(L^\infty,L^1\right))$ is a locally convex space and if $\left\{T(t)\right\}_{t\geq 0}\;$ is a $C_0$-semigroup on $L^1\left(D,dx\right)$ with generator $\cal L$, by \cite[Theorem 1.4, p. 564]{wu-zhang'06} it follows that $\left\{T^{*}(t)\right\}_{t\geq 0}\;$ is a $C_0$-semigroup on $\left(L^\infty(D,dx),{\cal C}\left(L^\infty,L^1\right)\right)$ with generator ${\cal L}^{*}$.\\ Now we can introduce the uniqueness notion in $L^\infty(D,dx)$. Let $\bf A$ be a linear operator on $L^\infty(D,dx)$ with domain $\cal D$ which is assumed to be dense in $L^\infty(D,dx)$ with respect to the topology ${\cal C}\left(L^\infty,L^1\right)$. \begin{defn} The operator $\bf A$ is said to be a pre-generator on $L^\infty(D,dx)$ if there exists some $C_0$-semigroup on $\left(L^\infty(D,dx),{\cal C}\left(L^\infty,L^1\right)\right)$ such that its generator $\cal L$ extends $\bf A$. We say that $\bf A$ is $L^\infty(D,dx)$-unique if $\bf A$ is closable and its closure with respect to the topology ${\cal C}\left(L^\infty,L^1\right)$ is the generator of some $C_0$-semigroup on $\left(L^\infty(D,dx),{\cal C}\left(L^\infty,L^1\right)\right)$. \end{defn} The main result of this paper is \begin{thm}\label{1.2} Let $V\in L^\infty_{loc}\left(D,dx\right)$ be such that $V^{-}\in{\cal{K}}^d$. Then the Schr\"odinger operator $\left({\cal A}, C^\infty_0\left(D\right)\right)$ is $(L^\infty(D,dx),{\cal C}\left(L^\infty,L^1\right))$-unique. 
\end{thm} {\bf Proof.\ } First, we remark that the pre-generator existence assumption of \cite[Theorem 2.1, p. 570]{wu-zhang'06} is satisfied. Indeed, consider the killed Feynman-Kac semigroup $\left\{P^{D,V}_t\right\}_{t\geq 0}$ on $L^\infty\left(D,dx\right)$ and for any $p\in[1,\infty]$ define $$ \left\|P^{D,V}_t\right\|_p:=\sup_{\stackrel{f\geq 0}{\|f\|_p\leq 1}}\left\|P^{D,V}_tf\right\|_p. $$ The next lemma shows that $\cal A$ is a pre-generator on $(L^\infty(D,dx),{\cal C}\left(L^\infty,L^1\right))$, i.e. $\cal A$ is contained in the generator ${\cal L}^{D,V}_{(\infty)}$ of the killed Feynman-Kac semigroup $\left\{P^{D,V}_t\right\}_{t\geq 0}$. \begin{lem}\label{3.2} Let $V\in L^\infty_{loc}\left(D,dx\right)$ be such that $V^{-}\in{\cal{K}}^d$ and let $\left\{P^{D,V}_t\right\}_{t\geq 0}$ be the killed Feynman-Kac semigroup on $L^\infty\left(D,dx\right)$. If $\left\|P^{D,V}_t\right\|_\infty$ is bounded over compact intervals, then $\left\{P^{D,V}_t\right\}_{t\geq 0}$ is a $C_0$-semigroup on $\left(L^\infty(D,dx),{\cal C}\left(L^\infty,L^1\right)\right)$ and its generator ${\cal L}^{D,V}_{(\infty)}$ is an extension of $\left({\cal A},C^\infty_0(D)\right)$. \end{lem} {\bf Proof.\ } The proof is close to that of \cite[Lemma 2.3, p. 288]{wu'98}. Let $\left\{P^{D,V}_t\right\}_{t\geq 0}$ be the killed Feynman-Kac semigroup on $L^\infty(D,dx)$. Remark that $$ \left|P^{D,V}_tf(x)\right|\leq P^{D,V}_t|f|(x)\leq P^{D,-V^{-}}_t|f|(x)\leq P_t^{-V^{-}}|f|(x) $$ from which we deduce that $$ \sup_{0\leq t\leq 1}\left\|P^{D,V}_t\right\|_{\infty}\leq\sup_{0\leq t\leq 1}\left\|P^{-V^{-}}_t \right\|_\infty<\infty $$ since $\left\|P^{-V^{-}}_t \right\|_\infty$ is uniformly bounded by the assumption that $V^{-}\in{\cal{K}}^d$ (see \cite{aizenman-simon'82}). Since $\left\|P^{D,V}_t\right\|_1=\left\|P^{D,V}_t\right\|_\infty$ is bounded for $t$ in compact intervals of $[0,\infty)$, using \cite[Lemma 2.3, p. 
59]{wu'01} it follows that $\left\{P^{D,V}_t\right\}_{t\geq 0}$ is a $C_0$-semigroup on $L^1(D,dx)$. By \cite[Theorem 1.4, p. 564]{wu-zhang'06} we find that $\left\{P^{D,V}_t\right\}_{t\geq 0}$ is a $C_0$-semigroup on $L^\infty(D,dx)$ with respect to the topology ${\cal C}(L^\infty,L^1)$. We have only to show that its generator ${\cal L}^{D,V}_{(\infty)}$ is an extension of $\left({\cal A},C^\infty_0(D)\right)$.\\ {\bf Step 1: the case $V\geq 0$.} For $n\in\mathbb{N}$ we consider $V_n:=V\wedge n$. By a bounded perturbation theorem (see \cite[Theorem 3.1, p. 68]{davies'80}) it follows that $$ {\cal A}_n=-\frac{\Delta}{2}+V_n $$ is the generator of a $C_0$-semigroup $\left\{P^{D,V_n}_t\right\}_{t\geq 0}$ on $\left(L^\infty(D,dx),{\cal C}\left(L^\infty,L^1\right)\right)$. So for any $f\in C_0^\infty(D)$ we have $$ P_t^{D,V_n}f-f=\int\limits_{0}^{t}P_s^{D,V_n}{\cal A}_nf\;ds\quad,\quad\forall t\geq 0. $$ Letting $n\rightarrow\infty$, we have pointwise on $D$: $$ P_t^{D,V_n}f\rightarrow P_t^{D,V}f $$ and $$ P_t^{D,V_n}{\cal A}_nf\rightarrow P_t^{D,V}{\cal A}f\quad. $$ Moreover, for any $x\in D$ we have: $$ \left|P_t^{D,V_n}f(x)\right|\leq P_t^{D,V}|f|(x) $$ and $$ \left|P_t^{D,V_n}{\cal A}_nf(x)\right|\leq P_t^{D,V}\left(\left|\frac{\Delta}{2}f\right|+|Vf|\right)(x)\quad. $$ Hence by dominated convergence we derive that $$ P_t^{D,V}f-f=\int\limits_{0}^{t}\!P_s^{D,V}{\cal A}fds\quad,\quad\forall t\geq 0. $$ It follows that $f$ is in the domain of the generator ${\cal L}^{D,V}_{(\infty)}$ of the $C_0$-semigroup $\left\{P^{D,V}_t\right\}_{t\geq 0}$.\\ {\bf Step 2: the general case.} Setting $V^n=V\vee(-n)$, for $n\in\mathbb{N}$, and denoting by $$ {\cal A}^n=-\frac{\Delta}{2}+V^n $$ the generator of the $C_0$-semigroup $\left\{P^{D,V^n}_t\right\}_{t\geq 0}$ on $\left(L^\infty(D,dx),{\cal C}\left(L^\infty,L^1\right)\right)$, we have by Step 1 $$ P_t^{D,V^n}f-f=\int\limits_0^t\!P_s^{D,V^n}{\cal A}^nfds\quad,\quad t\geq 0. 
$$ Notice that $$ \left|P_s^{D,V^n}{\cal A}^nf(x)\right|\leq P_s^{D,V}\left(\left|\frac{\Delta}{2}f\right|+|Vf|\right)(x) $$ which is uniformly bounded in $L^\infty(D,dx)$ over $[0,t]$. By Fubini's theorem we have $$ \int\limits_0^t\!P_s^{D,V}\left(\left|\frac{\Delta}{2}f\right|+|Vf|\right)(x)ds<\infty\mbox{ dx-a.e. on }D. $$ On the other hand, for any fixed $x\in D$ such that $$ P_s^{D,V}\left(\left|\frac{\Delta}{2}f\right|+|Vf|\right)(x)<\infty $$ dominated convergence yields $$ P_s^{D,V^n}\left(-\frac{\Delta}{2}+V^n\right)f(x)\longrightarrow P_s^{D,V}\left(-\frac{\Delta}{2}+V\right)f(x)\quad. $$ Thus by dominated convergence we have dx-a.e. on $D$, $$ \int\limits_0^t\!P_s^{D,V^n}\left(-\frac{\Delta}{2}+V^n\right)fds\rightarrow\int\limits_0^t\! P_s^{D,V}\left(-\frac{\Delta}{2}+V\right)fds\quad,\quad\forall t\geq 0. $$ The same argument shows that $$ P_t^{D,V^n}f-f\rightarrow P_t^{D,V}f-f\quad. $$ Consequently, $$ P_t^{D,V}f-f=\int\limits_0^t\!P_s^{D,V}\left(-\frac{\Delta}{2}+V\right)fds\quad,\quad\forall t\geq 0. $$ Hence $f$ is in the domain of the generator ${\cal L}^{D,V}_{(\infty)}$ of the semigroup $\left\{P^{D,V}_t\right\}_{t\geq 0}$. So ${\cal L}^{D,V}_{(\infty)}$ is an extension of the operator $\left({\cal A},C^\infty_0(D)\right)$ and the lemma is proved.\\ Next we prove the $L^\infty(D,dx)$-uniqueness of $\cal A$. By \cite[Theorem 2.1, p. 570]{wu-zhang'06}, we deduce that the operator $\left({\cal A}, C^\infty_0\left(D\right)\right)$ is $L^\infty\left(D,dx\right)$-unique if and only if for some $\lambda$, the range $(\lambda I-{\cal A})\left(C^\infty_0\left(D\right)\right)$ is dense in $\left(L^\infty(D,dx),{\cal C}\left(L^\infty,L^1\right)\right)$. 
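Before completing the proof, a purely numerical aside on the bottom of the spectrum $\lambda(D,V)$ that enters at the end of the argument: for $D=(0,1)$ and $V=0$ one has $\lambda(D,0)=\pi^2/2$, the lowest Dirichlet eigenvalue of $-\frac{\Delta}{2}$, and a finite-difference inverse power iteration recovers this value. The discretisation size and iteration count below are arbitrary.

```python
import math

def lowest_dirichlet_eigenvalue(n=400, iters=100):
    """Smallest eigenvalue of the standard finite-difference matrix for
    -Delta/2 on D = (0, 1) with Dirichlet boundary conditions, computed
    by inverse power iteration (tridiagonal solves via the Thomas
    algorithm). Expected limit as n grows: pi**2 / 2."""
    h = 1.0 / (n + 1)
    diag = 1.0 / h ** 2     # main diagonal of -(1/2) * discrete Laplacian
    off = -0.5 / h ** 2     # sub- and super-diagonal entries
    v = [1.0] * n           # generic starting vector
    for _ in range(iters):
        # forward sweep of the Thomas algorithm for A w = v
        c = [0.0] * n
        d = [0.0] * n
        c[0], d[0] = off / diag, v[0] / diag
        for i in range(1, n):
            m = diag - off * c[i - 1]
            c[i] = off / m
            d[i] = (v[i] - off * d[i - 1]) / m
        # back substitution
        w = [0.0] * n
        w[-1] = d[-1]
        for i in range(n - 2, -1, -1):
            w[i] = d[i] - c[i] * w[i + 1]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    # Rayleigh quotient <v, A v> approximates the smallest eigenvalue
    av = [diag * v[i]
          + off * (v[i - 1] if i > 0 else 0.0)
          + off * (v[i + 1] if i < n - 1 else 0.0) for i in range(n)]
    return sum(a * b for a, b in zip(av, v))

print(lowest_dirichlet_eigenvalue())  # close to pi**2 / 2, about 4.9348
```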
It is enough to show that for any $h\in L^1\left(D,dx\right)$ which satisfies the equality $$ \left\langle h,(\lambda I-{\cal A})f\right\rangle=0\quad,\quad\forall f\in C^\infty_0\left(D\right) $$ it follows that $h=0$.\\ Let $h\in L^1\left(D,dx\right)$ be such that for some $\lambda$ one has $$ \left\langle h,(\lambda I-{\cal A})f\right\rangle=0\quad,\quad\forall f\in C^\infty_0\left(D\right) $$ or $$ (\lambda I-{\cal A})h=0\quad\mbox{ in the sense of distributions.} $$ Since $V\in L^\infty_{loc}\left(D,dx\right)$, by applying \cite[Theorem 1.5, p. 217]{aizenman-simon'82} we can see that $h$ is a continuous function. By the mean value theorem due to {\sc Aizenman} and {\sc Simon} \cite[Corollary 3.9, p. 231]{aizenman-simon'82}, there exists some constant $C>0$ such that $$ |h(x)|\leq C\int\limits_{|x-y|\leq 1}\!|h(y)|\:dy\quad,\quad\forall x\in D. $$ As $V^{-}\in{\cal{K}}^d$, $C$ may be chosen independently of $x\in D$. Since $h\in L^1(D,dx)$, it follows that $h$ is bounded and, consequently, $h\in L^2(D,dx)$. Now by the $L^2(D,dx)$-uniqueness of $\left({\cal A},C_0^\infty(D)\right)$ and \cite[Theorem 2.1, p. 570]{wu-zhang'06}, $h$ belongs to the domain of the generator ${\cal L}^{D,V}_{(2)}$ of $\left\{P^{D,V}_t\right\}_{t\geq 0}$ on $L^2$ and $$ {\cal L}^{D,V}_{(2)}h=-\left(-\frac{\Delta}{2}+V\right)h=-\lambda h\quad. $$ Hence $$ P^{D,V}_th=e^{-\lambda t}h\quad,\quad\forall t\geq 0. $$ Let $$ \lambda(D,V):=\inf_{f\in C_0^\infty(D)}\left\{\frac{1}{2}\int\limits_D\!|\nabla f|^2dx+\int\limits_D\!Vf^2dx\::\:\|f\|_2\leq 1\right\} $$ be the lowest energy of the Schr\"odinger operator. If we take $\lambda<\lambda(D,V)$, then the last equality is possible only for $h=0$, because $\left\|P^{D,V}_t\right\|_2=e^{-\lambda(D,V)t}$ (see {\sc Albeverio} and {\sc Ma} \cite[Theorem 4.1, p. 
343]{albeverio-ma'91}).\rule{0.5em}{0.5em} \begin{rem} \em Intuitively, to have $L^1\left(D,dx\right)$-uniqueness, the repulsive potential $V^{+}$ should grow rapidly to infinity near $\partial D$, meaning that $$ (C_1)\quad\quad\quad\mathbb{P}_x\left(\int\limits_0^{\tau_D}\!V^{+}(B_s)\:ds+\tau_D=\infty\right)=1\quad\mbox{for a.e. }x\in D $$ so that a particle starting inside $D$ cannot reach the boundary $\partial D$ (see \cite [Theorem 1.1, p. 279]{wu'98}).\\ By analogy with the uniqueness in $L^1(D,dx)$, the $L^\infty(D,dx)$-uniqueness of $\left({\cal A},C_0^\infty(D)\right)$ means that a particle starting from the boundary $\partial D$ cannot enter $D$. Unfortunately, here we have a problem: $L^\infty(D,dx)$-uniqueness of $\cal A$ is equivalent to the existence of a unique boundary condition for ${\cal A}^{*}$. It is well known that there are many boundary conditions (Dirichlet, Neumann, etc.). Remark that in the case of $L^1(D,dx)$-uniqueness of $\cal A$, the effect of the boundary condition for ${\cal A}^{*}$ is eliminated by the condition $(C_1)$ on the potential. To find such a condition in the case of $L^\infty(D,dx)$-uniqueness is very difficult. For the moment we can present an interesting result from \cite{wu-zhang'06}: \end{rem} \begin{prop} Let $D$ be a nonempty open domain of $\mathbb{R}^d$. If the Laplacian $(\Delta,C_0^\infty(D))$ is $L^\infty(D,dx)$-unique, then $D^{C}=\emptyset$, i.e. $D=\mathbb{R}^d$. \end{prop} For the heat diffusion equation we can formulate the following result. \begin{cor} If $V\in L^\infty_{loc}(\mathbb{R}^d,dx)$ and $V^{-}\in{\cal K}^d$, then for every $h\in L^1(\mathbb{R}^d,dx)$ the heat diffusion equation $$ \left\lbrace\begin{array}{l} \partial_tu(t,x)=\left(\frac{\Delta}{2}-V\right)u(t,x)\\ u(0,x)=h(x) \end{array} \right. $$ has a unique weak solution in $L^1(\mathbb{R}^d,dx)$, given by $u(t,x)=P^{V}_th(x)$. \end{cor} {\bf Proof.\ } The assertion follows by \cite[Theorem 2.1, p. 
570]{wu-zhang'06} and Theorem \ref{1.2}. \rule{0.5em}{0.5em} \vspace{1cm} \noindent {\bf Acknowledgements.} I am grateful to Professor Liming WU for his kind invitation to Wuhan University, China, during May-June 2006, where this result was reported, and for his valuable help and support. I also thank the anonymous reviewer for helpful suggestions. \bibliographystyle{plain}
\section{Introduction} The two-user discrete memoryless state-dependent multiple-access channel (MAC) models a scenario in which two encoders transmit independent messages to a single receiver via a MAC whose channel law is governed by the pair of encoders' inputs and by an i.i.d. state random variable $S$. In the cooperative state-dependent MAC model it is further assumed that Message~1 is shared by both encoders whereas Message~2 is known only to Encoder~2 -- the cognitive transmitter. The capacity of the cooperative state-dependent MAC where the realization of the state sequence is known non-causally to the cognitive encoder has been derived by Somekh-Baruch~\emph{et~al.} in \cite{anelia}. In this work we dispense with the assumption that Message~1 is shared a-priori by both encoders. Instead, we study a ``more realistic'' model in which Encoder~2 ``cribs'' and learns the sequence of channel inputs emitted by Encoder~1 before generating its next channel input. Specifically, we study both the case where Encoder~2 cribs strictly causally -- i.e., its current channel input depends on its message as well as the past inputs of Encoder~1 (in the sense of \cite[Situation~2]{frans}) -- and the case where Encoder~2 cribs causally -- i.e., its current channel input depends on its message as well as all past inputs and the current input of Encoder~1 (in the sense of \cite[Situation~3]{frans}). The model is depicted in Figure~1. For both cases -- strictly causal cribbing as well as causal cribbing -- we provide a complete characterization of the capacity region. The paper is organized as follows. In Section~II we provide a formal definition of the state-dependent MAC with a cribbing encoder. In Section~III we present our main results, while Section~IV is devoted to the proofs. 
\begin{figure} \centering \setlength{\unitlength}{0.1mm} \begin{picture}(850,320) \linethickness{0.1mm} \put(30,265){$W_1$} \put(0,250){\vector(1,0){110}} \put(110,220){\line(0,1){60}} \put(110,220){\line(1,0){90}} \put(110,280){\line(1,0){90}} \put(200,220){\line(0,1){60}} \put(140,240){$f_1$} \put(200,250){\vector(1,0){120}} \put(220,265){$X_{1,k}$} \put(250,250){\line(0,-1){80}} \put(210,170){\line(1,0){40}} \put(210,170){\oval(30,30)[l]} \put(210,155){\line(0,1){30}} \put(155,170){\line(1,0){40}} \put(155,170){\vector(0,-1){60}} \put(30,95){$W_2$} \put(0,80){\vector(1,0){110}} \put(110,50){\line(0,1){60}} \put(110,50){\line(1,0){90}} \put(110,110){\line(1,0){90}} \put(200,50){\line(0,1){60}} \put(127,70){$f_{2,k}$} \put(200,80){\vector(1,0){120}} \put(220,95){$X_{2,k}$} \put(340,160){\small $P_{Y|X_1,X_2,S}$} \put(320,50){\line(0,1){230}} \put(510,50){\line(0,1){230}} \put(320,280){\line(1,0){190}} \put(320,50){\line(1,0){190}} \put(510,160){\vector(1,0){110}} \put(540,175){$Y_k$} \put(0,5){\line(1,0){415}} \put(415,5){\vector(0,1){40}} \put(155,5){\vector(0,1){40}} \put(30,15){$S^n$} \put(620,130){\line(0,1){60}} \put(620,130){\line(1,0){80}} \put(620,190){\line(1,0){80}} \put(700,130){\line(0,1){60}} \put(645,155){$g$} \put(700,160){\vector(1,0){140}} \put(715,175){${\hat{W}}_1,\hat{W}_2$} \end{picture} \label{fig:setup} \caption{State-dependent MAC with a cribbing encoder.} \end{figure} \section{Channel model} A discrete memoryless state-dependent multiple-access channel is a triple $\left( {\cal X}_1\times{\cal X}_2\times{\cal S}, p(y|x_1,x_2,s),{\cal Y}\right)$ where ${\cal X}_1$ and ${\cal X}_2$ are finite sets corresponding to the input alphabets of Encoder~1 and Encoder~2 respectively, ${\cal S}$ is a finite set corresponding to the alphabet of the state governing the channel law, the finite set ${\cal Y}$ is the output alphabet at the receiver, and $p(\cdot|x_1,x_2,s)$ is a collection of probability laws on ${\cal Y}$ indexed by the input symbols 
$x_1\in{\cal X}_1$, $x_2\in{\cal X}_2$ and $s\in{\cal S}$. The channel's law extends to $n$-tuples according to the memoryless law \begin{IEEEeqnarray*}{l} \Pr(y^n|x_{1}^{n},x_{2}^n,s^{n})= \prod_{k=1}^n p(y_k|x_{1,k},x_{2,k},s_k) \ , \end{IEEEeqnarray*} where $x_{1,k},x_{2,k},s_{k}$ and $y_k$ denote the inputs, state and output of the channel at time $k$, and $x_1^k\triangleq (x_{1,1},\ldots,x_{1,k})$. Encoder~1 sends a message $W_1$, which is drawn uniformly over the set $\{1,\ldots,e^{nR_1}\}\triangleq {\cal W}_1$, to the receiver, while Encoder~2 sends to the receiver a message $W_2$ which is independent of $W_1$ and is drawn uniformly over the set $\{1,\ldots,e^{nR_2}\}\triangleq {\cal W}_2$. The channel state sequence $S^n$, which is drawn i.i.d. according to the law $p_S$, is available non-causally to Encoder~2. It is further assumed that Encoder~2 ``cribs'' and learns the sequence of channel inputs emitted by Encoder~1 (in the sense of \cite{frans}) before generating its next channel input. The model is depicted in Figure~1. An $(e^{nR_1},e^{nR_2},n)$ code for the state-dependent multiple-access channel with a {\it strictly causal} cribbing encoder consists of: 1) \ Encoder~1 defined by a deterministic mapping \begin{IEEEeqnarray}{l} f_1 \colon {\cal W}_1 \to {\cal X}_1^n \label{eq:enc1} \end{IEEEeqnarray} which maps a message $w_1\in {\cal W}_1$ to a codeword $x_1^n\in {\cal X}_1^n$. 2) \ Encoder~2 defined by a collection of encoding functions \begin{IEEEeqnarray}{l} f_{2,k}^{(sc)} \colon {\cal W}_2\times{\cal S}^n\times{\cal X}_1^{k-1}\to {\cal X}_2 \ \ \ k=1,2,\ldots,n \label{eq:enc2} \end{IEEEeqnarray} which, based on the message $w_2\in {\cal W}_2$, the state sequence $s^n\in {\cal S}^n$ and what was learned from the other encoder by cribbing, $x_1^{k-1}\in {\cal X}_1^{k-1}$, map to the next channel input $x_{2,k}\in {\cal X}_2$. 
3) \ The decoder, defined by a mapping \begin{IEEEeqnarray*}{l} g \colon {\cal Y}^n \to {\cal W}_1\times{\cal W}_2 \end{IEEEeqnarray*} which maps a received sequence $y^n$ to a message pair $({\hat{w}}_1,\hat{w}_2)\in {\cal W}_1\times{\cal W}_2$. \medskip An $(e^{nR_1},e^{nR_2},n)$ code for the state-dependent multiple-access channel with a {\it causal} cribbing encoder differs from that for a strictly causal encoder only in the encoding rule at Encoder~2, which is defined by a collection of encoding functions \begin{IEEEeqnarray}{l} f_{2,k}^{(c)} \colon {\cal W}_2\times{\cal S}^n\times{\cal X}_1^{k}\to {\cal X}_2 \ \ \ k=1,2,\ldots,n \label{eq:enc2c} \end{IEEEeqnarray} which, based on the message $w_2\in {\cal W}_2$, the state sequence $s^n\in {\cal S}^n$ and what was learned from the other encoder by cribbing, $x_1^{k}\in {\cal X}_1^{k}$, map to the current channel input $x_{2,k}\in {\cal X}_2$. \medskip For a given code, the block average probability of error is \begin{IEEEeqnarray*}{l} P_e^{(n)}= \frac{1}{e^{n(R_1+R_2)}} \sum_{w_1=1}^{e^{nR_1}}\sum_{w_2=1}^{e^{nR_2}} P_e^{(n)}(w_1,w_2) \end{IEEEeqnarray*} where \begin{IEEEeqnarray*}{l} P_e^{(n)}(w_1,w_2)= \nonumber \\ \ \ \ \ \ \ \ \Pr\left\{(\hat{w}_1,\hat{w}_2)\neq (w_1,w_2) | (w_1,w_2) \ \mbox{sent} \right\}. \end{IEEEeqnarray*} A rate-pair $(R_1,R_2)$ is said to be achievable if there exists a sequence of $(e^{nR_1},e^{nR_2},n)$ codes with $\lim_{n\to\infty} P_e^{(n)}=0$. The capacity region of the state-dependent MAC with a cribbing encoder is the closure of the set of achievable rate-pairs. \section{Main results} Our first result is a characterization of the capacity region for the two-user discrete memoryless state-dependent MAC with state-sequence available non-causally at a strictly causal cribbing encoder. By combining the coding strategies from \cite{anelia} and \cite{frans} we prove the following. 
\vskip.1truein \begin{theorem} \label{th:thm1} Consider the discrete memoryless state-dependent MAC $\left( {\cal X}_1\times{\cal X}_2\times{\cal S}, p(y|x_1,x_2,s),{\cal Y}\right)$ with state-sequence available non-causally at a strictly causal cribbing encoder and finite alphabets ${\cal S},{\cal X}_1,{\cal X}_2$. The capacity region of this channel is \begin{IEEEeqnarray}{l} {\cal C}=\bigcup_{p_{VSUX_{1}X_{2}Y}} \biggl\{ (R_1,R_2): \nonumber \\ 0\leq R_1\leq H(X_{1}|V) \nonumber \\ 0\leq R_2\leq I(U;Y|VX_1)-I(U;S|V) \nonumber \\ 0\leq R_1+R_2\leq I(VUX_1;Y)-I(U;S|V)\biggr\} , \label{eq:r11} \end{IEEEeqnarray} where the union in (\ref{eq:r11}) is over all laws on $V\in\set{V},S\in \set{S},U\in\set{U}, X_1\in \set{X}_1, X_2\in\set{X}_2, Y\in\set{Y}$ of the form \begin{IEEEeqnarray}{l} p_{VSUX_{1}X_{2}Y}(v,s,u,x_{1},x_{2},y) \nonumber \\ = p_{V}(v)p_S(s)p_{X_{1}|V}(x_{1}|v) p_{UX_{2}|SV}(u,x_{2}|s,v)p(y|x_{1},x_{2},s) . \nonumber \\ \label{eq:jointlawp} \end{IEEEeqnarray} The cardinalities of the auxiliary random variables $V$ and $U$ are bounded by \begin{IEEEeqnarray*}{l} |{\cal V}| \leq |{\cal X}_1| |{\cal X}_2| |{\cal S}| + 5 \nonumber \\ |{\cal U}| \leq |{\cal X}_1| |{\cal X}_2| |{\cal S}| |{\cal V}| + 2. \end{IEEEeqnarray*} \end{theorem} \vskip.1truein Our second result is a characterization of the capacity region for the two-user discrete memoryless state-dependent MAC with state-sequence available non-causally at a causal cribbing encoder. \vskip.1truein \begin{theorem} \label{th:thm2} Consider the discrete memoryless state-dependent MAC $\left( {\cal X}_1\times{\cal X}_2\times{\cal S}, p(y|x_1,x_2,s),{\cal Y}\right)$ with state-sequence available non-causally at a causal cribbing encoder and finite alphabets ${\cal S},{\cal X}_1,{\cal X}_2$. 
The capacity region of this channel is the set of rate pairs satisfying (\ref{eq:r11}) except that the union is taken over all laws on $V\in\set{V},S\in \set{S},U\in\set{U}, X_1\in \set{X}_1, X_2\in\set{X}_2, Y\in\set{Y}$ of the form \begin{IEEEeqnarray}{l} p_{VSUX_{1}X_{2}Y}(v,s,u,x_{1},x_{2},y) \nonumber \\ = p_{V}(v)p_S(s)p_{X_{1}|V}(x_{1}|v)p_{U|SV}(u|s,v) \nonumber \\ \ \ \ \quad p_{X_{2}|VUSX_1}(x_{2}|v,u,s,x_1)p(y|x_{1},x_{2},s) . \label{eq:jointlawpc} \end{IEEEeqnarray} \end{theorem} \section{Proofs} \subsection{Proof of the achievability part in Theorem~\ref{th:thm1}} We propose a coding scheme that is based on Block-Markov superposition encoding and which combines the coding technique of \cite{anelia} with that of \cite{frans}, while the decoder uses backward decoding. \subsubsection{Coding Scheme}\label{sec:coding-scheme} We consider $B$ blocks, each of $n$ symbols. A sequence of $B-1$ message pairs $(W_1^{(b)},W_2^{(b)})$, for $b=1,\ldots, B-1$, will be transmitted during $B$ transmission blocks. Here the sequence $\{W_1^{(b)}\}$ is an i.i.d. sequence of uniform random variables over $\left\{1,\ldots,e^{nR_1}\right\}$ and independent thereof $\{W_2^{(b)}\}$ is an i.i.d. sequence of uniform random variables over $\left\{1,\ldots,e^{nR_{2}}\right\}$. As $B\to\infty$, for fixed $n$, the rate pair of the message $(W_1,W_2)$, $(\tilde{R}_1,\tilde{R}_2)=(R_1(B-1)/B,R_2(B-1)/B)$, is arbitrarily close to $(R_1,R_2)$. We assume a tuple of random variables $V\in\set{V},S\in \set{S},U\in\set{U}, X_1\in \set{X}_1, X_2\in\set{X}_2, Y\in\set{Y},$ of joint law (\ref{eq:jointlawp}). \vskip.1truein {\it Random coding and partitioning:} In each block $b,b=1,2,\ldots,B$, we shall use the following code. \begin{itemize} \item Generate $e^{nR_1}$ sequences $\makebox{{\boldmath $v$}}=({v}_{1},\ldots,{v}_{n})$, each with probability $\Pr\left({\mbox{\boldmath $v$}}\right)=\prod_{k=1}^{n}p_{V}({v}_{k})$. 
Label them ${\mbox{\boldmath $v$}}\left(\omega_0\right)$ where $\omega_{0}\in\left\{1,\ldots,e^{nR_{1}}\right\}$. \item For each ${\mbox{\boldmath $v$}}\left(\omega_0\right)$ generate $e^{nR_1}$ sequences ${\mbox{\boldmath $x$}_1}=({x}_{1,1},{x}_{1,2},\ldots,{x}_{1,n})$, each with probability $\Pr\left({\mbox{\boldmath $x$}_1}|{\mbox{\boldmath $v$}}\left(\omega_0\right)\right)= \prod_{k=1}^{n}p_{X_1|V}({x}_{1,k}|v_k(\omega_0))$. Label them ${\mbox{\boldmath $x$}_1}\left(i,\omega_0\right), i\in \left\{1,\ldots,e^{nR_{1}}\right\}$. \item For each ${\mbox{\boldmath $v$}}\left(\omega_0\right)$ generate $e^{n(R_{2}+R')}$ sequences ${\mbox{\boldmath $u$}}=({u}_{1},{u}_{2},\ldots,{u}_{n})$, each with probability $\Pr\left({\mbox{\boldmath $u$}}|{\mbox{\boldmath $v$}}\left(\omega_0\right)\right)= \prod_{k=1}^{n}p_{U|V}({u}_{k}|v_k(\omega_0))$. Randomly partition the set $\left\{{\mbox{\boldmath $u$}}\right\}$ into $e^{nR_2}$ bins, each consisting of $e^{nR'}$ codewords. Now label the codewords by ${\mbox{\boldmath $u$}}\left(j,\jmath,\omega_0\right), j\in \{1,\ldots,e^{nR_{2}}\}, \jmath \in \{1,\ldots,e^{nR'}\}$ where $j$ identifies the bin and $\jmath$ the index within the bin. \end{itemize} \vskip.1truein {\it Encoding :} We denote the realizations of the sequences $\{W_1^{(b)}\}$ and $\{W_{2}^{(b)}\}$ by $\{w_1^{(b)}\}$ and $\{w_{2}^{(b)}\}$, and the realization of the state sequence $(S_1^{(b)},S_2^{(b)},\ldots,S_n^{(b)})$ by ${\mbox{\boldmath $s$}}^{(b)}$. The code builds upon a Block-Markov structure in which the message $(w_1^{(b)},w_{2}^{(b)})$ is encoded over the successive blocks $b$ and $(b+1)$ such that, $\omega_0^{(b+1)}=w_1^{(b)}$, for $b=1,\ldots, B-1$. 
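The role of the within-bin index $\jmath$ is to permit a joint-typicality search against the state sequence: by the covering lemma, when $R'>I(U;S|V)$ a bin of $e^{nR'}$ independently drawn codewords contains, with high probability, at least one sequence jointly typical with the state sequence. The following toy binary simulation (a sketch with arbitrary parameters, not the paper's construction; it drops the conditioning on $V$ and uses a simple agreement-fraction test for typicality) illustrates the search:

```python
import math
import random

def find_typical_in_bin(s, bin_seqs, p_agree=0.9, eps=0.1):
    """Return the first within-bin index jmath whose candidate sequence u
    agrees with the state sequence s on a fraction of positions within
    eps of p_agree (a toy joint-typicality test for a binary (U, S) law
    with P(U = S) = p_agree), or None if the covering step fails."""
    n = len(s)
    for jmath, u in enumerate(bin_seqs):
        agree = sum(ui == si for ui, si in zip(u, s)) / n
        if abs(agree - p_agree) < eps:
            return jmath
    return None

rng = random.Random(1)
n = 20
# I(U;S) = ln 2 - H_b(0.9), about 0.368 nats/symbol; pick R' = 0.5 above
# it, so each bin holds about e^{n R'} i.i.d. Bern(1/2) candidates.
bin_size = round(math.exp(0.5 * n))
s = [rng.randrange(2) for _ in range(n)]
bin_seqs = [[rng.randrange(2) for _ in range(n)] for _ in range(bin_size)]
print(find_typical_in_bin(s, bin_seqs))  # an index: covering succeeded
```

With $R'$ below $I(U;S)$ the expected number of typical candidates per bin decays exponentially in $n$ and the search fails with high probability, which is exactly why constraint (\ref{eq:decrec22}) appears below.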
\medskip The messages $\{w_1^{(b)}\}$ and $\{w_{2}^{(b)}\}$, $b=1,2,\ldots,B-1$ are encoded as follows: In block $1$ the encoders send \begin{IEEEeqnarray*}{l} \mbox{\boldmath $x$}_1^{(1)} = \mbox{\boldmath $x$}_1(w_1^{(1)},1) \nonumber \\ \mbox{\boldmath $x$}_2^{(1)} = \mbox{\boldmath $x$}_2({\mbox{\boldmath $s$}}^{(1)},w_{2}^{(1)},1). \end{IEEEeqnarray*} Here, the encoding $\mbox{\boldmath $x$}_2({\mbox{\boldmath $s$}}^{(b)},w_{2}^{(b)},\omega_0^{(b)})$ is defined as follows: \begin{enumerate} \item Find the typical ${\mbox{\boldmath $u$}}(w_{2}^{(b)},\jmath_0,\omega_0^{(b)})$: Search within the bin ${\mbox{\boldmath $u$}}(w_{2}^{(b)},\cdot,\omega_0^{(b)})$ for the lowest $\jmath_0\in\{1,\ldots,e^{nR'}\}$ such that ${\mbox{\boldmath $u$}}(w_{2}^{(b)},\jmath_0,\omega_0^{(b)})$ is jointly typical with the pair $({\mbox{\boldmath $v$}}(\omega_0^{(b)}),{\mbox{\boldmath $s$}}^{(b)})$; denote this $\jmath_0$ as $\jmath_0({\mbox{\boldmath $s$}}^{(b)},w_{2}^{(b)},\omega_0^{(b)})$. If such $\jmath_0$ is not found or if the state sequence ${\mbox{\boldmath $s$}}^{(b)}$ is non-typical an error is declared and $\jmath_0({\mbox{\boldmath $s$}}^{(b)},w_{2}^{(b)},\omega_0^{(b)})=1$. \item Generate the codeword $\mbox{\boldmath $x$}_2({\mbox{\boldmath $s$}}^{(b)},w_{2}^{(b)},\omega_0^{(b)})$ by drawing its components i.i.d. conditionally on the triple $({\mbox{\boldmath $s$}}^{(b)},{\mbox{\boldmath $u$}}(w_{2}^{(b)},\jmath_0,\omega_0^{(b)}),{\mbox{\boldmath $v$}}(\omega_0^{(b)}))$, where the conditional law is induced by (\ref{eq:jointlawp}). \end{enumerate} Suppose that, as a result of cribbing from Encoder~1, before the beginning of block $b=2,3,\ldots,B$, Encoder~2 has an estimate $\hat{\hat{w}}_1^{(b-1)}$ for $w_1^{(b-1)}$. 
Then, in block $b=2,3,\ldots,B-1$, the encoders send \begin{IEEEeqnarray*}{l} \mbox{\boldmath $x$}_1^{(b)} = \mbox{\boldmath $x$}_1(w_1^{(b)},w_1^{(b-1)}) \nonumber \\ \mbox{\boldmath $x$}_2^{(b)} = \mbox{\boldmath $x$}_2({\mbox{\boldmath $s$}}^{(b)},w_{2}^{(b)},\hat{\hat{w}}_1^{(b-1)}), \end{IEEEeqnarray*} and in block $B$ \begin{IEEEeqnarray*}{l} \mbox{\boldmath $x$}_1^{(B)} = \mbox{\boldmath $x$}_1(1,w_1^{(B-1)}) \nonumber \\ \mbox{\boldmath $x$}_2^{(B)} = \mbox{\boldmath $x$}_2({\mbox{\boldmath $s$}}^{(B)},1,\hat{\hat{w}}_1^{(B-1)}). \end{IEEEeqnarray*} \vskip.1truein {\it Decoding at the receiver:} After the reception of block-$B$ the receiver uses backward decoding starting from block $B$ downward to block $1$ and decodes the messages as follows. In block $B$ the receiver looks for $\hat{w}_1^{(B-1)}$ such that \begin{IEEEeqnarray*}{l} \left( \mbox{\boldmath $v$}({\hat{w}}_{1}^{(B-1)}), \mbox{\boldmath $x$}_1(1,{\hat{w}}_{1}^{(B-1)}), {\mbox{\boldmath $u$}}(1,\jmath_0,{\hat{w}}_{1}^{(B-1)}), \right. \nonumber \\ \hspace{1.3cm} \left. \mbox{\boldmath $x$}_2({\mbox{\boldmath $s$}}^{(B)},1,{\hat{w}}_{1}^{(B-1)}), \mbox{\boldmath $y$}^{(B)} \right) \in{\cal A}_{\epsilon}(V,X_1,U,X_2,Y), \end{IEEEeqnarray*} where $\jmath_0= \jmath_0({\mbox{\boldmath $s$}}^{(B)},1,{\hat{w}}_{1}^{(B-1)})$. Next, assume that, decoding backwards up to (and including) block $b+1$, the receiver decoded $\hat{w}_1^{(B-1)},(\hat{w}_{2}^{(B-1)},\hat{w}_1^{(B-2)}), \ldots,(\hat{w}_2^{(b+1)},\hat{w}_1^{(b)})$. To decode block $b$, the receiver looks for $(\hat{w}_2^{(b)},\hat{w}_1^{(b-1)})$ such that \begin{IEEEeqnarray*}{l} \left( \mbox{\boldmath $v$}({\hat{w}}_{1}^{(b-1)}), \mbox{\boldmath $x$}_1(\hat{w}_1^{(b)},{\hat{w}}_{1}^{(b-1)}), {\mbox{\boldmath $u$}}(\hat{w}_2^{(b)},\jmath_0,{\hat{w}}_{1}^{(b-1)}), \right. \nonumber \\ \hspace{1.3cm} \left. 
\mbox{\boldmath $x$}_2({\mbox{\boldmath $s$}}^{(b)},\hat{w}_2^{(b)},{\hat{w}}_{1}^{(b-1)}), \mbox{\boldmath $y$}^{(b)} \right) \in{\cal A}_{\epsilon}(V,X_1,U,X_2,Y), \end{IEEEeqnarray*} where $\jmath_0= \jmath_0({\mbox{\boldmath $s$}}^{(b)},\hat{w}_{2}^{(b)},{\hat{w}}_{1}^{(b-1)})$. \medskip {\it Decoding at Encoder~2:} To obtain cooperation, after block $b=1,2,\ldots,B-1$, Encoder~2 chooses $\tilde{w}_1^{(b)}$ such that \begin{IEEEeqnarray*}{l} \left( \mbox{\boldmath $v$}(\tilde{\omega}_{0}^{(b)}), \mbox{\boldmath $x$}_1(\tilde{w}_{1}^{(b)},\tilde{\omega}_{0}^{(b)}), \mbox{\boldmath $x$}_1^{(b)} \right) \in{\cal A}_{\epsilon}(V,X_1,X_1), \end{IEEEeqnarray*} where $\tilde{\omega}_{0}^{(b)}=\tilde{w}_{1}^{(b-1)}$ was determined at the end of block $b-1$ and $\tilde{\omega}_{0}^{(1)}=1$. When a decoding step fails to recover an index (or index pair) satisfying the decoding rule, or recovers more than one, an index (or index pair) is chosen at random. \vskip.1truein \subsubsection{Bounding the Probability of Error} Genie-aided arguments as in \cite{rimoldiurbanke96} and \cite{wozencraftjacobs65} can be used to show that the probability that either Encoder~2 makes an encoding error or the receiver makes a decoding error after block $b$ in the above scheme is upper bounded by the probability that at least one of the following events $E_0^{(b)}$--$E_5^{(b)}$ happens. {\it Error events:} \begin{itemize} \item $E_0^{(b)}:$ \begin{IEEEeqnarray*}{l} \left(\mbox{\boldmath $v$}(\omega_{0}^{(b)}),\mbox{\boldmath $u$}(w_{2}^{(b)},\jmath_0,\omega_{0}^{(b)}), \mbox{\boldmath $x$}_{1}(w_1^{(b)},\omega_{0}^{(b)}) \right) \\ \hspace{2.5cm} \not\in {\cal A}_{\epsilon}(V,U,X_{1}). 
\IEEEeqnarraynumspace \end{IEEEeqnarray*} \item $E_1^{(b)}$: There exists $\tilde{w}_1\neq w_{1}^{(b)}$ such that \begin{IEEEeqnarray*}{l} \left( \mbox{\boldmath $v$}({\omega}_{0}^{(b)}), \mbox{\boldmath $x$}_1(\tilde{w}_{1},{\omega}_{0}^{(b)}), \mbox{\boldmath $x$}_1^{(b)} \right) \in{\cal A}_{\epsilon}(V,X_1,X_1). \end{IEEEeqnarray*} \item $E_2^{(b)}:$ There does not exist $\jmath_0\in\{1,\ldots,e^{nR'}\}$ such that \begin{IEEEeqnarray*}{l} \left(\mbox{\boldmath $v$}(\omega_{0}^{(b)}),\mbox{\boldmath $u$}(w_{2}^{(b)},\jmath_0,\omega_{0}^{(b)}), {\mbox{\boldmath $s$}}^{(b)} \right) \in {\cal A}_{\epsilon}(V,U,S). \IEEEeqnarraynumspace \end{IEEEeqnarray*} \item $E_3^{(b)}:$ \begin{IEEEeqnarray*}{l} \left(\mbox{\boldmath $v$}(\omega_{0}^{(b)}),\mbox{\boldmath $u$}(w_{2}^{(b)},\jmath_0,\omega_{0}^{(b)}), \mbox{\boldmath $x$}_{1}(w_1^{(b)},\omega_{0}^{(b)}),\right. \\ \left. \hspace{1.0cm} \mbox{\boldmath $x$}_2({\mbox{\boldmath $s$}}^{(b)},{w}_2^{(b)},{{w}}_{1}^{(b-1)}), \mbox{\boldmath $y$}^{(b)} \right) \\ \hspace{2.5cm} \not\in {\cal A}_{\epsilon}(V,U,X_{1},X_{2},Y). \IEEEeqnarraynumspace \end{IEEEeqnarray*} \item $E_4^{(b)}$: There exists $\tilde{\omega}_0\neq \omega_0^{(b)}$ such that \begin{IEEEeqnarray*}{l} \left( \mbox{\boldmath $v$}(\tilde{\omega}_{0}^{(b)}), \mbox{\boldmath $x$}_1(w_1^{(b)},\tilde{\omega}_{0}^{(b)}), \mbox{\boldmath $u$}(j,\jmath_0,\tilde{\omega}_{0}^{(b)}), \right. \nonumber \\ \left. \hspace{1.0cm} \mbox{\boldmath $x$}_2({\mbox{\boldmath $s$}}^{(b)},j,\tilde{\omega}_0^{(b)}), \mbox{\boldmath $y$}^{(b)} \right) \hspace{1cm}\nonumber \\ \hspace{2.5cm}\in{\cal A}_{\epsilon}(V,U,X_1,X_2,Y), \end{IEEEeqnarray*} for some pair $(j,\jmath_0) \ , \ j\in{\cal W}_{2} \ , \ \jmath_0\in\{1,\ldots,e^{nR'}\}$. 
\item $E_5^{(b)}$: There exists $\tilde{w}_{2}\neq w_{2}^{(b)}$ such that \begin{IEEEeqnarray*}{l} \left( \mbox{\boldmath $v$}({\omega}_{0}^{(b)}), \mbox{\boldmath $x$}_1(w_1^{(b)},{\omega}_{0}^{(b)}), \mbox{\boldmath $u$}(\tilde{w}_2,\jmath_0,{\omega}_{0}^{(b)}), \right. \nonumber \\ \left. \hspace{1.0cm} \mbox{\boldmath $x$}_2({\mbox{\boldmath $s$}}^{(b)},\tilde{w}_2,{\omega}_0^{(b)}), \mbox{\boldmath $y$}^{(b)} \right) \hspace{1cm}\nonumber \\ \hspace{2.5cm}\in{\cal A}_{\epsilon}(V,U,X_1,X_2,Y), \end{IEEEeqnarray*} for some index $\jmath_0\in\{1,\ldots,e^{nR'}\}$. \end{itemize} We define the event \begin{IEEEeqnarray*}{rCl} F_1^{(b)} \triangleq \bigcup_{j=4}^{5}E_{j}^{(b)}, \qquad b=1,\ldots, B, \end{IEEEeqnarray*} the event \begin{IEEEeqnarray*}{rCl} F_2 \triangleq \bigcup_{b=1}^{B}E_{0}^{(b)}, \end{IEEEeqnarray*} the event \begin{IEEEeqnarray*}{rCl} F_3 \triangleq \bigcup_{b=1}^{B} \left(E_{0}^{(b)}\cup E_{1}^{(b)}\right), \end{IEEEeqnarray*} the event \begin{IEEEeqnarray*}{rCl} F_4 \triangleq \bigcup_{b=1}^{B} \left(E_{0}^{(b)}\cup E_{1}^{(b)}\cup E_2^{(b)}\right), \end{IEEEeqnarray*} and the event \begin{IEEEeqnarray*}{rCl} F_5 \triangleq \bigcup_{b=1}^{B} \left(E_{0}^{(b)}\cup E_{1}^{(b)}\cup E_2^{(b)}\cup E_3^{(b)}\right). \end{IEEEeqnarray*} We can upper bound the average probability of error $\bar{P}_e$ averaged over all codebooks and all random partitions by \begin{IEEEeqnarray*}{rCl}\label{eq:Petot} \bar{P}_{e} & \leq & \sum_{b=1}^{B}\left\{ \Pr\left[E_{0}^{(b)}\right] + \Pr\left[E_{1}^{(b)}|{F_2}^{c},{E_{1}^{(1\ldots b-1)}}^c\right] \right\} \nonumber \\ & & + \sum_{b=1}^{B}\left\{ \Pr\left[E_{2}^{(b)}|F_3^{c}\right] + \Pr\left[E_{3}^{(b)}|{F_4}^{c},{E_{3}^{(1\ldots b-1)}}^c\right] \right\} \nonumber \\ & & +\sum_{b=1}^{B}\Pr\left[F_1^{(b)}|{F_5}^{c},{F_1^{(b+1\ldots B)}}^{c}\right], \end{IEEEeqnarray*} where $F{^{(1\ldots b-1)}}^{c}$ denotes the complement of the event $F^{(1)} \cup \ldots \cup F^{(b-1)}$.
Furthermore, we can upper bound each of the summands in the last component as \begin{IEEEeqnarray*}{rCl}\lefteqn{ \Pr\left(F_1^{(b)}|{F_5}^{c},{F_1^{(b+1\ldots B)}}^{c}\right)}\\ & =& \Pr\left(\bigcup^{5}_{j=4}E_{j}^{(b)}| {F_5}^{c},{F_1^{(b+1\ldots B)}}^{c} \right)\\ &\leq & \Pr\left(E_{4}^{(b)} \big| {F_5}^{c},{F_1^{(b+1\ldots B)}}^{c}\right) \\ && + \Pr\left(E_{5}^{(b)} \big| {F_5}^{c},{F_1^{(b+1\ldots B)}}^{c}\right) . \end{IEEEeqnarray*} In the following, we separately examine each of the above summands. By Lemma~\ref{th:lm1} (Appendix A), $\Pr\left[E_{3}^{(b)}|{F_4}^{c},{E_{3}^{(1\ldots b-1)}}^c\right]$ and $\Pr\left[E_{0}^{(b)}\right]$ can be made arbitrarily small for sufficiently large $n$. Also, by Lemma~\ref{th:lm2}: \begin{itemize} \item If \begin{eqnarray} R_1 < H(X_1|V), \label{eq:decenc2} \end{eqnarray} then $\Pr\left[E_{1}^{(b)}|{F_2}^{c},{E_{1}^{(1\ldots b-1)}}^c\right]$ can be made arbitrarily small, provided that $n$ is sufficiently large; \item If \begin{eqnarray} R_1+R_{2}+R' < I(VUX_1;Y), \label{eq:decrec1} \end{eqnarray} then $\Pr\left(E_{4}^{(b)} \big| {F_5}^{c},{F_1^{(b+1\ldots B)}}^{c}\right)$ can be made arbitrarily small, provided that $n$ is sufficiently large; \item If \begin{eqnarray} R_2+R'<I(U;Y|VX_1), \label{eq:decrec21} \end{eqnarray} then $\Pr\left(E_{5}^{(b)} \big| {F_5}^{c},{F_1^{(b+1\ldots B)}}^{c}\right)$ can be made arbitrarily small, provided that $n$ is sufficiently large. \end{itemize} Finally, by the covering lemma (see \cite{ahlkor,wy1,ber} or \cite[Chapter 13]{cov}), if \begin{eqnarray} R'>I(U;S|V), \label{eq:decrec22} \end{eqnarray} then $\Pr\left[E_{2}^{(b)} \big|{F_3}^{c}\right]$ can be made arbitrarily small, provided that $n$ is sufficiently large. The combination of (\ref{eq:decenc2}), (\ref{eq:decrec1}), (\ref{eq:decrec21}), and (\ref{eq:decrec22}) establishes the achievability of the rate region (\ref{eq:r11}) for a law of the form (\ref{eq:jointlawp}).
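As a purely illustrative aside (not part of the proof), the thresholds in (\ref{eq:decenc2})--(\ref{eq:decrec22}) can be evaluated numerically for any candidate joint law of the form (\ref{eq:jointlawp}). The sketch below does this for a hypothetical binary joint law on $(V,U,X_1,S,Y)$, chosen only so that the required Markov structure holds, with a crude noisy-XOR stand-in replacing the actual channel; it computes $H(X_1|V)$, $I(U;Y|VX_1)$, and $I(U;S|V)$ in nats.

```python
import itertools
import math
from collections import defaultdict

def marginal(p, keep):
    """Marginalize a joint pmf (dict: outcome tuple -> prob) onto coordinates `keep`."""
    m = defaultdict(float)
    for outcome, prob in p.items():
        m[tuple(outcome[i] for i in keep)] += prob
    return m

def entropy(p):
    """Entropy in nats of a pmf given as a dict of probabilities."""
    return -sum(q * math.log(q) for q in p.values() if q > 0)

def cond_mi(p, a, b, c):
    """I(A;B|C) = H(A,C) + H(B,C) - H(A,B,C) - H(C); a, b, c index the joint tuple."""
    return (entropy(marginal(p, a + c)) + entropy(marginal(p, b + c))
            - entropy(marginal(p, a + b + c)) - entropy(marginal(p, c)))

# Hypothetical binary joint law on (V, U, X1, S, Y), coordinates 0..4:
# V and S uniform and independent; U correlated with (V, S); X1 depends on V
# only (so U and X1 are conditionally independent given (S, V)); Y is a noisy
# XOR used purely as a stand-in for the channel output.
joint = {}
for v, u, x1, s, y in itertools.product((0, 1), repeat=5):
    pu = 0.8 if u == (v ^ s) else 0.2
    px1 = 0.6 if x1 == v else 0.4
    py = 0.9 if y == (x1 ^ u) else 0.1
    joint[(v, u, x1, s, y)] = 0.25 * pu * px1 * py

r1_max = entropy(marginal(joint, (0, 2))) - entropy(marginal(joint, (0,)))  # H(X1|V)
i_uy = cond_mi(joint, (1,), (4,), (0, 2))   # I(U;Y|V,X1)
i_us = cond_mi(joint, (1,), (3,), (0,))     # I(U;S|V), the binning rate R' must exceed this
print(r1_max, i_uy, i_us)
```

For this toy law, $I(U;Y|VX_1)-I(U;S|V)>0$, so choosing $R'$ slightly above $I(U;S|V)$ leaves room for a positive $R_2$ in (\ref{eq:decrec21}).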
\vskip.1truein \subsection{Proof of the converse in Theorem~\ref{th:thm1}} Consider an $(e^{nR_1},e^{nR_2},n)$ code with average block error probability $P_e^{(n)}$, and a law on ${\cal W}_1\times{\cal W}_2\times{\cal X}_1^n\times{\cal X}_2^n\times{\cal Y}^n\times{\cal S}^n$ given by \begin{IEEEeqnarray}{l} p_{W_1W_2X_1^nX_2^nS^nY^n} \nonumber \\ =p_{W_1}p_{W_2}I_{\{X_1^n=f_1(W_1)\}}p_{X_2^n|W_1W_2S^n} \prod_{k=1}^np_{Y_k|X_{1,k}X_{2,k}S_k}. \nonumber \\ \label{eq:convlaw} \end{IEEEeqnarray} \medskip Let $V_k$ be the random variable defined by \begin{IEEEeqnarray}{rCl} V_k & \triangleq & X_1^{k-1}, \label{eq:defVk} \end{IEEEeqnarray} and let $U_k$ be the random variable defined by \begin{IEEEeqnarray}{rCl} U_k & \triangleq & W_2Y^{k-1}S_{k+1}^{n}. \label{eq:defUk} \end{IEEEeqnarray} We start with an upper bound on $R_1$ by following steps similar to those in \cite[Section~V---Converse for situation 2]{frans}. \begin{IEEEeqnarray}{rCl} nR_1 & = & H(W_1|W_2) \nonumber\\ & = & I(W_1;Y^n|W_2)+H(W_1|W_2Y^n) \nonumber \\ & \leq & I(W_1;Y^n|W_2)+n\delta(P_e) \nonumber \\ & \stackrel{(a)}{=} & I(X_1^n;Y^n|W_2)+n\delta(P_e) \nonumber \\ & = & \sum_{k=1}^nI(X_{1,k};Y^n|W_2X_1^{k-1}) + n\delta(P_e) \nonumber \\ & \leq & \sum_{k=1}^n H(X_{1,k}|X_1^{k-1}) + n\delta(P_e) \nonumber \\ & = & \sum_{k=1}^n H(X_{1,k}|V_{k}) + n\delta(P_e), \label{eq:R1upr1} \end{IEEEeqnarray} where $(a)$ follows from the encoding relation (\ref{eq:enc1}). Next, consider $R_2$: \begin{IEEEeqnarray}{rCl} nR_2 & = & H(W_2|W_1) \nonumber\\ & \leq & I(W_2;Y^n|W_1)+n\delta(P_e) \nonumber \\ & = & \sum_{k=1}^n I(W_2;Y_k|W_1Y^{k-1})+n\delta(P_e) \nonumber \\ & \leq & \sum_{k=1}^n I(W_2Y^{k-1};Y_k|W_1)+n\delta(P_e) \nonumber \\ & = & \sum_{k=1}^n \left[I(W_2Y^{k-1}S_{k+1}^n;Y_k|W_1) \right. \nonumber \\ & & \quad \left. -I(S_{k+1}^n;Y_k|W_1W_2Y^{k-1})\right]+n\delta(P_e) \nonumber \\ & \stackrel{(b)}{=} & \sum_{k=1}^n\left[ I(W_2Y^{k-1}S_{k+1}^n;Y_k|W_1) \right. \nonumber \\ & & \quad \left.
-I(Y^{k-1};S_k|W_1W_2S_{k+1}^n)\right]+n\delta(P_e) \nonumber \\ & \stackrel{(c)}{=} & \sum_{k=1}^n \left[ I(W_2Y^{k-1}S_{k+1}^n;Y_k|W_1) \right. \nonumber \\ & & \quad \left. -I(W_2Y^{k-1}S_{k+1}^n;S_k|W_1)\right]+n\delta(P_e) \nonumber \\ & \stackrel{(d)}{=} & \sum_{k=1}^n \left[ I(W_2Y^{k-1}S_{k+1}^n;Y_k|W_1X_1^{k-1}X_{1,k}) \right. \nonumber \\ & & \quad \left. -I(W_2Y^{k-1}S_{k+1}^n;S_k|W_1X_1^{k-1})\right]+n\delta(P_e) \nonumber \\ & \stackrel{(e)}{=} & \sum_{k=1}^n \left[ I(W_2Y^{k-1}S_{k+1}^n;Y_k|X_1^{k-1}X_{1,k}) \right. \nonumber \\ & & \quad \left. -I(W_2Y^{k-1}S_{k+1}^n;S_k|W_1X_1^{k-1})\right] +n\delta(P_e) \nonumber \\ & \stackrel{(f)}{=} & \sum_{k=1}^n \left[ I(W_2Y^{k-1}S_{k+1}^n;Y_k|X_1^{k-1}X_{1,k}) \right. \nonumber \\ & & \quad \left. -I(W_2Y^{k-1}S_{k+1}^n;S_k|X_1^{k-1})\right]+n\delta(P_e) \nonumber \\ & = & \sum_{k=1}^n \left[ I(U_k;Y_k|V_{k}X_{1,k})-I(U_k;S_k|V_k)\right] . \label{eq:R2upr1} \end{IEEEeqnarray} Here, \begin{itemize} \item[$(b)$] follows by the Csisz\'{a}r--K\"{o}rner identity \cite[Lemma 7]{csi}; \item[$(c)$] follows since $(W_2S_{k+1}^n)$ is independent of $S_k$; \item[$(d)$] follows by the encoding relation (\ref{eq:enc1}); \item[$(e)$] follows since $W_1{-\hspace{-.635em}\circ\hspace{.4em}} X_{1,k}X_1^{k-1}{-\hspace{-.635em}\circ\hspace{.4em}} W_2Y_kY^{k-1}S_{k+1}^n$ and $W_1{-\hspace{-.635em}\circ\hspace{.4em}} X_{1,k}X_1^{k-1}{-\hspace{-.635em}\circ\hspace{.4em}} Y_k$ are Markov strings; and \item[$(f)$] follows since $W_1{-\hspace{-.635em}\circ\hspace{.4em}} X_1^{k-1}{-\hspace{-.635em}\circ\hspace{.4em}} W_2S_kY^{k-1}S_{k+1}^n$ is a Markov string. \end{itemize} Finally, we consider the sum-rate $R_1+R_2$: \begin{IEEEeqnarray}{rCl} n(R_1+R_2) & = & H(W_1W_2) \nonumber\\ & \leq & I(W_1W_2;Y^n)+n\delta(P_e) \nonumber \\ & = & \sum_{k=1}^n I(W_1W_2;Y_k|Y^{k-1})+n\delta(P_e) \nonumber \\ & \stackrel{(g)}{\leq} & \sum_{k=1}^n \left[I(W_1W_2Y^{k-1}S_{k+1}^n;Y_k) \right. \nonumber \\ & & \quad \left.
-I(W_1W_2Y^{k-1}S_{k+1}^n;S_k)\right]+n\delta(P_e) \nonumber \\ & = & \sum_{k=1}^n \left[ I(W_1X_1^{k-1}X_{1,k}W_2Y^{k-1}S_{k+1}^n;Y_k) \right. \nonumber \\ & & \quad \left. -I(W_1W_2Y^{k-1}S_{k+1}^n;S_k)\right]+n\delta(P_e) \nonumber \\ & \stackrel{(h)}{=} & \sum_{k=1}^n\left[ I(X_1^{k-1}X_{1,k}W_2Y^{k-1}S_{k+1}^n;Y_k) \right. \nonumber \\ & & \quad \left. -I(W_1W_2Y^{k-1}S_{k+1}^n;S_k)\right]+n\delta(P_e) \nonumber \\ & = & \sum_{k=1}^n \left[ I(V_kU_kX_{1,k};Y_k)-I(W_1;S_k) \right. \nonumber \\ & & \quad \left. -I(W_2Y^{k-1}S_{k+1}^n;S_k|W_1)\right]+n\delta(P_e) \nonumber \\ & = & \sum_{k=1}^n \left[ I(V_kU_kX_{1,k};Y_k)-I(W_1;S_k) \right. \nonumber \\ & & \quad \left. -I(W_2Y^{k-1}S_{k+1}^n;S_k|W_1X_1^{k-1})\right]+n\delta(P_e) \nonumber \\ & \stackrel{(i)}{=} & \sum_{k=1}^n \left[I(V_kU_kX_{1,k};Y_k)-I(U_k;S_k|V_k)\right] . \label{eq:sumRupr1} \end{IEEEeqnarray} Here, \begin{itemize} \item[$(g)$] follows by the same procedure as $(b)$ and $(c)$; \item[$(h)$] follows by the encoding relation (\ref{eq:enc1}) and since $W_1{-\hspace{-.635em}\circ\hspace{.4em}} X_{1,k}X_1^{k-1}W_2Y^{k-1}S_{k+1}^n{-\hspace{-.635em}\circ\hspace{.4em}} Y_k$ is a Markov string; and \item[$(i)$] follows since $W_1$ is independent of $S_k$ and since $W_1{-\hspace{-.635em}\circ\hspace{.4em}} X_1^{k-1}W_2Y^{k-1}S_{k+1}^n {-\hspace{-.635em}\circ\hspace{.4em}} S_k$ and $W_1{-\hspace{-.635em}\circ\hspace{.4em}} X_1^{k-1}{-\hspace{-.635em}\circ\hspace{.4em}} S_k$ are Markov strings. \end{itemize} \medskip Next we verify the joint law of the auxiliary random variables. 
By \eqref{eq:convlaw} and the encoding rule (\ref{eq:enc2}), we may write \begin{IEEEeqnarray*}{l} p_{W_1W_2X_1^{k-1}X_{1,k}S^{k-1}S_kS_{k+1}^nX_2^kY^{k-1}}= \nonumber \\ \ \ \ p_{W_1}p_{X_1^{k-1}|W_1}P_{X_{1,k}|W_1X_1^{k-1}}p_{S^{k-1}}p_{S_k}p_{S_{k+1}^n} \nonumber \\ \ \ \ \quad \cdot p_{W_2}p_{X_2^k|W_2X_1^{k-1}S^n}p_{Y^{k-1}|X_1^{k-1}X_2^{k-1}S^{k-1}}. \end{IEEEeqnarray*} Summing this joint law over $w_1$, we obtain \begin{IEEEeqnarray*}{l} \sum_{w_1} p_{W_1W_2X_1^{k-1}X_{1,k}S^{k-1}S_kS_{k+1}^nX_2^kY^{k-1}} \nonumber \\ = p_{W_2X_1^{k-1}X_{1,k}S^{k-1}S_kS_{k+1}^nX_2^kY^{k-1}} \nonumber \\ =p_{X_1^{k-1}}P_{X_{1,k}|X_1^{k-1}}p_{S^{k-1}}p_{S_k}p_{S_{k+1}^n} \nonumber \\ \ \ \ \quad \cdot p_{W_2}p_{X_2^k|W_2X_1^{k-1}S^n}p_{Y^{k-1}|X_1^{k-1}X_2^{k-1}S^{k-1}}. \end{IEEEeqnarray*} Summing this joint law over all possible sub-sequences $(s_1,s_2,\ldots,s_{k-1})$, we obtain \begin{IEEEeqnarray*}{l} \sum_{(s_1,s_2,\ldots,s_{k-1})} p_{W_2X_1^{k-1}X_{1,k}S^{k-1}S_kS_{k+1}^nX_2^kY^{k-1}} \nonumber \\ = p_{W_2X_1^{k-1}X_{1,k}S_kS_{k+1}^nX_2^kY^{k-1}} \nonumber \\ =p_{X_1^{k-1}}P_{X_{1,k}|X_1^{k-1}}p_{S_k}p_{S_{k+1}^n} \nonumber \\ \ \ \ \quad \cdot p_{W_2}p_{X_2^k|W_2X_1^{k-1}S_kS_{k+1}^n}p_{Y^{k-1}|X_1^{k-1}X_2^{k-1}}. \end{IEEEeqnarray*} This establishes the Markov relation \begin{IEEEeqnarray}{C}\label{eq:markov-Uk} X_{2,k}U_k{-\hspace{-.635em}\circ\hspace{.4em}} S_kV_k{-\hspace{-.635em}\circ\hspace{.4em}} X_{1,k}. \end{IEEEeqnarray} Next, let $J$ be a random variable uniformly distributed over $\{1,\ldots,n\}$ and independent of $(X_{1,k},X_{2,k},V_k,U_k,S_k,Y_k) \ , \ k=1,\ldots,n$, and define \begin{IEEEeqnarray*}{l} (S,X_1,X_2,V,U,Y)=(S_J,X_{1,J},X_{2,J},V_J,U_J,Y_J). \end{IEEEeqnarray*} We may express (\ref{eq:R1upr1}) as follows: \begin{IEEEeqnarray}{l} R_1\leq \frac{1}{n}\sum_{k=1}^n H(X_{1,k}|V_{k})=H(X_1|V,J)=H(X_1|\bar{V}), \label{eq:R1upr11} \end{IEEEeqnarray} where in the last step we have defined $\bar{V}\triangleq (V,J)$.
Similarly, we may express (\ref{eq:R2upr1}) as follows: \begin{IEEEeqnarray}{rCl} R_2 & \leq & \frac{1}{n} \sum_{k=1}^n \left[ I(U_k;Y_k|V_{k}X_{1,k})-I(U_k;S_k|V_k)\right] \nonumber \\ & = & I(U;Y|V,X_1,J)-I(U;S|V,J) \nonumber \\ & = & I(U;Y|\bar{V},X_1)-I(U;S|\bar{V}) . \label{eq:R2upr11} \end{IEEEeqnarray} Finally, we may express (\ref{eq:sumRupr1}) as follows: \begin{IEEEeqnarray}{rCl} R_1+R_2 & \leq & \frac{1}{n} \sum_{k=1}^n \left[I(V_kU_kX_{1,k};Y_k)-I(U_k;S_k|V_k)\right] \nonumber \\ & = & I(V,U,X_1;Y|J)-I(U;S|V,J) \nonumber \\ & = & I(V,J,U,X_1;Y)-I(J;Y)-I(U;S|V,J) \nonumber \\ & \leq & I(V,J,U,X_1;Y)-I(U;S|V,J) \nonumber \\ & = & I(\bar{V},U,X_1;Y)-I(U;S|\bar{V}). \label{eq:sumRupr11} \end{IEEEeqnarray} This establishes the single-letter expression for the achievable rate region (\ref{eq:r11}). The convexity of the rate region (\ref{eq:r11}) can be shown in a similar way. The inequalities (\ref{eq:R1upr1}), (\ref{eq:R2upr1}), (\ref{eq:sumRupr1}) combined with their respective single-letter expressions and the Markov relation (\ref{eq:markov-Uk}) establish the converse part of Theorem~\ref{th:thm1}. \subsection{Bounds on alphabet sizes in Theorem~\ref{th:thm1}} \label{sec:ba} \vskip.1truein We consider the alphabet sizes of $U$ and $V$. Specifically, let $P_{X_1,X_2,S,V,U}$ be a distribution satisfying the Markov conditions required in \eqref{eq:jointlawp}. For convenience, $P_{X_1,X_2,S,U|V}(x_1,x_2,s,u|v)$ will be denoted in the sequel as $P(\cdot|v)$. We would like to bound the sizes of the alphabets ${\cal V}$ and ${\cal U}$, while preserving the region given in \eqref{eq:r11}.
For a generic distribution $Q$ on ${\cal X}_1\times{\cal X}_2\times{\cal S}\times{\cal U}$, define the functionals \begin{subequations} \label{eq:ba1} \begin{IEEEeqnarray}{l} q_{x_1,x_2,s}(Q) = \sum_u Q(x_1,x_2,s,u),\ \ \ (x_1,x_2,s)\in{\cal X}_1\times{\cal X}_2 \times{\cal S} \nonumber \\ \label{eq:ba1_1}\\ J_1(Q) = \sum_{x_1,x_2,s,u} Q(x_1,x_2,s,u)\log \frac{1}{\sum_{x_2',s',u'}Q(x_1,x_2',s',u')} \nonumber \\ \label{eq:ba1_2}\\ J_2(Q) = \sum_{x_1,x_2,s,u} Q(x_1,x_2,s,u)\log \frac{1}{\sum_{x_1',x_2',u'}Q(x_1',x_2',s,u')} \nonumber \\ \label{eq:ba1_3}\\ J_3(Q) = \sum_{x_1,x_2,s,u} Q(x_1,x_2,s,u)\log \frac{\sum_{x_1',x_2',s'}Q(x_1',x_2',s',u)} {\sum_{x_1',x_2'}Q(x_1',x_2',s,u)} \nonumber \\ \label{eq:ba1_4}\\ J_4(Q) = \sum_{x_1,x_2,s,u,y} Q(x_1,x_2,s,u)P_{Y|X_1,X_2,S}(y|x_1,x_2,s)\nonumber\\ \quad \cdot\log \frac{\sum_{x_2',s',u'}Q(x_1,x_2',s',u')}{\sum_{x_2',s',u'}Q(x_1,x_2',s',u') P_{Y|X_1,X_2,S}(y|x_1,x_2',s')} \nonumber \\ \label{eq:ba1_5}\\ J_5(Q) = \sum_{x_1,x_2,s,u,y} Q(x_1,x_2,s,u)P_{Y|X_1,X_2,S}(y|x_1,x_2,s)\nonumber\\ \quad \cdot\log \frac{\sum_{x_2',s'}Q(x_1,x_2',s',u)}{\sum_{x_2',s'}Q(x_1,x_2',s',u) P_{Y|X_1,X_2,S}(y|x_1,x_2',s')} \nonumber \\ \label{eq:ba1_6} \end{IEEEeqnarray} \end{subequations} Substituting the distribution $P_{X_1,X_2,S,U|V}(\cdot|v)$ in the functionals, and averaging them with respect to $v$, we obtain \begin{subequations} \label{eq:ba2} \begin{IEEEeqnarray}{rCl} \sum_v P_V(v)q_{x_1,x_2,s}(P(\cdot|v)) &=& P_{X_1,X_2,S}(x_1,x_2,s) \label{eq:ba2_1}\\ \sum_v P_V(v)J_1(P(\cdot|v)) &=& H(X_1|V)\label{eq:ba2_2}\\ \sum_v P_V(v)J_2(P(\cdot|v)) &=& H(S|V)\label{eq:ba2_3}\\ \sum_v P_V(v)J_3(P(\cdot|v)) &=& H(S|U,V)\label{eq:ba2_4}\\ \sum_v P_V(v)J_4(P(\cdot|v)) &=& H(Y|V,X_1)\label{eq:ba2_5}\\ \sum_v P_V(v)J_5(P(\cdot|v)) &=& H(Y|V,U,X_1)\label{eq:ba2_6}. \end{IEEEeqnarray} \end{subequations} Observe that preserving the values of the right-hand sides of \eqref{eq:ba2_1}--\eqref{eq:ba2_6} guarantees that we also preserve the region \eqref{eq:r11}.
We used here the Markov structure $Y {-\hspace{-.635em}\circ\hspace{.4em}} (X_1,X_2,S) {-\hspace{-.635em}\circ\hspace{.4em}} (V,U)$, and the fact that if we preserve the joint distribution of $X_1,X_2,S$, the distribution of $Y$ is also preserved. By the Support Lemma~\cite{CKBook}, we can restrict the alphabet of $V$ to: \begin{equation} |{\cal V}| \leq |{\cal X}_1| |{\cal X}_2| |{\cal S}| + 5. \label{eq:ba3} \end{equation} Note that this bound is independent of the alphabet of $U$. We now fix some $V$ with bounded alphabet as above, and proceed to bound the alphabet of $U$. Let $\tilde{Q}$ be a generic distribution on ${\cal X}_1\times{\cal X}_2\times{\cal S}\times{\cal V}$, and define the functionals \begin{subequations} \label{eq:ba4} \begin{IEEEeqnarray}{l} \tilde{q}_{x_1,x_2,s,v}(\tilde{Q}) = \tilde{Q}(x_1,x_2,s,v)\label{eq:ba4_1}\\ \tilde{J}_1(\tilde{Q}) = \sum_{x_1,x_2,s,v}\tilde{Q}(x_1,x_2,s,v)\nonumber\\ \ \ \ \ \quad \cdot\log\frac{\sum_{x_1',x_2',s'}\tilde{Q}(x_1',x_2',s',v)} {\sum_{x_1',x_2'}\tilde{Q}(x_1',x_2',s,v)} \nonumber \\ \label{eq:ba4_2}\\ \tilde{J}_2(\tilde{Q}) = \sum_{x_1,x_2,s,v,y}\tilde{Q}(x_1,x_2,s,v)P_{Y|X_1,X_2,S}(y|x_1,x_2,s)\nonumber\\ \quad \cdot\log\frac{\sum_{x_2',s'}\tilde{Q}(x_1,x_2',s',v)}{\sum_{x_2',s'}\tilde{Q}(x_1,x_2',s',v)P_{Y|X_1,X_2,S}(y|x_1,x_2',s')} \nonumber \\ \label{eq:ba4_3} \end{IEEEeqnarray} \end{subequations} Since \begin{IEEEeqnarray*}{rCl} P_{Y|V,X_1}(y|v,x_1) &=& \sum_{x_2,s}P_{Y|X_1,X_2,S}(y|x_1,x_2,s)\\ & & \cdot\frac{P_{X_1,X_2,S,V}(x_1,x_2,s,v)} {\sum_{x_2',s'}P_{X_1,X_2,S,V}(x_1,x_2',s',v)}, \end{IEEEeqnarray*} in order to preserve the value of $H(Y|V,X_1)$, it suffices to preserve the joint distribution of $X_1,X_2,S,V$. For convenience, we use in the sequel the shorthand notation $P(\cdot|u)=P_{X_1,X_2,S,V|U}(\cdot|u)$.
Substituting the distribution $P_{X_1,X_2,S,V|U}(\cdot|u)$ in the functionals~\eqref{eq:ba4} and averaging over $u$, we obtain \begin{subequations} \label{eq:ba5} \begin{IEEEeqnarray}{rCl} \sum_u P_U(u)\tilde{q}_{x_1,x_2,s,v}(P(\cdot|u)) &=& P_{X_1,X_2,S,V}(x_1,x_2,s,v) \nonumber \\ \label{eq:ba5_1}\\ \sum_u P_U(u)\tilde{J}_1(P(\cdot|u)) &=& H(S|V,U) \label{eq:ba5_2}\\ \sum_u P_U(u)\tilde{J}_2(P(\cdot|u)) &=& H(Y|V,U,X_1) \label{eq:ba5_3} \end{IEEEeqnarray} \end{subequations} Applying the Support Lemma again, we see that the alphabet size of $U$ can be bounded as \begin{equation} |{\cal U}|\leq |{\cal X}_1| |{\cal X}_2| |{\cal S}| |{\cal V}| + 2. \end{equation} This completes the proof of the bounds on the alphabet sizes. \subsection{Proof of Theorem~\ref{th:thm2}} The achievability part follows similarly to that of Theorem~\ref{th:thm1}, the only difference being in the way the codeword $\mbox{\boldmath $x$}_2({\mbox{\boldmath $s$}}^{(b)},w_{2}^{(b)},\omega_0^{(b)})$ is generated. Here the second encoder generates the codeword $\mbox{\boldmath $x$}_2({\mbox{\boldmath $s$}}^{(b)},w_{2}^{(b)},\omega_0^{(b)})$ by drawing its components i.i.d. conditionally on the quadruple $({\mbox{\boldmath $s$}}^{(b)},{\mbox{\boldmath $u$}}(w_{2}^{(b)},\jmath_0,\omega_0^{(b)}),{\mbox{\boldmath $v$}}(\omega_0^{(b)}), {\mbox{\boldmath $x$}}_1^{(b)})$, where the conditional law is induced by (\ref{eq:jointlawpc}). \medskip For the converse, consider an $(e^{nR_1},e^{nR_2},n)$ code with average block error probability $P_e^{(n)}$, and a law on ${\cal W}_1\times{\cal W}_2\times{\cal X}_1^n\times{\cal X}_2^n\times{\cal Y}^n\times{\cal S}^n$ given by \begin{IEEEeqnarray}{l} p_{W_1W_2X_1^nX_2^nS^nY^n} \nonumber \\ =p_{W_1}p_{W_2}I_{\{X_1^n=f_1(W_1)\}}p_{X_2^n|W_1W_2S^n} \prod_{k=1}^np_{Y_k|X_{1,k}X_{2,k}S_k}.
\nonumber \\ \label{eq:convlawc} \end{IEEEeqnarray} \medskip The Fano inequalities for the causal cribbing case yield the same inequalities (\ref{eq:R1upr1}), (\ref{eq:R2upr1}), and (\ref{eq:sumRupr1}). \medskip It remains to verify the joint law of the auxiliary random variables. By \eqref{eq:convlawc} and the encoding rule (\ref{eq:enc2c}) we may write \begin{IEEEeqnarray*}{l} p_{W_1W_2X_1^{k-1}X_{1,k}S^{k-1}S_kS_{k+1}^nX_2^kY^{k-1}}= \nonumber \\ \ \ \ p_{W_1}p_{X_1^{k-1}|W_1}P_{X_{1,k}|W_1X_1^{k-1}}p_{S^{k-1}}p_{S_k}p_{S_{k+1}^n} \nonumber \\ \ \ \ \quad \cdot p_{W_2}p_{X_2^{k-1}|W_2X_1^{k-1}S^n} p_{X_{2,k}|W_2X_1^{k-1}X_{1,k}X_2^{k-1}S^n} \nonumber \\ \ \ \ \quad \cdot p_{Y^{k-1}|X_1^{k-1}X_2^{k-1}S^{k-1}} \end{IEEEeqnarray*} Summing this joint law over $w_1$ we obtain \begin{IEEEeqnarray*}{l} \sum_{w_1} p_{W_1W_2X_1^{k-1}X_{1,k}S^{k-1}S_kS_{k+1}^nX_2^kY^{k-1}} \nonumber \\ = p_{W_2X_1^{k-1}X_{1,k}S^{k-1}S_kS_{k+1}^nX_2^kY^{k-1}} \nonumber \\ =p_{X_1^{k-1}}P_{X_{1,k}|X_1^{k-1}}p_{S^{k-1}}p_{S_k}p_{S_{k+1}^n} \nonumber \\ \ \ \ \quad \cdot p_{W_2}p_{X_2^{k-1}|W_2X_1^{k-1}S^n} p_{X_{2,k}|W_2X_1^{k-1}X_{1,k}X_2^{k-1}S^n} \nonumber \\ \ \ \ \quad \cdot p_{Y^{k-1}|X_1^{k-1}X_2^{k-1}S^{k-1}} \end{IEEEeqnarray*} Summing this joint law over all possible sub-sequences $(s_1,s_2,\ldots,s_{k-1})$ we obtain \begin{IEEEeqnarray*}{l} \sum_{(s_1,s_2,\ldots,s_{k-1})} p_{W_2X_1^{k-1}X_{1,k}S^{k-1}S_kS_{k+1}^nX_2^kY^{k-1}} \nonumber \\ = p_{W_2X_1^{k-1}X_{1,k}S_kS_{k+1}^nX_2^kY^{k-1}} \nonumber \\ =p_{X_1^{k-1}}P_{X_{1,k}|X_1^{k-1}}p_{S_k}p_{S_{k+1}^n} \nonumber \\ \ \ \quad \cdot p_{W_2}p_{X_2^{k-1}|W_2X_1^{k-1}S_kS_{k+1}^n} p_{X_{2,k}|W_2X_1^{k-1}X_{1,k}X_2^{k-1}S_kS_{k+1}^n} \nonumber \\ \ \ \ \quad \cdot p_{Y^{k-1}|X_1^{k-1}X_2^{k-1}} \end{IEEEeqnarray*} This establishes the Markov relation \begin{IEEEeqnarray}{C}\label{eq:markov-Ukc} U_k{-\hspace{-.635em}\circ\hspace{.4em}} S_kV_k{-\hspace{-.635em}\circ\hspace{.4em}} X_{1,k} , \end{IEEEeqnarray} as well as the fact that 
conditionally on $V_kU_kS_kX_{1,k}$ the r.v. $X_{2,k}$ is independent of the rest. \begin{appendix} \subsection{Strong Typicality}\label{sec:st} Let $\left\{X^{(1)},X^{(2)},\ldots,X^{(k)}\right\}$ denote a finite collection of discrete random variables with some joint distribution $P\left(x^{(1)},x^{(2)},\ldots,x^{(k)}\right)$ with $\left(x^{(1)},x^{(2)},\ldots,x^{(k)}\right)\in{\cal X}^{(1)}\times{\cal X}^{(2)}\times\ldots\times{\cal X}^{(k)}$. Let $S$ denote an ordered nonempty subset of these random variables and consider $n$ independent copies of $S$. Thus, with $\mbox{\boldmath $S$}\triangleq(S_1,S_2,\ldots,S_n)$, \begin{eqnarray*} \Pr\{\mbox{\boldmath $S$}=\mbox{\boldmath $s$}\}=\prod_{j=1}^n \Pr\{S_j=s_j\}. \end{eqnarray*} Let $N(s;\mbox{\boldmath $s$})$ be the number of indices $j\in\{1,2,\ldots,n\}$ such that $S_{j}=s$. By the law of large numbers, for any subset $S$ of random variables and for all $s\in S$, \begin{eqnarray} \frac{1}{n}N(s;\mbox{\boldmath $s$})\rightarrow P(s), \label{eq:conv1} \end{eqnarray} as well as \begin{eqnarray} -\frac{1}{n}\ln P(s_{1},s_{2},\ldots,s_{n})=-\frac{1}{n}\sum_{j=1}^{n}\ln P(s_{j})\rightarrow H(S). \label{eq:conv2} \end{eqnarray} The convergence in (\ref{eq:conv1}) and (\ref{eq:conv2}) takes place simultaneously with probability one for all nonempty subsets $S$ \cite{cov}. 
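The two convergences above lend themselves to a quick numerical illustration. The following sketch (using a hypothetical Bernoulli source, not part of the original development) draws $n$ i.i.d. samples and compares the empirical frequency $N(1;\mbox{\boldmath $s$})/n$ and the normalized negative log-likelihood with $P(1)$ and $H(S)$, respectively.

```python
import math
import random

def empirical_check(p=0.3, n=200000, seed=1):
    """Draw n i.i.d. Bernoulli(p) samples and return the empirical frequency
    of 1's, the normalized negative log-likelihood -(1/n) sum_j ln P(s_j),
    and the entropy H(S) in nats; by the law of large numbers the first two
    quantities should be close to p and H(S), respectively."""
    rng = random.Random(seed)
    s = [1 if rng.random() < p else 0 for _ in range(n)]
    freq1 = sum(s) / n
    loglik = -sum(math.log(p if x == 1 else 1 - p) for x in s) / n
    h = -p * math.log(p) - (1 - p) * math.log(1 - p)
    return freq1, loglik, h

freq1, loglik, h = empirical_check()
print(freq1, loglik, h)
```

Both deviations shrink at the usual $O(1/\sqrt{n})$ rate, in line with the almost-sure convergence stated above.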
\begin{definition}The set ${\cal A}_{\epsilon}$ of $\epsilon$-strongly typical $n$-sequences is defined by (see \cite[Chapters 3, 12, 13]{cov}) \begin{eqnarray*} {\cal A}_{\epsilon} & \triangleq & {\cal A}_{\epsilon}\left(X^{(1)},X^{(2)},\ldots,X^{(k)}\right) \\ &\triangleq & \Bigg\{\left(\mbox{\boldmath $x$}^{(1)},\mbox{\boldmath $x$}^{(2)},\ldots,\mbox{\boldmath $x$}^{(k)}\right): \\ & & \quad \Big|\frac{1}{n}N\left(x^{(1)},x^{(2)},\ldots,x^{(k)}; \mbox{\boldmath $x$}^{(1)},\mbox{\boldmath $x$}^{(2)},\ldots,\mbox{\boldmath $x$}^{(k)}\right) \\ & &\quad \qquad - P\left(x^{(1)},x^{(2)},\ldots,x^{(k)}\right)\Big| \\ & & \quad <\frac{\epsilon}{\| {\cal X}^{(1)}\times{\cal X}^{(2)}\times\ldots\times{\cal X}^{(k)}\|}, \\ & & \quad \forall\left(\mbox{\boldmath $x$}^{(1)},\mbox{\boldmath $x$}^{(2)},\ldots,\mbox{\boldmath $x$}^{(k)}\right)\in {\cal X}^{(1)}\times \ldots\times{\cal X}^{(k)} \Bigg\}, \end{eqnarray*} where $\|{\cal X}\|$ is the cardinality of the set ${\cal X}$.\\ Let ${\cal A}_{\epsilon}(S)$ be defined similarly to ${\cal A}_{\epsilon}$ but now with constraints corresponding to all nonempty subsets of $S$. \end{definition} We recall now two basic lemmas (for the proofs we refer to \cite{cov}). \begin{lemma} \label{th:lm1} For any $\epsilon>0$ the following statements hold for every integer $n \geq 1$: \begin{enumerate} \item If $\mbox{\boldmath $s$}\in{\cal A}_{\epsilon}(S)$, then $\exp\left(-n(H(S)+\epsilon)\right)\leq \Pr\{\mbox{\boldmath $S$}=\mbox{\boldmath $s$}\}\leq \exp\left(-n(H(S)-\epsilon)\right)$.
\item If $S_1,S_2\subseteq \left\{X_1,X_2,\ldots,X_k\right\}$ and $(\mbox{\boldmath $s$}_1,\mbox{\boldmath $s$}_2)\in{\cal A}_{\epsilon}(S_1\cup S_2)$, then \begin{eqnarray*}\lefteqn{ \exp\left(-n(H(S_1|S_2)+2\epsilon)\right)\leq \Pr\{\mbox{\boldmath $S_1$}=\mbox{\boldmath $s_1$}|\mbox{\boldmath $S_2$}=\mbox{\boldmath $s_2$}\}} \\ & & \leq \exp\left(-n(H(S_1|S_2)-2\epsilon)\right) .\hspace{2cm} \end{eqnarray*} Moreover, the following statements hold for every sufficiently large $n$: \item $\Pr\left\{{\cal A}_{\epsilon}(S)\right\}\geq 1-\epsilon$, \item $(1-\epsilon)\exp(n(H(S)-\epsilon))\leq \left\|{\cal A}_{\epsilon}(S)\right\|\leq \exp(n(H(S)+\epsilon)).$ \end{enumerate} \end{lemma} \vskip.1truein \begin{lemma}\label{th:lm2} Let the discrete random variables $X,Y,Z$ have joint distribution $P_{X,Y,Z}(x,y,z)$. Let $X'$ and $Y'$ be conditionally independent given $Z$, with the marginal laws \begin{eqnarray*} P_{X'|Z}(x|z) & = & \sum_{y} P_{X,Y,Z}(x,y,z)/P_{Z}(z) , \\ P_{Y'|Z}(y|z) & = & \sum_{x}P_{X,Y,Z}(x,y,z)/P_{Z}(z) . \end{eqnarray*} Let $(\mbox{\boldmath $X$},\mbox{\boldmath $Y$},\mbox{\boldmath $Z$})\sim\prod_{k=1}^n P_{X,Y,Z}(x_k,y_k,z_k)$ and $(\mbox{\boldmath $X$}',\mbox{\boldmath $Y$}', \mbox{\boldmath $Z$})\sim\prod_{k=1}^n P_{X'|Z}(x_k|z_k) P_{Y'|Z}(y_k|z_k) P_{Z}(z_k)$. Then \begin{equation*} \Pr\left\{(\mbox{\boldmath $X$}',\mbox{\boldmath $Y$}',\mbox{\boldmath $Z$})\in{\cal A}_{\epsilon}(X,Y,Z)\right\} \leq\exp(-n[I(X;Y|Z)-\epsilon]). \end{equation*} \end{lemma} \end{appendix}
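As a numerical sanity check of the cardinality bounds in item 4 of Lemma~\ref{th:lm1} (with hypothetical parameters, and entropies in nats), the $\epsilon$-strongly typical sequences of a binary source can be counted exactly:

```python
import math
from fractions import Fraction

def typical_count(n=20, p=Fraction(3, 10), eps=Fraction(1, 5)):
    """Exactly count the length-n binary sequences that are eps-strongly
    typical for an i.i.d. Bernoulli(p) source (alphabet size 2, so each
    empirical frequency must be within eps/2 of the true probability), and
    return the count next to the lower and upper bounds of Lemma 1, item 4.
    Exact rational arithmetic avoids floating-point boundary issues."""
    count = sum(math.comb(n, k) for k in range(n + 1)
                if abs(Fraction(k, n) - p) < eps / 2)
    pf, ef = float(p), float(eps)
    h = -pf * math.log(pf) - (1 - pf) * math.log(1 - pf)   # H(S) in nats
    lower = (1 - ef) * math.exp(n * (h - ef))
    upper = math.exp(n * (h + ef))
    return lower, count, upper

lower, count, upper = typical_count()
print(lower, count, upper)
```

Although the lemma only asserts the sandwich for sufficiently large $n$, it already holds for these particular parameters ($n=20$, $p=0.3$, $\epsilon=0.2$).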
\section{Introduction} Attempts to apply mathematical methods to the extraction of information from data can be traced back to the work of Boscovich \cite{Bosc57}, Gauss \cite{Gaus09}, Laplace \cite{Lapl89}, and Legendre \cite{Lege05}. Thus, in connection with the problem of estimating parameters from noisy observations, Boscovich and Laplace invented the least-deviations data fitting method, while Legendre and Gauss invented the least-squares data fitting method. On the algorithmic side, the gradient method was invented by Cauchy \cite{Cauc47} to solve a data fitting problem in astronomy, and more or less heuristic methods have been used from then on. The early work involving provably convergent numerical solution methods was focused mostly on quadratic minimization problems or linear programming techniques, e.g., \cite{Artz79,Herm73,Hunt70,Twom65,Wagn59}. Nowadays, general convex optimization methods have penetrated virtually all branches of data science \cite{Bach12,Byrn14,Cham16,Aiep96,Banf11,Glow16,Wrig12,Theo20}. In fact, the optimization and data science communities have never been closer, which greatly facilitates technology transfers towards applications. Reciprocally, many of the recent advances in convex optimization algorithms have been motivated by data processing problems in signal recovery, inverse problems, or machine learning. At the same time, the design and the convergence analysis of some of the most potent splitting methods in highly structured or large-scale optimization are based on concepts that are not found in the traditional optimization toolbox but reach deeper into nonlinear analysis. Furthermore, an increasing number of problem formulations go beyond optimization in the sense that their solutions are not optimal in the classical sense of minimizing a function but, rather, satisfy more general notions of equilibrium.
Among the formulations that fall outside of the realm of standard minimization methods, let us mention variational inequality and monotone inclusion models, game theoretic approaches, neural network structures, and plug-and-play methods. Given the abundance of activity described above and the increasingly complex formulations of some data processing problems and their solution methods, it is essential to identify general structures and principles in order to simplify and clarify the state of the art. It is the objective of the present paper to promote the viewpoint that fixed point theory constitutes an ideal technology towards this goal. Besides its unifying nature, the fixed point framework offers several advantages. On the algorithmic front, it leads to powerful convergence principles that demystify the design and the asymptotic analysis of iterative methods. Furthermore, fixed point methods can be implemented using stochastic perturbations, as well as block-coordinate or block-iterative strategies, which reduce the computational load and memory requirements of the iterations. Historically, one of the first uses of fixed point theory in signal recovery is found in the bandlimited reconstruction method of \cite{Land61}, which is based on the iterative Banach-Picard contraction process \begin{equation} \label{e:emile} x_{n+1}=Tx_n, \end{equation} where the operator $T$ has Lipschitz constant $\delta<1$. The importance of dealing with the more general class of nonexpansive operators, i.e., those with Lipschitz constant $\delta=1$, was emphasized by Youla in \cite{Youl78} and \cite{Youl82}; see also \cite{Scha81,Tomv81,Wile78}. Since then, many problems in data science have been modeled and solved using nonexpansive operator theory; see for instance \cite{Fixe11,Byrn14,Aiep96,Smds20,Smms05,Daub04,Mari90,Pott93,Star87,Theo11}. The outline of the paper is as follows.
In order to make the paper as self-contained as possible, we present in Section~\ref{sec:2} the essential tools and results from nonlinear analysis on which fixed point approaches are grounded. These include notions of convex analysis, monotone operator theory, and averaged operator theory. Section~\ref{sec:3} provides an overview of basic fixed point principles and methods. Section~\ref{sec:4} addresses the broad class of monotone inclusion problems and their fixed point modeling. Using the tools of Section~\ref{sec:3}, various splitting strategies are described, as well as block-iterative and block-coordinate algorithms. Section~\ref{sec:5} discusses applications of splitting methods to a large panel of techniques for solving structured convex optimization problems. Moving beyond traditional optimization, algorithms for Nash equilibria are investigated in Section~\ref{sec:6}. Section~\ref{sec:7} shows how fixed point strategies can be applied to four additional categories of data science problems that have no underlying minimization interpretation. Some brief conclusions are drawn in Section~\ref{sec:8}. For simplicity, we have adopted a Euclidean space setting. However, most results remain valid in general Hilbert spaces up to technical adjustments. \section{Notation and mathematical foundations} \label{sec:2} We review the basic tools and principles from nonlinear analysis that will be used throughout the paper. Unless otherwise stated, the material of this section can be found in \cite{Livre1}; for convex analysis see also \cite{Rock70}. \subsection{Notation} \label{sec:not} Throughout, $\mathcal H$, $\mathcal G$, $(\mathcal H_i)_{1\leq i\leq m}$, and $(\mathcal G_k)_{1\leq k\leq q}$ are Euclidean spaces. 
We denote by $2^\mathcal H$ the collection of all subsets of $\mathcal H$ and by $\ensuremath{\boldsymbol{\mathcal H}}=\mathcal H_1\times\cdots\times\mathcal H_m$ and $\ensuremath{\boldsymbol{\mathcal G}}=\mathcal G_1\times\cdots\times\mathcal G_q$ the standard Euclidean product spaces. A generic point in $\ensuremath{\boldsymbol{\mathcal H}}$ is denoted by $\boldsymbol{x}=(x_i)_{1\leq i\leq m}$. The scalar product of a Euclidean space is denoted by $\scal{\cdot}{\cdot}$ and the associated norm by $\|\cdot\|$. The adjoint of a linear operator $L$ is denoted by $L^*$. Let $C$ be a subset of $\mathcal H$. Then the \emph{distance function} to $C$ is $d_C\colon x\mapsto\inf_{y\in C}\|x-y\|$ and the \emph{relative interior} of $C$, denoted by $\ensuremath{\operatorname{ri}} C$, is its interior relative to its affine hull. \subsection{Convex analysis} \label{sec:21} The central notion in convex analysis is that of a convex set: a subset $C$ of $\mathcal H$ is \emph{convex} if it contains all the line segments with end points in the set, that is, \begin{equation} \label{e:convex1} (\forall x\in C)(\forall y\in C)(\forall\alpha\in\zeroun\,)\quad \alpha x+(1-\alpha)y\in C. \end{equation} The projection theorem is one of the most important results of convex analysis. \begin{theorem}[projection theorem] \label{t:11} Let $C$ be a nonempty closed convex subset of $\mathcal H$ and let $x\in\mathcal H$. Then there exists a unique point $\ensuremath{\mathrm{proj}}_Cx\in C$, called the \emph{projection} of $x$ onto $C$, such that $\|x-\ensuremath{\mathrm{proj}}_Cx\|=d_C(x)$. In addition, for every $p\in\mathcal H$, \begin{equation} \label{e:proj} p=\ensuremath{\mathrm{proj}}_Cx\quad\Leftrightarrow\quad \begin{cases} p\in C\\ (\forall y\in C)\;\scal{y-p}{x-p}\leq 0. \end{cases} \end{equation} \end{theorem} Convexity for functions is inherited from convexity for sets as follows. Consider a function $f\colon\mathcal H\to\ensuremath{\left]-\infty,+\infty\right]}$. 
Then $f$ is \emph{convex} if its \emph{epigraph} \begin{equation} \label{e:epi} \ensuremath{\mathrm{epi}\,} f=\menge{(x,\xi)\in\mathcal H\times\ensuremath{\mathbb R}}{f(x)\leq\xi} \end{equation} is a convex set. This is equivalent to requiring that \begin{multline} \label{e:convex} (\forall x\in\mathcal H)(\forall y\in\mathcal H)(\forall\alpha\in\zeroun\,) \quad\\ f\big(\alpha x+(1-\alpha) y\big)\leq\alpha f(x)+(1-\alpha)f(y). \end{multline} If $\ensuremath{\mathrm{epi}\,} f$ is closed, then $f$ is \emph{lower semicontinuous} in the sense that, for every sequence $(x_n)_{n\in\ensuremath{\mathbb N}}$ in $\mathcal H$ and $x\in\mathcal H$, \begin{equation} \label{e:lsc} x_n\to x\quad\Rightarrow\quad f(x)\leq\varliminf f(x_n). \end{equation} Finally, we say that $f\colon\mathcal H\to\ensuremath{\left]-\infty,+\infty\right]}$ is \emph{proper} if $\ensuremath{\mathrm{epi}\,} f\neq\ensuremath{\varnothing}$, which is equivalent to \begin{equation} \label{e:dom} \ensuremath{\mathrm{dom}\,} f=\menge{x\in\mathcal H}{f(x)<\ensuremath{+\infty}}\neq\ensuremath{\varnothing}. \end{equation} The class of functions $f\colon\mathcal H\to\ensuremath{\left]-\infty,+\infty\right]}$ which are proper, lower semicontinuous, and convex is denoted by $\Gamma_0(\mathcal H)$. The following result is due to Moreau \cite{Mor62b}. \begin{theorem}[proximation theorem] \label{t:12} Let $f\in\Gamma_0(\mathcal H)$ and let $x\in\mathcal H$. Then there exists a unique point $\ensuremath{\mathrm{prox}}_fx\in\mathcal H$, called the \emph{proximal point} of $x$ relative to $f$, such that \begin{multline} \label{e:prox} f\big(\ensuremath{\mathrm{prox}}_fx\big)+\frac12\|x-\ensuremath{\mathrm{prox}}_fx\|^2=\\ \min_{y\in\mathcal H}\bigg(f(y)+\frac{1}{2}\|x-y\|^2\bigg). \end{multline} In addition, for every $p\in\mathcal H$, \begin{multline} \label{e:moreau1} p=\ensuremath{\mathrm{prox}}_fx\;\Leftrightarrow\\ (\forall y\in\mathcal H)\;\scal{y-p}{x-p}+f(p)\leq f(y). 
\end{multline} \end{theorem} The above theorem defines an operator $\ensuremath{\mathrm{prox}}_f$ called the \emph{proximity operator} of $f$ (see \cite{Banf11} for a tutorial, and \cite[Chapter~24]{Livre1} and \cite{Comb18} for a detailed account with various properties). \begin{figure} \scalebox{0.53} { \begin{pspicture}(-6.0,-2.0)(11.5,11.1) \psplot[linewidth=0.06cm,linestyle=solid,algebraic,% linecolor=dbrown]{-2.32}{6.3}{abs(x/2-1)+(x/2-1)^2+3} \psplot[linewidth=0.05cm,linestyle=solid,algebraic,% linecolor=red]{-2.3}{6.7}{1.2*x+1.22} \psplot[linewidth=0.05cm,linestyle=solid,algebraic,% linecolor=blue]{-3.0}{6.6}{x/2+2} \psline[linewidth=0.05cm,arrowsize=3.0mm,% linestyle=solid,linecolor=dgreen]{->}% (3.10,3.85)(3.10,4.9) \psline[linecolor=black,linewidth=0.045cm,linestyle=dashed]{-}% (2.00,0.0)(2.00,3) \psline[linecolor=black,linewidth=0.045cm,linestyle=dashed]{-}% (-1,3.0)(2.00,3) \psline[linewidth=0.05cm,arrowsize=0.11cm 4.0,% arrowlength=1.4,arrowinset=0.4]{->}(-4,0)(7.2,0) \psline[linewidth=0.05cm,arrowsize=0.11cm 4.0,% arrowlength=1.4,arrowinset=0.4]{->}(-1,-1)(-1,10.2) \rput(5.2,5.5){\LARGE \color{dbrown}$\ensuremath{\mathrm{gra}\,} f$} \rput(-2.7,1.5){\LARGE \color{blue}$\ensuremath{\mathrm{gra}\,}{{m_{x,w}}}$} \rput(7.7,8.6){\LARGE \color{red}$\ensuremath{\mathrm{gra}\,}{{\scal{\cdot}{u}}}$} \rput(2.00,-0.4){\LARGE $x$} \rput(1.70,8.2){\LARGE \color{dbrown}$\ensuremath{\mathrm{epi}\,} f$} \rput(-1.7,3.0){\LARGE \color{black}$f(x)$} \rput(-1,10.5){\LARGE $\ensuremath{\mathbb R}$} \rput(7.5,0){\LARGE $\mathcal H$} \rput(2.3,5.2){\LARGE \color{dgreen}{${f^*(u)}$}} \end{pspicture} } \caption{The graph of a function $f\in\Gamma_0(\mathcal H)$ is shown in brown. The area above the graph is the closed convex set $\ensuremath{\mathrm{epi}\,} f$ of \eqref{e:epi}. Let $u\in\mathcal H$ and let the red line be the graph of the linear function $\scal{\cdot}{u}$. 
In view of \eqref{e:conj}, the value of $f^*(u)$ (in green) is the maximum signed difference between the red line and the brown line. Now fix $x\in\mathcal H$ and $w\in\partial f(x)$. Then, by \eqref{e:subdiff}, the affine function $m_{x,w}\colon y\mapsto\scal{y-x}{w}+f(x)$ satisfies $m_{x,w}\leq f$ and it coincides with $f$ at $x$. Its graph is represented in blue. Every subgradient $w$ gives such an affine minorant. } \label{fig:8} \end{figure} Now let $C$ be a nonempty closed convex subset of $\mathcal H$. Then its \emph{indicator function} $\iota_C$, defined by \begin{equation} \label{e:iota} \iota_C\colon\mathcal H\to\ensuremath{\left]-\infty,+\infty\right]}\colon x\mapsto \begin{cases} 0,&\text{if}\;\:x\in C;\\ \ensuremath{+\infty},&\text{if}\;\:x\notin C, \end{cases} \end{equation} lies in $\Gamma_0(\mathcal H)$ and it follows from \eqref{e:proj} and \eqref{e:moreau1} that \begin{equation} \label{e:98} \ensuremath{\mathrm{prox}}_{\iota_C}=\ensuremath{\mathrm{proj}}_C. \end{equation} This shows that Theorem~\ref{t:12} generalizes Theorem~\ref{t:11}. Let us now introduce basic convex analytical tools (see Fig.~\ref{fig:8}). The \emph{conjugate} of $f\colon\mathcal H\to\ensuremath{\left]-\infty,+\infty\right]}$ is \begin{equation} \label{e:conj} f^*\colon\mathcal H\to\ensuremath{\left[-\infty,+\infty\right]}\colon u \mapsto\sup_{x\in\mathcal H}\big(\scal{x}{u}-f(x)\big). \end{equation} The \emph{subdifferential} of a proper function $f\colon\mathcal H\to\ensuremath{\left]-\infty,+\infty\right]}$ is the set-valued operator $\partial f\colon\mathcal H\to 2^{\mathcal H}$ which maps a point $x\in\mathcal H$ to the set (see Fig.~\ref{fig:7}) \begin{equation} \label{e:subdiff} \partial f(x)\!=\! \menge{u\in\mathcal H} {(\forall y\in\mathcal H)\;\scal{y-x}{u}+f(x)\leq f(y)}. \end{equation} A vector in $\partial f(x)$ is a \emph{subgradient} of $f$ at $x$.
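These definitions lend themselves to simple numerical checks. The following Python sketch is an illustration under stated assumptions, not part of the formal development: it takes $\mathcal H=\ensuremath{\mathbb R}$ and $f=\gamma|\cdot|$, uses the standard closed form of $\ensuremath{\mathrm{prox}}_f$ (soft thresholding, assumed here rather than derived above), and verifies the minimization property \eqref{e:prox} together with the characterization \eqref{e:moreau1} on a grid.

```python
import numpy as np

# Illustration only: H = R and f = gamma*|.|. The closed-form proximal
# point (soft thresholding) is a standard fact assumed here.
def prox_abs(x, gamma):
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

gamma, x = 0.5, 1.3
p = prox_abs(x, gamma)                       # p = 0.8

# Check (e:prox): p minimizes y -> f(y) + 0.5*(x - y)^2 (grid search).
ys = np.linspace(-3.0, 3.0, 200001)
obj = gamma * np.abs(ys) + 0.5 * (x - ys) ** 2
assert abs(ys[np.argmin(obj)] - p) < 1e-3

# Check (e:moreau1): <y - p, x - p> + f(p) <= f(y) for every y.
assert np.all((ys - p) * (x - p) + gamma * abs(p)
              <= gamma * np.abs(ys) + 1e-9)
```

In this instance $x-p=\gamma\in\partial f(p)$, so the inequality in \eqref{e:moreau1} holds with equality for every $y\geq 0$.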
If $C$ is a nonempty closed convex subset of $\mathcal H$, $N_C=\partial\iota_C$ is the \emph{normal cone} operator of $C$, that is, for every $x\in\mathcal H$, \begin{equation} \label{e:normalcone} N_Cx= \begin{cases} \menge{u\in\mathcal H}{\!(\forall y\in C)\:\scal{y-x}{u}\leq 0}, &\text{if}\;\;x\in C;\\ \ensuremath{\varnothing},&\text{otherwise.} \end{cases} \end{equation} Let us denote by $\ensuremath{\mathrm{Argmin}\,} f$ the set of minimizers of a function $f\colon\mathcal H\to\ensuremath{\left]-\infty,+\infty\right]}$ (the notation $\ensuremath{\mathrm{Argmin}\,}_{x\in\mathcal H}f(x)$ will also be used). The most fundamental result in optimization is actually the following immediate consequence of \eqref{e:subdiff}. \begin{theorem}[Fermat's rule] \label{t:4} Let $f\colon\mathcal H\to\ensuremath{\left]-\infty,+\infty\right]}$ be a proper function. Then $\ensuremath{\mathrm{Argmin}\,} f=\menge{x\in\mathcal H}{0\in\partial f(x)}$. \end{theorem} \begin{theorem}[Moreau] \label{t:6} Let $f\in\Gamma_0(\mathcal H)$. Then $f^*\in\Gamma_0(\mathcal H)$, $f^{**}=f$, and $\ensuremath{\mathrm{prox}}_f+\ensuremath{\mathrm{prox}}_{f^*}=\ensuremath{\mathrm{Id}}$. \end{theorem} A function $f\in\Gamma_0(\mathcal H)$ is \emph{differentiable} at $x\in\ensuremath{\mathrm{dom}\,} f$ if there exists a vector $\nabla f(x)\in\mathcal H$, called the \emph{gradient} of $f$ at $x$, such that \begin{equation} \label{e:grad} (\forall y\in\mathcal H)\quad \lim_{\alpha\downarrow 0}\dfrac{f(x+\alpha y)-f(x)}{\alpha}= \scal{y}{\nabla f(x)}. \end{equation} \begin{example} \label{ex:jjm7} Let $C$ be a nonempty closed convex subset of $\mathcal H$. Then $\nabla d_C^2/2=\ensuremath{\mathrm{Id}}-\ensuremath{\mathrm{proj}}_C$. \end{example} \begin{proposition} \label{p:11} Let $f\in\Gamma_0(\mathcal H)$, let $x\in\ensuremath{\mathrm{dom}\,} f$, and suppose that $f$ is differentiable at $x$. Then $\partial f(x)=\{\nabla f(x)\}$. 
\end{proposition} We close this section by examining fundamental properties of a canonical convex minimization problem. \begin{proposition} \label{p:17} Let $f\in\Gamma_0(\mathcal H)$, let $g\in\Gamma_0(\mathcal G)$, and let $L\colon\mathcal H\to\mathcal G$ be linear. Suppose that $L(\ensuremath{\mathrm{dom}\,} f)\cap\ensuremath{\mathrm{dom}\,} g\neq\ensuremath{\varnothing}$ and set $S=\ensuremath{\mathrm{Argmin}\,}(f+g\circ L)$. Then the following hold: \begin{enumerate} \item \label{p:17i} Suppose that $\lim_{\|x\|\to\ensuremath{+\infty}}f(x)+g(Lx)=\ensuremath{+\infty}$. Then $S\neq\ensuremath{\varnothing}$. \item \label{p:17ii} Suppose that $\ensuremath{\operatorname{ri}}(L(\ensuremath{\mathrm{dom}\,} f))\cap\ensuremath{\operatorname{ri}}(\ensuremath{\mathrm{dom}\,} g)\neq\ensuremath{\varnothing}$. Then \begin{align*} \label{e:17ii} S &=\menge{x\in\mathcal H}{0\in\partial f(x)+L^*\big(\partial g(Lx)\big)}\\ &=\menge{x\in\mathcal H}{(\ensuremath{\exists\,} v\in\partial g(Lx))\;-L^*v\in\partial f(x)}. 
\end{align*} \end{enumerate} \end{proposition} \begin{figure} \scalebox{0.63} { \begin{pspicture}(-2.9,-3.5)(4.0,4.3) \psline[linewidth=0.04cm,arrowsize=0.09cm 4.0,% arrowlength=1.4,arrowinset=0.4]{->}(-0.0,-3.0)(-0.0,3.2) \psline[linewidth=0.04cm,arrowsize=0.09cm 4.0,% arrowlength=1.4,arrowinset=0.4]{->}(-2.7,-0.0)(3.0,-0.0) \rput(3.3,0.0){\large $\mathcal H$} \rput(0.0,3.5){\large $\ensuremath{\mathbb R}$} \rput(-1.0,0.0){\large $\boldsymbol{|}$} \rput(-1.0,0.5){\large $-1$} \rput(0.0,-1.0){\large $\boldsymbol{-}$} \rput(0.6,-1.0){\large $-1$} \rput(1.0,0.0){\large $\boldsymbol{|}$} \rput(1.0,0.5){\large $1$} \psplot[linewidth=0.05cm,linestyle=solid,algebraic,% linecolor=dbrown]{-2.6}{-0.99}{-2*x-3} \psline[linewidth=0.05cm,linecolor=dbrown](-1.0,-1.0)(1.01,0.0) \psplot[linewidth=0.05cm,linestyle=solid,algebraic,% linecolor=dbrown]{1.0}{2.6}{0.5*x^2-0.5} \end{pspicture} \begin{pspicture}(-2.9,-3.5)(4.0,4.3) \psline[linewidth=0.04cm,arrowsize=0.09cm 4.0,% arrowlength=1.4,arrowinset=0.4]{->}(-0.0,-3.0)(-0.0,3.2) \psline[linewidth=0.04cm,arrowsize=0.09cm 4.0,% arrowlength=1.4,arrowinset=0.4]{->}(-2.7,-0.0)(3.0,-0.0) \rput(3.3,0.0){\large $\mathcal H$} \rput(0.0,3.5){\large $\mathcal H$} \psline[linewidth=0.05cm,linecolor=blue](-2.6,-2.0)(-1.0,-1.97) \psline[linewidth=0.05cm,linecolor=blue](-1.0,-1.995)(-1.0,0.525) \psline[linewidth=0.05cm,linecolor=blue](-1.0,0.5)(1.024,0.5) \psline[linewidth=0.05cm,linecolor=blue](1.0,0.5)(1.0,1.01) \psplot[linewidth=0.05cm,linestyle=solid,algebraic,linecolor=blue]% {0.995}{2.6}{x} \rput(0.0,-2.0){\large $\boldsymbol{-}$} \rput(0.6,-2.0){\large $-2$} \rput(0.0,1.0){\large $\boldsymbol{-}$} \rput(0.4,1.0){\large $1$} \rput(1.0,0.0){\large $\boldsymbol{|}$} \rput(1.0,-0.5){\large $1$} \end{pspicture} } \caption{Left: Graph of a function defined on $\mathcal H=\ensuremath{\mathbb R}$. 
Right: Graph of its subdifferential.} \label{fig:7} \end{figure} \subsection{Nonexpansive operators} \label{sec:23} We introduce the main classes of operators pertinent to our discussion. First, we need to define the notion of a relaxation for an operator. \begin{definition} \label{d:relax1} Let $T\colon\mathcal H\to\mathcal H$ and let $\lambda\in\ensuremath{\left]0,+\infty\right[}$. Then the operator $R=\ensuremath{\mathrm{Id}}+\lambda(T-\ensuremath{\mathrm{Id}})$ is a \emph{relaxation} of $T$. If $\lambda\leq 1$, then $R$ is an \emph{underrelaxation} of $T$ and, if $\lambda\geq 1$, $R$ is an \emph{overrelaxation} of $T$; in particular, if $\lambda=2$, $R$ is the \emph{reflection} of $T$. \end{definition} \begin{definition} \label{d:relax2} Let $\alpha\in\rzeroun$. An $\alpha$-relaxation sequence is a sequence $(\lambda_n)_{n\in\ensuremath{\mathbb N}}$ in $\left]0,1/\alpha\right[$ such that $\sum_{n\in\ensuremath{\mathbb N}}\lambda_n(1-\alpha\lambda_n)=\ensuremath{+\infty}$. \end{definition} \begin{example} \label{ex:relax} Let $\alpha\in\rzeroun$ and let $(\lambda_n)_{n\in\ensuremath{\mathbb N}}$ be a sequence in $\ensuremath{\left]0,+\infty\right[}$. Then $(\lambda_n)_{n\in\ensuremath{\mathbb N}}$ is an $\alpha$-relaxation sequence in each of the following cases: \begin{enumerate} \item \label{ex:relaxi} $\alpha<1$ and $(\forall n\in\ensuremath{\mathbb N})$ $\lambda_n=1$. \item \label{ex:relaxii} $(\forall n\in\ensuremath{\mathbb N})$ $\lambda_n=\lambda\in\left]0,1/\alpha\right[$. \item \label{ex:relaxiii} $\inf_{n\in\ensuremath{\mathbb N}}\lambda_n>0$ and $\sup_{n\in\ensuremath{\mathbb N}}\lambda_n<1/\alpha$. \item \label{ex:relaxiv} There exists $\varepsilon\in\zeroun$ such that $(\forall n\in\ensuremath{\mathbb N})$ $\varepsilon/\sqrt{n+1}\leq\lambda_n\leq 1/\alpha-\varepsilon/\sqrt{n+1}$. 
\end{enumerate} \end{example} An operator $T\colon\mathcal H\to\mathcal H$ is \emph{Lipschitzian} with constant $\delta\in\ensuremath{\left]0,+\infty\right[}$ if \begin{equation} \label{e:100} (\forall x\in\mathcal H)(\forall y\in\mathcal H)\quad\|Tx-Ty\|\leq\delta\|x-y\|. \end{equation} If $\delta<1$ above, then $T$ is a \emph{Banach contraction} (also called a \emph{strict contraction}). If $\delta=1$, that is, \begin{equation} \label{e:103} (\forall x\in\mathcal H)(\forall y\in\mathcal H)\quad\|Tx-Ty\|\leq\|x-y\|, \end{equation} then $T$ is \emph{nonexpansive}. On the other hand, $T$ is \emph{cocoercive} with constant $\beta\in\ensuremath{\left]0,+\infty\right[}$ if \begin{multline} \label{e:101} (\forall x\in\mathcal H)(\forall y\in\mathcal H)\quad \scal{x-y}{Tx-Ty}\geq\\ \beta\|Tx-Ty\|^2. \end{multline} If $\beta=1$ in \eqref{e:101}, then $T$ is \emph{firmly nonexpansive}. Alternatively, $T$ is firmly nonexpansive if \begin{multline} \label{e:107} (\forall x\in\mathcal H)(\forall y\in\mathcal H)\;\; \|Tx-Ty\|^2\leq\|x-y\|^2\\ -\|(\ensuremath{\mathrm{Id}}-T)x-(\ensuremath{\mathrm{Id}}-T)y\|^2. \end{multline} Equivalently, $T$ is firmly nonexpansive if the reflection \begin{equation} \label{e:104} \ensuremath{\mathrm{Id}}+2(T-\ensuremath{\mathrm{Id}})\;\:\text{is nonexpansive.} \end{equation} More generally, let $\alpha\in\rzeroun$. Then $T$ is $\alpha$-\emph{averaged} if the overrelaxation \begin{equation} \label{e:105-} \ensuremath{\mathrm{Id}}+\alpha^{-1}(T-\ensuremath{\mathrm{Id}})\;\:\text{is nonexpansive} \end{equation} or, equivalently, if there exists a nonexpansive operator $Q\colon\mathcal H\to\mathcal H$ such that $T$ can be written as the underrelaxation \begin{equation} \label{e:105} T=\ensuremath{\mathrm{Id}}+\alpha(Q-\ensuremath{\mathrm{Id}}). 
\end{equation} An alternative characterization of $\alpha$-averagedness is \begin{multline} \label{e:106} (\forall x\in\mathcal H)(\forall y\in\mathcal H)\;\; \|Tx-Ty\|^2\leq\|x-y\|^2\\ -{\dfrac{1-\alpha}{\alpha}}\|(\ensuremath{\mathrm{Id}}-T)x-(\ensuremath{\mathrm{Id}}-T)y\|^2. \end{multline} Averaged operators will be the most important class of nonlinear operators we use in this paper. They were introduced in \cite{Bail78} and their central role in many nonlinear analysis algorithms was pointed out in \cite{Opti04}, with further refinements in \cite{Jmaa15,Huan20}. Note that \begin{align} \label{e:110} T\;\text{is firmly nonexpansive} &\Leftrightarrow\:\ensuremath{\mathrm{Id}}-T\;\text{is firmly nonexpansive} \nonumber\\ &\Leftrightarrow\:T\;\text{is $1/2$-averaged} \nonumber\\ &\Leftrightarrow\:T\;\text{is $1$-cocoercive}. \end{align} Here is an immediate consequence of \eqref{e:moreau1} and \eqref{e:110}. \begin{example} \label{ex:1} Let $f\in\Gamma_0(\mathcal H)$. Then $\ensuremath{\mathrm{prox}}_f$ and $\ensuremath{\mathrm{Id}}-\ensuremath{\mathrm{prox}}_f$ are firmly nonexpansive. In particular, if $C$ is a nonempty closed convex subset of $\mathcal H$, then \eqref{e:98} implies that $\ensuremath{\mathrm{proj}}_C$ and $\ensuremath{\mathrm{Id}}-\ensuremath{\mathrm{proj}}_C$ are firmly nonexpansive. 
\end{example} \begin{figure} \begin{center} \scalebox{0.82} { \begin{pspicture}(-0.9,-1.2)(9.5,7.1) \rput(4.0,1.0){\large\color{nido} projection operators} \rput(4.0,1.9){\large\color{nido} proximity operators} \rput(4.0,3.2){\large\color{nido} firmly nonexpansive } \rput(4.0,2.7){\large\color{nido} operators/resolvents } \rput(2.7,4.2){\large\color{dred} $\alpha$-averaged operators, $\alpha<1$} \rput(1.7,5.2){\large\color{dred} nonexpansive operators} \rput(5.0,6.2){\large Lipschitzian operators} \rput(8.0,3.0){\large\color{blue}cocoercive} \rput(7.9,2.5){\large\color{blue}operators} \psframe[linecolor=nido,linewidth=0.04,dimen=outer] (2.0,0.5)(6.0,1.5) \psframe[linecolor=nido,linewidth=0.04,dimen=outer] (1.8,0.3)(6.2,2.4) \psframe[linecolor=nido,linewidth=0.04,dimen=outer] (1.6,0.1)(6.4,3.6) \psframe[linecolor=dred,linewidth=0.04,dimen=outer] (-0.4,-0.3)(6.6,4.8) \psframe[linecolor=dred,linewidth=0.04,dimen=outer] (-0.6,-0.5)(6.8,5.8) \psframe[linecolor=blue,linewidth=0.04,dimen=outer] (1.4,-0.1)(9.2,3.8) \psframe[linewidth=0.04,dimen=outer] (-0.8,-0.7)(9.4,6.8) \end{pspicture} } \end{center} \vskip -4mm \caption{Classes of nonlinear operators.} \label{fig:1} \end{figure} The relationships between the different types of nonlinear operators discussed so far are depicted in Fig.~\ref{fig:1}. The next propositions provide further connections between them. \begin{proposition} \label{p:19b} Let $\delta\in\zeroun$, let $T\colon\mathcal H\to\mathcal H$ be $\delta$-Lipschitzian, and set $\alpha=(\delta+1)/2$. Then $T$ is $\alpha$-averaged. \end{proposition} \begin{proposition} \label{p:2004-5} Let $T\colon\mathcal H\to\mathcal H$, let $\beta\in\ensuremath{\left]0,+\infty\right[}$, and let $\gamma\in\left]0,2\beta\right[$. Then $T$ is $\beta$-cocoercive if and only if $\ensuremath{\mathrm{Id}}-\gamma T$ is $\gamma/(2\beta)$-averaged. \end{proposition} It follows from the Cauchy-Schwarz inequality that a $\beta$-cocoercive operator is $\beta^{-1}$-Lipschitzian. 
In the case of gradients of convex functions, the converse is also true. \begin{proposition}[Baillon-Haddad] \label{p:BH} Let $f\colon\mathcal H\to\ensuremath{\mathbb R}$ be a differentiable convex function such that $\nabla f$ is $\beta^{-1}$-Lipschitzian for some $\beta\in\ensuremath{\left]0,+\infty\right[}$. Then $\nabla f$ is $\beta$-cocoercive. \end{proposition} We now describe operations that preserve averagedness and cocoercivity. \begin{proposition} \label{p:av4} Let $T\colon\mathcal H\to\mathcal H$, let $\alpha\in\zeroun$, and let $\lambda\in\left]0,1/\alpha\right[$. Then $T$ is $\alpha$-averaged if and only if $(1-\lambda)\ensuremath{\mathrm{Id}}+\lambda T$ is $\lambda\alpha$-averaged. \end{proposition} \begin{proposition} \label{p:av2} For every $i\in\{1,\ldots,m\}$, let $\alpha_i\in\zeroun$, let $\omega_i\in\rzeroun$, and let $T_i\colon\mathcal H\to\mathcal H$ be $\alpha_i$-averaged. Suppose that $\sum_{i=1}^m\omega_i=1$ and set $\alpha=\sum_{i=1}^m\omega_i\alpha_i$. Then $\sum_{i=1}^m\omega_iT_i$ is $\alpha$-averaged. \end{proposition} \begin{example} \label{ex:2} For every $i\in\{1,\ldots,m\}$, let $\omega_i\in\rzeroun$ and let $T_i\colon\mathcal H\to\mathcal H$ be firmly nonexpansive. Suppose that $\sum_{i=1}^m\omega_i=1$. Then $\sum_{i=1}^m\omega_iT_i$ is firmly nonexpansive. \end{example} \begin{proposition} \label{p:roma} For every $i\in\{1,\ldots,m\}$, let $\alpha_i\in\zeroun$ and let $T_i\colon\mathcal H\to\mathcal H$ be $\alpha_i$-averaged. Set \begin{equation} \label{e:nyc2014-09-25} T=T_1\circ\cdots\circ T_m\quad\text{and}\quad \alpha=\dfrac{1}{1+\dfrac{1} {\ensuremath{\displaystyle\sum}_{i=1}^m\dfrac{\alpha_i}{1-\alpha_i}}}. \end{equation} Then $T$ is $\alpha$-averaged. \end{proposition} \begin{example} \label{ex:roma} Let $\alpha_1\in\zeroun$, let $\alpha_2\in\zeroun$, let $T_1\colon\mathcal H\to\mathcal H$ be $\alpha_1$-averaged, and let $T_2\colon\mathcal H\to\mathcal H$ be $\alpha_2$-averaged. 
Set \begin{equation} T=T_1\circ T_2\quad\text{and}\quad \alpha=\frac{\alpha_1+\alpha_2-2\alpha_1\alpha_2} {1-\alpha_1\alpha_2}. \end{equation} Then $T$ is $\alpha$-averaged. \end{example} \begin{proposition}[\cite{Davi17}] \label{p:3magic} Let $T_1\colon\mathcal H\to\mathcal H$ and $T_2\colon\mathcal H\to\mathcal H$ be firmly nonexpansive, let $\alpha_3\in\zeroun$, and let $T_3\colon\mathcal H\to\mathcal H$ be $\alpha_3$-averaged. Set $\alpha=1/(2-\alpha_3)$ and \begin{equation} T=T_1\circ(T_2-\ensuremath{\mathrm{Id}}+T_3\circ T_2)+\ensuremath{\mathrm{Id}}-T_2. \end{equation} Then $T$ is $\alpha$-averaged. \end{proposition} \begin{proposition} \label{p:coco} For every $k\in\{1,\ldots,q\}$, let $0\neq L_k\colon\mathcal H\to\mathcal G_k$ be linear, let $\beta_k\in\ensuremath{\left]0,+\infty\right[}$, and let $T_k\colon\mathcal G_k\to\mathcal G_k$ be $\beta_k$-cocoercive. Set \begin{equation} \label{e:a12} T=\sum_{k=1}^qL_k^*\circ T_k\circ L_k\quad\text{and}\quad \beta=\dfrac{1}{{\ensuremath{\displaystyle\sum}_{k=1}^q}\dfrac{\|L_k\|^2}{\beta_k}}. \end{equation} Then the following hold: \begin{enumerate} \item \label{p:cocoi} $T$ is $\beta$-cocoercive {\rm\cite{Livre1}}. \item \label{p:cocoii} Suppose that $\sum_{k=1}^q\|L_k\|^2\leq 1$ and that the operators $(T_k)_{1\leq k\leq q}$ are firmly nonexpansive. Then $T$ is firmly nonexpansive {\rm\cite{Livre1}}. \item \label{p:cocoiii} Suppose that $\sum_{k=1}^q\|L_k\|^2\leq 1$ and that $(T_k)_{1\leq k\leq q}$ are proximity operators. Then $T$ is a proximity operator {\rm\cite{Comb18}}. \end{enumerate} \end{proposition} \begin{remark} \label{r:2018} The statement of Proposition~\ref{p:coco}\ref{p:cocoiii} can be made more precise \cite{Comb18}. 
To wit, for every $k\in\{1,\ldots,q\}$, let $\omega_k\in\ensuremath{\left]0,+\infty\right[}$, let $0\neq L_k\colon\mathcal H\to\mathcal G_k$ be linear, let $g_k\in\Gamma_0(\mathcal G_k)$, and let $h_k\colon v\mapsto\inf_{w\in\mathcal G_k}(g_k^*(w)+\|v-w\|^2/2)$ be the \emph{Moreau envelope} of $g_k^*$. Then, if $\sum_{k=1}^q\omega_k\|L_k\|^2\leq 1$, we have \begin{multline} \sum_{k=1}^q\omega_k\big(L_k^*\circ\ensuremath{\mathrm{prox}}_{g_k}\circ L_k\big) =\ensuremath{\mathrm{prox}}_{f},\quad\text{where}\\ f=\Bigg(\sum_{k=1}^q\omega_kh_k \circ L_k\Bigg)^*-\dfrac{\|\cdot\|^2}{2}. \end{multline} \end{remark} \bigskip Let $T\colon\mathcal H\to\mathcal H$ and let \begin{equation} \ensuremath{\text{\rm Fix}\,} T=\menge{x\in\mathcal H}{Tx=x} \end{equation} be its set of \emph{fixed points}. If $T$ is a Banach contraction, then it admits a unique fixed point. However, if $T$ is merely nonexpansive, the situation is quite different. Indeed, a nonexpansive operator may have no fixed point (take $T\colon x\mapsto x+z$, with $z\neq 0$), exactly one (take $T=-\ensuremath{\mathrm{Id}}$), or infinitely many (take $T=\ensuremath{\mathrm{Id}}$). Even those operators which are firmly nonexpansive can fail to have fixed points. \begin{example} \label{ex:f8} $T\colon\ensuremath{\mathbb R}\to\ensuremath{\mathbb R}\colon x\mapsto({x+\sqrt{x^2+4}})/2$ is firmly nonexpansive and $\ensuremath{\text{\rm Fix}\,} T=\ensuremath{\varnothing}$. \end{example} \begin{proposition} \label{p:f1} Let $T\colon\mathcal H\to\mathcal H$ be nonexpansive. Then $\ensuremath{\text{\rm Fix}\,} T$ is closed and convex. \end{proposition} \begin{proposition} \label{p:f2} Let $(T_i)_{1\leq i\leq m}$ be nonexpansive operators from $\mathcal H$ to $\mathcal H$, and let $(\omega_i)_{1\leq i\leq m}$ be real numbers in $\rzeroun$ such that $\sum_{i=1}^m\omega_i=1$. Suppose that $\bigcap_{i=1}^m\ensuremath{\text{\rm Fix}\,} T_i\neq\ensuremath{\varnothing}$. 
Then $\ensuremath{\text{\rm Fix}\,}(\sum_{i=1}^m\omega_iT_i)=\bigcap_{i=1}^m\ensuremath{\text{\rm Fix}\,} T_i$. \end{proposition} \begin{proposition} \label{p:f3} For every $i\in\{1,\ldots,m\}$, let $\alpha_i\in\zeroun$ and let $T_i\colon\mathcal H\to\mathcal H$ be $\alpha_i$-averaged. Suppose that $\bigcap_{i=1}^m\ensuremath{\text{\rm Fix}\,} T_i\neq\ensuremath{\varnothing}$. Then $\ensuremath{\text{\rm Fix}\,}(T_1\circ\cdots\circ T_m)=\bigcap_{i=1}^m\ensuremath{\text{\rm Fix}\,} T_i$. \end{proposition} \subsection{Monotone operators} \label{sec:22} Let $A\colon\mathcal H\to 2^{\mathcal H}$ be a set-valued operator. Then $A$ is described by its graph \begin{equation} \label{e:ge1} \ensuremath{\mathrm{gra}\,} A=\menge{(x,u)\in\mathcal H\times\mathcal H}{u\in Ax}, \end{equation} and its inverse $A^{-1}$, defined by the relation \begin{equation} \label{e:ge-1} (\forall (x,u)\in\mathcal H\times\mathcal H)\quad x\in A^{-1}u\quad \Leftrightarrow\quad u\in Ax, \end{equation} always exists (see Fig.~\ref{fig:5}). The operator $A$ is \emph{monotone} if \begin{multline} \label{e:ge2} \big(\forall (x,u)\in\ensuremath{\mathrm{gra}\,} A\big)\big(\forall (y,v)\in\ensuremath{\mathrm{gra}\,} A\big)\\ \scal{x-y}{u-v}\geq 0, \end{multline} in which case $A^{-1}$ is also monotone. 
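As a quick illustration of \eqref{e:ge2} and \eqref{e:ge-1}, the following Python sketch (illustration only, with $\mathcal H=\ensuremath{\mathbb R}$) samples the graph of the sign operator, a discretized stand-in for the subdifferential of the absolute value function, verifies its monotonicity, and checks that the inverse, obtained by transposing the graph, is monotone as well.

```python
import itertools
import random

random.seed(0)

# Illustration only (not part of the formal development): H = R, and the
# operator is the sign operator, which maps x < 0 to -1, x > 0 to 1, and
# 0 to the whole interval [-1, 1] (sampled at a few values).
graph = [(x, -1.0 if x < 0 else 1.0)
         for x in (random.uniform(-2, 2) for _ in range(50)) if x != 0]
graph += [(0.0, u) for u in (-1.0, -0.3, 0.0, 0.7, 1.0)]

# Monotonicity (e:ge2): <x - y, u - v> >= 0 for every pair of graph points.
assert all((x - y) * (u - v) >= 0
           for (x, u), (y, v) in itertools.product(graph, graph))

# The inverse (e:ge-1) is obtained by transposing the graph; it is a
# monotone operator as well.
inv = [(u, x) for (x, u) in graph]
assert all((x - y) * (u - v) >= 0
           for (x, u), (y, v) in itertools.product(inv, inv))
```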
\begin{figure} \scalebox{0.63} { \begin{pspicture}(-2.9,-3.5)(4.0,4.3) \psline[linewidth=0.04cm,arrowsize=0.09cm 4.0,% arrowlength=1.4,arrowinset=0.4]{->}(-0.0,-3.0)(-0.0,3.2) \psline[linewidth=0.04cm,arrowsize=0.09cm 4.0,% arrowlength=1.4,arrowinset=0.4]{->}(-2.7,-0.0)(3.0,-0.0) \rput(3.3,0.0){\large $\mathcal H$} \rput(0.0,3.5){\large $\mathcal H$} \psplot[linewidth=0.05cm,linestyle=solid,algebraic,linecolor=blue]% {-2.4}{-0.3}{(x+1)^2-1} \psline[linewidth=0.05cm,linecolor=blue](-0.3,-0.2955)(-0.3,0.99) \psplot[linewidth=0.05cm,linestyle=solid,algebraic,linecolor=blue]% {0.3}{1.8}{2.5+0.4*(x-1.8)^3} \psline[linewidth=0.05cm,linecolor=blue](2.1,-2.1)(2.1,2.0) \psline[linewidth=0.05cm,linecolor=blue](2.075,-2.1)(2.7,-2.1) \end{pspicture} \begin{pspicture}(-2.9,-3.5)(4.0,4.3) \psline[linewidth=0.04cm,arrowsize=0.09cm 4.0,% arrowlength=1.4,arrowinset=0.4]{->}(-0.0,-3.0)(-0.0,3.2) \psline[linewidth=0.04cm,arrowsize=0.09cm 4.0,% arrowlength=1.4,arrowinset=0.4]{->}(-2.7,-0.0)(3.0,-0.0) \rput(3.3,0.0){\large $\mathcal H$} \rput(0.0,3.5){\large $\mathcal H$} \psplot[linewidth=0.05cm,linestyle=solid,algebraic,linecolor=nido]% {-1.00}{-0.51}{sqrt(x+1)-1} \psplot[linewidth=0.05cm,linestyle=solid,algebraic,linecolor=nido]% {-1.00}{0.96}{-sqrt(x+1)-1} \psline[linewidth=0.05cm,linecolor=nido](-0.2955,-0.3)(0.99,-0.3) \psplot[linewidth=0.05cm,linestyle=solid,algebraic,linecolor=nido]% {1.15}{2.5}{1.8-((2.5-x)/0.4)^(0.333333)} \psline[linewidth=0.05cm,linecolor=nido](-2.1,2.1)(2.0,2.1) \psline[linewidth=0.05cm,linecolor=nido](-2.08,2.1)(-2.08,2.7) \end{pspicture} } \caption{Left: Graph of a (nonmonotone) set-valued operator. 
Right: Graph of its inverse.} \label{fig:5} \end{figure} \begin{figure} \scalebox{0.63} { \begin{pspicture}(-2.9,-3.5)(4.2,4.3) \psline[linewidth=0.04cm,arrowsize=0.09cm 4.0,% arrowlength=1.4,arrowinset=0.4]{->}(-0.0,-3.0)(-0.0,3.2) \psline[linewidth=0.04cm,arrowsize=0.09cm 4.0,% arrowlength=1.4,arrowinset=0.4]{->}(-2.7,-0.0)(3.0,-0.0) \rput(3.3,0.0){\large $\mathcal H$} \rput(0.0,3.5){\large $\mathcal H$} \rput(0.65,-0.5){\color{red}\Large $x_0$} \rput(0.5,0.0){\color{red}\large $\boldsymbol{|}$} \rput(-0.5,1.5){\color{red}\Large $u_0$} \rput(-0.0,1.5){\color{red}\large $\boldsymbol{-}$} \psplot[linewidth=0.05cm,linestyle=solid,algebraic,linecolor=blue]% {-2.6}{-0.51}{(x+1.7)^3-2} \psline[linewidth=0.05cm,linecolor=blue](-0.53,-0.3)(0.3,-0.3) \psplot[linewidth=0.05cm,linestyle=solid,algebraic,linecolor=blue]% {0.7}{2.5}{2.0+0.5*(x-1.2)^3} \psline[linewidth=0.05cm,linecolor=blue](0.3,-0.32)(0.3,1.0) \psdot[linewidth=0.04cm,linecolor=red](0.5,1.50) \end{pspicture} \begin{pspicture}(-2.9,-3.5)(4.2,4.3) \psline[linewidth=0.04cm,arrowsize=0.09cm 4.0,% arrowlength=1.4,arrowinset=0.4]{->}(-0.0,-3.0)(-0.0,3.2) \psline[linewidth=0.04cm,arrowsize=0.09cm 4.0,% arrowlength=1.4,arrowinset=0.4]{->}(-2.7,-0.0)(3.0,-0.0) \rput(3.3,0.0){\large $\mathcal H$} \rput(0.0,3.5){\large $\mathcal H$} \psplot[linewidth=0.05cm,linestyle=solid,algebraic,linecolor=blue]% {-2.6}{-0.51}{(x+1.7)^3-2} \psline[linewidth=0.05cm,linecolor=blue](-0.53,-0.3)(0.3,-0.3) \psplot[linewidth=0.05cm,linestyle=solid,algebraic,linecolor=blue]% {0.295}{2.5}{2.0+0.5*(x-1.2)^3} \psline[linewidth=0.05cm,linecolor=blue](0.3,-0.32)(0.3,1.65) \end{pspicture} } \caption{Left: Graph of a monotone operator which is not maximally monotone: we can add the point $(x_0,u_0)$ to its graph and still get a monotone graph. Right: Graph of a maximally monotone operator: adding any point to this graph destroys its monotonicity. 
} \label{fig:6} \end{figure} \begin{example} Let $f\colon\mathcal H\to\ensuremath{\left]-\infty,+\infty\right]}$ be a proper function, let $(x,u)\in\ensuremath{\mathrm{gra}\,}\partial f$, and let $(y,v)\in\ensuremath{\mathrm{gra}\,}\partial f$. Then \eqref{e:subdiff} yields \begin{equation} \begin{cases} \scal{x-y}{u}+f(y)\geq f(x)\\ \scal{y-x}{v}+f(x)\geq f(y). \end{cases} \end{equation} Adding these inequalities yields $\scal{x-y}{u-v}\geq 0$, which shows that $\partial f$ is monotone. \end{example} A natural question is whether the operator obtained by adding a point to the graph of a monotone operator $A\colon\mathcal H\to 2^{\mathcal H}$ can still be monotone. If no point outside $\ensuremath{\mathrm{gra}\,} A$ can be added without destroying monotonicity, then $A$ is said to be \emph{maximally monotone}. Thus, $A$ is maximally monotone if, for every $(x,u)\in\mathcal H\times\mathcal H$, \begin{equation} \label{e:maxmon2} (x,u)\in\ensuremath{\mathrm{gra}\,} A\;\Leftrightarrow\;(\forall (y,v)\in\ensuremath{\mathrm{gra}\,} A)\;\; \scal{x-y}{u-v}\geq 0. \end{equation} These notions are illustrated in Fig.~\ref{fig:6}. Let us provide some basic examples of maximally monotone operators, starting with the subdifferential of \eqref{e:subdiff} (see Fig.~\ref{fig:7}). \begin{example}[Moreau] \label{ex:mono1} Let $f\in\Gamma_0(\mathcal H)$. Then $\partial f$ is maximally monotone and $(\partial f)^{-1}=\partial f^*$. \end{example} \begin{example} \label{ex:mono5} Let $T\colon\mathcal H\to\mathcal H$ be monotone and continuous. Then $T$ is maximally monotone. In particular, if $T$ is cocoercive, it is maximally monotone. \end{example} \begin{example} \label{ex:mono3} Let $T\colon\mathcal H\to\mathcal H$ be nonexpansive. Then $\ensuremath{\mathrm{Id}}-T$ is maximally monotone. \end{example} \begin{example} \label{ex:mono4} Let $T\colon\mathcal H\to\mathcal H$ be linear (hence continuous) and \emph{positive} in the sense that $(\forall x\in\mathcal H)$ $\scal{x}{Tx}\geq 0$. Then $T$ is maximally monotone.
In particular, if $T$ is \emph{skew}, i.e., $T^*=-T$, then it is maximally monotone. \end{example} Given $A\colon\mathcal H\to 2^{\mathcal H}$, the \emph{resolvent} of $A$ is the operator $J_A=(\ensuremath{\mathrm{Id}}+A)^{-1}$, that is, \begin{equation} \label{e:gm} (\forall(x,p)\in\mathcal H\times\mathcal H)\quad p\in J_{\!A}x \quad\Leftrightarrow\quad x-p\in Ap. \end{equation} In addition, the \emph{reflected resolvent} of $A$ is \begin{equation} \label{e:RA} R_A=2J_A-\ensuremath{\mathrm{Id}}. \end{equation} A profound result which connects monotonicity and nonexpansiveness is Minty's theorem \cite{Mint62}. It implies that, if $A\colon\mathcal H\to 2^{\mathcal H}$ is maximally monotone, then $J_A$ is single-valued, defined everywhere on $\mathcal H$, and firmly nonexpansive. \begin{theorem}[Minty] \label{t:minty} Let $T\colon\mathcal H\to\mathcal H$. Then $T$ is firmly nonexpansive if and only if it is the resolvent of a maximally monotone operator $A\colon\mathcal H\to 2^{\mathcal H}$. \end{theorem} \begin{example} \label{ex:mono2} Let $f\in\Gamma_0(\mathcal H)$. Then $J_{\partial f}=\ensuremath{\mathrm{prox}}_f$. \end{example} Let $f$ and $g$ be functions in $\Gamma_0(\mathcal H)$ which satisfy the constraint qualification $\ensuremath{\operatorname{ri}}(\ensuremath{\mathrm{dom}\,} f)\cap\ensuremath{\operatorname{ri}}(\ensuremath{\mathrm{dom}\,} g)\neq\ensuremath{\varnothing}$. In view of Proposition~\ref{p:17}\ref{p:17ii} and Example~\ref{ex:mono1}, the minimizers of $f+g$ are precisely the solutions to the inclusion $0\in Ax+Bx$ involving the maximally monotone operators $A=\partial f$ and $B=\partial g$. Hence, it may seem that in minimization problems the theory of subdifferentials should suffice to analyze and solve problems without invoking general monotone operator theory. As discussed in \cite{Comb18}, this is not the case and monotone operators play an indispensable role in various aspects of convex minimization.
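Relation \eqref{e:gm} and Example~\ref{ex:mono2} admit a direct numerical check. In the following Python sketch (illustration only, with $\mathcal H=\ensuremath{\mathbb R}$ and $f=|\cdot|$; the soft-thresholding closed form of $\ensuremath{\mathrm{prox}}_f$ is assumed rather than derived above), we verify that $p=\ensuremath{\mathrm{prox}}_fx$ satisfies $x-p\in\partial f(p)$, i.e., that $\ensuremath{\mathrm{prox}}_f$ is indeed the resolvent of $\partial f$.

```python
import numpy as np

# Illustration only: H = R and f = |.|. The closed-form proximity operator
# (soft thresholding with unit weight) is assumed here, not derived above.
def prox_abs(x):
    return np.sign(x) * np.maximum(np.abs(x) - 1.0, 0.0)

def in_subdiff_abs(p, u, tol=1e-9):
    # u in d|.|(p): u = sign(p) if p != 0, and u in [-1, 1] if p = 0.
    return abs(u - np.sign(p)) <= tol if p != 0 else abs(u) <= 1 + tol

# Relation (e:gm) with A = df: p = J_A x  <=>  x - p in A p.
# Since J_{df} = prox_f, the proximal point must satisfy x - p in df(p).
for x in np.linspace(-3.0, 3.0, 61):
    p = prox_abs(x)
    assert in_subdiff_abs(p, x - p)
```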
We give below an illustration of this fact in the context of Proposition~\ref{p:17}. \begin{example}[\cite{Siop11}] \label{ex:m+s} Given $f\in\Gamma_0(\mathcal H)$, $g\in\Gamma_0(\mathcal G)$, and a linear operator $L\colon\mathcal H\to\mathcal G$, the objective is to \begin{equation} \label{e:p} \minimize{x\in\mathcal H}{f(x)+g(Lx)} \end{equation} using $f$ and $g$ separately by means of their respective proximity operators. To this end, let us bring into play the Fenchel-Rockafellar dual problem \begin{equation} \label{e:d} \minimize{v\in\mathcal G}{f^*(-L^*v)+g^*(v)}. \end{equation} We derive from \cite[Theorem~19.1]{Livre1} that, if $(x,v)\in\mathcal H\times\mathcal G$ solves the inclusion \begin{equation} \label{e:d3} \begin{bmatrix} 0\\ 0 \end{bmatrix} \in \underbrace{\begin{bmatrix} \partial f&0\\ 0&\partial g^*\\ \end{bmatrix}}_{\text{subdifferential}} \begin{bmatrix} x\\ v \end{bmatrix} + \underbrace{\begin{bmatrix} 0&L^*\\ -L&0\\ \end{bmatrix}}_{\text{skew}} \begin{bmatrix} x\\ v \end{bmatrix}, \end{equation} then $x$ solves \eqref{e:p} and $v$ solves \eqref{e:d}. Now introduce the variable $\boldsymbol{z}=(x,v)$, the function $\Gamma_0(\mathcal H\times\mathcal G)\ni\boldsymbol{h} \colon\boldsymbol{z}\mapsto f(x)+g^*(v)$, the operator $\boldsymbol{A}=\partial\boldsymbol{h}$, and the skew operator $\boldsymbol{B}\colon\boldsymbol{z}\mapsto(L^*v,-Lx)$. Then it follows from Examples~\ref{ex:mono1} and \ref{ex:mono4} that \eqref{e:d3} can be written as the maximally monotone inclusion $\boldsymbol{0}\in\boldsymbol{Az}+\boldsymbol{Bz}$, which does not correspond to a minimization problem since $\boldsymbol{B}$ is not a gradient \cite[Proposition~2.58]{Livre1}. As a result, genuine monotone operator splitting methods were employed in \cite{Siop11} to solve \eqref{e:d3} and, thereby, \eqref{e:p} and \eqref{e:d}. Applications of this framework can be found in image restoration \cite{Ocon14} and in empirical mode decomposition \cite{Pust14}. 
\end{example} \begin{example} The primal-dual pair \eqref{e:p}--\eqref{e:d} can be exploited in various ways; see for instance \cite{Cham11,Icip14,Svva10,Komo15}. A simple illustration is found in sparse signal recovery and machine learning, where one often aims at solving \eqref{e:p} by choosing $g$ to be a norm $|||\cdot|||$ \cite{Argy12,Bach12,Nume19,Dono03,McDo16}. Now let $|||\cdot|||_*\colon\mathcal G\to\ensuremath{\mathbb R}\colon v\mapsto \sup_{|||y|||\leq 1}\scal{y}{v}$ be the dual norm and let $B_*=\menge{v\in\mathcal G}{|||v|||_*\leq 1}$ be the associated unit ball. Then \eqref{e:d} is the constrained optimization problem \begin{equation} \minimize{v\in B_*}{f^*(-L^*v)}. \end{equation} This dual formulation underlies several investigations, e.g., \cite{Elgh12,Ndia17}. \end{example} \section{Fixed point algorithms} \label{sec:3} We review the main fixed point construction algorithms. \subsection{Basic iteration schemes} First, we recall that finding the fixed point of a Banach contraction is relatively straightforward via the standard Banach-Picard iteration scheme \eqref{e:emile}. \begin{theorem}[\cite{Livre1}] \label{t:banach} Let $\delta\in\zeroun$, let $T\colon\mathcal H\to\mathcal H$ be $\delta$-Lipschitzian, and let $x_0\in\mathcal H$. Set \begin{equation} \label{e:banach} (\forall n\in\ensuremath{\mathbb N})\quad x_{n+1}=Tx_n. \end{equation} Then $T$ has a unique fixed point $\overline{x}$ and $x_n\to\overline{x}$. More precisely, $(\forall n\in\ensuremath{\mathbb N})$ $\|x_n-\overline{x}\|\leq\delta^n\|x_0-\overline{x}\|$. \end{theorem} If $T$ is merely nonexpansive (i.e., $\delta=1$) with $\ensuremath{\text{\rm Fix}\,} T\neq\ensuremath{\varnothing}$, Theorem~\ref{t:banach} fails. For instance, let $T\neq\ensuremath{\mathrm{Id}}$ be a rotation in the Euclidean plane. 
Then it is nonexpansive with $\ensuremath{\text{\rm Fix}\,} T=\{0\}$ but the sequence $(x_n)_{n\in\ensuremath{\mathbb N}}$ constructed by the successive approximation process \eqref{e:banach} does not converge. Such scenarios can be handled via the following result. \begin{theorem}[\cite{Livre1}] \label{t:1} Let $\alpha\in\rzeroun$, let $T\colon\mathcal H\to\mathcal H$ be an $\alpha$-averaged operator such that $\ensuremath{\text{\rm Fix}\,} T\neq\ensuremath{\varnothing}$, and let $(\lambda_n)_{n\in\ensuremath{\mathbb N}}$ be an $\alpha$-relaxation sequence. Set \begin{equation} \label{e:groetsch1} (\forall n\in\ensuremath{\mathbb N})\quad x_{n+1}=x_n+\lambda_n\big(Tx_n-x_n\big). \end{equation} Then $(x_n)_{n\in\ensuremath{\mathbb N}}$ converges to a point in $\ensuremath{\text{\rm Fix}\,} T$. \end{theorem} \begin{remark} \label{r:1} In connection with Theorems~\ref{t:banach} and \ref{t:1}, let us make the following observations. \begin{enumerate} \item \label{r:1i} If $\alpha<1$ in Theorem~\ref{t:1}, choosing $\lambda_n=1$ in \eqref{e:groetsch1} (see Example~\ref{ex:relax}\ref{ex:relaxi}) yields \eqref{e:banach}. \item \label{r:1ii} In contrast with Theorem~\ref{t:banach}, the convergence in Theorem~\ref{t:1} is not linear in general \cite{Baus09,Borw15}. \item \label{r:1iii} When $\alpha=1$, \eqref{e:groetsch1} is known as the \emph{Krasnosel'ski\u\i-Mann iteration}. \end{enumerate} \end{remark} Next, we present a more flexible fixed point theorem which involves iteration-dependent composite averaged operators. \begin{theorem}[\cite{Jmaa15}] \label{t:2} Let $\varepsilon\in\left]0,1/2\right[$ and let $x_0\in\mathcal H$. For every $n\in\ensuremath{\mathbb N}$, let $\alpha_{1,n}\in\left]0,1/(1+\varepsilon)\right]$, let $\alpha_{2,n}\in\left]0,1/(1+\varepsilon)\right]$, let $T_{1,n}\colon\mathcal H\to\mathcal H$ be $\alpha_{1,n}$-averaged, and let $T_{2,n}\colon\mathcal H\to\mathcal H$ be $\alpha_{2,n}$-averaged. 
In addition, for every $n\in\ensuremath{\mathbb N}$, let \begin{equation} \lambda_n\in\big[\varepsilon, {(1-\varepsilon)(1+\varepsilon\alpha_n)}/{\alpha_n}\big], \end{equation} where $\alpha_n=({\alpha_{1,n}+\alpha_{2,n}-2\alpha_{1,n}\alpha_{2,n}})/ (1-\alpha_{1,n}\alpha_{2,n})$, and set \begin{equation} \label{e:beforecluster} x_{n+1}=x_n+\lambda_n\big(T_{1,n}(T_{2,n}x_n)-x_n\big). \end{equation} Suppose that $S=\bigcap_{n\in\ensuremath{\mathbb N}}\ensuremath{\text{\rm Fix}\,}(T_{1,n}\circ T_{2,n})\neq\ensuremath{\varnothing}$. Then the following hold: \begin{enumerate} \item \label{t:2i} $(\forall x\in S)$ $\sum_{n\in\ensuremath{\mathbb N}}\|T_{2,n}x_n-x_n-T_{2,n}x+x\|^2<\ensuremath{+\infty}$. \item \label{t:2ii} Suppose that a subsequence of $(x_n)_{n\in\ensuremath{\mathbb N}}$ converges to a point in $S$. Then $(x_n)_{n\in\ensuremath{\mathbb N}}$ converges to a point in $S$. \end{enumerate} \end{theorem} \begin{remark} \label{r:mi} The assumption in Theorem~\ref{t:2}\ref{t:2ii} holds in particular when, for every $n\in\ensuremath{\mathbb N}$, $T_{1,n}=T_1$ and $T_{2,n}=T_2$. \end{remark} Below, we present a variant of Theorem~\ref{t:1} obtained by considering the composition of $m$ operators. In the case of firmly nonexpansive operators, this result is due to Martinet \cite{Mart72}. \begin{theorem}[\cite{Opti04}] \label{t:5} For every $i\in\{1,\ldots,m\}$, let $\alpha_i\in\zeroun$ and let $T_i\colon\mathcal H\to\mathcal H$ be $\alpha_i$-averaged. Let $x_0\in\mathcal H$, suppose that $\ensuremath{\text{\rm Fix}\,}(T_1\circ\cdots\circ T_m)\neq\ensuremath{\varnothing}$, and iterate \begin{equation} \label{e:2004} \begin{array}{l} \text{for}\;n=0,1,\ldots\\ \left\lfloor \begin{array}{ll} x_{mn+1}&\hskip -3mm={T_m}x_{mn}\\ x_{mn+2}&\hskip -3mm={T_{m-1}}x_{mn+1}\\ &\hskip -2mm\vdots\\ x_{mn+m-1}&\hskip -3mm={T_2}x_{mn+m-2}\\ x_{mn+m}&\hskip -3mm={T_1}x_{mn+m-1}. 
\end{array} \right.\\[2mm] \end{array} \end{equation} Then $(x_{mn})_{n\in\ensuremath{\mathbb N}}$ converges to a point $\overline{x}_1$ in $\ensuremath{\text{\rm Fix}\,}(T_1\circ\cdots\circ T_m)$. Now set $\overline{x}_m=T_m\overline{x}_1$, $\overline{x}_{m-1}=T_{m-1}\overline{x}_m$, \ldots, $\overline{x}_2=T_2\overline{x}_3$. Then, for every $i\in\{1,\ldots,m-1\}$, $(x_{mn+i})_{n\in\ensuremath{\mathbb N}}$ converges to $\overline{x}_{m+1-i}$. \end{theorem} \subsection{Algorithms for fixed point selection} The algorithms discussed so far construct an unspecified fixed point of a nonexpansive operator $T\colon\mathcal H\to\mathcal H$. In some applications, one may be interested in finding a specific fixed point, for instance one of minimum norm or, more generally, one that minimizes some quadratic function \cite{Artz79,Sign03}. One will find in \cite{Sign03} several algorithms to minimize convex quadratic functions over fixed point sets, as well as signal recovery applications. Beyond quadratic selection, one may wish to minimize a strictly convex function $g\in\Gamma_0(\mathcal H)$ over the set $\ensuremath{\text{\rm Fix}\,} T$, which is closed and convex (see Proposition~\ref{p:f1}), i.e., \begin{equation} \label{e:bi} \minimize{x\in\ensuremath{\text{\rm Fix}\,} T} g(x). \end{equation} Instances of such formulations can be found in signal interpolation \cite{Ono15} and machine learning \cite{Naka20}. Algorithms to solve \eqref{e:bi} have been proposed in \cite{Sico00,Hirs06,Yama01} under various hypotheses. Here is an example. 
\begin{proposition}[\cite{Yama01}] \label{y:isao} Let $T\colon\mathcal H\to\mathcal H$ be nonexpansive, let $g\colon\mathcal H\to\ensuremath{\mathbb R}$ be strongly convex and differentiable with a Lipschitzian gradient, let $x_0\in\mathcal H$, and let $(\alpha_n)_{n\in\ensuremath{\mathbb N}}$ be a sequence in $[0,1]$ such that $\alpha_n\to 0$, $\sum_{n\in\ensuremath{\mathbb N}}\alpha_n=\ensuremath{+\infty}$, and $\sum_{n\in\ensuremath{\mathbb N}}|\alpha_{n+1}-\alpha_n|<\ensuremath{+\infty}$. Suppose that \eqref{e:bi} has a solution and iterate \begin{equation} (\forall n\in\ensuremath{\mathbb N})\quad x_{n+1}=Tx_n-\alpha_n\nabla g(Tx_n). \end{equation} Then $(x_n)_{n\in\ensuremath{\mathbb N}}$ converges to the solution to \eqref{e:bi}. \end{proposition} \subsection{A fixed point method with block operator updates} \label{sec:upda} We turn our attention to a composite fixed point problem. \begin{problem} \label{prob:21} Let $(\omega_i)_{1\leq i\leq m}$ be real numbers in $\rzeroun$ such that $\sum_{i=1}^m\omega_i=1$. For every $i\in\{0,\ldots,m\}$, let $T_i\colon\mathcal H\to\mathcal H$ be $\alpha_i$-averaged for some $\alpha_i\in\zeroun$. The task is to find a fixed point of $T_0\circ\sum_{i=1}^m\omega_iT_i$, assuming that such a point exists. \end{problem} A simple strategy to solve Problem~\ref{prob:21} is to set $R=\sum_{i=1}^m\omega_iT_i$, observe that $R$ is averaged by Proposition~\ref{p:av2}, and then use Theorem~\ref{t:2} and Remark~\ref{r:mi} to find a fixed point of $T_0\circ R$. This, however, requires the activation of the $m$ operators $(T_i)_{1\leq i\leq m}$ to evaluate $R$ at each iteration, which is a significant computational burden when $m$ is sizable. 
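As a toy illustration of this naive strategy, consider the following sketch; the specific choices ($T_0=\ensuremath{\mathrm{Id}}$, $m=2$, and projections onto real intervals, which are firmly nonexpansive and hence $1/2$-averaged) are assumptions made purely for illustration.

```python
import math

def proj_interval(lo, hi):
    # Projection onto [lo, hi]: firmly nonexpansive, hence 1/2-averaged.
    return lambda x: min(max(x, lo), hi)

T1 = proj_interval(2.0, math.inf)     # projection onto [2, +oo)
T2 = proj_interval(-math.inf, 0.0)    # projection onto (-oo, 0]

def R(x, w=(0.5, 0.5)):
    # R = sum_i w_i T_i: every T_i must be activated at each evaluation of R.
    return w[0] * T1(x) + w[1] * T2(x)

x = 5.0
for _ in range(60):
    x = R(x)   # Banach-Picard iterates of the averaged map R (lambda_n = 1)
# x converges to 1, the unique fixed point of R = (T1 + T2)/2
```

Since $R$ is $1/2$-averaged, Theorem~\ref{t:1} with $\lambda_n\equiv 1$ applies; note that both projections are evaluated at every iteration.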
In the degenerate case when the operators $(T_i)_{0\leq i\leq m}$ have common fixed points, Problem~\ref{prob:21} amounts to finding such a point (see Propositions~\ref{p:f2} and \ref{p:f3}) and this can be done using the strategies devised in \cite{Baus96,Nume06,Aiep96,Lopu97}, which require only the activation of blocks of operators at each iteration. Such approaches fail in our more challenging setting, which assumes only that $\ensuremath{\text{\rm Fix}\,}(T_0\circ\sum_{i=1}^m\omega_iT_i)\neq\ensuremath{\varnothing}$. However, with a strategy based on tools from mean iteration theory \cite{Siop17}, it is possible to devise an algorithm which operates by updating only a block of operators $(T_i)_{i\in I_n}$ at iteration $n$. \begin{theorem}[\cite{Upda20}] \label{t:3} Consider the setting of Problem~\ref{prob:21}. Let $M$ be a strictly positive integer and let $(I_n)_{n\in\ensuremath{\mathbb N}}$ be a sequence of nonempty subsets of $\{1,\ldots,m\}$ such that \begin{equation} \label{e:K} (\forall n\in\ensuremath{\mathbb N})\quad\bigcup_{k=n}^{n+M-1}I_k=\{1,\ldots,m\}. \end{equation} Let $x_0\in\mathcal H$, let $(t_{i,-1})_{1\leq i\leq m}\in\mathcal H^m$, and iterate \begin{equation} \label{e:a2} \begin{array}{l} \text{for}\;n=0,1,\ldots\\ \left\lfloor \begin{array}{l} \text{for every}\;i\in I_n\\ \left\lfloor \begin{array}{l} t_{i,n}=T_ix_n \end{array} \right.\\ \text{for every}\;i\in\{1,\ldots,m\}\smallsetminus I_n\\ \left\lfloor \begin{array}{l} t_{i,n}=t_{i,n-1}\\ \end{array} \right.\\[1mm] x_{n+1}=T_0\big(\sum_{i=1}^m\omega_it_{i,n}\big). \end{array} \right.\\ \end{array} \end{equation} Then the following hold: \begin{enumerate} \item \label{c:1i} Let $x$ be a solution to Problem~\ref{prob:21} and let $i\in\{1,\ldots,m\}$. Then $x_n-T_ix_n\to x-T_ix$. \item \label{c:1ii} $(x_n)_{n\in\ensuremath{\mathbb N}}$ converges to a solution to Problem~\ref{prob:21}. \item \label{c:1iv} Suppose that, for some $i\in\{0,\ldots,m\}$, $T_i$ is a Banach contraction. 
Then $(x_n)_{n\in\ensuremath{\mathbb N}}$ converges linearly to the unique solution to Problem~\ref{prob:21}. \end{enumerate} \end{theorem} At iteration $n$, $I_n$ is the set of indices of operators to be activated. The remaining operators are not used and their most recent evaluations are recycled to form the update $x_{n+1}$. Condition \eqref{e:K} imposes the mild requirement that each operator in $(T_i)_{1\leq i\leq m}$ be evaluated at least once over the course of any $M$ consecutive iterations. The choice of $M$ is left to the user. \subsection{Perturbed fixed point methods} For various modeling or computational reasons, exact evaluations of the operators in fixed point algorithms may not be possible. Such perturbations can be modeled by deterministic additive errors \cite{Opti04,Lema96,Mart72} but also by stochastic ones \cite{Comb15,Ermo69}. Here is a stochastically perturbed version of Theorem~\ref{t:1}, which is a straightforward variant of \cite[Corollary~2.7]{Comb15}. \begin{theorem} \label{t:1stoch} Let $\alpha\in\rzeroun$, let $T\colon\mathcal H\to\mathcal H$ be an $\alpha$-averaged operator such that $\ensuremath{\text{\rm Fix}\,} T\neq\ensuremath{\varnothing}$, and let $(\lambda_n)_{n\in\ensuremath{\mathbb N}}$ be an $\alpha$-relaxation sequence. Let $x_0$ and $(e_n)_{n\in\ensuremath{\mathbb N}}$ be $\mathcal H$-valued random variables. Set \begin{equation} \label{e:groetsch2} (\forall n\in\ensuremath{\mathbb N})\quad x_{n+1}=x_n+\lambda_n\big(Tx_n+e_n-x_n\big). \end{equation} Suppose that $\sum_{n\in\ensuremath{\mathbb N}}\lambda_n\sqrt{\EC{\|e_n\|^2}{\ensuremath{\EuScript{X}}_n}}<\ensuremath{+\infty}$ $\ensuremath{\text{a.~\!s.}}$, where $\ensuremath{\EuScript{X}}_n$ is the $\sigma$-algebra generated by $(x_0,\ldots,x_n)$. Then $(x_n)_{n\in\ensuremath{\mathbb N}}$ converges $\ensuremath{\text{a.~\!s.}}$ to a $(\ensuremath{\text{\rm Fix}\,}{T})$-valued random variable. 
\end{theorem} \subsection{Random block-coordinate fixed point methods} \label{sec:87} We have seen in Section~\ref{sec:upda} that the computational cost per iteration could be reduced in certain fixed point algorithms by updating only some of the operators involved in the model. In this section, we present another approach to reduce the iteration cost by considering scenarios in which the underlying Euclidean space $\ensuremath{\boldsymbol{\mathcal H}}$ is decomposable into $m$ factors $\ensuremath{\boldsymbol{\mathcal H}}=\mathcal H_1\times\cdots\times\mathcal H_m$. In the spirit of the Gauss-Seidel algorithm, one can explore the possibility of activating only some of the coordinates of certain operators at each iteration of a fixed point method. The potential advantages of such a procedure are a reduced computational cost per iteration, reduced memory requirements, and increased implementation flexibility. In the product space $\ensuremath{\boldsymbol{\mathcal H}}$, consider the basic update process \begin{equation} \label{e:basiciter} \boldsymbol{x}_{n+1}= \boldsymbol{T}_{\!n}\boldsymbol{x}_n, \end{equation} under the assumption that the operator $\boldsymbol{T}_{\!n}$ is decomposable explicitly as \begin{equation} \boldsymbol{T}_{\!n}\colon\ensuremath{\boldsymbol{\mathcal H}}\to\ensuremath{\boldsymbol{\mathcal H}}\colon \boldsymbol{x}\mapsto (T_{1,n}\boldsymbol{x},\ldots, T_{m,n}\boldsymbol{x}), \end{equation} with $T_{i,n}\colon\ensuremath{\boldsymbol{\mathcal H}}\to\mathcal H_i$. Updating only some coordinates is performed by modifying iteration \eqref{e:basiciter} as \begin{equation} \label{e:stocbasiciter} (\forall i\in\{1,\ldots,m\})\quad x_{i,n+1}=x_{i,n}+\varepsilon_{i,n} \big(T_{i,n}\boldsymbol{x}_n-x_{i,n}\big), \end{equation} where $\varepsilon_{i,n}\in\{0,1\}$ signals the activation of the $i$-th coordinate of $\boldsymbol{x}_n$. If $\varepsilon_{i,n}=1$, the $i$-th component is updated whereas, if $\varepsilon_{i,n}=0$, it is unchanged. 
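A minimal sketch of the update rule \eqref{e:stocbasiciter}; every specific below ($m=2$, one-dimensional factors, a stationary $1/2$-averaged map built from a plane rotation, and activation variables drawn independently at random) is an assumption made purely for illustration.

```python
import math, random

theta = math.pi / 3
c, s = math.cos(theta), math.sin(theta)

def T(x):
    # T = (Id + R_theta)/2, a 1/2-averaged map whose only fixed point is (0, 0).
    rx, ry = c * x[0] - s * x[1], s * x[0] + c * x[1]
    return [(x[0] + rx) / 2, (x[1] + ry) / 2]

random.seed(1)
x = [4.0, -3.0]
for _ in range(2000):
    t = T(x)
    for i in range(2):
        if random.random() < 0.5:   # activation variable eps_{i,n} = 1
            x[i] = t[i]             # the i-th coordinate is updated
        # eps_{i,n} = 0: the i-th coordinate is left unchanged
```

Individual steps may increase the distance to the fixed point, but with random activations the iterates still drift to $(0,0)$, in line with the convergence theory discussed in this section.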
The main difficulty facing such an approach is that the nonexpansiveness property of an operator is usually destroyed by coordinate sampling. To remove this roadblock, a possibility is to make the activation variables random, which results in a stochastic algorithm for which almost sure convergence holds \cite{Comb15,Iutz13}. \begin{theorem}[\cite{Comb15}] Let $\alpha\in\rzeroun$, let $\epsilon\in\left]0,1/2\right[$, and let $\boldsymbol{T}\colon\ensuremath{\boldsymbol{\mathcal H}}\to\ensuremath{\boldsymbol{\mathcal H}}\colon \boldsymbol{x}\mapsto(T_{\!i}\, \boldsymbol{x})_{1\leq i\leq m}$ be an $\alpha$-averaged operator where ${T}_i\colon\ensuremath{\boldsymbol{\mathcal H}}\to\mathcal H_i$. Let $(\lambda_n)_{n\in\ensuremath{\mathbb N}}$ be in $[\epsilon,\alpha^{-1}-\epsilon]$, set $D=\{0,1\}^m\smallsetminus \{\boldsymbol{0}\}$, let $\boldsymbol{x}_0$ be an $\ensuremath{\boldsymbol{\mathcal H}}$-valued random variable, and let $(\boldsymbol{\varepsilon}_n)_{n\in\ensuremath{\mathbb N}}$ be identically distributed $D$-valued random variables. Iterate \begin{equation} \label{e:2014-02-09} \begin{array}{l} \text{for}\;n=0,1,\ldots\\ \left\lfloor \begin{array}{l} \text{for}\;i=1,\ldots,m\\ \left\lfloor \begin{array}{l} x_{i,n+1}=x_{i,n}+\varepsilon_{i,n}\lambda_n\big( T_i\boldsymbol{x}_n-x_{i,n}\big). \end{array} \right. \end{array} \right. \end{array} \end{equation} In addition, assume that the following hold: \begin{enumerate} \item $\ensuremath{\text{\rm Fix}\,}\boldsymbol{T}\neq\ensuremath{\varnothing}$. \item For every $n\in\ensuremath{\mathbb N}$, $\boldsymbol{\varepsilon}_n$ and $(\boldsymbol{x}_0,\ldots,\boldsymbol{x}_n)$ are mutually independent. \item $(\forall i\in\{1,\ldots,m\})$ $\ensuremath{\mathsf{Prob}\,}[\varepsilon_{i,0}=1]>0$. \end{enumerate} Then $(\boldsymbol{x}_n)_{n\in\ensuremath{\mathbb N}}$ converges $\ensuremath{\text{a.~\!s.}}$ to a $\ensuremath{\text{\rm Fix}\,}\boldsymbol{T}$-valued random variable. 
\end{theorem} Further results in this vein for iterations involving nonstationary compositions of averaged operators can be found in \cite{Comb15}. Mean square convergence results are also available under additional assumptions on the operators $(\boldsymbol{T}_{\!n})_{n\in\ensuremath{\mathbb N}}$ \cite{Comb19}. \section{Fixed point modeling of monotone inclusions} \label{sec:4} \subsection{Splitting sums of monotone operators} \label{sec:A} Our first basic model is that of finding a zero of the sum of two monotone operators. It will be seen to be central in understanding and solving data science problems in optimization form (see also Example~\ref{ex:m+s} for a special case) and beyond. \begin{problem} \label{prob:7} Let $A\colon\mathcal H\to 2^{\mathcal H}$ and $B\colon\mathcal H\to 2^{\mathcal H}$ be maximally monotone operators. The task is to \begin{equation} \label{e:r29p} \text{find}\;\;x\in\mathcal H\;\;\text{such that}\;\;0\in Ax+Bx, \end{equation} under the assumption that a solution exists. \end{problem} A classical method for solving Problem~\ref{prob:7} is the \emph{Douglas-Rachford} algorithm, which was first proposed in \cite{Lion79} (see also \cite{Ecks92}; the following relaxed version is from \cite{Eoop01}). \begin{proposition}[Douglas-Rachford splitting] \label{p:DR} Let $(\lambda_n)_{n\in\ensuremath{\mathbb N}}$ be a $1/2$-relaxation sequence, let $\gamma\in\ensuremath{\left]0,+\infty\right[}$, and let $y_0\in\mathcal H$. Iterate \begin{equation} \label{e:dr4} \begin{array}{l} \text{for}\;n=0,1,\ldots\\ \left\lfloor \begin{array}{l} x_n=J_{\gamma B}y_n\\ z_n=J_{\gamma A}(2x_n-y_n)\\ y_{n+1}=y_n+\lambda_n(z_n-x_n). \end{array} \right.\\[2mm] \end{array} \end{equation} Then $(x_n)_{n\in\ensuremath{\mathbb N}}$ converges to a solution to Problem~\ref{prob:7}. \end{proposition} The Douglas-Rachford algorithm requires the ability to evaluate two resolvents at each iteration. 
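A minimal numerical sketch of \eqref{e:dr4}; the instance below, with $A=\partial\|\cdot\|_1$ and $B=\nabla\big(\frac{1}{2}\|\cdot-b\|^2\big)$ so that both resolvents have closed forms, is an assumption made purely for illustration.

```python
import numpy as np

def prox_l1(y, g):
    # J_{g A} for A = subdifferential of ||.||_1: soft thresholding.
    return np.sign(y) * np.maximum(np.abs(y) - g, 0.0)

def prox_quad(y, g, b):
    # J_{g B} for B = nabla(0.5*||. - b||^2): an affine map.
    return (y + g * b) / (1.0 + g)

def douglas_rachford(b, gamma=1.0, lam=1.0, n_iter=100):
    y = np.zeros_like(b)
    for _ in range(n_iter):
        x = prox_quad(y, gamma, b)      # x_n = J_{gamma B} y_n
        z = prox_l1(2 * x - y, gamma)   # z_n = J_{gamma A}(2 x_n - y_n)
        y = y + lam * (z - x)           # y_{n+1} = y_n + lambda_n (z_n - x_n)
    return prox_quad(y, gamma, b)

b = np.array([3.0, -0.5, 2.0])
x_sol = douglas_rachford(b)
# x_sol solves 0 in d||x||_1 + (x - b), i.e., x_sol = [2, 0, 1]
```

The limit is the soft thresholding of $b$ at level $1$; in this strongly monotone instance the convergence happens to be geometric, whereas Proposition~\ref{p:DR} only guarantees convergence of $(x_n)_{n\in\ensuremath{\mathbb N}}$ in general.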
However, if one of the operators is single-valued and Lipschitzian, it is possible to apply it explicitly, hence requiring only one resolvent evaluation per iteration. The resulting algorithm, proposed by Tseng \cite{Tsen00}, is often called the \emph{forward-backward-forward} splitting algorithm since it involves two explicit (forward) steps using $B$ and one implicit (backward) step using $A$. \begin{proposition}[Tseng splitting] \label{p:khk09} In Problem~\ref{prob:7}, assume that $B$ is $\delta$-Lipschitzian for some $\delta\in\ensuremath{\left]0,+\infty\right[}$. Let $x_0\in\mathcal H$, let $\varepsilon\in\left]0,1/(\delta+1)\right[$, let $(\gamma_n)_{n\in\ensuremath{\mathbb N}}$ be in $[\varepsilon,(1-\varepsilon)/\delta]$, and iterate \begin{equation} \label{e:rio1010} \begin{array}{l} \text{for}\;n=0,1,\ldots\\ \left\lfloor \begin{array}{l} y_n=x_n-\gamma_n Bx_n\\ z_n=J_{\gamma_n A}y_n\\ r_n=z_n-\gamma_n Bz_n\\ x_{n+1}=x_n-y_n+r_n. \end{array} \right.\\[2mm] \end{array} \end{equation} Then $(x_n)_{n\in\ensuremath{\mathbb N}}$ converges to a solution to Problem~\ref{prob:7}. \end{proposition} As noted in Section~\ref{sec:23}, if $B$ is $\beta$-cocoercive for some $\beta\in\ensuremath{\left]0,+\infty\right[}$, then it is Lipschitzian, and Proposition~\ref{p:khk09} is applicable. However, in this case it is possible to devise an algorithm which requires only one application of $B$ per iteration, as opposed to two in \eqref{e:rio1010}. To see this, let $\gamma_n\in\left]0,2\beta\right[$ and $x\in\mathcal H$. Then it follows at once from \eqref{e:gm} that $x$ solves Problem~\ref{prob:7} $\Leftrightarrow$ $-\gamma_nBx\in\gamma_n Ax$ $\Leftrightarrow$ $(x-\gamma_nBx)-x\in\gamma_n Ax$ $\Leftrightarrow$ $x=J_{\gamma_n A}(x-\gamma_nBx)$ $\Leftrightarrow$ $x\in\ensuremath{\text{\rm Fix}\,}(T_{1,n}\circ T_{2,n})$, where $T_{1,n}=J_{\gamma_n A}$ and $T_{2,n}=\ensuremath{\mathrm{Id}}-\gamma_nB$. As seen in Theorem~\ref{t:minty}, $T_{1,n}$ is $1/2$-averaged. 
On the other hand, we derive from Proposition~\ref{p:2004-5} that, if $\alpha_{2,n}=\gamma_n/(2\beta)$, then $T_{2,n}$ is $\alpha_{2,n}$-averaged. With these considerations, we invoke Theorem~\ref{t:2} to obtain the following algorithm, which goes back to \cite{Merc79}. \begin{proposition}[forward-backward splitting \cite{Jmaa15}] \label{p:fb13} Suppose that, in Problem~\ref{prob:7}, $B$ is $\beta$-cocoercive for some $\beta\in\ensuremath{\left]0,+\infty\right[}$. Let $\varepsilon\in\left]0,\min\{1/2,\beta\}\right[$, let $x_0\in\mathcal H$, and let $(\gamma_n)_{n\in\ensuremath{\mathbb N}}$ be in $\left[\varepsilon,2\beta/(1+\varepsilon)\right]$. Let \begin{equation} (\forall n\in\ensuremath{\mathbb N})\quad \lambda_n\in\big[\varepsilon,(1-\varepsilon) \big(2+\varepsilon-{\gamma_n}/(2\beta)\big)\big]. \end{equation} Iterate \begin{equation} \label{e:FB1} \begin{array}{l} \text{for}\;n=0,1,\ldots\\ \left\lfloor \begin{array}{l} u_n=x_n-\gamma_n Bx_n\\ x_{n+1}=x_n+\lambda_n\big(J_{\gamma_n A}u_n-x_n\big). \end{array} \right.\\[2mm] \end{array} \end{equation} Then $(x_n)_{n\in\ensuremath{\mathbb N}}$ converges to a solution to Problem~\ref{prob:7}. \end{proposition} We now turn our attention to a more structured version of Problem~\ref{prob:7}, which includes an additional Lipschitzian monotone operator. \begin{problem} \label{prob:8} Let $A\colon\mathcal H\to 2^{\mathcal H}$ and $B\colon\mathcal H\to 2^{\mathcal H}$ be maximally monotone operators, let $\delta\in\ensuremath{\left]0,+\infty\right[}$, and let $C\colon\mathcal H\to\mathcal H$ be monotone and $\delta$-Lipschitzian. The task is to \begin{equation} \label{e:r29q} \text{find}\;\;x\in\mathcal H\;\;\text{such that}\;\;0\in Ax+Bx+Cx, \end{equation} under the assumption that a solution exists. \end{problem} The following approach also provides a dual solution. 
\begin{proposition}[splitting three operators I \cite{Svva12}] \label{p:71} Consider Problem~\ref{prob:8} and let $\varepsilon\in\left]0,1/(2+\delta)\right[$. Let $(\gamma_n)_{n\in\ensuremath{\mathbb N}}$ be in $\left[\varepsilon,(1-\varepsilon)/(1+\delta)\right]$, let $x_0\in\mathcal H$, and let $u_0\in\mathcal H$. Iterate \begin{equation} \label{e:3ops1} \begin{array}{l} \text{for}\;n=0,1,\ldots\\ \left\lfloor \begin{array}{l} y_n=x_n-\gamma_n(Cx_n+u_n)\\ p_n=J_{\gamma_n A}\,y_n\\ q_n=u_n+\gamma_n\big(x_n-J_{B/\gamma_n}(u_n/\gamma_n+x_n)\big)\\ x_{n+1}=x_n-y_n+p_n-\gamma_n(Cp_n+q_n)\\ u_{n+1}=q_n+\gamma_n(p_n-x_n). \end{array} \right.\\[2mm] \end{array} \end{equation} Then $(x_n)_{n\in\ensuremath{\mathbb N}}$ converges to a solution to Problem~\ref{prob:8} and $(u_n)_{n\in\ensuremath{\mathbb N}}$ converges to a solution $u$ to the dual problem, i.e., $0\in-(A+C)^{-1}(-u)+B^{-1}u$. \end{proposition} When $C$ is $\beta$-cocoercive in Problem~\ref{prob:8}, we can take $\delta=1/\beta$. In this setting, an alternative algorithm is obtained as follows. Let us fix $\gamma\in\ensuremath{\left]0,+\infty\right[}$ and define \begin{align} \label{e:T3ops} T=J_{\gamma A}\circ \big(2J_{\gamma B}-\ensuremath{\mathrm{Id}}-\gamma C\circ J_{\gamma B}\big)+\ensuremath{\mathrm{Id}}-J_{\gamma B}. \end{align} By setting $T_1=J_{\gamma A}$, $T_2=J_{\gamma B}$, and $T_3=\ensuremath{\mathrm{Id}}-\gamma C$ in Proposition~\ref{p:3magic}, we deduce from Proposition~\ref{p:2004-5} that, if $\gamma\in\left]0,2\beta\right[$ and $\alpha=2\beta/(4\beta-\gamma)$, then $T$ is $\alpha$-averaged. Now take $y\in\mathcal H$ and set $x=J_{\gamma B}y$, hence $y-x\in\gamma Bx$ by \eqref{e:gm}. Then $y\in\ensuremath{\text{\rm Fix}\,} T$ $\Leftrightarrow$ $J_{\gamma A}(2x-y-\gamma Cx)+y-x=y$ $\Leftrightarrow$ $J_{\gamma A}(2x-y-\gamma Cx)=x$ $\Leftrightarrow$ $x-y-\gamma Cx\in\gamma Ax$ by \eqref{e:gm}. Thus, $0=(x-y-\gamma Cx)+(y-x)+\gamma Cx\in\gamma(Ax+Bx+Cx)$, which shows that $x$ solves Problem~\ref{prob:8}. 
Altogether, since $y$ can be constructed via Theorem~\ref{t:1}, we obtain the following convergence result. \begin{proposition}[splitting three operators II \cite{Davi17}] \label{p:72} In Problem~\ref{prob:8}, assume that $C$ is $\beta$-cocoercive for some $\beta\in\ensuremath{\left]0,+\infty\right[}$. Let $\gamma\in\left]0,2\beta\right[$ and set $\alpha=2\beta/(4\beta-\gamma)$. Furthermore, let $(\lambda_n)_{n\in\ensuremath{\mathbb N}}$ be an $\alpha$-relaxation sequence and let $y_0\in\mathcal H$. Iterate \begin{equation} \label{e:3ops2} \begin{array}{l} \text{for}\;n=0,1,\ldots\\ \left\lfloor \begin{array}{l} x_n=J_{\gamma B}\,y_n\\ r_n=y_n+\gamma C x_n\\ z_n=J_{\gamma A}(2x_n-r_n)\\ y_{n+1}=y_n+\lambda_n(z_n-x_n). \end{array} \right.\\[2mm] \end{array} \end{equation} Then $(x_n)_{n\in\ensuremath{\mathbb N}}$ converges to a solution to Problem~\ref{prob:8}. \end{proposition} \begin{remark} \label{re:3op}\ \begin{enumerate} \item Work closely related to Proposition~\ref{p:72} can be found in \cite{Brice15,Bric18,Ragu13}. See also \cite{Ragu19}, which provides further developments and a discussion of \cite{Brice15,Davi17,Ragu13}. \item Unlike algorithm \eqref{e:3ops1}, \eqref{e:3ops2} imposes constant proximal parameters and requires the cocoercivity of $C$, but it involves only one application of $C$ per iteration. An extension of \eqref{e:3ops2} appears in \cite{Yanm18} in the context of minimization problems. \end{enumerate} \end{remark} \subsection{Splitting sums of composite monotone operators} \label{se:splitmon} The monotone inclusion problems of Section~\ref{sec:A} are instantiations of the following formulation, which involves an arbitrary number of maximally monotone operators and compositions with linear operators. \begin{problem} \label{prob:88} Let $\delta\in\ensuremath{\left]0,+\infty\right[}$ and let $A\colon\mathcal H\to 2^\mathcal H$ be maximally monotone. 
For every $k\in\{1,\ldots,q\}$, let $B_k\colon\mathcal G_k\to 2^{\mathcal G_k}$ be maximally monotone, let $0\neq L_k\colon\mathcal H\to\mathcal G_k$ be linear, and let $C_k\colon\mathcal G_k\to\mathcal G_k$ be monotone and $\delta$-Lipschitzian. The task is to \begin{multline} \label{e:r29qq} \text{find}\;\;x\in\mathcal H\;\;\text{such that}\\ 0\in Ax+\sum_{k=1}^q L_k^*\big((B_k+C_k)(L_kx)\big), \end{multline} under the assumption that a solution exists. \end{problem} In the context of Problem~\ref{prob:88}, the principle of a splitting algorithm is to involve all the operators individually. In the case of a set-valued operator $A$ or $B_k$, this means using the associated resolvent, whereas in the case of a single-valued operator $C_k$ or $L_k$, a direct application can be considered. An immediate difficulty one faces with \eqref{e:r29qq} is that it involves many set-valued operators. However, since inclusion is a binary relation, for reasons discussed in \cite{Siop11,Play13} and analyzed in more depth in \cite{Ryue20}, it is not possible to deal with more than two such operators. To circumvent this fundamental limitation, a strategy is to rephrase Problem~\ref{prob:88} as a problem involving at most two set-valued operators in a larger space. This strategy finds its root in convex feasibility problems \cite{Pier84} and it was first adapted to the problem of finding a zero of the sum of $m$ operators in \cite{Gols87,Spin83}. In \cite{Siop11}, it was used to deal with the presence of linear operators (see in particular Example~\ref{ex:m+s}), with further developments in \cite{Botr13,Botr14,Icip14,Svva12,Bang13}. 
In the same spirit, let us reformulate Problem~\ref{prob:88} by introducing \begin{equation} \label{e:d9} \begin{cases} \boldsymbol{L}\colon\mathcal H\to\ensuremath{\boldsymbol{\mathcal G}}\colon x\mapsto (L_1x,\ldots,L_qx)\\ \boldsymbol{B}\colon\ensuremath{\boldsymbol{\mathcal G}}\to 2^{\ensuremath{\boldsymbol{\mathcal G}}}\colon (y_k)_{1\leq k\leq q}\mapsto\cart_{\!k=1}^{\!q}B_ky_k\\ \boldsymbol{C}\colon\ensuremath{\boldsymbol{\mathcal G}}\to\ensuremath{\boldsymbol{\mathcal G}}\colon (y_k)_{1\leq k\leq q}\mapsto (C_ky_k)_{1\leq k\leq q}\\ \boldsymbol{V}=\ensuremath{\mathrm{range}\,}\boldsymbol{L}. \end{cases} \end{equation} Note that $\boldsymbol{L}$ is linear, $\boldsymbol{B}$ is maximally monotone, and $\boldsymbol{C}$ is monotone and $\delta$-Lipschitzian. In addition, the inclusion \eqref{e:r29qq} can be rewritten more concisely as \begin{equation} \label{e:r29qq2} \text{find}\;\;x\in\mathcal H\;\;\text{such that}\;\; 0\in Ax+\boldsymbol{L}^*\big((\boldsymbol{B}+\boldsymbol{C}) (\boldsymbol{L}x)\big). \end{equation} In particular, suppose that $A=0$. Then, upon setting $\boldsymbol{y}=\boldsymbol{L}x\in\boldsymbol{V}$, we obtain the existence of a point $\boldsymbol{u}\in (\boldsymbol{B}+\boldsymbol{C})\boldsymbol{y}$ in $\ker\boldsymbol{L}^*=\boldsymbol{V}^\perp$. In other words, \begin{equation} \label{e:reforprob88} \boldsymbol{0}\in N_{\boldsymbol{V}}\boldsymbol{y}+\boldsymbol{B}\boldsymbol{y}+ \boldsymbol{C}\boldsymbol{y}. \end{equation} Solving this inclusion is equivalent to solving a problem similar to Problem~\ref{prob:8}, formulated in $\ensuremath{\boldsymbol{\mathcal G}}$. Thus, applying Proposition~\ref{p:72} to \eqref{e:reforprob88} leads to the following result. \begin{proposition} \label{p:71b} In Problem~\ref{prob:88}, suppose that $A=0$, that the operators $(C_k)_{1\leq k\leq q}$ are $\beta$-cocoercive for some $\beta\in\ensuremath{\left]0,+\infty\right[}$, and that $Q=\sum_{k=1}^qL_k^*\circ L_k$ is invertible. 
Let $\gamma\in\left]0,2\beta\right[$, set $\alpha=2\beta/(4\beta-\gamma)$, and let $(\lambda_n)_{n\in\ensuremath{\mathbb N}}$ be an $\alpha$-relaxation sequence. Further, let $\boldsymbol{y}_0\in\ensuremath{\boldsymbol{\mathcal G}}$, set $s_0=Q^{-1}\Big(\sum_{k=1}^qL_k^*y_{0,k}\Big)$, and iterate \begin{equation} \label{e:3opsbis} \begin{array}{l} \text{for}\;n=0,1,\ldots\\ \left\lfloor \begin{array}{l} \text{for}\;k=1,\ldots,q\\ \left\lfloor \begin{array}{l} p_{n,k}=J_{\gamma B_k}\,y_{n,k}\\ \end{array} \right.\\[2mm] x_n=Q^{-1}\big(\sum_{k=1}^qL_k^* p_{n,k}\big)\\[1mm] c_n=Q^{-1}\big(\sum_{k=1}^qL_k^* C_k p_{n,k}\big)\\ z_n=x_n-s_n-\gamma c_n\\ \text{for}\;k=1,\ldots,q\\ \left\lfloor \begin{array}{l} y_{n+1,k}=y_{n,k}+\lambda_n (x_n+z_n-p_{n,k}) \end{array} \right.\\[2mm] s_{n+1}=s_n+\lambda_n z_n. \end{array} \right.\\[2mm] \end{array} \end{equation} Then $(x_n)_{n\in\ensuremath{\mathbb N}}$ converges to a solution to \eqref{e:r29qq}. \end{proposition} A strategy for handling Problem~\ref{prob:88} in its general setting consists of introducing an auxiliary variable $\boldsymbol{v}\in\boldsymbol{B}(\boldsymbol{L}x)$ in \eqref{e:r29qq2}, which can then be rewritten as \begin{equation} \label{e:monskew} \begin{cases} 0\in Ax+\boldsymbol{L}^*\boldsymbol{v} +\boldsymbol{L}^*\big(\boldsymbol{C}(\boldsymbol{L}x)\big)\\ \boldsymbol{0}\in -\boldsymbol{L} x +\boldsymbol{B}^{-1} \boldsymbol{v}. 
\end{cases} \end{equation} This results in an instantiation of Problem~\ref{prob:7} in $\ensuremath{\boldsymbol{\mathcal K}}=\mathcal H\times\ensuremath{\boldsymbol{\mathcal G}}$ involving the maximally monotone operators \begin{equation} \begin{cases} \boldsymbol{\mathcal{A}}_1\colon\ensuremath{\boldsymbol{\mathcal K}}\to 2^{\ensuremath{\boldsymbol{\mathcal K}}}\colon (x,\boldsymbol{v})&\mapsto \begin{bmatrix} A & 0\\ 0 & \boldsymbol{B}^{-1} \end{bmatrix} \begin{bmatrix} x\\ \boldsymbol{v} \end{bmatrix}\\[4mm] \boldsymbol{\mathcal{B}}_1\colon\ensuremath{\boldsymbol{\mathcal K}}\to\ensuremath{\boldsymbol{\mathcal K}} \colon (x, \boldsymbol{v})&\mapsto\begin{bmatrix} \boldsymbol{L}^*\circ\boldsymbol{C}\circ\boldsymbol{L} & \boldsymbol{L}^*\\ -\boldsymbol{L} & 0 \\ \end{bmatrix} \begin{bmatrix} x\\ \boldsymbol{v} \end{bmatrix}. \end{cases} \end{equation} We observe that, in $\ensuremath{\boldsymbol{\mathcal K}}$, $\boldsymbol{\mathcal{B}}_1$ is Lipschitzian with constant $\chi=\|\boldsymbol{L}\|(1+\delta\| \boldsymbol{L}\|)$. By applying Proposition~\ref{p:khk09} to \eqref{e:monskew}, we obtain the following algorithm. \begin{proposition}[\cite{Svva12}] \label{prop:MLFB} Consider Problem~\ref{prob:88}. Set \begin{equation} \chi=\sqrt{\textstyle{\sum_{k=1}^q}\|L_k\|^2} \Big(1+\delta\sqrt{\textstyle{\sum_{k=1}^q}\|L_k\|^2}\Big). 
\end{equation} Let $x_0\in\mathcal H$, let $\boldsymbol{v}_0\in\ensuremath{\boldsymbol{\mathcal G}}$, let $\varepsilon\in\left]0,1/(\chi+1)\right[$, let $(\gamma_n)_{n\in\ensuremath{\mathbb N}}$ be in $[\varepsilon,(1-\varepsilon)/\chi]$, and iterate \begin{equation} \label{e:rio1010b} \begin{array}{l} \text{for}\;n=0,1,\ldots\\ \left\lfloor \begin{array}{l} u_n=x_n-\gamma_n \sum_{k=1}^q L_k^* (C_k(L_kx_n)+v_{n,k})\\ p_n=J_{\gamma_n A} u_n\\ \text{for}\;k=1,\ldots,q\\ \left\lfloor \begin{array}{l} y_{n,k}=v_{n,k}+\gamma_nL_k x_n\\ z_{n,k}=y_{n,k}-\gamma_n J_{\gamma^{-1} B_k} \big({y_{n,k}}/{\gamma_n}\big)\\ s_{n,k}=z_{n,k}+\gamma_nL_k p_n\\ v_{n+1,k}=v_{n,k}-y_{n,k}+s_{n,k}\\ \end{array} \right.\\[2mm] r_n=p_n-\gamma_n\sum_{k=1}^q L_k^* (C_k(L_kp_n)+z_{n,k})\\ x_{n+1}=x_n-u_n+r_n. \end{array} \right.\\[2mm] \end{array} \end{equation} Then $(x_n)_{n\in\ensuremath{\mathbb N}}$ converges to a solution to Problem~\ref{prob:88}. \end{proposition} An alternative approach consists of reformulating \eqref{e:monskew} in the form of Problem~\ref{prob:7} with the maximally monotone operators \begin{equation} \begin{cases} \boldsymbol{\mathcal{A}}_2 \colon\ensuremath{\boldsymbol{\mathcal K}}\to 2^{\ensuremath{\boldsymbol{\mathcal K}}}\colon (x,\boldsymbol{v}) &\mapsto \begin{bmatrix} A & \boldsymbol{L}^*\\ -\boldsymbol{L} & \boldsymbol{B}^{-1}\\ \end{bmatrix} \begin{bmatrix} x\\ \boldsymbol{v} \end{bmatrix}\\[4mm] \boldsymbol{\mathcal{B}}_2\colon\ensuremath{\boldsymbol{\mathcal K}}\to\ensuremath{\boldsymbol{\mathcal K}}\colon (x,\boldsymbol{v}) &\mapsto \begin{bmatrix} \boldsymbol{L}^*\circ\boldsymbol{C}\circ\boldsymbol{L} & 0\\ 0 & 0 \end{bmatrix} \begin{bmatrix} x\\ \boldsymbol{v} \end{bmatrix}. 
\end{cases} \end{equation} Instead of working directly with these operators, it may be judicious to use preconditioned versions $\boldsymbol{V}\circ\boldsymbol{\mathcal{A}}_2$ and $\boldsymbol{V}\circ\boldsymbol{\mathcal{B}}_2$, where $\boldsymbol{V}\colon\ensuremath{\boldsymbol{\mathcal K}}\to\ensuremath{\boldsymbol{\mathcal K}}$ is a self-adjoint strictly positive linear operator. If $\ensuremath{\boldsymbol{\mathcal K}}$ is renormed with \begin{equation} \|\cdot\|_{\boldsymbol{V}}\colon (x,\boldsymbol{v})\mapsto \sqrt{\scal{(x,\boldsymbol{v})}{\boldsymbol{V}^{-1} (x,\boldsymbol{v})}}, \end{equation} then $\boldsymbol{V}\circ\boldsymbol{\mathcal{A}}_2$ is maximally monotone in the renormed space and, if $\boldsymbol{C}$ is cocoercive in $\ensuremath{\boldsymbol{\mathcal G}}$, then $\boldsymbol{V}\circ\boldsymbol{\mathcal{B}}_2$ is cocoercive in the renormed space. Thus, setting \begin{equation} \label{e:metricV} \boldsymbol{V}= \begin{bmatrix} W & 0\\ 0 & (\sigma^{-1}\ensuremath{\boldsymbol{\mathrm{Id}}}-\boldsymbol{L}\circ W\circ\boldsymbol{L}^*)^{-1} \end{bmatrix}, \end{equation} where $W\colon\mathcal H\to\mathcal H$ is a self-adjoint strictly positive linear operator, and applying Proposition~\ref{p:fb13} in this context yields the following result (see \cite{Icip14}). \begin{proposition} \label{p:fb13b} Suppose that, in Problem~\ref{prob:88}, $A=0$ and $(C_k)_{1\leq k\leq q}$ are $\beta$-cocoercive for some $\beta\in\ensuremath{\left]0,+\infty\right[}$. Let $W\colon\mathcal H\to\mathcal H$ be a self-adjoint strictly positive linear operator and let $\sigma\in\ensuremath{\left]0,+\infty\right[}$ be such that $\kappa=\|\boldsymbol{L}\circ W\circ\boldsymbol{L}^*\| <\min\{1/\sigma,2\beta\}$. Let $\varepsilon\in\left]0,\min\{1/2,\beta/\kappa\}\right[$, let $x_0\in\mathcal H$, and let $\boldsymbol{v}_0\in\ensuremath{\boldsymbol{\mathcal G}}$. For every $n\in\ensuremath{\mathbb N}$, let \begin{equation} \lambda_n\in\big[\varepsilon,(1-\varepsilon) \big(2+\varepsilon-{\kappa}/{(2\beta)}\big)\big].
\end{equation} Iterate \begin{equation} \label{e:6} \begin{array}{l} \text{for}\;n=0,1,\ldots\\ \left\lfloor \begin{array}{l} \text{for}\;k=1,\ldots,q\\ \left\lfloor \begin{array}{l} s_{n,k}=C_k(L_k x_n)\\ \end{array} \right.\\[2mm] z_n=x_n-W\big(\sum_{k=1}^q L_k^* (s_{n,k}+v_{n,k})\big)\\ \begin{array}{l} \text{for}\;k=1,\ldots,q\\ \left\lfloor \begin{array}{l} w_{n,k}=v_{n,k}+\sigma L_k z_n\\ y_{n,k}=w_{n,k}-\sigma J_{\sigma^{-1} B_k} \left({w_{n,k}}/{\sigma}\right)\\ v_{n+1,k}=v_{n,k}+\lambda_n (y_{n,k}-v_{n,k})\\ \end{array} \right.\\[2mm] \end{array}\\ u_n=x_n-W\big(\sum_{k=1}^q L_k^*(s_{n,k}+y_{n,k})\big)\\ x_{n+1}=x_n+\lambda_n (u_n-x_n). \end{array} \right.\\[2mm] \end{array} \end{equation} Then $(x_n)_{n\in\ensuremath{\mathbb N}}$ converges to a solution to Problem~\ref{prob:88}. \end{proposition} Other choices of the metric operator $\boldsymbol{V}$ are possible, which lead to different primal-dual algorithms \cite{Optim14,Cond13,Komo15,Bang13}. An advantage of \eqref{e:rio1010b} and \eqref{e:6} over \eqref{e:3opsbis} is that the first two do not require the inversion of linear operators. \subsection{Block-iterative algorithms} As will be seen in Problems~\ref{prob:2} and \ref{prob:3}, systems of inclusions arise in multivariate optimization problems (they will also be present in Nash equilibria; see, e.g., \eqref{e:as} and \eqref{e:vi1}). We now focus on general systems of inclusions involving maximally monotone operators as well as linear operators coupling the variables. \begin{problem} \label{prob:82} For every $i\in I=\{1,\ldots,m\}$ and $k\in K=\{1,\ldots,q\}$, let $A_i\colon\mathcal H_i\to 2^{\mathcal H_i}$ and $B_k\colon\mathcal G_k\to 2^{\mathcal G_k}$ be maximally monotone, and let $L_{k,i}\colon\mathcal H_i\to\mathcal G_k$ be linear. 
The task is to \begin{multline} \label{e:2015-02-26p} \text{find}\;\;\overline{x}_1\in\mathcal H_1,\ldots, \overline{x}_m\in\mathcal H_m\;\text{such that}\; (\forall i\in I)\\ 0\in A_i\overline{x}_i+\ensuremath{\displaystyle\sum}_{k\in K}L_{k,i}^* \bigg(B_k\bigg(\ensuremath{\displaystyle\sum}_{j\in I}L_{k,j}\overline{x}_j\bigg)\bigg), \end{multline} under the assumption that the \emph{Kuhn-Tucker set} \begin{multline} \label{e:kt3} \boldsymbol{Z}=\bigg\{(\overline{\boldsymbol{x}}, \overline{\boldsymbol{v}}) \in\ensuremath{\boldsymbol{\mathcal H}}\times\ensuremath{\boldsymbol{\mathcal G}}\:\bigg |\: (\forall i\in I)\; -\sum_{k\in K}L_{k,i}^*\overline{v}_k\in A_i\overline{x}_i\\ \text{and}\:\;(\forall k\in K)\;\; \sum_{i\in I}L_{k,i}\overline{x}_i\in B_k^{-1}\overline{v}_k \bigg\} \end{multline} is nonempty. \end{problem} We can regard $m$ as the number of coordinates of the solution vector $\overline{\boldsymbol{x}}=(\overline{x}_i)_{1\leq i\leq m}$. In large-scale applications, $m$ can be sizable and so can the number of terms $q$, which is often associated with the number of observations. We have already discussed in Sections~\ref{sec:upda} and \ref{sec:87} techniques in which not all the indices $i$ or $k$ need to be activated at a given iteration. Below, we describe a block-iterative method proposed in \cite{MaPr18} which allows for partial activation of both the families $(A_i)_{1\leq i\leq m}$ and $(B_k)_{1\leq k\leq q}$, together with individual, iteration-dependent proximal parameters for each operator. The method displays an unprecedented level of flexibility and it does not require the inversion of linear operators or knowledge of their norms. The principle of the algorithm is as follows. Denote by $I_n\subset I$ and $K_n\subset K$ the blocks of indices of operators to be updated at iteration $n$. 
We impose the mild condition that there exists $M\in\ensuremath{\mathbb N}\smallsetminus\{0\}$ such that each operator index $i$ and $k$ is used at least once within any $M$ consecutive iterations, i.e., for every $n\in\ensuremath{\mathbb N}$, \begin{equation} \label{e:n24G} \bigcup_{j=n}^{n+M-1}I_j=\{1,\ldots,m\} \quad\text{and}\quad \bigcup_{j=n}^{n+M-1}K_j=\{1,\ldots,q\}. \end{equation} For each $i\in I_n$ and $k\in K_n$, we select points $(a_{i,n},a^*_{i,n})\in\ensuremath{\mathrm{gra}\,} A_i$ and $(b_{k,n},b^*_{k,n})\in\ensuremath{\mathrm{gra}\,} B_k$ and use them to construct a closed half-space $\boldsymbol{H}_n\subset\ensuremath{\boldsymbol{\mathcal H}}\times\ensuremath{\boldsymbol{\mathcal G}}$ which contains $\boldsymbol{Z}$. The primal variable $\boldsymbol{x}_n$ and the dual variable $\boldsymbol{v}_n$ are updated as $(\boldsymbol{x}_{n+1},\boldsymbol{v}_{n+1}) =\ensuremath{\mathrm{proj}}_{\boldsymbol{H}_n}(\boldsymbol{x}_n,\boldsymbol{v}_n)$. The resulting algorithm can also be implemented with relaxations and in an asynchronous fashion \cite{MaPr18}. For simplicity, we present the unrelaxed synchronous version. \begin{proposition}[\cite{MaPr18}] \label{p:1} Consider the setting of Problem~\ref{prob:82}. Take sequences $(I_n)_{n\in\ensuremath{\mathbb N}}$ of nonempty subsets of $I$ and $(K_n)_{n\in\ensuremath{\mathbb N}}$ of nonempty subsets of $K$ satisfying \eqref{e:n24G}, with $I_0=I$ and $K_0=K$. Let $\varepsilon\in\zeroun$ and, for every $i\in I$ and every $k\in K$, let $(\gamma_{i,n})_{n\in\ensuremath{\mathbb N}}$ and $(\mu_{k,n})_{n\in\ensuremath{\mathbb N}}$ be sequences in $[\varepsilon,1/\varepsilon]$.
Let $\boldsymbol{x}_{0}\in\ensuremath{\boldsymbol{\mathcal H}}$, let $\boldsymbol{v}_{0}\in\ensuremath{\boldsymbol{\mathcal G}}$, and iterate \begin{equation} \label{e:n03a} \begin{array}{l} \text{for}\;n=0,1,\ldots\\ \left\lfloor \begin{array}{l} \hskip -2mm \begin{array}{l} \text{for every}\;i\in I_n\\ \left\lfloor \begin{array}{l} l^*_{i,n}=\sum_{k\in K}L_{k,i}^*v_{k,n}\\ a_{i,n}=J_{\gamma_{i,n} A_i}\big(x_{i,n}-\gamma_{i,n} l^*_{i,n}\big)\\ a_{i,n}^*=\gamma_{i,n}^{-1}(x_{i,n}-a_{i,n})-l^*_{i,n}\\ \end{array} \right.\\[1mm] \text{for every}\;i\in I\smallsetminus I_n\\ \left\lfloor \begin{array}{l} (a_{i,n},a_{i,n}^*)=(a_{i,n-1},a_{i,n-1}^*)\\ \end{array} \right.\\[1mm] \text{for every}\;k\in K_n\\ \left\lfloor \begin{array}{l} l_{k,n}=\sum_{i\in I}L_{k,i}x_{i,n}\\ b_{k,n}=J_{\mu_{k,n}B_k} \big(l_{k,n}+\mu_{k,n}v_{k,n}\big)\\ b^*_{k,n}=v_{k,n}+\mu_{k,n}^{-1} (l_{k,n}-b_{k,n})\\ \end{array} \right.\\[1mm] \text{for every}\;k\in K\smallsetminus K_n\\ \left\lfloor \begin{array}{l} (b_{k,n},b^*_{k,n})=(b_{k,n-1},b^*_{k,n-1})\\ \end{array} \right.\\[1mm] \text{for every}\;i\in I\\ \left\lfloor \begin{array}{l} t^*_{i,n}=a^*_{i,n}+\sum_{k\in K}L_{k,i}^*b^*_{k,n}\\ \end{array} \right.\\[1mm] \text{for every}\;k\in K\\ \left\lfloor \begin{array}{l} t_{k,n}=b_{k,n}-\sum_{i\in I}L_{k,i}a_{i,n}\\ \end{array} \right.\\[1mm] \tau_n=\sum_{i\in I}\|t_{i,n}^*\|^2+\sum_{k\in K}\|t_{k,n}\|^2\\ \text{if}\;\tau_n>0\\ \left\lfloor \begin{array}{l} \theta_n=\dfrac{1}{\tau_n}\,\text{\rm max} \big\{0,\sum_{i\in I}\big(\scal{x_{i,n}}{t^*_{i,n}}- \scal{a_{i,n}}{a^*_{i,n}}\big)\\ \hskip 15mm +\sum_{k\in K} \big(\sscal{t_{k,n}}{v_{k,n}}-\sscal{b_{k,n}}{b^*_{k,n}}\big) \big\}\\ \end{array} \right.\\ \text{else~} \theta_n=0\\ \text{for every}\;i\in I\\ \left\lfloor \begin{array}{l} x_{i,n+1}=x_{i,n}-\theta_n t^*_{i,n}\\ \end{array} \right.\\ \text{for every}\;k\in K\\ \left\lfloor \begin{array}{l} v_{k,n+1}=v_{k,n}-\theta_n t_{k,n}. 
\end{array} \right.\\ \end{array} \end{array} \right.\\[4mm] \end{array} \end{equation} Then $(\boldsymbol{x}_n)_{n\in\ensuremath{\mathbb N}}$ converges to a solution to Problem~\ref{prob:82}. \end{proposition} Recent developments on splitting algorithms for Problem~\ref{prob:82} as well as variants and extensions thereof can be found in \cite{Jmaa20,Moor21,Gise19,John20,John21}. \section{Fixed point modeling of minimization problems} \label{sec:5} We present key applications of fixed point models in convex optimization. \subsection{Convex feasibility problems} \label{sec:cfp} The most basic convex optimization problem is the convex feasibility problem, which asks for compliance with a finite number of convex constraints that the object of interest is known to satisfy. This approach was formalized by Youla \cite{Youl78,Youl82} in signal recovery and it has enjoyed broad success \cite{Proc93,Aiep96,Herm09,Star87,Thao21,Trus84}. \begin{problem} \label{prob:1} Let $(C_i)_{1\leq i\leq m}$ be nonempty closed convex subsets of $\mathcal H$. The task is to \begin{equation} \label{e:cfp1} \text{find}\;\;x\in\bigcap_{i=1}^mC_i. \end{equation} \end{problem} Suppose that Problem~\ref{prob:1} has a solution and that each set $C_i$ is modeled as the fixed point set of an $\alpha_i$-averaged operator $T_i\colon\mathcal H\to\mathcal H$ for some $\alpha_i\in\zeroun$. Then, applying Theorem~\ref{t:1} with $T=T_1\circ\cdots\circ T_m$ (which is averaged by Proposition~\ref{p:roma}) and $\lambda_n=1$ for every $n\in\ensuremath{\mathbb N}$, we obtain that the sequence $(x_n)_{n\in\ensuremath{\mathbb N}}$ constructed via the iteration \begin{equation} \label{e:g2} (\forall n\in\ensuremath{\mathbb N})\quad x_{n+1}=(T_1\circ\cdots\circ T_m)x_n \end{equation} converges to a fixed point $x$ of $T_1\circ\cdots\circ T_m$. In view of Proposition~\ref{p:f3}, $x$ is therefore a solution to \eqref{e:cfp1}.
In particular, if each $T_i$ is the projection operator onto $C_i$ (which was seen to be $1/2$-averaged), we obtain the classical POCS (Projection Onto Convex Sets) algorithm \cite{Breg65,Erem65} \begin{equation} \label{e:g3} (\forall n\in\ensuremath{\mathbb N})\quad x_{n+1}= \big(\ensuremath{\mathrm{proj}}_{C_1}\circ\cdots\circ\ensuremath{\mathrm{proj}}_{C_m}\big)x_n, \end{equation} which was popularized in \cite{Youl82} and goes back to \cite{Kacz37} in the case of affine hyperplanes. In this algorithm, the projection operators are used sequentially. Another basic projection method for solving \eqref{e:cfp1} is the barycentric projection algorithm \begin{equation} \label{e:g4} (\forall n\in\ensuremath{\mathbb N})\quad x_{n+1}=\dfrac{1}{m}\sum_{i=1}^m \ensuremath{\mathrm{proj}}_{C_i}x_n, \end{equation} which uses the projections simultaneously and goes back to \cite{Cimm38} in the case of affine hyperplanes. Its convergence is proved by applying Theorem~\ref{t:1} to $T=m^{-1}\sum_{i=1}^m\ensuremath{\mathrm{proj}}_{C_i}$, which is $1/2$-averaged by Example~\ref{ex:2}. More general fixed point methods are discussed in \cite{Baus96,Nume06,Imag97,Lopu97}. \subsection{Split feasibility problems} The so-called \emph{split feasibility problem} is just a convex feasibility problem involving a linear operator \cite{Byrn02,Cens94,Cens05}. \begin{problem} \label{prob:9} Let $C\subset\mathcal H$ and $D\subset\mathcal G$ be closed convex sets and let $0\neq L\colon\mathcal H\to\mathcal G$ be linear. The task is to \begin{equation} \label{e:split1} \text{find}\;\;x\in C\;\;\text{such that}\;\;Lx\in D, \end{equation} under the assumption that a solution exists. \end{problem} In principle, we can reduce this problem to a 2-set version of \eqref{e:g3} with $C_1=C$ and $C_2=L^{-1}(D)$. However, the projection onto $C_2$ is usually not tractable, which makes projection algorithms such as \eqref{e:g3} or \eqref{e:g4} not implementable.
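As an aside, the projection schemes \eqref{e:g3} and \eqref{e:g4} are straightforward to implement when all the projections are explicit. The following minimal sketch, not taken from the text, uses two hypothetical sets with closed-form projections (a Euclidean ball $C_1$ and a half-space $C_2$) chosen for illustration only.

```python
import numpy as np

def proj_ball(x, center, radius):
    # Projection onto the closed Euclidean ball B(center, radius).
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

def proj_halfspace(x, a, b):
    # Projection onto the half-space {x : <a, x> <= b}.
    s = a @ x - b
    return x if s <= 0 else x - (s / (a @ a)) * a

# Two convex sets with nonempty intersection:
# C1 = ball centered at 0 with radius 2, C2 = {x : x1 + x2 <= 1}.
P1 = lambda x: proj_ball(x, np.zeros(2), 2.0)
P2 = lambda x: proj_halfspace(x, np.array([1.0, 1.0]), 1.0)

x_pocs = np.array([5.0, 5.0])
x_bary = np.array([5.0, 5.0])
for _ in range(200):
    x_pocs = P1(P2(x_pocs))                   # sequential scheme (POCS)
    x_bary = 0.5 * (P1(x_bary) + P2(x_bary))  # simultaneous (barycentric) scheme
```

In this instance both sequences converge to a point of $C_1\cap C_2$; the sequential scheme lands there after one sweep, while the barycentric one approaches it at a linear rate.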
To work around this difficulty, let us define $T_1=\ensuremath{\mathrm{proj}}_{C}$ and $T_2=\ensuremath{\mathrm{Id}}-\gamma G_2$, where $G_2=L^*\circ(\ensuremath{\mathrm{Id}}-\ensuremath{\mathrm{proj}}_D)\circ L$ and $\gamma\in\ensuremath{\left]0,+\infty\right[}$. Then $(\forall x\in\mathcal H)$ $Lx\in D$ $\Leftrightarrow$ $G_2x=0$.\footnote{Set $T=\ensuremath{\mathrm{Id}}-\ensuremath{\mathrm{proj}}_D$ and fix $\overline{x}\in\mathcal H$ such that $L\overline{x}\in D$. Then $T(L\overline{x})=0$ and thus $G_2\overline{x}=0$. Conversely, take $x\in\mathcal H$ such that $G_2x=0$. Since $T$ is firmly nonexpansive by Example~\ref{ex:1}, applying \eqref{e:101} with $\beta=1$ yields $0=\scal{0}{x-\overline{x}}= \scal{G_2x-G_2\overline{x}}{x-\overline{x}}= \scal{L^*(T(Lx)-T(L\overline{x}))}{x-\overline{x}}= \scal{T(Lx)-T(L\overline{x})}{Lx-L\overline{x}}\geq \|T(Lx)-T(L\overline{x})\|^2=\|T(Lx)\|^2$. So $T(Lx)=0$ and therefore $Lx=\ensuremath{\mathrm{proj}}_D(Lx)\in D$.} Hence, \begin{equation} \label{e:g7} \ensuremath{\text{\rm Fix}\,} T_1=C\quad\text{and}\quad\ensuremath{\text{\rm Fix}\,} T_2=\menge{x\in\mathcal H}{Lx\in D}. \end{equation} Furthermore, $T_1$ is $\alpha_1$-averaged with $\alpha_1=1/2$. In addition, $\ensuremath{\mathrm{Id}}-\ensuremath{\mathrm{proj}}_D$ is firmly nonexpansive by \eqref{e:110} and therefore $1$-cocoercive. It follows from Proposition~\ref{p:coco} that $G_2$ is cocoercive with constant $1/\|L\|^2$. Now let $\gamma\in\left]0,2/\|L\|^2\right[$ and set $\alpha_2=\gamma\|L\|^2/2$. Then Proposition~\ref{p:2004-5} asserts that $\ensuremath{\mathrm{Id}}-\gamma G_2$ is $\alpha_2$-averaged. Altogether, we deduce from Example~\ref{ex:roma} that $T_1\circ T_2$ is $\alpha$-averaged. Now let $(\lambda_n)_{n\in\ensuremath{\mathbb N}}$ be an $\alpha$-relaxation sequence. 
According to Theorem~\ref{t:1} and Proposition~\ref{p:f3}, the sequence produced by the iterations \begin{align} \label{e:g9} &\hskip -1mm(\forall n\in\ensuremath{\mathbb N})\quad x_{n+1}=x_n+\lambda_n\nonumber\\ &\qquad\quad \cdot\Big(\ensuremath{\mathrm{proj}}_C\big(x_n-\gamma L^*(Lx_n-\ensuremath{\mathrm{proj}}_D(Lx_n))\big) -x_n\Big)\nonumber\\ &\hskip 23.5mm=x_n+\lambda_n\big(T_1(T_2x_n)-x_n\big) \end{align} converges to a point in $\ensuremath{\text{\rm Fix}\,} T_1\cap\ensuremath{\text{\rm Fix}\,} T_2$, i.e., in view of \eqref{e:g7}, to a solution to Problem~\ref{prob:9}. In particular, if we take $\lambda_n=1$, the update rule in \eqref{e:g9} becomes \begin{equation} \label{e:g6} x_{n+1}=\ensuremath{\mathrm{proj}}_C\Big(x_n-\gamma L^*\big(Lx_n- \ensuremath{\mathrm{proj}}_D(Lx_n)\big)\Big). \end{equation} \subsection{Convex minimization} \label{sec:convmin} We deduce from Fermat's rule (Theorem~\ref{t:4}) and Proposition~\ref{p:11} the fact that a differentiable convex function $f\colon\mathcal H\to\ensuremath{\mathbb R}$ admits $x\in\mathcal H$ as a minimizer if and only if $\nabla f(x)=0$. Now let $\gamma\in\ensuremath{\left]0,+\infty\right[}$. Then this property is equivalent to $x=x-\gamma\nabla f(x)$, which shows that \begin{equation} \label{e:fa} \ensuremath{\mathrm{Argmin}\,} f=\ensuremath{\text{\rm Fix}\,} T,\quad\text{where}\quad T=\ensuremath{\mathrm{Id}}-\gamma\nabla f. \end{equation} If we add the assumption that $\nabla f$ is $\delta$-Lipschitzian, then it is $1/\delta$-cocoercive by Proposition~\ref{p:BH}. Hence, if $0<\gamma<2/\delta$, it follows from Proposition~\ref{p:2004-5}, that $T$ in \eqref{e:fa} is $\alpha$-averaged with $\alpha=\gamma\delta/2$. We then derive from Theorem~\ref{t:1} the convergence of the steepest-descent method. 
\begin{proposition}[steepest-descent] \label{p:sd} Let $f\colon\mathcal H\to\ensuremath{\mathbb R}$ be a differentiable convex function such that $\ensuremath{\mathrm{Argmin}\,} f\neq\ensuremath{\varnothing}$ and $\nabla f$ is $\delta$-Lipschitzian for some $\delta\in\ensuremath{\left]0,+\infty\right[}$. Let $\gamma\in\left]0,2/\delta\right[$, let $(\lambda_n)_{n\in\ensuremath{\mathbb N}}$ be a $\gamma\delta/2$-relaxation sequence, and let $x_0\in\mathcal H$. Set \begin{equation} \label{e:sd1} (\forall n\in\ensuremath{\mathbb N})\quad x_{n+1}=x_n-\gamma\lambda_n\nabla f(x_n). \end{equation} Then $(x_n)_{n\in\ensuremath{\mathbb N}}$ converges to a point in $\ensuremath{\mathrm{Argmin}\,} f$. \end{proposition} Now, let us remove the smoothness assumption by considering a general function $f\in\Gamma_0(\mathcal H)$. Then it is clear from \eqref{e:moreau1} that $(\forall x\in\mathcal H)$ $x=\ensuremath{\mathrm{prox}}_fx$ $\Leftrightarrow$ $(\forall y\in\mathcal H)$ $f(x)\leq f(y)$. In other words, we obtain the fixed point characterization \begin{equation} \label{e:fb} \ensuremath{\mathrm{Argmin}\,} f=\ensuremath{\text{\rm Fix}\,} T,\quad\text{where}\quad T=\ensuremath{\mathrm{prox}}_f. \end{equation} In turn, since $\ensuremath{\mathrm{prox}}_f$ is firmly nonexpansive (see Example~\ref{ex:1}), we derive at once from Theorem~\ref{t:1} the convergence of the proximal point algorithm. \begin{proposition}[proximal point algorithm] \label{p:ppa} Let $f\in\Gamma_0(\mathcal H)$ be such that $\ensuremath{\mathrm{Argmin}\,} f\neq\ensuremath{\varnothing}$. Let $\gamma\in\ensuremath{\left]0,+\infty\right[}$, let $(\lambda_n)_{n\in\ensuremath{\mathbb N}}$ be a $1/2$-relaxation sequence, and let $x_0\in\mathcal H$. Set \begin{equation} \label{e:sd2} (\forall n\in\ensuremath{\mathbb N})\quad x_{n+1}= x_n+\lambda_n\big(\ensuremath{\mathrm{prox}}_{\gamma f}x_n-x_n\big). 
\end{equation} Then $(x_n)_{n\in\ensuremath{\mathbb N}}$ converges to a point in $\ensuremath{\mathrm{Argmin}\,} f$. \end{proposition} \begin{remark} We can interpret the barycentric projection algorithm \eqref{e:g4} as an unrelaxed instance of the proximal point algorithm \eqref{e:sd2} with $\gamma=1$ by applying Remark~\ref{r:2018} with $q=m$ and, for every $k\in\{1,\ldots,q\}$, $\omega_k=1/q$, $\mathcal G_k=\mathcal H$, $L_k=\ensuremath{\mathrm{Id}}$, and $g_k=\iota_{C_k}$. \end{remark} A more versatile minimization model is the following instance of the formulation discussed in Proposition~\ref{p:17}. \begin{problem} \label{prob:5} Let $f\in\Gamma_0(\mathcal H)$ and $g\in\Gamma_0(\mathcal H)$ be such that $(\ensuremath{\operatorname{ri}}\ensuremath{\mathrm{dom}\,} f)\cap(\ensuremath{\operatorname{ri}}\ensuremath{\mathrm{dom}\,} g)\neq\ensuremath{\varnothing}$ and $\lim_{\|x\|\to\ensuremath{+\infty}}f(x)+g(x)=\ensuremath{+\infty}$. The task is to \begin{equation} \label{e:r28p} \minimize{x\in\mathcal H}{f(x)+g(x)}. \end{equation} \end{problem} It follows from Proposition~\ref{p:17}\ref{p:17i} that Problem~\ref{prob:5} has a solution and from Proposition~\ref{p:17}\ref{p:17ii} that it is equivalent to Problem~\ref{prob:8} with $A=\partial f$ and $B=\partial g$. It then remains to invoke Proposition~\ref{p:DR} and Example~\ref{ex:mono2} to obtain the following algorithm, which employs the proximity operators of $f$ and $g$ separately. \begin{proposition}[Douglas-Rachford splitting] \label{p:18} Let $(\lambda_n)_{n\in\ensuremath{\mathbb N}}$ be a $1/2$-relaxation sequence, let $\gamma\in\ensuremath{\left]0,+\infty\right[}$, and let $y_0\in\mathcal H$. Iterate \begin{equation} \label{e:DRv} \begin{array}{l} \text{for}\;n=0,1,\ldots\\ \left\lfloor \begin{array}{l} x_n=\ensuremath{\mathrm{prox}}_{\gamma g}y_n\\ z_n=\ensuremath{\mathrm{prox}}_{\gamma f}(2x_n-y_n)\\ y_{n+1}=y_n+\lambda_n(z_n-x_n). 
\end{array} \right.\\[2mm] \end{array} \end{equation} Then $(x_n)_{n\in\ensuremath{\mathbb N}}$ converges to a solution to Problem~\ref{prob:5}. \end{proposition} The Douglas-Rachford algorithm was first employed in signal and image processing in \cite{Comb07} and it has since been applied to various problems, e.g., \cite{Chiz18,Lind21,Papa14,Stei10,Yuyu17}. For a recent application to joint scale/regression estimation in statistical data analysis involving several product space reformulations, see \cite{Ejst20}. We now present two applications to matrix optimization problems. Along the same lines, the Douglas-Rachford algorithm is also used in tensor decomposition \cite{Gand11}. \begin{example} Let $\mathcal H$ be the space of $N\times N$ real symmetric matrices equipped with the Frobenius norm. We denote by $\xi_{i,j}$ the $ij$th component of $X\in\mathcal H$. Let $O\in\mathcal H$. The {\em graphical lasso problem} \cite{Fried08,Ravi11} is to \begin{equation} \label{e:GLASSO} \minimize{X\in\mathcal H}{f(X)+\ell(X)+\operatorname{trace}(OX)}, \end{equation} where \begin{equation} f(X)=\chi\sum_{i=1}^N\sum_{j=1}^N|\xi_{i,j}|,\quad \text{with}\;\chi\in\ensuremath{\left[0,+\infty\right[}, \end{equation} and \begin{equation} \ell(X)= \begin{cases} -\ln\det X,&\text{if $X$ is positive definite};\\ \ensuremath{+\infty},&\text{otherwise.} \end{cases} \end{equation} Problem \eqref{e:GLASSO} arises in the estimation of a sparse precision (i.e., inverse covariance) matrix from an observed matrix $O$ and it has found applications in graph processing. Since $\ell\in\Gamma_0(\mathcal H)$ is a symmetric function of the eigenvalues of its arguments, by \cite[Corollary~24.65]{Livre1}, its proximity operator at $X$ is obtained by performing an eigendecomposition $[U,(\mu_i)_{1\leq i\leq N}]=\operatorname{eig}(X)$ $\Leftrightarrow$ $X=U\operatorname{Diag}(\mu_1,\ldots,\mu_N) U^\top$. 
Here, given $\gamma\in\ensuremath{\left]0,+\infty\right[}$, \cite[Example~24.66]{Livre1} yields \begin{equation} \ensuremath{\mathrm{prox}}_{\gamma\ell}X=U \operatorname{Diag} \big((\ensuremath{\mathrm{prox}}_{-\gamma\ln}\mu_1,\ldots,\ensuremath{\mathrm{prox}}_{-\gamma\ln}\mu_N)\big) U^\top, \end{equation} where $\ensuremath{\mathrm{prox}}_{-\gamma\ln}\colon\xi\mapsto(\xi+\sqrt{\xi^2+4\gamma})/2$. Let $(\lambda_n)_{n\in\ensuremath{\mathbb N}}$ be a $1/2$-relaxation sequence, let $\gamma\in\ensuremath{\left]0,+\infty\right[}$, and let $Y_0\in\mathcal H$. Upon setting $g=\ell+\scal{\cdot}{O}$, the Douglas-Rachford algorithm of \eqref{e:DRv} for solving \eqref{e:GLASSO} becomes \begin{equation} \label{e:DRGLASSO} \begin{array}{l} \text{for}\;n=0,1,\ldots\\ \left\lfloor \begin{array}{l} [U_n,(\mu_{i,n})_{1\leq i\leq N}]= \operatorname{eig}(Y_n-\gamma O)\\ X_n=U_n \operatorname{Diag}\big((\ensuremath{\mathrm{prox}}_{-\gamma\ln} \,\mu_{i,n})_{1\leq i\leq N}\big)U_n^\top\\ Z_n=\operatorname{soft}_{\gamma \chi}(2X_n-Y_n)\\ Y_{n+1}=Y_n+\lambda_n(Z_n-X_n), \end{array} \right.\\[2mm] \end{array} \end{equation} where $\operatorname{soft}_{\gamma \chi}$ denotes the soft-thresholding operator on $[-\gamma\chi,\gamma\chi]$ applied componentwise. Applications of \eqref{e:DRGLASSO} as well as variants with other choices of $\ell$ and $g$ are discussed in \cite{Benf20}. \end{example} \begin{example}[robust PCA] \label{ex:robustPCA} Let $M$ and $N$ be integers such that $M\geq N>0$, and let $\mathcal H$ be the space of $N\times M$ real matrices equipped with the Frobenius norm. The robust Principal Component Analysis (PCA) problem \cite{Cand11,Vasw18} is to \begin{equation} \label{e:robustPCA} \minimize{\substack{X\in\mathcal H, Y\in\mathcal H\\ X+Y=O}} {\|Y\|_{\rm nuc}+\chi\|X\|_1}, \end{equation} where $\|\cdot\|_1$ is the componentwise $\ell_1$-norm, $\|\cdot\|_{\rm nuc}$ is the nuclear norm, and $\chi\in\ensuremath{\left]0,+\infty\right[}$. 
Let $X=U\operatorname{Diag}(\sigma_1,\ldots,\sigma_N)V^\top$ be the singular value decomposition of $X\in\mathcal H$. Then $\|X\|_{\rm nuc}=\sum_{i=1}^N \sigma_i$ and, by \cite[Example~24.69]{Livre1}, \begin{equation} \ensuremath{\mathrm{prox}}_{\chi\|\cdot\|_{\rm nuc}}X=U\operatorname{Diag} \big(\operatorname{soft}_{\chi}\sigma_1,\ldots, \operatorname{soft}_{\chi}\sigma_N\big)V^\top. \end{equation} An implementation of the Douglas-Rachford algorithm in the product space $\mathcal H\times\mathcal H$ to solve \eqref{e:robustPCA} is detailed in \cite[Example~28.6]{Livre1}. \end{example} By combining Propositions~\ref{p:fb13}, \ref{p:11}, and \ref{p:BH}, together with Example~\ref{ex:mono2}, we obtain the convergence of the forward-backward splitting algorithm for minimization. The broad potential of this algorithm in data science was evidenced in \cite{Smms05}. Inertial variants are presented in \cite{Apid20,Atto19,Beck09,Biou07,Cham15,Siop17}. \begin{proposition}[forward-backward splitting] \label{p:fb17} Suppose that, in Problem~\ref{prob:5}, $g$ is differentiable everywhere and that its gradient is $\delta$-Lipschitzian for some $\delta\in\ensuremath{\left]0,+\infty\right[}$. Let $\varepsilon\in\left]0,\min\{1/2,1/\delta\}\right[$, let $x_0\in\mathcal H$, and let $(\gamma_n)_{n\in\ensuremath{\mathbb N}}$ be in $\left[\varepsilon,2/(\delta(1+\varepsilon))\right]$, and let \begin{equation} \label{e:215} (\forall n\in\ensuremath{\mathbb N})\quad \lambda_n\in\big[\varepsilon,(1-\varepsilon) \big(2+\varepsilon-{\delta\gamma_n}/{2}\big)\big]. \end{equation} Iterate \begin{equation} \label{e:FB2} \begin{array}{l} \text{for}\;n=0,1,\ldots\\ \left\lfloor \begin{array}{l} u_n=x_n-\gamma_n\nabla g(x_n)\\ x_{n+1}=x_n+\lambda_n\big(\ensuremath{\mathrm{prox}}_{\gamma_n f}u_n-x_n\big). \end{array} \right.\\[2mm] \end{array} \end{equation} Then $(x_n)_{n\in\ensuremath{\mathbb N}}$ converges to a solution to Problem~\ref{prob:5}. 
\end{proposition} \begin{example} Let $M$ and $N$ be integers such that $M\geq N>0$, and let $\mathcal H$ be the space of $N\times M$ real-valued matrices equipped with the Frobenius norm. The task is to reconstruct a low-rank matrix given its projection $O$ onto a vector space $V\subset\mathcal H$. Let $L=\ensuremath{\mathrm{proj}}_V$. The problem is formulated as \begin{equation} \label{e:matcomp} \minimize{X\in\mathcal H}{\frac12\|O-LX\|^2+\chi\|X\|_{\rm nuc}}, \end{equation} where $\chi\in\ensuremath{\left]0,+\infty\right[}$. As seen in Example~\ref{ex:robustPCA}, the proximity operator of the nuclear norm has a closed form expression. In addition, $g\colon X\mapsto\|O-LX\|^2/2$ is convex and its gradient $\nabla g\colon X\mapsto L^*(LX-O)=LX-O$ is nonexpansive. Problem \eqref{e:matcomp} can thus be solved by algorithm \eqref{e:FB2} where $f=\chi\|\cdot\|_{\rm nuc}$ and $\delta=1$. A particular case of \eqref{e:matcomp} is the matrix completion problem \cite{Cand09,Cand10}, where only some components of the sought matrix are observed. If $\mathbb{K}$ denotes the set of indices of the unknown matrix components, we have $V=\menge{X\in\mathcal H}{(\forall (i,j)\in\mathbb{K})\;\xi_{i,j}=0}$. \end{example} \begin{example} \label{ex:MMSE} Let $X$ and $W$ be mutually independent $\ensuremath{\mathbb R}^N$-valued random vectors. Assume that $X$ is absolutely continuous and square-integrable, and that its probability density function is log-concave. Further, assume that $W$ is Gaussian with zero-mean and covariance $\sigma^2\mathrm{I}_{N}$, where $\sigma\in\ensuremath{\left]0,+\infty\right[}$. Let $Y=X+W$. For every $y\in\ensuremath{\mathbb R}^N$, $Qy=\ensuremath{\mathsf E}(X\mid Y=y)$ is the minimum mean square error (MMSE) denoiser for $X$ given the observation $y$. The properties of $Q$ have been investigated in \cite{Gribo13}. 
It can be shown that $Q$ is the proximity operator of the conjugate of $h=(-\sigma^2\log p)^*-\|\cdot\|^2/2\in \Gamma_{0}(\ensuremath{\mathbb R}^N)$, where $p$ is the density of $Y$. Let $g\colon\ensuremath{\mathbb R}^N\to\ensuremath{\mathbb R}$ be a differentiable convex function with a $\delta$-Lipschitzian gradient for some $\delta\in\ensuremath{\left]0,+\infty\right[}$, and let $\gamma\in\left]0,2/\delta\right[$. The iteration \begin{equation} (\forall n\in\ensuremath{\mathbb N})\quad x_{n+1}=Q\big(x_{n}-\gamma\nabla g(x_n)\big) \end{equation} therefore turns out to be a special case of the forward-backward algorithm \eqref{e:FB2}, where $f=h^*/\gamma$ and $(\forall n\in\ensuremath{\mathbb N})$ $\lambda_n=1$. This algorithm is studied in \cite{Xu20} from a different perspective. \end{example} The projection-gradient method goes back to the classical papers \cite{Gold64,Lev66a}. A version can be obtained by setting $f=\iota_C$ in Proposition~\ref{p:fb17}, where $C$ is the constraint set. Below, we describe the simpler formulation resulting from the application of Theorem~\ref{t:1} to $T=\ensuremath{\mathrm{proj}}_C\circ(\ensuremath{\mathrm{Id}}-\gamma\nabla g)$. \begin{example}[projection-gradient] \label{ex:19} Let $C$ be a nonempty closed convex subset of $\mathcal H$ and let $g\colon\mathcal H\to\ensuremath{\mathbb R}$ be a differentiable convex function, with a $\delta$-Lipschitzian gradient for some $\delta\in\ensuremath{\left]0,+\infty\right[}$. The task is to \begin{equation} \label{e:28q} \minimize{x\in C}{g(x)}, \end{equation} under the assumption that $\lim_{\|x\|\to\ensuremath{+\infty}}g(x)=\ensuremath{+\infty}$ or $C$ is bounded. Let $\gamma\in\left]0,2/\delta\right[$ and set $\alpha=2/(4-\gamma\delta)$. Furthermore, let $(\lambda_n)_{n\in\ensuremath{\mathbb N}}$ be an $\alpha$-relaxation sequence and let $x_0\in\mathcal H$. 
Iterate \begin{equation} \label{e:pg1} \begin{array}{l} \text{for}\;n=0,1,\ldots\\ \left\lfloor \begin{array}{l} y_n=x_n-\gamma\nabla g(x_n)\\ x_{n+1}=x_n+\lambda_n\big(\ensuremath{\mathrm{proj}}_{C}y_n-x_n\big). \end{array} \right.\\[2mm] \end{array} \end{equation} Then $(x_n)_{n\in\ensuremath{\mathbb N}}$ converges to a solution to \eqref{e:28q}. \end{example} As a special case of Example~\ref{ex:19}, we obtain the convergence of the \emph{alternating projections} algorithm \cite{Che59a,Lev66a}. \begin{example}[alternating projections] \label{ex:20} Let $C_1$ and $C_2$ be nonempty closed convex subsets of $\mathcal H$, one of which is bounded. Given $x_0\in\mathcal H$, iterate \begin{equation} \label{e:u8} (\forall n\in\ensuremath{\mathbb N})\quad x_{n+1}=\ensuremath{\mathrm{proj}}_{C_1}\big(\ensuremath{\mathrm{proj}}_{C_2}x_n\big). \end{equation} Then $(x_n)_{n\in\ensuremath{\mathbb N}}$ converges to a solution to the constrained minimization problem \begin{equation} \label{e:28f} \minimize{x\in C_1}{d_{C_2}(x)}. \end{equation} This follows from Example~\ref{ex:19} applied to $g=d_{C_2}^2/2$. Note that $\nabla g=\ensuremath{\mathrm{Id}}-\ensuremath{\mathrm{proj}}_{C_2}$ has Lipschitz constant $\delta=1$ (see Example~\ref{ex:jjm7}) and hence \eqref{e:u8} is the instance of \eqref{e:pg1} obtained by setting $\gamma=1$ and $(\forall n\in\ensuremath{\mathbb N})$ $\lambda_n=1$ (see Example~\ref{ex:relax}\ref{ex:relaxi}). \end{example} The following version of Problem~\ref{prob:5} involves $m$ smooth functions. \begin{problem} \label{prob:33} Let $(\omega_i)_{1\leq i\leq m}$ be real numbers in $\rzeroun$ such that $\sum_{i=1}^m\omega_i=1$. Let $f_0\in\Gamma_0(\mathcal H)$ and, for every $i\in\{1,\ldots,m\}$, let $\delta_i\in\ensuremath{\left]0,+\infty\right[}$ and let $f_i\colon\mathcal H\to\ensuremath{\mathbb R}$ be a differentiable convex function with a $\delta_i$-Lipschitzian gradient. 
Suppose that \begin{equation} \label{e:coer1} \lim_{\|x\|\to\ensuremath{+\infty}}f_0(x)+\sum_{i=1}^m\omega_i f_i(x)=\ensuremath{+\infty}. \end{equation} The task is to \begin{equation} \label{e:prob33} \minimize{x\in\mathcal H}{f_0(x)+\sum_{i=1}^m\omega_i f_i(x)}. \end{equation} \end{problem} To solve Problem~\ref{prob:33}, an option is to apply Theorem~\ref{t:3} to obtain a forward-backward algorithm with block-updates. \begin{proposition}[\cite{Upda20}] \label{p:33} Consider the setting of Problem~\ref{prob:33}. Let $(I_n)_{n\in\ensuremath{\mathbb N}}$ be a sequence of nonempty subsets of $\{1,\ldots,m\}$ such that \eqref{e:K} holds for some $M\in\ensuremath{\mathbb N}\smallsetminus\{0\}$. Let $\gamma\in\left]0,2/\max_{1\leq i\leq m}\delta_i\right[$, let $x_0\in\mathcal H$, let $(t_{i,-1})_{1\leq i\leq m}\in\mathcal H^m$, and iterate \begin{equation} \label{e:a4} \begin{array}{l} \text{for}\;n=0,1,\ldots\\ \left\lfloor \begin{array}{l} \text{for every}\;i\in I_n\\ \left\lfloor \begin{array}{l} t_{i,n}=x_n-\gamma\nabla f_i(x_n)\\ \end{array} \right.\\ \text{for every}\;i\in\{1,\ldots,m\}\smallsetminus I_n\\ \left\lfloor \begin{array}{l} t_{i,n}=t_{i,n-1}\\ \end{array} \right.\\[1mm] x_{n+1}=\ensuremath{\mathrm{prox}}_{\gamma f_0}\big(\sum_{i=1}^m\omega_it_{i,n}\big). \end{array} \right.\\ \end{array} \end{equation} Then the following hold: \begin{enumerate} \setlength{\itemsep}{0pt} \item \label{p:33i} Let $x$ be a solution to Problem~\ref{prob:33} and let $i\in\{1,\ldots,m\}$. Then $\nabla f_i(x_n)\to\nabla f_i(x)$. \item \label{p:33ii} $(x_n)_{n\in\ensuremath{\mathbb N}}$ converges to a solution to Problem~\ref{prob:33}. \item \label{p:33iv} Suppose that, for some $i\in\{0,\ldots,m\}$, $f_i$ is strongly convex. Then $(x_n)_{n\in\ensuremath{\mathbb N}}$ converges linearly to the unique solution to Problem~\ref{prob:33}. \end{enumerate} \end{proposition} A method related to \eqref{e:a4} is proposed in \cite{Mish20}; see also \cite{Mokh18} for a special case. 
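The block-update iteration \eqref{e:a4} can be sketched numerically as follows. This is a hedged Python illustration on a synthetic instance of Problem~\ref{prob:33} of our own choosing: $f_0=\alpha\|\cdot\|_1$ (whose proximity operator is soft-thresholding), quadratic terms $f_i=\|A_i\cdot-b_i\|^2/2$, uniform weights $\omega_i=1/m$, and cyclic singleton blocks, so that every index is updated at least once every $m$ iterations, as \eqref{e:K} requires.

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, alpha = 20, 5, 0.1
A = [rng.standard_normal((25, N)) for _ in range(m)]  # data for f_i = 0.5*||A_i x - b_i||^2
b = [rng.standard_normal(25) for _ in range(m)]
w = np.full(m, 1.0 / m)                               # weights omega_i, summing to 1

def grad(i, x):                                       # gradient of f_i
    return A[i].T @ (A[i] @ x - b[i])

def prox_f0(y, t):                                    # prox of t*alpha*||.||_1: soft-thresholding
    return np.sign(y) * np.maximum(np.abs(y) - t * alpha, 0.0)

delta = max(np.linalg.norm(Ai, 2) ** 2 for Ai in A)   # Lipschitz constants delta_i = ||A_i||^2
gamma = 1.0 / delta                                   # gamma in ]0, 2/max_i delta_i[

x = np.zeros(N)
t = [x.copy() for _ in range(m)]                      # stored forward steps t_{i,n}
for n in range(30000):
    i = n % m                                         # cyclic block: every i refreshed once per m steps
    t[i] = x - gamma * grad(i, x)
    x = prox_f0(sum(w[j] * t[j] for j in range(m)), gamma)
```

At a minimizer $x$, the fixed point identity $x=\ensuremath{\mathrm{prox}}_{\gamma f_0}\big(\sum_{i=1}^m\omega_i(x-\gamma\nabla f_i(x))\big)$ holds, which provides a convenient stopping test.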
Here is a data analysis application. \begin{example} \label{ex:23} Let $(e_k)_{1\leq k\leq N}$ be an orthonormal basis of $\mathcal H$ and, for every $k\in\{1,\ldots,N\}$, let $\psi_k\in\Gamma_0(\ensuremath{\mathbb R})$. For every $i\in\{1,\ldots,m\}$, let $0\neq a_i\in\mathcal H$, let $\mu_i\in\ensuremath{\left]0,+\infty\right[}$, and let $\phi_i\colon\ensuremath{\mathbb R}\to\ensuremath{\left[0,+\infty\right[}$ be a differentiable convex function such that $\phi_i'$ is $\mu_i$-Lipschitzian. The task is to \begin{equation} \label{e:prob9} \minimize{x\in\mathcal H}{\sum_{k=1}^N\psi_k(\scal{x}{e_k})+\dfrac{1}{m} \sum_{i=1}^m\phi_i(\scal{x}{a_i})}. \end{equation} As shown in \cite{Upda20}, \eqref{e:prob9} is an instantiation of \eqref{e:prob33} with weights $\omega_i=1/m$ and, given $\gamma\in\left]0,2/(\max_{1\leq i\leq m}\mu_i\|a_i\|^2)\right[$ and subsets $(I_n)_{n\in\ensuremath{\mathbb N}}$ of $\{1,\ldots,m\}$ such that \eqref{e:K} holds, it can be solved by \eqref{e:a4}, which becomes \begin{equation} \label{e:a41} \begin{array}{l} \text{for}\;n=0,1,\ldots\\ \left\lfloor \begin{array}{l} \text{for every}\;i\in I_n\\ \left\lfloor \begin{array}{l} t_{i,n}=x_n-\gamma\phi_i'(\scal{x_n}{a_i})a_i\\ \end{array} \right.\\ \text{for every}\;i\in\{1,\ldots,m\}\smallsetminus I_n\\ \left\lfloor \begin{array}{l} t_{i,n}=t_{i,n-1}\\ \end{array} \right.\\[1mm] y_n=\sum_{i=1}^m\omega_it_{i,n}\\ x_{n+1}=\sum_{k=1}^N\big( \ensuremath{\mathrm{prox}}_{\gamma\psi_k}\scal{y_n}{e_k}\big)e_k. \end{array} \right.\\ \end{array} \end{equation} A popular setting is obtained by choosing $\mathcal H=\ensuremath{\mathbb R}^N$ and $(e_k)_{1\leq k\leq N}$ as the canonical basis, $\alpha\in\ensuremath{\left]0,+\infty\right[}$, and, for every $k\in\{1,\ldots,N\}$, $\psi_k=\alpha|\cdot|$. After absorbing the factor $1/m$ into $\alpha$, this reduces \eqref{e:prob9} to \begin{equation} \label{e:prob91} \minimize{x\in\ensuremath{\mathbb R}^N}{\alpha\|x\|_1+\sum_{i=1}^m \phi_i(\scal{x}{a_i})}.
\end{equation} Choosing, for every $i\in\{1,\ldots,m\}$, $\phi_i\colon t\mapsto|t-\eta_i|^2$, where $\eta_i\in\ensuremath{\mathbb R}$ models an observation, yields the lasso formulation, whereas choosing $\phi_i\colon t\mapsto\ln(1+\exp(t))-\eta_it$, where $\eta_i\in\{0,1\}$ models a label, yields the penalized logistic regression framework \cite{Hast09}. \end{example} Next, we extend Problem~\ref{prob:5} to a flexible composite minimization problem. See \cite{Botr14,Chie15,Chie14,Chou19,Chou15,Icip14,Siim19,Ejst20,Moer15,Papa14,Pham14,Repe19} for concrete instantiations of this model in data science. \begin{problem} \label{prob:888} Let $\delta\in\ensuremath{\left]0,+\infty\right[}$ and let $f\in\Gamma_0(\mathcal H)$. For every $k\in\{1,\ldots,q\}$, let $g_k\in\Gamma_0(\mathcal G_k)$, let $0\neq L_k\colon\mathcal H\to\mathcal G_k$ be linear, and let $h_k\colon\mathcal G_k\to\ensuremath{\mathbb R}$ be a differentiable convex function, with a $\delta$-Lipschitzian gradient. Suppose that $\lim_{\|x\|\to\ensuremath{+\infty}}f(x)+\sum_{k=1}^q (g_k(L_kx)+ h_k(L_kx))=\ensuremath{+\infty}$ and that \begin{equation} \label{e:z3} (\ensuremath{\exists\,} z\in\ensuremath{\operatorname{ri}}\ensuremath{\mathrm{dom}\,} f)(\forall k\in\{1,\ldots,q\})\quad L_kz\in\ensuremath{\operatorname{ri}}\ensuremath{\mathrm{dom}\,} g_k. \end{equation} The task is to \begin{equation} \minimize{x\in\mathcal H}{f(x)+\sum_{k=1}^q\big(g_k(L_kx)+h_k(L_kx)\big)}. \end{equation} \end{problem} Thanks to the qualification condition \eqref{e:z3}, Problem~\ref{prob:888} is an instance of Problem~\ref{prob:88} where $A=\partial f$ and, for every $k\in\{1,\ldots,q\}$, $B_k=\partial g_k$ and $C_k=\nabla h_k$. Since the operators $(C_k)_{1\leq k\leq q}$ are $1/\delta$-cocoercive, the iterative algorithms from Propositions~\ref{p:71b}, \ref{prop:MLFB}, and~\ref{p:fb13b} are applicable.
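As a rough numerical illustration of this model class, consider the instance of Problem~\ref{prob:888} with $q=1$, $f=\|\cdot-b\|^2/2$, $g_1=\alpha\|\cdot\|_1$, $L_1$ a discrete difference matrix, and $h_1=0$, i.e., one-dimensional total-variation denoising. The Python sketch below runs a standard primal-dual iteration for this composite problem, written in the common Chambolle-Pock form; the data and parameters are hypothetical choices of ours, and we do not claim that this is the exact scheme of the propositions above. For $\alpha$ this large the minimizer is the constant signal with value $\operatorname{mean}(b)$, which gives a simple check.

```python
import numpy as np

rng = np.random.default_rng(1)
N, alpha = 12, 20.0
b = rng.uniform(0.0, 1.0, N)                        # noisy 1D signal (synthetic)
D = np.diff(np.eye(N), axis=0)                      # finite differences: (Dx)_i = x_{i+1} - x_i

tau = sigma = 0.45                                  # step sizes: tau*sigma*||D||^2 < 1 since ||D||^2 <= 4

def prox_f(y):                                      # prox of tau*f with f = 0.5*||. - b||^2
    return (y + tau * b) / (1.0 + tau)

def prox_gstar(u):                                  # prox of sigma*g_1^* for g_1 = alpha*||.||_1,
    return np.clip(u, -alpha, alpha)                # i.e., projection onto the sup-norm ball of radius alpha

x, v = np.zeros(N), np.zeros(N - 1)
for n in range(20000):
    x_new = prox_f(x - tau * D.T @ v)               # primal (forward-backward) step on f
    v = prox_gstar(v + sigma * D @ (2 * x_new - x)) # dual step on g_1^* with extrapolated primal point
    x = x_new
```

The check works because the dual certificate, formed by the partial sums of $b-\operatorname{mean}(b)$, lies strictly inside $[-\alpha,\alpha]^{N-1}$ when $\alpha$ exceeds $N-1$ here.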
For example, Proposition~\ref{p:fb13b} with the substitution $J_{\sigma^{-1} B_k}=\ensuremath{\mathrm{prox}}_{\sigma^{-1} g_k}$ (see Example~\ref{ex:mono2}) allows us to solve the problem. In particular, the resulting algorithm was proposed in \cite{Chen13,Lori11} in the case when $W=\tau\ensuremath{\mathrm{Id}}$ with $\tau\in\ensuremath{\left]0,+\infty\right[}$. See also \cite{Cham11,Optim14,Cond13,Cond20,Esse10,Xiao12,Komo15,Bang13} for related work. \begin{example} Let $o\in\ensuremath{\mathbb R}^N$ and let $M\in\ensuremath{\mathbb R}^{K\times N}$ be such that $\mathrm{I}_N-M^\top M$ is positive semidefinite. Let $\varphi\in\Gamma_0(\ensuremath{\mathbb R}^N)$ and let $C$ be a nonempty closed convex subset of $\ensuremath{\mathbb R}^N$. The denoising problem of \cite{Sele20} is cast as \begin{equation} \label{e:debnonconv} \minimize{x\in C}{\psi(x)+\frac12\|x-o\|^2}, \end{equation} where the function \begin{equation} \psi\colon x\mapsto \varphi(x)-\inf_{y\in\ensuremath{\mathbb R}^N}\Big( \varphi(y)+\frac12\|M(x-y)\|^2\Big) \end{equation} is generally nonconvex. However, \eqref{e:debnonconv} is a convex problem. Further developments can be found in \cite{Abe20}. Note that \eqref{e:debnonconv} is actually equivalent to Problem~\ref{prob:888} with $q=2$, $\mathcal H=\ensuremath{\mathbb R}^N\times\ensuremath{\mathbb R}^N$, $\mathcal G_1=\mathcal H$, $\mathcal G_2=\ensuremath{\mathbb R}^{N}$, $f\colon (x,y)\mapsto\varphi(x)$, $g_1\colon (x,y)\mapsto\iota_{C}(x)$, $h_1\colon (x,y)\mapsto x^\top(\mathrm{I}_N-M^\top M) x/2-\scal{x}{o}+\|My\|^2/2$, $g_2=\varphi^*$, $L_1=\ensuremath{\mathrm{Id}}$, $L_2\colon (x,y)\mapsto M^\top M (x-y)$, and $h_2=0$. \end{example} \begin{remark}[ADMM] \label{r:ADMM} Let us revisit the composite minimization problem of Proposition~\ref{p:17} and Example~\ref{ex:m+s}. Let $f\in\Gamma_0(\mathcal H)$, let $g\in\Gamma_0(\mathcal G)$, and let $L\colon\mathcal H\to\mathcal G$ be linear.
Suppose that $\lim_{\|x\|\to\ensuremath{+\infty}}f(x)+g(Lx)=\ensuremath{+\infty}$ and $\ensuremath{\operatorname{ri}}(L(\ensuremath{\mathrm{dom}\,} f))\cap\ensuremath{\operatorname{ri}}(\ensuremath{\mathrm{dom}\,} g)\neq\ensuremath{\varnothing}$. Then the problem \begin{equation} \label{e:r24p} \minimize{x\in\mathcal H}{f(x)+g(L x)} \end{equation} is a special case of Problem~\ref{prob:888} and it can therefore be solved by any of the methods discussed above. Now let $\gamma\in\ensuremath{\left]0,+\infty\right[}$ and let us make the following additional assumptions: \begin{enumerate} \item $L^*\circ L$ is invertible. \item The operator \[ \ensuremath{\mathrm{prox}}_{\gamma f}^L\colon\mathcal G\to\mathcal H\colon y\mapsto\underset{x\in\mathcal H}{\operatorname{argmin}} \bigg(\gamma f(x)+\dfrac{\|Lx-y\|^2}{2}\bigg) \] is easy to implement. \end{enumerate} Then, given $y_0\in\mathcal G$ and $z_0\in\mathcal G$, the alternating-direction method of multipliers (ADMM) constructs a sequence $(x_n)_{n\in\ensuremath{\mathbb N}}$ that converges to a solution to \eqref{e:r24p} via the iterations \cite{Boyd10,Ecks92,Gaba83,Glow89} \begin{equation} \begin{array}{l} \text{for}\;n=0,1,\ldots\\ \left\lfloor \begin{array}{l} x_n=\ensuremath{\mathrm{prox}}_{\gamma f}^L(y_n-z_n)\\ d_n=Lx_n\\ y_{n+1}=\ensuremath{\mathrm{prox}}_{\gamma g}(d_n+z_n)\\ z_{n+1}=z_n+d_n-y_{n+1}. \end{array} \right.\\[2mm] \end{array} \end{equation} This iteration process can be viewed as an application of the Douglas-Rachford algorithm \eqref{e:DRv} to the Fenchel dual of \eqref{e:r24p} \cite{Gaba83,Ecks92}. Variants of this algorithm are discussed in \cite{Bane21,Banf11,Ecks94}, and applications to image recovery in \cite{Afon10,Afon11,Figu10,Giov05,Gold09,Setz10}. \end{remark} \subsection{Inconsistent feasibility problems} \label{e:inc1} We consider a more structured variant of Problem~\ref{prob:1} which can also be viewed as an extension of Problem~\ref{prob:9}.
\begin{problem} \label{prob:10} Let $C$ be a nonempty closed convex subset of $\mathcal H$ and, for every $i\in\{1,\ldots,m\}$, let $L_i\colon\mathcal H\to\mathcal G_i$ be a nonzero linear operator and let $D_i$ be a nonempty closed convex subset of $\mathcal G_i$. The task is to \begin{multline} \label{e:split3} \text{find}\;\;x\in C\;\;\text{such that}\; (\forall i\in\{1,\ldots,m\})\;L_ix\in D_i. \end{multline} \end{problem} To address the possibility that this problem has no solution due to modeling errors \cite{Cens18,Sign94,Youl86}, we fix weights $(\omega_i)_{1\leq i\leq m}$ in $\rzeroun$ such that $\sum_{i=1}^m\omega_i=1$ and consider the surrogate problem \begin{equation} \label{e:hig1} \minimize{x\in C}{\frac12\sum_{i=1}^m\omega_id_{D_i}^2(L_ix)}, \end{equation} where $C$ acts as a hard constraint. This is a valid relaxation of \eqref{e:split3} in the sense that, if \eqref{e:split3} does have solutions, then those are the only solutions to \eqref{e:hig1}. Now set $f_0=\iota_C$. In addition, for every $i\in\{1,\ldots,m\}$, set $f_i\colon x\mapsto (1/2)d_{D_i}^2(L_ix)$ and notice that $f_i$ is differentiable and that its gradient $\nabla f_i=L_i^*\circ(\ensuremath{\mathrm{Id}}-\ensuremath{\mathrm{proj}}_{D_i})\circ L_i$ has Lipschitz constant $\delta_i=\|L_i\|^2$. Furthermore, \eqref{e:coer1} holds as long as $C$ is bounded or, for some $i\in\{1,\ldots,m\}$, $D_i$ is bounded and $L_i$ is invertible. We have thus cast \eqref{e:hig1} as an instance of Problem~\ref{prob:33} \cite{Upda20}. 
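A minimal Python sketch of this relaxation, with hypothetical data of our own choosing: the hard constraint is the box $C=[-1,1]^N$, the sets $D_i$ are boxes in the range spaces of random operators $L_i$, and \eqref{e:hig1} is solved by the projection-gradient iteration $x_{n+1}=\ensuremath{\mathrm{proj}}_C\big(x_n-\gamma\nabla F(x_n)\big)$ with $F=\frac12\sum_{i=1}^m\omega_id_{D_i}^2(L_i\cdot)$, i.e., the variant in which every block is updated at each iteration.

```python
import numpy as np

rng = np.random.default_rng(2)
N, m = 6, 3
L = [rng.standard_normal((4, N)) for _ in range(m)]          # operators L_i
lo = [np.full(4, 2.0), np.full(4, -3.0), np.full(4, 0.5)]    # boxes D_i = [lo_i, hi_i], chosen so that
hi = [np.full(4, 3.0), np.full(4, -2.0), np.full(4, 1.5)]    # the problem is likely inconsistent
w = np.full(m, 1.0 / m)                                      # weights omega_i

def proj_C(y):                                               # hard constraint C = [-1, 1]^N
    return np.clip(y, -1.0, 1.0)

def grad_F(x):                                               # gradient of F = 0.5*sum_i w_i d_{D_i}^2(L_i x)
    g = np.zeros(N)
    for i in range(m):
        z = L[i] @ x
        g += w[i] * L[i].T @ (z - np.clip(z, lo[i], hi[i]))  # L_i^*(Id - proj_{D_i})(L_i x)
    return g

gamma = 1.0 / max(np.linalg.norm(Li, 2) ** 2 for Li in L)    # gamma in ]0, 2/max_i ||L_i||^2[

x = np.zeros(N)
for n in range(100000):
    x = proj_C(x - gamma * grad_F(x))                        # projection-gradient step
```

The limit satisfies the fixed point identity $x=\ensuremath{\mathrm{proj}}_C\big(x-\gamma\nabla F(x)\big)$, which characterizes the solutions of \eqref{e:hig1}.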
In view of \eqref{e:a4}, a solution is found as the limit of the sequence $(x_n)_{n\in\ensuremath{\mathbb N}}$ produced by the block-update algorithm \begin{equation} \label{e:a34} \begin{array}{l} \text{for}\;n=0,1,\ldots\\ \left\lfloor \begin{array}{l} \text{for every}\;i\in I_n\\ \left\lfloor \begin{array}{l} t_{i,n}=x_n+\gamma L_i^*\big(\ensuremath{\mathrm{proj}}_{D_i}(L_ix_n)-L_ix_n\big)\\ \end{array} \right.\\ \text{for every}\;i\in\{1,\ldots,m\}\smallsetminus I_n\\ \left\lfloor \begin{array}{l} t_{i,n}=t_{i,n-1}\\ \end{array} \right.\\[1mm] x_{n+1}=\ensuremath{\mathrm{proj}}_{C}\big(\sum_{i=1}^m\omega_it_{i,n}\big), \end{array} \right.\\ \end{array} \end{equation} where $\gamma$ and $(I_n)_{n\in\ensuremath{\mathbb N}}$ are as in Proposition~\ref{p:33}. \subsection{Stochastic forward-backward method} Consider the minimization of $f+g$, where $f\in\Gamma_0(\mathcal H)$ and $g\colon\mathcal H\to\ensuremath{\mathbb R}$ is a differentiable convex function. In certain applications, it may happen that only stochastic approximations to $f$ or $g$ are available. A generic stochastic form of the forward-backward algorithm for such instances is \cite{Comb16} \begin{equation} \label{e:FBstoch} (\forall n\in\ensuremath{\mathbb N})\quad x_{n+1}= x_n+\lambda_n\big(\ensuremath{\mathrm{prox}}_{\gamma_n f_n} (x_n-\gamma_nu_n)+a_n-x_n\big), \end{equation} where $\gamma_n\in\ensuremath{\left]0,+\infty\right[}$, $\lambda_n\in\rzeroun$, $f_n\in\Gamma_0(\mathcal H)$ is an approximation to $f$, $u_n$ is a random variable approximating $\nabla g(x_n)$, and $a_n$ is a random variable modeling a possible additive error. When $f=f_n=0$, $\lambda_n=1$, and $a_n=0$, we recover the standard stochastic gradient method for minimizing $g$, which was pioneered in \cite{Ermo69,Ermo66}. \begin{example} As in Problem~\ref{prob:33}, let $f\in\Gamma_0(\mathcal H)$ and let $g=m^{-1}\sum_{i=1}^m g_i$, where each $g_i\colon\mathcal H\to\ensuremath{\mathbb R}$ is a differentiable convex function. 
The following specialization of \eqref{e:FBstoch} is obtained by setting, for every $n\in\ensuremath{\mathbb N}$, $f_n=f$ and $u_n=\nabla g_{\mathrm{i}(n)}(x_n)$, where $\mathrm{i}(n)$ is a $\{1,\ldots,m\}$-valued random variable. This leads to the incremental proximal stochastic gradient algorithm described by the update equation \begin{equation} x_{n+1}=x_n+\lambda_n\Big(\ensuremath{\mathrm{prox}}_{\gamma_n f} \big(x_n-\gamma_n \nabla g_{\mathrm{i}(n)}(x_n)\big) -x_n\Big). \end{equation} For related algorithms, see \cite{Bert11,Def14a,Defa14,John13,Schm17}. \end{example} Various convergence results have been established for algorithm~\eqref{e:FBstoch}. If $\nabla g$ is Lipschitzian, \eqref{e:FBstoch} is closely related to the fixed point iteration in Theorem~\ref{t:1stoch}. The almost sure convergence of $(x_n)_{n\in\ensuremath{\mathbb N}}$ to a minimizer of $f+g$ can be guaranteed in several scenarios \cite{Atch17,Comb16,Rosa20}. Fixed point strategies allow us to derive convergence results such as the following. \begin{theorem}[\cite{Comb16}] \label{t:23} Let $f\in\Gamma_0(\mathcal H)$, let $\delta\in\ensuremath{\left]0,+\infty\right[}$, and let $g\colon\mathcal H\to\ensuremath{\mathbb R}$ be a differentiable convex function such that $\nabla g$ is $\delta$-Lipschitzian and $S=\ensuremath{\mathrm{Argmin}\,}(f+g)\neq\ensuremath{\varnothing}$. Let $\gamma\in\left]0,2/\delta\right[$ and let $(\lambda_n)_{n\in\ensuremath{\mathbb N}}$ be a sequence in $\left]0,1\right]$ such that $\sum_{n\in\ensuremath{\mathbb N}}\lambda_n=\ensuremath{+\infty}$. Let $x_0$, $(u_n)_{n\in\ensuremath{\mathbb N}}$, and $(a_n)_{n\in\ensuremath{\mathbb N}}$ be $\mathcal H$-valued random variables with finite second-order moments. Let $(x_n)_{n\in\ensuremath{\mathbb N}}$ be a sequence produced by \eqref{e:FBstoch} with $\gamma_n=\gamma$ and $f_n=f$. 
For every $n\in\ensuremath{\mathbb N}$, let $\ensuremath{\EuScript{X}}_n$ be the $\sigma$-algebra generated by $(x_0,\ldots,x_n)$ and set $\zeta_n=\EC{\|u_n-\EC{u_n}{\ensuremath{\EuScript{X}}_n}\|^2}{\ensuremath{\EuScript{X}}_n}$. Assume that the following are satisfied \ensuremath{\text{a.~\!s.}}: \begin{enumerate} \item \label{a:t2i} $\sum_{n\in\ensuremath{\mathbb N}}\lambda_n\sqrt{\EC{\|a_n\|^2}{\ensuremath{\EuScript{X}}_n}}<\ensuremath{+\infty}$. \item \label{a:t2ii} $\sum_{n\in\ensuremath{\mathbb N}}\sqrt{\lambda_n} \|\EC{u_n}{\ensuremath{\EuScript{X}}_n}-\nabla g(x_n)\|<\ensuremath{+\infty}$. \item \label{a:t2iii} $\sup_{n\in\ensuremath{\mathbb N}} \zeta_n<\ensuremath{+\infty}$ and $\sum_{n\in\ensuremath{\mathbb N}} \sqrt{\lambda_n\zeta_n}<\ensuremath{+\infty}$. \end{enumerate} Then $(x_n)_{n\in\ensuremath{\mathbb N}}$ converges $\ensuremath{\text{a.~\!s.}}$ to an $S$-valued random variable. \end{theorem} Extensions of these stochastic optimization approaches can be designed by introducing an inertial parameter \cite{Rosa16b} or by bringing into play primal-dual formulations \cite{Comb16}. \subsection{Random block-coordinate optimization algorithms} We design block-coordinate versions of optimization algorithms presented in Section~\ref{sec:convmin}, in which blocks of variables are updated randomly. \begin{problem} \label{prob:2} For every $i\in\{1,\ldots,m\}$ and $k\in\{1,\ldots,q\}$, let $f_i\in\Gamma_0(\mathcal H_i)$, let $g_k\in\Gamma_0(\mathcal G_k)$, and let $0\neq L_{k,i}\colon\mathcal H_i\to\mathcal G_k$ be linear. Suppose that \begin{multline} (\ensuremath{\exists\,}\boldsymbol{z}\in\ensuremath{\boldsymbol{\mathcal H}})(\ensuremath{\exists\,}\boldsymbol{w}\in\ensuremath{\boldsymbol{\mathcal G}}) (\forall i\in\{1,\ldots,m\}) (\forall k\in\{1,\ldots,q\})\\ -\sum_{j=1}^qL_{j,i}^*w_j\in\partial f_i(z_i) \;\:\text{and}\:\; \sum_{j=1}^mL_{k,j}z_j\in \partial g_k^*(w_{k}). 
\end{multline} The task is to \begin{equation} \label{p:probstocopt} \minimize{\boldsymbol{x}\in\ensuremath{\boldsymbol{\mathcal H}}} {\sum_{i=1}^m f_i(x_i)+\sum_{k=1}^q g_k\bigg(\sum_{i=1}^m L_{k,i}x_i\bigg)}. \end{equation} \end{problem} Let $\gamma\in\ensuremath{\left]0,+\infty\right[}$, let $(\lambda_n)_{n\in\ensuremath{\mathbb N}}$ be a sequence in $\left]0,2\right[$, and set \begin{multline} \boldsymbol{V}=\bigg\{(x_1,\ldots,x_m,y_1,\ldots,y_q)\in \ensuremath{\boldsymbol{\mathcal H}}\times\ensuremath{\boldsymbol{\mathcal G}}\\ \bigg|~ (\forall k\in\{1,\ldots,q\})\;y_k= \sum_{i=1}^m L_{k,i}x_i\bigg\} \end{multline} Let us decompose $\ensuremath{\mathrm{proj}}_{\boldsymbol{V}}$ as $\ensuremath{\mathrm{proj}}_{\boldsymbol{V}}\colon\boldsymbol{x}\mapsto ({Q}_j\boldsymbol{x})_{1\leq j\leq m+q}$. A random block-coordinate form of the Douglas-Rachford algorithm for solving Problem~\ref{prob:2} is \cite{Comb15} \begin{equation} \label{e:DRbcr} \begin{array}{l} \text{for}\;n=0,1,\ldots\\ \left\lfloor \begin{array}{l} \text{for}\;i=1,\ldots,m\\ \left\lfloor \begin{array}{l} z_{i,n+1}=z_{i,n}+\varepsilon_{i,n} \big({Q}_i(\boldsymbol{x}_n,\boldsymbol{y}_n) -z_{i,n}\big)\\[1mm] x_{i,n+1}=x_{i,n}\\ \qquad\qquad+\varepsilon_{i,n}\lambda_n \big(\ensuremath{\mathrm{prox}}_{\gamma f_i}(2z_{i,n+1}-x_{i,n})-z_{i,n+1}\big) \end{array} \right.\\ \text{for}\;k=1,\ldots,q\\ \left\lfloor \begin{array}{l} w_{k,n+1}=w_{k,n}+\varepsilon_{m+k,n} \big({Q}_{m+k}(\boldsymbol{x}_n,\boldsymbol{y}_n) -w_{k,n}\big)\\[1mm] y_{k,n+1}=y_{k,n}\\ \;\;+\varepsilon_{m+k,n}\lambda_n \big(\ensuremath{\mathrm{prox}}_{\gamma g_k} (2w_{k,n+1}-y_{k,n})-w_{k,n+1}\big), \end{array} \right. \end{array} \right.\\ \end{array} \end{equation} where $\boldsymbol{x}_n=(x_{i,n})_{1\leq i\leq m}$ and $\boldsymbol{y}_n=(y_{k,n})_{1\leq k\leq q}$. Moreover, $(\varepsilon_{j,n})_{1\leq j\leq m+q,n\in\ensuremath{\mathbb N}}$ are binary random variables signaling the activated components. 
\begin{proposition}[\cite{Comb15}] \label{p:l4} Let $\boldsymbol{S}$ be the set of solutions to Problem~\ref{prob:2} and set $D=\{0,1\}^{m+q}\smallsetminus\{\boldsymbol{0}\}$. Let $\gamma\in\ensuremath{\left]0,+\infty\right[}$, let $\epsilon\in\zeroun$, let $(\lambda_n)_{n\in\ensuremath{\mathbb N}}$ be in $\left[\epsilon,2-\epsilon\right]$, let $\boldsymbol{x}_0$ and $\boldsymbol{z}_0$ be $\ensuremath{\boldsymbol{\mathcal H}}$-valued random variables, let $\boldsymbol{y}_0$ and $\boldsymbol{w}_0$ be $\ensuremath{\boldsymbol{\mathcal G}}$-valued random variables, and let $(\boldsymbol{\varepsilon}_n)_{n\in\ensuremath{\mathbb N}}$ be identically distributed $D$-valued random variables. In addition, suppose that the following hold: \begin{enumerate} \item \label{c:2014-04-09iiv-} For every $n\in\ensuremath{\mathbb N}$, $\boldsymbol{\varepsilon}_n$ and $(\boldsymbol{x}_0,\ldots,\boldsymbol{x}_n,\boldsymbol{y}_0, \ldots,\boldsymbol{y}_n)$ are mutually independent. \item \label{c:2014-04-09iiv} $(\forall j\in\{1,\ldots,m+q\})$ $\ensuremath{\mathsf{Prob}\,}[\varepsilon_{j,0}=1]>0$. \end{enumerate} Then the sequence $(\boldsymbol{z}_n)_{n\in\ensuremath{\mathbb N}}$ generated by \eqref{e:DRbcr} converges $\ensuremath{\text{a.~\!s.}}$ to an $\boldsymbol{S}$-valued random variable. \end{proposition} Applications based on Proposition~\ref{p:l4} appear in the areas of machine learning \cite{Nume19} and binary logistic regression \cite{Bric19}. 
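A hedged Python sketch of \eqref{e:DRbcr} in the toy case $m=q=1$, $\mathcal H_1=\mathcal G_1=\ensuremath{\mathbb R}^N$, $L_{1,1}=\ensuremath{\mathrm{Id}}$, $f_1=\iota_C$ for a box $C$, and $g_1=\|\cdot-b\|^2/2$ (our choices): here $\boldsymbol{V}$ is the diagonal of $\ensuremath{\mathbb R}^N\times\ensuremath{\mathbb R}^N$, so $\ensuremath{\mathrm{proj}}_{\boldsymbol{V}}(x,y)=((x+y)/2,(x+y)/2)$, and the solution of \eqref{p:probstocopt} is $\ensuremath{\mathrm{proj}}_C(b)$, which $(z_n)$ approaches in accordance with Proposition~\ref{p:l4}.

```python
import numpy as np

rng = np.random.default_rng(3)
N, gamma, lam = 4, 1.0, 1.0
b = np.array([2.0, 0.5, -3.0, 0.2])
lo, hi = -1.0, 1.0                                  # box C = [lo, hi]^N

def prox_f(u):                                      # prox of gamma*iota_C = projection onto C
    return np.clip(u, lo, hi)

def prox_g(u):                                      # prox of gamma*0.5*||. - b||^2
    return (u + gamma * b) / (1.0 + gamma)

x = np.zeros(N); y = np.zeros(N)                    # governing variables of the iteration
z = np.zeros(N); w = np.zeros(N)                    # (z_n) is the sequence that approaches the solution
for n in range(5000):
    e1, e2 = rng.integers(0, 2, size=2)             # random activation with Prob[eps_j = 1] = 1/2 > 0
    if e1 == 0 and e2 == 0:                         # the all-zero activation pattern is excluded
        continue
    q = 0.5 * (x + y)                               # both components of proj_V(x_n, y_n)
    if e1:
        z = q
        x = x + lam * (prox_f(2 * z - x) - z)
    if e2:
        w = q
        y = y + lam * (prox_g(2 * w - y) - w)
```

With full activation ($\varepsilon\equiv 1$) this reduces to the Douglas-Rachford iteration in the product space, and the random activations only delay the same contraction.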
If the functions $(g_k)_{1\leq k\leq q}$ are differentiable in Problem~\ref{prob:2}, a block-coordinate version of the forward-backward algorithm can also be employed, namely, \begin{equation} \label{e:bcFB} \begin{array}{l} \text{for}\;n=0,1,\ldots\\ \left\lfloor \begin{array}{l} \text{for}\;i=1,\ldots,m\\ \left\lfloor \begin{array}{l} r_{i,n}=\varepsilon_{i,n}\Big(x_{i,n}-\\ \hskip 18mm \gamma_{i,n}\sum_{k=1}^q L_{k,i}^*\Big(\nabla g_k\big(\sum_{j=1}^mL_{k,j} x_{j,n}\big)\Big)\Big)\\[2mm] x_{i,n+1}=x_{i,n}+\varepsilon_{i,n}\lambda_n \big(\ensuremath{\mathrm{prox}}_{\gamma_{i,n}f_i}r_{i,n}-x_{i,n}\big), \end{array} \right. \end{array} \right.\\ \end{array} \end{equation} where $\gamma_{i,n}\in\ensuremath{\left]0,+\infty\right[}$ and $\lambda_n\in\rzeroun$. The convergence of \eqref{e:bcFB} has been investigated in various settings in terms of the expected value of the cost function \cite{Neco16,Rich14,Rich16,Salz19}, the mean square convergence of the iterates \cite{Comb19,Rich14,Rich16}, or the almost sure convergence of the iterates \cite{Comb15,Salz19}. It is shown in \cite{Salz19} that algorithms such as the so-called \emph{random Kaczmarz method} for solving linear systems are special cases of \eqref{e:bcFB}. A noteworthy feature of the block-coordinate forward-backward algorithm \eqref{e:bcFB} is that, at iteration $n$, it allows for the use of distinct parameters $(\gamma_{i,n})_{1\leq i\leq m}$ to update each component. This was observed to be beneficial to the convergence profile in several applications \cite{Chou16,Rich14}. See also \cite{Salz19} for further developments along these lines. \subsection{Block-iterative multivariate minimization algorithms} We investigate a specialization of a primal-dual version of the multivariate inclusion Problem~\ref{prob:82} in the context of Problem~\ref{prob:2}. \begin{problem} \label{prob:3} Consider the setting of Problem~\ref{prob:2}.
The task is to solve the primal minimization problem \begin{equation} \label{e:12p} \minimize{\boldsymbol{x}\in\ensuremath{\boldsymbol{\mathcal H}}} {\sum_{i=1}^mf_i(x_i)+\sum_{k=1}^q g_k\bigg(\sum_{i=1}^mL_{k,i}x_i\bigg)}, \end{equation} along with its dual problem \begin{equation} \label{e:12d} \minimize{\boldsymbol{v}^*\in\ensuremath{\boldsymbol{\mathcal G}}} {\sum_{i=1}^mf_i^*\bigg(-\sum_{k=1}^qL_{k,i}^*v^*_k\bigg) +\sum_{k=1}^qg^*_k(v^*_k)}. \end{equation} \end{problem} We solve Problem~\ref{prob:3} with algorithm \eqref{e:n03a} by replacing $J_{\gamma_{i,n}A_i}$ by $\ensuremath{\mathrm{prox}}_{\gamma_{i,n}f_i}$ and $J_{\mu_{k,n}B_k}$ by $\ensuremath{\mathrm{prox}}_{\mu_{k,n}g_k}$. This block-iterative method then produces a sequence $(\boldsymbol{x}_n)_{n\in\ensuremath{\mathbb N}}$ which converges to a solution to \eqref{e:12p} and a sequence $(\boldsymbol{v}^*_n)_{n\in\ensuremath{\mathbb N}}$ which converges to a solution to \eqref{e:12d} \cite{MaPr18}. Examples of problems that conform to the format of Problems~\ref{prob:2} or \ref{prob:3} are encountered in image processing \cite{Berg16,Nmtm09,Jmiv11} as well as in machine learning \cite{Argy12,Bach12,Nume19,Jaco09,Jena11,McDo16,Vill14,Yuan06}. \subsection{Splitting based on Bregman distances} The notion of a Bregman distance goes back to \cite{Breg67} and it has been used since the 1980s in signal recovery; see \cite{Byrn01,Cens97}. Let $\varphi\in\Gamma_0(\mathcal H)$ be strictly convex, and differentiable on $\ensuremath{\mathrm{int\,dom}\,}\varphi\neq\ensuremath{\varnothing}$ (more precisely, we require a Legendre function, see \cite{Baus97,Sico03} for the technical details). The associated \emph{Bregman distance} between two points $x$ and $y$ in $\mathcal H$ is \begin{equation} \label{e:Df} D_\varphi(x,y)= \begin{cases} \varphi(x)-\varphi(y)-\scal{x-y}{\nabla \varphi(y)},\\ \hskip 21mm \text{if}\;\;y\in\ensuremath{\mathrm{int\,dom}\,} \varphi;\\ \ensuremath{+\infty},\hskip 13mm \text{otherwise}. 
\end{cases} \end{equation} This construction captures many interesting discrepancy measures in data analysis such as the Kullback-Leibler divergence. Another noteworthy instance is when $\varphi=\|\cdot\|^2/2$, which yields $D_\varphi(x,y)=\|x-y\|^2/2$ and suggests extending standard tools such as projection and proximity operators (see Theorems~\ref{t:11} and \ref{t:12}) by replacing the quadratic kernel by a Bregman distance \cite{Baus97,Sico03,Breg67,Cens92,Ecks93,Tebo92}. For instance, under mild conditions on $f\in\Gamma_0(\mathcal H)$ \cite{Sico03}, the \emph{Bregman proximal point} of $y\in\ensuremath{\mathrm{int\,dom}\,}\varphi$ relative to $f$ is the unique point $\ensuremath{\mathrm{prox}}^\varphi_fy$ which solves \begin{equation} \label{e:Dprox} \minimize{p\in\ensuremath{\mathrm{int\,dom}\,}\varphi}{f(p)+D_\varphi(p,y)}. \end{equation} The \emph{Bregman projection} $\ensuremath{\mathrm{proj}}_C^\varphi y$ of $y$ onto a nonempty closed convex set $C$ in $\mathcal H$ is obtained by setting $f=\iota_C$ above. Various algorithms such as the POCS algorithm \eqref{e:g3} or the proximal point algorithm \eqref{e:sd2} have been extended in the context of Bregman distances \cite{Baus97,Sico03}. For instance, \cite{Baus97} establishes the convergence to a solution to Problem~\ref{prob:1} of a notable extension of POCS in which the Bregman projections onto the sets are performed in an arbitrary order, namely \begin{equation} \label{e:h} (\forall n\in\ensuremath{\mathbb N})\quad x_{n+1}=\ensuremath{\mathrm{proj}}^\varphi_{C_{\mathrm{i}(n)}}x_n, \end{equation} where $\mathrm{i}\colon\ensuremath{\mathbb N}\to\{1,\ldots,m\}$ is such that, for every $p\in\ensuremath{\mathbb N}$ and every $j\in\{1,\ldots,m\}$, there exists $n\geq p$ such that $\mathrm{i}(n)=j$. A motivation for such extensions is that, for certain functions, proximal points are easier to compute in the Bregman sense than in the standard quadratic sense \cite{Baus17,Joca16,Nguy17}.
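As a concrete instance of \eqref{e:h}, take $\mathcal H=\ensuremath{\mathbb R}^{K\times K}$, let $\varphi$ be the negative entropy $x\mapsto\sum_{i,j}(x_{ij}\ln x_{ij}-x_{ij})$, whose Bregman distance is the generalized Kullback-Leibler divergence, and let $C_1$ and $C_2$ prescribe the row sums and the column sums of a positive matrix. The Bregman projections then reduce to row and column rescalings, and the cyclic scheme \eqref{e:h} becomes the classical iterative proportional fitting (Sinkhorn) procedure; the marginals in the Python sketch below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
K = 5
r = rng.uniform(0.5, 1.5, K); r /= r.sum()         # prescribed row sums (hypothetical marginal)
c = rng.uniform(0.5, 1.5, K); c /= c.sum()         # prescribed column sums, same total mass

def proj_rows(x):                                  # Bregman (KL) projection onto {x : row sums = r}
    return x * (r / x.sum(axis=1))[:, None]        # amounts to rescaling each row

def proj_cols(x):                                  # Bregman (KL) projection onto {x : column sums = c}
    return x * (c / x.sum(axis=0))[None, :]        # amounts to rescaling each column

x = rng.uniform(0.1, 1.0, (K, K))                  # positive starting matrix x_0
for n in range(500):
    x = proj_cols(proj_rows(x))                    # one sweep of the cyclic scheme
```

After a row (resp. column) projection the corresponding marginal is matched exactly, and alternating the two drives both marginals to their targets.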
Some work has also focused on monotone operator splitting using Bregman distances as an extension of standard methods \cite{Joca16}. The Bregman version of the basic forward-backward minimization method of Proposition~\ref{p:fb17}, namely, \begin{equation} \label{e:FB4} \begin{array}{l} \text{for}\;n=0,1,\ldots\\ \left\lfloor \begin{array}{l} u_n=\nabla\varphi(x_n)-\gamma_n\nabla g(x_n)\\ x_{n+1}=\big(\nabla\varphi+\gamma_n\partial f\big)^{-1}u_n \end{array} \right.\\[2mm] \end{array} \end{equation} has also been investigated in \cite{Baus17,Buim20,Nguy17} (note that the standard quadratic kernel corresponds to $\nabla\varphi=\ensuremath{\mathrm{Id}}$). In these papers, it was shown to converge in instances when \eqref{e:FB2} cannot be used because $\nabla g$ is not Lipschitzian. \section{Fixed point modeling of Nash equilibria} \label{sec:6} In addition to the notation of Section~\ref{sec:not}, given $i\in\{1,\ldots,m\}$, $x_i\in\mathcal H_i$, and $\boldsymbol{y}\in\ensuremath{\boldsymbol{\mathcal H}}$, we set \begin{equation} \begin{cases} \ensuremath{\boldsymbol{\mathcal H}}_{\smallsetminus i}= \mathcal H_1\times\cdots\times\mathcal H_{i-1}\times\mathcal H_{i+1}\times \cdots\times\mathcal H_{m}\\ \boldsymbol{y}_{\smallsetminus i}=(y_j)_{1\leq j\leq m, j\neq i}\\ (x_i;\boldsymbol{y}_{\smallsetminus i})=(y_1,\ldots,y_{i-1},x_i, y_{i+1},\ldots,y_m). \end{cases} \end{equation} In various problems arising in signal recovery \cite{Aujo04,Aujo06,Berg16,Nmtm09,Jmiv11,Dani12,Darb20,Demo04}, telecommunications \cite{Lasa11,Scut10}, machine learning \cite{Brav18,Dasg19}, network science \cite{Yip19a,Yinh11}, and control \cite{Belg19,Borz13,Zhan19}, the solution is not a single vector but a collection of vectors $\boldsymbol{x}=(x_1,\ldots,x_m)\in\ensuremath{\boldsymbol{\mathcal H}}$ representing the actions of $m$ competing players.
Oftentimes, such solutions cannot be modeled via a standard minimization problem of the form \begin{equation} \label{e:s4} \minimize{\boldsymbol{x}\in\ensuremath{\boldsymbol{\mathcal H}}}{\boldsymbol{h}(\boldsymbol{x})} \end{equation} for some function $\boldsymbol{h}\colon\ensuremath{\boldsymbol{\mathcal H}}\to\ensuremath{\left]-\infty,+\infty\right]}$, but rather as a Nash equilibrium \cite{Nash51}. In this game-theoretic setting \cite{Lara19}, player $i$ aims at minimizing its individual loss (or negative payoff) function $\boldsymbol{h}_i\colon\ensuremath{\boldsymbol{\mathcal H}}\to\ensuremath{\left]-\infty,+\infty\right]}$, which incorporates the actions of the other players. An action profile $\overline{\boldsymbol{x}}\in\ensuremath{\boldsymbol{\mathcal H}}$ is called a \emph{Nash equilibrium} if unilateral deviations from it are not profitable, i.e., \begin{equation} \label{e:u4} (\forall i\in\{1,\ldots,m\})\quad\boldsymbol{h}_i (\overline{x}_i;\overline{\boldsymbol{x}}_{\smallsetminus i}) =\min_{x_i\in\mathcal H_i}{\boldsymbol{h}_i (x_i;\overline{\boldsymbol{x}}_{\smallsetminus i})}. \end{equation} In other words, if \begin{multline} \label{e:best} \!\!\ensuremath{\mathrm{best}}_i\colon\ensuremath{\boldsymbol{\mathcal H}}_{\smallsetminus i}\to 2^{\mathcal H_i}\colon \boldsymbol{x}_{\smallsetminus i}\mapsto\\ \quad\menge{x_i\in\mathcal H_i}{(\forall y_i\in\mathcal H_i)\: \boldsymbol{h}_i(y_i;\boldsymbol{x}_{\smallsetminus i})\geq \boldsymbol{h}_i(x_i;\boldsymbol{x}_{\smallsetminus i})} \end{multline} denotes the \emph{best response operator} of player $i$, $\overline{\boldsymbol{x}}\in\ensuremath{\boldsymbol{\mathcal H}}$ is a Nash equilibrium if and only if \begin{equation} \label{e:u2} (\forall i\in\{1,\ldots,m\})\quad\overline{x}_i\in \ensuremath{\mathrm{best}}_i(\overline{\boldsymbol{x}}_{\smallsetminus i}).
\end{equation} This property can also be expressed in terms of the set-valued operator \begin{equation} \label{ma719} \boldsymbol{B}\colon\ensuremath{\boldsymbol{\mathcal H}}\to 2^{\ensuremath{\boldsymbol{\mathcal H}}}\colon \boldsymbol{x}\mapsto \ensuremath{\mathrm{best}}_1(\boldsymbol{x}_{\smallsetminus 1})\times\cdots\times \ensuremath{\mathrm{best}}_m(\boldsymbol{x}_{\smallsetminus m}). \end{equation} Thus, a point $\boldsymbol{\overline{x}}\in\ensuremath{\boldsymbol{\mathcal H}}$ is a Nash equilibrium if and only if it is a fixed point of $\boldsymbol{B}$ in the sense that $\overline{\boldsymbol{x}}\in\boldsymbol{B} \overline{\boldsymbol{x}}$. \subsection{Cycles in the POCS algorithm} \label{sec:cycles} Let us go back to feasibility and Problem~\ref{prob:1}. The POCS algorithm \eqref{e:g3} converges to a solution to the feasibility problem \eqref{e:cfp1} when one exists. Now suppose that Problem~\ref{prob:1} is inconsistent, with $C_1$ bounded. Then, as seen in Example~\ref{ex:20}, in the case of $m=2$ sets, the sequence $(x_{2n})_{n\in\ensuremath{\mathbb N}}$ produced by the alternating projection algorithm \eqref{e:u8}, written as \begin{equation} \label{e:23} \begin{array}{l} \text{for}\;n=0,1,\ldots\\ \left\lfloor \begin{array}{l} x_{2n+1}=\ensuremath{\mathrm{proj}}_{C_2}x_{2n}\\ x_{2n+2}=\ensuremath{\mathrm{proj}}_{C_1}x_{2n+1}, \end{array} \right.\\[2mm] \end{array} \end{equation} converges to a point $\overline{x}_1\in\ensuremath{\text{\rm Fix}\,}(\ensuremath{\mathrm{proj}}_{C_1}\circ\ensuremath{\mathrm{proj}}_{C_2})$, i.e., to a minimizer of $d_{C_2}$ over $C_1$. More precisely \cite{Che59a}, if we set $\overline{x}_2=\ensuremath{\mathrm{proj}}_{C_2}\overline{x}_1$, then $\overline{x}_1=\ensuremath{\mathrm{proj}}_{C_1}\overline{x}_2$ and $(\overline{x}_1,\overline{x}_2)$ solves \begin{equation} \label{e:jeux1} \minimize{x_1\in C_1,\,x_2\in C_2}{\|x_1-x_2\|}. 
\end{equation} An extension of the alternating projection method \eqref{e:23} to $m$ sets is the POCS algorithm \eqref{e:g3}, which we write as \begin{equation} \label{e:pocs2} \begin{array}{l} \text{for}\;n=0,1,\ldots\\ \left\lfloor \begin{array}{ll} x_{mn+1}&\hskip -3mm=\ensuremath{\mathrm{proj}}_{C_m}x_{mn}\\ x_{mn+2}&\hskip -3mm=\ensuremath{\mathrm{proj}}_{C_{m-1}}x_{mn+1}\\ &\hskip -2mm\vdots\\ x_{mn+m}&\hskip -3mm=\ensuremath{\mathrm{proj}}_{C_1}x_{mn+m-1}. \end{array} \right.\\[2mm] \end{array} \end{equation} As first shown in \cite{Gubi67} (this is also a consequence of Theorem~\ref{t:5}), for every $i\in\{1,\ldots,m\}$, $(x_{mn+i})_{n\in\ensuremath{\mathbb N}}$ converges to a point $\overline{x}_{m+1-i}\in C_{m+1-i}$; in addition $(\overline{x}_i)_{1\leq i\leq m}$ forms a \emph{cycle} in the sense that (see Fig.~\ref{fig:2}) \begin{multline} \label{e:21c} \overline{x}_1=\ensuremath{\mathrm{proj}}_{C_1}\overline{x}_2,\;\ldots,\; \overline{x}_{m-1}=\ensuremath{\mathrm{proj}}_{C_{m-1}}\overline{x}_m,\\ \text{and}\quad \overline{x}_m=\ensuremath{\mathrm{proj}}_{C_m}\overline{x}_1. 
\end{multline} \begin{figure}[h!tb] \begin{center} \scalebox{0.43}{ \begin{pspicture}(1,-4.85)(18.24,6.1) \definecolor{color96b}{rgb}{0.9,0.90,1.0} \definecolor{red1}{rgb}{0.0,0.60,0.6} \definecolor{red2}{rgb}{0.90,0.0,0.0} \rput{18.0}(-0.18953674,-2.8316424)% {\psellipse[linewidth=0.06,dimen=outer,fillstyle=solid,% fillcolor=color96b](8.844375,-2.0141652)(5.0,1.5)} \psellipse[linewidth=0.06,dimen=outer,fillstyle=solid,% fillcolor=color96b](11.444375,3.185835)(5.8,1.8) \psellipse[linewidth=0.06,dimen=outer,fillstyle=solid,% fillcolor=color96b](17.4,-2.18)(1.8,2.0) \psline[linewidth=0.06cm,linestyle=solid,linecolor=red1,% arrowsize=0.18cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}% (0.204375,0.20583488)(5.7243,2.926) \psline[linewidth=0.06cm,linestyle=solid,linecolor=red1,% arrowsize=0.18cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}% (5.7243,2.926)(7.304375,-0.994165) \psline[linewidth=0.06cm,linestyle=solid,linecolor=red1,% arrowsize=0.18cm 2.0,arrowlength=1.4,% arrowinset=0.4]{->}(7.304375,-0.994165)(15.63,-1.9) \psline[linewidth=0.06cm,linestyle=solid,linecolor=red1,% arrowsize=0.18cm 2.0,arrowlength=1.4,% arrowinset=0.4]{->}(15.63,-1.9)(14.70,1.74) \psline[linewidth=0.06cm,linestyle=solid,linecolor=red1,% arrowsize=0.18cm 2.0,arrowlength=1.4,% arrowinset=0.4]{->}(14.70,1.74)(13.25,-0.11) \psline[linewidth=0.06cm,arrowsize=0.18cm 2.0,arrowlength=1.4,% arrowinset=0.4,linecolor=red2]{->}(15.1,1.83)(13.54,-0.29) \psline[linewidth=0.06cm,arrowsize=0.18cm 2.0,arrowlength=1.4,% arrowinset=0.4,linecolor=red2]{->}(13.5,-0.36)(15.75,-1.36) \psline[linewidth=0.06cm,arrowsize=0.18cm 2.0,arrowlength=1.4,% arrowinset=0.4,linecolor=red2]{->}(15.8,-1.36)(15.1,1.78) \psdots[dotsize=0.18,linecolor=red1](0.24,0.22) \rput(8.0,-2.8){\LARGE $C_2$} \rput(10.8,3.3){\LARGE $C_3$} \rput(18.0,-2.0){\LARGE $C_1$} \rput(0.22,-0.20){\LARGE $x_0$} \rput(7.5,-1.4){\LARGE $x_2$} \rput(6.2,3.1){\LARGE $x_1$} \rput(16.0,-2.2){\LARGE $x_3$} \rput(14.3,2.04){\LARGE $x_4$} \rput(12.9,0.3){\LARGE $x_5$} 
\rput(13.1,-0.7){\LARGE $\overline{x}_2$} \rput(15.4,2.37){\LARGE $\overline{x}_3$} \rput(16.4,-1.33){\LARGE $\overline{x}_1$} \psdots[dotsize=0.18](15.1,1.83) \psdots[dotsize=0.18](13.5,-0.36) \psdots[dotsize=0.18](15.8,-1.36) \end{pspicture} } \caption{The POCS algorithm with $m=3$ sets and initialized at $x_0$ produces the cycle $(\overline{x}_1,\overline{x}_2,\overline{x}_3)$.} \label{fig:2} \end{center} \end{figure} As shown in \cite{Baill12}, in stark contrast with the case of $m=2$ sets and \eqref{e:jeux1}, there exists no function $\Phi\colon\mathcal H^m\to\ensuremath{\mathbb R}$ such that cycles solve the minimization problem \begin{equation} \label{e:best3} \minimize{x_1\in C_1,\ldots,\,x_m\in C_m}{\Phi(x_1,\ldots,x_m)}, \end{equation} which deprives cycles of a minimization interpretation. Nonetheless, cycles are equilibria in a more general sense, which can be described from three different perspectives. \begin{itemize} \item Fixed point theory: Define two operators $\boldsymbol{P}$ and $\boldsymbol{L}$ from $\mathcal H^m$ to $\mathcal H^m$ by \begin{equation} \begin{cases} \boldsymbol{P}\colon\boldsymbol{x}\mapsto (\ensuremath{\mathrm{proj}}_{C_1}x_1,\ldots,\ensuremath{\mathrm{proj}}_{C_m}x_m)\\ \boldsymbol{L}\colon\boldsymbol{x}\mapsto (x_2,\ldots,x_m,x_1). \end{cases} \end{equation} Then, in view of \eqref{e:21c}, the set of cycles is precisely the set of fixed points of $\boldsymbol{P\circ L}$, which is also the set of fixed points of $\boldsymbol{T}=\boldsymbol{P\circ F}$, where $\boldsymbol{F}=(\ensuremath{\boldsymbol{\mathrm{Id}}}+\boldsymbol{L})/2$ (see \cite[Corollary~26.3]{Livre1}). Since Example~\ref{ex:1} implies that $\boldsymbol{P}$ is firmly nonexpansive and since $\boldsymbol{L}$ is nonexpansive, $\boldsymbol{F}$ is firmly nonexpansive as well. It thus follows from Example~\ref{ex:roma} that the cycles are the fixed points of the $2/3$-averaged operator $\boldsymbol{T}$.
\item Game theory: Consider a game in $\mathcal H^m$ in which the goal of player $i$ is to minimize the loss \begin{equation} \label{e:u3} \boldsymbol{h}_i \colon(x_i;\boldsymbol{x}_{\smallsetminus i})\mapsto \iota_{C_i}(x_i)+\frac{1}{2}\|x_i-x_{i+1}\|^2, \end{equation} i.e., to be in $C_i$ and as close as possible to the action of player $i+1$ (with the convention $x_{m+1}=x_1$). Then a cycle $(\overline{x}_1,\ldots,\overline{x}_m)$ is a solution to \eqref{e:u4} and therefore a Nash equilibrium. Let us note that the best response operator of player $i$ is $\ensuremath{\mathrm{best}}_i\colon\boldsymbol{x}_{\smallsetminus i}\mapsto \ensuremath{\mathrm{proj}}_{C_i}x_{i+1}$. \item Monotone inclusion: Applying Fermat's rule to each line of \eqref{e:u4} in the setting of \eqref{e:u3}, and using \eqref{e:normalcone}, we obtain \begin{equation} \label{e:as} \begin{cases} 0\in N_{C_1}\overline{x}_1+\overline{x}_1-\overline{x}_2\\ \hskip 4mm\vdots\\ 0\in N_{C_{m-1}}\overline{x}_{m-1}+\overline{x}_{m-1} -\overline{x}_m\\ 0\in N_{C_m}\overline{x}_m+\overline{x}_m -\overline{x}_1. \end{cases} \end{equation} In terms of the maximally monotone operator $\boldsymbol{A}=N_{C_1\times\cdots\times C_m}$ and the cocoercive operator \begin{equation} \boldsymbol{B}\colon\boldsymbol{x}\mapsto (x_1-x_2,\ldots,{x}_{m-1}-{x}_m,x_m-x_1), \end{equation} \eqref{e:as} can be rewritten as an instance of Problem~\ref{prob:7} in $\mathcal H^m$, namely, $\boldsymbol{0}\in\boldsymbol{A}\overline{\boldsymbol{x}}+ \boldsymbol{B}\overline{\boldsymbol{x}}$. \end{itemize} \subsection{Proximal cycles} \label{sec:IIC} We have seen in Section~\ref{sec:cycles} a first example of a Nash equilibrium. 
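Such cycles are straightforward to observe numerically. The following minimal sketch (the three pairwise disjoint discs are arbitrary test data, not taken from the text) runs the sweeps of \eqref{e:pocs2} for $m=3$ and checks the cycle relations \eqref{e:21c}:

```python
import math

# Three pairwise disjoint discs in R^2 (hypothetical test data):
# the feasibility problem is inconsistent, so POCS produces a cycle.
DISCS = [((0.0, 0.0), 1.0), ((4.0, 0.0), 1.0), ((2.0, 3.0), 1.0)]  # C1, C2, C3

def proj_disc(x, disc):
    """Projection onto a closed disc of given center and radius."""
    (cx, cy), r = disc
    dx, dy = x[0] - cx, x[1] - cy
    d = math.hypot(dx, dy)
    if d <= r:
        return x
    return (cx + r * dx / d, cy + r * dy / d)

# One POCS sweep applies proj_{C3}, proj_{C2}, proj_{C1} in turn.
def sweep(x):
    for disc in reversed(DISCS):  # C3, then C2, then C1
        x = proj_disc(x, disc)
    return x

x = (10.0, -7.0)  # arbitrary starting point
for _ in range(500):
    x = sweep(x)

# The limit xbar1 lies in C1; recover the rest of the cycle and
# check that projecting around the cycle returns to xbar1.
xbar1 = x
xbar3 = proj_disc(xbar1, DISCS[2])   # xbar3 = proj_{C3} xbar1
xbar2 = proj_disc(xbar3, DISCS[1])   # xbar2 = proj_{C2} xbar3
closed = proj_disc(xbar2, DISCS[0])  # should coincide with xbar1
assert math.hypot(closed[0] - xbar1[0], closed[1] - xbar1[1]) < 1e-9
```

In this example the sweeps contract rapidly, so a few hundred iterations identify the cycle essentially to machine precision.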
This setting can be extended by replacing the indicator function $\iota_{C_i}$ in \eqref{e:u3} by a general function $\varphi_i\in\Gamma_0(\mathcal H)$ modeling the self-loss of player $i$, i.e., \begin{equation} \label{e:u5} \boldsymbol{h}_i\colon(x_i;\boldsymbol{x}_{\smallsetminus i})\mapsto \varphi_i(x_i)+\frac{1}{2}\|x_i-x_{i+1}\|^2. \end{equation} The solutions to the resulting problem \eqref{e:u4} are \emph{proximal cycles}, i.e., $m$-tuples $(\overline{x}_i)_{1\leq i\leq m}\in\mathcal H^m$ such that \begin{multline} \label{e:21d} \overline{x}_1=\ensuremath{\mathrm{prox}}_{\varphi_1}\overline{x}_2,\;\ldots,\; \overline{x}_{m-1}=\ensuremath{\mathrm{prox}}_{\varphi_{m-1}}\overline{x}_m,\\ \text{and}\quad \overline{x}_m=\ensuremath{\mathrm{prox}}_{\varphi_m}\overline{x}_1. \end{multline} Furthermore, the equivalent monotone inclusion and fixed point representations of the cycles in Section~\ref{sec:cycles} remain true with \begin{equation} \boldsymbol{P}\colon\ensuremath{\boldsymbol{\mathcal H}}\to\ensuremath{\boldsymbol{\mathcal H}}\colon\boldsymbol{x}\mapsto \big(\ensuremath{\mathrm{prox}}_{\varphi_1}x_1,\ldots,\ensuremath{\mathrm{prox}}_{\varphi_m}x_m\big) \end{equation} and $\boldsymbol{A}=\partial\boldsymbol{f}$, where $\boldsymbol{f}\colon\boldsymbol{x}\mapsto \sum_{i=1}^m\varphi_i(x_i)$. Here, the best response operator of player $i$ is $\ensuremath{\mathrm{best}}_i\colon\boldsymbol{x}_{\smallsetminus i} \mapsto\ensuremath{\mathrm{prox}}_{\varphi_i}x_{i+1}$. Examples of such cycles appear in \cite{Nmtm09,Smms05}. \subsection{Construction of Nash equilibria} A more structured version of the Nash equilibrium formulation \eqref{e:u4}, which captures \eqref{e:u5} and therefore \eqref{e:u3}, is provided next.
\begin{problem} \label{prob:j} For every $i\in\{1,\ldots,m\}$, let $\psi_i\in\Gamma_0(\mathcal H_i)$, let $\boldsymbol{f}_i\colon\ensuremath{\boldsymbol{\mathcal H}}\to\ensuremath{\left]-\infty,+\infty\right]}$, let $\boldsymbol{g}_i\colon\ensuremath{\boldsymbol{\mathcal H}}\to\ensuremath{\left]-\infty,+\infty\right]}$ be such that, for every $\boldsymbol{x}\in\ensuremath{\boldsymbol{\mathcal H}}$, $\boldsymbol{f}_i (\cdot;\boldsymbol{x}_{\smallsetminus i})\in\Gamma_0(\mathcal H_i)$ and $\boldsymbol{g}_i (\cdot;\boldsymbol{x}_{\smallsetminus i})\in\Gamma_0(\mathcal H_i)$. The task is to \begin{multline} \label{e:u7} \text{find}\;\;\overline{\boldsymbol{x}}\in\ensuremath{\boldsymbol{\mathcal H}} \quad\text{such that}\quad (\forall i\in\{1,\ldots,m\})\\ \overline{x}_i\in\Argmind{x_i\in\mathcal H_i}{\psi_i(x_i)+ \boldsymbol{f}_i(x_i;\overline{\boldsymbol{x}}_{\smallsetminus i})+ \boldsymbol{g}_i (x_i;\overline{\boldsymbol{x}}_{\smallsetminus i})}. \end{multline} \end{problem} Under suitable assumptions on $(\boldsymbol{f}_i)_{1\leq i\leq m}$ and $(\boldsymbol{g}_i)_{1\leq i\leq m}$, monotone operator splitting strategies can be contemplated to solve Problem~\ref{prob:j}. This approach was initiated in \cite{Cohe87} in a special case of the following setting, which reduces to that investigated in \cite{Bric13} when $(\forall i\in\{1,\ldots,m\})$ $\psi_i=0$. \begin{assumption} \label{a:j1} In Problem~\ref{prob:j}, the functions $(\boldsymbol{f}_i)_{1\leq i\leq m}$ coincide with a function $\boldsymbol{f}\in\Gamma_0(\ensuremath{\boldsymbol{\mathcal H}})$. For every $i\in\{1,\ldots,m\}$ and every $\boldsymbol{x}\in\ensuremath{\boldsymbol{\mathcal H}}$, $\boldsymbol{g}_i(\cdot;\boldsymbol{x}_{\smallsetminus i})$ is differentiable on $\mathcal H_i$ and $\nabli{i}\boldsymbol{g}_i(\boldsymbol{x})$ denotes its derivative relative to $x_i$. 
Moreover, \begin{multline} \label{e:Cmon} (\forall\boldsymbol{x}\in\ensuremath{\boldsymbol{\mathcal H}})(\forall\boldsymbol{y}\in\ensuremath{\boldsymbol{\mathcal H}})\\ \sum_{i=1}^m\scal{\nabli{i}\boldsymbol{g}_i (\boldsymbol{x})-\nabli{i}\boldsymbol{g}_i (\boldsymbol{y})}{x_i-y_i}\geq 0, \end{multline} and \begin{multline} (\ensuremath{\exists\,}\boldsymbol{z}\in\ensuremath{\boldsymbol{\mathcal H}})\quad -\big(\nabli{1}\boldsymbol{g}_1 (\boldsymbol{z}),\ldots,\nabli{m}\boldsymbol{g}_m (\boldsymbol{z})\big)\\ \in\partial\boldsymbol{f}(\boldsymbol{z})+ \overset{m}{\underset{i=1}{\cart}}\partial\psi_i(z_i). \end{multline} \end{assumption} In the context of Assumption~\ref{a:j1}, let us introduce the maximally monotone operators on $\ensuremath{\boldsymbol{\mathcal H}}$ \begin{equation} \label{e:B} \begin{cases} \boldsymbol{A}=\partial\boldsymbol{f}\\ \boldsymbol{B}\colon\boldsymbol{x}\mapsto\cart_{\!i=1}^{\!m} \partial\psi_i(x_i)\\ \boldsymbol{C}\colon \boldsymbol{x}\mapsto\big(\nabli{1}\boldsymbol{g}_1 (\boldsymbol{x}),\ldots,\nabli{m}\boldsymbol{g}_m (\boldsymbol{x})\big). \end{cases} \end{equation} Then the solutions to the inclusion problem (see Problem~\ref{prob:8}) $\boldsymbol{0}\in\boldsymbol{A} \boldsymbol{x}+\boldsymbol{B}\boldsymbol{x}+ \boldsymbol{C}\boldsymbol{x}$ solve Problem~\ref{prob:j} \cite{Bric13}. In turn, applying the splitting scheme of Proposition~\ref{p:71} leads to the following implementation. \begin{proposition} \label{p:n1} Consider the setting of Assumption~\ref{a:j1} with the additional requirement that, for some $\delta\in\ensuremath{\left]0,+\infty\right[}$, \begin{multline} \label{e:CLip} (\forall\boldsymbol{x}\in\ensuremath{\boldsymbol{\mathcal H}}) (\forall\boldsymbol{y}\in\ensuremath{\boldsymbol{\mathcal H}})\quad \sum_{i=1}^m\|\nabli{i}\boldsymbol{g}_i (\boldsymbol{x})-\nabli{i}\boldsymbol{g}_i (\boldsymbol{y})\|^2\\ \leq\delta^2\sum_{i=1}^m\|x_i-y_i\|^2. 
\end{multline} Let $\varepsilon\in\left]0,1/(2+\delta)\right[$, let $(\gamma_n)_{n\in\ensuremath{\mathbb N}}$ be in $\left[\varepsilon,(1-\varepsilon)/(1+\delta)\right]$, let $\boldsymbol{x}_0\in\ensuremath{\boldsymbol{\mathcal H}}$, and let $\boldsymbol{v}_0\in\ensuremath{\boldsymbol{\mathcal H}}$. Iterate \begin{equation} \label{e:fz79} \begin{array}{l} \text{for}\;n=0,1,\ldots\\ \left\lfloor \begin{array}{l} \text{for}\:\:i=1,\ldots,m\\ \lfloor\:y_{i,n}=x_{i,n}-\gamma_n\big(\nabli{i}\boldsymbol{g}_i (\boldsymbol{x}_n)+v_{i,n}\big)\\%[2mm] \boldsymbol{p}_n=\ensuremath{\mathrm{prox}}_{\gamma_n\boldsymbol{f}}\: \boldsymbol{y}_n\\%[2mm] \text{for}\:\:i=1,\ldots,m\\ \left\lfloor \begin{array}{l} q_{i,n}=v_{i,n}+\gamma_n\big(x_{i,n}-\ensuremath{\mathrm{prox}}_{\psi_i/\gamma_n} (v_{i,n}/\gamma_n+x_{i,n})\big)\\ x_{i,n+1}=x_{i,n}-y_{i,n}+p_{i,n}-\gamma_n \big(\nabli{i}\boldsymbol{g}_i(\boldsymbol{p}_n)+q_{i,n}\big)\\ v_{i,n+1}=q_{i,n}+\gamma_n(p_{i,n}-x_{i,n}). \end{array} \right. \end{array} \right. \end{array} \end{equation} Then there exists a solution $\overline{\boldsymbol{x}}$ to Problem~\ref{prob:j} such that, for every $i\in\{1,\ldots,m\}$, $x_{i,n}\to\overline{x}_i$. \end{proposition} \begin{example} \label{ex:juil1} Let $\varphi_1\colon\mathcal H_1\to\ensuremath{\mathbb R}$ be convex and differentiable with a $\delta_1$-Lipschitzian gradient, let $\varphi_2\colon\mathcal H_2\to\ensuremath{\mathbb R}$ be convex and differentiable with a $\delta_2$-Lipschitzian gradient, let $L\colon\mathcal H_1\to\mathcal H_2$ be linear, and let $C_1\subset\mathcal H_1$, $C_2\subset\mathcal H_2$, and $\boldsymbol{D}\subset\mathcal H_1\times\mathcal H_2$ be nonempty closed convex sets. Suppose that there exists $\boldsymbol{z}\in\mathcal H_1\times\mathcal H_2$ such that $-(\nabla\varphi_1(z_1)+L^*z_2,\nabla\varphi_2(z_2)-Lz_1) \in N_{\boldsymbol{D}}(z_1,z_2)+N_{C_1}z_1\times N_{C_2}z_2$. 
Then the 2-player game \begin{equation} \label{e:j23} \begin{cases} \overline{x}_1\in\Argmind{x_1\in C_1} {\iota_{\boldsymbol{D}}(x_1,\overline{x}_2)+ \varphi_1(x_1)+\scal{Lx_1}{\overline{x}_2}}\\ \overline{x}_2\in\Argmind{x_2\in C_2} {\iota_{\boldsymbol{D}}(\overline{x}_1,x_2)+ \varphi_2(x_2)-\scal{L\overline{x}_1}{x_2}} \end{cases} \end{equation} is an instance of Problem~\ref{prob:j} with $\boldsymbol{f}_1=\boldsymbol{f}_2=\iota_{\boldsymbol{D}}$, $\psi_1=\iota_{C_1}$, $\psi_2=\iota_{C_2}$, and \begin{equation} \label{e:juil2} \begin{cases} \boldsymbol{g}_1\colon(x_1,x_2)\mapsto \varphi_1(x_1)+\scal{Lx_1}{x_2}\\ \boldsymbol{g}_2\colon(x_1,x_2)\mapsto \varphi_2(x_2)-\scal{Lx_1}{x_2}. \end{cases} \end{equation} In addition, Assumption~\ref{a:j1} is satisfied, as well as \eqref{e:CLip} with $\delta=\max\{\delta_1,\delta_2\}+\|L\|$. Moreover, in view of \eqref{e:98}, algorithm \eqref{e:fz79} becomes \begin{equation} \label{e:fz80} \hskip -3mm \begin{array}{l} \text{for}\;n=0,1,\ldots\\ \left\lfloor \begin{array}{l} y_{1,n}=x_{1,n}-\gamma_n\big(\nabla\varphi_1(x_{1,n})+L^*x_{2,n} +v_{1,n}\big)\\%[2mm] y_{2,n}=x_{2,n}-\gamma_n\big(\nabla\varphi_2(x_{2,n})-Lx_{1,n} +v_{2,n}\big)\\%[2mm] \boldsymbol{p}_n=\ensuremath{\mathrm{proj}}_{\boldsymbol{D}}\: \boldsymbol{y}_n\\%[2mm] q_{1,n}=v_{1,n}+\gamma_n\big(x_{1,n}-\ensuremath{\mathrm{proj}}_{C_1} (v_{1,n}/\gamma_n+x_{1,n})\big)\\ q_{2,n}=v_{2,n}+\gamma_n\big(x_{2,n}-\ensuremath{\mathrm{proj}}_{C_2} (v_{2,n}/\gamma_n+x_{2,n})\big)\\ x_{1,n+1}=x_{1,n}-y_{1,n}+p_{1,n}\\ \hskip 15mm -\gamma_n \big(\nabla\varphi_1(p_{1,n})+L^*p_{2,n}+q_{1,n}\big)\\ x_{2,n+1}=x_{2,n}-y_{2,n}+p_{2,n}\\ \hskip 15mm -\gamma_n \big(\nabla\varphi_2(p_{2,n})-Lp_{1,n}+q_{2,n}\big)\\ v_{1,n+1}=q_{1,n}+\gamma_n(p_{1,n}-x_{1,n})\\ v_{2,n+1}=q_{2,n}+\gamma_n(p_{2,n}-x_{2,n}). \end{array} \right. \end{array} \end{equation} \end{example} Condition \eqref{e:CLip} means that the operator $\boldsymbol{C}$ of \eqref{e:B} is $\delta$-Lipschitzian. 
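As a sanity check of \eqref{e:fz80}, the sketch below runs it on a toy scalar instance of \eqref{e:j23}; all the data are illustrative choices, not from the text: $\mathcal H_1=\mathcal H_2=\ensuremath{\mathbb R}$, $\varphi_1(x)=\varphi_2(x)=x^2/2$, $L=1$, and $C_1=C_2=\ensuremath{\mathbb R}$, $\boldsymbol{D}=\ensuremath{\mathbb R}^2$, so that every projection reduces to the identity. Here $\delta=\max\{\delta_1,\delta_2\}+|L|=2$ and the unique equilibrium is $(0,0)$, since the best responses force $\overline{x}_1=-\overline{x}_2$ and $\overline{x}_2=\overline{x}_1$:

```python
# Scalar instance of the 2-player game: phi1 = phi2 = x^2/2, L = 1,
# and trivial constraint sets, so proj_D and proj_{C_i} are identities.
# These are illustrative test data; the unique Nash equilibrium is (0, 0).

gamma = 0.2  # constant step size, within ]0, (1 - eps)/(1 + delta)] for delta = 2
x1, x2 = 2.0, -1.5          # arbitrary initialization
v1, v2 = 0.0, 0.0           # dual variables

proj_D = lambda y1, y2: (y1, y2)   # trivial projections in this sketch
proj_C = lambda u: u

for _ in range(400):
    # forward step: gradients of g1, g2 plus dual variables
    y1 = x1 - gamma * (x1 + x2 + v1)       # grad phi1(x1) + L* x2 + v1
    y2 = x2 - gamma * (x2 - x1 + v2)       # grad phi2(x2) - L x1 + v2
    p1, p2 = proj_D(y1, y2)
    # dual updates (projections onto C1, C2 are the identity here)
    q1 = v1 + gamma * (x1 - proj_C(v1 / gamma + x1))
    q2 = v2 + gamma * (x2 - proj_C(v2 / gamma + x2))
    # correction step
    x1_new = x1 - y1 + p1 - gamma * (p1 + p2 + q1)
    x2_new = x2 - y2 + p2 - gamma * (p2 - p1 + q2)
    v1 = q1 + gamma * (p1 - x1)
    v2 = q2 + gamma * (p2 - x2)
    x1, x2 = x1_new, x2_new

assert abs(x1) < 1e-8 and abs(x2) < 1e-8  # converged to the equilibrium
```

The iterates converge linearly in this instance; in the general setting of Proposition~\ref{p:n1}, only (weak) convergence is guaranteed.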
The stronger assumption that it is cocoercive allows us to bring into play the three-operator splitting algorithm of Proposition~\ref{p:72} to solve Problem~\ref{prob:j}. \begin{proposition} \label{p:n2} Consider the setting of Assumption~\ref{a:j1} with the additional requirement that, for some $\beta\in\ensuremath{\left]0,+\infty\right[}$, \begin{multline} \label{e:Ccoco} (\forall\boldsymbol{x}\in\ensuremath{\boldsymbol{\mathcal H}}) (\forall\boldsymbol{y}\in\ensuremath{\boldsymbol{\mathcal H}})\quad\sum_{i=1}^m \scal{x_i-y_i}{\nabli{i}\boldsymbol{g}_i(\boldsymbol{x})- \nabli{i}\boldsymbol{g}_i(\boldsymbol{y})}\\ \geq\beta\sum_{i=1}^m\|\nabli{i}\boldsymbol{g}_i (\boldsymbol{x})-\nabli{i}\boldsymbol{g}_i (\boldsymbol{y})\|^2. \end{multline} Let $\gamma\in\left]0,2\beta\right[$ and set $\alpha=2\beta/(4\beta-\gamma)$. Furthermore, let $(\lambda_n)_{n\in\ensuremath{\mathbb N}}$ be an $\alpha$-relaxation sequence and let $\boldsymbol{y}_0\in\ensuremath{\boldsymbol{\mathcal H}}$. Iterate \begin{equation} \label{e:j9} \begin{array}{l} \text{for}\;n=0,1,\ldots\\ \left\lfloor \begin{array}{l} \text{for}\:\:i=1,\ldots,m\\ \left\lfloor \begin{array}{l} x_{i,n}=\ensuremath{\mathrm{prox}}_{\gamma\psi_i}\,y_{i,n}\\ r_{i,n}=y_{i,n}+\gamma \nabli{i}\boldsymbol{g}_i (\boldsymbol{x}_n)\\ \end{array} \right.\\[2mm] \boldsymbol{z}_n=\ensuremath{\mathrm{prox}}_{\gamma\boldsymbol{f}} (2\boldsymbol{x}_n-\boldsymbol{r}_n)\\ \boldsymbol{y}_{n+1}=\boldsymbol{y}_n+ \lambda_n(\boldsymbol{z}_n-\boldsymbol{x}_n). \end{array} \right.\\[2mm] \end{array} \end{equation} Then there exists a solution $\overline{\boldsymbol{x}}$ to Problem~\ref{prob:j} such that, for every $i\in\{1,\ldots,m\}$, $x_{i,n}\to\overline{x}_i$. \end{proposition} \begin{example} \label{ex:n45} For every $i\in\{1,\ldots,m\}$, let $C_i\subset\mathcal H_i$ be a nonempty closed convex set, let $L_i\colon\mathcal H_i\to\mathcal G$ be linear, and let $o_i\in\mathcal G$.
The task is to solve the Nash equilibrium problem (with the convention $L_{m+1}\overline{x}_{m+1}=L_1\overline{x}_1$) \begin{multline} \label{e:j8} \text{find}\;\;\overline{\boldsymbol{x}}\in\ensuremath{\boldsymbol{\mathcal H}} \quad\text{such that}\quad (\forall i\in\{1,\ldots,m\})\\ \overline{x}_i\in\Argmind{x_i\in C_i} {\psi_i(x_i)+\dfrac{\|L_ix_i+L_{i+1}\overline{x}_{i+1} -o_i\|^2}{2}}. \end{multline} Here, the action of player $i$ must lie in $C_i$, and it is further penalized by $\psi_i$ and the proximity of the linear mixture $L_ix_i+L_{i+1}\overline{x}_{i+1}$ to some vector $o_i$. For instance if, for every $i\in\{1,\ldots,m\}$, $C_i=\mathcal H_i$, $o_i=0$, and $L_i=(-1)^i\ensuremath{\mathrm{Id}}$, we recover the setting of Section~\ref{sec:IIC}. The equilibrium \eqref{e:j8} is an instantiation of Problem~\ref{prob:j} with $\boldsymbol{f}_1=\cdots=\boldsymbol{f}_m\colon\boldsymbol{x} \mapsto\sum_{i=1}^m\iota_{C_i}(x_i)$ and, for every $i\in\{1,\ldots,m\}$, $\boldsymbol{g}_i\colon\boldsymbol{x}\mapsto \|L_ix_i+L_{i+1}x_{i+1}-o_i\|^2/2$. In addition, as in \cite[Section~9.4.3]{Bric13}, \eqref{e:Ccoco} holds with $\beta=(2\max_{1\leq i\leq m}\|L_i\|^2)^{-1}$. Finally, \eqref{e:j9} reduces to (with the convention $L_{m+1}x_{m+1,n}=L_1x_{1,n}$) \begin{equation} \label{e:j6} \hskip -2mm \begin{array}{l} \text{for}\;n=0,1,\ldots\\ \left\lfloor \hskip -1mm \begin{array}{l} \text{for}\:\:i=1,\ldots,m\\ \left\lfloor \begin{array}{l} \hskip -1mm x_{i,n}=\ensuremath{\mathrm{prox}}_{\gamma\psi_i}\,y_{i,n}\\ \hskip -1mm r_{i,n}=y_{i,n}+\gamma L_i^*(L_ix_{i,n}+L_{i+1}x_{i+1,n}-o_i)\\ \hskip -1mm z_{i,n}=\ensuremath{\mathrm{proj}}_{C_i}(2x_{i,n}-r_{i,n})\\ y_{i,n+1}=y_{i,n}+\lambda_n(z_{i,n}-x_{i,n}). \end{array} \right.\\[2mm] \end{array} \right.\\[2mm] \end{array} \end{equation} \end{example} \begin{remark}\ \label{r:j1} \begin{enumerate} \item As seen in Example~\ref{ex:juil1}, the functions of \eqref{e:juil2} satisfy the Lipschitz condition \eqref{e:CLip}.
However the cocoercivity condition \eqref{e:Ccoco} does not hold. For instance, if $\varphi_1=0$ and $\varphi_2=0$ then, for every $\boldsymbol{x}$ and $\boldsymbol{y}$ in $\mathcal H_1\times\mathcal H_2$, \begin{multline} \label{e:211} \scal{\nabli{1}\boldsymbol{g}_1 (\boldsymbol{x})-\nabli{1}\boldsymbol{g}_1 (\boldsymbol{y})}{x_1-y_1}\hspace{2cm}\\+ \scal{\nabli{2}\boldsymbol{g}_2 (\boldsymbol{x})-\nabli{2}\boldsymbol{g}_2 (\boldsymbol{y})}{x_2-y_2}=0. \end{multline} \item Distributed splitting algorithms for finding Nash equilibria are discussed in \cite{Belg18,Belg19,Yip19a,Yip19b}. \item An asynchronous block-iterative decomposition algorithm to solve Nash equilibrium problems involving a mix of nonsmooth and smooth functions acting on linear mixtures of actions is proposed in \cite{Nash21}. \end{enumerate} \end{remark} \section{Fixed point modeling of other non-minimization problems} \label{sec:7} \subsection{Neural network structures} \begin{figure} \scalebox{0.50} { \begin{pspicture}(-0.05,-1.7)(15.9,2.1) \psline[linewidth=0.04cm,arrowsize=2.2mm]{->}(0.35,0.0)(1.0,0.0) \psline[linewidth=0.04cm,arrowsize=2.2mm]{->}(3.0,0.0)(3.65,0.0) \psline[linewidth=0.04cm,arrowsize=2.2mm]{->}(4.35,0.0)(5.0,0.0) \psline[linewidth=0.04cm,arrowsize=2.2mm]{->}(4.0,1.2)(4.0,0.36) \psframe[linewidth=0.04,dimen=outer](1.0,-1.0)(3.0,1.0) \pscircle[linewidth=0.04,dimen=outer](4.0,0.0){0.35} \rput(0.15,0.0){\Large$\boldsymbol{{x}}$} \rput(2.0,0.0){\Large$\boldsymbol{{W_1}}$} \rput(4.0,0.0){\Large$\boldsymbol{+}$} \rput(4.0,1.5){\Large$\boldsymbol{{b_1}}$} \rput(6.0,0.0){\Large$\boldsymbol{{R_1}}$} \psframe[linewidth=0.04,dimen=outer](5.0,-1.0)(7.0,1.0) \psline[linewidth=0.04cm,arrowsize=2.2mm]{->}(7.00,0.0)(7.65,0.0) \rput(8.5,0.0){\Large$\boldsymbol{{\cdots}}$} \psline[linewidth=0.04cm,arrowsize=2.2mm]{->}(9.35,0.0)(10.0,0.0) \psline[linewidth=0.04cm,arrowsize=2.2mm]{->}(12.0,0.0)(12.65,0.0) \psline[linewidth=0.04cm,arrowsize=2.2mm]{->}(13.35,0.0)(14.0,0.0) 
\psline[linewidth=0.04cm,arrowsize=2.2mm]{->}(13.0,1.2)(13.0,0.36) \psframe[linewidth=0.04,dimen=outer](10.0,-1.0)(12.0,1.0) \pscircle[linewidth=0.04,dimen=outer](13.0,0.0){0.35} \rput(11.0,0.0){\Large$\boldsymbol{{W_m}}$} \rput(13.0,0.0){\Large$\boldsymbol{+}$} \rput(13.0,1.5){\Large$\boldsymbol{{b_m}}$} \rput(15.0,0.0){\Large$\boldsymbol{{R_m}}$} \rput(17.0,0.05){\Large$\boldsymbol{{Tx}}$} \psframe[linewidth=0.04,dimen=outer](14.0,-1.0)(16.0,1.0) \psline[linewidth=0.04cm,arrowsize=2.2mm]{->}(16.00,0.0)(16.65,0.0) \end{pspicture} } \vskip -1mm \caption{Feedforward neural network: the $i$th layer involves a linear weight operator $W_i$, a bias vector $b_i$, and an activation operator $R_i$, which is assumed to be an averaged nonexpansive operator.} \label{fig:4} \end{figure} A feedforward neural network (see Fig.~\ref{fig:4}) consists of the composition of nonlinear activation operators and affine operators. More precisely, such an $m$-layer network can be modeled as \begin{equation} \label{e:zib} T=T_m\circ\cdots\circ T_1, \end{equation} where $T_i=R_i\circ(W_i\cdot +\,b_i)$, with $W_i\in\ensuremath{\mathbb R}^{N_i\times N_{i-1}}$, $b_i\in\ensuremath{\mathbb R}^{N_i}$, and $R_i\colon\ensuremath{\mathbb R}^{N_i}\to\ensuremath{\mathbb R}^{N_i}$ (see Fig.~\ref{fig:4}). If the $i$-th layer is convolutional, then the corresponding weight matrix $W_i$ has a Toeplitz (or block-Toeplitz) structure. Many common activation operators are separable, i.e., \begin{equation} \label{e:actsep} R_i\colon(\xi_k)_{1\leq k\leq N_i}\mapsto \big(\varrho_{i,k}(\xi_k)\big)_{1\leq k\leq N_i}, \end{equation} where $\varrho_{i,k}\colon\ensuremath{\mathbb R}\to\ensuremath{\mathbb R}$. 
For example, the ReLU activation function is given by \begin{equation} \label{e:ReLU} \varrho_{i,k}\colon\xi\mapsto \begin{cases} \xi,&\text{if}\;\;\xi>0;\\ 0,&\text{if}\;\;\xi\leq 0, \end{cases} \end{equation} and the unimodal sigmoid activation function is \begin{equation} \label{e:sig} \varrho_{i,k}\colon\xi\mapsto \frac{1}{1+e^{-\xi}}-\frac{1}{2}. \end{equation} An example of a nonseparable operator is the softmax activator \begin{equation} R_i\colon (\xi_k)_{1\leq k\leq N_i}\mapsto \left(e^{\xi_k}\left/ \displaystyle\sum_{j=1}^{N_i} e^{\xi_j}\right. \right)_{1\leq k\leq N_i}. \end{equation} It was observed in \cite{Smds20} that almost all standard activators are actually averaged operators in the sense of \eqref{e:105-}. In particular, as discussed in \cite{Svva20}, many activators are proximity operators in the sense of Theorem~\ref{t:12}. In this case, in \eqref{e:actsep}, there exist functions $(\phi_k)_{1\leq k\leq N_i}$ in $\Gamma_0(\ensuremath{\mathbb R})$ such that \begin{equation} R_i\colon(\xi_k)_{1\leq k\leq N_i}\mapsto \big(\ensuremath{\mathrm{prox}}_{\phi_k}\xi_k\big)_{1\leq k\leq N_i}. \end{equation} For ReLU, $\phi_k$ reduces to $\iota_{[0,+\infty[}$ whereas, for the unimodal sigmoid, it is the function \begin{equation} \xi\mapsto \begin{cases} (\xi+1/2)\ln(\xi+1/2)+(1/2-\xi)\ln(1/2-\xi)\\ \hskip 13mm -(|\xi|^2+1/4)/2, \hskip 7mm\text{if}\;\;|\xi|<1/2;\\ -1/4,\hskip 36mm\text{if}\;\;|\xi|=1/2;\\ \ensuremath{+\infty},\hskip 37.8mm\text{if}\;\;|\xi|>1/2. 
\end{cases} \end{equation} For softmax, we have $R_i=\ensuremath{\mathrm{prox}}_{\varphi_i}$ where \begin{equation} \varphi_i\colon(\xi_k)_{1\leq k\leq N_i}\mapsto \begin{cases} \sum_{k=1}^{N_i}\big(\xi_k\ln\xi_k- {|\xi_k|^2}/{2}\big),\\ \qquad\text{if}\;\; \displaystyle{\min_{1\leq k\leq N_i}}\xi_k\geq 0\;\text{and}\; \sum_{k=1}^{N_i}\xi_k=1;\\ \ensuremath{+\infty},\;\text{otherwise.} \end{cases} \end{equation} The weight matrices $(W_i)_{1\leq i\leq m}$ play a crucial role in the overall nonexpansiveness of the network. Indeed, under suitable conditions on these matrices, the network $T$ is averaged. For example, let $W=W_m\cdots W_1$ and let \begin{multline} \label{e:defthetam} \theta_m=\|W\|+\sum_{\ell=1}^{m-1}\sum_{1\leq j_1<\cdots<j_\ell\leq m-1}\|W_m\cdots W_{j_\ell+1}\|\\ \times\|W_{j_\ell}\cdots W_{j_{\ell-1}+1}\|\cdots\|W_{j_1}\cdots W_1\|. \end{multline} Then, if there exists $\alpha\in [1/2,1]$ such that \begin{equation} \label{e:alphaNN} \|W-2^m(1-\alpha)\ensuremath{\mathrm{Id}}\|-\|W\|+2\theta_m\leq 2^m\alpha, \end{equation} $T$ is $\alpha$-averaged. Other sufficient conditions have been established in \cite{Svva20}. These results pave the way to a theoretical analysis of neural networks from the standpoint of fixed point methods. In particular, assume that $N_m=N_0$ and consider a recurrent network of the form \begin{equation} (\forall n\in\ensuremath{\mathbb N})\quad x_{n+1}=(1-\lambda_n)x_n+\lambda_n T x_n, \end{equation} where $\lambda_n\in\ensuremath{\left]0,+\infty\right[}$ models a skip connection. Then, according to Theorem~\ref{t:1}, the convergence of $(x_n)_{n\in\ensuremath{\mathbb N}}$ to a fixed point of $T$ is guaranteed under condition~\eqref{e:alphaNN} provided that $(\lambda_n)_{n\in\ensuremath{\mathbb N}}$ is an $\alpha$-relaxation sequence.
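The recurrent iteration is easy to observe in a regime where convergence is immediate to establish. In the sketch below (random illustrative sizes and weights), the matrices are rescaled so that $\|W_1\|\,\|W_2\|<1$; since ReLU is $1$-Lipschitzian, the $2$-layer network $T$ is then a Banach contraction and the iterates settle at its unique fixed point:

```python
import numpy as np

rng = np.random.default_rng(0)

def scaled(shape, norm=0.6):
    """Random weight matrix rescaled to the prescribed spectral norm."""
    W = rng.standard_normal(shape)
    return norm * W / np.linalg.norm(W, 2)

# A 2-layer network T = R2 o (W2 . + b2) o R1 o (W1 . + b1),
# with N0 = N2 = 3 (arbitrary illustrative data).
W1, b1 = scaled((4, 3)), rng.standard_normal(4)
W2, b2 = scaled((3, 4)), rng.standard_normal(3)
relu = lambda u: np.maximum(u, 0.0)     # 1-Lipschitzian activation

def T(x):
    return relu(W2 @ relu(W1 @ x + b1) + b2)

# ||W1|| * ||W2|| = 0.36 < 1, so T is a Banach contraction and the
# recurrent iteration x_{n+1} = T x_n (lambda_n = 1) converges.
x = rng.standard_normal(3)
for _ in range(200):
    x = T(x)

assert np.linalg.norm(T(x) - x) < 1e-10  # x is (numerically) the fixed point
```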
As shown in \cite{Svva20}, when, for every $i\in\{1,\ldots,m\}$, $R_i$ is the proximity operator of some function $\varphi_i\in\Gamma_0(\ensuremath{\mathbb R}^{N_i})$, the recurrent network delivers asymptotically a solution to the system of inclusions \begin{equation} \label{e:vi1} \begin{cases} b_1\in&\hskip -2mm \overline{x}_1-W_1\overline{x}_m+\partial \varphi_1(\overline{x}_1)\\ b_2\in&\hskip -2mm\overline{x}_2-W_2\overline{x}_1+ \partial\varphi_2(\overline{x}_2)\\ \hskip 6mm\vdots\\ b_m\in&\hskip -2mm\overline{x}_m-W_m\overline{x}_{m-1}+ \partial\varphi_m(\overline{x}_m), \end{cases} \end{equation} where $\overline{x}_m\in\ensuremath{\text{\rm Fix}\,} T$ and, for every $i\in\{2,\ldots,m\}$, $\overline{x}_i=T_i\overline{x}_{i-1}$. Alternatively, \eqref{e:vi1} is a Nash equilibrium of the form \eqref{e:u4} where (we set $\overline{x}_0=\overline{x}_m$) \begin{equation} \label{e:u9} \boldsymbol{h}_i\colon(x_i; \overline{\boldsymbol{x}}_{\smallsetminus i})\mapsto \varphi_i(x_i)+\dfrac{1}{2}\|x_i-b_i-W_i\overline{x}_{i-1}\|^2. \end{equation} Fixed point theory also allows us to provide conditions for $T$ to be Lipschitzian and to calculate an associated Lipschitz constant. Such results are useful to evaluate the robustness of the network to adversarial perturbations of its input \cite{Szeg13}. As shown in \cite{Smds20}, if $\theta_m$ is given by \eqref{e:defthetam}, $\theta_m/2^{m-1}$ is a Lipschitz constant of $T$ and \begin{equation} \label{e:sandLip} \|W\|\leq\frac{\theta_m}{2^{m-1}}\leq\|W_1\|\cdots\|W_m\|. \end{equation} This bound is thus more accurate than the product of the individual bounds corresponding to each layer used in \cite{Szeg13}. Tighter estimates can also be derived, especially when the activation operators are separable \cite{Smds20,Lato20,Scam18}. Note that the lower bound in \eqref{e:sandLip} would correspond to a linear network where all the nonlinear activation operators would be removed.
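For $m=2$, \eqref{e:defthetam} reduces to $\theta_2=\|W_2W_1\|+\|W_2\|\,\|W_1\|$, so the sandwich \eqref{e:sandLip} can be checked directly; the sketch below does so on random test matrices (illustrative data, not tied to a specific network):

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.standard_normal((5, 4))
W2 = rng.standard_normal((4, 5))
W = W2 @ W1

norm = lambda M: np.linalg.norm(M, 2)   # spectral norm

# For m = 2 layers, theta_2 = ||W2 W1|| + ||W2|| * ||W1||,
# and theta_2 / 2 is a Lipschitz constant of the network.
theta2 = norm(W) + norm(W2) * norm(W1)
lip = theta2 / 2

assert norm(W) <= lip + 1e-12                 # lower bound of the sandwich
assert lip <= norm(W1) * norm(W2) + 1e-12     # upper bound of the sandwich
```

Here the inequalities are immediate since $\theta_2/2$ is the average of $\|W_2W_1\|$ and $\|W_2\|\,\|W_1\|$, and $\|W_2W_1\|\leq\|W_2\|\,\|W_1\|$ by submultiplicativity.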
Interestingly, when all the weight matrices have components in $\ensuremath{\left[0,+\infty\right[}$, $\|W\|$ is a Lipschitz constant of the network \cite{Smds20}. Special cases of the neural network model of \cite{Svva20} are investigated in \cite{Hasa20,Tang20}. Another special case of interest is when the operator $T$ in \eqref{e:zib} corresponds to the \emph{unrolling} (or \emph{unfolding}) of a fixed point algorithm \cite{Mong21}, that is, each operator $T_i$ corresponds to one iteration of such an algorithm \cite{Bane20,Lecu10,Yang16,Zhan18}. The algorithm parameters, as well as possible hyperparameters of the problem, can then be optimized from a training set by using differentiable programming. Let us note that the results of \cite{Svva20,Smds20} can be used to characterize the nonexpansiveness properties of the resulting neural network \cite{Bert20}. \subsection{Plug-and-play methods} \label{sec:7b} The principle of the so-called \emph{plug-and-play} (PnP) methods \cite{Buzz18,Onos17,Rond16,Ryue19,Suny19,Venka13} is to replace a proximity operator appearing in some proximal minimization algorithm by another operator $Q$. The rationale is that, since a proximity operator can be interpreted as a denoiser \cite{Smms05}, one can consider replacing this proximity operator by a more sophisticated denoiser $Q$, or even learning it in a supervised manner from a database of examples. Example~\ref{ex:MMSE} described implicitly a PnP algorithm that can be interpreted as a minimization problem. Here are some techniques that go beyond the optimization setting. \begin{algorithm}[PnP forward-backward] Let $f\colon\mathcal H\to\ensuremath{\mathbb R}$ be a differentiable convex function, let $Q\colon\mathcal H\to\mathcal H$, let $\gamma\in\ensuremath{\left]0,+\infty\right[}$, let $(\lambda_n)_{n\in\ensuremath{\mathbb N}}$ be a sequence in $\ensuremath{\left]0,+\infty\right[}$, and let $x_0\in\mathcal H$. 
Iterate \begin{equation} \label{e:fbpnp} \begin{array}{l} \text{for}\;n=0,1,\ldots\\ \left\lfloor\begin{array}{l} y_n=x_n-\gamma \nabla f(x_n)\\ x_{n+1}=x_n+\lambda_n(Qy_n-x_n). \end{array}\right. \end{array} \end{equation} \end{algorithm} The convergence of $(x_n)_{n\in\ensuremath{\mathbb N}}$ in \eqref{e:fbpnp} is related to the properties of $T=Q\circ(\ensuremath{\mathrm{Id}}-\gamma\nabla f)$. Suppose that $T$ is $\alpha$-averaged with $\alpha\in\rzeroun$, and that $S=\ensuremath{\text{\rm Fix}\,} T\neq\ensuremath{\varnothing}$. Then it follows from Theorem~\ref{t:1} that, if $(\lambda_n)_{n\in\ensuremath{\mathbb N}}$ is an $\alpha$-relaxation sequence, then $(x_n)_{n\in\ensuremath{\mathbb N}}$ converges to a point in $S$. \begin{algorithm}[PnP Douglas-Rachford] \label{a:DR} Let $f\in\Gamma_0(\mathcal H)$, let $Q\colon\mathcal H\to\mathcal H$, let $\gamma\in\ensuremath{\left]0,+\infty\right[}$, let $(\lambda_n)_{n\in \ensuremath{\mathbb N}}$ be a sequence in $\ensuremath{\left]0,+\infty\right[}$, and let $x_0\in\mathcal H$. Iterate \begin{equation} \label{e:m1} \begin{array}{l} \text{for}\;n=0,1,\ldots\\ \left\lfloor\begin{array}{l} x_n=\ensuremath{\mathrm{prox}}_{\gamma f} y_n\\ y_{n+1}=y_n+\lambda_n \big(Q(2x_n-y_n)-x_n\big). \end{array}\right. \end{array} \end{equation} \end{algorithm} In view of \eqref{e:m1}, \begin{equation} \label{e:dr} (\forall n\in\ensuremath{\mathbb N})\quad y_{n+1}= \Big(1-\frac{\lambda_n}{2}\Big)y_n+\frac{\lambda_n}{2}Ty_n, \end{equation} where $T=(2Q-\ensuremath{\mathrm{Id}})\circ(2\ensuremath{\mathrm{prox}}_{\gamma f}-\ensuremath{\mathrm{Id}})$. Now assume that $Q$ is such that $T$ is $\alpha$-averaged for some $\alpha\in\rzeroun$ and $\ensuremath{\text{\rm Fix}\,} T\neq\ensuremath{\varnothing}$. 
Then it follows from Theorem~\ref{t:1} that, if $(\lambda_n/2)_{n\in\ensuremath{\mathbb N}}$ is an $\alpha$-relaxation sequence, then $(y_n)_{n\in\ensuremath{\mathbb N}}$ converges to a point in $\ensuremath{\text{\rm Fix}\,} T$ and we deduce that $(x_n)_{n\in\ensuremath{\mathbb N}}$ converges to a point in $S=\ensuremath{\mathrm{prox}}_{\gamma f}(\ensuremath{\text{\rm Fix}\,} T)$. Conditions for $T$ to be a Banach contraction in the two previous algorithms are given in \cite{Ryue19}. Applying the Douglas-Rachford algorithm to the dual of Problem~\ref{prob:5} leads to a simple form of the alternating direction method of multipliers. Thus, consider Algorithm~\ref{a:DR}, where $f$, $\gamma$, and $Q$ are replaced by $f^*$, $1/\gamma$, and $\ensuremath{\mathrm{Id}}+\gamma^{-1}Q(-\gamma\cdot)$, respectively, and $(\forall n\in\ensuremath{\mathbb N})$ $\lambda_n=1$. Then we obtain the following algorithm \cite{Chan17}, which is applied to image fusion in \cite{Teod19}. \begin{algorithm}[PnP ADMM] \label{a:ADMM} Let $f\in\Gamma_0(\mathcal H)$, let $Q\colon\mathcal H\to\mathcal H$, let $\gamma\in\ensuremath{\left]0,+\infty\right[}$, let $y_0\in\mathcal H$, and let $z_0\in\mathcal H$. Iterate \begin{equation} \begin{array}{l} \text{for}\;n=0,1,\ldots\\ \left\lfloor\begin{array}{l} x_n=Q(y_n-z_n)\\ y_{n+1}=\ensuremath{\mathrm{prox}}_{\gamma f}(x_n+z_n)\\ z_{n+1}=z_n+x_n-y_{n+1}. \end{array} \right. \end{array} \end{equation} \end{algorithm} Note that, beyond the above fixed point descriptions of $S$, the properties of the solutions in plug-and-play methods are elusive in general.
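A minimal sketch of the PnP forward-backward iteration \eqref{e:fbpnp} follows, with illustrative data: $f=\|\cdot-b\|^2/2$ and $Q$ a soft-thresholding denoiser. Since this particular $Q$ is a proximity operator, the scheme coincides with a standard proximal gradient iteration, so its fixed point is available in closed form and the limit can be checked:

```python
import numpy as np

rng = np.random.default_rng(2)
b = rng.standard_normal(6) * 2.0        # arbitrary test data
gamma, t = 0.5, 0.3

# Denoiser: soft-thresholding at level t, i.e. the proximity operator
# of t||.||_1 -- firmly nonexpansive, so T = Q o (Id - gamma grad f)
# is averaged for gamma in ]0, 2[.
Q = lambda y: np.sign(y) * np.maximum(np.abs(y) - t, 0.0)
grad_f = lambda x: x - b                # f = ||. - b||^2 / 2

x = np.zeros_like(b)
for _ in range(200):                    # iteration with lambda_n = 1
    y = x - gamma * grad_f(x)
    x = Q(y)

# For this Q, Fix T is a singleton with closed form soft_{t/gamma}(b).
xbar = np.sign(b) * np.maximum(np.abs(b) - t / gamma, 0.0)
assert np.linalg.norm(x - xbar) < 1e-10
```

For a generic learned denoiser $Q$ no such closed form exists, which is precisely why the averagedness of $T$ becomes the key convergence tool.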
\subsection{Adjoint mismatch problem} \label{sec:7c} A common inverse problem formulation is to \begin{equation} \minimize{x\in\mathcal H}{f(x)+\frac{1}{2}\|Hx-y\|^2+ \frac{\kappa}{2}\|x\|^{2}}, \end{equation} where $f\in\Gamma_{0}(\mathcal H)$, $y\in\mathcal G$ models the observation, $H\colon\mathcal H\to\mathcal G$ is a linear operator, and $\kappa\in\ensuremath{\left[0,+\infty\right[}$. This is a particular case of Problem~\ref{prob:5} where \begin{equation} g=\frac12\|H\cdot-y\|^2+\frac{\kappa}{2}\|\cdot\|^{2}, \end{equation} has Lipschitzian gradient $\nabla g\colon x\mapsto H^*(Hx-y)+\kappa x$. It can therefore be solved via Proposition~\ref{p:fb17}, which requires the application of the adjoint operator $H^*$ at each iteration. Due to both physical and computational limitations in certain applications, this adjoint may be hard to implement and is replaced by a linear approximation $K\colon\mathcal G\to\mathcal H$ \cite{Lore18,Zeng20}. This leads to a surrogate of the proximal-gradient scheme \eqref{e:FB2} of the form \begin{multline} \label{e:FBt} (\forall n\in\ensuremath{\mathbb N})\quad x_{n+1}=x_{n}+\\ \lambda_n\Big(\ensuremath{\mathrm{prox}}_{\gamma f}\big((1-\gamma\kappa)x_{n}- \gamma K(Hx_n-y)\big)-x_n\Big), \end{multline} with $\gamma\in\ensuremath{\left]0,+\infty\right[}$ and a sequence $(\lambda_n)_{n\in\ensuremath{\mathbb N}}$ in $\rzeroun$. Let us assume that $L=K\circ H+\kappa \ensuremath{\mathrm{Id}}$ is a cocoercive operator. Then the above algorithm is an instance of the forward-backward splitting algorithm introduced in Proposition~\ref{p:fb13} to solve Problem~\ref{prob:7} where $A=\partial f$ and $B=L\cdot -K y$. This means that a solution produced by algorithm~\eqref{e:FBt} no longer solves a minimization problem since $L$ is not a gradient in general \cite[Proposition~2.58]{Livre1}. 
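A minimal numerical sketch of the mismatched scheme \eqref{e:FBt} may help fix ideas. The choices below are purely illustrative: $f=\|\cdot\|_1$, so that $\mathrm{prox}_{\gamma f}$ is soft thresholding, a small matrix $H$, and a perturbed transpose playing the role of the inexact adjoint $K$; the limit point is a fixed point of the mismatched operator rather than a minimizer.

```python
import numpy as np

def soft(x, t):
    # proximity operator of t * ||.||_1 (soft thresholding)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

H = np.array([[2.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
K = H.T + 0.05 * np.ones((2, 3))   # mismatched adjoint: K != H^T
kappa, gamma, lam = 0.1, 0.1, 0.9

x = np.zeros(2)
for _ in range(500):
    forward = (1 - gamma * kappa) * x - gamma * (K @ (H @ x - y))
    x = x + lam * (soft(forward, gamma) - x)

# the limit point satisfies the fixed point equation of the mismatched
# scheme, i.e. it solves 0 in df(x) + Lx - Ky with L = K H + kappa Id
resid = np.linalg.norm(
    x - soft((1 - gamma * kappa) * x - gamma * (K @ (H @ x - y)), gamma))
print(resid)  # ~ 0
```
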
However, suppose that $g$ is $\nu$-strongly convex with $\nu\in\ensuremath{\left]0,+\infty\right[}$, let $\zeta_{\rm min}$ be the minimum eigenvalue of $L+L^*$, set $\chi=1/(\nu+\zeta_{\rm min})$, let $\widehat{x}$ be the solution to Problem~\ref{prob:5}, and let $\widetilde{x}$ be the solution to Problem~\ref{prob:7}. Then, as shown in \cite{Chou21}, \begin{equation} \label{e:biasgen} \|\widetilde{x}-\widehat{x}\|\leq \chi \,\|(H^{*}-K)(H\widehat{x}-y)\|. \end{equation} A sufficient condition ensuring that $L$ is cocoercive is that $\zeta_{\rm min}>0$. The problem of adjoint mismatch when $f=0$ is studied in \cite{Dong19}. \subsection{Problems with nonlinear observations} We describe the framework presented in \cite{Eusi20,Ibap20} to address the problem of recovering an ideal object $\overline{x}\in\mathcal H$ from linear and nonlinear transformations $(r_k)_{1\leq k\leq q}$ of it. \begin{problem} \label{prob:z} For every $k\in\{1,\ldots,q\}$, let $R_k\colon\mathcal H\to\mathcal G_k$ and let $r_k\in\mathcal G_k$. The task is to \begin{equation} \label{e:z1} \text{find}\;x\in\mathcal H\;\:\text{such that}\;\: (\forall k\in \{1,\ldots,q\})\;\:R_kx=r_k. \end{equation} \end{problem} In the case when $q=2$, $\mathcal G_1=\mathcal G_2=\mathcal H$, and $R_1$ and $R_2$ are projectors onto vector subspaces, Problem~\ref{prob:z} reduces to the classical linear recovery framework of \cite{Youl78} which can be solved by projection methods. We can also express Problem~\ref{prob:1} as a special case of Problem~\ref{prob:z} by setting $m=q$ and \begin{equation} \label{e:z2} (\forall k\in\{1,\ldots,q\})\quad r_k=0\quad\text{and}\quad R_k=\ensuremath{\mathrm{Id}}-\ensuremath{\mathrm{proj}}_{C_k}. \end{equation} In the presence of more general nonlinear operators, however, projection techniques are not applicable to solve \eqref{e:z1}. 
Furthermore, standard minimization approaches such as minimizing the least-squares residual $\sum_{k=1}^q\|R_kx-r_k\|^2$ typically lead to an intractable nonconvex problem. Yet, we can employ fixed point arguments to approach the problem and design a provably convergent method to solve it. To this end, assume that \eqref{e:z1} has a solution and that each operator $R_k$ is \emph{proxifiable} in the sense that there exists $S_k\colon\mathcal G_k\to\mathcal H$ such that \begin{equation} \label{e:pr} \begin{cases} S_k\circ R_k\;\text{is firmly nonexpansive}\\ (\forall x\in\mathcal H)\quad S_k(R_kx)=S_kr_k\;\;\Rightarrow\;\; R_kx=r_k. \end{cases} \end{equation} Clearly, if $R_k$ is firmly nonexpansive, e.g., a projection or proximity operator (see Fig.~\ref{fig:1}), then it is proxifiable with $S_k=\ensuremath{\mathrm{Id}}$. Beyond that, many transformations found in data analysis, including discontinuous operations such as hard-thresholding of wavelet coefficients, are proxifiable \cite{Eusi20,Ibap20}. Now set \begin{equation} \label{e:21} (\forall k\in\{1,\ldots,q\})\quad T_k=S_kr_k+\ensuremath{\mathrm{Id}}-S_k\circ R_k. \end{equation} Then the operators $(T_k)_{1\leq k\leq q}$ are firmly nonexpansive and Problem~\ref{prob:z} reduces to finding one of their common fixed points. In view of Propositions~\ref{p:roma} and \ref{p:f3}, this can be achieved by applying Theorem~\ref{t:1} with $T=T_1\circ\cdots\circ T_q$. The more sophisticated block-iterative methods of \cite{Nume06,Ibap20} are also applicable. Let us observe that the above model is based purely on a fixed point formalism which does not involve monotone inclusions or optimization concepts. See \cite{Eusi20,Ibap20} for data science applications. 
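As an elementary numerical illustration of this fixed point construction (with $q=2$ and firmly nonexpansive $R_k$, so that $S_k=\mathrm{Id}$), consider two orthogonal projections in the Euclidean plane; the concrete matrices and the ideal object below are illustrative. Iterating $T_1\circ T_2$ recovers a point consistent with both observations.

```python
import numpy as np

# two firmly nonexpansive observation maps (S_k = Id): orthogonal projections
P1 = np.array([[1.0, 0.0],
               [0.0, 0.0]])                    # projection onto the x-axis
P2 = 0.5 * np.array([[1.0, 1.0],
                     [1.0, 1.0]])              # projection onto the line y = x

x_bar = np.array([2.0, 3.0])                   # ideal object
r1, r2 = P1 @ x_bar, P2 @ x_bar                # observed transforms

# firmly nonexpansive operators T_k = r_k + Id - R_k
T1 = lambda u: r1 + u - P1 @ u
T2 = lambda u: r2 + u - P2 @ u

x = np.zeros(2)
for _ in range(100):
    x = T1(T2(x))                              # x_{n+1} = (T1 o T2) x_n

print(x)  # recovers the solution [2. 3.]
```
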
\section{Concluding remarks} \label{sec:8} We have shown that fixed point theory provides an essential set of tools to efficiently model, analyze, and solve a broad range of problems in data science, be they formulated as traditional minimization problems or in more general forms such as Nash equilibria, monotone inclusions, or nonlinear operator equations. Thus, as illustrated in Section~\ref{sec:7}, nonlinear models that would appear to be predestined to nonconvex minimization methods can be effectively solved with the fixed point machinery. The prominent role played by averaged operators in the construction of provably convergent fixed point iterative methods has been highlighted. Also emphasized is the fact that monotone operators are the backbone of many powerful modeling approaches. We believe that fixed point strategies are bound to play an increasing role in future advances in data science. \medskip {\bfseries Acknowledgment.} The authors thank Minh N. B\`ui and Zev C. Woodstock for their careful proofreading of the paper.
\section*{Abstract}{ Using the algebro-geometric approach, we study the structure of semi-classical eigenstates in a weakly-anisotropic quantum Heisenberg spin chain. We outline how classical nonlinear spin waves governed by the anisotropic Landau--Lifshitz equation arise as coherent macroscopic low-energy fluctuations of the ferromagnetic ground state. Special emphasis is devoted to the simplest types of solutions, describing precessional motion and elliptic magnetisation waves. The internal magnon structure of classical spin waves is resolved by performing the semi-classical quantisation using the Riemann--Hilbert problem approach. We present an expression for the overlap of two semi-classical eigenstates and discuss how correlation functions at the semi-classical level arise from classical phase-space averaging. } \vspace{10pt} \noindent\rule{\textwidth}{1pt} \tableofcontents\thispagestyle{fancy} \noindent\rule{\textwidth}{1pt} \vspace{10pt} \section{Introduction} \label{sec:intro} Thermodynamic systems of interacting quantum particles present an outstanding challenge to theoretical physics. In spite of their inherent complexity, tremendous progress has been made recently in understanding various facets of quantum many-body systems, including thermalisation~\cite{D_Alessio_2016, Gogolin_2016}, far-from-equilibrium dynamics~\cite{Essler_2016, Mussardo_2016}, quantum transport~\cite{Vasseur_2016,Bertini_review} and entanglement dynamics~\cite{Alba_2017}, especially after the inception of generalised hydrodynamics~\cite{Doyon_2016, Bertini_2016}. In this regard, dimensionally constrained models played a prominent role as they not only permit moderately efficient numerical simulations but also provide a playground for developing and testing analytical approaches. Integrable models are of special interest as they enable us to obtain non-perturbative closed-form solutions that are otherwise rarely available. 
This paper is devoted to studying the structure and properties of semi-classical eigenstates and the emergence of classical dynamics in the quantum Heisenberg chain, an archetypal example of a quantum many-body system. Emergent integrability in the $\mathcal{N}=4$ superconformal Yang--Mills theory~\cite{Minahan_2003, Kruczenski_2004} generated considerable interest in semi-classical limits of integrable quantum spin chains. In particular, the complete semi-classical spectrum of the isotropic Heisenberg model was first described in \cite{Kazakov_2004} and subsequently studied in great detail in \cite{Bargheer_2008}. The Heisenberg model, however, appears in a variety of physics applications in the domain of statistical mechanics and is of particular relevance for studying basic principles of out-of-equilibrium many-body dynamics. Following the spirit of Ref.~\cite{Bargheer_2008}, we devote this work to studying the semi-classical eigenstates in the easy-axis regime of the anisotropic Heisenberg spin-$1/2$ chain. A separate motivation for studying the semi-classical part of the spectrum originates from recent interest in magnetisation transport in integrable quantum chains, where the anisotropic Heisenberg chain plays a prominent role. Several conspicuous similarities with purely classical magnetisation dynamics governed by the Landau--Lifshitz ferromagnet \cite{Faddeev_1987} have been found, both at qualitative and quantitative levels, which firmly point towards a particular type of classical--quantum correspondence~\cite{Misguich_2017, Gamayun_2019, Misguich_2019}. 
On the classical side, a key piece of evidence rests on an exact solution to the initial value problem with a domain-wall initial profile \cite{Gamayun_2019} which discerns three different dynamical regimes depending on the value of anisotropy: (i) ballistic spin transport in the easy-plane regime, (ii) absence of transport in the easy-axis regime and (iii) diffusion with a multiplicative logarithmic correction at the isotropic point. This matches the phenomenology of the quantum isotropic Heisenberg spin-$1/2$ chain inferred previously in \cite{Misguich_2017}. In this paper we specialise to the easy-axis regime, where the absence of transport has been linked with the presence of a stable kink in the spectrum \cite{Gamayun_2019}. Despite that, the precise manner in which such kinks arise as coherent superpositions of magnon excitations of the underlying quantum chain remains unknown. The task at hand is to perform semi-classical quantisation of classical nonlinear spin waves that arise in the weakly-anisotropic ferromagnetic Heisenberg chain. The future hope is to study non-equilibrium dynamics directly at the level of semi-classical eigenstates and thus put the classical--quantum correspondence on a firm footing. It deserves to be emphasised that the aforementioned classical--quantum correspondence of spin transport is not a particularity of the domain wall physics but likewise manifests itself in thermal equilibrium states (i.e. at finite density of magnon excitations). There, however, it comprises different types of dynamical regimes. Most prominently, in the \emph{isotropic} Heisenberg quantum chain, finite-temperature spin transport (in the zero-magnetisation sector) is superdiffusive~\cite{Znidaric11,Ljubotina_2017, Ilievski_2018}, belonging to the KPZ universality class (characterised by dynamical exponent $z=3/2$ \cite{GV19, De_Nardis_2019}). 
Such an anomalous behaviour has been attributed to interacting (and thermally dressed) `giant magnons', referring to semi-classical eigenstates which at the classical level show up as soliton modes. In contrast, genuine quantum excitations associated with bound magnon states (Bethe strings) become suppressed in this regime~\cite{De_Nardis_2020_2}. Curiously, this anomalous feature nevertheless entirely disappears upon introducing any amount of interaction anisotropy: on the easy-plane side, one finds ballistic transport characterised by a finite spin Drude weight~\cite{PZ13,IN17}, whereas easy-axis anisotropy restores normal diffusion \cite{NBD19,GHKV18,GV19}. From our perspective, these findings offer an extra motivation to carefully examine the structure of semi-classical eigenstates in integrable quantum lattice models. In this work we use the {\it asymptotic Bethe ansatz} approach~\cite{Sutherland_1978} to identify and describe the semi-classical part of the spectrum in the Heisenberg spin-$1/2$ chain with uniaxial anisotropy. We shall in large part follow in the footsteps of Refs.~\cite{Kazakov_2004, Bargheer_2008} by employing an algebro-geometric integration technique~\cite{Dubrovin_1981,Belokolos_book}, the Riemann--Hilbert formalism. The main object of interest is the so-called classical spectral curve which provides complete information about the classical finite-gap spectrum of the anisotropic Landau--Lifshitz ferromagnet. Our aim is to make the exposition self-contained and pedagogical. Our attention will be largely devoted to the emergence of special types of semi-classical solutions that represent bions and kinks, for which anisotropy proves crucial for stability. In addition, we shall provide closed-form expressions for the semi-classical norms and overlaps, building on earlier works~\cite{Gromov_2012, Kostov_2012_PRL, Kostov_2012, Bettelheim_2014}. Finally, we briefly discuss the structure of the semi-classical limits of static correlation functions. 
The article is structured as follows. In Sec.~\ref{sec:HO} we begin by briefly introducing the notion of the spectral curve, using the most basic example, the harmonic oscillator. Next, in Sec.~\ref{sec:ABE} we outline the asymptotic Bethe ansatz technique by applying it to the anisotropic Heisenberg chain. We proceed by constructing the classical finite-gap solutions and explicitly working out two specific examples in Sec.~\ref{sec:finitegap} and \ref{sec:examples}. The semi-classical quantisation is then carried out in Sec.~\ref{sec:quantumcontour} and calculations of semi-classical overlaps are given in Sec.~\ref{sec:normandoverlap}. Lastly, in Sec.~\ref{sec:correlationfunc} we formulate a conjecture for a classical--quantum correspondence of correlation functions. We finish in Sec.~\ref{sec:conclusion} with a conclusion and an outlook. \section{Harmonic oscillator: introducing the spectral curve} \label{sec:HO} Prior to delving deep into the realm of many-body systems, we would like to first familiarise the reader with several technical tools that constitute the foundations of the algebro-geometric method to solve differential equations. To this end, we shall describe the semi-classical spectrum of a single quantum-mechanical degree of freedom, the good old quantum harmonic oscillator, \begin{equation} \hat{H} = \frac{\hat{p}^2}{2 m} + \frac{1}{2} m \omega^2 \hat{x}^2. \end{equation} As usual, we use a canonical pair of position and momentum operators, satisfying \begin{equation} [\hat{x}, \hat{p}] = {\rm i} \hbar . \end{equation} In the language of second quantisation, the Hamiltonian of the quantum harmonic oscillator takes the diagonal form \begin{equation} \hat{H} = \hbar \omega \left( \hat{a}^\dag \hat{a} + \frac{1}{2} \right), \end{equation} in terms of the bosonic annihilation operator \begin{equation} \hat{a} = \frac{1}{\sqrt{2 \hbar}} \left( \sqrt{m \omega} \, \hat{x} + \frac{{\rm i}}{\sqrt{m \omega}} \, \hat{p} \right). 
\end{equation} The ground state $\ket{0}$ is the Fock vacuum and satisfies $\hat{a} | 0 \rangle = 0$. Excited eigenstates $\ket{n}$, which obey $\hat{a}^\dag \hat{a} \ket{n} = n \ket{n}$, are produced by iterative application of the creation operator $\hat{a}^{\dag}$ on the ground state $\ket{0}$, \begin{equation} | n \rangle = \frac{\big[ \hat{a}^\dag \big]^n}{\sqrt{n !}} | 0 \rangle , \end{equation} such that \begin{equation} \hat{H} | n \rangle = E_n | n \rangle = \hbar \omega \left( n + \frac{1}{2} \right) | n \rangle . \end{equation} Using a dimensionless coordinate $u = \sqrt{\frac{m \omega}{\hbar}} x$, eigenfunctions $\psi (u) = \langle u | n \rangle$ can be expressed in terms of their normalised logarithmic derivative called {\it quasi-momentum} (see e.g. \cite{Gromov_2017}), \begin{equation} \mathfrak{p} (u) = \frac{\sqrt{\hbar m \omega}}{2}\, \frac{\partial_u \psi(u)}{\psi(u)}. \end{equation} Here and below we mostly suppress the subscript $n$ for the sake of clarity. The Schr\"{o}dinger equation for $\psi(u)$ accordingly transforms into a Riccati equation \begin{equation} \mathfrak{p}^2 (u) - {\rm i} \sqrt{\hbar } \partial_u \mathfrak{p} = 2 m (E - V(u)) , \qquad V(u) = \frac{\hbar \omega}{2} u^2, \end{equation} whereas nodes (i.e. zeros) of the excited wavefunctions $\ket{n}$ have now turned into simple poles. After a short exercise, one can deduce the following representation \cite{Gromov_2017} \begin{equation} \mathfrak{p} (u) = {\rm i} \sqrt{ m \hbar \omega } \left( u - \sum_{j=1}^n \frac{1}{u - u_j} \right), \end{equation} with poles $u_{j}$ of the quasi-momentum satisfying a simple system of equations \begin{equation} u_j = \sum_{k \neq j} \frac{1}{u_j - u_k }. \label{HOBethe} \end{equation} In this description, every eigenstate $\ket{n}$ gets assigned a unique set of poles $u_{j}$, with $j=1,\ldots,n$, and accordingly Eqs.~\eqref{HOBethe} bear a direct analogy to the celebrated Bethe ansatz equations arising in integrable \emph{interacting} quantum systems. 
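Equations \eqref{HOBethe} are simple enough to check numerically. A quick sanity check in Python (a sketch using numpy's Hermite routines): the zeros of the physicists' Hermite polynomial $H_n$, which, as recalled below, solve these equations, indeed satisfy them to machine precision.

```python
import numpy as np
from numpy.polynomial.hermite import hermroots

n = 12
u = hermroots([0] * n + [1])      # zeros of the physicists' Hermite H_n

# residual of u_j = sum_{k != j} 1 / (u_j - u_k)
diff = u[:, None] - u[None, :]
np.fill_diagonal(diff, np.inf)    # exclude the k = j term
residual = u - (1.0 / diff).sum(axis=1)
print(np.max(np.abs(residual)))   # numerically zero
```
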
By maintaining this analogy, it is furthermore convenient to introduce the $Q$-polynomial \begin{equation} Q_n (u) = \prod_{j=1}^n ( u -u_j ), \label{HOQfunc} \end{equation} which in integrable quantum spin chains corresponds to Baxter's $Q$-function. By integrating the quasi-momentum we can readily retrieve the wavefunction for the $n$th excited state, \begin{equation} \psi_n (u) = \frac{1}{\sqrt{\mathcal{N}}}\, e^{-u^2 / 2} Q_n (u), \label{HOwavefunction} \end{equation} with energy $E_n = \left( n + \frac{1}{2}\right) \hbar \omega$ and normalisation $\mathcal{N}$. The $Q$-polynomials solving the Bethe-like equations~\eqref{HOBethe} are none other than the Hermite polynomials $H_{n}(u)$ of order $n$, that is \begin{equation} Q_n (u) = H_n (u ). \end{equation} Poles $u_j$ are thus identified with zeros (roots) of Hermite polynomials. \subsection{Semi-classical limit} We next analyse the semi-classical eigenstates of the quantum harmonic oscillator. Prior to that, we shall briefly recall some well-known results for the classical harmonic oscillator \begin{equation} \mathcal{H} = \frac{p^2 }{2 m} + \frac{1}{2} m \omega^2 x^2. \end{equation} Here position $x$ and momentum $p$ are phase-space coordinates with the canonical Poisson bracket \begin{equation} \{ p , x \} = 1 . \end{equation} Solutions to the classical harmonic oscillator are conventionally expressed in terms of trigonometric functions \begin{equation} p = \sqrt{2 m E} \cos (\omega t + \phi ) , \qquad x =\frac{1}{\omega} \sqrt{\frac{2 E}{m}} \sin (\omega t + \phi ) , \end{equation} where $E$ is the value of the energy and the offset angle $\phi$ is determined by the initial condition. The classical harmonic oscillator is possibly the simplest example of a dynamical system that is integrable in the Liouville--Arnol'd sense. 
The associated action-angle pair of variables is simply \begin{equation} {\rm S} = \frac{E}{\omega} , \qquad \varphi = \omega\,t , \end{equation} satisfying the canonical Poisson relation $\{{\rm S} , \varphi \} = 1$. \paragraph*{Lax representation.} We proceed by outlining an algebraic reformulation of the above construction. The notion of algebraic integrability rests on the concept of the Lax representation~\cite{babelon_bernard_talon_2003}. Here we briefly describe this construction by working it out for our toy model -- the classical harmonic oscillator. To begin with, we emphasise that the Lax representation for an integrable dynamical system is not unique. In the present case, a suitable choice for the \emph{Lax pair} of $2\times 2$ matrices $L$ and $M$ is as follows, \begin{equation} L(v;t) = \begin{pmatrix} p + {\rm i} m \omega x v & m \omega x - {\rm i} p v \\ m \omega x - {\rm i} p v & - p - {\rm i} m \omega x v \end{pmatrix} , \qquad M = \frac{\omega}{2} \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}. \end{equation} The Lax matrix $L(v;t)$ depends on time $t$ through $x(t)$ and $p(t)$. In addition, there is \emph{analytic} dependence on the so-called spectral parameter $v \in \mathbb{C}$ which, as we clarify in a moment, is of pivotal importance for algebraic integrability. The equations of motion of the classical harmonic oscillator are equivalent to the following evolution equation for the Lax matrix $L(v;t)$, \begin{equation} \frac{\mathrm{d} }{\mathrm{d} t} L (v;t) = \left[ M , L (v;t) \right] . \end{equation} Although at first glance it may appear that we have not gained much at all, in the Lax formulation one can immediately recognise the fact that the characteristic polynomial of the Lax matrix, \begin{equation} {\rm det} (w - L (v)) = 0, \end{equation} is a time-independent quantity. Formally speaking, it defines a complex curve $\Sigma(w,v) \subset \mathbb{C}^{2}$ called the \textit{spectral curve}. 
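Both claims are straightforward to verify numerically. The following Python sketch (with purely illustrative parameter values) checks, along the exact trajectory, that the Lax equation ${\rm d}L/{\rm d}t = [M,L]$ holds and that $\det L(v;t) = -2mE(1-v^2)$ is independent of time.

```python
import numpy as np

# illustrative parameters, a test value of the spectral parameter v
m, omega, E, v, phi = 1.0, 2.0, 3.0, 0.7, 0.3

def xp(t):
    # exact harmonic-oscillator trajectory with energy E
    x = np.sqrt(2 * E / m) / omega * np.sin(omega * t + phi)
    p = np.sqrt(2 * m * E) * np.cos(omega * t + phi)
    return x, p

def L(t):
    x, p = xp(t)
    a = p + 1j * m * omega * x * v
    b = m * omega * x - 1j * p * v
    return np.array([[a, b], [b, -a]])

M = 0.5 * omega * np.array([[0.0, -1.0], [1.0, 0.0]])

# Lax equation dL/dt = [M, L], checked by a central finite difference
t, h = 0.4, 1e-6
dL = (L(t + h) - L(t - h)) / (2 * h)
comm = M @ L(t) - L(t) @ M
print(np.max(np.abs(dL - comm)))   # ~ 0

# the characteristic polynomial is conserved: det L = -2mE(1 - v^2)
print([np.linalg.det(L(s)).real for s in (0.0, 1.0, 2.5)])  # constant in s
```
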
In the current case it is simply an algebraic curve of the form~\cite{babelon_bernard_talon_2003} \begin{equation} \Sigma:\qquad w^2(v) = 2 m E\,(1 - v^2). \label{HOspectralcurve} \end{equation} Eigenvalues $w_{\pm}(v)$ can be conveniently parametrised in terms of a single double-valued complex function \begin{equation} \mathfrak{p}_{\rm cl}(v) = \sqrt{2 m E\,(1 - v^2)}, \label{classicalqpho} \end{equation} called the \emph{classical} quasi-momentum, satisfying $2\cos \mathfrak{p}_{\rm cl} (v) = \mathrm{tr} \left( \exp L (v) \right)$. The quasi-momentum $\mathfrak{p}_{\rm cl}(v)$ features a pair of square-root branch points at $\pm 1$. One can thus interpret the spectral curve as a two-sheet Riemann surface with a branch cut along the interval $\mathcal{I}\equiv [-1,1]$ and a pole at infinity, $v_{\rm p} = \infty$. By analyticity, one furthermore has $\oint_{\mathcal{I}}{\rm d} \mathfrak{p}_{\rm cl}(v)=0$. \paragraph*{Semi-classical limit.} We now consider the quantum harmonic oscillator and examine the highly-excited eigenstates whose nodes densely distribute along the real axis. Our purpose here is to demonstrate how the classical spectral curve emerges out of a condensate of zeros of the $Q$-polynomial~\eqref{HOQfunc}. This indeed corresponds to the semi-classical limit of large mode numbers $n\to \infty$, with $\hbar \sim 1/n$. From the viewpoint of the classical system, the resulting condensate shows up as a square-root branch cut of the classical spectral curve~\eqref{HOspectralcurve}. One can likewise think of the reverse process in which the branch cut disintegrates into a collection of simple poles, which is none other than the familiar Bohr--Sommerfeld quantisation rule \begin{equation} n \gg 1:\qquad \frac{1}{2\pi}\oint_{\mathcal{C}}{\rm d} v\,\mathfrak{p}(v) = \hbar\,n . 
\end{equation} The (quantum) quasi-momentum, upon substituting $u = \sqrt{2 n} v$ and subsequently taking the limit $n \to \infty$, becomes \begin{equation} \begin{split} \lim_{n \to \infty} \mathfrak{p} (v) &= \lim_{n \to \infty} {\rm i} \sqrt{m \hbar \omega} \left[ \sqrt{2 n} v - \sum_{j=1}^n \frac{1}{\sqrt{2 n} v - u_j } \right] \\ &= {\rm i} \sqrt{ 2 m n \hbar \omega} \left[ v - \frac{1}{2} \int_{-1}^1 {\rm d} y\, \frac{\rho (y)}{v - y} \right] , \end{split} \end{equation} where we have introduced the distribution of zeros of the Hermite polynomial \begin{equation} \rho (y) = \lim_{n\to \infty}\frac{1}{n} \sum_{j=1}^n \delta \left(y - \frac{u_j}{\sqrt{2 n}} \right). \end{equation} We have thus retrieved the classical quasi-momentum \begin{equation} \mathfrak{p}_{\rm cl} (v) = {\rm i} \lim_{n \to \infty} \sqrt{2 m n \hbar \omega} \sqrt{v^2 - 1} = \sqrt{2 m E (1- v^2 )} , \end{equation} characterised by the classical energy \begin{equation} E = \lim_{n \to \infty} \lim_{\hbar \to E/(\omega n)} n \hbar \omega. \end{equation} \begin{figure} \centering \begin{minipage}[t]{.32\textwidth} \centering \includegraphics[width=\textwidth]{figures/wavefunction_HO_mode_num_20.pdf} \caption*{\centering(a)} \end{minipage} \begin{minipage}[t]{.32\textwidth} \centering \includegraphics[width=\textwidth]{figures/classical_spectral_curve_HO.pdf} \caption*{\centering(b)} \end{minipage} \begin{minipage}[t]{.32\textwidth} \centering \includegraphics[width=\textwidth]{figures/classical_spectral_curve_HO_in_z.pdf} \caption*{\centering(c)} \end{minipage} \caption{(a) Normalised wavefunction $\psi_{20} (v)$ with mode number $20$ and rescaled coordinate $v = u / \sqrt{40}$. Its zeros are marked by red crosses. At large mode numbers, zeros of wavefunctions approach each other and eventually condense. The resulting condensate can be viewed as a square-root branch cut of a spectral curve, cf. Eq.~\eqref{HOspectralcurve}. Classical spectral curve of the harmonic oscillator in the $v$-plane (b) and $z$-plane (c). 
The branch cut in (b) gets mapped into two regular points of the Riemann surface, while the pole at infinity lifts to two punctures. The dashed line indicates the motion of dynamical variable $z^\star$.} \label{fig:HOspectralcurve} \end{figure} \paragraph*{Canonical angle.} The spectral curve can be associated with a dynamical angle-type variable $\varphi$. To find $\varphi$, one requires the solution to the auxiliary linear problem. The latter takes the form of the Lax equations \begin{equation} L\, \psi_{\pm}(v;t) = \lambda_\pm(v) \psi_{\pm}(v;t) , \qquad \frac{\mathrm{d} }{\mathrm{d} t} \psi_{\pm}(v;t) = M\, \psi_{\pm}(v;t). \end{equation} The standard recipe is to identify the canonical angle variables with dynamical poles of the appropriately normalised eigenvectors $\psi_{\pm}$, normally known in the literature as the Baker--Akhiezer vectors~\cite{babelon_bernard_talon_2003}. Below we employ a slightly different (but equivalent) approach using the {\it squared eigenfunction}, mainly to circumvent the normalisation ambiguity inherent to the Baker--Akhiezer vectors. Introducing a $2\times 2$ matrix of eigenvectors $\psi_{\pm} (v,t)$, \begin{equation} \boldsymbol{\psi}(v;t) = \big( \psi_+(v;t) , \psi_-(v;t) \big), \end{equation} we define the `squared eigenfunction' as \begin{equation} \boldsymbol{\Psi} = \boldsymbol{\psi} \, \sigma^z \, \boldsymbol{\psi}^{-1}, \end{equation} satisfying $\det \boldsymbol{\Psi} = -1$. Notice that $\mathbf{\Psi}$ differs from $L$ by a time-independent normalisation. The matrix $\boldsymbol{\Psi}$ itself thus also evolves according to the Lax equation of motion, \begin{equation} \frac{{\rm d} }{{\rm d} t} \boldsymbol{\Psi}(v;t) = \left[ M , \boldsymbol{\Psi}(v;t) \right] . 
\label{timeevolutionPsiHO} \end{equation} In the present example of the classical harmonic oscillator, the solution to the above differential equation admits the following explicit form \begin{equation} \boldsymbol{\Psi}(v;t) = \frac{1}{\sqrt{2mE (1-v^2)}} \begin{pmatrix} {\rm i} m \omega x \left( v - \frac{{\rm i} p}{m \omega x} \right) & -{\rm i} p \left( v + \frac{{\rm i} m \omega x}{p} \right) \\ -{\rm i} p \left( v + \frac{{\rm i} m \omega x}{p} \right) & - {\rm i} m \omega x \left( v - \frac{{\rm i} p}{m \omega x} \right) \end{pmatrix}, \end{equation} where $x=x(t)$ and $p=p(t)$. Based on the general rule, the dynamical variables $\gamma_j$ correspond to zeros of the off-diagonal element of the squared eigenfunction $\boldsymbol{\Psi}$. In the language of algebraic geometry, the full set of dynamical variables $\{\gamma_{j}\}$ constitutes the so-called dynamical divisor of a Riemann surface. Their equations of motion on the surface are governed by a system of differential equations that go under the name of Dubrovin equations. In the case of hyperelliptic algebraic curves of genus $\mathfrak{g}$, the total number of dynamical variables equals $\mathfrak{g}+1$. Our toy example involves a single degree of freedom and therefore we deal with a surface of genus $\mathfrak{g} = 0$ (i.e. a Riemann sphere). Accordingly, there is a single dynamical variable $\gamma_{1}=\gamma_{1}(t)$, satisfying a simple evolution law \begin{equation} \gamma_1(t) = - {\rm i} \tan (\omega t + \phi) = - \frac{{\rm i}\, m \omega x}{p} . \end{equation} To explicitly see this, we perform the following variable transformation, \begin{equation} v \mapsto z(v):\qquad v = - \frac{z - 1/z}{z+ 1/z}, \end{equation} which resolves the square-root type singularities at $v^{\pm}_{\star}$ and renders $\lambda^{2}$ a rational function of $z$, \begin{equation} \Sigma:\qquad \lambda^2(z) = 2 m E \frac{4}{\left(z + 1/z \right)^2}. 
\end{equation} The original square-root branch cut with branch points $v^{\pm}_{\star}=\pm 1$, see Eq.~\eqref{HOspectralcurve}, has turned into two regular points on the Riemann surface located at $z_{\star}\in \{0,\infty\}$, whereas the original pole at infinity has two pre-images at two punctures $z^{\pm}_{\rm p}=\pm {\rm i}$, one on each Riemann sheet. The dynamical variable $z^{\star}(t) = \exp{({\rm i}\,\varphi(t))}$ undergoes periodic motion with a linearly-evolving angle variable $\varphi(t)=\omega\,t$, as shown in Fig.~\ref{fig:HOspectralcurve}. \subsection{Classical limit of quantum correlation functions} We finally examine the expectation values of quantum observables computed in semi-classical eigenstates. We explain how the latter manifest themselves as classical quantities, thereby establishing an exact classical--quantum correspondence at the level of correlation functions. In the semi-classical limit, the average of a quantum observable $\hat{O}$ should be identified with the ergodic phase-space average \begin{equation} \lim_{n \to \infty , \hbar \to \frac{E}{\omega n}} \langle n | \hat{O} (t_1 , \cdots , t_m) | n \rangle = \frac{1}{T} \int_0^T {\rm d} t\, O (t + t_1 , \cdots , t + t_m) , \label{classicalquantumHO} \end{equation} where $T = 2 \pi / \omega$ denotes the fundamental period. The correspondence can be readily illustrated with a few examples. We first consider the operator $\hat{O} = \hat{x}^2 (0)$. Expectation values in normalised quantum eigenstates read \begin{equation} \langle n | \hat{x}^2 (0) | n \rangle = \frac{\hbar}{m \omega} \left( n + \frac{1}{2} \right) , \end{equation} which in the semi-classical limit yields \begin{equation} \lim_{n \to \infty , \hbar \to \frac{E}{\omega n}} \langle n | \hat{x}^2 (0) | n \rangle = \frac{E}{m \omega^2} = \frac{\omega}{2 \pi} \int_0^{\frac{2 \pi}{\omega}} {\rm d} t\,x^2 ( t ) . 
\end{equation} The outlined correspondence continues to hold even when $\hat{O}$ consists of several non-commuting operators. To illustrate this, we consider two observables $\hat{O}_1 = \hat{x} (0) \hat{p} (t_0) $ and $\hat{O}_2 = \hat{p} (t_0) \hat{x} (0)$, whose quantum averages yield \begin{equation} \langle n | \hat{O}_1 | n \rangle = \langle n | \hat{x} (0) \hat{p} (t_0) | n \rangle = \frac{{\rm i} \hbar}{2} \left[ e^{{\rm i} \omega t_0} (n+1) - e^{ - {\rm i} \omega t_0} n \right] , \end{equation} \begin{equation} \langle n | \hat{O}_2 | n \rangle = \langle n | \hat{p} (t_0) \hat{x} (0) | n \rangle = \frac{{\rm i} \hbar}{2} \left[ e^{{\rm i} \omega t_0} n - e^{ - {\rm i} \omega t_0} (n+1) \right] . \end{equation} Taking again the semi-classical limit, we arrive at \begin{equation} \begin{split} & \lim_{n \to \infty , \hbar \to \frac{E}{\omega n}} \langle n | \hat{O}_1 | n \rangle = \lim_{n \to \infty , \hbar \to \frac{E}{\omega n}} \langle n | \hat{O}_2 | n \rangle \\ & = - \frac{E}{\omega} \sin (\omega t_0 ) = \frac{\omega}{2 \pi} \int_0^{\frac{2 \pi}{\omega}} {\rm d} t\,x ( t ) p(t+ t_0) . \end{split} \end{equation} The upshot of this rather elementary calculation is that expectation values of quantum observables in the semi-classical limit of large mode numbers can be effectively replaced by the corresponding classical observables. There is no ordering ambiguity at the leading order; ordering effects only enter in the form of `quantum corrections', i.e. subleading contributions ($\sim \mathcal{O} (\hbar)$) to the correlation functions. \medskip This concludes our pedagogical introduction to the basic notions of algebro-geometric methods. In the remainder of this article we shall employ the same methodology on a genuine interacting many-body model -- the quantum Heisenberg spin-$1/2$ chain with anisotropic interaction (the XXZ model). 
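The correspondence \eqref{classicalquantumHO} can also be tested numerically for $\hat{O} = \hat{x}^2(0)$. The following Python sketch (with illustrative parameter values) compares the quantum expectation value at $\hbar = E/(\omega n)$ with the classical time average over one period.

```python
import numpy as np

# illustrative parameters
m, omega, E = 1.0, 2.0, 3.0

def quantum_x2(n):
    # <n| x^2 |n> = (hbar / m omega)(n + 1/2), evaluated at hbar = E / (omega n)
    hbar = E / (omega * n)
    return hbar / (m * omega) * (n + 0.5)

# classical time average of x(t)^2 over one period T = 2 pi / omega
T = 2 * np.pi / omega
t = np.linspace(0.0, T, 4096, endpoint=False)
x = np.sqrt(2 * E / m) / omega * np.sin(omega * t)
classical = np.mean(x**2)

# both approach E / (m omega^2) as the mode number grows
print(quantum_x2(10**6), classical, E / (m * omega**2))
```
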
Due to the multiple degrees of freedom involved, the analytical treatment becomes much more challenging, and understanding how correlators simplify in the classical limit is no longer a trivial task. Before coming back to this issue in Sec.~\ref{sec:correlationfunc}, we first introduce the formalism and other key ingredients. \section{Asymptotic Bethe Ansatz: the XXZ Heisenberg model} \label{sec:ABE} The anisotropic (XXZ) quantum spin-$1/2$ chain is often regarded as an archetypal model of many-body quantum dynamics. Besides that, it is among the models best studied by the Bethe ansatz \cite{Bethe_1931, Korepin_1993, Gaudin_2009}. Its Hamiltonian reads \begin{equation} \hat{H} = -J \left[ \sum_{j=1}^L \hat{S}^{\rm x}_j \hat{S}^{\rm x}_{j+1} + \hat{S}^{\rm y}_j \hat{S}^{\rm y}_{j+1} + \Delta \left(\hat{S}^{\rm z}_j \hat{S}^{\rm z}_{j+1} - \frac{1}{4} \right) \right ] , \label{quantumhamil} \end{equation} using the generators $\hat{S}^{\alpha} = \hat{\sigma}^{\alpha}/2$ (where $\hat{\sigma}^{\alpha}$ are the Pauli matrices) of the $\mathfrak{su}(2)$ spin algebra, satisfying the commutation relations $[\hat{\sigma}^{\alpha},\hat{\sigma}^{\beta}]=2{\rm i}\sum_{\gamma}\epsilon_{\alpha \beta \gamma} \hat{\sigma}^{\gamma}$. We assume periodic boundary conditions. We shall be interested in the ferromagnetic regime ($J > 0$, $\Delta\geq 0$) and for convenience set (with no loss of generality) $J = 1$. There are three phases (regimes) to be distinguished: $\Delta > 1$, $\Delta = 1$ and $0 \leq \Delta < 1$, conventionally called the gapped, isotropic and gapless regimes, respectively. This nomenclature refers to properties of the antiferromagnetic ground state. Our focus in this article will be exclusively on the gapped regime. As customary, we parametrise the anisotropy as \begin{equation} \Delta = \tfrac{1}{2}(q+q^{-1}) = \cosh \eta,\qquad q=\exp{\eta},\qquad \eta \in \mathbb{R}, \end{equation} where $q$ is the deformation parameter of the quantum symmetry algebra $\mathcal{U}_{q}(\mathfrak{su}(2))$.
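For small chains the Hamiltonian \eqref{quantumhamil} can be assembled explicitly. The following sketch (illustrative values $L=6$, $\Delta=1.5$; not part of the derivation) verifies two elementary statements used below: the total magnetisation commutes with $\hat{H}$, and the ferromagnetic product state has zero energy:

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def site_op(op, j, L):
    """Embed a single-site operator at site j of an L-site chain."""
    mats = [I2] * L
    mats[j] = op
    return reduce(np.kron, mats)

def xxz_hamiltonian(L, Delta, J=1.0):
    """H = -J sum_j [Sx Sx + Sy Sy + Delta (Sz Sz - 1/4)], periodic boundaries."""
    H = np.zeros((2**L, 2**L), dtype=complex)
    for j in range(L):
        k = (j + 1) % L
        H -= J * (site_op(sx, j, L) @ site_op(sx, k, L)
                  + site_op(sy, j, L) @ site_op(sy, k, L)
                  + Delta * (site_op(sz, j, L) @ site_op(sz, k, L) - 0.25 * np.eye(2**L)))
    return H

L, Delta = 6, 1.5
H = xxz_hamiltonian(L, Delta)
Sz_tot = sum(site_op(sz, j, L) for j in range(L))
vac = np.zeros(2**L); vac[0] = 1.0               # |up...up> in the computational basis
print(np.linalg.norm(H @ Sz_tot - Sz_tot @ H))   # ~0: magnetisation is conserved
print(np.linalg.norm(H @ vac))                   # ~0: ferromagnetic vacuum has zero energy
```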
The model can be diagonalised by the Bethe Ansatz, a celebrated method invented by Hans Bethe~\cite{Bethe_1931}. In what follows, we adopt the ferromagnetic eigenstate $\ket{\uparrow}^{\otimes L}$ as the reference particle pseudovacuum to construct the entire spectrum of eigenstates. In fact, the ferromagnetic states are two-fold degenerate and both ferromagnetic vacua are required to obtain the full spectrum of eigenstates.\footnote{This is no longer the case if the $SU(2)$ invariance is broken by a boundary twist, in which case a single vacuum suffices.} Since the $z$-component of total spin is conserved, the Hamiltonian block-decomposes into magnetisation sectors labelled by the quantum number $M$, the number of down-turned spins with respect to the pseudovacuum. For fixed $M$, every finite-volume eigenstate, $|\{ \vartheta_j \}^{M}_{j=1} \rangle$, is uniquely characterised by a set of (in general complex-valued) rapidities $\vartheta_j$ called Bethe roots, corresponding to a solution of the coupled system of equations \begin{equation} \left[ \frac{\sin (\vartheta_j + i \frac{\eta}{2})}{\sin (\vartheta_j - i \frac{\eta}{2})} \right]^L \prod_{k\neq j}^M \frac{\sin (\vartheta_j - \vartheta_k - i \eta)}{\sin (\vartheta_j - \vartheta_k + i \eta)} = 1 , \label{betheeq} \end{equation} known as the \emph{Bethe equations} (BE). A suggestive physical interpretation behind the Bethe equations is as follows: the term in the square bracket on the left-hand side equals $e^{{\rm i} p}$, with $p=p(\vartheta)$ being the (bare) momentum of a single magnon excitation \begin{equation} p(\vartheta) = -{\rm i} \log \frac{\sin (\vartheta + {\rm i} \frac{\eta}{2})}{\sin (\vartheta - {\rm i} \frac{\eta}{2})}, \end{equation} whereas the remaining product of quotients of trigonometric functions is interpreted as a net $U(1)$-valued scattering amplitude acquired by an individual magnon upon undergoing elastic collisions with all the remaining magnons.
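The $M=1$ case of Eqs.~\eqref{betheeq} can be verified directly: the scattering product is absent and the Bethe equation reduces to $e^{{\rm i} p L}=1$. The sketch below (illustrative values of $\eta$, $L$, $n$) inverts the bare momentum using $e^{{\rm i} p}=(\tan\vartheta + {\rm i}\tanh(\eta/2))/(\tan\vartheta - {\rm i}\tanh(\eta/2))$, an elementary rewriting of the quotient of sines, and checks the Bethe equation:

```python
import cmath, math

eta = 0.8          # anisotropy parameter, Delta = cosh(eta)
L, n = 12, 2       # chain length and magnon mode number (illustrative values)
p = 2 * math.pi * n / L

# invert p(theta): tan(theta) = tanh(eta/2) / tan(p/2)
theta = math.atan(math.tanh(eta / 2) / math.tan(p / 2))

ratio = cmath.sin(theta + 0.5j * eta) / cmath.sin(theta - 0.5j * eta)
print(abs(ratio**L - 1))                 # Bethe equation for M = 1 is satisfied
print(abs(-1j * cmath.log(ratio) - p))   # bare momentum reproduces 2 pi n / L
```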
The total momentum and energy of an eigenstate are obtained by summing over all the constituent magnons, yielding manifestly additive expressions of the form \begin{equation} P \big(\{ \vartheta_j \}_M \big) = \sum_{j=1}^M p(\vartheta_j) , \quad E \big(\{ \vartheta_j \}_M \big) = \sum_{j=1}^M \frac{\sin^2 {\rm i} \eta}{\cos 2 \vartheta_j - \cos {\rm i} \eta} . \label{PandEquantum} \end{equation} To facilitate the asymptotic analysis of Eqs.~\eqref{betheeq}, it is convenient to express them in terms of the Baxter $Q$-functions~\cite{Baxter_2016} \begin{equation} Q (\vartheta;\{\vartheta_{j}\}) = \prod_{j=1}^{M} \sin \left( \vartheta - \vartheta_j \right) , \end{equation} representing `trigonometric polynomials' of degree $M$ whose zeros (in the fundamental domain) correspond precisely to the Bethe roots of a given eigenstate. Denoting $Q_0 (\vartheta) \equiv \sin^L (\vartheta)$, and making use of compact notations for imaginary shifts, $f^{[\pm k]}(\vartheta)\equiv f(\vartheta \pm k\,{\rm i} \eta/2)$ with $f^{\pm}\equiv f^{[\pm 1]}$, the Bethe equations can be presented in the form \begin{equation} \frac{Q_0^+ (\vartheta_j)}{Q_0^- (\vartheta_j)} = \frac{Q_j^{[+2]} (\vartheta_j)}{Q_j^{[-2]} (\vartheta_j)}, \end{equation} where $Q_{j} (\vartheta) \equiv \prod_{k \neq j}^M \sin (\vartheta - \vartheta_k )$, or in an equivalent logarithmic form \begin{equation} \log Q_0^+ (\vartheta_j) - \log Q_0^- (\vartheta_j) = 2 n_j \pi {\rm i} + \log Q_{j}^{[+2]} (\vartheta_j ) -\log Q_{j}^{[-2]} (\vartheta_j ). \label{betheeqlog} \end{equation} The above form is universal and provides a useful starting point for obtaining the semi-classical limit in many other quantum integrable models. \paragraph*{Thermodynamic limit.} In physics applications one is mostly interested in thermodynamic properties. To this end, one has to infer the structure of solutions to the Bethe equations for large system size, i.e. in the $L\to \infty$ limit with $M \sim \mathcal{O}(L) \to \infty$ magnons.
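As an independent check of the dispersion in Eq.~\eqref{PandEquantum}, one can compare it with exact diagonalisation in the one-magnon sector, where the Hamiltonian \eqref{quantumhamil} reduces to an $L\times L$ circulant matrix. A numerical sketch (illustrative parameters; the rapidity is recovered from $p = 2\pi n/L$ via $\tan\vartheta = \tanh(\eta/2)/\tan(p/2)$):

```python
import cmath, math
import numpy as np

eta = 0.8
Delta = math.cosh(eta)
L = 10

# one-magnon block of the XXZ Hamiltonian: an L x L circulant matrix
H1 = Delta * np.eye(L)
for j in range(L):
    H1[j, (j + 1) % L] = H1[(j + 1) % L, j] = -0.5
ed_energies = np.sort(np.linalg.eigvalsh(H1))

# magnon dispersion from Eq. (PandEquantum), rapidity from p = 2 pi n / L
def magnon_energy(p):
    theta = math.atan2(math.tanh(eta / 2), math.tan(p / 2))
    return (cmath.sin(1j * eta)**2 / (cmath.cos(2 * theta) - cmath.cos(1j * eta))).real

bethe_energies = np.sort([magnon_energy(2 * math.pi * n / L) for n in range(L)])
print(np.max(np.abs(ed_energies - bethe_energies)))   # agreement to machine precision
```

Both spectra reproduce $\Delta - \cos p$, confirming the rapidity parametrisation of the magnon band.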
A typical thermodynamic eigenstate comprises elementary magnon excitations and bound states thereof. The latter show up as compounds of complex Bethe roots (known as the Bethe strings) whose constituent magnon rapidities are equidistantly separated in the imaginary direction (neglecting finite-size deviations that are typically exponentially small in $L$), \begin{equation} \vartheta^{(k)}_{j} = \vartheta^{(k)} + \frac{{\rm i} \eta}{2}(k+1-2j),\qquad j=1,\ldots,k, \end{equation} with the centre $\vartheta^{(k)} \in \mathbb{R}$. \paragraph*{Low-energy scaling limit.} There exists another thermodynamic limit that is distinct from the one described above. To extract the low-energy spectrum of spin fluctuations at long wavelengths, one has to consider a thermodynamic \emph{scaling} limit by taking both $L$ and $M \sim \mathcal{O}(L)$ large, while additionally demanding that all magnons have low momenta scaling as $\mathcal{O}(1/L)$. This way we are left with a finite number $m \sim \mathcal{O}(1)$ of macroscopic bound states. We stress that, in this regime, Bethe's original argument for the string formation is no longer valid. What happens instead is that bound states become less dense and, in general, appreciably deformed. These low-energy eigenstates were first investigated by Sutherland~\cite{Sutherland_1995} and subsequently in Ref.~\cite{Dhar_2000}, where they are dubbed `quantum Bloch walls'. Their classical nature was, however, elucidated only later in Ref.~\cite{Kazakov_2004}, establishing an explicit connection with (nonlinear) spin waves governed by the continuous Landau--Lifshitz ferromagnet with isotropic interaction. The method outlined in Ref.~\cite{Kazakov_2004} is rather general and can be extended to accommodate the interaction anisotropy as well.
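A quick numerical sanity check (illustrative parameters): for a $k$-string with a real centre, the constituent rapidities come in complex-conjugate pairs, so the total momentum and energy obtained by summing the single-magnon expressions of Eq.~\eqref{PandEquantum} are real:

```python
import cmath

eta, k, center = 0.6, 3, 0.9    # anisotropy, string length, string centre (illustrative)

def p(theta):
    return -1j * cmath.log(cmath.sin(theta + 0.5j * eta) / cmath.sin(theta - 0.5j * eta))

def e(theta):
    return cmath.sin(1j * eta)**2 / (cmath.cos(2 * theta) - cmath.cos(1j * eta))

roots = [center + 0.5j * eta * (k + 1 - 2 * j) for j in range(1, k + 1)]
P = sum(p(th) for th in roots)
E = sum(e(th) for th in roots)
print(abs(P.imag), abs(E.imag))   # both vanish: string momentum and energy are real
```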
As shown below, in the presence of the interaction anisotropy $\eta$, the thermodynamic scaling limit that governs the semi-classical spectrum of low-energy eigenstates requires the additional assumption that the anisotropy is weak, $\Delta \gtrsim 1$. To be more specific, after reinstating the lattice spacing $a$ and writing $\ell \equiv L\,a \in \mathcal{O}(1)$, the anisotropy parameter $\eta \sim \mathcal{O}(1/L)$ has to be rescaled as \begin{equation} \eta = \frac{\epsilon\,\ell}{L} + \mathcal{O} \left( \frac{1}{L^2} \right), \end{equation} with the parameter \begin{equation} \epsilon \equiv \sqrt{\delta},\qquad \delta = \frac{2(\Delta - 1)}{a^2} \in \mathcal{O}(1), \end{equation} kept fixed whilst taking the continuum thermodynamic limit, $L\to \infty$ and $a \to 0$. Here the parameter $\ell$ plays the role of a length and later on, in Sec.~\ref{sec:transfermat}, we show that it corresponds precisely to the circumference of the emergent classical phase space. By first expanding the logarithm of $Q_{0}^{\pm}$ we find \begin{equation} \log Q_{0}^{\pm} (\vartheta_{j}) = \log Q_0 (\vartheta_j ) \pm \frac{{\rm i} \eta}{2} \frac{{\rm d}}{{\rm d} \vartheta} \log Q_{0} (\vartheta) \rvert_{\vartheta = \vartheta_{j}} - \frac{\eta^{2}}{8} \frac{{\rm d}^{2}}{{\rm d} \vartheta^{2}} \log Q_{0}(\vartheta ) \rvert_{\vartheta = \vartheta_{j}} + \mathcal{O} \left( \frac{1}{L^{2}} \right) , \label{Q0expansion} \end{equation} where we have assumed that $\vartheta \in \mathcal{O}(1)$. At this stage it is convenient to perform a change of variable by introducing \begin{equation} \mu = \frac{\eta}{\ell} \frac{{\rm d} }{{\rm d} \vartheta} \log Q_0 (\vartheta ) = \frac{\epsilon }{\tan \vartheta}, \qquad \mu_j = \lim_{\vartheta \to \vartheta_j } \mu, \label{spectral_parameter} \end{equation} which then yields \begin{equation} \log Q_{0}^+ (\vartheta_{j}) - \log Q_{0}^{-} (\vartheta_{j}) = {\rm i} \ell \mu_{j} + \mathcal{O} \left( \frac{1}{L^{2}} \right) .
\end{equation} The part involving $Q(\vartheta)$ can be treated analogously. The details are given in Appendix~\ref{app:finitesizeRH}. By combining the two contributions, we finally arrive at the following compact representation in the $\mu$-variable~\footnote{Exact solutions to this algebraic equation for a single mode number can be found in Ref.~\cite{YM_Jacobi}. A similar construction for the isotropic case appeared in Ref.~\cite{Shastry_2001}.} \begin{equation} \mu_{j} = \frac{2 \pi}{\ell} n_{j} - \frac{2}{L} \sum_{k \neq j}^{M} \frac{\mu_{j} \mu_{k} + \delta}{\mu_{j} - \mu_{k}} + \mathcal{O} \left( \frac{1}{L} \right) . \label{asympBEdiscrete} \end{equation} Upon taking the thermodynamic scaling limit, the Bethe roots $\mu_{j}$ condense along certain one-dimensional segments (contours). In general, there are several disjoint contours $\mathcal{C} \equiv \cup_{j} \mathcal{C}_{j}$. Accordingly, the Bethe equations turn into singular integral equations of the form \begin{equation} \frac{\ell \mu}{2} = \pi n_{j} - \ell \Xint-_{\mathcal{C}} {\rm d} \lambda\,\mathcal{K}_{\delta}(\mu,\lambda)\rho (\lambda), \qquad \mu \in \mathcal{C}_{j}, \label{asympBE} \end{equation} with integral kernel \begin{equation} \mathcal{K}_{\delta}(\mu,\lambda) \equiv \frac{\mu \lambda + \delta}{\mu - \lambda} . \end{equation} Eqs.~\eqref{asympBE} are known as the \emph{asymptotic Bethe equations} (ABE). To satisfy the reality constraint, the Bethe roots must appear in complex-conjugate pairs, implying that contours $\mathcal{C}_j$ are symmetric under reflection about the real axis. The leading correction term to Eq.~\eqref{asympBEdiscrete} is of the order $\mathcal{O}(1/L)$ (cf. Appendix~\ref{app:finitesizeRH}), \begin{equation} \frac{1}{L}\pi \rho^\prime (\mu) \ell^{2} (\mu^{2} + \delta)^{2} \coth \left[ \pi \ell (\mu^2 + \delta) \rho (\mu) \right]. 
\label{finite_size_mu} \end{equation} It turns out that this correction to Eqs.~\eqref{asympBE} can only be safely discarded when \begin{equation} \pi \ell (\mu^{2} + \delta) \rho (\mu) \neq {\rm i} \pi n , \qquad n \in \mathbb{Z}, \end{equation} which is satisfied when the density of roots near the real axis is sufficiently low. In contrast, when the density of Bethe roots is high enough, the assumptions underlying the above perturbative expansion are no longer justified and one has to resort to the full quantum Bethe equations non-perturbatively (at least in the region near the real axis where the effect is most pronounced). We shall return to this subtlety in Sec.~\ref{sec:betherootcond} and examine it closely on a specific class of solutions. As it turns out, the effect is responsible for the emergence of certain special features of the solutions, called \emph{condensates}~\cite{Bargheer_2008}. \paragraph*{Riemann--Hilbert problem.} The asymptotic Bethe equations \eqref{asympBE} can be formulated as a Riemann--Hilbert problem. To this end, we define the \emph{spectral resolvent} \begin{equation} G(\mu ) = \ell \int_\mathcal{C} {\rm d} \lambda\,\mathcal{K}_{\delta}(\mu,\lambda) \rho (\lambda), \label{resolvent_def} \end{equation} and introduce \begin{equation} \mathfrak{p} (\mu) = G(\mu) + \frac{\ell \mu }{2}.
\label{quasimomentum_def} \end{equation} At every point $\mu$ along the density contour $\mathcal{C}_{j}$, the function $\mathfrak{p}(\mu)$ experiences a jump discontinuity that is proportional to the density (of Bethe roots) $\rho(\mu)$, \begin{equation} \mathfrak{p} (\mu + {\rm i} 0) - \mathfrak{p} (\mu - {\rm i} 0) = 2 {\rm i} \pi \ell (\mu^2 + \delta) \rho (\mu ), \qquad \mu \in \mathcal{C}_{j} , \label{jump} \end{equation} with $\pm {\rm i} 0$ denoting infinitesimal displacements to either side of the contour.~\footnote{Integration contours are oriented towards the branch point with negative imaginary part.} Each contour $\mathcal{C}_j$ can thus be pictured as the $j$th branch cut (of square-root type) of a two-sheeted Riemann surface, with the end points of $\mathcal{C}_j$ corresponding to branch points. In this view, $\mathfrak{p}(\mu)$ is a double-valued complex function which, apart from the contours $\mathcal{C}_j$, is analytic everywhere in the complex $\mu$-plane. A branch cut of square-root type implies that upon crossing it we end up on the other Riemann sheet, where $\mathfrak{p}(\mu)$ flips its sign. In addition, $\mathfrak{p} (\mu)$ in general picks up an integer multiple of $2\pi$, namely \begin{equation} \mathfrak{p}(\mu + {\rm i} 0) +\mathfrak{p} (\mu - {\rm i} 0) = 2 \pi n_{j}, \qquad \mu \in \mathcal{C}_{j}. \label{RHquasip} \end{equation} Remarkably, $\mathfrak{p}(\mu)$ is precisely the classical quasi-momentum pertaining to the completely integrable classical anisotropic ferromagnet. As explained and demonstrated in the next section, the quasi-momentum encodes the eigenvalues of the classical monodromy matrix obtained from a path-ordered exponential of the classical Lax connection. In Sec.~\ref{sec:quantumcontour} we shall make a direct comparison between the branch cuts and discrete distributions of Bethe roots for finite system sizes.
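The condensation of roots off the real axis can already be prototyped at the level of the discrete equations \eqref{asympBEdiscrete}. For $M=2$ roots sharing a common mode number $n$, direct substitution shows that the equations admit the closed-form conjugate pair $\mu_{\pm} = 2\pi n/\ell \pm {\rm i}\sqrt{((2\pi n/\ell)^{2}+\delta)/(L-1)}$, which a standard root-finder recovers. A numerical sketch (illustrative parameters):

```python
import numpy as np
from scipy.optimize import fsolve

ell, delta, L, n, M = 1.0, 0.25, 64, 1, 2   # illustrative parameters

def residual_c(mu):
    """Complex residual of Eq. (asympBEdiscrete) with a common mode number n."""
    F = np.empty(M, dtype=complex)
    for j in range(M):
        k = np.arange(M) != j
        F[j] = mu[j] - 2*np.pi*n/ell + (2.0/L)*np.sum((mu[j]*mu[k] + delta)/(mu[j] - mu[k]))
    return F

def residual(v):                 # split into real and imaginary parts for fsolve
    mu = v[:M] + 1j*v[M:]
    F = residual_c(mu)
    return np.concatenate([F.real, F.imag])

a = 2*np.pi*n/ell
v = fsolve(residual, np.concatenate([[a, a], [0.1, -0.1]]))
mu = v[:M] + 1j*v[M:]

y_exact = np.sqrt((a**2 + delta)/(L - 1))    # closed form for the M = 2 conjugate pair
print(np.max(np.abs(residual_c(mu))))        # ~0: the equations are solved
print(sorted(mu.imag), [-y_exact, y_exact])  # roots leave the real axis as a conjugate pair
```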
There we find it useful to rewrite the Riemann--Hilbert problem in terms of the variable $\zeta = 1/ \mu $. In Appendix~\ref{app:zetaRH} we provide a dictionary between the two parametrisations used. \subsection{Anisotropic Landau--Lifshitz field theory} \label{sec:transfermat} The task at hand is to derive the classical equation of motion for the low-energy spectrum of the Heisenberg XXZ chain. We show how to achieve this in a systematic fashion. The first step is to infer the spatial component of the classical Lax connection from the semi-classical expansion of the quantum monodromy matrix. This will also enable us to explicitly establish that $\mathfrak{p}(\mu)$ from the previous section is indeed the classical quasi-momentum of the classical \emph{axially anisotropic Landau--Lifshitz model}. Written in terms of an $S^{2}$-valued classical spin field $\vec{\mathcal{S}}(x,t)$, the latter is described by the following partial differential equation \begin{equation} \vec{\mathcal{S}}_{t} = \vec{\mathcal{S}} \times \vec{\mathcal{S}}_{xx} +\vec{\mathcal{S}} \times \mathrm{J} \vec{\mathcal{S}},\qquad \mathrm{J}\, \equiv \mathrm{diag} (0,0,\delta). \label{classicaleom} \end{equation} Here and subsequently we make use of the compact notations $\vec{\mathcal{S}}_{t}\equiv \partial_{t}\vec{\mathcal{S}}(x,t)$, $\vec{\mathcal{S}}_{x}\equiv \partial_{x}\vec{\mathcal{S}}(x,t)$, and similarly for higher partial derivatives. The Hamiltonian that generates Eq.~\eqref{classicaleom} is of the form \begin{equation} \mathcal{H} = \frac{1}{2} \int_{0}^{\ell} {\rm d} x \left[ \vec{\mathcal{S}}_{x}(x)\cdot \vec{\mathcal{S}}_{x}(x) + \delta - \delta (\mathcal{S}^{\rm z} (x))^2 \right].
\label{classical_hamiltonian} \end{equation} \paragraph*{Zero-curvature representation.} Complete integrability of Eq.~\eqref{classicaleom} can be made manifest by recasting it in the form of a \emph{zero-curvature condition} \begin{equation} {\bf U}_{t}(\mu;x,t) - {\bf V}_{x}(\mu;x,t) + [{\bf U}(\mu;x,t) , {\bf V}(\mu;x,t)] = 0. \label{zerocurvature} \end{equation} The latter ensures compatibility of the following \emph{auxiliary linear problem}, \begin{equation} \Psi_{x}(\mu; x,t) = {\bf U}(\mu; x,t) \Psi(\mu; x,t) , \qquad \Psi_{t}(\mu; x,t) = {\bf V}(\mu; x,t) \Psi(\mu; x,t) , \label{ALP_PDE} \end{equation} where \cite{Faddeev_1987} \begin{equation} {\bf U}(\mu; x,t)= \frac{1}{2 {\rm i}} \begin{pmatrix} \mu\,\mathcal{S}^{\rm z} & \sqrt{\mu^2 + \delta}\, \mathcal{S}^- \\ \sqrt{\mu^2 + \delta}\, \mathcal{S}^+ & - \mu\,\mathcal{S}^{\rm z} \end{pmatrix}, \label{uconnection} \end{equation} \begin{equation} \begin{split} {\bf V}(\mu;x,t) = &\ \frac{{\rm i}}{2} \begin{pmatrix} (\mu^2 + \delta) \mathcal{S}^{\rm z} & \mu \sqrt{\mu^2 + \delta}\, \mathcal{S}^- \\ \mu \sqrt{\mu^2 + \delta}\, \mathcal{S}^+ & - (\mu^2 + \delta) \mathcal{S}^{\rm z} \end{pmatrix} \\ &- \frac{1}{2 {\rm i} } \begin{pmatrix} \mu\, \mathcal{J}_{0}^{\rm z} & \sqrt{\mu^2 + \delta} \, \mathcal{J}_{0}^- \\ \sqrt{\mu^2 + \delta} \, \mathcal{J}_{0}^+ & - \mu \, \mathcal{J}_{0}^{\rm z} \end{pmatrix}, \end{split} \label{vconnection} \end{equation} are the spatial and the temporal components of the classical Lax connection, respectively, with \begin{equation} \mathcal{J}_{0}\equiv \vec{\mathcal{S}}_{x}\times \vec{\mathcal{S}}, \end{equation} denoting the spin current density at $\delta = 0$. \paragraph*{Semi-classical limit.} We next show how the above (classical) Lax connection can be retrieved from the Lax operator of the quantum chain. The quantum Lax operator is the elementary building block of commuting transfer matrices which facilitate algebraic diagonalisation of the XXZ Hamiltonian \eqref{quantumhamil}.
The fundamental Lax operator $\mathbf{L}(\vartheta)$ acts on a one-site (physical) Hilbert space $\mathcal{V}_{p}\cong \mathbb{C}^{2}$ of a spin-$1/2$ degree of freedom and an auxiliary space $\mathcal{V}_{a}$ associated to the fundamental representation of the quantum group $\mathcal{U}_{q}(\mathfrak{su}(2))$ (with deformation parameter $q = e^{\eta})$, and depends analytically on the complex (spectral) parameter $\vartheta$. It reads explicitly \begin{equation} {\bf L}(\vartheta) = \frac{1}{\sinh \eta} \begin{pmatrix} \sin (\vartheta + {\rm i} \eta \mathbf{S}^{\rm z}) & {\rm i} \sinh \eta\, \mathbf{S}^- \\ {\rm i} \sinh \eta\, \mathbf{S}^+ & \sin (\vartheta - {\rm i} \eta \mathbf{S}^{\rm z}) \end{pmatrix} , \end{equation} in terms of auxiliary spin generators which satisfy the $q$-deformed commutation relations \begin{equation} [\mathbf{S}^+ , \mathbf{S}^-] = [2 \mathbf{S}^{\rm z}]_{q} , \qquad q^{2 \mathbf{S}^{\rm z} } \mathbf{S}^\pm = q^{\pm 2} \mathbf{S}^\pm q^{2 \mathbf{S}^{\rm z} }, \end{equation} using the standard notation $[x]_q =\sinh(\eta\,x)/\sinh \eta$. We proceed by constructing the fundamental row transfer matrix of the quantum XXZ spin chain, obtained as a partial trace (over the fundamental auxiliary space $\mathcal{V}_{a}\cong \mathbb{C}^{2}$) of the monodromy matrix, i.e. an ordered product of Lax matrices \begin{equation} T(\vartheta) = \mathrm{Tr}_{\mathcal{V}_{a}}\,\mathbf{M}(\vartheta) = \mathrm{Tr}_{\mathcal{V}_{a}} {\bf L}^{(1)} (\vartheta) {\bf L}^{(2)} (\vartheta) \cdots {\bf L}^{(L)} (\vartheta). \end{equation} Here we have adopted the right-to-left ordering convention and used ${\bf L}^{(k)} (\vartheta)$ to denote the embedding into the $k$th factor in an $L$-fold product Hilbert space $\mathcal{H}\cong \mathcal{V}^{\otimes L}_{p}$. By virtue of the quantum Yang--Baxter relation, the transfer matrices $T(\vartheta)$ mutually commute, $[T(\vartheta),T(\vartheta^\prime)]=0$ for all $\vartheta,\vartheta^\prime \in \mathbb{C}$.
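Commutativity of the transfer matrices is easily probed numerically for small $L$: in the fundamental auxiliary representation the $q$-deformed generators reduce to ordinary spin-$1/2$ matrices, and $T(\vartheta)$ can be assembled by blockwise multiplication of Lax operators. A sketch with illustrative parameters (the overall $1/\sinh\eta$ normalisation is immaterial for the check but kept for definiteness):

```python
import numpy as np
from functools import reduce

eta, L = 0.7, 5
sp = np.array([[0, 1], [0, 0]], dtype=complex)   # S^+
sm = np.array([[0, 0], [1, 0]], dtype=complex)   # S^-
I2 = np.eye(2, dtype=complex)

def site_op(op, j):
    mats = [I2] * L; mats[j] = op
    return reduce(np.kron, mats)

def lax_blocks(theta):
    """2x2 auxiliary-space blocks of L(theta) in the fundamental representation."""
    A = np.diag([np.sin(theta + 0.5j*eta), np.sin(theta - 0.5j*eta)])
    D = np.diag([np.sin(theta - 0.5j*eta), np.sin(theta + 0.5j*eta)])
    B, C = 1j*np.sinh(eta)*sm, 1j*np.sinh(eta)*sp
    return [[A/np.sinh(eta), B/np.sinh(eta)], [C/np.sinh(eta), D/np.sinh(eta)]]

def transfer_matrix(theta):
    dim = 2**L
    M = [[np.eye(dim, dtype=complex), np.zeros((dim, dim), dtype=complex)],
         [np.zeros((dim, dim), dtype=complex), np.eye(dim, dtype=complex)]]
    for j in range(L):
        blk = lax_blocks(theta)
        Lj = [[site_op(blk[a][b], j) for b in range(2)] for a in range(2)]
        M = [[M[a][0] @ Lj[0][b] + M[a][1] @ Lj[1][b] for b in range(2)] for a in range(2)]
    return M[0][0] + M[1][1]

T1, T2 = transfer_matrix(0.3), transfer_matrix(1.1 + 0.2j)
print(np.linalg.norm(T1 @ T2 - T2 @ T1))   # ~0: transfer matrices commute
```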
Therefore, commuting transfer matrices serve as generating operators for the local (and nonlocal) conserved charges~\cite{Korepin_1993, Kazakov_2004}. An infinite tower of commuting fused transfer matrices $T_{j}(\vartheta)$ with $(j+1)$-dimensional auxiliary unitary representations of $\mathcal{U}_{q}(\mathfrak{su}(2))$ can be constructed in a similar manner, providing additional quasilocal conservation laws of the XXZ model. While these are of utmost importance for thermodynamic properties at finite energy density (see e.g. Refs.~\cite{Ilievski_2013, Prosen_2014, Ilievski_2015, Ilievski_2016}), they play no role in the semi-classical limit. We have thus far analysed the asymptotic scaling limit $L \to \infty $ at the level of the Bethe equations by parametrising the anisotropy as $\eta = \epsilon\,\ell/L$, so that $\eta \to 0$ ($q \to 1$) as $L \to \infty$. Now we do the same at the level of the transfer matrix, where we are allowed to substitute the $q$-deformed spin generators with the fundamental $\mathfrak{su}(2)$ spins, $\mathbf{S}^\alpha \to \hat{S}^\alpha = \tfrac{1}{2} \hat{\sigma}^\alpha$ for $ \alpha \in \{ {\rm x,y,z} \}$. In this limit, the diagonal elements of the quantum Lax operator expand as \begin{equation} \frac{\sin (\vartheta \mathds{1} + {\rm i} \eta \hat{S}^{\rm z} )}{\sinh \eta} = \frac{\sin \vartheta}{\eta} \mathds{1} + {\rm i} \cos{(\vartheta)}\, \hat{S}^{\rm z} + \mathcal{O} (\eta), \end{equation} whence \begin{equation} {\bf L}(\vartheta) \simeq \frac{\sin \vartheta}{\eta} \left[ \mathds{1} + {\rm i} \eta \begin{pmatrix} \cot{(\vartheta)} \hat{S}^{\rm z} & \csc{(\vartheta)} \hat{S}^- \\ \csc{(\vartheta)} \hat{S}^+ & - \cot{(\vartheta)} \hat{S}^{\rm z} \end{pmatrix} \right].
\end{equation} By reinstating the lattice spacing $a = \ell / L$ and using the spectral parameter \begin{equation} \mu = \frac{\epsilon}{\tan{\vartheta}}, \end{equation} the Lax matrix reads \begin{equation} {\bf L}(\mu) \simeq \frac{1}{a \sqrt{\mu^2 + \delta } } \left[ \mathds{1} + {\rm i} a \begin{pmatrix} \mu \hat{S}^{\rm z} & \sqrt{\mu^2 + \delta} \hat{S}^- \\ \sqrt{\mu^2 + \delta} \hat{S}^+ & - \mu \hat{S}^{\rm z} \end{pmatrix} \right], \end{equation} whereas the asymptotic scaling limit of the associated monodromy matrix ${\bf M}(\mu)$ is given by the following path-ordered product \begin{equation} \begin{split} \frac{{\bf M}(\mu)}{(a \sqrt{\mu^2 + \delta})^L} & \sim \prod_{j=L}^1 \left[ \mathds{1} + {\rm i} a \begin{pmatrix} \mu \hat{S}^{\rm z}_j & \sqrt{\mu^2 + \delta}\, \hat{S}^-_j \\ \sqrt{\mu^2 + \delta}\, \hat{S}^+_j & -\mu \, \hat{S}^{\rm z}_j \end{pmatrix} \right]. \end{split} \end{equation} In the final step we replace the quantum spins $\hat{S}^{\alpha}$ with classical spin variables via $\mathcal{S}^{\alpha}_{j}\equiv \mathcal{S}^{\alpha}(x=j\,a)$, and subsequently take the continuum limit. We thus arrive at the following semi-classical approximation of the quantum monodromy matrix \begin{equation} {\bf M}_{\rm cl}(\mu) \equiv \mathscr{P} \exp \left[ \oint_{0}^{\ell} {\rm d} x\, {\bf U}(\mu;x,t) \right], \label{classical_monodromy} \end{equation} with ${\bf U}(\mu;x,t)$ being the spatial component of the Lax connection introduced earlier in Eq.~\eqref{uconnection}. \section{Finite-gap integration method} \label{sec:finitegap} Having retrieved the classical quasi-momentum $\mathfrak{p}(\mu)$ from the semi-classical expansion of the Heisenberg quantum chain, we proceed to explain how its analytic structure encodes the spectrum of interacting nonlinear phases that characterise classical spin-field configurations.
We shall confine our considerations, as usual, to a class of solutions that involve only a finite number of excited nonlinear modes (corresponding to Riemann surfaces of finite genus), commonly known in the literature as the finite-gap solutions \cite{Dubrovin_1975,Dubrovin_1976,Dubrovin_1981}. We next describe the full programme for performing an algebro-geometric integration of completely integrable nonlinear partial differential equations, which permits one to solve the initial-value problem. The main steps comprise: \begin{enumerate} \item prescribing an appropriately normalised meromorphic differential ${\rm d} \mathfrak{p}$ on a hyperelliptic Riemann surface of finite genus, \item constructing the fundamental matrix solution to the associated auxiliary linear problem, \item identifying the dynamical separated variables and deriving their equations of motion, \item employing the Abel--Jacobi transformation to obtain canonical action-angle variables satisfying a linear evolution law on a Jacobian hypertorus, \item expressing physical fields through the solution to the inverse problem. \end{enumerate} We stress that the outlined finite-gap integration scheme is not particular to the model at hand but rather applies universally. Moreover, the method has previously been developed for the Landau--Lifshitz model in Refs.~\cite{Date_1983, Bikbaev_2014}. Nevertheless, we wish to offer a different formulation here that we find conceptually simpler. Specifically, we shall avoid the conventional use of the Baker--Akhiezer vectors. Our plan is to first discuss some general aspects and then to provide two explicit realisations for $\mathfrak{g}=0$ and $\mathfrak{g}=1$. We do not pay much attention to the construction of the action-angle variables but rather give the explicit time dependence of the physical spin fields in terms of the Riemann theta functions.
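Before entering the formalism, it is useful to have a reference solution of Eq.~\eqref{classicaleom} at hand. The simplest one is the precessing spin wave $\mathcal{S}^{\rm z}=\cos\theta$, $\mathcal{S}^{-}=\sin\theta\,e^{{\rm i}(kx-\omega t)}$ with $\omega = \cos\theta\,(k^{2}+\delta)$, as is readily verified by substitution. The sketch below (illustrative parameters, simple finite differences and RK4 time stepping) integrates Eq.~\eqref{classicaleom} on a periodic grid and compares with the analytic precession:

```python
import numpy as np

delta, ell, N = 0.5, 2*np.pi, 64
x = np.linspace(0, ell, N, endpoint=False)
dx = ell / N

def rhs(S):
    """S_t = S x S_xx + S x J S, J = diag(0, 0, delta), periodic grid; S has shape (3, N)."""
    Sxx = (np.roll(S, -1, axis=1) - 2*S + np.roll(S, 1, axis=1)) / dx**2
    JS = np.vstack([np.zeros((2, N)), delta*S[2]])
    return np.cross(S.T, (Sxx + JS).T).T

def rk4(S, dt, steps):
    for _ in range(steps):
        k1 = rhs(S); k2 = rhs(S + dt/2*k1); k3 = rhs(S + dt/2*k2); k4 = rhs(S + dt*k3)
        S = S + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    return S

theta, k = 0.6, 1.0
def spin_wave(t):
    w = np.cos(theta)*(k**2 + delta)
    phi = k*x - w*t
    return np.vstack([np.sin(theta)*np.cos(phi), np.sin(theta)*np.sin(phi),
                      np.cos(theta)*np.ones(N)])

T, dt = 1.0, 1e-3
S = rk4(spin_wave(0.0), dt, int(T/dt))
print(np.max(np.abs(np.sum(S**2, axis=0) - 1)))   # |S| = 1 is preserved
print(np.max(np.abs(S - spin_wave(T))))           # matches the analytic precession
```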
\subsection{Auxiliary linear problem} \label{subsec:ALP} In this section we describe how to solve the auxiliary linear problem \eqref{ALP_PDE}. By formally integrating along the spatial direction at a fixed time-slice, we have \begin{equation} \boldsymbol{\psi}(\mu;x) = \mathscr{P}\,\exp{\left(\int_{x_{0}}^{x}{\rm d} x^{\prime}\,{\bf U}(\mu;x^{\prime})\right)} \equiv {\bf T}_{\rm cl}(x,x_{0}) . \end{equation} By imposing periodic boundary conditions $\vec{\mathcal{S}}(x+\ell )=\vec{\mathcal{S}}(x)$, we define the monodromy matrix~\eqref{classical_monodromy} \begin{equation} {\bf M}_{\rm cl}(\mu) = {\bf T}_{\rm cl}(x_{0}+\ell ,x_{0}). \end{equation} According to the Bloch theorem, there are two linearly-independent solutions that diagonalise the translation by one period, \begin{equation} \boldsymbol{\psi}(\mu;x+ \ell ) = \boldsymbol{\Lambda}(\mu)\,\boldsymbol{\psi}(\mu;x), \end{equation} where the Bloch multiplier $\boldsymbol{\Lambda}(\mu)={\rm diag}(\Lambda_{+}(\mu),\Lambda_{-}(\mu))$ is given by the eigenvalues of ${\bf M}_{\rm cl}(\mu)$, which (by virtue of the zero-curvature condition \eqref{zerocurvature}) are conserved under time evolution, $\partial \Lambda_{\pm}(\mu)/\partial t = 0$. Since ${\rm Tr}\,{\bf U}(\mu;x,t)=0$, the monodromy matrix ${\bf M}_{\rm cl}(\mu)$ is unimodular and hence its eigenvalues can be parametrised as $\Lambda_{\pm}(\mu)=\exp{(\pm {\rm i} \mathfrak{p}(\mu))}$ in terms of a single complex-valued function $\mathfrak{p}(\mu)$ called the quasi-momentum. Subsequently we make use of the following general parametrisation \begin{equation} {\bf M}_{\rm cl}(\mu) = \cos{(\mathfrak{p}(\mu))}\,\mathds{1} + {\rm i} \sin{(\mathfrak{p}(\mu))}\,\boldsymbol{\Psi}(\mu;x_{0}).
\end{equation} In analogy with the harmonic oscillator presented in the introductory section~\ref{sec:HO}, we introduce the \emph{squared eigenfunctions} through \begin{equation} \boldsymbol{\Psi}(\mu;x_{0}) = \boldsymbol{\psi}(\mu;x_{0})\,\boldsymbol{\sigma}^{\rm z}\,\boldsymbol{\psi}(\mu;x_{0})^{-1}, \end{equation} representing a periodic matrix solution (that is $\boldsymbol{\Psi}(\mu;x+\ell)=\boldsymbol{\Psi}(\mu;x)$) to the \emph{adjoint} linear system \begin{equation} \boldsymbol{\Psi}_{x}(\mu;x,t) = \big[{\bf U}(\mu;x,t),\boldsymbol{\Psi}(\mu;x,t)\big],\qquad \boldsymbol{\Psi}_{t}(\mu;x,t) = \big[{\bf V}(\mu;x,t),\boldsymbol{\Psi}(\mu;x,t)\big]. \label{ALPadjoint} \end{equation} \paragraph*{Uniformisation.} Introducing a uniformised spectral parameter $z$, \begin{equation} \mu = \frac{1}{2}\left(z - \frac{\delta}{z}\right),\qquad \sqrt{\mu^2 + \delta} = \frac{1}{2}\left(z + \frac{\delta }{z}\right), \end{equation} the solution to the above linear problem can be sought in the form of a formal Laurent series \begin{equation} \boldsymbol{\Psi}(z) = \sum_{n=0}^{\infty}\frac{\boldsymbol{\Psi}_{n}}{z^{n}}. \end{equation} Defining the matrices \begin{equation} {\bf S} \equiv \vec{\boldsymbol{\sigma}}\cdot \vec{\mathcal{S}} = \begin{pmatrix} \mathcal{S}^{\rm z} & \mathcal{S}^{-} \\ \mathcal{S}^{+} & -\mathcal{S}^{\rm z} \\ \end{pmatrix},\qquad \widetilde{\bf S} \equiv \boldsymbol{\sigma}^{\rm z}\,{\bf S}\,\boldsymbol{\sigma}^{\rm z}, \end{equation} the first few terms can be written compactly in the form \begin{equation} \boldsymbol{\Psi}_{0} = {\bf S},\quad \boldsymbol{\Psi}_{1} = {\rm i}[{\bf S},{\bf S}_{x}],\quad \boldsymbol{\Psi}_{2} = -{\rm Tr}({\bf S}_{x})^{2}{\bf S} - [{\bf S},[{\bf S},{\bf S}_{xx}]] +\frac{\epsilon^{2}}{4}[{\bf S},[\widetilde{\bf S},{\bf S}]], \end{equation} and so forth.
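The uniformisation can be checked mechanically: the involution $z \mapsto -\delta/z$ leaves $\mu$ invariant while flipping the sign of $\sqrt{\mu^{2}+\delta}$, i.e. it exchanges the two sheets. A short numerical sketch:

```python
import numpy as np

delta = 0.3
rng = np.random.default_rng(1)
z = rng.normal(size=8) + 1j*rng.normal(size=8)

mu = 0.5*(z - delta/z)
rt = 0.5*(z + delta/z)
print(np.max(np.abs(rt**2 - (mu**2 + delta))))    # (z + delta/z)/2 is a branch of sqrt(mu^2 + delta)

zp = -delta/z                                     # the sheet-exchanging involution
print(np.max(np.abs(0.5*(zp - delta/zp) - mu)))   # same point mu ...
print(np.max(np.abs(0.5*(zp + delta/zp) + rt)))   # ... with the square root flipped in sign
```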
\paragraph*{Local conserved charges.} The quasi-momentum can be related to the squared eigenfunctions via~\cite{Faddeev_1987} \begin{equation} \mathfrak{p}(z) = -{\rm i} \int^{\ell}_{0}{\rm d} x\,{\rm Tr}\big[{\bf U}(\boldsymbol{\Psi}+\boldsymbol{\sigma}^{\rm z})^{-1}\big] = -\frac{z \, \ell}{4} + \sum_{n=0}^{\infty}\frac{Q_{n}}{z^{n}}, \label{traceidentity} \end{equation} where the coefficients $Q_{n}$ provide (extensive) local conserved charges for a given solution. The first two take the form (modulo total derivatives) \begin{equation} \begin{split} Q_{0} &= \frac{{\rm i}}{4}\int^{\ell}_{0}{\rm d} x\,\frac{\mathcal{S}^{-}\mathcal{S}^{+}_{x}-\mathcal{S}^{+}\mathcal{S}^{-}_{x}} {1+\mathcal{S}^{\rm z}} = -\frac{\mathcal{P}}{2},\\ Q_{1} &= -\frac{1}{2}\int^{\ell}_{0}{\rm d} x\,\Big[{\vec{\mathcal{S}}}^{2}_{x} +\epsilon^{2}\big(1-(\mathcal{S}^{\rm z})^{2}\big)-\frac{\epsilon^{2}}{2}\Big] = -\mathcal{H} + \frac{\epsilon^{2}\ell}{4}, \end{split} \end{equation} which yields \begin{equation} \mathfrak{p}(\mu) = -\frac{\mu \ell}{2} - \frac{\mathcal{P}}{2} - \frac{\mathcal{H}}{\mu} + \mathcal{O}(\mu^{-2}). \label{quasimomentumlargemu} \end{equation} \subsection{Finite-gap solutions} \label{subsec:finitegap} The quasi-momentum $\mathfrak{p}(\mu)$ contains only the information about conserved quantities encoded in a particular spectral curve. In order to reconstruct the spin field $\vec{\mathcal{S}}(x,t)$ from the spectral data, we also need knowledge of the dynamical degrees of freedom. Below we outline the algebro-geometric procedure to infer the time evolution of the spin field $\vec{\mathcal{S}}(x,t)$ for the class of \emph{finite-gap solutions}.
To achieve this, we parametrise the squared eigenfunction as \begin{equation} \boldsymbol{\Psi}_{\mathfrak{g}}(\mu) = \frac{1}{\sqrt{\mathcal{R}_{2\mathfrak{g} +2}(\mu)}} \begin{pmatrix} a_{\mathfrak{g} +1}(\mu) & \sqrt{\mu^{2} + \delta}\,b_{\mathfrak{g}}(\mu) \\ \sqrt{\mu^{2} + \delta}\,\ol{b}_{\mathfrak{g} }(\mu) & -a_{\mathfrak{g} +1}(\mu) \end{pmatrix} , \end{equation} where the functions $a_{d}(\mu)$, $b_{d}(\mu)$ and $\mathcal{R}_{d}(\mu)$ are \emph{polynomials} in the variable $\mu$ of degree $d$. In particular, $\mathcal{R}_{2\mathfrak{g}+2}(\mu)$ is a polynomial of degree $2 \mathfrak{g} +2$, \begin{equation} \mathcal{R}_{2 \mathfrak{g} +2}(\mu) = \prod_{j=1}^{\mathfrak{g} + 1} (\mu - \mu_{j})(\mu - \ol{\mu}_{j}) = \sum_{k=0}^{2 \mathfrak{g} + 2} (-1)^{k}r_{2 \mathfrak{g} +2-k} \mu^{k}, \end{equation} which specifies a hyperelliptic algebraic curve in $\mathbb{C}^{2}$, \begin{equation} \Sigma:\qquad y^{2}(\mu) = \mathcal{R}_{2 \mathfrak{g} +2}(\mu). \end{equation} The curve is fully characterised by $2( \mathfrak{g} +1)$ branch points $\mu_{j}$ (or, equivalently, the symmetric polynomials $r_{k}$ thereof, with $r_{0}=1$). By the unimodularity constraint, $\det \boldsymbol{\Psi}_{\mathfrak{g}} = -1 $, the functions $a(\mu)$ and $b(\mu)$ are not independent but are instead subject to the algebraic relation \begin{equation} a^{2}_{\mathfrak{g} +1}(\mu) + \left( \mu^{2}+\delta \right) b_{\mathfrak{g}}(\mu)\ol{b}_{\mathfrak{g}}(\mu) = \mathcal{R}_{2\mathfrak{g} +2}(\mu). \label{quadraticconstraint} \end{equation} From the trace identity \eqref{traceidentity} one can readily infer the following compact expression for the quasi-momentum \begin{equation} \mathfrak{p}(\mu) = -\frac{\mu}{2}\int^{\ell}_{0}{\rm d} x\,\mathcal{S}^{\rm z}(x) - \frac{\mu^{2}+\delta }{4}\int^{\ell}_{0}{\rm d} x\,\frac{\mathcal{S}^{-}\ol{b}_{ \mathfrak{g} }(\mu) +\mathcal{S}^{+} b_{\mathfrak{g}}(\mu)}{\sqrt{\mathcal{R}_{2 \mathfrak{g} +2}(\mu)}+ a_{\mathfrak{g} +1}(\mu)}.
\label{pab} \end{equation} This form is compatible with the correct asymptotic expansion about $\mu \to \infty$. A series expansion of $\mathfrak{p}(\mu)$ will thus involve only $(\mathfrak{g} +1)$ functionally independent integrals of motion $Q_{n}$. They can be expressed as certain functions of the coefficients $r_{j}$ of $\mathcal{R}_{2 \mathfrak{g} +2}(\mu)$. Moreover, the total filling fraction $\nu$ with respect to the ferromagnetic vacuum $\mathcal{S}^{\rm z}_{\rm vac}=1$, \begin{equation} \nu \equiv \frac{1}{2 \ell}\int^{\ell}_{0}{\rm d} x\,(1-\mathcal{S}^{\rm z}(x)), \end{equation} can be obtained as \begin{equation} \mathfrak{p}(\mu = \pm {\rm i} \epsilon) = \mp \frac{{\rm i} \epsilon}{2}\int^{\ell}_{0}{\rm d} x\,\mathcal{S}^{\rm z}(x) = \mp \frac{{\rm i} \epsilon \ell}{2} (1- 2\nu) . \label{fillingfractionandp} \end{equation} \subsubsection{Dynamical divisor} \label{subsec:SoV} Off-diagonal elements of the fundamental matrix solution $\boldsymbol{\Psi}_{\mathfrak{g}}(\mu)$, i.e. dynamical zeros of the $b$-function, provide \emph{dynamical} degrees of freedom of finite-gap solutions~\footnote{ Indeed, function $b_{\mathfrak{g}}(\mu)$ can be interpreted as the classical analogue of the ${\bf B}$-operator, namely the off-diagonal element of the quantum monodromy matrix ${\bf B}\equiv {\bf M}_{12}$, whose operator-valued zeros are the quantum separated variables~\cite{Sklyanin_1995}.}, enabling the reconstruction of the time-evolved spin field $\vec{\mathcal{S}} (x,t)$. To satisfy the quadratic constraint \eqref{quadraticconstraint}, we parametrise \begin{equation} b_{\mathfrak{g}}(\mu;x,t) = \mathcal{S}^-(x,t) \prod_{j=1}^{\mathfrak{g}}\big(\mu - \gamma_{j} (x,t)\big), \label{b-function} \end{equation} where the leading coefficient has been fixed to match the asymptotics at $\mu \to \infty$. The set $\mathcal{D} = \{\gamma_{j}\}_{j=1}^{\mathfrak{g}}$ is known as the \emph{dynamical divisor} of the Riemann surface $\Sigma$.
Since $\mathcal{S}^{-}(x,t)$ should also be regarded as an independent dynamical variable, we have in total $(\mathfrak{g} + 1)$ dynamical degrees of freedom. This number exactly matches the number of action variables and corresponds to the number of forbidden zones in the finite-gap spectrum. \paragraph*{Dubrovin equations.} Dynamics of variables $\gamma_{j}=\gamma_{j}(x,t)$ takes place on the Riemann surface $\Sigma$. Below we employ an extended dynamical divisor $\mathcal{D}_{\rm ext}$, obtained by adjoining two extra non-dynamical variables $\gamma_{\pm}\equiv \pm {\rm i} \epsilon$ which we label by $\gamma_{\mathfrak{g} + 1} $ and $\gamma_{\mathfrak{g} +2}$, respectively. Using the Lagrange interpolation formula, we can then restore the function $a_{\mathfrak{g} +1}(\mu)$ from Eq.~\eqref{quadraticconstraint}, yielding \begin{equation} a_{\mathfrak{g} +1}(\mu;x,t) = \sum_{j=1}^{\mathfrak{g} + 2} \sqrt{\mathcal{R}_{2 \mathfrak{g} +2}(\gamma_{j}(x,t))} \prod_{k\neq j}^{\mathfrak{g} + 2} \frac{\mu - \gamma_{k}(x,t)}{\gamma_{j}(x,t)-\gamma_{k}(x,t)}. \label{a-function} \end{equation} The equations of motion~\eqref{ALPadjoint} in terms of $\gamma$-variables from $\mathcal{D}_{\rm ext}$ take the form \begin{equation} \begin{split} \partial_{x}\gamma_{j}(x,t) &= {\rm i} \sqrt{\mathcal{R}_{2 \mathfrak{g} +2}(\gamma_{j}(x,t))}\prod_{k\neq j}^{\mathfrak{g}} (\gamma_{j}(x,t)-\gamma_{k}(x,t))^{-1},\\ \partial_{t}\gamma_{j}(x,t) &= {\rm i}\,\Gamma(x,t)\,\sqrt{\mathcal{R}_{2 \mathfrak{g} +2}(\gamma_{j}(x,t))}\prod_{k\neq j}^{\mathfrak{g}} (\gamma_{j}(x,t)-\gamma_{k}(x,t))^{-1}, \label{Dubrovint} \end{split} \end{equation} with \begin{equation} \Gamma(x,t) \equiv \frac{r_{1}}{2} - \sum_{k\neq j}^{\mathfrak{g}} \gamma_{k}(x,t),\qquad r_{1}=\sum_{j=1}^{\mathfrak{g} +1}(\mu_{j}+\ol{\mu}_{j}).
\end{equation} We have obtained a system of differential equations that governs the motion of the dynamical divisor of a Riemann surface, commonly known in the literature under the name of \emph{Dubrovin equations} \cite{Dubrovin_1975,Its_1975,Dubrovin_1981}. The form of these equations is universal, namely they do not depend on the model under consideration. What is model-specific instead are the reconstruction formulae, i.e. how $\gamma$-variables relate to physical fields. A spin field whose target space is a $2$-sphere can be described by two degrees of freedom, e.g. the $\mathcal{S}^{\rm z}$ and $\mathcal{S}^{-}$ components. These can be restored from Eqs.~\eqref{ALPadjoint}, which yield \begin{equation} \mathcal{S}^{\rm z}(x,t) = \sum_{j=1}^{\mathfrak{g} + 2} \frac{ \sqrt{\mathcal{R}_{2 \mathfrak{g} +2}(\gamma_{j}(x,t))}}{\prod_{k\neq j}^{\mathfrak{g} + 2} (\gamma_{j}(x,t)-\gamma_{k}(x,t)) } , \label{Sz_resotre} \end{equation} and \begin{equation} {\rm i} \frac{\mathcal{S}^{-}_{x}(x,t)}{\mathcal{S}^{-}(x,t)} = \sum_{j=1}^{\mathfrak{g} + 2} \frac{ \gamma_{j}\sqrt{\mathcal{R}_{2 \mathfrak{g} +2}(\gamma_{j}(x,t))}}{ \prod_{k\neq j}^{\mathfrak{g} + 2} (\gamma_{j}(x,t)-\gamma_{k}(x,t)) }, \label{Sm_restore1} \end{equation} \begin{equation} {\rm i} \frac{\mathcal{S}^{-}_{t}(x,t)}{\mathcal{S}^{-}(x,t)} = {\rm i} \frac{r_{1}}{2}\frac{\mathcal{S}^{-}_{x}(x,t)}{\mathcal{S}^{-}(x,t)} -\sum_{j=1}^{\mathfrak{g} + 2} \frac{ \gamma_{j}\sqrt{\mathcal{R}_{2 \mathfrak{g} +2}(\gamma_{j})}\left(\sum_{k\neq j}^{\mathfrak{g} + 2} \gamma_{k}(x,t)\right)}{\prod_{k\neq j}^{\mathfrak{g} + 2} (\gamma_{j}(x,t)-\gamma_{k}(x,t)) }. \label{Sm_restore2} \end{equation} \paragraph*{Abel--Jacobi transformation.} The Dubrovin equations~\eqref{Dubrovint} allow for exact integration. To this end, we define the standard basis of $2 \mathfrak{g}$ closed cycles on $\Sigma$.
They further split into $\mathcal{A}$-cycles and their conjugate $\mathcal{B}$-cycles according to the following prescription: the $\mathcal{A}_{j}$-cycle encircles the $j$th branch cut $\mathcal{C}_{j}$ on the upper Riemann sheet of $\Sigma$, whereas $\mathcal{B}_{jk}$ denotes a cycle that passes through cut $\mathcal{C}_{j}$ on the upper sheet and closes back to itself through $\mathcal{C}_{k}$, as shown in Fig.~\ref{fig:Riemann_surface}. The Dubrovin equations for the dynamical divisor on $\Sigma$ can be integrated with the aid of the Abel--Jacobi transformation, \begin{equation} \varphi_{j} = 2\pi\sum_{k=1}^{\mathfrak{g}}\int^{\gamma_{k}(x,t)}_{\gamma_{k}(0,0)}\omega_{j}, \label{AbelJacobi} \end{equation} where $\omega_{j}$ form the basis of holomorphic differentials of the Riemann surface. The above mapping provides a variable transformation from $\gamma$-variables to $\varphi$-variables of the angle-type, $\{\gamma_{j}(x,t)\} \mapsto \{\varphi_{j}(x,t)\}$. The holomorphic differentials $\omega_{j}$ are formally of the form \begin{equation} \omega_{j} = \sum_{k=1}^{\mathfrak{g}}\frac{C_{j k}\mu^{\mathfrak{g} -k}{\rm d} \mu}{\sqrt{\mathcal{R}_{2 \mathfrak{g} +2}(\mu)}},\qquad j=1,2,\ldots,\mathfrak{g}. \end{equation} Coefficients $C_{jk}$ are determined by requiring canonical normalisation with respect to $\mathcal{A}$-cycles \begin{equation} \oint_{\mathcal{A}_{j}}\omega_{k} = \delta_{jk}. \end{equation} Taking into account Eq.~\eqref{AbelJacobi}, the equations of motion of the dynamical divisor~\eqref{Dubrovint} linearise.
Indeed, making use of the Lagrange interpolation formulae, one can find \begin{equation} \partial_{x}\varphi_{j}(x,t) = 2\pi {\rm i}\,C_{j1},\qquad \partial_{t}\varphi_{j}(x,t) = 2\pi {\rm i} \left[ \frac{r_{1}}{2}C_{j1}+C_{j2} \right], \end{equation} implying \begin{equation} \varphi_{j}(x,t) = k_{j}\,x + w_{j}\,t + \varphi_{j}(0,0), \end{equation} with wave numbers $k_{j}$ and frequencies $w_{j}$ reading \begin{equation} k_{j} = 2\pi {\rm i} C_{j1},\qquad w_{j} = 2\pi {\rm i} \Big[\frac{r_{1}}{2}C_{j1}+C_{j2}\Big] . \label{Cj1Cj2} \end{equation} Phases $\varphi_{j}(x,t)$ satisfy linear evolution in both space and time, executing a quasiperiodic motion on a Liouville torus $\mathbb{T}^{2 \mathfrak{g}}$ of real dimension $2 \mathfrak{g}$. For any physically admissible initial condition, $\gamma$-variables evolve along closed trajectories which are homotopically equivalent to the $\mathcal{A}$-cycles $\mathcal{A}_{j}$, such that there is precisely one variable per cut $\mathcal{C}_{j}$. \begin{figure} \centering \includegraphics[width=.85\linewidth]{figures/Riemann_surface.pdf} \caption{Cycles on a two-sheeted Riemann surface, illustrated on an example of two cuts $\mathcal{C}_{j}$ and $\mathcal{C}_{k}$. Upon tunnelling to the other Riemann sheet, the integration orientation is reversed~\cite{Kazakov_2004}.} \label{fig:Riemann_surface} \end{figure} \paragraph*{Periodicity constraints.} A family of periodic (i.e. closed) solutions $\vec{\mathcal{S}}(x)=\vec{\mathcal{S}}(x+\ell)$ is further distinguished by the periodicity of angles, $\varphi_{j}(x+\ell,t)=\varphi_{j}(x,t)+2\pi\,n_{j}$ where integers $n_{j} \in \mathbb{Z}$ specify the mode numbers assigned to each branch cut. Similarly, invariance under translation by a temporal period $T$ implies quantisation of frequencies $w_{j}$. Under these extra conditions, the coefficients $C_{j1}$ and $(r_{1}/2)C_{j1}+C_{j2}$ become quantised, imposing a non-trivial restriction on the admissible algebraic curves.
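The normalisation coefficients $C_{jk}$, and with them the wave numbers and frequencies~\eqref{Cj1Cj2}, are built from period integrals over the curve. As a concrete genus-one illustration, for a hypothetical curve with purely imaginary branch points $\pm{\rm i}\xi_{1}$ and $\pm{\rm i}\xi_{2}$ (of the kind that reappears for the bion solutions below), the substitution $\mu={\rm i}t$ reduces the basic period integrand ${\rm d}\mu/\sqrt{\mathcal{R}_{4}(\mu)}$ along the cut to a real integral expressible through a complete elliptic integral. A minimal numerical sketch of this standard reduction:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk

# Hypothetical genus-one curve R_4(mu) = (mu^2 + xi1^2)(mu^2 + xi2^2),
# with a branch cut running between i*xi2 and i*xi1.
xi1, xi2 = 1.3, 0.7

# On the cut, mu = i*t with xi2 < t < xi1, and
# |R_4(i*t)| = (xi1^2 - t^2)*(t^2 - xi2^2).
integrand = lambda t: 1.0 / np.sqrt((xi1**2 - t**2) * (t**2 - xi2**2))
period, _ = quad(integrand, xi2, xi1, limit=200)

# Standard reduction: the integral equals K(m)/xi1 with parameter
# m = 1 - xi2^2/xi1^2 (scipy's ellipk uses the parameter convention m = k^2).
assert abs(period - ellipk(1.0 - xi2**2 / xi1**2) / xi1) < 1e-6
```

The $\mathcal{A}$-period proper differs from this cut integral only by orientation and a factor of two from the two sheets; it is integrals of this type that fix the coefficients $C_{jk}$ through the normalisation condition above.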
\paragraph*{Extra degree of freedom.} Now we come to an important subtlety in the above construction: the dynamical separated variables forming the divisor $\mathcal{D}\equiv \{\gamma_{j}\}_{j=1}^{\mathfrak{g}}$ \emph{do not} provide the complete set of dynamical degrees of freedom for the class of finite-gap solutions. It turns out there is an additional degree of freedom that is not amongst the canonical angle variables~\eqref{AbelJacobi} obtained from the dynamical zeros $\gamma_{j}$ of the $b$-function. This fact becomes apparent already in the simplest case of genus-zero solutions which, as we demonstrate below, describe an evolution of the spin field governed by a single angle variable. More generally, for algebraic curves of genus $\mathfrak{g}$, there are thus $(\mathfrak{g} +1)$ phases in total (that is precisely the number of cuts, i.e. half the number of branch points) and accordingly the linearised dynamics takes place on a Liouville torus of real dimension $2(\mathfrak{g} +1)$. Substituting the form of the $b$-function~\eqref{b-function} into Eq.~\eqref{pab}, and using the restoration formulae \eqref{Sz_resotre}, \eqref{Sm_restore1}, \eqref{Sm_restore2}, we deduce the following identity \begin{equation} \int^{\ell}_{0}{\rm d} x \frac{\ol{b}(\mu;x,t)\mathcal{S}^{-}(x,t)}{\sqrt{\mathcal{R}(\mu)} +a(\mu)} = \int^{\ell}_{0}{\rm d} x \frac{b(\mu;x,t)\mathcal{S}^{+}(x,t)}{\sqrt{\mathcal{R}(\mu)}+a(\mu)} .
\end{equation} Introducing an auxiliary function \begin{equation} r(\mu,\gamma) = \frac{\sqrt{\mathcal{R}(\mu)}-\sqrt{\mathcal{R}(\gamma)}}{\mu-\gamma} - \frac{1}{2}\sum_{\sigma\in \{+,-\}} \frac{\sqrt{\mathcal{R}(\sigma {\rm i} \epsilon)}-\sqrt{\mathcal{R}(\gamma)}}{\sigma {\rm i} \epsilon - \gamma} , \end{equation} we can present the quasi-momentum as a period integral of the form \begin{equation} \mathfrak{p}(\mu) = -\frac{1}{2}\int^{\ell}_{0}{\rm d} x\,\sum_{j=1}^{\mathfrak{g}} \frac{r(\mu,\gamma_{j})}{\prod_{k\neq j}(\gamma_{j}-\gamma_{k}) } . \label{pr} \end{equation} When $\mathfrak{g}=0$, we put Eqs.~\eqref{b-function} and \eqref{a-function} into Eq.~\eqref{pab}, which yields \begin{equation} \mathfrak{p}(\mu) = -\frac{\ell}{2}\left( \sqrt{\mathcal{R}_2 (\mu)} -\frac{1}{2} \Big(\sqrt{\mathcal{R}_2 ({\rm i} \epsilon)} + \sqrt{\mathcal{R}_2 (-{\rm i} \epsilon)} \Big)\right) . \label{quasimomentum1cut} \end{equation} Comparing Eq.~\eqref{pr} with Eq.~\eqref{Sm_restore1}, we find that \begin{equation} n_{\mathfrak{g} + 1} \equiv \frac{1}{2\pi {\rm i}}\int^{\ell}_{0}{\rm d} x\,\partial_{x}\log \mathcal{S}^{-} \end{equation} provides an extra mode number $n_{\mathfrak{g} + 1}$. Using the fact that $\gamma$-cycles are topologically equivalent to $\mathcal{A}$-cycles that encircle the cuts of $\Sigma$, we can write \begin{equation} \mathfrak{p}(\mu) = \frac{{\rm i}}{2}\sum_{k=1}^{\mathfrak{g}} n_{k}\oint_{\mathcal{A}_{k}}\frac{\sqrt{\mathcal{R}(\mu)} -\sqrt{\mathcal{R}(\gamma)}}{\mu-\gamma}\frac{{\rm d} \gamma}{\sqrt{\mathcal{R}(\gamma)}} - \pi n_{\mathfrak{g} +1} , \label{psolutionRH} \end{equation} where $n_k$ is the mode number of the $k$th cut. On the Riemann surface $\Sigma$, the quasi-momentum $\mathfrak{p}(\mu)$ is a single-valued function.
Alternatively, it can be understood as a double-valued function on the complex spectral plane with $\mu \in \mathbb{C}$, experiencing jumps between different Riemann sheets upon traversing the branch cuts, across which $\sqrt{\mathcal{R}(\mu)}$ flips sign. The discontinuity condition for each cut reads \begin{equation} \mathfrak{p}(\mu + {\rm i} 0) + \mathfrak{p}(\mu - {\rm i} 0) = 2\pi(n_{k}-n_{\mathfrak{g} + 1}),\qquad \mu \in \mathcal{C}_{k}, \label{RHcuts} \end{equation} where infinitesimal shifts $\pm {\rm i} 0$ pertain to points on either side of the cut $\mathcal{C}_{k}$. The above equations are known as the \emph{Riemann--Hilbert problem}. An additional mode number $n_{\mathfrak{g} +1}$ can be inferred from the asymptotic condition at $\mu = \infty_{\pm}$, reading \begin{equation} \mathfrak{p}(\infty_{+}) + \mathfrak{p}(\infty_{-}) = -2\pi n_{\mathfrak{g} + 1}. \label{RHasymptotics} \end{equation} It proves convenient to introduce a new basis of open $\mathcal{B}$-cycles $\mathcal{B}_{k}$ (for $k =1,2,\ldots,\mathfrak{g} +1$), connecting infinity $\infty_{+}$ on the upper sheet with $\infty_{-}$ on the lower sheet by passing through the cut $\mathcal{C}_{k}$, as demonstrated in Fig.~\ref{fig:Riemann_surface}. Eqs.~\eqref{RHcuts} and \eqref{RHasymptotics} can then be compactly stated as \begin{equation} \oint_{\mathcal{B}_{k}}{\rm d} \mathfrak{p}(\mu) = 2\pi n_{k},\qquad k=1,2,\ldots, \mathfrak{g} +1. \label{RHcompact} \end{equation} \paragraph*{An alternative approach.} Although Eq.~\eqref{psolutionRH} provides a solution to the Riemann--Hilbert problem~\eqref{RHcuts}, the behaviour at $\mu \to \infty$ does not comply with the form of Eq.~\eqref{quasimomentumlargemu}. An extra requirement on the spectral curve is needed, namely demanding integrality of the wave numbers~\eqref{Cj1Cj2}.
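The jump condition \eqref{RHcuts} can be made concrete numerically in the simplest genus-zero setting, anticipating the one-cut quasi-momentum $\mathfrak{p}_{1}(\mu) = -\pi n - \tfrac{1}{2}\sqrt{\mathcal{R}_{2}(\mu)}$ derived in the next section. The only subtlety is that $\sqrt{\mathcal{R}_{2}}$ must be realised with a single finite cut between the conjugate branch points; the principal square root of the quadratic itself places its cuts elsewhere. A sketch with hypothetical branch points:

```python
import cmath

n = 2                      # hypothetical mode number n_{g+1}
mu1 = 1.0 + 0.8j           # hypothetical branch point; cut runs from conj(mu1) to mu1
a, b = mu1.real, mu1.imag

def sqrtR2(mu):
    # sqrt((mu - mu1)(mu - conj(mu1))) = (mu - a)*sqrt(1 + b^2/(mu - a)^2):
    # this realisation has a single finite vertical branch cut [conj(mu1), mu1]
    return (mu - a) * cmath.sqrt(1.0 + b**2 / (mu - a)**2)

def p(mu):
    # genus-zero quasi-momentum, p_1(mu) = -pi*n - sqrt(R_2(mu))/2
    return -cmath.pi * n - 0.5 * sqrtR2(mu)

# boundary values on either side of an interior point of the cut
mu_c = a + 0.4j * b
h = 1e-7
jump = p(mu_c + h) + p(mu_c - h)

# the jump condition with n_1 = 0 and n_{g+1} = n gives the constant -2*pi*n
assert abs(jump - (-2.0 * cmath.pi * n)) < 1e-4
```

The sum of boundary values is indeed constant along the cut, as required, while far from the cut the two determinations of $\sqrt{\mathcal{R}_{2}}$ reproduce the two sheets entering the asymptotic condition.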
One route to achieve this is to construct a quasi-momentum $\mathfrak{p} (\mu)$ that satisfies the required asymptotics at $\mu \to \infty$ by considering a meromorphic~(second-kind) differential \begin{equation} {\rm d} \mathfrak{p} = - \frac{\ell}{2} \frac{\mathcal{P}_{2 \mathfrak{g} +2}(\mu) {\rm d} \mu }{\sqrt{\mathcal{R}(\mu)}},\qquad \mathcal{P}_{2 \mathfrak{g} +2}(\mu) = \sum_{j=0}^{\mathfrak{g} +1} c_j \mu^{j}. \label{derivativep} \end{equation} Then $\mathfrak{p} (\mu)$ is unambiguously determined by demanding analyticity and specifying its asymptotics; the latter readily fixes the highest two coefficients of $\mathcal{P}_{2 \mathfrak{g} +2}(\mu)$, namely \begin{equation} c_{ \mathfrak{g} + 1} = 1, \qquad c_{\mathfrak{g}} = -\frac{r_{1}}{2} = -\frac{1}{2} \sum_{j=1}^{2 \mathfrak{g} +2} \mu_j, \label{two_coef} \end{equation} whereas the remaining $\mathfrak{g}$ coefficients are fixed by \begin{equation} \oint_{\mathcal{A}_{j} }{\rm d} \mathfrak{p} (\mu) = 0 , \quad j = 1 ,2 , \ldots, \mathfrak{g} . \end{equation} Finally, the spectral curve is uniquely fixed by the condition~\eqref{RHcompact}. \section{Examples of finite-gap solutions} \label{sec:examples} \subsection{One-cut rational solution} \label{subsec:example1-cut} The simplest solutions of the Riemann--Hilbert problem~\eqref{RHquasip} belong to algebraic curves of genus zero (Riemann surfaces with a single branch cut). These correspond to quadratic polynomials of the form \begin{equation} \mathcal{R}_{2}(\mu) = (\mu - \mu_1 ) (\mu - \bar{\mu}_1), \end{equation} where the branch points $\mu_{1} , \bar{\mu}_{1} \in \mathbb{C}$ are conjugate to one another in order to obey the reality condition. This leads to solutions that involve two real degrees of freedom. In what follows, we set the classical period to $\ell = 1$. With this choice, the admissible values of the wave numbers are $k = 2\pi\,n$ with $n$ integer.
As a consequence, the branch points cannot be chosen arbitrarily but get ``quantised'' as well. The quasi-momentum $\mathfrak{p} (\mu)$ is given by Eq.~\eqref{quasimomentum1cut}. To satisfy the ``quantisation condition'' and to obtain the prescribed filling fraction~\eqref{fillingfractionandp}, we have \begin{equation} \frac{1}{2} \left( \sqrt{\mathcal{R}_2 ({\rm i} \epsilon)} + \sqrt{\mathcal{R}_2 (-{\rm i} \epsilon)} \right) = 2 \pi n , \quad - \frac{1}{4} \left( \sqrt{\mathcal{R}_2 ({\rm i} \epsilon)} - \sqrt{\mathcal{R}_2 (-{\rm i} \epsilon)} \right) = - \frac{{\rm i} \epsilon}{2} (1-2 \nu ), \end{equation} allowing us to parametrise the branch points as \begin{equation} \mu_1 + \bar{\mu}_1 = 4 \pi n (1 - 2 \nu ) , \quad |\mu_1 |^2 = 4 \pi^2 n^2 + 4 \delta \nu(1 - \nu) . \end{equation} The algebraic curve can be expressed in terms of mode number $n$ and filling fraction $\nu$, reading \begin{equation} y^2 (\mu ) = \mathcal{R}_2 (\mu ) = \mu^2 - 4 \pi n (1 - 2 \nu ) \mu + 4 \pi^2 n^2 + 4 \delta \nu (1 - \nu), \end{equation} whereas the associated quasi-momentum, cf. Eq.~\eqref{quasimomentum1cut}, can be written as \begin{equation} \begin{split} \mathfrak{p}_{1}(\mu) & = - \pi n - \frac{1}{2} \sqrt{ (\mu - \mu_1 ) (\mu - \bar{\mu}_1 )} \\ & = - \pi n - \frac{1}{2} \sqrt{\mu^2 - 4 \pi n (1 - 2 \nu ) \mu + 4 \pi^2 n^2 + 4 \delta \nu (1 - \nu) } . \end{split} \label{quasip1cutexplicit} \end{equation} On the other hand, the quasi-momentum can also be constructed as an integral of a suitable meromorphic differential on the rational curve (cf. Eq.~\eqref{derivativep}). In the $\mathfrak{g}=0$ case, such a differential is described by coefficients $c_0$ and $c_1$, \begin{equation} \frac{{\rm d} \mathfrak{p}_{1}(\mu)}{ {\rm d} \mu} = -\frac{1}{2}\frac{c_{1}\mu + c_{0}}{\sqrt{\mathcal{R}_{2}(\mu)}} . \end{equation} In this case, both coefficients are readily fixed by the asymptotics, see Eq.~\eqref{two_coef}, yielding $c_{1}=1$ and $c_{0}=-(\mu_{1}+\ol{\mu}_{1})/2$.
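The two quantisation conditions above can be checked directly against this parametrisation of the branch points: with principal square roots, the two combinations of $\sqrt{\mathcal{R}_{2}(\pm{\rm i}\epsilon)}$ reproduce $2\pi n$ and $\epsilon|1-2\nu|$ (the latter up to a branch-dependent sign). A numerical sketch with hypothetical values of $n$ and $\nu$, assuming the identification $\delta=\epsilon^{2}$:

```python
import cmath

n, nu = 2, 0.3           # hypothetical mode number and filling fraction
eps = 1.0
delta = eps**2           # assuming delta = eps^2 relates the two anisotropy symbols

def R2(mu):
    # one-cut curve expressed through the parametrised branch points
    return mu**2 - 4*cmath.pi*n*(1 - 2*nu)*mu + 4*cmath.pi**2*n**2 + 4*delta*nu*(1 - nu)

sp = cmath.sqrt(R2(1j * eps))    # sqrt(R_2(+i eps)), principal branch
sm = cmath.sqrt(R2(-1j * eps))   # sqrt(R_2(-i eps)), principal branch

# R_2 has real coefficients, so the two values are complex conjugates
assert abs(sp - sm.conjugate()) < 1e-12

# quantisation condition: (sp + sm)/2 = 2*pi*n
assert abs(0.5 * (sp + sm) - 2*cmath.pi*n) < 1e-10

# filling-fraction condition, up to the branch-dependent sign:
# |sp - sm|/2 = eps*|1 - 2*nu|
assert abs(0.5 * abs(sp - sm) - eps*abs(1 - 2*nu)) < 1e-10
```

In fact $\sqrt{\mathcal{R}_{2}(\pm{\rm i}\epsilon)} = 2\pi n \mp {\rm i}\epsilon(1-2\nu)$ on this branch, which is how the parametrisation of $\mu_1+\bar{\mu}_1$ and $|\mu_1|^2$ arises in the first place.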
By taking an integral of ${\rm d} \mathfrak{p}_{1}(\mu)$, we correctly recover the quasi-momentum~\eqref{quasip1cutexplicit}. Notice that presently (i.e. in the zero-genus case) there is no canonical $\mathcal{A}$-cycle. There is a single wave number which can be retrieved by evaluating the `open' $\mathcal{B}$-cycle \begin{equation} k = \oint_{\mathcal{B}_1} {\rm d} \mathfrak{p}_{1} = - \left( \mathfrak{p}_{1} (\infty_+) + \mathfrak{p}_{1} (\infty_-) \right) = 2 \pi n. \end{equation} Here $n \in \mathbb{Z}$ is the mode number of the solution. The coefficients of the asymptotic expansion of $\mathfrak{p}_{1}$ (cf. Eq.~\eqref{quasimomentumlargemu}) \begin{equation} \mu \to \infty:\qquad \mathfrak{p}_1 (\mu ) \sim -\frac{\mu }{2} - \frac{\mathcal{P}}{2} - \frac{\mathcal{H} }{\mu } + \mathcal{O} \left( \mu^{-2} \right), \end{equation} provide phase-space averages of local charges evaluated on a particular one-cut solution. The first two coefficients correspond to total momentum and energy, reading explicitly \begin{equation} \mathcal{P} = 2 \pi n \nu , \qquad \mathcal{H} = \frac{1}{2} (4 \pi^2 n^2 + \delta ) \nu(1 - \nu). \label{expansioncharge1cut} \end{equation} The knowledge of the one-cut quasi-momentum allows us to express the dynamics of the spin field $\vec{\mathcal{S}} (x,t)$ in terms of canonical separated $\gamma$-variables. However, since we deal with an algebraic curve of genus zero, we recall that there is no genuine $\gamma$-variable present. Instead, the only dynamical degree of freedom is the transversal spin component $\mathcal{S}^{-}(x,t)$. We first use Eq.~\eqref{Sz_resotre} to deduce the longitudinal component \begin{equation} \mathcal{S}^{\rm z} (x,t) = \frac{\sqrt{\mathcal{R}_2 ( \gamma_1 ) }}{(\gamma_1 - \gamma_2 )} + \frac{\sqrt{\mathcal{R}_2 ( \gamma_2 ) }}{(\gamma_2 - \gamma_1 )} = 1 - 2 \nu , \end{equation} where we have made use of the frozen (i.e.
non-dynamical) $\gamma$-variables $\gamma_1 = {\rm i} \epsilon$ and $\gamma_2 = - {\rm i} \epsilon$. In the next step, we can solve Eqs.~\eqref{Sm_restore1} and \eqref{Sm_restore2} to find the transversal component of the spin field, \begin{equation} {\rm i} \partial_x \log \mathcal{S}^- = 2 \pi n , \qquad {\rm i} \left( \partial_t \log \mathcal{S}^- - \pi n (1-2 \nu ) \partial_x \log \mathcal{S}^- \right) = \delta (1-2 \nu) . \end{equation} By imposing the normalisation constraint $|\vec{\mathcal{S}} (x,t)| = 1$, we finally arrive at the following general form of the one-cut solution \begin{equation} \mathcal{S}^{\rm z} (x , t) = 1 - 2 \nu = \cos \theta_0 , \qquad \mathcal{S}^{\pm} (x , t) = \sin \theta_0 \exp \left[\pm {\rm i} (k x - w t) \right], \label{classical1cutsol} \end{equation} where the wave number and frequency are \begin{equation} k = 2 \pi n , \quad w = (4 \pi^2 n^2 + \delta ) \cos \theta_0 . \end{equation} The momentum and energy can alternatively be computed by direct integration, cf. Eq.~\eqref{quasimomentumlargemu}, \begin{equation} \begin{split} & \mathcal{P} = \frac{1}{2 {\rm i}} \int^{1}_{0} {\rm d} x\, \frac{\mathcal{S}^- \mathcal{S}^+_x - \mathcal{S}^+ \mathcal{S}^-_x}{1+ \mathcal{S}^{\rm z} } = 2 \pi n \nu ,\\ & \mathcal{H} = \frac{1}{2} \int^{1}_{0} {\rm d} x\, \left[\vec{\mathcal{S}}_x \cdot \vec{\mathcal{S}}_x + \delta (1- (\mathcal{S}^{\rm z})^2 )\right] = \frac{1}{2} (4 \pi^2 n^2 + \delta) \nu(1 - \nu), \end{split} \end{equation} in agreement with Eq.~\eqref{expansioncharge1cut}.
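The explicit profile \eqref{classical1cutsol} admits a few elementary numerical sanity checks: the spin field is unit-normalised pointwise, $\mathcal{S}^{\rm z}$ is constant, and the transversal component winds exactly $n$ times over the spatial period $\ell=1$. A minimal sketch with hypothetical parameters:

```python
import numpy as np

n, nu = 3, 0.25                      # hypothetical mode number and filling fraction
theta0 = np.arccos(1 - 2*nu)
k = 2*np.pi*n

x = np.linspace(0.0, 1.0, 2001)      # one spatial period, ell = 1, taken at t = 0
Sz = np.full_like(x, np.cos(theta0))
Sm = np.sin(theta0) * np.exp(-1j * k * x)   # S^- = sin(theta0) e^{-i k x}
Sp = Sm.conjugate()

# unit normalisation: S^+ S^- + (S^z)^2 = 1 pointwise
assert np.allclose(Sp * Sm + Sz**2, 1.0)

# winding of the transversal phase over one period equals the mode number n
phase = np.unwrap(np.angle(Sp))
winding = (phase[-1] - phase[0]) / (2*np.pi)
assert abs(winding - n) < 1e-8
```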
\subsubsection{One-cut solution from asymptotic Bethe ansatz} \begin{figure} \centering \begin{minipage}{.49\linewidth} \includegraphics[width=\linewidth]{figures/momentum_1-cut_filling_fraction_n_1.pdf} \caption*{\centering(a)} \end{minipage} \begin{minipage}{.49\linewidth} \centering \includegraphics[width=\linewidth]{figures/energyL_1-cut_filling_fraction_n_1.pdf} \caption*{\centering(b)} \end{minipage} \caption{(a) Comparison between momentum of a one-cut solution with mode number $n = 1$ (red dashed line, for both the isotropic and anisotropic interaction $\delta$) and momenta of the corresponding quantum eigenstates (shown for different system sizes $L = M/\nu$, with $M = 30$ and filling fraction $\nu$), with green (black) circles representing the isotropic (anisotropic, with $\delta=1$) cases. (b) Comparison between energies of a one-cut solution with mode number $n = 1$ (red dashed line for isotropic case and blue dashed line for anisotropic case with $\delta = 1$) and rescaled energies ($E \cdot L$) of the corresponding quantum eigenstates (the same system sizes as in panel (a)).} \label{fig:1cutphysicalmeaning} \end{figure} Before describing how to perform semi-classical quantisation of finite-gap solutions (cf. Sec.~\ref{sec:quantumcontour}), we wish to briefly discuss the one-cut solutions from the vantage point of low-energy eigenstates in the anisotropic Heisenberg spin chain. Our aim is mainly to identify which solutions to the Bethe equations \eqref{betheeq} show up classically as one-cut solutions. A numerical method for achieving this is outlined in Appendix~\ref{app:numericalbethe}. Specifically, we seek low-energy solutions comprising a macroscopic number, $\mathcal{O}(L)$, of excited magnons that condense into a single coherent mode.
By fixing the mode number to $n=1$, we can compute the total momentum $P$ and total energy $E$ of such a state at different system sizes $L$ with the filling fraction $\nu = M/L$ kept fixed, see Eqs.~\eqref{PandEquantum}. The results are shown in Fig.~\ref{fig:1cutphysicalmeaning}. On the other hand, we can likewise compute the momenta and energies for classical one-cut solutions, cf. Eq.~\eqref{expansioncharge1cut}, by setting the classical period $\ell = 1$. Although the quantum quasi-momentum reduces in the classical limit to its classical counterpart, we note that a direct comparison of the respective energies requires accounting for an extra rescaling factor of $L$, namely $H \cdot L \sim \mathcal{H}$, where $H$ denotes the eigenvalue of the quantum Hamiltonian defined in Eq.~\eqref{quantumhamil}, while the classical Hamiltonian $\mathcal{H}$ is given by Eq.~\eqref{classical_hamiltonian}. At the classical level, one-cut solutions describe a simple precessional motion. In some sense, one may view them as a finite-density analogue of Goldstone modes (representing elementary ferromagnetic excitations called magnons). Remarkably, it turns out (see Sec.~\ref{sec:quantumcontour}) that quantisation of such solutions can give rise to certain non-perturbative effects (first discussed in Ref.~\cite{Bargheer_2008}) which necessitate incorporating quantum corrections into the picture. \subsection{Bions from degenerate two-cut solutions} \label{subsec:examplebion} As a non-trivial example, we next focus on the class of two-cut solutions. These are, physically speaking, periodic elliptic magnetisation waves which are often referred to as `cnoidal waves'. The space of two-cut solutions is associated to elliptic algebraic curves corresponding to Riemann surfaces of genus $\mathfrak{g}=1$, characterised by two branch cuts. In the special limit of unit elliptic modulus, the profiles become hyperbolic and one retrieves the celebrated soliton solutions.
For the particular case of the easy-axis anisotropic Landau--Lifshitz model, there exist special types of two-cut solutions known as \emph{bions}; these represent two-mode bound states formed of a kink and an anti-kink. Moreover, a special degeneration of such a bion solution (upon decompactifying the circumference) produces a static kink. As mentioned in the introduction, kinks are ultimately responsible for the observed freezing of a magnetic domain wall, as demonstrated in \cite{Gamayun_2019}. In Sec.~\ref{subsec:quantumbion} below, we shall perform the semi-classical quantisation of a bion and study its subtle features. Before that, we derive it here using the outlined integration procedure. The elliptic curve encoding the bion spectrum has the form \begin{equation} \mathcal{R}_{\rm bion} (\mu) = \mathcal{R}_{4} (\mu) = (\mu^2 + \xi^2_1)(\mu^2 + \xi^2_2), \end{equation} parametrised by two pairs of complex-conjugate branch points located on the imaginary axis ${\rm Re}(\mu)=0$ at $\mu_{j} \in \{\pm {\rm i} \xi_1,\pm {\rm i} \xi_2\}$, satisfying conditions \begin{equation} \xi_1 > \epsilon > 0 , \qquad 0 < \xi_2 < \epsilon. \end{equation} The upshot here is that the bion solutions can only be found in the regime $\delta > 0$ (the easy-axis regime). In what follows we put, mostly for simplicity, $\delta = \epsilon = 1$. In fact, from the classical equation of motion for the spin field $\vec{\mathcal{S}} (x,t)$ given by Eq.~\eqref{classicaleom}, the solution at $\delta = 1$ can be mapped to another solution with $\delta^\prime > 0$ by a simple rescaling \begin{equation} \vec{\mathcal{S}} (x,t) \quad \mapsto \quad \vec{\mathcal{S}} (x^\prime = \epsilon\,x, t^\prime = \delta\,t). \end{equation} We next outline how to reconstruct the spin field from the algebraic curve. To set the stage, we introduce the standard complete elliptic integrals, cf.
Appendix~\ref{app:ellipticfunc}, \begin{equation} K_1 = K \left( \frac{\xi_2^2}{\xi_1^2} \right) , \qquad K_2 = K \left( 1 - \frac{\xi_2^2}{\xi_1^2} \right) . \end{equation} When the argument of an elliptic function is omitted, the elliptic modulus is understood to be ${\rm k} = \xi_{2} / \xi_{1}$. Since the Riemann surface involves two branch cuts, this time we do have a genuine dynamical $\gamma$-variable $\gamma_{1} (x,t)$. As noted earlier, there exist two extra non-dynamical variables pinned to locations $\gamma_{2} = {\rm i}$ and $\gamma_{3} = -{\rm i}$ (recall that $\epsilon = 1$). From the Dubrovin equations~\eqref{Dubrovint} we find \begin{equation} \gamma_1 (x , t) = \frac{{\rm i} \xi_1 }{{\rm sn} ( u ) } = - {\rm i} \xi_2 \, {\rm sn} ( u + {\rm i} K_2 ) , \qquad u = \xi_1 x + \varrho , \end{equation} where we have used ${\rm sn} (x + {\rm i} K_2 ) {\rm sn} (x)\, {\rm k} = 1$, with $\varrho$ denoting the integration constant. Remarkably, it turns out that in this particular subclass of two-cut solutions even $\gamma_1 $ is {\it static}. Using the reconstruction formula \eqref{Sz_resotre} for the $\mathcal{S}^{\rm z}$ component, we thus have \begin{equation} \mathcal{S}^{\rm z} = \frac{\sqrt{\mathcal{R}_{4} (\gamma_1 ) } + {\rm i} \sqrt{\mathcal{R}_{4} ({\rm i} )} \gamma_1 }{\gamma_1^2 + 1 } . \end{equation} To fix the integration constant $\varrho$, we impose the reality condition $\mathcal{S}^{\rm z} \in \mathbb{R}$, \begin{equation} \varrho = \mathrm{arcsn} \left( {\rm i} \frac{\xi_1}{\xi_2} \sqrt{\frac{1 - \xi_2^2}{\xi_1^2 - 1}} \right), \end{equation} which yields a time-independent profile \begin{equation} \mathcal{S}^{\rm z} (x) = - \xi_2 \frac{\mathrm{cn} (\xi_1 x )}{\mathrm{dn} (\xi_1 x )}. \end{equation} The solution has a spatial period \begin{equation} \ell = \frac{4 n K_1}{\xi_1}, \end{equation} where $\pm n$ are the mode numbers associated with the two cuts.
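Both the separated variable and the profile above can be verified numerically: $\gamma_{1}$ satisfies the genus-one Dubrovin equation in its squared (branch-independent) form $(\partial_{x}\gamma_{1})^{2} = -\mathcal{R}_{4}(\gamma_{1})$, while $\mathcal{S}^{\rm z}(x)$ has period $4K_{1}/\xi_{1}$ and antiperiod $2K_{1}/\xi_{1}$. A sketch with hypothetical branch points, and the integration constant $\varrho$ set to zero (which does not affect either identity):

```python
import numpy as np
from scipy.special import ellipj, ellipk

xi1, xi2 = 1.2, 0.8          # hypothetical branch points (xi1 > eps = 1 > xi2)
m = (xi2 / xi1)**2           # elliptic parameter m = k^2
K1 = ellipk(m)

u = np.linspace(0.3, 1.5, 400)       # avoid sn = 0, where gamma_1 blows up
sn, cn, dn, _ = ellipj(u, m)

# gamma_1 = i xi1 / sn(u),  u = xi1 x  (integration constant set to zero)
gamma = 1j * xi1 / sn
# chain rule: d gamma/dx = -i xi1^2 cn dn / sn^2
dgamma = -1j * xi1**2 * cn * dn / sn**2

R4 = (gamma**2 + xi1**2) * (gamma**2 + xi2**2)
# squared Dubrovin equation at genus one: (d gamma/dx)^2 = -R_4(gamma)
assert np.max(np.abs(dgamma**2 + R4)) < 1e-9

# S^z(x) = -xi2 cn(xi1 x)/dn(xi1 x): period 4 K_1/xi1, antiperiod 2 K_1/xi1
x = np.linspace(0.0, 2.0, 200)
def Sz(x):
    sn_, cn_, dn_, _ = ellipj(xi1 * x, m)
    return -xi2 * cn_ / dn_
assert np.allclose(Sz(x + 4*K1/xi1), Sz(x), atol=1e-10)
assert np.allclose(Sz(x + 2*K1/xi1), -Sz(x), atol=1e-10)
```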
Variable $\gamma_1$ can therefore be expressed as \begin{equation} \gamma_1 (x ) = - \xi_1 \xi_2 \frac{-{\rm i} \xi_1\sqrt{\mathcal{R}_{4} ({\rm i} )} \, \mathrm{cn} (\xi_1 x ) \mathrm{dn} (\xi_1 x ) + {\rm i} (\xi_1^2 - \xi_2^2) {\rm sn} (\xi_1 x) }{\xi_1^2 (1- \xi_2^2) + (\xi_1^2 - 1) \xi_2^2 \, {\rm sn} (\xi_1 x)^2 } . \end{equation} The other independent component of the spin field, say $\mathcal{S}^{-}(x,t)$, can be reconstructed with the aid of formulae~\eqref{Sm_restore1} and \eqref{Sm_restore2}, i.e. \begin{equation} \mathcal{S}^- (x , t) = \frac{1}{\sqrt{1+ \gamma_1^2 (x, t) } } \exp \left( - {\rm i} \int {\rm d} x \frac{-{\rm i} \sqrt{\mathcal{R}_4 ({\rm i} ) } }{1+\gamma_1^2 (x, t) } \right) . \end{equation} Using the properties of elliptic functions, we have \begin{equation} \mathcal{S}^- (x ) = C \frac{\Theta( \xi_1 x+\varrho + {\rm i} K_2 )}{\Theta(\xi_1 x +K_1 )} \exp\left(\xi_1 x\frac{\pi {\rm i} }{2 K_1}+ \xi x \frac{\Theta^\prime (\beta)}{\Theta(\beta)} \right) , \end{equation} where an auxiliary function $\Theta (x)$ is defined as (cf. Appendix~\ref{app:ellipticfunc}) \begin{equation} \Theta(x) = \vartheta_3\left( \frac{\pi x}{2 K_1} + \frac{\pi}{2} , - i \frac{K_2}{K_1} \right). \end{equation} Finally, constant $C$ can be fixed by requiring the normalisation $\vec{\mathcal{S}}^{2} = 1$, yielding \begin{equation} \mathcal{S}^- (x ) = \sqrt{1 - \xi_2^2} \frac{\Theta( \xi_1 x+\varrho+{\rm i} K_2 )}{\Theta(\xi_1 x +K_1 )} \exp\left(\xi_1 x\frac{\pi {\rm i} }{2 K_1}+ \xi x \frac{\Theta^\prime (\beta)}{\Theta(\beta)} + {\rm i} \phi_0 \right) , \end{equation} where $\phi_0 \in \mathbb{R}$ is a phase that is determined by the initial condition. We have just found that for the bion solution $\mathcal{S}^-$ is time-independent as well. A representative example of a bion solution is shown in Fig.~\ref{fig:bionspinfield}.
\begin{figure} \centering \includegraphics[width=.95\linewidth]{figures/bion_spin_field.pdf} \caption{Spin-field components of a typical bion solution, depicted for anisotropy parameter $\delta=1$ and branch points $\xi_1 \simeq 1.0583559$ and $\xi_2 = 0.9$. Components $\mathcal{S}^{\rm z}(x)$, $\mathcal{S}^{\rm x}(x) = \mathrm{Re}\,\mathcal{S}^{-}(x)$, and $\mathcal{S}^{\rm y} (x) =-\mathrm{Im}\,\mathcal{S}^{-}(x)$ are shown by blue, yellow and red curves, respectively. The branch points are determined using the method in Appendix~\ref{app:numericalrecipe}. The bion solution is periodic with period $\ell_2 = 4 K_1 / \xi_1 \simeq 15.956517$.} \label{fig:bionspinfield} \end{figure} \paragraph*{Static kink.} As our final example, we consider a particular degeneration of a bion solution which produces a kink. One can think of it as a `soliton limit', which generally corresponds to `blowing up' the period $\ell$. This requires bringing both branch points of $\sqrt{\mathcal{R}_4(\mu)}$ in the upper-half plane together to meet at ${\rm i} \epsilon$, that is sending $\xi_{1,2} \to \epsilon$ (presently $\epsilon = 1$) and thus collapsing both branch cuts to a point. In order to perform this soliton limit, we first shift the argument of the elliptic function by the quarter period $K_{1} / \xi_{1} $; for instance the $\mathcal{S}^{\rm z}$ field takes the form (cf. Eqs.~\eqref{halfperiodelliptic}) \begin{equation} \mathcal{S}^{\rm z} (x ) = - \xi_1 \frac{\mathrm{c n } ( \xi_1 (x + K_1 / \xi_1) ) }{\mathrm{d n } ( \xi_1 (x + K_1 / \xi_1) )} = \xi_1 \mathrm{s n} (\xi_1 x ). \end{equation} Now taking the limits $\xi_{1,2} \to 1$ and accordingly ${\rm k} = \xi_{2} / \xi_{1} \to 1$, we find \begin{equation} \mathcal{S}^{\rm z}_{\rm kink} (x ) = \tanh (x) , \end{equation} which is none other than a \emph{static kink}!
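The degeneration can be confirmed independently with standard elliptic-function routines: as the elliptic parameter $m = {\rm k}^{2} \to 1$, $\mathrm{sn}$ tends to $\tanh$ while $\mathrm{cn}$ and $\mathrm{dn}$ tend to $\mathrm{sech}$. A short numerical sketch:

```python
import numpy as np
from scipy.special import ellipj

x = np.linspace(-5.0, 5.0, 101)

# soliton limit xi_{1,2} -> 1, i.e. elliptic parameter m = k^2 -> 1
m = 1.0 - 1e-8
sn, cn, dn, _ = ellipj(x, m)

# sn(x, m -> 1) -> tanh(x), while cn, dn -> sech(x)
assert np.max(np.abs(sn - np.tanh(x))) < 1e-6
assert np.max(np.abs(cn - 1.0 / np.cosh(x))) < 1e-6
```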
The transversal components can be easily deduced from the equation of motion~\eqref{classicaleom}, yielding \begin{equation} \mathcal{S}^{\rm -}_{\rm kink} (x ) = \sech (x) \, e^{{\rm i} \phi_0 } , \end{equation} where $\phi_0 \in \mathbb{R}$ is a phase determined by the initial condition. \section{Semi-classical quantisation} \label{sec:quantumcontour} We have now fully prepared the stage to carry out the semi-classical quantisation of finite-gap solutions. In this respect, the associated spectral curves (encoding complete information about the conserved quantities) will be of vital importance. An important remark is in order at this point. Recall that the moduli of algebraic curves are completely fixed by the locations of the square-root branch points, i.e. roots of polynomial $\mathcal{R}_{2 \mathfrak{g} + 2}(\mu)$, which (by the reality condition) must always appear in complex-conjugate pairs. Local conserved charges are expressible as functions of symmetric polynomials of the branch points, namely coefficients $r_{j}$ of $\mathcal{R}_{2 \mathfrak{g} + 2}(\mu)$. While the branch points $\{\mu_j\}$ themselves are directly linked to physical quantities, the branch cuts (of square-root type) obtained by pairwise connecting the branch points are not physical but merely a matter of convention. There is indeed plenty of freedom in assigning branch cuts to a given set of branch points. For instance, the prevalent choice in the finite-gap literature~\cite{babelon_bernard_talon_2003} is to place the cuts along straight vertical lines connecting every complex-conjugate pair of branch points, which are in turn encircled by the canonical basis $\mathcal{A}$-cycles. Such a choice is, purely from the standpoint of classical finite-gap solutions, perfectly adequate.
On the other hand, if classical solutions are instead thought of as emergent macroscopic bound states of magnons of the underlying quantum spin chain, it is natural to cut the surface along one-dimensional disjoint segments associated to forbidden zones of the classical transfer function~\cite{Kazakov_2004}, corresponding to contours on which magnons (Bethe roots) condense. This prescription for the branch cuts is physically distinguished. As we demonstrate below, such physical cuts not only appreciably differ from the conventional straight cuts in general, but also undergo the phenomenon of condensate formation. Computing the Bethe root densities along the physical contours is thus the essential step in performing the semi-classical quantisation. \subsection{Physical contours} In this section we describe a general procedure to determine the physical contours. The algorithm we employ has been proposed and implemented in Ref.~\cite{Bargheer_2008}. We shall also facilitate a direct comparison with exact low-momentum quantum eigenstates at finite $L$ in the weakly-anisotropic regime. For our convenience, we carry out this computation in the $\zeta$-plane~\footnote{Our variable $\zeta$ is analogous to variable $x$ used in Refs.~\cite{Kazakov_2004, Beisert_2005, Bargheer_2008} in the case of the isotropic Heisenberg model.} by applying the following anti-holomorphic transformation~\footnote{Upon this transformation, the orientation of integration contours is reversed.} \begin{equation} \mu \mapsto \zeta(\mu):\qquad \zeta = \frac{1}{\mu} = \frac{\tan \vartheta}{\epsilon}.
\end{equation} Firstly, we introduce the \emph{complex} density function \begin{equation} \rho(\zeta) = \frac{\mathfrak{p} (\zeta + {\rm i} 0 ) - \mathfrak{p} (\zeta - {\rm i} 0)}{2 {\rm i} \pi \ell (1 + \delta \zeta^2)} = \pm\frac{\mathfrak{p}(\zeta \pm {\rm i} 0)-\pi n_{j}}{{\rm i} \pi \ell (1+\delta \zeta^2 )},\qquad \zeta \in \mathcal{C}_{j}, \label{defrhozeta} \end{equation} where $n_{j} \in \mathbb{Z}$ is the mode number associated to cut $\mathcal{C}_{j}$. By virtue of the second equality in Eq.~\eqref{defrhozeta}, the density function $\rho(\zeta)$ can be defined on the entire Riemann surface. As a direct consequence of $\mathfrak{p}(\zeta = \zeta_{\star})=\pi n_{j}$ at the square-root branch points $\zeta_{\star}\in \{\zeta_{j},\ol{\zeta}_{j}\}$, the density always vanishes there, $\rho(\zeta_{\star})=0$. In a small neighbourhood around a branch point, one finds $\rho(\zeta = \zeta_{\star} + \varepsilon) = \mathcal{O}(\sqrt{\varepsilon})$. Away from the branch points $\rho(\zeta)$ in general takes complex values. The task is to determine the physical contours $\mathcal{C}_{j}$. The latter can be singled out by the following condition: starting from the branch point $\zeta_{j}$, the integrated density differential $\rho(\zeta){\rm d} \zeta$ must always remain real, \begin{equation} \int^{\zeta}_{\zeta_{j}} {\rm d} \zeta^\prime \rho (\zeta^\prime) \in \mathbb{R} \qquad {\rm for}\qquad \zeta \in \mathcal{C}_{j}. \label{realitycond} \end{equation} This prescription has a transparent physical interpretation: $\rho(\zeta){\rm d} \zeta$ corresponds to the number of excitations (Bethe roots) within an infinitesimal interval in the $\zeta$-plane, which is a positive definite quantity by definition. This condition alone is, however, not yet sufficient. In particular, it turns out that there are three distinct contours emanating from each branch point compatible with the above positivity requirement~\cite{Bargheer_2008}.
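The reality condition~\eqref{realitycond} also suggests a simple numerical scheme for tracing a contour: stepping ${\rm d}\zeta = h/\rho(\zeta)$ with a small real $h>0$ keeps $\rho(\zeta)\,{\rm d}\zeta$ real and positive at every step. A minimal Python sketch, run on the toy density $\rho(\zeta)=\zeta$ (chosen purely for illustration, since the traced curve then conserves $\mathrm{Im}\,\zeta^2$ exactly; it is not the physical density above):

```python
import numpy as np

def trace_contour(rho, z0, h=1e-4, steps=20000):
    """Follow the curve along which rho(z) dz stays real and positive:
    each Euler step dz = h / rho(z) makes rho(z) dz = h, a real number."""
    z = z0
    path = [z]
    for _ in range(steps):
        z = z + h / rho(z)
        path.append(z)
    return np.array(path)

# Toy density rho(z) = z: along the traced contour z^2 - z0^2 must stay real,
# since the integral of z dz equals (z^2 - z0^2)/2.
z0 = 1.0 + 1.0j
path = trace_contour(lambda z: z, z0)
assert np.max(np.abs((path**2 - z0**2).imag)) < 1e-2
```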
Amongst those, one of the contours carries an infinite filling fraction and can thus be immediately ruled out as unphysical. Out of the remaining two contours, only one can be physical. The defining condition is that the total filling fraction does not exceed the threshold value of maximal total filling $\nu_{\rm max} = 1/2$, that is \begin{equation} \int_{\mathcal{C}} {\rm d} \zeta^\prime \rho (\zeta^\prime) \leq \nu_{\rm max} , \qquad \mathcal{C} = \bigcup_j \mathcal{C}_j . \label{realitycond2} \end{equation} Consider now a certain reference finite-gap solution. To quantise it at the semi-classical level, every density contour (physical cut) has to be dissolved into a large (but finite) number of individual magnons. This invariably requires reintroducing the length $L$ of the underlying spin chain, so that the total magnetisation carried by individual coherent states comes in integer quanta $M_{j} \sim \mathcal{O}(L)$. The precise requirements are (i) that $M_{j}/L \approx \nu_{j}$ and (ii) that the Bethe roots are distributed approximately with density $\rho_{j}(\zeta)$ along $\mathcal{C}_{j}$. Here it is important to make a distinction from the exact quantisation, which instead takes quantum fluctuations fully into account to all orders in the effective Planck constant. This means, in other words, that the semi-classical solutions produced with the outlined procedure can at best be an approximation of finite-volume exact quantum-mechanical eigenstates at large wavelengths, while a full non-perturbative (i.e. exact) quantisation would require solving the Bethe ansatz equations~\eqref{betheeq}. \paragraph*{Single contour at low density.} To benchmark the above procedure, we first illustrate how one-cut solutions emerge as semi-classical eigenstates in the anisotropic gapped Heisenberg spin-$1/2$ chain. We shall first suppose that the filling fraction of a physical cut is sufficiently \emph{low}, ensuring that the finite-size effects (cf.
Eqs.~\eqref{finite_size_mu} and \eqref{finite_size_zeta}) can be safely neglected at large system sizes. We then find that the Bethe root patterns which solve the asymptotic Bethe equations distribute along certain arcs in the complex rapidity plane, as exemplified in Fig.~\ref{fig:1cutnocond}. To be concrete, we set the anisotropy to $\delta = 1$, the filling fraction to $\nu = 0.1$ and the mode number to $n=1$. Using the above prescription, we next compute the density contour satisfying Eqs.~\eqref{realitycond} and \eqref{realitycond2}, as shown in Fig.~\ref{fig:1cutnocond} (the procedure to numerically solve the Bethe ansatz equations~\eqref{betheeq} for the case of a single quantised one-cut solution is described in Appendix~\ref{app:numericalbethe}). Upon taking the $L \to \infty$ limit and rescaling the rapidity variable, the semi-classical eigenstate will eventually be described by a dense arrangement of Bethe roots distributing along the contour specified by the conditions~\eqref{realitycond} and \eqref{realitycond2}. By ramping up the filling fraction $\nu$, we observe that `quantum fluctuations' (contained in higher-order terms in the ABE) progressively amplify. As announced earlier, this eventually leads to a critical phenomenon of condensate formation. This feature will be closely examined in the next section. \begin{figure} \centering \includegraphics[width=.5\linewidth]{figures/xxz_delta_1_contour_condensate_0_1.pdf} \caption{Direct comparison between the physical density contour of a classical one-cut solution (red dashed line, corresponding to anisotropy $\delta = 1$, filling fraction $\nu = 0.1$ and mode number $n=1$) determined by imposing the positivity condition~\eqref{realitycond}, and the corresponding solution to Bethe ansatz equations~\eqref{betheeq} with $M=30$ Bethe roots, system size $L=300$ and anisotropy $\eta = \sqrt{\delta}/L = 1/300$ (blue dots).
The Bethe roots are given by $\zeta_{j} = \tan (\vartheta_{j})/\sqrt{\delta}$.} \label{fig:1cutnocond} \end{figure} \subsection{Formation of condensates} \label{sec:betherootcond} \emph{Condensates} refer to segments of uniform density forming part of a physical contour. We borrowed this name from Refs.~\cite{Beisert_2003, Kazakov_2004}, where (to the best of our knowledge) such objects were first identified. Condensates enter the picture whenever the maximal density along one of the physical contours exceeds a particular critical value, which is signalled by a divergence of the finite-size correction given by Eq.~\eqref{finite_size_mu} (see also Eq.~\eqref{finite_size_zeta}). We shall first examine the phenomenon in the simplest case of the one-cut solution, using the $\zeta$-plane parametrisation. Starting at some sufficiently low filling fraction $\nu$, we observe that upon gradually increasing the filling fraction the value of $\rho(\zeta)(1+ \delta \zeta^2)$ on the real axis approaches the value ${\rm i}$. This value is reached at the critical filling fraction $\nu_{\rm crit}$, precisely when quantum fluctuations of order $\mathcal{O}(1/L)$ become divergent, as indicated by Eq.~\eqref{finite_size_zeta}. For larger fillings $\nu>\nu_{\rm crit}$, the density contour develops a vertically straight segment of unit uniform density. Such a condensate appears first on the real axis (right after crossing $\nu_{\rm crit}$) and progressively expands outwards upon further increasing $\nu$. From the viewpoint of the underlying quantum chain, the spacing between constituent Bethe roots is always equal to ${\rm i} \eta$. One can therefore think of condensates as giant regular Bethe strings.
In the complex spectral plane associated to finite-gap solutions, the end points of a condensate correspond to branch cuts of \emph{logarithmic} type.~\footnote{Logarithmic branch cuts likewise get produced in a well-known soliton degeneration process, corresponding to merging two nearby standard square-root branch cuts by coalescing their branch points in a pairwise manner. In effect, the finite-gap quasi-momentum is no longer meromorphic. Condensates are different in this respect, as their quasi-momentum differential remains meromorphic all the way through.} \begin{figure} \centering \begin{minipage}[t]{.32\textwidth} \centering \includegraphics[width=\textwidth]{figures/xxz_delta_1_contour_part_1.pdf} \caption*{\centering(a)} \end{minipage} \begin{minipage}[t]{.32\textwidth} \centering \includegraphics[width=\textwidth]{figures/xxz_delta_1_contour_part_2.pdf} \caption*{\centering(b)} \end{minipage} \begin{minipage}[t]{.32\textwidth} \centering \includegraphics[width=\textwidth]{figures/xxz_delta_1_contour_part_3.pdf} \caption*{\centering(c)} \end{minipage} \caption{Branch points ($\zeta_1$ and $\bar{\zeta}_1$) and fluctuation points ($\zeta_F$) for the one-cut solution with anisotropy parameter $\delta = 1$ and various filling fractions $\nu = \{0.1, 0.206354963,1/3\}$, increasing from left to right. Panel (b) corresponds to the critical filling fraction $\nu_c \simeq 0.206354963$, at which the fluctuation point $\zeta_F$ collides with the physical cut.} \label{fig:fluctuation_demonstration} \end{figure} In spite of the appearance of an additional condensate for $\nu>\nu_{\rm crit}$, it is still possible to extract the physical contour solely from the knowledge of a finite-gap solution by taking into account conditions~\eqref{realitycond} and \eqref{realitycond2}. The emergence of condensates is intimately tied to the notion of `fluctuation points', which play a pivotal role in classical modulation stability theory~\cite{Forest_Lee_1986}.
Fluctuation points can be perceived as small fluctuations of a reference finite-gap solution, corresponding to tiny cuts that possess infinitesimal filling fractions. Upon increasing their filling fraction, they grow into nonlinear finite-gap modes. Let us slightly expand on this point. Imagine a reference finite-gap solution with $m$ cuts, and let $\{n_1, n_2, \ldots, n_{m}\}$ denote the occupied mode numbers. To excite a mode with an unoccupied mode number, call it $n_{*}$, the quasi-momentum $\mathfrak{p}(\zeta )$ has to satisfy the periodic boundary condition, \begin{equation} \mathfrak{p}(\zeta_{m,n_{*}} ) = n_{*}\pi , \end{equation} where $\zeta_{m,n_{*}}$ labels distinct fluctuation points. Note that these can either be real or may occur in complex-conjugate pairs (owing to the square-root branch cut nature of the quasi-momentum). Fluctuation points are treated as ``almost degenerate'' branch cuts, so small that they do not affect the form of the quasi-momentum $\mathfrak{p}(\zeta)$. This raises the interesting question of whether any given finite-gap solution is classically modulationally stable under such fluctuations; see for instance the discussion in Ref.~\cite{Forest_Lee_1986}. In this respect, we note that the stability condition coincides with the condition for the formation of condensates. In the following we shall first take a look at the basic case with a single cut. We find that the physical contours made out of Bethe roots consist of three pieces: two arcs which connect to the square-root branch points, joined by a uniform condensate in the middle, with two additional bent contours emanating from the intersection points that connect to the nearby fluctuation point(s). We give an explicit demonstration of this scenario in Sec.~\ref{subsec:1-cutwithcond}. The presence of multiple excited cuts makes the situation even more involved, as the cuts exert an effectively attractive interaction on one another.
This situation is described in Sec.~\ref{subsec:multiplecut}. Let us also mention that a somewhat reminiscent phenomenon is known to appear in the context of matrix models~\cite{Brezin_2000} (which are described by a similar type of Riemann--Hilbert problems) and also elsewhere, e.g. in large-$N$ Yang--Mills theory~\cite{Gross_1980, Wadia_1980, Douglas_1993} and random tiling models~\cite{Zinn-Justin_2000, Colomo_2013}. They commonly go under the name of the Douglas--Kazakov phase transition~\cite{Douglas_1993}, a variant of a third-order phase transition. Analogous condensates also appear in the semi-classical regime of the Lieb--Liniger model with attractive interaction~\cite{Flassig_2016, Piroli_2016}, where a quantum phase transition can be detected through the calculation of correlation functions in the ground state~\cite{Piroli_2016}. We emphasise, however, that in our case there is no real phase transition going on, in the sense that the branch points and the quasi-momentum $\mathfrak{p}(\zeta)$ itself do not undergo any discontinuous change, in distinction to the Douglas--Kazakov transition where the free energy becomes non-smooth after the formation of a ``condensate''~\cite{Colomo_2013}. \subsubsection{One-cut case with condensate} \label{subsec:1-cutwithcond} In the case of a single cut, we have \begin{equation} \int_{\mathcal{C}_1} {\rm d} \zeta \rho (\zeta ) = \nu_1 = \mathcal{O} (1) , \end{equation} with an upper-bounded filling fraction $\nu_1 < 1/2 $. The locations of fluctuation points, denoted below by $\zeta_{1,k}$, can be inferred from the density \begin{equation} \rho_k (\zeta ) = \pm \frac{\mathfrak{p} (\zeta ) - \pi k }{2\pi {\rm i} (1 + \delta \zeta^2 )} , \qquad \mathfrak{p} (\zeta_{1 , k} ) = k \pi . \label{rhofluc} \end{equation} \paragraph*{Mode number $n=1$.} The condensate phenomenon is best illustrated by the basic example of a one-cut solution with mode number $n=1$.
Below the density threshold we find a single smooth arc-shaped contour, as exemplified in panel (a) of Fig.~\ref{fig:fluctuation_demonstration}. The closest fluctuation point sits at a finite distance away from the cut (somewhere to the left of it), whereas the density at the real axis satisfies \begin{equation} \rho (\zeta_\ast) (1 + \delta \zeta_\ast^2 ) < {\rm i}. \end{equation} Increasing the filling fraction causes an increase in the density on the real axis. During this process, the nearby fluctuation point approaches the physical contour until, at the critical filling, it eventually collides with it at $\zeta_{\ast}$, \begin{equation} \rho (\zeta_\ast) (1 + \delta \zeta_\ast^2 ) = {\rm i}. \end{equation} This event is shown in panel (b) of Fig.~\ref{fig:fluctuation_demonstration}.~\footnote{Compared to the isotropic case in \cite{Bargheer_2008}, where $\nu_{c} \simeq 0.2092896452$, the condensate appears at a slightly smaller filling fraction.} \begin{figure} \centering \begin{minipage}{.49\linewidth} \centering \includegraphics[width=\linewidth]{figures/demonstration_fluc_point_n_1.pdf} \caption*{\centering(a)} \end{minipage} \begin{minipage}{.49\linewidth} \centering \includegraphics[width=\linewidth]{figures/xxz_contour_condensate_1_third.pdf} \caption*{\centering(b)} \end{minipage} \caption{(a) Fluctuation point $\zeta_F$, lying on the real axis, can be viewed as an infinitesimal (collapsed) branch cut -- a deactivated mode. (b) Comparison between the classical contour (dashed line) with anisotropy parameter $\delta = 1$, filling fraction $\nu = 0.3$, and mode number $n=1$ (computed based on reality condition~\eqref{realitycond}), including a condensate (red dashed line) and an additional contour originating from $\zeta_F$ (brown dashed line), and the corresponding solution to the Bethe equations (cf. Eq.~\eqref{betheeq}) with $M=48$, $L=144$ and $\eta = \frac{\sqrt{\delta}}{L} = \frac{1}{144}$ (blue dots).
Notice that the Bethe roots plotted are $\frac{\tan \vartheta}{\sqrt{\delta}}$.} \label{fig:1cutcond} \end{figure} Upon increasing the filling fraction even further, the fluctuation points `tunnel through' the cut after the collision. This leaves behind a straight condensate oriented in the vertical direction. The nearest fluctuation point on the real axis will then appear to the right of the physical cut, as pictured in panel (c) of Fig.~\ref{fig:fluctuation_demonstration}. One can nonetheless recover the same quasi-momentum $\mathfrak{p} (\zeta)$ by considering an additional pair of contours which emanate from the fluctuation point(s), satisfying $\rho_{2} (\zeta ) {\rm d} \zeta \in \mathbb{R}$ with the density defined through Eq.~\eqref{rhofluc} (depicted by brown dashed lines in panel (c) of Fig.~\ref{fig:fluctuation_demonstration}). Due to the extra condensate, the original contour cannot accommodate all the Bethe roots, and some ``excess'' Bethe roots will lie along these additional contours. We wish to emphasise again that the quasi-momentum $\mathfrak{p} (\zeta)$ remains intact. There is a suggestive explanation behind the above picture if one views a one-cut solution as a limiting (degenerate) case of a more general two-cut solution with one of its cuts `switched off' to a fluctuation point. This is neatly captured in panel (a) of Fig.~\ref{fig:1cutcond}, where the blue solid lines represent parts of the original physical contour connecting to the pair of branch points $(\zeta_1 , \bar{\zeta}_1)$, while the red dashed line depicts the Bethe-root condensate of uniform density. The extra green solid line belongs to one of the ``unphysical contours''\footnote{We call it ``unphysical'' because the green contour alone does not yield the correct value of the filling fraction for the infinitesimal cut. Yet the combination of all three parts here is clearly physically meaningful.} associated with the infinitesimal branch cut $(\zeta_F , \bar{\zeta}_F)$.
Combining all the ingredients, we are therefore able to determine the densities of Bethe roots along these three contours. This amounts to accounting for the leading-order quantum corrections to the ABE~\eqref{asympBE} in a non-perturbative fashion. To better corroborate the above picture, we have made a direct comparison with the contours obtained numerically by solving the Bethe equations for large system sizes, cf. Fig.~\ref{fig:1cutcond}. Moreover, we have performed another quantitative test of the proposed contours through the calculation of the leading-order Gaudin norm of the Bethe states, as shown in Fig.~\ref{fig:loggaudinthird}. We emphasise that physical contours are a key ingredient of the functional integral approach to computing overlaps (and norms) between semi-classical Bethe eigenstates, thus providing an opportunity to verify whether the described contours are indeed suitable. The numerical evidence is collected in Appendix~\ref{subsubsec:numericalcheck1}. \paragraph*{Mode number $n\geq 2$.} One can encounter even more exotic situations. While $\mathfrak{p} (\zeta_\ast) = (n+1) \pi$ has only one real solution for $n=1$, higher mode numbers $n \geq 2$ permit complex-conjugate pairs of fluctuation points~\cite{Bargheer_2008}. In this scenario, the same condition $\rho_{n+1} (\zeta ) {\rm d} \zeta \in \mathbb{R}$ yields an additional contour with a condensate appearing between the intersection points, along the lines of the preceding discussion. This time, such contours can be understood at the classical level as arising from a three-cut solution with one large physical cut and two almost degenerate tiny cuts located at the complex-conjugate fluctuation points $\zeta_F$ and $\bar{\zeta}_F$. For instance, in Fig.~\ref{fig:xxx_1_cut_n_2} we illustrate this for the isotropic Heisenberg spin chain with mode number $n=2$.
Unfortunately, for the anisotropic ferromagnet the employed numerical method for producing analogous solutions does not work for $n \geq 2$, cf. Appendix~\ref{app:numericalbethe}. Given that the distributions of Bethe roots do not appreciably change upon introducing a tiny deformation parameter $\eta \sim \mathcal{O}\left( \frac{1}{L} \right) $, we expect the phenomenon to survive the presence of weak interaction anisotropy. \begin{figure} \begin{minipage}{.49\linewidth} \centering \includegraphics[width=\linewidth]{figures/demonstration_fluc_point_n_2.pdf} \caption*{\centering(a)} \end{minipage} \begin{minipage}{.49\linewidth} \centering \includegraphics[width=\linewidth]{figures/xxx_contour_condensate_0_1_n_2.pdf} \caption*{\centering(b)} \end{minipage} \caption{(a) Complex fluctuation points $\zeta_F$ and $\bar{\zeta}_F$. They can be seen as collapsed cuts of a three-cut solution with $\zeta_{F,1} \to \zeta_{F,2} \to \zeta_F$ and $\bar{\zeta}_{F,1} \to \bar{\zeta}_{F,2} \to \bar{\zeta}_F$. (b) Comparison between the classical density contour (dashed line), including the condensate (red dashed line) and the additional contours emanating from fluctuation points $\zeta_F$ and $\bar{\zeta}_F$ (brown and purple dashed lines), obtained from reality condition~\eqref{realitycond} for the case of isotropic interaction ($\delta = 0$), with filling fraction $\nu = 0.1$ and mode number $n=2$, and the corresponding solution to Bethe equations~\eqref{betheeq} with $M=60$, $L=600$ and $\eta = 0$. The Bethe roots $\zeta_j$ (blue dots) are rescaled by the system size $L$ and plotted in the inverse spectral plane, i.e.
$\zeta_{j} = 1/(L\lambda_j)$, where $\lambda_j$ solve the isotropic Bethe equations, $\left( \frac{\lambda_j + {\rm i} / 2}{\lambda_j - {\rm i} / 2} \right)^L = \prod_{k\neq j}^{M} \frac{\lambda_j - \lambda_k + {\rm i}}{\lambda_j - \lambda_k - {\rm i}}$.} \label{fig:xxx_1_cut_n_2} \end{figure} \subsubsection{Multiple cuts} \label{subsec:multiplecut} When multiple cuts get involved, the situation is far more complicated. In that case, condensates can appear not only out of fluctuation points but also via an attractive interaction amongst the physical cuts. Here we focus for simplicity on the two-cut case, since a general scenario with more cuts can be largely described based on the phenomenology of the two-cut case. In Ref.~\cite{Bargheer_2008}, the authors made an exhaustive survey of the two-cut case at the isotropic point ($\delta = 0$). The anisotropic case with $\eta = \frac{\epsilon}{L} > 0$ can be analysed in a similar fashion. There are several discernible features we wish to highlight. Firstly, when the two physical cuts are far apart from one another, each branch cut can produce a condensate upon colliding with its nearby fluctuation point, in analogy with the one-cut case discussed in Ref.~\cite{Bargheer_2008}. However, when the physical cuts approach each other, their mutual attraction amplifies until they eventually merge. The result is two joined contours glued together by a condensate at the cusps.
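As a small aside, the isotropic Bethe equations quoted in the caption above can be solved directly at modest sizes by iterating their logarithmic form. A minimal Python sketch for two magnons on an $L=8$ chain (the Bethe quantum numbers $I_j$ are hand-picked half-odd integers, one standard convention for this parity of $L-M$):

```python
import numpy as np

L, M = 8, 2
I = np.array([0.5, 1.5])   # hand-picked Bethe quantum numbers of a low-lying state

# Fixed-point iteration of the logarithmic Bethe equations
#   L * 2*arctan(2*lam_j) = 2*pi*I_j + sum_{k != j} 2*arctan(lam_j - lam_k)
lam = np.array([0.1, 0.4])
for _ in range(200):
    scatt = np.array([sum(2*np.arctan(lam[j]-lam[k]) for k in range(M) if k != j)
                      for j in range(M)])
    lam = 0.5*np.tan((2*np.pi*I + scatt)/(2*L))

# Check the product form quoted in the caption:
#   ((lam_j + i/2)/(lam_j - i/2))^L = prod_{k != j} (lam_j-lam_k+i)/(lam_j-lam_k-i)
for j in range(M):
    lhs = ((lam[j]+0.5j)/(lam[j]-0.5j))**L
    rhs = np.prod([(lam[j]-lam[k]+1j)/(lam[j]-lam[k]-1j)
                   for k in range(M) if k != j])
    assert abs(lhs - rhs) < 1e-8
```

The final loop verifies that the converged roots indeed satisfy the product form of the Bethe equations to numerical precision.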
\begin{figure} \centering \begin{minipage}{.49\linewidth} \includegraphics[width=\linewidth]{figures/xxx_contour_cond_2_cut_without_cond.pdf} \caption*{\centering(a)} \end{minipage} \begin{minipage}{.49\linewidth} \centering \includegraphics[width=\linewidth]{figures/xxx_contour_cond_2_cut.pdf} \caption*{\centering(b)} \end{minipage} \caption{(a) Comparison between the density contour (dashed line) of a classical two-cut solution (shown for the isotropic case ($\delta = 0$), with partial filling fractions $\nu_1 = 0.02$, $\nu_2 = 0.06$ and mode numbers $n_1=1$, $n_2 = 2$), obtained from reality condition~\eqref{realitycond}, and the corresponding numerical solution to Bethe equations~\eqref{betheeq} with $M_1=10$, $M_2 = 30$, $L=500$ and $\eta = 0$ (blue dots). (b) Comparison between the classical contour (dashed line) of a two-cut solution (shown for the isotropic case ($\delta = 0$), with partial filling fractions $\nu_1 = 0.025$, $\nu_2 = 0.075$ and mode numbers $n_1=1$, $n_2 = 2$), obtained from reality condition~\eqref{realitycond}, and the corresponding numerical solution to Bethe equations~\eqref{betheeq} with $M_1=10$, $M_2 = 30$, $L=400$ and $\eta = 0$ (blue dots). The Bethe roots plotted are $\zeta_{j}=1/(L \lambda_j)$, same as in Fig.~\ref{fig:xxx_1_cut_n_2}.} \label{fig:xxx_2_cut} \end{figure} A basic instance of the above phenomenon involves two cuts with consecutive mode numbers, namely $n_2 = n_1 +1$. The moment the two physical contours intersect, say at points $\zeta_{\rm int}$ and $\bar{\zeta}_{\rm int}$, the combined density satisfies \begin{equation} \left[ \rho_{n_1} (\zeta_{\rm int}) + \rho_{n_1+1} (\zeta_{\rm int}) \right] \left( 1 + \delta \zeta_{\rm int}^2 \right) = \left[ \rho_{n_1} (\bar{\zeta}_{\rm int}) + \rho_{n_1+1} ( \bar{\zeta}_{\rm int}) \right] \left( 1 + \delta \bar{\zeta}_{\rm int}^2 \right) = {\rm i}, \end{equation} giving birth to a condensate.
Indeed, installing a condensate between the two intersection points does not alter the quasi-momentum, and hence the filling fraction stays intact. We have confirmed this to be the case by numerically solving the Bethe equations for moderately large system sizes, as demonstrated in Fig.~\ref{fig:xxx_2_cut} (again for the isotropic case). In particular, at low filling fractions for both cuts their mutual ``attraction'' becomes apparent (cf. the second cut connecting $(\zeta_1 , \bar{\zeta}_1)$ in panel (a) of Fig.~\ref{fig:xxx_2_cut}). The four branch points in panel (a) of Fig.~\ref{fig:xxx_2_cut}, reading $\zeta_1 = 0.10884679 + 0.047665716 {\rm i}$ and $\zeta_2 = 0.07330641 + 0.04152184 {\rm i}$ (with filling fractions $\nu_1 = 0.02$, $\nu_2 = 0.06$, $\ell=1$ and mode numbers $n_1=1$, $n_2 = 2$), have been determined numerically using the recipe given in Appendix~\ref{app:branchpoints}. At a certain critical value of the filling fractions the two cuts merge together. The intersection point becomes a logarithmic branch point of a condensate, as exemplified in panel (b) of Fig.~\ref{fig:xxx_2_cut}. The four branch points in panel (b) of Fig.~\ref{fig:xxx_2_cut} are $\zeta_1 = 0.09587725 + 0.05961115 {\rm i}$ and $\zeta_2 = 0.07169871 + 0.04814544 {\rm i}$, with filling fractions $\nu_1 = 0.025$, $\nu_2 = 0.075$, $\ell=1$ and mode numbers $n_1=1$, $n_2 = 2$. For the anisotropic interaction we encountered the same numerical difficulties as previously for the one-cut solution with $n \geq 2$, cf. Appendix~\ref{app:numericalbethe}. We nevertheless do not expect any qualitative difference compared to the isotropic model. \subsection{Special case: bion} \label{subsec:quantumbion} As discussed earlier, the easy-axis regime (i.e. for $\delta >0$) permits a distinguished subclass of two-cut solutions that do not occur in the other two (that is, isotropic and easy-plane) regimes.
Here we have in mind the bion solution which we have described and parametrised in Sec.~\ref{subsec:examplebion}. One part of the motivation for investigating this case in detail is to elucidate the microscopic origin and stability of kinks in Landau--Lifshitz field theory, which we expect to be of pivotal importance for understanding the freezing property of a domain-wall profile, investigated recently in Refs.~\cite{Gamayun_2019, Misguich_2019}. \begin{figure} \centering \includegraphics[width=.5\linewidth]{figures/xxz_bion_contour_condensate.pdf} \caption{Quantised bion configuration with the proposed density contours (physical cuts). The two nonlinear modes with $\delta = 1$, parametrised by pairs of branch points $(\zeta_1 , \bar{\zeta}_1)$ and $(\zeta_2 , \bar{\zeta}_2)$ on the imaginary $\zeta$-axis, have associated mode numbers $n_1 = 1$ and $n_2 = -1$. The branch points have been found numerically and are located at $\zeta_1 = (1/0.9) {\rm i} $ and $\zeta_2 = (1/1.058355921) {\rm i}$. The red dashed line represents the ``double condensate''. The corresponding classical solution is plotted in Fig.~\ref{fig:bionspinfield}.} \label{fig:bionspectral} \end{figure} To quantise the classical bion solution we demand the same reality condition as previously in the one-cut case. Notice however that bion solutions belong to maximally saturated states with total filling $\nu = \frac{1}{2}$. Condensates appear to be a common feature at half filling. In describing a bion solution, there is no loss of generality in fixing the mode numbers of the two cuts to $\pm 1$, such that $\Delta n = 2$. Recall that in a general situation with two cuts being far apart, each cut can grow a condensate on its own. The bion case is different in that the two branch cuts reside close to each other and share a common condensate.
In fact, in the isotropic ferromagnet studied in Refs.~\cite{Bargheer_2008, Beisert_2003} the two-cut solution with mode numbers set to $\pm 1$ is known to result in a ``double condensate''. Led by this observation, we conjecture that the same phenomenon takes place presently in the case of bions; a ``double condensate'' emerges when a pair of branch cuts with mode numbers $\pm 1$ intersect with one another, thereby producing a logarithmic cut of `doubled' uniform density $ 2 {\rm i} $. Analogous objects which are twice as dense as ordinary Bethe strings have been previously found in Ref.~\cite{Beisert_2003} in the study of solutions to the isotropic Bethe equations. We proceed by semi-classically quantising the bion solution using the conjectured form of its contours with a double condensate, depicted in Fig.~\ref{fig:bionspectral} by the red dashed line, satisfying \begin{equation} \rho (\zeta) (1+ \delta \zeta^2 ) = 2 {\rm i} . \end{equation} We have been able to explicitly match the classical values of the filling fraction, momentum and energy: the two partial filling fractions add up exactly to one half, i.e. $\nu = \nu_1 + \nu_2 = \frac{1}{2}$, whereas the total momentum $P = 0 \, (\mathrm{mod} \, 2 \pi)$ and total energy $E = 3.96045$ (obtained by numerically integrating along the proposed contours) match those of a classical bion configuration, with $P = 0$ and $E = 3.960358(6)$. These results strongly indicate that we have indeed correctly identified the physical contours associated to a quantised bion. As discussed earlier in Section~\ref{subsec:examplebion}, kinks arise as a particular (soliton) limit of a bion solution in which the two branch points $\zeta_1 , \zeta_2$ on the imaginary axis coalesce at ${\rm i} / \epsilon$.
By inspecting this degeneration process at the level of semi-classical eigenstates, we find a uniform condensate with a double density of Bethe roots running along the imaginary axis between $-{\rm i}/ \epsilon$ and ${\rm i} / \epsilon$. We note that (anti)kinks are not compatible with periodic boundary conditions. In an infinitely extended quantum chain, however, the kink and antikink eigenstates represent extra degenerate ground states (with broken translational symmetry) of the XXZ ferromagnet in the gapped phase. Kink eigenstates have been derived in Refs.~\cite{Reffert_2009, Reffert_2010} using a curious correspondence between the XXZ quantum chain and the problem of a melting crystal corner. This derivation, however, does not require any use of the Bethe quantisation condition and consequently cannot reveal the internal magnon structure of the kink. It would be valuable to devise a method to extract the corresponding numerical solution to the Bethe equations for large system sizes. The task of solving the anisotropic Bethe equations~\eqref{betheeq} in the vicinity of half filling remains quite challenging at the moment. Perhaps one could get some hints by first scanning through the complete list of exact eigenstates for relatively small system sizes (typically of order $L\sim 10$, using e.g. the techniques proposed in Refs.~\cite{Hagemans_2007, Marboe_2017}) and looking for traces of finite-size bions. At this juncture, our statements regarding the kink solution thus remain to an extent conjectural. \section{Semi-classical norms and overlaps} \label{sec:normandoverlap} In this section we outline how to compute an overlap between two semi-classical Bethe eigenstates. We provide closed formulae for (i) the Gaudin norm and (ii) the Slavnov inner product between an on-shell and an off-shell Bethe state \cite{Slavnov_1989}. There are two possible routes to achieve this. The first one, proposed by Gromov et al.
in~\cite{Gromov_2012}, is to perform coarse-graining directly at the level of the general determinant formula for a specific finite-gap density resolvent. The other approach, developed for the isotropic (XXX) Heisenberg model by Kostov and Bettelheim in Refs.~\cite{Kostov_2012_PRL, Kostov_2012, Bettelheim_2014}, makes use of functional integration techniques combined with complex analysis. Both methods provide the leading (i.e. classical) contributions to the overlaps and norms. We shall not repeat the derivations here but instead only succinctly summarise the main formulae for the model of our interest. Moreover, in Sections \ref{subsubsec:numericalcheck1} and \ref{subsubsec:numericalcheck2} we provide a direct numerical confirmation based on finite-size analysis. \subsection{Gaudin norm} \label{subsec:gaudinnorm} The method proposed by Kostov in Refs.~\cite{Kostov_2012_PRL, Kostov_2012} has already been generalised to the case of the anisotropic Heisenberg model in \cite{Jiang_2017, Babenko_2017}. To compute the Gaudin norm we instead employ the more direct approach of \cite{Gromov_2012}, which we generalise here by including the interaction anisotropy. The idea is to convert the logarithm of the Gaudin determinant into a Riemann summation which, after taking the $L \to \infty$ limit, corresponds to complex integration along the physical contours which support the Bethe roots. The Gaudin norm of a finite-volume Bethe eigenstate, \begin{equation} \mathcal{N} = \langle \{ \vartheta \} | \{ \vartheta \} \rangle, \end{equation} grows exponentially in system size, i.e. $\log \mathcal{N} \sim \mathcal{O}(L)$ at leading order in $L$.
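Schematically, the coarse-graining step underlying this computation trades discrete sums over Bethe roots for integrals against the root density (a sketch, with $f$ any test function smooth near the contour and $\rho$ normalised as $\rho(\zeta) = \frac{1}{L} \sum_j \delta(\zeta - \zeta_j)$):

```latex
\begin{equation}
\frac{1}{L} \sum_{j=1}^{M} f(\vartheta_j)
= \int_{\mathcal{C}} {\rm d} \zeta \, \rho(\zeta)\, f(\zeta)
+ \mathcal{O}\left(\frac{1}{L}\right) .
\end{equation}
```

Applied to the double sum contained in the logarithm of the Gaudin determinant, this rule produces the contour integrals entering the volume-law coefficient.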
In the $\zeta$-plane parametrisation, we find the following explicit form \begin{equation} \begin{split} \log \mathcal{N} &= C_1 L + o (L) , \\ C_1 &= \int_\mathcal{C} {\rm d} \zeta \left[{\rm i} \pi \ell (1 + \delta\, \zeta^2) \rho(\zeta) + 2 \int_0^{\rho(\zeta)(1+\delta \, \zeta^2)} \frac{{\rm d} \xi}{1+\delta \, \xi^2} \log \big( 2 \sinh(\pi \xi) \big) \right] , \end{split} \label{GaudinnormXXZ} \end{equation} where we have expressed the volume-law coefficient $C_1$ in terms of the resolvent density $\rho(\zeta)$ with support on a union of contours $\mathcal{C} = \cup_j \mathcal{C}_j$. We note that the dominant subleading correction to the above formula is quite subtle and has the form $\log \mathcal{N}(L) = C_1 \, L + \frac{1}{2} \log L + \mathcal{O}(L^0)$, with $C_1 = \mathcal{O} (1)$, as discussed in Ref.~\cite{Gromov_2012}. Several numerical verifications (both with and without a condensate) are presented in Appendix~\ref{subsubsec:numericalcheck1}. \subsection{Slavnov overlap} \label{subsec:slavnovoverlap} To express the semi-classical overlaps we follow instead the functional integral approach devised in Refs.~\cite{Kostov_2012_PRL, Kostov_2012}. This method does not rely on the clustering properties of the Gaudin determinant as in Ref.~\cite{Gromov_2012}. Here we merely quote the final result of \cite{Jiang_2017} (cf. formula (1.5) therein) for the anisotropic Landau--Lifshitz field theory \begin{equation} \log \langle \{ \vartheta_1 \} | \{ \vartheta_2 \} \rangle \simeq \oint_{\mathcal{C}_{1 \cup 2}} \frac{ {\rm d} \zeta }{2 \pi {\rm i} } \log \Phi_{\sqrt{\eta}} \big(\mathfrak{p}_1 (\zeta) + \mathfrak{p}_2 (\zeta) + \pi \big) , \label{Jiang_formula} \end{equation} involving two classical quasimomenta $\mathfrak{p}_{1}(\zeta)$ and $\mathfrak{p}_{2}(\zeta)$ that correspond to the semi-classical Bethe eigenstates $\ket{\{ \vartheta_1 \}}$ and $\ket{\{ \vartheta_2 \}}$~\footnote{Only one of the states has to be on-shell, i.e.
a solution to the Bethe ansatz equations~\eqref{betheeq}.}, respectively. The function $\Phi_{b} (z)$ is the quantum dilogarithm \cite{Faddeev_1994}, defined through the following integral representation \begin{equation} \Phi_b (z) = \exp \left[ \frac{{\rm i}}{2} \int_{\mathbb{R} + {\rm i} 0} \frac{{\rm d} t}{t} \frac{e^{zt} }{\sin (b^2 t) \sinh (\pi t)} \right] . \end{equation} This function can be understood as a `quantum deformation'~\footnote{The word `quantum' here refers to the $q$-deformation parameter of `quantum calculus' which should not be confused with the $q$-parameter of the quantum algebra $\mathcal{U}_{q}(\mathfrak{su}(2))$ of the underlying anisotropic Heisenberg chain.} of the ordinary dilogarithm function $\mathrm{Li}_2 (z)$, to which it reduces in the isotropic limit $\delta \to 0$. The contour prescription in Eq.~\eqref{Jiang_formula} is such that $\mathcal{C}_{1,2}$ wrap tightly around the supports of the respective density resolvents, cf. Ref.~\cite{Kostov_2012}. Further simplification of the above formula can be made in the semi-classical limit $\eta \to \epsilon/L$, which implies $b = \sqrt{\eta} \to 0$. In this limit the quantum dilogarithm simplifies to~\cite{Kashaev_2011} \footnote{Beware that the definition of $\Phi_b(z)$ in~\cite{Kashaev_2011} differs from the definition in \cite{Jiang_2017} and the one used here.} \begin{equation} \log \Phi_b (z) = \frac{{\rm i} L}{\sqrt{\delta}} \mathrm{Li}_2 (- e^{{\rm i} z} ) + \mathcal{O} (L^0) , \quad b \to 0 , \end{equation} and accordingly, at the leading order $\mathcal{O}(L)$, the logarithmic overlap is approximately \begin{equation} \begin{split} & \lim_{\eta \to \epsilon / L} \log \langle \{ \vartheta \}_1 | \{ \vartheta\}_2 \rangle = C_2 L + o (L) , \\ & C_2 = \frac{1}{\sqrt{\delta}}\oint_{\mathcal{C}_{1 \cup 2}} \frac{{\rm d} \zeta}{2 \pi (1 + \delta \, \zeta^2)} \mathrm{Li}_2 \left[ e^{{\rm i} \big(\mathfrak{p}_1 (\zeta ) + \mathfrak{p}_2 (\zeta ) \big)} \right].
\end{split} \label{Kostovformulaoverlap} \end{equation} Similarly to the Gaudin norm, the dominant subleading correction is of the form $\log \langle \{ \vartheta \}_1 | \{ \vartheta\}_2 \rangle = C_2 L + \frac{1}{2} \log L + \mathcal{O} (L^0)$. For coinciding sets of rapidities, this correctly reproduces the leading order expression for the Gaudin norm, \begin{equation} \lim_{\eta \to \epsilon / L} \log \langle \{ \vartheta \} | \{ \vartheta \} \rangle \simeq \frac{L}{\sqrt{\delta}}\oint_{\mathcal{C}} \frac{{\rm d} \zeta}{\pi (1 + \delta \, \zeta^2)} \mathrm{Li}_2 \left( e^{2 {\rm i} \mathfrak{p} (\zeta ) } \right) , \end{equation} which can be readily reconciled with Eq.~\eqref{GaudinnormXXZ} upon expressing the resolvent density in terms of the quasi-momentum, ${\rm i} \pi \rho (\zeta) = \pi n - \mathfrak{p} (\zeta)$, and performing the following integral \begin{equation} \int_{0}^{\rho (\zeta) (1+\delta \zeta^2)} \frac{{\rm d} \xi}{1+\delta \xi^2} \log \big( 2 \sinh(\pi \xi) \big) = \frac{1}{2\pi}\mathrm{Li}_2 \left(e^{2 {\rm i} \mathfrak{p} (\zeta)} \right) + \frac{\pi}{2} \rho^2 (\zeta) (1+\delta \zeta^2)^2 - \frac{\pi}{12} . \end{equation} There is a practical limitation of Eq.~\eqref{Kostovformulaoverlap} that concerns the placement of integration contours, acknowledged previously in Ref.~\cite{Kostov_2012}. The requirement is that the integration contours $\mathcal{C}$ must avoid crossing any branch cut of the function in the integrand. This issue is presently further complicated by additional logarithmic branch cuts due to the dilogarithm function. This shortcoming makes the numerical verification a challenging task. We nonetheless managed to verify its validity in the special case of overlaps with a vacuum descendant (see Appendix~\ref{subsubsec:numericalcheck2}).
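The $\xi$-integral appearing here has a particularly clean analogue in the isotropic limit, where the weight $1/(1+\delta\xi^2)$ drops out: $\int_0^y \log\big(2\sinh \pi\xi\big)\,{\rm d}\xi = \frac{\pi y^2}{2} + \frac{1}{2\pi}\,\mathrm{Li}_2\big(e^{-2\pi y}\big) - \frac{\pi}{12}$ for $y>0$. A stdlib-only numerical sketch of this building block (the quadrature scheme and sample points are our choices, not taken from the text):

```python
import math

def li2(z, terms=400):
    """Dilogarithm Li_2(z) via its Taylor series; adequate for |z| < 1."""
    return sum(z**k / k**2 for k in range(1, terms + 1))

def lhs(y, n=4001):
    """integral_0^y log(2 sinh(pi xi)) d xi, with the log-singularity split off:
    log(2 sinh(pi xi)) = log(2 pi xi) + g(xi), where g is smooth and g(0) = 0."""
    def g(xi):
        return 0.0 if xi == 0.0 else math.log(math.sinh(math.pi * xi) / (math.pi * xi))
    h = y / (n - 1)
    xs = [i * h for i in range(n)]
    smooth = h * (sum(g(x) for x in xs) - 0.5 * (g(0.0) + g(y)))  # trapezoidal rule
    return y * (math.log(2 * math.pi * y) - 1) + smooth           # exact log-part integral

def rhs(y):
    return math.pi * y**2 / 2 + li2(math.exp(-2 * math.pi * y)) / (2 * math.pi) - math.pi / 12
```

The two sides agree to quadrature accuracy (roughly $10^{-8}$ at these resolutions) across the range of $y$ relevant for moderate densities.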
\section{Correlation functions} \label{sec:correlationfunc} We have thus far demonstrated that the knowledge of physical contours not only proves useful in calculating the spectrum, overlaps and norms of semi-classical Bethe eigenstates, but also facilitates the semi-classical quantisation of the finite-gap solutions. On the other hand, we have not yet addressed the expectation values of physical observables. This section is devoted to discussing some properties of correlation functions at the semi-classical level. Despite integrability, the task of computing exact expectation values of physical observables, including their correlation functions, appears quite challenging. There have already been numerous works on this subject, employing either the form-factor expansion or bootstrap methods, e.g.~\cite{Korepin_1993, Smirnov_1992, Jimbo_1994, Hagemans_2006, Klauser_2011}. Here, however, we are particularly interested in the semi-classical regime where those methods are not directly applicable. We are specifically interested in whether the aforementioned classical--quantum correspondence for correlation functions, established analytically in the introductory section~\ref{sec:HO} on the toy example of the harmonic oscillator, holds on more general grounds. A direct generalisation of this principle from a single-particle paradigm is complicated by the fact that integrable field theories governed by PDEs involve many degrees of freedom which, moreover, couple (i.e. interact) in a non-trivial fashion. One viable empirical approach to obtain correlation functions in the classical regime (enabling a comparison with their quantum counterparts) is to build on the semi-classical form-factor approach developed by Smirnov~\cite{Smirnov_1998}. Accordingly, the matrix elements would be represented as integrals over $\gamma$ variables, see Eqs. (33)--(36) in \cite{Smirnov_1998}. Let us consider a periodic solution associated with one branch cut.
It is described by a single dynamic variable $\gamma$. We further replace the integration over the variable $\gamma$ with the integration over one period, similar to phase-space averaging \cite{Flaschka_1980}. In addition, we compute numerically (for small system sizes) the quantum correlators in the eigenstate that corresponds to the cut. \begin{figure} \centering \begin{minipage}{.49\linewidth} \includegraphics[width=\linewidth]{figures/sx_sx_correlation_function_filling_fraction_1_3.pdf} \caption*{\centering(a)} \end{minipage} \begin{minipage}{.49\linewidth} \includegraphics[width=\linewidth]{figures/sz_sz_correlation_function_filling_fraction_1_3.pdf} \caption*{\centering(b)} \end{minipage} \caption{(a) Quantum correlation functions $\langle \hat{\sigma}^{\rm x}_j \hat{\sigma}^{\rm x}_{j+n} \rangle$ (shown in green circles for $L=15$ and black crosses for $L=18$) versus averaged classical correlation functions (red dashed line), with anisotropy $\delta=1$ and classical period $\ell = 1$. (b) The same comparison for longitudinal correlators $\langle \hat{\sigma}^{\rm z}_j \hat{\sigma}^{\rm z}_{j+n} \rangle$.} \label{fig:correlation_function} \end{figure} We focus only on static (i.e. equal-time) correlation functions, considering one-point functions $\langle \hat{\sigma}^{\rm x}_j \rangle$ and $\langle \hat{\sigma}^{\rm z}_j \rangle$ and two-point functions $\langle \hat{\sigma}^{\rm x}_j \hat{\sigma}^{\rm x}_k \rangle$ and $\langle \hat{\sigma}^{\rm z}_j \hat{\sigma}^{\rm z}_k \rangle$. Specifically, we set the filling fraction to $\nu = \frac{1}{3}$ and the mode number to $n=1$. We present our results for system sizes $L=15$ and $18$, with the number of down-turned spins being $5$ and $6$, respectively. The corresponding averages are denoted as $\langle \cdots \rangle_5$ and $\langle \cdots \rangle_6$.
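The quantum side of such comparisons requires only exact diagonalisation in a fixed-magnetisation sector. The following minimal sketch is illustrative only: the chain length, anisotropy and the choice of the lowest state in the sector are our assumptions, not the specific Bethe eigenstate used in Fig.~\ref{fig:correlation_function}.

```python
import numpy as np
from itertools import combinations

L, M, Delta = 9, 3, 1.5   # sites, down spins, easy-axis anisotropy (illustrative values)

# basis of the fixed-magnetisation sector: bit j set <=> spin j is down
states = [sum(1 << i for i in c) for c in combinations(range(L), M)]
index = {s: k for k, s in enumerate(states)}
dim = len(states)

# ferromagnetic XXZ Hamiltonian, H = -sum_j (sx sx + sy sy + Delta sz sz), periodic b.c.
H = np.zeros((dim, dim))
for k, s in enumerate(states):
    for j in range(L):
        jn = (j + 1) % L
        bj, bn = (s >> j) & 1, (s >> jn) & 1
        H[k, k] += -Delta * (1 - 2 * bj) * (1 - 2 * bn)   # sigma^z sigma^z term
        if bj != bn:                                      # flip-flop term, element 2
            s2 = s ^ (1 << j) ^ (1 << jn)
            H[index[s2], k] += -2.0

E, V = np.linalg.eigh(H)
psi = V[:, 0]                                             # lowest state in the sector

def corr_zz(n):
    """<sigma^z_0 sigma^z_n> in the state psi."""
    w = np.array([(1 - 2 * ((s >> 0) & 1)) * (1 - 2 * ((s >> n) & 1)) for s in states])
    return float(np.dot(w, psi ** 2))
```

A convenient correctness check is the sum rule $\sum_{n} \langle \hat{\sigma}^{\rm z}_0 \hat{\sigma}^{\rm z}_n \rangle = (L - 2M) \langle \hat{\sigma}^{\rm z}_0 \rangle$, which holds exactly in any state of the sector.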
The coordinate Bethe ansatz wavefunctions can be represented in the local eigenstate basis of $\hat{\sigma}^{\rm z}$ by solving the Bethe ansatz equations \eqref{betheeq}. An important thing to keep in mind is that quantum states (wavefunctions) are eigenfunctions of the momentum operator, and consequently expectation values of one-point observables do not depend on the spatial coordinate (lattice index). Two-point functions, on the other hand, can only be a function of the distance. In contrast, classical spin-field configurations exhibit non-uniform dependence on the spatial coordinate $x$ and can thus be compared to the quantum correlation functions evaluated on semi-classical eigenstates only after an appropriate phase-space averaging. In particular, for a periodic classical spin-field configuration with period $\ell$ and an operator (product of operators) $O[\mathcal{S}]$ that functionally depends on the spin configuration $\mathcal{S}$, we expect the corresponding correlation functions to take the form \begin{equation} \langle \{ \vartheta \} | O[\mathcal{S}]| \{ \vartheta \} \rangle \simeq \left\langle O[\mathcal{S}]\right\rangle_{\rm cl} = \frac{1}{\ell} \int_0^{\ell} {\rm d} x \, O[\mathcal{S} (x) ] , \end{equation} with $|\{ \vartheta \} \rangle$ denoting the corresponding semi-classical quantum eigenstate in the thermodynamic limit $L \to \infty$. The classical spin configuration for $n=1$ and $\nu = \frac{1}{3}$ reads (with transverse amplitude $2\sqrt{\nu - \nu^2}$ fixed by the unit spin length) \begin{equation} \begin{split} & \mathcal{S}^{\rm z} (x ,t) = 1- 2 \nu = \frac{1}{3} , \\ & \mathcal{S}^{\rm x} (x, t) = 2 \sqrt{\nu - \nu^2} \cos (k x + w t) = \frac{2\sqrt{2}}{3} \cos \left[ \frac{2 \pi}{\ell} x +\frac{1}{3} \left( \frac{4 \pi^2}{\ell^2} +\delta \right) t \right].
\end{split} \end{equation} Therefore, for the one-point functions we find \begin{equation} \langle \hat{\sigma}^{\rm x}_j \rangle_5 = \langle \hat{\sigma}^{\rm x}_k \rangle_6 = 0 = \frac{1}{\ell} \int_0^\ell {\rm d} x \, \mathcal{S}^{\rm x} (x),\qquad \langle \hat{\sigma}^{\rm z}_j \rangle_5 = \langle \hat{\sigma}^{\rm z}_k \rangle_6 = \frac{1}{\ell }\int_0^\ell {\rm d} x \, \mathcal{S}^{\rm z} (x) = \frac{1}{3} , \end{equation} for $j=1,2,\ldots, 15$ and $k=1,2,\ldots,18$, thus confirming the correspondence. Our results for two-point functions are presented in Fig.~\ref{fig:correlation_function}. In this case we notice some discrepancies between the classical expectation values, \begin{equation} \langle \hat{\sigma}^{\rm x}_j \hat{\sigma}^{\rm x}_{j+n} \rangle \simeq \frac{1}{\ell} \int_0^\ell {\rm d} x \, \mathcal{S}^{\rm x} (x) \mathcal{S}^{\rm x} \left(x + \frac{n \ell}{L}\right) = 2(\nu - \nu^2 ) \cos \left( \frac{2 \pi n}{L} \right), \end{equation} \begin{equation} \langle \hat{\sigma}^{\rm z}_j \hat{\sigma}^{\rm z}_{j+n} \rangle \simeq \frac{1}{\ell} \int_0^\ell {\rm d} x \, \mathcal{S}^{\rm z} (x) \mathcal{S}^{\rm z} \left(x + \frac{n \ell}{L}\right) = ( 1 - 2 \nu )^2, \end{equation} and their quantum counterparts, which we attribute to the finite number of spins. Indeed, instead of having a condensed contour of Bethe roots representing the branch cut in the complex plane, we consider solutions with at most $6$ Bethe roots. We nonetheless find it plausible that with increasing system size the deviations would gradually diminish, and we therefore expect to recover the classical result in the $L\to \infty$ limit.
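The classical side of these formulae is a one-line average; a short numerical sketch (sampling choices are ours) that reproduces the closed forms above, taking the transverse amplitude $2\sqrt{\nu - \nu^2}$ dictated by the unit spin length:

```python
import numpy as np

nu, ell, L = 1/3, 1.0, 18      # filling fraction, classical period, number of sites
N = L * 2000                   # samples per period, chosen divisible by L
x = np.arange(N) * ell / N
Sx = 2 * np.sqrt(nu - nu**2) * np.cos(2 * np.pi * x / ell)
Sz = np.full(N, 1 - 2 * nu)

def avg(a, b, n):
    """(1/ell) * integral of a(x) b(x + n*ell/L) over one period, via a circular shift."""
    return float(np.mean(a * np.roll(b, -n * (N // L))))
```

For every lattice separation $n$, `avg(Sx, Sx, n)` reproduces $2(\nu - \nu^2)\cos(2\pi n/L)$ and `avg(Sz, Sz, n)` gives $(1-2\nu)^2$, matching the classical expectation values quoted above.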
Finite-size effects also depend on the type of operators that appear in the correlator; in the case of $\langle \hat{\sigma}^{\rm z}_j \hat{\sigma}^{\rm z}_{j+n} \rangle$ the deviation from the asymptotic result is found to be larger than in the case of $\langle \hat{\sigma}^{\rm x}_j \hat{\sigma}^{\rm x}_{j+n} \rangle$, see Fig.~\ref{fig:correlation_function}. Here we have demonstrated the first step in understanding such correlation functions. A systematic and comprehensive numerical analysis of the correspondence and of finite-size corrections is left for future work. \section{Conclusion and outlook} \label{sec:conclusion} We have studied the structure of the semi-classical spectrum of the anisotropic Heisenberg spin-$1/2$ chain in the easy-axis regime with weak anisotropies. Using the framework of algebraic integrability, we have established that these semi-classical eigenstates emerge classically as interacting nonlinear spin waves governed by the Landau--Lifshitz field theory with uniaxial anisotropy. Firstly, we have expressed the asymptotic Bethe equations in the form of a singular integral equation for the spectral resolvent, which we subsequently recast in the form of a Riemann--Hilbert problem, providing jump discontinuity conditions for a double-valued complex function called the quasi-momentum. The latter encodes the moduli of hyperelliptic Riemann surfaces which provide complete information about the spectrum of nonlinear eigenmodes for a class of finite-gap solutions of the anisotropic Landau--Lifshitz ferromagnet. We have outlined the main ingredients of the algebro-geometric integration technique. The starting point of this approach is the usual Lax representation which realises an auxiliary linear problem of parallel transport on a smooth manifold with a flat connection.
In our formulation we made use of the adjoint representation, enabling us to parametrise the solutions in terms of squared polynomial eigenfunctions whose zeros contain information about the dynamical phases evolving on a finite-genus Riemann surface. We have demonstrated how their dynamics can be mapped to a linear evolution on the Liouville hypertorus using the Abel--Jacobi transform. Finally, individual components of the physical spin field can be retrieved with the aid of reconstruction formulae. We have implemented the finite-gap integration procedure for the two simplest classes of solutions: (i) the single-mode (one-cut) solutions, describing precessional motion around the anisotropy axis, and (ii) the two-mode (two-cut) solutions which take the shape of elliptic waves. Amongst the two-cut solutions, there are special elliptic solutions that describe bions, bound states of a kink and an antikink. In a particular singular limit, the bion solution degenerates into the static kink. One central result of our work is an algorithm for performing semi-classical quantisation of classical finite-gap solutions. In this respect, the key object is the density resolvent associated to the classical quasi-momentum. The spectral resolvent is supported on a union of one-dimensional segments in the complex plane which may be adopted as branch cuts of a Riemann surface of finite genus. By following the programme of Ref.~\cite{Bargheer_2008}, we described and implemented a numerical algorithm for determining the locations of physical cuts (associated with the density contours). Each branch cut is a magnon condensate that represents a nonlinear mode in the spectrum of the effective classical equation of motion.
In this view, semi-classical quantisation amounts to dissolving each branch cut of a finite-gap quasi-momentum into a large but finite number of magnetisation quanta (carried by magnons) with a prescribed accuracy; the resolvent density along each contour specifies a local density of magnon excitations. When the local density of magnons exceeds a critical threshold value, the physical contours experience a certain `non-perturbative effect' which leads to the formation of special condensates with uniform unit density of Bethe roots. A proper resolution of such situations necessitates taking quantum corrections into account. In this work we devote most attention to various formal properties of semi-classical eigenstates and other related mathematical underpinnings. We hope this can provide a foundation for further developments which would ultimately pave the way to physical applications. Particularly in the domain of out-of-equilibrium dynamics there has been tremendous progress recently in employing integrability techniques, which have enabled, among other things, the study of late-time relaxation dynamics from highly-excited many-body initial states, commonly known under the name of `quantum quenches'~\cite{Essler_2016} (see also Refs.~\cite{PhysRevLett.106.227203,PhysRevA.89.033601,PhysRevLett.115.157201}). A particularly useful tool in this regard is the functional integral representation, dubbed the Quench Action~\cite{Caux_2013, Caux_2016}, which exploits exact knowledge of thermodynamic overlap coefficients. Our hope is to obtain its semi-classical counterpart. Although general expressions for the overlap coefficients between a semi-classical eigenstate and an off-shell state are explicitly known from Refs.~\cite{Kostov_2012_PRL, Kostov_2012, Jiang_2017}, we have not yet been successful in employing them in practice.
We have nonetheless been able to provide several benchmarks for the computation of Gaudin norms and Slavnov overlaps for a few simple finite-gap solutions and found good convergence. The main difficulty when dealing with the overlap formulae was to satisfy the requirements of the contour prescription. Since the requirement of avoiding all branch cuts of the integrand does not appear easy to overcome, an alternative formula based solely on the resolvent densities (similar to that for the semi-classical limit of the Gaudin norm~\cite{Gromov_2012}) might be preferable. Overcoming this issue would be a stepping stone towards formulating a quench problem at the level of semi-classical states, a prominent example of which would be the semi-classical version of the domain-wall melting which has recently been solved analytically by the authors in \cite{Gamayun_2019} using the inverse scattering transformation. This could help to solidify the classical--quantum correspondence also brought forward in \cite{Gamayun_2019} and corroborated in \cite{Misguich_2019}. Lastly, we briefly examined the structure of correlation functions in the semi-classical eigenstates of the Heisenberg XXZ chain and compared them to their classical counterparts, namely correlators of classical fields given by finite-gap solutions. We found empirical evidence for a classical--quantum correspondence between static multipoint correlators on both sides, in alignment with the earlier results of Ref.~\cite{Smirnov_1998}. Importantly, the semi-classical correlators can only be compared to correlators of classical fields after computing phase-space averages, as demonstrated on a few basic examples. While there are strong indications that such a correspondence should hold generally in quantum integrable models that possess (integrable) classical limits, a proof is still lacking at the moment.
We believe that it would be fruitful to investigate this matter in the framework of quantum separated variables, see e.g. \cite{Sklyanin_1995, Cavaglia_2019, Gromov_2020}, to learn how classical separated variables on Riemann surfaces \cite{babelon_bernard_talon_2003} emerge from the underlying microscopic quantum model. In our opinion, quantum integrable lattice models and spin chains provide paradigmatic examples to address these aspects. \section*{Acknowledgements} We are very grateful to Jean-S\'{e}bastien Caux for his involvement at the starting stage of the project and numerous discussions. We thank Filippo Colomo, Andrea De Luca, Andrii Liashyk, Gr{\'e}goire Misguich, Vincent Pasquier, and Dmytro Volin for valuable discussions. We are greatly indebted to Nikolay Gromov and Ivan Kostov for sharing their insights on various facets of the problem. Y.M. thanks Vincenzo Alba and Alvise Bastianello for useful suggestions on numerical implementations. Y.M. and O.G. acknowledge support from the European Research Council under ERC Advanced grant 743032 DYNAMINT. E.I. is supported by the research programme P1-0402 of the Slovenian Research Agency.
\begin{center} {\huge \bf Appendices } \end{center} \begin{appendix} \section{Riemann--Hilbert problem in the $\zeta$-plane} \label{app:zetaRH} In order to study the formation of condensates, the Riemann--Hilbert problem is most conveniently written in terms of the spectral parameter $\zeta = 1/\mu$, namely \begin{equation} \mathfrak{p} (\zeta + {\rm i} 0) + \mathfrak{p} (\zeta - {\rm i} 0) = 2 \pi n_j , \quad \zeta \in \mathcal{C}_j , \end{equation} where $\mathcal{C}_j$ denotes the $j$-th branch cut in the $\zeta$-plane, whereas the quasi-momentum $\mathfrak{p}(\zeta)$ is defined as \begin{equation} \mathfrak{p} (\zeta) = G(\zeta ) - \frac{\ell }{2 \zeta} = \ell \int {\rm d} \xi \, \tilde{\mathcal{K}}_{\delta} (\zeta, \xi) \rho (\xi) - \frac{\ell }{2 \zeta} , \end{equation} with the integration kernel \begin{equation} \tilde{\mathcal{K}}_{\delta} (\zeta, \xi) = \frac{1 + \delta \xi \zeta}{\zeta - \xi } . \end{equation} The density (of the Bethe roots) is accordingly given by \begin{equation} \rho (\zeta) = \frac{\mathfrak{p} (\zeta + {\rm i} 0 ) - \mathfrak{p} (\zeta - {\rm i} 0)}{2 {\rm i} \pi \ell (1 + \delta \zeta^2)} , \quad \zeta \in \mathcal{C}_j. \end{equation} Note that the orientation of integration along $\mathcal{C}_j$ is now opposite to before, i.e. it goes from the branch point with negative imaginary part to the one with positive imaginary part. \section{Finite-size corrections to the Riemann--Hilbert problem} \label{app:finitesizeRH} Here we outline how to take the semi-classical limit of the logarithm of $Q_{j}^{[\pm 2]}$. The first step is to split the term into an anomalous part and a normal part~\cite{Beisert_2005, Hernandez_2005}, i.e.
\begin{equation} \begin{split} \log Q_{j}^{[\pm 2]} (\vartheta_j ) & = \sum_{k \neq j }^M \log \sin (\vartheta_j - \vartheta_k \pm {\rm i} \eta ) \\ & = \sum_{0 < | k-j | \leq K } \log \sin (\vartheta_j - \vartheta_{k} \pm {\rm i} \eta ) + \sum_{| k-j | > K } \log \sin (\vartheta_j - \vartheta_{k} \pm {\rm i} \eta ) , \end{split} \end{equation} where the parameter $K$ is a cut-off with the following properties, \begin{equation} \vartheta_j - \vartheta_{k} \sim \begin{cases} \mathcal{O} \left( 1/L \right), \quad |k - j| \leq K\\ \mathcal{O} \left( 1 \right),\quad |k - j| > K\\ \end{cases}. \end{equation} We denote the anomalous part as \begin{equation} \log Q_{j}^a (\vartheta_j \pm {\rm i} \eta ) = \sum_{0 < | k-j | \leq K } \log \sin (\vartheta_j - \vartheta_{k} \pm {\rm i} \eta ) , \end{equation} while the normal part is \begin{equation} \log Q_{j}^n (\vartheta_j \pm {\rm i} \eta ) = \sum_{| k-j | > K } \log \sin (\vartheta_j - \vartheta_{k} \pm {\rm i} \eta ) . \end{equation} For the normal part, we can perform the same expansion as in Eq.~\eqref{Q0expansion}, namely \begin{equation} \begin{split} \log Q_{j}^n (\vartheta_j \pm {\rm i} \eta ) = \log Q_{j}^n (\vartheta_j ) \pm {\rm i} \eta \frac{d }{d \vartheta} \log Q_{j}^n (\vartheta ) \rvert_{\vartheta = \vartheta_j} \\ - \frac{\eta^2}{2} \frac{d^2 }{d \vartheta^2} \log Q_{j}^n (\vartheta ) \rvert_{\vartheta = \vartheta_j} + \mathcal{O} \left( \frac{1}{L^2} \right) , \end{split} \end{equation} and \begin{equation} {\rm i} \eta \frac{d }{d \vartheta} \log Q_{j}^n (\vartheta ) \rvert_{\vartheta = \vartheta_j} = \frac{\epsilon \ell}{L}\sum_{ |k - j| > K } \frac{1}{\tan (\vartheta_j - \vartheta_{k})} = \frac{\ell}{L} \sum_{ |k - j| > K } \frac{\mu_j \mu_k + \delta }{\mu_j - \mu_k} .
\end{equation} Combining the two parts, we obtain \begin{equation} \log Q_{j}^n (\vartheta_j + {\rm i} \eta ) - \log Q_{j}^n (\vartheta_j - {\rm i} \eta ) = \frac{2 \ell}{L } \sum_{ |k - j| > K } \frac{\mu_j \mu_k + \delta }{\mu_j - \mu_k} + \mathcal{O} \left( \frac{1}{L^2} \right) . \end{equation} Meanwhile, for the anomalous part, denoting $m = k-j$, we have \begin{equation} \begin{split} \log Q_{j}^a (\vartheta_j + {\rm i} \eta ) - \log Q_{j}^a (\vartheta_j - {\rm i} \eta ) & = \sum_{0< | m | \leq K} \log \frac{\sin (\vartheta_j - \vartheta_{j+m} + {\rm i} \eta)}{\sin (\vartheta_j - \vartheta_{j+m} - {\rm i} \eta)} \\ &= \sum_{0< | m | \leq K} \log \frac{L (\mu_j - \mu_{j+m}) + {\rm i} \ell (\mu_j^2 + \delta) }{L (\mu_j - \mu_{j+m}) - {\rm i} \ell (\mu_j^2 + \delta)} . \end{split} \end{equation} We can develop an expansion \begin{equation} L \mu_{j+m} \sim c_1 L + c_2 m + \frac{1}{2} \frac{c_3 m^2}{L} + \mathcal{O} \left( \frac{1}{L^2} \right) , \quad |m| \leq K , \end{equation} where all the ``constants'' can be expressed in terms of the density $\rho(\mu)$, i.e. \begin{equation} c_1 = \mu_j , \quad c_2 = \frac{1}{\rho (\mu_j)} , \quad c_3 = - \frac{\rho^\prime (\mu_j)}{\rho (\mu_j)^3} , \label{constants_finitesize} \end{equation} and \begin{equation} \rho (\mu) = \frac{1}{L} \sum_{j=1}^M \delta (\mu - \mu_j ) , \quad \rho(\mu) \simeq \frac{1}{L} \frac{{\rm d} j}{{\rm d} \mu} .
\end{equation} By combining the $m$-th and $(-m)$-th terms in the sum, we can express the leading order of the sum as \begin{equation} \sum_{m=1}^K \frac{1}{{\rm i}} \left( \log \frac{L(\mu_j - \mu_{j-m}) + {\rm i} \ell ( \mu_j^2 + \delta ) }{L(\mu_j - \mu_{j-m}) - {\rm i} \ell (\mu_j^2 + \delta )} + \log \frac{L(\mu_j - \mu_{j+m}) + {\rm i} \ell (\mu_j^2 + \delta ) }{L(\mu_j - \mu_{j+m}) - {\rm i} \ell (\mu_j^2 + \delta )} \right) , \end{equation} using \begin{equation} \begin{split} & \frac{1}{{\rm i}} \left( \log \frac{L(\mu_j - \mu_{j-m}) + {\rm i} \ell (\mu_j^2 +\delta) }{L(\mu_j - \mu_{j-m}) - {\rm i} \ell (\mu_j^2 +\delta)} + \log \frac{L(\mu_j - \mu_{j+m}) + {\rm i} \ell (\mu_j^2 + \delta) }{L(\mu_j - \mu_{j+m}) - {\rm i} \ell (\mu_j^2 +\delta)} \right) \\ &= \frac{1}{{\rm i}} \log \frac{c_2^2 m^2 - [{\rm i} \ell (\mu_j^2 + \delta) - \frac{c_3 m^2}{2 L}]^2}{c_2^2 m^2 - [{\rm i} \ell (\mu_j^2 + \delta) + \frac{c_3 m^2}{2 L}]^2} \\ & = \frac{2 c_3 \ell (\mu_j^2 + \delta) }{c_2^2 L} \left( 1 - \frac{1}{\frac{c_2^2 m^2}{\ell^2 (\mu_j^2 + \delta)^2} +1 } \right) + \mathcal{O} \left( \frac{1}{L^2} \right) . \end{split} \end{equation} The first part can be combined with the sum for $|m| > K$, since \begin{equation} \frac{2 \ell (\mu_j \mu_{j-m} + \delta)}{L(\mu_j - \mu_{j-m})} + \frac{2 \ell (\mu_j \mu_{j+m} + \delta )}{L(\mu_j - \mu_{j+m})} \simeq \frac{2 c_3 \ell (\mu_j^2 +\delta )}{c_2^2 L} . \end{equation} Taking the limit $K\to \infty$ (beware that $K / L \to 0$), for the second part we have \begin{equation} - \sum_{m=1}^{\infty} \frac{2 c_3 \ell (\mu_j^2 + \delta )}{c_2^2 L \left[ \frac{c_2^2 m^2}{\ell^2 (\mu_j^2 +\delta )^2} +1 \right] } = \frac{c_3 \ell (\mu_j^2 + \delta)}{c_2^2 L} \left[ 1 - \frac{\pi \ell (\mu_j^2 + \delta)}{c_2} \coth \left( \frac{\pi \ell (\mu_j^2 + \delta )}{c_2} \right) \right] .
\label{finitesize1} \end{equation} Substituting the values from Eq.~\eqref{constants_finitesize}, we obtain the finite-size correction in Eq.~\eqref{finite_size_mu}. In addition, the finite-size correction in terms of the $\zeta$ variable takes the form \begin{equation} \frac{\pi \rho^\prime (\zeta) \ell^2 (1 + \delta \zeta^2 )^2}{L} \coth \left[ \pi \ell (1 + \delta \zeta^2) \rho (\zeta) \right] + \mathcal{O} \left( \frac{1}{L^2 } \right) . \label{finite_size_zeta} \end{equation} \section{Useful formulae for elliptic functions} \label{app:ellipticfunc} We collect several useful functions and formulae used in the derivations in Section~\ref{subsec:examplebion}. We begin by defining the complete elliptic integral of the first kind \begin{equation} K ( {\rm k}^2 ) = \int_0^1 \frac{{\rm d} x}{\sqrt{(1 - x^2) (1 - {\rm k}^2 x^2 )}}. \label{ellipticK} \end{equation} The Jacobi elliptic function $\mathrm{sn} (x, {\rm k}^2 )$ is defined as the inverse of the incomplete elliptic integral of the first kind, \begin{equation} {\rm w} = \mathrm{sn} (x , {\rm k}^2 ) , \quad x = \int_{0}^{\rm w} \frac{{\rm d} z }{\sqrt{(1 - z^2) (1 - {\rm k}^2 z^2 )}} , \label{ellipticsn} \end{equation} and, without ambiguity, we can put $\mathrm{sn} (x, {\rm k}^2 ) = : \mathrm{sn} (x)$. Other types of Jacobi elliptic functions can be defined in a similar way, \begin{equation} {\rm w} = \mathrm{cn} (x , {\rm k}^2 ) , \quad x = \int_{\rm w}^{1} \frac{{\rm d} z }{\sqrt{(1 - z^2) (1 -{\rm k}^2 + {\rm k}^2 z^2 )}} , \end{equation} and \begin{equation} {\rm w} = \mathrm{dn} (x , {\rm k}^2 ) , \quad x = \int_{\rm w}^{1} \frac{{\rm d} z }{\sqrt{(1 - z^2) ( z^2 + {\rm k}^2 -1 )}} , \end{equation} such that \begin{equation} {\rm sn}^2 x + \mathrm{cn}^2 x = 1 , \quad {\rm k}^2 {\rm sn}^2 x + \mathrm{dn}^2 x = 1 .
\end{equation} When shifting the argument by the quarter period $K ({\rm k^2})$, we have \begin{equation} \mathrm{cn} \left( x + K ({\rm k^2}) \right) = - \sqrt{1 - {\rm k}^2 }\frac{\mathrm{s n} (x)}{\mathrm{d n} (x)} , \quad \mathrm{dn} \left( x + K ({\rm k^2}) \right) = \sqrt{1 - {\rm k}^2 }\frac{1}{\mathrm{d n} (x)} . \label{halfperiodelliptic} \end{equation} In addition, we also make use of theta functions to express the spin field. The most important one here is \begin{equation} \vartheta_3 (z , \tau ) = \sum_{n= -\infty}^{+\infty} e^{ {\rm i} \pi \tau n^2 + 2 {\rm i} z n } . \label{ellptictheta3} \end{equation} For a more detailed exposition and other properties of elliptic functions we refer the reader to Refs.~\cite{Akhiezer_1990, McKean_1997}. \section{Numerical tests} \label{app:tests} \subsection{Gaudin norm} \label{subsubsec:numericalcheck1} We present the data for several numerical checks. First, we compute the Gaudin norm of a one-cut solution without a condensate. Second, we include a condensate and consider two regimes: (i) the isotropic interaction with $\delta = 0$ and (a) the anisotropic regime with $\delta > 0$. Case (i) without a condensate has been studied in Ref.~\cite{Gromov_2012}, and we use it as a benchmark. Case (a) is more interesting, as it enables a non-trivial quantitative confirmation of our proposal for determining the location of a condensate or additional contours, cf. Sec.~\ref{subsec:1-cutwithcond}. We first present our numerical results for the Gaudin norm computed on the one-cut solution with mode number $n=1$ and filling fraction $\nu = 0.1$, for both cases (i) and (a). For this choice of parameters, there are no condensates involved.
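The elliptic-integral and theta-function definitions collected in the previous appendix can be sanity-checked numerically. Below is a minimal pure-Python sketch that compares the defining integral \eqref{ellipticK} (after the substitution $x = \sin\theta$) with two classical identities not derived here but standard: $K = \pi / (2\,\mathrm{agm}(1, \sqrt{1-{\rm k}^2}))$ and $K = \tfrac{\pi}{2}\, \vartheta_3(0, q)^2$ with nome $q = e^{-\pi K'/K}$.

```python
import math

def agm(a, b, tol=1e-15):
    # arithmetic-geometric mean
    while abs(a - b) > tol:
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return a

def K_quad(m, n=4000):
    # K(k^2) from the defining integral, after x = sin(theta):
    # K = int_0^{pi/2} dtheta / sqrt(1 - m sin^2 theta), with m = k^2
    h = (math.pi / 2) / n
    s = 0.0
    for i in range(n + 1):
        w = 1.0 if i in (0, n) else (4.0 if i % 2 else 2.0)  # Simpson weights
        s += w / math.sqrt(1.0 - m * math.sin(i * h) ** 2)
    return s * h / 3.0

def K_agm(m):
    # classical AGM representation of the complete elliptic integral
    return math.pi / (2.0 * agm(1.0, math.sqrt(1.0 - m)))

def theta3(q, nmax=50):
    # theta_3(0, tau) = sum_n q^{n^2}, with nome q = exp(i pi tau)
    return 1.0 + 2.0 * sum(q ** (n * n) for n in range(1, nmax + 1))

m = 0.5                                  # m = k^2
K, Kp = K_agm(m), K_agm(1.0 - m)         # K and the complementary K'
q = math.exp(-math.pi * Kp / K)          # nome
```

For $m = \tfrac12$ one has $K' = K$, so $q = e^{-\pi}$, and all three evaluations of $K$ agree to high precision.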
A linear fit on the finite-size numerical data yields \begin{equation} \log \mathcal{N} - \frac{1}{2} \log L = 0.00714654(1) \, L + 0.068763(9) , \quad \delta = 0 , \end{equation} and \begin{equation} \log \mathcal{N} - \frac{1}{2} \log L = 0.0083405(3) \, L + 0.083756(1) , \quad \delta = 1 . \end{equation} \begin{figure} \centering \includegraphics[width=.5\linewidth]{figures/log_Gaudin_norm_filling_fraction_0_1.pdf} \caption{Logarithm of the Gaudin norm, shown for the one-cut solution with $n=1$ and $\nu = 0.1$. Green circles (black crosses) show numerical results for $\delta = 0$ ($\delta = 1$), respectively. Linear fits are indicated by dashed lines.} \label{fig:loggaudintenth} \end{figure} \begin{figure} \centering \includegraphics[width=.5\linewidth]{figures/log_Gaudin_norm_filling_fraction_1_over_3.pdf} \caption{Logarithm of the Gaudin norm, shown for the one-cut solution with $n=1$, $\nu = \frac{1}{3}$. Green (black) circles show numerical results for $\delta = 0$ ($\delta = 1$). Linear fits are indicated by dashed lines.} \label{fig:loggaudinthird} \end{figure} Comparing these results to those obtained from the functional integral approach, cf. Eq.~\eqref{GaudinnormXXZ} (denoted by $C_1$ in the table below), we have \begin{center} \begin{tabularx}{0.8\textwidth} { | >{\centering\arraybackslash}X | >{\centering\arraybackslash}X | >{\centering\arraybackslash}X | } \hline & $C_1$ numerical & $C_1$ functional \\ \hline $\delta = 0$ & 0.00714654(1) & 0.007156(1) \\ \hline $\delta = 1$ & 0.0083405(3) & 0.008383(8) \\ \hline \end{tabularx} \end{center} We can see that the functional approach (only requiring the knowledge of the density contours) yields very accurate results in both the isotropic (i) and anisotropic (a) cases. Next, we analyse the cases (i) and (a) with an extra condensate, computing the Gaudin norm for a one-cut solution with mode number $n=1$ and filling fraction $\nu = \frac{1}{3}$.
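The fits quoted above amount to ordinary least squares of $y_L = \log\mathcal{N} - \frac{1}{2}\log L$ against $L$. A minimal sketch on synthetic data (the slope and intercept below are placeholder inputs, chosen for illustration only, not actual Gaudin-norm data):

```python
import math

def linear_fit(xs, ys):
    # ordinary least squares for y = c1 * x + c0
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    c1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    c0 = (sy - c1 * sx) / n
    return c1, c0

# synthetic finite-size data: log N = C1 * L + (1/2) log L + b  (placeholder C1, b)
C1, b = 0.00714654, 0.068763
Ls = list(range(40, 200, 8))
logN = [C1 * L + 0.5 * math.log(L) + b for L in Ls]

# subtract the (1/2) log L term, then fit the remainder linearly in L
ys = [y - 0.5 * math.log(L) for y, L in zip(logN, Ls)]
slope, intercept = linear_fit(Ls, ys)
```

On noiseless synthetic data the fit recovers the input slope and intercept to machine precision.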
In the functional integral approach, this amounts to computing the integral in Eq.~\eqref{GaudinnormXXZ} along a contour $\mathcal{C}$ comprising three parts, \begin{equation} \mathcal{C} = \mathcal{C}_1 + \mathcal{C}_2 + \mathcal{C}_{\rm cond}. \end{equation} Here $\mathcal{C}_1$ is the original contour with density $\rho_1(\zeta)$ in Eq.~\eqref{rhofluc} (green dashed line in panel (b) of Fig.~\ref{fig:1cutcond}), whereas the contour $\mathcal{C}_2$ carries density $\rho_2(\zeta)$ of Eq.~\eqref{rhofluc} (yellow dashed line in panel (b) of Fig.~\ref{fig:1cutcond}). Finally, $\mathcal{C}_{\rm cond}$ is the straight condensate contour with density $\rho_{\rm cond} = \frac{{\rm i}}{1 + \delta \zeta^2}$. This time, a linear extrapolation of the numerical finite-size data yields \begin{equation} \log \mathcal{N} - \frac{1}{2} \log L = 0.091273(6) \, L + 1.92813(4) , \quad \delta = 0 , \end{equation} and \begin{equation} \log \mathcal{N} - \frac{1}{2} \log L = 0.082597(1) \, L + 2.09809(5) , \quad \delta = 1, \end{equation} while comparing to the results of the functional integral approach, see Eq.~\eqref{GaudinnormXXZ} ($C_1$ in the table below), we obtain \begin{center} \begin{tabularx}{0.8\textwidth} { | >{\centering\arraybackslash}X | >{\centering\arraybackslash}X | >{\centering\arraybackslash}X | } \hline & $C_1$ numerical & $C_1$ functional \\ \hline $\delta = 0$ & 0.091273(6) & 0.091121(9) \\ \hline $\delta = 1$ & 0.082597(1) & 0.081761(2) \\ \hline \end{tabularx} \end{center} In spite of the extra condensate, the functional integral method yields very accurate results. More importantly, this check provides a robust confirmation of the additional condensate contour(s). We note that any different contour, e.g. the usual arc-shaped contour without a condensate, produces an appreciable numerical mismatch.
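Numerically, each piece of $\mathcal{C}$ can be handled by parametrizing the contour and applying quadrature. The helper below is a generic sketch of this step; since the integrand of Eq.~\eqref{GaudinnormXXZ} is not reproduced here, the correctness check uses the placeholder integrand $1/z$ around the unit circle, for which the exact answer $2\pi{\rm i}$ is known.

```python
import cmath

def contour_integral(f, gamma, n=4000):
    """Trapezoidal quadrature of the integral of f(z) dz along z = gamma(t), t in [0, 1]."""
    total = 0.0 + 0.0j
    dt = 1.0 / n
    for i in range(n):
        z0, z1 = gamma(i * dt), gamma((i + 1) * dt)
        # trapezoid on each chord of the discretized contour
        total += 0.5 * (f(z0) + f(z1)) * (z1 - z0)
    return total

# check contour: the unit circle; placeholder integrand 1/z gives 2*pi*i
circle = lambda t: cmath.exp(2j * cmath.pi * t)
val = contour_integral(lambda z: 1.0 / z, circle)
```

A sum of several contours (such as $\mathcal{C}_1 + \mathcal{C}_2 + \mathcal{C}_{\rm cond}$) is then simply the sum of the corresponding `contour_integral` calls, one per parametrized piece.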
\subsection{Slavnov overlap} \label{subsubsec:numericalcheck2} \begin{figure} \centering \begin{minipage}{.49\linewidth} \includegraphics[width=\linewidth]{figures/contour_demo_vacuum_descendant.pdf} \caption*{\centering(a)} \end{minipage} \begin{minipage}{.49\linewidth} \includegraphics[width=\linewidth]{figures/log_overlap_vacuum_descendantfilling_fraction_0_1.pdf} \caption*{\centering(b)} \end{minipage} \caption{(a) Integration contours for computing the overlap coefficient between a one-cut state and the vacuum descendant. The square-root branch cut of $\mathfrak{p} (\zeta)$ is indicated by the blue dashed line, whereas the additional branch cuts due to the dilogarithm function are marked by green dashed lines. The integration contour, marked in red (orange) on the upper (bottom) Riemann sheet, escapes to infinity on the bottom sheet. (b) Logarithm of the overlap coefficient between the one-cut solution with $n=1$ and $\nu = 0.1$ and a vacuum descendant state. Green circles (black crosses) show numerical data for interaction parameter $\delta = 0$ ($\delta = 1$), respectively, with dashed lines corresponding to linear fits.} \label{fig:vacuum_descendant} \end{figure} In this section, we present a numerical check of an overlap formula. We compute the overlap between a semi-classical Bethe eigenstate with a single cut and a vacuum descendant state, which is a ``domain-wall state'' of the form $| \downarrow \cdots \downarrow \uparrow \cdots \uparrow \rangle$. Again, we perform computations for both the isotropic case (i) at $\delta = 0$ and the anisotropic case (a), setting the anisotropy parameter to $\delta = 1$. The (unnormalised) overlap can be obtained from the general algebraic Bethe ansatz determinant formula due to Slavnov with the help of L'H\^{o}pital's rule (presented previously in e.g.
\cite{Mossel_2010}), \begin{equation} \mathcal{V} = \langle \phi | \{ \vartheta \} \rangle = \prod_{l=1}^M \sin \left( \vartheta_l + i \frac{\eta}{2} \right)^M \frac{\det H }{\prod_{j<k} \sin (\vartheta_j - \vartheta_k) } , \end{equation} \begin{equation} H_{a b} = \cot \left( \vartheta_a - i \frac{\eta}{2} \right)^b - \cot \left( \vartheta_a + i \frac{\eta}{2} \right)^b , \end{equation} taking the ``domain-wall state'' $| \phi \rangle =| \downarrow \cdots \downarrow \uparrow \cdots \uparrow \rangle $ with $M$ down-turned spins and lattice size $L$. In the isotropic limit, we obtain \begin{equation} \mathcal{V} = \langle \phi | \{ \lambda \} \rangle = \prod_{l=1}^M \left( \lambda_l + \frac{i}{2} \right)^M \frac{\det H }{\prod_{j<k} (\lambda_j - \lambda_k) } , \end{equation} \begin{equation} H_{a b} = \left( \frac{1}{\lambda_a - i /2} \right)^b - \left( \frac{1}{\lambda_a + i /2} \right)^b . \end{equation} In the following we ignore the phase and consider only the absolute value of the overlap. Similarly to the Gaudin norm, we expect the following behaviour at large $L$, \begin{equation} \log \left| \mathcal{V} \right| = C_2 L + \frac{1}{2} \log L + \mathcal{O} (1) , \quad C_2 = \mathcal{O} (1). \end{equation} The results of the computations are shown in Fig.~\ref{fig:vacuum_descendant}. By numerically fitting the finite-size data, we obtain \begin{equation} \log | \mathcal{V} | - \frac{1}{2} \log L = - 0.144278(5) L - 0.272812(7) , \quad \delta = 0 , \end{equation} and \begin{equation} \log | \mathcal{V} | - \frac{1}{2} \log L = - 0.143827(7) L - 0.254592(6) , \quad \delta = 1 . \end{equation} Before we can repeat the computation using Eq.~\eqref{Kostovformulaoverlap}, we need to find the ``quasi-momentum'' corresponding to the vacuum descendant $| \phi \rangle$.
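For reference, the isotropic-limit determinant formula above can be implemented directly; a brief numpy sketch (illustrative, evaluating the unnormalised overlap for a given set of roots):

```python
import numpy as np

def domain_wall_overlap(lams):
    """V = prod_l (lam_l + i/2)^M * det(H) / prod_{j<k} (lam_j - lam_k),
    with H_ab = (lam_a - i/2)^(-b) - (lam_a + i/2)^(-b), a, b = 1..M."""
    lams = np.asarray(lams, dtype=complex)
    M = len(lams)
    bpow = np.arange(1, M + 1)  # column index b runs over powers 1..M
    H = (1.0 / (lams[:, None] - 0.5j)) ** bpow - (1.0 / (lams[:, None] + 0.5j)) ** bpow
    prefactor = np.prod((lams + 0.5j) ** M)
    vandermonde = np.prod([lams[j] - lams[k]
                           for j in range(M) for k in range(j + 1, M)] or [1.0])
    return prefactor * np.linalg.det(H) / vandermonde
```

Evaluated on a numerically obtained set of Bethe roots for increasing $L$, the logarithm of the absolute value of the output can then be fitted against $L$ exactly as in the fits above.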
With the aid of the algebraic Bethe ansatz, a vacuum descendant state (with no inhomogeneities) is given by \cite{Mossel_2010} \begin{equation} | \phi \rangle = | \underbrace{\downarrow \cdots \downarrow}_M \uparrow \cdots \uparrow \rangle = \lim_{\xi_j \to 0} \prod_{j=1}^M B(\xi_j) | 0 \rangle , \quad | 0 \rangle = | \uparrow \cdots \uparrow \rangle , \end{equation} where $|0 \rangle $ is the ferromagnetic Bethe vacuum $| \uparrow \cdots \uparrow \rangle$ and $B(\lambda)$ is the magnon excitation operator corresponding to the upper off-diagonal element of the quantum monodromy matrix. The density of the ``off-shell Bethe roots'' can then be expressed as \begin{equation} \rho_\phi (\zeta ) = \nu_1 \delta (\zeta ) , \quad \xi_1 , \ldots , \xi_M \to 0 . \end{equation} Without loss of generality, we subsequently set the classical period to $\ell = 1$. The quasi-momentum associated to $| \phi \rangle$ then reads \begin{equation} \mathfrak{p}_{\phi} (\zeta) = \ell \int_{\mathcal{C}_\phi} d \zeta^\prime \frac{\rho (\zeta^\prime) (1+ \delta \zeta \zeta^\prime)}{\zeta - \zeta^\prime} - \frac{\ell}{2 \zeta} = \frac{\nu_1 - 1/2}{\zeta} . \end{equation} We are now ready to employ the functional integral formula \eqref{Kostovformulaoverlap}. The appropriate choice of contours is shown in panel (a) of Fig.~\ref{fig:vacuum_descendant} \footnote{The rationale behind this choice is to avoid the branch cuts of the dilogarithm function. More details on this can be found in~\cite{Kostov_2012}.}.
Comparing the two computations of the coefficient $C_2$, we find \begin{center} \begin{tabularx}{0.8\textwidth} { | >{\centering\arraybackslash}X | >{\centering\arraybackslash}X | >{\centering\arraybackslash}X | } \hline & $C_2$ numerical & $C_2$ functional \\ \hline $\delta = 0$ & -0.144278(5) & -0.144485(3) \\ \hline $\delta = 1$ & -0.143827(7) & -0.142267(2) \\ \hline \end{tabularx} \end{center} Once again, the computation using formula \eqref{Kostovformulaoverlap} works quite well, both in the isotropic and anisotropic cases. \section{Numerical recipes} \label{app:numericalrecipe} \subsection{Numerical solution to Bethe equations} \label{app:numericalbethe} We outline how to numerically solve the Bethe equations \eqref{betheeq} (for finite but possibly large system length $L$) for a specific class of quantum eigenstates that become one-cut classical solutions in the thermodynamic limit. To this end, we employ the algorithm described in Section 7 of Ref.~\cite{Bargheer_2008} for the rational Bethe equations (i.e. for the isotropic interaction, $\delta = 0$). We combine it with the method given in Appendix C of Ref.~\cite{Jiang_2017}, in which the solution of the isotropic chain is used as the initial condition for a Newton--Raphson iteration during which the anisotropy parameter is gradually increased. However, while this procedure works quite well for the simplest case of mode number $n=1$, we could not achieve good convergence for mode numbers $n > 2$ and consequently could not perform any benchmark on classical solutions with two or more cuts. \subsection{Determining branch points from filling fractions and mode numbers} \label{app:branchpoints} We describe a numerical procedure to determine the branch points from a given set of moduli, that is, the mode numbers and filling fractions. The method is completely general and applies to solutions with an arbitrary number of cuts.
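The $\delta$-continuation strategy used for the Bethe solver above (converge at $\delta = 0$ first, then increase the anisotropy in small steps, reusing the previous solution as the Newton seed) can be sketched generically; the scalar equation below is purely illustrative and is not the actual Bethe system:

```python
import math

def newton(f, x0, eps=1e-12, max_iter=50):
    # scalar Newton-Raphson with a finite-difference derivative
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < eps:
            return x
        h = 1e-7
        x -= fx / ((f(x + h) - fx) / h)
    return x

def continuation(F, x_start, deltas):
    # solve F(x, delta) = 0 along a schedule of deltas,
    # reusing the previous root as the seed for the next step
    x = x_start
    for d in deltas:
        x = newton(lambda y: F(y, d), x)
    return x

# toy model: F(x, delta) = x - cos(delta * x); at delta = 0 the root is x = 1
F = lambda x, d: x - math.cos(d * x)
root = continuation(F, 1.0, [0.1 * k for k in range(1, 11)])  # ramp delta up to 1
```

For the actual Bethe equations the scalar `newton` step is replaced by a multidimensional Newton iteration on the full set of roots, but the continuation loop is unchanged.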
For simplicity, however, we demonstrate it below on the class of two-cut solutions. There are four branch points $\{\mu_1 , \bar{\mu}_1, \mu_2 , \bar{\mu}_2\}$, which appear in complex-conjugate pairs. Thus, there are in total four real parameters (the real and imaginary components of $\mu_1$ and $\mu_2$). The finite-gap solution is equivalently parametrised by four other parameters, namely the mode numbers and filling fractions of the two branch cuts, which are related to the previous four parameters in a nonlinear manner. In addition, the solution must be periodic with period $\ell$, which adds an additional constraint. We remark that, unlike in the one-cut case, the determination is highly nonlinear, involving elliptic functions and integrals. Hence, no simple analytic closed-form formula is available in this case. Instead, we use the following numerical procedure: \begin{itemize} \item We first fix the real part of the branch points of the first cut to $a\equiv \mathrm{Re} \, \mu_1 = 1$. \item We next scan a range of values $\mathrm{Im} \, \mu_1 \in (b_1 , b_2)$ and $\mathrm{Re} \, \mu_2 \in (c_1 , c_2)$, and find the value of $\mathrm{Im} \, \mu_2$ that yields the required mode numbers $n_1$ and $n_2$. More specifically, for any $\mathrm{Im} \, \mu_2$, we compute $\frac{d \mathfrak{p}}{\ell}$ by demanding that the $\mathcal{A}$-cycle integrals vanish. Since the classical period reads \begin{equation} \ell = \frac{2 \pi n_1}{\int_{\mathcal{B}_1} {\rm d} \mathfrak{p} / \ell } , \label{classical_period_numeric} \end{equation} we can numerically determine $\mathrm{Im} \, \mu_2$ from the requirement \begin{equation} \int_{\mathcal{B}_2} {\rm d} \mathfrak{p} = 2 \pi n_2 .
\end{equation} \item Having done the above, we can readily compute the following quantities, \begin{equation} \tilde{\nu}_1 = \oint_{\mathcal{A}_1} \frac{{\rm d} \mathfrak{p}}{\mu} , \quad \tilde{\nu}_2 = \oint_{\mathcal{A}_2} \frac{{\rm d} \mathfrak{p}}{\mu} , \end{equation} which depend on $\mathrm{Im} \, \mu_1$ and $\mathrm{Re} \, \mu_2 $. \item By requiring that $\tilde{\nu}_1 = \nu_1$, we obtain a ``curve'' in the plane spanned by $\mathrm{Im} \, \mu_1$ and $\mathrm{Re} \, \mu_2$. By finally requiring also that $\tilde{\nu}_2 = \nu_2$, we are left with a single point, say $(b,c)$. The last parameter, call it $d$, corresponds to $\mathrm{Im} \, \mu_2$. \item We have thus determined the complex branch points $\mu_1 = 1 + b {\rm i}$, $\mu_2 = c + d {\rm i}$ (alongside their complex conjugates) which yield the prescribed classical period $\ell$, mode numbers $n_1$, $n_2$ and filling fractions $\nu_1$, $\nu_2$ of a general two-cut solution. \end{itemize} When making a comparison with the Bethe root distributions, we normally prefer to set the classical period to $\ell = 1$. In this case we simply divide the above branch points by $\ell$, see Eq.~\eqref{classical_period_numeric}, that is \begin{equation} \mu_{1,n} = \frac{1}{\ell} + \frac{b}{\ell} {\rm i} , \quad \mu_{2,n} = \frac{c}{\ell} + \frac{d}{\ell} {\rm i} , \end{equation} or equivalently, in terms of the spectral parameter $\zeta = 1 / \mu $, \begin{equation} \zeta_{1,n} = \frac{1}{\bar{\mu}_{1,n}} , \quad \zeta_{2,n} = \frac{1}{\bar{\mu}_{2,n}} . \end{equation} \end{appendix}
\section{Introduction} A person's stance towards an issue affects their decision making, while the stance held by the public towards thousands of potential topics reveals even more. Stance detection is an important topic in the research communities of both natural language processing (NLP) and social computing \cite{kuccuk2020stance, aldayel2021stance}. As with most NLP tasks, early work on stance detection focused on rule-based approaches and later transitioned to traditional machine learning algorithms. Since 2014, deep learning models have quickly become the mainstream technique for stance detection. Later on, with the great success of Google's bidirectional encoder representations from transformers (BERT) model, a new NLP research paradigm emerged: utilizing large pre-trained language models (PLMs) together with a fine-tuning process. This pre-train and fine-tune paradigm provides exceptional performance on most NLP downstream tasks, including stance detection, because the abundance of training data enables PLMs to learn enough general-purpose features and knowledge for modeling different languages. Following BERT, more and more PLMs have been proposed with different specialties and characteristics, including the ELMo series, the GPT series, the Turing series, varieties of BERT and many more. ChatGPT is the most recent PLM; it is optimized for dialogue and attracted over 1 million users within 5 days of launch. Programmers use it to interpret code, artists use it to generate prompts for AIGC models, clerks use it to write and translate documents, writers challenge ChatGPT to write poems and film scripts, and so on. To what extent will ChatGPT transform society and people's ways of doing and thinking? For NLP experts, is ChatGPT just another pre-trained language model? In this work we conduct experiments on ChatGPT for stance detection tasks by directly asking ChatGPT for the result. This approach can be considered a zero-shot prompting strategy.
Experimental results show that ChatGPT can achieve SOTA or comparable performance on commonly used datasets, including SemEval-2016 and P-Stance, with a simple prompt. Since ChatGPT is trained for dialogue, it is surprisingly easy to learn the reason behind the model's decision by directly asking why. Furthermore, interacting with ChatGPT with a chain of inputs can potentially further improve the performance. This paper is structured as follows: after a brief overview of related work in Section 2, our proposed prompting method and results are detailed in Section 3. Section 4 contains discussions and future work. \section{Related Work} Before getting into more detail, we first give a formal definition of stance detection. For an input in the form of a (text, target) pair, stance detection is a classification problem where the stance of the author of the text is sought in the form of a category label from the set \{Favor, Against, Neither\}. Occasionally, the category label Neutral is also added to the set of stance categories, and the target may or may not be explicitly mentioned in the text. Researchers approach this task by converting it into a text classification task. Stance detection studies originally focused on parliamentary debates and gradually shifted to social media content, including Twitter, Facebook, Instagram, online blogs and so on. The techniques used to approach these problems have also evolved with time. Early research on stance detection mainly adopted rule-based techniques \cite{anand2011cats, walker2012stance}. Since the 1990s, machine learning based models have gradually replaced small-scale rule-based methods. Traditional machine learning models build text classifiers for stance detection based on selected features.
The effective algorithms for these classifiers include support vector machines (SVM) \cite{addawood2017stance, mohammad2017stance}, logistic regression \cite{ferreira2016emergent, tsakalidis2018nowcasting, skeppstedt2017detection}, naive Bayes \cite{hacohen2017stance,simaki2017stance}, decision trees \cite{wojatzki2016stance} and so on. With the fast advancement of deep learning in the 2010s, models based on deep neural networks (DNN) became mainstream in this field. These methods design neural networks with different structures and connections to obtain the desired stance classifier, and can be categorized as conventional DNN models, attention-based DNN models and graph convolutional network (GCN) models. Convolutional neural networks (CNN) and long short-term memory (LSTM) models are the most commonly used conventional DNN models \cite{augenstein2016stance, du2017stance}; the attention-based methods mainly utilize target-specific information as the attention query and deploy an attention mechanism for inferring the stance polarity \cite{dey2018topical, sun2018stance}; and the GCN methods propose graph convolutional networks to model the relation between target and text \cite{LiPLSLWYH22, bowenacl, ConfortiBPGTC21}. Inspired by the recent success of PLMs, fine-tuning methods have led to improvements on stance detection tasks \cite{liu2021enhancing}. Fine-tuning models adapt PLMs by building a stance classification head on top of the ``{$<$cls$>$}'' token and fine-tuning the whole model. PLMs are getting larger and larger because performance and sample efficiency on downstream tasks normally grow with the scale of the model, and some abilities, like the prompting strategies popularized by GPT-3, are considered to be effective only when the model reaches a certain scale \cite{wei2022emergent}.
The main idea of prompt-based methods is to design a template that recasts the classification task into the form of the PLM's pre-training objective, and then to build a mapping (called a verbalizer) from the predicted token to the classification labels to perform class prediction, which bridges the vocabulary and the label space. Prompting strategies provide further improvements in stance detection performance \cite{shin2020autoprompt}. Models like LaMDA, GPT-3 and others have also succeeded at few-shot prompting \cite{wei2022emergent}, which alleviates the demand for large amounts of training data and the tedious training process. Generally speaking, stance detection techniques, and NLP algorithms in general, have gone through four main paradigms: (1) rule-based models; (2) traditional machine learning based models; (3) deep neural network models and (4) the PLM pre-train and fine-tune paradigm. Quite recently, a fifth paradigm, ``pre-train, prompt and predict'', has started to draw wide attention \cite{liu2021pre}. \begin{table}[h!] \small \begin{center} \begin{tabular}{lccc} \hline Model & HC & FM & LA \\ \hline Bicond \cite{augenstein2016stance} & 32.7 & 40.6 & 34.4 \\ CrossNet \cite{xu2018cross} & 38.3 & 41.7 & 38.5 \\ SEKT \cite{zhang2020enhancing} & 50.1 & 44.2 & 44.6 \\ TPDG \cite{LiangF00DHX21} & 50.9 & 53.6 & 46.5 \\ Bert\_Spc \cite{Bert} & 49.6 & 41.9 & 44.8 \\ Bert-GCN \cite{linbertgcn} & 50.0 & 44.3 & 44.2 \\ PT-HCL \cite{liang2022zero} & 54.5 & 54.6 & 50.9 \\ \hline ChatGPT & \textbf{68.4} & \textbf{58.2} & \textbf{79.5} \\ \hline \end{tabular} \end{center} \caption{Performance comparison (F1-m) on SemEval-2016 dataset with zero-shot setup.} \label{tab1} \end{table} \begin{figure*}[htbp] \centering \includegraphics[width=0.85\linewidth]{example1.png} \caption{Example of Question to ChatGPT} \label{fig1} \end{figure*} \begin{table*}[h!]
\small \begin{center} \begin{tabular}{lllllll} \hline Methods & \multicolumn{2}{c}{FM} & \multicolumn{2}{c}{LA} & \multicolumn{2}{c}{HC} \\ \cline{2-7} & F1-avg & F1-m & F1-avg & F1-m & F1-avg & F1-m \\ \hline BiLSTM \cite{augenstein2016stance} & 48.04 & 52.16 & 51.59 & 54.04 & 47.47 & 57.38 \\ BiCond \cite{augenstein2016stance} & 57.39 & 61.37 & 52.32 & 54.48 & 51.89 & 59.75 \\ TextCNN \cite{kiml} & 55.65 & 61.37 & 58.8 & 63.2 & 52.35 & 58.45 \\ MemNet \cite{TangQL16} & 51.07 & 57.75 & 58.86 & 61.03 & 52.3 & 58.45 \\ AOA \cite{huang2018aspect} & 55.37 & 59.96 & 58.32 & 62.42 & 51.55 & 58.17 \\ TAN \cite{du2017stance} & 55.77 & 58.26 & 63.72 & 65.74 & 65.38 & 67.71 \\ ASGCN \cite{zhang2019aspect} & 56.21 & 58.52 & 59.51 & 62.87 & 62.16 & 64.27 \\ Bert\_Spc \cite{Bert} & 57.3 & 60.6 & 63.97 & 66.31 & 65.78 & 69.1 \\ TPDG \cite{LiangF00DHX21} & 67.3 & / & 74.7 & / & 73.4 & / \\ \hline ChatGPT & \textbf{68.44} & \textbf{72.63} & 58.17 & 59.29 & \textbf{79.48} & \textbf{77.97} \\ \hline \end{tabular} \end{center} \caption{Performance comparison on SemEval-2016 dataset with in-domain setup.} \label{tab2} \end{table*} \begin{table*}[h!]
\small \centering \begin{tabular}{lllllll} \hline Methods & \multicolumn{2}{c}{Trump} & \multicolumn{2}{c}{Biden} & \multicolumn{2}{c}{Bernie} \\ \cline{2-7} & F1-avg & F1-m & F1-avg & F1-m & F1-avg & F1-m \\ \hline BiLSTM \cite{augenstein2016stance} & 72.01 & 69.74 & 69.53 & 68.67 & 63.91 & 63.81 \\ BiCond \cite{augenstein2016stance} & 72.98 & 70.56 & 69.37 & 68.41 & 64.58 & 64.06 \\ TextCNN \cite{kiml} & 77.23 & 76.92 & 78.21 & 77.95 & 70.23 & 69.75 \\ MemNet \cite{TangQL16} & 77.67 & 76.80 & 77.56 & 77.22 & 72.78 & 71.40 \\ AOA \cite{huang2018aspect} & 77.74 & 77.15 & 77.80 & 77.69 & 71.70 & 71.24 \\ TAN \cite{du2017stance} & 77.52 & 77.10 & 77.92 & 77.64 & 71.99 & 71.60 \\ ASGCN \cite{zhang2019aspect} & 77.01 & 76.82 & 78.35 & 78.17 & 70.76 & 70.56 \\ Bert\_Spc \cite{Bert} & 81.58 & 81.43 & 81.67 & 81.46 & 78.43 & 78.28 \\ \hline ChatGPT & \textbf{83.19} & \textbf{82.78} & \textbf{82.30} & \textbf{82.03} & \textbf{79.43} & \textbf{79.43} \\ \hline \end{tabular} \caption{Performance comparison on P-Stance dataset with in-domain setup.} \label{tab3} \end{table*} \section{Methods and Results} \textbf{Task definition:} We use $X = \{(x, p)\}_{i=1}^{N}$ to denote the collection of data, where each $x$ denotes the input text, $p$ denotes the corresponding target, and $N$ represents the number of instances. Stance detection aims to predict a stance label for the input sentence $x$ towards the given target $p$ using a stance predictor. In this section, we evaluate the performance of ChatGPT on stance detection. We construct the stance predictor with a special case of prompting: a template phrased as a direct question. Specifically, we directly ask the ChatGPT model the stance polarity of a certain tweet towards a specific target. Figure \ref{fig1} shows an example.
Given the input ``\textit{RT @GunnJessica: Because i want young American women to be able to be proud of the 1st woman president \#SemST}'', the question posed to ChatGPT is: \textit{``What is the attitude of the sentence ``RT @GunnJessica: Because i want young American women to be able to be proud of the 1st woman president \#SemST'' to the target ``Hillary Clinton''? Select from ``favor'', ``against'' or ``neutral''.''} For this particular example, ChatGPT returns the correct result. \begin{figure*}[h!] \centering \includegraphics[width=0.85\textwidth]{Example3.jpg} \caption{ChatGPT's Explanation when the Stance is Explicitly Expressed in the Text} \label{fig2} \end{figure*} \begin{figure*}[h!] \centering \includegraphics[width=0.85\textwidth]{Example2.jpg} \caption{ChatGPT's Explanation when the Stance is Implicitly Expressed in the Text} \label{fig3} \end{figure*} \textbf{Results:} To assess the effectiveness of ChatGPT, we carried out experimental validations on the SemEval-2016 stance dataset \cite{StanceSemEval2016} and the P-Stance dataset \cite{li2021p}. SemEval-2016 is a dataset of 4870 English tweets manually annotated for stance towards 6 selected targets, of which ``Hillary Clinton (HC)'', ``Feminist Movement (FM)'' and ``Legalization of Abortion (LA)'' are three commonly used ones. Similarly, the P-Stance dataset contains 21574 English tweets with political content, annotated for stance towards three targets: ``Donald Trump'', ``Joe Biden'' and ``Bernie Sanders''. Since OpenAI has not yet provided an API for ChatGPT, experiments have only been conducted on these two benchmark datasets of social media texts. Following \cite{zhang2020enhancing, liang2022zero}, we use the micro-average F1-score (denoted F1-avg) and the macro-F1 score (denoted F1-m) for performance evaluation. We constructed both zero-shot and in-domain stance detection setups for results comparison.
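Concretely, the prompt construction, a simple rule for mapping ChatGPT's free-form reply to a label, and the two evaluation scores can be sketched as follows. The reply-parsing heuristic and the helper names are our own illustrative assumptions (in our experiments replies were collected through the ChatGPT web interface, as no API was available):

```python
def build_prompt(tweet, target):
    # zero-shot template used in this work: a direct question
    return (f'What is the attitude of the sentence: "{tweet}" '
            f'to the target "{target}"? Select from "favor", "against" or "neutral".')

def parse_reply(reply):
    # heuristic mapping from a free-form reply to a stance label (an assumption,
    # not part of the paper); defaults to "neutral" when no label word is found
    text = reply.lower()
    for label in ("favor", "against", "neutral"):
        if label in text:
            return label
    return "neutral"

def f1(gold, pred, label):
    # per-class F1 from exact counts
    tp = sum(g == label and p == label for g, p in zip(gold, pred))
    fp = sum(g != label and p == label for g, p in zip(gold, pred))
    fn = sum(g == label and p != label for g, p in zip(gold, pred))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def scores(gold, pred):
    # F1-avg: mean F1 over "favor"/"against" (a common convention in this
    # literature); F1-m: macro-F1 over all three classes
    favor, against = f1(gold, pred, "favor"), f1(gold, pred, "against")
    return (favor + against) / 2, (favor + against + f1(gold, pred, "neutral")) / 3
```

On a batch of (tweet, target) pairs, `build_prompt` produces the question, `parse_reply` yields predicted labels, and `scores` computes the two metrics reported in the tables.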
The zero-shot setup means the model is tested directly, without any adjustment on training data, which makes for a fair comparison with our proposed ChatGPT prompting method. We also compared our zero-shot ChatGPT results with other mainstream stance detection models in an in-domain setup, meaning these models are optimized with 80\% of the tweets as training data. The results are summarized in Tables \ref{tab1} to \ref{tab3}. They show that ChatGPT achieves SOTA results in the zero-shot setup. For example, ChatGPT achieves a 15.3\% improvement on average over the best competitor, PT-HCL, in the zero-shot setup. Even compared with the in-domain setup, where the baseline methods first learn from 80\% of the training corpus, ChatGPT still yields better performance than all the baselines on most tasks. For example, ChatGPT achieves a 1.08\% improvement in average F1-avg over the fine-tuned Bert model. \begin{figure*}[h!] \centering \includegraphics[width=0.85\textwidth]{Example4.png} \caption{ChatGPT's Explanation when it cannot Provide a Stance Detection Result (Case 1)} \label{fig4} \end{figure*} \begin{figure*}[h!] \centering \includegraphics[width=0.85\textwidth]{Example5.png} \caption{ChatGPT's Explanation when it cannot Provide a Stance Detection Result (Case 2)} \label{fig5} \end{figure*} \section{Discussions and Future Work} The results in Section 3 demonstrate the emergent ability of ChatGPT in zero-shot prompting for stance detection tasks. Using a simple prompt that directly asks the dialogue model for the stance, with no training, ChatGPT returns SOTA results in both zero-shot and in-domain setups. The launch of ChatGPT could potentially transform the whole research area. We discuss three research directions which might further improve the performance of ChatGPT on stance detection tasks. (1) \textbf{Are there better prompt templates?} In this work, only one prompt template for stance detection has been tested with ChatGPT.
Engineering the prompt template may further improve the zero-shot performance of ChatGPT or unlock the use of ChatGPT for other NLP tasks. Further studies can take the intuitive approach of manually selecting prompt templates or design an automated process for template selection. (2) \textbf{How well can ChatGPT explain itself?} ChatGPT is a language model trained for dialogue, so a natural next step is to ask the model why it gives a certain answer. As shown in Figures \ref{fig2} and \ref{fig3}, ChatGPT provides perfect explanations of why the given tweet is in favor of the target Hillary Clinton, whether the stance is expressed explicitly or implicitly in the text. Such results indicate that ChatGPT carries out stance classification based on logical reasoning rather than pure probability calculation. These explanations open up the possibility of building explanatory AI for stance detection. (3) \textbf{Can multi-round conversation help to improve the results?} ChatGPT has already shown exceptional results with zero-shot prompting; however, it is more powerful than a stance classifier. For some instances where ChatGPT cannot provide a prediction, it can still explain why, e.g. ``the sentence does not mention or directly reference the target'', as shown in Figure \ref{fig4}, or even instruct the speaker to express opinions with respect and empathy, as shown in Figure \ref{fig5}. These explanations help us identify the innately flawed data in the dataset, for which no model, and not even a human, can accurately decide the stance from the given information alone. For such flawed tweets, it is still possible to determine the stance by fixing the issue in a follow-up conversation. In a multi-round conversation with ChatGPT, we can feed a variety of information to the model, including background knowledge, missing parts of the sentence, stance classification examples and so on.
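A multi-round interaction of this kind can be represented as a growing message list, where follow-up turns inject the missing context. A minimal sketch (the message format, the example tweet and the follow-up wording are all our own illustrative assumptions):

```python
def start_conversation(tweet, target):
    # round 1: the zero-shot stance question
    question = (f'What is the attitude of the sentence: "{tweet}" to the target '
                f'"{target}"? Select from "favor", "against" or "neutral".')
    return [{"role": "user", "content": question}]

def add_round(messages, model_reply, follow_up):
    # record the model's reply, then feed extra information (background knowledge,
    # the missing part of the sentence, labelled examples, ...) in the next turn
    messages.append({"role": "assistant", "content": model_reply})
    messages.append({"role": "user", "content": follow_up})
    return messages

msgs = start_conversation("I will never vote for him. #SemST", "Donald Trump")
msgs = add_round(
    msgs,
    "The sentence does not mention or directly reference the target.",
    'The pronoun "him" in the sentence refers to Donald Trump. '
    "With this information, select from favor, against or neutral.",
)
```

Each new round keeps the full history, so the model can revise its earlier non-answer once the missing information has been supplied.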
Future investigation into how to design such a multi-round conversation may further improve the performance of the ChatGPT model on more NLP tasks, including stance detection.
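To make the zero-shot setup concrete, the sketch below shows how such a single-turn stance prompt could be assembled and how a free-form reply could be mapped back onto the three stance labels. The template wording and the `parse_stance` heuristic are our own assumptions for illustration, not the exact prompt used in the experiments above.

```python
# Hypothetical sketch of a zero-shot stance prompt and label parser.
# The template wording is an assumption, not the exact prompt from the paper.

def build_prompt(tweet: str, target: str) -> str:
    """Assemble a single-turn, zero-shot stance prompt for a chat model."""
    return (
        f'What is the stance of the tweet "{tweet}" '
        f'with respect to the target "{target}"? '
        f"Answer with exactly one of: favor, against, none."
    )

def parse_stance(reply: str) -> str:
    """Map a free-form model reply onto one of the three stance labels."""
    text = reply.lower()
    for label in ("favor", "against"):
        if label in text:
            return label
    return "none"  # no stance keyword found, or genuinely neutral

print(build_prompt("We need her in the White House!", "Hillary Clinton"))
print(parse_stance("The tweet is clearly in FAVOR of the target."))  # favor
```

A multi-round variant would simply append follow-up turns (background knowledge, missing context) to the same conversation before re-asking for the label.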
\section{Preliminaries} \subsubsection*{Valuations and Problem Definitions} Let $M$ be a set of items, $|M|=m$. Let $v:2^M\rightarrow \mathbb R$ be a set function. We assume that $v(\emptyset)=0$. $v$ is \emph{monotone non-decreasing} if for each $S\subseteq T$, $v(S)\leq v(T)$. In this paper we are mainly interested in the following two problems\footnote{We also consider some generalizations, which will be defined in the proper subsections.}: in the \emph{maximization subject to cardinality constraint} problem we are given a number $k$, and we want to find the largest-value set $S$ with $|S|=k$. In the \emph{maximization subject to budget constraint} problem we are given a budget $B$ and a cost $c_i$ for each item $i$. The goal is to find the largest-value set $S$, such that $\Sigma_{j\in S}c_j\leq B$. We sometimes use the name \emph{maximization subject to a knapsack constraint} for this problem. We consider this problem under various restrictions on the valuations. We say that $v$ is \emph{submodular} if for every $S,T\subseteq M$ we have that $v(S)+v(T)\geq v(S\cup T)+v(S\cap T)$. $v$ is \emph{subadditive} if for every $S,T\subseteq M$ we have that $v(S)+v(T)\geq v(S\cup T)$. A valuation is \emph{additive} if $v(S)=\Sigma_{j\in S}v(\{j\})$. Notice that every submodular valuation is also subadditive, and every additive valuation is submodular. Another type of valuations that we consider is fractionally subadditive (or XOS). A valuation is \emph{XOS} if there exist additive valuations $v_1,\ldots ,v_l$ such that $v(S)=\max_iv_i(S)$ (we say that $i$ is a \emph{maximizing clause} of $S$). It is known \cite{LLN01} that every XOS valuation is subadditive, that every monotone submodular valuation is XOS, and that there are subadditive valuations that are not XOS, as well as XOS valuations that are not submodular. We sometimes use the notation $v(S|T)$ to denote $v(S\cup T)-v(T)$.
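The valuation classes above can be checked by brute force on tiny ground sets. The sketch below is our own illustration (exponential in $m$, so only for toy examples, not an algorithm from this paper): it tests the submodular inequality $v(S)+v(T)\geq v(S\cup T)+v(S\cap T)$ over all pairs of subsets for two small valuations.

```python
from itertools import chain, combinations

def powerset(items):
    """All subsets of `items`, as frozensets."""
    s = list(items)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def is_submodular(v, items):
    """Brute-force check of v(S)+v(T) >= v(S union T)+v(S intersect T)."""
    subsets = powerset(items)
    return all(v(S) + v(T) >= v(S | T) + v(S & T)
               for S in subsets for T in subsets)

# A coverage-style valuation: value = number of items, capped at 2.
# Concave functions of |S| like this one are submodular.
cap2 = lambda S: min(len(S), 2)
print(is_submodular(cap2, {1, 2, 3}))      # True

# A complementarity example: the pair {1, 2} is worth more than its parts,
# violating submodularity (take S = {1}, T = {2}).
pair_bonus = lambda S: 3 if {1, 2} <= S else len(S & {1, 2})
print(is_submodular(pair_bonus, {1, 2}))   # False
```

The same loop with the inequality $v(S)+v(T)\geq v(S\cup T)$ checks subadditivity.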
For the problem of maximizing a function subject to a budget constraint, we sometimes use the notation $C(S)=\Sigma_{j\in S}c_j$. \subsubsection*{Demand Queries} Given prices $p_1,\ldots, p_m$, a \emph{demand query} returns a bundle $S\in \arg\max_Tv(T)-\Sigma_{j\in T}p_j$. The set of bundles that achieve the maximum is called the \emph{demand set} of the query. Our algorithms provide the guaranteed approximation ratio without making any assumption about which specific bundle is returned from the demand set. For lower bounds, as pointed out in \cite{NS05}, some unnatural tie-breaking rules may supply unrealistic information about the valuation. Therefore, as in \cite{NS05}, our lower bound holds for any tie-breaking rule that does not depend on the valuation, e.g., returning the lexicographically-first bundle in the demand set. Another well-studied type of query is the \emph{value query}: given a set $S$, return $v(S)$. It is known \cite{BN09} that a value query can be simulated by polynomially many demand queries (but exponentially many value queries may be required to simulate a single demand query). This paper concentrates on designing algorithms that use polynomially many demand queries, so we freely assume that we also have access to value queries.
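For additive valuations the demand query has a simple closed form, which makes a useful sanity check on the definitions: every item whose value strictly exceeds its price is demanded, items with value strictly below price are not, and items priced exactly at their value may go either way under a valuation-independent tie-breaking rule. The sketch below is our own illustration of this special case, not an algorithm from the paper.

```python
def demand_query_additive(values, prices):
    """Demand query for an additive valuation v(S) = sum of values[j], j in S.

    Returns a bundle maximizing v(T) - sum of prices over T.  An item with
    positive surplus (value > price) is always demanded, an item with
    negative surplus never is, and an item with zero surplus is indifferent;
    here we break that tie by including it, a rule that does not depend on
    the valuation, as required for the lower-bound convention above.
    """
    return {j for j, (v, p) in enumerate(zip(values, prices))
            if v >= p and v > 0}

# Three items with values 5, 2, 4 at prices 3, 2, 6:
# item 0 has surplus +2, item 1 surplus 0 (tie, included), item 2 surplus -2.
print(demand_query_additive([5, 2, 4], [3, 2, 6]))  # {0, 1}
```

For non-additive valuations no such per-item decomposition exists, which is exactly why a demand query can be exponentially more powerful than value queries.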
\section{Introduction} In this paper, we study the following two systems: \begin{equation}\label{erlianbo} \left\{ \begin{array}{l} \phi''+c\phi'+f(\phi)=0,\quad x\in(-\infty,0],\\ \psi''+c\psi'+g(\psi)=0,\quad x\in[0,\infty),\\ \phi(0)=\psi(0)=0,\\ \phi(-\infty)=\psi(+\infty)=1,\\ c=-\alpha \phi'(0)-\beta \psi'(0), \end{array} \right. \end{equation} and \begin{equation}\label{sanlianbo} \left\{ \begin{array}{l} \phi_{1}''+c\phi_{1}'+f_1(\phi_1)=0, \quad x\in(-\infty,0],\\ \phi_{2}''+c\phi_{2}'+f_2(\phi_2)=0,\quad x\in[0,h],\\ \phi_{3}''+c\phi_{3}'+f_3(\phi_3)=0,\quad x\in[h,\infty),\\ \phi_1(0)=\phi_2(0)=\phi_2(h)=\phi_3(0)=0, \\ \phi_1(-\infty)=\phi_3(+\infty)=1,\\ c=-\alpha \phi_1'(0)-\beta_l \phi_2'(0),\\ c=-\beta_r \phi_2'(h)-\gamma \phi_3'(0), \end{array} \right. \end{equation} where $\alpha, \beta, \beta_l, \beta_r, \gamma$ are positive constants, $f, g, f_1, f_2$ and $f_3$ are monostable or bistable types of nonlinearities, and $c$ is a constant to be determined together with the unknowns $\phi, \psi, \phi_1$, etc. In what follows, we say that $f$ is a monostable type of nonlinearity ($f$ is of (f$_M$) type, for short), if $f\in C^1([0,\infty))$ and \begin{equation*} f(0)=0<f'(0), \quad f(1)=0> f'(1),\quad (1-s)f(s) >0 \ \mbox{for } s>0, s\not= 1; \end{equation*} we say that $f$ is a bistable type of nonlinearity ($f$ is of (f$_B$) type, for short), if \begin{equation*} \hskip 10mm \left\{ \begin{array}{l} f\in C^1([0,\infty)), \ f(0)=0> f'(0), \ f(1)=0 > f'(1),\ \int_0^1 f(s) ds >0,\\ \ f(\cdot) <0 \mbox{ in } (0,\theta)\cup (1,\infty),\ f(\cdot)>0 \mbox{ in } (\theta,1) \mbox{ for some } \theta \in (0,1). \end{array} \right. \end{equation*} A typical monostable $f$ is $f(u) =u(1-u)$, and a typical bistable $f$ is $f(u)=u(u-\theta)(1-u)$ with $\theta \in (0,\frac{1}{2})$.
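For the typical bistable example just given, the sign condition $\int_0^1 f(s)\,ds>0$ in (f$_B$) can be verified directly: $\int_0^1 u(u-\theta)(1-u)\,du=(1-2\theta)/12$, which is positive exactly when $\theta<\frac12$. The small numerical check below is our own illustration of this fact, not part of the analysis.

```python
def f_bistable(u, theta):
    """Typical bistable nonlinearity f(u) = u(u - theta)(1 - u)."""
    return u * (u - theta) * (1.0 - u)

def integral_0_to_1(theta, n=10_000):
    """Midpoint-rule approximation of the integral of f over [0, 1]."""
    h = 1.0 / n
    return h * sum(f_bistable((k + 0.5) * h, theta) for k in range(n))

for theta in (0.3, 0.5, 0.6):
    exact = (1.0 - 2.0 * theta) / 12.0   # closed form of the integral
    approx = integral_0_to_1(theta)
    print(f"theta={theta}: integral = {approx:.6f} (exact {exact:.6f})")
# Only theta < 1/2 yields a positive integral, as required by (f_B).
```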
It is known that the equation \eqref{erlianbo}$_1$ has a monotonically decreasing traveling front on $\mathbb{R}$ when $c=c^*_f$, where $c^*_f>0$ is the minimal traveling speed when $f$ is of {\rm (f$_M$)} type, or the unique traveling speed when $f$ is of {\rm (f$_B$)} type (c.f. section 2). Similarly, the equation \eqref{erlianbo}$_2$ has a monotonically increasing traveling front when $c=c^*_g $, where $c^*_g <0$ is the maximal speed when $g$ is of {\rm (f$_M$)} type, or the unique speed when $g$ is of {\rm (f$_B$)} type (c.f. section 2). On the problem \eqref{erlianbo} we have the following main result. \begin{thm}\label{thm:existence1} Assume that each of $f$ and $g$ is of {\rm (f$_M$)} or {\rm (f$_B$)} type. \begin{enumerate} \item[\rm (i)] Let $\alpha>0$ be a given constant. Then for any $c\in (c^*_g, \hat{c}_f)$, where $\hat{c}_f >0$ depends only on $\alpha$ and $f$, there exists a unique $\beta(c)>0$ such that \eqref{erlianbo} has a unique solution $(\phi, \psi, c)$. Moreover, $\beta(c)$ is continuous and strictly decreasing in $c\in (c^*_g, \hat{c}_f)$ and \begin{equation}\label{thm1-1} c\to \hat{c}_f \ \Leftrightarrow \ \beta\to 0,\qquad c\to c^*_g \ \Leftrightarrow \ \beta\to \infty,\qquad c>0\ \Leftrightarrow \ \beta< \tilde{\beta}, \end{equation} where $\tilde{\beta} := \alpha (\int_0^1f(s)ds / \int_0^1g(s)ds)^{1/2}$. \item[\rm (ii)] Let $\beta>0$ be a given constant. Then for any $c\in (\hat{c}_g, c^*_f)$, where $\hat{c}_g <0$ depends only on $\beta$ and $g$, there exists a unique $\alpha(c)>0$ such that \eqref{erlianbo} has a unique solution $(\phi, \psi, c)$.
Moreover, $\alpha(c)$ is continuous and strictly increasing in $c\in (\hat{c}_g, c^*_f)$ and \begin{equation}\label{thm1-2} c\to \hat{c}_g \ \Leftrightarrow \ \alpha\to 0,\qquad c\to c^*_f \ \Leftrightarrow \ \alpha\to \infty,\qquad c>0\ \Leftrightarrow \ \alpha > \tilde{\alpha}, \end{equation} where $\tilde{\alpha} := \beta (\int_0^1 g(s)ds / \int_0^1 f(s)ds)^{1/2}$. \end{enumerate} \end{thm} \noindent This theorem indeed implies that, for any $\alpha, \beta>0$, problem \eqref{erlianbo} has a unique solution $(\phi(\alpha,\beta)$, $\psi(\alpha,\beta), c(\alpha,\beta))$; moreover, \eqref{thm1-1} holds when $\alpha$ is fixed, and \eqref{thm1-2} holds when $\beta$ is fixed. This conclusion is an analogue of \cite[Theorem 1.1]{CC}. On the problem \eqref{sanlianbo} we have the following result. \begin{thm}\label{thm:existence2} Assume that $f_1, f_2, f_3$ are of {\rm (f$_M$)} or {\rm (f$_B$)} type. Let $\alpha, \gamma>0$ be given constants, and let $\sigma\in(0,1)$ (in case $f_2$ is of {\rm (f$_M$)} type), or $\sigma\in (\bar{\theta},1)$ (in case $f_2$ is of {\rm (f$_B$)} type), be a given constant. Then there exist $c_- <0 <c_+$ depending only on $f_1, f_2, f_3, \alpha,\gamma$ and $\sigma$ such that for any $c\in (c_-, c_+)$, there exists a unique pair $(\beta_l (c), \beta_r(c))$, with $\beta_l(c)$ (resp. $\beta_r(c)$) continuous and strictly decreasing (resp. increasing) in $c$, such that problem \eqref{sanlianbo} has a solution $(\phi_1, \phi_2, \phi_3, c)$ with $\|\phi_2\|_{L^\infty} =\sigma$ when $\beta_l=\beta_l(c)$ and $\beta_r=\beta_r(c)$. Moreover, $c >0$ iff $\beta_l(c) < \tilde{\beta}_l$, or iff $\beta_r (c)> \tilde{\beta}_r$, where \begin{equation}\label{tilde beta} \tilde{\beta}_l := \frac{ \alpha\sqrt{\int_0^1f_1(s)ds}}{ {\sqrt{\int_0^{\sigma}f_2(s)ds}}}, \quad \tilde{\beta}_r := \frac{ \gamma\sqrt{\int_0^1f_3(s)ds}}{ {\sqrt{\int_0^{\sigma}f_2(s)ds}}}.
\end{equation} \end{thm} Problem \eqref{erlianbo} arises in the study of traveling wave solutions of the following system of reaction-diffusion equations: \begin{equation}\label{p} \left\{ \begin{array}{l} u_{t}=u_{xx}+f(u),\quad \quad \quad\quad\quad\quad \quad x<s(t),\ t>0,\\ v_{t}=v_{xx}+g(v),\quad \quad \quad\quad\quad\quad \quad \ x>s(t),\ t>0,\\ u(x,t)=v(x,t)=0,\quad \quad \quad \quad \quad \ x=s(t),\ t>0,\\ s'(t)=-\alpha u_{x}(x,t)-\beta v_{x}(x,t),\quad x=s(t),\ t>0,\\ s(0)=0,\ u(x,0)=u_{0}(x)(x<0),\ v(x,0)=v_{0}(x)(x>0), \end{array} \right. \end{equation} where $x=s(t)$ is the free boundary to be determined together with $u$ and $v$, and $f,g\in C^1$ satisfy $f(0)=g(0)=0$. In population ecology, the appearance of regional partition of multiple species through strong competition is an interesting phenomenon. In \cite{MYY85,MYY86,MYY87}, Mimura, Yamada and Yotsutani used problem \eqref{p} to describe the regional partition of two species which are struggling on a boundary to obtain their own habitats. Among other results, they obtained the global existence, uniqueness, regularity and asymptotic behavior of solutions for the problem. Later, \cite{CDHMN, DHMP, HIMN, MN} studied similar strongly competitive models. Recently, Du and Lin \cite{DuLin} and Du and Lou \cite{DuLou} studied a free boundary problem which is essentially problem \eqref{p} in the case $v\equiv 0$. They constructed semi-waves to characterize the spreading of $u$, which represents the density of a new species. Motivated by these works, Chang and Chen \cite{CC} recently studied the traveling wave solutions of \eqref{p} (i.e. problem \eqref{erlianbo}) with logistic nonlinearities: $$ f(u)= u(1-u),\quad g(v)=v(1-v). $$ They obtained the existence and uniqueness of the traveling wave solution, similar to our Theorem \ref{thm:existence1} but for logistic $f$ and $g$. One of our purposes in this paper is to study problem \eqref{p} for general monostable or bistable nonlinearities.
In what follows, when $f$ and $g$ are of (f$_M$) type and (f$_B$) type, respectively, we call the solution of \eqref{erlianbo} a {\it MB-type} traveling wave solution for convenience. {\it MM-type}, {\it BM-type} and {\it BB-type} of traveling wave solutions are defined similarly (see Figure 1). Thus \cite{CC} presented a special {\it MM-type} traveling wave solution, while our Theorem \ref{thm:existence1} gives all these four types of traveling wave solutions. When three (or more) species are involved in contesting the habitats, one should consider the following competitive model: \begin{equation}\label{p1} \left\{ \begin{array}{l} u_{1t}=u_{1xx}+f_1(u_1),\hskip 42mm x<s_l(t),\ t>0,\\ u_{2t}=u_{2xx}+f_2(u_2),\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad s_l(t)<x<s_r(t),\ t>0,\\ u_{3t}=u_{3xx}+f_3(u_3),\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad x>s_r(t),\ t>0,\\ u_1(x,t)=u_2(x,t)=u_2(\widetilde{x},t)=u_3(\widetilde{x},t)=0, \quad x=s_l(t),\ \widetilde{x}=s_r(t),\ t>0,\\ {s_l}'(t)=-\alpha u_{1x}(x,t)-\beta_l u_{2x}(x,t),\quad \quad \quad \quad\quad \ x=s_l(t),\ t>0,\\ {s_r}'(t)=-\beta_r u_{2x}(\widetilde{x},t)-\gamma u_{3x}(\widetilde{x},t), \quad \quad \quad \quad\quad \widetilde{x}=s_r(t),\ t>0,\\ u_2(x,0)=u_{20}(x) (0<x<h), \quad s_l(0)=0,\quad s_r(0)=h (0<h<\infty),\\ u_1(x,0)=u_{10}(x) (x<0),\quad u_3(x,0)=u_{30}(x) (x>h). \end{array} \right. \end{equation} Our problem \eqref{sanlianbo} is nothing but the problem for the traveling wave solutions of \eqref{p1}: $$ \begin{array}{c} u_1(x,t)= \phi_1(x-ct)\ (x\leq ct),\quad u_2(x,t)= \phi_2(x-ct)\ (ct\leq x\leq ct+h),\\ u_3(x,t)= \phi_3(x-ct-h)\ (x\geq ct+h),\quad s_l(t)=ct,\quad s_r(t)=ct+h. \end{array} $$ As above, if $f_1, f_2$ and $f_3$ are of (f$_B$), (f$_M$) and (f$_B$) types of nonlinearities, respectively, we call the solution of \eqref{sanlianbo} a {\it BMB-type} traveling wave solution for convenience. 
Similarly, one can define {\it MMM-type}, {\it MBM-type} and other types of traveling wave solutions (see Figure 1). Our Theorem \ref{thm:existence2} indeed includes all of these types. We point out that similar conclusions as in Theorems \ref{thm:existence1} and \ref{thm:existence2} remain true for the models including four or more species. In other words, for such a model, one can construct a traveling wave which consists of two semi-waves and several compactly supported waves in between, each intersecting with its neighbor at the free boundary. In section 2, we give some basic phase plane analysis and prove Theorem \ref{thm:existence1}. In section 3 we prove Theorem \ref{thm:existence2}. \section{The Proof of Theorem 1.1}\label{sec:existenceBBtype} In this section we prove Theorem \ref{thm:existence1} for {\it BB-type} traveling wave solutions, that is, for the case where both $f$ and $g$ are of (f$_B$) type. Other types can be proved similarly. \subsection{Semi-waves and Phase Plane Analysis} \label{sec:semi-waves} As in \cite{DuLou}, we call $\phi(z)$ a semi-wave with speed $c$ if $(c,\phi(z))$ satisfies \begin{equation}\label{semi-wave} \left\{ \begin{array}{l} \phi'' +c\phi' +f(\phi)=0 \quad \mbox{ for } z\in (-\infty,0],\\ \phi(0)=0, \ \phi(-\infty)=1, \ \phi(z)>0\ \mbox{ for } z\in (-\infty,0). \end{array} \right. \end{equation} The equation in \eqref{semi-wave} can be written in the equivalent form \begin{equation} \label{newq-p} \phi'=: \Phi,\quad \Phi'=-c\Phi-f(\phi). \end{equation} As long as $\Phi<0$, $\Phi$ can be regarded as a function of $\phi$ which satisfies \begin{equation} \label{newP11} \frac{d\Phi(\phi)}{d\phi}=-c-\frac{f(\phi)}{\Phi}. \end{equation} For any $\omega<0$, one can consider this equation with initial data $\Phi(\phi)|_{\phi=0} =\omega$. By a phase plane analysis (c.f. 
\cite{AW1, AW2, DuLou}), we see that for each $\omega <0$, there exists exactly one $c=c(\omega)$ such that the solution of \eqref{newP11} satisfies $\Phi(\phi)\to 0$ as $\phi\to 1^-$. This solution corresponds to a trajectory of \eqref{newq-p} through $(0,\omega)$ and $(1,0)$ in the semistrip \begin{equation*} S_{\phi}=\{(\phi,\Phi):0<\phi<1,\Phi<0\} \end{equation*} in $\phi \Phi$-phase plane. This trajectory gives a unique solution $(c(\omega), \phi(z;c(\omega)))$ for the problem \eqref{semi-wave} with $\phi'(0;c(\omega)) =\omega$. Moreover, as in \cite{AW1, AW2, DuLou}, $c(\omega)$ is continuous and increasing in $\omega\in (-\infty,0)$ and \begin{equation}\label{c omega limit} c(\omega)\to c^*_f \mbox{ as } \omega \to 0,\qquad c(\omega)\to -\infty \mbox{ as } \omega \to -\infty, \end{equation} where $c^*_f >0$ is the unique traveling speed of the following problem \begin{equation}\label{decreasing TW} \left\{ \begin{array}{l} \phi'' +c\phi' +f(\phi)=0 \quad \mbox{ for } z\in \mathbb{R},\\ \phi(-\infty)=1, \ \phi(\infty)=0,\ \phi'(z)<0 \mbox{ for } z\in \mathbb{R}. \end{array} \right. \end{equation} In summary we have the following result. \begin{lem}\label{lem:phi Phi} For any $c\in (-\infty, c^*_f)$, problem \eqref{semi-wave} has a unique solution $(c, \phi (z;c))$. Moreover, $\phi' (0;c)\ (=\omega)$ is continuous and increasing in $c\in (-\infty,c^*_f)$. \end{lem} We also need to consider a similar semi-wave $\psi$ with increasing profile: \begin{equation}\label{semi-wave2} \left\{ \begin{array}{l} \psi'' + c\psi' +g(\psi)=0,\;z\in [0,\infty),\\ \psi(0)=0,\;\psi(\infty)=1,\;\psi(z)>0 \mbox{ for } z\in (0,\infty). \end{array}\right. \end{equation} Denote $c^*_g $ the unique traveling speed of the following problem \begin{equation}\label{semi-wave-3} \left\{ \begin{array}{l} \psi'' +c\psi' +g(\psi)=0 \quad \mbox{ for } z\in \mathbb{R},\\ \psi(-\infty)=0, \ \psi(\infty)=1,\ \psi'(z)>0 \mbox{ for } z\in \mathbb{R}. \end{array} \right. 
\end{equation} Then $c^*_g <0$ and in a similar way as above one can obtain the following result. \begin{lem}\label{lem:psi Psi} For any $c\in (c^*_g, \infty)$, problem \eqref{semi-wave2} has a unique solution $(c, \psi (z;c))$. Moreover, $\psi' (0;c)$ is continuous and increasing in $c\in (c^*_g, \infty)$. \end{lem} \subsection{Proof of Theorem \ref{thm:existence1}} We only prove (i). The proof of (ii) is similar. Now $\alpha>0$ is given. Since $(\alpha \phi'(0;c)+ c)|_{c=0} <0$ and $(\alpha \phi'(0;c)+ c)|_{c\to c^*_f} >0$, we know by Lemma \ref{lem:phi Phi} that there exists a unique $\hat{c}_f \in (0,c^*_f)$ such that \begin{equation}\label{D <0} \alpha \phi'(0;\hat{c}_f)+ \hat{c}_f =0\quad \mbox{ and } \quad \alpha \phi'(0;c)+ c <0 \mbox{ for all } c\in (c^*_g, \hat{c}_f). \end{equation} For any $\beta\geq 0$, we consider the function \begin{equation}\label{functionofc} D(c;\beta):=\alpha \phi'(0;c)+\beta \psi'(0;c)+c,\quad c\in (c^*_g, c^*_f). \end{equation} By Lemmas \ref{lem:phi Phi} and \ref{lem:psi Psi}, $D(c;\beta)$ is continuous and strictly increasing in $c$. For any $c\in (c^*_g, \hat{c}_f)$, $D(c;0)<0$ by \eqref{D <0} and $D(c;\infty)=\infty$. Hence there exists a unique $\beta(c)>0$ such that $D(c;\beta(c))=0$, that is, $$ \alpha \phi'(0;c)+\beta(c) \psi'(0;c)+c \equiv 0,\quad \forall c\in (c^*_g, \hat{c}_f). $$ By Lemmas \ref{lem:phi Phi} and \ref{lem:psi Psi} again, we have $\beta(c)$ is continuous and strictly decreasing in $c\in (c^*_g, \hat{c}_f)$. Moreover, as $c\to \hat{c}_f$ we have $\beta(c)\to 0$ by \eqref{D <0}. As $c\to c^*_g $ we have $\alpha \phi'(0;c)+c = \alpha \phi' (0;{c^*_g})+c^*_g <0$ and $\psi'(0;c) \to 0$, and so $\beta(c)\to \infty$. When $c=0$, by integration we have $$ D(0;\tilde{\beta})=-\alpha\sqrt{2\int_0^1 f(s)ds}+\tilde{\beta}\sqrt{2\int_0^1 g(s)ds} =0, \quad \mbox{ where } \tilde{\beta} := \frac{ \alpha\sqrt{\int_0^1f(s)ds}}{ {\sqrt{\int_0^1 g(s)ds}}}, $$ that is, $\beta(0)=\tilde{\beta}$. 
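The value $\phi'(0;0)=-\sqrt{2\int_0^1 f(s)ds}$ used in the computation of $D(0;\tilde{\beta})$ above is the standard first integral of the semi-wave equation at $c=0$; for completeness:

```latex
% First integral at c = 0: multiply \phi'' + f(\phi) = 0 by \phi' and
% integrate over (-\infty, z], using \phi(-\infty) = 1, \phi'(-\infty) = 0:
\frac{1}{2}\bigl(\phi'(z)\bigr)^2 = \int_{\phi(z)}^{1} f(s)\,ds .
% Evaluating at z = 0, where \phi(0) = 0 and \phi'(0) < 0, gives
\phi'(0;0) = -\sqrt{2\int_0^1 f(s)\,ds},
% and the same argument applied to \psi on [0, \infty) gives
\psi'(0;0) = \sqrt{2\int_0^1 g(s)\,ds}.
```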
Therefore, $c>0$ if and only if $\beta<\tilde{\beta}$. This completes the proof of Theorem \ref{thm:existence1}. \qed \section{The Proof of Theorem \ref{thm:existence2} }\label{sec:existenceBMBtype} In this section we prove Theorem \ref{thm:existence2}, only for {\it BMB-type} traveling wave solution. {\it MMM-type}, {\it MBM-type} and other types of traveling wave solutions are proved similarly. So in this section, we assume $f_1,f_2,f_3$ are of (f$_B$), (f$_M$) and (f$_B$) types, respectively. \subsection{Compactly Supported Traveling Wave}\label{sec:traveling-pulse} For any given $\sigma\in (0,1)$ (when $f_2$ is of (f$_B$) type, we choose $\sigma \in (\bar{\theta},1)$), we call $\phi_2(z)$ a {\it compactly supported traveling wave} with speed $c$ and with height $\sigma$ if, for some $h>0$, the pair $(c,\phi_2(z))$ solves \begin{equation}\label{travelingpulse} \left\{ \begin{array}{lll} \phi_2''(z) +c\phi_2'(z) +f_2(\phi_2(z))=0 \mbox{ for } z\in (0,h),\\ \phi_2(0)=\phi_2(h)=0, \ \phi_2'(0)>0,\phi_2'(h)<0,\\ \phi_2(z)>0\ (z\in (0,h)) \mbox{ and } \|\phi_2\|_{C([0,h])} =\sigma. \end{array} \right. \end{equation} For such a solution, we will see below, there exists a unique $z_0\in (0,h)$ such that $\phi_2$ is strictly increasing in $(0,z_0)$ and strictly decreasing in $(z_0, h)$, and $\phi_2(z_0)=\sigma$. To study the existence of solutions of \eqref{travelingpulse}, we use a phase plane analysis as above. The equation of $\phi_2$ is equivalent to \begin{equation}\label{newq-p1} \phi_2'=: \Phi_2,\; \Phi_2'=-c\Phi_2-f_2(\phi_2). \end{equation} This system has many trajectories, depending on $c$, passing through $(\sigma,0)$ on the $\phi_2\Phi_2$-phase plane. More precisely, for any $\omega_l>0$, there exists a unique $c_l (\omega_l)$ such that the trajectory $T_l(\omega_l;\sigma)$ of \eqref{newq-p1} with $c=c_l (\omega_l)$ lies in $\{(\phi_2,\Phi_2):0<\phi_2<\sigma,\Phi_2>0\}$ and passes through $(0,\omega_l)$ and $(\sigma, 0)$. 
As in the previous section, $c_l (\omega_l)$ is strictly increasing in $\omega_l\in (0,\infty)$, and $$ c_l(\omega_l)\to c^*_l \mbox{ as } \omega_l \to 0,\qquad c_l (\omega_l)\to \infty \mbox{ as } \omega_l \to \infty, $$ where $c^*_l$ is a constant depending only on $\sigma$ and $f_2$ such that the trajectory $T^*_l(\sigma)$ of \eqref{newq-p1} with $c=c^*_l$ passes through $(0,0)$ and $(\sigma, 0)$. To study the sign of $c^*_l$, we consider another trajectory $T_l(\omega_l;1)$ besides $T_l(\omega_l;\sigma)$. Here $T_l(\omega_l;1)$ is a trajectory lying above the $\phi_2$-axis and passing through $(0,\omega_l)$ and $(1,0)$. Such a trajectory clearly exists for some $c=c_l(\omega_l;1)< c_l (\omega_l)$, and so $T_l(\omega_l;1)$ is above $T_l(\omega_l;\sigma)$. Taking limit as $\omega_l\to 0$ we have $$ T_l(\omega_l;\sigma)\rightarrow T^*_l (\sigma),\quad T_l(\omega_l;1)\rightarrow T^*_l(1),\quad T^*_l(1) \mbox{ is above } T^*_l(\sigma) \mbox{ and } $$ $$ c^*_l = \lim\limits_{\omega_l \to 0} c_l (\omega_l) \geq c^*_l(1):= \lim\limits_{\omega_l\to 0} c_l(\omega_l;1), $$ where $T^*_l(1)$ is the trajectory corresponding to the traveling wave (the solution of \eqref{semi-wave-3} with $g$ replaced by $f_2$) with maximal speed $c^*_l(1)\ (<0)$ (c.f. \cite{AW1, AW2}). Trajectory $T^*_l(\sigma)$ gives a function $\Phi_2 (\phi;\sigma)$ satisfying \begin{equation*} \frac{d\Phi_2}{d\phi}= - c^*_l -\frac{f_2(\phi)}{\Phi_2}, \quad \Phi_2 (0)= \Phi_2(\sigma)=0. \end{equation*} Integrating over $[0,\sigma]$ we have $$ -c^*_l \int_0^\sigma\Phi_2 (s;\sigma)ds=\int_0^\sigma f_2(s)ds. $$ Similarly the function $\Phi_2(\phi;1)$ given by $T^*_l(1)$ satisfies $$ -c^*_l (1) \int_0^1 \Phi_2 (s;1)ds=\int_0^1 f_2(s)ds. 
$$ Since $T_l^\ast(1)$ lies above $T_l^\ast(\sigma)$, we have $\int_0^\sigma\Phi_2(s;\sigma)ds<\int_0^1\Phi_2(s;1)ds$ and so \begin{equation}\label{c*l} c^*_l = - \frac{\int_0^\sigma f_2(s)ds}{\int_0^\sigma\Phi_2(s;\sigma)ds} < - \frac{\int_0^\sigma f_2(s)ds}{\int_0^1\Phi_2(s;1)ds}=L_\sigma := c^*_l(1) \frac{\int_0^\sigma f_2(s)ds}{\int_0^1 f_2 (s) ds} <0 . \end{equation} Similarly, for any $\omega_r<0$, there exists a unique $c_r (\omega_r)$ such that the trajectory $T_r(\omega_r;\sigma)$ of \eqref{newq-p1} with $c=c_r (\omega_r)$ lies in $\{(\phi_2,\Phi_2):0<\phi_2<\sigma,\Phi_2 <0\}$ and passes through $(0,\omega_r)$ and $(\sigma, 0)$. The function $c=c_r(\omega_r)$ is strictly increasing, and its range is $(-\infty, c^*_r)$, where $c^*_r $ is a constant such that the trajectory of \eqref{newq-p1} with $c=c^*_r$ passes through $(0,0)$ and $(\sigma, 0)$ in $\{(\phi_2,\Phi_2):0<\phi_2<\sigma,\Phi_2 <0\}$, and \begin{equation}\label{c*r} c^*_r > R_\sigma := -c^*_l (1) \frac{\int_0^\sigma f_2(s)ds}{\int_0^1 f_2 (s) ds} >0 . \end{equation} In summary we have the following result. \begin{lem}\label{lem:existence of phi2} For any $c\in \mathbf{C}:= (c^*_l , c^*_r)$, problem \eqref{travelingpulse} has a unique solution $\phi_2 (z;c)$ on $[0,h_c]$ for some $h_c >0$. Moreover, $\phi'_2(0;c)$ and $\phi'_2(h_c ;c)$ are strictly increasing in $c\in \mathbf{C}$. \end{lem} \subsection{The Proof of Theorem \ref{thm:existence2}}\label{sec:propertylr} For the functions $\phi_1$ and $\phi_3$ in problem \eqref{sanlianbo}, as in Lemmas \ref{lem:phi Phi} and \ref{lem:psi Psi} we have the following result.
\begin{lem}\label{lem:1-3} For any $c\in (-\infty, c^*_1)$ with $c^*_1 := c^*_{f_1} >0$, problem \eqref{semi-wave} with $f=f_1$ has a unique solution $(c,\phi_1(z;c))$, and $\phi'_1(0;c)$ is continuous and strictly increasing in $c$; For any $c\in (c^*_3,\infty)$ with $c^*_3 := c^*_{f_3}<0$, problem \eqref{semi-wave2} with $g=f_3$ has a unique solution $(c, \phi_3 (z;c))$, and $\phi'_3 (0;c)$ is continuous and increasing in $c\in (c^*_3,\infty)$. \end{lem} Now we study the free boundary conditions in \eqref{sanlianbo} (c.f. Figures 2 and 3). For any $\sigma\in (0,1)$, define \begin{equation}\label{lc0} D_l(c;\beta_l) :=\alpha \phi'_1(0;c)+\beta_l \phi'_2 (0;c)+c. \end{equation} Now we consider the domain of the variable $c$. Denote $\hat{c}_1$ the unique root of $$ D_l(c;0) = \alpha \phi'_1(0;c) + c =0, $$ then $0<\hat{c}_1 <c^*_1$. We have two cases: $$ {\bf Case\ 1:}\ 0<\hat{c}_1 \leq c^*_r;\qquad {\bf Case\ 2:}\ 0<c^*_r < \hat{c}_1. $$ The domain of $c$ in \eqref{lc0} is $(c^*_l, c^*_1)\cap (c^*_l, c^*_r)$ in {\bf Case\ 1}, and it is $(c^*_l, c^*_r)$ in {\bf Case\ 2}. In {\bf Case 1} (see Figure 2), for any $\beta_l\geq 0$, since $D_l(\hat{c}_1 ;\beta_l) \geq 0$ and $D_l (c^*_l;\beta_l) <0$, $D_l(c;\beta_l)=0$ has a unique root $c = C_l (\beta_l)\in (c^*_l , \hat{c}_1)$. Moreover, $C_l (\beta_l)$ is strictly decreasing in $\beta_l$, $C_l(\beta_l)\rightarrow \hat{c}_1 >0$ as $\beta_l\rightarrow0$ and $C_l(\beta_l)\rightarrow c^*_l <0 $ as $\beta_l\rightarrow\infty$. In {\bf Case 2}, $D_l (c^*_r;0) <0$ and $D_l (c^*_r;\beta_l) >0$ for large $\beta_l>0$. Hence there exists a unique $\beta^0_l >0$ such that $$ D_l (c^*_r; \beta^0_l) = \alpha \phi'_1(0;c^*_r)+\beta^0_l \phi'_2 (0;c^*_r)+c^*_r =0. $$ It is easily seen that $D_l(c;\beta_l)=0$ has a root $c=C_l(\beta_l)$ if and only if $\beta_l \geq \beta^0_l$. 
In summary we have \begin{equation}\label{Cl} \frac{\partial C_l(\beta_l)} {\partial \beta_l}<0, \mbox{ and } \left\{ \begin{array}{ll} C_l(0)=\hat{c}_1 >0,& \ C_l(\infty)\rightarrow c^*_l <0 \mbox{ in {\bf Case\ 1}},\\ C_l(\beta^0_l)= c^*_r >0, & \ C_l(\infty)\rightarrow c^*_l <0 \mbox{ in {\bf Case\ 2}}. \end{array} \right. \end{equation} This gives the zeros of $D_l(c;\beta_l)$ in \eqref{lc0}. Similarly, define \begin{equation}\label{rc0} D_r(c;\beta_r) :=\beta_r \phi'_2(h_c ; c)+\gamma \phi'_3 (0;c)+c. \end{equation} Denote $\hat{c}_3$ the unique root of $$ D_r(c;0) = \gamma \phi'_3(0;c) +c =0, $$ then $c^*_3 < \hat{c}_3 <0$. On their relations with $c^*_l$ we have two cases: $$ {\bf Case\ I:}\ c^*_l \leq \hat{c}_3 < 0;\qquad {\bf Case\ II:}\ \hat{c}_3 < c^*_l <0. $$ So the domain of $c$ in \eqref{rc0} is $(c^*_3, c^*_r)\cap (c^*_l, c^*_r)$ in {\bf Case\ I}, and it is $(c^*_l, c^*_r)$ in {\bf Case\ II} (see Figure 2). In a similar way as above we see that the unique root $c = C_r (\beta_r)$ of $D_r(c;\beta_r)=0$ satisfies \begin{equation}\label{Cr} \frac{\partial C_r(\beta_r)} {\partial \beta_r}>0, \mbox{ and } \left\{ \begin{array}{ll} C_r(0)=\hat{c}_3 <0,& \ C_r(\infty)\rightarrow c^*_r >0 \mbox{ in {\bf Case\ I}},\\ C_r(\beta^0_r)= c^*_l <0, & \ C_r(\infty)\rightarrow c^*_r >0 \mbox{ in {\bf Case\ II}}, \end{array} \right. \end{equation} where $\beta^0_r >0$ is the unique root of $$ D_r(c;\beta_r) :=\beta_r \phi'_2(h_{c^*_l} ; c^*_l)+\gamma \phi'_3 (0;c^*_l)+c^*_l =0. $$ This gives the zeros of $D_r(c;\beta_r)$ in \eqref{rc0}. \medskip Now we try to find some $c$ such that both $D_l(c;\beta_l)=0$ and $D_r(c;\beta_r)=0$ hold at the same time for suitably chosen pair: $(\beta_l, \beta_r)$. This will give a solution of \eqref{sanlianbo}. First we consider {\bf Case 1} and {\bf Case II} (see Figure 3). In this case the range of $C_l(\beta_l)$ is $(c^*_l, \hat{c}_1]$, and the range of $C_r(\beta_r)$ is $(c^*_l, c^*_r)$. 
Therefore, for any $c\in (c^*_l, \hat{c}_1)$, there exist a unique $\beta_l = \beta_l (c)$ and a unique $\beta_r=\beta_r (c)$ such that $$ D_l(c;\beta_l (c)) = D_r(c;\beta_r (c))=0\quad \mbox{ and so } \ C_l(\beta_l (c))=c=C_r(\beta_r(c)). $$ Moreover, by the definition of $D_l(c;\beta_l)$ (resp. $D_r(c;\beta_r)$) we see that $\beta_l(c)$ (resp. $\beta_r(c)$) is a strictly decreasing (resp. increasing) function of $c$. Other cases can be studied similarly. In summary we have the following result. \begin{thm}\label{thm:exist} In {\bf Case 1} and {\bf Case I} (resp. in {\bf Case 1} and {\bf Case II}, {\bf Case 2} and {\bf Case I}, {\bf Case 2} and {\bf Case II}), for any $c\in (\hat{c}_3, \hat{c}_1)$ (resp. $c\in (c^*_l, \hat{c}_1)$, $c\in (\hat{c}_3, c^*_r)$, $c\in (c^*_l, c^*_r)$), there exists a unique positive pair $(\beta_l (c), \beta_r(c))$ such that \eqref{sanlianbo} has a solution. \end{thm} Finally we study the sign of $c$. Again we only consider {\bf Case 1} and {\bf Case II}, since the other cases are treated similarly. For $\tilde{\beta}_l$ and $\tilde{\beta}_r$ defined in \eqref{tilde beta}, since $$ D_l(0;\tilde{\beta}_l)=\alpha \phi'_1(0;0)+\tilde{\beta}_l \phi'_2(0;0)= -\alpha\sqrt{2\int_0^1 f_1(s)ds}+ \tilde{\beta}_l\sqrt{2\int_0^\sigma f_2(s)ds} =0, $$ we have $\beta_l (0) = \tilde{\beta}_l$. Similarly, $\beta_r (0) = \tilde{\beta}_r$. Observing Figure 3, one can find that $$ c>0 \ \Leftrightarrow \ 0< \beta_l<\tilde{\beta}_l\ \Leftrightarrow \ \beta_r > \tilde{\beta}_r . $$ This completes the proof of Theorem \ref{thm:existence2}. \qed \medskip {\bf Acknowledgement}. The authors would like to thank Professor C.C. Chen for sending them the paper \cite{CC}, and to thank Professors M. Nagayama, K.I. Nakamura and Y. Yamada for helpful discussions.
\section{Introduction} In recent years, we have witnessed the booming development of low earth orbit (LEO) satellite networks. Companies such as SpaceX, Amazon, and OneWeb are accelerating the formation of networks of tens of thousands of LEO satellites \cite{del2019technical}. Since LEO satellite communication has relatively low latency and a unique ability to provide seamless global coverage \cite{kodheli2020satellite}, \cite{yaacoub2020key}, some real-time communication services are being moved from the ground to space \cite{c2014system}. In terms of low latency and ultra-long-distance communication, LEO satellite networks have clear advantages over ground networks and high-orbit satellite networks \cite{chaudhry2020free}. In ultra-long-distance communication, multiple satellites are used as relays to complete multi-hop routing. How to select the relay satellites to achieve minimum-latency routing thus becomes one of the key challenges \cite{tang2018multipath}, \cite{he2016delay}. \par Unlike in traditional planar routing, satellites are distributed on a closed sphere, and the maximum distance between satellites is limited by earth blockage \cite{al2021analytic}. For a network in which the number and locations of satellites are constantly changing, it is more challenging to implement routing in a time-varying topology than in a traditional static topology \cite{sun2020routing}. For small LEO satellites, both computing and storage capacities are limited \cite{zhang2021service}. In a massive LEO satellite network, frequent position changes lead to high computational costs. In addition, each satellite in most cases collects only the current state of its neighbors, which means that it is highly demanding for a single satellite to obtain and store global information such as the locations of all satellites.
However, using only local information yields only an approximate shortest path, which offers limited improvement in the latency performance of the whole constellation \cite{li2019temporal}. \par Existing routing schemes provide strategies that address some of these challenges, but they are not suitable for dynamic, large-scale satellite constellations. Stochastic geometry provides a powerful mathematical method for routing in massive constellations. The coverage probability of LEO satellite constellations and two-dimensional planar routing have been studied based on stochastic geometry. Building on these studies, we propose an algorithm to solve the routing problem in a dynamic constellation. At the end of this section, the contributions of this paper are described in more detail. \subsection{Related Work} Most existing LEO satellite routing schemes are based on the store-and-forward mechanism \cite{lu2015complexity}, \cite{lu2019some}, which undoubtedly brings considerable delay. The following algorithms can achieve real-time communication in specific scenarios \cite{li2019temporal}, \cite{wang2019adaptive}, \cite{pan2019opspf}. In \cite{wang2019adaptive}, medium- and high-orbit satellites are used to collect and exchange global information to find a minimum-latency route for low-orbit satellites. However, due to the increased complexity of the algorithm, this method is suitable only for small-scale networks, not for massive dynamic networks. In \cite{pan2019opspf}, the latency is effectively reduced by exploiting the regular motion of the satellites, with no need to collect global information or to pay a high computational cost. However, the algorithm is only suitable for a specific small network composed of 8 satellites, and it cannot optimize link latency. Compared to \cite{pan2019opspf}, the algorithm in \cite{li2019temporal} is locally optimal and scalable. By dividing the sphere into many grids, each satellite is located by its grid.
However, that algorithm has quadratic complexity and is only suitable for static topologies. In addition, from a global point of view, it is difficult to guarantee a lower bound for the algorithm. \par For massive dynamic satellite networks, the main reason existing routing algorithms cannot combine low complexity with global optimization is that they are designed for specific constellations and for the specific behavior of each satellite. As an effective mathematical tool, stochastic geometry is especially suitable for analyzing network topology at the system level \cite{haenggi2012stochastic}. So far, several methods have been developed to analyze LEO satellite systems based on stochastic geometry. A binomial point process (BPP) is used to model a closed-area network with a finite number of satellites in \cite{talgat2020stochastic} and \cite{ok-1}. \cite{ok-1}, \cite{talgat2020nearest} and \cite{Al-1} give different forms of the contact distance distribution, i.e., the distribution of the distance between a reference point and the nearest satellite. The contact distance distribution provides an important theoretical basis for the analysis in this article. \par In addition, there are several two-dimensional planar routing strategies based on stochastic geometry \cite{stamatiou2010delay}, \cite{dhillon2015wireless}, \cite{farooq2015stochastic}. Among them, \cite{sasaki2017energy} and \cite{routingimportant} introduce the concept of a reliable region, which ensures that routing can always follow the established direction. The concept of routing efficiency is used to measure the maximum gap between the proposed routing strategy and the optimal one \cite{routingimportant}. By sacrificing optimality, a sub-optimal routing strategy is given on the premise that only local information is available \cite{richter2018optimal}. Following this idea, the optimal routing is first derived in an ideal scenario. 
Then a sub-optimal routing strategy is proposed for the case where only local information is available. \subsection{Contribution} To the best of our knowledge, this is the first study of satellite routing based on stochastic geometry. The contributions can be summarized as follows: \begin{itemize} \item Three propositions are given for the ideal scenario in which satellites are available at any location. Based on these propositions, we provide a solution for the ideal scenario and use it as a performance upper bound for the proposed algorithm. \item The equal interval, minimum deflection angle, and maximum step size relay strategies are derived from the propositions for the ideal scenario. We obtain the proposed algorithm by improving the equal-interval relay strategy; the remaining two serve as baselines. \item We provide two approximations to estimate the gap between the algorithm and the best possible solution. Numerical results show that these two approximations give tight upper and lower bounds on the algorithm's delay. \item For three deterministic LEO satellite constellations, we analyze the algorithm complexity and the average and maximum search areas required to find at least one satellite. \item We study the influence of parameters such as the communication distance, constellation height, and number of satellites on latency. 
\end{itemize} \begin{table*}[] \centering \caption{Summary of Notations.} \begin{tabular}{|M{2.8cm} | M{12cm}|} \hline \textbf{Notation} & \textbf{Description} \\ \hline \hline $N_{\rm{Sat}}$; $n$; $ N_{\min}$ & Number of satellites; number of hops that one link contains; minimum number of hops \\ \hline ${r_\oplus}$; $r_{\rm{Sat}}$; $r$ & Radius of the Earth; height of the satellite orbits; radius of the sphere where satellites locate \\ \hline $\mathcal{H}$; $h_i$; $d_i$ & The set which contains the IDs of a link; ID of the $i^{th}$ satellite; distance of the $i^{th}$ hop \\ \hline $x_i$, $\theta_i$, $\varphi_i$; $\Phi$ & The location, polar angle, azimuth angle of the $i^{th}$ satellite; the homogeneous BPP \\ \hline $T$; $\varepsilon$ & Latency of the multi-hop link; link tolerable probability of interruption \\ \hline $\theta_i^h$; $\theta_{_{02n}}^h$ & Dome angle between satellites $x_{h_{i-1}}$ and $x_{h_i}$; starting satellite $x_{h_0}$ and ending satellite $x_{h_n}$ \\ \hline $d_{\max}$; ${\theta_{\max}} $ & Maximum communication distance; upper bound of dome angle between satellites \\ \hline $\theta_0$; $\theta_r$ & Contact angle; reliable angle \\ \hline $\widetilde{E}_1$; $\widetilde{E}_2$ & Contour integral approximation of the efficiency; binomial approximation of the efficiency \\ \hline \end{tabular} \end{table*} \section{Optimal Routing Scheme} Let us consider a scenario where two satellites are too far apart to communicate directly. Several satellites act as relays to complete multi-hop satellite to satellite link communication. \subsection{Problem Formulation} To formalize the problem, this section introduces ({\romannumeral1}) satellite distribution, ({\romannumeral2}) link routing model, ({\romannumeral3}) coordinate system and ({\romannumeral4}) optimization problem in order. 
\par Consider a massive constellation composed of $N_{\rm{Sat}}$ satellites, which are independently distributed on a spherical surface according to a homogeneous Binomial Point Process (BPP) \cite{ok-1}. The radius of the sphere is denoted as $r={r_\oplus}+r_{\rm{Sat}}$, where $r_\oplus=6371\rm{ \,Km}$ is the radius of the Earth, and $r_{\rm{Sat}}$ is the height of the satellite orbits. \par The latency required for transmission is often measured in milliseconds, which is much smaller than the orbital period of LEO satellites. The change of satellite positions during a single routing computation is therefore negligible. A transmission from one satellite to another is called a hop. A link with $n$ hops can be expressed as $\mathcal{H}=\{h_0,h_1,...,h_n\}$. $h_i$ is the ID of the $i^{th}$ satellite, which is a positive integer not exceeding $N_{\rm{Sat}}$. $x_{h_0}$ and $x_{h_n}$ are the positions of the starting point and the ending point, respectively. \par \begin{figure*}[t] \centering \includegraphics[width=0.98\linewidth]{figure1.png} \caption{Explanatory figure of the three propositions.} \label{fig:System model} \vspace{-0.4cm} \end{figure*} Since the distribution of satellites forms a homogeneous BPP, a rotation of the coordinate system does not affect the distribution. Set the center of the Earth as the origin. All satellites have the same radial distance $r$. We establish the coordinate system from the coordinates of the starting satellite $x_{h_0}$ and the ending satellite $x_{h_n}$ of the multi-hop link. As is shown in Fig.~\ref{fig:System model}, the $x$-axis is parallel to the line segment between $x_{h_0}$ and $x_{h_n}$, and the $z$-axis is the midperpendicular of this segment, so the $y$-coordinates of $x_{h_0}$ and $x_{h_n}$ are 0. Since satellites are distributed on a sphere, spherical coordinates are more practical than rectangular coordinates. The coordinate $(r,\theta_i,\varphi_i)$ is used to represent the location of the $i^{th}$ satellite $x_i$. 
$\theta_i$ and $\varphi_i$ are the polar and azimuth angles, respectively. Furthermore, the homogeneous BPP is denoted as $\Phi=\{x_1,x_2,...,x_{N_{\rm{Sat}}}\}$. $d_i$ is used to describe the distance of the $i^{th}$ hop, that is, the spatial distance from $x_{h_{i-1}}$ to $x_{h_i}$, \begin{equation} \begin{split} \label{d_i} d_i = r \bigg[ 2\Big(1 - \cos{\theta_{h_{i-1}}}\cos{\theta_{h_i}} -\sin{\theta_{h_{i-1}}}\sin{\theta_{h_i}}\cos\left(\varphi_{h_{i-1}}-\varphi_{h_i}\right)\Big)\bigg]^{\frac{1}{2}}, \end{split} \end{equation} where $i=1,2,...,n$. \par To minimize the latency by selecting the number of relay satellites and their positions, we consider the following optimization problem, \begin{subequations} \begin{alignat}{2} \mathscr{P}_0:\quad &\underset{ {n,\mathcal{H}}}{\text{minimize}} &\quad& T = \frac{1}{c}\sum_{i=1}^{n} d_i, \label{eq:opt}\\ &\textrm{subject to:} & & d_i\leq 2\sqrt{r^2-r_\oplus^2}, \; \forall i, \label{st:constraint} \\ & & & d_i\leq d_{\max}, \; \forall i. \label{st:constraint2} \end{alignat} \end{subequations} \par In (\ref{eq:opt}), the optimization objective is the latency of the multi-hop link, where $c=3\times10^2 \rm{ \,Km/ms}$ is the propagation speed of the laser signal. Constraint (\ref{st:constraint}) guarantees that the satellites are within line-of-sight of each other \cite{ok-1}, and constraint (\ref{st:constraint2}) limits the maximum communication distance $d_{\max}$ between satellites. Note that we omit power constraints in $\mathscr{P}_0$. Since the objective function depends on both the number and the positions of the satellites, the problem is not convex. \subsection{The Ideal Scenario Solution}\label{The Ideal Scenario Solution} To make the problem more manageable, we start with an ideal scenario, which assumes satellites are available anywhere on the sphere. Before solving the optimization problem $\mathscr{P}_0$, the following definitions are required. 
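As a quick numerical sanity check, the hop distance (\ref{d_i}) and the latency objective (\ref{eq:opt}) are straightforward to evaluate; the short Python sketch below does so for a toy three-satellite link (the orbit height and hop coordinates are illustrative choices, not values from this paper).

```python
import math

R_EARTH = 6371.0   # km, Earth radius (r_oplus)
C = 3.0e2          # km/ms, laser propagation speed c used in the objective

def hop_distance(r, th1, ph1, th2, ph2):
    """Chord length between two satellites on the sphere of radius r, eq. (d_i)."""
    cos_dome = (math.cos(th1) * math.cos(th2)
                + math.sin(th1) * math.sin(th2) * math.cos(ph1 - ph2))
    return r * math.sqrt(2.0 * (1.0 - cos_dome))

def link_latency(r, hops):
    """T = (1/c) * sum of hop distances; hops is a list of (theta, phi) pairs."""
    return sum(hop_distance(r, *a, *b) for a, b in zip(hops, hops[1:])) / C

# Toy example: a 550 km orbit and two equatorial hops of 0.2 rad each.
r = R_EARTH + 550.0
hops = [(math.pi / 2, 0.0), (math.pi / 2, 0.2), (math.pi / 2, 0.4)]
T = link_latency(r, hops)   # latency in ms
```

Each equatorial hop above has chord length $2r\sin(0.1)\approx 1382$ km, so the two-hop link takes roughly 9.2 ms.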
\begin{definition}[Central Angle] For a circle passing through satellites A and B, the central angle of the circle is the angle between the line connecting A and the center of the circle and the line connecting B and the center of the circle. \end{definition} \begin{definition}[Dome Angle] For a circle centered at the origin and passing through satellites A and B, the central angle of this specific circle is called the dome angle. \end{definition} \begin{definition}[Shortest Inferior Arc] The circle centered at the origin, with radius $r$, passing through the starting point $x_{h_0}$ and the ending point $x_{h_n}$, is divided into two arcs by $x_{h_0}$ and $x_{h_n}$. The arc with the shorter arc length is called the shortest inferior arc. \end{definition} \par An ideal solution of problem $\mathscr{P}_0$ is derived through the following three propositions. \par \begin{proposition}\label{prop1} In the ideal scenario, the optimal positions $x_{h_i}^*$ in $\mathscr{P}_0$ are located on the shortest inferior arc. \end{proposition} \begin{proof} See Appendix~\ref{app:prop1}. \end{proof} Based on proposition~\ref{prop1}, all satellites are assumed to be located on the shortest inferior arc. Therefore, an equivalent problem for $\mathscr{P}_0$ is given by, \begin{subequations} \begin{alignat}{2} \mathscr{P}_1:\quad &\underset{ {n,\mathcal{H}}}{\text{minimize}} &\quad& T = \frac{1}{c}\sum_{i=1}^{n} 2r\sin\left(\frac{\theta_i^h}{2}\right),\label{opt2}\\ &\textrm{subject to:} & & \theta_i^h\leq 2\arccos\left(\frac{r_\oplus}{r}\right), \; \forall i, \label{st:constraint2-1}\\ & & & \theta_i^h\leq 2\arcsin\left(\frac{d_{\max}}{2r}\right), \; \forall i,\label{st:constraint2-2}\\ & & & \sum_{i=1}^{n} \theta_i^h = \theta_{_{02n}}^h,\label{st:constraint2-3} \end{alignat} \end{subequations} where $\theta_i^h$ is the dome angle between satellites $x_{h_{i-1}}$ and $x_{h_i}$. 
As is shown in Fig.~\ref{fig:angles}, $\theta_{_{02n}}^h$ is the dome angle between the starting satellite $x_{h_0}$ and the ending satellite $x_{h_n}$, which is given as, \begin{equation}\label{theta_02n} \begin{split} \theta_{_{02n}}^h = 2\arcsin \bigg( \frac{\sqrt{2}}{2}\Big(1 - \cos{\theta_{h_0}}\cos{\theta_{h_n}} -\sin{\theta_{h_0}}\sin{\theta_{h_n}}\cos\left(\varphi_{h_0}-\varphi_{h_n}\right)\Big)^{\frac{1}{2}} \bigg). \end{split} \end{equation} ${\theta_{_{02n}}^h}$ is also defined as the dome angle of the multi-hop link. It follows directly from formula (\ref{d_i}) and the chord relation $d=2r\sin(\theta/2)$. The following proposition further specifies the distribution of the relay satellite positions. \begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{figure3.pdf} \caption{Schematic diagram of equal interval routing strategy.} \label{fig:angles} \vspace{-0.4cm} \end{figure} \begin{proposition}\label{prop2} In the ideal scenario, for an $n$-hop link, if the satellites are located on the shortest inferior arc, the optimal dome angle $\theta_i^{h*}$ in $\mathscr{P}_1$ is equal to ${\theta_{_{02n}}^h}/{n}$. \end{proposition} \begin{proof} See Appendix~\ref{app:prop2}. \end{proof} Proposition~\ref{prop2} decreases the delay through the equally spaced distribution of relay satellites, while proposition~\ref{prop3} minimizes the latency by determining the optimal number of hops. Both propositions are illustrated in Fig.~\ref{fig:System model}. \begin{proposition}\label{prop3} In the ideal scenario, assume the satellites are equally spaced on the shortest inferior arc; then the optimal number of hops is \begin{equation}\label{N_min} N_{\min}=\bigg\lceil \frac{\theta_{_{02n}}^h}{\theta_{\max}} \bigg\rceil + 1, \end{equation} where $\lceil . \rceil$ means rounding up to an integer, and \begin{equation}\label{theta_max} \theta_{\max} = \min\left\{2\arccos\left(\frac{r_\oplus}{r}\right),2\arcsin\left(\frac{d_{\max}}{2r}\right)\right\}. 
\end{equation} \begin{proof} See Appendix~\ref{app:prop3}. \end{proof} \end{proposition} In proposition~\ref{prop3}, $\theta_{\max}$ is the upper bound of the dome angle between satellites that have established communication links. $\theta_{\max}$ ensures that two satellites are within the LoS region of each other and within the maximum communication distance $d_{\max}$. By combining the above propositions, the globally optimal solution to problem $\mathscr{P}_0$ under ideal conditions is given by the following theorem. \begin{theorem}\label{ideal solution} In the ideal scenario, the globally optimal multi-hop link in $\mathscr{P}_0$ has $N_{\min}$ hops, the relay satellites are located on the shortest inferior arc at equal intervals, and the dome angle of each hop is ${\theta_{_{02n}}^h}/{N_{\min}}$. \end{theorem} \subsection{Practical Strategies Discussion} Although the optimal solution is derived in section~\ref{The Ideal Scenario Solution}, it cannot be applied in practice because an infinite number of satellites would be required. Based on propositions 1\,-\,3, we design three strategies that transition multi-hop routing from the ideal scenario to the practical situation with a limited number of satellites. Fig.~\ref{fig:Examples of four algorithms}, a top view along the direction of the negative $z$-axis, gives an example of these strategies. \par In the \textbf{minimum deflection angle strategy}, each satellite looks for the satellite with the least deflection from the shortest inferior arc as its next hop. Only satellites satisfying the distance constraints are eligible to be relay satellites. The next-hop satellite also needs to be closer to the ending satellite than the current one, ensuring that each hop keeps approaching the destination satellite. These requirements also need to be met in the two subsequent strategies. 
From the algorithm's perspective, since $\theta=0$ for the shortest inferior arc, the strategy finds the satellite with the minimum value of $\theta_i$ that meets the requirements. \par The \textbf{equal interval strategy} finds the nearest satellite to each optimal relay position obtained under the ideal scenario. As an intuitive extension of the ideal scenario solution, this strategy can achieve extremely low latency. The cost of the low delay is poor reliability, since relay satellites are likely to violate constraints (\ref{st:constraint}) and (\ref{st:constraint2}). \par In the \textbf{maximum step size strategy}, each satellite chooses the farthest satellite within communication range as its next hop. It reduces the number of hops as much as possible on the premise of ensuring successful communication. To avoid relay satellites straying too far from the shortest inferior arc, we set up a reliable region, shown as the dark area in Fig.~\ref{fig:Examples of four algorithms}. \begin{figure*}[t] \centering \includegraphics[width=0.9\linewidth]{figure4.pdf} \caption{An example of three strategies.} \label{fig:Examples of four algorithms} \vspace{-0.4cm} \end{figure*} As a result, the minimum deflection angle and maximum step size strategies are set as baselines. The proposed algorithm is designed on the basis of the equal interval strategy and is shown to achieve the lowest latency together with high reliability. \section{Algorithm Design and Performance Analysis}\label{The Practical Solution} In this section, we first determine the number of hops of the multi-hop link by introducing the contact angle and the reliable angle. After that, a complete nearest neighbor search algorithm is given, and its reliability is analyzed. Finally, we define the link efficiency to measure the maximum gap between the algorithm's delay and the best possible solution. 
\subsection{Contact Angle and Reliable Angle} Since the interval ${\theta_{_{02n}}^h}/{n}$ decreases as the number of hops $n$ increases, one way to improve the reliability of the equal interval strategy is to increase $n$. However, the latency also increases with $n$, as implied by proposition~\ref{prop3}. To choose a proper $n$ that balances latency and reliability, the concepts of the contact angle $\theta_0$ and the reliable angle $\theta_r$ are introduced first, as shown at the top of Fig.~\ref{fig:angles}. \begin{definition}[Contact Angle] The contact angle is the dome angle between a randomly placed reference point and the closest point of the process (the nearest satellite in this article). \end{definition} \par Since the satellites form a uniform BPP, any randomly selected reference point has the same contact angle distribution. \begin{lemma}\label{CDF of contact} The \ac{CDF} of the contact angle is obtained as, \begin{equation} F_{\theta_0}\left(\theta\right) = 1 - \left( \frac{1+\cos \theta}{2} \right) ^ {N_{\rm{Sat}}}, \ 0 \leq \theta \leq \theta_{\max}, \end{equation} where $\theta_{\max}$ is defined in (\ref{theta_max}). \begin{proof} See Appendix~\ref{app:CDF of contact}. \end{proof} \end{lemma} Based on Lemma~\ref{CDF of contact}, the \ac{PDF} of the contact angle can be obtained by taking the derivative of the \ac{CDF} with respect to $\theta$. \begin{lemma}\label{PDF of contact} The \ac{PDF} of the contact angle is obtained as, \begin{equation} f_{\theta_0}\left(\theta\right) = \frac{N_{\rm{Sat}} }{2} \sin{\theta} \left( \frac{1+\cos \theta}{2} \right) ^ {N_{\rm{Sat}}-1}, \ 0 \leq \theta \leq \theta_{\max}, \end{equation} where $\theta_{\max}$ is defined in (\ref{theta_max}). \end{lemma} \begin{definition}[Reliable Angle] The reliable angle is the minimum dome angle that ensures that at least one satellite can be found within a specified range. 
\end{definition} However, even given a large search region, no satellite may be available because of randomness. Therefore, we can only guarantee that the probability of not finding any satellite is lower than an acceptable threshold. The value of the reliable angle depends on this predefined threshold. \begin{definition}[Link Tolerable Probability of Interruption] The link tolerable probability of interruption $\varepsilon$ is the upper bound of the probability that no satellite is available within the reliable angle range in at least one hop. \end{definition} The absence of an available satellite within the reliable angle range does not mean that the hop will be interrupted, because the interruption also depends on the location of the other relay satellite. Therefore, $\varepsilon$ is not equivalent to the average link interruption probability but rather an upper bound on it. In addition, $\varepsilon$ can be regarded as a system parameter determined by the requirements rather than an optimization variable. For a fixed $\varepsilon$, the more hops the link has, the higher the reliability required for a single hop. Therefore, the reliable angle $\theta_r$ is a monotonically increasing function of $n$. The following lemma gives the relationship among the reliable angle $\theta_r$, the link tolerable probability of interruption $\varepsilon$, and the number of hops $n$. \begin{lemma}\label{reliable angle with n} For an $n$-hop link with link tolerable probability of interruption $\varepsilon$, the reliable angle $\theta_r$ is given by, \begin{equation} \theta_r\left(n\right) = \arccos \left( 2 \left( 1 - \left( 1 - \varepsilon \right)^{\frac{1}{n}} \right)^{\frac{1}{N_{\rm{Sat}}}} - 1 \right). \end{equation} \begin{proof} See Appendix~\ref{app:reliable angle with n}. \end{proof} \end{lemma} \subsection{Type-\uppercase\expandafter{\romannumeral1} Interruption Analysis} Through the above analysis, the following results about the number of hops $n$ can be summarized. 
The latency increases monotonically with $n$. The relationship between the interruption probability and the number of hops is less intuitive. Increasing $n$ requires a lower interruption probability for a single hop but provides a larger area for finding a satellite. If $n$ is too large and the single-hop interval is too small, the two relay positions of one hop may choose the same satellite, which leads to severe errors. Furthermore, on the premise that the probability of a type-\uppercase\expandafter{\romannumeral2} interruption is kept below $\varepsilon$, the number of hops should be as small as possible. To satisfy the distance constraints, the dome angle of each hop $\theta_i^{h}$ should satisfy, \begin{equation} \theta_i^{h} + 2 \theta_r\left( n \right) \leq \theta_{\max}. \end{equation} To ensure that the multi-hop communication can be completed within $n$ hops, we have, \begin{equation} n \cdot \theta_i^{h} \geq \theta_{_{02n}}^h. \end{equation} By combining the above two inequalities, a necessary condition on $\theta_r\left( n \right)$ is obtained, \begin{equation} n \left( \theta_{\max} - 2 \theta_r\left( n \right) \right) \geq \theta_{_{02n}}^h. \end{equation} To avoid the possibility of selecting the same satellite for the two relay positions of a single hop, an upper bound on $\theta_r\left( n \right)$ is required, \begin{equation} \theta_r\left( n \right) \leq \frac{\theta_{\max}}{2}. \end{equation} The following algorithm finds, through iteration, the minimum number of hops satisfying both conditions. \begin{algorithm}[!ht] \caption{Iterative Method for Deriving the Number of Hops} \label{alg.number of hops} \begin{algorithmic} [1] \STATE \textbf{Input}: Dome angle $\theta_{_{02n}}^h$, number of satellites $N_{\rm{Sat}}$ and link tolerable probability of interruption $\varepsilon$. \STATE $n\leftarrow N_{\min}$. 
\STATE $\theta_r \leftarrow \arccos \left( 2 \left( 1 - \left( 1 - \varepsilon \right)^{\frac{1}{n}} \right)^{\frac{1}{N_{\rm{Sat}}}} - 1 \right)$. \WHILE{$\frac{1}{2} \left( \theta_{\max} - \frac{\theta_{_{02n}}^h}{n} \right) \leq \theta_r \leq \frac{1}{2} \theta_{\max}$} \STATE $n \leftarrow n + 1$. \STATE $\theta_r \leftarrow \arccos \left( 2 \left( 1 - \left( 1 - \varepsilon \right)^{\frac{1}{n}} \right)^{\frac{1}{N_{\rm{Sat}}}} - 1 \right)$. \ENDWHILE \STATE \textbf{Output}: Minimum number of hops $n$ and reliable angle $\theta_r$. \end{algorithmic} \end{algorithm} \par Note that the minimum number of hops does not depend on the specific positions of the relay satellites, and it can be expressed as, \begin{equation} \begin{split} &\hat{N}_{\min} = \min\Bigg\{ n: \left( \theta_{\max} - 2\theta_r \left(n-1\right) \right)\left( \theta_{\max} - 2\theta_r \left(n\right) \right) < 0 \ \ \mathbf{or} \\ & \left( \theta_{\max} - \frac{\theta_{_{02n}}^h}{n-1} - 2\theta_r\left(n-1\right) \right)\left( \theta_{\max} - \frac{\theta_{_{02n}}^h}{n} - 2\theta_r \left(n\right) \right) < 0 \Bigg\}, \end{split} \end{equation} which is another representation of step (4) of the algorithm. Both $\theta_r\left(n\right)$ and $\frac{1}{2} \left( \theta_{\max} - {\theta_{_{02n}}^h}/{n} \right)$ increase with $n$. When the algorithm exits the loop because $\frac{1}{2} \left( \theta_{\max} - {\theta_{_{02n}}^h}/{n} \right) > \theta_r\left(n\right)$ is satisfied, the output $n$ is the required minimum number of hops. Otherwise, when the algorithm exits the loop because $2\theta_r\left( n \right) > \theta_{\max}$, no value of $n$ guarantees the tolerable probability of interruption $\varepsilon$. For a constellation with a small number of satellites, it is not realistic to guarantee a low tolerable probability of interruption. Such problems, caused by poor system design, are defined as type-\uppercase\expandafter{\romannumeral1} interruptions. 
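Algorithm~\ref{alg.number of hops} is compact enough to sketch directly in Python. The sketch below is a minimal rendering under two stated assumptions: the chord relation $d=2r\sin(\theta/2)$ is used to convert $d_{\max}$ into a dome-angle bound, and the function names are ours, not part of the paper. The Starlink-scale parameters in the last line (550\,Km altitude, 11927 satellites) are the ones used in the reliability analysis later in the paper.

```python
import math

def reliable_angle(n, n_sat, eps):
    """Lemma 3: reliable angle theta_r(n) for an n-hop link with tolerance eps."""
    return math.acos(2.0 * (1.0 - (1.0 - eps) ** (1.0 / n)) ** (1.0 / n_sat) - 1.0)

def min_hops(theta_02n, r, r_earth, d_max, n_sat, eps):
    """Algorithm 1: iterate n upward from N_min until the reliable angle fits.
    Returns (n, theta_r), or (None, theta_r) on a type-I interruption."""
    # Dome-angle bound per hop (LoS and d_max, via chord d = 2 r sin(theta/2)).
    theta_max = min(2.0 * math.acos(r_earth / r),
                    2.0 * math.asin(d_max / (2.0 * r)))
    n = math.ceil(theta_02n / theta_max) + 1          # N_min from proposition 3
    while True:
        th_r = reliable_angle(n, n_sat, eps)
        if th_r > 0.5 * theta_max:                    # no feasible n: type-I
            return None, th_r
        if th_r < 0.5 * (theta_max - theta_02n / n):  # loop condition fails: done
            return n, th_r
        n += 1

# Starlink-scale example: 550 km orbit, 11927 satellites, antipodal endpoints.
n, th_r = min_hops(math.pi, 6371.0 + 550.0, 6371.0, 3000.0, 11927, eps=0.1)
```

For these inputs the sketch yields $n=9$ and $\theta_r\approx 0.0386$, matching the Starlink column of the reliability table in the numerical results. Termination is guaranteed because $\theta_r(n)\to\pi$ as $n\to\infty$, so one of the two exit conditions is eventually met.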
\par \begin{definition}[Type-\uppercase\expandafter{\romannumeral1} interruption] Type-\uppercase\expandafter{\romannumeral1} interruption is a qualitative indicator describing the rationality of the multi-hop communication system design. \end{definition} \par In addition to running the algorithm to determine whether a type-\uppercase\expandafter{\romannumeral1} interruption occurs, the following proposition provides a sufficient condition for the type-\uppercase\expandafter{\romannumeral1} interruption not to occur. \begin{proposition}\label{Minimum Nsat} If there exists a $\theta_{t}<\frac{1}{2}\theta_{\max}$ such that the following inequality is satisfied, there must be a routing scheme that makes the probability that no satellite is available within the reliable angle range in at least one hop lower than $\varepsilon$, \begin{sequation}\label{sufficient condition} N_{\rm{Sat}} \geq \frac{1}{\ln \left( \frac{1 + \cos \theta_t}{2} \right)} \ln \left( 1 - \left( 1 - \varepsilon \right)^{1 \big/ \left( \Big\lceil \frac{\theta_{_{02n}}^h}{\theta_{\max}-2\theta_t} \Big\rceil + 1\right) } \right), \end{sequation} where $\theta_{_{02n}}^h$ and $\theta_{\max}$ are defined in (\ref{theta_02n}) and (\ref{theta_max}) respectively. \begin{proof} See Appendix~\ref{app:Minimum Nsat}. \end{proof} \end{proposition} \subsection{Type-\uppercase\expandafter{\romannumeral2} Interruption Analysis and Nearest Neighbor Search Algorithm} Even if the constellation is suitable for multi-hop transmission, communication interruptions may still happen due to the randomness of satellite positions. Such interruptions are defined as type-\uppercase\expandafter{\romannumeral2} interruptions. \begin{definition}[Type-\uppercase\expandafter{\romannumeral2} Interruption] Type-\uppercase\expandafter{\romannumeral2} interruption is an event that happens when the distance in any hop does not satisfy at least one constraint in $\mathscr{P}_0$. 
\end{definition} \par Although the two types of interruptions happen for different reasons, their occurrences are not independent: a type-\uppercase\expandafter{\romannumeral1} interruption often leads to a type-\uppercase\expandafter{\romannumeral2} interruption. Because a type-\uppercase\expandafter{\romannumeral2} interruption cannot be avoided by the parametric design of the satellite constellation, we deal with it after it occurs. If the two satellites of a hop are too far apart to communicate, the satellite at the starting point of the hop looks for one or several satellites closest to the shortest inferior arc as relays. As shown at the bottom of Fig.~\ref{fig:angles}, this can be regarded as using the minimum deflection angle strategy within a single hop. \par As mentioned, the algorithm proposed in this article is based on the equal interval strategy. Once the two types of interruptions are resolved, the probability of errors occurring in the equal-interval strategy is significantly reduced, thus ensuring low latency and high reliability. The practical nearest neighbor search algorithm is divided into four stages: (\romannumeral1) calculate the minimum number of hops through iteration, (\romannumeral2) find the relay positions according to the equal interval strategy, (\romannumeral3) find the nearest satellite in the neighborhood of each relay position to establish the link, and (\romannumeral4) adopt the minimum deflection angle strategy within a hop when its two satellites cannot satisfy the distance constraints. The last three stages of the algorithm are as follows. \begin{algorithm}[!ht] \caption{Nearest Neighbor Search Algorithm} \label{alg.Nearest Neighbor Search Algorithm} \begin{algorithmic} [1] \STATE \textbf{Input}: Locations of point process $\Phi$, the number of hops $\hat{N}_{\min}$, starting point ID $h_0$ and ending point ID $h_{\hat{N}_{\min}}$. 
\STATE \textbf{Initialize}: $T \leftarrow 0$. \FOR{$i = 1 : \hat{N}_{\min}-1$} \STATE $\theta_i^h \leftarrow \theta_{h_0} \Big| \frac{2i}{\hat{N}_{\min}} - 1 \Big|$. \IF{$i < \frac{\hat{N}_{\min}}{2}+1$} \STATE $\varphi_i^h \leftarrow 0$. \ELSE \STATE $\varphi_i^h \leftarrow \pi$. \ENDIF \STATE ${h_i} \leftarrow \arg {\min\limits _{j}} \; d\big( \theta_i^h,\varphi_i^h,\theta_j,\varphi_j \big)$. \ENDFOR \STATE $\mathcal{H} \leftarrow \{ h_0,h_1,...,h_{\hat{N}_{\min}-1},h_{\hat{N}_{\min}} \}. $ \FOR{$i = 1 : \hat{N}_{\min}$} \IF{$d( \theta_{h_{i-1}}, \varphi_{h_{i-1}}, \theta_{h_i}, \varphi_{h_i} ) > d_{\max}$} \STATE Use the minimum deflection angle strategy to find the relay satellite IDs $\mathcal{H}^{(i)}=\{h_0^{(i)},h_1^{(i)},...,h_n^{(i)}\}$ in the $i^{th}$ hop. \STATE $T \leftarrow T + \sum_{j=1}^{n} d \Big( \theta_{h_{j-1}^{(i)}},\varphi_{h_{j-1}^{(i)}},\theta_{h_{j}^{(i)}},\varphi_{h_{j}^{(i)}} \Big).$ \ELSE \STATE $T \leftarrow T + d( \theta_{h_{i-1}},\varphi_{h_{i-1}},\theta_{h_i},\varphi_{h_i} ).$ \ENDIF \ENDFOR \STATE \textbf{Output}: IDs of the multi-hop link $\mathcal{H}$ and Latency $T$. \end{algorithmic} \end{algorithm} To simplify the description of the algorithm, the distance between two points is defined as, \begin{equation}\label{d function} \begin{split} d\left( \theta_1, \varphi_1, \theta_2, \varphi_2 \right) = r \big( 2 ( 1 - \cos{\theta_1}\cos{\theta_2} - \sin{\theta_1}\sin{\theta_2}\cos\left(\varphi_1-\varphi_2\right) )\big)^\frac{1}{2}. \end{split} \end{equation} In addition, the start ID $h_0^{(i)} = h_{i-1}$ and the end ID $h_n^{(i)} = h_i$ in set $\mathcal{H}^{(i)}$. The nearest neighbor search algorithm cannot guarantee finding the optimal solution even when the two types of interruptions do not occur. For example, a link with many hops may still satisfy the distance constraints after two adjacent hops are merged. Since the sum of two sides of a triangle is greater than the third side, the merged link has a lower latency. 
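The core of the nearest neighbor search, stages (\romannumeral2)\,-\,(\romannumeral3), can be sketched in a few lines of Python. For brevity this sketch places the ideal relay positions by spherical linear interpolation on unit direction vectors rather than through the paper's rotated coordinate system, and it omits the minimum-deflection fallback of stage (\romannumeral4); the function names are ours.

```python
import numpy as np

def slerp(p0, p1, t):
    """Point at fraction t along the great-circle arc between unit vectors p0, p1."""
    omega = np.arccos(np.clip(np.dot(p0, p1), -1.0, 1.0))
    return (np.sin((1.0 - t) * omega) * p0 + np.sin(t * omega) * p1) / np.sin(omega)

def equal_interval_route(sats, start, end, n_hops):
    """Stages (ii)-(iii): ideal, equally spaced relay positions on the shortest
    inferior arc, then the nearest satellite to each position.
    sats: (N, 3) array of unit direction vectors; start, end: satellite indices."""
    ids = [start]
    for i in range(1, n_hops):
        target = slerp(sats[start], sats[end], i / n_hops)
        # Maximum dot product <=> minimum dome angle to the ideal relay position.
        ids.append(int(np.argmax(sats @ target)))
    ids.append(end)
    return ids

# A BPP-style constellation: 2000 directions drawn uniformly on the sphere.
rng = np.random.default_rng(0)
v = rng.normal(size=(2000, 3))
sats = v / np.linalg.norm(v, axis=1, keepdims=True)
route = equal_interval_route(sats, 0, 1, n_hops=5)
```

Normalized Gaussian vectors are uniform on the sphere, which matches the homogeneous BPP model; multiplying the unit vectors by $r$ recovers physical positions and (\ref{d function}) for the latency accumulation.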
Therefore, it is necessary to analyze the latency performance of the algorithm. \subsection{Efficiency Analysis}\label{efficiency} For an optimization problem whose optimal solution is difficult to find, the main concern is the gap between the obtained solution and the optimal one. To the best of our knowledge, no existing algorithm finds the optimal solution to this problem. The latency of the optimal solution in the ideal scenario is an unattainable lower bound, so the gap between the proposed method and this bound is an upper bound on its gap to the true optimal solution. Therefore, efficiency is defined to quantify this gap. \begin{definition}[Efficiency] Efficiency is the ratio of the minimum latency in the ideal scenario to the latency of the proposed method. \end{definition} Since satellites are uniformly and independently distributed on the sphere and the relay positions on a multi-hop link are equally spaced, the single-hop distances are independent and identically distributed. Therefore, analyzing the efficiency of a multi-hop link is equivalent to studying that of a single hop. The increase in distance caused by the random distribution can be modeled as an increase of the dome angle; in other words, the random displacement of satellites is offset by moving their relay positions. Thus, the following two approximations are given. 
\begin{theorem}\label{contour integral approximation} For an $\hat{N}_{\min}$-hop link with dome angle ${\theta_{_{02n}}^h}$, the contour integral approximation of the efficiency is given as, \begin{equation} \widetilde{E}_1 = \frac{N_{\min}\cdot\sin\left(\frac{\theta_{_{02n}}^h}{2N_{\min}}\right)} {\hat{N}_{\min}\cdot\sin\left(\frac{\theta_{_{02n}}^h}{2\hat{N}_{\min}}\right)\left( 2\overline{\alpha}\left(\frac{\theta_{_{02n}}^h}{2\hat{N}_{\min}}\right)-1 \right)}, \end{equation} where $N_{\min}$ is defined in (\ref{N_min}), and $\overline{\alpha}\left(\theta^h\right)$ is defined as, \begin{equation}\label{overline alpha} \begin{split} \overline{\alpha}\left(\theta^h\right) = \frac{\sqrt{2}}{2\pi} \int_0^{\theta_{\max}} \int_0^{\pi} \frac{f_{\theta_0}\left(\theta\right)}{\sin\left(\frac{\theta^h}{2}\right)} \Big( -\cos(\theta_0)\cos(\theta^h) -\sin(\theta_0)\sin(\theta^h)\cos\varphi+1 \Big)^{\frac{1}{2}} \,\mathrm{d}\varphi \,\mathrm{d}\theta. \end{split} \end{equation} \begin{proof} See Appendix~\ref{app:contour integral approximation}. \end{proof} \end{theorem} \begin{theorem} For an $\hat{N}_{\min}$-hop link with dome angle ${\theta_{_{02n}}^h}$, the binomial approximation of the efficiency is given as, \begin{equation} \widetilde{E}_2 = \frac{N_{\min}\cdot\sin\left(\frac{\theta_{_{02n}}^h}{2N_{\min}}\right)} {\hat{N}_{\min}\cdot\eta\left(\frac{\theta_{_{02n}}^h}{2\hat{N}_{\min}}\right)}, \end{equation} where $N_{\min}$ is defined in (\ref{N_min}), and $\eta\left(\theta^h\right)$ is defined as, \begin{equation} \begin{split} \eta\left(\theta^h\right) &= \int_0^{\theta_{\max}} \int_0^{\theta_{\max}} \frac{1}{4} f_{\theta_0}\left(\theta_1\right) f_{\theta_0}\left(\theta_2\right) \Big( \sin(\theta^h-\theta_1-\theta_2)+\sin(\theta^h+\theta_1-\theta_2) \\ &+\sin(\theta^h-\theta_1+\theta_2)+\sin(\theta^h+\theta_1+\theta_2) \Big) \,\mathrm{d}\theta_1\,\mathrm{d}\theta_2. 
\end{split} \end{equation} \begin{proof} Assume that the contact angle between the relay position and its nearest satellite is $\theta_0$; this satellite is then uniformly distributed on a circle of radius $r\sin\theta_0$. Approximating this distribution by a binomial one, the satellite lies at either the nearest or the farthest point from the adjacent relay position with equal probability. Taking the expectation over the contact angles yields the result. \end{proof} \end{theorem} \begin{table*}[] \caption{Reliability analysis of deterministic LEO satellite constellations.} \renewcommand\arraystretch{1} \resizebox{16.2cm}{3.15cm}{ \begin{tabular}{|c|c|c|c|} \hline & Starlink & OneWeb & Kuiper \\ \hline \hline Constellation altitude [km] & 550 & 1200 & 590, 610, 630 \\ \hline Number of (planned) satellites & 11927 & 650 & 3236 \\ \hline Expectation of contact angle &0.0162 & 0.0695 & 0.0312 \\ \hline Number of hops & 9~/~10 & 69~/~8 & 12~/~13 \\ \hline Reliable angle [rad] & 0.0386~/~0.0481 & 0.1996~/~0.2026 & $\approx0.0765~/~\approx0.0941$ \\ \hline Minimum number of satellites required & 710~/~2535 & 2053~/~7889 & $ \approx 777~/~\approx2520 $ \\ \hline Type-\uppercase\expandafter{\romannumeral1} interruption occurs or not & No~/~No & Yes~/~Yes & No~/~No \\ \hline Probability of Type-\uppercase\expandafter{\romannumeral2} interruption & $<0.01\%~/~<0.01\%$ & $9.41\%~/~100\%$ & $<0.01\%~/~<0.01\%$ \\ \hline Efficiency & $99.44\%~/~99.17\%$ & $97.80\%~/~96.27\%$ & $97.91\%~/~97.56\%$ \\ \hline \end{tabular} } \label{table:models} \end{table*} \section{Numerical Results} This section analyzes the performance of the algorithm based on numerical simulation results. For existing deterministic constellations, we analyze the feasibility of the algorithm. Then, different approximation methods and routing strategies are compared from the perspective of latency.
\subsection{Reliability Analysis of Deterministic Constellations} Table~\ref{table:models} shows the simulation results of three deterministic LEO satellite constellations \cite{robert2020small}. We set the maximum distance at which a satellite can maintain stable communication as $d_{\max}=3000\rm{ \,km}$. Within this distance, the satellites in all three constellations are in the LoS region. Suppose two satellites on opposite sides of the Earth need to communicate. Since Kuiper's satellites will be distributed in orbits at three different altitudes, we approximate that all of its satellites are in the 610\,km orbit. For the last five parameters, the left and right sides of the slash correspond to $\varepsilon=0.1$ and $\varepsilon=0.01$, respectively. \par Since the latency of satellite communication is usually tens to hundreds of milliseconds, the computation delay of the algorithm and the search delay must also be considered. The complexity of the iterative method for deriving the number of hops is linear. The iteration ends in a finite number of steps, and the number of hops should satisfy, \begin{equation}\label{max iterations} n \leq \frac{\ln\left( 1 - \varepsilon \right)}{\ln \left( 1- \left( \frac{1 + \cos \frac{\theta_{\max}}{2}}{2} \right)^{N_{\rm{Sat}}} \right)}. \end{equation} It can be seen that the number of iterations mostly ranges from 1 to 6. The expected contact angle and the reliable angle are used to analyze the area of the search region. As described for the nearest neighbor search algorithm, traversing all satellites is only of theoretical interest. In practice, since satellite systems are massive and moving, it is difficult for a single satellite to obtain global information. Therefore, it is more meaningful to analyze the area required for finding a satellite than the algorithm complexity.
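The iteration bound above can be evaluated numerically. A sketch for OneWeb-like parameters (assumed Earth radius 6371\,km), using the per-hop success probability $F_{\theta_0}(\theta_{\max}/2)$ from the contact-angle CDF:

```python
import math

def iteration_bound(eps, n_sat, theta_max):
    # Largest n for which n hops can all succeed with overall probability
    # 1 - eps, each hop succeeding with probability
    # F(theta_max / 2) = 1 - ((1 + cos(theta_max / 2)) / 2) ** n_sat.
    f = 1.0 - ((1.0 + math.cos(theta_max / 2)) / 2.0) ** n_sat
    return math.log(1.0 - eps) / math.log(f)

r = 6371.0 + 1200.0                          # assumed OneWeb-like orbit radius, km
theta_max = 2 * math.asin(3000.0 / (2 * r))  # max dome angle for d_max = 3000 km
bound = iteration_bound(0.1, 650, theta_max)
print(f"OneWeb-like iteration bound: {bound:.1f}")   # roughly 68
```

For the dense Starlink-like case the per-hop failure probability is astronomically small, so the bound is effectively unbinding; for sparse constellations such as OneWeb it caps the usable number of hops.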
The expectation of the contact angle follows from a short derivation, \begin{equation} \begin{split} \mathbb{E}\left[\theta_0\right] &= \int_0^{\theta_{\max}} 1 - F_{\theta_0}\left(\theta\right)\mathrm{d}\theta = \int_0^{\theta_{\max}} \left(\frac{1+\cos\theta}{2}\right)^{N_{\rm{Sat}}} \mathrm{d}\theta\\ & = 2\int_0^{\frac{\theta_{\max}}{2}}\left(\cos\widetilde{\theta} \right)^{2N_{\rm{Sat}}} \mathrm{d}\widetilde{\theta} \overset{(a)}{\approx} \pi \prod \limits_{i=1}^{N_{\rm{Sat}}} \frac{2i-1}{2i}, \end{split} \end{equation} where (a) follows from Wallis' integrals: since $1-F_{\theta_0}(\theta)$ is very close to 0 when $\theta>\theta_{\max}$ \cite{Al-1}, the integral can be approximated by extending the domain to $\frac{\pi}{2}$. Take the spherical caps whose radii are the expected contact angle and the reliable angle as the average and maximum search areas required to find a satellite; a spherical cap is chosen for computational convenience. Taking Starlink as an example, for a ten-hop link, the average search area is 0.066\% of the entire spherical area, and the maximum search area is no more than 0.58\% of the spherical area. The minimum deflection angle strategy needs to search along the belt region near the shortest inferior arc, whereas the maximum step size strategy needs to search the whole communication range. When no reliable region is set, the search area of the maximum step size strategy is 10.2\% of the entire spherical area for a ten-hop link. In conclusion, the proposed algorithm ends in a linear number of iterations and generally takes only a few of them; it requires a tiny search area and thus has a substantial advantage over the other strategies. Finally, note that the search area need not be a spherical cap; a surface of arbitrary shape works equally well, because satellites are uniformly distributed on the sphere and the probability of finding a satellite is constant for a given search area.
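The Starlink figures above are easy to reproduce; a sketch assuming the planned 11927 satellites and a ten-hop link, with the reliable angle computed as in Lemma~\ref{reliable angle with n}:

```python
import math

n_sat = 11927                      # Starlink (planned) satellites
eps, hops = 0.01, 10               # tolerable interruption, number of hops

# E[theta_0] = pi * prod_{i=1}^{N} (2i-1)/(2i)   (Wallis' integrals)
e_theta0 = math.pi
for i in range(1, n_sat + 1):
    e_theta0 *= (2 * i - 1) / (2 * i)

# Reliable angle: per-hop success probability F(theta_r) = (1 - eps)^(1/n).
q = (1 - (1 - eps) ** (1 / hops)) ** (1 / n_sat)
theta_r = math.acos(2 * q - 1)

cap = lambda t: (1 - math.cos(t)) / 2   # cap area / total sphere area
print(f"E[theta_0]     = {e_theta0:.4f}")                               # ~0.0162
print(f"reliable angle = {theta_r:.4f}")                                # ~0.0481
print(f"avg search area, 10 hops = {100 * hops * cap(e_theta0):.3f}%")  # ~0.066%
print(f"max search area, 10 hops = {100 * hops * cap(theta_r):.2f}%")   # ~0.58%
```

The printed values match the Starlink column of Table~\ref{table:models} and the search-area fractions quoted in the text.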
\par The minimum number of satellites required is obtained by testing several sets of $\theta_t$ according to proposition~\ref{Minimum Nsat} and taking the smallest of them. The probability of type-\uppercase\expandafter{\romannumeral2} interruption is obtained by the Monte Carlo method: (\romannumeral1) running the algorithm for $10^6$ rounds, (\romannumeral2) recording the number of interrupted rounds, and (\romannumeral3) dividing the number of interrupted rounds by $10^6$ to estimate the probability of interruption. It can be seen that as long as the number of satellites obtained for any set of $\theta_t$ is smaller than the number of satellites in the actual constellation, type-\uppercase\expandafter{\romannumeral2} interruption does not occur. The converse need not hold. For example, in the Kuiper constellation, when $\varepsilon=0.01$ and the number of hops is 8, the required number of satellites is 5544, which exceeds the 3236 satellites of the Kuiper constellation; however, type-\uppercase\expandafter{\romannumeral2} interruption still does not occur. \par The last discussion concerns type-\uppercase\expandafter{\romannumeral1} interruption. For the OneWeb constellation with $\varepsilon=0.1$, (\ref{max iterations}) gives $n \leq 68.077$, and the number of iterations reaches 61. When $\varepsilon=0.01$, the iterations do not start because the reliable angle $\theta_r=0.2026$ exceeds half of the maximum dome angle, $\frac{\theta_{\max}}{2} = 0.1994$. Both situations lead to type-\uppercase\expandafter{\romannumeral1} interruption, which further leads to the occurrence of type-\uppercase\expandafter{\romannumeral2} interruption. In addition, the algorithm has high efficiency for all constellations. \subsection{Comparison of Different Approximations} As shown in Fig.~\ref{fig:approximation}, the performances of the two estimation methods are compared, and the relationships between latency and constellation parameters are described.
\begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{figure7.pdf} \caption{Comparison of different approximations.} \label{fig:approximation} \vspace{-0.4cm} \end{figure} In Fig.~\ref{fig:approximation}, the tolerable probability of link interruption is $\varepsilon=0.01$ and $d_{\max}=3000\rm{\,\,km}$; the simulation result is the exact latency obtained by Monte Carlo simulation. Both approximation methods are accurate under different constellation altitudes, satellite numbers, and link distances. For the parameter sets considered, the binomial approximation provides a tight lower bound for the latency, while the contour integral approximation gives a tight upper bound. The accuracy of both approximations is reduced in the scenarios with an insufficient number of satellites (the red dotted and dashed lines). Especially when the distance between the starting and ending satellites is large, the binomial approximation has a relatively large gap to the actual results for the red line. As the communication distance increases and the number of satellites is insufficient, the probability of link interruption increases; in this case, the introduction of the minimum deflection strategy brings larger latency. \par Use the solid blue line (1000 satellites and $500\rm{ \,km}$ constellation altitude) in Fig.~\ref{fig:approximation} as a baseline. When the communication distance is fixed, the latency is negatively correlated with the number of satellites and positively correlated with the constellation altitude. A decrease in the number of satellites leads to the locations of the found satellites deviating from the ideal optimal relay locations, which increases latency. Although an increase in constellation altitude also causes the satellite locations to deviate from the expected positions, reducing the shortest inferior arc length has a more significant effect on the latency.
A similar view can be found in proposition~\ref{prop1}. In addition, the latency increases almost linearly with the communication distance, and the constellation with larger latency has a larger slope of growth. \begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{figure9.pdf} \caption{Probability of type-\uppercase\expandafter{\romannumeral2} interruption under different parameters.} \label{fig:type 2 interruption} \vspace{-0.4cm} \end{figure} \par Fig.~\ref{fig:type 2 interruption} further explains the results in Fig.~\ref{fig:approximation} through numerical results. In Fig.~\ref{fig:type 2 interruption}, the communication distance is $10000\rm{ \,km}$ and $d_{\max}=3000\rm{ \,km}$. When the number of satellites $N_{\rm{Sat}}>400$, type-\uppercase\expandafter{\romannumeral2} interruption rarely occurs. When $N_{\rm{Sat}}<200$, the probability of type-\uppercase\expandafter{\romannumeral2} interruption increases significantly as $N_{\rm{Sat}}$ decreases and the constellation altitude $r_{\rm{Sat}}$ increases. This suggests that when satellites are insufficient, the probability of type-\uppercase\expandafter{\romannumeral2} interruption is closely related to the number of satellites per unit sphere area. Furthermore, the influence of $r_{\rm{Sat}}$ on the probability of type-\uppercase\expandafter{\romannumeral2} interruption is not as significant as that of $N_{\rm{Sat}}$, especially when $N_{\rm{Sat}}<200$. \subsection{Comparison of Different Strategies} Fig.~\ref{fig:800_550} and Fig.~\ref{fig:100_550} show how the latency changes with the distance between the starting and ending satellites for different strategies. In both figures, the altitude is fixed at $500\rm{ \,km}$ and $d_{\max}=3000\rm{ \,km}$. The number of satellites in Fig.~\ref{fig:800_550} is sufficient (800 satellites), while the number of satellites in Fig.~\ref{fig:100_550} is insufficient (100 satellites).
\begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{figure6.pdf} \caption{Influence of communication distance on different strategies ($N_{\rm{Sat}}=800$).} \label{fig:800_550} \vspace{-0.4cm} \end{figure} In terms of latency, the optimal scenario, the proposed algorithm, the minimum deflection angle strategy, and the maximum step size strategy are ranked from smallest to largest. When the number of satellites is sufficient, the latency of the maximum step size strategy is much larger than that of the other methods, while the minimum deflection angle strategy and the proposed algorithm perform close to the lower bound. When the number of satellites is insufficient, the proposed algorithm has a remarkable advantage over the minimum deflection angle strategy. Regarding the tolerance rate, $\varepsilon=0.01$ performs better with fewer satellites, while $\varepsilon=0.1$ performs better when satellites are sufficient. \begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{figure5.pdf} \caption{Influence of communication distance on different strategies ($N_{\rm{Sat}}=100$).} \label{fig:100_550} \vspace{-0.4cm} \end{figure} Fig.~\ref{fig:800_10000} considers the scenario where latency varies with constellation altitude. The number of satellites is fixed at 800, the communication distance is fixed at $10000\rm{ \,km}$, and $d_{\max}=3000\rm{ \,km}$. Overall, the performance of the methods is similar to that in Fig.~\ref{fig:800_550}. The main difference is that for the proposed algorithm and the maximum step size strategy, the latency decreases with the constellation altitude, whereas the latency of the minimum deflection angle strategy changes little.
\begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{figure8.pdf} \caption{Influence of constellation altitude on different strategies.} \label{fig:800_10000} \vspace{-0.4cm} \end{figure} \section{Further Extensions} Since the shortest-path routing problem on a three-dimensional sphere is not easy to deal with, we simplify the model for the convenience of analysis. Although our simple model has limitations when facing some practical issues, it is fortunately extensible. \subsection{Expansion to Multi-Tier Networks} In practice, LEO satellites may assist ground base stations \cite{homssi2021modeling} with global coverage or rely on ground gateways \cite{talgat2020stochastic} to communicate. In addition, satellite systems at different altitudes (including those in synchronous orbits) also interact, such as the satellites of the Kuiper constellation at three different altitudes. Therefore, cross-tier communication scenarios should be considered. \par Hence, it is necessary to investigate routing in a spherical heterogeneous network consisting of ground stations, high altitude platforms (HAPs), and multi-tier LEO satellites, where satellite communications start and end at ground stations. The theoretical analysis in this paper is largely applicable to such a heterogeneous network, with the following three major changes. Firstly, the values of some parameters, such as the maximum communication distance $d_{\max}$, vary with the type of relay device. This means that global information will be harder to obtain and store for ground stations. \par Secondly, as an essential quantity in analyzing the efficiency and reliability of the proposed algorithm, the expression and domain of the contact angle require minor modifications in a multi-tier network.
Specifically, the contact angle will be replaced by the conditional contact angle, which is the contact angle of satellites distributed within the reliable communication range of both the previous and the next hop. \par Finally, in the reliability analysis, the tier on which the relay device is located affects the probability of type-\uppercase\expandafter{\romannumeral2} interruption. Therefore, discrete Markov networks, state transition matrices, and absorbing states are recommended for the reliability analysis. Note that the route starts and ends at ground stations, so the first, middle, and last hops of the network need to be designed differently. \subsection{Latency of Computation and Search} Only transmission latency is considered as the objective function in this article; computation and search latency should also be taken into account. As mentioned, although the proposed algorithm provides a low-complexity solution for finding the shortest-latency routing on the closed sphere, its computational complexity still reaches $\mathcal{O}(N_{\min} \cdot N_{\rm{Sat}})$. The latency corresponding to this computational complexity is still large for real-time routing with a total transmission latency of tens of milliseconds. The complexity can be reduced to $\mathcal{O}(N_{\min})$ through either of the following two schemes, since only steps (5)-(9) in algorithm 2 need to be executed in both. \par When ground stations are available, we can trade storage space on the ground stations for lower latency. The two-line element (TLE) format can store the dynamic positions of the satellites, and the IDs of the satellites around a target position can be quickly found by index when a routing task arrives. One possible disadvantage of this scheme is that when the source is a satellite rather than ground equipment, it needs to spend extra latency communicating with the ground equipment.
\par The second scheme applies to scenarios where ground stations are unavailable. The satellite transmits a signal toward the target position (obtained in steps (4)-(9) of algorithm 2), and the next-hop satellite within the beam forwards this information in the same way and responds to the previous hop. This scheme also incurs extra search latency related to the contact angle, the reliable angle, and the beamwidth. When the satellite does not receive a response from the next hop, it assumes there is no satellite in the beam and continues to send messages to the surrounding areas. In addition, when several satellites receive the message from the previous hop and are busy, short-distance communication is required to schedule a single satellite for routing. \subsection{Outage Probability and Buffering Latency} When power limits are considered, the probability of interruption and the latency are related not only to distance but also to the transmission signal power. Under the assumption that regenerative hops are used, a longer single-hop distance and a lower transmission power result in a larger probability of interruption and a larger buffering latency. Under this circumstance, the \ac{SINR} serves as a bridge between them. Since satellites are less dense than ground networks and the beam is highly directional, the interference caused by other satellites can be approximated by a small constant. Assuming that the path loss of the single-hop satellite-satellite channel follows the free-space fading model, the average SINR is a decreasing function of the single-hop distance squared. \par Different from the preceding qualitative analysis, the outage probability can serve as a quantitative substitute for type-\uppercase\expandafter{\romannumeral2} interruption and the single-hop maximum reliable distance $d_{\max}$. The outage probability is defined as the probability of the received \ac{SINR} being smaller than a predefined threshold, $\mathbb{P}[\rm{SINR}<\gamma]$.
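As an illustrative sketch of this relationship (every link-budget number below is hypothetical, and Rayleigh small-scale fading is assumed on top of free-space path loss), the outage probability $\mathbb{P}[\rm{SINR}<\gamma]$ grows with the single-hop distance:

```python
import math

def outage_probability(d_km, gamma_db=5.0):
    """Illustrative outage under free-space loss plus Rayleigh fading.

    Hypothetical link budget: EIRP 40 dBW, Rx gain 30 dBi,
    noise-plus-interference floor -120 dBW, carrier 23 GHz.
    """
    c, f = 3e8, 23e9
    fspl_db = 20 * math.log10(4 * math.pi * d_km * 1e3 * f / c)
    snr_db = 40 + 30 - fspl_db - (-120)      # average SINR in dB
    snr = 10 ** (snr_db / 10)
    gamma = 10 ** (gamma_db / 10)
    # Rayleigh fading: instantaneous SINR ~ Exp(mean snr),
    # so P[SINR < gamma] = 1 - exp(-gamma / snr).
    return 1 - math.exp(-gamma / snr)

for d in (500, 1500, 3000):                  # single-hop distance, km
    print(d, f"{outage_probability(d):.4g}")
```

Under these assumed parameters the outage probability increases monotonically with distance, which is the quantitative counterpart of the hard cutoff $d_{\max}$.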
The maximum step size proposition may not be optimal because a long single-hop distance may lead to a high probability of communication failure \cite{haenggi2005routing}. Because of the randomness of fading, signal interruptions always occur, and a retransmission mechanism can be introduced \cite{routingimportant}. \par The average achievable rate serves as an upper bound on the transmission rate and hence yields a lower bound on the buffering latency. It is defined as the ergodic capacity from the Shannon-Hartley theorem over a fading communication link \cite{wang2021ultra}, which is proportional to $\log_2\left(1+\rm{SINR}\right)$. When the packet size is much larger than the maximum amount of data that can be transmitted per millisecond at the average achievable rate, buffering latency must be taken into account. To decrease the buffering latency, a large data packet can be divided into parts and transmitted over multiple separate paths, where the number of paths is determined by the traffic and the average achievable rates of the relay satellites. According to proposition~\ref{prop1}, the path corresponding to the inferior arc with a smaller central angle is selected preferentially. \subsection{Small Satellite Swarms and Storage-and-Forward Communication} When small satellite swarms have an insufficient number of satellites and large packet sizes \cite{nag2020designing}, the accessibility of data transmission is restricted, and it is challenging to realize real-time communication. Such networks are modeled as delay/disruption tolerant networks (DTN), in which satellites store information for some time after receiving it \cite{7555254}. The proposed algorithm can be extended to reduce the latency of networks with sufficient interactions. For example, given the accessibility of Earth-to-satellite links, the proposed algorithm applies to Earth observation satellite constellations.
\par Furthermore, small satellite swarms can help update the information (such as positions) of the satellites around the relay satellite, which benefits the proposed algorithm since it relies on information interaction. The strategy combining the proposed algorithm with store-and-forward communication is also extendable to small spacecraft swarms communicating for interstellar exploration \cite{PARKIN2018370}. \section{Conclusion} The latency minimization of multi-hop satellite links under maximum distance constraints is studied. We propose a nearest neighbor search algorithm to determine the number of hops of a multi-hop link and the position of the relay satellite in each hop. Numerical results show that the algorithm achieves linear complexity and completes its iterations in a finite number of steps. At the same time, the search area required by the algorithm accounts for only a tiny part of the whole sphere. The latency performance of the algorithm is very close to the minimum latency in the ideal scenario. Taking the Starlink constellation as an example, the algorithm needs only two iterations and searches 0.066\% of the entire spherical area, and the extra latency it pays is no more than 1\% of the total latency of the optimal case. Furthermore, two approximations are provided to estimate the maximum gap between the latency of the proposed algorithm and the lower bound of the latency in the ideal scenario; they provide tight upper and lower bounds for the latency in most cases. Finally, the influence of system parameters on multi-hop link latency is studied. \appendices \section{Proof of Proposition~\ref{prop1}}\label{app:prop1} Among all circles on the sphere where the satellites are located that pass through $x_{h_0}$ and $x_{h_n}$, the circle centered at the origin has the largest radius. Therefore, the shortest inferior arc divided by these two points has the smallest central angle.
Based on the fact that the smaller the central angle, the shorter the arc, this inferior arc has the shortest length among all arcs passing through $x_{h_0}$ and $x_{h_n}$. \par For an arbitrary routing scheme, as shown in Fig.~\ref{fig:System model}, we can always locate the corresponding relay satellites on the shortest inferior arc to achieve lower latency. The correspondence of satellite positions between the two schemes is shown in Fig.~\ref{fig:System model}: in the scheme corresponding to the sky blue arrow, the distance of each hop is no longer than that in the scheme corresponding to the green arrow. Note that all subsequent concepts related to the central angle refer to the dome angle unless otherwise stated. \section{Proof of Proposition~\ref{prop2}}\label{app:prop2} Use (\ref{opt2}) and (\ref{st:constraint2-3}) to construct the Lagrange function, \begin{sequation} \mathcal{L}\left(\theta_1^h,\theta_2^h,...,\theta_n^h\right)=\frac{1}{c}\sum_{i=1}^{n} 2r\sin\left(\frac{\theta_i^h}{2}\right)+\lambda\left(\sum_{i=1}^{n} \theta_i^h - \theta_{_{02n}}^h\right), \end{sequation} taking the partial derivative with respect to $\theta_i^h$, we get \begin{equation} \frac{\partial \mathcal{L}\left(\theta_1^h,\theta_2^h,...,\theta_n^h\right)}{\partial \theta_i^h}=\frac{r}{c}\cos\left(\frac{\theta_i^h}{2}\right)+\lambda, \end{equation} and setting the partial derivative to $0$, the optimal $\theta_i^{h*}$ is \begin{equation} \theta_i^{h*} = 2\arccos\left(-\frac{\lambda c}{r}\right), \end{equation} which does not depend on $i$. Finally, the proof is completed by combining the constraint (\ref{st:constraint2-3}). \section{Proof of Proposition~\ref{prop3}}\label{app:prop3} Assume that the satellites keep equal dome angles on the shortest inferior arc.
The latency can be expressed as, \begin{equation} T = \frac{2r}{c}\sum_{i=1}^{n} \sin\left(\frac{\theta_{_{02n}}^h}{2n}\right)=\frac{2rn}{c}\sin\left(\frac{\theta_{_{02n}}^h}{2n}\right), \end{equation} taking the partial derivative with respect to $n$, \begin{equation} \frac{\partial T}{\partial n}=\frac{2r}{c}\sin \left(\frac{\theta_{_{02n}}^h}{2n}\right)-\frac{\theta_{_{02n}}^hr}{cn}\cos \left(\frac{\theta_{_{02n}}^h}{2n}\right). \end{equation} Since an inferior arc is chosen, $\theta_{_{02n}}^h/\left(2n\right) < \pi/2$, so $\cos\left({\theta_{_{02n}}^h}/{2n}\right)>0$, and \begin{equation} \frac{c}{2r} \frac{\partial T}{\partial n} = \cos\left(\frac{\theta_{_{02n}}^h}{2n}\right)\left(\tan\left(\frac{\theta_{_{02n}}^h}{2n}\right) - \frac{\theta_{_{02n}}^h}{2n}\right), \end{equation} where on the right-hand side $\tan\left(\frac{\theta_{_{02n}}^h}{2n}\right)>\frac{\theta_{_{02n}}^h}{2n}$ when $\theta_{_{02n}}^h/\left(2n\right) < \pi/2$. The above analysis shows that $\frac{\partial T}{\partial n}>0$ always holds: as $n$ increases, the latency $T$ increases. Hence, we select the minimum number of hops that satisfies the constraints (\ref{st:constraint2-1}) and (\ref{st:constraint2-2}). Since the upper bound of $\theta_i^{h}$ is $\theta_{\max}$ defined in (\ref{theta_max}), solving \begin{equation} \begin{split} \theta_{_{02n}}^h = \sum_{i=1}^{N_{\min}} \theta_i^h \leq N_{\min} \theta_{\max}, \end{split} \end{equation} and using the fact that $N_{\min}$ is an integer, the final result is obtained.
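The monotonicity established above is easy to check numerically: $T(n) = \frac{2rn}{c}\sin\big(\theta_{_{02n}}^h/2n\big)$ grows from the chord length at $n=1$ toward the arc length $r\theta_{_{02n}}^h/c$ as $n$ increases (radius and angle below are illustrative):

```python
import math

r, c = 6921e3, 3e8      # illustrative orbit radius (m), speed of light (m/s)
theta = 2.0             # total dome angle of the inferior arc (rad)

# T(n) = (2 r n / c) * sin(theta / (2 n)) for n = 1..10 hops.
T = [2 * r * n / c * math.sin(theta / (2 * n)) for n in range(1, 11)]
assert all(a < b for a, b in zip(T, T[1:]))   # latency grows with n
print(f"n=1: {1e3 * T[0]:.2f} ms,  n=10: {1e3 * T[-1]:.2f} ms,"
      f"  arc limit: {1e3 * r * theta / c:.2f} ms")
```

Every value stays below the arc-length limit $r\theta/c$, which $T(n)$ approaches from below, confirming that the minimum feasible number of hops minimizes latency.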
\section{Proof of Lemma~\ref{CDF of contact}}\label{app:CDF of contact} We derive the CDF of the contact angle from the definition, \begin{equation} \label{CDF of tagged - 1} \begin{split} F_{\theta_0}\left(\theta\right) & = 1 - {\mathbb{P}}\left[ \theta_0 > \theta \right] = 1 - \mathbb{P}\left[ {\mathcal{N}\left( \mathcal{A} \right) = 0} \right] \overset{(a)}{=} 1 - \left( 1 - \frac{\mathcal{S}\left({\mathcal{A}}\right)}{4 \pi r^2} \right)^{N_{\rm{Sat}}} \\ &\overset{(b)}{=} 1 - \left( 1 - \frac{2 \pi r \left( r-r\cos\theta \right)}{4 \pi r^2} \right)^{N_{\rm{Sat}}} = 1 - \left( \frac{1+\cos \theta}{2} \right) ^ {N_{\rm{Sat}}}, \end{split} \end{equation} where $\mathcal{N}\left( \mathcal{A} \right)$ counts the number of satellites in the spherical cap $\mathcal{A}$ shown in Fig.~\ref{fig:angles}, and $\mathcal{S}\left( \mathcal{A} \right)$ is the area of the spherical cap $\mathcal{A}$. In step (a), for a homogeneous point process, the probability that a given satellite falls on the spherical cap equals the ratio of the area of the cap to the total surface area of the sphere with radius $r$. Step (b) follows from the area formula of a spherical cap, where $r-r\cos\theta$ is the height of the cap. In addition, the domain of $\theta_0$ should meet the constraints. It is easy to verify that for a constellation of hundreds of satellites, $F_{\theta_0}\left(\theta_{\max}\right)$ is very close to 1 \cite{Al-1}. \section{Proof of Lemma~\ref{reliable angle with n}}\label{app:reliable angle with n} Since the satellites' locations are assumed to be independent, the average interruption probability of each hop should be equal. For an $n$-hop link with tolerable probability of interruption $\varepsilon$, the tolerable probability of interruption of each hop is, \begin{equation} \varepsilon_1 = 1 - \left( 1 - \varepsilon \right)^{\frac{1}{n}}.
\end{equation} In the spherical cap determined by the reliable angle, the probability of having a satellite should be greater than $1-\varepsilon_1$. Since the reliable angle is the minimum angle satisfying this constraint, it can be obtained from the CDF of the contact angle, \begin{equation}\label{substitute N_h} 1 - \left( \frac{1+\cos \theta_r\left(n\right)}{2} \right) ^ {N_{\rm{Sat}}} = \left( 1 - \varepsilon \right)^{\frac{1}{n}}, \end{equation} rearranging and taking the $N_{\rm{Sat}}$-th root on both sides, \begin{equation} \frac{1+\cos \theta_r\left(n\right)}{2} = \left( 1 - \left( 1 - \varepsilon \right)^{\frac{1}{n}} \right)^{\frac{1}{{N_{\rm{Sat}}}}}, \end{equation} and the final conclusion follows by simple manipulation. \section{Proof of Proposition~\ref{Minimum Nsat}}\label{app:Minimum Nsat} Since the reliable angle $\theta_r\left(n\right)$ depends on the number of hops, a $\theta_t$ unrelated to $n$ is taken as the search radius to simplify the relationship. In this case, the number of hops $N_t$ is given as, \begin{equation}\label{N_h} N_t = \bigg\lceil \frac{\theta_{_{02n}}^h}{\theta_{\max}-2\theta_t} \bigg\rceil + 1, \end{equation} where $2\theta_t < \theta_{\max}$ ensures that $N_t$ is positive. Substituting (\ref{N_h}) into (\ref{substitute N_h}), \begin{equation} \left( \frac{1+\cos \theta_t}{2} \right) ^ {N_{\rm{Sat}}} \leq 1 - \left( 1 - \varepsilon \right)^{1 \big/ \left( \Big\lceil \frac{\theta_{_{02n}}^h}{\theta_{\max}-2\theta_t} \Big\rceil + 1\right) }, \end{equation} take the logarithm of both sides and divide by $\ln \left( \frac{1+\cos \theta_t}{2} \right)$ to get the result. Note that (\ref{substitute N_h}) guarantees that a $\theta_t$ satisfying (\ref{sufficient condition}) must be greater than or equal to the reliable angle.
A set of practical $\theta_t$ can be taken as, \begin{equation} \bigg\{\frac{1}{2} \left( \theta_{\max} - \frac{\theta_{_{02n}}^h}{N_{\min}+k} \right),k=0,1,2,...\bigg\}. \end{equation} \section{Proof of Theorem~\ref{contour integral approximation}}\label{app:contour integral approximation} Assume that the contact angles between the two relay positions and their nearest satellites are $\theta_0^{(1)}$ and $\theta_0^{(2)}$, respectively. In this case, these two satellites are uniformly distributed on circles $\mathcal{O}_1(\theta_0^{(1)})$ and $\mathcal{O}_2(\theta_0^{(2)})$ with radii $r\sin\theta_0^{(1)}$ and $r\sin\theta_0^{(2)}$, respectively. The average distance at contact angles $\theta_0^{(1)}$ and $\theta_0^{(2)}$ can be obtained by a contour integral of the single-hop distance $d_1$ around the two circles. Therefore, the expectation of the single-hop distance $d_1$ can be expressed as, \begin{equation} \mathbb{E}\left[d_1\right] = \mathbb{E}_{\theta_0^{(1)},\theta_0^{(2)}}\left[ \oint_{\mathcal{O}_1(\theta_0^{(1)})} \oint_{\mathcal{O}_2(\theta_0^{(2)})} f_{d_1} \mathrm{d}\mathcal{O}_2\mathrm{d}\mathcal{O}_1\right], \end{equation} where $f_{d_1}$, the PDF of $d_1$, depends on the contact angles $\theta_0^{(1)}$, $\theta_0^{(2)}$ and the positions on the corresponding circles $\mathcal{O}_1$, $\mathcal{O}_2$. However, $f_{d_1}$ is hard to express in either rectangular or spherical coordinates, so let us split the problem in two. One of the satellites is fixed to the relay position, while the other is uniformly distributed on its circle. The uniform distribution of a satellite can be offset by shifting the relay position, and this amount of change can be described by $\alpha$. By symmetrically making the same change at the other relay position, the amount of change becomes $2\alpha-1$. \par Since rotation does not affect the distribution of the satellites, let the spherical coordinate of the relay position be $(r,0,0)$.
The coordinate of the fixed satellite is $(r,\theta^h,0)$, where $\theta^h$ is the dome angle of the single hop. Assume the contact angle between the relay position and its nearest satellite is $\theta_0$; by the equation \begin{equation} \frac{1}{\pi}\int_0^{\pi} d(\theta_0,\varphi,\theta^h,0) \mathrm{d}\varphi = \alpha\left(\theta_0,\theta^h\right) 2r\sin\left(\frac{\theta^h}{2}\right), \end{equation} where $d(\theta_0,\varphi,\theta^h,0)$ is defined in (\ref{d function}), the amount of change $\alpha\left(\theta_0,\theta^h\right)$ can be written as, \begin{equation} \begin{split} \alpha\left(\theta_0,\theta^h\right) = \frac{\sqrt{2}}{2\pi\sin\frac{\theta^h}{2}} \int_0^{\pi} \sqrt{1-\cos\theta_0\cos\theta^h-\sin\theta_0\sin\theta^h\cos\varphi}\, \mathrm{d}\varphi. \end{split} \end{equation} Note that for a small $\theta_0$, \begin{equation} \begin{split} \alpha\left(\theta_0,\theta^h\right) \approx \frac{\sqrt{2(1-\cos\theta^h)}}{2\sin{\frac{\theta^h}{2}}} = 1. \end{split} \end{equation} Taking the expectation of $\alpha\left(\theta_0,\theta^h\right)$ with respect to $\theta_0$, \begin{equation} \overline{\alpha}\left(\theta^h\right) = \int_0^{\theta_{\max}}f_{\theta_0}(\theta)\alpha\left(\theta,\theta^h\right)\mathrm{d}\theta, \end{equation} the result in (\ref{overline alpha}) is derived. Since the propagation speed of the laser is constant, the ratio of latencies is equal to the ratio of distances, which finishes the proof of Theorem~\ref{contour integral approximation}. \bibliographystyle{IEEEtran}
\section{Introduction} Machine learning models used in critical decision-making scenarios can potentially amplify biases and have an adverse social and economic impact on individuals and protected demographic groups, e.g., race, gender \citep{barocas-hardt-narayanan}. Classification, scoring, and ranking are typical use cases for machine learning models to automate and assist decisions. We investigate fairness in ranking. A utilitarian view of ranking is to order a given set of items according to their individual merit or utility. In contrast, group fairness in ranking demands fairness to different protected demographic groups while maximizing utility. Research in this field ranges from defining fair ranking metrics and their evaluation \citep{KEMZGGK2017,YS2017,DME+2020} to designing algorithms for maximizing ranking utility while satisfying group fairness \citep{CSV2018,GAK2019,SJ2018,WZW2018,ZBCHMB2017}. A natural approach to obtain a group-fair ranking is via post-processing: first, get the unconstrained (or utility-maximizing) ranking and then re-arrange the items to satisfy group-fairness constraints. Previous works have proposed deterministic post-processing algorithms such that the ranking they output satisfies group fairness measured in terms of the \textit{representation} of each group in the top few ranks. Post-processing algorithms for group-fair ranking can be viewed as performing two steps: first, selecting a sufficient number of items from each group, and second, ranking them together so that utility is maximized while satisfying group-fairness constraints. Both these steps require accurate and unbiased observations of the merit of all items. Only then can one select the items with the highest observed merit from each group and finally merge their group-wise ranked lists to create a combined ranking while satisfying group-fairness constraints. 
However, accurate and unbiased observations of merit are difficult in the real world and could contain \emph{implicit bias} towards social groups. In the presence of implicit bias, \citet{CMV2020} show that maximizing the observed utility of ranking based on the observed merit can be suboptimal for the true or latent utility. \citet{CMV2020} provide theoretical guarantees on optimality when the latent scores of all the candidates are drawn i.i.d.~from a uniform distribution. However, this assumption may not always hold in real-world applications. We proceed with a weaker assumption that only an ordinal ranking of items within each group is available, and we do not have any scores or utility values to compare items (especially items across different groups). Our assumption circumvents implicit bias and allows us to consider group-fair ranking even under incomplete or biased data about pairwise comparisons. Any deterministic ranking algorithm is restricted to assign each rank to only one item and, hence, only one group. Using unreliable inter-group comparisons can lead to a loss of opportunities for other groups. For example, consider multiple companies, each having a few similar open positions, and suppose they use the same recruitment system to rank a common candidate pool for job interviews. If the ranking is deterministic and the number of groups is more than the number of candidates interviewed by each company, representation constraints for group-fairness cannot be satisfied. Moreover, if the candidates from certain protected groups are sufficiently represented but systematically left out of the top ranks due to biased comparisons with non-protected groups, they have fewer opportunities to be interviewed and hired even in a deterministic and group-fair ranked list of candidates. Randomized ranking can create opportunities for items in a way that a deterministic ranking cannot. 
Deterministic fair ranking has an inevitable trade-off with underranking of individuals in the original ranking without fairness constraints \citep{GDL2021}. Recent work by \citet{SKJ2021} studies randomized rankings under uncertainty in the merit scores of items, where the observed features give rise to posterior merit distributions. Given these merit distributions, \citet{SKJ2021} give a randomized ranking to optimally trade-off a notion of approximate individual fairness to the true merit of each item against the overall ranking utility. On a different note from \citet{SKJ2021} but intending to combine the benefits of randomized ranking and group-fairness, we take an axiomatic approach to define randomized group-fair rankings and design algorithms to sample random group-fair rankings. \subsection{Our Contributions} In order to sample a random group-fair ranking that can work even in the presence of biased or missing comparisons across groups, we assume that the order of items belonging to the same group (intra-group) is given to us and is reliable, whereas we do not use any information about the comparison of items from different groups (inter-group) as that may be unreliable. Our main theoretical contributions are as follows: Our first contribution is a mathematical formulation of the above assumption using a set of consistency and fairness axioms (\Cref{sec:group_fairness}). Our second contribution is to prove that there is exactly one distribution over the set of all feasible rankings that satisfies all our axioms of consistency and fairness (\Cref{thm:uniqueness}). Our third contribution is to give efficient algorithms to sample a random group-fair ranking: Our first algorithm is a fast approximate sampling algorithm (\Cref{alg:random_walk}) that samples fair rankings from a distribution $\epsilon$-close to the unique distribution that satisfies all our axioms. 
This algorithm works for any configuration of the parameters, that is, given any lower and upper bounds on the representation for each group. When the difference between these bounds is sufficiently large, our algorithm has a faster expected running time (\Cref{thm:main}). We show that our algorithm is practical and efficient on real-world datasets, even when the above condition does not hold. We also give a second, exact sampling algorithm that runs in time exponential in the number of groups (\Cref{thm:alg1}). When the difference between the upper and lower bounds is very small, we give an alternate efficient but brute-force algorithm (\Cref{thm:bruteforce}). We validate our theoretical results on real-world datasets using our first algorithm (\Cref{sec:experiments}). Our implementation of \Cref{alg:random_walk} has been made publicly available at \href{https://github.com/sruthigorantla/sampling_random_group_fair_rankings}{https://github.com/sruthigorantla/sampling\_random\_group\_fair\_rankings}. \section{Related Work} \label{sec:related_works} Algorithmic fairness has been an important area of study in the past decade. In fair classification literature, individual fairness is defined as similar outcomes for similar individuals \citep{DHPRZ2012}, whereas group fairness is defined as equal outcomes (e.g., demographic parity, equalized odds) for different demographic groups \citep{barocas-hardt-narayanan}. Combining individual fairness and group fairness, \citet{Castillo2019} lays down the principles of fair ranking as treating similar items consistently, sufficient presence of items from socially salient groups, and proportional representation from every group. Even in fair-ranking literature, the dichotomy between individual fairness and group fairness exists, and both have been studied along with their trade-off \citep{SJ2019,BHMY2021,SKJ2021,garciasoriano2020}. Our work is focused on randomized post-processing algorithms for group-fairness in ranking. 
However, we discuss related work from the broader fair ranking literature below to put it in perspective. \textbf{Group-fairness in ranking.} \citet{YS2017} propose a group-fairness metric for ranking based on statistical parity. They propose a multi-criteria optimization objective to maximize ranking utility while satisfying group fairness constraints based on the above metric. \citet{GAK2019} propose a different group-fairness metric based on the skew of group-wise representation in the top ranks and show its relation to other group-fairness constraints. They give a deterministic post-processing or re-ranking algorithm with a provable guarantee of proportional representation for at most $3$ demographic groups. Subsequently, deterministic post-processing algorithms have been proposed to guarantee sufficient or proportional group-wise representation in each prefix of the top $k$ ranks \citep{CSV2018,ZBCHMB2017} or in blocks of $k$ consecutive ranks \citep{GDL2021}. Our work also uses representation-based fairness constraints, which generalize several of these commonly used fairness constraints in ranking, as discussed in \Cref{sec:group_fairness}. \textbf{Other definitions of fair ranking.} \citet{BCDQ2019,NCGW2020} propose separate metrics for intra-group and inter-group fairness. Intra-group fairness requires that two items from the same group must be treated consistently, whereas inter-group fairness requires that items from the protected groups should not be treated differently when compared with items from the favored/majority group. \citet{KVR2019} propose group-fairness metrics called rank equality, rank calibration, and rank parity based on pairwise comparison errors, and propose bias mitigation techniques for fairness with respect to these metrics. 
When different demographic groups have incomparable qualities, \citet{KRW2017} propose a notion of meritocratic fairness for cross-population selection that requires each item to be treated according to its relative performance in its group. \textbf{Randomized fair rankings.} Recent work by \citet{DME+2020} also observes that a deterministic ranking cannot always achieve group fairness. They propose stochastic versions of post-processing algorithms to achieve equality of expected exposure. \citet{SJ2018} give a randomized ranking that maximizes the ranking utility subject to group-fairness constraints. \citet{BGW2018} propose that individuals should receive equal amortized attention/exposure when ranked repeatedly and propose re-ranking algorithms to achieve this. The above results on randomized fair ranking focus on group-fairness metrics derived for equality of exposure. However, none of these algorithms work without complete information about the true utilities of the items, which might be difficult to get in real-world settings \cite{CMV2020}. Most aligned to our work is the randomized ranking algorithm \textit{fair $\epsilon$-greedy}, proposed by \citet{GS2020}. Similar to us, it avoids comparing two items from different groups. This alleviates adverse effects of bias in the merit scores of the protected groups, and relies entirely on intra-group comparisons given by the merit scores. The group fairness notions considered are similar to ours. We compare our algorithm to their algorithm in \Cref{sec:experiments}. Previous work has also used randomization in ranking, recommendations, and summarization of ranked results to demonstrate its other benefits, such as controlling polarization \citep{CKSV2019controlling}, mitigating data bias \citep{CKV2020}, and promoting diversity \citep{CKSDKV2018fair}. 
\section{Group Fairness in Ranking} \label{sec:group_fairness} Given a set $N := [n]$ of items, a \textit{top-$k$ ranking} is a selection of $k<n$ items followed by an assignment of each rank in $[k]$ to exactly one of the selected items. Let $a,a'\in N$ be two different items such that item $a$ is assigned to rank $i$ and item $a'$ is assigned to rank $i'$. Whenever $i < i'$ we say that item $a$ is \textit{ranked lower} than item $a'$. Following convention, we assume that being ranked at lower ranks gives items better visibility \cite{GDL2021}. Throughout the paper, we refer to a top-$k$ ranking as just a ranking. We use index $i$ to refer to a rank, index $j$ to refer to a group, and $a$ to refer to elements of the set $N$. The set $N$ can be partitioned into $\ell$ disjoint groups of items depending on a sensitive attribute. A group fair ranking is any ranking that satisfies a set of group fairness constraints. Our fairness constraints are \textit{representation constraints}: lower and upper bounds, $L_j, U_j \in [k]$ respectively, on the number of top $k$ ranks assigned to group $j$, for each group $j \in [\ell]$. We say more about these constraints in \Cref{subsec:definition}. Throughout the paper, we assume that we are given a ranking of the items within the same group for all groups. We call these rankings \textit{in-group rankings}. We now take an axiomatic approach to characterize a random group fair ranking. \subsection{Random Group-Fair Ranking} \label{subsec:definition} The three axioms we state below are natural consistency and fairness requirements for a distribution over all the rankings. The first axiom states that for any ranking sampled from the distribution, for any two items $a,a'$ from the same group, their order in the randomly sampled ranking should be consistent with their order in their in-group ranking. This is consistent with our assumption that the comparisons of the items from the same group are reliable. 
Because lower ranks give higher visibility, it is always desirable for a group to be consistent with its in-group ranking (e.g., its best item, that is, the top-$1$ item in its in-group ranking, will get the best visibility by being ranked at the lowest rank among the ranks assigned to that group). \begin{axiom}[In-group consistency] \label{axm:intra_group} For any ranking sampled from the distribution, for all items $a,a'$ belonging to the same group $j\in [\ell]$, item $a$ is ranked lower than item $a'$ if and only if item $a$ is ranked lower than item $a'$ in the in-group ranking of group $j$. \end{axiom} Many post-processing algorithms that achieve group fairness in ranking also satisfy this axiom, that is, they do not rearrange the items within the same group \cite{GDL2021,CSV2018,ZEHLIKE2022102707,ZBCHMB2017}. The support of any distribution that satisfies \Cref{axm:intra_group} consists only of rankings with items within the same group ranked in the order of their in-group ranking. Once we fix which group is assigned to rank $i$, there exists exactly one item that can be placed at rank $i$. Hence, for the next axioms we look at group assignments instead of rankings. A \textit{group assignment} is an assignment of each rank in the top $k$ ranking to exactly one of the $\ell$ groups. Let $Y_i$ be a random variable representing the group to which the $i$th rank is assigned. Therefore $\boldsymbol{Y} = \paren{ Y_1, Y_2, \ldots, Y_k}$ is a random vector representing a group assignment. Let $y = \paren{ y_1, y_2, \ldots, y_k}$ represent an instance of a group assignment. A \textit{group fair assignment} is a group assignment that satisfies the representation constraints. Therefore the set of group fair assignments is, \begin{equation} \centering \label{eq:set_y} \set{y \in [\ell]^k : L_j \le \sum\nolimits_{i \in [k]} \mathbb{I}[y_i = j] \le U_j, \forall j \in [\ell]}, \end{equation} where $\mathbb{I}[\cdot]$ is an indicator function. 
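To make the membership test for the set in \Cref{eq:set_y} concrete, it can be sketched in a few lines of code. This is our own illustrative sketch (the function name and the toy bounds are hypothetical, not part of the paper):

```python
from collections import Counter

def is_group_fair_assignment(y, L, U):
    """Check L[j] <= #{i : y_i = j} <= U[j] for every group j, i.e.,
    whether the group assignment y satisfies the representation constraints."""
    counts = Counter(y)
    return all(L[j] <= counts.get(j, 0) <= U[j] for j in range(len(L)))

# Toy instance: k = 4 ranks, ell = 2 groups labelled 0 and 1.
L, U = [1, 1], [3, 3]
print(is_group_fair_assignment([0, 1, 0, 1], L, U))  # True
print(is_group_fair_assignment([0, 0, 0, 0], L, U))  # False: group 1 gets 0 < L[1] ranks
```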
The ranking can then be obtained by assigning the items within the same group, according to their in-group ranking, to the ranks assigned to the group. We use $Y_0$ to represent a dummy group assignment of length $0$ for notational convenience when no rank has yet been assigned to any group (e.g.~in \Cref{axm:inter_group}). Let $X_j$ be a random variable representing the number of ranks assigned to group $j$ in a group assignment, for all $j \in [\ell]$. Therefore $\boldsymbol{X} = \paren{X_1, X_2, \ldots, X_{\ell}}$ represents a random vector for a \textit{group representation}. Let $x = \paren{x_1, x_2, \ldots, x_{\ell}}$ represent an instance of a group representation. Then the set of group fair representations is, \begin{equation} \label{eq:set_x} \set{x \in {\mathbb Z}_{\geq 0}^{\ell} :~~\sum\nolimits_{j \in [\ell]} x_j = k~~\text{and}~~L_j \le x_j \le U_j, \forall j \in [\ell]}. \end{equation} These representation-based fairness constraints have been studied previously in ranking. They generalize several fairness notions; setting $L_j = U_j = \frac{k}{\ell}, \forall j\in[\ell]$ ensures equal representation of all the groups (section 5 of \cite{Zehlike_part1}), and setting $L_j = U_j = p_j\cdot k, \forall j \in [\ell]$, ensures proportional representation of the groups, where $p_j$ is the proportion of group $j$ in the population \cite{GDL2021,GS2020,GAK2019}. Note that it may not always be possible to satisfy equal or proportional representation constraints exactly. The lower and upper bounds $L_j$ and $U_j$ that we consider can encode arbitrary fairness requirements on the number of items from each group \cite{Zehlike_part1,GDL2021,CMV2020,CSV2018}, and can be used in the case of multiple groups. Such fairness constraints have also been studied in other problems such as subset selection \cite{SYV2018}, matching \cite{GOTO201640}, clustering \cite{CKLV2017}, and others. 
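For small instances, the set of group fair representations in \Cref{eq:set_x} can simply be enumerated. The following brute-force sketch is illustrative code of ours, not one of the paper's algorithms; it makes the constraint structure explicit:

```python
from itertools import product

def fair_representations(k, L, U):
    """Enumerate {x : sum_j x_j = k and L_j <= x_j <= U_j for all j}
    by brute force over the box [L_1,U_1] x ... x [L_ell,U_ell]."""
    ranges = [range(Lj, Uj + 1) for Lj, Uj in zip(L, U)]
    return [x for x in product(*ranges) if sum(x) == k]

# k = 5 ranks, ell = 2 groups, each group must get between 2 and 3 ranks.
print(fair_representations(5, [2, 2], [3, 3]))  # [(2, 3), (3, 2)]
```

This explicit enumeration can grow exponentially in $\ell$, which is one reason efficient sampling over this set is a nontrivial problem.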
Since we do not trust the between-group comparisons of the items, we do not know which of the feasible group fair representations is the best one; that is, for us, each of them is equally likely to be the best one. The next axiom captures this by asking for a distribution that has the highest entropy (or randomness) while choosing a group fair representation, which is nothing but a uniform distribution over all feasible group fair representations. Formally, the axiom is stated as follows. \begin{axiom}[Representation Fairness] \label{axm:rep_fairness} All the non-group fair representations should be sampled with probability zero, and all the group fair representations should be sampled uniformly at random. \end{axiom} There could be many distributions over rankings that satisfy \Cref{axm:intra_group} and \Cref{axm:rep_fairness}. Consider a distribution that samples a group representation $x$ uniformly at random. Let $x_1\in [L_1,U_1]$ be the representation corresponding to group $1$. Let us assume that this distribution always assigns ranks $k-x_1+1$ to $k$ to group $1$. Due to in-group consistency, the best $x_1$ items in group $1$ get assigned these ranks. However, always being at the bottom of the ranking is not fair to group $1$, since it gets low visibility. Therefore, we introduce a third axiom that asks for fairness in the second step of ranking -- assigning the top $k$ ranks to the groups in a \textit{rank-aware} manner. \begin{axiom}[Ranking Fairness] \label{axm:inter_group} For any two groups $j,j'\in [\ell]$, for all $i \in \set{0,\ldots, k-2}$, conditioned on the top $i$ ranks and a group representation $x$, the $(i+1)$-th and the $(i+2)$-th ranks are assigned to $j$ and $j'$ interchangeably with equal probability. That is, $\forall j, j' \in [\ell],\forall i \in \set{0,\ldots,k-2}$, \[ \Pr\sparen{Y_{i+1} = j, Y_{i+2} = j' \mid Y_0, Y_1, \ldots, Y_i, \boldsymbol{X}} =\Pr\sparen{Y_{i+1} = j', Y_{i+2} = j \mid Y_0, Y_1, \ldots, Y_i, \boldsymbol{X}}. 
\] \end{axiom} Let $\mathcal{U}$ represent a uniform distribution. In the result below, we show that there exists a unique distribution over the rankings that satisfies all three axioms. \begin{theorem} \label{thm:uniqueness} Let $\mathcal{D}$ be a distribution from which a ranking is sampled as follows, \begin{enumerate}[leftmargin=*] \itemsep0em \item Sample an $x$ as, $ \boldsymbol{X}\sim\mathcal{U}\set{x\in {\mathbb Z}_{\geq 0}^{\ell} :\sum_{j \in [\ell]} x_j = k \land L_j \le x_j \le U_j, \forall j \in [\ell]}. $ \item Sample a $y$, given $x$, as, $ \boldsymbol{Y}\mid x\sim\mathcal{U}\set{y\in [\ell]^{k}: \sum_{i \in [k]}\mathbb{I}[y_i = j] = x_j, \forall j \in [\ell]}. $ \item Rank the items within the same group in the order consistent with their in-group ranking, in the ranks assigned to the groups in the group assignment $y$. \end{enumerate} Then $\mathcal{D}$ is the unique distribution that satisfies all three axioms. \end{theorem} \begin{proof}[Proof of \Cref{thm:uniqueness}] Recall that $x = \paren{x_1, x_2, \ldots, x_{\ell}}$ is defined as a \textit{group representation} where $x_j$ is the number of ranks assigned to group $j$ for all $j \in [\ell]$, and $y = \paren{ y_1, y_2, \ldots, y_k}$ is defined as a \textit{group assignment} where $y_i$ is the group assigned to rank $i$ for all $i \in [k]$. For \Cref{axm:intra_group} to be satisfied, the distribution should consist only of rankings where the items from the same group are ranked in the order of their in-group ranking. Clearly $\mathcal{D}$ satisfies \Cref{axm:intra_group}. To satisfy \Cref{axm:rep_fairness}, all the group fair representations need to be sampled uniformly at random, and all the non-group fair representations need to be sampled with probability zero. Hence, $\mathcal{D}$ also satisfies \Cref{axm:rep_fairness}. 
We now use strong induction on the prefix length $i$ to show that any distribution over group assignments that satisfies \Cref{axm:inter_group} has to sample each group assignment $y$, conditioned on a group representation $x$, with equal probability. We note that whenever we say common prefix, we refer to the longest common prefix. \textbf{Induction hypothesis.} For a fixed group representation $x$, any two group assignments with a common prefix of length $i$, for some $0 \le i \le k-2$, have to be sampled with equal probability. \textbf{Base case ($i = k-2$).} Let $y$ and $y'$ represent a pair of group assignments with a fixed group representation $x$ and a common prefix of length $k-2$. Then there exist exactly two groups $j,j'\in [\ell]$ such that \begin{gather*} y_{k-1} = y'_{k} = j\quad\text{and}\quad y_{k} = y'_{k-1} = j'. \end{gather*} To satisfy \Cref{axm:inter_group}, these two group assignments $y$ and $y'$ need to be sampled with equal probability. Therefore we can conclude that, for a fixed $x$, any two group assignments with the same prefix of length $k-2$ have to be sampled with equal probability. We note here that there do not exist two or more group assignments with group representation $x$ and a common prefix of length exactly $k-1$. \textbf{Induction step.} Assume that for some $i < k-2$, any two group assignments with group representation $x$ and a common prefix of length $i' \in \set{i+1, i+2, \ldots, k-2}$ are equally likely. Then we want to show that any two group assignments with group representation $x$ and a common prefix of length $i$ are also equally likely. Let $y^{(s)}$ and $y^{(t)}$ be two different group assignments with group representation $x$ and a common prefix of length $i$. Let $w = \paren{w_1, w_2, \ldots, w_i}$ represent this common prefix of length $i$, that is, \[ w_1 := y^{(s)}_1 = y^{(t)}_1, w_2 := y^{(s)}_2 = y^{(t)}_2, \cdots, w_i := y^{(s)}_i = y^{(t)}_i. 
\] Observe that if $x_j'$ represents the number of ranks assigned to group $j$ in ranks $\paren{i+1, i+2, \ldots, k}$ in $y^{(s)}$, then the number of ranks assigned to group $j$ in ranks $\paren{i+1, i+2, \ldots, k}$ in $y^{(t)}$ is also $x_j'$ for all $j \in [\ell]$, since $y^{(s)}$ and $y^{(t)}$ have a common prefix of length $i$, and both have group representation $x$. Since $w$ is of length exactly $i$, we also have that $y^{(s)}_{i+1}\neq y^{(t)}_{i+1}$. But the observation above gives us that the group assigned to rank $i+1$ in $y^{(t)}$ appears in one of the ranks between $i+2$ and $k$ in $y^{(s)}$. Let $\mathcal{P}$ be the set of all permutations of the elements in the multi-set \[ \set{y^{(s)}_{i+2},y^{(s)}_{i+3}, \ldots, y^{(s)}_{k}}\Big\backslash \set{y^{(t)}_{i+1}}, \] that is, we remove one occurrence of the group assigned to rank $i+1$ in the group assignment $y^{(t)}$ from the multi-set $\set{y^{(s)}_{i+2},y^{(s)}_{i+3}, \ldots, y^{(s)}_{k}}$. We then have that each element of $\mathcal{P}$ is a tuple of length $k-i-2$. We now construct two sets of group assignments $M^{(s)}$ and $M^{(t)}$ as follows, \begin{align*} M^{(s)} := \set{ \paren{\underbrace{w_1, w_2, \ldots, w_i}_{\text{first}~i},\underbrace{y^{(s)}_{i+1}}_{i+1},\underbrace{y^{(t)}_{i+1}}_{i+2}, \underbrace{\hat{w}_1, \hat{w}_2, \ldots, \hat{w}_{k-i-2}}_{\text{last}~k-i-2}}, \forall \hat{w} \in \mathcal{P}},\\ M^{(t)} := \set{ \paren{\underbrace{w_1, w_2, \ldots, w_i}_{\text{first}~i},\underbrace{y^{(t)}_{i+1}}_{i+1},\underbrace{y^{(s)}_{i+1}}_{i+2}, \underbrace{\hat{w}_1, \hat{w}_2, \ldots, \hat{w}_{k-i-2}}_{\text{last}~k-i-2}}, \forall \hat{w} \in \mathcal{P}}. \end{align*} For a fixed $\hat{w}\in \mathcal{P}$ there is exactly one group assignment in $M^{(s)}$ and one group assignment in $M^{(t)}$ such that their $(i+1)$-st and $(i+2)$-nd coordinates are interchanged, and their first $i$ and last $k-i-2$ coordinates are the same. Therefore, $\abs{M^{(s)}} = \abs{M^{(t)}}$. 
We also have from the induction hypothesis that all the group assignments in $M^{(s)}$ are equally likely since they have a common prefix of length $i+2$. Similarly all the group assignments in $M^{(t)}$ are equally likely. For any group assignment in $M^{(s)}$ let $\delta^{(s)}$ be the probability of sampling it. Similarly, for any group assignment in $M^{(t)}$ let $\delta^{(t)}$ be the probability of sampling it. Then, \begin{multline} \Pr\sparen{Y_{i+1} = y^{(s)}_{i+1}, Y_{i+2} = y^{(t)}_{i+1} \mid Y_0, \paren{Y_1, \ldots, Y_i} = w, \boldsymbol{X}=x} \\ = \Pr\sparen{\text{sampling a group assignment from}~M^{(s)}} = \abs{M^{(s)}}\delta^{(s)},\label{eq:ms} \end{multline} \begin{multline} \Pr\sparen{Y_{i+1} = y^{(t)}_{i+1}, Y_{i+2} = y^{(s)}_{i+1} \mid Y_0, \paren{Y_1, \ldots, Y_i} = w, \boldsymbol{X}=x} \\= \Pr\sparen{\text{sampling a group assignment from}~M^{(t)}} = \abs{M^{(t)}}\delta^{(t)}.\label{eq:mt} \end{multline} Fix two group assignments $y^{(s')}\in M^{(s)}$ and $y^{(t')}\in M^{(t)}$. By the induction hypothesis $y^{(s)}$ and $y^{(s')}$ are equally likely since they have a common prefix of length $i+1$. Similarly $y^{(t)}$ and $y^{(t')}$ are also equally likely. Therefore, for $y^{(s)}$ and $y^{(t)}$ to be equally likely we need $y^{(s')}$ and $y^{(t')}$ to be equally likely. \paragraph{Comparing $y^{(s')}$ and $y^{(t')}$ instead of $y^{(s)}$ and $y^{(t)}$.} We know from above that $y^{(s')}$ and $y^{(t')}$ are sampled with probability $\delta^{(s)}$ and $\delta^{(t)}$ respectively. 
Therefore for any distribution satisfying \Cref{axm:inter_group} we have, \begin{align*} &\Pr\sparen{Y_{i+1} = y^{(s)}_{i+1}, Y_{i+2} = y^{(t)}_{i+1} \mid Y_0, \paren{Y_1, \ldots, Y_i} = w, \boldsymbol{X}=x} \\ &\qquad\qquad\qquad= \Pr\sparen{Y_{i+1} = y^{(t)}_{i+1}, Y_{i+2} = y^{(s)}_{i+1} \mid Y_0, \paren{Y_1, \ldots, Y_i} = w, \boldsymbol{X}=x}\\ &\implies \abs{M^{(s)}}\delta^{(s)} = \abs{M^{(t)}}\delta^{(t)} \qquad\qquad\qquad\text{from Equations}~(\ref{eq:ms})~\text{and}~(\ref{eq:mt}) \\ &\implies \delta^{(s)} = \delta^{(t)}.\qquad\qquad\qquad\qquad\qquad\qquad\because \abs{M^{(s)}}=\abs{M^{(t)}} \end{align*} Note that the converse is also easy to show, which means that \Cref{axm:inter_group} is satisfied if and only if $y^{(s')}$ and $y^{(t')}$ are equally likely. Therefore, \Cref{axm:inter_group} is satisfied if and only if $y^{(s)}$ and $y^{(t)}$ are equally likely. For a fixed group representation $x$, for any two group assignments with corresponding group representation $x$, there exists an $i \in \set{0,1,\ldots, k-2}$ such that they have a common prefix of length $i$. Therefore, any two group assignments, for a fixed group representation $x$, have to be equally likely. Therefore $\mathcal{D}$ is the unique distribution that satisfies all three axioms. \end{proof} We also have the following additional characteristic of the distribution in \Cref{thm:uniqueness}. It guarantees that every rank in a randomly sampled group assignment is assigned to group $j$ with probability at least $\frac{L_j}{k}$ and at most $\frac{U_j}{k}$. Hence, every rank gets a sufficient representation of each group. Note that no deterministic group fair ranking can achieve this. 
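For concreteness, the three-step sampling procedure of \Cref{thm:uniqueness} can be sketched as follows. This is a brute-force illustration of ours that enumerates all group fair representations, so it is suitable only for small $k$ and $\ell$; the function name and toy inputs are hypothetical:

```python
import random
from itertools import product

def sample_group_fair_ranking(in_group, L, U, k, rng=random):
    """Steps 1-3: uniform group fair representation x, uniform group
    assignment y given x, then fill each group's ranks in in-group order."""
    ell = len(in_group)
    # Step 1: pick x uniformly from {x : sum_j x_j = k, L_j <= x_j <= U_j}.
    reps = [x for x in product(*(range(L[j], U[j] + 1) for j in range(ell)))
            if sum(x) == k]
    x = rng.choice(reps)
    # Step 2: a uniform shuffle of the multiset with x_j copies of group j
    # is a uniform group assignment conditioned on x.
    y = [j for j, xj in enumerate(x) for _ in range(xj)]
    rng.shuffle(y)
    # Step 3: assign each group's ranks to its items in in-group order.
    iters = [iter(items) for items in in_group]
    return [next(iters[j]) for j in y]

# Two in-group rankings, k = 4 ranks, each group gets between 1 and 3 ranks.
print(sample_group_fair_ranking([["a1", "a2", "a3"], ["b1", "b2", "b3"]],
                                L=[1, 1], U=[3, 3], k=4))
```

Each call may yield a different ranking, but items of the same group always appear in their in-group order (\Cref{axm:intra_group}) and the group counts always satisfy the bounds (\Cref{axm:rep_fairness}).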
Let $\widehat{\mathcal{D}}$ be a distribution that differs from $\mathcal{D}$ as follows: $\boldsymbol{X}$ is sampled from a distribution $\epsilon$-close, in total-variation distance, to the uniform distribution in Step 1 of $\mathcal{D}$; $\boldsymbol{Y}\mid x$ is sampled as in Step 2 of $\mathcal{D}$; and the items are assigned as in Step 3 of $\mathcal{D}$. Then it is easy to show that $\widehat{\mathcal{D}}$ is $\epsilon$-close to $\mathcal{D}$ in total-variation distance. We then have the following theorem and its corollary. \begin{theorem} \label{thm:rep_at_i} For any $\epsilon > 0$ and group assignment $\boldsymbol{Y}$ sampled from $\widehat{\mathcal{D}}$, for every group $j \in [\ell]$ and for every rank $i \in [k]$, $ \frac{L_j}{k} \le \Pr_{\widehat{\mathcal{D}}}\sparen{Y_i = j}\le\frac{U_j}{k}$. \end{theorem} \begin{proof}[Proof of \Cref{thm:rep_at_i}] Fix $\epsilon > 0$ and let $\widehat{\mathcal{D}}$ be a distribution at total-variation distance at most $\epsilon$ from the distribution $\mathcal{D}$ defined in \Cref{thm:uniqueness} in the sampling of the group representation. Let $\mathcal{X}$ be the set of all group fair representations for the given constraints $L_j, U_j, \forall j \in [\ell]$. Then, \begin{equation} \label{eq:tv_for_cX} \sup_{A \subseteq \mathcal{X}}\abs{\Pr_{\mathcal{D}}(A) - \Pr_{\widehat{\mathcal{D}}}(A)} \le \epsilon. \end{equation} Now, fix a group $j \in [\ell]$ and a rank $i \in [k]$. Then, \begin{align*} \Pr_{\widehat{\mathcal{D}}}\sparen{Y_i = j} &= \sum_{x\in\mathcal{X}}\Pr_{\widehat{\mathcal{D}}}\sparen{\boldsymbol{X} = x}\Pr_{\widehat{\mathcal{D}}}\sparen{Y_i = j \mid \boldsymbol{X} = x} &\text{by the law of total probability}\\ &= \sum_{x\in\mathcal{X}}\Pr_{\widehat{\mathcal{D}}}\sparen{\boldsymbol{X} = x}\frac{x_j}{k} &\text{since $\boldsymbol{Y}\mid x$ is uniform in Step 2}\\ &\ge \frac{L_j}{k}\sum_{x\in\mathcal{X}}\Pr_{\widehat{\mathcal{D}}}\sparen{\boldsymbol{X} = x}\\ &= \frac{L_j}{k}. \end{align*} Similarly we get $\Pr_{\widehat{\mathcal{D}}}\sparen{Y_i = j} \le \frac{U_j}{k}$. 
\end{proof} \begin{corollary} \label{cor:rep_at_prefix} Let $i, i' \in [k]$ be such that $i \le i'$ and let $Z_{i,i'}^j$ be a random variable representing the number of ranks assigned to group $j$ in ranks $i$ to $i'$, for a ranking sampled from $\widehat{\mathcal{D}}$. Then, for every group $j \in [\ell]$, $ \paren{\frac{i'-i+1}{k}}\cdot L_j \le \mathbb E_{\widehat{\mathcal{D}}}\sparen{Z_{i,i'}^j}\le\paren{\frac{i'-i+1}{k}}\cdot U_j$. \end{corollary} \begin{proof}[Proof of \Cref{cor:rep_at_prefix}] Fix $\epsilon > 0$ and let $\widehat{\mathcal{D}}$ be as in \Cref{thm:rep_at_i}. Fix a group $j \in [\ell]$ and ranks $i,i' \in [k]$ such that $i \le i'$. \begin{align*} \mathbb E_{\widehat{\mathcal{D}}}\sparen{Z_{i,i'}^j} &= \mathbb E_{\widehat{\mathcal{D}}}\sparen{\sum_{\hat{i} = i}^{i'}\mathbb{I}\sparen{Y_{\hat{i}} = j}} \\ &= \sum_{\hat{i} = i}^{i'}\mathbb E_{\widehat{\mathcal{D}}}\sparen{\mathbb{I}\sparen{Y_{\hat{i}} = j}} &\text{by linearity of expectation}\\ &= \sum_{\hat{i} = i}^{i'}\Pr_{\widehat{\mathcal{D}}}\sparen{Y_{\hat{i}} = j}\\ &\ge \frac{i'-i+1}{k}\cdot L_j. &\text{from \Cref{thm:rep_at_i}} \end{align*} Similarly $\mathbb E_{\widehat{\mathcal{D}}}\sparen{Z_{i,i'}^j} \le \frac{i'-i+1}{k}\cdot U_j$. \end{proof} Three comments are in order. First, fixing $i = 1$ in \Cref{cor:rep_at_prefix} gives us that every prefix of the ranking sampled from $\widehat{\mathcal{D}}$ will have sufficient representation from the groups, in expectation. Such fairness requirements are consistent with those studied in the ranking literature \cite{CSV2018}. Second, let $k' := i' - i + 1$ for some $i, i' \in [k]$ such that $i \le i'$. 
Then \Cref{cor:rep_at_prefix} also gives us that any consecutive $k'$ ranks of the ranking sampled from $\widehat{\mathcal{D}}$ also satisfy representation constraints. Such fairness requirements are consistent with those studied in \cite{GDL2021}. Third, \Cref{cor:rep_at_prefix} gives us that $\widehat{\mathcal{D}}$ also satisfies \textit{ex-ante} fairness for any $k' < k$ consecutive ranks in the top $k$ ranking, that is, in any $k'$ consecutive ranks, it satisfies the representation constraints, in expectation. We also have from \Cref{axm:rep_fairness} that the distribution $\widehat{\mathcal{D}}$ in \Cref{thm:rep_at_i} satisfies \textit{ex-post} fairness for the top $k$ ranking, since the support of the distribution consists only of rankings that satisfy representation constraints. This is important when the fairness constraints are legal or strict requirements. Note that the randomized rankings that achieve equal expected exposure \cite{DME+2020,SJ2018} only satisfy \textit{ex-ante} fairness because they can satisfy equality of exposure only in expectation. \paragraph{Time taken to sample from $\mathcal{D}$ (in \Cref{thm:uniqueness}).} Given a group representation $x$, one can sample a uniform random $y$ conditioned on $x$ by sampling a random binary string of length at most $\log k!$. This takes time $\mathcal{O}(k \log k)$. This sampling takes care of Step 2. Step 3 simply takes $\mathcal{O}(k)$ time, given the in-group rankings of all the groups. The main difficulty is to provide an efficient algorithm to perform Step 1. Therefore, in the next section, we focus on sampling a uniform random group fair representation in Step 1. 
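The per-rank guarantee of \Cref{thm:rep_at_i} can also be checked empirically by executing Steps 1 and 2 by brute force. The sketch below is our own illustrative code with hypothetical parameter values; it estimates $\Pr\sparen{Y_i = j}$ by Monte Carlo and confirms that every estimate lies in $[L_j/k,\,U_j/k]$:

```python
import random
from itertools import product
from collections import Counter

def estimate_rank_probabilities(k, L, U, trials=20000, seed=0):
    """Estimate Pr[Y_i = j]: Step 1 draws a uniform group fair
    representation x, Step 2 a uniform shuffle of groups given x."""
    rng = random.Random(seed)
    ell = len(L)
    # All group fair representations (brute force; small instances only).
    reps = [x for x in product(*(range(L[j], U[j] + 1) for j in range(ell)))
            if sum(x) == k]
    hits = [Counter() for _ in range(k)]  # hits[i][j] = #{runs with Y_i = j}
    for _ in range(trials):
        x = rng.choice(reps)
        y = [j for j, xj in enumerate(x) for _ in range(xj)]
        rng.shuffle(y)
        for i, j in enumerate(y):
            hits[i][j] += 1
    return [{j: hits[i][j] / trials for j in range(ell)} for i in range(k)]

# k = 4 ranks, two groups, L = [1, 1], U = [3, 3]: the theorem predicts
# 1/4 <= Pr[Y_i = j] <= 3/4 for every rank i and group j.
for i, p in enumerate(estimate_rank_probabilities(4, [1, 1], [3, 3])):
    assert all(0.25 <= p[j] <= 0.75 for j in p), (i, p)
```

Here Step 1 is exact (uniform over the enumerated representations), i.e., $\epsilon = 0$, so the bounds of \Cref{thm:rep_at_i} hold up to Monte-Carlo error.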
\section{Sampling a Uniform Random Group-Fair Representation} \label{sec:algorithms} We first note that each group-fair representation corresponds to a unique integral point in the convex polytope $K$ defined below, \begin{equation} K = \Big\{\paren{x_1, x_2, \ldots, x_{\ell}}\in\mathbb R^{\ell}~~\Big|~~\sum_{j \in [\ell]} x_j = k~~\text{and}~~L_j \le x_j \le U_j, \forall j \in [\ell] \Big\}.\label{eq:polytopeK} \end{equation} Therefore, sampling a uniform random group-fair representation is equivalent to sampling a lattice (integral) point uniformly at random from the convex set $K$. \subsection{Approximate Uniform Sampling} We now give our first algorithm (\Cref{alg:random_walk}), which outputs an integral point of $K$, defined in \Cref{eq:polytopeK}, drawn from a density that is close, in total variation distance, to the uniform distribution over the set of integral points in $K$. There is a long line of work on polynomial-time algorithms to sample a point approximately uniformly from a given convex polytope or convex body \citep{DFK1991,LV2006,CV2018}. We use the algorithm of \citet{CV2018} as the {\sc Sampling-Oracle}~in \Cref{alg:random_walk}. This yields an algorithm with expected running time {\sf poly}$(k,\ell,1/\epsilon)$ for sampling a close-to-uniform random group-fair representation (\Cref{thm:main}). \begin{theorem} \label{thm:main} Let $L_j, U_j\in {\mathbb Z}_{\geq 0}, \forall j \in [\ell]$ be the fairness constraints and $k\in {\mathbb Z}_{\geq 0}$ be the size of the ranking. Let $\Delta:=\min\bigg\{\floor{\frac{k- \paren{\sum_{j \in [\ell]}L_j}}{\ell}},\floor{\frac{ \paren{\sum_{j \in [\ell]}U_j }-k}{\ell}},\min_{j \in [\ell]}\floor{\frac{U_j- L_j}{2}}\bigg\}$.
Then for any non-negative number $\epsilon < e^{-2\frac{\ell\sqrt{\ell}}{\Delta}}$, \Cref{alg:random_walk} samples a random point from a density that is within total variation distance $\epsilon$ from the uniform distribution on the integral points in $K$ by making at most $1/\paren{e^{-2\frac{\ell\sqrt{\ell}}{\Delta}} - \epsilon}$ calls to the oracle in expectation. When $\epsilon$ is a non-negative constant such that $\epsilon < e^{-2}$, and $\Delta = \Omega\paren{\ell^{1.5}}$, \Cref{alg:random_walk} calls the oracle only a constant number of times in expectation. \end{theorem} \begin{algorithm}[t] \setcounter{AlgoLine}{0} \SetAlgoLined \KwIn{Fairness constraints $L_j, U_j$ for all the groups $j \in [\ell]$, numbers $k \in {\mathbb Z}_{\geq 0}$ and $\epsilon$} \nonl \hrulefill $H := \set{\paren{x_1, x_2, \ldots, x_{\ell}}\in\mathbb R^{\ell}~~\middle\vert~~\sum_{j \in [\ell]} x_j = k}$.\label{step:def_H} $P := \set{\paren{x_1, x_2, \ldots, x_{\ell}}\in\mathbb R^{\ell}~~\middle\vert~~L_j \le x_j \le U_j, \forall j \in [\ell] }$. \tcp*[f]{note that $K = H \cap P$.}\label{step:def_P} $\Delta := \min\Bigg\{\floor{\frac{k- \paren{\sum_{j \in [\ell]}L_j}}{\ell}}, \floor{\frac{ \paren{\sum_{j \in [\ell]}U_j }-k}{\ell}},\min_{j \in [\ell]}\floor{\frac{U_j- L_j}{2}}\Bigg\}$.\label{step:def_delta} \tcp*[f]{radius of the largest ball } \nonl\tcp*[f]{inside $P$ with an integral point in $H$ as center.} \nonl \color{lightgray}\hrulefill \color{black} $x^*_j := L_j + \Delta, \forall j \in [\ell]$.\label{step:find_center_start}\tcp*[f]{find one such integral point $x^* \in H$ such that~$B(x^*,\Delta)\subseteq P.$} \For{$j := 1,2,\ldots, \ell$} {\label{step:for} \textbf{if }$\sum_{j' \in [\ell]}x_{j'}^* < k$\textbf{ then } $x^*_{j} := \min\set{k - \sum_{j' \neq j}x^*_{j'},~~U_{j} - \Delta}$\textbf{ end }}\label{step:find_center_end} \nonl \color{lightgray}\hrulefill \color{black} $K' := K - x^*$.
\tcp*[f]{translate the polytope $K$ to $K'$; note $K' = (H - x^*)\cap (P-x^*)$.}\label{step:def_K_prime} \nonl \color{lightgray}\hrulefill \color{black} $z$ := {\sc Sampling-Oracle}$\paren{ \paren{1+\frac{\sqrt{\ell}}{\Delta}}K',\epsilon}$.\tcp*[f]{sample a rational point from close to uniform~~} \label{step:sample} \nonl \tcp*[f]{distribution on the expanded polytope.} \nl\textbf{for }$j \in [\ell]$\textbf{: if }$j \le \abs{\sum_{j' \in [\ell]} \floor{z_{j'}}}$\textbf{ then }$x_j := \ceil{z_j}$\textbf{; else }$x_j := \floor{z_j}$.\label{step:rounding}\tcp*[f]{round $z$ to an integer point on $H-x^*$.} \textbf{if } $x\in K'$, return $x+x^*$, \textbf{ else } reject $x$ and go to Step \ref{step:sample} \label{step:reject}.\tcp*[f]{accept if in $K'$, else reject.} \caption{Sampling an approximately uniform random group-fair representation} \label{alg:random_walk} \end{algorithm} \subsubsection{Overview of the algorithm and the proof of \Cref{thm:main}} \label{subsec:overview} Let $H, P,$ and $\Delta$ be as defined in Steps \ref{step:def_H}, \ref{step:def_P} and \ref{step:def_delta} respectively. Clearly $K = H \cap P$. We first find an integral center $x^* \in H \cap P$ (Steps \ref{step:find_center_start} to \ref{step:find_center_end}) such that the ball of radius $\Delta$ centered at $x^*$ lies in $P$ (see \Cref{lem:ball_contained}). We then translate the origin to this point $x^*$. This ensures that the translated polytope $K'$ (see Step \ref{step:def_K_prime}) contains a ball of radius $\Delta$ centered at the origin. Moreover, $x^*$ being an integral point ensures that there exists a bijection between the set of integral points in the translated polytope $K'$ and that of the original polytope $K$ (see proof of \Cref{thm:main}). Now consider the expanded polytope $\paren{1+\frac{\sqrt{\ell}}{\Delta}}K'$. Note that for any scalar $\alpha \neq 0$, the polytope $\alpha K'$ contains a point $x$ if and only if $\frac{1}{\alpha}x$ belongs to $K'$; that is, $\alpha K' = \set{\alpha x : x \in K'}$. Let $H' = H - x^*$ and $P' = P - x^*$.
Then $K' = K - x^* = H \cap P - x^* = (H-x^*) \cap (P - x^*) = H'\cap P'$. Let $B(x,\Delta)$ represent an $l_2$ ball of radius $\Delta$ centered at $x$. We show in \Cref{lem:rounding} that our deterministic rounding algorithm (in Step \ref{step:rounding}) is designed such that the set of points in the expanded polytope that get rounded to an integral point on $H'$ is contained inside a cube of side length $2$ around this point. In \Cref{lem:cube_contained} we show that this cube of side length $2$ is fully contained in the expanded polytope. \Cref{lem:same_size} gives us that for any two integral points $x$ and $x'$, there is a bijection between the sets of points that get rounded to them. Therefore, every integral point is sampled from a distribution close to uniform, provided the {\sc Sampling-Oracle}~samples a rational point in the expanded polytope from a distribution close to uniform. Therefore, in Step \ref{step:sample} we sample a random point from a distribution close to uniform, using a {\sc Sampling-Oracle}, from the expanded polytope. We then round the sampled point to an integer point on $H'$. If it belongs to $K'$ we accept, else we reject and go to Step \ref{step:sample}. Our algorithm has expected running time polynomial in $k$ and $1/\epsilon$, and exponential in $\ell$. However, if $\Delta = \Omega\paren{\ell^{1.5}}$, it has expected running time polynomial in $k$, $1/\epsilon$, and $\ell$. Note that in \Cref{alg:random_walk}, $K' = K-x^*$ where $x^*$ is an integral point such that $\sum_{j \in [\ell]}x_j^* = k$. Therefore, there is a one-to-one correspondence between the integral points in $K$ and those in $K'$, and we may work with the polytope $K'$ in the above theorem. The value of $R^2$ in \Cref{thm:vempala} for the polytope $K'$ is $k^2$. Therefore, the algorithm by \citet{CV2018} gives a rational point from $K'$, from a distribution close to uniform, in time $\mathcal{O}^*\paren{k^2\ell^2}$.
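For concreteness, the following Python sketch mirrors the structure of \Cref{alg:random_walk}: compute $\Delta$ and the integral center $x^*$ (Steps \ref{step:find_center_start} to \ref{step:find_center_end}), sample from the expanded, translated polytope, round deterministically back onto the hyperplane, and accept or reject. This is an illustrative sketch only: \texttt{mock\_oracle} is our own stand-in for the {\sc Sampling-Oracle}~(simple rejection sampling from a bounding box, which happens to be exactly uniform here), not the sampler of \citet{CV2018}, and the rounding bumps up $m$ fractional coordinates, which coincides with the ``first $m$ coordinates'' rule of Step \ref{step:rounding} for a generic sample.

```python
import math
import random

def find_center(L, U, k):
    """Compute Delta and an integral center x* on the hyperplane
    sum(x) = k with L[j] + Delta <= x*[j] <= U[j] - Delta for all j."""
    ell = len(L)
    delta = min((k - sum(L)) // ell,
                (sum(U) - k) // ell,
                min((U[j] - L[j]) // 2 for j in range(ell)))
    x = [L[j] + delta for j in range(ell)]
    for j in range(ell):
        if sum(x) < k:
            x[j] = min(k - (sum(x) - x[j]), U[j] - delta)
    return x, delta

def round_to_hyperplane(z):
    """Deterministic rounding: bump m = |sum_j floor(z_j)| fractional
    coordinates up and leave the rest floored, so the sum stays 0."""
    x = [math.floor(v) for v in z]
    m = abs(sum(x))  # deficit to be restored by rounding up
    for j in range(len(z)):
        if m == 0:
            break
        if x[j] != z[j]:  # fractional coordinate: ceiling gains +1
            x[j] += 1
            m -= 1
    return x

def mock_oracle(lo, hi, rng):
    """Illustrative stand-in for the Sampling-Oracle: rejection sampling
    from the bounding box of the expanded polytope, restricted to the
    hyperplane sum(z) = 0."""
    while True:
        z = [rng.uniform(lo[j], hi[j]) for j in range(len(lo) - 1)]
        z.append(-sum(z))  # stay on the hyperplane
        if lo[-1] <= z[-1] <= hi[-1]:
            return z

def sample_fair_representation(L, U, k, rng=random):
    ell = len(L)
    x_star, delta = find_center(L, U, k)
    assert delta >= 1, "sketch assumes a non-degenerate instance"
    scale = 1 + math.sqrt(ell) / delta
    # Bounding box of the expanded, translated polytope (1 + sqrt(l)/Delta) K'.
    lo = [scale * (L[j] - x_star[j]) for j in range(ell)]
    hi = [scale * (U[j] - x_star[j]) for j in range(ell)]
    while True:
        z = mock_oracle(lo, hi, rng)
        x = round_to_hyperplane(z)
        cand = [x[j] + x_star[j] for j in range(ell)]
        if sum(cand) == k and all(L[j] <= cand[j] <= U[j] for j in range(ell)):
            return cand  # accepted: an integral point of K
```

Each call to \texttt{mock\_oracle} corresponds to one oracle call in the analysis above; the outer loop is the accept/reject step.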
Therefore, each oracle call in \Cref{thm:main} takes time $\mathcal{O}^*(k^2\ell^2)$ when we use \Cref{thm:vempala} as the {\sc Sampling-Oracle}. \subsubsection{Proof of \Cref{thm:main}} \label{subsec:proof_main} For completeness, we restate Theorem 1.2 of \citet{CV2018}, which gives the running time and success probability of their uniform sampler. \begin{theorem}[Theorem 1.2 in \citep{CV2018}] \label{thm:vempala} There is an algorithm that, for any $\epsilon > 0$, $p > 0$, and any convex body $C \subseteq \mathbb R^{d}$ that contains the unit ball and has $\mathbb E_C(\|X\|^2) = R^2$, with probability $1 - p$, generates random points from a density $\nu$ that is within total variation distance $\epsilon$ from the uniform distribution on $C$. In the membership oracle model, the complexity of each random point, including the first, is $\mathcal{O}^*\paren{\max\set{R^2{d}^2, {d}^3}}$\footnote{The $\mathcal{O}^*$ notation suppresses error terms and logarithmic factors.}. \end{theorem} \begin{lemma} \label{lem:ball_contained} $B(0,\Delta) \subseteq P'$. \end{lemma} \begin{proof} From the definition of $\Delta$ we have the following inequalities.
\begin{align} \Delta \le \floor{\frac{k- \paren{\sum_{j \in [\ell]}L_j}}{\ell}} &\implies \Delta \le \frac{k- \paren{\sum_{j \in [\ell]}L_j}}{\ell}\implies \ell\cdot \Delta+\sum_{j \in [\ell]}L_j \le k \\ &\implies \sum_{j \in [\ell]}\paren{L_j+\Delta} \le k\label{eq:suml_delta}, \\ \Delta\le \floor{\frac{\paren{\sum_{j \in [\ell]}U_j}-k}{\ell}} &\implies \Delta \le \frac{\paren{\sum_{j \in [\ell]}U_j}-k}{\ell}\implies -\ell \cdot \Delta+\sum_{j \in [\ell]}U_j \ge k\\ &\implies \sum_{j \in [\ell]}\paren{U_j-\Delta} \ge k\label{eq:sumu_delta},\\ \text{and}\notag\\ \Delta \le \floor{\frac{U_j - L_j}{2}} &\implies \Delta \le \frac{U_j - L_j}{2}\implies 2\Delta \le U_j - L_j\implies L_j + \Delta \le U_j - \Delta.\label{eq:l_lessthan_u} \end{align} \noindent To show that Steps \ref{step:find_center_start} to \ref{step:find_center_end} find the correct center, we use the following loop invariant. \paragraph{Loop invariant.} At the start of every iteration of the \textbf{for} loop, $x^*$ is an integral point such that $L_j + \Delta \le x_j^* \le U_j - \Delta, \forall j \in [\ell]$. \begin{itemize} \item[] \textbf{Initialization:} In Step \ref{step:find_center_start} each $x_j^*$ is initialized to $L_j + \Delta$. From \Cref{eq:l_lessthan_u} we know that $L_j + \Delta \le U_j - \Delta$. Moreover, $L_j, U_j$, and $\Delta$ are all integers. Therefore, $x^*$ is integral and satisfies $L_j + \Delta \le x_j^* \le U_j - \Delta, \forall j \in [\ell]$. \item[] \textbf{Maintenance:} If the condition in Step \ref{step:find_center_end} fails, the value of $x^*$ is not updated. Therefore the invariant is maintained. If the condition succeeds, we have that \begin{equation} \sum_{j'\in[\ell]} x_{j'}^* < k.\label{eq:if_cond_succeeds} \end{equation} The value $x_j^*$ is set to $\min\set{k - \sum_{j' \in [\ell] : j' \neq j} x_{j'}^*~,~~~U_{j} - \Delta}$ in Step \ref{step:find_center_end}. The following two cases arise based on the minimum of the two quantities.
\begin{itemize} \item \textbf{Case 1: $ k - \sum_{j' \in [\ell] : j' \neq j} x_{j'}^* \le U_{j}-\Delta $.}\\ In this case $x^*_{j}$ is set to $k - \sum_{j' \in [\ell] : j' \neq j} x_{j'}^* \le U_j - \Delta $, which is an integer value since $k$ and all the $x_{j'}^*$ are integers before the iteration. From (\ref{eq:if_cond_succeeds}) we have that \begin{equation} \label{eq:x_j_increases} k - \sum_{j' \in [\ell] : j' \neq j} x_{j'}^* = x_{j}^* + k - \sum_{j' \in [\ell]} x_{j'}^* > x_{j}^*. \end{equation} Since $x^*_{j} \ge L_{j} + \Delta$ before the iteration, (\ref{eq:x_j_increases}) gives us that $x^*_{j}$ is greater than $L_{j} + \Delta$ even after the update. \item \textbf{Case 2: $ k - \sum_{j' \in [\ell] : j' \neq j} x_{j'}^* > U_{j}-\Delta $.}\\ Since $U_{j} - \Delta \ge L_{j} + \Delta$ from \Cref{eq:l_lessthan_u} and since $U_{j} - \Delta$ is an integer, the value of $x^*_{j}$ after the update is an integer such that $U_{j}-\Delta \ge x^*_{j} \ge L_{j} + \Delta$. \end{itemize} Therefore in both the cases the invariant is maintained. \item[] \textbf{Termination:} At termination $j = \ell$. The invariant gives us that $x^*$ is an integral point such that $L_j + \Delta \le x_j^* \le U_j - \Delta, \forall j \in [\ell]$. \end{itemize} From \Cref{eq:suml_delta} we have that before the start of the \textbf{for} loop, $\sum_{j \in [\ell]}x_j^* = \sum_{j \in [\ell]}\paren{L_j +\Delta} \le k$. After the termination of the \textbf{for} loop, either $\sum_{j \in [\ell]}x_j^* = k$ (this holds from initialization if $\sum_{j \in [\ell]}\paren{L_j+\Delta} = k$, and after the first iteration in which Case 1 applies otherwise; once $\sum_{j \in [\ell]}x_j^* = k$, the \textbf{if} condition in Step \ref{step:find_center_end} fails in every later iteration and $x^*$ does not change), or Case 2 applies in every iteration, in which case $x_j^* = U_j - \Delta$ for all $j \in [\ell]$. Therefore, after the \textbf{for} loop we get $\sum_{j \in [\ell]}x_j^* = \min\set{\sum_{j \in [\ell]}\paren{U_j -\Delta}, k}$.
But \Cref{eq:sumu_delta} gives us that $\sum_{j \in [\ell]}\paren{U_j -\Delta} \ge k$. Therefore, the \textbf{for} loop finds an integral point $x^*$ such that $L_j + \Delta \le x_j^* \le U_j - \Delta, \forall j \in [\ell]$, and $\sum_{j \in [\ell]}x_j^* = k$. Therefore there is an $l_\infty$ ball of radius $\Delta$ in $P$ centered at the integral point $x^* \in H$ (that is, $\sum_{j \in [\ell]}x_j^* = k$). Consequently there exists an $l_\infty$ ball of radius $\Delta$ centered at the origin in the polytope $P'$. Since an $l_\infty$ ball of radius $\Delta$ centered at the origin encloses the $l_2$ ball of radius $\Delta$ centered at the origin, we get that the $l_2$ ball of radius $\Delta$ centered at the origin, $B(0,\Delta)$, is in the polytope $P'$. \end{proof} \noindent Let $C(x,\beta)\subseteq \mathbb R^{\ell}$ represent a cube of side length $\beta$ centered at $x$. For any integral point $x \in K'$ let $F_x$ represent the set of points in $\paren{1+\frac{\sqrt{\ell}}{\Delta}}K'$ that are rounded to $x$. \begin{lemma} \label{lem:rounding} For any integral point $x \in K'$, $F_x \subseteq H' \cap C(x,2)$. \end{lemma} \begin{proof} Let $z$ be the point sampled in Step \ref{step:sample}. Since $z \in \paren{1+\frac{\sqrt{\ell}}{\Delta}}K'$ we have that $\sum_{j \in [\ell]}z_j = 0$. Therefore, \[ \sum_{j \in [\ell]} \floor{z_j} \le 0 \qquad \text{and} \qquad \sum_{j \in [\ell]} \ceil{z_j} \ge 0. \] Then, \begin{align*} m = \abs{\sum_{j \in [\ell]} \floor{z_j}} = \abs{\sum_{j \in [\ell]} \floor{z_j} - \sum_{j \in [\ell]} z_j} =\abs{\sum_{j \in [\ell]} \paren{\floor{z_j} - z_j}} \le \sum_{j \in [\ell]} \abs{\floor{z_j} - z_j} \le \ell, \end{align*} where the second equality is because $ \sum_{j \in [\ell]} z_j = 0$. Hence, starting from $x_j = \floor{z_j}, \forall j \in [\ell]$, the algorithm has to round $m \le \ell$ coordinates up to $x_j = \ceil{z_j}$, which is always possible.
Therefore, the rounding in Step \ref{step:rounding} always finds an integral point $x$ that satisfies the following, \begin{equation} \sum_{j \in [\ell]} x_j = 0\qquad \text{and} \qquad \paren{\forall j \in [\ell], x_j = \floor{z_j}~~\text{or}~~x_j = \ceil{z_j}}.\label{eq:rounded_x} \end{equation} Therefore, the set of points $z \in \paren{1+\frac{\sqrt{\ell}}{\Delta}}K'$ that are rounded to an integral point $x\in K'$ satisfying (\ref{eq:rounded_x}) is a subset of \[ \set{z :\paren{\forall j \in [\ell], x_j = \floor{z_j} \lor x_j = \ceil{z_j}} \land \sum_{j \in [\ell]} z_j = 0 }, \] which is contained in $H' \cap C(x,2)$ since $\abs{z_j - \floor{z_j}} \le 1$ and $\abs{ \ceil{z_j} - z_j} \le 1, \forall j \in [\ell]$. \end{proof} \begin{lemma} For any $x \in K'$, $ H' \cap C(x,2) \subseteq \paren{1+\frac{\sqrt{\ell}}{\Delta}}K'$. \label{lem:cube_contained} \end{lemma} \begin{proof} Fix a point $x \in P'$ (recall that $K' \subseteq P'$). Then for any $x' \in C(x,2)$, $\|x'-x\|_2 \le \sqrt{\ell}$. \Cref{lem:ball_contained} gives us that the translated polytope $P'$ contains a ball of radius $\Delta$ centered at the origin. Then the polytope $\frac{\sqrt{\ell}}{\Delta}P'$ contains a ball of radius $\sqrt{\ell}$ centered at the origin, which implies that the polytope $\frac{\sqrt{\ell}}{\Delta}P'$ contains every vector of length at most $\sqrt{\ell}$. Therefore, $x'-x\in \frac{\sqrt{\ell}}{\Delta}P'$. Now since $x\in P'$ we get that $x'\in\paren{1+\frac{\sqrt{\ell}}{\Delta}}P'$. Therefore, $C(x,2) \subseteq \paren{1+\frac{\sqrt{\ell}}{\Delta}}P'$. Consequently, $H' \cap C(x,2) \subseteq H' \cap \paren{1+\frac{\sqrt{\ell}}{\Delta}}P' = \paren{1+\frac{\sqrt{\ell}}{\Delta}}(H' \cap P')$ since $\alpha H' = H'$ for any scalar $\alpha \neq 0$. Hence, $H' \cap C(x,2) \subseteq \paren{1+\frac{\sqrt{\ell}}{\Delta}}K'$. \end{proof} \begin{lemma} \label{lem:corollary} For any point $z \in \frac{1}{\paren{1+\frac{\sqrt{\ell}}{\Delta}}}K'$, the integral point it is rounded to belongs to the polytope $K'$.
\end{lemma} \begin{proof} From \Cref{lem:rounding} we know that for any integral point $x \in K'$, $F_x \subseteq H' \cap C(x,2)$. By convexity of $K'$ (and since $0 \in K'$), $\frac{1}{\paren{1+\frac{\sqrt{\ell}}{\Delta}}}K'$ is contained entirely inside $K'$. Therefore, \Cref{lem:rounding} applies to all the points in $\frac{1}{\paren{1+\frac{\sqrt{\ell}}{\Delta}}}K'$. By arguing as in the proof of \Cref{lem:cube_contained}, we can show that for any $x \in \frac{1}{\paren{1+\frac{\sqrt{\ell}}{\Delta}}}K'$, $ H' \cap C(x,2) \subseteq K'$. This proves the lemma. \end{proof} Let $\mu$ be the uniform probability measure on the convex rational polytope $\paren{1+\frac{\sqrt{\ell}}{\Delta}}K'$. \begin{lemma} \label{lem:same_size} For any two distinct integral points $x,x' \in K'$, $\mu\paren{F_x} = \mu\paren{F_{x'}}$. \end{lemma} \begin{proof} Given two distinct integral points $x,x' \in K'$, let $c = x'-x$. Clearly $c$ is an integral point and $\sum_{j \in [\ell]} c_j = \sum_{j \in [\ell]} x'_j - \sum_{j \in [\ell]} x_j = 0-0 = 0$. Let $z\in F_x$ and $z' = z+c$. Since $c$ is integral, $\floor{z_j'} = \floor{z_j}+c_j$ and $\ceil{z_j'} = \ceil{z_j}+c_j$ for all $j \in [\ell]$. Then \begin{align*} \abs{\sum_{j \in [\ell]}\floor{z_j'}} = \abs{\sum_{j \in [\ell]}\paren{\floor{z_j}+c_j}}= \abs{\sum_{j \in [\ell]}\floor{z_j}+\sum_{j \in [\ell]}c_j} =\abs{\sum_{j \in [\ell]}\floor{z_j}} = m. \end{align*} Therefore, for both $z$ and $z'$, the first $m$ coordinates are rounded up and the remaining are rounded down in Step \ref{step:rounding}. Since $\floor{z_j'} = \floor{z_j}+c_j$ and $\ceil{z_j'} = \ceil{z_j}+c_j$, the point to which $z'$ is rounded is exactly $x'$. Therefore, for every point $z\in F_x$ there is a unique point $z' \in F_{x'}$ such that they are rounded to $x$ and $x'$ respectively. This gives us a bijection between the sets $F_x$ and $F_{x'}$. Therefore, $\mu\paren{F_x} = \mu\paren{F_{x'}}$. \end{proof} \begin{proof}[Proof of \Cref{thm:main}] Let $K'' = \paren{1+\frac{\sqrt{\ell}}{\Delta}}K'$. Let $\nu$ be the distribution from which the {\sc Sampling-Oracle}~samples a point from $K''$.
That is, for a given $\epsilon > 0$, \begin{equation} \sup_{A \subseteq K''}\abs{\nu(A) - \mu(A)} \le \epsilon.\label{eq:tv} \end{equation} \paragraph{Close to uniform sample.} From \Cref{lem:rounding,lem:cube_contained} we know that for any integral point $x\in K'$, $F_x \subseteq K''$. Therefore, from (\ref{eq:tv}) we have that \begin{align*} \abs{\nu(F_x) - \mu(F_x)} \le \epsilon. \end{align*} We know that for all $x$, $x \in K'\iff x+x^*\in K$, and since $x^*$ is an integral point, $x$ is integral if and only if $x + x^*$ is integral. This gives us a bijection between the integral points in $K'$ and those in $K$. Let $\nu'$ be the distribution over the integral points in $K'$ induced by \Cref{alg:random_walk} (before the final translation by $x^*$); by the above bijection, it suffices to analyse $\nu'$. For any integral point $x \in K'$, $\nu'(x) = \nu(F_x)$. Let $\mu'(x) := \mu(F_x)$ for any integral point $x$ in $K'$. For any two integral points $x, x'\in K'$, \Cref{lem:same_size} gives us that $\mu\paren{F_x} = \mu\paren{F_{x'}}$; hence $\mu'(x) = \mu(F_x) = \mu(F_{x'}) = \mu'(x')$. Therefore, $\mu'$ is a uniform measure over all the integral points in $K'$. Moreover, for any subset of integral points in $K'$, say $I$, we have that \[ \nu'(I)=\nu\paren{\cup_{x \in I}F_x}. \] From \Cref{eq:tv} we have that \[ \abs{\nu\paren{\cup_{x \in I}F_x}-\mu\paren{\cup_{x \in I}F_x}} \le \epsilon. \] Consequently, \[ \abs{\nu'(I) - \mu'(I)} \le \epsilon. \] Therefore $\nu'$ over the integral points in $K'$ is at a total variation distance of at most $\epsilon$ from the uniform probability measure $\mu'$ over the integral points in $K'$. \paragraph{Probability of acceptance.}\Cref{alg:random_walk} samples a point from $\paren{1+\frac{\sqrt{\ell}}{\Delta}}K'$ in each iteration. By \Cref{lem:corollary}, whenever the algorithm samples a point from $\frac{1}{\paren{1+\frac{\sqrt{\ell}}{\Delta}}}K'$, it is rounded to an integral point in $K'$.
Therefore, the probability of sampling an integral point in $K'$ is at least \begin{align*} \nu\paren{ \frac{1}{\paren{1+\frac{\sqrt{\ell}}{\Delta}}}K'} &\ge \mu\paren{ \frac{1}{\paren{1+\frac{\sqrt{\ell}}{\Delta}}}K'}-\epsilon &\text{from}~\Cref{eq:tv}\\ &=\frac{\vol{\ell-1}{ \frac{1}{\paren{1+\frac{\sqrt{\ell}}{\Delta}}}K'}}{\vol{\ell-1}{K''}}-\epsilon &\because \mu~\text{is a uniform distribution over}~K''\\ &=\frac{\vol{\ell-1}{ \frac{1}{\paren{1+\frac{\sqrt{\ell}}{\Delta}}}K'}}{\vol{\ell-1}{ \paren{1+\frac{\sqrt{\ell}}{\Delta}}K'}}-\epsilon &\text{by the definition of }~K''\\ &= \paren{1+\frac{\sqrt{\ell}}{\Delta}}^{-2(\ell-1)}-\epsilon &\text{{\sf Vol}}_{\ell-1}~\text{is volume in}~\ell-1~\text{dimensions}\\ &\ge e^{-\frac{\sqrt{\ell}}{\Delta}2(\ell-1)} - \epsilon&\text{using}~(1+x) \le e^x, \forall x\\ &\ge e^{-2\frac{\ell\sqrt{\ell}}{\Delta}} - \epsilon. \end{align*} Therefore, the expected number of oracle calls made before the algorithm outputs an acceptable point is the inverse of the acceptance probability, that is, at most $1/\paren{e^{-2\frac{\ell\sqrt{\ell}}{\Delta}} - \epsilon}$. When $\epsilon$ is a non-negative constant such that $\epsilon < e^{-2}$, and $\Delta = \Omega\paren{\ell^{1.5}}$, this probability is at least a constant; hence the algorithm outputs an integral point from $K'$ after a constant number of oracle calls in expectation. \end{proof} \begin{remark} The polytope $\paren{1+\frac{\sqrt{\ell}}{\Delta}}K'$ is an $(\ell-1)$-dimensional polytope given to us by its $\mathcal{H}$-description in $\ell$ dimensions. The random-walk-based algorithms used as {\sc Sampling-Oracle}~in Step \ref{step:sample} require the polytope they sample from to be full-dimensional. Below we describe a rotation operation such that the rotated polytope, that is, the polytope formed after applying the rotation to $\paren{1+\frac{\sqrt{\ell}}{\Delta}}K'$, is full-dimensional in $\ell-1$ dimensions. This is a well-known transformation used as a pre-processing step to make a polytope full-dimensional.
Let $u_1, u_2, \ldots, u_{\ell}$ be an orthonormal basis of $\mathbb R^{\ell}$ such that $u_{\ell} := \frac{1}{\sqrt{\ell}}(1,1,\ldots,1)^T$. We now construct a matrix $R$ such that $Ru_j = e_j, \forall j \in [\ell]$, where $e_1, e_2, \ldots, e_{\ell}$ are the standard basis vectors in $\ell$ dimensions. Fix $j \in [\ell]$. We know that $e_j = \sum_{j'\in [\ell]} \alpha^{(j)}_{j'} u_{j'}$, where $\forall j' \in [\ell], \alpha^{(j)}_{j'} = e_{j}^Tu_{j'}$. Thus, we get that $Re_j = \sum_{j'\in [\ell]} \alpha^{(j)}_{j'} Ru_{j'} = \sum_{j'\in [\ell]} \alpha^{(j)}_{j'} e_{j'}$, which implies that the vector $\paren{\alpha_1^{(j)}, \alpha_2^{(j)}, \ldots, \alpha_{\ell}^{(j)}}^T$ forms the $j$th column of $R$. It is also easy to verify that $R$ is orthogonal. Therefore, the rotation matrix $R$ can be computed efficiently. This rotation maps the hyperplane $\sum_{j \in [\ell]}x_j = 0$ into the $(\ell-1)$-dimensional space spanned by $e_1, e_2, \ldots, e_{\ell-1}$. Therefore, the rotated polytope is an $(\ell-1)$-dimensional polytope in $\ell-1$ dimensions. For any point $(x_1, x_2, \ldots, x_{\ell-1}) \in \mathbb R^{\ell-1}$ we check the membership of $R^{-1}(x_1, x_2, \ldots, x_{\ell-1},0)$ in $\paren{1+\frac{\sqrt{\ell}}{\Delta}}K'$. This gives us the membership oracle for the rotated polytope. We can then sample a rational point from this rotated polytope in Step \ref{step:sample}, apply $R^{-1}$ to the point sampled to get a point in $\paren{1+\frac{\sqrt{\ell}}{\Delta}}K'$, and proceed with our algorithm. \end{remark} \Cref{alg:random_walk} is inspired by the algorithm of \citet{KV1997} for sampling integral points from a convex polytope from a distribution close to uniform. At a high level, their algorithm, run on the polytope $K'$, works slightly differently from ours.
They first expand $K'$ by $\mathcal{O}\paren{\sqrt{\ell\log\ell}}$, sample a rational point from a distribution close to uniform over this expanded polytope (similar to Step \ref{step:sample}), and use a probabilistic rounding method to round it to an integral point. If the integral point is in $K'$ they accept, otherwise they reject and repeat from the sampling step. Their algorithm requires that a ball of radius $\Omega\paren{\ell^{1.5}\sqrt{\log\ell}}$ lies entirely inside $K'$. We expand the polytope $K'$ by $\mathcal{O}\paren{\sqrt{\ell}}$, sample a rational point from this polytope from a distribution close to uniform, and then deterministically round it to an integral point. If the integral point is in $K'$ we accept, otherwise we reject and repeat from the sampling step. Our algorithm only requires that a ball of radius $\Omega\paren{\ell^{1.5}}$ lies inside $P-x^*$ with center on $H - x^*$, where $P$, $H$ and $x^*$ are as defined in \Cref{alg:random_walk}. As a result, we get an algorithm with expected running time {\sf poly}$(k, \ell, 1/\epsilon)$ for a larger set of fairness constraints. We also note here that the analysis of the success probability of \Cref{alg:random_walk} is the same as that of the algorithm by \citet{KV1997}. \subsection{Exact Uniform Sampling} \label{subsec:alg2} In this section we give our second algorithm, which samples a uniform random group-fair representation exactly. In \Cref{eq:polytopeK} the convex polytope $K$ is described by an $\mathcal{H}$-description, defined as follows. \begin{definition}[\textbf{$\mathcal{H}$-description of a polytope}] A representation of the polytope as the set of solutions of finitely many linear inequalities. \end{definition} A polytope can also be represented by its vertices, as follows. \begin{definition}[\textbf{$\mathcal{V}$-description of a polytope}] The representation of the polytope by the set of its vertices.
\end{definition} \citet{Barvinok} gave an algorithm to count exactly the number of integral points in $K$, restated below. \begin{theorem}[Theorem 7.3.3 in \citep{Barvinok}] \label{thm:barvinok} Let us fix the dimension ${\ell}$. Then there exists a polynomial time algorithm that, for any given rational $\mathcal{V}$-polytope $P \subset \mathbb R^{\ell}$, computes the number $|P \cap {\mathbb Z}^{\ell}|$. The complexity of the algorithm in terms of the dimension ${\ell}$ is ${\ell}^{\mathcal{O}({\ell})}$. \end{theorem} The algorithm by \citet{Pak2000OnSI} gives us an exact uniform random sampler for the integral points in $K$. \begin{theorem}[Theorem 1 in \citep{Pak2000OnSI}] \label{thm:pak} Let $P \subset \mathbb R^{\ell}$ be a rational polytope, and let $B = P \cap {\mathbb Z}^{\ell}$. Assume an oracle can compute $|B|$ for any $P$ as above. Then there exists a polynomial-time algorithm for sampling uniformly from $B$, which calls this oracle $\mathcal{O}\paren{{\ell}^2L^2}$ times where $L$ is the bit complexity of the input. \end{theorem} Using the counting algorithm given by \Cref{thm:barvinok} as the counting oracle in \Cref{thm:pak} gives us our second algorithm, which samples a uniform random group representation exactly. \begin{theorem} \label{thm:alg1} For given fairness parameters $L_j, U_j \in {\mathbb Z}_{\geq 0}$ and an integer $k > 0$, there is an algorithm that samples an exact uniform random integral point in $K$ and runs in time $\ell^{\mathcal{O}(\ell)}\mathcal{O}\paren{\log^2k}$. \end{theorem} \begin{proof}[Proof of \Cref{thm:alg1}] The proof essentially follows from the proof of Theorem 1 in \citep{Pak2000OnSI}. They assume access to an oracle that counts the number of integral points in any convex polytope that their algorithm constructs.
We show that Barvinok's algorithm can be used as an oracle for all the polytopes that are constructed in the algorithm to sample a uniform random integral point from our convex rational polytope $K$. The algorithm in \Cref{thm:pak} intersects the polytope with an axis-aligned hyperplane and recurses on one of the smaller polytopes (to be specified below). In the deepest level of recursion, where the polytope contains only one integral point, the algorithm terminates the halving process and outputs that point. The proof of their theorem shows that this gives us a uniform random integral point from the polytope we started with. W.l.o.g., consider splitting along dimension $1$. The algorithm finds a value $c$ such that $L_1 < c < U_1$, $|H_+ \cap B|/|B| \le 1/2$, and $|H_- \cap B|/|B| \le 1/2$, where $H_+$ and $H_-$ are the two halves of the space separated by the hyperplane $H$ defined by $x_1 = c$. That is, $H_+$ is the halfspace $x_1 \ge c$ and $H_-$ is the halfspace $x_1 \le c$. Therefore, there are three possible polytopes for the algorithm to recurse on: $H_+ \cap B$, $H_- \cap B$, and $H \cap B$. Here $|H \cap B|/|B|$ can be $\ge 1/2$. Let \[ f_{+} = |H_+ \cap B|/|B|,~~f_{-} = |H_- \cap B|/|B|,~~\text{and}~~f = |H \cap B|/|B|. \] Then the algorithm recurses on the polytope $H_+ \cap B$ with probability $f_+$, on $H_- \cap B$ with probability $f_-$, and on $H \cap B$ with probability $f$. Observe that $K$ is also defined by the axis-aligned hyperplanes $x_1=L_1$ and $x_1 = U_1$, amongst others. Therefore, $x_1 \ge L_1$ becomes a redundant constraint if the algorithm recurses on $H_+\cap B$, and $x_1 \le U_1$ becomes a redundant constraint if the algorithm recurses on $H_- \cap B$. In both these cases, the number of integral points reduces to at most half. If the algorithm recurses on $H\cap B$, it fixes the value of $x_1$ to $c$, and the dimension of the problem reduces by $1$.
Since the number of integral points is at most $\exp\paren{\mathcal{O}(dL)}$, the number of halving steps performed by the algorithm is at most $\mathcal{O}\paren{dL}$. Observe that in all levels of recursion, the polytopes constructed are of $d$ dimensions ($1 \le d \le \ell$) and are of the following form, \begin{align*} \Big\{\paren{x_1, x_2, \ldots, x_{d}}\in\mathbb R^{d}~~\Big|~~\sum_{j \in [d]} x_j = k'~~\text{and}~~c'_j \le x_j \le c''_j, \forall j \in [d] \Big\},\label{eq:polytope} \end{align*} where $k'$ and the $c'_j, c''_j$ are some constants. This gives us the $\mathcal{H}$-description of each of the polytopes the algorithm constructs in each level of recursion. The vertices of such a polytope are formed by the intersection of $d$ hyperplanes. Therefore, there can be at most ${2d+2\choose d} = 2^{\mathcal{O}\paren{{d}}}$ vertices of such a polytope in $d$ dimensions, which gives us that the $\mathcal{V}$-description can be computed from the $\mathcal{H}$-description in time $2^{\mathcal{O}\paren{{d}}}$. Therefore, for all these intermediate polytopes, we can use the counting algorithm given by \Cref{thm:barvinok}, whose running time depends on $d^{\mathcal{O}(d)}$. Using $ d \le \ell$, we get that the counting algorithm given by \Cref{thm:barvinok} takes time $\ell^{\mathcal{O}(\ell)}$ for all the polytopes constructed by the algorithm. Further, in each step of recursion, the algorithm makes at most $\mathcal{O}\paren{dL}$ calls to this counting algorithm. Since the input to the algorithm consists of a number $k > 0$ and fairness parameters $0 \le L_j \le U_j \le k, \forall j \in [\ell]$, where all these parameters are integers, we have that the bit complexity of the input is $L = \mathcal{O}(\ell\log k)$. Therefore, the total running time of our algorithm is $\mathcal{O}\paren{d^2L^2}\ell^{\mathcal{O}(\ell)} = \mathcal{O}\paren{\ell^4\log^2 k}\ell^{\mathcal{O}(\ell)} = \mathcal{O}\paren{\log^2 k}\ell^{\mathcal{O}(\ell)}$.
\end{proof} \subsection{Faster Exact Uniform Sampling for Small Gap Instances} \label{subsec:alg3} Our third algorithm is a brute-force algorithm that also samples a group-fair representation uniformly at random; however, it is much faster than our first algorithm when the gap between the lower and upper bounds for most groups is small. This brute-force algorithm first enumerates the set of all integral points in $K$. Then it samples a point from this set with probability equal to the inverse of the size of this set. This gives us the following result. \begin{theorem} \label{thm:bruteforce} For any given representation constraints $L_j, U_j \in {\mathbb Z}_{\geq 0}$ for each group $j \in [\ell]$, the number of feasible group representations $x$ defined in (\ref{eq:set_x}) is at most $\Pi_{j \in [\ell]}(U_j - L_j+1)$. \end{theorem} \begin{proof}[Proof of \Cref{thm:bruteforce}] Note that $x_j$ for group $j \in [\ell]$ can take integer values between $L_j$ and $U_j$. Therefore, there are at most $U_j - L_j + 1$ integral values possible for $x_j$. Therefore the total number of feasible integral values of $x = \paren{x_1, x_2, \ldots, x_{\ell}}$ is at most $\Pi_{j \in [\ell]}(U_j - L_j+1)$. The brute-force algorithm checks all these values and adds the feasible points to the set $\mathcal{X}$. Hence the statement of the theorem follows. \end{proof} Once $\mathcal{X}$ has been enumerated, sampling a uniform random element of it requires only $\mathcal{O}\paren{\log \paren{\Pi_{j \in [\ell]}(U_j - L_j+1)}}$ random bits. When there are a constant number of groups with a larger gap between the upper and the lower bounds, and the rest of the groups have equal upper and lower bounds, we get the following result as a corollary to \Cref{thm:bruteforce}.
\begin{corollary} Given representation constraints $L_j, U_j \in {\mathbb Z}_{\geq 0}$, for each group $j \in [\ell]$, such that $L_j = U_j$ for all $j \in S\subseteq [\ell]$ and $\ell-|S| = \mathcal{O}(1)$, the brute-force algorithm runs in time $k^{\mathcal{O}(1)}$. \end{corollary} \section{Experimental Results} \label{sec:experiments} In this section, we apply our algorithms to various real-world datasets and validate the guarantees they provide. We implement our algorithm using the \textit{PolytopeSampler} tool\footnote{\href{https://github.com/ConstrainedSampler/PolytopeSamplerMatlab}{github.com/ConstrainedSampler/PolytopeSamplerMatlab} (License: GNU GPL v3.0)} to sample a point from a distribution close to uniform on the convex rational polytope $K$. This tool implements a constrained Riemannian Hamiltonian Monte Carlo method for sampling from high-dimensional distributions on polytopes \citep{kook2022sampling}. We refer to the implementation of \Cref{alg:random_walk} that uses this tool in Step \ref{step:sample} as `Random walk' in the plots.
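As an illustrative aside, and not the PolytopeSampler-based implementation described above, exact uniform sampling over the integral points of $K$ can also be sketched for small instances by combining dynamic-programming counts with sequential sampling (all function names here are our own):

```python
import random

def count_table(k, L, U):
    """ways[j][r] = number of integral points (x_{j+1}, ..., x_l) with
    L_i <= x_i <= U_i for each remaining group, summing to r."""
    l = len(L)
    ways = [[0] * (k + 1) for _ in range(l + 1)]
    ways[l][0] = 1
    for j in range(l - 1, -1, -1):
        for r in range(k + 1):
            ways[j][r] = sum(ways[j + 1][r - v]
                             for v in range(L[j], min(U[j], r) + 1))
    return ways

def sample_representation(k, L, U, rng=random):
    """Draw x = (x_1, ..., x_l) uniformly from the integral points of K."""
    ways = count_table(k, L, U)
    if ways[0][k] == 0:
        raise ValueError("no feasible group representation")
    x, r = [], k
    for j in range(len(L)):
        # pick x_j with probability proportional to its number of completions
        t = rng.randrange(ways[j][r])
        for v in range(L[j], min(U[j], r) + 1):
            c = ways[j + 1][r - v]
            if t < c:
                x.append(v)
                r -= v
                break
            t -= c
    return x
```

Each call returns one representation distributed exactly uniformly over the integral points of $K$, at the cost of a table of size $\mathcal{O}(\ell k)$; this is practical only when $k$ is small, unlike the random-walk approach.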
\begin{figure}[t] \centering \begin{subfigure}[b]{\linewidth} \centering \includegraphics[scale=0.12]{legend.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\linewidth} \centering \includegraphics[scale=0.13]{main_results/rep_german_age25.pdf} \end{subfigure} \begin{subfigure}[b]{0.48\linewidth} \centering \includegraphics[scale=0.13]{main_results/proportion_german_age25.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\linewidth} \centering \includegraphics[scale=0.125]{main_results/ndcg_german_age25.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\linewidth} \centering \includegraphics[scale=0.13]{main_results/rep_german_age35.pdf} \end{subfigure} \begin{subfigure}[b]{0.48\linewidth} \centering \includegraphics[scale=0.13]{main_results/proportion_german_age35.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\linewidth} \centering \includegraphics[scale=0.13]{main_results/ndcg_german_age35.pdf} \end{subfigure} \caption{Results on the German Credit Risk dataset with \textit{age} $< 25$ as the protected group in the first row and \textit{age} $< 35$ as the protected group in the second row.
For Fair $\epsilon$-greedy we use $\epsilon =0.3$ (see \Cref{fig:german_binary_eps015,fig:german_binary_eps05} for other values of $\epsilon$).} \label{fig:german_binary} \end{figure} \begin{figure*}[t] \centering \begin{subfigure}[b]{\linewidth} \centering \includegraphics[scale=0.12]{legend.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\linewidth} \centering \includegraphics[scale=0.13]{main_results/rep_jee2009_gender.pdf} \end{subfigure} \begin{subfigure}[b]{0.48\linewidth} \centering \includegraphics[scale=0.13]{main_results/proportion_jee2009_gender.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\linewidth} \centering \includegraphics[scale=0.13]{main_results/ndcg_jee2009_gender.pdf} \end{subfigure} \caption{Results on the JEE 2009 dataset with \textit{gender} as the protected group. For Fair $\epsilon$-greedy we use $\epsilon =0.3$ (see \Cref{fig:jee_gender_eps015,fig:jee_gender_eps05} for other values of $\epsilon$).} \label{fig:jee_gender} \end{figure*} \paragraph{Datasets.} We evaluate our results on the German Credit Risk dataset\footnote{taken from the \href{https://github.com/sruthigorantla/Underranking_and_group_fairness/tree/master/data/GermanCredit}{Gorantla et al. repository}}, comprising credit risk scores of $1000$ adult German residents \citep{Dua2019}, along with their demographic information (e.g., gender, age, etc.). We use the Schufa scores of these individuals to obtain the in-group rankings. A ranking based on the Schufa scores over the entire dataset is biased against adults of $age < 25$, as observed by \citet{Castillo2019}: their representation in the top $100$ ranks is $10\%$, even though their true representation in the whole dataset is $15\%$. Similarly, adults of $age < 35$ constitute only $27\%$ of the top $100$ ranks according to the Schufa scores, even though they have $54\%$ representation in the dataset.
Therefore, we evaluate our algorithm on the groupings based on $age < 25$ and $age < 35$, similar to \citep{ZBCHMB2017,GDL2021} (see \Cref{fig:german_binary}). We also evaluate our algorithm on the IIT-JEE 2009 dataset, also used in \citep{CMV2020}. The dataset\footnote{taken from the \href{https://github.com/AnayMehrotra/Ranking-with-Implicit-Bias}{Celis et al. 2020b repository}} consists of the student test results of the joint entrance examination (JEE) conducted for undergraduate admissions at the Indian Institutes of Technology (IITs). The total scores of $384,977$ students (the sum of the scores in Math, Physics, and Chemistry), ranging from $-105$ to $480$ marks\footnote{average $=28.36$, maximum $=424$ and minimum $=-86$.}, give us the merit scores of the students, and hence the score-based in-group rankings. Additional information about the students includes gender details (25\% women and 75\% men)\footnote{Only binary gender information was annotated in the dataset.}. In this dataset, female students are consistently underrepresented in a score-based ranking of the entire dataset ($0.04\%$ in the top $100$ \citep{CMV2020}), despite $25\%$ female representation in the dataset. We evaluate our algorithm with \textit{female} as the protected group. \paragraph{Baselines.} $(i)$ We compare our experimental results with the \textit{fair $\epsilon$-greedy} algorithm of \citet{GS2020}, a greedy algorithm with $\epsilon$ as a parameter (explained in detail in \Cref{subsec:observations}). To the best of our knowledge, this algorithm is the closest state-of-the-art baseline to our setting, as it does not rely on comparing the scores of two candidates from different groups. $(ii)$ We also compare our results with a recent deterministic re-ranking algorithm given by \citet{GDL2021}, which achieves the best balance between group fairness and the underranking of individual items compared to their original ranks in the top $k$. We call this algorithm GDL21 in our plots.
\paragraph{Plots.} We plot our results for the protected groups in each dataset (see \Cref{fig:german_binary,fig:jee_gender}). We use the representation constraints $L_j = \ceil{(p_j^*-0.05)k}$ and $U_j = \floor{(p_j^*+0.05)k}$ for group $j$, where $p_j^*$ is the total fraction of items from group $j$ in the dataset. The ``representation'' plot (on the y-axis) shows the fraction of ranks assigned to the protected group in the top $k'$ ranks (on the x-axis). For randomized algorithms, we sample $1000$ rankings and report the mean and the standard deviation. The dashed green line is the true representation of the protected group in the dataset, which we call $p^*$, dropping the subscript. The ``fraction of rankings'' plot (on the y-axis), for randomized ranking algorithms, shows the fraction of the $1000$ rankings that assign rank $i$ (on the x-axis) to the protected group. For completeness, we plot the results for the ranking utility metric, normalized discounted cumulative gain, defined as $ \text{nDCG}@k = \paren{\sum_{i\in[k]} \frac{(2^{\hat{s}_i}-1)}{\log_2(i+1)}}\big/\paren{\sum_{i\in[k]} \frac{(2^{{s}_i}-1)}{\log_2(i+1)}}, $ where $\hat{s}_i$ and $s_i$ are the scores of the items assigned to rank $i$ in the group fair and the score-based ranking, respectively. \subsection{Observations} \label{subsec:observations} The rankings sampled by our algorithm have the following property: every rank $i$ is assigned to the protected group in a sufficient fraction of the rankings (see plots with ``fraction of rankings'' on the y-axis). This experimentally validates our \Cref{thm:rep_at_i}. Moreover, this fraction is stable across the ranks, so the line looks almost flat, whereas fair $\epsilon$-greedy fluctuates considerably, which can be explained as follows.
For each rank $k' = 1$ to $k$, with probability $\epsilon$, it assigns a group uniformly at random, and with probability $1-\epsilon$, it assigns group $G_1 :=$ (age $\ge 25$) if the number of ranks assigned to $G_1$ is less than $\frac{L_1 k'}{k}$ in the top $k'$ ranks, and group $G_2 :=$ (age $<25$) otherwise. Consider the top row of \Cref{fig:german_binary}, where $L_1 = 80, L_2 = 10$, and $k=100$, and the plot on the right shows the fraction of rankings (y-axis) assigning rank $i$ (x-axis) to the protected group $G_2$. Note that if $\epsilon = 0$ (no randomization), this algorithm gives a deterministic ranking in which the first four ranks are assigned to $G_1$ and the fifth to $G_2$, and this pattern repeats every five ranks. Hence, there would be peaks in the plot at ranks $k'=5,10,15,20,\ldots$. Now, when $\epsilon=0.3$, fair $\epsilon$-greedy introduces randomness in the group assignment at each rank and, as a result, smoothens out the peaks as $k'$ increases, which is exactly what is observed. Consequently, the first four ranks have very low representation of the protected group, even in expectation, and similarly ranks $6$ to $9$. Clearly, fair $\epsilon$-greedy does not satisfy fairness for every $k' < k$ consecutive ranks, whereas our algorithm does satisfy this property, as is also confirmed by \Cref{cor:rep_at_prefix}. Our algorithm also closely satisfies the representation constraints for the protected group in the top $k'$ ranks for $k' = 20,40,60,80,100$, in expectation (see plots with ``representation'' on the y-axis). Fair $\epsilon$-greedy overcompensates in representing the protected group. The deterministic algorithm GDL21 achieves very high nDCG but very low representation for smaller values of $k'$, although all algorithms are run with similar representation constraints. This is because the deterministic algorithm uses score-based comparisons and hence places most of the protected group's items in the later ranks (towards $k$).
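The $\epsilon = 0$ behaviour described above can be checked with a minimal simulation of the deterministic pro-rata rule (our own illustrative reading of the fair $\epsilon$-greedy assignment step, with hypothetical function names):

```python
def greedy_assignment(k, L1):
    """Deterministic (eps = 0) greedy rule: assign the next rank to G1
    while G1 is behind its pro-rata target L1 * k' / k, else to G2."""
    ranks, n1 = [], 0
    for kp in range(1, k + 1):
        if n1 * k < L1 * kp:  # exact integer form of n1 < L1 * kp / k
            ranks.append("G1")
            n1 += 1
        else:
            ranks.append("G2")
    return ranks

ranks = greedy_assignment(100, 80)
protected = [i + 1 for i, g in enumerate(ranks) if g == "G2"]
# the protected group receives exactly the ranks 5, 10, 15, ..., 100
```

With $k=100$ and $L_1=80$, the protected group appears only at every fifth rank, which produces the periodic peaks discussed above.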
With a larger value of $\epsilon$, fair $\epsilon$-greedy gets much higher ``representation'' of the protected group than necessary (see \Cref{fig:german_binary_eps05,fig:jee_gender_eps05}), whereas with a smaller value of $\epsilon$, it fluctuates considerably in the ``fraction of rankings'' (see \Cref{fig:german_binary_eps015,fig:jee_gender_eps015}). We also run experiments on the JEE 2009 dataset with birth category defining the demographic groups (GE=60\%, OC=4\%, ON=23\%, SC=9.3\%, ST=3.2\%). We plot the results in \Cref{fig:jee_category}. We define a group-wise normalized discounted cumulative gain as follows, \[ \text{group-nDCG}@k = \frac{\min_{j \in [\ell]}\frac{1}{\hat{n}_j}\sum_{i\in[k]} \frac{(2^{\hat{s}_i}-1)\mathbb{I}[\hat{y}_i = j]}{\log_2(i+1)}}{\max_{j \in [\ell]}\frac{1}{{n}_j}\sum_{i\in[k]} \frac{(2^{{s}_i}-1)\mathbb{I}[{y}_i = j]}{\log_2(i+1)}}, \] where $y_i$ and $\hat{y}_i$ denote the group assigned to rank $i$ in the score-based and the group-fair ranking, respectively. Similarly, $n_j$ and $\hat{n}_j$ denote the total number of top $k$ ranks assigned to group $j$ in the score-based and the group-fair ranking, respectively. This definition is similar to the group nDCG metric defined in \citep{group_ndcg}. See \Cref{fig:group_ndcg} for the performance of the algorithms with respect to this metric.
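For concreteness, this metric can be transcribed directly into code (an illustrative sketch with our own naming; groups that receive no top-$k$ ranks in a ranking are simply absent from the corresponding minimum or maximum):

```python
import math

def group_ndcg_at_k(s_hat, y_hat, s, y, k):
    """group-nDCG@k: worst per-group average gain of the group-fair
    ranking divided by the best per-group average gain of the
    score-based ranking, following the displayed formula."""
    def per_group_average(scores, groups):
        totals, counts = {}, {}
        for i in range(k):
            g = groups[i]
            # rank is i + 1, so the discount is log2((i + 1) + 1)
            gain = (2 ** scores[i] - 1) / math.log2(i + 2)
            totals[g] = totals.get(g, 0.0) + gain
            counts[g] = counts.get(g, 0) + 1
        return {g: totals[g] / counts[g] for g in totals}
    numerator = min(per_group_average(s_hat, y_hat).values())
    denominator = max(per_group_average(s, y).values())
    return numerator / denominator
```

When the group-fair ranking coincides with the score-based ranking and there is a single group, the metric equals $1$.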
\begin{figure*} \centering \begin{subfigure}[b]{\linewidth} \centering \includegraphics[scale=0.15]{legend.pdf} \end{subfigure} \begin{subfigure}[b]{0.33\linewidth} \centering \includegraphics[scale=0.17]{results/jee_category/rep_jee2009_category_GE.pdf} \end{subfigure} \begin{subfigure}[b]{0.66\linewidth} \centering \includegraphics[scale=0.17]{results/jee_category/proportion_jee2009_category_GE.pdf} \end{subfigure} \begin{subfigure}[b]{0.33\linewidth} \centering \includegraphics[scale=0.17]{results/jee_category/rep_jee2009_category_ON.pdf} \end{subfigure} \begin{subfigure}[b]{0.66\linewidth} \centering \includegraphics[scale=0.17]{results/jee_category/proportion_jee2009_category_ON.pdf} \end{subfigure} \begin{subfigure}[b]{0.33\linewidth} \centering \includegraphics[scale=0.17]{results/jee_category/rep_jee2009_category_SC.pdf} \end{subfigure} \begin{subfigure}[b]{0.66\linewidth} \centering \includegraphics[scale=0.17]{results/jee_category/proportion_jee2009_category_SC.pdf} \end{subfigure} \begin{subfigure}[b]{0.33\linewidth} \centering \includegraphics[scale=0.17]{results/jee_category/rep_jee2009_category_OC.pdf} \end{subfigure} \begin{subfigure}[b]{0.66\linewidth} \centering \includegraphics[scale=0.17]{results/jee_category/proportion_jee2009_category_OC.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\linewidth} \centering \includegraphics[scale=0.13]{results/jee_category/rep_jee2009_category_ST.pdf} \end{subfigure} \begin{subfigure}[b]{0.48\linewidth} \centering \includegraphics[scale=0.13]{results/jee_category/proportion_jee2009_category_ST.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\linewidth} \centering \includegraphics[scale=0.13]{results/jee_category/ndcg_jee2009_category.pdf}
\end{subfigure} \caption{Results on the JEE 2009 dataset with \textit{birth category} defining the demographic groups ($5$ groups). For Fair $\epsilon$-greedy we use $\epsilon =0.3$.} \label{fig:jee_category} \end{figure*} \begin{figure*} \centering \begin{subfigure}[b]{\linewidth} \centering \includegraphics[scale=0.15]{legend.pdf} \end{subfigure} \begin{subfigure}[b]{0.33\linewidth} \centering \includegraphics[scale=0.14]{results/group_ndcg_german_age25.pdf} \end{subfigure} \begin{subfigure}[b]{0.33\linewidth} \centering \includegraphics[scale=0.14]{results/group_ndcg_german_age35.pdf} \end{subfigure} \begin{subfigure}[b]{0.33\linewidth} \centering \includegraphics[scale=0.14]{results/group_ndcg_jee2009_gender.pdf} \end{subfigure} \caption{ Group nDCG for the protected groups. For Fair $\epsilon$-greedy we use $\epsilon =0.3$.} \label{fig:group_ndcg} \end{figure*} \begin{figure*}[t] \centering \begin{subfigure}[b]{\linewidth} \centering \includegraphics[scale=0.12]{legend.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\linewidth} \centering \includegraphics[scale=0.13]{results/eps_015/rep_german_age25.pdf} \end{subfigure} \begin{subfigure}[b]{0.48\linewidth} \centering \includegraphics[scale=0.13]{results/eps_015/proportion_german_age25.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\linewidth} \centering \includegraphics[scale=0.13]{results/eps_015/ndcg_german_age25.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\linewidth} \centering \includegraphics[scale=0.13]{results/eps_015/rep_german_age35.pdf} \end{subfigure} \begin{subfigure}[b]{0.48\linewidth} \centering \includegraphics[scale=0.13]{results/eps_015/proportion_german_age35.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\linewidth} \centering \includegraphics[scale=0.13]{results/eps_015/ndcg_german_age35.pdf}
\end{subfigure} \caption{Results on the German Credit Risk dataset with \textit{age} $< 25$ as the protected group in the first row and \textit{age} $< 35$ as the protected group in the second row. For Fair $\epsilon$-greedy we use $\epsilon =0.15$.} \label{fig:german_binary_eps015} \end{figure*} \begin{figure*}[t] \centering \begin{subfigure}[b]{\linewidth} \centering \includegraphics[scale=0.12]{legend.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\linewidth} \centering \includegraphics[scale=0.13]{results/eps_015/rep_jee2009_gender.pdf} \end{subfigure} \begin{subfigure}[b]{0.48\linewidth} \centering \includegraphics[scale=0.13]{results/eps_015/proportion_jee2009_gender.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\linewidth} \centering \includegraphics[scale=0.13]{results/eps_015/ndcg_jee2009_gender.pdf} \end{subfigure} \caption{Results on the JEE 2009 dataset with \textit{gender} as the protected group.
For Fair $\epsilon$-greedy we use $\epsilon =0.15$.} \label{fig:jee_gender_eps015} \end{figure*} \begin{figure*}[t] \centering \begin{subfigure}[b]{\linewidth} \centering \includegraphics[scale=0.12]{legend.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\linewidth} \centering \includegraphics[scale=0.13]{results/eps_05/rep_german_age25.pdf} \end{subfigure} \begin{subfigure}[b]{0.48\linewidth} \centering \includegraphics[scale=0.13]{results/eps_05/proportion_german_age25.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\linewidth} \centering \includegraphics[scale=0.13]{results/eps_05/ndcg_german_age25.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\linewidth} \centering \includegraphics[scale=0.13]{results/eps_05/rep_german_age35.pdf} \end{subfigure} \begin{subfigure}[b]{0.48\linewidth} \centering \includegraphics[scale=0.13]{results/eps_05/proportion_german_age35.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\linewidth} \centering \includegraphics[scale=0.13]{results/eps_05/ndcg_german_age35.pdf} \end{subfigure} \caption{Results on the German Credit Risk dataset with \textit{age} $< 25$ as the protected group in the first row and \textit{age} $< 35$ as the protected group in the second row.
For Fair $\epsilon$-greedy we use $\epsilon =0.5$.} \label{fig:german_binary_eps05} \end{figure*} \begin{figure*}[t] \centering \begin{subfigure}[b]{\linewidth} \centering \includegraphics[scale=0.12]{legend.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\linewidth} \centering \includegraphics[scale=0.13]{results/eps_05/rep_jee2009_gender.pdf} \end{subfigure} \begin{subfigure}[b]{0.48\linewidth} \centering \includegraphics[scale=0.13]{results/eps_05/proportion_jee2009_gender.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\linewidth} \centering \includegraphics[scale=0.13]{results/eps_05/ndcg_jee2009_gender.pdf} \end{subfigure} \caption{Results on the JEE 2009 dataset with \textit{gender} as the protected group. For Fair $\epsilon$-greedy we use $\epsilon =0.5$.} \label{fig:jee_gender_eps05} \end{figure*} The experiments were run on a quad-core Intel Core i5 processor with a clock speed of 2.3 GHz and 8 GB of DRAM. See \Cref{tab:running_time} for details of the running times. Our random-walk-based algorithm also runs nearly as fast as the $\mathcal{O}(k\ell)$-time greedy algorithm.
\begin{table} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline $\delta$&$\ell$&random walk&fair-$0.15$-greedy&fair-$0.3$-greedy&fair-$0.5$-greedy \\ \hline 0.05&2&2.62 $\pm$ 0.07&0.14 $\pm$ 0.01&0.14 $\pm$ 0.01&0.13 $\pm$ 0.0\\ 0.05&5&8.05 $\pm$ 0.32&1.72 $\pm$ 0.05&1.73 $\pm$ 0.09&1.69 $\pm$ 0.05\\ 0.05&10&-&5.38 $\pm$ 0.05&5.39 $\pm$ 0.1&5.3 $\pm$ 0.18\\ \hline 0.1&2&2.54 $\pm$ 0.09&0.14 $\pm$ 0.01&0.14 $\pm$ 0.01&0.13 $\pm$ 0.01\\ 0.1&5&6.18 $\pm$ 0.52&1.71 $\pm$ 0.04&1.67 $\pm$ 0.05&1.71 $\pm$ 0.02\\ 0.1&10&18.03 $\pm$ 0.09&5.44 $\pm$ 0.1&5.39 $\pm$ 0.04&5.47 $\pm$ 0.02\\ \hline 0.2&2&2.56 $\pm$ 0.09&0.15 $\pm$ 0.0&0.14 $\pm$ 0.0&0.14 $\pm$ 0.01\\ 0.2&5&4.79 $\pm$ 0.46&1.73 $\pm$ 0.05&1.7 $\pm$ 0.03&1.72 $\pm$ 0.05\\ 0.2&10&8.59 $\pm$ 0.07&5.46 $\pm$ 0.07&5.44 $\pm$ 0.03&5.39 $\pm$ 0.06\\ \hline \end{tabular} \caption{Mean and standard deviation of the running time in seconds, over $5$ runs, to sample $100$ rankings. The parameter $\ell$ is the number of groups, and $\delta$ defines the fairness constraints as $L_j = \ceil{\paren{p_j^* - \delta}k}$ and $U_j = \floor{\paren{p_j^* + \delta}k}$, where $k$ is the length of the ranking we want to output.} \label{tab:running_time} \end{table} \section{Conclusion} \label{sec:conclusion} We take an axiomatic approach to define randomized group fair rankings and show that it leads to a unique distribution over all feasible rankings that satisfy lower and upper bounds on the group-wise representation in the top ranks. We propose practical and efficient algorithms to exactly and approximately sample a random group fair ranking from this distribution. Our approach requires merging a given set of ranked lists, one for each group, and can help circumvent implicit bias or incomplete comparison data across groups. The natural open problem is to extend our method to work even for noisy, uncertain inputs about rankings, scores, or comparisons within each group.
Another open problem is investigating the possibility of polynomial-time algorithms to sample random group fair rankings under representation-based constraints for each group in every prefix. A limitation of our work as a post-processing method is that it cannot fix all sources of bias, e.g., bias in data collection and labeling. Our guarantees for group fairness may not necessarily reflect the right fairness metrics for downstream applications for reasons including biased, noisy, incomplete data and legal or ethical considerations in quantifying the eventual adverse impact on individuals and groups. \bibliographystyle{apalike}
\section{Introduction} Throughout this paper, $R$ is assumed to be a commutative Noetherian regular ring of prime characteristic $p$. Let $e$ be a positive integer. Let $f: R\rightarrow R$ be the Frobenius homomorphism defined by $f(r)=r^{p}$ for all $r \in R$, whose $e$-th iteration is denoted by $f^{e}$. Let $M$ be an $R$-module. $F_{*}^{e}M=\{F_{*}^{e}m \mid m\in M \}$ denotes the Abelian group $M$ with the $R$-module structure induced via the $e$-th iterated Frobenius, given by \begin{center} $rF_*^{e}m=F_*^{e}r^{p^e}m$ for all $m \in M$ and $r\in R$. \end{center} In particular, $F_{*}^{e}R$ is the Abelian group $R$ with the induced $R$-module structure \begin{center} $rF_*^{e}s=F_*^{e}r^{p^e}s $ for all $r,s\in R$. \end{center} An $e$-th Frobenius map on $M$ is an $R$-linear map $\phi:M \rightarrow F_{*}^{e}M$, equivalently an additive map $\phi:M\rightarrow M$ such that $\phi(rm)=r^{p^{e}}\phi(m)$ for all $r\in R$ and $m\in M$. Let $R[X;f^{e}]$ be the skew-polynomial ring whose multiplication is subject to the rule $Xr=f^{e}(r)X=r^{p^{e}}X$ for all $r\in R$. Notice that defining an $e$-th Frobenius map on $M$ is equivalent to endowing $M$ with a left $R[X;f^{e}]$-module structure extending the rule $Xm=\phi(m)$ for all $m\in M$. This paper studies the notion of special ideals, which was introduced by R. Y. Sharp in \cite{RY}. For a left $R[\phi;f^{e}]$-module $M$ on which $\phi$ is injective, he defines an ideal of $R$ to be an $M$-special $R$-ideal if it is the annihilator of some $R[\phi;f^{e}]$-submodule of $M$ (cf. \cite[Section 1]{RY}). This notion was later generalized by M. Katzman and used to study Frobenius maps on injective hulls in \cite{K1} and \cite{K5}. For a left $R[\phi;f^{e}]$-module $M$, Katzman defines an ideal of $R$ to be $M$-special if it is the annihilator of some $R[\phi;f^{e}]$-submodule of $M$ (cf. \cite[Section 6]{K1}). An important special case is when $R$ is local, $M$ is Artinian, and $\phi$ is injective.
In this case, Sharp showed that the set of $M$-special ideals is a finite set of radical ideals, consisting of all intersections of the finitely many primes in it (\cite[Corollary 3.11]{RY}); this was also proved independently by F. Enescu and M. Hochster (\cite[Section 3]{EH}). When $R$ is complete local regular and $M$ is Artinian, the notion of special ideals becomes an important device for studying Frobenius maps on injective hulls. In particular, since the top local cohomology module of $R$ is isomorphic to the injective hull of the residue field of $R$, it provides important insight into top local cohomology modules. In the case that $R$ is a finite-dimensional formal power series ring over a field of prime characteristic $p$, M. Katzman and W. Zhang \cite{K3} focus on the $M$-special ideals for Artinian $M$. In this setting, they define the special ideals in terms of the $R[\phi;f]$-module structures on $E^{\alpha}$, where $E$ is the injective hull of the residue field of $R$ and $\alpha$ is a positive integer. They define an ideal of $R$ to be $\phi$-special if it is the annihilator of an $R[\phi;f]$-submodule of $E^{\alpha}$, where $\phi=U^{t}T$, with $T$ the natural Frobenius map on $E^{\alpha}$ and $U$ an $\alpha\times\alpha$ matrix with entries in $R$ (see Section \ref{section:Katzman-Zhang Algorithm}). Furthermore, they use Katzman's $\Delta^{e}$ and $\Psi^{e}$ functors, which extend Matlis duality while keeping track of Frobenius maps, to characterize the $\phi$-special ideals equivalently as the annihilators of $R^{\alpha}/W$ for submodules $W$ satisfying $UW\subseteq W^{[p]}$, where $W^{[p]}$ is the submodule generated by $\{ w^{[p]}=(w_{1}^{p},\dots,w_{\alpha}^{p})^{t}\mid w=(w_{1},\dots,w_{\alpha})^{t}\in W\}$ (see Proposition \ref{art1}).
Katzman and Zhang show that there are only finitely many $\phi$-special ideals $P$ of $R$ with the property that $P$ is the annihilator of an $R[\phi;f^{e}]$-submodule $M$ of $E^{\alpha}$ such that the restriction of $\phi^{e}$ to $M$ is non-zero for all $e$, and they introduce an algorithm for finding the special prime ideals with this property in \cite{K3}. They first treat the case $\alpha=1$, which was considered by M. Katzman and K. Schwede in \cite{K2} in a more geometric language, and then extend it to the case $\alpha>1$. In this paper, we adapt the equivalent definition of $\phi$-special ideals above to polynomial rings: for an $\alpha\times\alpha$ matrix $U$, we define the $U$-special ideals to be the annihilators of $R^{\alpha}/W$ for submodules $W$ of $R^{\alpha}$ satisfying $UW\subseteq W^{[p]}$. We generalize the results of \cite{K3} to the case that $R$ is a finite-dimensional polynomial ring over a field of prime characteristic $p$, and show that there are only finitely many $U$-special ideals satisfying certain non-degeneracy conditions (see Theorem \ref{last4}). We also present an algorithm for finding the $U$-special prime ideals of polynomial rings. Furthermore, we consider the notion of $F$-finite $F$-modules, a prime characteristic framework introduced by G. Lyubeznik in \cite{L1} that encompasses local cohomology modules, and we show that our new algorithm gives a method for finding the prime ideals $P$ of $R$ such that $\crk(H_{IR_{P}}^{i}(R_{P})) \neq 0$ (see Definition \ref{crk} and Theorem \ref{m1}). \section{Preliminaries} In this section, we collect some notation and the necessary background for this paper. \subsection{The Frobenius Functor} Let $M$ be an $R$-module. The Frobenius functor $F_{R}$ from the category of $R$-modules to itself is defined by $F_{R}(M):=F_{*}R\otimes_{R}M$, where $F_{R}(M)$ acquires its $R$-module structure via the identification of $F_{*}R$ with $R$.
The resulting $R$-module structure on $F_{R}(M)$ satisfies \[ s(F_{*}r\otimes m)=F_{*}sr\otimes m \text{ and } F_{*}s^{p}r\otimes m=F_{*}r\otimes sm \] for all $r,s \in R$ and $m \in M$. The $e$-th iteration of $F_{R}$ is denoted by $F_{R}^{e}$, and it is given by $F_{R}^{e}(M)=F_{*}^{e}R\otimes_{R}M$. The regularity of $R$ implies that the Frobenius functor is exact. \subsection{Lyubeznik's $F$-modules} An $R$-module $\mathcal{M}$ is called an $F$-module if it is equipped with an $R$-module isomorphism $\theta : \mathcal{M} \rightarrow F_{R}(\mathcal{M})$, which we call the structure isomorphism of $\mathcal{M}$. An $F$-module homomorphism is an $R$-module homomorphism $\phi: \mathcal{M} \rightarrow \mathcal{M'}$ such that the following diagram commutes \[ \begin{CD} \mathcal{M} @>\phi>> \mathcal{M'}\\ @V\theta VV @VV\theta' V\\ F_{R}(\mathcal{M}) @>>F_{R}(\phi)> F_{R}(\mathcal{M'}) \end{CD} \] where $\theta$ and $\theta'$ are the structure isomorphisms of $\mathcal{M}$ and $\mathcal{M'}$, respectively. A generating morphism of an $F$-module $\mathcal{M}$ is an $R$-module homomorphism $\beta : M \rightarrow F_{R}(M)$, where $M$ is an $R$-module, such that $\mathcal{M}$ is the limit of the inductive system in the top row of the commutative diagram \[ \begin{CD} M @>\beta >> F_{R}(M) @>F_{R}(\beta)>> F_{R}^{2}(M) @>F_{R}^{2}(\beta)>> \cdots\\ @V\beta VV @V F_{R}(\beta)VV @V F_{R}^{2}(\beta)VV\\ F_{R}(M) @>>F_{R}(\beta)> F_{R}^{2}(M) @>>F_{R}^{2}(\beta)> F_{R}^{3}(M) @>>F_{R}^{3}(\beta)> \cdots \end{CD} \] and the structure isomorphism of $\mathcal{M}$ is induced by the vertical arrows in this diagram. An $F$-module $\mathcal{M}$ is called $F$-finite if it has a generating morphism $\beta : M \rightarrow F_{R}(M)$ with $M$ a finitely generated $R$-module. If, in addition, $\beta$ is injective, then $M$ is called a root of $\mathcal{M}$ and $\beta$ is called a root morphism of $\mathcal{M}$.
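For orientation, the Frobenius functor is easy to compute on cyclic modules. The following standard identity (included here only as an illustration, and not taken from the sources cited above) follows from the relation $r\cdot F_{*}s=F_{*}r^{p}s$, which gives $I\cdot F_{*}R=F_{*}I^{[p]}$ for any ideal $I\subseteq R$, where $I^{[p]}$ denotes the ideal generated by the $p$-th powers of the elements of $I$:

```latex
F_{R}(R/I)=F_{*}R\otimes_{R}R/I\;\cong\;F_{*}R/I\cdot F_{*}R\;=\;F_{*}\left(R/I^{[p]}\right)\;\cong\;R/I^{[p]}.
```

Thus applying $F_{R}$ to a cyclic module simply replaces $I$ by the ideal $I^{[p]}$, consistent with the Frobenius powers discussed in the next subsection.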
\begin{exam} \label{t3} Any $R$-module isomorphism $\phi:R \rightarrow F_{R}(R)$ makes $R$ into an $F$-module; in particular, so does the canonical isomorphism \[ \phi:R \rightarrow F_{*}R \otimes_{R}R=F_{R}(R) \text{ defined by } r\mapsto F_{*}r\otimes 1. \] Hence $R$ is an $F$-finite $F$-module. Therefore, by \cite[Proposition 2.10]{L1}, the local cohomology modules $H_{I}^{i}(R)$ with support on an ideal $I\subseteq R$ are $F$-finite $F$-modules. Furthermore, by \cite[Proposition 2.3]{L1}, we have \[ H_{I}^{i}(R)=\varinjlim(M\xrightarrow{\beta}F_{R}(M) \xrightarrow{F_{R}(\beta)} F_{R}^{2}(M) \xrightarrow{F_{R}^{2}(\beta)}\cdots) \] where $\beta : M \rightarrow F_{R}(M)$ is a root morphism. \end{exam} \subsection{$I_{e}(-)$ Operation and $\star$-closure} In this subsection, we give the definitions of the $\Ie_{e}(-)$ operation and the $\star$-closure, together with some of their properties. To do this we need the fact that $F_{*}^{e}R$ is an intersection flat $R$-module for every positive integer $e$. \begin{definition} An $R$-module $M$ is intersection flat\index{intersection flat modules} if it is flat and for all sets of $R$-submodules $\{N_{\lambda}\}_{\lambda \in \Lambda}$ of a finitely generated $R$-module $N$, $$M \otimes _{R} \bigcap _{\lambda \in \Lambda} N_{\lambda}= \bigcap _{\lambda \in \Lambda} (M \otimes _{R} N_{\lambda}).$$ \end{definition} \begin{definition} Let $I$ be an ideal of $R$. The ideal generated by the set $\{r^{p^{e}} \mid r\in I \}$ is called the Frobenius power of $I$ and is denoted by $I^{[p^{e}]}$. Consequently, if $I=\langle r_{1}, \dots ,r_{n}\rangle$, then $I^{[p^{e}]}=\langle r_{1}^{p^{e}}, \dots ,r_{n}^{p^{e}}\rangle$. \end{definition} \begin{remark} Since intersection flat $R$-modules include $R$ and are closed under arbitrary direct sums, free $R$-modules are intersection flat. For instance, when $R=\Bbbk[x_{1},\dots,x_{n}]$ over a field $\Bbbk$ of prime characteristic $p$, the modules $F_{*}^{e}R$ are free, and so intersection flat.
In addition, by \cite[Proposition 5.3]{K1}, when $R=\Bbbk[\![x_{1},\dots,x_{n}]\!]$ over a field $\Bbbk$ of prime characteristic $p$, the modules $F_{*}^{e}R$ are intersection flat. By regularity, these rings have the property that for any collection of ideals $\{A_{\lambda}\}_{\lambda \in \Lambda}$ of $R$, \[ (\cap_{\lambda \in \Lambda}A_{\lambda})^{[p^{e}]}\cong F_{R}^{e}(\cap_{\lambda \in \Lambda}A_{\lambda})\cong \cap_{\lambda \in \Lambda}F_{R}^{e}(A_{\lambda}) \cong \cap_{\lambda \in \Lambda}A_{\lambda}^{[p^{e}]}, \] and this is enough to guarantee the existence of a minimal ideal $J \subseteq R$ with the property $A \subseteq J^{[p^{e}]}$. \end{remark} Henceforth $R$ will denote a ring with the property that $F_{*}^{e}R$ is intersection flat for every positive integer $e$. \begin{pd} \cite[Section 5]{K1} Let $e$ be a positive integer. \begin{enumerate} \item For an ideal $A \subseteq R$ there exists a minimal ideal $J \subseteq R$ with the property $A \subseteq J^{[p^{e}]}$. We denote this minimal ideal by $\Ie_{e}(A)$\index{$\Ie_{e}(-)$ operation}. \item Let $u \in R$ be a non-zero element and $A \subseteq R$ an ideal. The set of all ideals $B \subseteq R$ which contain $A$ and satisfy $uB \subseteq B^{[p^{e}]}$ has a unique minimal element. We call this ideal the star closure of $A$ with respect to $u$ and denote it by $A^{\star^{e}u}$\index{$\star$-closure}. \end{enumerate} \end{pd} \begin{definition} Given any matrix (or vector) $V$ with entries in $R$, we define $V^{[p^{e}]}$ to be the matrix obtained from $V$ by raising its entries to the $p^{e}$-th power. Given any submodule $K \subseteq R^{\alpha}$, we define $K^{[p^{e}]}$ to be the $R$-submodule of $R^{\alpha}$ generated by $\{ v^{[p^{e}]} \mid v \in K \}$. \end{definition} The Proposition-Definition below extends the $\Ie_{e}(-)$ operation and the $\star$-closure from ideals to submodules of free $R$-modules. \begin{pd} Let $e$ be a positive integer.
\begin{enumerate} \item Given a submodule $K \subseteq R^{\alpha}$ there exists a minimal submodule $L \subseteq R^{\alpha}$ for which $K \subseteq L^{[p^{e}]}$. We denote this minimal submodule by $\Ie_{e}(K)$. \item Let $U$ be an $\alpha \times \alpha$ matrix with entries in $R$ and let $V \subseteq R^{\alpha}$ be a submodule. The set of all submodules $K \subseteq R^{\alpha}$ which contain $V$ and satisfy $UK \subseteq K^{[p^{e}]}$ has a unique minimal element. We call this submodule the star closure of $V$ with respect to $U$ and denote it by $V^{\star^{e}U}$. \end{enumerate} \end{pd} \begin{proof} For the proof of (\textit{1}) we refer to \cite[Section 3]{K3}. For the proof of (\textit{2}) we adapt the method of \cite[Section 3]{K3}. Let $V_{0}=V$ and $V_{i+1}=\Ie_{e}(UV_{i})+V_{i}$. Then $\{V_{i}\}_{i\geq 0}$ is an ascending chain and it stabilizes, since $R$ is Noetherian; that is, there is some $j\geq 0$ with $V_{j}=V_{j+k}$ for all $k>0$. Therefore, $V_{j}=\Ie_{e}(UV_{j})+V_{j}$ implies $\Ie_{e}(UV_{j}) \subseteq V_{j}$, and so $UV_{j} \subseteq V_{j}^{[p^{e}]}$. We show the minimality of $V_{j}$ by induction on $i$. Let $Z$ be any submodule of $R^{\alpha}$ containing $V$ with the property that $UZ \subseteq Z^{[p^{e}]}$. Then we clearly have $V_{0}=V \subseteq Z$; suppose that $V_{i} \subseteq Z$ for some $i$. Thus, $UV_{i} \subseteq UZ \subseteq Z^{[p^{e}]}$, which implies $\Ie_{e}(UV_{i}) \subseteq Z$ and so $V_{i+1} \subseteq Z$. Hence, $V_{j} \subseteq Z$. \end{proof} To compute the $\Ie_{e}(-)$ operation when $R$ is a free $R^{p^{e}}$-module, we first fix a free basis $\mathcal{B}$ for $R$ as an $R^{p^{e}}$-module; then every element $v \in R^{\alpha}$ can be expressed uniquely in the form $v=\sum_{b\in\mathcal{B}}u_{b}^{[p^{e}]}b$ where $u_{b} \in R^{\alpha}$ for all $b\in \mathcal{B}$. \begin{proposition}\cite[Proposition 3.4]{K3} Let $e>0$.
\begin{enumerate} \item For any submodules $V_{1}, \dots ,V_{n}$ of $R^{\alpha}$, $\Ie_{e}(V_{1}+ \cdots +V_{n})=\Ie_{e}(V_{1})+ \cdots +\Ie_{e}(V_{n})$. \item Let $\mathcal{B}$ be a free basis for $R$ as an $R^{p^{e}}$-module. Let $v \in R^{\alpha}$ and let $v=\sum_{b\in\mathcal{B}}u_{b}^{[p^{e}]}b$ be the unique expression for $v$, where $u_{b} \in R^{\alpha}$ for all $b\in \mathcal{B}$. Then $\Ie_{e}(\langle v\rangle)$ is the submodule of $R^{\alpha}$ generated by $\{u_{b}\mid b\in \mathcal{B}\}$. \end{enumerate} \end{proposition} The behaviour of the $\Ie_{e}(-)$ operation under localization is crucial for our results. The following lemma shows that it commutes with localization. \begin{lemma}\cite[Lemma 2.5]{K4} \label{ie} Let $\mathcal{R}$ be a localization of $R$ or a completion at a prime ideal. For all $e\in\N$ and all submodules $K \subseteq R^{\alpha}$, $\Ie_{e}(K \otimes_{R} \mathcal{R})$ exists and equals $\Ie_{e}(K) \otimes_{R} \mathcal{R}$. \end{lemma} \begin{lemma}\label{sp2} Let $U$ be a non-zero $\alpha \times \alpha$ matrix with entries in $R$ and $K \subseteq R^{\alpha}$ a submodule. For any prime ideal $P\subseteq R$, \[ (\widehat{K_{P}})^{\star^{e}U}=\widehat{(K^{\star^{e}U})_{P}}. \] \end{lemma} \begin{proof} Define inductively $K_{0}=K$ and $K_{i+1}=\Ie_{e}(UK_{i})+K_{i}$, and also $L_{0}=\widehat{K_{P}}$ and $L_{i+1}=\Ie_{e}(UL_{i})+ L_{i}$ for all $i \geq 0$. Since the $\Ie_{e}(-)$ operation commutes with localization and completion, an easy induction shows that $L_{i}=\widehat{(K_{i})_{P}}$, and the result follows. \end{proof} \section{The Katzman-Schwede Algorithm} \label{section:Katzman-Schwede Algorithm} The purpose of this section is to redefine the algorithm described in \cite{K2} in more algebraic language and to show that it commutes with localization. Let $R=\Bbbk[x_{1},\dots,x_{n}]$ be a polynomial ring over a field of characteristic $p$ and let $e$ be a positive integer.
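To make the constructions of the previous subsection concrete, the following toy implementation of ours (an illustration only, not part of the algorithms of \cite{K2} or \cite{K3}) computes the $\Ie_{e}(-)$ operation by the digit extraction of Proposition 3.4 and the star closure by the ascending chain $V_{i+1}=\Ie_{e}(uV_{i})+V_{i}$, in the simplest case $R=\mathbb{F}_{p}[x]$, where every ideal is principal and can be stored as a single monic generator.

```python
# Toy model of the I_e(-) operation and the star closure in R = F_p[x]
# (one variable, so every ideal is principal).  A polynomial is a dict
# {degree: coefficient mod p}; the empty dict is 0.

def trim(f, p):
    return {d: c % p for d, c in f.items() if c % p}

def deg(f):
    return max(f) if f else -1

def monic(f, p):
    f = trim(f, p)
    if not f:
        return f
    inv = pow(f[deg(f)], p - 2, p)      # inverse of the leading coefficient
    return {d: c * inv % p for d, c in f.items()}

def polymod(f, g, p):                   # remainder of f on division by g != 0
    f, g = trim(f, p), monic(g, p)
    while f and deg(f) >= deg(g):
        d, c = deg(f) - deg(g), f[deg(f)]
        f = trim({e: f.get(e, 0) - c * g.get(e - d, 0)
                  for e in set(f) | {k + d for k in g}}, p)
    return f

def gcd(f, g, p):                       # ideal sum: (f) + (g) = (gcd(f, g))
    while g:
        f, g = g, polymod(f, g, p)
    return monic(f, p)

def polymul(f, g, p):
    h = {}
    for a, c in f.items():
        for b, d in g.items():
            h[a + b] = (h.get(a + b, 0) + c * d) % p
    return trim(h, p)

def I_e(f, p, e):
    """I_e(<f>): write f = sum_b u_b^{p^e} x^b over the basis 1, ..., x^{p^e - 1};
    the u_b generate I_e(<f>) (Frobenius fixes F_p, so coefficients pass through)."""
    q, digits = p ** e, {}
    for exp, c in trim(f, p).items():   # x^exp = (x^{exp // q})^q * x^{exp % q}
        u = digits.setdefault(exp % q, {})
        u[exp // q] = (u.get(exp // q, 0) + c) % p
    g = {}
    for u in digits.values():
        g = gcd(g, u, p)
    return g

def star(f, u, p, e):
    """A^{*e u} for A = (f): iterate V <- I_e(u V) + V until stable."""
    f = monic(f, p)
    while True:
        g = gcd(f, I_e(polymul(u, f, p), p, e), p)
        if g == f:
            return f
        f = g
```

For instance, over $\mathbb{F}_{2}$ with $e=1$ one finds $\Ie_{1}(\langle x^{3}+x^{2}+1\rangle)=\langle x,\,x+1\rangle=R$ and $(x^{2})^{\star^{1}x}=(x)$.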
\begin{definition} For any $R$-linear map $\phi : F_{*}^{e}R \rightarrow R$, we say that an ideal $J \subseteq R$ is $\phi$-compatible\index{compatible ideal} if $\phi(F_{*}^{e}J) \subseteq J$. \end{definition} Given $\phi$ and a $\phi$-compatible ideal $J$ as in the definition above, there is always a commutative diagram $$\begin{array}[c]{ccc} F_{*}^{e}R&\stackrel{\phi}{\longrightarrow}&R\\ \downarrow\scriptstyle{}&&\downarrow\scriptstyle{}\\ F_{*}^{e}(R/J)&\stackrel{\phi'}{\longrightarrow}&R/J \end{array}$$ where the vertical arrows are the canonical surjections. \begin{lemma}\cite[Lemma 2.4]{K2} Assuming a commutative diagram as above, the $\phi$-compatible ideals containing $J$ are in bijective correspondence with the $\phi'$-compatible ideals of $R/J$, where $\phi'$ is the induced map $F_{*}^{e}(R/J) \stackrel{\phi'}{\longrightarrow} R/J$ as in the diagram above. \end{lemma} Next we explain the $F_{*}^{e}R$-module structure of $\homm_{R}(F_{*}^{e}R,R)$, which is crucial for our computational techniques in this paper. \begin{remark} \label{rfree} Let $\mathcal{C}$ be a basis for $\Bbbk$ as a $\Bbbk^{p^{e}}$-vector space which includes the identity element of $\Bbbk$. It is well known that $F_{*}^{e}R$ is a free $R$-module with the basis set \[ \mathcal{B}=\{ F_{*}^{e}\lambda x_{1}^{\alpha_{1}} \dots x_{n}^{\alpha_{n}} \mid 0 \leq \alpha_{1}, \dots,\alpha_{n} < p^{e}, \lambda \in \mathcal{C} \}. \] \end{remark} \begin{lemma} \cite[cf. Example 3.0.5]{BKS}\label{trace} Let $\pi_{e}:F_{*}^{e}R \rightarrow R$ be the projection map onto the free summand $RF_{*}^{e} x_{1}^{p^{e}-1} \dots x_{n}^{p^{e}-1}$. Then $\homm_{R}(F_{*}^{e}R,R)$ is generated by $\pi_{e}$ as an $F_{*}^{e}R$-module.
\end{lemma} \begin{proof} For each basis element $F_{*}^{e} \lambda x_{1}^{\alpha_{1}} \dots x_{n}^{\alpha_{n}} \in \mathcal{B}$, the projection map onto the free summand $RF_{*}^{e} \lambda x_{1}^{\alpha_{1}} \dots x_{n}^{\alpha_{n}}$ is given by the rule $$F_{*}^{e}z.\pi_{e}(-)=\pi_{e}(F_{*}^{e}z.-),$$ where $z=\lambda^{-1}x_{1}^{p^{e}-1-\alpha_{1}} \dots x_{n}^{p^{e}-1-\alpha_{n}}$. Since we can obtain all of the projections in this way, the map \[ \Phi:F_{*}^{e}R \rightarrow \homm_{R}(F_{*}^{e}R,R) \text{ defined by }\Phi(F_{*}^{e}u)=\phi_{u}, \] where $\phi_{u}:F_{*}^{e}R \rightarrow R$ is the $R$-linear map $\phi_{u}(-)=\pi_{e}(F_{*}^{e}u-)$, is surjective. On the other hand, if $\Phi(F_{*}^{e}u)=0$ for some $u\in R$, then we have \begin{center} $\phi_{u}(F_{*}^{e}r)=\pi_{e}(F_{*}^{e}ur)=F_{*}^{e}u.\pi_{e}(F_{*}^{e}r)=0$ for all $r\in R$. \end{center} This means that $F_{*}^{e}u$ must be zero, and so $\Phi$ is injective. Hence, $\Phi$ is an isomorphism of $F_{*}^{e}R$-modules. In other words, $\pi_{e}$ generates $\homm_{R}(F_{*}^{e}R,R)$ as an $F_{*}^{e}R$-module. \end{proof} \begin{definition} Let the notation and situation be as in Lemma \ref{trace}. We call the map $\pi_{e}$ the trace map on $F_{*}^{e}R$, or just the trace map when the context is clear. \end{definition} The next lemma provides an important property of the trace map $\pi_{e}$, relating elements of $\homm_{R}(F_{*}^{e}R,R)$ to the $\Ie_{e}(-)$ operation (cf. \cite[Claim 6.2.2]{BKS}). \begin{lemma} \label{fa1} Let $A$ and $B$ be ideals of $R$. Then $\pi_{e}(F_{*}^{e}A) \subseteq B$ if and only if $A \subseteq B^{[p^{e}]}$. \end{lemma} \begin{proof} $(\Rightarrow)$ Since $R$ is Noetherian, $A$ is finitely generated, and since $\pi_{e}$ is $R$-linear we may assume that $A$ is a principal ideal, i.e. $A=aR$ for some $a \in R$.
Now since $F_{*}^{e}R$ is a free $R$-module with basis $\mathcal{B}$ as in Remark \ref{rfree}, $F_{*}^{e}a=\sum_{i}r_{i}F_{*}^{e}g_{i}$ for some $r_{i} \in R$ and $F_{*}^{e}g_{i} \in \mathcal{B}$. On the other hand, by Lemma \ref{trace}, $\pi_{e}(F_{*}^{e}z_{i}a)=r_{i}$ for some $z_{i}\in R$. This implies that $\pi_{e}(F_{*}^{e}Ra)=\langle r_{i} \rangle$. Then by the assumption $\pi_{e}(F_{*}^{e}A)=\langle r_{i}\rangle \subseteq B$, and since $F_{*}^{e}a=F_{*}^{e}\sum_{i}r_{i}^{p^{e}}g_{i}$ we have $a=\sum_{i}r_{i}^{p^{e}}g_{i} \in B^{[p^{e}]}$. Hence, $A \subseteq B^{[p^{e}]}$. $(\Leftarrow)$ Assume that $A \subseteq B^{[p^{e}]}$, which implies that $F_{*}^{e}A \subseteq F_{*}^{e}B^{[p^{e}]}$. Therefore, \[ \pi_{e}(F_{*}^{e}A) \subseteq \pi_{e}(F_{*}^{e}B^{[p^{e}]})=\pi_{e}(BF_{*}^{e}R)=B\pi_{e}(F_{*}^{e}R)\subseteq B. \] \end{proof} \begin{corollary}\label{fac} Let $A$ be an ideal of $R$, and let $\phi \in \homm_{R}(F_{*}^{e}R,R)$ be such that $\phi(-)=\pi_{e}(F_{*}^{e}u-)$ for some $u\in R$. Then $\phi(F_{*}^{e}A)=\pi_{e}(F_{*}^{e}uA)=\Ie_{e}(uA)$, and the $\star$-closure $A^{\star^{e}u}$ is the smallest $\phi$-compatible ideal containing $A$. \end{corollary} \begin{proof} Since $uA \subseteq \Ie_{e}(uA)^{[p^{e}]}$, the first claim follows from Lemma \ref{fa1}. The second claim follows from the fact that \begin{align*} A \text{ is } \phi\text{-compatible} &\Leftrightarrow\phi(F_{*}^{e}A)=\pi_{e}(F_{*}^{e}uA)=\Ie_{e}(uA) \subseteq A\\ &\Leftrightarrow uA \subseteq A^{[p^{e}]}. \end{align*} \end{proof} Next we recall Fedder's Lemma, which translates the problem of finding compatible ideals of $R/I$ for an ideal $I$ into that of finding compatible ideals of $R$. In the case that $R$ is a Gorenstein local ring, this lemma was proved by R. Fedder in \cite{RF}.
\begin{lemma}\cite[Lemma 1.6]{RF}\cite[Lemma 6.2.1]{BKS} \label{fedder} Let $S=R/I$ for some ideal $I$ and let $\pi_{e}$ be the trace map. Then $\phi \in \homm_{R}(F_{*}^{e}R,R)$ satisfies $\phi(F_{*}^{e}I)\subseteq I$ if and only if there exists an element $u\in (I^{[p^{e}]}:I)$ such that $\phi(-)=\pi_{e}(F_{*}^{e}u-)$. More generally, there exists an isomorphism of $F_{*}^{e}S$-modules \[ \homm_{S}(F_{*}^{e}S,S)\cong \dfrac{\big(F_{*}^{e}(I^{[p^{e}]}:I)\big)}{\big(F_{*}^{e}I^{[p^{e}]}\big)}. \] \end{lemma} \begin{proof} By Lemma \ref{trace}, for any $\phi \in \homm_{R}(F_{*}^{e}R,R)$ there exists an element $u\in R$ such that $\phi(-)=\pi_{e}(F_{*}^{e}u-)$. Then by Lemma \ref{fa1}, \[ \phi(F_{*}^{e}I)=\pi_{e}(F_{*}^{e}uI)\subseteq I \Leftrightarrow uI \subseteq I^{[p^{e}]} \Leftrightarrow u \in (I^{[p^{e}]}:I). \] For the second claim, we shall show that the map $\Phi:F_{*}^{e}(I^{[p^{e}]}:I)\rightarrow\homm_{S}(F_{*}^{e}S,S)$ which sends $F_{*}^{e}z$ to the map $\pi_{e}(F_{*}^{e}z-)$ is surjective. It is easy to verify that this map is well-defined and $F_{*}^{e}R$-linear. Since $\homm_{R}(F_{*}^{e}S,S)=\homm_{S}(F_{*}^{e}S,S)$, by the freeness of $F_{*}^{e}R$, for any map $\varphi \in \homm_{S}(F_{*}^{e}S,S)$ there always exists a map $\psi \in\homm_{R}(F_{*}^{e}R,R)$ lifting $\varphi$, and $I$ is necessarily $\psi$-compatible. That is, $\Phi$ is surjective. On the other hand, by Lemma \ref{fa1} again, $\kerr\Phi=(F_{*}^{e}I^{[p^{e}]})$, and the result follows by the first isomorphism theorem. \end{proof} \begin{lemma}\cite[Proposition 2.6.c]{K2} If $\phi$ is surjective, then the set of $\phi$-compatible ideals is a finite set of radical ideals closed under sum and primary decomposition. \end{lemma} For $\phi$-compatible prime ideals $P\subsetneq Q$, we say that $Q$ minimally contains $P$ if there is no $\phi$-compatible prime ideal strictly between $P$ and $Q$.
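The following small example, which is ours and not taken from the sources cited above, illustrates both Fedder's Lemma and the notion of minimal containment. \begin{exam} Let $R=\mathbb{F}_{p}[x,y]$, let $I=\langle xy\rangle$ and let $e=1$. Since $R$ is a unique factorization domain, $(I^{[p]}:I)=\langle (xy)^{p-1}\rangle$, so by Lemma \ref{fedder} every $\phi\in\homm_{R}(F_{*}R,R)$ compatible with $I$ has the form $\phi(-)=\pi_{1}(F_{*}u-)$ with $u$ a multiple of $(xy)^{p-1}$. Take $u=(xy)^{p-1}$. The $\phi$-compatible prime ideals containing $I$ are exactly $\langle x\rangle$, $\langle y\rangle$ and $\langle x,y\rangle$; for instance, $u\cdot x=x^{p}y^{p-1}\in\langle x\rangle^{[p]}$, while for $P=\langle x,y-1\rangle$ one checks that $u\cdot(y-1)\notin P^{[p]}$. Here $\langle x,y\rangle$ minimally contains each of $\langle x\rangle$ and $\langle y\rangle$. \end{exam}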
For a given $\phi$-compatible prime ideal $P$, the next proposition shows how to compute the $\phi$-compatible prime ideals which minimally contain $P$, and we turn it into an algorithm (cf. \cite[Theorem 4.1]{K3} and \cite[Section 4]{K2}). \begin{proposition} \label{katzsch} Let $\phi:F_{*}^{e}R \rightarrow R$ be an $R$-linear map where $\phi(-)=\pi_{e}(F_{*}^{e}u-)$ for some $u\in R$. Let $P$ and $Q$ be $\phi$-compatible prime ideals such that $Q$ minimally contains $P$, and let $J$ be the ideal whose image in $R/P$ defines the singular locus of $R/P$. Then: \begin{enumerate} \item If $(P^{[p^{e}]}:P) \subseteq (Q^{[p^{e}]}:Q)$ then $J \subseteq Q$, \item If $(P^{[p^{e}]}:P) \nsubseteq (Q^{[p^{e}]}:Q)$ then $(uR+P^{[p^{e}]}):(P^{[p^{e}]}:P) \subseteq Q$. \end{enumerate} \end{proposition} \begin{proof} For (\textit{1}), let $R_{Q}$ be the localization of $R$ at $Q$, and let $S=\widehat{R_{Q}}$ be the completion of $R_{Q}$ with respect to the maximal ideal $QR_{Q}$. Since colon ideals, Frobenius powers and the singular locus commute with localization and completion, \begin{align*} P \text{ and } Q \text{ are $\phi$-compatible} \Rightarrow & uP\subseteq P^{[p^{e}]} \text{ and } uQ \subseteq Q^{[p^{e}]}\\ \Rightarrow & uPS \subseteq PS^{[p^{e}]} \text{ and } uQS \subseteq QS^{[p^{e}]}\\ \Rightarrow & PS \text{ and } QS \text{ are $u$-special ideals of } S\\ &\text{with } QS \text{ a prime ideal,} \end{align*} and we have $(P^{[p^{e}]}:P)\subseteq (Q^{[p^{e}]}:Q) \Rightarrow (PS^{[p^{e}]}:PS)\subseteq (QS^{[p^{e}]}:QS)$. Thus, by \cite[Theorem 4.1]{K3}, $JS \subseteq QS$, and so $J \subseteq Q$. For (\textit{2}), we refer to \cite[Theorem 4.1]{K3}. \end{proof} The following algorithm, which we call here the Katzman-Schwede algorithm, is the algorithm described in \cite{K2}; it finds all $\phi$-compatible prime ideals of $R$ which do not contain $\Ie_{e}(uR)$. We describe it here in more algebraic language.
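The control flow of the algorithm stated below is a simple worklist closure; the following schematic (a sketch of ours, not the implementation accompanying \cite{K2}) abstracts the algebraic content of steps \textit{1}--\textit{4} into callables that map a prime ideal to the finitely many new primes it produces.

```python
# Schematic worklist closure underlying the Katzman-Schwede algorithm
# (a sketch only): primes are opaque hashable values here, and each
# expansion step returns the new primes produced from a given prime P.

def worklist_closure(expand_steps, start):
    """Grow the set A from `start` until every element has been processed."""
    A, B = {start}, set()             # A is the output set, B the processed set
    while A != B:
        P = next(iter(A - B))         # pick any unprocessed prime
        for step in expand_steps:     # e.g. steps (1)-(2) and (3)-(4)
            A |= set(step(P))
        B.add(P)                      # step (5)
    return A
```

In the algorithm itself, `start` is the zero ideal and the two expansion steps send $P$ to the minimal primes of $J^{\star^{e}u}$ and of $B^{\star^{e}u}$, respectively; the loop terminates because only finitely many $\phi$-compatible prime ideals exist.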
\subsection*{Input:} An $R$-linear map $\phi:F_{*}^{e}R\rightarrow R$, where $\phi(-)=\pi_{e}(F_{*}^{e}u-)$ for some $u\in R$. \subsection*{Output:} The set of all $\phi$-compatible prime ideals which do not contain $\Ie_{e}(uR)$. \subsection*{Initialize:} $\mathcal{A}_{R}=\{0\}$ and $\mathcal{B}=\emptyset$ \subsection*{Execute the following:} While $\mathcal{A}_{R} \neq \mathcal{B}$, pick any $P \in \mathcal{A}_{R}-\mathcal{B}$ and set $S=R/P$; \begin{enumerate} \item Find the ideal $J \subseteq R$ whose image in $S$ defines the singular locus of $S$, and compute $J^{\star^{e} u}$, \item Find the minimal prime ideals of $J^{\star^{e} u}$, add them to $\mathcal{A}_{R}$, \item Compute the ideal $B:=((uR + P^{[p^{e}]}):(P^{[p^{e}]}:P))$, and compute $B^{\star^{e}u}$, \item Find the minimal prime ideals of $B^{\star^{e}u}$, add them to $\mathcal{A}_{R}$, \item Add $P$ to $\mathcal{B}$. \end{enumerate} Output $\mathcal{A}_{R}$ and stop.\\ The Katzman-Schwede algorithm produces a list of all $\phi$-compatible prime ideals which do not contain $L:=\Ie_{e}(uR)$. This restriction is harmless: for any prime ideal $Q$ with $L \subseteq Q$, the ideal $Q$ is $\phi$-compatible if and only if $Q/L$ is $\phi'$-compatible, where $\phi'$ is the map induced by $\phi$; but $\phi'$ is zero, so $Q/L$ is automatically $\phi'$-compatible. Thus, we do not need to assume that $\phi$ is surjective. \begin{discussion}\label{diss} Let $R_{\mathfrak{p}}$ be a localization of $R$ at a prime ideal $\mathfrak{p}$, and let $\widehat{R_{\mathfrak{p}}}$ be the completion of $R_{\mathfrak{p}}$ with respect to the maximal ideal $\mathfrak{p}R_{\mathfrak{p}}$. We know that $\mathfrak{p}\widehat{R_{\mathfrak{p}}}$ is the maximal ideal of $\widehat{R_{\mathfrak{p}}}$. Now let $X_{1},\dots,X_{s}$ be minimal generators of $\mathfrak{p}\widehat{R_{\mathfrak{p}}}$, and let $\field{K}[\![X_{1},\dots,X_{s}]\!]$ be the formal power series ring over the residue field $\field{K}$ of $R_{\mathfrak{p}}$.
By Cohen's structure theorem, $S:=\widehat{R_{\mathfrak{p}}}\cong \field{K}[\![X_{1},\dots,X_{s}]\!]$. Let $E=E_{S}(S/\mathfrak{m})$ be the injective hull of the residue field, where $\mathfrak{m}=\mathfrak{p}\widehat{R_{\mathfrak{p}}}$. Then by \cite[13.5.3 Example]{RY}, $E$ is isomorphic to the module of inverse polynomials $\field{K}[X_{1}^{-},\dots ,X_{s}^{-}]$, whose $S$-module structure is extended from the following rule \begin{align*} (\lambda X_{1}^{\alpha_{1}}\dots X_{s}^{\alpha_{s}})&(\mu X_{1}^{-\nu_{1}}\dots X_{s}^{-\nu_{s}})\\&= \begin{cases} \lambda\mu X_{1}^{-\nu_{1}+\alpha_{1}}\dots X_{s}^{-\nu_{s}+\alpha_{s}} & \text{ if } \alpha_{i} < \nu_{i} \text{ for all } i \\ 0 & \text{ if } \alpha_{i} \geq \nu_{i} \text{ for some } i \end{cases} \end{align*} for all $\lambda , \mu \in \field{K}$, non-negative integers $\alpha_{1},\dots,\alpha_{s}$, and positive integers $\nu_{1},\dots,\nu_{s}$. Therefore, $E$ has a natural $S[T;f^{e}]$-module structure obtained by extending additively the action $T(\lambda X_{1}^{-\nu_{1}}\dots X_{s}^{-\nu_{s}})=\lambda^{p^{e}} X_{1}^{-p^{e}\nu_{1}}\dots X_{s}^{-p^{e}\nu_{s}}$ for all $\lambda \in \field{K}$ and positive integers $\nu_{1},\dots,\nu_{s}$. Notice that $T:E\rightarrow E$ defines a Frobenius map. Following \cite[Section 4]{K3}, we can also view the Katzman-Schwede algorithm from the point of view of Frobenius maps on the injective hull of the residue field. Any $S[\Theta;f^{e}]$-module structure on $E$ can be given by $\Theta=uT$ for some $u\in S$, where $T$ is the natural action as above. We also know that the set of $S$-submodules of $E$ is $\{\ann_{E}J \mid J \text{ is an ideal of } S \}$. In addition, \cite[Theorem 4.3]{K1} shows that an $S$-submodule $\ann_{E}J \subseteq E$ is an $S[\Theta ; f^{e}]$-submodule if and only if $uJ \subseteq J^{[p^{e}]}$. Thus, the Katzman-Schwede algorithm finds all submodules $\ann_{E}P$ of $E$ which are preserved by the Frobenius map $\Theta$, under the assumptions that $P$ is a prime ideal of $S$ and the restriction of $\Theta$ to $\ann_{E}P$ is not the zero map (i.e.
it finds all the $\Theta$-special prime ideals of $S$; see Definition \ref{uspecial}). \end{discussion} All of the operations used in the Katzman-Schwede algorithm are defined for localizations of $R$. Therefore, we can apply the algorithm to any localization of $R$ at a prime ideal. In the rest of this section, we investigate the behaviour of the Katzman-Schwede algorithm under localization. Let $R_{\mathfrak{p}}$ be a localization of $R$ at a prime ideal $\mathfrak{p}$. Our next theorem gives the exact relation between the output sets $\mathcal{A}_{R}$ and $\mathcal{A}_{R_{\mathfrak{p}}}$ of the Katzman-Schwede algorithm for $R$ and $R_{\mathfrak{p}}$, respectively. \begin{theorem}\label{al1} The Katzman-Schwede algorithm commutes with localization: for a given $u \in R$, if $\mathcal{A}_{R}$ and $\mathcal{A}_{R_{\mathfrak{p}}}$ are the output sets of the Katzman-Schwede algorithm for $R$ and $R_{\mathfrak{p}}$, respectively, then \[ \mathcal{A}_{R_{\mathfrak{p}}}=\{ PR_{\mathfrak{p}} \mid P \in \mathcal{A}_{R} \text{ and } P \subseteq \mathfrak{p} \}. \] \end{theorem} \begin{proof} We shall show that the Katzman-Schwede algorithm commutes with localization step by step. Since the ideal defining the singular locus commutes with localization, so does step \textit{1}. Since Frobenius powers and colon ideals commute with localization under the Noetherian hypothesis, so does step \textit{3}. By Lemma \ref{sp2}, the $\star$-closure commutes with localization. Therefore, steps \textit{2} and \textit{4} follow from the fact that primary decomposition commutes with localization. Let $P$ be a $\phi$-compatible prime ideal of $R$ with $P \subseteq \mathfrak{p}$. Then since $uP\subseteq P^{[p^{e}]} \Leftrightarrow uPR_{\mathfrak{p}}\subseteq P^{[p^{e}]}R_{\mathfrak{p}}$, $PR_{\mathfrak{p}}$ is a $\phi$-compatible prime ideal of $R_{\mathfrak{p}}$.
Since the Katzman-Schwede algorithm commutes with localization, $Q$ is a $\phi$-compatible prime ideal of $R$ minimally containing $P$ if and only if $QR_{\mathfrak{p}}$ is a $\phi$-compatible prime ideal of $R_{\mathfrak{p}}$ minimally containing $PR_{\mathfrak{p}}$. Hence, $\mathcal{A}_{R_{\mathfrak{p}}}=\{ PR_{\mathfrak{p}} \mid P \in \mathcal{A}_{R} \text{ and } P \subseteq \mathfrak{p} \}$. \end{proof} \section{A Generalization of the Katzman-Zhang Algorithm} \label{section:Katzman-Zhang Algorithm} Let $R=\Bbbk[x_{1},\dots,x_{n}]$ be a polynomial ring over a field of characteristic $p$ and let $e$ be a positive integer. Let $R_{\mathfrak{p}}$ be a localization of $R$ at a prime ideal $\mathfrak{p}$, and let $S=\widehat{R_{\mathfrak{p}}}$ be the completion of $R_{\mathfrak{p}}$ with respect to the maximal ideal $\mathfrak{m}=\mathfrak{p}R_{\mathfrak{p}}$. Let $E=E_{S}(S/\mathfrak{m})$ be the injective hull of the residue field of $S$. The purpose of this section is to generalize the algorithm defined in Section 6 of \cite{K3} to $R$, and to show that it commutes with localization. \begin{remark} Given an Artinian $S$-module $M$, we can embed $M$ in $E^{\alpha}$ for some positive integer $\alpha$, and we can then embed $\coker(M\hookrightarrow E^{\alpha})$ in $E^{\beta}$ for some positive integer $\beta$. Continuing in this way, we get an injective resolution \[ 0\rightarrow M\rightarrow E^{\alpha}\xrightarrow{A^{t}}E^{\beta}\rightarrow \cdots \] of $M$, where $A$ is an $\alpha\times\beta$ matrix with entries in $S$ since $\homm_{S}(E^{\alpha},E^{\beta})\cong\homm_{S}(S^{\alpha},S^{\beta})$, and so $M\cong \ker A^{t}$. \end{remark} \begin{remark} \label{skew} Let $T: E\rightarrow E$ be as in Discussion \ref{diss}.
We can extend this natural $S[T;f^{e}]$-module structure on $E$ to $E^{\alpha}$ by setting \[ T \left( \begin{array}{ccc} a_{1} \\ \vdots \\ a_{\alpha} \end{array} \right) = \left( \begin{array}{ccc} Ta_{1} \\ \vdots \\ Ta_{\alpha} \end{array} \right) \] for all $a_{1},\dots,a_{\alpha}\in E$. \end{remark} \begin{remark} Following \cite[Section 3]{K1}, let $\mathfrak{C}^{e}$ be the category of Artinian $S[\theta;f^{e}]$-modules and $\mathfrak{D}^{e}$ be the category of $S$-linear maps $M\rightarrow F_{S}^{e}(M)$, where $M$ is a Noetherian $S$-module and a morphism between $M\rightarrow F_{S}^{e}(M)$ and $N\rightarrow F_{S}^{e}(N)$ is a commutative diagram of $S$-linear maps $$\begin{array}[c]{ccc} M&\stackrel{\phi}{\longrightarrow}&N\\ \downarrow\scriptstyle{}&&\downarrow\scriptstyle{}\\ F_{S}^{e}(M)&\stackrel{F_{S}^{e}(\phi)}{\longrightarrow}&F_{S}^{e}(N) \end{array}$$ We define the functor $\Delta^{e}:\mathfrak{C}^{e}\rightarrow \mathfrak{D}^{e}$ as follows: given an $e$-th Frobenius map $\theta:M\rightarrow M$, we obtain an $S$-linear map $\phi:F_{*}^{e}S\otimes M\rightarrow M$ such that $\phi(F_{*}^{e}s\otimes m)=s\theta(m)$ for all $s\in S$, $m\in M$. Applying Matlis duality to this map gives the $S$-linear map $M^{\vee}\rightarrow (F_{*}^{e}S\otimes M)^{\vee}\cong F_{*}^{e}S\otimes M^{\vee}$, where the last isomorphism is described in \cite[Lemma 4.1]{L1}. Conversely, we define the functor $\Psi^{e}:\mathfrak{D}^{e}\rightarrow \mathfrak{C}^{e}$ as follows: given a Noetherian $S$-module $N$ with an $S$-linear map $N\rightarrow F_{S}^{e}(N)$, applying Matlis duality to this map gives the $S$-linear map $\varphi:F_{S}^{e}(N^{\vee})\cong F_{S}^{e}(N)^{\vee}\rightarrow N^{\vee}$, where the first isomorphism is the composition $F_{S}^{e}(N^{\vee})\cong F_{S}^{e}(N^{\vee})^{\vee\vee}\cong F_{S}^{e}(N^{\vee\vee})^{\vee}\cong F_{S}^{e}(N)^{\vee}$. We then define the action of $\theta$ on $N^{\vee}$ by $\theta(n)=\varphi(1\otimes n)$ for all $n\in N^{\vee}$.
\end{remark} The mutually inverse exact functors $\Delta^{e}$ and $\Psi^{e}$ are extensions of Matlis duality which also keep track of Frobenius actions. \begin{proposition}\label{art1}\cite[Proposition 2.1]{K3} Let $M\cong\ker A^{t}$ be an Artinian $S$-module, where $A$ is an $\alpha\times\beta$ matrix with entries in $S$. For a given $e$-th Frobenius map on $M$, $\Delta^{e}(M)\in\homm_{S}(\coker A,\coker A^{[p^{e}]})$ and is given by an $\alpha\times\alpha$ matrix $U$ such that $U\im A\subseteq\im A^{[p^{e}]}$; conversely, any such $U$ defines an $S[\Theta;f^{e}]$-module structure on $M$, given by the restriction to $M$ of the Frobenius map $\Theta:E^{\alpha} \rightarrow E^{\alpha}$ defined by $\Theta(a)=U^{t}T(a)$ for all $a\in E^{\alpha}$. \end{proposition} \begin{remark}\label{rm1} By Proposition \ref{art1}, for any Artinian submodule $M\cong\ker A^{t}$ of $E^{\alpha}$ with a given $S[\Theta;f^{e}]$-module structure, where $\Theta=U^{t}T$, there is a submodule $V$ of $S^{\alpha}$ such that $M=\ann_{E^{\alpha}}V^{t}:=\{a\in E^{\alpha} \mid V^{t}a=0 \}$ and $UV \subseteq V^{[p^{e}]}$ (in fact $V=\im A$). For simplicity, for $V \subseteq S^{\alpha}$ we write $E(V)=\ann_{E^{\alpha}}V^{t}$. \end{remark} \begin{lemma} \label{nil1}\cite[Lemma 3.6, Lemma 3.7]{K3} Let $\Theta=U^{t}T:E^{\alpha}\rightarrow E^{\alpha}$ be a Frobenius map, where $U$ is an $\alpha\times\alpha$ matrix with entries in $S$, and let $K\subset S^{\alpha}$. Then \begin{enumerate} \item $E(\Ie_{e}(\im U^{[p^{e-1}]}U^{[p^{e-2}]}\cdots U))=\{a\in E^{\alpha} \mid \Theta^{e}(a)=0 \}$, \item $E(\Ie_{1}(UK))=\{ a\in E^{\alpha} \mid \Theta(a)\in E(K) \}$. \end{enumerate} \end{lemma} \begin{remark} Let $M=\ann_{E^{\alpha}}V^{t}$ be as in Remark \ref{rm1}. Then $\ann_{S}M=\ann_{S}S^{\alpha}/V$ because $\ann_{S}M \subseteq \ann_{S}M^{\vee} \subseteq \ann_{S}M^{\vee\vee}\cong\ann_{S}M$.
\end{remark} \begin{definition} \label{uspecial} Let $\Theta=U^{t}T: E^{\alpha} \rightarrow E^{\alpha}$ be a Frobenius map, where $U$ is an $\alpha \times \alpha$ matrix with entries in $S$. We call an ideal of $S$ a $\Theta$-special ideal if it is the annihilator of an $S[\Theta;f^{e}]$-submodule of $E^{\alpha}$, equivalently if it is the annihilator of $S^{\alpha}/W$ for some $W\subset S^{\alpha}$ with $UW \subseteq W^{[p^{e}]}$. \end{definition} Notice that the concept of the injective hull of the residue field is not available for polynomial rings. Therefore, we adapt the above definition to a more general setting and define special ideals depending on a given square matrix as follows. \begin{definition} Let $\mathcal{R}$ be $R$ or $R_{\mathfrak{p}}$ or $S$. For a given $\alpha\times\alpha$ matrix $U$ with entries in $\mathcal{R}$, we call an ideal of $\mathcal{R}$ a $U$-special ideal if it is the annihilator of $\mathcal{R}^{\alpha}/V$ for some submodule $V\subseteq \mathcal{R}^{\alpha}$ satisfying $UV\subseteq V^{[p^{e}]}$. \end{definition} Next we provide some properties of special ideals. The following lemma gives the most important ones; they are generalizations of Lemmas 3.8 and 3.10 in \cite{K3} to $R$, with similar proofs. \begin{lemma}\label{sp1} Let $\mathcal{R}$ be $R$ or $R_{\mathfrak{p}}$ or $S$. Let $U$ be an $\alpha\times\alpha$ matrix with entries in $\mathcal{R}$ and let $J$ be a $U$-special ideal of $\mathcal{R}$. Then \begin{enumerate} \item Associated primes of $J$ are $U$-special, \item $V=(J\mathcal{R}^{\alpha})^{\star^{e} U}$ is the smallest submodule of $\mathcal{R}^{\alpha}$ such that $J=\ann_{\mathcal{R}}\mathcal{R}^{\alpha}/V$ and $UV\subseteq V^{[p^{e}]}$. \end{enumerate} \end{lemma} \begin{proof} For (\textit{1}), let $P$ be an associated prime of $J$ and let $J=\ann_{\mathcal{R}}\mathcal{R}^{\alpha}/V$ for some $V\subseteq \mathcal{R}^{\alpha}$ such that $UV\subseteq V^{[p^{e}]}$. Then for a suitable element $r\in \mathcal{R}$ we have $P=(J: r)$.
If $W=(V:_{\mathcal{R}^{\alpha}} r)=\{ w\in \mathcal{R}^{\alpha}\mid rw\in V \}$ then $P=\ann_{\mathcal{R}}\mathcal{R}^{\alpha}/W$ since $s \in P \Leftrightarrow rs \in J \Leftrightarrow rs\mathcal{R}^{\alpha} \subseteq V \Leftrightarrow s\mathcal{R}^{\alpha} \subseteq W$. On the other hand, since $UV \subseteq V^{[p^{e}]}$ and $rW \subseteq V$ we have $rUW \subseteq UV$, and so $r^{p^{e}}UW\subseteq r^{p^{e}-1}UV\subseteq r^{p^{e}-1}V^{[p^{e}]} \subseteq V^{[p^{e}]}$. This means that $UW \subseteq (V^{[p^{e}]}:_{\mathcal{R}^{\alpha}} r^{p^{e}})=(V:_{\mathcal{R}^{\alpha}} r)^{[p^{e}]}=W^{[p^{e}]}$. For (\textit{2}), let $J=\ann_{\mathcal{R}}\mathcal{R}^{\alpha}/V$ for some $V\subseteq \mathcal{R}^{\alpha}$ such that $UV\subseteq V^{[p^{e}]}$. It is clear that $J\mathcal{R}^{\alpha}\subseteq (J\mathcal{R}^{\alpha})^{\star^{e} U}$, and $J\mathcal{R}^{\alpha} \subseteq V \Rightarrow (J\mathcal{R}^{\alpha})^{\star^{e} U}\subseteq V^{\star^{e} U}=V$. Therefore, $J \subseteq \ann_{\mathcal{R}}\mathcal{R}^{\alpha}/(J\mathcal{R}^{\alpha})^{\star^{e} U} \subseteq \ann_{\mathcal{R}}\mathcal{R}^{\alpha}/V=J$, and so $J=\ann_{\mathcal{R}}\mathcal{R}^{\alpha}/(J\mathcal{R}^{\alpha})^{\star^{e} U}$. \end{proof} \begin{theorem}\label{T1}\cite[Theorem 5.1]{K3} There are only finitely many $\Theta$-special prime ideals $P$ of $S$ such that $P=\ann_{S}M$ for some $S[\Theta;f^{e}]$-submodule $M \subseteq E^{\alpha}$ on which the restriction of $\Theta$ is not zero. \end{theorem} Theorem \ref{T1} was proved by induction on $\alpha$ with the aid of the injective hull of the residue field of $S$, and it was turned into an algorithm in \cite{K3}, which we call here the Katzman-Zhang algorithm. Since injective hulls of residue fields are not available for polynomial rings, we use only the techniques of the $\Ie_{e}(-)$ operation and the $\star$-closure to generalize the Katzman-Zhang algorithm to $R$. The next theorem allows us to prove a polynomial version of Theorem \ref{T1}.
\begin{theorem} \label{2} \cite[Theorem 3.2]{K4} Let $\alpha\in\N$ and let $U$ be an $\alpha \times \alpha$ matrix with entries in $R$. \begin{enumerate} \item If $\Ie_{e}(U^{[p^{e-1}]}U^{[p^{e-2}]} \cdots UR^{\alpha})=\Ie_{e+1}(U^{[p^{e}]}U^{[p^{e-1}]} \cdots UR^{\alpha})$ then \[ \Ie_{e}(U^{[p^{e-1}]}U^{[p^{e-2}]} \cdots UR^{\alpha})= \Ie_{e+j}(U^{[p^{e+j-1}]}U^{[p^{e+j-2}]} \cdots UR^{\alpha}) \] for all $j \geq 0$. \item There exists an integer $e$ such that the equality in (\textit{1}) holds. \end{enumerate} \end{theorem} For the rest of this section, we fix an $\alpha\times\alpha$ matrix $U$ with entries in $R$, and $\mathcal{K}$ will denote the stable value of $\{\Ie_{e}(U^{[p^{e-1}]}U^{[p^{e-2}]} \cdots UR^{\alpha})\}_{e\geq1}$ as in Theorem \ref{2}. \begin{proposition} \label{rmklast} If $P$ is a prime ideal of $R$ with the property that $\mathcal{K} \subseteq PR^{\alpha}$, where $\mathcal{K}=\Ie_{e}(U_{e}R^{\alpha})$ and $U_{e}=U^{[p^{e-1}]}U^{[p^{e-2}]}\cdots U$, then $P$ is $U_{e}$-special. \end{proposition} \begin{proof} Let $P$ be a prime ideal of $R$ such that $\mathcal{K} \subseteq PR^{\alpha}$. Then \[ \mathcal{K} \subseteq PR^{\alpha}\Rightarrow U_{e}R^{\alpha}\subseteq P^{[p^{e}]}R^{\alpha}\Rightarrow U_{e}PR^{\alpha}\subseteq P^{[p^{e}]}R^{\alpha}\Rightarrow PR^{\alpha}=(PR^{\alpha})^{\star^{e} U_{e}}. \] Therefore, $P$ is $U_{e}$-special. \end{proof} By Proposition \ref{rmklast}, any prime ideal containing $\mathcal{K}$ is $U_{e}$-special. This is equivalent to saying that the action of $U_{e}$ on submodules $PR^{\alpha}$ containing $\mathcal{K}$, with $P$ a prime ideal, is the same as the action of the zero matrix. Henceforth, we will assume that $\mathcal{K} \neq 0$. Our next theorem is the generalization of Theorem \ref{T1} to $R$, and we will prove it using a method very similar to that of \cite[Section 5]{K3}. \begin{theorem} \label{1} The set of all $U$-special prime ideals $P$ of $R$ with the property that $\mathcal{K} \nsubseteq PR^{\alpha}$ is finite.
\end{theorem} We will prove Theorem \ref{1} by induction on $\alpha$. Assume that $\alpha=1$. For a prime ideal $P$, being a $u$-special prime, i.e. $P=\ann_{R}R/P^{\star u}$, is equivalent to the property that $uP \subseteq P^{[p]}$. This means, by Corollary \ref{fac}, that $P$ is a $\phi$-compatible ideal where $\phi(-)=\pi(F_{*}u-)$. Then the set of all $u$-special prime ideals is finite, and the Katzman-Schwede algorithm finds such primes. Henceforth in this section, we will assume that Theorem \ref{1} holds for $\alpha-1$. For a $U$-special prime ideal $P$, we will present an effective method for finding all $U$-special prime ideals $Q\varsupsetneq P$ for which there is no $U$-special prime ideal strictly between $P$ and $Q$; we say that such a $U$-special prime ideal $Q$ minimally contains $P$. The following lemma is a generalization of Lemma 5.2 in \cite{K3} to $R$, and it is our starting point for finding $U$-special prime ideals minimally containing $P$. \begin{lemma} \label{last1} Let $P\subsetneq Q$ be $U$-special prime ideals of $R$ such that $Q$ contains $P$ minimally. If $a\in Q\setminus P$, then $Q$ is among the minimal prime ideals of $\ann_{R}R^{\alpha}/W$ where $W=((P+aR)R^{\alpha}) ^{\star U}$. \end{lemma} \begin{proof} Since $PR^{\alpha}\subseteq (P+aR)R^{\alpha} \subseteq QR^{\alpha}$ we have \[ (PR^{\alpha})^{\star U}\subseteq ((P+aR)R^{\alpha})^{\star U} \subseteq (QR^{\alpha})^{\star U}. \] Then by Lemma \ref{sp1}, \[ P=\ann_{R}\frac{R^{\alpha}}{(PR^{\alpha})^{\star U}} \subseteq \ann_{R}\frac{R^{\alpha}}{W} \subseteq \ann_{R}\frac{R^{\alpha}}{(QR^{\alpha})^{\star U}}=Q, \] which implies that $Q$ contains a minimal prime ideal of $\ann_{R}R^{\alpha}/W$. By Lemma \ref{sp1} again, this minimal prime is $U$-special. Since $Q$ contains $P$ minimally, it has to be $Q$ itself. \end{proof} Next, we will prove a generalization of Lemma 5.3 in \cite{K3} to $R$, which is a crucial step in proving Theorem \ref{1}.
\begin{lemma} \label{last2} Let $Q$ be a $U$-special prime ideal of $R$, where $Q=\ann_{R}R^{\alpha}/W$ for some submodule $W\subseteq R^{\alpha}$ satisfying $UW\subseteq W^{[p]}$. Let $a\notin Q$ and let $X$ be an invertible $\alpha\times\alpha$ matrix with entries in the localization $R_{a}$. Let $\nu \gg 0$ be such that $U_{1}=a^{\nu}X^{[p]}UX^{-1}$ has entries in $R$, and set $W_{1}=XW_{a}\cap R^{\alpha}$. Then \begin{enumerate} \item $Q$ is a minimal prime of $\ann_{R}R^{\alpha}/W_{1}$ and $U_{1}W_{1}\subseteq W_{1}^{[p]}$, i.e. $Q$ is $U_{1}$-special. \item If $\Ie_{e}(U^{[p^{e-1}]}U^{[p^{e-2}]} \cdots UR^{\alpha})\nsubseteq W$, then $$\Ie_{e}(U_{1}^{[p^{e-1}]}U_{1}^{[p^{e-2}]} \cdots U_{1}R^{\alpha})\nsubseteq W_{1}.$$ \end{enumerate} \end{lemma} \begin{proof} Let $J=\ann_{R}R^{\alpha}/W_{1}$. Then \begin{align*} J_{a} &=(\ann_{R}R^{\alpha}/W_{1})_{a}=\ann_{R_{a}}R_{a}^{\alpha}/(W_{1})_{a}=\ann_{R_{a}}R_{a}^{\alpha}/XW_{a}\\ &\cong \ann_{R_{a}}R_{a}^{\alpha}/W_{a}=(\ann_{R}R^{\alpha}/W)_{a}=Q_{a}. \end{align*} Therefore, $Q$ is a minimal prime ideal of $J$. We also have \begin{align*} U_{1}W_{1}&=a^{\nu}X^{[p]}UX^{-1}(XW_{a}\cap R^{\alpha})\subseteq(a^{\nu}X^{[p]}UX^{-1}XW_{a})\cap R^{\alpha}\\ &\subseteq X^{[p]}W_{a}^{[p]}\cap R^{\alpha}=(XW_{a})^{[p]} \cap R^{\alpha}=(XW_{a}\cap R^{\alpha})^{[p]}=W_{1}^{[p]}. \end{align*} This means that $J$ is $U_{1}$-special. Therefore, by Lemma \ref{sp1}, $Q$ is $U_{1}$-special. Assume that \[ \Ie_{e}(U^{[p^{e-1}]}U^{[p^{e-2}]} \cdots UR^{\alpha})\nsubseteq W \text{, i.e. } U^{[p^{e-1}]}U^{[p^{e-2}]} \cdots UR^{\alpha}\nsubseteq W^{[p^{e}]}. \] Now suppose, for contradiction, that \[ \Ie_{e}(U_{1}^{[p^{e-1}]}U_{1}^{[p^{e-2}]} \cdots U_{1}R^{\alpha})\subseteq W_{1}\text{, i.e. } U_{1}^{[p^{e-1}]}U_{1}^{[p^{e-2}]} \cdots U_{1}R^{\alpha}\subseteq W_{1}^{[p^{e}]}.
\] Since \begin{align*} &U_{1}^{[p^{e-1}]}U_{1}^{[p^{e-2}]} \cdots U_{1}=(a^{\nu}X^{[p]}UX^{-1})^{[p^{e-1}]}(a^{\nu}X^{[p]}UX^{-1})^{[p^{e-2}]} \cdots a^{\nu}X^{[p]}UX^{-1}\\ &=a^{\nu(p^{e-1})}X^{[p^{e}]}U^{[p^{e-1}]}(X^{-1})^{[p^{e-1}]}a^{\nu(p^{e-2})}X^{[p^{e-1}]}U^{[p^{e-2}]}(X^{-1})^{[p^{e-2}]}\cdots a^{\nu}X^{[p]}UX^{-1}\\ &=a^{\nu(p^{e-1}+p^{e-2}+\cdots+1)}X^{[p^{e}]}U^{[p^{e-1}]}U^{[p^{e-2}]}\cdots UX^{-1}, \end{align*} we have $bX^{[p^{e}]}U^{[p^{e-1}]}U^{[p^{e-2}]}\cdots UX^{-1}R^{\alpha} \subseteq W_{1}^{[p^{e}]}=(XW_{a}\cap R^{\alpha})^{[p^{e}]}=X^{[p^{e}]}W_{a}^{[p^{e}]}\cap R^{\alpha}$, where $b=a^{\nu(p^{e-1}+p^{e-2}+\cdots+1)}$. Therefore, $$X^{[p^{e}]}U^{[p^{e-1}]}U^{[p^{e-2}]}\cdots UX^{-1}R_{a}^{\alpha} \subseteq X^{[p^{e}]}W_{a}^{[p^{e}]},$$ and so $U^{[p^{e-1}]}U^{[p^{e-2}]}\cdots UR_{a}^{\alpha} \subseteq W_{a}^{[p^{e}]}$. Then $U^{[p^{e-1}]}U^{[p^{e-2}]}\cdots UR^{\alpha} \subseteq W^{[p^{e}]}$ since $a$ is not a zero divisor on $R^{\alpha}/W^{[p^{e}]}$, which contradicts our assumption. \end{proof} Next, we will give a generalization of Proposition 5.4 in \cite{K3} to $R$, which provides an effective method for finding the $U$-special prime ideals containing a $U$-special prime $P$ minimally in an important case. \begin{proposition} \label{last3} Let $P$ be a $U$-special prime ideal of $R$ such that $\mathcal{K} \nsubseteq PR^{\alpha}$. Assume that the $\alpha$-th column of $U$ is zero and $PR^{\alpha}=(PR^{\alpha})^{\star U}$. Then the set of $U$-special prime ideals minimally containing $P$ is finite. \end{proposition} \begin{proof} Let $Q$ be a $U$-special prime ideal minimally containing $P$ and $W=(QR^{\alpha})^{\star U}$. Let $U_{0}$ be the top left $(\alpha-1)\times(\alpha-1)$ submatrix of $U$. Since $PR^{\alpha}=(PR^{\alpha})^{\star U}\Leftrightarrow UPR^{\alpha}\subseteq P^{[p]}R^{\alpha}$, all entries of $U$ are in $(P^{[p]}:P)$.
Therefore, $U_{0}PR^{\alpha-1}\subseteq P^{[p]}R^{\alpha-1}$, and so $P$ is $U_{0}$-special. Let $\mathcal{K}_{0}$ be the stable value of $\{ \Ie_{e}(U_{0}^{[p^{e-1}]}U_{0}^{[p^{e-2}]}\cdots U_{0}R^{\alpha-1}) \}_{e>0}$ as in Theorem \ref{2}. We now split our proof into two parts. Assume first that $\mathcal{K}_{0}\subseteq PR^{\alpha-1}$, i.e. $\Ie_{e}(U_{0}^{[p^{e-1}]}U_{0}^{[p^{e-2}]}\cdots U_{0}R^{\alpha-1}) \subseteq PR^{\alpha-1}$ for some $e>0$. \begin{enumerate} \item[1)] Let $(g_{1},\dots,g_{\alpha-1},0)$ be the last row of the matrix $U^{[p^{e-1}]}U^{[p^{e-2}]}\cdots U$. Note that its top left $(\alpha-1)\times(\alpha-1)$ submatrix is $U_{0}^{[p^{e-1}]}U_{0}^{[p^{e-2}]}\cdots U_{0}$. By our assumption, all entries of $U_{0}^{[p^{e-1}]}U_{0}^{[p^{e-2}]}\cdots U_{0}$ are in $P^{[p^{e}]}\subseteq Q^{[p^{e}]}$. Therefore, $\Ie_{e}(U_{0}^{[p^{e-1}]}U_{0}^{[p^{e-2}]}\cdots U_{0}R^{\alpha-1})\subseteq QR^{\alpha-1}$. Then by Proposition \ref{rmklast}, $P$ and $Q$ are $U_{0}^{[p^{e-1}]}U_{0}^{[p^{e-2}]}\cdots U_{0}$-special, and so the action of $U^{[p^{e-1}]}U^{[p^{e-2}]}\cdots U$ is the same as the action of a matrix $U_{e}$ whose first $\alpha-1$ rows are zero and whose last row is $(g_{1},\dots,g_{\alpha-1},0)$; hence we may replace $U^{[p^{e-1}]}U^{[p^{e-2}]}\cdots U$ with $U_{e}$. We now define inductively $V_{0}=QR^{\alpha}$ and $V_{i+1}=\Ie_{e}(U_{e}V_{i})+V_{i}$ for all $i\geq0$. Since \[ U_{e}QR^{\alpha}=\{ (0,\dots,0,\sum_{i=1}^{\alpha-1}g_{i}q_{i})^{t} \mid \forall i, q_{i}\in Q\}, \] we have \[ \Ie_{e}(U_{e}QR^{\alpha})=\{ (0,\dots,0,v)^{t} \mid v\in\Ie_{e}(\sum_{i=1}^{\alpha-1}g_{i}Q) \}. \] Therefore, the sequence $\{ V_{i} \}_{i\geq0}$ stabilizes at $V_{1}=\Ie_{e}(U_{e}QR^{\alpha})+QR^{\alpha}$. By definition of $\star$-closure, we have $QR^{\alpha}\subseteq V_{1} \subseteq W$, and so $\ann_{R}R^{\alpha}/V_{1}=Q$.
Furthermore, we have \[ \ann_{R}\frac{R}{\Ie_{e}(\sum_{i=1}^{\alpha-1}g_{i}Q)}=\ann_{R}\frac{R^{\alpha}}{\Ie_{e}(U_{e}QR^{\alpha})}\subseteq Q \text{ since } \Ie_{e}(U_{e}QR^{\alpha}) \subseteq V_{1}, \] which implies that \[ \Ie_{e}(\sum_{i=1}^{\alpha-1}g_{i}Q)=\sum_{i=1}^{\alpha-1}\Ie_{e}(g_{i}Q) \subseteq Q, \] i.e. $\Ie_{e}(g_{i}Q)\subseteq Q \Leftrightarrow g_{i}Q\subseteq Q^{[p^{e}]}$ for all $1\leq i<\alpha$. Hence, $Q$ is $g_{i}$-special for all $1\leq i<\alpha$. On the other hand, we must have $g_{i}\notin P^{[p^{e}]}$ for at least one $i$, for otherwise we would contradict our assumption $\mathcal{K}\nsubseteq PR^{\alpha}$. We can now produce all such $Q$ using the Katzman-Schwede algorithm. \end{enumerate} Let $\tau\subset R$ be the intersection of the finite set of $U_{0}$-special prime ideals of $R$ minimally containing $P$. Let $ \rho:R^{\alpha}\rightarrow R^{\alpha-1}$ be the projection onto the first $\alpha-1$ coordinates, and let $J=\ann_{R}R^{\alpha-1}/\rho(W)$. Then since $U_{0}\rho(W)=\rho(UW)\subseteq \rho(W^{[p]})=\rho(W)^{[p]}$, $J$ is $U_{0}$-special. Note that $Q\subseteq J$, and so $P\subsetneq J$. Assume now that $\mathcal{K}_{0}\nsubseteq PR^{\alpha-1}$. \begin{enumerate} \item[2)] We now compute $(\tau^{[p^{e}]}\mathcal{K}_{0})^{\star U_{0}}$ as the stable value of \begin{align*} L_{0}&=\tau^{[p^{e}]}\mathcal{K}_{0}\\ L_{1}&=\Ie_{1}(U_{0}L_{0})+L_{0}=\tau^{[p^{e-1}]}\Ie_{1}(U_{0}\mathcal{K}_{0})+\tau^{[p^{e}]}\mathcal{K}_{0}=\tau^{[p^{e-1}]}\mathcal{K}_{0}+\tau^{[p^{e}]}\mathcal{K}_{0}\\ \vdots\\ L_{e}&=\tau\mathcal{K}_{0}+L_{e-1}\\ \vdots \end{align*} and we deduce that $\tau\mathcal{K}_{0}\subseteq L_{e}\subseteq (\tau^{[p^{e}]}\mathcal{K}_{0})^{\star U_{0}}$. On the other hand, since $J$ is a $U_{0}$-special ideal strictly containing $P$, $\tau \subseteq \sqrt{J}$. Thus, for all sufficiently large $e$, we have $\tau^{[p^{e}]} \subseteq J$.
Therefore, \[ \tau\mathcal{K}_{0}\subset (\tau^{[p^{e}]}\mathcal{K}_{0})^{\star U_{0}}\subseteq (JR^{\alpha-1})^{\star U_{0}}\subseteq \rho(W)^{\star U_{0}}=\rho(W), \] where the last equality follows from the fact that $UW\subseteq W^{[p]}$. Moreover, since $\tau \nsubseteq P$, we have $\tau\mathcal{K}_{0} \nsubseteq PR^{\alpha-1}$. \item[3)] Now we define $\bar{v}=(v_{1},\dots,v_{\alpha-1},0)^{t}$ for $v=(v_{1},\dots,v_{\alpha-1},v_{\alpha})^{t}\in R^{\alpha}$, and $\widebar{V}=\{\bar{v}\mid v\in V\}$ for any submodule $V\subseteq R^{\alpha}$. Let $l:R^{\alpha-1}\rightarrow R^{\alpha-1}\oplus R$ be the natural inclusion $l(v)=v\oplus0$. Note that $\widebar{V}=l(\rho(V))$. Then we also define $W_{0}=\{ w\in W\mid \rho(w)\in\tau\mathcal{K}_{0} \}$ and note that (2) implies that $\rho(W_{0})=\tau\mathcal{K}_{0}$. We have $W_{0}^{\star U} \subseteq W^{\star U}=W$ and $W_{0}^{\star U}=\Ie_{1}(UW_{0})^{\star U} +W_{0}$. Since $UW_{0}=U\widebar{W_{0}}=Ul(\tau\mathcal{K}_{0})$, $\Ie_{1}(Ul(\tau\mathcal{K}_{0}))^{\star U} \subseteq W_{0}^{\star U}\subseteq W$. On the other hand, if $\Ie_{1}(Ul(\tau\mathcal{K}_{0}))^{\star U}\subseteq PR^{\alpha}$, then \begin{align*} \Ie_{1}(Ul(\tau\mathcal{K}_{0})) \subseteq PR^{\alpha} &\Rightarrow Ul(\tau\mathcal{K}_{0}) \subseteq P^{[p]}R^{\alpha} \Rightarrow \rho(Ul(\tau\mathcal{K}_{0})) \subseteq \rho(P^{[p]}R^{\alpha})\\ &\Rightarrow U_{0}\tau\mathcal{K}_{0} \subseteq P^{[p]}R^{\alpha-1} \Rightarrow \tau^{[p]}U_{0}\mathcal{K}_{0}\subseteq P^{[p]}R^{\alpha-1}\\ &\Rightarrow \Ie_{1}(\tau^{[p]}U_{0}\mathcal{K}_{0})\subseteq PR^{\alpha-1} \Rightarrow \tau\Ie_{1}(U_{0}\mathcal{K}_{0})\subseteq PR^{\alpha-1}\\ &\Rightarrow \tau\mathcal{K}_{0}\subseteq PR^{\alpha-1}, \end{align*} which contradicts (2). Hence, we also have $\Ie_{1}(Ul(\tau\mathcal{K}_{0}))^{\star U} \nsubseteq PR^{\alpha}$. \item[4)] Let $M'$ be a matrix whose columns generate $\Ie_{1}(Ul(\tau\mathcal{K}_{0}))^{\star U}\subseteq W$.
Choose an entry $a$ of $M'$ which is not in $P$. Then \begin{enumerate} \item If $a\in Q$, Lemma \ref{last1} shows that $Q$ is among the minimal prime ideals of $\ann_{R}R^{\alpha}/((P+aR)R^{\alpha})^{\star U}$. \item If $a\notin Q$, we apply Lemma \ref{last2} with a matrix $X$ with entries in $R_{a}$ such that the $\alpha$-th elementary vector $e_{\alpha}\in W_{1}=XW_{a}\cap R^{\alpha}$, and with $U_{1}$ as in Lemma \ref{last2}. Then $R^{\alpha}/W_{1}\cong R^{\alpha-1}/\rho(W_{1})$, and so $Q$ is a minimal prime of $\ann_{R} R^{\alpha-1}/\rho(W_{1})$. Let $U_{2}$ be the top left $(\alpha-1)\times(\alpha-1)$ submatrix of $U_{1}$. Then since $U_{2}\rho(W_{1})\subseteq \rho(U_{1}W_{1})\subseteq \rho(W_{1}^{[p]})=\rho(W_{1})^{[p]}$, $\ann_{R} R^{\alpha-1}/\rho(W_{1})$ is $U_{2}$-special, and so is $Q$. \end{enumerate} \end{enumerate} This shows that in every case $Q$ is an element of a finite set of prime ideals. Hence, there are only finitely many $U$-special prime ideals of $R$ which contain $P$ minimally. \end{proof} The next theorem is a generalization of Theorem 5.5 in \cite{K3} to $R$, and it provides an effective algorithm for finding all $U$-special prime ideals $P$ of $R$ with the property that $\mathcal{K} \nsubseteq PR^{\alpha}$. \begin{theorem}\label{last4} Let $P$ be a $U$-special prime ideal of $R$ such that $\mathcal{K} \nsubseteq PR^{\alpha}$, and let $Q$ be a $U$-special prime ideal minimally containing $P$. Let $M$ be a matrix whose columns generate $(PR^{\alpha})^{\star U}$. \begin{enumerate} \item If $PR^{\alpha} \subsetneq \im M$, then either \begin{enumerate} \item all entries of $M$ are in $Q$, and so there exists an element $a\in Q\setminus P$ such that $Q$ is among the minimal prime ideals of $\ann_{R}R^{\alpha}/((P+aR)R^{\alpha})^{\star U}$, or \item there exists an entry of $M$ which is not in $Q$, and $Q$ is a special prime over an $(\alpha-1)\times(\alpha-1)$ matrix.
\end{enumerate} \item If $PR^{\alpha}=\im M$, then there exist an element $a_{1}\in R\setminus P$, an element $g\in (P^{[p]}:P)$, and an $\alpha\times\alpha$ matrix $V$ such that for some $\mu\gg0$, we have $a_{1}^{\mu}U \equiv gV \text{ modulo } P^{[p]}$. If $d=\det V$, then either \begin{enumerate} \item $d\in P$, and $Q$ is a special prime ideal over an $(\alpha-1)\times(\alpha-1)$ matrix, or \item $d \in Q\setminus P$, and $Q$ is among the minimal prime ideals of $\ann_{R}R^{\alpha}/((P+dR)R^{\alpha})^{\star U}$, or \item $d \notin Q$, and $Q$ is a $g$-special ideal of $R$. \end{enumerate} \end{enumerate} \end{theorem} \begin{proof} Let $W\subseteq R^{\alpha}$ be such that $UW\subseteq W^{[p]}$ and $Q=\ann_{R}R^{\alpha}/W$. When all entries of $M$ are in $P$, $\im M \subseteq PR^{\alpha}$, i.e., $\im M =(PR^{\alpha})^{\star U}=PR^{\alpha}$. Thus, if we are in case 1., we have at least one entry $a$ of $M$ which is not in $P$. If $a\in Q$, by Lemma \ref{last1}, $Q$ is among the minimal primes of $\ann_{R}R^{\alpha}/((P+aR)R^{\alpha})^{\star U}$. If $a\notin Q$, by Lemma \ref{last2}, $Q$ is a minimal prime of $\ann_{R}R^{\alpha}/W_{1}$ such that $U_{1}W_{1}\subseteq W_{1}^{[p]}$, where $U_{1}$ and $W_{1}$ are as in Lemma \ref{last2}. On the other hand, since $a$ becomes a unit in $R_{a}$, we can choose the invertible matrix $X$ with entries in $R_{a}$ such that $W_{1}=XW_{a}\cap R^{\alpha}$ contains the $\alpha$-th elementary vector $e_{\alpha}$. Then we have $R^{\alpha}/W_{1}\cong R^{\alpha-1}/\rho(W_{1})$, where $\rho:R^{\alpha}\rightarrow R^{\alpha-1}$ is the projection onto the first $\alpha-1$ coordinates. Let $U_{2}$ be the top left $(\alpha-1)\times(\alpha-1)$ submatrix of $U_{1}$. Then $\ann_{R}R^{\alpha}/W_{1}=\ann_{R}R^{\alpha-1}/\rho(W_{1})$ and $U_{2}\rho(W_{1})\subseteq \rho(U_{1}W_{1})\subseteq \rho(W_{1}^{[p]})=\rho(W_{1})^{[p]}$. Therefore, $\ann_{R}R^{\alpha}/W_{1}$ is $U_{2}$-special, and so is $Q$.
Assume now that we are in case 2. By the definition of $\star$-closure, $UPR^{\alpha} \subseteq P^{[p]}R^{\alpha}$, i.e., the entries of $U$ are in $(P^{[p]}:P)$. On the other hand, by Lemma \ref{fedder}, if $A=R/P$, then $F_{*}((P^{[p]}:P)/P^{[p]}) \cong\homm_{A}(F_{*}A,A)$ is a rank one $F_{*}A$-module. This means that $(P^{[p]}:P)/P^{[p]}$ is a rank one $A$-module, and so we can find an element $g \in (P^{[p]}:P)\setminus P^{[p]}$ such that $(P^{[p]}:P)/P^{[p]}$ is generated by $g+P^{[p]}$ as an $A$-module. Also, we can find an element $a_{1} \in R\setminus P$ such that the localization of $(P^{[p]}:P)/P^{[p]}$ at $a_{1}$ is generated by $g/1+P_{a_{1}}^{[p]}$ as an $A_{a_{1}}$-module and hence as an $R_{a_{1}}$-module. If $a_{1}\in Q$, we can find $Q$ as in case 1.(a); thus, we assume that $a_{1}\notin Q$. Then for any entry $u$ of $U$, working in the localization, we have an expression \[ \frac{u}{1}+P_{a_{1}}^{[p]}=\frac{r}{a_{1}^{w_{1}}}\frac{g}{1}+P_{a_{1}}^{[p]} \] which implies that $\dfrac{a_{1}^{w_{1}}u-rg}{a_{1}^{w_{1}}} \in P_{a_{1}}^{[p]}$, i.e., $\dfrac{a_{1}^{w_{1}}u-rg}{a_{1}^{w_{1}}}=\dfrac{r'}{a_{1}^{w_{2}}}$, where $r \in R$, $r' \in P^{[p]}$ and $w_{1},w_{2} \in \N$. Thus, \[ a_{1}^{w_{1}+w_{2}}u=a_{1}^{w_{2}}rg+a_{1}^{w_{1}}r'. \] Therefore, we can write $a_{1}^{\mu}U=gV+V'$ for some $\mu\gg 0$ and $\alpha \times \alpha$ matrices $V$ and $V'$ with entries in $R$ and $P^{[p]}$, respectively. Then by Proposition \ref{rmklast}, we may replace $V'$ with the zero matrix, since $\Ie_{1}(V'R^{\alpha}) \subseteq PR^{\alpha}$. Let $d=\det V$. We now consider three cases: \begin{enumerate} \item If $d \in P$, then the image $\bar{d}$ of $d$ in the fraction field $\mathbb{F}$ of $A$ is zero. So we can find an invertible matrix $X$ with entries in $\mathbb{F}$ such that the last column of $VX^{-1}$ is zero, and so is that of $UX^{-1}$. Let $a_{2}$ be the product of all denominators of entries of $X$ and $X^{-1}$, i.e. the entries of $X$ and $X^{-1}$ are in $R_{a_{2}}$.
If $a_{2}\in Q$, we can find $Q$ as in case 1.(a) again; thus, we also assume that $a_{2}\notin Q$. Let $a=a_{1}a_{2}$. By Lemma \ref{last2}, $P$ and $Q$ are $U_{1}$-special prime ideals where $U_{1}=a^{\nu}X^{[p]}UX^{-1}$, whose last column is zero. Then since $PR^{\alpha}=(PR^{\alpha})^{\star U} \Leftrightarrow UPR^{\alpha}\subseteq P^{[p]}R^{\alpha}$, we also have \[ U_{1}PR^{\alpha}=a^{\nu}X^{[p]}UX^{-1}PR^{\alpha}\subseteq a^{\nu}X^{[p]}UPR^{\alpha} \subseteq UPR^{\alpha} \subseteq P^{[p]}R^{\alpha} \] which implies $PR^{\alpha}=(PR^{\alpha})^{\star U_{1}}$. Hence, we can produce $Q$ as in Proposition \ref{last3}. \item If $d\in Q\setminus P$, then by Lemma \ref{last1}, $Q$ is among minimal prime ideals of $\ann_{R}R^{\alpha}/((P+dR)R^{\alpha})^{\star U}$. \item If $d\notin Q$, let $a=da_{1}$, $W=(QR^{\alpha})^{\star U}$ and $X=I_{\alpha}$ be the $\alpha\times\alpha$ identity matrix. Then by Lemma \ref{last2}, $Q$ is a minimal prime ideal of $\ann_{R}R^{\alpha}/W_{1}$ where $W_{1}=(QR^{\alpha})_{a}^{\star U}\cap R^{\alpha}$. By definition of $\star$-closure, $(QR^{\alpha})_{a}^{\star U}=(QR_{a}^{\alpha})^{\star U}$ is the stable value of the sequence \begin{align*} L_{0}&=QR_{a}^{\alpha}\\ L_{1}&=\Ie_{1}(UQR_{a}^{\alpha})+QR_{a}^{\alpha}=\Ie_{1}(gVQR_{a}^{\alpha})+QR_{a}^{\alpha}=\Ie_{1}(gQR^{\alpha})_{a}+QR_{a}^{\alpha}\\ L_{2}&=\Ie_{1}(UL_{1})+L_{1}\\ \vdots& \end{align*} which also equals $(QR_{a}^{\alpha})^{\star gI_{\alpha}}$. The third equality in the line for $L_{1}$ follows from the fact that the $\Ie_{e}(-)$ operation commutes with localization and $V$ is invertible. This implies that $\ann_{R}R^{\alpha}/W_{1}$ is $gI_{\alpha}$-special, and so is $Q$. Therefore, $Q$ is $g$-special and can be computed using the Katzman-Schwede algorithm, since $g \notin P^{[p]}$. \end{enumerate} This method also shows that for a given $U$-special ideal $P$, there are only finitely many $U$-special prime ideals minimally containing $P$.
\end{proof} For the sake of completeness, we now give the proof of Theorem \ref{1}. The main difference between our methods and the methods in \cite[Section 5]{K3} is that we do not use the aid of injective hulls of residue fields, although our results are identical to the results in \cite[Section 5]{K3} over power series rings. \begin{proof}[Proof of Theorem \ref{1}] The proof is by induction on $\alpha$. The case $\alpha=1$ is established in section \ref{section:Katzman-Schwede Algorithm}. Assume that $\alpha>1$ and that the claim is true for $\alpha-1$. Since the zero ideal is always a $U$-special prime ideal of $R$, we start with $0$ and use Theorem \ref{last4} to find $U$-special prime ideals minimally containing $0$. Continuing this process recursively gives us larger $U$-special prime ideals at each step. Therefore, since $R$ is of finite dimension, the number of steps in this process is bounded by the dimension of $R$. Hence, there are only finitely many $U$-special prime ideals with the desired property. \end{proof} Next we turn Theorem \ref{last4} into an algorithm which gives us a generalization of the Katzman-Zhang algorithm to $R$. Note also that over power series rings the following is identical to the Katzman-Zhang algorithm. \subsection*{Input:} An $\alpha\times\alpha$ matrix $U$ with entries in $R$ such that $\mathcal{K}\neq 0$. \subsection*{Output:} The set of all $U$-special prime ideals $P$ of $R$ with the property that $\mathcal{K} \nsubseteq PR^{\alpha}$. \subsection*{Initialize:} $\mathcal{A}_{R^{\alpha}}=\{0\}, \mathcal{B}=\emptyset$. \subsection*{Execute the following:} If $\alpha =1$, use the Katzman-Schwede Algorithm to find the desired primes, put these in $\mathcal{A}_{R^{\alpha}}$, output $\mathcal{A}_{R^{\alpha}}$ and stop.\\ If $\alpha > 1$, then while $\mathcal{A}_{R^{\alpha}} \neq \mathcal{B}$, pick any $P \in \mathcal{A}_{R^{\alpha}}\setminus\mathcal{B}$.
If $\mathcal{K}\subseteq PR^{\alpha}$, add $P$ to $\mathcal{B}$; if not, write $W=(PR^{\alpha})^{\star U}$ as the image of a matrix $M$ and do the following: \begin{enumerate} \item If there is an entry $a$ of $M$ which is not in $P$, then: \begin{enumerate} \item Find the minimal primes of $\ann_{R}\dfrac{R^{\alpha}}{((P+aR)R^{\alpha})^{\star U}}$, and add them to $\mathcal{A}_{R^{\alpha}}$, \item Find an invertible $\alpha \times \alpha$ matrix $X$ with entries in $R_{a}$ such that the $\alpha$-th elementary vector $e_{\alpha}\in XW_{a}\cap R^{\alpha}$, and choose $\nu\gg 0$ such that $U_{1}=a^{\nu}X^{[p]}UX^{-1}$ has entries in $R$. Let $U_{0}$ be the top left $(\alpha-1)\times(\alpha-1)$ submatrix of $U_{1}$. Then apply the algorithm recursively to $U_{0}$ and add the resulting primes to $\mathcal{A}_{R^{\alpha}}$. \end{enumerate} \item If $\im M=PR^{\alpha}$, then find elements $a_{1} \in R\setminus P$, $g \in (P^{[p]}:P)$, an $\alpha \times \alpha$ matrix $V$, and $\mu \gg 0$ such that $a_{1}^{\mu}U \equiv gV$ modulo $P^{[p]}$. Compute $d= \det V$ and do the following: \begin{enumerate} \item If $d \in P$, find an element $a_{2} \in R \setminus P$ and an invertible matrix $X$ with entries in $R_{a_{2}}$ such that the last column of $UX^{-1}$ is zero. Find $\nu \gg 0$ such that the entries of $U_{1}=(a_{1} a_{2})^{\nu} X^{[p]}UX^{-1}$ are in $R$. Let $U_{0}$ be the top left $(\alpha-1)\times(\alpha-1)$ submatrix of $U_{1}$, and $\mathcal{K}_{0}$ be the stable value of $\{\Ie_{e}(\im U_{0}^{[p^{e-1}]}U_{0}^{[p^{e-2}]} \cdots U_{0})\}_{e>0}$ as in Theorem \ref{2}.
Then: \begin{enumerate} \item If $\mathcal{K}_{0} \subseteq PR^{\alpha -1}$, write the last row of the matrix $U_{1}^{[p^{e-1}]}U_{1}^{[p^{e-2}]} \cdots U_{1}$ as $(g_{1}, \dots ,g_{\alpha -1}, 0)$, apply the Katzman-Schwede Algorithm to the case $u=g_{i}$ for each $i$, and add the resulting primes to $\mathcal{A}_{R^{\alpha}}$, \item If $\mathcal{K}_{0}\nsubseteq PR^{\alpha -1}$, find recursively all prime ideals for $U_{0}$ which contain $P$ minimally and denote their intersection by $\tau$. Compute $\Ie_{1}(U_{1}l(\tau \mathcal{K}_{0}))^{\star U_{1}}$, and write this as the image of a matrix $M'$. Find an entry $a'$ of $M'$ not in $P$. Now: \begin{enumerate} \item Add the minimal primes of $\ann_{R}\dfrac{R^{\alpha}}{((P+a'R)R^{\alpha})^{\star U_{1}}}$ to $\mathcal{A}_{R^{\alpha}}$, \item Find an invertible matrix $X$ with entries in $R_{a'}$ such that the $\alpha$-th elementary vector $e_{\alpha}\in X(\im M')_{a'} \cap R^{\alpha}$. Find $\nu \gg 0$ such that $U_{2}=(a')^{\nu}X^{[p]}U_{1}X^{-1}$ has entries in $R$. Let $U_{3}$ be the top left $(\alpha-1)\times(\alpha-1)$ submatrix of $U_{2}$. Apply the algorithm recursively to $U_{3}$, and add the resulting primes to $\mathcal{A}_{R^{\alpha}}$. \end{enumerate} \end{enumerate} \item If $d \notin P$, then: \begin{enumerate} \item add the minimal primes of $\ann_{R}\dfrac{R^{\alpha}}{((P+dR)R^{\alpha})^{\star U}}$ to $\mathcal{A}_{R^{\alpha}}$, \item apply the Katzman-Schwede algorithm to the case $u=g$, and add the resulting primes to $\mathcal{A}_{R^{\alpha}}$. \end{enumerate} \end{enumerate} \item Add $P$ to $\mathcal{B}$. \end{enumerate} Output $\mathcal{A}_{R^{\alpha}}$ and stop. Since all the operations used in the above algorithm are defined for localizations of $R$, we can apply our algorithm to any localization of $R$ at a prime ideal $\mathfrak{p}$. In the rest of this section, we investigate the relations between the output sets of our algorithm applied to $R$ and $R_{\mathfrak{p}}$.
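To make the algorithm's input and output concrete, we include a small worked example. It is only a sketch: the ring, the matrix and the primes below are our own illustrative choices and are not taken from \cite{K3} or \cite{K4}.

```latex
% Toy run (illustrative): R = k[x,y] with k perfect of characteristic p,
% alpha = 1, and U = (u) with u = (xy)^{p-1}.
Let $R=k[x,y]$ with $k$ a perfect field of characteristic $p$, and take
$\alpha=1$ and $U=(u)$ with $u=(xy)^{p-1}$. Then
\[
u^{1+p+\cdots+p^{e-1}}=(xy)^{p^{e}-1},
\]
and since both exponents are smaller than $p^{e}$, we get
$\Ie_{e}(u^{1+p+\cdots+p^{e-1}}R)=R$ for every $e$. Hence
$\mathcal{K}=R\neq0$, and the algorithm must return every $u$-special
prime. A prime $P$ is $u$-special exactly when $uP\subseteq P^{[p]}$;
for instance $u\cdot x=x^{p}y^{p-1}\in(x^{p})$, so $(x)$ is
$u$-special, and likewise $(y)$ and $(x,y)$, whereas
$u\cdot(x-1)=x^{p-1}y^{p-1}(x-1)\notin((x-1)^{p})$, so $(x-1)$ is not.
The output is $\mathcal{A}_{R}=\{0,(x),(y),(x,y)\}$, and running the
same computations over $R_{\mathfrak{p}}$ with $\mathfrak{p}=(x)$
leaves exactly the members contained in $\mathfrak{p}$, namely
$\{0,xR_{\mathfrak{p}}\}$.
```

This is the behaviour one expects in general: the output over $R_{\mathfrak{p}}$ consists of the localizations of those output primes contained in $\mathfrak{p}$.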
\begin{lemma} \label{lemmalast} Let $\mathcal{R}$ be $R$ or $R_{\mathfrak{p}}$ or $\widehat{R_{\mathfrak{p}}}$, and let $P$ be a prime ideal of $R$ contained in $\mathfrak{p}$. Then $P$ is a $U$-special ideal of $R$ if and only if $P\mathcal{R}$ is a $U$-special ideal of $\mathcal{R}$. \end{lemma} \begin{proof} Let $P$ be a prime ideal of $R$. Then \begin{align*} P \text{ is } U\text{-special }&\Leftrightarrow P=\ann_{R}R^{\alpha}/(PR^{\alpha})^{\star U}\\ & \Leftrightarrow P\mathcal{R}=\ann_{\mathcal{R}}\mathcal{R}^{\alpha}/(P\mathcal{R}^{\alpha})^{\star U} \Leftrightarrow P\mathcal{R} \text{ is } U\text{-special.} \end{align*} \end{proof} Our next theorem gives the exact relation between the output sets $\mathcal{A}_{R^{\alpha}}$ and $\mathcal{A}_{R_{\mathfrak{p}}^{\alpha}}$ of our algorithm for $R$ and $R_{\mathfrak{p}}$, respectively. \begin{theorem} \label{3} Let $U$ be an $\alpha \times \alpha$ matrix with entries in $R$. Our algorithm commutes with localization: if $\mathcal{A}_{R^{\alpha}}$ and $\mathcal{A}_{R_{\mathfrak{p}}^{\alpha}}$ are the output sets of our algorithm for $R$ and $R_{\mathfrak{p}}$, respectively, then \[ \mathcal{A}_{R_{\mathfrak{p}}^{\alpha}}= \{ PR_{\mathfrak{p}} \mid P \in \mathcal{A}_{R^{\alpha}} \text{ and } P \subseteq \mathfrak{p} \}. \] \end{theorem} Before proving our claim, we need a remark which we will use in step 2 of the proof. \begin{remark} Keeping the notation of the above theorem, for any prime ideal $P\subseteq\mathfrak{p}$ of $R$ and any submodule $K$ of $R^{\alpha}$, we have the property that $K \subseteq PR^{\alpha} \Leftrightarrow K_{\mathfrak{p}} \subseteq PR_{\mathfrak{p}}^{\alpha}$. We already know that $K \subseteq PR^{\alpha}$ implies $K_{\mathfrak{p}} \subseteq PR_{\mathfrak{p}}^{\alpha}$. For the converse, suppose, for contradiction, that there is an element $k=(k_{1}, \dots ,k_{\alpha})^{t} \in K\setminus PR^{\alpha}$ where $k_{i} \in R\setminus P$ for some $i$. Then there exists an element $s \in R\setminus \mathfrak{p}$ such that $sk \in PR^{\alpha}$, i.e.
$sk_{i} \in P$. Since $P$ is prime, $k_{i} \in P$ or $s \in P$; the first is impossible by the choice of $k_{i}$, and so is the second, since $s \notin \mathfrak{p}\supseteq P$. Therefore, $K_{\mathfrak{p}} \subseteq PR_{\mathfrak{p}}^{\alpha}$ implies that $K \subseteq PR^{\alpha}$. \end{remark} \begin{proof} By Theorem \ref{al1}, the Katzman-Schwede Algorithm commutes with localization. Therefore, we can, and do, assume $\alpha >1$. Let $P$ be the prime ideal of $R$ in the initial step of our algorithm, and let $R_{\mathfrak{p}}$ be the localization of $R$ at a prime ideal $\mathfrak{p}$ containing $P$. Since the stable value of $\{\Ie_{e}(U^{[p^{e-1}]}U^{[p^{e-2}]} \cdots UR_{\mathfrak{p}}^{\alpha})\}_{e\geq1}$ is equal to $\mathcal{K}_{\mathfrak{p}}$, the localization at $\mathfrak{p}$ of the stable value $\mathcal{K}$ of $\{\Ie_{e}(U^{[p^{e-1}]}U^{[p^{e-2}]} \cdots UR^{\alpha})\}_{e\geq1}$, we have $\mathcal{K} \subseteq PR^{\alpha} \Leftrightarrow \mathcal{K}_{\mathfrak{p}}\subseteq PR_{\mathfrak{p}}^{\alpha}$ by the above remark. Since $\star$-closure commutes with localization, whenever we write $(PR^{\alpha})^{\star U}$ as the image of a matrix $M$ with entries in $R$, we can write $(PR_{\mathfrak{p}}^{\alpha})^{\star U}=(PR^{\alpha})^{\star U}R_{\mathfrak{p}}$ as the image of the same matrix, working in $R_{\mathfrak{p}}$. \begin{enumerate} \item Since $a \notin P \Leftrightarrow a \notin PR_{\mathfrak{p}}$, $a$ is an entry of $M$ not in $PR_{\mathfrak{p}}$. Then, by Lemma \ref{sp2}, step 1.(a) commutes with localization. For step 1.(b), we can take the same matrix $X$ with entries in $R_{a}$, working in $R_{\mathfrak{p}}$. Then, doing the operations in $R_{\mathfrak{p}}$, we see that $e_{\alpha} \in X(\im M)_{a}\cap R^{\alpha}$ implies that $e_{\alpha} \in (X(\im M)_{a}\cap R^{\alpha})R_{\mathfrak{p}}\cong X(\im M)_{a}\cap R_{\mathfrak{p}}^{\alpha}$. Also, $U_{1}=a^{\nu}X^{[p]}UX^{-1}$ has entries in $R$ (and in $R_{\mathfrak{p}}$) for the same $\nu \gg 0$. Therefore, we end up with the same matrix $U_{0}$.
\item We first note that $(PR^{\alpha})^{\star U}=PR^{\alpha} \Leftrightarrow (PR_{\mathfrak{p}}^{\alpha})^{\star U}=PR_{\mathfrak{p}}^{\alpha}$. Therefore, if $(PR_{\mathfrak{p}}^{\alpha})^{\star U}=PR_{\mathfrak{p}}^{\alpha}$, we can carry out the same construction working in $R_{\mathfrak{p}}$, i.e., we can take $a_{1} \in R_{\mathfrak{p}}\setminus PR_{\mathfrak{p}}$, $g \in ((PR_{\mathfrak{p}})^{[p]}:PR_{\mathfrak{p}})$, and an $\alpha \times \alpha$ matrix $V$ for the same $\mu \gg 0$ such that $a_{1}^{\mu}U\equiv gV$ modulo $(PR_{\mathfrak{p}})^{[p]}$, and compute $d=\det V$. \begin{enumerate} \item For any $r \in R$, we have the property that $r \in P \Leftrightarrow r \in PR_{\mathfrak{p}}$. Thus, if $d \in PR_{\mathfrak{p}}$, then we can carry out the same construction again, and so we can take $a_{2} \in R_{\mathfrak{p}}\setminus PR_{\mathfrak{p}}$ and the same invertible matrix $X$ with entries in $R_{a_{2}}$ (and in $(R_{\mathfrak{p}})_{a_{2}}\cong (R_{a_{2}})_{\mathfrak{p}}$) such that the last column of $UX^{-1}$ is zero, working in $R_{\mathfrak{p}}$. We can also take the same $\nu \gg 0$ such that the entries of $U_{1}=(a_{1}a_{2})^{\nu} X^{[p]}UX^{-1}$ are in $R$ (and in $R_{\mathfrak{p}}$), and $U_{0}$ to be the same matrix. In addition, since the $\Ie_{e}(-)$ operation commutes with localization, if we do the calculations in $R_{\mathfrak{p}}$, then the stable value of \[ \{\Ie_{e}(U_{0}^{[p^{e-1}]}U_{0}^{[p^{e-2}]} \cdots U_{0}R_{\mathfrak{p}}^{\alpha-1})\}_{e>0} \] is equal to the stable value of $$\{\Ie_{e}(U_{0}^{[p^{e-1}]}U_{0}^{[p^{e-2}]} \cdots U_{0}R^{\alpha-1})R_{\mathfrak{p}}\}_{e>0}$$ which is $\mathcal{K}_{0}R_{\mathfrak{p}}$.
Now, since $\mathcal{K}_{0} \subseteq PR^{\alpha -1} \Leftrightarrow \mathcal{K}_{0}R_{\mathfrak{p}} \subseteq PR_{\mathfrak{p}}^{\alpha -1}$, we proceed as follows: \begin{enumerate} \item Working in $R_{\mathfrak{p}}$, if $\mathcal{K}_{0}R_{\mathfrak{p}} \subseteq PR_{\mathfrak{p}}^{\alpha -1}$, we can write the last row of the matrix $U_{1}^{[p^{e-1}]}U_{1}^{[p^{e-2}]} \cdots U_{1}$ as $(g_{1}, \dots ,g_{\alpha -1}, 0)$. \item Working in $R_{\mathfrak{p}}$, if $\mathcal{K}_{0}R_{\mathfrak{p}} \nsubseteq PR_{\mathfrak{p}}^{\alpha -1}$, we can apply our algorithm recursively to $U_{0}$, find all prime ideals which contain $PR_{\mathfrak{p}}$ minimally, and denote their intersection by $\bar{\tau}$, which is $\tau R_{\mathfrak{p}}$, since we have shown that all steps of the algorithm commute with localization. Then we have \[ \Ie_{1}(U_{1}\bar{l}(\bar{\tau} \mathcal{K}_{0}R_{\mathfrak{p}}))^{\star U_{1}}=(\Ie_{1}(U_{1}l(\tau \mathcal{K}_{0}))^{\star U_{1}})R_{\mathfrak{p}}, \] where $\bar{l}:R_{\mathfrak{p}}^{\alpha -1} \rightarrow R_{\mathfrak{p}}^{\alpha -1}\oplus R_{\mathfrak{p}}$ is the extension map induced by $l$. \end{enumerate} \end{enumerate} \end{enumerate} All other steps are similar to the previous ones, and so all steps of our algorithm commute with localization. Since our algorithm commutes with localization, by Lemma \ref{lemmalast}, the output set $\mathcal{A}_{R_{\mathfrak{p}}^{\alpha}}$ is the set of all $U$-special prime ideals of $R_{\mathfrak{p}}$, and hence, $$\mathcal{A}_{R_{\mathfrak{p}}^{\alpha}}= \{ PR_{\mathfrak{p}} \mid P \in \mathcal{A}_{R^{\alpha}} \text{ and } P \subseteq \mathfrak{p} \}.$$ \end{proof} Let $U$ be an $\alpha\times\alpha$ matrix with entries in $R$, and let $\mathcal{A}_{R^{\alpha}}$ and $\mathcal{A}_{S^{\alpha}}$ be the output sets of our algorithm for $R$ and $S$, respectively. Let $P$ be a $U$-special prime ideal of $R$, i.e. $P\in\mathcal{A}_{R^{\alpha}}$.
Since $PS$ is not always a prime ideal of $S$, we do not have a relation between $\mathcal{A}_{R^{\alpha}}$ and $\mathcal{A}_{S^{\alpha}}$ like the one in Theorem \ref{3}. However, by Lemma \ref{lemmalast}, we can say that the minimal prime ideals of $PS$ are in $\mathcal{A}_{S^{\alpha}}$. Therefore, the set of minimal prime ideals of elements from $\{ PS \mid P\in\mathcal{A}_{R^{\alpha}} \}$ is contained in $\mathcal{A}_{S^{\alpha}}$. \section{An Application to Lyubeznik's F-modules} \label{section:An Application to Lyubeznik's F-modules} In this section, we investigate the connections between special ideals and local cohomology modules using Lyubeznik's theory of $F$-finite $F$-modules. By Example \ref{t3}, the $i$-th local cohomology module of $R$ with respect to an ideal $I$ is an $F$-finite $F$-module, and there exists a finitely generated module $M$ with an injective map $\beta:M\rightarrow F_{R}(M)$ such that \[ H_{I}^{i}(R)=\varinjlim(M\xrightarrow{\beta}F_{R}(M) \xrightarrow{F_{R}(\beta)} F_{R}^{2}(M) \xrightarrow{F_{R}^{2}(\beta)}\cdots) \] where $\beta$ is a root morphism. Since $M$ is finitely generated, we also have $M \cong \coker A=R^{\alpha}/\im A$ for some matrix $A$ with entries in $R$. Hence, \[ H_{I}^{i}(R)\cong\varinjlim(\coker A \xrightarrow{U} \coker A^{[p]}\rightarrow \cdots) \] for some $\alpha \times \alpha$ matrix $U$ with entries in $R$ such that $U\im A \subseteq \im A^{[p]}$. Furthermore, $U$ defines an injective map on $\coker A$, since $\beta$ is a root morphism. \begin{remark} \label{l2} Following \cite[Section 4]{L1}, if $(R,\mathfrak{m})$ is a local ring, then for any $F$-finite $F$-module $\mathcal{M}$, there exists a smallest $F$-submodule $\mathcal{N}$ of $\mathcal{M}$ with the property that $\dim_{R}\supp\mathcal{M}/\mathcal{N}=0$. Hence, $\mathcal{M}/\mathcal{N}\cong E^{k}$ as $R$-modules for some $k\in\N$, where $E$ is the injective hull of the residue field of $R$.
\end{remark} \begin{definition}\label{crk} If $R$ is local, we define the \index{corank}corank of an $F$-finite $F$-module $\mathcal{M}$ to be the number $k$ in Remark \ref{l2}, and denote it by $\crk\mathcal{M} =k$. \end{definition} In Section 4 of \cite{L1}, Lyubeznik uses the theory of corank to shed more light on the notion of $F$-depth of a scheme in characteristic $p$, which is analogous to the notion of DeRham depth of a scheme in characteristic $0$. Following \cite[Section 4]{L1}, in equicharacteristic $0$ one can interpret the DeRham depth in terms of closed points only. Proposition 4.14 in \cite{L1} shows that in characteristic $p$ we cannot interpret the $F$-depth of a scheme $Y$ in terms of closed points only. To show this, Lyubeznik proves that there are only finitely many prime ideals $P$ of $A$ such that $\crk(H_{IA_{P}}^{i}(A_{P})) \neq 0$. Here $Y=\spec B$, where $B$ is a finitely generated algebra over a regular local ring $S$, $A=S[x_{1},\cdots,x_{n}]$ and $I$ is the kernel of the surjection $A\rightarrow B$. Our next theorem not only reproves this result but also gives us an effective way to compute the desired prime ideals. \begin{theorem}\label{m1} Let $I$ be an ideal of $R$ and $P \subset R$ a prime ideal. If $H_{IR_{P}}^{i}(R_{P})$ has nonzero corank, then $P$ is in the output of our algorithm introduced in Section \ref{section:Katzman-Zhang Algorithm}, i.e. \[ \crk(H_{IR_{P}}^{i}(R_{P})) \neq 0 \Rightarrow P \in \mathcal{A}_{R^{\alpha}} \] for some $\alpha\times\alpha$ matrix $U$ with entries in $R$. \end{theorem} \begin{proof} Since $H_{IR_{P}}^{i}(R_{P}) \cong R_{P}\otimes_{R} H_{I}^{i}(R)$, we have \[ H_{IR_{P}}^{i}(R_{P}) \cong \varinjlim(\coker A_{P} \xrightarrow{U_{P}} \coker A_{P}^{[p]}\rightarrow \cdots) \] where $A_{P}$ and $U_{P}$ are the localizations of $A$ and $U$, respectively. We also have that $U_{P}$ defines an injective map on $\coker A_{P}$, since $U$ defines a root morphism for $H_{I}^{i}(R)$.
$\crk(H_{IR_{P}}^{i}(R_{P})) \neq 0$ implies that there exists a proper $F_{R_{P}}$-submodule $\mathcal{N}$ of $H_{IR_{P}}^{i}(R_{P})$ such that $\dim_{R_{P}}\supp (H_{IR_{P}}^{i}(R_{P})/\mathcal{N})=0$. Since $H_{IR_{P}}^{i}(R_{P})$ is $F_{R_{P}}$-finite, we have \[ \mathcal{N}= \varinjlim (N \rightarrow F_{R_{P}}(N) \rightarrow F_{R_{P}}^{2}(N) \rightarrow \cdots) \] where $N=\mathcal{N} \cap \coker A_{P}$ is an $R_{P}$-submodule of $\coker A_{P}$. Thus, $N \cong V/\im A_{P}$ for some submodule $V \subseteq R_{P}^{\alpha}$ such that $U_{P}V \subseteq V^{[p]}$. Then \begin{align*} H_{IR_{P}}^{i}(R_{P})/\mathcal{N} &\cong \varinjlim(\coker A_{P}/N \xrightarrow{U_{P}} F_{R_{P}}(\coker A_{P}/N) \rightarrow \cdots) \\ &\cong \varinjlim(R_{P}^{\alpha}/V \xrightarrow{U_{P}} R_{P}^{\alpha}/V^{[p]} \rightarrow \cdots ). \end{align*} Furthermore, \begin{align*} \dim_{R_{P}}\supp (H_{IR_{P}}^{i}(R_{P})/\mathcal{N})=0 &\Rightarrow \ass(H_{IR_{P}}^{i}(R_{P})/\mathcal{N})=\{PR_{P}\} \\ &\Rightarrow \ass (R_{P}^{\alpha}/V)=\{PR_{P}\} \\ &\Rightarrow \ann_{R_{P}}(R_{P}^{\alpha}/V) \text{ is $PR_{P}$-primary.} \end{align*} Therefore, $\ann_{R_{P}}(R_{P}^{\alpha}/V)$ is $U_{P}$-special and so is $PR_{P}$ by Lemma \ref{sp1}, because it is the only minimal prime ideal of $\ann_{R_{P}}(R_{P}^{\alpha}/V)$, i.e. $PR_{P} \in \mathcal{A}_{R_{P}^{\alpha}}$. Then by Theorem \ref{3}, $P \in \mathcal{A}_{R^{\alpha}}$. \end{proof} \begin{corollary} \label{m2} $\mathcal{C}_{R}:=\{P \in \mathcal{A}_{R^{\alpha}} \mid (\im A_{P}+PR_{P}^{\alpha})^{\star U_{P}}\neq R_{P}^{\alpha}\}$ is the set of all prime ideals of $R$ which satisfy $\crk(H_{IR_{P}}^{i}(R_{P}))\neq 0$. \end{corollary} \begin{proof} By Theorem \ref{m1}, $\crk(H_{IR_{P}}^{i}(R_{P}))\neq 0$ implies that $PR_{P}$ is a $U_{P}$-special prime ideal of $R_{P}$ such that $PR_{P}=\ann_{R_{P}}(R_{P}^{\alpha}/W)$ for some proper submodule $W\subset R_{P}^{\alpha}$, where $\im A_{P} \subseteq W$ and $A_{P}$ is as in Theorem \ref{m1}.
Since $(\im A_{P}+PR_{P}^{\alpha})^{\star U_{P}}$ is the smallest submodule of $R_{P}^{\alpha}$ which satisfies $PR_{P}=\ann_{R_{P}}(R_{P}^{\alpha}/(\im A_{P}+PR_{P}^{\alpha})^{\star U_{P}})$, if $(\im A_{P}+PR_{P}^{\alpha})^{\star U_{P}}=R_{P}^{\alpha}$, then we have a contradiction with the existence of $W$. Hence, the set of prime ideals of $R$ which satisfy $\crk(H_{IR_{P}}^{i}(R_{P})) \neq 0$ is the set $\{P \in \mathcal{A}_{R^{\alpha}} \mid (\im A_{P}+PR_{P}^{\alpha})^{\star U_{P}}\neq R_{P}^{\alpha}\}$. \end{proof} Corollary \ref{m2} says that if we want to compute the prime ideals of $R$ which satisfy $\crk(H_{IR_{P}}^{i}(R_{P}))\neq 0$, we pick an element $P\in \mathcal{A}_{R^{\alpha}}$ and check whether $(\im A_{P}+PR_{P}^{\alpha})^{\star U_{P}}$ is equal to $R_{P}^{\alpha}$. \subsection*{Acknowledgements} I would like to thank my supervisor Mordechai Katzman for his support, guidance and patience throughout this project. Without his helpful advice and insights this preprint would not have been possible.
\section{INTRODUCTION} Growing concerns about environmental problems and the energy crisis demand the urgent development of affordable and clean renewable energy sources as a viable replacement for depleting fossil fuels. In this regard, electrochemical water-splitting is an effective and sustainable approach to generate a massive impact in clean-energy technologies\cite{RogerI,LewisNS,WalterMG}. However, the currently used expensive platinum group metals (PGMs) limit their large-scale applications, thereby promoting continuous research attempts toward highly active and non-noble metal electrocatalysts. Several promising candidates with zero or reduced content of PGMs are being considered, such as transition metals\cite{McKoneJ} and their dichalcogenides\cite{YangJ,Voiry,ChenZ,XieJZ}, phosphides\cite{FengY}, nitrides\cite{CaoBF}, borides\cite{VrubelH}, carbides\cite{ChenWF} and metal-free carbon nitrides\cite{MerletC,MengSL}. Although massive experimental and theoretical studies have demonstrated the use of such catalysts in the hydrogen evolution reaction (HER), the overall catalytic activity for large-scale hydrogen production is still limited by few active sites and poor electrical transport\cite{QTangD}. Therefore, it is of paramount significance to develop a broad range of catalytic materials with more active sites and higher conductivity, for which a fundamental understanding from an atomic-scale point of view is highly essential. \begin{figure*}[t] \centering {{\includegraphics[height = 3.5in,width=7.5in]{Fig1.jpg}}} \caption{Workflow of the machine learning approach, starting from data processing, feature engineering, model training, model selection and property prediction for screening of ideal HER catalysts from MXenes.
From first-principles calculations, the materials' space is generated from a large number of possible combinations between selected elements and functionalizations.} \end{figure*} MXenes, unique accordion-like structures exfoliated from MAX phases (M = transition metal; A = $\emph{p}$-block element; X = C and/or N), have recently attracted significant attention in electronic devices\cite{AChandra,MKhazaeiA,CSiK,MKhazae}, electromagnetic shielding\cite{FShahzad}, electrocatalysis\cite{JRan,ZLi,ZWShe,JZan}, and energy storage and conversion\cite{XTangX,MRLukat,MNagui,JZho,Vanshree1,Vanshree2} applications. Especially, the long-term structural stability in acidic electrolytes\cite{PLiJ}, large active surface area (21 m$^2$/g)\cite{BWangAZ} and high electrical conductivity (4600 $\pm$ 1100 S/cm)\cite{ALipatov} make them suitable candidates for HER catalysis. In MXenes (M$_{n+1}$X$_n$T$_x$; n=1, 2, 3), tuning of M (transition metal), X (C and/or N) and T (surface functionalization) is found to improve the hydrogen evolution activity\cite{XHuiX}. For instance, manipulation of transition metal atoms in M$_{n+1}$X$_n$T$_x$ (M$_{n+1}$X$_n$O$_2$, M$_{2}$M$^{\prime}$X$_2$O$_2$, and M$_{2}$M$^{\prime}_2$X$_3$O$_2$) leads to the identification of 110 unexplored candidates with better HER performance\cite{KuangP}. Sun et al.\cite{XSunJ} screened 271 different configurations of M$_{n+1}$X$_n$ by tuning X from C to B and found that the Mn/Co$_2$B$_2$, Os/Co$_2$B$_2$, Co$_2$B$_2$, Pt/Ni$_2$B$_2$ and Co/Ni$_2$B$_2$ candidates surpass the HER activity of PGMs. Doping of $p$-block elements (surface functionalization) modulates the in-plane surface atom activity and improves the HER performance, thereby leading to an optimal HER Gibbs free energy\cite{YYoonA}. MXenes can also be used as substrates in HER applications because of their adjustable surface structures as well as promising physicochemical properties\cite{GaoGO,ChengYZ}.
In such cases, the performance of Ti$_2$CO$_2$ at various hydrogen coverages is found to improve by doping of an S atom to substitute the surficial O atom\cite{WangSC}. NiS$_2$@VMXene exhibits long-term durability and low HER overpotential\cite{KuangP}. Optimizing a catalyst over the configurational space offered by the broad range of MXenes and their active sites via traditional experimental and theoretical screening is particularly challenging, time-consuming and expensive. Thus, finding suitable advanced methods becomes an essential task for accelerating the rational design of efficient catalysts. The screening of potential MXene based catalysts from such a tremendous combinatorial and structural space requires a huge amount of computational resources\cite{Mosesab}. In traditional routine simulations, the H-adsorption energy is typically the most important parameter to evaluate the HER activity\cite{Marti}. According to the Sabatier principle, the binding of hydrogen should be neither too strong nor too weak to obtain the best catalytic activity\cite{Greeley}. However, such direct simulations might not provide complete information regarding HER performance since the descriptors in various reaction processes are equally important. In this regard, the incorporation of physical interactions through scientific knowledge into models trained by data-driven approaches has gradually emerged as a powerful and reliable tool for hastening the identification of catalysts\cite{Oriol,Mazheik,Emanuele}. Especially, random forest regression (RFR), support vector regression (SVR), kernel ridge regression (KRR) and Elman Artificial Neural Networks (Elman ANNs) algorithms are typically employed to predict the Gibbs free energy, which is a widely accepted descriptor of HER activity.
For instance, the regularized random forest learning method reveals the Ni-Ni bond length as the primary feature in determining the binding strength of hydrogen on the Ni$_2$P (0001) plane\cite{RBWexler}. Sun et al.\cite{MSun} predicted the HER performance of graphdiyne based atomic catalysts using the bag-tree learning model. These results demonstrate that ML models not only discover novel catalyst materials, but also empower an in-depth understanding of the fundamental correlation between catalytic structures and their properties. This is highly essential to modify the strategies for developing new design principles in revamping the electrocatalytic efficiency. Here, we explore a robust and more broadly applicable multistep workflow as shown in Fig. 1, where the ab initio adsorption properties are combined with a supervised toolbox of machine learning algorithms for source, verification and predictions. For this purpose, a data set of 4,500 MM$^{\prime}$XT$_2$-type MXenes was constructed and their HER performance systematically investigated. Among them, 1,125 systems (25\% of the materials' space) were randomly selected to evaluate the HER activity using DFT calculations as well as for training the ML model. Predominating indicators were then employed to build an interpretable ML model that predicts the HER performance of the remaining 75\% of the materials' space. Overall, the ML model achieves prediction accuracy on par with the first-principles calculations. It deciphers the underlying factors that govern the HER performance and enables a coherent path to investigate a large \begin{figure}[H] \centering {{\includegraphics[height = 7.9in,width=3.5in]{Fig2.jpg}}} \caption{(a) Selected elements for MM$^{\prime}$XT$_2$ MXenes (M/M$^{\prime}$ = Sc, Ti, V, Cr, Mn, Y, Zr, Nb, Mo or W; X = B, C or N; T = O, F, Cl or S). Optimized structures of (b) pristine and (c) functionalized MXenes.
Color code: M/M$^{\prime}$-, X- and T- layers are presented in blue/violet, dark grey and pink colors, respectively. The numbers 1, 2 and 3 in circles indicate the possible adsorption sites. (d) Computed normalized cohesive energies $\overline{E}_{coh}$ (eV/atom) and (e) corresponding distribution of MM$^{\prime}$X, MM$^{\prime}$XCl$_2$, MM$^{\prime}$XF$_2$, MM$^{\prime}$XO$_2$ and MM$^{\prime}$XS$_2$ MXenes. } \end{figure} number of MXene configurations. \section{Results} MM$^{\prime}$XT$_2$-type (M/M$^{\prime}$ = Sc, Ti, V, Cr, Mn, Y, Zr, Nb, Mo or W; X = B, C or N; T = O, F, Cl or S) MXenes were constructed through quintuple atomic layers of T-M-X-M$^{\prime}$-T, where the X layers are alternately sandwiched between different metal layers (M/M$^{\prime}$) and the surfaces are terminated with functional groups (T) as shown in Fig. 2a-c. Possible combinations of metal layers were then considered to generate 1,500 MM$^{\prime}$XT$_2$-type MXenes. Initially, we evaluated the cohesive energies to understand the stability trends in these configurations. The computed normalized cohesive energies $\overline{E}_{coh}$ (eV/atom) and the corresponding distribution of various functionalized MXenes are shown in Fig. 2d,e. From the viewpoint of functionalization, the lowest $\overline{E}_{coh}$ is obtained for -O terminated MXenes when compared with other terminations such as -F, -Cl and -S, which indicates that the former are more likely to be synthesized during experimentation. Moreover, the structural stability of the terminated MXenes increases in the order of MM$^{\prime}$X $<$ MM$^{\prime}$XCl$_2$ $<$ MM$^{\prime}$XS$_2$ $<$ MM$^{\prime}$XF$_2$ $<$ MM$^{\prime}$XO$_2$, representing better stability for fully functionalized MXenes with respect to their pristine counterparts. The observed behavior also confirms why MXenes are usually terminated with functional groups during experimental synthesis\cite{NaguibMM}.
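The normalized cohesive energy used for this stability ranking is straightforward to evaluate once the total and free-atom energies are available; the sketch below uses hypothetical placeholder energies, not values from this work:

```python
def normalized_cohesive_energy(e_total, free_atom_energies):
    """E_coh per atom: subtract the free-atom energies from the total
    energy of the system, then divide by the number of atoms."""
    e_coh = e_total - sum(free_atom_energies)
    return e_coh / len(free_atom_energies)

# Hypothetical DFT energies (eV) for one MM'XO2 formula unit:
# M, M', X and two O atoms.
e_bar = normalized_cohesive_energy(-45.0, [-2.0, -2.1, -1.5, -0.9, -0.9])
print(round(e_bar, 3))  # -7.52
```

A more negative $\overline{E}_{coh}$ then indicates stronger binding, which is the ordering criterion used above.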
In addition, the $\overline{E}_{coh}$ values of MM$^{\prime}$CT$_2$ are lower than those of MM$^{\prime}$BT$_2$ and MM$^{\prime}$NT$_2$, suggesting that the carbon based MXenes are more stable than the boride and nitride based MXenes (see Fig S1a,b). This also provides an alternative explanation for the poor stability in the etching of boride and nitride based MXenes during the synthesis process\cite{Soundir,NgVMH}. \subsection{Adsorption energy distribution} Typically, the availability of active catalytic sites on the surface of MXenes is highly required to carry out the HER activity: the larger the number of active sites on the surface, the stronger will be the catalytic performance. There are three possible adsorption sites available on MXene surfaces for hydrogen adsorption. Site-1 represents that the H-atom is adsorbed directly on the innermost metal atomic layer of the MXenes, site-2 indicates that the H-atom is adsorbed on the top of the outermost M-atom of MXenes and site-3 denotes that the H-atom is adsorbed directly above the X-atom of MXene structures. Overall, the adsorption of H-atom on the three available active sites of 1,500 MM$^{\prime}$XT$_2$-type MXenes leads to 4,500 configurations. Among them, 1,125 systems (25\% of the materials' space) were randomly selected to evaluate the HER activity using DFT calculations as well as for training the ML models; while the catalytic performance for the rest of the materials was predicted using the well-trained ML model. Based on the computational hydrogen electrode (CHE) model\cite{JGreeley}, the Gibbs free energy of adsorbed hydrogen ($\Delta$G$_{H}$) is a universal indicator to evaluate the HER performance. \begin{figure*}[ht] \centering {{\includegraphics[height = 4.2in,width=7in]{Fig3.jpg}}} \caption{(a) DFT computed hydrogen adsorbed Gibbs free energies ($\Delta$G$_{H}$) for randomly selected 1,125 MM$^{\prime}$XT$_2$-type MXenes. 
$\Delta$G$_{H}$ in the range of -0.1 to +0.1 eV is represented in the yellow shadow region. (b) Normalized cohesive energies $\overline{E}_{coh}$ versus $\Delta$G$_{H}$. The top 10 promising candidates with better stability and high HER activity are highlighted in the yellow region. (c) The free energy profile of hydrogen evolution for the top 10 potential candidates. (d) Distribution of $\Delta$G$_{H}$ with respect to functionalization and type of active sites. (e) Optimized geometries of hydrogen adsorbed on the top 10 promising MXenes. Here blue, violet, grey, pink and white balls represent M, M$^{\prime}$, X, T and H atoms, respectively. } \end{figure*} Accordingly, a $\mid\Delta G_{H}\mid$ close to zero signifies prominent HER activity of the catalyst, while a negative or positive $\Delta$G$_{H}$ with too strong or too weak adsorption will tend to reduce the overall reaction rate. The HER performance is highly dependent on functionalization (see Fig. 3a-e and S2); for instance, most of the F- and Cl- terminated MXenes exhibit poor HER activity due to their highly positive $\Delta$G$_{H}$, indicating a weak interaction between the adsorbed H and the F- and Cl- groups on the MXene surfaces. There is a significant difference in HER activity even with varying X-layers, where the carbon based MXenes with $\mid\Delta$G$_{H}\mid$ smaller than 0.1 eV show better HER performance when compared to boride and nitride based MXenes. Overall, 48 systems show optimal $\Delta$G$_{H}$ values in the range of -0.1 to 0.1 eV. Among them, CrMoNO$_2$-1, MnNbCO$_2$-3, NbMoNO$_2$-3, NbYBO$_2$-1, VMoCO$_2$-1, TiMoN-2, NbCrC-2, NbTiC-2, NbTiN-2 and TiMoC-2 have better stability and superior HER activities when compared with the noble metal Pt\cite{ObodoKO} and thus can be considered as promising HER catalysts. These results reveal that the HER activity also depends on the active site where the H is adsorbed.
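The $\pm 0.1$ eV criterion used to shortlist the 48 optimal systems amounts to a simple filter over the DFT results; a minimal sketch with a few hypothetical entries (the system names and energies here are invented for illustration, with the site index appended after a hyphen as in the labels above):

```python
import pandas as pd

# Hypothetical subset of DFT-computed Gibbs free energies (eV);
# the full dataset in the study contains 1,125 entries.
df = pd.DataFrame({
    "system": ["CrMoNO2-1", "TiMoN-2", "ScYCF2-1", "VWBCl2-3", "NbTiC-2"],
    "dG_H":   [0.05, 0.02, 0.85, 1.10, -0.08],
})

# Optimal HER candidates: -0.1 eV <= dG_H <= +0.1 eV.
optimal = df[df["dG_H"].abs() <= 0.1]
print(list(optimal["system"]))  # ['CrMoNO2-1', 'TiMoN-2', 'NbTiC-2']
```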
It is found that the H adsorbed directly on the outermost metal atomic layer of the MXene structures (site-2) has better HER catalytic performance when compared with other sites. \begin{figure*}[t] \centering {{\includegraphics[height = 3.3in,width=7in]{Fig4.jpg}}} \caption{(a) Mean absolute error (MAE) and coefficient of determination (R$^2$ score) of ABR, ENR, GBR, KNR, KRR, LAS, PLS, RFR and RDG algorithms using primary (atomistic, structural and electronic indicators) and statistical function processed features. Parity plot of the best-performing RFR and GBR models (b) with and (c) without cross-validation using the DFT dataset of hydrogen adsorbed Gibbs free energies ($\Delta$G$_{H}$). The pink-shaded region indicates a deviation of up to 0.5 eV.} \end{figure*} \subsection{ML models optimization} The precision of a well-trained ML model mainly depends on the material descriptors as well as the choice of algorithm. Historically, several aspects have been considered to connect with the chemical reactivity of catalytic materials, such as d-band characteristics, coordination number and bulk or atomic properties. Correlating such physical aspects onto the adsorption energies with highly non-linear regression algorithms even requires features from the fully optimized geometries of the clean adsorption sites. With the subset selection of atomistic, surface and statistical features, we establish nine different ML models as shown in Table S3, namely, ABR, ENR, GBR, KNR, KRR, LAS, PLS, RFR and RDG, using the dataset containing 1,125 H adsorption energies obtained from DFT calculations. To ensure the accuracy and generalization of the supervised ML models, we partitioned the data into training and test sets in an 80:20 ratio (see Fig. S3). For controlling and assessing against overfitting, the coefficient of determination (R$^2$ score) and mean absolute error (MAE) were estimated with and without the 10-fold cross-validation technique. As shown in Fig.
S4 and Table S4, the subsets of features with representative physical indicators capture the H adsorption energy over the studied MXenes to varying degrees. Among all the subsets of features, the combination of primary features with the indicators processed through statistical functions provides the best predictive performance. The predicted MAE and R$^2$ of the ABR, ENR, GBR, KNR, KRR, LAS, PLS, RFR and RDG algorithms using primary and statistical function processed features are presented in Fig. 4a. The RFR model converges to a low MAE with the highest R$^2$ score, irrespective of the feature subset, thereby demonstrating its good generalization ability. Predictions by the best-performing RFR and GBR models with and without cross-validation using the DFT dataset of hydrogen adsorbed Gibbs free energies ($\Delta$G$_{H}$) are shown in Fig. 4b and Fig. 4c, respectively. The GBR model also performs well, with an R$^2$ score of 0.913 (0.753) and MAE of 0.294 (0.421) eV in model training (testing), although its predictive accuracy is slightly inferior to that of the RFR model. It should be noted that these tree based RFR and GBR ensemble models are robust against high-dimensional data sets due to their high ability to fit nonlinear data. On the other hand, the ABR, ENR, KNR, KRR, LAS, PLS and RDG methods have unsatisfactory prediction performance, reflected by their considerable MAEs of 0.702, 0.573, 1.342, 0.625, 0.578, 0.647 and 0.568 eV (see Table S5), respectively, due to the poor extrapolation capabilities of these models. Using 10-fold cross-validation, the studied models exhibit similar prediction performance for the training/testing sets as shown in Fig. S5. These results demonstrate that the materials' descriptors are crucial to reproduce the adsorption energies over MM$^{\prime}$XT$_2$-type MXenes, thereby validating the suitability of our feature pool. The combination of primary and statistical features achieved satisfactory prediction accuracy.
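The RFR/GBR comparison above follows a standard scikit-learn pattern; a minimal sketch on synthetic data (the random matrix below stands in for the real 1,125-sample, 125-feature descriptor set, which is not reproduced here):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the descriptor matrix and dG_H targets.
rng = np.random.RandomState(0)
X = rng.rand(300, 10)
y = 2.0 * X[:, 0] - X[:, 1] + 0.1 * rng.randn(300)

# 80:20 train/test split, as used in the study.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for name, model in [("RFR", RandomForestRegressor(random_state=0)),
                    ("GBR", GradientBoostingRegressor(random_state=0))]:
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(f"{name}: MAE={mean_absolute_error(y_te, pred):.3f}, "
          f"R2={r2_score(y_te, pred):.3f}")
```

The same loop extends naturally to the other seven regressors, and wrapping the fit in `cross_val_score` gives the 10-fold cross-validated variant.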
Nevertheless, the presence of a large number of input features makes it difficult to readily derive physical insights, thereby increasing the complexity and time consumption of the ML model. Thus, it is important to look for a fine balance between accuracy and the number of features for obtaining efficient results. \begin{figure*}[t] \centering {{\includegraphics[height = 4.5in,width=7in]{Fig5.jpg}}} \caption{(a) Parity plot of predicted vs actual $\Delta$G$_{H}$ by the RFR model with RFE-HO in the best cross-validated process. The pink-shaded region indicates a deviation of up to 0.5 eV. (b) Pearson correlation coefficient (PCC) heat map for the reduced set of features after recursive feature elimination (RFE) and hyperparameter optimization (HO). (c) Feature importance using permutation on the RFR model with RFE-HO, evaluated via 10-fold cross-validation. (d) Alluvial diagram for the predicted $\Delta$G$_{H}$ values of 4,500 MM$^{\prime}$XT$_2$-type MXenes. The positive, negative and optimal $\Delta$G$_{H}$ values are represented in blue, red and yellow colors. The "-" symbol indicates pristine MXenes without termination. Clearly, the Cl- and F- functionalizations show blue color links indicating poor HER activity due to highly positive $\Delta$G$_{H}$; while H adsorbed directly on the outermost metal atomic layer of the MXene structures (site-2) has better HER catalytic performance as shown by yellow color links. } \end{figure*} \subsection{Recursive feature elimination and hyperparameter optimization} Identifying the most representative descriptors is an extremely critical step in feature engineering to minimize prediction biasing and accelerate the efficiency of the ML model. For this purpose, recursive feature elimination (RFE) is used to filter out the descriptors with extreme asymmetry (skewness) and with low/zero variance, recognizing a more suitable, smaller subset of features.
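One common way to realize cross-validated RFE is scikit-learn's `RFECV`; the sketch below uses synthetic data in which only three of twelve features carry signal, imitating a descriptor pool with redundant and noisy entries (this is an illustration of the technique, not the study's actual feature set):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFECV

# Synthetic data: only the first three columns influence the target.
rng = np.random.RandomState(1)
X = rng.rand(150, 12)
y = 3 * X[:, 0] + 2 * X[:, 1] - X[:, 2] + 0.05 * rng.randn(150)

# Recursively drop the least important feature, scoring each candidate
# subset with 5-fold cross-validated MAE.
selector = RFECV(RandomForestRegressor(n_estimators=50, random_state=1),
                 step=1, cv=5, scoring="neg_mean_absolute_error")
selector.fit(X, y)
print("features kept:", selector.n_features_)
print("kept mask:", selector.support_)
```

Hyperparameter optimization can then be layered on top, e.g. with `GridSearchCV` over the surviving features.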
In addition, hyperparameter optimization (HO) was performed on the RFR and GBR models by varying the range of parameters using 10-fold cross-validation, and the best combinations of hyperparameters are presented in Table S6. RFE decreased the number of features from 125 to 24 and 30 for RFR and GBR, respectively. Clearly, the reduced set of features is found to be sufficient for capturing the complex interactions influencing the Gibbs free energies. The performance of the RFR and GBR models shows slight improvement through RFE-HO, in both efficiency and prediction accuracy. The MAE/R$^2$ values of the RFR and GBR models with RFE-HO are 0.37/0.82 and 0.37/0.81 (see Fig. 5a, S6-S8 and Table S7), respectively, demonstrating that the RFR model is slightly more suitable and the best algorithm in our multistep ML workflow. In addition, a high ranking of a descriptor indicates its vital role in governing the HER activity of MXenes; for example, the valence electron number of the termination (V$_T$) predominantly affects the adsorption ability of the H atom. As the typical descriptor sets generated from the RFE-HO process vary between the RFR and GBR models, we particularly identified common potential descriptors that precisely connect to the physicochemical properties of MXenes. Subsequently, the valence electron number (V$_T$) and electron affinity (EA$_T$) of the termination and the work function (WF) emerge as the strongest predictors of the Gibbs free energy. As shown in Fig. 5b, the Pearson correlation coefficient (PCC) heat map for the reduced set of features is basically consistent with the ranking of features. However, there exist strong correlations among some statistical and primary features. This is because the statistical features are derived from the same primary features. For instance, the distances between the metal atoms (d$_{M-M}$), functionalized atoms (d$_{T-T}$) and X-atoms (d$_{X-X}$) are strongly correlated with the squares of the corresponding distances.
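Permutation-based feature ranking of the kind shown in Fig. 5c can be sketched with scikit-learn's `permutation_importance`; the data below are synthetic, with feature 0 deliberately dominating the target:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data where feature 0 carries most of the signal.
rng = np.random.RandomState(2)
X = rng.rand(200, 5)
y = 4 * X[:, 0] + X[:, 1] + 0.1 * rng.randn(200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=2)
model = RandomForestRegressor(random_state=2).fit(X_tr, y_tr)

# Shuffle each feature column in turn and record the drop in score:
# a large mean drop marks a feature the model genuinely relies on.
result = permutation_importance(model, X_te, y_te, n_repeats=10,
                                random_state=2)
ranking = result.importances_mean.argsort()[::-1]
print("most important feature index:", ranking[0])  # 0

# Pearson correlation matrix of the features (cf. the PCC heat map).
pcc = np.corrcoef(X, rowvar=False)
```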
Overall, the mean accuracy decrease of the model indicates the key role of the down-selected descriptors in determining the HER activity (see Fig. 5c, S9 and Tables S8, S9), where the further removal of any feature from the list may lead to a relative decrease in the efficacy of the model. \subsection{Performance prediction of the unknown space} After developing the well-trained model, the best-performing RFR and GBR strategies with RFE-HO were further applied to the remaining 3,375 MM$^{\prime}$XT$_2$-type MXenes. As mentioned, the well-trained RFR model with RFE-HO through cross-validation has an MAE of 0.37 eV for the randomly selected 25\% of the materials' space (1,125 systems). Thus, the optimal $\Delta$G$_{H}$ window for the remaining ML-predicted materials' space was set to $-0.470$ eV (i.e., $-0.1-0.370$ eV) to $+0.470$ eV (i.e., $0.1+0.370$ eV). Fig. 5d presents the $\Delta$G$_{H}$ for the complete list of MM$^{\prime}$XT$_2$-type MXenes using the well-trained ML methodology. Clearly, the $\Delta$G$_{H}$ is anisotropically distributed over a large energy scale ranging from 2.75 to -2.94 eV, indicating the substantial heterogeneity of the active sites. Out of the 4,500 MM$^{\prime}$XT$_2$-type MXenes, 28 candidates show optimal $\Delta$G$_{H}$ values, signifying excellent HER catalytic activity (see Table S10). Similar to the DFT computed results, the thermodynamic uphill in the proton adsorption energies on F- and Cl- functionalized MXenes suppresses the hydrogen production activity, while O- and S- terminations show better optimal $\Delta$G$_{H}$ values. These results reveal that the HER activity mainly depends on the type of functionalization. In addition, the type of active site also influences the catalytic activity, where site-2 is found to exhibit efficient HER performance (see Fig. S10a-c).
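Widening the DFT optimality window by the model MAE and screening the ML predictions is a one-line filter; the predicted values below are hypothetical placeholders, not entries from Table S10:

```python
# Widen the DFT optimality window [-0.1, +0.1] eV by the cross-validated
# model MAE (0.37 eV), giving roughly [-0.47, +0.47] eV.
MAE = 0.37
lo, hi = -0.1 - MAE, 0.1 + MAE

# Hypothetical ML-predicted dG_H values (eV) for unseen MXenes.
predicted = {
    "NbMoCO2-2": 0.21,
    "TiVNS2-2": -0.39,
    "ScYBF2-1": 1.42,
    "CrWNCl2-3": -0.95,
}

candidates = [name for name, g in predicted.items() if lo <= g <= hi]
print(candidates)  # ['NbMoCO2-2', 'TiVNS2-2']
```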
In a broader context, our calculation results indicate that H adsorbed on site-2 with some specific elements, such as O-functionalization of the C-layer, can make MM$^{\prime}$XT$_2$-type MXenes suitable for enhancing the HER activity. \section{Discussion} We have developed a multistep workflow for rapid and accurate $\Delta$G$_{H}$ predictions of 4,500 MM$^{\prime}$XT$_2$-type MXenes, in which 1,125 systems were randomly selected as the training samples to evaluate the HER performance using DFT calculations. These MXenes show high structural stability; especially the -O terminated structures are highly preferable and more likely to be synthesized during experimentation. From the DFT computed $\Delta$G$_{H}$, we noticed a weak interaction between hydrogen and the F- and Cl- functional groups of MXenes, indicating poor HER activity. The O- and S- terminations, in contrast, show better HER performance due to moderate hydrogen adsorption with $\Delta$G$_{H}$ close to zero. It should be noted that the carbon based MXenes are preferable when compared to the boride and nitride based structures due to their better HER activity. Our results demonstrate that hydrogen adsorbed directly on the outermost metal atomic layer of the MXene structures (site-2) is beneficial for enhancing the HER performance. During ML model optimization, the primary features were unable to capture the trends of the target property and thus we considered the statistical measures of the indicators, increasing the number of features from 60 to 125. The RFR and GBR models show better performance when compared with the other studied algorithms. Subsequently, the recursive feature elimination (RFE) method was employed to filter out the features with low/zero variance and with extreme asymmetry (skewness), recognizing a more suitable, smaller subset of descriptors.
The RFR model with RFE-HO is found to exhibit the best predictive performance for $\Delta$G$_{H}$, with a low MAE of 0.37 eV and a high R$^2$ of 0.82. These results demonstrate that the materials descriptors are crucial to reproduce the adsorption energies over MM$^{\prime}$XT$_2$-type MXenes, thereby validating the suitability of our feature pool. The feature importance analysis revealed the valence electron number (V$_T$) and electron affinity (EA$_T$) of the termination, and the work function (WF) as the key descriptors that govern the HER performance. The final RFR model was then used to predict the HER activity of the remaining materials' space. We found that the C-layers alternately sandwiched between Nb, V, Mo, Cr and Ti metal layers of O-functionalized MM$^{\prime}$XT$_2$-type MXenes show high stability and better HER activity. In conclusion, the present work not only establishes a robust and more broadly applicable ML-DFT based multistep workflow for efficient and accurate screening of HER activity but also provides the potential factors that govern the efficiency of the catalysts, thereby accelerating the design and development of novel high-performance catalysts. \section{Methods} \subsection{Density Functional Theory} Ab initio simulations were performed using the plane-wave based Vienna ab initio simulation package (VASP) code within the framework of density functional theory. The exchange-correlation effects and ion-electron interactions were incorporated through the GGA functional of Perdew-Burke-Ernzerhof and the projector augmented wave (PAW) method, respectively. For structural relaxation, convergence thresholds of 10$^{-2}$ eV/\AA \space in force and 10$^{-5}$ eV in energy were employed, with a cutoff energy of 450 eV to expand the electron wave functions and a Monkhorst-Pack k-point grid of 7 $\times$ 7 $\times$ 1 to sample the Brillouin zone.
A vacuum space of 15 \AA \space was adopted along the z-direction to prevent spurious interactions among the periodic units. Grimme's empirical correction scheme (DFT-D3) was adopted to describe the van der Waals interactions. The hydrogen adsorbed Gibbs free energy ($\Delta$G$_{H}$) was defined based on the computational hydrogen electrode (CHE) model\cite{NorskovJK} as shown below: \begin{equation} \Delta G_{H} = \Delta E_{H} + \Delta E_{ZPE} - T\Delta S \end{equation} where $\Delta$E$_{H}$ is the DFT computed differential hydrogen adsorption energy. $\Delta$E$_{ZPE}$, T and $\Delta$S are the change in zero-point energy, the temperature (298.15 K) and the entropy change, respectively, calculated in the harmonic approximation. $\Delta$E$_{H}$ can be calculated as follows: \begin{equation} \Delta E_{H} = E_{H} - E_{slab} - \frac{1}{2}E_{H_2} \end{equation} where E$_{slab}$, E$_{H}$ and E$_{H_2}$ are the total energies before H-adsorption, after H-adsorption and of the isolated H$_2$ gas molecule, respectively. According to this definition of $\Delta$G$_{H}$, highly positive or highly negative values are detrimental, as they act as a large barrier to the electrochemical reduction reaction or make H$_2$ desorption difficult, while optimal $\Delta$G$_{H}$ values close to zero are highly preferable to obtain an excellent HER catalyst. The cohesive energy (E$_{coh}$), i.e. the total energy of the system minus the sum of the energies of its individual constituent atoms, can be used to determine the structural stability by quantifying the strength of the forces that bind the atoms together in a system, and is defined as follows: \begin{equation} E_{coh} = E_{Total} - N_{M}E_M - N_{M^{\prime}}E_{M^{\prime}} - N_{X}E_X - N_{T}E_T \end{equation} where E$_{Total}$ is the total energy of the system.
E$_{M}$/E$_{M^{\prime}}$, E$_X$ and E$_T$ are the energies of the free atoms of M (M = Sc, Ti, V, Cr, Mn, Y, Zr, Nb, Mo or W), X (X = B, C or N) and T (T = O, F, S or Cl), respectively. We further computed the cohesive energy per atom by normalizing E$_{coh}$ of the different systems: $\overline{E}_{coh} = \frac{E_{coh}}{\mathrm{No.~of~atoms}}$. \subsection{Feature space construction} To establish accurate ML models and to evaluate the main contributions governing the hydrogen evolution reaction, it is important to map the material-to-attribute connection. To this end, a group of features (material variables) that represents a system in a computer-friendly manner is required. Ideally, a feature set discloses the structure-activity relationship of a system and uniquely describes each material in the input data set. However, materials representation remains an area of complex and intense development, where explicit interpretation poses a significant challenge compared with the success recently attained for molecular representations\cite{Pronobis,Faber}. It is therefore very important to generate suitable and comprehensive features during the construction of ML models, and for an efficient and fast ML model each selected feature should independently represent a physicochemical property. With this purpose, we considered atomistic, structural and electronic indicators as an initial pool of descriptors, leading to a total of 60 primary features, as shown in Table S1. Nevertheless, these primary features alone could not capture the HER performance, because systems with different numbers of constituent atoms have different feature space sizes. To this end, statistical measures of selected primary features were added, including the average, weighted average, maximum, minimum, standard deviation, variance, and squared values (see Table S2). 
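The statistical expansion of a per-atom primary feature into fixed-size descriptors can be sketched as follows; the feature name and values here are illustrative stand-ins, not entries from Tables S1/S2.

```python
import numpy as np

# Statistical reductions used to map variable-length per-atom feature
# vectors onto a fixed-size descriptor set (cf. the measures listed above).
REDUCTIONS = {
    "avg": np.mean,
    "max": np.max,
    "min": np.min,
    "std": np.std,
    "var": np.var,
}

def expand_feature(name, values, weights=None):
    """Return fixed-size statistical descriptors for one primary feature."""
    v = np.asarray(values, dtype=float)
    out = {f"{name}_{k}": float(f(v)) for k, f in REDUCTIONS.items()}
    out[f"{name}_sq"] = float(np.mean(v) ** 2)          # squared value
    if weights is not None:                             # weighted average
        out[f"{name}_wavg"] = float(np.average(v, weights=weights))
    return out

# e.g. electronegativities of the constituent atoms of one hypothetical MXene,
# weighted by stoichiometry (1 M, 1 M', 1 X, 2 T):
desc = expand_feature("EN", [1.54, 1.63, 2.55, 3.44], weights=[1, 1, 1, 2])
```

Applying such reductions to each selected primary feature yields a descriptor vector of constant length regardless of composition, which is what allows systems with different atom counts to share one feature space.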
Adding these statistical functions brought the total number of features to 125. The features are categorized into Set-1 (atomistic features), Set-2 (surface features) and Set-3 (statistical features), and their subset combinations are employed to identify the key descriptors. The considered descriptors may not provide complete information about the fundamental physicochemical principles; from a pragmatic standpoint, however, their predictions can be used as an indicator of the importance of the variables that influence the property of interest, thereby establishing a practical model to replace the complicated problem. \subsection{Machine Learning} Our ML approach is designed to establish a regression relationship between the HER catalytic activity and its predominant indicators, based on results from DFT calculations. Nine ML algorithms, namely AdaBoost Regressor (ABR), Elastic Net Regressor (ENR), Gradient Boosting Regressor (GBR), K Neighbors Regressor (KNR), Kernel Ridge Regressor (KRR), Lasso (LAS), Partial Least Squares (PLS), Random Forest Regressor (RFR) and Ridge Regression (RDG), were employed to predict the HER performance. An open-source Python distribution platform is used to train the models via scikit-learn libraries. 25\% of the materials space (1,125 systems) is randomly selected and its HER performance evaluated using density functional theory (DFT) calculations, while the activity of the remaining 75\% of the materials space is predicted through the well-trained ML model. To ensure the accuracy and generalization of the supervised ML models, the H binding energy data obtained from the DFT calculations were randomly partitioned into training and test sets in an 80:20 ratio. 
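The training loop described above can be sketched with scikit-learn. The data below are synthetic stand-ins for the 125-descriptor feature matrix and the DFT $\Delta$G$_{H}$ labels of the 1,125 sampled systems, so the scores it produces are not those reported in the text.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in: X plays the role of the 125 descriptors and
# y the DFT-computed dG_H values for the 1,125 sampled MXenes.
X = rng.normal(size=(1125, 125))
y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=1125)

# 80:20 random train/test partition, as in the text.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestRegressor(n_estimators=100, random_state=42)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
mae = mean_absolute_error(y_te, pred)
r2 = r2_score(y_te, pred)

# model.feature_importances_ can then be inspected to rank descriptors,
# analogous to how V_T, EA_T and WF emerge as key features in this work.
```

The same loop, repeated over the nine regressors and the Set-1/2/3 feature combinations, is what allows the best model/feature pairing to be selected by comparing MAE and R$^2$ on the held-out split.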
The stability and accuracy of all models were evaluated through the coefficient of determination (R$^2$) and mean absolute error (MAE), with standard deviations indicated; the formulas are given as: \begin{equation} R^2 = 1- \frac{ \sum_{i=1}^{n} (\dot{y}_i - y_i)^2}{ \sum_{i=1}^{n} (\dot{y}_i - \overline{y})^2} \end{equation} \vspace{0.3cm} \begin{equation} MAE = \frac{\sum_{i=1}^{n} |y_i - \dot{y}_i|}{n} \end{equation} where ${y_i}$, $\dot{y}_i$, and $\overline{y}$ are the predicted value, the true value and the average of the true values, respectively. The R$^2$ score ranges from 0 to 1, and a model with an R$^2$ value closer to 1 (and an MAE closer to 0) demonstrates better performance. \vspace{0.2in} {\large\textbf{Acknowledgements}\\} This work is supported by the Department of Science and Technology, Government of India, under grant number SPO/DST/CHE/2021535. BMA would like to thank SERB for financial support under grant number PDF/2021/000487. We acknowledge the National Supercomputing Mission (NSM) for providing the computing resources of PARAM Sanganak at IIT Kanpur. \subsection{Data availability} The dataset consisting of all the features, along with the DFT-calculated ${{\Delta} G_H}$ values of 1,125 MXenes, is available on our GitHub repository. The primary features' data was collected from a chemical repository. The best RFR and GBR models can be accessed from a zip folder to further predict ${{\Delta} G_H}$ of MM$^{\prime}$XT$_2$-type MXenes, provided the aforementioned descriptors. \subsection{Code availability} The code can be retrieved from our GitHub repository (https://github.com/cnislab/MXenes4HER) and Zenodo (https://doi.org/10.5281/zenodo.7414537). Several Python libraries are employed in the current work: Pandas to analyse the dataset, SciKit-learn to build the regression models, Joblib to save the best models, and Matplotlib together with Seaborn to visualize the plots.
\section{Introduction} A combination of increasing concerns about the effect of emissions on the environment, aggressive state-level renewable portfolio standards, and decreasing cost of photovoltaic (PV) solar panels are spurring adoption of solar generation in distribution networks. However, from an operational perspective, there are technical challenges associated with the introduction of variable distributed energy resources that need to be overcome to ensure the safety and reliability of the distribution network. For instance, PV generation can cause over-voltage and can mask load from protection equipment. Furthermore, variation in cloud cover can cause voltage flicker and excessive operation of load tap changers \cite{seguin2016high,ari2011impact,cheng2016photovoltaic}. These challenges can be addressed by advanced control strategies that benefit when distribution system operators have better \textit{observability} of the system state (i.e., the complex node voltages or branch currents). Power system state estimation has been well studied from a transmission system perspective, and has been deployed in control centers for decades \cite{monticelli2012state,gomez2004power}. Distribution system state estimation (DSSE) is not as well established due to several modeling challenges. Distribution systems have lower reactance/resistance (X/R) ratios, which makes use of DC power flow approximations difficult. The unbalanced nature of the distribution system means that phase couplings need to be considered and the state estimation cannot be treated as a single-phase problem. Furthermore, distribution systems are not typically endowed with the adequate sensor coverage required for traditional state estimation algorithms to work. 
This is addressed in DSSE literature by introducing \textit{pseudo-measurements} with high variance, which correspond to load injections at different nodes in the system, determined by historical load levels \cite{baldwin1993power,manitsas2012distribution}. Static DSSE methods typically fall into three categories: \begin{enumerate} \item \textsc{Branch-current based DSSE}: Branch-current based DSSE methods treat branch currents as state variables and convert available sensor measurements and pseudo-measurements into current measurements, and then solve a nonlinear weighted least squares (WLS) problem for the most likely set of branch currents\cite{li2004branch}. These methods are designed specifically for distribution networks with unbalanced flows, and assume that the network topology is radial or weakly meshed. There have been several extensions to the branch-flow based methods that incorporate phasor measurement unit (PMU) measurements, address bad-data problems, and consider the effect of correlations between pseudo-measurements on the state estimate \cite{li2004branch,baran2009,muscas2014effects,pau2013efficient}. \item \textsc{Voltage based DSSE}: Voltage based state estimation methods take the node voltage magnitudes and phase angles as the system state and attempt to reconstruct the voltage phasors by computing the maximum likelihood estimate based on sensor measurements using a WLS approach \cite{baran1994state,lu1995distribution,chen1991distribution,li1996state}. The associated WLS problem is nonconvex in nature and makes no assumptions regarding the topology of the distribution network. A recent paper \cite{cruz2017two} on voltage based state estimation presents an approach to estimate the distribution system state with streaming measurements by developing an approach that updates the maximum likelihood estimate based on incoming measurements. 
\item \textsc{Load allocation based methods}: The load allocation methods (\cite{dvzafic2013real}, \cite{roytelman2009real}) use the available measurements to construct/update an extended load forecast. Given the load forecast, a load flow problem is solved to estimate the system voltage profile. The method makes effective use of sparse measurements and has been field-tested in several distribution networks with real-time data. \end{enumerate} This paper uses a WLS approach (similar to those detailed in \cite{baran1994state}-\cite{cruz2017two}) in order to set up and solve the state estimation problem for three different distribution feeders. The main contribution is twofold. First, we present a scenario construction approach that combines a load multiplier profile with solar generation data to generate net-load profiles typical of high PV penetration environments. Second, we present numerical results that show the sensitivity of the WLS approach to pseudo-measurement accuracy, measurement accuracy, and sensor coverage. We show that the relationship between the accuracy of the pseudo-measurements and the (less uncertain) sensor measurements is a key component in determining the accuracy of the state estimate. The paper is structured as follows: Section \ref{sec:2} provides a description of the network and the measurement model. Section \ref{sec:3} consists of a brief description of the optimization problem. Section \ref{sec:4} details the scenario construction method, and Section \ref{sec:5} presents the numerical results. \section{Network and Measurement Model} \label{sec:2} Consider a distribution network with $N$ buses. The state of the distribution system is given by the voltage vector $V \in \mathds{C}^M$, whose dimension $M$ depends on the number of buses and the number of phases associated with each bus, which can range anywhere from 1 to 3 for distribution systems. 
Let $Y^{(M \times M)}$ represent the network admittance matrix associated with the distribution system $\mathcal{G}$ that satisfies Equation (1): \begin{align} I = YV \end{align} where $I_j \in \mathds{C}^k$ represents the subset of entries of the vector $I$ corresponding to bus $j$. The dimension of $I_j$ can vary between 1 and 3, depending on the number of phases associated with bus $j$. Bus $1$ is chosen to be the reference bus and contains three phases. The voltage magnitude of the nodes in the reference bus is fixed at $1$ and the phase angles are separated by $120^\circ$. The following subsection details some of the typical measurements that are usually available on the distribution network. \subsection{Measurements} The distribution network is typically instrumented with different types of metering equipment serving various purposes (smart meters for billing, sensors associated with telemetered protection equipment, etc.), and these can act as a source of measurement data for state estimation purposes. The mathematical relationship between the voltage phasor $V$ and the different types of potentially available measurements is given by the \textit{measurement functions} below: \begin{align} h_{I_{i \rightarrow j}}(V) &= (V_i - V_j) Y_{ij}\\ h_{|I_{i \rightarrow j}|}(V) &= |h_{I_{i \rightarrow j}}(V)|\\ h_{I_i}(V) &= (YV)_i\\ h_{|V_i|}(V) &= |V_i|\\ h_{S_i}(V) &= (V \circ (YV)^{\star})_i \end{align} where, for a given voltage phasor $V$, $h_{I_{i \rightarrow j}}(V)$ is the current flow along the branch $(i,j)$, $h_{|I_{i \rightarrow j}|}(V)$ is the branch current magnitude, $h_{I_i}(V)$ is the current injection into bus $i$, $h_{|V_i|}(V)$ is the voltage magnitude at bus $i$, and $h_{S_i}(V)$ is the apparent power injection into bus $i$. Note that $(.)^{\star}$ denotes the complex conjugate and $\circ$ represents the pointwise product of vectors. 
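A minimal numpy sketch of these measurement functions on a toy two-bus, single-phase network may help fix ideas; the admittance and voltage values are illustrative assumptions, not data from the test feeders.

```python
import numpy as np

# Toy 2-bus network with a single series branch of admittance y (p.u.).
y_series = 1.0 - 5.0j
Y = np.array([[y_series, -y_series],
              [-y_series, y_series]])            # bus admittance matrix
V = np.array([1.0 + 0j, 0.98 * np.exp(-0.02j)])  # candidate voltage phasor

I = Y @ V                        # current injections, h_{I_i}(V)
S = V * np.conj(I)               # apparent power injections, h_{S_i}(V)
I_12 = (V[0] - V[1]) * y_series  # branch current, h_{I_{i->j}}(V)
I_12_mag = np.abs(I_12)          # branch current magnitude
v_mag = np.abs(V)                # voltage magnitudes, h_{|V_i|}(V)
```

In a lossless two-bus network with no shunt elements the injection at bus 1 equals the branch current into bus 2 with opposite sign, which provides a quick sanity check on the construction.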
Let $\Sigma_m$, where $m \in \{I_{a \rightarrow b},~I_a,~|V_a|, |I_{a \rightarrow b}|, S_a \}$, denote the error covariance associated with each of the measurements. The variances associated with the real-time sensor measurements are typically small and are taken to be constant for the sake of simplicity. The load injection $S_a$ into each bus is usually not available at all buses and will be estimated as a \textit{pseudo-measurement} from historical data (as detailed in Section \ref{sec:4}). For a given distribution network with a fixed number $N_m$ of sensors, it is possible to construct a \textit{composite} measurement function $H: \mathds{C}^M \rightarrow \mathds{C}^{N_m}$ that maps the voltage $V$ into the corresponding set of measurements. Similarly, a composite covariance matrix $\Sigma_{meas} \in \mathds{C}^{N_m \times N_m}$ can be derived by constructing a block diagonal matrix in which the diagonal entries correspond to the covariance matrices associated with the individual measurements. \section{DSSE Problem Formulation}\label{sec:3} Given the \textit{composite} measurement function detailed in the previous section, the DSSE problem with measurements can be formulated as a nonlinear WLS problem as follows: \begin{align} \label{eqn:dssevi} \hat{V} = &\underset{V}{\argmin}~(H(V) - z)^T\Sigma_{meas}^{-1}(H(V) - z)\\ &s.t.~v_{min} \leq|V_i| \leq v_{max}, -\pi \leq \angle{V_i} \leq \pi~\forall i \in \{1,2,\dots N\} \nonumber \end{align} where $H$ is the composite measurement function, $z \in \mathds{C}^{N_m}$ is the observed measurement vector and $\Sigma_{meas}$ represents the error covariance associated with the measurements. The voltage magnitude is constrained to lie between $v_{min}$ and $v_{max}$. The upper bound for the voltage magnitudes is chosen to be 1.1, as the load tap changers can raise the voltage as high as 1.05 and the PV injections can further exacerbate the situation. 
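To make the WLS formulation (\ref{eqn:dssevi}) concrete, the sketch below solves a tiny instance using a whitened residual, so that minimizing the sum of squares reproduces the $(H(V)-z)^T\Sigma_{meas}^{-1}(H(V)-z)$ objective. The one-branch network, the measurement set and the noise levels are illustrative assumptions, not taken from the test feeders.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy single-branch system: bus 1 is the 1.0 p.u. reference; the state is
# x = (|V_2|, angle(V_2)). Measurements: |V_2| and the real power flow
# P_{1->2}. All numerical values are illustrative.
y_series = 1.0 - 5.0j

def H(x):
    v2, th2 = x
    V1, V2 = 1.0 + 0j, v2 * np.exp(1j * th2)
    S12 = V1 * np.conj((V1 - V2) * y_series)   # complex power flow 1 -> 2
    return np.array([v2, S12.real])

z = np.array([0.97, 0.30])          # observed measurements
sigma = np.array([0.005, 0.02])     # measurement standard deviations

def whitened_residual(x):
    # Minimizing ||r||^2 with r = Sigma^{-1/2}(H(x) - z) is equivalent
    # to the weighted least squares objective above.
    return (H(x) - z) / sigma

sol = least_squares(whitened_residual, x0=np.array([1.0, 0.0]),
                    bounds=([0.9, -np.pi], [1.1, np.pi]))
v2_hat, th2_hat = sol.x
```

With two measurements and two unknowns the residual can be driven essentially to zero; in the full feeder problem the residual is overdetermined and the weighting by $\Sigma_{meas}$ decides how the sensor readings and pseudo-measurements trade off.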
\\ The problem described by Equation (\ref{eqn:dssevi}) is nonconvex due to the magnitude measurements and the apparent power \textit{pseudo-measurements}. An implementation of the optimization problem was done in Julia and MATLAB using the general nonlinear program solver IPOPT. \section{Scenario Construction and Pseudo-Measurements}\label{sec:4} An hourly load multiplier data set $\mathcal{L}$ (shown in \figref{fig:yearlongL} and \figref{fig:pv}), obtained from the OpenDSS simulation platform, describes how the load at the head of the feeder varies throughout the year. The actual load at any node in the feeder at any point in the year is obtained by scaling the nominal load value for that node by the load multiplier value for that time of year. Note that the load multiplier data set is synthetic and is only used for generating different scenarios to determine the efficacy of the state estimator. Similarly, PV forecast/measured data (downsampled from a data set with a resolution of 5 minutes) from Hinesburg, Vermont, USA, obtained from the National Renewable Energy Laboratory at \url{https://www.nrel.gov/grid/solar-power-data.html}, was normalized to make it compatible with the load multiplier data set $\mathcal{L}$ (shown in \figref{fig:pv}). Let $\bar{S} = \bar{P} + j\bar{Q} \in \mathds{C}^M$, where $M$ is the number of nodes, denote the nominal real and reactive injections into each of the nodes. Given the solar data $\mathcal{S}$, the load multipliers $\mathcal{L}$, and the nominal load $\bar{S}$, the $k$-th scenario is constructed as follows: \begin{align*} &P^k = (\alpha_k - s_k) \bar{P} + \epsilon_k \circ \bar{P} \\ &Q^k = \alpha_k\bar{Q} + \epsilon_k \circ \bar{Q}\\ &\alpha_k \in\mathcal{L},~s_k \in \mathcal{S}\\ &\epsilon_k \sim Uniform([-c \alpha_k, c \alpha_k]) \in \mathds{R}^N \end{align*} The parameters $\alpha_k$ and $s_k$ represent the contributions of the base load and the solar generation, respectively, for the $k$-th scenario. 
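One scenario of this construction can be sketched as follows; the feeder size, nominal injections and the noise fraction $c$ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_scenario(alpha_k, s_k, P_bar, Q_bar, c=0.05):
    """Construct the k-th net-load scenario from a load multiplier alpha_k,
    a normalized solar value s_k, and nominal injections (c = 0.05 is an
    illustrative choice for the load-noise level)."""
    eps = rng.uniform(-c * alpha_k, c * alpha_k, size=P_bar.shape)
    P_k = (alpha_k - s_k) * P_bar + eps * P_bar
    Q_k = alpha_k * Q_bar + eps * Q_bar   # solar does not affect Q
    return P_k, Q_k

# One scenario for a toy 4-node feeder (nominal injections in p.u.):
P_bar = np.array([0.8, 1.2, 0.5, 0.9])
Q_bar = 0.3 * P_bar
P_k, Q_k = make_scenario(alpha_k=0.9, s_k=0.4, P_bar=P_bar, Q_bar=Q_bar)
```

Looping this over the 8,760 hourly $(\alpha_k, s_k)$ pairs reproduces the yearlong scenario set used to exercise the state estimator.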
The operator $\circ$ represents the element-wise product of vectors. The vector $\epsilon_k$ is sampled uniformly from the interval $[-c \alpha_k, c \alpha_k]$ and represents small uncertainties in the underlying load profile. The solar injection does not affect the reactive power injection. Since both the solar data $\mathcal{S}$ and the load multiplier data $\mathcal{L}$ represent hourly data for an entire year, 8760 ($365 \times 24$) scenarios are generated. The voltage profile $V^k$ corresponding to each scenario $k$ is generated by solving the load flow equation in OpenDSS (for the IEEE 13 bus system) and MATPOWER (for the IEEE 33 bus system). A small subset of the generated voltage profiles (for the 13 bus system) is shown in \figref{fig:vprofiles}. Note that the tap changers are fixed at their full load position, increasing the voltage magnitude to 1.05 p.u. at the head of the feeder. As such, all the voltage magnitudes are above 1.0 p.u. The dip observed in the voltage profile corresponds to nighttime, when the PV injection is minimal, while the peaks correspond to daytime, when there is surplus power, due to the PV injections, fed back into the power grid. 
The \textit{pseudo-measurements} for the real and reactive injections are generated by taking the mean and the variance of the load multiplier and the solar data as follows: \begin{align*} &\hat{P} = \dfrac{1}{|\mathcal{L}|} \sum_{k=1}^{|\mathcal{L}|} P^k\\ &\hat{Q} = \dfrac{1}{|\mathcal{L}|} \sum_{k=1}^{|\mathcal{L}|} Q^k\\ &\Sigma_P = \dfrac{1}{|\mathcal{L}|-1} \sum_{k=1}^{|\mathcal{L}|} (\hat{P} - P^k) (\hat{P}-P^k)^T\\ &\Sigma_Q= \dfrac{1}{|\mathcal{L}|-1} \sum_{k=1}^{|\mathcal{L}|} (\hat{Q} - Q^k) (\hat{Q}-Q^k)^T \end{align*} \begin{figure}[t] \centering \includegraphics[scale=0.4]{loadMultEPS.pdf} \caption{Yearlong hourly load multiplier data set} \label{fig:yearlongL} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=0.4]{pv.pdf} \caption{Yearlong hourly data set for PV injection} \label{fig:pv} \end{figure} \begin{figure}[t!] \centering \includegraphics[scale=0.6]{summervolts.pdf} \caption{Voltage measurements on a subset of the IEEE 13 bus system for a week in summer} \label{fig:vprofiles} \end{figure} \subsection{Error Metrics} The error metric used for evaluating the state estimator is the percentage error relative to the actual voltage magnitude. If $v \in \mathds{R}^N$ is the voltage magnitude at each of the nodes and $\hat{v} \in \mathds{R}^N$ is the voltage estimate constructed by the state estimator, then \begin{align*} \%{NodeError}_i = \dfrac{|v_i - \hat{v}_i|}{|v_i|} \times 100 \end{align*} where $v_i$ is the voltage magnitude at node $i$ and $\hat{v}_i$ is the voltage estimate at node $i$. 
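The pseudo-measurement statistics and the node-error metric above can be sketched as follows; the scenario stack here is random toy data standing in for the yearlong set.

```python
import numpy as np

def pseudo_measurements(P_scenarios):
    """Sample mean and covariance over the |L| scenarios
    (rows = scenarios, columns = nodes)."""
    P_hat = P_scenarios.mean(axis=0)
    Sigma_P = np.cov(P_scenarios, rowvar=False)  # 1/(|L|-1) normalization
    return P_hat, Sigma_P

def pct_node_error(v, v_hat):
    """%NodeError_i = |v_i - v_hat_i| / |v_i| * 100."""
    return np.abs(v - v_hat) / np.abs(v) * 100.0

rng = np.random.default_rng(2)
P = rng.normal(1.0, 0.1, size=(8760, 5))          # toy scenario stack
P_hat, Sigma_P = pseudo_measurements(P)
err = pct_node_error(np.array([1.02, 0.99]), np.array([1.01, 1.00]))
```

Note that `np.cov` with `rowvar=False` applies the same $1/(|\mathcal{L}|-1)$ normalization as the covariance formulas above.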
\begin{figure} \centering \includegraphics[scale=0.5]{ieee123testfeeder.png} \caption{IEEE 123 bus feeder configuration} \label{fig:ieee123} \end{figure} \section{Numerical Results}\label{sec:5} The DSSE problem (\ref{eqn:dssevi}) was solved in MATLAB/Julia using IPOPT, and was tested in three different test systems: a balanced 33 bus test feeder, an unbalanced IEEE 13 bus test system (shown in \figref{fig:13bus}), and an unbalanced IEEE 123 bus test system (shown in \figref{fig:ieee123}). \subsection{State Estimator Performance} \textit{MATPOWER 33 bus test feeder}: For the balanced MATPOWER 33 bus test system, voltage and phase angle measurements are taken from buses 8, 9, 12, and 25. Bus 1 is fixed at 1.0 p.u. and is used as the reference bus. \figref{fig:33busAgg} shows the 95th percentile of the aggregate $\%{NodeError}_i$ (i.e., the node error across all the buses). \textit{IEEE 13 bus feeder}: The 13 bus system is unbalanced and has 41 nodes, as each bus has multiple nodes corresponding to different phases. Voltage measurements are taken from nodes $10$, $11$, and $12$ (corresponding to three phases of bus 633), $24$ (corresponding to phase 2 of bus 670), and $29$ (corresponding to phase 1 of bus 680). Nodes 1 to 3, corresponding to the source bus, are fixed at 1.0 p.u. and are used as the reference bus. As for the 33 bus system, \figref{fig:13busAgg} plots the value of the 95th percentile of the quantity $\%{NodeError}_i$ for each bus and in aggregate for the summer months. \textit{IEEE 123 bus feeder}: The IEEE 123 bus system has 278 nodes, the majority of which (roughly 68\%) are unloaded. \figref{fig:123busAgg} shows the value of the 95th percentile of the quantity $\%{NodeError}_i$ in aggregate, for a summer month, where voltage measurements are taken from 20 different nodes. It can be seen that, in all three cases, the overall error is less than 2\% for 95\% of the test scenarios. 
\begin{figure}[t] \centering \includegraphics[scale=0.6]{hist33.png} \caption{95th percentile of aggregate $\%NodeError_i$ for the 33 bus system} \label{fig:33busAgg} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=0.6]{hist13EPS.pdf} \caption{95th percentile of aggregate $\%NodeError_i$ for the 13 bus system} \label{fig:13busAgg} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=0.6]{hist123.pdf} \caption{95th percentile of aggregate $\%NodeError_i$ for the 123 bus system} \label{fig:123busAgg} \end{figure} \subsection{Sensitivity to Sample Variance} In typical situations, the sample mean and variance of the pseudo-measurements are obtained from historical data. As such, it is likely that the sample variance, $\Sigma_{P, sample}$ and $\Sigma_{Q, sample}$, associated with a particular time period (say, a week in summer) differs from the historical variance, $\Sigma_P$ and $\Sigma_Q$, used for the pseudo-measurements. Thus, it is of interest to understand the behavior of the state estimation algorithm when the variance of the pseudo-measurement deviates from the actual sample variance. \figref{fig:prctile33} and \figref{fig:prctile13} show the 95th percentile of the aggregate $\%NodeError_i$ (for a summer week) as a function of the percent deviation of $\Sigma_P$ from $\Sigma_{P,sample}$ ($\Sigma_{Q}$ is perturbed in a similar way) at various sensor error covariance levels. Note that a reduction in the overall sensor noise reduces the overall level of error in the estimates, while the percentile error exhibits a monotone decrease as the pseudo-measurement variance is increased. Underestimating the variance of the pseudo-measurements relative to that of the actual sample variance (i.e., considering the pseudo-measurements to be more accurate) results in larger error, because it is likely that the pseudo-measurements contradict the sensor readings, which are far more reliable. 
As such, increasing the variance of the pseudo-measurement decreases the overall percentile error. \begin{figure}[t] \centering \includegraphics[scale=0.6]{se33prc.pdf} \caption{95th percentile of aggregate $\%NodeError_i$ for the 33 bus system as a function of the deviation of $\Sigma_P$ from $\Sigma_{P,sample}$ at varying sensor noise levels} \label{fig:prctile33} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=0.6]{se13prc.pdf} \caption{95th percentile of aggregate $\%NodeError_i$ for the 13 bus system as a function of the deviation of $\Sigma_P$ from $\Sigma_{P,sample}$ at varying sensor noise levels} \label{fig:prctile13} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=0.6]{se33cov.pdf} \caption{95th percentile of aggregate $\%NodeError_i$ for the 33 bus system as a function of different sensor coverage levels at varying sensor noise levels} \label{fig:cov33} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=0.6]{se13cov.pdf} \caption{95th percentile of aggregate $\%NodeError_i$ for the 13 bus system as a function of different sensor coverage levels at varying sensor noise levels} \label{fig:cov13} \end{figure} \subsection{Sensor Coverage/Noise} To minimize the cost of sensor deployment, it is important to identify key locations where placing a sensor would improve the quality of the state estimate. \figref{fig:cov33} and \figref{fig:cov13} show how the 95th percentile of aggregate $\%NodeError_i$ (for a summer week) varies as the sensor coverage is increased (at different levels of sensor error, as indicated by the increasing sensor error covariance) for the 13 bus and 33 bus feeders. The sensors were added sequentially (three at a time for the 13 bus system, two at a time for the 33 bus feeder), starting at the head of the feeder. 
In both cases, it is interesting to note that there is a large drop in the aggregate error when sensors are added to certain locations (corresponding to node 632 for the 13 bus feeder, and corresponding to node 2 in the 33 bus feeder, both of which represent points at which the feeder branches out into several trunks). Furthermore, it can also be seen that the error remains relatively flat (especially for the 13 bus feeder), implying that only a few sensors are required to allow adequate estimation of the state of the feeder. \section{Concluding Remarks} In this paper, a distribution system state estimation problem was formulated, and a scenario generation framework was proposed for testing the state estimator under a wide variety of conditions. The sensitivity of the state estimator accuracy to sensor accuracy and sensor coverage levels was also investigated via simulation. \section{Acknowledgements} This work was performed for the U.S. Department of Energy under Contract DE-AC05-76RL01830 with support from the ENERGISE program and the Grid Modernization Lab Initiative.
\section{Introduction} Einstein's theory of gravity has now been tested extensively in the solar system and in binary pulsar systems, using a wide array of relativistic gravitational phenomena \cite{lr}. These range from the deflection of light \cite{cassini}, to perihelion precession \cite{perihelion}, geodetic precession \cite{gravityprobe}, and frame dragging \cite{gravityprobe}. In all cases, the standard framework that is used to interpret observations of these effects is the parameterized post-Newtonian (PPN) formalism \cite{Will}. This formalism is constructed so that it encompasses the possible consequences of a wide variety of metric theories of gravity, and so that it can act as a half-way house between the worlds of experimental and theoretical gravitational physics. The PPN formalism has been tremendously successful not just in constraining particular modified theories of gravity, but also in providing a common language that can be used to isolate and discuss the different physical degrees of freedom in the gravitational field. Crucially, the form of the PPN metric is independent of the field equations of the underlying theory of gravity, and is simple enough to be effectively constrained with imperfect, real-world observations. It is also applicable in the regime of non-linear density contrasts. These are all highly desirable properties. With the advent of a new generation of cosmological surveys \cite{euclid, ska, lsst}, it becomes pertinent to consider whether we can perform precision tests of Einstein's theory on cosmological scales. Of course, the standard PPN formalism itself cannot be used directly for this purpose, as it is valid only for isolated astrophysical systems. More specifically, it relies on (i) asymptotic flatness and (ii) the slow variation of all quantities that might be linked to cosmic evolution. 
Neither of these conditions should be expected to be valid when considering gravitational fields on large scales: there are no asymptotically flat regions in cosmology, and the time-scale of cosmic evolution is no longer necessarily entirely negligible. We must therefore adapt and extend the PPN approach, if it is to be used in cosmology. Some of this work has already been performed within the context of Einstein's theory \cite{Tim1,vaas1, vaas2}, but more is required if we are going to attempt to port the entire formalism. We take a step towards this in the present paper, with a formalism that we will call parameterized post-Newtonian cosmology (PPNC). Of course, we wish to retain as many of the beneficial properties of the PPN formalism as possible. In particular, we want to ensure that the formalism is still valid in the presence of non-linear structure after it has been transferred into cosmology. We also want to ensure that it can encompass as large a class of theories of gravity and dark energy models as possible, while remaining simple enough to be constrained by real observations. These requirements are important as many cosmological processes take place in the presence of non-linear structures, and because we want to be able to represent as many theories as possible. The parameterization that we end up with contains four functions of time that we expect to be able to link to the large-scale expansion, the growth of structure, and the lensing of light in a reasonably straightforward way. We do not assume any knowledge of the specific underlying theory of gravity in order to end up with this result, other than insisting that it fits into the class of conservative theories that can be described using the PPN formalism. Our approach is built using a weak-field and slow-motion post-Newtonian expansion, and so is naturally valid in the presence of non-linear structures (up to neutron star densities). 
Our work builds on a series of bottom-up approaches to cosmology that have so far primarily been used to study the effect of small-scale inhomogeneities on the large-scale expansion within the context of general relativity \cite{Tim1}-\cite{Tim5} (exceptions to this are applications to $f(R)$ gravity \cite{Tim2b,Tim2} and Yukawa gravity \cite{pierre1}). We also expect our study to complement the existing literature on parameterized frameworks for testing gravity in cosmology, which come under the umbrella terms of ``parameterized post-Friedmannian'' approaches \cite{Hu1}-\cite{PPF3} and ``effective field theory'' approaches and their variants \cite{eff1}-\cite{eff8}. Our approach differs from most of this existing literature in the fact that we emphasize the links between weak gravitational fields and cosmology, and use this to constrain the possibilities for the large-scale properties of cosmology. This means that we end up with a framework that is automatically consistent with the PPN formalism on small scales, and that is constrained by this consistency in the form that it can take on large scales. For reviews on modified theories of gravity and parameterized frameworks in cosmology, the reader is referred to Refs. \cite{Tim3} and \cite{Tim3b}. The plan for the rest of this paper is as follows. In Section \ref{sec2} we introduce the bottom-up constructions we will use to link weak-field gravity and cosmology \cite{Tim1,vaas1, vaas2}. Section \ref{sec3} contains a review of the standard parameterized post-Newtonian approach, which we then modify for application to cosmology. In Section \ref{cosmo}, we build a cosmology from the weak field metric without assuming any field equations. This results in a geometry with four unknown functions of time. Finally, in Section \ref{examples}, we work through four explicit example theories, to show how we expect our formalism to function. 
Our examples include dark energy models, and scalar-tensor and vector-tensor theories of gravity. We use lower-case Latin letters ($a$, $b$, $c$, ...) to denote space-time indices, and Greek letters ($\mu$, $\nu$, $\rho$, ...) to denote spatial indices. Capitals from the first half of the Latin alphabet ($A$, $B$, $C$, ...) are used to denote the spatial components of tensors in $1+2$-dimensional subspaces, while those from the latter half ($I$, $J$, $K$, ...) will be used to label quantities associated with various different matter fields. \section{From weak fields to cosmology} \label{sec2} In this section we wish to explore the relationship between weak-field gravity and cosmology, without assuming anything about the field equations that govern the gravitational interaction (i.e. without assuming a specific theory of gravity). These two sectors are usually treated entirely separately in the standard approach to cosmology, as they appear at different orders in cosmological perturbation theory. They are, however, intimately linked, and given some knowledge about the weak-field limit of gravity one can construct cosmological evolutions that are consistent with that limit. We do not require a set of field equations in order to do this, as long as we are considering metric theories of gravity. The end result is then a set of effective Friedmann equations in which the large-scale expansion is driven by sources that can be expressed in terms of weak-field potentials. The link between these potentials and the energy-momentum content of the universe can subsequently be determined by the particular field equations of the theory that one wishes to consider. The great benefit of writing the Friedmann equations in this way is that they can be directly expressed in terms of (an extended version of) the PPN parameters. 
This facilitates both a direct comparison of cosmological and weak-field tests of gravity, as well as constraining the otherwise near limitless freedoms that can exist when parameterizing gravitational interactions in cosmology. \subsection{Post-Newtonian expansions} \label{pn} The perturbative approach we intend to use is the method of post-Newtonian expansions. This approach is designed to be applied to the weak-field and slow-motion limit of gravitational interactions, and is formally an expansion around Minkowski space in the parameter \begin{equation} \label{2} \epsilon \equiv \frac{|\bm{v}|}{c} \ll 1 \, , \end{equation} where $c$ is the speed of light, and $\bm{v}$ is the three-velocity associated with matter fields. We can then use $\epsilon$ to assign orders of magnitude to the matter content and the metric perturbations, such that \begin{equation} \rho \sim \varphi \sim v^2 \sim \epsilon^2 \, , \end{equation} where $\rho$ is the mass density, and $\varphi$ represents a generic gravitational potential. The post-Newtonian expansion is valid in the quasi-static regime, where time derivatives are small compared to space derivatives, such that \begin{equation} \frac{{\partial}/{\partial t}}{{\partial}/{\partial x}} \sim \epsilon \, . \end{equation} This means that the length scales associated with these gravitational fields must be small compared to the horizon size. We will therefore use the post-Newtonian expansion to describe small regions of space, and patch these regions together to determine the emergent large-scale cosmological expansion. For further details of post-Newtonian perturbative expansions the reader is referred to Ref. \cite{Will}. 
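To make the order-of-magnitude bookkeeping above concrete, the following short calculation (using standard, approximate solar-system values that are our own illustrative input, not taken from the text) checks that the dimensionless Newtonian potential of the Sun at the Earth's orbit is indeed of the same order as $\epsilon^2$:

```python
# Order-of-magnitude check of the post-Newtonian bookkeeping, using
# illustrative solar-system values (all figures approximate).
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m s^-1
M_sun = 1.989e30     # kg
r_orbit = 1.496e11   # m, 1 au
v_orbit = 2.98e4     # m s^-1, Earth's orbital speed

epsilon = v_orbit / c                  # ~1e-4
phi = G * M_sun / (r_orbit * c**2)     # dimensionless potential, ~1e-8

print(f"epsilon   = {epsilon:.1e}")
print(f"phi       = {phi:.1e}")
print(f"epsilon^2 = {epsilon**2:.1e}")  # phi ~ epsilon^2, as assumed
```

The near-equality of $\varphi$ and $\epsilon^2$ for bound orbits is what makes the single expansion parameter $\epsilon$ sufficient.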
\subsection{Expanding and non-expanding coordinate systems} The PPN formalism, and post-Newtonian expansion generally, are formulated as an expansion about Minkowski space, such that the geometry can be described to lowest non-trivial order by \begin{eqnarray} ds^2 = -(1-2\Phi) dt^2 + (1+ 2\Psi) (dx^2+dy^2+dz^2) \, , \label{weakfield} \end{eqnarray} where the gravitational potentials are of order $\Phi \sim \Psi \sim \epsilon^2$. In the present context, it is useful to transform this line-element so that it can be written as a perturbed Friedmann geometry. The coordinate transformations required for this are \cite{Tim1, vaas1, vaas2} \begin{eqnarray} t &= \hat{t} + \frac{\dot{a} a}{2} (\hat{x}^2 + \hat{y}^2 + \hat{z}^2) + O(\epsilon^3) \ , \label{timetrans} \\[5pt] x &= a \hat{x} \bigg[1 + \frac{\dot{a}^2}{4} (\hat{x}^2 + \hat{y}^2 + \hat{z}^2)\bigg] +O(\epsilon^4) \ , \label{xtrans} \\[5pt] y &= a \hat{y} \bigg[1 + \frac{\dot{a}^2}{4} (\hat{x}^2 + \hat{y}^2 + \hat{z}^2)\bigg] +O(\epsilon^4) \ , \\[5pt] z &= a \hat{z} \bigg[1 + \frac{\dot{a}^2}{4} (\hat{x}^2 + \hat{y}^2 + \hat{z}^2)\bigg] +O(\epsilon^4) \label{ztrans} \, , \end{eqnarray} where {$a=a(\hat{t}) \sim O(1)$ and $\dot{a}=da(\hat{t})/d\hat{t} \sim O(\epsilon)$, because time derivatives add an order of smallness}. Applying these coordinate transformations to the perturbed Minkowski space in Eq. (\ref{weakfield}) gives, to lowest non-trivial order, \begin{equation} ds^{2} = -(1 - 2\hat{\Phi})d\hat{t}^2 + a(\hat{t})^2 (1+2\hat{\Psi}) \frac{ \left( d \hat{x}^2 + d \hat{y}^2 + d \hat{z}^2 \right)}{[1+\frac{k}{4} (\hat{x}^2 + \hat{y}^2 + \hat{z}^2)]^2} \, , \label{FLRW} \end{equation} where $\hat{\Phi}$ and $\hat{\Psi}$ are defined, up to terms of $O(\epsilon^4)$, by \begin{eqnarray} \Phi &=& \hat{\Phi} + \frac{\ddot{a} a}{2} (\hat{x}^2 + \hat{y}^2 +\hat{z}^2) \ , \label{phitran}\\[5pt] \Psi &=& \hat{\Psi} - \bigg(\frac{\dot{a}^2 + k}{4}\bigg) (\hat{x}^2 + \hat{y}^2 +\hat{z}^2) \label{psitran}\, . 
\end{eqnarray} The quantity $k \sim \epsilon^2$, which appears in (\ref{psitran}), is the Gaussian curvature of the conformal 3-space. The geometry and coordinate system used in (\ref{FLRW}) look identical to those of a global FLRW model with linear scalar perturbations. This is, however, only a coordinate transformation of the perturbed Minkowski space from equation (\ref{weakfield}). It is therefore only valid within the same region of space in which the perturbed Minkowski description was valid (i.e. a region much smaller than the size of the horizon). The scale factor, $a(\hat{t})$, is not yet the solution to any set of Friedmann equations, and does not yet correspond to the scale factor of any global Friedmann space. It is simply an arbitrary function of time, introduced by the coordinate transformations in equations (\ref{timetrans}) - (\ref{ztrans}). In order to associate it with a global scale factor, and determine the relevant Friedmann equations, we must patch together many such regions of space, using appropriate junction conditions. \subsection{Junction conditions} The conditions required at the junction between neighbouring regions of space, in order for their union to be considered a solution of the field equations, will now be determined. Let us first choose to consider junctions that are $(2+1)$-dimensional time-like submanifolds of the global space-time. In this case, the space-like unit vector normal to the junction is given as the solution to \begin{equation} n_{a}\frac{ \partial x^{a}}{\partial \xi^{i}} = 0 \qquad {\rm and} \qquad n_a n^a =1 \, , \end{equation} where $\xi^{i}$ are the coordinates on the boundary. 
The first and second fundamental forms on the boundary are then given by \begin{equation} \gamma_{ij} = \frac{ \partial x^{a}}{\partial \xi^{i}} \frac{ \partial x^{b}}{\partial \xi^{j}} \gamma_{ab} \qquad {\rm and} \qquad K_{ij}\equiv \frac{1}{2} \frac{\partial x^{a}}{\partial \xi^{i}} \frac{\partial x^{b}}{\partial \xi^{j}} \mathcal{L}_{n} \gamma_{ab} \, , \end{equation} where $\gamma_{ab} = g_{ab} - n_a n_b $ is the projection tensor onto the boundary. For a metric theory of gravity, we expect to be able to impose certain conditions on the values of $\gamma_{ij}$ and $K_{ij}$, on either side of the junction. Strictly speaking, the junction conditions on the geometry will depend on the specific field equations that apply to the theory of gravity that is being considered. However, it is reasonable to expect that certain junction conditions should result generically from conservatively constructed metric theories. In particular, we expect that the Israel junction conditions in the absence of surface layers should be obeyed. These conditions are given by \cite{Israel} \begin{eqnarray} \bigg[\gamma_{ij}\bigg]^{(+)}_{(-)} = 0 \ , \label{metjunc1} \\[5pt] \bigg[K_{ij}\bigg]^{(+)}_{(-)} = 0 \, , \label{metjunc2} \end{eqnarray} where $[\varphi]^{(+)}_{(-)} = \varphi^{(+)} - \varphi^{(-)}$ for any object $\varphi$, and where superscripts $(+)$ and $(-)$ indicate that a quantity should be evaluated on either side of the boundary. {The first junction condition (\ref{metjunc1}) comes from the assumption of a continuous induced metric. This is both natural and required so that no Dirac delta functions arise while computing the affine connection. 
The second junction condition (\ref{metjunc2}) comes from the Ricci equation,} \begin{eqnarray} R_{ij} = R^{(3)}_{ij} + 2 K_{im}K^{m}_{\ j} - K_{ij}K^{m}_{\ m} - \mathcal{L}_{n} K_{ij} + \dot{n}_{(i;j)} \ , \end{eqnarray} {where $R_{ij}$ is the Ricci curvature of space-time projected onto the boundary, $R^{(3)}_{ij}$ is the Ricci curvature of the (2+1)-dimensional surface, and $\dot{n}_{i} \equiv n_{i;b}n^{b}$. If $K_{ij}$ were discontinuous we would have a Dirac delta function in the $\mathcal{L}_{n} K_{ij}$ term, and hence also in the Ricci curvature. Generically, we expect the Ricci curvature to be related to the energy-momentum tensor, in any theory of gravity that contains second derivatives of the metric in the field equations. This means that if Eq. (\ref{metjunc2}) were not satisfied then we would generically expect to have a discontinuity in the energy-momentum tensor. However, as we are considering situations where there are no surface layers or branes on the boundary, this is not something that can be allowed. We therefore expect the junction conditions (\ref{metjunc1}) and (\ref{metjunc2}) to apply to any covariant theory of gravity that contains second derivatives of the metric in its field equations, as they simply correspond to the metric being $C^1$ smooth at the boundary. This expectation has been shown to hold true in scalar-tensor theories \cite{scalarjunc} and $f(R)$ theories of gravity \cite{sasaki}.} If they were found to be untrue, for any particular theory of gravity, then the theory in question would not fall into the domain of applicability of the framework we are constructing. Such anomalous theories would then have to be treated separately, as special cases. 
The junction conditions (\ref{metjunc1}) and (\ref{metjunc2}) are sufficient to allow us to evaluate the motion of the boundaries of each of our small regions of space, and therefore also tell us the cosmological expansion we expect to obtain from regions described by the geometry in (\ref{weakfield}) and (\ref{FLRW}). This will be described in terms of the potentials $\Phi$ and $\Psi$ in Section \ref{emergent}, and in terms of (an extended set of) the PPN parameters in Section \ref{cosmo}. In Section \ref{examples} we will use these junction conditions, along with additional conditions where required, to relate the weak field geometry to the cosmological expansion in some specific example classes of modified theories that contain additional scalar and vector degrees of freedom. This will allow us to write the functions that appear in the Friedmann equation in terms of the parameters of these example theories. \subsection{Emergent cosmological expansion} \label{emergent} The junction condition in (\ref{metjunc2}) is satisfied if $K_{ij}=0$, on the boundary of every small region of space. This condition means that the boundary is extrinsically flat in the $3+1$-dimensional space-time, and is probably the simplest way of satisfying the second junction condition. Examples of constructions with time-like boundaries of this type are the regular lattices of discrete masses studied in Refs. \cite{Tim1}-\cite{Tim5}, but it is also a perfectly good way to describe an FLRW space that has been divided into small sub-regions with comoving flat boundaries. If we choose to consider regions of space with extrinsically flat boundaries of this type, then we find that this implies \cite{Tim1,vaas1,vaas2} \begin{eqnarray} {{X}_{,tt}} &=& \mathbf{n}\cdot\nabla\Phi |_{\partial \Omega} + O(\epsilon^4) \, , \label{X1} \\[5pt] X_{,AB} &=& \delta_{AB} \, \mathbf{n}\cdot\nabla\Psi|_{\partial \Omega} + O(\epsilon^4) \, , \label{X2} \\[5pt] X_{,tA} &=& 0 + O(\epsilon^3) \ . 
\label{X3} \end{eqnarray} where we have rotated coordinates so the boundary is located at $x=X(t,y,z)$ (to first approximation). The $|_{\partial \Omega}$ symbol in these equations indicates that the preceding quantity is being evaluated on the boundary of the region under consideration. These equations describe the motion of the boundary of our small region of space, as well as its shape. After transforming to expanding coordinates via equations (\ref{timetrans}) - (\ref{ztrans}), and choosing $a(t)$ such that each part of the boundary is comoving with the $(\hat{x},\hat{y},\hat{z})$ coordinates, we can use equation (\ref{X1}) to write one of the Friedmann equations for the global space. This will be explained further in Section \ref{cosmo}, after introducing the relevant formalism in Section \ref{sec3}. The other Friedmann equation requires us to derive a Hamiltonian constraint equation. To do this we again assume that there exists a coordinate system where every part of the boundary is comoving with the $(\hat{x},\hat{y},\hat{z})$ coordinates, and consider a time-like 4-vector field that is both uniformly expanding and comoving with our boundaries: \begin{equation} \label{ua} u^a = \left(1 ; \frac{X_{,t}}{X} x^{\mu} \right) \, , \end{equation} where we have kept only the leading-order term in each component, and where we have expressed the components in the $(t,x,y,z)$ coordinates. A spatial hyper-surface orthogonal to this field then gives, from a post-Newtonian expansion of Gauss' embedding equation, that \begin{equation} \left( \frac{X_{,t}}{X} \right)^2 = -\frac{2}{3} \nabla^2 \Psi - \frac{R^{(3)}}{6} + O(\epsilon^4) \, , \label{con1} \end{equation} where $R^{(3)}$ is the Ricci curvature scalar of the space, which for the situation we are considering can be related to the spatial curvature, $k$. 
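As a consistency check on the extrinsically flat boundaries used above, the following symbolic computation (a sketch, assuming for illustration a spatially flat FLRW metric and a comoving planar boundary at constant $x$; these choices are ours) verifies that every component of the extrinsic curvature of such a boundary vanishes:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
coords = [t, x, y, z]
a = sp.Function('a')(t)

# Spatially flat FLRW metric in (t, x, y, z) coordinates
g = sp.diag(-1, a**2, a**2, a**2)
ginv = g.inv()

def Gamma(c, i, j):
    """Christoffel symbol Gamma^c_{ij} of the metric g."""
    return sp.Rational(1, 2) * sum(
        ginv[c, d] * (sp.diff(g[d, i], coords[j])
                      + sp.diff(g[d, j], coords[i])
                      - sp.diff(g[i, j], coords[d]))
        for d in range(4))

# Unit normal covector to the comoving plane x = const: n_a = (0, a, 0, 0)
n = [sp.Integer(0), a, sp.Integer(0), sp.Integer(0)]

# Extrinsic curvature K_{ij} = nabla_i n_j in the tangential directions (t, y, z)
tang = [0, 2, 3]
K = sp.Matrix(3, 3, lambda i, j: sp.simplify(
    sp.diff(n[tang[j]], coords[tang[i]])
    - sum(Gamma(c, tang[i], tang[j]) * n[c] for c in range(4))))

print(K)  # a zero matrix: the comoving plane is extrinsically flat
```

This supports the statement that an FLRW space divided into sub-regions with comoving flat boundaries satisfies $K_{ij}=0$ exactly.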
The functional form of equation (\ref{con1}) is strongly reminiscent of the Friedmann equations, and after transformation to the expanding coordinates can also be used to construct an effective Friedmann equation for the global space. Again, this will be explained further in Section \ref{cosmo}. We emphasize that nowhere in this section have we assumed anything about any theory of gravity or a set of field equations, other than the junction conditions (\ref{metjunc1}) and (\ref{metjunc2}). Nevertheless, we have ended up with a set of equations that looks very similar to the Friedmann equations, with sources given by the derivatives of weak-field potentials. A concrete realisation of the types of structure being described here is a regular lattice, constructed from cells that are themselves regular convex polyhedra. Such structures were considered in the context of Einstein's theory in Refs. \cite{Tim1, vaas1,vaas2}, and will often be what we have in mind in what follows. \section{An extended PPN formalism} \label{sec3} Let us now consider how to extend the PPN framework, so that it can be used to model weak gravitational fields in an expanding universe. We will begin by briefly discussing the basics of the existing PPN formalism, as it is currently found in the literature \cite{Will}. We will then discuss how we can extend it to include other forms of matter that are relevant in cosmology, and to include the time dependence that is a crucial feature of an expanding universe. This will require not only allowing the parameters themselves to be dynamical, but also adapting the boundary conditions that we use for solving the relevant hierarchy of Poisson equations. \subsection{The standard PPN formalism} The standard PPN formalism is built upon the post-Newtonian expansions outlined in Section \ref{pn}. 
It does not assume any particular form for the field equations, but does make an ansatz for the weak field metric (which is expected to be valid for any metric theory of gravity). Up to $O(\epsilon^2)$, this PPN metric is given by equation (\ref{weakfield}), which has already been written in the standard post-Newtonian gauge, so that it is diagonal and isotropic at leading order in perturbations. As well as the metric, the energy-momentum tensor is also subject to a post-Newtonian expansion. To lowest non-trivial order, this gives \begin{eqnarray} T_{tt}=& \rho_{M}(t, x^{\mu}) + O(\epsilon^4) \label{emPPNtt} \ , \\[5pt] T_{t\mu} =& - \rho_{M}(t, x^{\mu}) \, v_{M\mu}(t, x^{\mu}) + O(\epsilon^5) \label{emPPNtx} \ , \\[5pt] T_{\mu \nu} =& p_M(t, x^{\mu}) \delta_{\mu \nu} + O(\epsilon^6) \ , \label{emPPN} \end{eqnarray} where $\rho_{M}(t, x^{\mu})\sim \epsilon^2$ is the mass density of non-relativistic matter, $v_{M\mu}(t, x^{\mu})\sim \epsilon$ is the 3-velocity of this matter, and $p_M(t, x^{\mu})\sim \epsilon^4$ is the isotropic pressure. This energy-momentum tensor is assumed to be conserved, so that $T^{ab}_{\ \ ; a} =0$. The relationship between gravitational potentials and energy-momentum content is, of course, specified by the gravitational field equations. If these equations are unknown, or we do not want to specify any particular theory of gravity, then the best we can do is simply assume that the Laplacian of the gravitational potentials can be expressed as a linear function of the energy-momentum content of the space-time. This is done in the PPN framework by writing\footnote{The usual definition of $\alpha$ and $\gamma$ actually involves the solution to this equation written in terms of the integrals of an asymptotically flat Green's function. 
We have presented it in this way so that it is more amenable to adaptation for cosmology.} \begin{eqnarray} \nabla^2 \Phi = -4\pi G \alpha \rho_{M} \, , \label{alpha}\\[5pt] \label{gamma} \nabla^2 \Psi = -4\pi G \gamma \rho_{M} \, , \end{eqnarray} where $G$ is Newton's gravitational constant, and where $\alpha$ and $\gamma$ are constants. {Of course, this description only applies to theories of gravity where Yukawa potentials are either absent, neglected, or can be approximated by Coulomb-like potentials. It also relies on the absence, or neglect, of any non-perturbative physics.} Inclusion of these types of gravitational interactions would require extending both the PPN framework, and the PPNC that we construct here. Now, the lowest-order equations of motion for time-like particles, from $T^{ab}_{\ \ ; a} =0$, tell us that $\Phi$ is the gravitational potential that causes acceleration due to the Newtonian part of the gravitational field. For agreement with local experiments (i.e. so that $G$ is the locally measured value of Newton's constant), we must therefore have $\alpha=1$ at the present time. The parameter $\gamma$ then parameterizes the relativistic deflection of light and Shapiro time delay, while further constants (not given explicitly here) parameterize the zoo of other relativistic effects that are observable in the solar system and elsewhere. The current best observational constraints on this parameter are $\gamma = 1 + (2.1 \pm 2.3) \times 10^{-5}$ \cite{cassini}, which is consistent with the value $\gamma=1$ that is expected from Einstein's theory. The description given above is sufficient to calculate the leading-order gravitational effects on both null and time-like particles. However, if we want to calculate explicit expressions for $\alpha$ and $\gamma$, in terms of the parameters of a given theory of gravity, then we must also expand the additional degrees of freedom present in that theory. 
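Before turning to those additional degrees of freedom, a quick symbolic check of equation (\ref{alpha}) may be useful. The familiar point-mass profile $\Phi = \alpha G M / r$ (our own illustrative choice, not prescribed by the formalism) should have vanishing Laplacian away from the source, while Gauss' theorem fixes the flux of $\nabla\Phi$ through any enclosing sphere to $-4\pi G \alpha M$; this is the same integral argument that is exploited when deriving the effective Friedmann equations below:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
G, M, alpha, R, r = sp.symbols('G M alpha R r', positive=True)

# Candidate point-mass solution of  nabla^2 Phi = -4 pi G alpha rho_M
Phi = alpha * G * M / sp.sqrt(x**2 + y**2 + z**2)

# The Laplacian must vanish away from the source
lap = sum(sp.diff(Phi, v, 2) for v in (x, y, z))
assert sp.simplify(lap) == 0

# Gauss' theorem: the flux of grad(Phi) through a sphere of radius R equals
# the volume integral of nabla^2 Phi, i.e. -4 pi G alpha M for a mass M
dPhi_dr = sp.diff(alpha * G * M / r, r)            # radial derivative
flux = sp.simplify(4 * sp.pi * R**2 * dPhi_dr.subs(r, R))
print(flux)  # equals -4*pi*G*alpha*M
```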
For an additional scalar field, $\phi$, this expansion is usually taken to be \begin{equation} \label{s1} \phi = \bar{\phi} + \delta \phi(t, x^{\mu}) + O(\epsilon^4) \, , \end{equation} where $\bar{\phi} \sim \epsilon^0$ is the constant background value of the scalar field, and where $\delta \phi(t,x^{\mu}) \sim \epsilon^2$ is the leading-order perturbation. Similarly, for a theory with a time-like vector field $A_{a}$, one can expand its components as \begin{eqnarray} A_{t} = \bar{A}_{t} + \delta A_{t}(t, x^{\mu}) +O(\epsilon^4)\, , \label{v1} \\[5pt] \label{v2} A_{\mu} = \delta A_{\mu}(t, x^{\mu}) +O(\epsilon^5)\, , \end{eqnarray} where $\bar{A}_{t} \sim \epsilon^0$ is the background value of the time-component, and $\delta A_{t}(t, x^{\mu}) \sim O(\epsilon^2)$ and $ \delta A_{\mu}(t, x^{\mu}) \sim O(\epsilon^3)$ are the leading-order perturbations to the time and space components of the vector field, respectively. Of course, other types of additional fields can be included, depending on the types of theory that one wishes to consider. For further details on this, and other aspects of the standard PPN formalism, the reader is referred to Ref. \cite{Will}. In the following sections we will extend the PPN formalism by adding additional matter content, additional gravitational potentials, and by allowing for additional time dependence in the parameters. These extensions are all required in order to adapt the PPN formalism for cosmology. \subsection{Additional matter content} The treatment above assumes $p \ll \rho$, which is fine for the contents of the solar system, but for cosmological studies would confine us to considering dust. We would like our formalism to also be able to incorporate generic dark energy fluids, radiation, scalar fields, and the variety of other types of matter that are often studied in cosmology. 
We therefore take the total energy-momentum tensor of all matter fields to be given by \begin{eqnarray} T^{ab} = T^{ab}_M + \sum_{I} T^{ab}_I\ , \end{eqnarray} where subscript $M$ refers to quantities associated with non-relativistic pressureless matter fields (i.e. baryons and dark matter), and where subscript $I$ refers to quantities associated with all other barotropic fluids. The energy-momentum tensor of each of these fluids can then be written \begin{equation} T_J^{a b} = \rho_J u_J^a u_J^b + p_J (g^{ab} +u_J^a u_J^b) \, , \end{equation} where we intend $J \in \{M,I\}$, and where the 4-velocity $u_J^a$ can be written \begin{eqnarray} u_J^{a} = \bigg(1 +\Phi + \frac{v_J^2}{2}\bigg)(1;v_J^{\mu}) + O(\epsilon^4) \ , \label{4-vel1} \end{eqnarray} where $v_J^{\mu}$ is the 3-velocity of fluid $J$, and where $v_J^2= v_J^{\mu}v_{J \mu}$. The components of the total energy-momentum tensor are then given, to leading order, by \begin{eqnarray} T_{tt}=& \rho_{M} + \sum_{I} \rho_{I} + O(\epsilon^4) \label{emtt} \ , \\[5pt] T_{t\mu} =& - \rho_{M} v_{M\mu} - \sum_{I} ( \rho_{I} + p_{I}) v_{I\mu} + O(\epsilon^5) \label{emtx} \ , \\[5pt] T_{\mu \nu} =& \sum_{I} p_{I} \delta _{\mu\nu} + O(\epsilon^4) \ , \label{em} \end{eqnarray} where $\rho_I \sim p_I \sim \epsilon^2$ and $v_I \sim \epsilon$. In Ref. \cite{vaas2} we applied the post-Newtonian expansion to fluids of this type and found that energy-momentum conservation implies \begin{equation} \label{emcon1} \nabla_{\mu} \, p_I = 0 +O(\epsilon^4) \, . \end{equation} We therefore have that $p_I=p_I (t)\sim \epsilon^2$ is a function of time only, and not a function of space. For a barotropic fluid with equation of state $\rho_I=\rho_I (p_I)$ this means that we also have $\rho_I=\rho_I (t)$, at $O(\epsilon^2)$. This further restricts the form of $v_I$ to correspond to the velocity field of a uniformly expanding fluid, as we will explain further in Section \ref{cosmo}. 
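The normalisation of the 4-velocity in equation (\ref{4-vel1}) can be checked order by order. The following sketch (using a single spatial velocity component and explicit bookkeeping amplitudes of our own choosing) confirms that $g_{ab}u_J^a u_J^b = -1 + O(\epsilon^4)$ for the metric of equation (\ref{weakfield}):

```python
import sympy as sp

# Bookkeeping: Phi ~ Psi ~ eps^2 and v ~ eps, with O(1) amplitudes
# phi, psi, w (a single spatial velocity component, for simplicity).
eps = sp.symbols('epsilon', positive=True)
phi, psi, w = sp.symbols('phi psi w', real=True)

Phi, Psi, v = eps**2 * phi, eps**2 * psi, eps * w

# Components of u^a = (1 + Phi + v^2/2)(1; v^mu)
ut = 1 + Phi + v**2 / 2
ux = (1 + Phi + v**2 / 2) * v

# Norm with the metric ds^2 = -(1 - 2 Phi) dt^2 + (1 + 2 Psi) dx^2 + ...
norm = -(1 - 2 * Phi) * ut**2 + (1 + 2 * Psi) * ux**2

expanded = sp.expand(norm)
print(expanded.coeff(eps, 0), expanded.coeff(eps, 2))  # -> -1 0
```

The $O(\epsilon^2)$ terms cancel identically, so corrections to $u_J^a u_{J a} = -1$ first appear at $O(\epsilon^4)$, as required.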
The reader may note that nothing in this description relies on any specific theory of gravity - only on the conservation of energy-momentum. For further details on barotropic fluids with $p \sim \rho$ in post-Newtonian expansions, the reader is referred to Ref. \cite{vaas2}. \subsection{Additional potentials} \label{apot} The extra fluids described above, and the extra degrees of freedom that generically exist in modified theories of gravity, require additional gravitational potentials to be included in equations (\ref{alpha}) and (\ref{gamma}). We define these potentials implicitly through the Poisson equations \begin{eqnarray} \nabla^2 \Phi \equiv -4\pi G \alpha \rho_{M} + \alpha_{c} +O(\epsilon^4)\ , \label{Phigen} \\[5pt] \nabla^2 \Psi \equiv -4\pi G \gamma\rho_{M} + \gamma_{c} +O(\epsilon^4) \ , \label{Psigen} \end{eqnarray} where $\Phi$ and $\Psi$ are the metric perturbations from equation \eref{weakfield}, and where $\{\alpha, \gamma, \alpha_{c},\gamma_{c} \}$ are a set of parameters (to be constrained by observation and experiment). The first two of these are $O(\epsilon^0)$, as before. The last two are of $O(\epsilon^2)$, and are constants in space. We intend these extra two parameters to include all sources for gravitational fields that are independent of position, including the barotropic fluids discussed above and the additional degrees of freedom that occur in modified theories of gravity\footnote{The reason why extra potentials are required for the extra gravitational degrees of freedom in cosmology will become clear when we consider examples, in Section \ref{examples}.}. This choice of parameterization for the gravitational potentials $\Phi$ and $\Psi$ is motivated by (i) the fact that the potentials that appear in the PPN framework can be expressed as a hierarchy of Poisson equations, and (ii) the fact that Poisson equations and Gauss' divergence theorem guarantee that cosmological back-reaction will be small. 
The first of these points means that our extended framework will be able to encompass all theories that fit naturally into the PPN formalism. This includes an array of simple scalar-tensor, vector-tensor, and bi-metric theories of gravity \cite{Will}. The second point comes from the fact that the large-scale cosmological behaviour can be obtained by integrating the weak-field gravitational equations over small regions of space \cite{Tim1,vaas1,vaas2}. It is only if these equations are of the form given in (\ref{Phigen}) and (\ref{Psigen}) that Gauss' theorem can be used to link the rate of cosmic expansion to energy density in a straightforward way \cite{pierre1}. This will become clearer when we derive the effective Friedmann equations in Section \ref{cosmo}. Let us now turn to considering the solutions to equations (\ref{Phigen}) and (\ref{Psigen}). In the standard approach to the PPN formalism one would use the Green's function for an asymptotically flat space, to write the solution to \begin{eqnarray} \nonumber \nabla^2 U \equiv -4\pi G \rho_{M} \qquad {\rm as} \qquad U= G \int \frac{\rho_{M}(x^{\prime})}{\vert {\mathbf x} - {\mathbf x}^{\prime} \vert} d^3 x^{\prime} \, . \end{eqnarray} However, in the present case, where we wish to consider cosmology, this is not the appropriate solution. There are no asymptotically flat regions in cosmology, and so one must use a different Green's function. For the case of a large number of polyhedral regions of space, with the vanishing extrinsic curvature condition used on the boundary, we find that the relevant solution instead takes the more complicated form \cite{vaas1} \begin{eqnarray} \label{109} U &= \bar{U}+ 4\pi G \int_{\Omega} \mathcal{G} \rho_{M} \, dV - 4\pi G M \int_{\partial \Omega} \frac{\mathcal{G}}{A} \, dA \, , \end{eqnarray} where $\bar{U}$ is the average value of the potential, $\mathcal{G}$ is a Green's function explained in Ref. 
\cite{vaas1}, $dA$ is a surface area element of the polyhedral region of space, and $A$ is the total surface area of the polyhedron. The derivation of this result required use of both Gauss' theorem and Neumann boundary conditions. We will not go into any further details of these solutions here. For more information the reader is referred to Ref. \cite{vaas1}, where explicit expressions for $\mathcal{G}$ are found for cubic lattice cells. \subsection{Additional time dependence} Finally, we must consider how the additional degrees of freedom from modified theories of gravity should be expected to behave in our new formalism, and what this means for the PPN parameters. For a theory with a scalar field, for example, the expansion is given in (\ref{s1}). In the standard approach to the PPN formalism one would assume $\bar{\phi}$ to be effectively constant, and only varying over cosmological time-scales (if at all). When considering gravity in the solar system these variations are entirely negligible. When considering modified gravity in cosmology, however, they are not. We therefore cannot neglect the time dependence of $\bar{\phi}$ in scalar-tensor theories. Similarly, we cannot neglect the time-dependence of $\bar{A}_t$ in vector-tensor theories, when we expand the extra vector field as in equations (\ref{v1}) and (\ref{v2}). As the values of the PPN parameters depend on these quantities, this means we also have to allow the PPN parameters to be functions of time, so that we have $$\alpha=\alpha(t) \, , \quad \gamma=\gamma(t) \, , \quad \alpha_c=\alpha_c(t) \, \quad {\rm and} \quad \gamma_c=\gamma_c(t) \, .$$ This does not alter the functional form of the solutions to equations (\ref{Phigen}) and (\ref{Psigen}) in space, as they are still Poisson equations, but it does add an extra degree of time dependence to the source functions. 
This means that Gauss' theorem can still be used to derive the sources for the Friedmann equations, and that back-reaction can be expected to be small. Spatial dependence of the parameters above would ruin this result, and would not produce a Newtonian gravitational field on small scales. This will be explained further in Section \ref{cosmo}, and explicit example theories will be used to illustrate these points in Section \ref{examples}. \section{A parameterized approach to cosmology} \label{cosmo} Let us now put together the emergent expansion considered in Section \ref{sec2} and the effective field equations considered in Section \ref{sec3}. This will allow us to obtain a set of effective Friedmann equations without using any particular set of field equations. It will also allow us to present parameterized, consistent expressions for both the large-scale expansion, and the quasi-static limit of first-order cosmological perturbations, in terms of our extended set of PPN parameters. \subsection{Conservation equations} We will first derive conservation equations for each of the matter fluids using the energy-momentum conservation equation, $ T^{ab}_{\ \ ; a} =0$. Assuming that to leading order each fluid is non-interacting, we obtain the result in (\ref{emcon1}) from the $O(\epsilon^2)$ part of the Euler equation. At next-to-leading order we find \cite{vaas2} \begin{eqnarray} \rho_{M,t} + \nabla \cdot ({\rho_{M} \bm v}_{M} )= 0 + O(\epsilon^5) \, , \label{emcon3} \\[5pt] \rho_{I,t} + (\rho_{I} + p_{I}) \nabla \cdot {\bm v}_{I} = 0 + O(\epsilon^5) \, , \label{emcon2} \end{eqnarray} where subscript $M$ again refers to non-relativistic pressureless matter, and subscript $I$ corresponds to the barotropic fluids with pressure at $O(\epsilon^2)$. {The assumption that fluids are not interacting at leading order gives standard dark energy models, with interactions expected to occur at higher orders. 
One could potentially also consider more exotic interacting dark energy models with interactions at leading order, but we have chosen to neglect this possibility here.} To integrate these equations we make use of Reynolds' transport theorem, which for any space-time function $f$ gives \begin{eqnarray} \frac{d}{dt} \int_{\Omega} f \ dV = \int_{\Omega} f_{,t} \ dV + \int_{\partial \Omega} f \bm{v} \cdot d\bm{A} \, . \end{eqnarray} Integrating equation (\ref{emcon3}) over our small region of space, and then using Gauss' theorem and Reynolds' theorem, therefore gives \begin{equation} \label{dM} \frac{d}{dt} \int_{\Omega} \rho_M dV \equiv \frac{dM}{dt} = 0 \, , \end{equation} where the first equality defines $M$. This means that $\langle \rho_M \rangle = M/V$, where the angle brackets denote the average value of $\rho_M$ in the spatial domain $\Omega$, and $V$ is the spatial volume of $\Omega$. In terms of the expanding coordinate system, equation (\ref{dM}) can be written as \begin{eqnarray} \label{emcon4} \boxed{\langle \rho_M \rangle_{,t} + 3 \frac{\dot{a}}{a} \langle \rho_M \rangle=0} \, , \end{eqnarray} which is, of course, just the usual conservation equation for dust in an FLRW space-time. To derive a conservation equation for the barotropic fluid in (\ref{emcon2}) we do not need to integrate it over space, as we have already found it to be homogeneously distributed (to leading order). If instead we simply note that a homogeneous fluid comoving with the boundaries of our region of space must have $v_I^a=u^a$, where $u^a$ is the time-like 4-vector field from equation (\ref{ua}), then this gives $\nabla \cdot \bm{v_{I}} = 3 \dot{a}/a$. Substituting into equation (\ref{emcon2}) then gives \begin{eqnarray} \boxed{\rho_{I,t} + 3 \frac{\dot{a}}{a} ({\rho_{I}} + p_{I} )=0} \, , \label{continuity1} \end{eqnarray} which is, of course, identical to the FLRW continuity equation for such a fluid. 
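A quick symbolic check of the two boxed conservation equations: substituting the familiar scaling $\rho_M \propto a^{-3}$, and, for a barotropic fluid with $p_I = w\rho_I$ at constant $w$ (an illustrative special case of our own choosing, not required by the formalism), $\rho_I \propto a^{-3(1+w)}$, both equations are satisfied identically:

```python
import sympy as sp

t, w = sp.symbols('t w')
a = sp.Function('a')(t)

# Dust: rho_M ~ a^-3 should satisfy  rho_M' + 3 (adot/a) rho_M = 0
rho_M = a**(-3)
dust_eq = sp.diff(rho_M, t) + 3 * (sp.diff(a, t) / a) * rho_M
assert sp.simplify(dust_eq) == 0

# Barotropic fluid with p_I = w rho_I (constant w, illustrative):
# rho_I ~ a^{-3(1+w)} should satisfy  rho_I' + 3 (adot/a) (rho_I + p_I) = 0
rho_I = a**(-3 * (1 + w))
p_I = w * rho_I
fluid_eq = sp.diff(rho_I, t) + 3 * (sp.diff(a, t) / a) * (rho_I + p_I)
print(sp.simplify(fluid_eq))  # -> 0
```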
The conservation laws for the leading-order parts of both the non-relativistic and the barotropic fluid are therefore unaltered from the homogeneous and isotropic case, even though we have allowed for extremely large density contrasts. These results depend on energy-momentum conservation, but are otherwise independent of the theory of gravity under consideration. \subsection{Background expansion} Our next task is to write the emergent expansion, discussed in Section \ref{emergent}, in terms of the parameters and quantities from Section \ref{apot}. Let us start by integrating the constraint equation (\ref{con1}) over the spatial domain, $\Omega$. The spatial curvature term in this equation can be written \begin{equation} R^{(3)}= \frac{6 k}{a^2} - \frac{4}{a^2} \hat{\nabla}^2 \hat{\Psi} +O( \epsilon^4) \, , \end{equation} where we have chosen to use the expanding coordinates from equation (\ref{FLRW}). Integrating this quantity over $\Omega$, and using Gauss' theorem, then gives \begin{equation} \label{con1a} \int_{\Omega} R^{(3)} dV = \frac{6 k}{a^2} V - \frac{4}{a^2} \int_{\partial \Omega} \hat{\nabla} \hat{\Psi} \cdot d \bm{A} = \frac{6 k}{a^2} V \, , \end{equation} where in the last equality we have used the result that extrinsically flat boundaries are totally geodesic, implying $\bm{n} \cdot \hat{\nabla} \hat{\Psi}\vert_{\partial \Omega} =0$ \cite{Tim1, eisen}. If we now consider the other term on the right-hand side of equation (\ref{con1}), and similarly integrate this over $\Omega$, then we get \begin{equation} \label{con1b} \int_{\Omega} \nabla^2 \Psi \ dV = -4 \pi G \gamma \langle \rho_M \rangle V + \gamma_c V \, , \end{equation} where we have used equation (\ref{Psigen}). Note that if either $\gamma$ or $\gamma_c$ had been functions of space, then the right-hand side of this equation would have been considerably more complicated.
Putting equations (\ref{con1a}) and (\ref{con1b}) together with equation (\ref{con1}) then gives \begin{eqnarray} \boxed{\frac{\dot{a}^2}{a^2} = \frac{8\pi G \gamma}{3}\avg{\rho_{M}} - \frac{2\gamma_{c} }{3}- \frac{k}{a^2}} \ , \label{fincon} \end{eqnarray} where we have written the left-hand side in terms of the quantities in the expanding coordinates, and divided through by $V$. This equation has exactly the same form as the first Friedmann equation of FLRW cosmology. It has, however, been derived without reference to the field equations, using only (an extended version of) the PPN metric. Let us now derive an evolution equation. If we integrate equation (\ref{X1}) over $\partial \Omega$, and use Gauss' theorem, then we get \begin{equation} \int_{\partial \Omega} X_{,tt} dA = - 4 \pi G \alpha \langle \rho_M \rangle V + \alpha_c V \, . \end{equation} This equation can be simplified further by noting that $X_{,tt}$ must be constant over $\partial \Omega$, in order for equation (\ref{X2}) to remain valid. We therefore have \begin{eqnarray} \boxed{\frac{\ddot{a}}{a} = -\frac{4\pi G \alpha}{3}\avg{\rho_{M}} + \frac{\alpha_{c} }{3}} \ , \label{accgen2} \end{eqnarray} where we have divided through by $V$, written the left-hand side in terms of the quantities used in the expanding coordinate system, and used the fact that $A/V=3/X$ for regular convex polyhedra. This equation is identical to the second Friedmann equation, but has again been derived without recourse to the field equations. The reader may again note that the right-hand side of this equation would have been considerably more complicated if either $\alpha$ or $\alpha_c$ had been functions of space. By using the conservation equation \eref{emcon4}, the constraint equation \eref{fincon}, and the acceleration equation \eref{accgen2}, we can derive one further constraint for this system. 
This can be found by differentiating equation \eref{fincon}, and is given by \begin{equation} {4\pi G\avg{\rho_{M}} = \bigg(\alpha_{c} + 2\gamma_{c} + \frac{d \gamma_{c}}{d \ln a} \bigg)\bigg/ \bigg(\alpha - \gamma + \frac{d\gamma}{d \ln a} \bigg)}\, .\label{addcon1} \end{equation} The existence of this constraint means that the first and second Friedmann equations, \eref{fincon} and \eref{accgen2}, can be written entirely in terms of the set of parameters $\{ \alpha, \gamma, \alpha_c, \gamma_c \}$. \subsection{First-order perturbations} Finally, let us consider the small-scale, first-order cosmological perturbations that arise within this framework. Using the transformations from equations \eref{phitran} and \eref{psitran}, the Poisson equations \eref{Phigen} and \eref{Psigen} transform to give \begin{eqnarray} \label{np1} \boxed{\hat{\nabla}^2 \hat{\Phi} = -4\pi G a^2 \alpha \delta\rho} \ , \\[5pt] \boxed{\hat{\nabla}^2 \hat{\Psi} = -4\pi G a^2\gamma \delta\rho} \ , \label{np2} \end{eqnarray} where $\hat{\nabla}^2 =\hat{\partial}_{\mu} \hat{\partial}_{\mu}$, and where $\delta \rho = \hat{\rho} - \avg{\rho_{M}}$. These are exactly the type of equations that one would expect to describe cosmological perturbations on small scales, in the quasi-static limit. The often-considered gravitational constant parameter, $\mu$, and gravitational slip parameter, $\zeta$, can then be written in terms of $\alpha$ and $\gamma$ as \begin{eqnarray} \mu \equiv -\frac{\hat{\nabla}^2 \hat{\Psi}}{4\pi G a^2 \delta\rho} =\gamma \qquad {\rm and} \qquad \zeta \equiv \frac{\hat{\Psi} - \hat{\Phi} }{\hat{\Psi}} = 1 - \frac{\alpha}{\gamma} \, . \end{eqnarray} These expressions provide a direct link between the parameters used to test gravity in cosmology ($\mu$ and $\zeta$), and those used in the weak-field, slow-motion world of post-Newtonian gravity ($\alpha$ and $\gamma$).
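The constraint \eref{addcon1} can be verified independently with a short computer-algebra sketch (our addition, not part of the paper's text; `sympy` is assumed to be available). Differentiating the first Friedmann equation, and eliminating $\dot{\rho}_M$, $\ddot{a}$ and $k/a^2$ using the conservation, acceleration and constraint equations, leaves a residual proportional to the relation \eref{addcon1}:

```python
import sympy as sp

# Check that d/dt of the emergent Friedmann equation (fincon), combined
# with dust conservation (emcon4) and the acceleration equation (accgen2),
# reproduces the constraint (addcon1). All quantities are functions of t.
t, G = sp.symbols('t G', positive=True)
H = sp.Function('H')(t)
rho = sp.Function('rho')(t)
alpha = sp.Function('alpha')(t)
gamma = sp.Function('gamma')(t)
alpha_c = sp.Function('alpha_c')(t)
gamma_c = sp.Function('gamma_c')(t)

# H' from the acceleration equation, rho' from dust conservation,
# and k/a^2 read off from the first Friedmann equation
Hdot = -sp.Rational(4, 3)*sp.pi*G*alpha*rho + alpha_c/3 - H**2
rhodot = -3*H*rho
curv = -H**2 + sp.Rational(8, 3)*sp.pi*G*gamma*rho - sp.Rational(2, 3)*gamma_c

# time derivative of both sides of H^2 = (8 pi G gamma/3) rho - 2 gamma_c/3 - k/a^2,
# using d/dt(k/a^2) = -2 H (k/a^2)
residual = (2*H*Hdot
            - sp.Rational(8, 3)*sp.pi*G*(sp.diff(gamma, t)*rho + gamma*rhodot)
            + sp.Rational(2, 3)*sp.diff(gamma_c, t)
            - 2*H*curv)
# the claimed constraint, with d/d(ln a) written as (1/H) d/dt
constraint = (alpha_c + 2*gamma_c + sp.diff(gamma_c, t)/H
              - 4*sp.pi*G*rho*(alpha - gamma + sp.diff(gamma, t)/H))
assert sp.simplify(residual - sp.Rational(2, 3)*H*constraint) == 0
```

The residual of the differentiated Friedmann equation is exactly $(2/3)H$ times the constraint, so \eref{addcon1} holds whenever the other three equations do.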
We can now see that equations (\ref{emcon4}), (\ref{continuity1}), (\ref{fincon}), (\ref{accgen2}), (\ref{np1}) and (\ref{np2}) provide a consistent set of equations to evolve both the cosmological background, and first-order cosmological perturbations in the quasi-static limit. This is all given in terms of a set of four parameters $\{ \alpha, \gamma , \alpha_c ,\gamma_c\}$ that are functions of time only, and that can be directly related to the PPN parameters. We refer to this framework as ``parameterized post-Newtonian cosmology'' (PPNC). In the next section we will illustrate how our four parameters can be determined in some simple classes of dark energy models, and modified theories of gravity. Such relations will allow observational constraints on $\{ \alpha, \gamma , \alpha_c ,\gamma_c\}$ to be imposed on the parameters that appear in each of these theories. Before moving on, let us now provide some {\it a posteriori} justification for why $\{ \alpha, \gamma , \alpha_c ,\gamma_c\}$ should be functions of time only. From the derivation of equations (\ref{fincon}) and (\ref{accgen2}) one can immediately see that any spatial dependence in either $\alpha$ or $\gamma$ would have resulted in sources proportional to $\langle \gamma \rho_M \rangle$ and $\langle \alpha \rho_M \rangle$ in the emergent Friedmann equations. This would mean that any situation where $\alpha$ or $\gamma$ have spatial dependence should be expected to result in strong cosmological back-reaction, so that the formation of structure would have a large effect on the background expansion. {This is because $\alpha$ and $\gamma$ are expected to be related to the local distribution of mass. The integrated quantities $\langle \gamma \rho_M \rangle$ and $\langle \alpha \rho_M \rangle$ would therefore be non-linear functions, and their precise value would depend on how matter is clustered. 
Spatial dependence of this type would modify the standard dust-like terms in the Friedmann equations.} {So, while one would still have a consistent cosmology, the precise rate of expansion would no longer be insensitive to the distribution of the mass of objects.} This would make the use of FLRW solutions, as a model to interpret observations, questionable, at best. Furthermore, if $\alpha_c$ or $\gamma_c$ had spatial dependence, then equations (\ref{np1}) and (\ref{np2}) would have had an additional source on their right-hand sides. This would mean that observations used to interpret $\hat{\Phi}$ and $\hat{\Psi}$ may not be directly linked to the mass density, and that one could (for example) have lensing of light in a situation where the matter is perfectly homogeneous. None of these outcomes are desirable, and it seems to us that they can only be avoided if $\{ \alpha, \gamma , \alpha_c ,\gamma_c\}$ do not vary in space. We will see in the following section that simple dark energy models and conservative theories of modified gravity do, in fact, obey these expectations. \section{Worked examples} \label{examples} In this section we will investigate how specific example theories of gravity can be incorporated into the formalism described above. For each theory we will calculate the value of the set of parameters $\{ \alpha, \gamma, \alpha_c, \gamma_c\}$, using the weak-field and slow-motion limit of the theory. We will then use the method outlined in Section \ref{sec2} to determine the emergent cosmological expansion for each theory, by using the appropriate set of junction conditions. This will give a set of Friedmann-like equations that govern the emergent cosmological expansion, and which can be compared to the analogous equations that one finds when considering the actual FLRW solutions for each of the theories under consideration. The purpose of this is two-fold.
Firstly, it shows that the method used in Section \ref{cosmo} does faithfully represent the perturbed Friedmann solutions of a wide class of modified theories of gravity. Secondly, it confirms that the effect of non-linear structure on the large-scale properties of the cosmology can be neglected at leading order in perturbation theory. This latter property is required if we are to make any sensible link between weak-field gravity and FLRW cosmology. Our first worked example will be general dark energy models in Einstein's theory. As sub-cases of this we look at simple quintessence dark energy models with a minimally coupled scalar field, as well as the standard $\Lambda$CDM model. We then consider scalar-tensor and vector-tensor theories of gravity as further worked examples. These two classes of theories require additional junction conditions for the additional degrees of freedom that they contain. This is the case because theories in which the field equations contain at most second-order derivatives of the fundamental fields should generically be expected to obey junction conditions that imply the smoothness and continuity of each of these fields. For Einstein's theory, this just corresponds to equations (\ref{metjunc1}) and (\ref{metjunc2}), as the metric is the only dynamical degree of freedom in the theory. For modified theories of gravity, the extra degrees of freedom must satisfy a similar set of conditions. \subsection{Dark energy models} Let us first consider a general dark energy model where a dark fluid is minimally coupled to the metric. The gravitational theory in this case is still given by Einstein's field equations, \begin{eqnarray} R_{ab} &= 8\pi G \left( T_{ab} - \frac{1}{2} T g_{ab} \right) \, , \end{eqnarray} where $T_{ab}= T_{M ab} +T_{I ab}$, and $T_{M ab}$ and $T_{I ab}$ are the energy-momentum tensors of non-relativistic matter and the dark fluid, respectively. 
Using the metric from equation \eref{weakfield}, the Poisson equations we obtain for the gravitational potentials in the weak-field limit are then given by \begin{eqnarray} \nabla^2 \Phi = - 4\pi G\rho_{M} - 4\pi G(\rho_{I} + 3p_{I}) \, , \label{de1} \\ \nabla^2 \Psi = - 4\pi G \rho_{M} - 4\pi G \rho_{I} \, . \label{de2} \end{eqnarray} This immediately gives the PPN parameters as \begin{eqnarray} \alpha = \gamma =1 \, , \end{eqnarray} which are, of course, the usual values of these parameters in Einstein's theory. Whenever $\alpha = \gamma =1$ we can use equations \eref{fincon}, \eref{accgen2} and \eref{addcon1} to find the consistency relations \begin{eqnarray} \alpha_{c} + 2\gamma_{c} + \frac{d \gamma_{c}}{d \ln a} = 0 \label{grcon} \ , \\ 2\alpha_{c} - 2\gamma_{c} = 6 \dot{H} + 9H^2 + \frac{3k}{a^2} \ , \end{eqnarray} where $H = \dot{a} /{a}$ is the Hubble rate. These equations must be obeyed by both $\alpha_c$ and $\gamma_c$. For the field equations given in (\ref{de1}) and (\ref{de2}) we find \begin{eqnarray} \alpha_{c} = - 4 \pi G (\rho_{I} + 3 p_{I}) \, , \ \\ \gamma_{c} = -4\pi G \rho_{I} \ . \end{eqnarray} Equations \eref{fincon} and \eref{accgen2} can then be used to write \begin{eqnarray} \frac{\dot{a}^2}{a^2} + \frac{k}{a^2} = \frac{8\pi G \gamma}{3}\avg{\rho_{M}} + \frac{8\pi G }{3}\rho_I \ , \\[5pt] \frac{\ddot{a}}{a} = -\frac{4\pi G \alpha}{3}\avg{\rho_{M}} - \frac{4 \pi G}{3} \left( \rho_I + 3 p_I \right) \ . \end{eqnarray} These are identical to the equations for an FLRW solution to Einstein's equations with a barotropic fluid. The consistency between these equations and the FLRW equations of the same theory shows that our PPNC construction works for general relativity with general barotropic fluids.
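The consistency relation \eref{grcon} can be checked directly from these dark-fluid values of $\alpha_c$ and $\gamma_c$, once the continuity equation \eref{continuity1} is imposed. A minimal symbolic sketch (our own verification, not from the text; `sympy` assumed available):

```python
import sympy as sp

# Check that alpha_c = -4 pi G (rho_I + 3 p_I) and gamma_c = -4 pi G rho_I
# satisfy  alpha_c + 2 gamma_c + d(gamma_c)/d(ln a) = 0  once the fluid
# continuity equation is used.
t, G = sp.symbols('t G')
H = sp.Function('H')(t)
rho_I = sp.Function('rho_I')(t)
p_I = sp.Function('p_I')(t)

alpha_c = -4*sp.pi*G*(rho_I + 3*p_I)
gamma_c = -4*sp.pi*G*rho_I

relation = alpha_c + 2*gamma_c + sp.diff(gamma_c, t)/H
# impose the continuity equation rho_I' = -3 H (rho_I + p_I)
relation = relation.subs(sp.diff(rho_I, t), -3*H*(rho_I + p_I))
assert sp.simplify(relation) == 0
```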
If we specialize further, to the case of a quintessence field \cite{quintessence}, then we have that the energy density and pressure are given by $\rho_{I} = \frac{1}{2} \dot{\phi}^2 + V(\phi)$ and $p_{I} = \frac{1}{2} \dot{\phi}^2 - V(\phi)$, {where $\dot{\phi} = d \phi / d \hat{t} \sim O(\epsilon)$ and $V(\phi) \sim O(\epsilon^2)$}. This gives \begin{eqnarray} \alpha_{c} = - 8 \pi G \left( \dot{\phi}^2 - V(\phi) \right) \, , \ \\ \gamma_{c} = -4\pi G \bigg(\frac{1}{2} \dot{\phi}^2 + V(\phi)\bigg) \ , \end{eqnarray} where $\phi$ is the minimally-coupled scalar field and $V(\phi)$ is the potential of that field. We can now use equations \eref{fincon} and \eref{accgen2} to write the emergent cosmological expansion as \begin{eqnarray} \frac{\dot{a}^2}{a^2} + \frac{k}{a^2} = \frac{8\pi G \gamma}{3}\avg{\rho_{M}} + \frac{8\pi G }{3}\bigg(\frac{1}{2} \dot{\phi}^2 + V(\phi) \bigg)\ , \\[5pt] \frac{\ddot{a}}{a} = -\frac{4\pi G \alpha}{3}\avg{\rho_{M}} - \frac{8 \pi G}{3} \left( \dot{\phi}^2 - V(\phi) \right) \ . \end{eqnarray} These are again identical to the equations for an FLRW solution to Einstein's equations with a minimally coupled quintessence field. The only extra equation we get in this case is the propagation equation for the scalar field: \begin{eqnarray} \ddot{\phi} = - 3 \frac{\dot{a}}{a}\dot{\phi} -\frac{dV(\phi)}{d\phi} \ , \end{eqnarray} which can be derived from the continuity equation \eref{continuity1}. This shows our parameterization is consistent with quintessence models of dark energy. It must therefore also be consistent with the $\Lambda$CDM model, as this just corresponds to the case where both $\phi$ and $V(\phi)$ are constant. In this case we can set $\Lambda = 8\pi G V(\phi)$, and our parameters reduce to $\alpha_{c} = \Lambda$ and $\gamma_{c} =-\frac{\Lambda}{2}$. The acceleration and constraint equations then reduce to the Friedmann equations of a $\Lambda$CDM universe.
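The derivation of the scalar field propagation equation from the continuity equation \eref{continuity1} can be sketched symbolically (our addition; `sympy` assumed available). Substituting $\rho_I$ and $p_I$ for the quintessence field, the continuity equation factors as $\dot{\phi}\,(\ddot{\phi} + 3H\dot{\phi} + V'(\phi))$, so the propagation equation follows wherever $\dot{\phi} \neq 0$:

```python
import sympy as sp

# Check that the FLRW continuity equation for the quintessence fluid
# factors into phi-dot times the scalar field propagation equation.
t = sp.symbols('t')
H = sp.Function('H')(t)
phi = sp.Function('phi')(t)
V = sp.Function('V')

rho_I = sp.Rational(1, 2)*sp.diff(phi, t)**2 + V(phi)
p_I = sp.Rational(1, 2)*sp.diff(phi, t)**2 - V(phi)

continuity = sp.diff(rho_I, t) + 3*H*(rho_I + p_I)
scalar_eq = sp.diff(phi, t)*(sp.diff(phi, t, 2) + 3*H*sp.diff(phi, t)
                             + sp.diff(V(phi), phi))
assert sp.simplify(continuity - scalar_eq) == 0
```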
Our parameterization therefore also works for the standard $\Lambda$CDM model. \subsection{Scalar-tensor theories of gravity} Let us now turn our attention to a general class of scalar-tensor theories of gravity. These theories are some of the simplest generic modifications that one can make to general relativity, and involve the addition of only one non-minimally coupled scalar field, $\phi$. In order to fit into the formalism above, we choose to work in the Jordan frame where energy-momentum is covariantly conserved. It then immediately follows that the worldlines of test particles are geodesic \cite{Will, Tim3}. The Lagrangian for the class of theories we wish to consider is given by \begin{equation} \label{Lst} L =\frac{1}{16\pi G}\bigg[\phi R - \frac{\omega(\phi)}{\phi} g^{ab}\phi_{; a} \phi_{; b} - 2\phi\Lambda(\phi)\bigg] + L_{m}(\psi, g _{ab}) \ , \end{equation} so that the effective gravitational constant $G_{\rm eff}$, as determined by local weak-field experiments, is modified by the space-time varying scalar field $\phi(t, x^{\mu})$. The semicolons denote covariant derivatives with respect to the metric $g_{ab}$, and $\omega(\phi)$ and $\Lambda(\phi)$ are general functions of $\phi$. Finally, $\psi$ denotes the matter fields. This class of theories reduces to Brans-Dicke theory when $\Lambda=0$ and $\omega$ is a constant \cite{Brans}. We recover a $\Lambda$CDM model when $\omega \to \infty$, $\omega' /\omega^2 \to 0$ and $\Lambda$ is a constant.
The field equations can be determined from the Lagrangian in (\ref{Lst}) using variational principles, and can be manipulated into the form \begin{eqnarray} \hspace{-20pt} \phi R_{ab} = 8\pi G \left( T_{ab} - \frac{1}{2} g_{a b} T \right) + g_{a b} \bigg( \frac{1}{2} \square \phi + \phi\Lambda(\phi) \bigg) + \frac{\omega(\phi)}{\phi} \phi_{; a} \phi_{; b} + \phi_{; a b} \, ,\label{Riccifieldscalar} \end{eqnarray} with a propagation equation for the scalar field given by \begin{equation} \hspace{-20pt} (2\omega(\phi) +3 ) \square \phi = 8\pi G T - \omega'(\phi) g^{cd}\phi_{; c} \phi_{; d} - 2\phi\Lambda(\phi) + 2\phi^2\Lambda'(\phi) \, . \label{phimatter} \end{equation} In these equations $T_{ab} = T_{M ab} + \sum_{I} T_{I ab}$ is the sum of the energy-momentum tensors of the non-relativistic matter and any non-interacting barotropic fluids that may be present. We have also written $\omega'(\phi) = d\omega(\phi)/d\phi$ and $\Lambda'(\phi) = d\Lambda(\phi)/d\phi$, and used $\square$ to denote the covariant d'Alembertian operator. The first thing to do, when considering the post-Newtonian limit of these theories, is to expand the scalar field $\phi$. We do this in the following way \begin{equation} \phi = \bar{\phi} + \delta \phi + O(\epsilon^4) \, , \end{equation} where $\bar{\phi} \sim \epsilon^0$ and $\delta \phi \sim \epsilon^2$. This is so far the same as the treatment of this field in the PPN formalism. However, we now note that the lowest-order field equations give \begin{equation} \bar{\phi}_{,\alpha} = 0 \qquad {\rm or, equivalently,} \qquad \bar{\phi}=\bar{\phi}(t) \, . \end{equation} This means that the lowest-order part of $\phi$ can be dependent on time, but not on spatial position. At this point in the standard PPN formalism one assumes that $\bar{\phi}$ is effectively constant (i.e. not varying in space or time). 
While this is likely to be a very good approximation in the Solar System, it is unlikely to be valid on the scales we wish to consider in cosmology. Indeed, we will find that we must allow $\bar{\phi}$ to be a function of time in order for the emergent cosmological expansion to match the behaviour predicted by the Friedmann equations. From now on we will refer to $\bar{\phi}(t)$ as the ``background'' value of the scalar field, and we will suppress its argument. The perturbation $\delta \phi=\delta \phi (x^\alpha,t)$ is dependent on both position in space and time, as usual. Using the weak-field metric from equation \eref{weakfield}, and the field equations \eref{Riccifieldscalar}-\eref{phimatter}, we can now write a set of Poisson equations for the gravitational potentials. They are given by equations of the form given in (\ref{Phigen}) and (\ref{Psigen}), with the parameter values \begin{eqnarray} \alpha(t) =\bigg(\frac{2\omega + 4}{2 \omega + 3}\bigg) \frac{1}{ \bar{\phi}}\, , \label{alphascalartensor}\\[5pt] \gamma(t) = \bigg(\frac{2\omega + 2}{2 \omega + 3}\bigg) \frac{1}{ \bar{\phi}}\, . \label{gammascalartensor} \end{eqnarray} These are exactly the same expressions that one derives in the standard PPN formalism \cite{Will}, except that they are now functions of time. The fact that local gravitational experiments determine the present day value of Newton's constant to be given by $G$ then requires \begin{equation} \alpha (t_0) = 1 \qquad {\rm or, equivalently,} \qquad \bar{\phi}(t_0) = \bigg(\frac{2\omega + 4}{2 \omega + 3}\bigg) \, , \end{equation} where $t_0$ denotes the present time. This provides a boundary condition on the function $\alpha(t)$, which is now generically expected to be non-constant in time. It also allows us to write the present day value of $\gamma$ as \begin{equation} \gamma(t_0) = \bigg(\frac{\omega + 1}{\omega + 2}\bigg) \, , \end{equation} which is the usual value used in post-Newtonian gravitational experiments.
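These boundary conditions can be checked with a few lines of computer algebra (our addition, not part of the paper; `sympy` assumed available): imposing $\alpha(t_0)=1$ on \eref{alphascalartensor} fixes $\bar{\phi}(t_0)$, substituting this into \eref{gammascalartensor} recovers the familiar Brans-Dicke value of $\gamma$, and the corresponding gravitational slip is $\zeta = 1 - \alpha/\gamma = -1/(\omega+1)$:

```python
import sympy as sp

# Verify the present-day values of phi_bar, gamma and the slip zeta that
# follow from alpha(t0) = 1 for the scalar-tensor parameter expressions.
w, phibar = sp.symbols('omega phibar', positive=True)

alpha = (2*w + 4)/(2*w + 3)/phibar
gamma = (2*w + 2)/(2*w + 3)/phibar

phibar0 = sp.solve(sp.Eq(alpha, 1), phibar)[0]
assert sp.simplify(phibar0 - (2*w + 4)/(2*w + 3)) == 0
assert sp.simplify(gamma.subs(phibar, phibar0) - (w + 1)/(w + 2)) == 0
# the slip is independent of phi_bar: zeta = 1 - alpha/gamma = -1/(w+1)
assert sp.simplify(1 - alpha/gamma + 1/(w + 1)) == 0
```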
One may note that in our case this is only a boundary condition on $\gamma(t)$, which is also generically expected to be a non-constant function of time. From equations (\ref{Phigen}) and (\ref{Psigen}) we can also read off the values of the cosmological parameters $\alpha_c$ and $\gamma_c$. These are given by \begin{eqnarray} \fl \alpha_{c}(t) &=& - \bigg(\frac{2\omega + 4}{2 \omega + 3}\bigg) \sum_{I} \frac{4\pi G\rho_{I}}{\bar{\phi}} + \bigg(\frac{2\omega + 2}{2 \omega + 3}\bigg)\bigg(-\sum_{I}\frac{12\pi Gp_{I}}{\bar{\phi}} + \Lambda(\bar{\phi}) \bigg) \nonumber \\ &&\qquad - \frac{\omega(\bar{\phi})}{\bar{\phi}^2}\dot{\bar{\phi}}^2 - \frac{\ddot{\bar{\phi}}}{\bar{\phi}} + \bigg( \frac{1}{2\omega + 3} \bigg) \left(\frac{\omega' \dot{\bar{\phi}}^2}{2\bar{\phi}} + \bar{\phi}\Lambda' (\bar{\phi})\right) \, , \\[5pt] \fl \gamma_{c}(t) &=& -\bigg(\frac{2\omega + 2}{2 \omega + 3}\bigg) \sum_{I} \frac{4\pi G\rho_{I}}{\bar{\phi}} -\bigg(\frac{1}{4 \omega + 6}\bigg)\bigg(\sum_{I}\frac{24\pi G p_{I}}{\bar{\phi}} + (2 \omega+1) \Lambda(\bar{\phi}) \bigg) \nonumber \\ &&\qquad - \frac{\omega(\bar{\phi})}{4\bar{\phi}^2}\dot{\bar{\phi}}^2 - \frac{\ddot{\bar{\phi}}}{2\bar{\phi}} -\bigg(\frac{1}{2 \omega + 3}\bigg)\bigg(\frac{\omega'}{2\bar{\phi}}\dot{\bar{\phi}}^2+ \bar{\phi} \Lambda'(\bar{\phi}) \bigg) \, . \label{gcST} \end{eqnarray} These equations have no counterparts in the standard PPN formalism, as they are neglected in that case. However, it can be seen that if $\bar{\phi}$ is a function of $t$, or if barotropic fluids or a scalar field potential are present, then they are not equal to zero. They can also not be neglected on cosmological scales, as we will see below. Finally, one may note that in this case the potential $\Lambda(\bar{\phi}) \sim O(\epsilon^2)$ is not the same as a non-interacting fluid with $p_{I} = -\rho_{I}$.
The only other weak-field equation in this theory, other than equations (\ref{Phigen}) and (\ref{Psigen}), is the propagation equation for the scalar field. This is given by \begin{eqnarray} \hspace{-50pt} \nabla^2 \delta \phi = \frac{1}{2\omega + 3}\bigg(\omega' \dot{\bar{\phi}}^2 -8\pi G \rho_{M} -8\pi G \sum_{I} (\rho_{I} - 3p_{I}) - 2\bar{\phi}\Lambda(\bar{\phi}) + 2\bar{\phi}^2\Lambda'(\bar{\phi})\bigg) + \ddot{\bar{\phi}} \, . \label{nabladeltaphi} \end{eqnarray} {One may note that the terms responsible for screening mechanisms are absent at this order, due to the post-Newtonian expansion we have deployed. They should, however, be expected to appear at higher orders.} In order to determine the cosmological equations, we now need to know the appropriate junction conditions for $\phi$. These are given by \begin{eqnarray} \bigg[\phi \bigg]^{(+)}_{(-)} =&0 \, \label{scalarjunc} \qquad {\rm and} \qquad \bigg[\mathcal{L}_{n} \phi \bigg]^{(+)}_{(-)} = 0 \, , \end{eqnarray} which ensure the smoothness and continuity of the scalar field $\phi$ at the boundary of the region of space we are considering. For the extrinsically flat boundaries we consider here, these equations give $\mathcal{L}_{n} \phi = 0$, which can be expanded to obtain \begin{eqnarray} \mathbf{n} \cdot \nabla \delta \phi |_{x=X} = - \dot{a} \dot{\bar{\phi}} \hat{X}_{0} +O(\epsilon^4) \, , \label{scalarjuncfin} \end{eqnarray} where $\hat{X}_{0}$ is the constant position of the boundary in the expanding coordinate system. Integrating equations (\ref{Phigen}), (\ref{Psigen}) and (\ref{nabladeltaphi}) over our region of space, using Gauss' theorem and equation (\ref{scalarjuncfin}), then gives the cosmological expansion equations for a general scalar-tensor theory of gravity.
These are given by \begin{eqnarray} \hspace{-20pt}\frac{\dot{a}^2}{a^2} +\frac{k}{a^2} = \frac{8\pi G}{3\bar{\phi}}\avg{\rho_{M}} +\frac{8\pi G }{3\bar{\phi}}\sum_{I}\rho_{I} + \frac{\omega(\bar{\phi})}{6\bar{\phi}^2}\dot{\bar{\phi}}^2 - \frac{\dot{\bar{\phi}} \dot{a}}{\bar{\phi} a} + \frac{\Lambda(\bar{\phi})}{3} \, , \label{confinscalar} \end{eqnarray} and \begin{eqnarray} \hspace{-20pt}\frac{\ddot{a}}{a} = -\bigg(\frac{\omega+3}{6 \omega + 9}\bigg)\frac{8\pi G }{\bar{\phi}}\avg{\rho_{M}} -\bigg(\frac{\omega+3}{6 \omega + 9}\bigg)\frac{8\pi G }{\bar{\phi}}\sum_{I}\rho_{I} -\frac{8\pi G}{\bar{\phi}}\sum_{I}p_{I} \bigg(\frac{\omega}{2\omega + 3}\bigg) \nonumber \\ \qquad - \frac{\omega(\bar{\phi})}{3\bar{\phi}^2}\dot{\bar{\phi}}^2 + \frac{\dot{\bar{\phi}} \dot{a}}{\bar{\phi} a} + \Lambda(\bar{\phi}) \bigg(\frac{2\omega}{6 \omega + 9}\bigg) + \frac{1}{2\omega + 3}\bigg( \frac{\omega'}{2\bar{\phi}}\dot{\bar{\phi}}^2 +\Lambda'(\bar{\phi}) \bigg) \, , \label{accfinscalar} \end{eqnarray} and \begin{eqnarray} \hspace{-50pt} \frac{\ddot{\bar{\phi}}}{\bar{\phi}} = \frac{1}{2\omega + 3}\bigg(\frac{8\pi G}{\bar{\phi}}\bigg(\avg{\rho_{M}} + \sum_{I} (\rho_{I} - 3p_{I})\bigg) -\frac{\omega' \dot{\bar{\phi}}^2}{\bar{\phi}} + 2\Lambda(\bar{\phi}) - 2\bar{\phi} \Lambda'(\bar{\phi})\bigg) - 3 \frac{\dot{a} \dot{\bar{\phi}}}{a \bar{\phi}} \, . \label{STacc} \end{eqnarray} Equations \eref{confinscalar}-\eref{STacc} are identical to the standard FLRW equations we expect to obtain for scalar-tensor theories of gravity \cite{Tim3, stcheck1}, as well as corresponding precisely to the parameterized equations \eref{fincon} and \eref{accgen2}. The corresponding first-order quasi-static cosmological perturbations are also given precisely by equations (\ref{np1}) and (\ref{np2}), with $\alpha$ and $\gamma$ given by equations (\ref{alphascalartensor}) and (\ref{gammascalartensor}). 
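As a simple limit check (our addition, not from the text; `sympy` assumed available), one can verify that the acceleration equation \eref{accfinscalar} reduces to the general relativistic result $\ddot{a}/a = -(4\pi G/3)(\rho + 3p) + \Lambda/3$ when $\omega \to \infty$ with $\bar{\phi} = 1$, $\dot{\bar{\phi}} = 0$ and constant $\omega$ and $\Lambda$; for brevity, $\avg{\rho_M}$ and $\sum_I \rho_I$ (which carry the same coefficient) are merged into a single density $\rho$:

```python
import sympy as sp

# GR limit of the scalar-tensor acceleration equation: only the terms that
# survive for phi_bar = 1, phi_bar_dot = 0, omega' = 0 and Lambda' = 0 are kept.
w, G, rho, p, Lam = sp.symbols('omega G rho p Lambda', positive=True)

acc_st = (-(w + 3)/(6*w + 9)*8*sp.pi*G*rho     # matter and fluid densities
          - 8*sp.pi*G*p*w/(2*w + 3)            # fluid pressure
          + Lam*2*w/(6*w + 9))                 # potential term
acc_gr = -sp.Rational(4, 3)*sp.pi*G*(rho + 3*p) + Lam/3
assert sp.simplify(sp.limit(acc_st, w, sp.oo) - acc_gr) == 0
```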
{One may note that at this order of approximation, and with the assumptions we have made, we find no Yukawa potentials. Again, the terms responsible for these in massive scalar-tensor theories should be expected to appear at higher orders.} This shows our parameterization produces both the correct cosmological expansion, and the correct first-order perturbations, for this class of scalar-tensor theories of gravity. It also shows that the parameterized framework presented in Section \ref{cosmo} is a very compact way of presenting the cosmological dynamics. \subsection{Vector-tensor theories of gravity} In this subsection we will consider a general class of vector-tensor theories of gravity. These theories have a time-like vector field, $A^{a}$, that is non-minimally coupled to gravity, and whose evolution equations are linear and at most second order in derivatives \cite{Will}. Their Lagrangian is given by \cite{Nord2,Nord3,Nord1} \begin{equation} \hspace{-50pt} L =\frac{1}{16\pi G}\bigg[ R + \omega A_{a}A^{a} R + \eta A^{a} A^{b} R_{ab} - \epsilon F^{ab} F_{ab} + \tau A_{a;b} A^{a;b} \bigg] + L_{m}(\psi, g _{ab}) \, , \label{SVL} \end{equation} where $A^{a}$ is a dynamical time-like vector field, and the 2-form $F_{ab}$ is defined by $F_{ab} \equiv A_{b;a} - A_{a;b}$. The parameters $\omega, \eta, \epsilon$ and $\tau$ in this Lagrangian are all constants, and $\psi$ denotes the matter fields present in the theory. We could also have included a term dependent on $A_{\mu} A^{\mu}$ in (\ref{SVL}), but this would behave in the same way as the $\Lambda (\phi)$ term in scalar-tensor theories of gravity and would needlessly complicate the situation. 
When the action obtained from equation (\ref{SVL}) is varied with respect to the metric, the field equations we obtain are given by \begin{equation} G_{ab} + \omega \Theta_{ab}^{(\omega)} + \eta\Theta_{ab}^{(\eta)} + \epsilon \Theta_{ab}^{(\epsilon)} + \tau\Theta_{ab}^{(\tau)} = 8\pi GT_{ab} \ , \label{Gfieldvector} \end{equation} where $G_{ab} = R_{ab} - \frac{1}{2}g_{ab}R$ is the Einstein tensor, $T_{ab} = T_{M ab} + \sum_{I} T_{I ab}$ is the total energy-momentum tensor (including both matter and non-interacting fluids), and the $\Theta$'s are given by \begin{eqnarray} \hspace{-50pt} \Theta_{ab}^{(\omega)} &= A_{a} A_{b} R + A^2 R_{a b} - \frac{1}{2} g_{ab} A^2 R - (A^2)_{; ab} + g_{ab} (A^2)_{;c}^{\ ; c} \, , \label{SV1} \\ \hspace{-50pt} \Theta_{ab}^{(\eta)} &= 2 A^{c} A_{(a} R_{b) c} - \frac{1}{2} g_{ab} A^{c} A^{d} R_{c d} - (A^{c}A_{(a})_{;b)c} + \frac{1}{2} (A_{a} A_{b})_{;c}^{\ ; c} + \frac{1}{2} g_{ab} (A^{c} A^{d})_{; c d} \, , \\ \hspace{-50pt} \Theta_{ab}^{(\epsilon)} &= - 2(F^{c}_{\ a} F_{b c} - \frac{1}{4} g_{ab} F^{c d} F_{cd}) \, , \\ \hspace{-50pt} \Theta_{ab}^{(\tau)} &= A_{a;c} A_{b}^{\ ; c} + A_{c ; a} A^{c}_{\ ; b} - \frac{1}{2} g_{ab} A_{c;d} A^{c;d} + (A^{c} A_{(a;b)} - A^{c}_{; (a} A^{}_{b)} - A^{}_{(a} A_{b)}^{\ ; c} )_{; c} \, , \end{eqnarray} where $A^2 = A^{a} A_{a}$. The field equation obtained by varying the action from equation (\ref{SVL}) with respect to the vector field $A_{a}$ is given by \begin{equation} \epsilon F^{ab}_{ \quad ; b} + \frac{1}{2} \tau A^{a ; b}_{ \quad ; b} - \frac{1}{2} \omega A^{a} R - \frac{1}{2} \eta A^{b} R^{a}_{\ b} = 0 \ . \label{Afield} \end{equation} The field equations (\ref{Gfieldvector}) - (\ref{Afield}) give the full set of field equations for the theories we wish to consider in this subsection. Let us now expand the components of the vector field $A_{a}$, in the post-Newtonian limit. 
For this we write \begin{eqnarray} A_t = \bar{A}_{t} + \delta A_{t} + O(\epsilon^4) \, , \\ A_{\mu} = \delta A_{\mu} + O(\epsilon^3) \, , \end{eqnarray} where $\bar{A}_{t} \sim \epsilon^0$, $\delta A_{\mu} \sim \epsilon^1$, and $\delta A_{t} \sim \epsilon^2$. The reader may note that we have taken the leading-order perturbation to the spatial component of the vector field to contribute at $O(\epsilon)$, which differs from the standard treatment in the PPN formalism, where the lowest-order part of this component is usually taken to be $O(\epsilon^3)$. We find that this is necessary in order to reproduce the correct large-scale expansion. Using the field equations \eref{Gfieldvector} - \eref{Afield} we find that the leading-order part of the time component of the vector field must obey \begin{equation} A_{t,\alpha} =0 \qquad {\rm or, equivalently,} \qquad \bar{A}_{t} = \bar{A}_{t}(t) \, . \end{equation} This also differs from the standard PPN formalism, which assumes that any time dependence in $A_{t}$ can be neglected at this order. Again, such an assumption is likely to be valid on small scales (such as in the Solar System), but will not generically be valid on cosmological scales. In fact, just as with the scalar field in the previous section, we find that we require $\bar{A}_t$ to be time dependent in order to reproduce the expected large-scale expansion. We will refer to $\bar{A}_t$ as the ``background'' value of $A_t$, and note that $\delta A_{t}$ is expected to be a function of both space and time. Let us now consider the lowest-order field equations that feature $\delta A_{\mu}$. Using the $t\mu$-component of equation \eref{Gfieldvector} and the spatial component of equation \eref{Afield} we find \begin{eqnarray} \tau (\eta + \tau - 4\epsilon)\bar{A}_{t} \delta A_{\mu, \nu \nu} = 0 \, .
\end{eqnarray} This means that if $\tau (\eta + \tau - 4\epsilon) \bar{A}_{t} \neq 0$ (as one should expect in general circumstances), then we must have $\delta A_{\mu, \nu \nu} =0$. We can then see that equation \eref{Afield} implies that $\delta A_{\mu, \mu \nu} =0$, which implies $\delta A_{\mu, \mu} = f(t)$ for some function $f(t)$. In general, the solution for $\delta A_{\mu}$ can therefore be written as \begin{eqnarray} \delta A_{x} = \frac{1}{3} f(t) x + C_{1}(t,y,z) \, , \label{vecsoln1} \\ \delta A_{y} = \frac{1}{3} f(t) y + C_{2}(t,x,z) \, , \\ \delta A_{z} = \frac{1}{3} f(t) z + C_{3}(t,x,y) \, , \end{eqnarray} where $C_{1}$, $C_{2}$ and $C_{3}$ are unknown functions to be determined. At this point it is useful to consider the junction conditions on the vector field $A_a$. For theories with at most two derivatives in the field equations we expect smoothness and continuity to imply the following: \begin{eqnarray} \bigg[A^{\parallel}_{i}\bigg]^{(+)}_{(-)} = 0 \label{vecjun1} \, , \qquad \bigg[A^{\perp} \bigg]^{(+)}_{(-)} = 0 \, , \qquad {\rm and} \qquad \bigg[ (\mathcal{L}_{n}A)_{i} \bigg]^{(+)}_{(-)} = 0 \, , \label{vecjun3} \end{eqnarray} where $A^{\parallel}_{i} \equiv ({\partial x^{a}}/{\partial \xi^{i}}) A_a$ is the component of the vector field that is parallel to the boundary, where $A^{\perp} \equiv n^a A_a$ is the component of the vector field that is perpendicular to the boundary, and where $(\mathcal{L}_{n}A)_{i} \equiv ({\partial x^{a}}/{\partial \xi^{i}}) \mathcal{L}_{n}A_{a}$ is the Lie normal derivative of the vector field projected on the boundary. The $\xi^i$ here refer to a set of coordinates on the boundary of the region of space being considered. Under reflection symmetric boundary conditions, the last two equations in \eref{vecjun3} simplify to $A^{\perp}=0$ and $(\mathcal{L}_{n}A)_{i}=0$. 
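The general solution above, and the divergence condition that sourced it, can be checked symbolically (our addition; `sympy` assumed available, and the homogeneous pieces $C_i$ are set to zero, as the junction conditions will require):

```python
import sympy as sp

# Check that delta A_mu = (1/3) f(t) x^mu satisfies the flat-space
# conditions delta A_{mu,nu nu} = 0 and delta A_{mu,mu} = f(t).
t, x, y, z = sp.symbols('t x y z')
f = sp.Function('f')(t)

dA = [f/3*xi for xi in (x, y, z)]
# Laplacian of each component, summed (each component is linear in one coordinate)
lap = sum(sp.diff(A, xi, 2) for A in dA for xi in (x, y, z))
# spatial divergence of the vector perturbation
div = sum(sp.diff(A, xi) for A, xi in zip(dA, (x, y, z)))
assert sp.simplify(lap) == 0
assert sp.simplify(div - f) == 0
```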
Then, using equations \eref{metjunc1}, \eref{metjunc2}, and \eref{vecjun3}, we find that the value of the $x$-component of the vector field on the boundary should be given by $\delta A_{x}|_{x=X} = - \dot{a} \bar{A}_{t} \hat{X}_{0}$, where $\hat{X}_{0}$ is a constant. From this and equation \eref{vecsoln1} we can infer that $f(t) = -3({\dot{a}}/{a}) \bar{A}_{t}$ and $C_{1}(t,y,z) = 0$. Similar considerations lead to the results $C_{2}(t,x,z) =C_{3}(t,x,y)= 0$, so that we end up with \begin{eqnarray} \delta A_{x} =& - \frac{\dot{a}}{a} \bar{A}_{t} x \, , \\ \delta A_{y} =& - \frac{\dot{a}}{a} \bar{A}_{t} y \, , \\ \delta A_{z} =& - \frac{\dot{a}}{a} \bar{A}_{t} z \, . \label{zvector} \end{eqnarray} These results will be very useful for simplifying a lot of the terms that will occur in the equations below. Using the weak-field metric from equation \eref{weakfield}, and the field equations \eref{Gfieldvector} - \eref{zvector}, we can now write another set of Poisson equations for the gravitational potentials in these theories. They are again given by equations of the form given in (\ref{Phigen}) and (\ref{Psigen}), with the parameter values \begin{eqnarray} \alpha =- \frac{1}{\mathcal{D}} \bigg[ 2 \omega \bar{A}_{t}^2 ( \tau -8 \omega -2 \epsilon )+2 (2 \epsilon -\tau ) \bigg] \ , \label{alphavectortensor}\\[5pt] \gamma = -\frac{1}{\mathcal{D}}\bigg[ 2 \omega \bar{A}_{t}^2 (-2 \eta +\tau -4 \omega +2 \epsilon ) +2 (2 \epsilon -\tau ) \bigg]\ , \label{gammavectortensor} \end{eqnarray} where $\mathcal{D}$ is a function of time, and is given by \begin{eqnarray} \hspace{-20pt} \mathcal {D} =& -\omega \bar{A}_{t}^4 \left(-\eta ^2+4 \eta \omega +\tau ^2-10 \tau \omega +12 \omega ^2+4 \epsilon (\eta -\tau +3 \omega )\right) \nonumber \\ &\qquad +\bar{A}_{t}^2 \left(-\eta ^2+4 \eta \omega +\tau ^2-4 \tau \omega +12 \omega ^2+4 \epsilon (\eta -\tau )\right)+2 \tau -4 \epsilon \, . 
\end{eqnarray} These expressions for $\alpha$ and $\gamma$ are generally functions of time, but reduce to the usual expression in PPN gravity when the time dependence of $\bar{A}_t$ is neglected. As before, the fact that local gravity experiments measure the value of Newton's constant to be $G$ means that we have the boundary condition $\alpha(t_0)=1$, which gives the present day value of $\bar{A}_t=\bar{A}_t(t_0)$. We can again read off the value of the cosmological parameters $\alpha_c$ and $\gamma_c$ from equations (\ref{Phigen}) and (\ref{Psigen}). These are still only functions of time, and are given by \begin{eqnarray} \fl \alpha_{c} = \frac{1}{\mathcal{D}}\bigg[ 8 \pi G \sum_{I} \bigg(\omega \bar{A}_{t}^2 (3 p_{I} (-2 \eta+\tau -4 \omega +2 \epsilon )+\rho_{I} (\tau -8 \omega -2 \epsilon )) +(3 p_{I}+\rho_{I} ) (2 \epsilon -\tau ) \bigg)\nonumber \\ \fl \qquad \qquad -6 \bar{A}_{t}^2 \frac{\ddot{a}}{a} \bigg(\omega \bar{A}_{t}^2 \bigg(-2 \eta ^2-4 \eta \omega +\tau ^2-6 \tau \omega +\epsilon (3 \eta -\tau +6 \omega )\bigg) \nonumber \\ \qquad \qquad \qquad -\tau (\eta +\tau )+\epsilon (\eta +3 \tau +2 \omega )\bigg) \nonumber \\ \fl \qquad \qquad -6 \bar{A}_{t} \dot{\bar{A}}_{t} \frac{ \dot{a}}{a} \bigg(\omega \bar{A}_{t}^2 (-(2 \eta +\tau ) (2\eta - \tau + 4\omega)+\epsilon (5\eta +\tau + 6\omega ))\nonumber \\ \qquad \qquad \qquad -\tau (2 \eta +\tau )+\epsilon (3 \eta +3 \tau +2 \omega )\bigg) \nonumber \\ \fl \qquad \qquad -3 \bar{A}_{t}^2 \frac{\dot{a}^2}{a^2} (-\eta +2 \omega +2\epsilon ) \left(2 \omega \bar{A}_{t}^2 (\eta +\tau )-\tau \right) \nonumber \\ \fl \qquad \qquad +2 \bar{A}_{t}\ddot{\bar{A}}_{t} \bigg(\omega \bar{A}_{t}^2 \left(3 \eta ^2-2 \eta (\tau -6 \omega )+2\omega (6 \omega -\tau )+\epsilon (-3 \eta +\tau -6 \omega )\right) \nonumber \\ \qquad \qquad \qquad - (\epsilon (3 \eta +\tau +6 \omega )-2 \tau (\eta +\omega))\bigg) \nonumber \\ \fl \qquad \qquad +\dot{\bar{A}}_{t}^2 \bigg(2 \omega \bar{A}_{t}^2 \left(3 \eta ^2-3 \eta \tau +12 
\eta \omega +\tau ^2-8 \tau \omega +12 \omega ^2+\epsilon (-3 \eta +\tau -6 \omega )\right) \nonumber \\ \qquad \qquad \qquad + (2 \epsilon -\tau ) (-3 \eta +2 \tau -6 \omega )\bigg) \bigg] \, , \end{eqnarray} and \begin{eqnarray} \fl \gamma_{c} =\frac{1}{4\mathcal{D}}\bigg[ 16 \pi G \sum_{I}\bigg(3 p_{I} \bar{A}_{t}^2 (\eta -\tau +2 \omega ) (-\eta -\tau -2 \omega +4 \epsilon )\nonumber \\ \qquad \qquad \qquad +2 \rho_{I} \bar{A}_{t}^2 \omega (-2 \eta +\tau -4 \omega +2 \epsilon ) +2 \rho_{I} (2 \epsilon -\tau ) \bigg)\nonumber \\ \fl \qquad \qquad -6 \bar{A}_{t}^2 \frac{ \ddot{a}}{a} \bigg(\bar{A}_{t}^2 \bigg(-2 \eta ^3-\eta ^2 (\tau +8 \omega )+2 \eta \left(\tau ^2-4 \tau \omega -4 \omega ^2\right)\nonumber \\ \qquad \qquad \qquad +\tau \left(\tau ^2+2 \tau \omega -12 \omega ^2\right)+4 \epsilon \left(2 \eta ^2-\eta \tau +7 \eta \omega -\tau ^2+6 \omega ^2\right)\bigg) \nonumber \\ \qquad \qquad \qquad -2\left(\tau ^2+2 \epsilon (\eta -2 \tau +2 \omega )\right)\bigg) \nonumber \\ \fl \qquad \qquad +12 \bar{A}_{t} \dot{\bar{A}}_{t} \frac{ \dot{a}}{a} \bigg((\eta - \tau + 2\omega) (2\epsilon +\bar{A}_{t}^2((2\eta + \tau) (\eta + \tau + 2\omega) - 2\epsilon (4\eta + 2\tau + 3\omega) )\bigg)\nonumber \\ \fl \qquad \qquad -3 \bar{A}_{t}^2 \frac{\dot{a}^2}{a^2} \bigg(\bar{A}_{t}^2 \bigg(-2 \eta ^3-\eta ^2 \tau +2 \eta (\tau -2 \omega )^2+\tau \left(\tau ^2+6 \tau \omega -4 \omega ^2\right) \nonumber \\ \qquad \qquad \qquad +\epsilon \left(8\eta ^2-4 \eta (\tau -2 \omega )-4 \tau (\tau +\omega )\right)\bigg) \nonumber\\ \qquad \qquad \qquad +2 (\tau (4 \eta +\tau +4 \omega )-2 \epsilon (2 \eta +3 \tau))\bigg) \nonumber \\ \fl \qquad \qquad +2 \bar{A}_{t}\ddot{\bar{A}}_{t} \bigg(2\tau (\eta +2 \omega -2 \epsilon )- \bar{A}_{t}^2 \bigg(-3 \eta ^3-18 \eta ^2 \omega \nonumber \\ \qquad \qquad \qquad +\eta \left(3 \tau^2+2 \tau \omega -36 \omega ^2\right) +2 \omega \left(\tau ^2+2 \tau \omega -12 \omega ^2\right)\nonumber \\ \qquad \qquad \qquad +4 \epsilon \left(3 \eta 
^2-3 \eta (\tau -4\omega )+\omega (12 \omega -5 \tau )\right)\bigg) \bigg) \nonumber \\ \fl \qquad \qquad +\dot{\bar{A}}_{t}^2 \bigg( \bar{A}_{t}^2 \bigg(6 \eta ^3-3 \eta ^2 (\tau -12 \omega )-2 \eta \left(3 \tau ^2+8 \tau \omega -36 \omega ^2\right) \nonumber \\ \qquad \qquad \qquad - 4 \epsilon \left(6 \eta ^2-9\eta \tau +24 \eta \omega +3 \tau ^2-19 \tau \omega +24 \omega ^2\right) \nonumber \\ \qquad \qquad \qquad +3 \tau ^3-10 \tau ^2 \omega -20 \tau \omega ^2+48 \omega ^3 \bigg)+2 \tau (2 \epsilon -\tau )\bigg) \bigg] \ . \label{gammacvector} \end{eqnarray} Again, these equations do not exist in the standard PPN formalism, as time-dependence of the background fields is neglected in that case. However, it can be seen that if $\bar{A}_t$ is a function of $t$, or if barotropic fluids are present, then they are non-zero. The final weak-field Poisson equation is the propagation equation for $\delta A_{t}$. This is given by \begin{eqnarray} \fl \nabla^2 \delta A_{t} =\frac{1}{\mathcal{D}} \bigg[ 8 \pi G \rho_{M} \bigg(\omega \bar{A}_{t}^3 (\eta -\tau +6 \omega )- \bar{A}_{t} (\eta -\tau -2 \omega )\bigg)\nonumber \\ +8 \pi G \sum_{I} \bigg( \omega \bar{A}_{t}^3 (\rho_{I} (\eta -\tau +6 \omega )+9 p_{I} (\eta -\tau +2 \omega )) \nonumber \\ \qquad \qquad \qquad - \bar{A}_{t} (\rho_{I} (\eta -\tau -2 \omega )+3 p_{I} (\eta -\tau +2 \omega))\bigg) \nonumber \\ +6 \bar{A}_{t} \frac{\ddot{a}}{a} \bigg(\bar{A}_{t}^2 \left(\eta ^2+2 \eta \omega -\tau ^2-2 \eta \epsilon +2 \tau \epsilon \right) + \omega\bar{A}_{t}^4 \left(-3 \eta^2+\eta (\tau -6 \omega ) \right)\nonumber \\ \qquad \qquad \qquad +\omega \bar{A}_{t}^4 \left(2 \tau (\tau -3 \omega )+2 \epsilon (\eta -\tau +3 \omega )\right)+2 \epsilon \bigg) \nonumber \\ +6 \dot{\bar{A}}_{t}\frac{\dot{a}}{a}\bigg(\omega \bar{A}_{t}^4 (2 \epsilon (\eta -\tau +3 \omega )- 3(2 \eta +\tau ) (\eta -\tau +2 \omega )) \nonumber \\ \qquad \qquad \qquad -\bar{A}_{t}^2 (2 \epsilon (\eta -\tau )-(2 \eta+\tau ) (\eta -\tau -2 \omega ))+2 
\epsilon \bigg) \nonumber \\ -3 \bar{A}_{t} \frac{\dot{a}^2}{a^2} \bigg(\omega \bar{A}_{t}^4 \left(\eta ^2+4 \eta \omega -\tau ^2-10 \tau \omega+12 \omega ^2+4 \epsilon (\eta -\tau +3 \omega )\right) \nonumber \\ \qquad \qquad \qquad +\bar{A}_{t}^2 \left(\eta ^2-\eta \tau -8 \eta \omega -12 \omega ^2-4 \eta \epsilon +4 \tau \epsilon \right)+4 \epsilon \bigg) \nonumber \\ +\ddot{\bar{A}}_{t} \bigg(-\omega \bar{A}_{t}^4 \left(9 \eta ^2-10 \eta \tau +36 \eta \omega +\tau ^2-18 \tau \omega+36 \omega ^2\right) \nonumber \\ \qquad \qquad \qquad + \bar{A}_{t}^2 \left(3 \eta ^2-4 \eta (\tau -3 \omega )+\tau ^2-8 \tau \omega +12 \omega ^2\right)+2 \tau \bigg) \nonumber \\ +\dot{\bar{A}}_{t}^2 \bigg( \bar{A}_{t} \left(3 \eta ^2-5 \eta \tau +12 \eta \omega +2 \tau ^2-8 \tau \omega +12 \omega ^2\right) \nonumber \\ \qquad \qquad \qquad -\omega \bar{A}_{t}^3 \left(9 \eta ^2-14 \eta \tau +36 \eta \omega +5 \tau ^2-30 \tau \omega +36 \omega ^2\right)\bigg) \bigg] \, . \label{pertfinal} \end{eqnarray} In this case, taking the time component of the last of the expressions in \eref{vecjun3} gives \begin{eqnarray} \mathbf{n} \cdot \nabla \delta A_{t} |_{x=X} =& \frac{\dot{a}^2}{a} \bar{A}_{t} \hat{X}_{0} - \dot{a} \dot{\bar{A}}_{t} \hat{X}_{0} - \ddot{a} \bar{A}_{t} \hat{X}_{0}\ . \label{vecjunfin} \end{eqnarray} Integrating equations (\ref{Phigen}), (\ref{Psigen}) and (\ref{pertfinal}) over our spatial domain, using Gauss' theorem and equation (\ref{vecjunfin}), then gives the equations for the cosmological evolution of the space-time. Firstly, the constraint equation in these theories is given by \begin{eqnarray} \hspace{-45pt} \frac{\dot{a}^2}{a^2} = -\frac{16 \pi G (\avg{\rho_{M}} + \sum_{I}\rho_{I}) a^2 + \tau a^2 \dot{\bar{A}}_{t}^2 + 6(\eta + 2\omega)\dot{a} a \bar{A}_{t} \dot{\bar{A}}_{t} -6k(1-\omega \bar{A}_{t}^2)}{3 a^2 (-2 + (2\eta + \tau + 2\omega) \bar{A}_{t}^2)} \ . 
\label{confinvect} \end{eqnarray} Next, the acceleration equation is given by \begin{eqnarray} \fl \frac{\ddot{a}}{a} = \frac{8\pi G (\avg{\rho_{M}} + \sum_{I}\rho_{I}) (-2 \tau + (8\eta \tau + \tau ^2 - 12 \eta \omega + 14 \tau \omega - 24 \omega^2)\bar{A}_{t}^2)}{3 (-2 + (2\eta + \tau + 2\omega) \bar{A}_{t}^2) (-2\tau + (-3\eta^2 + \tau^2 + 2\eta(\tau - 6\omega) + 2 \tau \omega - 12 \omega^2)\bar{A}_{t}^2)} \nonumber \\[5pt] \fl \qquad \ \ + \frac{8 \pi G\tau \sum_{I}p_{I}}{ (-2\tau + (-3\eta^2 + \tau^2 + 2\eta(\tau - 6\omega) + 2 \tau \omega - 12 \omega^2)\bar{A}_{t}^2)} \nonumber \\[5pt] \fl \qquad \ \ + \frac{2\dot{\bar{A}}_{t}^2\tau(3\eta-2\tau + 6\omega + (-3\eta^2 + \tau^2 + 2\eta(\tau - 6\omega) + 2 \tau \omega - 12 \omega^2)\bar{A}_{t}^2)}{3 (-2 + (2\eta + \tau + 2\omega) \bar{A}_{t}^2) (-2\tau + (-3\eta^2 + \tau^2 + 2\eta(\tau - 6\omega) + 2 \tau \omega - 12 \omega^2)\bar{A}_{t}^2)} \nonumber \\[5pt] \fl \qquad \ \ +\frac{ 6k (\eta + 2\omega )\bar{A}_{t}^2 (2(\eta+ \tau) \omega \bar{A}_{t}^2 - \tau)}{a^2 (-2 + (2\eta + \tau + 2\omega) \bar{A}_{t}^2) (-2\tau + (-3\eta^2 + \tau^2 + 2\eta(\tau - 6\omega) + 2 \tau \omega - 12 \omega^2)\bar{A}_{t}^2)} \nonumber \\[5pt] \fl \qquad \ \ + \frac{\bar{A}_{t} \dot{\bar{A}}_{t} \dot{a}}{a} \frac{(4\omega-2\tau)}{(-2 + (2\eta + \tau + 2\omega) \bar{A}_{t}^2)} \, . 
\label{accfinvect} \label{flrwacc} \end{eqnarray} Finally, the evolution equation for the background value of the vector-field is given by \begin{eqnarray} \hspace{-40pt} \frac{\ddot{\bar{A}}_{t}}{\bar{A}_{t}}=- \frac{8\pi G (\avg{\rho_{M}} + \sum_{I}\rho_{I}) (\eta + 2\tau - 2\omega) + 24\pi G \sum_{I}p_{I} (\eta + 2\omega) }{(-2\tau + (-3\eta^2 + \tau^2 + 2\eta(\tau - 6\omega) + 2 \tau \omega - 12 \omega^2)\bar{A}_{t}^2)} -\frac{3 \dot{\bar{A}}_{t} \dot{a}}{\bar{A}_{t} a} \nonumber \\[5pt] - \dot{\bar{A}}_{t}^2 \frac{ (-3\eta^2 + \tau^2 + 2\eta(\tau - 6\omega) + 2 \tau \omega - 12 \omega^2)}{(-2\tau + (-3\eta^2 + \tau^2 + 2\eta(\tau - 6\omega) + 2 \tau \omega - 12 \omega^2)\bar{A}_{t}^2)} \nonumber \\[5pt] -k \frac{12 \omega \bar{A}_{t}^2 (\eta +\tau ) - 6 \tau}{a^2(-2\tau + (-3\eta^2 + \tau^2 + 2\eta(\tau - 6\omega) + 2 \tau \omega - 12 \omega^2)\bar{A}_{t}^2)} \ . \end{eqnarray} These three equations are again identical to the Friedmann equations of this class of theories, showing that the emergent expansion proceeds as expected. They are also identical to the parameterized expressions presented in equations (\ref{fincon}) and (\ref{accgen2}), with the appropriate values of $\{\alpha,\gamma,\alpha_c,\gamma_c\}$. Once more, the first-order quasi-static cosmological perturbations are given by equations (\ref{np1}) and (\ref{np2}), this time with $\alpha$ and $\gamma$ given by equations (\ref{alphavectortensor}) and (\ref{gammavectortensor}). This shows that the parameterization we presented in Section \ref{cosmo} is again applicable, even though the equations are much more complicated in this case. This again highlights the highly compact nature of the parameterized expressions presented in Section \ref{cosmo}, and its ability to incorporate theories that fit into the PPN formalism. \section{Conclusions} In this paper we have constructed a parameterization that extends and transforms the PPN formalism for use in cosmology. 
This framework is not simply built in analogy to the PPN formalism, but is actually isometric to it on suitably defined spatial domains (that is, the two systems are equivalent in a physically meaningful sense). The result is a set of parameterized cosmologies that are fully consistent with the standard framework used to constrain gravity in the weak-field, slow-motion limit, and that can be used to test Einstein's theory and its many alternatives on cosmological scales. The advantage of this approach is that consistency with PPN requires the parameters involved to be functions of time only. It also constrains the present-day values of some of these parameters, if local experiments are to measure the correct value of Newton's constant, $G$, and an experimentally acceptable value of the spatial curvature caused by rest mass, $\gamma$. If one allowed for spatial dependence in our parameters then the result would not be compatible with PPN, and should generically be expected to lead to either strong back-reaction or gravity without the presence of rest mass (depending on the parameter in question). Formally, we end up with a generic system of Friedmann equations, and linear-order scalar perturbations in the quasi-static limit, that are valid for any theory of gravity that fits into the PPN approach. Our full set of parameters is given by the functions $\{ \alpha (t), \gamma (t), \alpha_{c} (t),\gamma_{c} (t) \}$. The first two of these reduce to the corresponding PPN parameters when $t=t_0$, and the latter two are new ``cosmological'' parameters that determine the rate of expansion and acceleration in the large-scale cosmology. 
The correspondence with PPN parameters means that cosmological observations can be used to either (i) impose constraints on $\alpha$ and $\gamma$ over cosmologically interesting scales that complement those obtained from isolated astrophysical systems, or (ii) impose the following boundary conditions on the initial values of $\alpha$ and $\gamma$: \begin{equation} \alpha(t_0) = 1 \qquad {\rm and} \qquad \gamma(t_0) = 1 + (2.1 \pm 2.3) \times 10^{-5} \, . \end{equation} The former of these ensures that local gravitational experiments measure the correct value of $G$, and the latter is the experimentally determined value of $\gamma$ from observations of the Shapiro time-delay effect of radio signals from the Cassini spacecraft as they pass by the sun \cite{cassini}. In case (ii), observations at high redshifts could be used to impose constraints on the variation of $G$ as the Universe evolves, by constraining $\alpha(t)$ at times $0<t<t_0$. Observationally, one can constrain the parameters $\{ \alpha (t), \gamma (t), \alpha_{c} (t),\gamma_{c} (t) \}$ with the cosmological probes that are, by now, quite standard in constraining modified theories of gravity. Importantly, however, we allow for the background expansion to be a part of the parameterization. This is required for most minimal modifications to Einstein's theory, including the scalar-tensor and vector-tensor theories considered in this paper, and offers new ways to constrain the underlying theory. We also have equation \eref{addcon1}, which provides a consistency relation between our parameters, and may reduce the number of observables required to constrain our full set of parameters. In terms of specific observables, one could for example use supernova data to constrain the Hubble rate $H=\dot{a}/a$ and the acceleration $\ddot{a}/a$ \cite{nova1, nova2, nova3, nova4, nova5, nova6}. Independent information on the density of baryons and dark matter (e.g. 
from primordial nucleosynthesis) together with information on the spatial curvature of the Universe (e.g. from CMB \cite{CMB} and BAO observations \cite{baos}), should then provide constraints on $\alpha_c(t)$ and $\gamma_c(t)$. Cosmological perturbations, on the other hand, can be used together with observations of the growth rate of structure to determine $\alpha(t)$, and together with observations of weak lensing to determine the combination $\alpha(t)+\gamma(t)$. This is, of course, only a schematic of what is possible, and a large number of other cosmological probes are available to provide additional constraints. In general, we expect there to be more observational probes than parameters in this framework, meaning that the system should be effectively constrained by existing and upcoming data. Of course, there are also certain limitations to our formalism. It does not, for example, apply to many of the more complicated theories of gravity that are now frequently considered in cosmology, as such theories do not always fit into the PPN framework. Such theories may exhibit Yukawa potentials \cite{fR, bimetric} or involve non-perturbative gravity \cite{Chameleon, Vainshtein} in the weak-field regime, neither of which we have considered here. We have also only been concerned with small-scale perturbations, in what is often referred to as the quasi-static limit of cosmological perturbation theory. The inclusion of large-scale perturbations is required to complete the picture, and these may lead to the presence of Yukawa potentials. These subjects will be addressed in future studies, although it has recently been shown that one should generically expect Yukawa potentials to lead to strong back-reaction \cite{pierre1}, and we strongly suspect the same applies to theories that involve non-perturbative screening mechanisms. 
Including more complicated theories, and large-scale perturbations, should therefore be expected to lead to significant complication in the parameterized framework. In this sense, one can consider the PPNC framework we have outlined here as a minimal construction for testing minimal deviations from Einstein's theory. This is sufficient to use tests of gravity from cosmology to constrain conservative theories, as is usual in both Solar System and binary pulsar applications of the PPN formalism. \section*{Acknowledgements} We are grateful to P. Carrilho, J. A. V. Kroon, P. Fleury and S. Imrith for helpful discussions and comments. VAAS and TC both acknowledge support from the STFC. \section*{References}
\subsection{Preliminaries} Consider an undirected graph $\mathcal{G}=\{\mathcal{V},\mathcal{E}\}$ composed of a node set $\mathcal{V}$ of cardinality $|\mathcal{V}|=N$, and an edge set $\mathcal{E}$ connecting nodes. \textit{Graph signal} refers to data/features associated with the nodes of $\mathcal{G}$, denoted by $\mathbf{X} \in \mathbb{R}^{N \times C}$ with $i$th row representing the $C$-dimensional graph signal at the $i$th node of $\mathcal{V}$. To characterize the similarities (and thus the graph structure) among node signals, an \textit{adjacency matrix} $\mathbf{A}$ is defined on $\mathcal{G}$, which is a real-valued symmetric $N \times N$ matrix with $a_{i,j}$ as the weight assigned to the edge $(i,j)$ connecting nodes $i$ and $j$. Formally, the adjacency matrix is constructed from graph signals as follows, \begin{equation} \setlength{\abovedisplayskip}{3pt} \setlength{\belowdisplayskip}{3pt} \mathbf{A} = f(\mathbf{X}), \label{eq:A_X} \end{equation} where $f(\cdot)$ is a linear or non-linear function applied to each pair of nodes to get the pair-wise similarity. For example, a widely adopted function is to nonlinearly construct a $k$-nearest-neighbor ($k$NN) graph from node features \cite{wang2019dynamic,zhang2018graph}. \begin{figure*}[t] \centering \subfigure[Original model]{ \includegraphics[width=0.18\textwidth]{aero.pdf} } \subfigure[Global+Isotropic]{ \includegraphics[width=0.18\textwidth]{global_same.pdf} } \subfigure[Global+Anisotropic]{ \includegraphics[width=0.18\textwidth]{global_different.pdf} } \subfigure[Local+Isotropic]{ \includegraphics[width=0.18\textwidth]{local_same.pdf} } \subfigure[Local+Anisotropic]{ \includegraphics[width=0.18\textwidth]{local_different.pdf} } \caption{\textbf{Demonstration of different sampling (Global or Local) and node-wise \textit{translation} (Isotropic or Anisotropic) methods on 3D point clouds.} Red and blue points represent transformed and non-transformed points, respectively. 
Note that we adopt the wing as a sampled local point set for clear visualization.} \label{fig:sampling_transform} \vspace{-0.2in} \end{figure*} \vspace{-0.05in} \subsection{Graph Signal Transformation} Unlike Euclidean data such as images, graph signals are irregularly sampled, so their transformations are nontrivial to define. To this end, we define a graph transformation on the signals $\mathbf{X}$ as node-wise \textit{filtering} on $\mathbf{X}$. Formally, suppose we sample a graph transformation $\mathbf{t}$ from a transformation distribution $\mathcal{T}_g$, \textit{i.e.}, $\mathbf{t} \sim \mathcal{T}_g$. Applying the transformation to graph signals $\mathbf{X}$ that are sampled from the data distribution $\mathcal{X}_g$, \textit{i.e.}, $\mathbf{X} \sim \mathcal{X}_g$, leads to the filtered graph signal \begin{equation} \setlength{\abovedisplayskip}{3pt} \setlength{\belowdisplayskip}{3pt} \tilde{\mathbf{X}}=\mathbf{t}(\mathbf{X}). \label{eq:graph_signal_t} \end{equation} The filter $\mathbf t$ is applied to each node individually, and can be either node-invariant or node-variant; in other words, each node signal may undergo a different transformation under $\mathbf t$. For example, for a translation $\mathbf t$, a distinct translation can be applied to each node. We call the graph transformation isotropic (anisotropic) if it is node-invariant (node-variant). Consequently, the adjacency matrix of the transformed graph signal $\tilde{\mathbf{X}}$ transforms equivariantly according to \eqref{eq:A_X}: \begin{equation} \setlength{\abovedisplayskip}{3pt} \setlength{\belowdisplayskip}{3pt} \tilde{\mathbf{A}} = f(\tilde{\mathbf{X}}) = f(\mathbf{t}(\mathbf{X})), \end{equation} which transforms the \textit{graph structures}, as the edge weights are also filtered by $\mathbf{t}(\cdot)$. Under this definition, there exists a wide spectrum of graph signal transformations. 
Examples include affine transformations (translation, rotation and shearing) on the locations of nodes (\textit{e.g.}, 3D coordinates in point clouds), and graph filters such as low-pass filtering of graph signals by the adjacency matrix \cite{sandryhaila2013discrete}. \vspace{-0.05in} \subsection{Node-wise Graph Signal Transformation} As mentioned above, in this paper we focus on \textit{node-wise} graph signal transformations, {\it i.e.}, each node has its own transformation, applied either isotropically or anisotropically. We seek to learn graph representations through the node-wise transformations by revealing how different parts of the graph structure change globally and locally. This brings two distinct advantages. \begin{itemize} \item The node-wise transformations allow us to use node sampling to study different parts of graphs under various transformations. \item By decoding the node-wise transformations, we are able to learn representations of individual nodes. Moreover, these node-wise representations not only capture the local graph structures under these transformations, but also contain global information about the graph, as nodes are sampled into different groups over the iterations of training. \end{itemize} Next, we discuss the formulation of learning graph transformation equivariant representations by decoding the node-wise transformations via a graph-convolutional encoder and decoder. 
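To make these definitions concrete, the following is a minimal numpy sketch (not the paper's implementation; function names, sizes, and the choice of $k$ are illustrative) of a $k$NN-based $f(\cdot)$ and a node-wise translation $\mathbf{t}$, showing that the adjacency matrix equivaries with the transformed signal:

```python
import numpy as np

def knn_adjacency(X, k):
    """A = f(X): connect each node to its k nearest neighbors in
    Euclidean distance, then symmetrize to obtain an undirected graph."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    A = np.zeros((X.shape[0],) * 2)
    for i in range(X.shape[0]):
        A[i, np.argsort(d[i])[1:k + 1]] = 1.0  # skip the node itself
    return np.maximum(A, A.T)

def node_wise_translate(X, idx, scale=0.2, isotropic=False, rng=None):
    """Translate the sampled nodes `idx`: one shared offset if isotropic,
    a distinct offset per node if anisotropic."""
    rng = np.random.default_rng() if rng is None else rng
    rows = 1 if isotropic else len(idx)
    t = rng.uniform(-scale, scale, size=(rows, X.shape[1]))
    X_t = X.copy()
    X_t[idx] += t
    return X_t, t

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))                  # toy point cloud as graph signal
idx = rng.choice(64, size=16, replace=False)  # globally sampled node subset
X_t, t = node_wise_translate(X, idx, isotropic=False, rng=rng)
A, A_t = knn_adjacency(X, k=8), knn_adjacency(X_t, k=8)  # A~ = f(t(X))
```

Only the sampled nodes move, but the edge weights of the whole neighborhood structure can change, which is exactly the equivariance of $\tilde{\mathbf{A}} = f(\mathbf{t}(\mathbf{X}))$ described above.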
\subsection{The Formulation} Given a pair of graph signal and adjacency matrix $(\mathbf{X}, \mathbf{A})$, and the pair of {\em transformed} graph signal and adjacency matrix $(\tilde{\mathbf{X}},\tilde{\mathbf{A}})$ obtained by a node-wise graph transformation $\mathbf{t}$, a function $E(\cdot)$ is \textit{transformation equivariant} if it satisfies \begin{equation} \setlength{\abovedisplayskip}{3pt} \setlength{\belowdisplayskip}{3pt} E(\tilde{\mathbf{X}}, \tilde{\mathbf{A}}) = E\left(\mathbf{t}(\mathbf{X}), f\left(\mathbf{t}(\mathbf{X})\right)\right) = \rho(\mathbf{t})\left[E(\mathbf{X}, \mathbf{A})\right], \label{eq:ter} \end{equation} where $\rho(\mathbf{t})$ is a homomorphism of the transformation $\mathbf{t}$ in the representation space. Our goal is to learn a function $E(\cdot)$ that extracts equivariant representations of graph signals $\mathbf{X}$. For this purpose, we employ an encoder-decoder network: we learn a graph encoder $E: (\mathbf{X},\mathbf{A}) \mapsto E(\mathbf{X},\mathbf{A})$, which encodes the feature representations of individual nodes from the graph. To ensure the transformation equivariance of the representations, we train a decoder $D: \left(E(\mathbf{X},\mathbf{A}), E(\tilde{\mathbf{X}},\tilde{\mathbf{A}})\right) \mapsto \hat{\mathbf{t}}$ to estimate the node-wise transformation $\hat{\mathbf{t}}$ from the representations of the original and transformed graph signals. Hence, we cast the learning of transformation equivariant representations as the joint training of the representation encoder $E$ and the transformation decoder $D$. It has been shown that representations learned in this way satisfy the generalized transformation equivariance without relying on a linear representation of graph structures \cite{qi2019avt}. Further, we sample a subset of nodes $\mathbf S$ from the original graph signal $\mathbf X$ according to a sampling distribution $\mathcal S_g$, either locally or globally, in order to reveal graph structures at various scales. 
Node-wise transformations are then performed on the subset $\mathbf S$, isotropically or anisotropically, as demonstrated in Fig.~\ref{fig:sampling_transform}. In order to predict the node-wise transformation $\mathbf{t}$, we choose a loss function $\ell_\mathbf S(\mathbf{t}, \hat{\mathbf{t}})$ that quantifies the distance between $\mathbf{t}$ and its estimate $\hat{\mathbf{t}}$ in terms of their parameters. The entire network is then trained end-to-end by minimizing the loss \begin{equation} \setlength{\abovedisplayskip}{3pt} \setlength{\belowdisplayskip}{3pt} \min_{E,D} \; \underset{\mathbf S \sim \mathcal S_g}{\mathbb E}~\underset{\mathbf X \sim \mathcal X_g}{\underset{\mathbf{t} \sim \mathcal{T}_g}{\mathbb E}} ~ \ell_\mathbf S(\mathbf{t}, \hat{\mathbf{t}}), \label{eq:loss} \end{equation} where the expectation $\mathbb{E}$ is taken over the sampled graph signals and transformations, and the loss is taken over the (locally or globally) sampled subset $\mathbf S$ of nodes in each iteration of training. In \eqref{eq:loss}, the node-wise transformation estimate $\hat{\mathbf{t}}$ is obtained from the decoder \begin{equation} \setlength{\abovedisplayskip}{3pt} \setlength{\belowdisplayskip}{3pt} \hat{\mathbf{t}} = D\left(E(\mathbf{X},\mathbf{A}), E(\tilde{\mathbf{X}},\tilde{\mathbf{A}})\right). \end{equation} We thus iteratively update the parameters of the encoder $E$ and the decoder $D$ by backpropagating the loss. \begin{figure}[t] \centering \includegraphics[width=8.3cm]{framework.pdf} \caption{\textbf{The architecture of the proposed GraphTER.} In the unsupervised feature learning stage, the representation encoder and transformation decoder are jointly trained by minimizing (\ref{eq:loss}). In the supervised evaluation stage, the first several blocks of the encoder are kept frozen and a linear classifier is trained on labeled samples. 
} \label{fig:framework} \vspace{-0.2in} \end{figure} \vspace{-0.05in} \subsection{The Algorithm} Given graph signals $\mathbf{X}=\{\mathbf{x}_1,\mathbf{x}_2,...,\mathbf{x}_N\}^{\top}$ over $N$ nodes, in each iteration of training we \textit{randomly sample} a subset of nodes $\mathbf S$ from the graph, either globally or locally. Global sampling refers to random sampling over all nodes of the graph, while local sampling is limited to a local set of nodes. Node sampling not only enables us to characterize global and local graph structures at various scales, but also reduces the number of node-wise transformation parameters to estimate, for computational efficiency. We then draw a node-wise transformation $\mathbf{t}_i$ for each sampled node $\mathbf{x}_i$ in $\mathbf S$, either isotropically or anisotropically. Accordingly, the adjacency matrix $\tilde{\mathbf{A}}$ of the transformed graph transforms equivariantly from the original $\mathbf{A}$ under $\mathbf t$. Specifically, as illustrated in Fig.~\ref{fig:node_wise_transform}, we construct a $k$NN graph to make use of the connectivity between the nodes, whose matrix representation in $\mathbf{A}$ changes after applying the sampled node-wise transformations. To learn the applied node-wise transformations, we design a fully graph-convolutional auto-encoder network as illustrated in Fig.~\ref{fig:framework}. Among various paradigms of GCNNs, we choose EdgeConv \cite{wang2019dynamic} as the basic building block of the auto-encoder network, which efficiently learns node-wise representations by aggregating features along all the edges emanating from each connected node. Below we explain the representation encoder and the transformation decoder in detail. 
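Before detailing the two modules, the data flow of one training iteration can be sketched schematically in numpy. The linear `encoder`/`decoder` below are toy stand-ins for the EdgeConv-based networks (all names, sizes, and weights here are illustrative, and no parameter update is shown); the point is the pipeline: sample $\mathbf S$, draw $\mathbf t$, encode both graphs, decode $\hat{\mathbf t}$, and score it with the loss of \eqref{eq:loss}:

```python
import numpy as np

rng = np.random.default_rng(1)

def encoder(X, W):
    """Stand-in for E(.): a single linear layer + ReLU producing
    node-wise features. The real model stacks EdgeConv blocks."""
    return np.maximum(X @ W, 0.0)

def decoder(F, F_t, V):
    """Stand-in for D(.): concatenate per-node features of the original
    and transformed graphs and regress the translation parameters."""
    return np.concatenate([F, F_t], axis=1) @ V

N, C, H = 32, 3, 16
W = rng.normal(scale=0.1, size=(C, H))       # encoder weights (would be learned)
V = rng.normal(scale=0.1, size=(2 * H, C))   # decoder weights (would be learned)

X = rng.normal(size=(N, C))
S = rng.choice(N, size=8, replace=False)         # sampled subset of nodes
t = rng.uniform(-0.2, 0.2, size=(len(S), C))     # anisotropic translations
X_t = X.copy()
X_t[S] += t

t_hat = decoder(encoder(X, W), encoder(X_t, W), V)[S]
loss = np.mean((t - t_hat) ** 2)                 # MSE form of l_S(t, t_hat)
```

In the actual framework, `loss` would be backpropagated through both stand-in networks to update their weights jointly.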
\vspace{-0.05in} \subsubsection{Representation Encoder} The representation encoder $E$ takes the signals of an original graph $\mathbf{X}$ and their transformed counterparts $\tilde{\mathbf{X}}$ as input, along with their corresponding graphs. $E$ encodes node-wise features of $\mathbf{X}$ and $\tilde{\mathbf{X}}$ through a Siamese encoder network with shared weights, where EdgeConv layers are used as the basic feature extraction blocks. As shown in Fig.~\ref{fig:node_wise_transform}, given a non-transformed central node $\mathbf{x}_i$ and its transformed neighbors $\mathbf{t}_j(\mathbf{x}_j)$, the encoded feature of $\mathbf{x}_i$ at the input layer is \begin{equation} \setlength{\abovedisplayskip}{3pt} \setlength{\belowdisplayskip}{3pt} \begin{split} E_{\text{in}}(\tilde{\mathbf{X}}, \tilde{\mathbf{A}})_i & = \underset{j \in \mathcal{N}(i)}{\max} \; \tilde{a}_{i,j} \\ & = \underset{j \in \mathcal{N}(i)}{\max} \; \text{ReLU}(\theta(\mathbf{t}_j(\mathbf{x}_j)-\mathbf{x}_i) + \phi \mathbf{x}_i), \end{split} \label{eq:encoding} \end{equation} where $\tilde{a}_{i,j}$ denotes the edge feature, {\it i.e.}, the edge weight in $\tilde{\mathbf{A}}$, $\theta$ and $\phi$ are two learnable weighting parameters, and $j \in \mathcal{N}(i)$ indicates that node $j$ is in the $k$-nearest neighborhood of node $i$. Multiple layers of regular edge convolutions \cite{wang2019dynamic} are then stacked to form the final encoder. The edge convolution in (\ref{eq:encoding}) essentially aggregates features from the neighbors of each node via the {\it edge weights} $\tilde{a}_{i,j}$. Since the edge information of the underlying graph transforms with the transformations of individual nodes, as demonstrated in Fig.~\ref{fig:node_wise_transform}, edge convolution is able to extract higher-level features from the original and transformed edge information. 
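As a concrete illustration of \eqref{eq:encoding}, the following numpy sketch computes the input-layer features by max-pooling ReLU edge features over each $k$-nearest neighborhood (all shapes are hypothetical, and the random $\theta$, $\phi$ matrices stand in for learned weights):

```python
import numpy as np

def edgeconv_input_layer(X_t, X, knn_idx, theta, phi):
    """Input-layer edge convolution: for each center node i, form the
    edge features ReLU((x~_j - x_i) @ theta + x_i @ phi) over its
    neighbors j, then max-pool channel-wise over the neighborhood."""
    out = []
    for i, nbrs in enumerate(knn_idx):
        edges = np.maximum((X_t[nbrs] - X[i]) @ theta + X[i] @ phi, 0.0)
        out.append(edges.max(axis=0))  # channel-wise max over neighbors
    return np.stack(out)

rng = np.random.default_rng(2)
X = rng.normal(size=(10, 3))                     # original coordinates
X_t = X + rng.uniform(-0.2, 0.2, size=X.shape)   # transformed coordinates
d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
knn_idx = [np.argsort(d[i])[1:5] for i in range(10)]  # k = 4 neighbors
theta, phi = rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
F = edgeconv_input_layer(X_t, X, knn_idx, theta, phi)  # (10, 4) features
```

Because the edge features depend on the difference $\mathbf{t}_j(\mathbf{x}_j)-\mathbf{x}_i$, the pooled node features change whenever the node-wise transformation does, which is what lets the decoder recover $\mathbf{t}$.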
Also, as features of each node are learned via propagation from transformed and non-transformed nodes, isotropically or anisotropically and under either local or global sampling, the learned representation is able to capture intrinsic graph structures at multiple scales. \vspace{-0.05in} \subsubsection{Transformation Decoder} Node-wise features of the original and transformed graphs are concatenated at each node and then fed into the transformation decoder. The decoder consists of several EdgeConv blocks to aggregate the representations of both the original and transformed graphs to predict the node-wise transformations $\mathbf{t}$. Based on the loss in \eqref{eq:loss}, $\mathbf{t}$ is decoded by minimizing the mean squared error (MSE) between the ground-truth and estimated transformation parameters at each sampled node. Fig.~\ref{fig:framework} illustrates the architecture of learning the proposed GraphTER in such an auto-encoder structure. \subsection{Datasets and Experimental Setup} \textbf{ModelNet40} \cite{wu20153d}. This dataset contains $12,311$ meshed CAD models from $40$ categories, where $9,843$ models are used for training and $2,468$ for testing. For each model, $1,024$ points are sampled from the original mesh. We train the unsupervised auto-encoder and the classifier on the training set, and evaluate the classifier on the testing set. \textbf{ShapeNet part} \cite{yi2016scalable}. This dataset contains $16,881$ 3D point clouds from $16$ object categories, annotated with $50$ parts. Each 3D point cloud contains $2,048$ points, most of which are labeled with fewer than six parts. We employ $12,137$ models for training the auto-encoder and the classifier, and $2,874$ models for testing. We treat points in each point cloud as nodes in a graph, and the $(x,y,z)$ coordinates of points as graph signals. A $k$NN graph is then constructed on the graph signals to guide graph convolution.
Next, we introduce our node-wise graph signal transformations. In experiments, we sample a portion of nodes with a sampling rate $r$ from the entire graph to perform node-wise transformations, including 1) \textbf{Global sampling:} randomly sample $r\%$ of points from all the points in a 3D point cloud; 2) \textbf{Local sampling:} randomly choose a point and search its $k$ nearest neighbors in terms of Euclidean distance, forming a local set of $r\%$ of points. Then, we apply three types of node-wise transformations to the coordinates of point clouds, including 1) \textbf{Translation:} randomly translate each of three coordinates of a point by three parameters in the range $[-0.2, 0.2]$; 2) \textbf{Rotation:} randomly rotate each point with three rotation parameters all in the range $[\ang{-5},\ang{5}]$; 3) \textbf{Shearing:} randomly shear the $x$-, $y$-, $z$-coordinates of each point with the six parameters of a shearing matrix in the range $[-0.2, 0.2]$. We consider two strategies to transform the sampled nodes: \textbf{Isotropically} or \textbf{Anisotropically}, which apply transformations with node-invariant or node-variant parameters, respectively.
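The sampling and transformation recipe above can be sketched as follows. This is a hedged NumPy illustration with our own helper names; the parameter range for shearing follows the text, only the shearing case is shown, and the exact local-sampling convention (centre plus nearest neighbours up to $r\%$ of points) is our reading of the description.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_nodes(X, rate, mode="global"):
    """Global: uniform sample of rate*N nodes; local: a random centre node
    plus its nearest neighbours, forming a set of rate*N nodes in total."""
    N = len(X)
    m = max(1, int(round(rate * N)))
    if mode == "global":
        return rng.choice(N, size=m, replace=False)
    centre = rng.integers(N)
    d2 = ((X - X[centre]) ** 2).sum(-1)
    return np.argsort(d2)[:m]            # centre plus its m-1 nearest neighbours

def shear_points(X, idx, iso=True, s=0.2):
    """Apply random shearing to the sampled rows of X (N x 3). Isotropic
    reuses one parameter set for all sampled nodes; anisotropic draws one
    set of six shear parameters per node."""
    Y = X.copy()
    params = rng.uniform(-s, s, size=(1 if iso else len(idx), 6))
    for t, i in enumerate(idx):
        p = params[0] if iso else params[t]
        S = np.array([[1.0, p[0], p[1]],
                      [p[2], 1.0, p[3]],
                      [p[4], p[5], 1.0]])   # shearing matrix
        Y[i] = S @ X[i]
    return Y
```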
\begin{table}[t] \centering \small \caption{Classification accuracy (\%) on ModelNet40 dataset.} \label{tab:classification} \begin{tabular}{rcccc} \hline \multicolumn{1}{c}{\textbf{Method}} & \textbf{Year} & \textbf{Unsupervised} & \textbf{Accuracy} \\ \hline 3D ShapeNets \cite{wu20153d} & 2015 & No & 84.7 \\ VoxNet \cite{maturana2015voxnet} & 2015 & No & 85.9 \\ PointNet \cite{qi2017pointnet} & 2017 & No & 89.2 \\ PointNet++ \cite{qi2017pointnet++} & 2017 & No & 90.7 \\ KD-Net \cite{klokov2017escape} & 2017 & No & 90.6 \\ PointCNN \cite{li2018pointcnn} & 2018 & No & 92.2 \\ PCNN \cite{atzmon2018point} & 2018 & No & 92.3 \\ DGCNN \cite{wang2019dynamic} & 2019 & No & 92.9 \\ RS-CNN \cite{liu2019relation} & 2019 & No & 93.6 \\ \hline T-L Network \cite{girdhar2016learning} & 2016 & Yes & 74.4 \\ VConv-DAE \cite{sharma2016vconv} & 2016 & Yes & 75.5 \\ 3D-GAN \cite{wu2016learning} & 2016 & Yes & 83.3 \\ LGAN \cite{achlioptas2018learning} & 2018 & Yes & 85.7 \\ FoldingNet \cite{yang2018foldingnet} & 2018 & Yes & 88.4 \\ MAP-VAE \cite{han2019multi} & 2019 & Yes & 90.2 \\ L2G-AE \cite{liu2019l2g} & 2019 & Yes & 90.6 \\ \hline GraphTER & & Yes & \textbf{92.0} \\ \hline \end{tabular} \vspace{-0.2in} \end{table} \vspace{-0.05in} \subsection{Point Cloud Classification} First, we evaluate the GraphTER model on the ModelNet40 \cite{wu20153d} dataset for point cloud classification. \vspace{-0.1in} \subsubsection{Implementation Details} In this task, the auto-encoder network is trained via the SGD optimizer with a batch size of $32$. The momentum and weight decay rate are set to $0.9$ and $10^{-4}$, respectively. The initial learning rate is $0.1$, and then decayed using a cosine annealing schedule \cite{loshchilov2016sgdr} for $512$ training epochs. We adopt the cross entropy loss to train the classifier. We deploy eight EdgeConv layers as the encoder, and the number $k$ of nearest neighbors is set to $20$ for all EdgeConv layers. 
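For reference, cosine annealing \cite{loshchilov2016sgdr} decays the learning rate over training as sketched below. This is the standard formula without warm restarts; annealing to zero and the $0.1$ initial rate match the stated setup, but the exact scheduler implementation used by the authors is assumed.

```python
import math

def cosine_lr(epoch, total_epochs, lr_max=0.1, lr_min=0.0):
    """Cosine annealing: half a cosine period from lr_max down to lr_min."""
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * epoch / total_epochs))
```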
Similar to \cite{wang2019dynamic}, we use shortcut connections for the first five layers to extract multi-scale features, where we concatenate features from these layers to acquire a $1,024$-dimensional node-wise feature vector. After the encoder, we employ three consecutive EdgeConv layers as the decoder -- the output feature representations of the Siamese encoder first go through a channel-wise concatenation, and are then fed into the decoder to estimate node-wise transformations. Batch normalization and the LeakyReLU activation function with a negative slope of $0.2$ are employed after each convolutional layer. During the training of the classifier, the first five EdgeConv layers in the encoder are used to represent the input point cloud by concatenating their output features node-wise, with the weights frozen. After the five EdgeConv layers, we apply three fully-connected layers node-wise to the aggregated features. Then, global max pooling and average pooling are deployed to acquire the global features, after which three fully-connected layers map the global features to the classification scores. Dropout with a rate of $0.5$ is adopted in the last two fully-connected layers. \vspace{-0.1in} \subsubsection{Experimental Results} Tab.~\ref{tab:classification} shows the results for 3D point cloud classification, where the proposed model applies an isotropic node-wise shearing transformation with a global sampling rate of $r=25\%$. We compare with two classes of methods: unsupervised approaches and supervised approaches. The GraphTER model achieves a classification accuracy of 92.0\% on the ModelNet40 dataset, which outperforms the state-of-the-art unsupervised methods. In particular, most of the compared unsupervised models combine the ideas of both GAN and AED, and map 3D point clouds to unsupervised representations by auto-encoding data, such as FoldingNet \cite{yang2018foldingnet}, MAP-VAE \cite{han2019multi} and L2G-AE \cite{liu2019l2g}.
Results show that the GraphTER model achieves significant improvements over these methods, demonstrating the superiority of the proposed node-wise AET over both the GAN and AED paradigms. Moreover, the unsupervised GraphTER model achieves performance comparable to the state-of-the-art fully supervised results, significantly closing the gap between unsupervised approaches and their fully supervised counterparts in the literature. \vspace{-0.1in} \subsubsection{Ablation Studies} Further, we conduct ablation studies under various experimental settings of sampling and transformation strategies on the ModelNet40 dataset. First, we analyze the effectiveness of different node-wise transformations under global or local sampling. Tab.~\ref{tab:diff_transform} presents the classification accuracy with three types of node-wise transformation methods. We see that the shearing transformation achieves the best performance, improving by 1.05\% on average over translation and 0.52\% over rotation. This shows that the proposed GraphTER model is able to learn better feature representations under more complex transformations. Moreover, the proposed model achieves an accuracy of 90.70\% on average under global sampling, which outperforms local sampling by 0.46\%. This is because global sampling better captures the global structure of graphs, which is crucial in the graph-level task of classifying 3D point clouds. Meanwhile, under both sampling strategies, the classification accuracy from isotropic transformations is higher than that from anisotropic ones. The reason lies in the intrinsic difficulty of training the transformation decoder when anisotropic transformations introduce many more parameters.
\begin{table}[t] \centering \small \caption{Unsupervised classification accuracy (\%) on ModelNet40 dataset with different sampling and transformation strategies.} \label{tab:diff_transform} \begin{tabular}{c|cc|cc|c} \hline \multirow{2}{*}{} & \multicolumn{2}{c|}{Global Sampling} & \multicolumn{2}{c|}{Local Sampling} & \multirow{2}{*}{Mean} \\ \cline{2-5} & Iso. & Aniso. & Iso. & Aniso. & \\ \hline Translation & 90.15 & 90.15 & 89.91 & 89.55 & 89.94 \\ Rotation & 91.29 & 90.24 & 90.48 & 89.87 & 90.47 \\ Shearing & \textbf{92.02} & \textbf{90.32} & \textbf{91.65} & \textbf{89.99} & \textbf{90.99} \\ \hline \multirow{2}{*}{Mean} & \textbf{91.15} & 90.24 & \textbf{90.68} & 89.80 & \\ \cline{2-5} & \multicolumn{2}{c|}{\textbf{90.70}} & \multicolumn{2}{c|}{90.24} & \\ \hline \end{tabular} \vspace{-0.1in} \end{table} Moreover, we evaluate the effectiveness of different sampling rates $r$ under the translations as reported in Tab.~\ref{tab:sample_rate}. The classification accuracies under various sampling rates are almost the same, and the result under $r=25\%$ is comparable to that under $r=100\%$. This shows that the performance of the proposed model is insensitive to the variation of sampling rates, {\it i.e.}, applying node-wise transformations to a small number of nodes in the graph is sufficient to learn intrinsic graph structures. \begin{table}[t] \centering \small \caption{Unsupervised classification accuracy (\%) on ModelNet40 dataset applying translation at different node sampling rates.} \label{tab:sample_rate} \begin{tabular}{c|cc|cc|c} \hline \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Sampling\\ Rate\end{tabular}} & \multicolumn{2}{c}{Global Sampling} & \multicolumn{2}{c|}{Local Sampling} & \multirow{2}{*}{Mean} \\ \cline{2-5} & Iso. & Aniso. & Iso. & Aniso. 
& \\ \hline 25\% & 90.15 & 90.15 & 89.91 & 89.55 & 89.94 \\ 50\% & 90.03 & 89.63 & 89.95 & 89.47 & 89.77 \\ 75\% & 91.00 & 89.67 & 91.41 & 89.75 & 90.46 \\ 100\% & 89.67 & 89.99 & 89.67 & 89.99 & 89.83 \\ \hline \end{tabular} \vspace{-0.1in} \end{table} \begin{table*}[t] \centering \scriptsize \caption{Part segmentation results on ShapeNet part dataset. Metric is mIoU(\%) on points.} \label{tab:segmentation} \begin{tabular}{r|p{0.65cm}<{\centering}|p{0.5cm}<{\centering}|p{0.4cm}<{\centering}p{0.4cm}<{\centering}p{0.4cm}<{\centering}p{0.4cm}<{\centering}p{0.4cm}<{\centering}p{0.4cm}<{\centering}p{0.4cm}<{\centering}p{0.4cm}<{\centering}p{0.4cm}<{\centering}p{0.4cm}<{\centering}p{0.4cm}<{\centering}p{0.4cm}<{\centering}p{0.4cm}<{\centering}p{0.4cm}<{\centering}p{0.4cm}<{\centering}p{0.4cm}<{\centering}} \hline & Unsup. & Mean & Aero & Bag & Cap & Car & Chair & \begin{tabular}[c]{@{}c@{}}Ear\\ Phone\end{tabular} & Guitar & Knife & Lamp & Laptop & Motor & Mug & Pistol & Rocket & \begin{tabular}[c]{@{}c@{}}Skate\\ Board\end{tabular} & Table \\ \hline Samples & & & 2690 & 76 & 55 & 898 & 3758 & 69 & 787 & 392 & 1547 & 451 & 202 & 184 & 283 & 66 & 152 & 5271 \\ \hline PointNet \cite{qi2017pointnet} & No & 83.7 & 83.4 & 78.7 & 82.5 & 74.9 & 89.6 & 73.0 & 91.5 & 85.9 & 80.8 & 95.3 & 65.2 & 93.0 & 81.2 & 57.9 & 72.8 & 80.6 \\ PointNet++ \cite{qi2017pointnet++} & No & 85.1 & 82.4 & 79.0 & 87.7 & 77.3 & 90.8 & 71.8 & 91.0 & 85.9 & 83.7 & 95.3 & 71.6 & 94.1 & 81.3 & 58.7 & 76.4 & 82.6 \\ KD-Net \cite{klokov2017escape} & No & 82.3 & 80.1 & 74.6 & 74.3 & 70.3 & 88.6 & 73.5 & 90.2 & 87.2 & 81.0 & 94.9 & 57.4 & 86.7 & 78.1 & 51.8 & 69.9 & 80.3 \\ PCNN \cite{atzmon2018point} & No & 85.1 & 82.4 & 80.1 & 85.5 & 79.5 & 90.8 & 73.2 & 91.3 & 86.0 & 85.0 & 95.7 & 73.2 & 94.8 & 83.3 & 51.0 & 75.0 & 81.8 \\ PointCNN \cite{li2018pointcnn} & No & 86.1 & 84.1 & 86.5 & 86.0 & 80.8 & 90.6 & 79.7 & 92.3 & 88.4 & 85.3 & 96.1 & 77.2 & 95.3 & 84.2 & 64.2 & 80.0 & 83.0 \\ DGCNN \cite{wang2019dynamic} 
& No & 85.2 & 84.0 & 83.4 & 86.7 & 77.8 & 90.6 & 74.7 & 91.2 & 87.5 & 82.8 & 95.7 & 66.3 & 94.9 & 81.1 & 63.5 & 74.5 & 82.6 \\ RS-CNN \cite{liu2019relation} & No & 86.2 & 83.5 & 84.8 & 88.8 & 79.6 & 91.2 & 81.1 & 91.6 & 88.4 & 86.0 & 96.0 & 73.7 & 94.1 & 83.4 & 60.5 & 77.7 & 83.6 \\ \hline LGAN \cite{achlioptas2018learning} & Yes & 57.0 & 54.1 & 48.7 & 62.6 & 43.2 & 68.4 & 58.3 & 74.3 & 68.4 & 53.4 & 82.6 & 18.6 & 75.1 & 54.7 & 37.2 & 46.7 & 66.4 \\ MAP-VAE \cite{han2019multi} & Yes & 68.0 & 62.7 & 67.1 & 73.0 & 58.5 & 77.1 & 67.3 & 84.8 & 77.1 & 60.9 & 90.8 & 35.8 & 87.7 & 64.2 & 45.0 & 60.4 & 74.8 \\ \hline GraphTER & Yes & \textbf{81.9} & \textbf{81.7} & \textbf{68.1} & \textbf{83.7} & \textbf{74.6} & \textbf{88.1} & \textbf{68.9} & \textbf{90.6} & \textbf{86.6} & \textbf{80.0} & \textbf{95.6} & \textbf{56.3} & \textbf{90.0} & \textbf{80.8} & \textbf{55.2} & \textbf{70.7} & \textbf{79.1} \\ \hline \end{tabular} \vspace{-0.2in} \end{table*} \subsection{Point Cloud Segmentation} We also apply the GraphTER model to 3D point cloud part segmentation on the ShapeNet part dataset \cite{yi2016scalable}. \vspace{-0.2in} \subsubsection{Implementation Details} We also use the SGD optimizer to train the auto-encoding transformation network. The hyper-parameters are the same as in the 3D point cloud classification task, except that we train for $256$ epochs. We adopt the negative log-likelihood loss to train the node-wise classifier for segmenting each point in the clouds. The auto-encoding architecture is similar to that of the classification task, where we employ five EdgeConv layers as the encoder; the first two EdgeConv blocks consist of two MLP layers with \{64, 64\} neurons per layer. We use shortcut connections to concatenate features from the first four layers into a $512$-dimensional node-wise feature vector. As for the node-wise classifier, we deploy the same architecture as in \cite{wang2019dynamic}.
The output features from the encoder are concatenated node-wise with globally max-pooled features, followed by four fully-connected layers to classify each node. During the training procedure, the weights of the first four EdgeConv blocks in the encoder are kept frozen. \vspace{-0.2in} \subsubsection{Experimental Results} We adopt the Intersection-over-Union (IoU) metric to evaluate the performance. We follow the same evaluation protocol as in PointNet \cite{qi2017pointnet}: the IoU of a shape is computed by averaging the IoUs of the different parts occurring in that shape, and the IoU of a category is obtained by averaging the IoUs of all the shapes belonging to that category. The mean IoU (mIoU) is finally calculated by averaging the IoUs of all the test shapes. We also compare the proposed model with unsupervised approaches and supervised approaches in this task, as listed in Tab.~\ref{tab:segmentation}. We achieve a mIoU of $81.9\%$, which significantly outperforms the state-of-the-art unsupervised method MAP-VAE \cite{han2019multi} by $13.9\%$. Moreover, the unsupervised GraphTER model achieves performance comparable to the state-of-the-art fully supervised approaches, pushing much closer towards the upper bound set by the fully supervised counterparts. \vspace{-0.2in} \subsubsection{Visualization Results} Fig.~\ref{fig:sup_seg_results} visualizes the results of the proposed unsupervised model and two state-of-the-art fully supervised methods: DGCNN \cite{wang2019dynamic} and RS-CNN \cite{liu2019relation}. The proposed model produces better segmentation on the ``table" model in the first row, and achieves comparable results on the other models. Further, we qualitatively compare the proposed method with the state-of-the-art unsupervised method MAP-VAE \cite{han2019multi}, as illustrated in Fig.~\ref{fig:unsup_seg_results}.
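The IoU evaluation protocol above can be sketched as follows. This is a minimal NumPy illustration of the PointNet-style per-shape IoU averaging; the convention that a part absent from both prediction and ground truth scores IoU $1$ is an assumption consistent with common implementations, not stated in the text.

```python
import numpy as np

def shape_iou(pred, gt, part_ids):
    """Per-shape IoU: average the part IoUs over the parts of the shape's
    category; a part absent from both prediction and ground truth counts
    as IoU 1 (common convention, assumed here)."""
    ious = []
    for p in part_ids:
        inter = np.sum((pred == p) & (gt == p))
        union = np.sum((pred == p) | (gt == p))
        ious.append(1.0 if union == 0 else inter / union)
    return float(np.mean(ious))
```

Category mIoU then averages `shape_iou` over all shapes of the category, and the reported mIoU averages over all test shapes.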
The proposed model leads to more accurate segmentation results than MAP-VAE, {\it e.g.}, the engines of planes and the legs of chairs. \begin{figure}[t] \centering \subfigure[Ground-truth]{ \includegraphics[width=0.22\columnwidth]{seg_results_gt.pdf} } \subfigure[DGCNN]{ \includegraphics[width=0.22\columnwidth]{seg_results_dgcnn.pdf} } \subfigure[RS-CNN]{ \includegraphics[width=0.22\columnwidth]{seg_results_rscnn.pdf} } \subfigure[GraphTER]{ \includegraphics[width=0.22\columnwidth]{seg_results_gter.pdf} } \caption{\textbf{Visual comparison of point cloud part segmentation with supervised methods. } Our unsupervised GraphTER learning achieves comparable results with the state-of-the art fully supervised approaches. } \label{fig:sup_seg_results} \vspace{-0.1in} \end{figure} \begin{figure}[t] \centering \subfigure[MAP-VAE]{ \includegraphics[width=0.95\columnwidth]{seg_results_unsup_map_vae.pdf} } \subfigure[GraphTER]{ \includegraphics[width=0.95\columnwidth]{seg_results_unsup_gter.pdf} } \caption{\textbf{Visual comparison of point cloud part segmentation with the state-of-the-art unsupervised method MAP-VAE. } We achieve more accurate segmentation even in tiny parts and transition regions. } \label{fig:unsup_seg_results} \vspace{-0.1in} \end{figure} \vspace{-0.1in} \subsubsection{Ablation Studies} \begin{table}[t] \centering \small \caption{Unsupervised segmentation results on ShapeNet part dataset with different transformation strategies. Metric is mIoU (\%) on points.} \label{tab:seg_ablation} \begin{tabular}{c|cc|cc|c} \hline \multirow{2}{*}{} & \multicolumn{2}{c}{Global Sampling} & \multicolumn{2}{c|}{Local Sampling} & \multirow{2}{*}{Mean} \\ \cline{2-5} & Iso. & Aniso. & Iso. & Aniso. 
& \\ \hline Translation & 79.83 & 79.88 & 80.05 & 79.85 & 79.90 \\ Rotation & 80.20 & \textbf{80.29} & 80.87 & 80.02 & 80.35 \\ Shearing & \textbf{81.88} & 80.28 & \textbf{81.89} & \textbf{80.48} & \textbf{81.13} \\ \hline \multirow{2}{*}{Mean} & \textbf{80.64} & 80.15 & \textbf{80.94} & 80.12 & \multirow{2}{*}{\textbf{}} \\ \cline{2-5} & \multicolumn{2}{c|}{80.39} & \multicolumn{2}{c|}{\textbf{80.53}} & \\ \hline \end{tabular} \vspace{-0.2in} \end{table} Similar to the classification task, we analyze the effectiveness of different node-wise transformations under global or local sampling, as presented in Tab.~\ref{tab:seg_ablation}. The proposed model achieves the best performance under the shearing transformation, improving by 1.23\% on average over translation and 0.78\% over rotation, which demonstrates the benefits of GraphTER learning under complex transformations. Further, the proposed model achieves a mIoU of 80.53\% on average under local sampling, which outperforms global sampling by 0.14\%. This is because local sampling captures the local structure of graphs better, which is crucial in the node-level task of 3D point cloud segmentation. \section{Introduction} \vspace{-0.05in} \label{sec:intro} \input{01_introduction.tex} \vspace{-0.1in} \section{Related Works} \vspace{-0.05in} \label{sec:related_works} \input{02_related_works.tex} \vspace{-0.1in} \section{Graph Transformations} \label{sec:graphT} \input{03_graph.tex} \vspace{-0.1in} \section{GraphTER: The Proposed Approach} \label{sec:method} \input{04_method.tex} \vspace{-0.1in} \section{Experiments} \vspace{-0.05in} \label{sec:experiments} \input{05_experiments.tex} \section{Conclusion} \vspace{-0.05in} \label{sec:conclusion} \input{06_conclusion.tex} \clearpage {\small \bibliographystyle{ieee}
\section{Introduction} The adsorption of polymers on a sticky wall, or walls, and more recently the pulling, or stretching, of a polymer away from a wall has been the subject of continued interest \cite{privman1988c-a,debell1993a-a,janse2000a-a,rosa2003a-a, orlandini2004a-a,krawczyk2004a-:a,brak2005a-:a,mishra2005a-a,janse2005a-:a, martin2007a-:a}. This has been due in part to the advent of experimental techniques able to micro-manipulate single polymers \cite{svoboda1994a-a, ashkin1997a-a, strick2001a-a} and to the connection with modelling DNA denaturation \cite{essevaz-roulet1997a-a, lubensky2000a-a,lubensky2002a-a, orlandini2001a-a, marenduzzo2002a-a, marenduzzo2003a-a,marenduzzo2009a-a}. When a polymer in a dilute solution of good solvent, so that it is in a swollen state \cite{gennes1979a-a}, is attached to a wall at one end, the rest of the polymer drifts away due to entropic repulsion; it otherwise acts as if it were a free polymer. If the wall has an attractive contact potential, so that it becomes sticky to the monomers, the polymer can be made to stay close to the wall by a sufficiently strong potential or at low enough temperatures. The second-order phase transition between these two states is the \emph{adsorption} transition. The high temperature state is \emph{desorbed} while the low temperature state is \emph{adsorbed}. This pure adsorption transition has been well studied \cite{privman1988c-a,debell1993a-a,hegger1994a-a,janse2000a-a,janse2004a-a}, both exactly and numerically, and has been demonstrated to be second-order. The situation becomes more complex when a polymer is confined between two sticky walls. This situation has been studied by various directed and non-directed lattice walk models \cite{brak2005a-:a,janse2005a-:a, brak2007motzkin, martin2007a-:a, owczarek2008a-:a, alvarez2008self, guttmann2009effect}.
Here the phase diagram of the model can depend on the mesoscopic size of the polymer relative to the width of the slab/slit and the strengths of the interactions on both walls. A motivation for studying this type of system is related to modelling the stabilisation of colloidal dispersions by adsorbed polymers (steric stabilisation) and the destabilisation when the polymer can adsorb on surfaces of different colloidal particles (sensitised flocculation). A polymer confined between two parallel plates exerts a repulsive force on the confining plates because of the loss of configurational entropy, unless the polymer is attracted to both walls, when it can exert an effective attractive force at large distances. A directed walk model of a polymer confined between two sticky walls was studied by Brak {\it et al.\ }\cite{brak2005a-:a}. Let us now briefly review the findings of that work so as to motivate the model we study in this paper. In their model the polymers are represented by Dyck paths, which are directed paths in the plane, taking north-east and south-east steps starting on, ending on and staying above the horizontal axis. These are classical objects in combinatorics \cite{flajolet2009analytic}. The height of these paths is then restricted; this is interpreted as a model of a polymer confined between two walls that are $w$ lattice units apart, as in Figure~\ref{onewalk}. It will be crucial for understanding the results to note that the polymer is attached to the bottom wall at its end. Finally, different Boltzmann weights $a$ and $b$ were added for each visit to the bottom and top walls respectively. \begin{figure}[ht!] \begin{center} \includegraphics[width=10cm]{defn1b.pdf} \caption{ A Dyck path confined between two walls spaced $w$ lattice units apart. Each visit to the bottom wall contributes a Boltzmann weight $a$ and each visit to the top wall contributes a Boltzmann weight $b$.
For combinatorial reasons we do not weight the first vertex.} \label{onewalk} \end{center} \end{figure} The partition function for these paths is defined as \begin{align} Z_n^{single}(a,b;w) &= \sum_{\varphi \in \mathcal{S}_w^n} a^{m_a(\varphi)} b^{m_b(\varphi)}\; , \end{align} where $\mathcal{S}_w^n$ is the set of Dyck paths of length $n$ of restricted height with maximum $w$, $m_a(\varphi)$ the number of vertices on the bottom wall and $m_b(\varphi)$ the number of vertices on the top wall (excluding the leftmost vertex). A phase transition can only occur when both the thermodynamic limit and the limit of infinite width (to give a two-dimensional thermodynamic system) are taken. However, it was explained by Brak \emph{et al.} \cite{brak2005a-:a} that taking the thermodynamic limit $n\rightarrow\infty$ before or after taking the width of the slit to infinity is crucially important. If the width, $w$, of the system is taken to infinity first then the walk does not see the top wall and a half plane system is retrieved, since the polymer is tethered to the bottom wall. There is a simple adsorption transition as $a$ is varied: a second order phase transition occurs when $a=2$. On the other hand, if the thermodynamic limit is taken before the width is taken to infinity then a different phase diagram ensues dependent on both $a$ and $b$. We define the reduced free energy $\kappa^{single}(a,b;w)$ for single Dyck paths at fixed finite~$w$~as \begin{align} \kappa^{single}(a,b;w) &= \lim_{n\rightarrow\infty} \frac{1}{n} \log Z_n^{single}(a,b;w) \end{align} and taking the limit $w\rightarrow \infty$ gives the so-called \emph{infinite slit} limit: \begin{align} \kappa^{single}_{inf-slit}(a,b) & \equiv \lim_{w \rightarrow \infty} \kappa^{single}(a,b;w) = \lim_{w\rightarrow\infty} \lim_{n\rightarrow\infty} \frac{1}{n} \log Z_n^{single}(a,b;w)\,. 
\label{inf-slit-free-energy-single-def} \end{align} This limit is different from the \emph{half-plane limit} \begin{align} \kappa^{single}_{half-plane}(a) &= \lim_{n\rightarrow\infty} \lim_{w\rightarrow\infty} \frac{1}{n} \log Z_n^{single}(a,b;w)= \begin{cases} \log \left(2\right) & \mbox{ if } a \leq 2 \\ \log\left(\frac{a}{\sqrt{a-1}}\right) & \mbox{ if } a > 2 \end{cases}\, , \end{align} which is independent of $b$. It was shown in \cite{brak2005a-:a} that \begin{align} \kappa^{single}_{inf-slit}(a,b) &= \begin{cases} \log \left(2\right) & \mbox{ if } a,b \leq 2 \\ \log\left(\frac{a}{\sqrt{a-1}}\right) & \mbox{ if } a > 2 \mbox{ and } a>b \\ \log\left(\frac{b}{\sqrt{b-1}}\right) & \mbox{ otherwise.} \end{cases} \label{inf-stlit-free-energy-single} \end{align} For small $a$ and $b$ the walk is desorbed from both walls, while the large $a$ and $b$ phases are characterised by the order parameter of the thermodynamic density of visits to the bottom and top walls respectively. Correspondingly, there are three phase transition lines. The first two are given by $b=2$ for $0\leq a \leq 2$ and $a=2$ for $0\leq b \leq 2$. These lines separate the desorbed phase from the two adsorbed phases and are lines of second order transitions of the same nature as the one found in the half-plane model. There is also a first order transition for $a=b >2$ where the density of visits to each of the walls jumps discontinuously on crossing the boundary non-tangentially (see Figure~\ref{phase-force-diagram-single} (left)). For finite widths the effective force between the walls, induced by the polymer, was defined \cite{brak2005a-:a} as \begin{align} \mathcal{F}(a,b;w) &= \kappa(a,b;w) - \kappa(a,b;w-1)\,. \end{align} For large $w$ it was found that the sign and length scale of the force depend on the values of $a$ and $b$, and that this behaviour is more refined than simply following the phase diagram (see Figure~\ref{phase-force-diagram-single}). \begin{figure}[ht!]
\begin{center} \includegraphics[width=8cm]{phase_diagram-single-walk.pdf} \includegraphics[width=8cm]{force_diagram1w.pdf} \caption{(left) {Phase diagram of the infinite strip for a single walk. There are three phases: desorbed, adsorbed onto the bottom wall (ads bottom) and adsorbed onto the top (ads top).} (right) {A diagram of the regions of different types of effective force between the walls of a slit for a single Dyck path. Short range behaviour refers to exponential decay of the force with slit width while long range refers to a power law decay. The zero force curve is given by $ab=a+b$. On the dashed line there is a singular change of behaviour of the force.} } \label{phase-force-diagram-single} \end{center} \end{figure} The regions of the plane which give different asymptotic expressions for $\kappa$, and hence different phases for the infinite slit, clearly also give different force behaviours. For the square $0\leq a,b\leq 2$ the force is repulsive and decays as a power law (i.e.\ it is \emph{long-ranged}) while outside this square the force decays exponentially and so is \emph{short-ranged}. This change coincides with the phase boundary of the infinite slit phase diagram. However, the special curve $ab=a+b$ is a line of zero force across which the force, while short-ranged on either side (except at $(a,b)=(2,2)$), changes sign. Hence this curve separates regions where the force is attractive (to the right of the curve) from those where it is repulsive (to the left of the curve). The line $a=b$ for $a>2$ is also special: while the force is always short-ranged and attractive, the range of the force is discontinuous, being twice as large on this line as close by. All these features lead to a \emph{force diagram} that encapsulates them (see Figure~\ref{phase-force-diagram-single} (right)).
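These single-walk results can be checked numerically. The sketch below is our own illustration (not code from \cite{brak2005a-:a}): it computes $Z_n^{single}(a,b;w)$ by dynamic programming over heights, obtains $\kappa^{single}(a,b;w)$ as the logarithm of the dominant eigenvalue of the height transfer matrix, and evaluates the force $\mathcal{F}(a,b;w)$, whose sign at large $w$ changes across the curve $ab=a+b$.

```python
import numpy as np

def Z_single(n, w, a, b):
    """Partition function of Dyck paths of length n and height <= w, with
    weight a per visit to height 0 and b per visit to height w (the first
    vertex is unweighted)."""
    state = {0: 1.0}                                  # height -> total weight
    for _ in range(n):
        new = {}
        for h, wt in state.items():
            for g in (h - 1, h + 1):
                if 0 <= g <= w:
                    f = wt * (a if g == 0 else b if g == w else 1.0)
                    new[g] = new.get(g, 0.0) + f
        state = new
    return state.get(0, 0.0)                          # Dyck paths end at height 0

def kappa_single(w, a, b):
    """kappa(a,b;w) = log of the Perron-Frobenius eigenvalue of the height
    transfer matrix T[g,h] = [|g-h|=1] * (a if g=0, b if g=w, else 1)."""
    T = np.zeros((w + 1, w + 1))
    for h in range(w + 1):
        for g in (h - 1, h + 1):
            if 0 <= g <= w:
                T[g, h] = a if g == 0 else b if g == w else 1.0
    return np.log(max(np.linalg.eigvals(T).real))

def force_single(w, a, b):
    """Effective force on the walls, F = kappa(a,b;w) - kappa(a,b;w-1)."""
    return kappa_single(w, a, b) - kappa_single(w - 1, a, b)
```

For instance, $a=b=1$ gives $ab<a+b$ and a repulsive (positive) force, while $a=b=3$ gives $ab>a+b$ and an attractive (negative) force, in agreement with the force diagram.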
It should be recalled here that the behaviour of the directed system described above has been shown to be a faithful representation of the more general undirected self-avoiding walk model \cite{janse2005a-:a, martin2007a-:a}. It is not unreasonable to speculate that the inequality of the infinite-slit and half-plane limits, and more generally the resultant force diagram, may be dependent on the particular single walk model chosen, in which the polymer was tethered to the bottom wall. There is no natural single walk model with fixed ends that can circumvent this restriction sensibly. One is therefore led to consider models of multiple walks in a slit where walks can be tethered to both walls. In fact a related generalisation has already been considered by Alvarez \emph{et al.} \cite{alvarez2008self}, who studied a model of self-avoiding polygons confined to a slit. The resulting force diagram is quite different from the single-walk diagram shown in Figure~\ref{phase-force-diagram-single} (right). In this paper we consider a directed walk model of two polymers confined between two walls with which the polymers interact, as in the single polymer model described above. In particular we fully analyse the infinite slit phase diagram and the large width force behaviour as a function of the interaction parameters. We show there are distinct differences from the single walk problem. \section{Model} We consider pairs of directed paths of equal length in a width $w$ strip of the square lattice --- namely $\mathbb{Z}\times \{0,1,\dots, w\}$, taking steps $(1,\pm1)$. These paths may touch (i.e.\ share edges and vertices) but not cross. We consider those pairs of paths whose initial vertices lie at $(0,0)$ and $(0,w)$. \begin{figure}[ht!] \begin{center} \includegraphics[width=10cm]{defn1a.pdf} \caption{ Two walks confined between two walls spaced $w$ lattice units apart.
Each visit of the bottom walk to the bottom wall contributes a Boltzmann weight $a$ and each visit of the top walk to the top wall contributes a Boltzmann weight $b$. For combinatorial reasons we do not weight the leftmost vertex of either walk.} \label{twowalks} \end{center} \end{figure} Let $\varphi$ be such a pair of paths and define $|\varphi|$ to be the length of the paths. If the width of the strip, $w$, is odd then the paths never share vertices and the combinatorics that follows is more complicated. Because of this we only consider even widths. Note that this implies that the distance between the endpoints of the paths is always even. To complete our model we add the energies $-\varepsilon_a$ and $-\varepsilon_b$ for each visit of the walks to the bottom and top walls respectively (aside from the leftmost vertex of each walk). The number of visits of the bottom walk to the bottom wall will be denoted $m_a(\varphi)$ while the number of visits of the top walk to the top wall will be denoted $m_b(\varphi)$ --- again excluding the leftmost vertex of each walk. The main model we discuss in this paper is based on pairs of walks, $\varphi$, that finish with their endpoints together at the same height. Define the corresponding partition function to be \begin{align} Z_n(a,b;w) &= \sum_{\varphi} e^{(\varepsilon_a m_a(\varphi)+ \varepsilon_b m_b(\varphi) )/k_B T} = \sum_{\varphi} a^{m_a(\varphi)} b^{m_b(\varphi)}\,, \end{align} where $T$ is the temperature, $k_B$ the Boltzmann constant and $a=e^{\varepsilon_a/k_B T}$ and $b=e^{\varepsilon_b/k_B T}$ are the Boltzmann weights associated with visits. The thermodynamic reduced free energy at finite width is given in the usual fashion as \begin{align} \kappa(a,b;w) &= \lim_{n \rightarrow \infty} \frac{1}{n} \log\left(Z_n(w) \right).
\end{align} Because the model at finite $w$ is essentially one-dimensional, the free energy is an analytic function of $a$ and $b$ and no thermodynamic phase transitions occur \cite{landau1980a-a}. As noted above, the \emph{infinite slit limit} for the single walk model does display singular behaviour and so we consider the same limit for this model. The \emph{infinite slit free energy} for the two walk model is found analogously by \begin{align} \kappa_{inf-slit}(a,b) &= \lim_{w\rightarrow\infty} \kappa(a,b;w) =\lim_{w\rightarrow\infty} \lim_{n\rightarrow\infty} \frac{1}{n} \log Z_n(a,b;w). \end{align} Motivated by the single walk problem, we see that the above quantity could be different when the order of limits is swapped. Since we have defined the model so that walks start on opposite walls, when the width is taken to infinity before the length, the system separates into two half-planes. Consequently we refer to this limit as the \emph{double half-plane limit} and so define \begin{equation} \kappa_{double-half-plane}(a,b) = \lim_{n\rightarrow\infty} \lim_{w\rightarrow\infty} \frac{1}{n} \log Z_n(a,b;w). \end{equation} Since the system separates into two half-planes we have \begin{equation} \kappa_{double-half-plane}(a,b) = \kappa^{single}_{half-plane}(a) +\kappa^{single}_{half-plane}(b). \label{double-half-plane-free-energy} \end{equation} Motivated by the single walk model, we consider the effective force applied to the walls by the polymers \begin{align} \mathcal{F}_n &= \frac{1}{n} \left[ \log(Z_n(w)) - \log(Z_n(w-2) ) \right], \end{align} with a thermodynamic limit of \begin{align} \mathcal{F}(a,b;w) &= \kappa(a,b;w) - \kappa(a,b;w-2) . \end{align} Note that we will consider only systems of even width and hence we had to modify the single walk definition. Given that the double half-plane limit is known from the discussion above, we shall concentrate on the infinite slit limit. In this limit, the free energy does not depend on where the walks end. 
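Since no thermodynamic transition occurs at finite width, the quantities $\kappa(a,b;w)$ and $\mathcal{F}(a,b;w)$ are straightforward to evaluate numerically. The following sketch is our own illustration (not part of the derivation, and all function names are ours): it builds the obvious transfer matrix on pairs of walk heights and computes the finite-width free energy and force.

```python
import numpy as np
from itertools import product

def transfer_matrix(w, a, b):
    # States are pairs (lo, hi) of heights with lo <= hi and hi - lo even,
    # since both paths step simultaneously and start at heights 0 and w.
    states = [(lo, hi) for lo in range(w + 1) for hi in range(lo, w + 1)
              if (hi - lo) % 2 == 0]
    idx = {st: i for i, st in enumerate(states)}
    T = np.zeros((len(states), len(states)))
    for (lo, hi) in states:
        for dlo, dhi in product((1, -1), repeat=2):
            lo2, hi2 = lo + dlo, hi + dhi
            if 0 <= lo2 and hi2 <= w and lo2 <= hi2:
                # arriving on a wall picks up weight a (bottom) or b (top)
                wgt = (a if lo2 == 0 else 1.0) * (b if hi2 == w else 1.0)
                T[idx[(lo2, hi2)], idx[(lo, hi)]] += wgt
    return T, idx

def Z(n, w, a, b):
    # Partition function of pairs of length n that end together.
    T, idx = transfer_matrix(w, a, b)
    vec = np.zeros(len(idx))
    vec[idx[(0, w)]] = 1.0          # leftmost vertices carry no weight
    for _ in range(n):
        vec = T @ vec
    return sum(vec[i] for (lo, hi), i in idx.items() if lo == hi)

def kappa(w, a, b):
    # Reduced free energy at width w: log of the dominant eigenvalue.
    T, _ = transfer_matrix(w, a, b)
    return float(np.log(max(abs(np.linalg.eigvals(T)))))

def force(w, a, b):
    # Effective force on the walls; positive values push the walls apart.
    return kappa(w, a, b) - kappa(w - 2, a, b)
```

By direct enumeration at width $w=2$ the first partition functions are $Z_1 = 1$, $Z_2 = a+b$ and $Z_3 = a+b+ab$, which this sketch reproduces; for $a=b=1$ the purely entropic force is positive, i.e.\ the walls are pushed apart.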
It turns out that the combinatorics of the model in which the walks end together is simpler. Accordingly we study the generating function \begin{align} G(a,b;z) &= \sum_{n=0}^\infty Z_n(w) z^n, \end{align} where the partition function counts only those pairs of walks which end together. The radius of convergence of the generating function $z_c(a,b;w)$ is directly related to the free energy via \begin{align} \kappa(a,b;w) &= -\log\left(z_c(a,b;w)\right). \end{align} \section{Functional Equations} \begin{figure}[h] \begin{center} \includegraphics[height=3cm]{defn} \end{center} \caption{We form the generating function of all pairs of paths that start in both surfaces and end anywhere according to their length, and the distances of the endpoints from the surfaces. The path depicted contributes $z^9 r^1 s^1$ to the generating function.} \label{fig feqn1} \end{figure} Though we are primarily interested in the behaviour of pairs of paths that share their final vertices, we will need to define the generating function of more general pairs of paths with no restrictions on their endpoints. Define $d_u( \varphi)$ to be the distance from the endpoint of the upper path to the top of the strip. Similarly define $d_\ell(\varphi)$ to be the distance of the endpoint of the lower path to the bottom of the strip. \subsection{Without interactions} Let us first consider the case when $a=b=1$. We construct the generating function \begin{align} F(r,s;z) \equiv F(r,s) &= \sum_{\varphi \in paths} z^{|\varphi|} r^{d_\ell( \varphi )} s^{d_u( \varphi )}, \end{align} where $r,s$ are conjugate to the distances of the endpoints to either boundary and $z$ is conjugate to length. See Figure~\ref{fig feqn1}. In order to construct a functional equation satisfied by this generating function we also need to define the generating function of those paths whose final vertices touch.
\begin{align} r^w F_d(s/r;z) \equiv r^w F_d(s/r) &= \sum_{h=0}^w s^h r^{w-h} \cdot \left[s^h r^{w-h}\right]\left\{ F(s,r) \right\} \end{align} where we have used $\left[s^i r^k\right]\left\{F(s,r)\right\}$ to denote the coefficient of $s^i r^k$ in the generating function $F(s,r)$. The generating function $G(1,1;z) = F_d(1;z)$. Also note that since the problem is vertically symmetric, we have $F(r,s) \equiv F(s,r)$ and $r^w F_d(s/r) = s^w F_d(r/s)$. Further note that $\left[s^i r^k\right]\left\{F(s,r)\right\}$ is zero whenever $i-k$ is not even. One can construct all pairs of paths using a column-by-column construction whose details we give below. Translating the construction into its action on the generating functions gives the following functional equation \begin{multline} \label{eqn:nonint} F(r,s) = 1 + z\left(s+\frac{1}{s} \right) \left(r + \frac{1}{r} \right) \cdot F(r,s) \\ - \frac{z}{r} \left(s + \frac{1}{s} \right) \cdot F(0,s) - \frac{z}{s} \left(r + \frac{1}{r} \right) \cdot F(r,0) + \frac{z}{sr} \cdot F(0,0) \\ - z sr \cdot s^w F_d(r/s). \end{multline} We now explain each of the terms in this equation. The trivial pair of paths consists of two isolated vertices at $(0,0)$ and $(0,w)$. This gives the initial $1$ in the right-hand side of the above functional equation. Note that to enumerate directed polygons in the strip we may replace the above with all pairs of vertices lying at the same vertical ordinate; this would replace $1$ with $\sum_{k=0}^w r^k s^{w-k} = (s^{w+1}-r^{w+1})/(s-r)$. \begin{figure}[h!] \begin{center} \includegraphics[height=3cm]{steps} \end{center} \caption{Every pair of paths can be continued by appending directed steps to their endpoints as shown. While there are at most 4 possible combinations, depending on the distance from boundaries, some combinations will be forbidden.} \label{fig feqn2} \end{figure} See Figure~\ref{fig feqn2}. 
When the endpoints are away from the boundaries, every pair of paths may be continued by appending directed steps in four different ways. Since each of these steps either increases or decreases the distance of the endpoint from the boundary the result is \begin{align} z\left(s+\frac{1}{s} \right) \left(r + \frac{1}{r} \right) \cdot F(r,s). \end{align} \begin{figure}[h!] \begin{center} \includegraphics[height=3cm]{bnd1} \end{center} \caption{When the endpoints of the walks are close to the boundaries one must take care to subtract off the contributions of the configurations that step outside the strip as depicted here.} \label{fig feqn3} \end{figure} See Figure~\ref{fig feqn3}. When the endpoints are close to the boundaries or each other, then appending steps as described above may result in paths that either step outside the strip or cross each other. If the endpoint of the upper path lies on the boundary then one cannot append a $(1,1)$ step to that path. Such configurations are counted by \begin{align} \frac{z}{s}\left(r+\frac{1}{r} \right) \cdot \left[s^0\right] F(r,s) & \equiv \frac{z}{s}\left(r+\frac{1}{r} \right) F(r,0). \end{align} Similarly if the endpoint of the lower path lies on the boundary then one cannot append a $(1,-1)$ step to that path: \begin{align} \frac{z}{r}\left(s+\frac{1}{s} \right) \cdot \left[r^0\right] F(r,s) & \equiv \frac{z}{r}\left(s+\frac{1}{s} \right) F(0,s). \end{align} \begin{figure}[h!] \begin{center} \includegraphics[height=3cm]{bnd2} \end{center} \caption{(left) When removing the contributions of paths that step outside the strip, we over-correct by twice removing those configurations in which both paths step outside the strip simultaneously. (right) When the endpoints of the paths are close together we must remove the contribution of paths that cross each other.} \label{fig feqn4} \end{figure} We correct the enumeration by subtracting both of these contributions. 
In so doing we over-correct by subtracting twice the contribution of paths whose endpoints lie on opposite boundaries (see Figure~\ref{fig feqn4}(left)). Thus we add back in \begin{align} \frac{z}{sr}\cdot \left[s^0r^0\right] F(r,s) & \equiv \frac{z}{sr} F(0,0). \end{align} Finally, we must also remove the contribution of those paths whose endpoints cross. This happens when we take a pair of paths whose endpoints lie together and attempt to append an upward step to the lower path and a downward step to the upper path (see Figure~\ref{fig feqn4}(right)). So we must subtract \begin{align} \label{eqn fd symm} z sr \cdot r^w F_d(s/r) & \equiv zsr \cdot s^w F_d(r/s), \end{align} where this equivalence comes from the vertical symmetry of the model without interactions. \subsection{Interacting model} \label{sec simple int} We now add boundary interactions to this model. We weight each pair of paths according to the number of contacts the upper (lower) path has with the upper (lower) boundary excluding their leftmost vertices. Recall that $a$ is conjugate to the number of contacts between the lower path and the boundary and similarly $b$ is conjugate to the number of contacts between the upper path and the boundary. Thus our generating functions $F$ and $F_d$ become functions of $a,b$ in addition to $r,s,z$; as above we will typically write these as \begin{align} F(r,s;a,b;z) \equiv F(r,s) &&\text{and } && F_d(s/r;a,b;z) \equiv F_d(s/r). \end{align} \begin{figure}[h!] \begin{center} \includegraphics[height=3.6cm]{bnd3} \end{center} \caption{Interactions with the boundary are produced when one or both paths step from distance one onto the boundary.} \label{fig feqn5} \end{figure} We now modify the above construction by noting that a contact between the upper path and its boundary is created when an upward step is appended to a path lying one step from the boundary (see Figure~\ref{fig feqn5}).
Thus we add \begin{align} zb\left(r + \frac{1}{r} \right) \left[s^1 \right] \left\{F(r,s) \right\}. \end{align} However these configurations have already been enumerated with incorrect weight, so we must also subtract \begin{align} z \left(r + \frac{1}{r} \right) \left[s^1 \right] \left\{F(r,s) \right\}. \end{align} Thus we arrive at \begin{align} z(b-1) \left(r + \frac{1}{r} \right) \left[s^1 \right] \left\{F(r,s) \right\}. \end{align} And similarly, by considering contacts between the lower path and the lower boundary we obtain \begin{align} z(a-1) \left(s + \frac{1}{s} \right) \left[r^1 \right] \left\{F(r,s) \right\}. \end{align} Again we find that these terms over-correct and we must consider those configurations in which contacts with the upper and lower boundaries are created at the same time. \begin{align} z(a-1)(b-1) \left[s^1 r^1 \right] \left\{F(r,s) \right\}. \end{align} So finally we have the following functional equation for $F(r,s;a,b;z) \equiv F(r,s)$. \begin{align} F(r,s) =& 1 + z\left(s+\frac{1}{s} \right) \left(r + \frac{1}{r} \right) \cdot F(r,s) \nonumber \\ &- \frac{z}{r} \left(s + \frac{1}{s} \right) \cdot F(0,s) - \frac{z}{s} \left(r + \frac{1}{r} \right) \cdot F(r,0) + \frac{z}{sr} \cdot F(0,0)- z sr \cdot s^w F_d(r/s) \nonumber \\ &+z(b-1)\left(r + \frac{1}{r} \right)\left[s^1\right]\left\{F(r,s)\right\} +z(a-1)\left(s + \frac{1}{s} \right)\left[r^1\right]\left\{F(r,s)\right\}\nonumber \\ &+z(a-1)(b-1)\left[s^1 r^1 \right] \left\{F(r,s) \right\}. \end{align} We can now further simplify this equation by rewriting $\left[s^1\right]\left\{F(r,s)\right\}, \left[r^1\right]\left\{F(r,s)\right\}$ and $\left[s^1 r^1 \right] \left\{F(r,s) \right\}$ in terms of $F(r,0), F(0,s)$ and $F(0,0)$. Extracting the coefficient of $s^0r^0$ in the above equation gives \begin{align} F(0,0) = &1+z\left( 1 + (b-1)+(a-1) +(a-1)(b-1) \right)\left[s^1 r^1\right]\left\{F(r,s)\right\} \nonumber \\ =& 1 + zab \left[s^1r^1 \right] \left\{ F(s,r) \right\}. 
\end{align} This has a simple combinatorial interpretation; any pair of paths whose endpoints lie in the two surfaces is either trivial or is constructed from a shorter pair whose endpoints lie a single unit from each boundary. Similarly, extracting the coefficient of $s^0$ in the above gives \begin{align} F(r,0) &= 1 + z \left(r+\frac{1}{r}\right)\left[s^1\right]\left\{F(r,s)\right\} -\frac{z}{r} \left[s^1r^0\right]\left\{ F(r,s) \right\} \nonumber \\ &\quad +z(b-1)\left(r + \frac{1}{r} \right)\left[s^1\right]\left\{F(r,s)\right\} +z(a-1)\left[s^1 r^1\right]\left\{F(r,s)\right\} + z(a-1)(b-1) \left[s^1 r^1 \right]\left\{F(r,s)\right\} \nonumber \\ &= 1 + zb \left(r+\frac{1}{r}\right)\left[s^1\right]\left\{F(r,s)\right\} + zb(a-1) \left[s^1 r^1 \right]\left\{F(r,s)\right\}, \end{align} where the term $\left[s^1r^0\right]\left\{F(r,s)\right\}$ vanishes by the parity observation made earlier, and similarly \begin{align} F(0,s) &= 1 + za \left(s+\frac{1}{s}\right)\left[r^1\right]\left\{F(r,s)\right\} + za(b-1) \left[s^1 r^1 \right]\left\{F(r,s)\right\}. \end{align} This gives three linear equations and we may solve them to obtain \begin{align} z\left[s^1 r^1 \right] \left\{F(r,s) \right\} &= \frac{F(0,0)-1}{ab} \\ z\left(s + \frac{1}{s} \right) \left[r^1\right]\left\{F(r,s)\right\} &= -\frac{1 - bF(0,s) + (b-1)F(0,0)}{ab}\\ z \left(r + \frac{1}{r} \right) \left[s^1\right]\left\{F(r,s)\right\} &= -\frac{1 - aF(r,0) + (a-1)F(0,0)}{ab}. \end{align} Substituting these into the original interactions equation gives us \begin{align} \label{eqn:functional eqn} F(r,s) =& \frac{1}{ab} + z\left(s+\frac{1}{s} \right) \left(r + \frac{1}{r} \right) \cdot F(r,s) - z sr \cdot s^w F_d(r/s)\nonumber \\ &+ A(r,s) F(0,s) + B(r,s) F(r,0) + C(r,s) F(0,0), \intertext{where} A(r,s) &= 1-\frac{1}{a} - \frac{z(s+1/s)}{r},\nonumber \\ B(r,s) &= 1-\frac{1}{b} - \frac{z(r+1/r)}{s},\nonumber \\ C(r,s) &= \frac{z}{sr}-\left(1-\frac{1}{a}\right)\left(1-\frac{1}{b}\right). \end{align} From this equation we can recover $G(a,b;z)=F_d(1;a,b;z)$.
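The combinatorial interpretation of the relation $F(0,0) = 1 + zab\left[s^1r^1\right]\left\{F(r,s)\right\}$ is easy to test by exhaustive enumeration. The sketch below is ours (it assumes the model exactly as defined in the Model section); it tallies pairs of paths with free endpoints according to their endpoint distances:

```python
from itertools import product

def end_counts(n, w, a, b):
    """Weighted counts of pairs of non-crossing directed paths of length n
    in the strip {0,...,w}, started at heights 0 and w, classified by the
    endpoint distances (d_l, d_u) to the bottom and top walls."""
    counts = {}
    for steps in product((1, -1), repeat=2 * n):
        lo, hi, wgt, ok = 0, w, 1.0, True
        for i in range(n):
            lo, hi = lo + steps[2 * i], hi + steps[2 * i + 1]
            if lo < 0 or hi > w or lo > hi:   # leaves the strip or crosses
                ok = False
                break
            if lo == 0:                        # bottom-wall visit
                wgt *= a
            if hi == w:                        # top-wall visit
                wgt *= b
        if ok:
            key = (lo, w - hi)                 # (d_l, d_u)
            counts[key] = counts.get(key, 0.0) + wgt
    return counts
```

For each $n \geq 1$ the weight of pairs ending on both walls equals $ab$ times the weight of length-$(n-1)$ pairs ending one unit from each wall, since the final step of such a pair is forced and collects both wall weights.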
In the following section we do not solve explicitly for $F_d$; however, we are able to determine its singularities and so its asymptotic behaviour. \section{Solution of Functional Equations} At this point, we define $v = w/2$, as it is the more natural parameter in what follows. Rather than solving the full model directly, we first examine the special cases of $a=b=1$ and $a=b$. \subsection{Without Interactions} We start by collecting the $F(r,s)$ terms in equation~\Ref{eqn:nonint} to get \begin{multline} \left(1- z\left(s+\frac{1}{s} \right) \left(r + \frac{1}{r} \right)\right)F(r,s) = 1 - \frac{z}{r} \left(s + \frac{1}{s} \right) \cdot F(0,s) \\ - \frac{z}{s} \left(r + \frac{1}{r} \right) \cdot F(r,0) + \frac{z}{sr} \cdot F(0,0) - z sr \cdot s^{2v} F_d(r/s). \end{multline} The coefficient of $F(r,s)$ is called the kernel $K(r,s;z) \equiv K(r,s)$ and its symmetries play a key role in the solution: \begin{align} K(r,s) &= 1- z\left(s+\frac{1}{s} \right) \left(r + \frac{1}{r} \right). \end{align} We use the kernel method, which exploits the symmetries of the kernel to remove boundary terms in the functional equation (see \cite{bousquet2010walks} for a thorough description of the kernel method). The kernel is symmetric under the following operations \begin{align} (r,s) &\mapsto \left(\frac{1}{r},s\right) & (r,s) &\mapsto \left(r,\frac{1}{s}\right) & (r,s) &\mapsto \left(s,r\right).
\end{align} To be more precise, we use the above symmetries to construct the following equations \begin{subequations} \footnotesize \begin{align} \label{eqn symm1} K(r,s) \cdot F(r,s) &= 1-\frac{z}{s}\left(r+\frac{1}{r} \right)F(r,0) - \frac{z}{r}\left(s+\frac{1}{s}\right)F(0,s) + \frac{z}{sr}F(0,0) - zrs^{{2v}+1}F_d\left(\frac{r}{s}\right)\\ \label{eqn symm2} K\left(\frac{1}{r},s \right) \cdot F\left(\frac{1}{r},s \right) &= 1-\frac{z}{s}\left(r+\frac{1}{r} \right)F\left(\frac{1}{r},0\right) - zr\left(s+\frac{1}{s}\right)F(0,s) + \frac{zr}{s}F(0,0) - \frac{zs^{{2v}+1}}{r} F_d\left(\frac{1}{rs}\right)\\ \label{eqn symm3} K\left(r,\frac{1}{s} \right) \cdot F\left(r,\frac{1}{s} \right) &= 1-zs\left(r+\frac{1}{r} \right)F\left(r,0\right) - \frac{z}{r}\left(s+\frac{1}{s}\right)F\left(0 ,\frac{1}{s} \right) + \frac{zs}{r}F(0,0) - \frac{zr}{s^{{2v}+1}} F_d\left(rs\right)\\ \label{eqn symm4} K\left(\frac{1}{r},\frac{1}{s} \right) \cdot F\left(\frac{1}{r},\frac{1}{s} \right) &= 1-zs\left(r+\frac{1}{r} \right)F\left(\frac{1}{r},0\right) - zr\left(s+\frac{1}{s}\right)F\left(0 ,\frac{1}{s} \right) + zrsF(0,0) - \frac{z}{rs^{{2v}+1}} F_d\left(\frac{s}{r}\right). \end{align} \end{subequations} We can eliminate the boundary terms by taking the appropriate alternating sum of the above equations: \begin{align} rs\cdot \text{Eqn}(\ref{eqn symm1}) - \frac{s}{r}\cdot \text{Eqn}(\ref{eqn symm2}) - \frac{r}{s} \cdot \text{Eqn}(\ref{eqn symm3}) + \frac{1}{rs}\cdot \text{Eqn}(\ref{eqn symm4}). \end{align} This is similar to the ``orbit-sum'' discussed in \cite{bousquet2010walks, bousquet2010expected}. Since the kernel is the same in all of the above equations we obtain \begin{multline} K(r,s) \cdot \left(\text{Sum of $F$} \right) = \frac{(s-1)(s+1)(r-1)(r+1)}{rs}\\ +\frac{zs^{{2v}+2}}{r^2}F_d\left(\frac{1}{rs}\right) +\frac{zr^2}{s^{{2v}+2}}F_d\left(rs\right)\\ - z{s^{{2v}+2}}{r^2}F_d\left(\frac{r}{s}\right) - \frac{z}{r^2s^{{2v}+2}}F_d\left(\frac{s}{r}\right).
\end{multline} The symmetry of $F_d$ described by equation~\Ref{eqn fd symm} comes from the vertical symmetry of the model; it can be extended to give \begin{align} zr^{{2v}+1}s F_d\left(\frac{s}{r}\right) &\equiv zrs^{{2v}+1} F_d\left(\frac{r}{s}\right) & r^{{2v}}s^{{2v}} F_d\left(\frac{1}{rs}\right) &\equiv F_d\left(rs\right). \end{align} These relations will then simplify the functional equation further: \begin{multline} K(r,s) \cdot \left(\text{Sum of $F$}\right) = \frac{(s-1)(s+1)(r-1)(r+1)}{rs}\\ +\frac{z\left(r^{{2v}+4} + s^{{2v}+4}\right)}{s^{{2v}+2}r^{{2v}+2}}F_d\left(rs\right) - \frac{z\left({r^{{2v}+4}s^{{2v}+4}+1}\right)}{r^{{2v}+2}s^2}F_d\left(\frac{r}{s} \right). \end{multline} We can now remove the left hand side of the equation by choosing values of $r$ and $s$ that set the kernel to zero. That is, $K(\hat{r}, \hat{s}) = 0$; this also gives $z^{-1} = \left(\hat{s}+\frac{1}{\hat{s}} \right) \left(\hat{r} + \frac{1}{\hat{r}} \right)$. Making this substitution gives \begin{multline} 0 = \frac{(\hat{s}-1)(\hat{s}+1)(\hat{r}-1)(\hat{r}+1)}{\hat{r}\hat{s}} +\frac{\left(\hat{r}^{{2v}+4} + \hat{s}^{{2v}+4}\right)}{\hat{s}^{{2v}+1}\hat{r}^{{2v}+1}\left(\hat{r} ^2+1\right)\left(\hat{s}^2+1\right)}F_d\left(\hat{r}\hat{s}\right)\\ - \frac{\left({\hat{r}^{{2v}+4}\hat{s}^{{2v}+4}+1}\right)}{\hat{r}^{{2v}+1}\hat{s} \left(\hat{r}^2+1\right)\left(\hat{s}^2+1\right) }F_d\left(\frac{\hat{r}}{\hat{s}}\right). \end{multline} By eliminating denominators, we obtain \begin{multline} \label{eqn:nointfunct} 0 = (\hat{s}-1)(\hat{s}+1)(\hat{r}-1)(\hat{r}+1)\left(\hat{r}^2+1\right)\left(\hat{s }^2+1\right) \hat{r}^{2v} \hat{s}^{2v}\\ +\left(\hat{r}^{{2v}+4} + \hat{s}^{{2v}+4}\right) F_d\left(\hat{r}\hat{s}\right) - \hat{s}^{2v} \left({\hat{r}^{{2v}+4}\hat{s}^{{2v}+4}+1}\right)F_d\left(\frac{\hat{r}}{\hat{s} }\right). \end{multline} We now apply a similar argument to that used by Bousquet-M{\'e}lou in \cite{bousquet2010expected} to determine the singularities of $F_d$.
Set $\hat{r} = q\hat{s}$ for a root of unity $q \neq -1$ such that $q^{{2v}+4} = -1$. More precisely, we choose $\hat{s}$ as a solution to $K(qs,s) = 0$. The above equation then reduces to \begin{multline} 0 = (\hat{s}^4q^4 -1)(\hat{s}^4-1) (\hat{s}q)^{2v} \hat{s}^{2v}\\ +\hat{s}^{{2v}+4}\left(q^{{2v}+4} + 1\right) F_d\left(q\hat{s}^2\right) - \hat{s}^{2v} \left({q^{{2v}+4}\hat{s}^{{4v}+8}+1}\right)F_d\left(q\right). \end{multline} Since $q^{{2v}+4} = -1$, the second term drops out and we can find an explicit equation for $F_d(x)$ at the roots of unity $q$: \begin{align} F_d(q) &= \frac{(\hat{s}^4q^4 -1)(\hat{s}^4-1) (\hat{s}q)^{2v} }{1- \hat{s}^{4v+8}}. \end{align} Since the kernel $K(q\hat{s},\hat{s})$ is quadratic in $\hat{s}^2$, symmetric functions of $\hat{s}^2$ are rational functions of $z$. By rewriting $F_d(q)$ as \begin{align} \label{eqn:fdqk} F_d(q) &= \frac{(\hat{s}^2q^2 -\frac{1}{\hat{s}^2q^2})(\hat{s}^2-\frac{1}{\hat{s}^2}) q^{{2v}+2} }{\hat{s}^{-({2v}+4)}- \hat{s}^{{2v}+4}}, \end{align} we can see that it must also be rational in $z$. Of course, one can see much more directly that $F_d$ must be a rational function of $z$ since it can be translated into a problem of counting paths via a finite transfer matrix (see, for example, Chapter~V of \cite{flajolet2009analytic}). The construction of $F_d(x)$ ensures that it is a polynomial in $x$ of degree ${2v}$. Thus, we can obtain the full $F_d(x)$ by using Lagrange polynomial interpolation and the known values of $F_d(q)$ (we follow the method in \cite{bousquet2010expected}). By taking a set of $\{q_k\}$ such that $q_k ^{{2v}+4} = -1$ with $q_i \neq -1$ for any $i$ and making the substitutions, we get \begin{align} F_d(x) = \sum_{j=0}^{{2v}} F_d(q_j) \prod_{\substack{0 \leq m \leq {2v} \\ m \neq j}} \frac{x - q_m}{q_j - q_m}. \end{align} Note that no term in the product contributes any singularities in $z$. Thus $F_d(x)$ being singular implies at least one $F_d(q_k)$ is also singular.
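As an independent numerical check on this singularity analysis (a sketch of our own, not used in the derivation), the radius of convergence of the non-interacting generating function can be computed from the dominant eigenvalue of a transfer matrix and compared with the smallest candidate point $1/\big(4\cos\frac{\pi}{2v+4}\cos\frac{2\pi}{2v+4}\big)$ selected by the roots of unity:

```python
import numpy as np

def zc_transfer(w):
    """Radius of convergence 1/mu of the a=b=1 model at even width w,
    from the dominant eigenvalue mu of the two-walk transfer matrix."""
    states = [(lo, hi) for lo in range(w + 1) for hi in range(lo, w + 1)
              if (hi - lo) % 2 == 0]
    idx = {st: i for i, st in enumerate(states)}
    T = np.zeros((len(states), len(states)))
    for (lo, hi) in states:
        for dlo in (1, -1):
            for dhi in (1, -1):
                lo2, hi2 = lo + dlo, hi + dhi
                if 0 <= lo2 and hi2 <= w and lo2 <= hi2:
                    T[idx[(lo2, hi2)], idx[(lo, hi)]] += 1.0
    return 1.0 / max(abs(np.linalg.eigvals(T)))

def zc_candidate(v, j, k):
    """Candidate singularity 1/(4 cos(pi j/(2v+4)) cos(pi k/(2v+4)))."""
    return 1.0 / (4.0 * np.cos(np.pi * j / (2 * v + 4))
                      * np.cos(np.pi * k / (2 * v + 4)))
```

For small widths ($v = 1, 2, 3$) the transfer-matrix value agrees with the $j=1$, $k=2$ candidate to machine precision; at $v=1$ both give $1/\sqrt{3}$.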
By equation~\Ref{eqn:fdqk}, we can see that $F_d(q)$ will be singular when $\hat{s}$ (and hence $\hat{r}$) is a $(4v+8)$-th root of unity. Combining with the kernel, a superset of singularities can be obtained by various choices of $k,j$: \begin{align} \label{eqn:a1b1soln} z_{j,k} = \frac{1}{\left(\hat{r} + \frac{1}{\hat{r}}\right) \left(\hat{s} + \frac{1}{\hat{s}}\right)} = \frac{1}{4 \cos\left(\frac{\pi j}{2v+4}\right)\cos\left(\frac{\pi k}{2v+4}\right)}. \end{align} Note that since $\hat{r} = q\hat{s}$ with $q^{{2v}+4} = -1$, we do not have $j=k$ in the above and so the dominant singularity is obtained when $j=1,k=2$ (or vice-versa). \subsection{With Equal Interactions $a=b$} For this section, we follow the same argument; however, the details become more complicated due to the boundary terms. We start by arranging equation~\Ref{eqn:functional eqn} to collect all $F(r,s)$ terms to obtain the equation \begin{align} K(r,s)F(r,s) =& \frac{1}{ab} - z sr \cdot s^{2v} F_d(r/s)\nonumber \\ &+ A(r,s) F(0,s) + B(r,s) F(r,0) + C(r,s) F(0,0), \intertext{where} A(r,s) &= 1-\frac{1}{a} - \frac{z(s+1/s)}{r},\nonumber \\ B(r,s) &= 1-\frac{1}{a} - \frac{z(r+1/r)}{s},\nonumber \\ C(r,s) &= \frac{z}{sr}-\left(1-\frac{1}{a}\right)^2 \end{align} with the kernel \begin{equation} K(r,s) = 1 - z\left(r + \frac{1}{r}\right)\left(s + \frac{1}{s}\right). \end{equation} Since the kernel is the same as that of the non-interacting case we can use the same symmetries and combine the four equations to eliminate the boundary terms $F(r,0), F\left(\frac{1}{r},0\right), F(0,s)$ and $F\left(0,\frac{1}{s}\right)$.
This results in the following functional equation \begin{multline} K(r,s) \cdot\left(\text{linear combination of } F \right) =\\ rs^{{2v}+1}(s^2-1)(r^2-1)(a-1)^2 (r^2s^2z+r^2z-sr+s^2z+z)z \cdot F(0,0)\\ +(sza+r^2sza+r-ra)(rsa-rs-s^2za-za)zs^{4v+3} \cdot F_d\left(\frac{1}{rs}\right)\\ -(rsa-rs-s^2za-za)(za+r^2za-rsa+rs)z\cdot F_d\left(\frac{s}{r}\right) \\ +(sza+r^2sza+r-ra)(rs^2za+rza+s-sa)zr^3s^{4v+3}\cdot F_d\left(\frac{r}{s}\right)\\ -(za+r^2za-rsa+rs)(rs^2za+rza+s-sa)zr^3 \cdot F_d\left(rs\right)\\ - rs^{{2v}+1}z^2(s^4-1)(r^4-1). \end{multline} Since the wall interaction is symmetric, we can again make use of the vertical symmetry to eliminate $F_d\left(\frac{1}{rs}\right)$ and $F_d\left(\frac{s}{r}\right)$, giving \begin{multline} K(r,s) \cdot\left(\text{linear combination of } F \right) \\ = L(r,s;a)\cdot F(0,0) +M(r,s;a)\cdot F_d\left(\frac{r}{s}\right) \\ +N(r,s;a)\cdot F_d\left({r}{s}\right)- rs^{{2v}+1}z^2(s^4-1)(r^4-1), \end{multline} where $L(r,s;a),M(r,s;a)$ and $N(r,s;a)$ are easily computed though complicated functions. As before, we pick values of $r$ and $s$ that set the kernel to $0$. Since $K(\hat{r},\hat{s}) =0$ we can write $z = (\hat{s}+1/\hat{s})^{-1}(\hat{r}+1/\hat{r})^{-1}$ and so eliminate it from the coefficients of the above equations. After clearing the denominators, we obtain a functional equation with coefficients $\alpha, \beta$ and $\delta$ (again easily computed, though complicated, functions) \begin{align} 0 &= \alpha(r,s;a) \cdot F_d\left(\frac{\hat{r}}{\hat{s}}\right) + \beta(r,s;a) \cdot F_d\left(\hat{r}\hat{s}\right)+ \delta(r,s;a). \end{align} The coefficient $\delta$ is important in what follows, and so we state it explicitly \begin{align} \delta(r,s;a) &= r^{2v}s^{2v}(1-r^4)(1-s^4). \end{align} Note that if $r$ or $s$ are fourth roots of unity then $\delta=0$. Unlike the $a=b=1$ case, there is no simple relation between $\hat{r}$ and $\hat{s}$ that will give us an explicit form for $F_d(x)$.
However, we can still extract the location of the singularities by solving when the coefficients $\alpha$ and $\beta$ are simultaneously $0$ with $\delta \neq 0$. Solving $\alpha=\beta=0$, we get \begin{align} \label{eqn:rssoln} \hat{r}^{{2v}} &= \frac{ \hat{r}^2 (a-1)-1 }{\hat{r}^2(a-1-\hat{r}^2)} & \hat{s}^{2v} &= - \frac{\hat{s}^2(a-1)-1 }{\hat{s}^2(a-1-\hat{s}^2)} \intertext{or} \hat{r}^{{2v}} &= - \frac{ \hat{r}^2 (a-1)-1 }{\hat{r}^2(a-1-\hat{r}^2)} & \hat{s}^{2v} &= \frac{\hat{s}^2(a-1)-1 }{\hat{s}^2(a-1-\hat{s}^2)}. \end{align} Since the form of $\hat{r}$ and $\hat{s}$ is similar, we will concentrate on finding the solutions of \begin{align} \hat{r}^{2v} = \frac{ \hat{r}^2 (a-1)-1 }{\hat{r}^2(a-1-\hat{r}^2)} \end{align} and from there deduce the solutions for $\hat{s}$. By rearranging the equation, we get \begin{equation} \label{eqn:areduction} a-1 = \frac{\hat{r}^{v+2} - \frac{1}{\hat{r}^{v+2}}} {\hat{r}^v - \frac{1}{\hat{r}^v}}. \end{equation} Since $a$ is a positive real parameter, the right hand side must also be real. The following theorem tells us that all solutions to this equation must lie either on the unit circle or the real line. \begin{thm} \label{thm:rrootsloc} The expression \begin{equation} \frac{\hat{r}^{v+2} - \frac{1}{\hat{r}^{v+2}}} {\hat{r}^v - \frac{1}{\hat{r}^v}} \end{equation} is real if and only if $\hat{r} \in \mathbb{R}$ or $|\hat{r}| = 1$. The equivalent statement holds for $\hat{s}$. \end{thm} The proof of this statement is given in the appendix and is relatively straightforward though cumbersome. We can further refine the above statement when $a \leq 2$: in that case all the solutions lie on the unit circle. To do this, we make use of Theorem~$1$ from Lal{\'\i}n and Smyth \cite{lalin2013unimodularity}. \begin{thm}[from \cite{lalin2013unimodularity}] \label{thm:smalla} Let $h(z)$ be a non-zero complex polynomial of degree $n$ having all its zeros in the closed unit disc $|z| \leq 1$.
Then for $d > n$ and any $\lambda$ on the unit circle, the self-inverse polynomial \begin{equation} P^{(\lambda)}(z) = z^{d-n} h(z) + \lambda z^n \bar{h}\left(\frac{1}{z}\right) \end{equation} has all its zeros on the unit circle. \end{thm} By rearranging equation~\Ref{eqn:rssoln}, we get \begin{align} 0 &= \hat{r}^{2v+2}(a-1-\hat{r}^2) - (\hat{r}^2 (a-1)-1) \\ 0 &= \hat{s}^{2v+2}(a-1-\hat{s}^2) + (\hat{s}^2(a-1)-1) \end{align} which is of the form given in the theorem with $n=2$, $h(z) = (a-1 - z^2)$ and $\lambda = \pm1$. The zeros of $h(z)$ are given by \begin{align} z &= \pm \sqrt{a-1}. \end{align} Hence, the zeros of $h(z)$ will be inside the closed disc exactly when $a \leq 2$ and so we can apply the theorem. We note that when $\hat{r}$ and $\hat{s}$ lie on the unit circle, the singularities of the generating function are of a similar form to that given in equation~\Ref{eqn:a1b1soln}. However, the angles are not simple functions of $w$. In Section~\ref{subs:case4} we give asymptotic expressions for the singularities. \subsection{With Interactions, $a$, $b$ free} We proceed via the same argument as per the previous sections. We start by arranging equation~\Ref{eqn:functional eqn} to collect all $F(r,s)$ terms to obtain the equation \begin{align} K(r,s)F(r,s) =& \frac{1}{ab} - z sr \cdot s^{2v} F_d(r/s)\nonumber \\ &+ A(r,s) F(0,s) + B(r,s) F(r,0) + C(r,s) F(0,0), \intertext{where} A(r,s) &= 1-\frac{1}{a} - \frac{z(s+1/s)}{r},\nonumber \\ B(r,s) &= 1-\frac{1}{b} - \frac{z(r+1/r)}{s},\nonumber \\ C(r,s) &= \frac{z}{sr}-\left(1-\frac{1}{a}\right)\left(1-\frac{1}{b}\right) \end{align} with the same kernel as before. Again, we use the symmetries of the kernel to construct four linear equations and then take linear combinations to eliminate the boundary terms $F(r,0), F\left(\frac{1}{r},0\right)$, $F(0,s)$ and $F\left(0,\frac{1}{s}\right)$.
This results in the following functional equation \begin{multline} K(r,s) \cdot\left(\text{linear combination of } F \right)\\ = rs^{{2v}+1}(s^2-1)(r^2-1)(a-1)(b-1) (r^2s^2z+r^2z-sr+s^2z+z)z \cdot F(0,0)\\ +(szb+r^2szb+r-rb)(rsa-rs-s^2za-za)zs^{4v+3} \cdot F_d\left(\frac{1}{rs}\right)\\ -(rsa-rs-s^2za-za)(zb+r^2zb-rsb+rs)z\cdot F_d\left(\frac{s}{r}\right) \\ +(szb+r^2szb+r-rb)(rs^2za+rza+s-sa)zr^3s^{4v+3}\cdot F_d\left(\frac{r}{s}\right)\\ -(zb+r^2zb-rsb+rs)(rs^2za+rza+s-sa)zr^3 \cdot F_d\left(rs\right)\\ -rs^{{2v}+1}z^2(s^4-1)(r^4-1). \end{multline} Unlike the previous case, the wall interactions are no longer symmetric and hence we cannot apply the vertical symmetry. However, we can pick values $\hat{r}$ and $\hat{s}$ that set the kernel $K(\hat{r},\hat{s}) = 0$ and eliminate $z$ from the equation. Making this substitution, we get \begin{multline} \label{eqn:abfuncteqn} 0 = \hat{s}^{4v+2} (b-1-\hat{s}^2)(\hat{r}^2(a-1)-1) \cdot F_d\left(\frac{1}{\hat{r}\hat{s}}\right) - (1-\hat{s}^2(b-1))(\hat{r}^2(a-1)-1)\cdot F_d\left(\frac{\hat{s}}{\hat{r}}\right) \\ -\hat{s}^{4v+2} \hat{r}^2(b-1-\hat{s}^2)(a-1-\hat{r}^2)\cdot F_d\left(\frac{\hat{r}}{\hat{s}}\right) + \hat{r}^2(\hat{s}^2(b-1)-1)(a-1-\hat{r}^2) \cdot F_d(\hat{r}\hat{s}) \\ - \hat{s}^{{2v}}(\hat{s}^4-1)(\hat{r}^4-1). \end{multline} Up to this point, we have omitted the dependence on the parameters $a$ and $b$ in $F_d(x)$ for convenience. In full detail, $F_d(x) \equiv F_d(x;a,b)$. This will be important in the next step when we look at the result of mapping $a \leftrightarrow b$. For this, we define $G_d(x) = F_d(x;b,a)$. With a little work we have \begin{align} G_d(x) &= F_d(x;b,a)\\ &= x^{2v} F_d\left(\frac{1}{x};a,b\right).
\end{align} Swapping $a \leftrightarrow b$ in equation~\Ref{eqn:abfuncteqn}, we get \begin{multline} 0 = \hat{s}^{4v+2} (a-1-\hat{s}^2)(\hat{r}^2(b-1)-1) \cdot G_d\left(\frac{1}{\hat{r}\hat{s}}\right) - (1-\hat{s}^2(a-1))(\hat{r}^2(b-1)-1)\cdot G_d\left(\frac{\hat{s}}{\hat{r}}\right) \\ -\hat{s}^{4v+2} \hat{r}^2(a-1-\hat{s}^2)(b-1-\hat{r}^2)\cdot G_d\left(\frac{\hat{r}}{\hat{s}}\right) + \hat{r}^2(\hat{s}^2(a-1)-1)(b-1-\hat{r}^2) \cdot G_d(\hat{r}\hat{s}) \\ - \hat{s}^{{2v}}(\hat{s}^4-1)(\hat{r}^4-1). \end{multline} Now convert $G_d$ back to $F_d$ using the relation $G_d(x) = x^{2v} F_d\left(\frac{1}{x}\right)$ and clear denominators to find \begin{multline} \label{eqn:newabfuncteqn} 0 = \hat{r}^{4v+2} (\hat{s}^2(a-1)-1)(b-1-\hat{r}^2) \cdot F_d\left(\frac{1}{\hat{r}\hat{s}}\right) - \hat{r}^{4v+2}\hat{s}^{2} (a-1-\hat{s}^2)(b-1-\hat{r}^2)\cdot F_d\left(\frac{\hat{s}}{\hat{r}}\right) \\ -(\hat{s}^2(a-1)-1)(\hat{r}^2(b-1)-1)\cdot F_d\left(\frac{\hat{r}}{\hat{s}}\right) + \hat{s}^2(\hat{r}^2(b-1)-1)(a-1-\hat{s}^2) \cdot F_d(\hat{r}\hat{s}) \\ - \hat{r}^{{2v}}(\hat{s}^4-1)(\hat{r}^4-1). \end{multline} Combining equations~\Ref{eqn:abfuncteqn} and~\Ref{eqn:newabfuncteqn}, we can eliminate one more boundary term (e.g. $F_d\left(\frac{1}{\hat{r}\hat{s}}\right)$) resulting in \begin{align} 0 = \alpha(\hat{r}, \hat{s}) \cdot F_d\left(\frac{\hat{r}}{\hat{s}}\right) + \beta(\hat{r}, \hat{s}) \cdot F_d\left(\frac{\hat{s}}{\hat{r}}\right) + \gamma(\hat{r}, \hat{s}) \cdot F_d\left({\hat{r}}{\hat{s}}\right) + \delta(\hat{r}, \hat{s}). \end{align} We do not state all of the coefficients $\alpha,\beta,\gamma$ (they are easily computed but complicated); however, the coefficient $\delta$ will be important in what follows \begin{multline} \delta = r^{2v}s^{2v}(1-r^4)(1-s^4) \big[ (1-b+r^2)(1+s^2-as^2)r^{2v+2}\\ -(1-b+s^2)(1+r^2-ar^2)s^{2v+2} \big]. \label{eqn:abdelta} \end{multline} Note that for $a,b$ in this general case, $\delta = 0$ when $r,s$ are fourth roots of unity or $r=s$.
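The relation $G_d(x) = x^{2v} F_d\left(\frac{1}{x};a,b\right)$ encodes the fact that swapping $a \leftrightarrow b$ amounts to reflecting the strip top-to-bottom. This can be verified directly on small cases; the sketch below (our own, assuming the model exactly as defined in the Model section) classifies together-ending pairs by their common final height:

```python
from itertools import product

def together_counts(n, w, a, b):
    """Weighted counts of non-crossing pairs of directed paths of length n
    in the strip {0,...,w}, started at heights 0 and w, that end together,
    classified by the common final height."""
    counts = {}
    for steps in product((1, -1), repeat=2 * n):
        lo, hi, wgt, ok = 0, w, 1.0, True
        for i in range(n):
            lo, hi = lo + steps[2 * i], hi + steps[2 * i + 1]
            if lo < 0 or hi > w or lo > hi:   # leaves the strip or crosses
                ok = False
                break
            if lo == 0:
                wgt *= a
            if hi == w:
                wgt *= b
        if ok and lo == hi:
            counts[lo] = counts.get(lo, 0.0) + wgt
    return counts
```

Reflecting heights via $h \mapsto w-h$ exchanges the two walls, so the weighted count at height $h$ with weights $(a,b)$ must equal the count at height $w-h$ with weights $(b,a)$.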
We follow the same logic as for the previous section. The locations of the singularities occur where $\alpha = \beta = \gamma = 0$ and $\delta \neq0$. Thus, solving for when $\alpha = \beta = \gamma = 0$ simultaneously gives \begin{align} \label{eqn:rsabsoln} r^{4v+4} &= \frac{(r^2(b-1) - 1)(r^2(a-1) - 1)}{(b-1-r^2)(a-1-r^2)} & \text{and} && s^{4v+4} &= \frac{(s^2(b-1) - 1)(s^2(a-1) - 1)}{(b-1-s^2)(a-1-s^2)}. \end{align} By rearranging equation~\Ref{eqn:rsabsoln}, we get \begin{align} 0&= \hat{r}^{4v+4}\left({(b-1-\hat{r}^2)(a-1-\hat{r}^2)}\right) - \left({(\hat{r}^2(b-1) - 1)(\hat{r}^2(a-1) - 1)}\right)\\ 0&= \hat{s}^{4v+4}\left({(b-1-\hat{s}^2)(a-1-\hat{s}^2)}\right) - \left({(\hat{s}^2(b-1) - 1)(\hat{s}^2(a-1) - 1)}\right) \end{align} which is of the form given in Theorem~\ref{thm:smalla} with $n=4$, $h(z) = (b-1-z^2)(a-1-z^2)$ and $\lambda = -1$. The zeros of $h(z)$ are given by \begin{equation} z = \pm \sqrt{a-1}, \pm \sqrt{b-1}. \end{equation} Hence, the zeros of $h(z)$ will be inside the closed disc exactly when $a,b \leq 2$. Consequently when $a,b \leq 2$ we know that $\hat{r},\hat{s}$ lie on the unit circle. When $a$ or $b > 2$ we observe that all the solutions lie either on the unit circle or the real line. \section{Exact and asymptotic results} In this section, we describe the asymptotic and exact results we obtained for each of the cases. In the cases where $a,b \in \{1,2\}$ or $ab=a+b$, we are able to obtain an exact solution for the dominant singularity. More generally we are only able to obtain asymptotic results. Note that by $a\leftrightarrow b$ symmetry, we need only consider cases where $a\geq b$. This gives 13 different cases (see Figure~\ref{fig param}) which we summarise in Section~\ref{subs:summary}. \begin{figure}[h!]
\begin{center} \includegraphics[width=0.60\textwidth]{parameter} \end{center} \caption{The $a-b$ parameter space contains 13 representative points, depending on whether $a,b=1$, $1<a,b<2$, $a,b=2$, $a,b>2$, whether $a=b$, or whether $a,b$ lie along the special curve $ab=a+b$. The numbers in this diagram correspond to the cases described in the text.}\label{fig param} \end{figure} In what follows we proceed by solving equation~\Ref{eqn:rsabsoln} for possible values of $\hat{r},\hat{s}$; we are able to do this exactly in a small number of cases, but in the majority we must do so asymptotically. Each pair $\hat{r},\hat{s}$ may lead to a singularity of the generating function, but only when the auxiliary function $\delta$ is non-zero. \subsection{Case (I) : $a = b =1$.} \label{subs:case1} \begin{mycase}\label{case1}\end{mycase} This is the non-interacting case. We can obtain the asymptotic expansion by looking at equation~\Ref{eqn:a1b1soln} with $j=1,k=2$. \begin{equation} \label{eqn:a1b1asympt} z_c = \frac{1}{4} + \frac{5}{32}\frac{\pi^2}{v^2} - \frac{5}{8}\frac{\pi^2}{v^3} + O\left(v^{-4}\right). \end{equation} \subsection{Case (II): $a = b = 2$.} \label{subs:case2} \begin{mycase}\label{case2}\end{mycase} Simplifying the solutions for $\hat{r}$ and $\hat{s}$ in equation~\Ref{eqn:rssoln}, we get that \begin{equation} \hat{r}^{4v} = \frac{1}{\hat{r}^4} \qquad \hat{s}^{4v} = \frac{1}{\hat{s}^4}. \end{equation} This suggests that the solutions for $\hat{r}$ and $\hat{s}$ are simple roots of unity. Hence the set of solutions given by \begin{align} \hat{r} &\in \left\{\exp\left[ \frac{\pi i j}{2v + 2}\right]\right\}_{0\leq j \leq 4v+4} & \hat{s} &\in \left\{\exp\left[ \frac{\pi i k}{2v + 2}\right]\right\}_{0\leq k \leq 4v+4} \end{align} contains all the candidate singularities for $\hat{r}$ and $\hat{s}$. If we attempt to set both $\hat{r}, \hat{s} = 1$, we do not obtain a valid singularity since $\delta = 0$.
To obtain the dominant singularity, we instead take \begin{align} \hat{r} &=\exp\left[ \frac{\pi i}{2v + 2}\right] & \hat{s} &= 1, \end{align} and with this choice $\delta \neq 0$. Note that by symmetry we could also swap the choices of $\hat{r} \leftrightarrow \hat{s}$. We then have \begin{align} z_c &= \frac{1}{\left(\hat{r} + \frac{1}{\hat{r}}\right)\left(\hat{s} + \frac{1}{\hat{s}}\right)} = \frac{1}{4 \cos\left(\frac{\pi}{2v+2}\right)}\nonumber \\ &= \frac{1}{4} + \frac{1}{32}\frac{\pi^2}{v^2} - \frac{1}{16}\frac{\pi^2}{v^3} + O\left(v^{-4}\right). \label{eqn:zc_case2} \end{align} \subsection{Case (III): $a = 2; \: b = 1$.} \label{subs:case3} \begin{mycase}\label{case3}\end{mycase} As per the previous two cases, we find that the particular choice of $a$ and $b$ leads to solutions that are roots of unity. Equation~\Ref{eqn:rsabsoln} reduces to \begin{equation} \hat{r}^{4v} = -\frac{1}{\hat{r}^6} \qquad \hat{s}^{4v} = -\frac{1}{\hat{s}^6}, \end{equation} and so the solutions are given by \begin{align} \hat{r} &= \left\{\exp\left[ \frac{\pi i j}{4v + 6}\right]\right\}_{\substack{0\leq j \leq 4v+4\\ {\rm j \; odd} }} & \hat{s} &= \left\{\exp\left[ \frac{\pi i k}{4v + 6}\right]\right\}_{\substack{0\leq k \leq 4v+4\\ {\rm k \; odd} }}. \end{align} To obtain the dominant singularity we take $j,k=1,3$ respectively: \begin{align} \hat{r} &= \exp\left[ \frac{\pi i }{4v + 6}\right] & \hat{s} &= \exp\left[ \frac{3\pi i}{4v + 6}\right], \end{align} and this gives a non-zero $\delta$ \begin{equation} \delta = \frac{-6\pi^3}{v^3} + \frac{27\pi^3}{v^4} + \frac{\pi^3 (47\pi^2-1296) }{16v^5} + O\left(v^{-6}\right). \end{equation} The dominant singularity is \begin{equation} z_c = \frac{1}{4\cos\left(\frac{\pi}{4v+6}\right)\cos\left(\frac{3\pi}{4v+6} \right) }, \end{equation} and its asymptotic expansion is \begin{equation} \label{eqn:zc_case3} z_c = \frac{1}{4} + \frac{5}{64}\frac{\pi^2}{v^2} - \frac{15}{64}\frac{\pi^2}{v^3} + O\left(v^{-4}\right). 
\end{equation} Note that if we tried choosing $j,k=1,1$ then $\hat{r}=\hat{s}$ and $\delta = 0$. \subsection{Case (IV): $a = b; \: a < 2$.} \label{subs:case4} \begin{mycase}\label{case4}\end{mycase} In Cases~\ref{case1} and~\ref{case2}, the solutions of $\hat{r}$ and $\hat{s}$ are simply roots of unity. Hence we guess that for this generalised case $1 < a=b < 2$, the solutions of $\hat{r}$ and $\hat{s}$ will be perturbations of the roots of unity found in the $a=b=1$ case (a similar approach was used in \cite{brak2005a-:a}). More precisely, we look for a solution of the form \begin{align} \hat{r} &= \exp\left[ \frac{i \pi}{v+2} \left( c_0 + \frac{c_1}{v} + \frac{c_2}{v^2} + \cdots \ \right) \right], \end{align} and similarly for $\hat{s}$. We substitute this into equation~\Ref{eqn:rssoln} and solve for the unknown constants. This process yielded \begin{equation} \hat{r} = \exp\left[\frac{i\pi}{v-\frac{2}{a-2}}\left(1 - \frac{4a(a-1)\pi^2}{3(v(a-2)-1)^3} + O\left(\frac{1}{( v(a-2)-2)^{5}}\right)\right)\right] \end{equation} which, when substituted into equation~\Ref{eqn:rssoln} gives \begin{equation} \hat{r}^{2v} - \frac{ \hat{r}^2 (a-1)-1 }{\hat{r}^2(a-1-\hat{r}^2)} = \frac{8ia(a-1)(a^2+8a-8)\pi^5}{15(a-2)^4v^5} + O\left(v^{-6}\right). \end{equation} Repeating this for $\hat{s}$ leads to \begin{equation} \hat{s} = \exp\left[\frac{i\pi}{2\left(v-\frac{2}{a-2}\right)}\left(1 - \frac{a(a-1)\pi^2}{3(-2+v(a-2))^3} + O\left(\frac{1}{(-2 + v(a-2))^{5}}\right)\right)\right] \end{equation} which, when substituted into equation~\Ref{eqn:rssoln} gives \begin{equation} \hat{s}^{2v} + \frac{\hat{s}^2(a-1)-1 }{\hat{s}^2(a-1-\hat{s}^2)} = \frac{ia(a-1)(a^2+8a-8)\pi^5}{60(a-2)^4v^5} + O\left(v^{-6}\right). \end{equation} Note that equation~\Ref{eqn:rssoln} is not symmetric under $\hat{r} \leftrightarrow \hat{s}$. 
This choice of $\hat{r}$ and $\hat{s}$ gives a $\delta$ value of \begin{equation} \delta = \hat{r}^{2v}\hat{s}^{2v}(\hat{r}^4-1)(\hat{s}^4-1) = \frac{8\pi^2}{v^2}+\frac{8i\pi^2 (3a\pi-4i)}{(a-2)v^3} + O\left(v^{-4}\right) \end{equation} which is non-zero. Hence, solving the kernel equation $K(\hat{r},\hat{s})=0$ for $z$, we get that \begin{equation} \label{eqn:zcaasmall} z_c = \frac{1}{4}+\frac{5}{32}\frac{\pi^2}{v^2}+\frac{5}{8}\frac{\pi^2}{v^3(a-2)} + O\left(v^{-4}\right). \end{equation} We see that as $a \to 1$ this agrees with Case~\ref{case1}. \subsection{Case (V): $a = b; \: a > 2$.} \label{subs:case5} \begin{mycase}\label{case5}\end{mycase} In the case $a>2$, Theorem \ref{thm:smalla} does not hold and we expect equation~\Ref{eqn:rssoln} to contain extra solutions along the real axis. By rearranging equation~\Ref{eqn:rssoln}, we get that \begin{align} (a-1-\hat{r}^2)\hat{r}^{{2v}+2} &= \hat{r}^2 (a-1)-1,\\ (a-1-\hat{s}^2)\hat{s}^{{2v}+2} &= -\left( \hat{s}^2(a-1)-1 \right) . \end{align} We observe that $\hat{r} =\sqrt{a-1}$ sets the left-hand side to zero while leaving the right-hand side non-zero. Hence, we look for solutions that perturb this square root (again a similar approach was used in \cite{brak2005a-:a}). We proceed as per the previous case and arrive at a solution of the form \begin{align} \hat{r} & = \sqrt{a-1} \left[1-\frac{a(a-2)}{2(a-1)^2 (a-1)^{v}} + O\left(v (a-1)^{-2v} \right)\right] \\ \hat{s} &= \frac{1}{\sqrt{a-1}} \left[1+\frac{a(a-2)}{2(a-1)^2 (a-1)^{v}} + O\left(v (a-1)^{-2v} \right)\right]. \end{align} This choice of $\hat{r}$ and $\hat{s}$ gives a non-zero $\delta$ which to leading order is \begin{equation} \delta = (a-1)^{2v} a^2 (a-2)^2 + O\left(v \right). \end{equation} Putting this together with the kernel equation $K(\hat{r},\hat{s})=0$ we get \begin{equation} z_c = \frac{a-1}{a^2} + \frac{(a-2)^2}{a^2(a-1) (a-1)^{v}} +O\left(v(a-1)^{-2v}\right).
\end{equation} \subsection{Case (VI): $a < 2; \: b < 2$.} \label{sec:case6} \begin{mycase}\label{case6}\end{mycase} In Cases \ref{case1}, \ref{case2} and \ref{case4}, the solutions of $\hat{r}$ and $\hat{s}$ are simple perturbations of roots of unity. Hence we guess that for the case $1 < a,b < 2$, the solutions of $\hat{r}$ and $\hat{s}$ will be of a similar nature. Hence we apply a similar method to that used in Case~\ref{case4} but now applied to equation~\Ref{eqn:rsabsoln}. This leads us to \begin{multline} \hat{r} = \exp\left[ \frac{\pi i}{\left(v- \frac{a+b-4}{(a-2)(b-2)}\right)} \left(1 - \frac{2(ab-a-b)(a^2b+ab^2-10ab+8a+8b-8)\pi^2}{3(a-2)^3(b-2)^3 \left(v- \frac{a+b-4}{(a-2)(b-2)}\right)^3} \right.\right.\\ \left.\left. \phantom{\frac{2(ab-a-b)(a^2b+ab^2-10ab+8a+8b-8)\pi^2}{3(a-2)^3(b-2)^3 \left(v- \frac{a+b-4}{(a-2)(b-2)}\right)^3}} + O\left(\left(v- \frac{a+b-4}{(a-2)(b-2)}\right)^{-5}\right) \right) \right]; \end{multline} which, when substituted into equation~\Ref{eqn:rsabsoln} gives \begin{equation} \hat{r}^{4v+4} - \frac{(\hat{r}^2(b-1) - 1)(\hat{r}^2(a-1) - 1)}{(b-1-\hat{r}^2)(a-1-\hat{r}^2)} = O\left(\frac{1}{(a-2)^5 (b-2)^5 v^{5}}\right). \end{equation} We remind the reader that in this case if $\hat{r} =\hat{s}$ then $\delta = 0$ and so we need the value of $\hat{s}$ to be different. Following the same trend as for the previous case, we get that \begin{multline} \hat{s} = \exp\left[ \frac{\pi i}{2\left(v- \frac{a+b-4}{(a-2)(b-2)}\right)} \left(1 - \frac{(ab-a-b)(a^2b+ab^2-10ab+8a+8b-8)\pi^2}{6(a-2)^3(b-2)^3 \left(v- \frac{a+b-4}{(a-2)(b-2)}\right)^3} \right.\right.\\ \left.\left. 
\phantom{ \frac{(ab-a-b)(a^2b+ab^2-10ab+8a+8b-8)\pi^2}{6(a-2)^3(b-2)^3 \left(v- \frac{a+b-4}{(a-2)(b-2)}\right)^3} } + O\left(\left(v- \frac{a+b-4}{(a-2)(b-2)}\right)^{-5}\right) \right)\right]; \end{multline} which, when substituted into equation~\Ref{eqn:rsabsoln} gives \begin{equation} \hat{s}^{4v+4} - \frac{(\hat{s}^2(b-1) - 1)(\hat{s}^2(a-1) - 1)}{(b-1-\hat{s}^2)(a-1-\hat{s}^2)} = O\left(\frac{1}{(a-2)^5 (b-2)^5 v^{5}}\right). \end{equation} This choice of $\hat{r}$ and $\hat{s}$ will give a $\delta$ value of \begin{equation} \delta = -\frac{16\pi^2(a-2)(b-2)}{v^2} - \frac{16i \pi^2 (6ab\pi - 6b\pi - 2bi- 2ai -9a \pi +6\pi + 8i)}{v^3} + O\left(v^{-4}\right) \end{equation} which is non-zero. Hence, solving the kernel equation $K(\hat{r},\hat{s})=0$ for $z$, we get that \begin{equation} \label{eqn:zcabsmall} z_c = \frac{1}{4}+\frac{5}{32}\frac{\pi^2}{v^2}+\frac{5}{16}\frac{\pi^2 (a+b-4) }{v^3(a-2)(b-2)} + O\left(v^{-4}\right). \end{equation} Note that equation~\Ref{eqn:zcabsmall} reduces to equation~\Ref{eqn:zcaasmall} when $b=a$, and reduces to equation~\Ref{eqn:a1b1asympt} when $a,b \to 1$. \subsection{Case (VII): $a > 2; \: b > 2$.} \label{subs:case7} \begin{mycase}\label{case7}\end{mycase} In the case where $a$ or $b$ is greater than $2$, we argue as for Case~\ref{case5} in that we expect solutions along the real axis as well. Since $\hat{r}$ and $\hat{s}$ satisfy the same equation and the equation is invariant under switching $a$ and $b$, we can (without loss of generality) look at the expansion of $\hat{r}$ in terms of $\sqrt{a-1}$. We get \begin{equation} \label{eqn:largerabsoln} \hat{r} = \sqrt{a-1} \left[1+ \frac{a(ab-a-b)(a-2)}{2(a-1)^3(a-b)(a-1)^{2v}}+ O\left( (a-1)^{-4v}\right) \right]. \end{equation} Using the same process, we get that \begin{equation} \hat{s} = \sqrt{b-1} \left[1+ \frac{b(ab-a-b)(b-2)}{2(b-1)^3(b-a)(b-1)^{2v}}+ O\left( (b-1)^{-4v}\right) \right]. \end{equation} We then check that this gives a non-zero value of $\delta$. 
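As a numerical sanity check on the real-axis solution above, one can solve equation~\Ref{eqn:rsabsoln} directly by bisection and compare with equation~\Ref{eqn:largerabsoln}; a minimal Python sketch (the values $a=4$, $b=3$, $v=6$ are arbitrary test choices with $a>b>2$):

```python
import math

# Arbitrary test parameters with a > b > 2
a, b, v = 4.0, 3.0, 6

def g(r):
    """Residual of equation (rsabsoln), cleared of denominators."""
    return (r**(4*v + 4) * (b - 1 - r**2) * (a - 1 - r**2)
            - (r**2 * (b - 1) - 1) * (r**2 * (a - 1) - 1))

# Bisect for the real root just above sqrt(a-1)
lo, hi = math.sqrt(a - 1), math.sqrt(a - 1) * 1.0001
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
r_num = 0.5 * (lo + hi)

# Leading-order prediction from equation (largerabsoln)
eps = a * (a*b - a - b) * (a - 2) / (2 * (a - 1)**3 * (a - b) * (a - 1)**(2*v))
r_asym = math.sqrt(a - 1) * (1 + eps)

assert abs(r_num - r_asym) < 1e-9
```

The bisection root agrees with the perturbative expansion to well within the stated $O\left((a-1)^{-4v}\right)$ error.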
For simplicity of notation, we let $A = a-1$ and $B= b-1$ and, through a slight abuse of notation, we obtain \begin{equation} \delta = A^{2v}B^{v} \left[A(AB-1)(A-B)(A^2-1)(B^2-1) + O\left(A^{-2v}\right) + O\left(B^{-v}\right) \right]. \end{equation} By making the substitution into $K(\hat{r},\hat{s})=0$, we get that to leading order \begin{multline} z_c = \frac{\sqrt{a-1}\sqrt{b-1}}{ab} + \frac{(a-2)^2 (ab-a-b)\sqrt{b-1}}{2ab(b-a) \sqrt{a-1}(a-1)^{2v+2}} \\ + \frac{(b-2)^2 (ab-a-b)\sqrt{a-1}}{2ab(a-b) \sqrt{b-1}(b-1)^{2v+2}} + O\left(a^{-4v}\right) + O\left(b^{-4v}\right). \end{multline} Note that the above expression implies that $z_c$ is a decreasing function of $v$. To see this, consider $a>b>2$. The first correction term is now negative (since $(b-a)<0$) while the second correction term is positive. The factor of $(a-1)^{2v+2}$ in the denominator of the first correction term is larger than the corresponding factor of $(b-1)^{2v+2}$ in the denominator of the second term. Hence for large $v$ the negative first correction term is smaller in magnitude than the positive second correction term, so the sum of the two corrections is positive and shrinks to $0$ as $v \to \infty$. \subsection{Case (VIII): $a > 2; \: b < 2$.} \label{subs:case8} \begin{mycase}\label{case8}\end{mycase} The next region we consider is when one parameter is small ($<2$) and the other is large ($>2$). Without loss of generality, we can assume that $a>2$ and $b<2$. We make use of the solutions obtained in Cases~\ref{case6} and~\ref{case7} to obtain \begin{equation} \hat{r} = \sqrt{a-1} \left[1+ \frac{a(ab-a-b)(a-2)}{2(a-1)^3(a-b)(a-1)^{2v}}+ O\left( (a-1)^{-4v}\right) \right] \end{equation} and \begin{multline} \hat{s} = \exp\left[ \frac{\pi i}{2\left(v- \frac{a+b-4}{(a-2)(b-2)}\right)} \left(1 - \frac{(ab-a-b)(ab^2+a^2b-10ab+8a+8b-8)\pi^2}{6(a-2)^3(b-2)^3 \left(v- \frac{a+b-4}{(a-2)(b-2)}\right)^3} \right.\right.\\ \left.\left.
\phantom{\frac{(ab-a-b)(ab^2+a^2b-10ab+8a+8b-8)\pi^2}{6(a-2)^3(b-2)^3 \left(v- \frac{a+b-4}{(a-2)(b-2)}\right)^3}} + O\left(\left(v- \frac{a+b-4}{(a-2)(b-2)}\right)^{-5}\right) \right)\right]. \end{multline} Substituting these choices into $\delta$ gives the following non-zero form \begin{equation} \delta = 2\pi A^{2v}(AB-1)(A^2-1) \left[ -\frac{i(A-1)}{v} +O(v^{-2})\right]. \end{equation} We can then extract the growth rate as \begin{equation} z_c = \frac{\sqrt{a-1}}{2a} \left[1 + \frac{\pi^2}{8v^2} + \frac{\pi^2(a+b-4)}{4(a-2)(b-2)v^3} + O\left(v^{-4}\right)\right]. \end{equation} Note that as $b \to 1$ the above expression becomes equation~\Ref{eqn:zc_case10} in Case~\ref{case10} below. We now complete the analysis by looking at the remaining boundary cases. \subsection{Case (IX): $a < 2; \: b = 1$.} \label{subs:case9} \begin{mycase}\label{case9}\end{mycase} In this case, equation~\Ref{eqn:rsabsoln} reduces to \begin{align} \hat{r}^{4v+6} &= \frac{\hat{r}^2 (a-1) - 1 }{a-1-\hat{r}^2} & \hat{s}^{4v+6} &= \frac{\hat{s}^2 (a-1) - 1 }{a-1-\hat{s}^2}. \end{align} Following techniques similar to those used in Case~\ref{case6}, we can obtain the two primitive roots for $\hat{r}$ and $\hat{s}$: \begin{equation} \hat{r} = \exp\left[\frac{\pi i}{2\left(v + \frac{a-3}{a-2} \right)}\left(1 - \frac{a(a-1)\pi^2}{6\left(a-3 + v(a-2)\right)^3} + O\left(\frac{1}{(a-3 + v(a-2))^5}\right)\right) \right] \end{equation} and \begin{equation} \hat{s} = \exp\left[\frac{\pi i}{\left(v + \frac{a-3}{a-2} \right)}\left(1 - \frac{2a(a-1)\pi^2}{3\left(a-3 + v(a-2)\right)^3} + O\left(\frac{1}{(a-3 + v(a-2))^5}\right)\right) \right]. \end{equation} Using these values of $\hat{r}$ and $\hat{s}$, we obtain a non-zero $\delta$ value of \begin{equation} \delta = -\frac{16\pi^2 (a-2)}{v^2} - \frac{16 \pi^2i (2ia + 3\pi a - 6i)}{v^3} + O\left(v^{-4}\right).
\end{equation} This yields a dominant singularity of \begin{equation} z_c = \frac{1}{4} + \frac{5}{32}\frac{\pi^2}{v^2} - \frac{5}{16} \frac{\pi^2(a-3)}{(a-2)v^3} + O\left(v^{-4}\right). \end{equation} As $a \to 1$ this reduces to equation~\Ref{eqn:a1b1asympt}. \subsection{Case (X): $a > 2; \: b = 1$.} \label{subs:case10} \begin{mycase}\label{case10}\end{mycase} In this case, we have the same equations for $\hat{r}$ and $\hat{s}$ as in the previous case \begin{align} \hat{r}^{4v+6} &= \frac{\hat{r}^2 (a-1) - 1 }{a-1-\hat{r}^2} & \hat{s}^{4v+6} &= \frac{\hat{s}^2 (a-1) - 1 }{a-1-\hat{s}^2}. \end{align} Following the methods used in Cases~\ref{case5} and~\ref{case7}, we can obtain the singularity of $\hat{r}$ on the real line: \begin{equation} \hat{r} = \sqrt{a-1} \left[1 - \frac{a(a-2)}{2(a-1)^4 (a-1)^v} +O\left(v(a-1)^{-2v}\right) \right] \end{equation} while \begin{equation} \hat{s} = \exp\left[\frac{\pi i}{2\left(v + \frac{a-3}{a-2} \right)}\left(1 - \frac{a(a-1)\pi^2}{6\left(a-3 + v(a-2)\right)^3} + O\left(\frac{1}{\left(a-3 + v(a-2)\right)^5}\right)\right) \right]. \end{equation} Using these values of $\hat{r}$ and $\hat{s}$, we obtain a non-zero $\delta$ \begin{equation} \delta = \frac{2i a\pi (a-1)^{2v+2}(a-2)^2}{v} + O\left(\frac{(a-1)^{2v}}{v^{2}}\right). \end{equation} This yields a dominant singularity of \begin{equation} \label{eqn:zc_case10} z_c = \frac{\sqrt{a-1}}{2a} \left[1 + \frac{\pi^2}{8v^2} - \frac{(a-3)\pi^2}{4(a-2)v^3} + O\left(v^{-4}\right)\right]. \end{equation} \subsection{Case (XI): $a = 2; \: b < 2$.} \label{subs:case11} \begin{mycase}\label{case11}\end{mycase} This case is very similar to Case~\ref{case9}. Equation~\Ref{eqn:rsabsoln} reduces to \begin{align} \hat{r}^{4v+4} &= -\frac{\hat{r}^2 (b-1) - 1 }{b-1-\hat{r}^2} & \hat{s}^{4v+4} &= -\frac{\hat{s}^2 (b-1) - 1 }{b-1-\hat{s}^2}.
\end{align} Again we follow the method used in Case~\ref{case6}, and we find \begin{align} \hat{r} &= \exp\left[\frac{\pi i}{4\left(v + \frac{b-4}{b-2} \right)}\left(1 - \frac{b(b-1)\pi^2}{3\left(b-4 + 2v(b-2)\right)^3}+ O\left(\frac{1}{\left(b-4 + 2v(b-2)\right)^5}\right)\right) \right] \\ \hat{s} &= \exp\left[\frac{3 \pi i}{4\left(v + \frac{b-4}{b-2} \right)}\left(1 - \frac{3b(b-1)\pi^2}{\left(b-4 + 2v(b-2)\right)^3} + O\left(\frac{1}{\left(b-4 + 2v(b-2)\right)^5}\right)\right) \right]. \end{align} These give a non-zero $\delta$: \begin{equation} \delta = \frac{6\pi^3 (b-2)}{v^3} - \frac{3i \pi^3 (3ib + 6\pi b - 12i - 4\pi)}{v^4} + O\left(v^{-5}\right). \end{equation} We thus find the dominant singularity: \begin{equation} z_c = \frac{1}{4} + \frac{5}{64}\frac{\pi^2}{v^2} - \frac{5}{64} \frac{\pi^2(b-4)}{(b-2)v^3} + O\left(v^{-4}\right). \end{equation} Note that as $b \to 1$ we recover equation~\Ref{eqn:zc_case3}. \subsection{Case (XII): $a > 2; \: b = 2$.} \label{subs:case12} \begin{mycase}\label{case12}\end{mycase} As in Case~\ref{case11}, one parameter sits at the boundary value; here we set $b=2$. This reduces equation~\Ref{eqn:rsabsoln} to \begin{equation} \hat{s}^{4v+4} = -\frac{\hat{s}^2(a-1)-1}{a-1-\hat{s}^2}. \end{equation} Looking at the expansion of $\hat{s}$, we get \begin{multline} \hat{s} = \exp\left[ \frac{\pi i}{2\left(2v - \frac{a-4}{a-2}\right)} \left(1- \frac{\pi^2(a-1)a}{3( (2a-4)v + a-4)^3} \right.\right.\\ \left.\left. \phantom{\frac{\pi^2(a-1)a}{3( (2a-4)v + a-4)^3}} + O\left(\frac{1}{\left( (2a-4)v +a-4 \right)^5}\right)\right)\right]. \end{multline} Similarly, the solution for $\hat{r}$ is given by a simplified version of equation~\Ref{eqn:largerabsoln}: \begin{equation} \hat{r} = \sqrt{a-1}\left[1 + \frac{a(a-2)}{2(a-1)^3(a-1)^{2v}} + O\left((a-1)^{-4v}\right)\right].
\end{equation} Together these give \begin{equation} \delta = (a-1)^{2v} \left[ \frac{\pi a (a-1)(a-2)^3}{v} + O\left(v^{-2}\right)\right] \end{equation} with the dominant singularity being \begin{equation} z_c = \frac{\sqrt{a-1}}{2a} \left[ 1 +\frac{\pi^2}{32 v^2} - \frac{(a-4)\pi^2}{32(a-2)v^3} + O\left(v^{-4}\right)\right]. \end{equation} \subsection{Case (XIII): $ ab-a-b = 0$.} \label{subs:case13} \begin{mycase}\label{case13}\end{mycase} In Cases \ref{case6}, \ref{case7} and \ref{case8}, the factor $ab-a-b$ appears repeatedly in the asymptotic expansions, leading us to believe that there may be something of interest along this curve. We note that this polynomial plays an important role in the single-walk version of this model \cite{brak2005a-:a} --- along the curve $ab=a+b$ the dominant singularity is independent of the width of the system. While this is not the case for the two-walk model we consider in this paper, we are able to compute the dominant singularity exactly along the curve. Equation~\Ref{eqn:rsabsoln} reduces to \begin{align} (\hat{r}^2(a-1) - 1)(a-1-\hat{r}^2)(\hat{r}^{2v+2}-1)(\hat{r}^{2v+2}+1) &= 0,\\ (\hat{s}^2(a-1) - 1)(a-1-\hat{s}^2)(\hat{s}^{2v+2}-1)(\hat{s}^{2v+2}+1) &= 0. \end{align} This suggests that the solutions for $\hat{r}$ and $\hat{s}$ come in two forms: simple roots of unity, and square-root-type solutions. Again, the condition $\delta \neq 0$ requires $\hat{r} \neq \hat{s}$, and we obtain the following exact expressions \begin{align} \hat{r} &= \sqrt{a-1}\\ \hat{s} &= \exp\left(\frac{\pi i}{2v + 2}\right). \end{align} We could equally well have chosen the above with $\hat{r}$ and $\hat{s}$ swapped. Using the above values of $\hat{r}$ and $\hat{s}$, we obtain \begin{equation} \delta = (a-1)^{2v} \left[\frac{-2i a^2(a-2)^3 \pi}{v} + \frac{2(-2i+ia-\pi +\pi a)\pi(a-2)^2a^2}{v^2} + O\left(v^{-3}\right)\right].
\end{equation} This yields a dominant singularity of \begin{equation} z_c = \frac{\sqrt{a-1}}{2a \cos\left(\frac{\pi}{2v+2}\right)} \end{equation} or, asymptotically, \begin{equation} z_c = \frac{\sqrt{a-1}}{2a}\left[1 + \frac{\pi^2}{8v^2} - \frac{\pi^2}{4v^3} + O\left(v^{-4}\right) \right]. \end{equation} Note that as $a \to 2$ this reduces to equation~\Ref{eqn:zc_case2}. \subsection{Summary} \label{subs:summary} \renewcommand{\arraystretch}{1.3} Here we summarise the results of this section, divided into three tables. In Table~\ref{tab:zcexact} we give the cases in which we are able to find the dominant singularity exactly. For the remainder of the parameter space we have been unable to find exact expressions, and we present only asymptotic results. These are divided into Tables~\ref{tab:zcsmall} and~\ref{tab:zcbig} according to whether or not at least one of $a,b$ exceeds $2$. For comparison, we include the asymptotics of the single-walk model with $b=1$ in Table~\ref{tab:zcsingle}. \begin{table}[h!]
\begin{center} \begin{tabular}{|| c | c | c | l||} \hline \hline Case: & $a$ & $b$ & Dominant Singularity ($z_c$)\\ \hline \hline (I) & $=1$ & $=1$ & $=\frac{1}{4\cos\left(\frac{\pi}{2v+4}\right)\cos\left(\frac{2 \pi}{2v+4}\right)}$\\ & & &$=\frac{1}{4} + \frac{5}{32}\frac{\pi^2}{v^2} - \frac{5}{8}\frac{\pi^2}{v^3} + O\left(v^{-4}\right)$\\ \hline (II) & $=2$ & $=2$ & $=\frac{1}{4\cos\left(\frac{\pi}{2v+2}\right)}$\\ & & &$=\frac{1}{4} + \frac{1}{32}\frac{\pi^2}{v^2} - \frac{1}{16}\frac{\pi^2}{v^3} + O\left(v^{-4}\right)$\\ \hline (III) & $=2$ & $=1$ & $=\frac{1}{4\cos\left(\frac{\pi}{4v+6}\right)\cos\left(\frac{3\pi}{4v+6}\right)} $\\ & & &$=\frac{1}{4} + \frac{5}{64}\frac{\pi^2}{v^2} - \frac{15}{64}\frac{\pi^2}{v^3} + O\left(v^{-4}\right)$\\ \hline (XIII) & \multicolumn{2}{|c|}{$ab=a+b$} & $=\frac{\sqrt{a-1}}{2a\cos\left(\frac{\pi}{2v+2}\right)}$\\ & \multicolumn{2}{|c|}{} & $=\frac{\sqrt{a-1}}{2a}\left(1 + \frac{\pi^2}{8v^2} - \frac{\pi^2}{4v^3} + O\left(v^{-4}\right) \right)$\\ \hline \hline \end{tabular} \end{center} \caption{The exact value and asymptotic behaviour of the dominant singularity when $a,b \in \{1,2\}$ and along $ab=a+b$. Note that in each case $z_c$ decreases with increasing $v$. } \label{tab:zcexact} \end{table} \begin{table}[h!]
\begin{center} \begin{tabular}{|| c | c | c | l ||} \hline \hline Case: & $a$ & $b$ & Dominant Singularity ($z_c$)\\ \hline \hline (IV) &\multicolumn{2}{|c|}{$a=b<2$} & $=\frac{1}{4} + \frac{5}{32}\frac{\pi^2}{v^2} + \frac{5}{8}\frac{\pi^2}{v^3(a-2)} + O\left(v^{-4}\right)$\\ \hline (VI) & $<2$ & $<2$ & $=\frac{1}{4} + \frac{5}{32}\frac{\pi^2}{v^2} + \frac{5}{16}\frac{\pi^2(a+b-4)}{v^3(a-2)(b-2)} + O\left(v^{-4}\right)$\\ \hline (IX) & $<2$ & $=1$ & $=\frac{1}{4} + \frac{5}{32}\frac{\pi^2}{v^2} - \frac{5}{16}\frac{\pi^2(a-3)}{v^3(a-2)} + O\left(v^{-4}\right)$\\ \hline (XI) & $=2$ & $<2$ & $=\frac{1}{4} + \frac{5}{64}\frac{\pi^2}{v^2} - \frac{5}{64}\frac{\pi^2(b-4)}{v^3(b-2)} + O\left(v^{-4}\right)$\\ \hline \hline \end{tabular} \end{center} \caption{The asymptotic behaviour of the dominant singularity when $a,b \leq 2$. Again note that in each case, $z_c$ is a decreasing function of $v$ and that $z_c \to \frac{1}{4}$ as $v \to \infty$.} \label{tab:zcsmall} \end{table} \begin{table}[h!] \begin{center} \begin{tabular}{|| c | c | c | l||} \hline \hline Case: & $a$ & $b$ & Dominant Singularity ($z_c$)\\ \hline \hline (V) & \multicolumn{2}{|c|}{$a=b>2$} & $=\frac{a-1}{a^2} + \frac{(a-2)^2}{a^2(a-1)(a-1)^v} + O\left(v(a-1)^{-2v}\right)$\\ \hline (VII) & $>2$ & $>2$ & $=\frac{\sqrt{a-1}\sqrt{b-1}}{ab}$ \\ & & & $+ \frac{(a-2)^2 (ab-a-b)\sqrt{b-1}}{2ab(b-a) \sqrt{a-1}(a-1)^{2v+2}} + \frac{(b-2)^2 (ab-a-b)\sqrt{a-1}}{2ab(a-b) \sqrt{b-1}(b-1)^{2v+2}} $ \\ & & & $+ O\left(a^{-4v}\right) + O\left(b^{-4v}\right)$\\ \hline (VIII) & $>2$ & $<2$ & $= \frac{\sqrt{a-1}}{2a} \left[1 + \frac{1}{8}\frac{\pi^2}{v^2}+\frac{1}{4}\frac{\pi^2(a+b-4)}{(a-2)(b-2)v^3} +O\left(v^{-4}\right) \right]$\\ \hline (X) & $>2$ & $=1$ & $= \frac{\sqrt{a-1}}{2a} \left[1 + \frac{1}{8}\frac{\pi^2}{v^2}-\frac{1}{4}\frac{\pi^2(a-3)}{(a-2)v^3} +O\left(v^{-4}\right) \right]$\\ \hline (XII) & $>2$ & $=2$ & $= \frac{\sqrt{a-1}}{2a} \left[1 + \frac{1}{32}\frac{\pi^2}{v^2}-\frac{1}{32}\frac{\pi^2(a-4)}{(a-2)v^3}
+O\left(v^{-4}\right) \right]$\\ \hline \hline \end{tabular} \end{center} \caption{The asymptotic behaviour of the dominant singularity when at least one of $a,b > 2$. Note that $z_c$ decreases with increasing $v$ in all cases.} \label{tab:zcbig} \end{table} \begin{table} \begin{center} \begin{tabular}{||c|c|l||} \hline \hline $a$ & $z_c$ & Asymptotic expansion \\ \hline\hline $1$ & $\frac{1}{2\cos\left(\frac{\pi}{2v+2} \right)} $ & $\sim \frac{1}{2}+\frac{\pi^2}{16v^2}-\frac{\pi^2}{8v^3} + O\left(v^{-4}\right)$\\ \hline $(1,2)$ & $\circ$ & $\sim \frac{1}{2}+\frac{\pi^2}{16v^2} - \frac{\pi^2}{8(2-a)v^3}+ O\left(v^{-4}\right)$\\ \hline $2$ & $ \frac{1}{2 \cos\left( \frac{\pi}{4v+2} \right)}$ & $ \sim \frac{1}{2} + \frac{\pi^2}{64v^2}-\frac{\pi^2}{64v^3} + O\left(v^{-4}\right)$ \\ \hline $(2,\infty)$ & $\circ$ & $\sim \frac{\sqrt{a-1}}{a}\left( 1 + \frac{(a-2)^2}{2(a-1)^{2v+2}} \right) + O\left( (a-1)^{-4v} \right) $ \\ \hline \hline \end{tabular} \end{center} \caption{The dominant singularity when $b=1$ for the single-walk model.} \label{tab:zcsingle} \end{table} \section{Overview and discussion} \subsection{Infinite slit phase diagram} Recall that in the single walk case, discussed in the introduction, the order in which the limits of polymer length $n$ and slit width $w$ are taken to infinity matters; it was shown in \cite{brak2005a-:a} that \begin{equation} \kappa^{single}_{half-plane}(a) \neq \kappa^{single}_{inf-slit}(a,b). \end{equation} In fact the phase diagram for the single walk in the infinite slit, given in Figure~\ref{phase-force-diagram-single}(left), depends on both $a$ and $b$, whereas the half-plane limit depends only on $a$. This can be understood by observing that a finite Dyck path must visit the bottom wall, as it is fixed at both ends there; once the width is sent to infinity, any finite Dyck path only feels the bottom wall, while if the length of the Dyck path is first sent to infinity, the walk will ``see'' both walls for any finite width.
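The exact closed forms in Table~\ref{tab:zcexact} can also be cross-checked against their truncated asymptotic expansions numerically; a minimal Python sketch for Case (II), where the error of the truncation should decay like $v^{-4}$:

```python
import math

def zc_case2(v):
    """Exact dominant singularity for Case (II), a = b = 2."""
    return 1.0 / (4.0 * math.cos(math.pi / (2*v + 2)))

def zc_case2_asym(v):
    """Truncated asymptotic expansion from Table (tab:zcexact)."""
    return 0.25 + math.pi**2 / (32 * v**2) - math.pi**2 / (16 * v**3)

# The error of the truncated expansion should shrink like v^{-4}:
e1 = abs(zc_case2(50) - zc_case2_asym(50))
e2 = abs(zc_case2(100) - zc_case2_asym(100))
assert e2 < e1 / 8   # halving 1/v cuts the error by roughly 2^4 = 16
```

The same comparison works verbatim for Cases (I), (III) and (XIII).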
From the calculations in the previous section we see that for the two walk model the infinite slit free energy is \begin{align} \kappa_{inf-slit}(a,b) &= \begin{cases} \log\left(4\right) & \mbox{ if } a,b \leq 2 \\ \log\left(\frac{2a}{\sqrt{a-1}}\right) & \mbox{ if } a > 2 \mbox{ and } b< 2 \\ \log\left(\frac{2b}{\sqrt{b-1}}\right) & \mbox{ if } a < 2 \mbox{ and } b > 2\\ \log\left(\frac{ab}{\sqrt{a-1}\sqrt{b-1}}\right) & \mbox{ if } a \geq 2 \mbox{ and } b \geq 2 . \end{cases} \label{inf-slit-free-energy} \end{align} Hence the phase diagram can be illustrated as in Figure~\ref{phase-diagram}. \begin{figure}[ht!] \begin{center} \includegraphics[width=8cm]{phase_diagram2.pdf} \caption{ Phase diagram of the infinite strip for the two walk model analysed in this paper. There are four phases: a desorbed phase, a phase where the bottom walk is adsorbed onto the bottom wall, a phase where the top walk is adsorbed onto the top wall, and a phase where both walks are adsorbed onto their respective walls.} \label{phase-diagram} \end{center} \end{figure} We observe that \begin{equation} \kappa_{inf-slit}(a,b) = \kappa^{single}_{half-plane}(a) +\kappa^{single}_{half-plane}(b) \label{inf-strip-free-energy-split} \end{equation} and recalling equation~\Ref{double-half-plane-free-energy} we see that \begin{equation} \kappa_{inf-slit}(a,b) = \kappa_{double-half-plane}(a,b) . \label{inf-strip-half-plane-equality} \end{equation} So the free energy for this two walk model does \emph{not} depend on the order of the limits! This conclusion depends on the particular model we have chosen where both walks start on different walls. Had we considered a model where both walks started on the bottom wall this observation would be different; by taking the width to infinity first, neither walk would interact with the top wall and the free energy of this system would be that of two walks in a single half-plane. 
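The decomposition in equation~\Ref{inf-strip-free-energy-split} can be verified numerically across the parameter space, including the phase boundaries; a minimal Python sketch (the piecewise form of $\kappa^{single}_{half-plane}$ used below is the one implied by equations~\Ref{inf-slit-free-energy} and~\Ref{inf-strip-free-energy-split}):

```python
import math

def kappa_half(a):
    # Single-walk half-plane free energy: log 2 in the desorbed
    # phase (a <= 2), log(a/sqrt(a-1)) in the adsorbed phase.
    return math.log(2) if a <= 2 else math.log(a / math.sqrt(a - 1))

def kappa_inf_slit(a, b):
    # Equation (inf-slit-free-energy), case by case.
    if a <= 2 and b <= 2:
        return math.log(4)
    if a > 2 and b < 2:
        return math.log(2 * a / math.sqrt(a - 1))
    if a < 2 and b > 2:
        return math.log(2 * b / math.sqrt(b - 1))
    return math.log(a * b / (math.sqrt(a - 1) * math.sqrt(b - 1)))

# kappa_inf_slit(a, b) = kappa_half(a) + kappa_half(b) on a grid,
# including the phase boundaries a = 2 and b = 2
for a in (1.0, 1.5, 2.0, 2.5, 4.0):
    for b in (1.0, 1.5, 2.0, 2.5, 4.0):
        assert abs(kappa_inf_slit(a, b) - kappa_half(a) - kappa_half(b)) < 1e-12
```

In particular, the four branches of equation~\Ref{inf-slit-free-energy} join continuously across the boundaries $a=2$ and $b=2$.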
On the other hand, the infinite slit free energy does not depend on the end points of the polymer because the length is taken to infinity first. \subsection{Force between the walls} Using the asymptotic expressions for $\kappa$ found above we obtain the asymptotics for the force. We have \begin{itemize} \item For $a,b<2$ \begin{equation} \mathcal{F}\sim\frac{5\pi^2}{w^3} ; \end{equation} \item For $a<2, b=2$ \begin{equation} \mathcal{F}\sim\frac{5\pi^2}{2 w^3} ; \end{equation} \item For $a<2,b>2$ \begin{equation} \mathcal{F}\sim\frac{\pi^2}{w^3} ; \end{equation} \item For $a>2,b<2$ \begin{equation} \mathcal{F}\sim\frac{\pi^2}{w^3}; \end{equation} \item For $a=2, b<2$ \begin{equation} \mathcal{F}\sim\frac{5\pi^2}{2 w^3} ; \end{equation} \item For $a=2, b=2$ \begin{equation} \mathcal{F}\sim\frac{\pi^2}{w^3} ; \end{equation} \item For $a>2, b=2$ \begin{equation} \mathcal{F}\sim \frac{\pi^2}{4 w^3} ; \end{equation} \item For $a=2, b>2$ \begin{equation} \mathcal{F}\sim\frac{\pi^2}{4 w^3} ; \end{equation} \item For $a,b>2$ with $a>b$ \begin{equation} \mathcal{F} \sim \frac{(b-2)^2(ab-a-b) \log (b-1)}{2 (a-b)(b-1)^3} \left(\frac{1}{b-1}\right)^{w}; \end{equation} \item For $a,b>2$ with $a<b$ \begin{equation} \mathcal{F} \sim \frac{(a-2)^2(ab-a-b) \log (a-1)}{2 (b-a)(a-1)^3} \left(\frac{1}{a-1}\right)^{w}; \end{equation} \item For $b=a>2$ \begin{equation} \mathcal{F} \sim \frac{(a-2)^2 \log(a-1) }{2(a-1)^2} \left(\frac{1}{a-1}\right)^{w/2}. \end{equation} \end{itemize} For any $a,b$ the force is positive and so is repulsive. This is in contrast to the single walk case where there is a region of attractive forces. The regions of the plane which gave different asymptotic expressions for $\kappa$ and hence different phases for the infinite slit clearly also give different force behaviours. There is also a special subtle change of the magnitude of the force when $a=b$ for $a,b>2$. 
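The leading force behaviour can be checked against the exact Case (II) singularity by finite differences; a minimal Python sketch, in which we identify the slit width as $w=2v$ (this is the identification that makes the listed prefactors consistent with the expansions of the previous section):

```python
import math

# Free energy for a = b = 2, from the exact Case (II) singularity
# z_c = 1/(4 cos(pi/(2v+2))), rewritten in terms of the width w = 2v.
def kappa(w):
    return -math.log(1.0 / (4.0 * math.cos(math.pi / (w + 2))))

w = 500
force = (kappa(w + 1) - kappa(w - 1)) / 2.0   # central difference d(kappa)/dw

assert force > 0                              # the force is repulsive
# and its magnitude matches the listed leading form pi^2 / w^3
assert abs(force * w**3 / math.pi**2 - 1.0) < 0.05
```

Repeating this with the Case (I) closed form reproduces the $5\pi^2/w^3$ prefactor listed above for $a,b<2$.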
On the other hand, the special super-integrable curve $a+b=ab$ does not display special behaviour except when $a=b=2$; the force is instead governed by whichever walk is less strongly bound to its respective surface. The difference between the single and two walk models can be understood as follows. When there are two walks they effectively shield each other from the interactions with the other wall, and it is when a single walk is sufficiently attracted to the two sides of the slit simultaneously that an attractive force eventuates. There are however changes in the magnitude and the range of the repulsive force arising from whether the walks are adsorbed or desorbed. When either or both walks are desorbed there is a long range force arising from the entropy of the walk(s), while if both are adsorbed the force is short-ranged, as the excursions of either walk from the walls are relatively short-ranged. The force diagram is given in Figure~\ref{force-diagram}. \begin{figure}[ht!] \begin{center} \includegraphics[width=10cm]{force_diagram.pdf} \caption{ A diagram of the regions of different types of effective force between the walls of a slit. Short range behaviour refers to exponential decay of the force with slit width while long range refers to a power law decay. On full lines there is a change from long to short range force decay. On the dashed lines there is a singular change of behaviour of the magnitude of the force. } \label{force-diagram} \end{center} \end{figure} \subsection{Conclusion} A model of two polymers confined to a long macroscopic slit with sticky walls has been analysed as a directed walk system. Our results show distinct differences from the earlier single polymer results. In particular, we see differences from the single polymer system in both the phase diagram, and the sign and strength of the entropic force exerted by the polymers on the walls of the slit.
The phase diagram contains four phases, whereas the single walk model has only three. Moreover, this phase diagram is independent of the order in which the limits of large width and large length are taken. This is also in contrast with the single walk system. The force induced by the polymers remains repulsive in all parts of the phase diagram, even though the range of the force does depend on whether the walks are adsorbed onto the walls. This again is in contrast with the single polymer system, where an attractive regime is observed. In our two polymer system each polymer is effectively shielded from the opposite wall by the other polymer. This gives rise to the difference between the results seen here and those of the single polymer system. While we have a model that goes beyond the single polymer results, to obtain a situation which might replicate the non-directed self-avoiding polygon results of Alvarez {\it et al.\ }\cite{alvarez2008self} one will need to allow both walks to interact with both walls. This will be combinatorially much more complicated to analyse. \section*{Acknowledgements} Financial support from the Australian Research Council via its support for the Centre of Excellence for Mathematics and Statistics of Complex Systems and through its Discovery Program is gratefully acknowledged by the authors. Financial support from the Natural Sciences and Engineering Research Council of Canada (NSERC) through its Discovery Program is gratefully acknowledged by the authors. ALO thanks the University of British Columbia, and in particular Prof Mark Mac Lean, for hospitality.
\section{Introduction} There is compelling evidence of the ubiquitous presence of massive black holes (BHs) at the centres of nearby galaxies (Kormendy \& Richstone 1995; Kormendy \& Ho 2013). These BHs are mainly in low-luminous states (Ho 2008) or in quiescence, but sometimes they can enter highly luminous phases (AGN) that are due to sudden influxes of the surrounding gas. These influxes can be provided by the tidal disruption (TD) of stars (Rees 1988). TDs occur when stellar dynamical encounters scatter a star (of mass $M_{\rm *}$ and radius $R_{\rm *}$) onto a low angular momentum orbit about the BH (of mass $M_{\rm BH}$), subjecting it to the extreme BH tidal field (Alexander 2012). Specifically, if the star comes close to the so-called BH tidal radius \begin{equation} r_{\rm t}\sim R_{\rm *}\Biggl(\frac{M_{\rm BH}}{M_{\rm *}} \Biggr)^{1/3}\sim 10^2 \rm R_{\rm \odot} \Biggl(\frac{\it R_{\rm *}}{1 \rm R_{\rm \odot}}\Biggr)\Biggl(\frac{\it M_{\rm BH}}{10^6 \rm M_{\rm \odot}}\Biggr)^{1/3}\Biggl(\frac{1 \rm M_{\rm \odot}}{\it M_{\rm *}}\Biggr)^{1/3} \label{rtidal} \end{equation} (Hills 1975; Frank \& Rees 1976), the star will be totally or partially disrupted, depositing a fraction of its mass onto the BH through an accretion disc and powering a bright flare (e.g. Rees 1988; Phinney 1989; Evans \& Kochanek 1989; Lodato et al. 2009; Strubbe \& Quataert 2009; Lodato \& Rossi 2011; Guillochon \& Ramirez-Ruiz 2013, 2015a; Coughlin \& Begelman 2014). For a star to be disrupted outside the event horizon of a BH, that is, in order to observe the corresponding TD accretion flare, $r_{\rm t}$ must be greater than the BH event horizon radius \begin{equation} r_{\rm s}=\frac{xGM_{\rm BH}}{c^2}\sim 4 \rm R_{\rm \odot} \it \Biggl(\frac{M_{\rm BH}}{\rm 10^6 \rm M_{\rm \odot}}\Biggr)\Biggl(\frac{x}{\rm 2}\Biggr), \label{rschwar} \end{equation} where $x$ encapsulates effects related to the BH spin (Kesden 2010). 
Hence, the non-rotating destroyer BH mass must be $M_{\rm BH}\lesssim 10^8 \rm M_{\odot}$ when solar-type stars are involved. TD accretion flares thus reveal otherwise quiescent or low-luminous BHs in a mass range complementary to that probed in AGN surveys (Vestergaard \& Osmer 2009). Regardless of whether the destruction is total or partial, most of the stars in a galaxy fated to be disrupted by the central BH are scattered onto low angular momentum orbits from about the BH sphere of influence, that is, onto nearly parabolic trajectories (Magorrian \& Tremaine 1999; Wang \& Merritt 2004). For this reason, in this work we assume the disruptive orbits to be parabolic. This assumption, together with the kick naturally imparted by the disruption itself (Manukian et al. 2013), prevents our partially disrupted stars from encountering the BH a multitude of times. In this parabolic regime, about half of the stellar debris produced by a (total or partial) stellar disruption binds to the BH, returns to pericentre on different elliptical orbits (that is, with different orbital energies; Lacy et al. 1982), circularises forming an accretion disc, and falls back onto the BH emitting a peculiar flare. The fallback rate is likely to be somewhat different from the rate of debris returning to pericentre (e.g. Cannizzo et al. 1990; Ramirez-Ruiz \& Rosswog 2009; Hayasaki et al. 2013; Coughlin \& Nixon 2015; Guillochon \& Ramirez-Ruiz 2015b; Hayasaki et al. 2015; Piran et al. 2015; Shiokawa et al. 2015; Bonnerot et al. 2016; Coughlin et al. 2016), which in turn depends on the structure of the disrupted star (e.g. Lodato et al. 2009) and the properties of the encounter (e.g. Guillochon \& Ramirez-Ruiz 2013, 2015a). In this paper we aim at computing the critical distance $r_{\rm d}$ at which a star becomes totally or partially disrupted by the BH tidal field. 
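As a sanity check, the scalings of Eqs. (\ref{rtidal}) and (\ref{rschwar}) and the quoted $M_{\rm BH}\lesssim 10^8\,\rm M_{\odot}$ limit can be reproduced numerically (an illustrative sketch; the cgs constants are standard values, not taken from the text):

```python
# Quick check of Eqs. (1)-(2) in cgs units for a solar-type star
# (M* = 1 Msun, R* = 1 Rsun) and a non-rotating BH (x = 2).
G = 6.674e-8        # cm^3 g^-1 s^-2
c = 2.998e10        # cm s^-1
Msun = 1.989e33     # g
Rsun = 6.957e10     # cm

def r_tidal_rsun(M_bh_msun):
    """Eq. (1): tidal radius in Rsun for a 1 Msun, 1 Rsun star."""
    return M_bh_msun ** (1.0 / 3.0)

def r_horizon_rsun(M_bh_msun, x=2.0):
    """Eq. (2): event-horizon radius in Rsun."""
    return x * G * M_bh_msun * Msun / c**2 / Rsun

rt = r_tidal_rsun(1e6)       # ~1e2 Rsun, as in Eq. (1)
rs = r_horizon_rsun(1e6)     # ~4 Rsun, as in Eq. (2)

# Setting r_t = r_s gives the maximum mass of a non-rotating destroyer BH:
M_max = (c**2 * Rsun / (2.0 * G * Msun)) ** 1.5   # in Msun, ~1e8
```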
We are interested in finding the critical disruption parameter \begin{equation} \beta_{\rm d}=\frac{r_{\rm t}}{r_{\rm d}}=\beta \frac{r_{\rm p}}{r_{\rm d}} \label{betad} \end{equation} for specific stellar structures that distinguishes total TDs from partial TDs, with $r_{\rm t}$ given by Eq. \ref{rtidal}, $\beta=r_{\rm t}/r_{\rm p}$ and $r_{\rm p}$ being the pericentre of the star around the BH. A partial TD is obtained for $\beta < \beta_{\rm d}$, that is, for $r_{\rm p} > r_{\rm d}$, a total TD for $\beta \geq \beta_{\rm d}$, that is, for $r_{\rm p} \leq r_{\rm d}$. The need to introduce the critical distance $r_{\rm d}$ arises because the tidal radius $r_{\rm t}$ defines where the BH tidal force overcomes the stellar self-gravity only at the stellar surface, and not everywhere within the star. This problem has been considered previously (Guillochon \& Ramirez-Ruiz 2013, 2015a; hereafter GRR). GRR evaluated the critical disruption parameter $\beta_{\rm d}$ for polytropes of index 5/3 and 4/3 (which represent low- and high-mass stars, respectively) using a series of adaptive mesh refinement (AMR) grid-based hydrodynamical simulations of tidal encounters of star and BH. In this paper, we instead present the results of simulations we performed for the same purpose with the codes \textsc{gadget2} (traditional smoothed particle hydrodynamics (SPH); Springel 2005)\footnote{http://wwwmpa.mpa-garching.mpg.de/gadget/} and \textsc{gizmo} (modern SPH and mesh-free finite mass (MFM); Hopkins 2015)\footnote{http://www.tapir.caltech.edu/$\sim$phopkins/Site/GIZMO.html}. Since these techniques all have advantages but also limits, we follow GRR in finding the critical disruption parameter $\beta_{\rm d}$ for certain stellar structures\footnote{When a more realistic stellar equation of state is used (e.g. Rosswog et al.
2009, but only for white dwarfs), the value of $\beta_{\rm d}$ may change slightly.} using an MFM, a traditional SPH, and a modern SPH code instead of an AMR grid-based code, with the goal of comparing results from different techniques. The paper is organised as follows. In Section \ref{gridVSsph} we compare AMR grid-based codes to \textsc{gizmo mfm}, traditional SPH, and modern SPH techniques. In Section \ref{loss} we discuss our method and describe how we evaluate the stellar mass loss $\Delta M$ in our simulated encounters. We show the curves of mass loss we obtain for all codes as a function of $\beta$ and polytropic index, comparing them and the corresponding $\beta_{\rm d}$ with GRR. Section \ref{conclusions} contains our summary. \section{Grid-based vs. SPH and \textsc{gizmo mfm} codes} \label{gridVSsph} Fluid hydrodynamics and interactions in astrophysics are generally treated using two different classes of numerical methods: Eulerian grid-based (e.g. Laney 1998; Leveque 1998) and Lagrangian SPH (e.g. Monaghan 1992; Price 2005; Cossins 2010; Price 2012). Basically, grid-based methods divide a domain into stationary cells traversed over time by the investigated fluid, and account for information exchange between adjacent cells with the aim of solving the fluid equations. In particular, AMR grid-based techniques (e.g. Berger \& Oliger 1984; Berger \& Colella 1989) adapt the cell number and size according to the properties of different fluid regions, thus increasing the resolution where needed (for example in high-density regions) and reducing computational effort and memory usage where lower resolution is sufficient. In contrast, SPH methods are Lagrangian by construction and model a fluid as a set of interacting fluid elements, or particles, each with its own set of fluid properties. In practice, the density of each particle is calculated by considering the neighbours within its so-called smoothing length (e.g.
Price 2005), and particle velocities and entropies or internal energies are evolved according to a pressure-entropy or energy formalism (modern SPH) or a density-entropy or energy formalism (traditional SPH). Essentially, modern SPH techniques evaluate the pressure and the local density of each particle by considering the neighbours within the particle smoothing length and use pressure to define the equations of motion (Hopkins 2013). Traditional SPH techniques instead directly estimate the pressure of each particle from its local density in the same way as for the other particle properties, and use local density to define the equations of motion. In SPH methods the particle density mirrors the density of different regions of the fluid. Grid-based and SPH techniques both have advantages, but also limits. At sufficiently high velocities, grid-based methods are non-invariant under Galilean transformations, which means that different reference frames are associated with different levels of numerical diffusion among adjacent cells, and simulation results may slightly depend on the choice of the reference system (e.g. Wadsley et al. 2008). Moreover, grid-based methods violate angular momentum conservation because a fluid moving across grid cells produces artificial diffusion; this diffusion can lead to unphysical forces, which couple with the fixed structure of the grid to tie the fluid motion to specific directions (e.g. Peery \& Imlay 1988; Hahn et al. 2010). Finally, in grid-based methods hydrodynamics and gravity descriptions are mismatched, in the sense that hydrodynamics is evaluated by integrating quantities over each cell, while gravity is computed at the centre of each cell and then interpolated at the desired position (as for collisionless particles). This can produce spurious instabilities (e.g. Truelove et al. 1997).
SPH methods first need an artificial viscosity term added to the particle equation of motion in order to resolve shocks (Balsara 1989; Cullen \& Dehnen 2010). Second, traditional SPH codes are associated with a surface tension between fluid regions of highly different densities, which limits their mixing (e.g. Agertz et al. 2007). Great effort has been made to improve SPH methods, leading to the so-called modern SPH techniques (Hopkins 2013). The smoothed definition of pressures together with densities, the more sophisticated viscosity switches, the higher order smoothing kernels (quintic spline instead of cubic spline; see below), and the inclusion of artificial conduction have solved these problems, at least partially. However, the higher order kernels typically lead to excessive diffusion. Despite all these improvements, some intrinsic limits of this technique still remain, such as the ideally infinite number of neighbours required to capture small-amplitude instabilities. \begin{figure*} \centering \vspace{0.2cm} \includegraphics[width=19.cm, angle=0]{D_analytical_codes.pdf} \\ \caption{Left panel: Analytic solutions for the $\gamma=4/3, 5/3$ polytropic radial density profile from the Lane-Emden equation. Right panel: Plot of the relaxed radial stellar density profile for each simulation technique for both polytropic indices ($\gamma=4/3, 5/3$ from the highest to the lowest central density). Units are $\rm M_{\rm \odot}/\rm R_{\rm \odot}^3$ for $\rho$ and $\rm R_{\rm \odot}$ for $r$. \label{densities}} \end{figure*} \begin{figure*} \centering \vspace{0.2cm} \includegraphics[width=11.cm, angle=0]{snapshots_composite_53.pdf} \\ \caption{Snapshots of the SPH particle density (in logarithmic scale) at $t\sim 8.5\times 10^4\rm s$ after pericentre passage for our \textsc{gadget2} simulations, in the case of a star with polytropic index $5/3$. White and black correspond to the highest and lowest densities, respectively.
Each snapshot is labelled with the corresponding value of $\beta$, with the range where the critical disruption parameter $\beta_{\rm d}$ lies highlighted in yellow. \label{53_stars}} \end{figure*} \begin{figure*} \centering \vspace{0.2cm} \includegraphics[width=11.cm, angle=0]{snapshots_composite_43.pdf} \\ \caption{Same as Fig. \ref{53_stars} for a polytropic star of index $4/3$. \label{43_stars}} \end{figure*} Recently, a completely new Lagrangian method that aims to simultaneously capture the advantages of both SPH and grid-based techniques has been implemented in the public code \textsc{gizmo} (Hopkins 2015). In \textsc{gizmo}, the volume is discretised among a discrete set of tracers (particles) through a partition scheme based on a smoothing kernel (in a way that is similar to SPH codes). However, unlike SPH codes, these particles do not sample fluid elements, but only represent the centres of unstructured cells that are free to move with the fluid, as in moving mesh codes (Springel 2010). Hydrodynamics equations are then solved at the cell boundaries, defined by an effective face. This guarantees an exact conservation of energy and linear and angular momentum as well as an accurate description of shocks without needing an artificial viscosity term. The density associated with each particle or cell is obtained by dividing the mass of the cell by its effective volume. In this work, we use the mesh-free finite mass method of \textsc{gizmo}, where particle mass is preserved, making the code perfectly Lagrangian. For this method, we use the cubic spline kernel with a desired number of neighbours equal to 32 for the partition. \section{SPH and \textsc{gizmo mfm} simulations and stellar mass losses} \label{loss} We modelled stars as polytropes of index 5/3 (low-mass stars) or 4/3 (high-mass stars), with masses and radii of $1 \rm M_{\rm \odot}$ and $1 \rm R_{\rm \odot}$, sampling each of them with $N_{\rm part}\sim10^5$ particles.
This is done by placing the particles through a close sphere packing and then stretching their radial positions to reach the required polytropic density profile, thus limiting the statistical noise associated with a random placement of the particles. $N_{\rm part}$ sets the gravitational softening length of each particle in our codes to $\epsilon \sim 0.1R_{\rm *}/(N_{\rm part})^{1/3}\sim 0.002 \rm R_{\rm \odot}$, preventing particle overlapping in evaluating gravitational interactions. We also tried test runs at higher resolution, where we modelled stars with $\sim10^6$ particles, but did not find significant differences in the stellar mass loss $\Delta M$ with respect to simulations with lower resolution. We evolved stars in isolation for several dynamical times in order to ensure their stability. The right panel of Fig. \ref{densities} shows the relaxed stellar density profile, that is, the local density of the particles $\rho (r)$ (in $\rm M_{\rm \odot}/\rm R_{\rm \odot}^3$) versus their radial distance from the stellar centre of mass $r$ (in $\rm R_{\rm \odot})$, for each simulation technique for the two polytropic indices ($\gamma=4/3$, and $5/3$ from the highest to the lowest central density), compared to the analytic solutions from the Lane-Emden equation (left panel). The kernel function that drives the evaluation of each particle local density (e.g. Price 2005) and the volume partition (Hopkins 2015) is chosen to be a cubic (in \textsc{gadget2} and \textsc{gizmo mfm}) or quintic (in \textsc{gizmo} modern SPH) spline, and the number of neighbours of each particle and domain point within its smoothing length/kernel size is fixed to 32 and 128, respectively (Monaghan \& Lattanzio 1985; Hongbin \& Xin 2005; Dehnen \& Aly 2012). 
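The analytic profiles in the left panel of Fig. \ref{densities} are solutions of the Lane-Emden equation. A minimal RK4 integrator (our own illustrative sketch, not the machinery of the production codes) recovers the dimensionless stellar radii $\xi_1\simeq3.654$ for $n=3/2$ (i.e. $\gamma=5/3$) and $\xi_1\simeq6.897$ for $n=3$ (i.e. $\gamma=4/3$), along with the softening-length estimate quoted above:

```python
import numpy as np

def lane_emden_xi1(n, h=1e-3):
    """Integrate theta'' + (2/xi) theta' + theta^n = 0 outward and return
    the first zero xi_1 of theta (the dimensionless stellar surface)."""
    def rhs(x, y):
        th, dth = y
        return np.array([dth, -2.0 * dth / x - max(th, 0.0) ** n])
    xi = h
    y = np.array([1.0 - h**2 / 6.0, -h / 3.0])  # series expansion near xi = 0
    while y[0] > 0.0:
        k1 = rhs(xi, y)
        k2 = rhs(xi + h / 2, y + h / 2 * k1)
        k3 = rhs(xi + h / 2, y + h / 2 * k2)
        k4 = rhs(xi + h, y + h * k3)
        y = y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        xi += h
    return xi

xi1_53 = lane_emden_xi1(1.5)  # gamma = 5/3  <->  n = 3/2, xi_1 ~ 3.654
xi1_43 = lane_emden_xi1(3.0)  # gamma = 4/3  <->  n = 3,   xi_1 ~ 6.897

# gravitational softening quoted in the text: ~0.002 Rsun for N_part ~ 1e5
eps = 0.1 * 1.0 / 1e5 ** (1.0 / 3.0)
```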
Gravitational forces are computed through the Springel relative criterion (Springel 2005) instead of the standard Barnes-Hut criterion (Barnes \& Hut 1986) because the Springel criterion shows better accuracy at the same computational cost. Since the relative criterion is based on the particle acceleration, which is not available at the beginning of each simulation, the Barnes-Hut criterion is adopted at the first timestep to estimate an acceleration value, and then the iteration is repeated using the Springel criterion in order to remain consistent with the subsequent iterations. In our simulations we use quite a large opening angle value (0.7), but the accuracy parameter for the relative criterion is set to 0.0005, which is very small compared to the suggested standard value (0.0025). We performed test runs setting the opening angle to 0.1 and increasing the accuracy parameter to 0.0025, but found no differences in the stellar density and temperature profiles and in $\Delta M$. We implemented the BH force through a Newtonian analytical potential, with $M_{\rm BH}=10^6 \rm M_{\rm \odot}$, and in each of the traditional SPH, modern SPH and \textsc{gizmo mfm} simulations we placed one star on a parabolic orbit with a given $r_{\rm p}$, that is, $\beta$, around the BH. The star was initially placed at a distance five times greater than $r_{\rm t}$ to avoid spurious tidal distortions (we also tested larger initial distances, but found no significant differences in the outcomes). Stellar rotation is not expected to significantly affect our results in the range of $\beta$ considered in this paper (Stone et al. 2013). Figures \ref{53_stars} and \ref{43_stars} show snapshots from our traditional SPH simulations recorded shortly after pericentre passage. The lower limit of the range where $\beta_{\rm d}$ lies (yellow) allows the core recollapse to occur for both polytropic indices (Guillochon \& Ramirez-Ruiz 2013, 2015a). 
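The parabolic orbital set-up described above (zero orbital energy, start at five tidal radii) can be sketched in illustrative code units with $G=M_{\rm BH}=r_{\rm p}=1$; the leapfrog check below is our own toy integrator, not the scheme of the production codes:

```python
import numpy as np

# Parabolic initial conditions: specific orbital energy E = 0.
G, M = 1.0, 1.0
r_p = 1.0                           # pericentre (illustrative units)
r0 = 10.0                           # 5 r_t, e.g. with beta = r_t / r_p = 2

v0 = np.sqrt(2 * G * M / r0)        # vis-viva speed for E = 0
h = np.sqrt(2 * G * M * r_p)        # specific angular momentum of a parabola
v_tan = h / r0
v_rad = -np.sqrt(v0**2 - v_tan**2)  # infalling

pos = np.array([r0, 0.0])
vel = np.array([v_rad, v_tan])

# short leapfrog (kick-drift-kick) integration past pericentre
dt, r_min = 1e-3, r0
acc = -G * M * pos / np.linalg.norm(pos)**3
for _ in range(25000):
    vel += 0.5 * dt * acc
    pos += dt * vel
    acc = -G * M * pos / np.linalg.norm(pos)**3
    vel += 0.5 * dt * acc
    r_min = min(r_min, np.linalg.norm(pos))
```

The minimum separation reached by the integrated orbit matches the intended pericentre $r_{\rm p}$ to better than one per cent.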
Modern SPH and \textsc{gizmo mfm} simulations give almost the same results. \begin{figure*} \centering \includegraphics[width=13.cm, angle=0]{curve_53.pdf} \\ \caption{Stellar mass loss (in units of $\Delta M/M_{\rm *}$) as a function of $\beta$ for a star with polytropic index 5/3. $\Delta M$ is evaluated at $t\sim 10^6 \rm s$ after the disruption. Blue, black, green, and red points are associated with \textsc{gizmo mfm}, \textsc{gadget2}, \textsc{gizmo} modern SPH, and GRR simulations, respectively. Uncertainties on $\Delta M/M_{\rm *}$ from SPH and \textsc{gizmo mfm} simulations are inferred as reported in the main text. Points at low values of $\beta$ have been slightly horizontally displaced to give a better view of the error bars. The value of the critical disruption parameter $\beta_{\rm d}$ (dashed lines) slightly depends on the adopted simulation method. \label{53_curve}} \end{figure*} \begin{figure*} \centering \includegraphics[width=13.05cm, angle=0]{curve_43.pdf} \\ \caption{Same as Fig. \ref{53_curve} for a polytropic star of index 4/3. The values of $\beta_{\rm d}$ obtained from our simulations visibly differ from those of GRR. \label{43_curve}} \end{figure*} We aim to assess the stellar mass loss $\Delta M$ in each simulation. We recall that $\Delta M=M_{\rm *}$ corresponds to total disruption. \begin{table*} \centering \caption{Fitting coefficients of Eq. \ref{ff1} and $\beta_{\rm d}$ for each of the point sets in Figs. \ref{53_curve} and \ref{43_curve}. 
\label{fitting_table}} \begin{tabular}{c|c|c|c|c|c|c|c} Simulations & Polytropic index & \it A & \it B & \it C & \it D & \it E & $\beta_{\rm d}$ \\ & & & & & & & \\ \hline & & & & & & & \\ GRR & 5/3 & 3.1647 & -6.3777 & 3.1797 & -3.4137 & 2.4616 & 0.90 \\ \textsc{gizmo mfm} & 5/3 & 5.4722 & -11.764 & 6.3204 & -3.8172 & 2.8919 & 0.91 \\ \textsc{gadget2} & 5/3 & 8.9696 & -19.111 & 10.180 & -4.2964 & 3.3231 & 0.93 \\ \textsc{gizmo} modern SPH & 5/3 & 8.7074 & -18.358 & 9.6760 &-4.5340 & 3.5914 & 0.94 \\ GRR & 4/3 & 12.996 & -31.149 & 12.865 & -5.3232 & 6.4262 & 1.85 \\ \textsc{gizmo mfm} & 4/3 & -13.964 & 11.217 & -2.1168 & 0.3930 & 0.5475 & 2.00\\ \textsc{gadget2} & 4/3 & -15.378 &-5.2385 & 6.3635 & -1.5122 & 5.7378 & 2.02 \\ \textsc{gizmo} modern SPH & 4/3 & -10.394 & -0.2160 & 2.6421 & -0.8804 & 2.9215 & 2.02 \\ \end{tabular} \end{table*} We describe the method we adopted to evaluate $\Delta M$ from each of our simulated star-BH tidal encounters, following GRR. In a specific simulation at a specific time, the position and velocity components of the stellar centre of mass around the BH, $x_{\rm CM}$, $y_{\rm CM}$, $z_{\rm CM}$, $v_{\rm x_{\rm CM}}$, $v_{\rm y_{\rm CM}}$, and $v_{\rm z_{\rm CM}}$ are defined through an iterative approach. As a first step, we choose them to coincide with the position and velocity components of the particle with the highest local density, $x_{\rm peak}$, $y_{\rm peak}$, $z_{\rm peak}$, $v_{\rm x_{\rm peak}}$, $v_{\rm y_{\rm peak}}$, $v_{\rm z_{\rm peak}}$. 
The specific binding energy to the star of the $i$-$\rm th$ particle then reads \begin{equation} E_{\rm *_{\it i}}=\frac{1}{2}\Bigl[(v_{\rm x_{\it i}}-v_{\rm x_{\rm peak}})^2+(v_{\rm y_{\it i}}-v_{\rm y_{\rm peak}})^2+(v_{\rm z_{\it i}}-v_{\rm z_{\rm peak}})^2\Bigr]+\phi_{\rm *_{\it i}}, \label{Estar} \end{equation} where $v_{\rm x_{\it i}}$, $v_{\rm y_{\it i}}$, and $v_{\rm z_{\it i}}$ are the velocity components of the $i$-$\rm th$ particle and $\phi_{\rm *_{\it i}}$ the stellar gravitational potential acting on the $i$-$\rm th$ particle (directly computed by the simulation code). By considering only particles with $E_{\rm *_{\it i}}< 0$, we re-define the position and velocity components of the star centre of mass and re-evaluate Eq. \ref{Estar} by setting them in place of the components labelled with the subscript "peak". The process is re-iterated until $v_{\rm CM}$ converges, that is, until it changes by less than $10^{-5} \rm R_{\rm \odot} \rm yr^{-1}$ between successive iterations. Particles with $E_{\rm *_{\it i}}> 0$ are unbound from the star. The stellar mass loss at the considered time can be obtained by multiplying the mass of a single particle, $m=M_{\rm *}/\it N_{\rm part}$, by the number of particles bound to the star, $N_{\rm Bound}$, and subtracting the result from $M_{\rm *}$. $\Delta M$ is obtained at $t\sim 10^6 \rm s$ ($\sim650$ stellar dynamical times) after the disruption. \begin{table} \centering \caption{$\beta_{\rm d}$ value as a function of polytropic index and adopted simulation method.
\label{beta_table}} \begin{tabular}{c|c|c} Simulation method & Polytropic index & $\beta_{\rm d}$ \\ & & \\ \hline & & \\ AMR grid-based & 5/3 & 0.90 \\ MFM & 5/3 & 0.91 \\ Traditional SPH & 5/3 & 0.93 \\ Modern SPH & 5/3 & 0.94 \\ AMR grid-based & 4/3 & 1.85 \\ MFM & 4/3 & 2.00 \\ Traditional SPH & 4/3 & 2.02 \\ Modern SPH & 4/3 & 2.02 \\ \end{tabular} \end{table} Figures \ref{53_curve} and \ref{43_curve} show the stellar mass loss in units of $\Delta M/M_{\rm *}$ as a function of $\beta$ for polytropes of index $5/3$ and $4/3$, respectively, inferred from our simulations with \textsc{gizmo mfm} (blue points), \textsc{gadget2} (black points) and \textsc{gizmo} modern SPH (green points), and the same obtained from the GRR simulations (red points). We estimate the uncertainty on our inferred $\Delta M/M_{\rm *}$ as \begin{equation} \sigma_{\frac{\rm \Delta M}{M_{\rm *}}}=\sqrt{\sigma_{\rm Poisson}^2+\sigma_{\rm E_{\rm *_{\rm i}}}^2+\sigma_{\rm AD}^2}=\sqrt{\Biggl(\frac{\sqrt{N_{\rm Bound}}}{N_{\rm part}}\Biggr)^2+0.01^2+\sigma_{\rm AD}^2}, \end{equation} where $\sigma_{\rm AD}$ is the average deviation from 1 of $\Delta M/M_{\rm *}$ for total disruptions in each of our point sets and $\sigma_{\rm E_{\rm *_{\rm i}}}=0.01$, as the values of $|E_{\rm *_{\rm i}}|$ for about $10^3$ of the $10^5$ particles are lower than 0.01 times the average value $\overline{|E_{\rm *}|}$, that is, we are not able to determine exactly whether these $10^3$ particles are bound to or unbound from the star. We fit each of our point sets with a function introduced in GRR \begin{equation*} f(\beta)=\exp{\Biggl[\frac{A+B\beta+C\beta^2}{1-D\beta+E\beta^2}\Biggr]} , \; \; \; \; \; \beta<\beta_{\rm d} \label{ff} \end{equation*} \begin{equation} f(\beta)=1, \; \; \; \; \; \beta \geq \beta_{\rm d}. \label{ff1} \end{equation} The values of the coefficients $A, B, C, D$, and $E$ and of $\beta_{\rm d}$ are given in Table \ref{fitting_table}.
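The iterative bound-mass procedure described in this section can be sketched on synthetic data (a toy self-bound clump plus fast unbound outliers; the direct-sum potential here stands in for the $\phi_{\rm *_{\it i}}$ supplied by the simulation codes, and all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
G, m = 1.0, 1.0                      # toy units; equal-mass particles

# toy "star": 200 bound particles, plus 20 fast, distant (unbound) ones
pos = np.vstack([rng.normal(0.0, 1.0, (200, 3)),
                 rng.normal(100.0, 1.0, (20, 3))])
vel = np.vstack([rng.normal(0.0, 0.3, (200, 3)),
                 rng.normal(50.0, 1.0, (20, 3))])
N = len(pos)

# direct-sum gravitational potential (supplied by the code in practice)
d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
np.fill_diagonal(d, np.inf)
phi = -G * m * (1.0 / d).sum(axis=1)

# seed the centre of mass with the "densest" particle, approximated
# here by the particle with the most neighbours within radius 2
seed = (d < 2.0).sum(axis=1).argmax()
cm_v = vel[seed]

# iterate: bound set -> new centre-of-mass velocity -> bound set ...
while True:
    E = 0.5 * ((vel - cm_v) ** 2).sum(axis=1) + phi   # specific binding energy
    bound = E < 0.0
    new_v = vel[bound].mean(axis=0)
    if np.linalg.norm(new_v - cm_v) < 1e-8:           # convergence
        break
    cm_v = new_v

n_bound = int(bound.sum())
delta_M = (N - n_bound) * m           # stellar mass loss
```

On this toy set the iteration converges in a couple of passes and recovers the 20 unbound particles as the mass loss.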
\begin{figure*} \centering \includegraphics[width=18.5cm, angle=0]{res_com.pdf} \\ \caption{Comparison of mass losses as a function of $\beta$ near $\beta_{\rm d}$ between the GRR simulations (blue points), high- ($\sim 10^5$ particles; red points) and low-resolution ($\sim 10^3$ particles; black points) \textsc{gadget2} simulations, for $\gamma=5/3$ (left panel) and $\gamma=4/3$ (right panel) polytropes. For a $\gamma=4/3$ polytrope, the value of $\beta_{\rm d}$ clearly depends on the adopted resolution below a resolution threshold. For a $\gamma=5/3$ polytrope, the value of $\beta_{\rm d}$ differs very slightly among the three simulations. \label{res_com}} \end{figure*} It is worth noting that for the 5/3 polytropic index the curves of stellar mass loss associated with the four simulation codes differ very slightly in the value of the critical disruption parameter $\beta_{\rm d}$ (dashed lines in Fig. \ref{53_curve}). Specifically, $\beta_{\rm d}$ is reached first in the GRR simulations ($\beta_{\rm d}$=0.90), followed by the \textsc{gizmo mfm} ($\beta_{\rm d}=$0.91), \textsc{gadget2} ($\beta_{\rm d}=$0.93), and \textsc{gizmo} modern SPH ($\beta_{\rm d}=$0.94) simulations (Table \ref{beta_table}). This ordering is expected, given the greater degree of numerical diffusion that characterises grid-based techniques and, conversely, the surface tension that limits mixing in SPH methods (Section \ref{gridVSsph}). For the 4/3 polytropic index, instead, there is disagreement between our simulations and those of GRR (dashed lines in Fig. \ref{43_curve}). $\beta_{\rm d}$ is reached clearly first in the simulations of GRR ($\beta_{\rm d}$=1.85), followed by very similar values of the \textsc{gizmo mfm} ($\beta_{\rm d}=$2.00), \textsc{gadget2} ($\beta_{\rm d}=$2.02), and \textsc{gizmo} modern SPH ($\beta_{\rm d}=$2.02) simulations (Table \ref{beta_table}).
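The fitted mass-loss curve can be evaluated directly from the tabulated coefficients. Below is a sketch for the GRR $\gamma=5/3$ row of Table \ref{fitting_table}; note the assumed sign convention, under which the tabulated $D=-3.4137$ yields the denominator $1-3.4137\beta+2.4616\beta^2$, the reading for which $f(\beta)$ rises monotonically to 1 at $\beta_{\rm d}=0.90$:

```python
import numpy as np

# GRR gamma = 5/3 coefficients from Table 1; beta_d = 0.90.
# Sign-convention assumption: the tabulated D = -3.4137 enters the
# denominator as 1 + D*beta + E*beta**2 (i.e. 1 - 3.4137*beta + ...).
A, B, C, D, E = 3.1647, -6.3777, 3.1797, -3.4137, 2.4616
BETA_D = 0.90

def mass_loss_fraction(beta):
    """Fitted Delta M / M_*; only meaningful over the simulated range,
    roughly 0.5 <= beta (the rational exponent misbehaves below that)."""
    if beta >= BETA_D:
        return 1.0                    # total disruption
    num = A + B * beta + C * beta**2
    den = 1.0 + D * beta + E * beta**2
    return float(np.exp(num / den))
```

With these coefficients the fitted mass loss is negligible at $\beta=0.5$ and approaches unity continuously as $\beta\rightarrow\beta_{\rm d}$.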
We hypothesise that the lower value of $\beta_{\rm d}$ obtained by GRR is the result of insufficient resolution of the stellar core of the $\gamma=4/3$ polytrope. In support of this hypothesis, we tested the dependence of $\beta_{\rm d}$ on the resolution of our simulations by performing some low-resolution ($\sim 10^3$ particles) \textsc{gadget2} simulations for the two polytropic indices (black points in Fig. \ref{res_com}). Fig. \ref{res_com} shows that for a $\gamma=5/3$ polytrope (left-hand panel) the change in resolution has negligible effects on $\beta_{\rm d}$. On the other hand, for $\gamma=4/3$ polytropes (right-hand panel) we observe a strong dependence of $\beta_{\rm d}$ on resolution below a resolution threshold because the configuration of the star is less stable. We also determined the dependence of $\beta_{\rm d}$ on different values of $M_{\rm BH}$ by performing additional low-resolution ($\sim 10^3$ particles) \textsc{gadget2} simulations with a $\gamma=5/3$ polytrope of mass $1 \rm M_{\rm \odot}$ and BHs of masses $10^5 \rm M_{\rm \odot}$ and $10^7 \rm M_{\rm \odot}$. Fig. \ref{MBH_com} clearly shows that $\beta_{\rm d}$ does not depend sensitively on $M_{\rm BH}$. We recall that flares and accretion temperatures instead depend on $M_{\rm BH}$ (e.g. Guillochon \& Ramirez-Ruiz 2013, 2015a). For completeness, we also show in Fig. \ref{n_com} how the polytropic index of the stellar remnant, which results from partial disruptions on parabolic orbits, is not preserved, but decreases with increasing $\beta$ for both $\gamma=5/3$ polytropes (left panel) and $\gamma=4/3$ polytropes (right panel). \begin{figure}[h!]
\centering \includegraphics[width=9.cm, angle=0]{fit_MBH_key.pdf} \\ \caption{Comparison of mass losses as a function of $\beta$ near $\beta_{\rm d}$ for a $\gamma=5/3$ polytrope of mass $1 \rm M_{\rm \odot}$ approaching BHs with three different masses: $10^5 \rm M_{\rm \odot}$ (red points), $10^6 \rm M_{\rm \odot}$ (black points), $10^7 \rm M_{\rm \odot}$ (blue points). The value of $\beta_{\rm d}$ clearly does not depend on $M_{\rm BH}$. \label{MBH_com}} \end{figure} \section{Summary and conclusions} \label{conclusions} Tidal disruption events provide a unique way to probe otherwise quiescent or low-luminous black holes at the centres of galaxies. When approaching the central black hole of a galaxy, a star may be totally or partially disrupted by the black hole tidal field, depositing material onto the compact object and lighting it up through a bright flare (e.g. Rees 1988; Phinney 1989; Evans \& Kochanek 1989). Such a tidal accretion flare is expected to be shaped by the structure of the disrupted star (e.g. Lodato et al. 2009) and the morphology of the star-black hole encounter (e.g. Guillochon \& Ramirez-Ruiz 2013, 2015a). The hydrodynamical simulations of Guillochon \& Ramirez-Ruiz of star-black hole close encounters probably represent the most complete theoretical investigation of the properties of tidal disruption events (Guillochon \& Ramirez-Ruiz 2013, 2015a). In each simulation, the star ($M_{\rm *}=1\rm M_{\rm \odot}$, $R_{\rm *}=1\rm R_{\rm \odot}$) is modelled as a polytrope of index 5/3 or 4/3 and evolved on a parabolic orbit with a specific pericentre around the black hole ($M_{\rm BH}=10^6 \rm M_{\rm \odot}$) using an AMR grid-based code. The resulting stellar mass loss defines the morphology of the simulated encounter, that is, it defines whether the disruption is total or partial, thus shaping the ensuing accretion flare. 
Here we followed the approach of Guillochon \& Ramirez-Ruiz, but adopted two SPH simulation codes (\textsc{gadget2}, traditional SPH; Springel 2005; \textsc{gizmo}, modern SPH; Hopkins 2015) and \textsc{gizmo} in \textsc{mfm} mode (Hopkins 2015) instead of a grid-based method, as all these simulation techniques have their advantages, but also limits (Section \ref{gridVSsph}). We mainly intended to determine for each polytropic index whether the demarcation line between total and partial tidal disruption events, the critical disruption parameter $\beta_{\rm d}$ (Eq. \ref{betad}), is the same for different simulation techniques. Figs. \ref{53_curve} and \ref{43_curve} clearly show that for a $\gamma=5/3$ polytrope the curves of stellar mass loss inferred from AMR grid-based simulations (red points) and from \textsc{gizmo mfm} (blue points), traditional SPH (black points), and modern SPH (green points) simulations differ only slightly in the value of $\beta_{\rm d}$ (dashed lines), reflecting the limits of different codes (Section \ref{gridVSsph}), while for a $\gamma=4/3$ polytrope there is disagreement between our simulations and those of GRR (Table \ref{beta_table}), which is most likely due to the adopted resolutions; this interpretation is consistent with the resolution tests we performed with our own simulations (Fig. \ref{res_com}). However, even with equal resolution, the SPH approach should be superior to a grid-based approach at resolving the dynamics of the core of, especially, a $\gamma=4/3$ polytrope, given that the resolution naturally follows density in equal-mass-particle approaches. As a consequence, we find $\beta_{\rm d}=0.92\pm 0.02$ ($2.01\pm 0.01$) for a $\gamma=5/3$ ($4/3$) polytrope. \begin{figure*} \centering \includegraphics[width=18.cm, angle=0]{n_comparison_43_53.pdf} \\ \caption{Changes in the value of the polytropic index of the stellar remnant resulting from partial disruptions for selected initial values of its $\beta$. 
Densities and radii are normalised to the central density and the radius of the remnant. Black curves represent solutions to the Lane-Emden equation for different values of $\gamma$; red, green, and blue points are from some of our simulations that left a remnant, for three different values of $\beta$. Left panel: $\gamma=5/3$ polytrope. Right panel: $\gamma=4/3$ polytrope. \label{n_com}} \end{figure*} The $\gamma=4/3$ profile is probably only appropriate for a zero-age main-sequence sun, because at an age of 5 Gyr the central density of our Sun is about twice that of a $\gamma=4/3$ polytrope. For a real star, even greater resolution would therefore be needed in a grid-based approach in order to properly estimate the location of full versus partial disruption. Moreover, real stars are generally not well modelled by a single polytropic index, especially as they evolve (MacLeod et al. 2012). Giant stars consist of a tenuous envelope and a dense core, which prevents envelope mass loss, thus likely shifting the value of $\beta_{\rm d}$ to even higher values. A similar core-envelope structure and behaviour also characterise giant planets when they are disrupted by their host star (Liu et al. 2013). TDEs can also involve disruptions by stellar objects (Guillochon et al. 2011; Perets et al. 2016). However, the value of $\beta_{\rm d}$ for the latter encounters remains to be investigated. \begin{acknowledgements} We thank the ISCRA staff for allowing us to perform our simulations on the Cineca Supercomputing Cluster GALILEO. We acknowledge P. F. Hopkins and G. Lodato for very useful discussions and comments on this work. ERC acknowledges support by NASA through the Einstein Fellowship Program, grant PF6-170150. This work was also supported by the Packard grant and NASA ATP grant NNX14AH37G (ER). We also thank the referee, H. B. Perets, for valuable comments on the manuscript and very constructive suggestions. \end{acknowledgements}
\section{Introduction} The evolution of a spherically symmetric blast wave expanding into the interstellar medium (ISM) is a well-established, textbook topic, amply covered in many highly detailed reviews \cite[see e.g.][]{omk88, bks95}. After the relatively short free expansion and adiabatic phases, the gas just behind the outward-propagating shock cools and forms a very dense shell of material which continues propagating outwards due to its inertia. The stability of this thin shell of gas has also been extensively studied \cite[e.g.][see also Sect. \ref{sec:review}]{vis83, whit94a, whit94b, elme94, ehle97, wp01, franta17}. Unstable modes can be found for a wide range of initial conditions, and the fragmentation of the unstable supershell can lead to star formation. This has been proposed as a possible triggering mechanism for star formation \cite[e.g.][]{gs78, ee78, palo94}. Stars formed in this still outward-propagating supershell will evolve and release winds and, possibly, supernova (SN) explosions into the ISM like any other stellar system. However, the evolution of the supershell and of the stellar ejecta after the formation of the supershell stars (SSSs) is still largely unexplored. Our intention is to fill this gap. Gravitationally unstable supershells may also be related to the formation of globular clusters (GCs). In particular, the lack of GCs with a metallicity below [Fe/H]$\simeq$ -2.5 \citep{harr96} has puzzled astronomers for a long time. It seems possible to explain this fact by assuming an early generation of stars at the center of a proto-GC. Supernovae go off and create an expanding supershell. The mixing of the expanding supershell with the ejecta of the first generation of stars leads to the right metal enrichment \citep{bbt91, parm99, rd05, sw00}.
In particular, if the mass of the proto-GC is larger than some threshold (usually in the range of 10$^6$-10$^7$ M$_\odot$), the fragmentation of the supershell can occur before the supershell becomes gravitationally unbound. This guarantees that the newly-formed SSSs will remain bound to the cluster. However, it is unlikely that this scenario can explain all chemical properties of GCs. These properties are reviewed in Sect. \ref{sec:review}, but in brief GCs are characterized by multiple stellar populations and by characteristic anti-correlations between some pairs of light elements. Particularly remarkable and ubiquitous are the Na-O and Mg-Al anticorrelations \cite[see e.g.][and references therein]{sned97, gsc04}; notice, however, that the Mg-Al anticorrelation is ubiquitous only in massive GCs and is often missing in less massive ones \cite[see e.g.][and references therein]{panc17}. On the other hand, there is a remarkable homogeneity in the abundances of iron-peak elements in the large majority of observed GCs. The scenario depicted above thus seems unlikely because it implies that the stars which created and enriched the expanding supershell in iron either disappeared or already had the relatively high [Fe/H] we observe nowadays. Other scenarios have therefore been proposed to explain the peculiar chemical properties of GCs. These scenarios are very briefly reviewed in Sect. \ref{sec:review}. However, it may be possible to save the broad scenario (fragmentation in the expanding supershell) by invoking a different trigger for the supershell expansion, namely a primordial, very energetic population-III (Pop-III) star. This can also easily explain the relatively high iron content of observed GCs \citep{beas03}. The stars formed in this expanding supershell will evolve as any normal star: they will create stellar winds and, depending on the IMF, some of them will explode as Type II supernovae (SNeII).
Some of the winds of these SSSs will propagate inwards rather than outwards. They might accumulate in the center (or close to the center) of the proto-GC, cool and condense there, perhaps leading to a new episode of star formation. This is the general idea we want to explore here and in follow-up papers. The paper is structured as follows: in Sect. \ref{sec:review} we will briefly summarize the main physical and chemical properties of GCs and what features a theory of GC formation must have in order to reproduce them. We will then lay down the idea of the paper in more detail in Sect. \ref{sec:idea}, linking the proposed model to the requisites of theories of GC formation described in Sect. \ref{sec:review}. The initial conditions are presented in Sect. \ref{subs:incon}, the adopted numerical scheme in Sect. \ref{subs:numer}, the assumed criteria for the fragmentation of the expanding supershell in Sect. \ref{subs:fragm}, and the results of numerical experiments in Sect. \ref{sec:results}. Specifically, in Sect.~\ref{sub:expss} we study the expanding supershell and the formation of the 1G SSSs, while in Sect. \ref{sub:inwss} we study the fate of the inward-propagating supershell formed by the SSS winds, and we establish under what conditions this supershell can form a new generation of stars and what the characteristics of these second-generation stars are. Finally, in Sect. \ref{sec:disc} we will discuss our results and draw conclusions. \section{The properties of globular clusters} \label{sec:review} GCs show characteristics which have puzzled astronomers for decades. Many theories of their formation have been put forward but none of them manages to explain all properties of GCs. This led \cite{bast15} to write ``Hence, with the exclusion of all current models, new scenarios are desperately needed.''
Many review papers describe in detail the chemical characteristics of GCs and the properties a theory of GC formation should have in order to conform with observations \cite[see in particular][]{bast15, renz15}. We will follow in particular \citet{renz15}, grossly simplifying the description. We refer to the original paper for more detail. \begin{enumerate} \item {\bf Specificity}. A crucial property of GCs is the presence of multiple populations of stars, with a number of different stellar populations ranging from two to perhaps seven \citep{milo15}. This phenomenon appears to be specific to GCs: young massive clusters do not show clear evidence of multiple populations although the masses of these clusters are similar \cite[see recent papers by][]{bast13a, cabr16}. It thus seems, to quote \citet{renz15}, that ``special conditions encountered only in the early Universe appear to be instrumental for the occurrence of the GC multiple population phenomenon.'' {\it We identify in this work these ``special conditions'' with the explosion of very massive pop-III stars}, as we describe in detail in Sect. \ref{sec:idea}. Another aspect of the specificity of GCs is the fact that field halo stars with the same metallicity and age do not show the same chemical characteristics as GC stars. On the other hand, we note that \citet{niederhofer17} recently reported the discovery of multiple stellar populations in three intermediate-age star clusters in the Small Magellanic Cloud. \item {\bf Predominance}. At least half of the stars in GCs have been self-polluted within the GC, and thus do not belong to the first generation (1G) of stars. This is strictly connected to the so-called mass-budget problem: a significant mass in ejecta from 1G pollutant stars is required to explain the fraction of late-generation (LG) stars but, for ordinary IMFs, 1G ejecta constitute only a small fraction of the total 1G mass.
The fraction of 1G stars could be as low as one third and fairly constant among different GCs \citep{bl15}, although a more recent study shows a clear variability of this fraction and an anti-correlation with the GC mass \citep{milo17}. In particular, the smallest GCs show 1G fractions larger than 50\%, whereas this fraction decreases to about 20\% for the most massive GCs. Notice that the mass dependence is not limited to the 1G stellar fraction. Also the spread in helium \citep{milone15} and in light elements \citep{milo17} seems to correlate with the GC mass. \item {\bf Discreteness}. The process of star formation in GCs appears to be discontinuous, characterized by well-separated events. This is clearly visible in well-separated main sequences in color-magnitude diagrams \citep{piot07}. This is much less clear in the apparently continuous Na-O and Mg-Al anticorrelations, although at least in the Na/O plane discrete multiple populations are discernible \citep{mari08, milo15b} and signs of patchiness in the distribution of other chemical elements have started to emerge \citep{carr15}. \item {\bf Supernova avoidance}. As already mentioned, stars in the large majority of GCs do not show a spread in iron-peak or in other heavy elements, clearly showing that SN ejecta should contribute negligibly to the chemical enrichment of LG stars. However, it is worth mentioning that the fraction of observed GCs which shows also a spread in heavy elements is steadily increasing and nowadays amounts to about 15 to 20\% of the observed GCs \cite[see in particular][and references therein]{mari15}. This variation is not limited to iron-peak elements but extends to calcium and s-process elements. This has been shown in a number of objects \cite[see e.g.][]{mari11, john17, milo15, yong14, yong16}. SNe might therefore play a role after all in the evolution of some GCs.
\end{enumerate} There are other peculiar characteristics of GC stars and constraints for theories of GC formation. In particular, a theory of GC formation should be able to reproduce the above mentioned Na-O and Mg-Al anticorrelations, and should reproduce the observed spread in He. We will not treat these aspects in detail, as they are more closely related to the nucleosynthesis of stars within a GC rather than to the mechanism of GC formation. As already mentioned, various models have attempted to explain the peculiarities of GC stars. We refer again to review papers for a detailed description. We recall here very briefly only the two most widely investigated scenarios: \begin{itemize} \item {\bf Pollution from winds of massive stars, e.g. from fast rotating massive stars (FRMS)} \citep{cc16, krau13, decressin07, wuen16, richa16, lt17}. The idea is that extruding disks of FRMS, filled with ejecta from stellar winds, mix with pristine gas and form new generations of stars. \item {\bf Pollution from AGB stars} \citep{derc08, derc10}. Here the assumption is that the ejecta of AGB stars, given their low velocity, might remain bound to the proto-GC, whereas SN remnants do not. The AGB ejecta, possibly mixed with some pristine material, form LG stars. \end{itemize} Both these scenarios, and many other scenarios not described here \cite[see e.g.][]{demi09, bast13b, deni15}, have attractive features, as well as some problems which require some tweaking of parameters. \section{The proposed model} \subsection{General idea} \label{sec:idea} We recall here the main aim of our paper. We want to study the fragmentation of a supershell created by the explosion of a Pop-III star and the fate of the winds of stars formed out of this fragmented supershell. It is to be expected that approximately half of these winds will propagate towards the center of the system (the place where the Pop-III star exploded), perhaps creating the conditions for a new episode of star formation.
In fact, the density of gas piling up close to the center must increase, thus the cooling increases too, leading to large regions of cold, dense gas, probably prone to the formation of new stars. Alternatively, the inward-propagating winds of SSSs could trigger the formation of another inward-propagating dense supershell, itself able to fragment and form a new generation of stars. At the same time, the SSSs destroy the primary expanding shell with their winds, and a fraction of them leaves the proto-cluster due to their outward velocity inherited from the expanding shell, in combination with galactic tides. This process might then repeat itself. We illustrate the main phases of our assumed scenario in Fig. \ref{fig:sketch}. \begin{figure*}[t] \includegraphics[width=16cm]{sketch} \caption{The major phases of our assumed scenario for the formation of GCs: {\it a)} A very massive pop-III star explodes in the middle of a primordial cloud. {\it b)} The energy of this star produces a shell of dense, cold gas. This supershell becomes gravitationally unstable, fragments, and forms new stars (1G stars). {\it c)} Supershell stars release energy, due to stellar winds and SN explosions. Some of the gas they release is driven inwards. {\it d)} This triggers the formation of a new, inward-propagating supershell, which fragments and forms a second generation of stars. The thick red line denotes the tidal radius in the external proto-galactic field. } \label{fig:sketch} \end{figure*} We believe that this study is interesting per se, as we are not aware of similar studies in the literature, but we want to explore here the possibility that this mechanism is connected to the formation of GCs. We will thus go through the points illustrated in Sect. \ref{sec:review} (specificity, predominance, discreteness, SN avoidance) and analyze whether our proposed scenario might help explain them.
Ours is thus not, properly speaking, a scenario of progenitors of polluted GC stars (as the massive stars and the AGB scenarios are). We think that many authors (several of them mentioned above) have done a great job trying to identify the nucleosynthesis pathways towards the chemical patterns observed in GCs. We simply devise a possible evolutionary pathway which might help solve some of the difficulties encountered by these progenitor models. For convenience we tune our discussion and the following calculations to the massive stars scenario, but we believe that with simple changes our idea can be adapted to the AGB scenario (and perhaps to the other proposed scenarios) as well. \begin{enumerate} \item {\bf Specificity}. This is easy: stars in GCs appear unlike stars in young massive clusters because Pop-III stars no longer exist, thus our proposal can apply preferentially to very old massive clusters, not to young ones. Our scenario requires a very energetic Pop-III star, able to sweep up a large mass of gas. The properties and nucleosynthesis of these extremely massive (M$>$100 M$_\odot$) stars are fairly well known \cite[e.g.][]{hw02}. Attention in the last few years has shifted to less massive (M$<$100 M$_\odot$) first stars \cite[e.g.][]{hw10, chop16}. This is most probably due to the need to explain the observed chemical composition of ultra metal-poor stars \cite[e.g.][]{plac16}, which are extremely poor in iron and thus cannot have been polluted by very massive Pop-III stars. It is nevertheless still true that, due to the lack of metals and to the higher temperature of the cosmic microwave background, the conditions in the early universe should have favoured the formation of very massive stars \citep{bromm99}. It therefore appears plausible to us that very massive Pop-III stars existed in the early universe, and were responsible for the formation of GCs.
The high iron yields of these Pop-III stars are responsible for the relatively high [Fe/H] in observed GCs \citep{beas03}. Less massive (and less energetic) Pop-III stars were, on the other hand, unable to sweep up large masses of pristine gas and were thus unable to trigger the formation of large clusters. They polluted the ISM more patchily, creating the conditions for the formation of ultra metal-poor stars. Recently, \cite{elme17} suggested that the different conditions that allowed the formation of GCs in the early universe were due to the higher densities. This is of course a viable suggestion, an alternative to our proposed Pop-III triggered mechanism. \item {\bf Predominance}. The problem of the predominance (and the associated problem of the mass budget) is typically solved assuming that the 1G stars we see nowadays in a GC are just a small fraction of the polluting stars originally present in the proto-cluster. However, no convincing scenario has been put forward to explain why a large fraction of 1G stars should leave the proto-GC. Our proposed scenario might provide a partial solution because $(i)$ the SSSs possess a positive radial velocity (they inherit the velocity of the supershell). Some of the SSSs might have a velocity larger than the escape velocity from the proto-GC. On the other hand, the winds from these SSSs will, at least in part, have a negative radial velocity, assuming that they expand isotropically from the SSSs and that their velocities are larger than the stellar velocities. $(ii)$ Even if SSSs do not escape directly, they are formed at relatively large distances from the center of the cluster. As they start being pulled towards the cluster center, they will acquire relatively large velocities. The velocity dispersion of SSSs (the 1G stars in our scenario) will thus be larger than the velocity dispersion of stars formed closer to the center of the cluster, and it is therefore less likely that they will be bound to the cluster.
At least in principle, our model can explain the anticorrelation between cluster mass and 1G star fraction (and between mass and chemical complexity) described in Sect. \ref{sec:review}. According to our model, a crucial parameter is the fragmentation time of the supershell (see Sect. \ref{subs:fragm}); for large fragmentation times, the supershell can propagate further and collect more mass. The larger mass in SSSs does not necessarily imply a larger mass in 1G stars. Having formed at large radii, a large fraction of the SSSs might be unbound and leave the proto-GC. On the other hand, the SSS ejecta are more massive and therefore a larger fraction of LG stars can be formed. This can explain the anti-correlation between cluster mass and 1G star fraction. Moreover, as already mentioned (and as we will see below), a new generation of stars can be formed in an inward-propagating shell. The radius at which this shell will fragment and form new stars will depend on the radius at which SSSs form. If the fragmentation radius of the inward-propagating shell is large enough, this process can repeat itself (see also below). \item {\bf Discreteness}. It is obvious that 1G stars and LG stars are well-separated. 1G stars are the surviving SSSs, whereas LG stars are formed later, closer to the center of the cluster. To explain the presence of multiple ($n>2$) separated generations of stars, we must assume that the mechanism we describe here might replicate itself: SSSs form an inward-propagating shell, which fragments and forms new stars, which in turn create another inward-propagating supershell, and so on. As we will see in Sect. \ref{sub:inwss}, simulations suggest that this mechanism might be possible. Moreover, as explained above, this mechanism might depend on the fragmentation time of the supershell (and thus on the radius at which SSSs form).
This can affect the fragmentation radius of the inward-propagating shell, which in turn will affect the probability that this mechanism repeats itself. We thus see, at least in principle, a possible explanation for the correlation between GC mass and chemical complexity (spread in helium and in light elements) described in Sect. \ref{sec:review}. The larger the fragmentation time, the larger the collected supershell mass, the larger the mass in LG stars, the larger the probability of forming multiple (and not just two) stellar populations, and the larger the chemical complexity. \item {\bf Supernova avoidance}. In the framework of the massive stars scenario, we must assume that the processes leading to the formation of LG stars occur relatively quickly (within a few Myr), in order to be completed before SNeII go off. As we will see below, simulations (and analytical considerations) again seem to support this view. However, as remarked in Sect. \ref{sec:review}, a non-negligible fraction of GCs do show a spread in heavy elements, which may be due to differences in the amounts supplied by Pop-III stars of different mass, or to a varying fragmentation time of the supershell. Some of these GCs (as originally suggested for the archetypal examples of Omega Centauri and M22) might be remnants of tidally disrupted dwarf galaxies \cite[see e.g.][]{mari15} but, for some others, an internal mechanism might be invoked. This is easily accounted for if the formation of LG stars lasts more than a few Myr (see also Sect. \ref{sub:inwss}). \end{enumerate} After having laid down the general idea of this paper, we describe here and in the following section the calculations we have performed to substantiate it. The calculations presented here should be seen as a feasibility study rather than as a complete, self-consistent description of the phenomena we wish to simulate.
A fully consistent simulation requires the inclusion of many complex physical processes and is very computationally demanding. Before embarking on this kind of computation, we want to be sure that the proposed scenario can really work, thus we will present here a simplified model. Even this simplified model turns out to be more computationally demanding than we thought. We will present more detailed simulations in forthcoming papers. \subsection{The initial conditions} \label{subs:incon} The parameters of the Pop-III stars are taken from \cite{hw02}. We concentrate on the model having a He core mass of 130 M$_\odot$, corresponding to an initial mass of about 260 M$_\odot$. The explosion energy of this model Pop-III star is about 10$^{53}$ erg. This model Pop-III star releases 40 M$_\odot$ of Fe. If mixed with 10$^7$ M$_\odot$ of pristine gas, this amount of iron leads to a [Fe/H] of about -2.5, which is the minimum [Fe/H] observed in GCs. Lacking a detailed description of mixing processes, we will assume henceforth that the Pop-III explosion leads instantaneously to a uniform metallicity of [Fe/H]=-2.5 in the supershell, which is thus also the metallicity of our 1G stars. As we will see below, the mass of gas accumulated in the supershell is generally less than 10$^7$ M$_\odot$, thus the supershell metallicity might be higher than -2.5. Our choice is thus conservative: cooling and star formation might be more efficient than we assume in what follows. Several papers in the literature have been devoted to the study of the formation and explosion of very massive Pop-III stars \cite[see e.g.][]{abel02, ripa02, op03, bromm09, humm16}. We choose here to fix the initial conditions of the ISM surrounding the Pop-III star, sitting in the middle of an ISM cloud, according to the results of \citet{yosh06}. In their study, the ISM clouds are characterized by a very small core, a very high central density, and an $r^{-2}$ envelope beyond the core.
Namely, the number density is characterized by the following profile: \begin{equation} n(r)=\begin{cases}n_{\rm c} & {\rm if }\;r<r_{\rm c}\\ n_{\rm c}\left(\frac{r}{r_{\rm c}}\right)^{-2} & {\rm if }\;r\geq r_{\rm c}\end{cases} \label{eq:profile} \end{equation} \noindent Typical values found in the simulations of \citet{yosh06} are $r_{\rm c}=1$ au for the core radius and $n_{\rm c}=2\cdot 10^{15}$ cm$^{-3}$ for the central number density. This central density is very high, but it falls off very rapidly. The total mass contained in the whole computational grid (a sphere of radius 200 pc) is $\sim 3\cdot 10^6$ M$_\odot$. Therefore, by adopting these initial conditions we are unable, with the mass available, to reproduce the most massive GCs observed in the Galaxy, i.e. those with masses of the order of 10$^6$ M$_\odot$. We consider a minimum temperature of the ISM of 50 K. If dictated by the cosmic microwave background alone, this temperature implies a redshift of formation of the Pop-III star of about 17. \subsection{The numerical code} \label{subs:numer} We construct a 1-D, Eulerian, spherically symmetric code to follow the expansion and fragmentation of the supershell produced by the Pop-III explosion. Of course, it is highly unlikely that the system will maintain a perfect spherical symmetry during the whole evolution. As mentioned before, we plan here to perform a simplified feasibility study; more detailed 3-D simulations are planned and will be presented in forthcoming papers. The code, which is second-order accurate, is based on a HLLC Riemann solver and uses slope limiters to avoid spurious oscillations \citep{toro09}. The cells are uniform and 0.01 pc wide. This means that the core of 1 au in the initial ISM distribution is not resolved in our simulation.
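The two headline numbers of these initial conditions can be checked with a few lines of arithmetic. The sketch below is illustrative only (the hydrogen mass fraction $X\simeq 0.75$, the mean molecular weight $\mu\simeq 1$, and the solar iron abundance $A({\rm Fe})_\odot = 7.50$ are our assumptions, not values quoted above): it verifies that 40 M$_\odot$ of Fe diluted in 10$^7$ M$_\odot$ of pristine gas gives [Fe/H] $\approx -2.5$, and that the profile of Eq. (\ref{eq:profile}) integrated out to 200 pc contains a few $10^6$ M$_\odot$:

```python
import math

# --- constants (cgs) ---
M_SUN = 1.989e33      # solar mass [g]
PC = 3.086e18         # parsec [cm]
AU = 1.496e13         # astronomical unit [cm]
M_H = 1.673e-24       # hydrogen atom mass [g]

# --- [Fe/H] of the polluted supershell ---
X_H = 0.75                                    # assumed hydrogen mass fraction
A_FE_SUN = 7.50                               # assumed solar A(Fe) = log10(n_Fe/n_H) + 12
FE_H_SUN = 55.845 * 10 ** (A_FE_SUN - 12.0)   # solar Fe/H mass ratio

m_fe, m_gas = 40.0, 1.0e7                     # Msun of Fe ejecta and of pristine gas
fe_h = math.log10((m_fe / (X_H * m_gas)) / FE_H_SUN)
print(round(fe_h, 2))                         # close to -2.5

# --- total gas mass inside the 200 pc grid for n(r) = n_c (r/r_c)^-2 ---
n_c, r_c, r_max = 2e15, 1.0 * AU, 200.0 * PC
mu = 1.0                                      # assumed mean molecular weight
# M = 4 pi mu m_H n_c r_c^2 (r_max - r_c); the tiny core adds a negligible mass
m_tot = 4.0 * math.pi * mu * M_H * n_c * r_c**2 * (r_max - r_c)
print(m_tot / M_SUN)                          # a few 1e6 Msun
```

With $\mu$ slightly above unity for neutral primordial gas, the mass estimate lands very close to the quoted $\sim 3\cdot 10^6$ M$_\odot$.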
As already mentioned, we avoid the (uncertain) simulation of the process of chemical enrichment of the supershell due to the Pop-III ejecta and consider from the beginning a constant metallicity of [Fe/H]=-2.5. The physical processes included in our simulations are: \begin{itemize} \item Molecular and atomic cooling. For T $>$ 10$^{3.8}$ K we adopt the cooling function and the ionization fractions of \citet{schu09} with fitting formulae taken from \citet{voro15}. Below 10$^{3.8}$ K we adopt the atomic and molecular cooling of \citet{dm72}. The ionization fractions are adapted from \citet{abel97}. \item Self-gravity. The gravitational acceleration is calculated as ${g}=-{GM(r)}/{r^2}$, where $G$ is the gravitational constant and $M(r)$ is the mass in gas and stars contained within radius $r$. \item Thermal conduction. A Spitzer-type thermal conduction equation is solved by means of the Crank-Nicolson method. We refer to \citet{bd86} for the numerical implementation of this method. \end{itemize} We follow the expansion of a supershell created by the Pop-III star until it fragments and forms 1G stars. Later, we follow the evolution of the system when the SSSs start injecting energy and chemical elements into the ISM. Thus we include in the simulations two additional physical processes: \begin{itemize} \item Feedback from a single stellar population. This is calculated by means of a tailored Starburst99 simulation \citep{leit99, leit14}. The IMF of the SSSs is assumed to be the Kroupa IMF, as it closely resembles the spectrum of mass fragments in an unstable supershell \citep{tt03}. The energy produced by the SSSs is inserted as thermal energy. Energy, gas mass, and mass in chemical elements produced by the SSSs are inserted in those grid cells where stars reside (see below for the dynamics of the stars). The amount of feedback to be inserted in each cell is proportional to the total stellar mass in the given cell. \item Dynamics of stars.
Here, we will simply assume that the stars inherit the velocity of the supershell at the moment of fragmentation. Subsequently, their dynamics is dictated by the competition between their inertia and the gravitational pull. The stellar population formed in the supershell is subdivided into 250 mass bins, each of which evolves independently. To guarantee a spread in the stellar bins, we distribute the velocities according to a Maxwellian distribution, centered on the supershell expansion velocity. Of course, our treatment grossly simplifies the dynamics of the stars, and only detailed N-body simulations can shed light on the final fate of the SSSs. \end{itemize} Many other physical phenomena which might be relevant for the problem at hand have not been considered. As already mentioned, this has been done on purpose, in order to simplify the treatment and put more emphasis on the feasibility of the proposed scenario. A critical assessment of the missing physics will be presented in Sect. \ref{sec:disc}. \subsection{The fragmentation of the supershell} \label{subs:fragm} The ambient ISM is unlikely to be perfectly isotropic, and when accreted, it seeds the supershell with perturbations in its surface density. We assume that the amplitudes of these perturbations are small, so that the spherically symmetric model of the expanding supershell remains justified. Since our one-dimensional model cannot follow shell fragmentation in three-dimensional space, we estimate the time $t_{frg}$ of the supershell fragmentation as the instant when the shell expansion time $t$ equals the collapse time-scale $t_{col}$ of the most unstable fragment in the shell. Note that the fragment collapses in the direction tangent to the shell surface.
Following \citet{whit94a}, the collapse time-scale is \begin{equation} t_{col} = \left\{ \frac{G \Sigma}{d} - \frac{a_s^2}{d^2} \pm \frac{\alpha^2}{t^2} \right\}^{-1/2}, \label{efrag} \end{equation} where $\Sigma$ is the surface density, $d$ the fragment radius, $a_s$ the sound speed inside of the shell, and $\alpha$ the exponent in the shell expansion law, \begin{equation} R = K t^{\alpha}. \label{eexpansionlaw} \end{equation} The first term on the right hand side of \eq{efrag} expresses the tendency of the self--gravity to bind the fragment together, while the second is due to the thermal pressure gradient acting in the opposite direction. The third term describes either the stretching of the fragment in the case of the expanding shell (negative sign), or the fragment contraction in the case of the collapsing shell (positive sign). The latter case applies to the secondary, inward propagating shell formed by the 1G SSSs. In the case of a supernova explosion at the density peak of a medium with a power--law density profile following equation (\ref{eq:profile}), the exponent of the expansion law in the energy conserving phase is $\alpha$ = 2/3 and in the momentum conserving phase $\alpha = 1/2$ \citep{omk88}. The high density in our model results in rapid cooling, so the supershell switches from the energy conserving phase to the momentum conserving phase very early (see also \citet{haid16}), and fragmentation takes place in the momentum conserving phase with $\alpha = 1/2$. The relative importance of the self--gravity and external pressure in confining the supershell significantly influences the fragmentation process and the supershell fragmentation time $t_{frg}$ \citep{franta17}.
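The criterion built from Eqs. (\ref{efrag}) and (\ref{eexpansionlaw}) is simple enough to sketch numerically. The toy below is illustrative only, not the simulation code: the shell normalisation, the isothermal sound speed at 50 K, and the $\Sigma\propto R^{-1}$ scaling implied by sweeping up an $r^{-2}$ ambient profile are our assumptions. Maximising the bracket of Eq. (\ref{efrag}) over the fragment size $d$ gives $G^2\Sigma^2/4a_s^2$ at $d = 2a_s^2/G\Sigma$; the sketch then scans for the first time at which $t \geq t_{col}$ for an expanding shell with $\alpha = 1/2$:

```python
import math

# cgs constants
G = 6.674e-8
K_B = 1.381e-16
M_H = 1.673e-24
M_SUN = 1.989e33
PC = 3.086e18
MYR = 3.156e13

# assumed shell parameters (roughly matching the run described in the text)
T_SHELL = 50.0                                # K, minimum shell temperature
MU = 1.22                                     # assumed mean molecular weight
A_S = math.sqrt(K_B * T_SHELL / (MU * M_H))   # isothermal sound speed
ALPHA = 0.5                                   # momentum conserving, R = K t^alpha

def sigma(t):
    """Shell surface density at time t (expanding case).

    For an r^-2 ambient profile the swept-up mass grows linearly with R,
    so Sigma = M/(4 pi R^2) falls off as 1/R.  Normalised (an assumption)
    to 3e5 Msun inside R = 20 pc at t = 4.8 Myr, as quoted in the text.
    """
    r = 20.0 * PC * math.sqrt(t / (4.8 * MYR))
    m = 3.0e5 * M_SUN * (r / (20.0 * PC))
    return m / (4.0 * math.pi * r * r)

def t_col(t):
    """Collapse time of the most unstable fragment (expanding shell).

    Maximising G*Sigma/d - a_s^2/d^2 over d gives G^2 Sigma^2/(4 a_s^2)
    at d = 2 a_s^2/(G Sigma); the stretching term enters with a minus sign.
    Returns infinity while stretching still stabilises all fragments.
    """
    s = sigma(t)
    arg = (G * s) ** 2 / (4.0 * A_S**2) - (ALPHA / t) ** 2
    return 1.0 / math.sqrt(arg) if arg > 0.0 else math.inf

# scan for the first instant when the expansion time exceeds t_col
t = 0.1 * MYR
while t < t_col(t):
    t += 0.01 * MYR
print(t / MYR)   # fragmentation after a few Myr with these numbers
```

With these (assumed) numbers the crossing occurs at roughly 5 Myr, the same order as the $t_{frg}\simeq 4.8$ Myr found by the full simulation.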
In Appendix \ref{apppressure}, we show that the supershell likely fragments when it is dominated by its self--gravity, in which case the collapse time-scale of the most unstable fragment in the supershell is \begin{equation} t_{col} = \left(\frac{G^2 \Sigma^2}{4 a_s^2} \pm \frac{\alpha^2}{t^2}\right)^{-1/2}. \label{eq:fragtime} \end{equation} During the simulation, we evaluate $t_{col}$ at each time step, and identify the fragmentation time $t_{frg}$ as the time when the expansion time $t$ exceeds the collapse time-scale $t_{col}$ for the first time. We assume that the SSSs are formed at this instant. The surface density, $\Sigma$, and the sound speed, $a_s$, in the shell are measured in the simulation by averaging between the inner and the outer edges of the shell, detected as the steepest positive and negative density gradients. Note that the sound speed is almost always constant, with a value corresponding to a temperature of $50$\,K, as the shell is nearly isothermal (see Figure \ref{fig:ss}). \section{Results} \label{sec:results} Here we present results of 1D simulations modelling the supershells formed after the explosion of a Pop-III star releasing an energy of $10^{53}$\,erg into the centre of a gaseous cloud with the density profile given by eq.~(\ref{eq:profile}). The primary expanding supershell leading to the formation of 1G stars is described in Sect.~\ref{sub:expss}. The secondary, inward propagating supershell out of which the LG stars form is described in Sect.~\ref{sub:inwss}. Two versions of the model are studied, differing in the location within the primary supershell where the 1G SSSs are inserted. \subsection{The expanding supershell and the formation of 1G stars} \label{sub:expss} The evolution of the ISM density following the Pop-III explosion is shown in Fig. \ref{fig:dens_expl}. As mentioned in Sect. \ref{subs:fragm}, we expect the external shock to expand approximately as $t^{1/2}$. A reverse shock starts to appear in the last time-frames of this plot.
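The fragmentation-time criterion of Sect.~\ref{subs:fragm} ($t_{frg}$ is the first time at which $t$ exceeds $t_{col}$) can be sketched as follows. The shell history used here is a toy model, with $\Sigma \propto t^{-1/2}$ as expected for a momentum-conserving shell sweeping up an $r^{-2}$ ambient profile ($M \propto R$ and $\Sigma = M/4\pi R^2$); all numerical values are illustrative, not taken from the simulation.

```python
import numpy as np

G = 6.674e-8       # cgs
MYR = 3.156e13     # seconds per Myr

def t_col(sigma, a_s, alpha, t):
    """Collapse time-scale of eq. (fragtime) for the expanding shell,
    i.e. with the negative (stretching) sign of the alpha^2/t^2 term."""
    arg = G**2 * sigma**2 / (4.0 * a_s**2) - alpha**2 / t**2
    return np.inf if arg <= 0.0 else arg ** -0.5

# Toy shell history (illustrative normalization):
a_s, alpha = 5.7e4, 0.5        # ~50 K sound speed; momentum-conserving phase
sigma_0, t_0 = 2e-2, 1.0 * MYR # Sigma(t) = sigma_0 * (t/t_0)^(-1/2)

t_frg = None
for t in np.arange(0.05, 50.0, 0.01) * MYR:
    sigma = sigma_0 * (t / t_0) ** -0.5
    if t >= t_col(sigma, a_s, alpha, t):   # shell older than collapse time
        t_frg = t                          # first crossing: fragmentation
        break
```

Early on, the stretching term keeps `t_col` infinite; once self-gravity wins, the crossing $t \ge t_{col}$ eventually occurs and fixes $t_{frg}$.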
Figure \ref{fig:vel} shows the velocity, which decreases with time as expected. Figure \ref{fig:ss} shows the sound speed $a_{\rm s}$ for the same time frames as in the previous plots. Apart from a very narrow spike, the sound speed within the supershell is constant, because the efficient cooling rapidly brings the shocked supershell gas to the minimum temperature of 50 K. The constancy of the sound speed justifies the considerations and the analytical estimates made in the previous section. Fig. \ref{fig:pres} shows the pressure profile for the same time frames. \begin{figure}[t] \includegraphics[width=5.6cm, angle=270]{den} \caption{Evolution of the gas density (in g cm$^{-3}$) after a Pop-III explosion. The density is plotted every Myr.} \label{fig:dens_expl} \end{figure} \begin{figure}[t] \includegraphics[width=5.6cm, angle=270]{vel} \caption{Gas velocity (in km/s) after a Pop-III explosion, plotted every Myr as in Fig.~\ref{fig:dens_expl}.} \label{fig:vel} \end{figure} \begin{figure}[t] \includegraphics[width=5.6cm, angle=270]{ss2} \caption{Sound speed (in km/s) after a Pop-III explosion, plotted every Myr as in Fig.~\ref{fig:dens_expl}.} \label{fig:ss} \end{figure} \begin{figure}[t] \includegraphics[width=5.6cm, angle=270]{pres} \caption{Logarithm of the pressure (in cgs) after a Pop-III explosion, plotted every Myr as in Fig.~\ref{fig:dens_expl}.} \label{fig:pres} \end{figure} In this model, the fragmentation of the supershell occurs, according to the condition \eq{eq:fragtime}, at $t_{frg} \simeq$ 4.8 Myr, at which point the external shock has reached a radius of $\sim$ 20 pc. This corresponds to the dotted-short dashed line in Figs. \ref{fig:dens_expl} and \ref{fig:vel}. At this instant, the supershell has a mass of slightly more than $3 \cdot 10^5$ M$_\odot$. Assuming a star formation efficiency of 30\%, we obtain a mass in SSSs of $\sim$ 10$^5$ M$_\odot$.
Since we expect the large majority of these SSSs to become unbound (see the discussion in Sect. \ref{sec:idea}), even considering LG stars we will not be able to form a GC larger than a few times 10$^4$ M$_\odot$. This is consistent with the low-mass end of the mass distribution function of observed GCs; it seems much harder to obtain masses of the order of $\sim$ 10$^6$ M$_\odot$, corresponding to the most massive observed GCs. We have also tried to run models with densities larger than the ones described in Sect. \ref{subs:incon}, namely with a central density of $n_{\rm c}=6\cdot 10^{15}$ cm$^{-3}$. In this case, the fragmentation of the supershell occurs earlier, at $t_{frg} \simeq$ 0.6 Myr (\eq{eq:fragtime}). However, given the higher density, the mass accumulated in the supershell at this moment is more or less the same. With some fine-tuning, we are able to increase the mass of the supershell by a factor of a few, but with our assumptions and initial conditions it seems impossible to accumulate substantially more mass in the supershell. It thus seems that the problem of assembling a sufficiently massive supershell has more to do with the density profile than with the central density. We have tried different, flatter initial ISM density distributions, and we can produce in this way unstable supershells with masses up to $\sim$ 10$^7$ M$_\odot$. Notice that the $r^{-2}$ profile found by \citet{yosh06} extends only up to a few pc, thus a different and perhaps flatter gas distribution outside the first few pc is conceivable. We prefer in this work to stick to the \citet{yosh06} initial conditions in order to study the basic scenario in a simple and well-justified setting. We will change the initial conditions in forthcoming papers. In the meantime we note that a flatter initial gas distribution would not only allow a larger total GC mass, but might also help to capture more SN ejecta, because it would be harder for them to escape outwards.
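The effect of a flatter ambient profile on the accumulated shell mass can be illustrated with a simple integral. For $\rho(r) = \rho_c (r/r_c)^{-k}$ with $k<3$, the swept-up mass is $M(<R) = 4\pi\rho_c r_c^k R^{3-k}/(3-k)$, so flatter profiles accumulate far more mass at large radii. The normalization below is hypothetical, not that of eq.~(\ref{eq:profile}).

```python
import numpy as np

M_SUN = 1.989e33   # g
PC    = 3.086e18   # cm

def swept_mass(rho_c, r_c, k, R):
    """Gas mass inside radius R for rho(r) = rho_c * (r/r_c)^(-k), k < 3:
    M(<R) = 4*pi*rho_c*r_c^k * R^(3-k) / (3-k)."""
    return 4.0 * np.pi * rho_c * r_c**k * R**(3.0 - k) / (3.0 - k)

# Illustrative normalization (NOT the profile of the simulations):
rho_c = 1e-20        # g cm^-3 at the reference radius
r_c   = 1.0 * PC
R     = 20.0 * PC    # roughly the fragmentation radius of the model

m_steep = swept_mass(rho_c, r_c, 2.0, R) / M_SUN   # r^-2 profile
m_flat  = swept_mass(rho_c, r_c, 1.0, R) / M_SUN   # flatter r^-1 profile
# The flatter profile sweeps up R/(2 r_c) times more mass by r = R,
# which is why flatter distributions yield more massive supershells.
```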
That would explain why only the most massive GCs exhibit variations in Fe content (and in other heavy elements), as described in Sect. \ref{sec:review}. \subsection{The inward-propagating supershell and the formation of LG stars} \label{sub:inwss} Once the supershell has fragmented and formed new stars (the SSSs), we wish to follow the evolution of the SSS ejecta and analyze under what conditions these ejecta, perhaps mixed with some pristine gas, can lead to the formation of new stars. The first problem we face is to figure out the exact location of these SSSs. As we can see from Fig. \ref{fig:dens_expl}, the supershell is initially quite narrow, but it broadens as time goes on. This broadening is partially deceptive given the logarithmic scale: the large majority of the supershell mass is confined within $0.5$\,pc behind the shock, even at the last plotted time. Lacking a detailed physical description of the star formation process, there is still considerable uncertainty about the birth site of the SSSs. One might argue that stars, being collisionless, are not slowed down by the pressure of the external medium and so might overtake the supershell. This argument is, however, weakened by the fact that stars form out of the densest, coolest clumps of ISM in the supershell. Pressure gradients between the center and the supershell are steeper along directions that do not cross these dense clumps and, as is well known, large pressure gradients favour a faster expansion of the supershell. Moreover, supershell expansion in non-uniform media may lead to the development of Rayleigh-Taylor instabilities. The clumps of denser gas formed as a consequence of the Rayleigh-Taylor instabilities are more severely decelerated and tend to lag behind the expanding shock \citep[see e.g.][]{mfa91, dpj96}. For this reason, we decide to initially place the SSSs close to the base of the supershell, at a distance of $\sim$ $16$\,pc from the center. We label this {\it Model~1}.
This choice has the further advantage that the spherical volume of radius $16$\,pc has been almost completely evacuated of pristine gas. Only about 8 $\times$ 10$^3$ M$_\odot$ of gas are contained in this volume. Therefore, the ejecta of the SSSs propagating inwards are mixed with just a small fraction of pre-existing gas. As a comparison, we consider also another {\it Model~2} in which the SSSs are placed close to the density peak of the supershell, at a distance of $20.6$\,pc from the center. \begin{figure}[t] \includegraphics[width=5.6cm, angle=270]{den_m3} \caption{Evolution of the gas density (in g cm$^{-3}$) after the formation of SSSs for Model 1. The density is plotted every 0.1 Myr after $t = t_{frg}$ starting with the solid red line.} \label{fig:dens_feed} \end{figure} As already mentioned, the feedback from the SSSs is calculated by means of tailored Starburst99 simulations. The evolution of the density after the formation of SSSs for Model~1 is shown in Fig. \ref{fig:dens_feed}. It is clear that the stellar winds propagate in part towards the center, piling up the little gas remaining in the cavity. On the other hand, the original supershell continues its propagation outwards. A two-shell structure is formed. The inner supershell cools relatively quickly and becomes denser. The fragmentation time, calculated again as described in Sect. \ref{subs:fragm}, becomes equal to the evolutionary time at $t\simeq$ 0.6 Myr after the insertion of the SSSs, which is also the last time plotted in Fig. \ref{fig:dens_feed}. Notice that, in order to calculate the fragmentation time, we use the positive sign of the term ${\alpha^2}/{t^2}$ in \eq{eq:fragtime}. However, this term turns out to be much smaller than the ${G^2 \Sigma^2}/{4 a_s^2}$ term. At this moment, the inner supershell has reached a distance of $\sim$ 5 pc from the center, and has accumulated slightly more than 10$^4$ M$_\odot$ of gas.
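The role of the sign of the ${\alpha^2}/{t^2}$ term in \eq{eq:fragtime} can be illustrated with a small sketch (cgs units; the shell parameters are illustrative, not taken from the simulation): the positive sign of the contracting inner shell shortens the collapse time-scale relative to the expanding case.

```python
def t_col(sigma, a_s, alpha, t, sign=+1, G=6.674e-8):
    """Collapse time-scale of eq. (fragtime), cgs units.
    sign=+1: collapsing (inward) shell, contraction aids collapse;
    sign=-1: expanding shell, stretching opposes collapse."""
    return (G**2 * sigma**2 / (4.0 * a_s**2) + sign * alpha**2 / t**2) ** -0.5

# Illustrative values (not from the simulation):
sigma, a_s = 2e-2, 5.7e4   # g cm^-2 and cm s^-1 (~50 K gas)
alpha, t   = 0.5, 1.6e14   # expansion exponent; ~5 Myr in seconds

tc_in  = t_col(sigma, a_s, alpha, t, sign=+1)   # inward-propagating shell
tc_out = t_col(sigma, a_s, alpha, t, sign=-1)   # expanding shell
# tc_in < tc_out: the contracting geometry promotes fragmentation,
# but when the gravity term dominates the two values are close.
```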
A significant, but still relatively low (about 20 \%) fraction of this gas is made of ejecta of SSSs, which were able to cool on a short timescale. This short cooling timescale is due to the large densities of the supershell. The fraction of SSS ejecta in the inward-propagating supershell is relatively low because, with our adopted Starburst99 model, the amount of material ejected by stellar winds in the first Myr is quite low. It increases and becomes significant only after 3-4 Myr. {\it Model~2} takes a longer time to propagate towards the center and to fragment, mainly due to the higher densities the supershell has to travel through. The fragmentation occurs after $\sim$ 3 Myr, when the inward-propagating supershell has reached a distance of $\sim$ 8 pc from the center (see Fig. \ref{fig:dens_feed_m2}). The mass of the unstable supershell at this point is relatively large (about 10$^5$ M$_\odot$), but the majority of this gas is pristine, thus the ejecta of the SSSs cannot pollute significantly the new generation of stars. The energy of the SSSs simply pushes the pre-existing supershell inwards. Notice that at an age of 3 Myr the first SNe start exploding. Although we do not see signs of contamination by SN ejecta in this specific model, it is at least conceivable that some GCs might form LG stars partially polluted by ejecta of Type II SNe. \begin{figure}[t] \includegraphics[width=5.6cm, angle=270]{den_m3_lr} \caption{Evolution of the gas density (in g cm$^{-3}$) after the formation of SSSs for Model 2. The density is plotted every 0.5 Myr after $t = t_{frg}$ starting with the solid red line.} \label{fig:dens_feed_m2} \end{figure} \section{Discussion and conclusions} \label{sec:disc} In this work we have studied the explosion of a very energetic Pop-III star and the subsequent supershell expansion. This supershell becomes gravitationally unstable and fragments and, eventually, starts forming stars.
Depending on the explosion energy and on the central density of the ISM surrounding the Pop-III star, gravitational instabilities start growing $\sim$ 0.5--5 Myr after the Pop-III explosion, and eventually lead to the formation of supershell stars (SSSs). The fate of the ejecta of these SSSs is the main subject of this paper. We have seen that the energy of the SSS stellar winds is able in part to accelerate the already existing supershell created by the Pop-III star, and in part to create a new, inward-propagating supershell. With our assumptions, this supershell is particularly rich in SSS ejecta, at least if we assume that the SSSs are initially located close to the inner edge of the supershell. This supershell is able to cool in a very short time (a fraction of a Myr), due to its very large density. We have also checked that this inward-propagating supershell becomes gravitationally unstable and might lead to a new generation of stars. There is thus the possibility of repeating the process described above and creating a new inward-propagating supershell and, from it, a new stellar population. For the moment we have limited our analysis to the study of the formation of this second generation of stars, but we intend to analyze the possibility of forming a third generation of stars in a future work. We have also considered whether our scenario can help explain some of the puzzling aspects of globular clusters (GCs). In particular, we wanted to see whether it can explain some characteristics of GC stars listed in Sect. \ref{sec:review}, namely specificity, predominance, discreteness and supernova avoidance. Our scenario certainly helps explain the specificity of GCs, as it singles out one physical process which was present only in the early Universe (Pop-III stars); see also the discussion in Sect. \ref{sec:idea}. It can also help explain the predominance of late generation (LG) stars as compared to the first generation.
We have seen that, depending on the explosion energy and on the central density, the supershell can fragment and form new stars at a distance between 4 and 20 pc from the center of the explosion. LG stars form much closer to the cluster center, so they have a higher chance of remaining bound to the cluster than first generation (1G) stars (our SSSs). Moreover, SSSs are born with a positive radial velocity, at variance with LG stars, and this increases the chance that a large fraction of 1G stars might be unbound. Even if this argument is sound, our simulations cannot make quantitative predictions about this, and we have to wait for N-body simulations to shed light on the different dynamics and on the fate of 1G and LG stars. As explained in Sect. \ref{sec:review}, \citet{milo17} find a clear correlation between the mass of a GC and the fraction of 1G stars. This fraction is large for low-mass clusters and decreases with increasing GC mass. This result is also broadly consistent with our scenario. In fact, we expect that, if the supershell formed by the Pop-III star is large, then a large number of 1G stars can be formed. These stars, however, due to the large radius of the supershell, have a larger probability of escaping the gravitational potential. As we have seen (Sect. \ref{sec:idea}), the dependence of the chemical complexity in GCs on the stellar mass might also be accounted for in our model. However, it was not our intention in this paper to provide a detailed model of the chemical composition of GCs. The abundances of the different elements are taken from a Starburst99 model, and this is probably inadequate to account for the chemical peculiarities of GCs. The time evolution of the content of He, C, N, O, Na, Si and of other elements will be the subject of a subsequent paper. As regards the dynamics of the stars, although the argument presented here is sound, we still have to wait for N-body simulations for a detailed quantitative analysis.
We conclude this paper with a quick discussion on the comparison between GCs in the Milky Way and in the Magellanic Clouds. Most GCs in the LMC and SMC younger than $\sim$2 Gyr show an extended main-sequence turn off \citep{mack08, milo09, milo16}. Although the connection between this spread and the presence of multiple stellar populations is still not firmly established, the impression is that young and intermediate-age clusters in the Magellanic Clouds are the counterparts of old GCs with multiple populations \citep{cs11}. This connection is fascinating, although only measurements of the abundances of individual stars (for which we must wait for JWST) can tell us whether we might face for GCs in the Magellanic Clouds the same problems we face for Milky Way GCs. For the moment we note that the model of a collapsing supershell may also apply to intermediate-age clusters in the LMC and SMC if there was a sufficiently strong source of energy. This source of energy might also be due to a Pop-III star, but this is unlikely to be the case for young GCs. \acknowledgments Support for this project was provided by the Czech Science Foundation grant 15-06012S and by the project RVO: 6785815. We thank the anonymous referee for very useful and insightful remarks and suggestions. We thank Sona Ehlerova and Anthony Whitworth for assistance with the preparation of the manuscript. \nocite{*}
\section{Byzantine Consensus} \label{Bconsensus} In this section, we briefly review relevant existing results on Byzantine consensus. Byzantine consensus has attracted a significant amount of attention \cite{Dolev:1986:RAA:5925.5931,fekete1990asymptotically,vaidya2013byzantine,LeBlanc2012,vaidya2012iterative,vaidya2014iterative,Mendes:2013:MAA:2488608.2488657}. While past work mostly focuses on scalar inputs, the more general vector (or multi-dimensional) inputs have been studied recently \cite{Mendes:2013:MAA:2488608.2488657,vaidya2013byzantine,vaidya2014iterative}. Complete communication networks are considered in \cite{Mendes:2013:MAA:2488608.2488657,vaidya2013byzantine}, where tight conditions on the number of agents are identified. Incomplete communication networks are studied in \cite{vaidya2014iterative}. Closer to the non-Bayesian learning problem is the class of {\em iterative approximate Byzantine consensus algorithms}, where each agent is only allowed to exchange information about its state with its neighbors. In particular, our learning algorithms build upon the {\em Byz-Iter} algorithm proposed in \cite{vaidya2014iterative} and a simple algorithm proposed in \cite{vaidya2012iterative} for iterative Byzantine consensus with vector inputs and scalar inputs, respectively, in incomplete networks. A matrix representation of the non-faulty agents' state evolution under the {\em Byz-Iter} algorithm is provided in \cite{vaidya2014iterative}, which also captures the dynamics of the simple algorithm with scalar inputs in \cite{vaidya2012iterative}. To make this paper self-contained, in this section, we briefly review the algorithm {\em Byz-Iter} and its matrix representation. \subsection{Algorithm {\em Byz-Iter} \cite{vaidya2014iterative}} Algorithm {\em Byz-Iter} is based on Tverberg's Theorem \cite{Tverberg'sTheorem2007}. \begin{theorem}\cite{Tverberg'sTheorem2007} \label{TG} Let $f$ be a nonnegative integer.
Let $Y$ be a multiset containing vectors from ${\mathbb{R}}^m$ such that $|Y|\ge (m+1)f+1$. There exists a partition $Y_1, Y_2, \cdots, Y_{f+1}$ of $Y$ such that $Y_i$ is nonempty for $1\le i\le f+1$, and the intersection of the convex hulls of the $Y_i$'s is nonempty, i.e., $\cap_{i=1}^{f+1}{\sf Conv}(Y_i)\neq\emptyset$, where ${\sf Conv}(Y_i)$ is the convex hull of $Y_i$ for $i=1, \cdots, f+1$. \end{theorem} The proper partition in Theorem \ref{TG}, and the points in $\cap_{i=1}^{f+1} {\sf Conv}(Y_i)$, are referred to as a {\em Tverberg partition of $Y$} and {\em Tverberg points of $Y$}, respectively. For convenience of presenting our algorithm in Section \ref{main results}, we present {\em Byz-Iter} (described in Algorithm \ref{BconsensusAA}) below using {\em One-Iter} (described in Algorithm \ref{BconsensusA}) as a primitive. The parameter ${\bf x}^i$ passed to {\em One-Iter} at agent $i$, and the value ${\bf y}^i$ returned by {\em One-Iter}, are both $m$-dimensional vectors. Let ${\bf v}^i$ be the state of agent $i$ that will be iteratively updated, with ${\bf v}_t^i$ being the state at the end of iteration $t$ and ${\bf v}_0^i$ being the input of agent $i$. In each iteration $t\ge 1$, a non-faulty agent performs the steps in {\em One-Iter}. In particular, in the message receiving step, if a message is not received from some neighbor, that neighbor must be faulty, as the system is synchronous. In this case, the missing message values are set to some default value. Faulty agents may deviate from the algorithm specification arbitrarily. In {\em Byz-Iter}, the value returned by {\em One-Iter} at agent $i$ is assigned to ${\bf v}_t^i$. \begin{algorithm} \caption{Algorithm {\em One-Iter} ~~with input ${\bf x}^i$ at agent $i$} \label{BconsensusA} \vskip 0.2\baselineskip {\normalsize $Z^i\gets \emptyset$\; Transmit ${\bf x}^i$ on all outgoing links\; \vskip 0.2\baselineskip Receive messages on all incoming links.
{\small \color{OliveGreen}\% These message values form a multiset $R^i$ of size $|{\mathcal{I}}_i|$.\%} \vskip 0.2\baselineskip \For {every $C\subseteq R^i\cup \{{\bf x}^i\}$ such that $|C|=(m+1)f+1$} {add to $Z^i$ a {\em Tverberg point} of multiset $C$} \vskip 0.2\baselineskip Compute ${{\bf y}^i}$ as follows:~~ ${\bf y}^i\gets \frac{1}{1+|Z^i|} \pth{{\bf x}^i+\sum_{{\bf z}\in Z^i}{\bf z}}$\; \vskip 0.2\baselineskip Return ${\bf y}^i$\; } \end{algorithm} \begin{algorithm} \caption{Algorithm {\em Byz-Iter}~ \cite{vaidya2014iterative}: ~~ $t$-th iteration at agent $i$ } \label{BconsensusAA} {\normalsize \vskip 0.2\baselineskip ${\bf v}_t^i \gets$ {\em One-Iter}(${\bf v}_{t-1}^i$)\; \vskip 0.2\baselineskip } \end{algorithm} \begin{remark} Note that for each agent $i\in {\mathcal{N}}$, the computation complexity per iteration is \begin{align*} \Omega \pth{\binom{|R^i\cup \{{\bf x}^i\}|}{(m+1)f+1}} = \Omega \pth{\binom{|{\mathcal{I}}_i|+1}{(m+1)f+1}}. \end{align*} In the worst case, $|{\mathcal{I}}_i|+1=n$, and \begin{align*} \Omega \pth{\binom{|{\mathcal{I}}_i|+1}{(m+1)f+1}}=\Omega \pth{\binom{n}{(m+1)f+1}}= \Omega \pth{\pth{\frac{n}{{\rm e}}}^{(m+1)f+1}}. \end{align*} Since our first learning rule is based on Algorithm {\em Byz-Iter}, the computation complexity of our first proposed algorithm is also high. Nevertheless, our first learning rule contains our main algorithmic ideas. More importantly, this learning rule can be modified such that the computation complexity per iteration per agent is $O(m^2 n\log n)$. Specifically, the modified learning rule adopts scalar Byzantine consensus instead of the $m$--dimensional consensus. This modified learning rule is optimal in the sense that it works under minimal network identifiability requirements.
\end{remark} \subsection{Correctness of Algorithm {\em Byz-Iter}} We briefly summarize the aspects of the correctness proof of Algorithm \ref{BconsensusAA} from \cite{vaidya2014iterative} that are necessary for our subsequent discussion. By using the Tverberg points in the update of ${\bf v}_t^i$ above, effectively, the extreme message values (that may potentially be sent by faulty agents) are trimmed away. Informally speaking, trimming certain messages can be viewed as ignoring (or removing) incoming links that carry the outliers. \cite{vaidya2014iterative} shows that the effective communication network thus obtained can be characterized by a ``reduced graph'' of $G({\mathcal{V}}, {\mathcal{E}})$, defined below. It is important to note that the non-faulty agents {\bf do not} know the identity of the faulty agents. \begin{definition}[$m$--dimensional reduced graph] \label{reduced graph} An $m$--dimensional reduced graph ${\mathcal{H}}({\mathcal{N}}, {\mathcal{E}}_{{\mathcal{F}}})$ of $G({\mathcal{V}}, {\mathcal{E}})$ is obtained by (i) removing all faulty nodes ${\mathcal{F}}$, and all the links incident on the faulty nodes ${\mathcal{F}}$; and (ii) for each non-faulty node (nodes in ${\mathcal{N}}$), removing up to $mf$ additional incoming links. \end{definition} \begin{definition} \label{source} A source component in any given $m$--dimensional reduced graph is a strongly connected component (of that reduced graph) which does not have any incoming links from outside that component. \end{definition} It turns out that the effective communication network is potentially time-varying, partly due to the time-varying behavior of the faulty nodes. Assumption \ref{sufficient} below states a condition that is sufficient for reaching approximate Byzantine vector consensus using Algorithm \ref{BconsensusA} \cite{vaidya2014iterative}. \begin{assumption} \label{sufficient} Every $m$--dimensional reduced graph of $G({\mathcal{V}}, {\mathcal{E}})$ contains a unique source component.
\end{assumption} Let ${\mathcal{C}}_m$ be the set of all the $m$--dimensional reduced graphs of $G({\mathcal{V}}, {\mathcal{E}})$. Define $\chi_m \triangleq |{\mathcal{C}}_m| $. Since $G({\mathcal{V}}, {\mathcal{E}})$ is finite, we have $\chi_m<\infty$. Let ${\mathcal{H}}_m \in {\mathcal{C}}_m$ be an $m$--dimensional reduced graph of $G({\mathcal{V}}, {\mathcal{E}})$ with source component ${\mathcal{S}}_{{\mathcal{H}}_m}$. Define \begin{align} \label{minimal source} \gamma_m\triangleq \min_{{\mathcal{H}}_m\in {\mathcal{C}}_m} |{\mathcal{S}}_{{\mathcal{H}}_m}|, \end{align} i.e., $\gamma_m$ is the minimum source component size among all the $m$--dimensional reduced graphs. Note that $\gamma_m\ge 1$ if Assumption \ref{sufficient} holds for a given $m$. \begin{theorem}\cite{vaidya2014iterative} \label{correctness of Byz-Iter} Suppose Assumption \ref{sufficient} holds for a given $m\ge 1$. Under Algorithm {\em Byz-Iter}, all the non-faulty agents (agents in ${\mathcal{N}}$) reach consensus asymptotically, i.e., $\lim_{t\to\infty} \|{\bf v}_t^i-{\bf v}_t^j\| =0, \forall \, i,j\in {\mathcal{N}}. $ \end{theorem} The proof of Theorem \ref{correctness of Byz-Iter} relies crucially on a matrix representation of the state evolution. \subsection{Matrix Representation \cite{vaidya2014iterative}} \label{mr} Let $|{\mathcal{F}}|=\phi$ (thus, $0\leq\phi\leq f$). Without loss of generality, assume that agents $1$ through $n-\phi$ are non-faulty, and agents $n-\phi+1$ to $n$ are Byzantine. \begin{lemma}\cite{vaidya2014iterative} \label{matrix lemma} Suppose Assumption \ref{sufficient} holds for a given $m\ge 1$.
The state updates performed by the non-faulty agents in the $t$--th iteration ($t\ge 1$) can be expressed as \begin{align} \label{evo matrix} {\bf v}_t^i=\sum_{j=1}^{n-\phi}{\bf A}_{ij}[t]{\bf v}^j_{t-1}, \end{align} where ${\bf A}[t]\in {\mathbb{R}}^{(n-\phi)\times (n-\phi)}$ is a {\em row stochastic} matrix for which there exists an $m$--dimensional reduced graph ${\mathcal{H}}_m[t]$ with adjacency matrix ${\bf H}_m[t]$ such that $ {\bf A}[t]\ge \beta_m {\bf H}_m[t]$, where $0<\beta_m\le 1$ is a constant that depends only on $G({\mathcal{V}}, {\mathcal{E}})$. \end{lemma} Let ${\bf \Phi}(t,r) \triangleq {\bf A}[t]\cdots {\bf A}[r]$ for $1\le r\le t+1$. By convention, ${\bf \Phi}(t,t)={\bf A}[t]$ and ${\bf \Phi} (t, t+1)={\bf I}$. Note that ${\bf \Phi}(t,r)$ is a backward product. Using prior work on coefficients of ergodicity \cite{Hajnal58}, under Assumption \ref{sufficient}, it has been shown \cite{vaidya2014iterative,wolfowitz1963products} that \begin{align} \label{mixing} \lim_{t\ge r,~ t\to\infty}{\bf \Phi}(t, r)=\mathbf 1 {\bf \pi}(r), \end{align} where ${\bf \pi}(r)\in {\mathbb{R}}^{n-\phi}$ is a stochastic row vector, and $\mathbf 1$ is the column vector with each entry being $1$. Recall that $\chi_m$ is the total number of $m$--dimensional reduced graphs of $G({\mathcal{V}}, {\mathcal{E}})$, that $\beta_m$ is defined in Lemma \ref{matrix lemma}, and that $\phi \triangleq |{\mathcal{F}}|$. The convergence rate in \eqref{mixing} is exponential. \begin{theorem}\cite{vaidya2014iterative} \label{convergencerate} For all $t\ge r\ge 1$, it holds that $\left | {\bf \Phi}_{ij}(t, r)-\pi_j(r)\right |\le (1-\beta_m^\nu)^{\lceil\frac{t-r+1}{\nu}\rceil},$ where $\nu \triangleq \chi_m(n-\phi)$. \end{theorem} Recall that $\gamma_m$ is defined in (\ref{minimal source}). The next lemma is a consequence of the results in \cite{vaidya2014iterative}.
\begin{lemma}\cite{vaidya2014iterative} \label{lblimiting} For any $r\ge 1$, there exists a reduced graph ${\mathcal{H}} [r]$ with source component ${\mathcal{S}}_r$ such that $\pi_i(r)\ge \beta_m^{\chi_m(n-\phi)}$ for each $i\in {\mathcal{S}}_r$. In addition, $|{\mathcal{S}}_r|\ge \gamma_m$. \end{lemma} \subsection{Tight Topological Condition for Scalar Iterative Byzantine Consensus} The above analysis shows that Assumption \ref{sufficient} is sufficient for achieving Byzantine consensus iteratively. For the special case when $m=1$ (i.e., when the inputs provided at individual non-faulty agents are scalars), it has been shown \cite{vaidya2012iterative} that Assumption \ref{sufficient} is also necessary. \begin{theorem} \cite{vaidya2012iterative} \label{iabc} For scalar inputs, iterative approximate Byzantine consensus is achievable among non-faulty agents if and only if every $1$-dimensional reduced graph of $G({\mathcal{V}}, {\mathcal{E}})$ contains only one source component. \end{theorem} Moreover, the following simple algorithm (Algorithm \ref{BconsensusA scalar}) works under Assumption \ref{sufficient} when $m=1$. \begin{algorithm} \caption{Algorithm {Scalar Byzantine Consensus}: iteration $t\ge 1$ \cite{vaidya2012iterative}} \label{BconsensusA scalar} {\normalsize Transmit $v^i[t-1]$ on all outgoing links\; \vskip 0.2\baselineskip Receive messages on all incoming links. {\small \color{OliveGreen}\% These message values $w_j[t]$ for each $j\in {\mathcal{I}}_i$ form a multiset $R^i[t]$ of size $|{\mathcal{I}}_i|$. \%} \vskip 0.2\baselineskip Sort the received values $w_j[t]$ for each $j\in {\mathcal{I}}_i$ in a non-decreasing order\; \vskip 0.2\baselineskip Remove the largest $f$ values and the smallest $f$ values.
{\small \color{OliveGreen}\% Denote the set of indices of incoming neighbors whose values have not been removed at iteration $t$ by ${\mathcal{I}}_i^*[t]$.\%} \vskip 0.2\baselineskip Update $v^i$ as follows: $v^i[t]\gets \frac{\sum_{j\in {\mathcal{I}}_i^*[t]} w_j[t] + v^i[t-1]}{1+|{\mathcal{I}}_i^*[t]|}$\; } \end{algorithm} In addition, it has been shown that the dynamics of the non-faulty agents' states admit the same matrix representation as in Subsection \ref{mr}, with the reduced graph being the $1$--dimensional reduced graph defined in Definition \ref{reduced graph}. With the above background on Byzantine vector consensus, we are now ready to present our first algorithm and its analysis. \section{Conclusion} \label{sec:conclusion} This paper addresses the problem of consensus-based non-Bayesian learning over multi-agent networks when an unknown subset of agents may be adversarial (Byzantine). We propose two learning rules, and characterize the tight network identifiability condition for any consensus-based learning rule of interest to exist. In our first update rule, each agent updates its local beliefs as (up to normalization) the product of (1) the likelihood of the {\em cumulative} private signals and (2) the weighted geometric average of the beliefs of its incoming neighbors and itself. Under reasonable assumptions on the underlying network structure and the global identifiability of the network, we show that all the non-faulty agents asymptotically agree on the true state almost surely. For the case when every agent is failure-free, we show that (with high probability) each agent's beliefs on the wrong hypotheses decrease at rate $O(\exp (-Ct^2))$, where $t$ is the number of iterations, and $C$ is a constant. In general, when agents may be adversarial, the network identifiability condition specified for the above learning rule scales poorly in $m$. In addition, the computation complexity per agent per iteration of this learning rule is prohibitively high.
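To make the complexity gap concrete, the two per-agent, per-iteration costs can be compared directly; the numbers below are illustrative, and the modified-rule cost is reported only up to its $O(m^2 n \log n)$ order (constants omitted).

```python
import math

def byz_iter_cost(n, m, f):
    """Number of ((m+1)f+1)-subsets examined per iteration by Byz-Iter
    (one Tverberg-point computation each) for an agent with n-1 neighbors."""
    return math.comb(n, (m + 1) * f + 1)

def modified_cost(n, m):
    """Order of the per-agent, per-iteration cost of the modified rule,
    O(m^2 n log n), with the constant factor dropped."""
    return m * m * n * math.log(n)

# Illustrative sizes: with m = 3 and f = 2, an agent with 29 neighbors
# already examines C(30, 9) subsets, far beyond the modified rule's cost.
n, m, f = 30, 3, 2
gap = byz_iter_cost(n, m, f) / modified_cost(n, m)
```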
Thus, we propose a modification of our first learning rule, whose complexity per iteration per agent is $O(m^2 n \log n)$. We show that this modified learning rule works under a much weaker global identifiability condition that is independent of $m$. So far we have focused on synchronous systems and static networks; our results may be generalizable to asynchronous systems as well as time-varying networks. Throughout this paper, we assume that consensus among non-faulty agents needs to be achieved. Although this is necessary for the family of consensus-based algorithms (by definition), it is not necessary in general for the non-faulty agents to collaboratively learn the true state. Indeed, there is a tradeoff between the capability of the network to reach consensus and the tightness of the network detectability condition. For instance, if the network is disconnected, then information cannot be propagated across the connected components, so the non-faulty agents in each connected component have to be able to learn the true state on their own. We leave investigating the above tradeoff as future work. \bibliographystyle{abbrv} \input{biblist1} \end{document} \section{BFL in the absence of Byzantine Agents, i.e., $f=0$} \label{failure-free} In this section, we present BFL for the special case in which Byzantine agents are absent, i.e., $f=0$, named Failure-free BFL. Since $f=0$, all the agents in the network are cooperative, and no trimming is needed. Indeed, BFL for $f=0$ is a simple modification of the algorithm proposed in \cite{nedic2014nonasymptotic}.
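To make the update concrete, the per-iteration belief computation of Failure-free BFL (Algorithm \ref{alg: failure-free}) can be sketched in Python as follows. This is a minimal illustration under our own naming (the function and argument names are ours, not the paper's): each belief is proportional to the likelihood of the cumulative private signals times the unweighted geometric mean of the beliefs of the incoming neighbors and the agent itself.

```python
import math

# Minimal sketch (our own naming, not the paper's code) of the update
# mu_t^i(theta) ~ l_i(s_{1,t}^i | theta) * prod_j mu_{t-1}^j(theta)^(1/(|I_i|+1)),
# where j ranges over the incoming neighbors I_i and agent i itself.

def failure_free_update(cum_likelihood, neighbor_beliefs):
    """cum_likelihood: per-hypothesis likelihood of the cumulative signals.
    neighbor_beliefs: belief vectors mu_{t-1}^j for j in I_i and i itself.
    Returns the normalized belief vector mu_t^i."""
    k = len(neighbor_beliefs)  # |I_i| + 1
    unnorm = [
        lik * math.prod(b[theta] for b in neighbor_beliefs) ** (1.0 / k)
        for theta, lik in enumerate(cum_likelihood)
    ]
    total = sum(unnorm)
    return [u / total for u in unnorm]
```

For instance, when all neighbor beliefs are uniform, the geometric mean is itself uniform and the rule reduces to a local Bayesian posterior over the cumulative signals, which is the intuition behind the geometric-averaging fusion.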
\begin{algorithm} \caption{Failure-free BFL} \label{alg: failure-free} {\normalsize \vskip 0.2\baselineskip Transmit current belief vector $\mu_{t-1}^i$ on all outgoing edges\; \vskip 0.2\baselineskip Wait until a private signal $s_t^i$ is observed and belief vectors are received from all incoming neighbors ${\mathcal{I}}_i$\; \vskip 0.2\baselineskip \For{$\theta\in \Theta$} {$\mu_{t}^i(\theta)\gets \frac{\ell_i(s_{1, t}^i|\theta)\prod_{j\in {\mathcal{I}}_i\cup \{i\}} \mu_{t-1}^j(\theta)^{\frac{1}{|{\mathcal{I}}_i|+1}}}{\sum_{p=1}^m \ell_i(s_{1, t}^i|\theta_p)\prod_{j\in {\mathcal{I}}_i\cup \{i\}} \mu_{t-1}^j(\theta_p)^{\frac{1}{|{\mathcal{I}}_i|+1}}}.$} } \end{algorithm} For each time $t\ge 1$, we define a matrix that follows the structure of $G({\mathcal{V}}, {\mathcal{E}})$ as follows: \begin{align} \label{matrix 1} {\bf A}_{ij}\triangleq \begin{cases} \frac{1}{|{\mathcal{I}}_i|+1}, & j\in {\mathcal{I}}_i\cup \{i\}\\ 0, & \text{otherwise. } \end{cases} \end{align} Thus, the dynamics of $\psi_t^i (\theta, \theta^*)$ (defined in \eqref{b1}) under Algorithm \ref{alg: failure-free} can be written as \begin{align*} \psi_t^i (\theta, \theta^*) &= \log \frac{\mu_t^i(\theta)}{\mu_t^i(\theta^*)}\\ &=\log \frac{\ell_i(s_{1, t}^i|\theta)\prod_{j\in {\mathcal{I}}_i\cup \{i\}} \mu_{t-1}^j(\theta)^{\frac{1}{|{\mathcal{I}}_i|+1}}}{\ell_i(s_{1, t}^i|\theta^*)\prod_{j\in {\mathcal{I}}_i\cup \{i\}} \mu_{t-1}^j(\theta^*)^{\frac{1}{|{\mathcal{I}}_i|+1}}}\\ &=\log \prod_{j\in {\mathcal{I}}_i \cup \{i\}}\qth{\frac{\mu_{t-1}^j(\theta)}{\mu_{t-1}^j(\theta^*)}}^{\frac{1}{|{\mathcal{I}}_i|+1}}+\log \frac{\ell_i(s_{1, t}^i\mid \theta)}{\ell_i(s_{1, t}^i\mid \theta^*)}\\ &=\log \prod_{j\in {\mathcal{I}}_i \cup \{i\}}\qth{\frac{\mu_{t-1}^j(\theta)}{\mu_{t-1}^j(\theta^*)}}^{\frac{1}{|{\mathcal{I}}_i|+1}}+\sum_{r=1}^t\log \frac{\ell_i(s_{r}^i\mid \theta)}{\ell_i(s_{r}^i\mid \theta^*)}\\ &=\sum_{j=1}^n {\bf A}_{ij}\psi_{t-1}^j (\theta, \theta^*)+\sum_{r=1}^t {\mathcal{L}}_r^i(\theta, \theta^*)
~~~\text{by \eqref{b1} and \eqref{matrix 1}.} \end{align*} Recall that $\bm{\psi}_{t} (\theta, \theta^*)\in {\mathbb{R}}^{n-\phi}$ is the vector that stacks the quantities $\psi_{t}^i (\theta, \theta^*)$, with the $i$--th entry being $\psi_{t}^i (\theta, \theta^*)$ for each $i\in {\mathcal{N}}$. Since $f=0$, i.e., the network is free of failures, it holds that $$ 0\le \phi=|{\mathcal{F}}|\le f=0.$$ Thus, $\bm{\psi}_{t} (\theta, \theta^*)\in {\mathbb{R}}^{n}$. Similar to \eqref{int1}, the evolution of $\bm{\psi}_{t} (\theta, \theta^*)$ can be compactly written as follows: \begin{align} \label{ff int1} \nonumber \bm{\psi}_{t}(\theta, \theta^*)&={\bf A}^t\bm{\psi}_0(\theta, \theta^*)+\sum_{r=1}^{t} {\bf A}^{t-r}\sum_{k=1}^r{\mathcal{L}}_k(\theta, \theta^*)\\ &=\sum_{r=1}^{t} {\bf A}^{t-r}\sum_{k=1}^r{\mathcal{L}}_k(\theta, \theta^*). \end{align} The last equality follows from the fact that $\bm{\psi}_0(\theta, \theta^*)=\mathbf 0$. As mentioned before, the non-Bayesian learning rules \cite{nedic2014nonasymptotic,rad2010distributed,Lalitha2014,shahrampour2014distributed} are consensus-based learning algorithms, wherein agents are required to reach a common decision asymptotically. \begin{assumption} \label{graph failure free} The underlying communication network $G({\mathcal{V}}, {\mathcal{E}})$ is strongly connected. \end{assumption} It is easy to see that $G({\mathcal{V}}, {\mathcal{E}})$ itself is the only reduced graph of $G({\mathcal{V}}, {\mathcal{E}})$, and that Assumption \ref{graph failure free} is the special case of Assumption \ref{sufficient} when $f=0$. Thus, $$ \chi_m=1, ~~~~~\text{and } ~~~~ \nu_m=\chi_m(n-\phi)=n.$$ Note that both $\chi_m$ and $\nu_m$ are independent of $m$ when $f=0$. Henceforth in this section, we drop the subscripts of $\chi_m$ and $\nu_m$ for ease of notation. Similar to \eqref{mixing}, for any $r\ge 1$, we get \begin{align*} \lim_{t\ge r, ~ t\to\infty} {\bf A}^{t-r} = \mathbf 1 \bm{\pi}.
\end{align*} Since ${\bf A}$ is time-invariant, the limit $\lim_{t\ge r, ~ t\to\infty} {\bf A}^{t-r}$ is also independent of $r$. It is easy to see that $$ {\bf A} \ge \frac{1}{n} {\bf H} ~~~\text{entry-wise},$$ where ${\bf H}$ is the adjacency matrix of the communication graph $G({\mathcal{V}}, {\mathcal{E}})$, and that \begin{align} \label{ff ll} \pi_j \ge \frac{1}{n^n}, ~~~\forall\, j=1, \cdots, n. \end{align} The following corollary is a direct consequence of Theorem \ref{convergencerate}, and its proof is omitted. \begin{corollary} \label{ff convergencerate} For all $t\ge r\ge 1$, it holds that $\left | [{\bf A}^{t-r}]_{ij}-\pi_j\right |\le (1-\frac{1}{n^n})^{\lceil\frac{t-r}{n}\rceil},$ where $[{\bf A}^{t-r}]_{ij}$ is the $i,j$--th entry of matrix ${\bf A}^{t-r}$. \end{corollary} In addition, when $f=0$, Assumption \ref{ass} becomes \begin{assumption} \label{ass failure-free} Suppose that Assumption \ref{graph failure free} holds. For any $\theta\not=\theta^*,$ the following holds: \begin{align} \label{failure-free identify} \sum_{j=1}^n D\pth{\ell_j(\cdot |\theta^*)\parallel\ell_j(\cdot |\theta)}~\not=~0. \end{align} \end{assumption} As an immediate consequence of Theorem \ref{almost sure}, we have the following corollary. \begin{corollary} \label{almost sure ff} When Assumption \ref{ass failure-free} holds, each agent $i$ will concentrate its belief on the true hypothesis $\theta^*$ almost surely, i.e., $\mu_t^i(\theta) \xrightarrow{{\rm a.s.}} 0$ for all $\theta\not= \theta^*$. \end{corollary} Since Corollary \ref{almost sure ff} is the special case of Theorem \ref{almost sure} for $f=0$, the proof of Corollary \ref{almost sure ff} is omitted. \subsection{Finite-Time Analysis of Failure-Free BFL} In this subsection, we present the convergence rate that is achievable in finite time with high probability. Our proof is similar to the proofs presented in \cite{nedic2014nonasymptotic,shahrampour2014distributed}.
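Assumption \ref{ass failure-free} can be checked directly from the agents' marginals. The sketch below (with illustrative distributions and our own naming, not code from the paper) computes the sum of Kullback-Leibler divergences in \eqref{failure-free identify} and reports whether every false hypothesis is distinguished by at least one agent:

```python
import math

def kl(p, q):
    """D(p || q) for finite distributions with full support."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def identifiable(marginals, theta_star):
    """marginals[j][theta] = agent j's signal distribution l_j(. | theta).
    True iff sum_j D(l_j(.|theta*) || l_j(.|theta)) != 0 for every
    theta != theta_star."""
    num_hypotheses = len(marginals[0])
    return all(
        sum(kl(mj[theta_star], mj[theta]) for mj in marginals) > 0
        for theta in range(num_hypotheses)
        if theta != theta_star
    )
```

Since each Kullback-Leibler term is nonnegative, the sum is nonzero exactly when some agent's marginals under $\theta^*$ and $\theta$ differ, which is the content of the assumption.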
\begin{lemma} \label{expect} Let $\lambda\triangleq \pth{1-(\frac{1}{n})^n}^{\frac{1}{n}}$, let $\theta\not=\theta^*$, and consider $\psi_t^i(\theta, \theta^*)$ as defined in \eqref{b1}. Then, for each agent $i$ we have $$\expect{\psi_t^i(\theta, \theta^*)} \le \frac{nC_0}{(1-\frac{1}{n^n})(1-\lambda)}t -\frac{C_1}{2n^n} t^2. $$ \end{lemma} \begin{proof} By \eqref{ff int1}, we have $\psi_{t}^i (\theta, \theta^*)=\sum_{r=1}^t \sum_{j=1}^n [{\bf A}^{t-r}]_{ij} \sum_{k=1}^r {\mathcal{L}}_k^j(\theta, \theta^*). $ Taking the expectation of $\psi_{t}^i (\theta, \theta^*)$ with respect to the signal distributions under $\theta^*$, we get \begin{align} \label{ff exp} \nonumber \mathbb{E}^*\qth{\psi^i_{t}(\theta, \theta^*)}&=\mathbb{E}^*\qth{ \sum_{r=1}^t \sum_{j=1}^n [{\bf A}^{t-r}]_{ij} \sum_{k=1}^r {\mathcal{L}}_k^j(\theta, \theta^*)}\\ \nonumber &=\sum_{r=1}^t \sum_{j=1}^n [{\bf A}^{t-r}]_{ij} \sum_{k=1}^r \mathbb{E}^*\qth{{\mathcal{L}}_k^j(\theta, \theta^*)}\\ \nonumber &=\sum_{r=1}^t \sum_{j=1}^n [{\bf A}^{t-r}]_{ij} r H_j(\theta, \theta^*)~~~\text{by ~~\eqref{expected}}\\ &=\sum_{r=1}^t \sum_{j=1}^n \pth{[{\bf A}^{t-r}]_{ij}-\pi_j} r H_j(\theta, \theta^*) +\sum_{r=1}^t \sum_{j=1}^n \pi_j r H_j(\theta, \theta^*). \end{align} For the first term in the right hand side of \eqref{ff exp}, we have \begin{align} \label{ff exp 1} \nonumber \sum_{r=1}^t \sum_{j=1}^n \pth{[{\bf A}^{t-r}]_{ij}-\pi_j} r H_j(\theta, \theta^*)&\le \sum_{r=1}^t \sum_{j=1}^n \left |[{\bf A}^{t-r}]_{ij}-\pi_j\right | r \left |H_j(\theta, \theta^*)\right |\\ \nonumber &\le \sum_{r=1}^t \sum_{j=1}^n \qth{1-\frac{1}{n^n}}^{\lceil \frac{t-r}{n}\rceil} r C_0 ~~~\text{by Corollary \ref{ff convergencerate}, and \eqref{c0}}\\ \nonumber &=n C_0 \sum_{r=1}^t \qth{1-\frac{1}{n^n}}^{\lceil \frac{t-r}{n}\rceil} r\\ &\le \frac{nC_0}{(1-\frac{1}{n^n})(1-\lambda)}t.
\end{align} Since in the failure-free case the unique source component is $G({\mathcal{V}}, {\mathcal{E}})$ itself, $C_1$ (defined in \eqref{c1}) becomes $$C_1 = \min_{\theta, \theta^* \in \Theta; \theta\not= \theta^*} \sum_{i=1}^n D(\ell_i(\cdot | \theta^*) \parallel \ell_i(\cdot | \theta)).$$ Thus, for the second term in the right hand side of \eqref{ff exp}, we get \begin{align} \label{ff exp 2} \nonumber \sum_{r=1}^t \sum_{j=1}^n \pi_j r H_j(\theta, \theta^*) &\le \sum_{r=1}^t \sum_{j=1}^n \frac{1}{n^n} r H_j(\theta, \theta^*)~~~\text{by \eqref{ff ll} and \eqref{expected}}\\ \nonumber &=\frac{1}{n^n}\sum_{r=1}^t r\sum_{j=1}^n H_j(\theta, \theta^*)\\ \nonumber &\le -\frac{1}{n^n}\sum_{r=1}^t r C_1 \\ &\le -\frac{C_1}{2n^n} t^2. \end{align} By \eqref{ff exp 1} and \eqref{ff exp 2}, \eqref{ff exp} becomes \begin{align} \nonumber \mathbb{E}^*\qth{\psi^i_{t}(\theta, \theta^*)}&=\sum_{r=1}^t \sum_{j=1}^n \pth{[{\bf A}^{t-r}]_{ij}-\pi_j} r H_j(\theta, \theta^*) +\sum_{r=1}^t \sum_{j=1}^n \pi_j r H_j(\theta, \theta^*)\\ &\le \frac{nC_0}{(1-\frac{1}{n^n})(1-\lambda)}t - \frac{C_1}{2n^n} t^2, \end{align} proving the lemma. \hfill $\Box$ \end{proof} Similar to \cite{nedic2014nonasymptotic,shahrampour2014distributed}, we also use McDiarmid's inequality. \begin{theorem}[McDiarmid's Inequality] \label{McInequality} Let $X_1, \cdots, X_t$ be independent random variables and consider the mapping $H: {\mathcal{X}}^t\to {\mathbb{R}}$.
If for $r=1, \cdots, t$, and every sample $x_1, \cdots, x_t$, $x_r^{\prime}\in {\mathcal{X}}$, the function $H$ satisfies $$ \left | H(x_1, \cdots, x_r, \cdots, x_t)-H(x_1, \cdots, x_r^{\prime}, \cdots, x_t)\right |\le c_r,$$ then for all $\epsilon>0$, $$ \mathbb{P}\qth{H(x_1, \cdots, x_t)-\mathbb{E}[H(x_1, \cdots, x_t)]\ge \epsilon}\le \exp \sth{\frac{-2\epsilon^2}{\sum_{r=1}^t c_r^2}}.$$ \end{theorem} \begin{theorem} \label{finite time} Under Assumption \ref{ass failure-free}, for any $\rho\in (0,1)$, there exists a time $T(\rho)$ such that, with probability at least $1-\rho$, for all $t\ge T(\rho)$ and for all $\theta\not= \theta^*$, we have \begin{align*} \mu_t^i(\theta)\le \exp \pth{ \frac{nC_0}{(1-\frac{1}{n^n})(1-\lambda)}t -\frac{C_1}{4n^n} t^2 } \end{align*} where $C_0$ and $C_1$ are defined in \eqref{c0} and \eqref{c1} respectively, and $ T(\rho)= \frac{64C_0^2n^{2n}}{3C_1^2}\log \frac{1}{\rho}.$ \end{theorem} \begin{proof} Since $\mu_{t}^i(\theta^*)\in (0,1]$, we have \begin{align*} \mu_t^i(\theta)\le \frac{\mu_t^i(\theta)}{\mu_t^i(\theta^*)} =\exp \pth{\psi_t^i(\theta, \theta^*)}. \end{align*} Thus, we have \begin{align*} \mathbb{P}\pth{\mu_t^i(\theta)\ge \exp \pth{ \frac{nC_0}{(1-\frac{1}{n^n})(1-\lambda)}t -\frac{C_1}{4n^n} t^2 }} &\le \mathbb{P}\pth{\psi_t^i(\theta, \theta^*)\ge \frac{nC_0}{(1-\frac{1}{n^n})(1-\lambda)}t -\frac{C_1}{4n^n} t^2}\\ &\le \mathbb{P}\pth{\psi_t^i(\theta, \theta^*)-\mathbb{E}^*\qth{\psi_t^i(\theta, \theta^*)}\ge \frac{C_1}{4n^n} t^2 }, \end{align*} where the last inequality uses Lemma \ref{expect}. Note that $\psi_t^i(\theta, \theta^*)$ is a function of the random vectors $ {\bf s}_1, \cdots, {\bf s}_t$.
For a given sample path ${\bf s}_1, \cdots, {\bf s}_t$, and for all $p\in \{1, \cdots, t\}$, we have \begin{align*} \quad&\max_{{\bf s}_p\in {\mathcal{S}}_1\times \cdots \times {\mathcal{S}}_n} \psi_t^i(\theta, \theta^*)-\min_{ {\bf s}_p\in {\mathcal{S}}_1\times \cdots \times {\mathcal{S}}_n} \psi_t^i(\theta, \theta^*)\\ &=\max_{{\bf s}_p\in {\mathcal{S}}_1\times \cdots \times {\mathcal{S}}_n} \sum_{r=1}^t \sum_{j=1}^n[{\bf A}^{t-r}]_{ij} \sum_{k=1}^r {\mathcal{L}}_k^j(\theta, \theta^*)-\min_{{\bf s}_p\in {\mathcal{S}}_1\times \cdots \times {\mathcal{S}}_n} \sum_{r=1}^t \sum_{j=1}^n[{\bf A}^{t-r}]_{ij} \sum_{k=1}^r {\mathcal{L}}_k^j(\theta, \theta^*)\\ &=\max_{{\bf s}_p\in {\mathcal{S}}_1\times \cdots \times {\mathcal{S}}_n} \sum_{r=p}^t \sum_{j=1}^n[{\bf A}^{t-r}]_{ij} \sum_{k=1}^r {\mathcal{L}}_k^j(\theta, \theta^*)-\min_{{\bf s}_p\in {\mathcal{S}}_1\times \cdots \times {\mathcal{S}}_n} \sum_{r=p}^t \sum_{j=1}^n[{\bf A}^{t-r}]_{ij} \sum_{k=1}^r {\mathcal{L}}_k^j(\theta, \theta^*)\\ &=\max_{{\bf s}_p\in {\mathcal{S}}_1\times \cdots \times {\mathcal{S}}_n} \sum_{r=p}^t \sum_{j=1}^n[{\bf A}^{t-r}]_{ij} {\mathcal{L}}_p^j(\theta, \theta^*)-\min_{{\bf s}_p\in {\mathcal{S}}_1\times \cdots \times {\mathcal{S}}_n} \sum_{r=p}^t \sum_{j=1}^n[{\bf A}^{t-r}]_{ij} {\mathcal{L}}_p^j(\theta, \theta^*)\\ &\le \sum_{r=p}^t \sum_{j=1}^n[{\bf A}^{t-r}]_{ij} C_0 +\sum_{r=p}^t \sum_{j=1}^n[{\bf A}^{t-r}]_{ij} C_0\\ &=2C_0(t-p+1)\triangleq c_{p}. \end{align*} By McDiarmid's inequality (Theorem \ref{McInequality}), we obtain that \begin{align*} \mathbb{P}\pth{\psi_t^i(\theta, \theta^*)-\mathbb{E}^*\qth{\psi_t^i(\theta, \theta^*)}\ge \frac{C_1}{4n^n} t^2 }&\le \exp \pth{-\frac{2\frac{C_1^2}{16n^{2n}}t^4}{\sum_{p=1}^t (2C_0(t-p+1))^2}}\\ &\le \exp \pth{-\frac{3C_1^2}{64C_0^2 n^{2n}}t}, \end{align*} where the last inequality follows from the fact that $$ t(t+1)(2t+1)\le 4t^3 ~~~\forall \, t\ge 2,$$ which can be shown by induction.
Therefore, for a given confidence level $\rho$, in order to have $$\mathbb{P}\pth{\mu_t^i(\theta)\ge \exp \pth{\frac{nC_0}{(1-\frac{1}{n^n})(1-\lambda)}t -\frac{C_1}{4n^n} t^2}}\le \rho, $$ we require that $$ t\ge T(\rho)= \frac{64C_0^2n^{2n}}{3C_1^2}\log \frac{1}{\rho}.$$ \hfill $\Box$ \end{proof} \begin{remark} The above finite-time analysis is not directly applicable to the general case when $f>0$, due to the fact that the local beliefs are dependent on all the observations collected so far {\em as well as} all the future observations. \end{remark} \begin{remark} Our analysis for the special case when $f=0$ also works for time-varying networks \cite{nedic2014nonasymptotic}. In addition, with an identical analysis, we are able to adapt the failure-free scheme to work in the more general setting where there is no underlying true state, and the goal is to have the agents collaboratively identify an optimal $\theta\in \Theta$ that best explains all the observations obtained over the whole network. \end{remark} \section{Problem Formulation} \label{prob formulation} \paragraph{Network Model:} Our network model is similar to the model used in \cite{DBLP:conf/sss/SuV15,vaidya2012iterative}. We consider a synchronous system. A collection of $n$ agents (also referred to as {\em nodes}) are connected by a {\em directed} network $G({\mathcal{V}}, {\mathcal{E}})$, where ${\mathcal{V}}=\{1, \ldots, n\}$ and ${\mathcal{E}}$ is the collection of {\em directed} edges. For each $i\in {\mathcal{V}}$, let ${\mathcal{I}}_i$ denote the set of incoming neighbors of agent $i$. In any execution, up to $f$ agents suffer Byzantine faults. For a given execution, let ${\mathcal{F}}$ denote the set of Byzantine agents, and ${\mathcal{N}}$ denote the set of non-faulty agents. Throughout this paper, we assume that $f$ satisfies the constraints implicitly imposed by the topology conditions stated later.
We assume that each non-faulty agent knows $f$, but does not know the {\em actual} number of faulty agents $|{\mathcal{F}}|$. \footnote{This is because the upper bound $f$ can be learned via long-term performance statistics, whereas the actual size of ${\mathcal{F}}$ varies across executions, and may be impossible to predict in some applications.} Possible misbehavior of faulty agents includes sending incorrect and mismatching (or inconsistent) messages. The Byzantine agents are also assumed to have complete knowledge of the system, including the network topology, the underlying running algorithm, the states, or even the entire history. The faulty agents may collaborate with each other adaptively \cite{Lynch:1996:DA:525656}. Note that $|{\mathcal{F}}|\le f$ and $|{\mathcal{N}}|\ge n-f$ since at most $f$ agents may fail. (As noted earlier, although we assume a static network topology, our results can be easily generalized to time-varying networks.) Throughout this paper, we use the terms {\em agent} and {\em node} interchangeably. \paragraph{Observation Model:} Our observation model is identical to the model used in \cite{jadbabaie2012non,Lalitha2014,shahrampour2015finite}. Let $\Theta=\{\theta_1, \theta_2, \ldots, \theta_m\}$ denote a set of $m$ environmental states, which we call {\em hypotheses}. In the $t$-th iteration, each agent {\em independently} obtains a private signal about the environmental state $\theta^*$, which is initially unknown to every agent in the network. Each agent $i$ knows the structure of its private signal, which is represented by a collection of parameterized marginal distributions ${\mathcal{D}}^i=\{\ell_i(w_i | \theta)| \, \theta\in \Theta,\, w_i\in {\mathcal{S}}_i\}$, where $\ell_i(\cdot | \theta)$ is the distribution of the private signal when $\theta$ is the true state, and ${\mathcal{S}}_i$ is the finite private signal space.
For each $\theta \in \Theta$, and each $i\in {\mathcal{V}}$, the support of $\ell_i(\cdot|\theta)$ is the whole signal space, i.e., $\ell_i(w_i|\theta)>0$, $\forall\, w_i\in {\mathcal{S}}_i$ and $\forall\, \theta \in \Theta$. Let $s_t^i$ be the private signal observed by agent $i$ in iteration $t$, and let ${\bf s}_t=\{s_t^1, s_t^2, \ldots, s_t^n\}$ be the signal profile at time $t$ (i.e., the signals observed by the agents in iteration $t$). Given an environmental state $\theta$, the signal profile ${\bf s}_t$ is generated according to the joint distribution $\ell_1(s_t^1|\theta)\times \ell_2(s_t^2|\theta)\times \cdots \times \ell_n(s_t^n|\theta)$. In addition, let $s^i_{1, t}$ be the signal history up to time $t$ for agent $i=1, \cdots, n$, and let ${\bf s}_{1,t}=\{s_{1,t}^1, s_{1,t}^2, \ldots, s_{1,t}^n\}$ be the signal profile history up to time $t$. \section{Introduction} \label{intro} Decentralized hypothesis testing (learning) has received a significant amount of attention \cite{chamberland2003decentralized,gale2003bayesian,jadbabaie2012non,Tsitsiklis1988,tsitsiklis1993decentralized,varshney2012distributed,wong2012stochastic}. The traditional decentralized detection framework consists of a collection of spatially distributed sensors and a fusion center \cite{Tsitsiklis1988,tsitsiklis1993decentralized,varshney2012distributed}. The sensors independently collect {\em noisy} observations of the environment state, and send only a {\em summary} of the private observations to the fusion center, where a final decision is made. In the case when the sensors directly send all the private observations, the detection problem can be solved using a centralized scheme. The above framework does not scale well, since each sensor needs to be connected to the fusion center, and full reliability of the fusion center is required, which may not be practical as the system scales.
Distributed hypothesis testing in the {\em absence} of a fusion center is considered in \cite{gale2003bayesian,cattivelli2011distributed,jakovetic2012distributed,bajovic2012large}. In particular, Gale and Kariv \cite{gale2003bayesian} studied the distributed hypothesis testing problem in the context of social learning, where a fully Bayesian belief update rule is studied. The Bayesian update rule is impractical in many applications due to the memory and computation constraints of each agent. To avoid the complexity of Bayesian learning, a non-Bayesian learning framework that combines local Bayesian learning with distributed consensus was proposed by Jadbabaie et al. \cite{jadbabaie2012non}, and has attracted much attention \cite{jadbabaie2013information,nedic2014nonasymptotic,rad2010distributed,shahrampour2013exponentially,Lalitha2014,shahrampour2015finite,shahrampour2014distributed,molavi2015foundations}. Jadbabaie et al. \cite{jadbabaie2012non} considered the general setting where external signals are observed during each iteration of the algorithm execution. Specifically, the belief of each agent is repeatedly updated as the arithmetic mean of its local Bayesian update and the beliefs of its neighbors -- combining an iterative consensus algorithm with a local Bayesian update. It is shown in \cite{jadbabaie2012non} that, under this learning rule, each agent learns the true state almost surely. The publication of \cite{jadbabaie2012non} has inspired significant efforts in designing and analyzing non-Bayesian learning rules, with a particular focus on refining the fusion strategies and analyzing the (asymptotic and/or finite-time) convergence rates of the refined algorithms \cite{jadbabaie2013information,nedic2014nonasymptotic,rad2010distributed,shahrampour2013exponentially,Lalitha2014,shahrampour2015finite,shahrampour2014distributed,molavi2015foundations}.
In this paper we are particularly interested in the log-linear form of the update rule, in which, essentially, each agent updates its belief as the geometric average of its local Bayesian update and its neighbors' beliefs \cite{rad2010distributed,jadbabaie2013information,nedic2014nonasymptotic,shahrampour2013exponentially,Lalitha2014,shahrampour2015finite,shahrampour2014distributed,molavi2015foundations}. The log-linear (geometric-averaging) update rule is shown to converge exponentially fast \cite{jadbabaie2013information,shahrampour2013exponentially}. Taking an axiomatic approach, \cite{molavi2015foundations} proves that geometric-averaging fusion is optimal. An optimization-based interpretation of this rule is presented in \cite{shahrampour2013exponentially}, using a dual averaging method with properly chosen proximal functions. Finite-time convergence rates are investigated independently in \cite{nedic2014nonasymptotic,Lalitha2014,shahrampour2014distributed}. Both \cite{nedic2014nonasymptotic} and \cite{shahrampour2015finite} consider time-varying networks, with slightly different network models. Specifically, \cite{nedic2014nonasymptotic} assumes that the union of every $B$ consecutive networks is strongly connected, while \cite{shahrampour2015finite} considers random networks. In this paper, we consider static networks for ease of exposition, although we believe that our results can be easily generalized to time-varying networks. \vskip 0.5\baselineskip The prior work implicitly assumes that the networked agents are reliable in the sense that they correctly follow the specified learning rules. However, in some practical multi-agent networks, this assumption may not hold. For example, in social networks, it is possible that some agents are adversarial, and try to prevent the true state from being learned by the good agents. Thus, this paper focuses on the fault-tolerant version of the non-Bayesian framework proposed in \cite{jadbabaie2012non}.
In particular, we assume that an unknown subset of agents may suffer Byzantine faults. An agent suffering a Byzantine fault may not follow the pre-specified algorithms/protocols, and may {\em misbehave arbitrarily}. For instance, a faulty agent may lie to other agents (possibly inconsistently) about its own estimates. In addition, a faulty agent is assumed to have complete knowledge of the system, including the network topology, the local functions of all the non-faulty agents, the algorithm specification of the non-faulty agents, the execution of the algorithm, the local estimates of all the non-faulty agents, and the contents of the messages the other agents send to each other. Also, the faulty agents can potentially collaborate with each other to prevent the non-faulty agents from achieving their goal. An alternative fault model, where some agents may unexpectedly cease computing and agents communicate with each other asynchronously, is considered in our companion work \cite{su2016asynchronous}. The Byzantine fault-tolerance problem was introduced by Pease et al. \cite{PeaseShostakLamport} and has attracted intensive attention from researchers \cite{Dolev:1986:RAA:5925.5931,fekete1990asymptotically,LeBlanc2012,vaidya2012iterative,vaidya2014iterative,Mendes:2013:MAA:2488608.2488657}. Our goal is to design algorithms that enable all the non-faulty agents to learn the underlying true state. The existing non-Bayesian learning algorithms \cite{jadbabaie2013information,Lalitha2014,molavi2015foundations,nedic2014nonasymptotic,rad2010distributed,shahrampour2013exponentially,shahrampour2014distributed,shahrampour2015finite} are not robust to Byzantine agents, since the malicious messages sent by the Byzantine agents are indiscriminately utilized in the local belief updates.
On the other hand, the incorporation of Byzantine consensus is non-trivial, since (i) the {\em effective} communication networks are {\em dependent} on all the random local observations, making it non-trivial to adapt the analysis of previous algorithms to our setting; and (ii) the problem of identifying a tight topological condition for reaching Byzantine multi-dimensional consensus iteratively is open, making it challenging to identify the minimal detectability condition under which the networked agents can learn the true environmental state. \paragraph{\bf Contributions:} Our contributions are two-fold. \begin{itemize} \item We first propose an update rule wherein each agent iteratively updates its local beliefs as (up to normalization) the product of (1) the likelihood of the {\em cumulative} private signals and (2) the weighted geometric average of the beliefs of its incoming neighbors and itself (using iterative Byzantine multi-dimensional consensus). In contrast to the existing algorithms \cite{nedic2014nonasymptotic,Lalitha2014}, where only the {\em current} private signal is used in the update, our proposed algorithm relies on the {\em cumulative} private signals. Under reasonable assumptions on the underlying network structure and the global identifiability of the network, we show that all the non-faulty agents asymptotically agree on the true state almost surely. In addition, for the special case when every agent is guaranteed to be failure-free, we show that (with high probability) each agent's beliefs on the wrong hypotheses decrease at rate $O(\exp (-Ct^2))$, where $t$ is the number of iterations, and $C$ is a constant. Thus, our proposed rule may be of independent interest for the failure-free setting considered in \cite{jadbabaie2013information,Lalitha2014,molavi2015foundations,nedic2014nonasymptotic,rad2010distributed,shahrampour2013exponentially,shahrampour2014distributed,shahrampour2015finite}.
\item The local computational complexity per agent of the first learning rule is high due to the adoption of multi-dimensional consensus primitives. More importantly, the network identifiability condition used for that learning rule scales poorly in the number of possible states $m$. Thus, we propose a modification of our first learning rule, whose complexity per iteration per agent is $O(m^2 n \log n)$, where $n$ is the number of agents in the network. We show that this modified learning rule works under a much weaker global identifiability condition, which is independent of $m$. To do so, we cast the general $m$--ary hypothesis testing problem into a collection of binary hypothesis testing sub-problems. \end{itemize} \paragraph{\bf Outline:} The rest of the paper is organized as follows. Section \ref{prob formulation} presents the problem formulation. Section \ref{Bconsensus} briefly reviews existing results on vector Byzantine consensus and the matrix representation of the state evolution. Our first algorithm and its correctness analysis are presented in Section \ref{main results}. Section \ref{failure-free} specializes the above learning rule to the case when $f=0$, and presents a finite-time analysis. The modified learning rule and its correctness analysis are summarized in Section \ref{modified}. Section \ref{sec:conclusion} concludes the paper, and discusses possible extensions. \section{Byzantine Fault-Tolerant Non-Bayesian Learning (BFL)} \label{main results} In this section, we present our first learning rule, named Byzantine Fault-Tolerant Non-Bayesian Learning (BFL). In BFL, each agent $i$ maintains a belief vector $\mu^i\in {\mathbb{R}}^m$, which is a distribution over the set $\Theta$, with $\mu^i(\theta)$ being the probability with which agent $i$ {\em believes} that $\theta$ is the true environmental state.
Since no signals are observed before the execution of an algorithm, the belief $\mu^i$ is often initially set to be uniform over the set $\Theta$, i.e., $\pth{\mu_0^i(\theta_1), \mu_0^i(\theta_2), \ldots, \mu_0^i(\theta_m)}^T=\pth{\frac{1}{m}, \ldots, \frac{1}{m}}^T$. Recall that $\theta^*$ is the true environmental state. We say the networked agents collaboratively learn $\theta^*$ if for every non-faulty agent $i\in {\mathcal{N}}$, \begin{align} \label{goal} \lim_{t\to\infty}\mu_t^i(\theta^*) ~=~ 1 \,\,\, a.s., \end{align} where $a.s.$ denotes {\em almost surely}. BFL is a modified version of the geometric averaging update rule that has been investigated in previous work \cite{nedic2014nonasymptotic,rad2010distributed,Lalitha2014,shahrampour2014distributed}. In particular, we modify the averaging rule to take into account Byzantine faults. More importantly, in each iteration, we use the likelihood of the {\em cumulative} local observations (instead of the likelihood of the {\em current} observation only) to update the local beliefs. \vskip 0.5\baselineskip For $t\ge 1$, the steps to be performed by agent $i$ in the $t$--th iteration are listed below. Note that faulty agents can deviate from the algorithm specification. The algorithm below uses {\em One-Iter}, presented in the previous section, as a primitive. Recall that $s_{1,t}^i$ denotes the cumulative local observations up to iteration $t$. Since the observations are $i.i.d.$, it holds that $\ell_i(s_{1,t}^i|\theta)=\prod_{r=1}^t \ell_i(s_{r}^i |\theta)$. Hence $\ell_i(s_{1,t}^i|\theta)$ can be computed iteratively in Algorithm \ref{alg:new}.
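The recursion $\ell_i(s_{1,t}^i|\theta)=\ell_i(s^i_t|\theta)\,\ell_i(s_{1,t-1}^i|\theta)$ costs one multiplication per hypothesis per iteration. A small sketch follows; the naming is ours and the lookup table \texttt{signal\_models} is an assumption for illustration. In practice the product of many likelihoods underflows, so it is safer to accumulate log-likelihoods:

```python
import math

# Sketch of the cumulative-likelihood recursion in log space (our own naming;
# `signal_models[theta][s]` is an assumed lookup table for l_i(s | theta)).

def update_cumulative(cum_log, signal, signal_models):
    """cum_log[theta] = log l_i(s_{1,t-1}^i | theta); returns the t-step values."""
    return [c + math.log(signal_models[theta][signal])
            for theta, c in enumerate(cum_log)]
```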
\begin{algorithm} \caption{BFL:~Iteration $t\geq 1$ at agent $i$} \label{alg:new} {\normalsize $\eta_t^i\gets $ \,{\em One-Iter}$(\log \mu_{t-1}^i)$\; Observe $s_t^i$\; \For{$\theta\in\Theta$} {$\ell_i(s_{1,t}^i|\theta)\gets \ell_i(s^i_t|\theta)\, \ell_i(s_{1,t-1}^i|\theta)$\; $\mu_{t}^i(\theta)\gets \frac{\ell_i(s_{1, t}^i|\theta)\exp \pth{\eta_{t}^i(\theta)}}{\sum_{p=1}^m \ell_i(s_{1, t}^i|\theta_p)\exp \pth{\eta_{t}^i(\theta_p)}}$\;} } \end{algorithm} The main differences between Algorithm \ref{alg:new} and the algorithms in \cite{nedic2014nonasymptotic,rad2010distributed,Lalitha2014,shahrampour2014distributed} are that (i) our algorithm uses a Byzantine consensus iteration as a primitive (in line 1), and (ii) the $\ell_i(s_{1,t}^i|\theta)$ used in line 5 is the likelihood of the observations from iteration 1 to $t$ (the previous algorithms instead use $\ell_i(s_t^i |\theta)$ here). Observe that the consensus step is performed on the $\log$ of the beliefs, with the result being stored as $\eta_t^i$ (in line 1) and used in line 5 to compute the new beliefs.
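Putting the lines of Algorithm \ref{alg:new} together, one BFL iteration can be sketched as below. The consensus primitive {\em One-Iter} is stubbed by a plain coordinate-wise average purely for illustration (our stub, not the actual primitive, which runs the trimming-based Byzantine consensus iteration reviewed in Section \ref{Bconsensus}); all names are ours.

```python
import math

def one_iter_stub(log_beliefs):
    # Stand-in for the One-Iter primitive: coordinate-wise average of the
    # log-beliefs.  The real primitive additionally discards extreme values.
    k = len(log_beliefs)
    return [sum(col) / k for col in zip(*log_beliefs)]

def bfl_step(cum_likelihood, signal_likelihood, log_beliefs):
    """cum_likelihood: l_i(s_{1,t-1}^i | theta) per hypothesis.
    signal_likelihood: l_i(s_t^i | theta) per hypothesis.
    log_beliefs: log mu_{t-1}^j for j in I_i and i itself.
    Returns (l_i(s_{1,t}^i | .), normalized belief vector mu_t^i)."""
    eta = one_iter_stub(log_beliefs)                      # consensus step
    new_cum = [c * s for c, s in zip(cum_likelihood, signal_likelihood)]
    unnorm = [c * math.exp(e) for c, e in zip(new_cum, eta)]
    total = sum(unnorm)
    return new_cum, [u / total for u in unnorm]
```

With uniform neighbor beliefs, the returned belief vector is simply the normalized cumulative likelihood, matching the local Bayesian posterior one expects in that degenerate case.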
Recalling the matrix representation of the {\em Byz-Iter} algorithm as per Lemma \ref{matrix lemma}, we can write the following equivalent representation of line 1 of Algorithm \ref{alg:new}. \begin{eqnarray} \eta_t^i(\theta) & = & \sum_{j=1}^{n-\phi} {\bf A}_{ij}[t]\log \mu_{t-1}^j(\theta)=\log \prod_{j=1}^{n-\phi}\mu_{t-1}^j(\theta)^{{\bf A}_{ij}[t]}, ~~~~\forall \theta\in \Theta, \label{e:eta} \end{eqnarray} where ${\bf A}[t]$ is a row-stochastic matrix whose properties are specified in Lemma \ref{matrix lemma}. Note that $\mu_{t}^i(\theta)$ is {\bf random} for each $i\in {\mathcal{N}}$ and $t\ge 1$, as it is updated according to local random observations. Since the consensus is performed over $\log \mu_{t}^i\in {\mathbb{R}}^m$, the update matrix ${\bf A}[t]$ is also {\bf random}. In particular, for each $t\ge 1$, matrix ${\bf A}[t]$ depends on {\em all the cumulative observations over the network} up to iteration $t$. This dependency makes it non-trivial to adapt the analysis of previous algorithms to our setting. In addition, adopting the local cumulative observation likelihood makes the analysis with Byzantine faults easier. \subsection{Identifiability} In the absence of agent failures \cite{jadbabaie2012non}, for the networked agents to detect the true hypothesis $\theta^*$, it is sufficient to assume that $G({\mathcal{V}}, {\mathcal{E}})$ is strongly connected, and that $\theta^*$ is globally identifiable. 
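The identity in \eqref{e:eta}, rewriting the weighted sum of log-beliefs as the log of a weighted geometric mean, can be verified numerically. The weights below form an arbitrary row-stochastic example; all values are purely illustrative.

```python
import math

# One row of a row-stochastic matrix A[t] for an agent with 3 non-faulty peers.
A_row = [0.5, 0.3, 0.2]
beliefs = [0.6, 0.2, 0.9]  # mu_{t-1}^j(theta) for a fixed hypothesis theta

# Left-hand side of (e:eta): weighted sum of log-beliefs.
eta_log = sum(a * math.log(m) for a, m in zip(A_row, beliefs))

# Right-hand side: log of the weighted geometric mean of the beliefs.
geo = 1.0
for a, m in zip(A_row, beliefs):
    geo *= m ** a
```

The two sides agree up to floating-point error, which is the point of the identity: averaging log-beliefs is geometric averaging of beliefs.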
That is, for any $\theta\not=\theta^*$, there exists a node $j\in {\mathcal{V}}$ such that the Kullback-Leibler divergence between the true marginal $\ell_j(\cdot |\theta^*)$ and the marginal $\ell_j(\cdot |\theta)$, denoted by $D \pth{\ell_j(\cdot |\theta^*)||\ell_j(\cdot |\theta)}$, is nonzero; equivalently, \begin{align} \label{failure-freeidentify} \sum_{j\in {\mathcal{V}}} D \pth{\ell_j(\cdot |\theta^*)||\ell_j(\cdot |\theta)}~\not=~0, \end{align} where $D \pth{\ell_j(\cdot |\theta^*)||\ell_j(\cdot |\theta)}$ is defined as \begin{align} \label{KL} D \pth{\ell_j(\cdot |\theta^*)||\ell_j(\cdot |\theta)}\triangleq \sum_{w_j\in {\mathcal{S}}_j}\ell_j(w_j|\theta^*)\log \frac{\ell_j(w_j|\theta^*)}{\ell_j(w_j|\theta)}. \end{align} Since $\theta^*$ may change from execution to execution, \eqref{failure-freeidentify} is required to hold for any choice of $\theta^*$. Intuitively speaking, if any pair of states $\theta_1$ and $\theta_2$ can be distinguished by at least one agent in the network, then sufficient exchange of local beliefs over a strongly connected network will enable every agent to distinguish $\theta_1$ and $\theta_2$. However, in the presence of Byzantine agents, a stronger global identifiability condition is required. The following assumption builds upon Assumption \ref{sufficient}. \begin{assumption} \label{ass} Suppose that Assumption \ref{sufficient} holds for $m=|\Theta|$. For any $\theta\not=\theta^*,$ and for any $m$--dimensional reduced graph ${\mathcal{H}}$ of $G({\mathcal{V}}, {\mathcal{E}})$ with ${\mathcal{S}}_{{\mathcal{H}}}$ denoting the unique source component, the following holds \begin{align} \label{failure identify} \sum_{j\in {\mathcal{S}}_{{\mathcal{H}}}} D\pth{\ell_j(\cdot |\theta^*)\parallel\ell_j(\cdot |\theta)}~\not=~0. 
\end{align} \end{assumption} In contrast to \eqref{failure-freeidentify}, where the summation is taken over all the agents in the network, in \eqref{failure identify}, the summation is taken over agents in the source component only. Intuitively, the condition imposed by Assumption \ref{ass} is that all the agents in the source component can detect the true state $\theta^*$ collaboratively. If iterative consensus is achieved, the accurate belief can be propagated from the source component to every other non-faulty agent in the network. \begin{remark} We will show later that when Assumption \ref{ass} holds, the BFL algorithm enables all the non-faulty agents to concentrate their beliefs on the true state $\theta^*$ almost surely. That is, Assumption \ref{ass} is a sufficient condition for a consensus-based non-Bayesian learning algorithm to exist. However, Assumption \ref{ass} is not necessary, observing that Assumption \ref{sufficient} (upon which Assumption \ref{ass} builds) is not necessary for $m$-dimensional Byzantine consensus algorithms to exist. As illustrated by our second learning rule (described later), the adoption of $m$-dimensional Byzantine consensus primitives is {\em not necessary}. Nevertheless, BFL contains our main algorithmic and analytical ideas. 
In addition, BFL provides an alternative learning rule for the failure-free setting (where no fault-tolerant consensus primitives are needed). \end{remark} \subsection{Convergence Results} Our proof parallels the structure of a proof in \cite{nedic2014nonasymptotic}, but with some key differences to take into account our update rule for the belief vector.\\ For any $\theta_1, \theta_2\in \Theta$, and any $i\in {\mathcal{V}}$, define $\bm{\psi}_{t}^i(\theta_1, \theta_2)$ and ${\mathcal{L}}^i_{t}(\theta_1, \theta_2)$ as follows \begin{align} \label{b1} \bm{\psi}_{t}^i(\theta_1, \theta_2)\triangleq \log \frac{\mu_t^i(\theta_1)}{\mu_t^i(\theta_2)}, \quad {\mathcal{L}}^i_{t}(\theta_1, \theta_2)~\triangleq~\log \frac{\ell_i(s_{t}^i|\theta_1)}{\ell_i(s_{t}^i|\theta_2)}. \end{align} To show Algorithm \ref{alg:new} solves \eqref{goal}, we will show that $\bm{\psi}_{t}^i(\theta, \theta^*)\xrightarrow{{\rm a.s.}} -\infty$ for $\theta\not=\theta^*$, which implies that $\mu_t^i(\theta)\xrightarrow{{\rm a.s.}} 0$ for all $\theta\not=\theta^*$ and for all $i\in {\mathcal{N}}$, i.e., all non-faulty agents asymptotically concentrate their beliefs on the true hypothesis $\theta^*$. We do this by investigating the dynamics of the beliefs, which are represented compactly in matrix form. 
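The quantity $\bm{\psi}_t^i(\theta, \theta^*)$ from \eqref{b1} is simply the log-belief ratio, so driving it to $-\infty$ is equivalent to driving $\mu_t^i(\theta)$ to zero. A small illustrative check (the belief values are arbitrary):

```python
import math

def psi(mu, i_theta, i_star):
    # Log-belief ratio from (b1): psi = log( mu(theta) / mu(theta*) ).
    return math.log(mu[i_theta] / mu[i_star])

# As belief mass moves onto theta* (index 1), psi(theta, theta*) diverges
# toward -infinity, which is exactly the convergence criterion in the proof.
beliefs = [[0.5, 0.5], [0.1, 0.9], [1e-6, 1.0 - 1e-6]]
psis = [psi(mu, 0, 1) for mu in beliefs]
```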
For each $\theta\not=\theta^*$, and each $i\in {\mathcal{N}}=\{1, 2, \cdots, n-\phi\}$, we have \begin{align} \label{rewrite1} \nonumber \bm{\psi}_{t}^i(\theta, \theta^*)&=\log \frac{\mu_{t}^i(\theta)}{\mu_{t}^i(\theta^*)}\overset{(a)}{=}\log \pth{\prod_{j=1}^{n-\phi} \pth{\frac{\mu_{t-1}^j(\theta)}{\mu_{t-1}^j(\theta^*)}}^{{\bf A}_{ij}[t]}\times \frac{\ell_i(s_{1,t}^i|\theta)}{\ell_i(s_{1,t}^i|\theta^*)}}\\ \nonumber &=\sum_{j=1}^{n-\phi} {\bf A}_{ij}[t] \log \frac{\mu_{t-1}^j(\theta)}{\mu_{t-1}^j(\theta^*)} +\log \frac{\ell_i(s_{1,t}^i|\theta)}{\ell_i(s_{1,t}^i|\theta^*)}\\ &=\sum_{j=1}^{n-\phi} {\bf A}_{ij}[t] \bm{\psi}_{t-1}^j(\theta, \theta^*) +\sum_{r=1}^t{\mathcal{L}}^i_{r}(\theta, \theta^*), \end{align} where equality (a) follows from \eqref{e:eta} and the update of $\mu^i$ in Algorithm \ref{alg:new}, and the last equality follows from \eqref{b1} and the fact that the local observations are $i.i.d.$ for each agent. Let $\bm{\psi}_{t}(\theta, \theta^*)\in {\mathbb{R}}^{n-\phi}$ be the vector that stacks $\bm{\psi}_t^i(\theta, \theta^*)$, with the $i$--th entry being $\bm{\psi}_t^i(\theta, \theta^*)$ for all $i\in {\mathcal{N}}$. The evolution of $\bm{\psi}(\theta, \theta^*)$ can be compactly written as \begin{align} \label{matrix form} \bm{\psi}_{t}(\theta, \theta^*)&={\bf A}[t]\bm{\psi}_{t-1}(\theta, \theta^*)+\sum_{r=1}^t{\mathcal{L}}_{r}(\theta, \theta^*). \end{align} Expanding \eqref{matrix form}, we get \begin{align} \label{int1} \bm{\psi}_{t}(\theta, \theta^*)={\bf \Phi}(t,1)\bm{\psi}_0(\theta, \theta^*)+\sum_{r=1}^{t} {\bf \Phi}(t,r+1)\sum_{k=1}^r{\mathcal{L}}_k(\theta, \theta^*). 
\end{align} For each $\theta\in \Theta$ and $i\in {\mathcal{V}}$, define $H_i(\theta, \theta^*)\in {\mathbb{R}}$ as \begin{align} \label{expected} \nonumber H_i(\theta, \theta^*)&\triangleq \sum_{w_i\in {\mathcal{S}}_i} \ell_i(w_i| \theta^*) \log \frac{\ell_i(w_i\mid \theta)}{\ell_i(w_i\mid \theta^*)}\\ \nonumber &= -D(\ell_i(\cdot |\theta^*)\parallel \ell_i(\cdot |\theta)) ~~~\text{by \eqref{KL}}\\ &\le 0. \end{align} Let ${\mathcal{H}}\in {\mathcal{C}}$ be an arbitrary reduced graph with source component ${\mathcal{S}}_{{\mathcal{H}}}$. Define $C_0$ and $C_1$ as \begin{align} -C_0&\triangleq \min_{i\in {\mathcal{V}}} \min_{\theta_1, \theta_2\in \Theta; \theta_1\not= \theta_2} \min_{w_i\in {\mathcal{S}}_i} \pth{\log \frac{\ell_i(w_i|\theta_1)}{\ell_i(w_i|\theta_2)}}, \label{c0}\\ C_1&\triangleq \min_{{\mathcal{H}}\in {\mathcal{C}}} \, \min_{\theta, \theta^* \in \Theta; \theta\not= \theta^*} \sum_{i\in {\mathcal{S}}_{{\mathcal{H}}}} D(\ell_i(\cdot | \theta^*) \parallel \ell_i(\cdot | \theta)). \label{c1} \end{align} The constant $C_0$ serves as a universal upper bound on $|\log \frac{\ell_i(w_i|\theta_1)}{\ell_i(w_i|\theta_2)}|$ for all choices of $\theta_1$ and $\theta_2$, and for all signals. Intuitively, the constant $C_1$ is the minimal detection capability of the source component under Assumption \ref{ass}. Due to $|\Theta|=m< \infty$ and $|{\mathcal{S}}_i|< \infty$ for each $i\in {\mathcal{N}}$, we know that $C_0<\infty$. Besides, it is easy to see that $-C_0\le 0$ (thus, $C_0\ge 0$). In addition, under Assumption \ref{ass}, we have $C_1>0$. Now we present a key lemma for our main theorem. \begin{lemma} \label{second term goal} Under Assumption \ref{ass}, for any $\theta\not= \theta^*$, it holds that \begin{align} \frac{1}{t^2} \sum_{r=1}^{t}\pth{\sum_{j=1}^{n-\phi}{\bf \Phi}_{ij}(t,r+1)\sum_{k=1}^r{\mathcal{L}}^j_k(\theta, \theta^*)- r\sum_{j=1}^{n-\phi}\pi_j(r+1)H_j(\theta, \theta^*)} \xrightarrow{{\rm a.s.}} 0. 
\end{align} \end{lemma} As will be seen later, the proof of Lemma \ref{second term goal} is significantly different from that of the analogous lemma in \cite{nedic2014nonasymptotic}. \begin{theorem} \label{almost sure} When Assumption \ref{ass} holds, each non-faulty agent $i\in {\mathcal{N}}$ will concentrate its belief on the true hypothesis $\theta^*$ almost surely, i.e., $\mu_t^i(\theta) \xrightarrow{{\rm a.s.}} 0$ for all $\theta\not= \theta^*$. \end{theorem} \begin{proof} Consider any $\theta\not=\theta^*$. Recall from \eqref{int1} that \begin{align*} \bm{\psi}_{t}(\theta, \theta^*)&={\bf \Phi}(t,1)\bm{\psi}_0(\theta, \theta^*)+\sum_{r=1}^{t} {\bf \Phi}(t,r+1)\sum_{k=1}^r{\mathcal{L}}_k(\theta, \theta^*)\\ &=\sum_{r=1}^{t} {\bf \Phi}(t,r+1)\sum_{k=1}^r{\mathcal{L}}_k(\theta, \theta^*). \end{align*} The last equality holds as $\mu_0^i$ is uniform, and thus $\bm{\psi}^i_0(\theta, \theta^*)=0$ for each $i\in {\mathcal{N}}$. Since the supports of $\ell_i(\cdot|\theta)$ and $\ell_i(\cdot|\theta^*)$ are the whole signal space ${\mathcal{S}}_i$ for each agent $i\in {\mathcal{N}}$, it holds that $\left |\frac{\ell_i(w_i|\theta)}{\ell_i(w_i|\theta^*)}\right |<\infty$ for each $w_i\in {\mathcal{S}}_i$, and \begin{align} \label{finite of H} 0\ge H_i(\theta, \theta^*)\ge \min_{w_i\in {\mathcal{S}}_i} \pth{\log \frac{\ell_i(w_i|\theta)}{\ell_i(w_i|\theta^*)}}\ge ~-C_0 >-\infty. \end{align} By \eqref{finite of H}, we know that $ |\sum_{j=1}^{n-\phi}\pi_j(r+1) H_j(\theta, \theta^*)|\le C_0<\infty. $ Due to the finiteness of $\sum_{j=1}^{n-\phi}\pi_j(r+1) H_j(\theta, \theta^*)$, we are able to add and subtract $r \mathbf 1 \sum_{j=1}^{n-\phi}\pi_j(r+1) H_j(\theta, \theta^*)$ from \eqref{int1}. 
We get \begin{align} \label{int2} \nonumber \bm{\psi}_{t}(\theta, \theta^*) &=\sum_{r=1}^{t}\pth{{\bf \Phi}(t,r+1)\sum_{k=1}^r{\mathcal{L}}_k(\theta, \theta^*)-r \mathbf 1 \sum_{j=1}^{n-\phi}\pi_j(r+1) H_j(\theta, \theta^*) }\\ &\quad +\sum_{r=1}^t r \mathbf 1 \sum_{j=1}^{n-\phi}\pi_j(r+1) H_j(\theta, \theta^*). \end{align} For each $i\in {\mathcal{N}}$, we have \begin{align} \label{evo} \nonumber \bm{\psi}^i_{t}(\theta, \theta^*)&=\sum_{r=1}^{t}\pth{\sum_{j=1}^{n-\phi}{\bf \Phi}_{ij}(t,r+1)\sum_{k=1}^r{\mathcal{L}}^j_k(\theta, \theta^*)- r\sum_{j=1}^{n-\phi}\pi_j(r+1)H_j(\theta, \theta^*)}\\ &\quad+\sum_{r=1}^{t} r\sum_{j=1}^{n-\phi}\pi_j(r+1)H_j(\theta, \theta^*). \end{align} To show $\mu_t^i(\theta) \xrightarrow{{\rm a.s.}} 0$ for $\theta\not= \theta^*$, it is enough to show $\bm{\psi}^i_{t}(\theta, \theta^*)\xrightarrow{{\rm a.s.}} -\infty$. Our convergence proof has a structure similar to the analysis in \cite{nedic2014nonasymptotic}. From Lemma \ref{second term goal}, we know that \begin{align} \label{lll2} \frac{1}{t^2} \sum_{r=1}^{t}\pth{\sum_{j=1}^{n-\phi}{\bf \Phi}_{ij}(t,r+1)\sum_{k=1}^r{\mathcal{L}}^j_k(\theta, \theta^*)- r\sum_{j=1}^{n-\phi}\pi_j(r+1)H_j(\theta, \theta^*)} \xrightarrow{{\rm a.s.}} 0. \end{align} Next we show that the second term of the right hand side of \eqref{evo} decreases quadratically in $t$. \begin{align} \label{third term} \nonumber \sum_{r=1}^{t} r\sum_{j=1}^{n-\phi} \pi_j(r+1)H_j(\theta, \theta^*)&\le \sum_{r=1}^{t} r\sum_{j\in {\mathcal{S}}_{r}} \pi_j(r+1)H_j(\theta, \theta^*)~~~~~~\text{by \eqref{expected}}\\ \nonumber &\le \sum_{r=1}^{t} r \beta^{\chi (n-\phi)} \sum_{j\in {\mathcal{S}}_{r}}H_j(\theta, \theta^*)~~~~~~\text{by Lemma \ref{lblimiting}}\\ \nonumber & \le - \sum_{r=1}^{t} r \beta^{\chi (n-\phi)} C_1~~~~~~~\text{by \eqref{c1} and \eqref{expected}}\\ &\le -\frac{t^2}{2}\beta^{\chi (n-\phi)} C_1. 
\end{align} Therefore, by \eqref{evo}, \eqref{lll2} and \eqref{third term}, almost surely, the following holds \begin{align*} \lim_{t\to\infty}\frac{1}{t^2}\bm{\psi}_t^i(\theta, \theta^*) \le-\frac{1}{2}\beta^{\chi (n-\phi)} C_1. \end{align*} Therefore, we have $\bm{\psi}_t^i(\theta, \theta^*) \xrightarrow{{\rm a.s.}} -\infty$ and $\mu_t^i(\theta) \xrightarrow{{\rm a.s.}} 0$ for $i\in {\mathcal{N}}$ and $\theta\not=\theta^*$, proving Theorem \ref{almost sure}. \hfill $\Box$ \end{proof} We now present the proof of our key lemma -- Lemma \ref{second term goal}. \begin{proof}[Proof of Lemma \ref{second term goal}] By \eqref{b1}, we have \begin{align*} \left |{\mathcal{L}}^i_r(\theta, \theta^*)\right |=\left |\log \frac{\ell_i(s_{r}^i|\theta)}{\ell_i(s_{r}^i|\theta^*)}\right |\le \max_{i\in {\mathcal{V}}} \max_{\theta_1, \theta_2 \in \Theta; \theta_1\not= \theta_2} \max_{w_i\in {\mathcal{S}}_i} \left |\log \frac{\ell_i(w_i|\theta_1)}{\ell_i(w_i|\theta_2)}\right |. \end{align*} Note that $\max_{i\in {\mathcal{V}}} \max_{\theta_1, \theta_2 \in \Theta; \theta_1\not= \theta_2} \max_{w_i\in {\mathcal{S}}_i} \left |\log \frac{\ell_i(w_i|\theta_1)}{\ell_i(w_i|\theta_2)}\right |$ is symmetric in $\theta_1$ and $\theta_2$. 
Thus, \begin{align} \label{d2} \nonumber \left |{\mathcal{L}}^i_r(\theta, \theta^*)\right |&\le \max_{i\in {\mathcal{V}}} \max_{\theta_1, \theta_2 \in \Theta; \theta_1\not= \theta_2} \max_{w_i\in {\mathcal{S}}_i} \left |\log \frac{\ell_i(w_i|\theta_1)}{\ell_i(w_i|\theta_2)}\right |=\max_{i\in {\mathcal{V}}} \max_{\theta_1, \theta_2 \in \Theta; \theta_1\not= \theta_2} \max_{w_i\in {\mathcal{S}}_i} \log \frac{\ell_i(w_i|\theta_1)}{\ell_i(w_i|\theta_2)}\\ \nonumber &=\max_{i\in {\mathcal{V}}} \max_{\theta_1, \theta_2 \in \Theta; \theta_1\not= \theta_2} \max_{w_i\in {\mathcal{S}}_i} -\log \frac{\ell_i(w_i|\theta_2)}{\ell_i(w_i|\theta_1)}\\ &=-\min_{i\in {\mathcal{V}}} \min_{\theta_1, \theta_2 \in \Theta; \theta_1\not= \theta_2} \min_{w_i\in {\mathcal{S}}_i} \log \frac{\ell_i(w_i|\theta_2)}{\ell_i(w_i|\theta_1)}=-(-C_0)=C_0<\infty. \end{align} Thus, adding and subtracting $\frac{1}{t^2}\sum_{r=1}^t \sum_{j=1}^{n-\phi} \pi_j(r+1)\sum_{k=1}^r {\mathcal{L}}_k^j(\theta, \theta^*)$ from the first term on the right hand side of \eqref{evo}, we can get \begin{align} \label{ltbb} \nonumber &\quad\frac{1}{t^2} \sum_{r=1}^{t}\pth{\sum_{j=1}^{n-\phi}{\bf \Phi}_{ij}(t,r+1)\sum_{k=1}^r{\mathcal{L}}^j_k(\theta, \theta^*)- r\sum_{j=1}^{n-\phi}\pi_j(r+1)H_j(\theta, \theta^*)}\\ \nonumber &=\frac{1}{t^2} \sum_{r=1}^{t}\sum_{j=1}^{n-\phi}\pth{{\bf \Phi}_{ij}(t,r+1)- \pi_j(r+1)}\sum_{k=1}^r{\mathcal{L}}^j_k(\theta, \theta^*)\\ &\quad + \frac{1}{t^2} \sum_{r=1}^{t}\sum_{j=1}^{n-\phi} \pi_j(r+1)\pth{ \sum_{k=1}^r {\mathcal{L}}_k^j(\theta, \theta^*)-rH_j(\theta, \theta^*)}. 
\end{align} For the first term of the right hand side of \eqref{ltbb}, we have \begin{align} \label{lt1} \nonumber \quad&\frac{1}{t^2} \left |\sum_{r=1}^{t}\sum_{j=1}^{n-\phi}\pth{{\bf \Phi}_{ij}(t,r+1)- \pi_j(r+1)}\sum_{k=1}^r{\mathcal{L}}^j_k(\theta, \theta^*)\right |\\ &\le \frac{1}{t^2}\sum_{r=1}^{t}\sum_{j=1}^{n-\phi}\left |{\bf \Phi}_{ij}(t,r+1)- \pi_j(r+1)\right |\sum_{k=1}^r \left |{\mathcal{L}}^j_k(\theta, \theta^*) \right |\\ \nonumber &\le \frac{1}{t^2}\sum_{r=1}^{t}\sum_{j=1}^{n-\phi}\left |{\bf \Phi}_{ij}(t,r+1)- \pi_j(r+1)\right |r C_0~~~~\text{by \eqref{d2}}\\ \nonumber &\le \frac{1}{t^2}\sum_{r=1}^{t}\sum_{j=1}^{n-\phi} (1-\beta^{\nu})^{\lceil\frac{t-r}{\nu}\rceil}r C_0~~~~\text{by Theorem \ref{convergencerate}}\\ \nonumber &\le \frac{1}{t^2} \pth{t(n-\phi)C_0}\sum_{r=1}^{t}(1-\beta^{\nu})^{\lceil\frac{t-r}{\nu}\rceil} \\ &\le \frac{(n-\phi)C_0}{(1-\beta^{\nu})(1-(1-\beta^{\nu})^{\frac{1}{\nu}})t} . \end{align} Thus, for every sample path, we have $$ \frac{1}{t^2} \sum_{r=1}^{t}\sum_{j=1}^{n-\phi}\pth{{\bf \Phi}_{ij}(t,r+1)- \pi_j(r+1)}\sum_{k=1}^r{\mathcal{L}}^j_k(\theta, \theta^*) \to 0.$$ For the second term of the right hand side of \eqref{ltbb}, we will show that $$\frac{1}{t^2} \sum_{r=1}^{t}\sum_{j=1}^{n-\phi} \pi_j(r+1)\pth{ \sum_{k=1}^r {\mathcal{L}}_k^j(\theta, \theta^*)-rH_j(\theta, \theta^*)} \xrightarrow{{\rm a.s.}} 0,$$ i.e., almost surely for any $\epsilon>0$ there exists sufficiently large $t(\epsilon)$ such that $\forall \, t\ge t(\epsilon)$, \begin{align} \label{22} \frac{1}{t^2} \left |\sum_{r=1}^{t}\sum_{j=1}^{n-\phi} \pi_j(r+1)\pth{ \sum_{k=1}^r {\mathcal{L}}_k^j(\theta, \theta^*)-rH_j(\theta, \theta^*)} \right |~\le ~\epsilon. 
\end{align} We prove this by dividing $r$ into two ranges $r\in \{1, \cdots, \sqrt{t}\}$ and $r\in \{\sqrt{t}+1, \cdots, t\}$, i.e., \begin{align} \label{lll} \nonumber &\frac{1}{t^2}\sum_{r=1}^{t} \sum_{j=1}^{n-\phi} \pi_j(r+1)\pth{\sum_{k=1}^r {\mathcal{L}}_k^j(\theta, \theta^*)-rH_j(\theta, \theta^*)}\\ \nonumber &=\frac{1}{t^2}\sum_{r=1}^{\sqrt{t}} \sum_{j=1}^{n-\phi} \pi_j(r+1)\pth{\sum_{k=1}^r {\mathcal{L}}_k^j(\theta, \theta^*)-rH_j(\theta, \theta^*)}\\ &\quad+\frac{1}{t^2}\sum_{r=\sqrt{t}+1}^t \sum_{j=1}^{n-\phi} \pi_j(r+1)\pth{\sum_{k=1}^r {\mathcal{L}}_k^j(\theta, \theta^*)-rH_j(\theta, \theta^*)}. \end{align} For the first term of the right hand side of \eqref{lll}, we have \begin{align*} &\frac{1}{t^2}\left |\sum_{r=1}^{\sqrt{t}} \sum_{j=1}^{n-\phi} \pi_j(r+1)\pth{\sum_{k=1}^r {\mathcal{L}}_k^j(\theta, \theta^*)-rH_j(\theta, \theta^*)}\right |\\ &\le \frac{1}{t^2} \sum_{r=1}^{\sqrt{t}} \sum_{j=1}^{n-\phi}\pi_j(r+1)\pth{2rC_0}~~~\text{by \eqref{expected} and \eqref{d2}}\\ &=\frac{1}{t^2} \pth{2C_0} \sum_{r=1}^{\sqrt{t}}r\\ &\le C_0\pth{\frac{1}{t}+\frac{1}{t^{\frac{3}{2}}}}. \end{align*} Thus, there exists $t_1(\epsilon)$ such that for all $t\ge t_1(\epsilon)$, it holds that \begin{align*} \frac{1}{t^2}\left |\sum_{r=1}^{\sqrt{t}} \sum_{j=1}^{n-\phi}\pi_j(r+1)\pth{\sum_{k=1}^r {\mathcal{L}}_k^j(\theta, \theta^*)-rH_j(\theta, \theta^*)}\right | \le \frac{\epsilon}{2}. 
\end{align*} For the second term of the right hand side of \eqref{lll}, we have \begin{align*} \quad&\frac{1}{t^2}\sum_{r=\sqrt{t}+1}^{t} \sum_{j=1}^{n-\phi}\pi_j(r+1)\pth{\sum_{k=1}^r {\mathcal{L}}_k^j(\theta, \theta^*)-rH_j(\theta, \theta^*)}\\ &=\frac{1}{t}\sum_{r=\sqrt{t}+1}^{t} \sum_{j=1}^{n-\phi}\pi_j(r+1)\frac{r}{t}\pth{\frac{1}{r}\sum_{k=1}^r {\mathcal{L}}_k^j(\theta, \theta^*)-H_j(\theta, \theta^*)}. \end{align*} Since the ${\mathcal{L}}_k^j(\theta, \theta^*)$'s are i.i.d., by the strong law of large numbers, we know that $\frac{1}{r}\sum_{k=1}^r {\mathcal{L}}_k^j(\theta, \theta^*)-H_j(\theta, \theta^*)\xrightarrow{{\rm a.s.}} 0$. That is, with probability 1, the sample path converges. Now, focus on each convergent sample path. For sufficiently large $r(\epsilon)$, it holds that for any $r\ge r(\epsilon)$, $$\left| \frac{1}{r}\sum_{k=1}^r {\mathcal{L}}_k^j(\theta, \theta^*)-H_j(\theta, \theta^*)\right | \le \frac{\epsilon}{2}.$$ Recall that $r> \sqrt{t}$ in this range. Thus, we know that there exists sufficiently large $t_2(\epsilon)$ such that $\forall \, t\ge t_2(\epsilon)$, $r> \sqrt{t}$ is large enough and $$ \left |\frac{1}{r}\sum_{k=1}^r {\mathcal{L}}_k^j(\theta, \theta^*)-H_j(\theta, \theta^*)\right |\le \frac{\epsilon}{2}.$$ Then, we have $\forall \, t\ge t_2(\epsilon)$, \begin{align*} &\frac{1}{t^2}\left |\sum_{r=\sqrt{t}+1}^{t} \sum_{j=1}^{n-\phi}\pi_j(r+1)\pth{\sum_{k=1}^r {\mathcal{L}}_k^j(\theta, \theta^*)-rH_j(\theta, \theta^*)}\right |\\ &\le \frac{1}{t}\sum_{r=\sqrt{t}+1}^{t} \sum_{j=1}^{n-\phi}\pi_j(r+1)\frac{r}{t}\left |\frac{1}{r}\sum_{k=1}^r {\mathcal{L}}_k^j(\theta, \theta^*)-H_j(\theta, \theta^*)\right |\\ &\le \frac{1}{t}\sum_{r=\sqrt{t}+1}^{t} \sum_{j=1}^{n-\phi}\pi_j(r+1)\frac{r}{t} \frac{\epsilon}{2}\\ &=\frac{1}{t}\sum_{r=\sqrt{t}+1}^{t} \frac{r}{t} \frac{\epsilon}{2}=\frac{\epsilon}{2}\frac{1}{t^2} \sum_{r=\sqrt{t}+1}^{t}r \\ &=\frac{\epsilon}{4}\frac{1}{t^2} \pth{t^2-\sqrt{t}}\le ~\frac{\epsilon}{2}. 
\end{align*} Therefore, for any $\epsilon>0$ and any $t\ge \max \{t_1(\epsilon), t_2(\epsilon)\}$, \begin{align*} \frac{1}{t^2}\left |\sum_{r=1}^{t} \sum_{j=1}^{n-\phi}\pi_j(r+1)\pth{\sum_{k=1}^r {\mathcal{L}}_k^j(\theta, \theta^*)-rH_j(\theta, \theta^*)}\right | ~\le ~\epsilon, \end{align*} for every convergent sample path. In addition, a sample path is convergent with probability 1. Thus \eqref{22} holds almost surely. Therefore, Lemma \ref{second term goal} is proved. \hfill $\Box$ \end{proof} \section{Modified BFL and Minimal Network Identifiability} \label{modified} To reduce the per-iteration computational complexity, and to identify the minimal (tight) global identifiability condition under which a consensus-based learning rule can learn the true state, we propose a modification of the above learning rule, which works under much weaker network topology and global identifiability conditions. We decompose the $m$-ary hypothesis testing problem into $m(m-1)$ (ordered) binary hypothesis testing problems. For each pair of hypotheses $\theta_1$ and $\theta_2$, each non-faulty agent updates the likelihood ratio of $\theta_1$ over $\theta_2$ as follows. Let $r^i_t(\theta_1, \theta_2)$ be the log likelihood ratio of $\theta_1$ over $\theta_2$ kept by agent $i$ at the end of iteration $t$. Our modified learning rule applies consensus procedures to the log likelihood ratio $r^i_t(\theta_1, \theta_2)$, which is a scalar. For Algorithm \ref{alg: pairwise}, we only require scalar iterative Byzantine (approximate) consensus among the non-faulty agents to be achievable. When scalar consensus is achievable, the following assumption on the identifiability of the network to detect $\theta^*$ is minimal, meaning that if this assumption is not satisfied, then no correct consensus-based non-Bayesian learning rule exists. 
\begin{assumption} \label{ass pairwise} Suppose that every $1$-dimensional reduced graph of $G({\mathcal{V}}, {\mathcal{E}})$ contains only one source component. For any $\theta\not=\theta^*,$ and for any $1$-dimensional reduced graph ${\mathcal{H}}_1$ of $G({\mathcal{V}}, {\mathcal{E}})$ with ${\mathcal{S}}_{{\mathcal{H}}_1}$ denoting the unique source component, the following holds \begin{align} \label{failure identify pairwise} \sum_{j\in {\mathcal{S}}_{{\mathcal{H}}_1}} D\pth{\ell_j(\cdot |\theta^*)\parallel\ell_j(\cdot |\theta)}~\not=~0. \end{align} \end{assumption} Assumption \ref{ass pairwise} is minimal for the following reasons: (1) For any consensus-based learning rule to work, the communication network $G({\mathcal{V}}, {\mathcal{E}})$ should support consensus with scalar inputs. That is, every $1$-dimensional reduced graph of $G({\mathcal{V}}, {\mathcal{E}})$ must contain only one source component. (2) Under some faulty behaviors of the Byzantine agents, one particular $1$--dimensional reduced graph may govern the entire dynamics of $r^i(\theta_1, \theta_2)$. If \eqref{failure identify pairwise} does not hold for that reduced graph, then the non-faulty agents may not be able to distinguish $\theta_1$ from $\theta_2$. 
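The identifiability conditions \eqref{failure identify} and \eqref{failure identify pairwise} amount to checking that a sum of KL divergences over a source component is nonzero. The sketch below (all names and numbers are illustrative) implements the divergence of \eqref{KL} for finite signal spaces and tests whether a candidate source component can distinguish $\theta^*$ from another hypothesis.

```python
import math

def kl(p, q):
    # Discrete KL divergence D(p || q) as in (KL); full support is assumed
    # throughout the paper, so no q_i is zero.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def source_identifies(likelihoods, source, i_star, i_theta):
    # Check sum_{j in source} D( l_j(.|theta*) || l_j(.|theta) ) != 0, i.e.,
    # some agent in the source component separates the two hypotheses.
    return sum(kl(likelihoods[j][i_star], likelihoods[j][i_theta])
               for j in source) > 0.0

# Two agents, two hypotheses (index 0 plays theta*, index 1 plays theta).
# Agent 0 is uninformative; agent 1 separates the hypotheses.
lik = {
    0: {0: [0.5, 0.5], 1: [0.5, 0.5]},
    1: {0: [0.9, 0.1], 1: [0.1, 0.9]},
}
```

A source component consisting only of agent 0 fails the condition, while any source component containing agent 1 satisfies it.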
\begin{algorithm} \caption{Pairwise Learning} \label{alg: pairwise} \vskip 0.2\baselineskip {\normalsize Initialization: \For{$\theta_1, \theta_2\in \Theta, \text{and}~ \theta_1\not=\theta_2$} {$r_0^i(\theta_1, \theta_2)\gets 0$\;} \While{$t\ge 1$}{ \For{$\theta_1, \theta_2\in \Theta, \text{and}~ \theta_1\not=\theta_2$} {Transmit current log likelihood ratio $r_{t-1}^i(\theta_1, \theta_2)$ on all outgoing edges\; % \vskip 0.2\baselineskip Wait until a private signal $s_t^i$ is observed and log likelihood ratios $\tilde{r}_{t-1}^j(\theta_1, \theta_2)$ are received from all incoming neighbors ${\mathcal{I}}_i$\; \vskip 0.2\baselineskip Sort the received log likelihood ratios $\tilde{r}_{t-1}^j(\theta_1, \theta_2)$ in a non-decreasing order, and remove the smallest $f$ values and the largest $f$ values. {\small \color{OliveGreen}\% Denote the set of indices of incoming neighbors whose ratios have not been removed at iteration $t$ by ${\mathcal{I}}_i^*[t]$.\%} \vskip 0.2\baselineskip $r_{t}^i(\theta_1, \theta_2) \gets \frac{\sum_{j\in {\mathcal{I}}_i^*[t]} \tilde{r}_{t-1}^j(\theta_1, \theta_2) + r_{t-1}^i(\theta_1, \theta_2)}{|{\mathcal{I}}_i^*[t]|+1} + \log \frac{\ell_i (s^i_{1, t} \mid \theta_1) }{\ell_i (s^i_{1, t} \mid \theta_2)}.$ } } } \end{algorithm} The per-iteration computational complexity at each non-faulty agent can be calculated as follows. The cost-dominant procedure in each iteration is sorting the received log likelihood ratios, which takes $O(n\log n)$ operations. In total, we have $m(m-1)$ ordered pairs of hypotheses. Thus, the total computation per agent per iteration is $O(m^2 n \log n)$. \begin{theorem} \label{pairwise learning} Suppose Assumption \ref{ass pairwise} holds. Under Algorithm \ref{alg: pairwise}, for any $\theta \not=\theta^*$, the following holds: \begin{align*} r_{t}^i(\theta^*, \theta)\xrightarrow{{\rm a.s.}} +\infty, ~\text{and }~~~ r_{t}^i(\theta, \theta^*)\xrightarrow{{\rm a.s.}} -\infty. 
\end{align*} \end{theorem} \begin{proof} By \cite{VaidyaMatrix2012}, we know that for each pair of hypotheses $\theta_1$ and $\theta_2$, there exists a row-stochastic matrix ${\bf M}^{1, 2}[t]\in {\mathbb{R}}^{(n-\phi)\times (n-\phi)}$ such that \begin{align} \label{update pairwise} r_{t}^i(\theta_1, \theta_2)= \sum_{j=1}^{n-\phi} {\bf M}^{1,2}_{ij}[t] r_{t-1}^j(\theta_1, \theta_2)+ \log \frac{\ell_i (s^i_{1, t} \mid \theta_1) }{\ell_i (s^i_{1, t} \mid \theta_2)}. \end{align} Note that matrix ${\bf M}^{1,2}$ depends on the choice of hypotheses $\theta_1$ and $\theta_2$. For a given pair of hypotheses $\theta_1$ and $\theta_2$, let ${\bf r}_{t}(\theta_1, \theta_2)\in {\mathbb{R}}^{n-\phi}$ be the vector that stacks $r_{t}^i(\theta_1, \theta_2)$. The evolution of ${\bf r}(\theta_1, \theta_2)$ can be compactly written as \begin{align} \label{update pairwise vector} \nonumber {\bf r}_{t}(\theta_1, \theta_2)&= {\bf M}^{1,2}[t] {\bf r}_{t-1}(\theta_1, \theta_2)+ \sum_{r=1}^t {\mathcal{L}}_r(\theta_1, \theta_2) \\ &= \sum_{r=1}^t {\bf \Phi}^{1,2}(t, r+1) \sum_{k=1}^r {\mathcal{L}}_k(\theta_1, \theta_2), \end{align} where ${\bf \Phi}^{1,2}(t, r+1)\triangleq {\bf M}^{1, 2}[t] {\bf M}^{1, 2}[t-1] \cdots {\bf M}^{1, 2}[r+1]$ for $r\le t$, ${\bf \Phi}^{1,2}(t, t)\triangleq {\bf M}^{1, 2}[t]$ and ${\bf \Phi}^{1,2}(t, t+1)\triangleq {\bf I}$. We do the analysis for each pair of $\theta_1$ and $\theta_2$ separately. The remaining proof is identical to the proof of Theorem \ref{almost sure}, and is omitted. \hfill $\Box$ \end{proof} \begin{proposition} \label{uniqueness} Suppose there exists $\tilde{\theta}\in \Theta$ such that for any $\theta \not=\tilde{\theta}$, it holds that $r_{t}^i(\tilde{\theta}, \theta)\xrightarrow{{\rm a.s.}} +\infty$, and $r_{t}^i(\theta, \tilde{\theta})\xrightarrow{{\rm a.s.}} -\infty$. Then $\tilde{\theta}=\theta^*.$ \end{proposition} \begin{proof} We prove this proposition by contradiction. 
Suppose there exists $\tilde{\theta}\in \Theta$ with $\tilde{\theta}\not=\theta^*$ such that for any $\theta \not=\tilde{\theta}$, it holds that $r_{t}^i(\tilde{\theta}, \theta)\xrightarrow{{\rm a.s.}} +\infty$, and $r_{t}^i(\theta, \tilde{\theta})\xrightarrow{{\rm a.s.}} -\infty$. Then we know that $r_{t}^i(\tilde{\theta}, \theta^*)\xrightarrow{{\rm a.s.}} +\infty$ and $r_{t}^i(\theta^*, \tilde{\theta})\xrightarrow{{\rm a.s.}} -\infty$, contradicting Theorem \ref{pairwise learning}. Thus, Proposition \ref{uniqueness} is true. \hfill $\Box$ \end{proof}
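The trim-and-average step of Algorithm \ref{alg: pairwise} can be sketched as follows. The function below is a schematic of a single agent's update (all names are illustrative), with the trimming parameter passed in explicitly as \texttt{f}.

```python
def trimmed_update(r_own, r_received, f, log_lr):
    """One pairwise-learning update at a non-faulty agent (schematic).

    r_received: log likelihood ratios reported by incoming neighbors (some
                may be Byzantine); the f smallest and f largest are dropped.
    log_lr    : the agent's own cumulative log likelihood ratio term.
    """
    kept = sorted(r_received)[f:len(r_received) - f]
    # Average the surviving ratios together with the agent's own value,
    # then add the local log likelihood ratio of the observations.
    return (sum(kept) + r_own) / (len(kept) + 1) + log_lr

# A Byzantine neighbor reports an extreme value; with f = 1 it is trimmed
# away, so the update stays bounded.
ratios = [2.0, 1.0, -1000.0]
r_new = trimmed_update(r_own=0.0, r_received=ratios, f=1, log_lr=0.5)
```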
\section{Introduction} The analysis of sequential data is a very active research area that addresses problems where data is processed naturally as sequences or can be better modeled that way, such as sentiment analysis, machine translation, video analytics, speech recognition, and time-series processing. A scenario that is gaining increasing interest in the classification of sequential data is the one referred to as ``early classification'', in which the problem is to classify the data stream as early as possible without a significant loss in terms of accuracy. The reasons behind this requirement of ``earliness'' could be diverse. It could be necessary because the sequence length is not known in advance (e.g. a social media user's content) or, for example, if savings of some sort (e.g. computational savings) can be obtained by early classifying the input. However, the most important (and interesting) cases are those in which the delay in that decision could also have negative or risky implications. This scenario, known as ``early risk detection'' (ERD), has gained increasing interest in recent years with potential applications in rumor detection~\cite{chen2018rumor, ma2016detecting}, sexual predator detection and aggressive text identification \cite{escalante2017early}, depression detection \cite{losada2017erisk, losada2016test} or terrorism detection \cite{iskandar2017terrorism}. ERD scenarios are difficult to deal with since models need to support classification and/or learning over sequential data (streams); provide a clear method to decide whether the processed data is enough to classify the input stream (early stopping); and, additionally, have the ability to explain their rationale, since people's lives could be affected by their decisions. A recently introduced text classifier\cite{burdisso2019}, called SS3, has been shown to be well suited to deal with ERD problems on social media streams. 
Unlike standard classifiers, SS3 was created to naturally deal with ERD problems: it supports incremental training and classification over text streams, and it has the ability to visually explain its rationale. It obtained state-of-the-art performance on early depression, anorexia and self-harm detection on the CLEF eRisk open tasks\cite{burdisso2019, burdisso2019clef}. However, at its core, SS3 processes each sentence from the input stream using a bag-of-words model. This leads to SS3 lacking the ability to capture important word sequences, which could negatively affect the classification performance. Additionally, since single words are less informative than word sequences, this bag-of-words model reduces the descriptiveness of SS3's visual explanations. The weaknesses of bag-of-words representations are well known in the standard document classification field, in which word n-grams are usually used to overcome them. Unfortunately, when dealing with text streams, using word n-grams is not a trivial task since the system has to dynamically identify, create and learn which n-grams are important ``on the fly''. In this paper, we introduce a variation of SS3, called $\tau$-SS3, which expands the original definition to allow recognizing important word sequences. In \autoref{sec:ss3} the original SS3 definition is briefly introduced. \autoref{sec:tss3} formally introduces $\tau$-SS3, describing the needed equations and algorithms. In \autoref{sec:results} we evaluate our model on the CLEF eRisk 2017 and 2018 tasks on early depression and anorexia detection. Finally, \autoref{sec:conclusion} summarizes the main conclusions derived from this work. \section{The SS3 text classifier} \label{sec:ss3} \begin{figure*}[t!] \centering \includegraphics[width=140mm]{images/ss3-machine-learning} \caption{Classification example for categories \emph{technology} and \emph{sports}. 
In this example, SS3 misclassified the document's topic as $sports$ since it failed to capture important sequences for \emph{technology} like \emph{``machine learning''} or \emph{``video game''}. This was due to each sentence being processed as a bag of words.} \label{fig:ss3-example} \end{figure*} As described in more detail by Burdisso et al. \cite{burdisso2019}, during training and for each given category, SS3 builds a dictionary to store word frequencies using all training documents of the category. This simple training method allows SS3 to support online learning since, when new training documents are added, SS3 simply needs to update the dictionaries using only the content of these new documents, making the training incremental. Then, using the word frequencies stored in the dictionaries, SS3 computes a value for each word using a function, $gv(w,c)$, to value words in relation to categories. $gv$ takes a word $w$ and a category $c$ and outputs a number in the interval $[0,1]$ representing the degree of confidence with which $w$ is believed to \emph{exclusively} belong to $c$. For instance, given the categories $C= \{food, music, health, sports\}$, we could have: \ \begin{tabular}{l l} $gv($`$sushi$'$, food) = 0.85;$ & $gv($`$the$'$, food) = 0;$\\ $gv($`$sushi$'$, music) = 0.09;$ & $gv($`$the$'$, music) = 0;$\\ $gv($`$sushi$'$, health) = 0.50;$ & $gv($`$the$'$, health) = 0;$\\ $gv($`$sushi$'$, sports) = 0.02;$ & $gv($`$the$'$, sports) = 0;$\\ \end{tabular} \ Additionally, a vectorial version of $gv$ is defined as: $$\overrightarrow{gv}(w)=(gv(w,c_0), gv(w,c_1), \dots, gv(w,c_k))$$ where $c_i \in C$ (the set of all the categories). That is, $\overrightarrow{gv}$ is applied to a single word and outputs a vector in which each component is the word's \emph{gv} for each category $c_i$.
For instance, following the above example, we have: \ $\overrightarrow{gv}($`$sushi$'$) = (0.85, 0.09, 0.5, 0.02);$ $\overrightarrow{gv}($`$the$'$) = (0, 0, 0, 0);$ \ The vector $\overrightarrow{gv}(w)$ is called the ``\emph{confidence vector} of $w$''. Note that each category is assigned a fixed position in $\overrightarrow{gv}$. For instance, in the example above $(0.85, 0.09, 0.5, 0.02)$ is the \emph{confidence vector} of the word ``sushi'', where the first position corresponds to $food$, the second to $music$, and so on. The computation of $gv$ involves three functions, $lv$, $sg$ and $sn$, as follows: \begin{equation} gv(w, c) = lv_\sigma(w, c)\cdot sg_{\lambda}(w, c)\cdot sn_\rho(w, c) \label{eq:gv} \end{equation} \begin{itemize} \item $lv_\sigma(w, c)$ values a word based on the local frequency of $w$ in $c$. As part of this process, the word distribution curve is smoothed by a factor controlled by the hyperparameter $\sigma$. \item $sg_{\lambda}(w, c)$ captures the significance of $w$ in $c$. It is a sigmoid function that returns a value close to $1$ when $lv(w, c)$ is significantly greater than $lv(w, c_i)$, for most of the other categories $c_i$; and a value close to $0$ when the $lv(w, c_i)$ values are close to each other, for all $c_i$. The $\lambda$ hyperparameter controls how far $lv(w, c)$ must deviate from the median to be considered significant. \item $sn_\rho(w, c)$ decreases the global value in relation to the number of categories $w$ is significant to. That is, the more categories $c_i$ for which $sg_{\lambda}(w, c_i)\approx 1$, the smaller the $sn_\rho(w, c)$ value. The $\rho$ hyperparameter controls how severe this sanction is. \end{itemize} To keep this paper short and simple, we only introduce the equation for $lv$ here, since the computation of both $sg$ and $sn$ is based only on this function.
Nonetheless, for those readers interested in knowing how the $sg$ and $sn$ functions are actually computed, we highly recommend reading the original SS3 paper \cite{burdisso2019}. Thus, $lv$ is defined as: \begin{equation} \label{eq:lv-p} lv_\sigma(w, c) = \bigg(\frac{P(w|c)}{P(w_{max}|c)}\bigg)^\sigma \end{equation} which, after estimating the probability $P$ by analytical \emph{Maximum Likelihood Estimation} (MLE), leads to the actual definition: \begin{equation} \label{eq:lv} lv_\sigma(w, c) = \bigg(\frac{tf_{w,c}}{max\{tf_c\}}\bigg)^\sigma \end{equation} where $tf_{w,c}$ denotes the frequency of $w$ in $c$, $max\{tf_c\}$ the maximum frequency seen in $c$, and $\sigma \in (0, 1]$ is one of SS3's hyperparameters. \ It is worth mentioning that SS3 will learn to automatically ignore stop words since, by definition, $gv(w, c)\approx0$ for all of them. Therefore, there is no need to manually remove stop words. Moreover, stop-word removal could have negative effects since in \autoref{eq:lv} we are implicitly evaluating words in terms of stop words.\footnote{That is, words are normalized over the frequency of the most probable one, which will always be a stop word. Note that the fact that stop words have a similar distribution across all the categories enables us to compute the $gv$ value of a word by comparing its $lv$ value across different categories.} However, there is nothing in the model preventing us from using any other type of preprocessing, such as stemming, lemmatization, etc. Finally, during classification, SS3 performs a 2-phase process to classify the input, as illustrated in \autoref{fig:ss3-example}. In the first phase, the input is split into multiple blocks (e.g. paragraphs), then each block is in turn repeatedly divided into smaller units (e.g. sentences, words). Thus, the previously ``flat'' document is transformed into a hierarchy of blocks.
In the second phase, the $\overrightarrow{gv}$ function is applied to each word to obtain the ``level 0'' \emph{confidence vectors}, which are then reduced to ``level 1'' \emph{confidence vectors} by means of a level 0 \emph{summary operator}, $\oplus_0$.\footnote{ Any function $f:2^{\mathbb{R}^n}\mapsto\mathbb{R}^n$ could be used as a \emph{summary operator}; in this example, vector addition was used.} This reduction process is recursively propagated up to higher-level blocks, using higher-level \emph{summary operators}, $\oplus_j$, until a single \emph{confidence vector}, $\overrightarrow{d}$, is generated for the whole input. Finally, the actual classification is performed based on the values of this single \emph{confidence vector}, $\overrightarrow{d}$, using some policy ---for example, selecting the category with the highest \emph{confidence value}. Note that the classification process is incremental as long as the \emph{summary operator} for the highest level can be computed incrementally. For instance, suppose that later, a new sentence is appended to the example shown in \autoref{fig:ss3-example}. Since $\oplus_1$ is the vector addition, instead of processing the whole document again, we could update the already computed vector, $(0.63, 0.07)$, by simply adding the new sentence's \emph{confidence vector} to it. In addition, the \emph{confidence vectors} in the block hierarchy allow SS3 to visually explain the classification if the different blocks are painted in relation to their values; this aspect is vital when classification could affect people's lives: humans should be able to inspect the reasons behind the classification. However, note that SS3 processes individual sentences using a bag-of-words model since the $\oplus_0$ operators reduce the \emph{confidence vectors} of individual words into a single vector.
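As a rough illustration of this second phase, the sketch below (our own toy code, not PySS3's actual API; the $gv$ values are made up) applies $\overrightarrow{gv}$ to each word of a sentence and reduces the resulting level-0 vectors using vector addition as the summary operator $\oplus_0$:

```python
from functools import reduce

# Toy sketch of SS3's second classification phase. The gv table below is
# hypothetical; real values would come from Eq. (1) after training.
GV = {  # word -> confidence vector over (technology, sports)
    "machine": (0.50, 0.00),
    "learning": (0.25, 0.00),
    "goal": (0.00, 0.50),
    "the": (0.00, 0.00),  # stop words end up with gv ~ 0 automatically
}

def gv_vec(word):
    """Level-0 confidence vector of a word (zeros if unseen)."""
    return GV.get(word, (0.0, 0.0))

def vec_add(u, v):  # the summary operator (vector addition)
    return tuple(a + b for a, b in zip(u, v))

def classify_sentence(sentence):
    """Map gv over the words, then reduce with the summary operator."""
    return reduce(vec_add, map(gv_vec, sentence.lower().split()), (0.0, 0.0))

d = classify_sentence("the machine learning goal")  # -> (0.75, 0.5)
# Incremental update: appending a new sentence only adds its vector to d,
# with no need to reprocess the whole document.
d = vec_add(d, classify_sentence("machine wins"))   # -> (1.25, 0.5)
```

The incremental update at the end mirrors the paper's observation that, since $\oplus_1$ is vector addition, new sentences can be folded into the already computed vector.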
Therefore, SS3 does not take into account any relationship that could exist between individual words, for instance, between ``machine'' and ``learning'' or ``video'' and ``game''. That is, the model cannot capture important word sequences that could improve classification performance, as could have been possible in our example with ``machine learning'' or ``self-driving cars''. In standard document classification scenarios, this type of relationship could be captured by using variable-length n-grams. Unfortunately, when working with text streams, using n-grams is not trivial, since the model has to dynamically identify and learn which n-grams are important ``on the fly''. In the next section, we will introduce an extension of SS3, called $\tau$-SS3, which is able to do so. \begin{figure*}[t!] \centering \includegraphics[width=140mm]{images/t-ss3-machine-learning} \caption{$\tau$-SS3 classification example. Since the model now has the ability to capture important word sequences, it is able to correctly classify the document's topic as $tech$.} \label{fig:t-ss3-example} \end{figure*} \section{The \texorpdfstring{$\tau$}{t}-SS3 text classifier} \label{sec:tss3} Regarding the model's formal definition, the only change we need to introduce is a generalized version of the $lv$ function given in \autoref{eq:lv-p}. This is trivial because it only involves allowing $lv$ to value not only words but also sequences of them. That is, in symbols, if $t_k = w_1{\rightarrow}w_2\dots{\rightarrow}w_k$ is a sequence of $k$ words, then $lv$ is now defined as: \begin{equation} \label{eq:t-lv-p} lv_\sigma(t_k, c) = \bigg(\frac{P(w_1w_2\dots w_k|c)}{P(m_1m_2\dots m_k|c)}\bigg)^\sigma \end{equation} where $m_1m_2\dots m_k$ is the sequence of $k$ words with the highest probability of occurring given that the category is $c$.
Then, as with \autoref{eq:lv}, the actual definition of $lv$ becomes: \begin{equation} lv_\sigma(t_k, c) = \bigg(\frac{tf_{t_k,c}}{max\{tf_{k,c}\}}\bigg)^\sigma \label{eq:t-lv} \end{equation} where $tf_{t_k,c}$ denotes the frequency of sequence $t_k$ in $c$ and $max\{tf_{k,c}\}$ the maximum frequency seen in $c$ for sequences of length $k$. Thus, given any word sequence $t_k$, we can now use the original \autoref{eq:gv} to compute its $gv(t_k, c)$. For instance, suppose $\tau$-SS3 has learned that the following word sequences have the $gv$ values given below: \ \begin{tabular}{ll} $gv(machine{\rightarrow}learning, tech) = 0.23;$\\ $gv(video{\rightarrow}game, tech) = 0.19;$\\ $gv(self{\rightarrow}driving{\rightarrow}cars, tech) = 0.21;$\\ \end{tabular} \vspace{5mm} Then, the previously misclassified example could now be correctly classified, as shown in \autoref{fig:t-ss3-example}. In the following subsections, we will see how this formal extension is, in fact, implemented in practice. \subsection{Training} \begin{figure*}[t] \centering \centerline{\includegraphics[width=190mm]{images/trie-training}} \caption{Training example. Gray color and bold indicate an update.
(a) the first two words have been consumed and the tree has 3 nodes, one for each word and one for the bigram ``mobile APIs''; then a comma (,) is found in the input and \autoref{alg:training}'s lines 9 and 10 have removed all the cursors and placed a new one, $a$, pointing to the root; (b) the word ``for'' is consumed, a new node for this word is created using the node pointed to by cursor $a$ (line 14), $a$ is updated to point to this new node (lines 15 and 20), the next term is read and a new cursor $b$ is created (line 11) in the root; (c) ``mobile'' is consumed, using cursor $b$ the node for this word updates its frequency to 2 (line 16), a new node is created for the bigram ``for mobile'' using cursor $a$, and a new cursor $c$ is created in the root node (line 11); (d) finally, the word ``developers'' is consumed and, similarly, new nodes are created for the word ``developers'', the bigram ``mobile developers'' and the trigram ``for mobile developers''.} \label{fig:t-ss3-training} \end{figure*} \begin{algorithm}[t] \small \caption{\small Learning Algorithm. Note that $text$ is a sequence of lexical units (terms) which includes not only words but also punctuation marks.
$MAX\_LVL$ stores the maximum allowed sequence length.} \label{alg:training} \begin{algorithmic}[1] \Statex \Procedure{Learn-New-Document}{$text$, $category$} \State \textbf{input:} $text$, a sequence of lexical units \State \hspace*{\algorithmicindent}\hspace*{\algorithmicindent}\hspace{.16667em} $category$, the category the document belongs to \State \textbf{local variables:} $cursors$, a set of prefix tree nodes \State \State $cursors$ $\gets$ an empty set \For{\textbf{each} $term$ \textbf{in} $text$} \If{$term$ \textbf{is not} a word} \State $cursors$ $\gets$ an empty set \Else \State add $category$.\Call{Prefix-Tree}{}.\Call{Root}{} to $cursors$ \For{\textbf{each} $node$ \textbf{in} $cursors$} \If{$node$ does \textbf{not} have a child for $term$} \State $node$.\Call{Child-Node}{}.\Call{New}{$term$} \EndIf \State $child\_node$ $\gets$ $node$.\Call{Child-Node}{}[$term$] \State $child\_node$.\Call{Freq}{} $\gets$ $child\_node$.\Call{Freq}{} + 1 \If{$child\_node$.\Call{Level}{} $\ge$ $MAX\_LVL$} \State remove $node$ from $cursors$ \Else \State replace $node$ with $child\_node$ in $cursors$ \EndIf \EndFor \EndIf \EndFor \EndProcedure \end{algorithmic} \end{algorithm} The original SS3 learning algorithm only needs a dictionary of term-frequency pairs for each category. Each dictionary is updated as new documents are processed ---i.e., unseen terms are added and frequencies of already seen terms are updated. Note that these frequencies are the only elements we need to store since, to compute $lv(w,c)$, we only need to know $w$'s frequency in $c$, $tf_{w,c}$ (see \autoref{eq:lv}). Likewise, the $\tau$-SS3 learning algorithm only needs to store the frequencies of all word sequences seen while processing training documents. More precisely, given a fixed positive integer $n$, it must store information about all word $k$-grams seen during training, with $1 \leq k \leq n$ ---i.e., single words, bigrams, trigrams, etc.
To achieve this, the new learning algorithm uses a \emph{prefix tree} (also called \emph{trie})\cite{trie1960, crochemore2009trie} to store all the frequencies, as shown in \autoref{alg:training}. Note that instead of having $k$ different dictionaries, one for each $k$-gram (e.g. one for words, one for bigrams, etc.), we have decided to use a single prefix tree since all n-grams share a common prefix with the shorter ones. Additionally, note that instead of processing the input document $k$ times, again one for each $k$-gram, we have decided to use multiple cursors to be able to simultaneously store all sequences, allowing the input to be processed as a stream. Finally, note that lines 8 and 9 of \autoref{alg:training} ensure that we are only taking into account n-grams that make sense, i.e., those composed only of words. All these previous observations, as well as the algorithm's intuition, are illustrated with an example in \autoref{fig:t-ss3-training}. This example assumes that the training has just begun for the first time and that the short sentence, \emph{``Mobile APIs, for mobile developers''}, is the first document to be processed. Note that this tree will continue to grow, later, as more documents are processed. Thus, each category has a \emph{prefix tree} storing information linked to word sequences, in which there is a tree node for each learned k-gram. Note that in \autoref{alg:training} there will never be more than $MAX\_LVL$ cursors and that the height of the trees will never grow beyond $MAX\_LVL$ since nodes at level 1 store 1-grams, nodes at level 2 store 2-grams, and so on. Finally, it is worth mentioning that this learning algorithm allows us to keep the original one's virtues.
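The multi-cursor prefix-tree update described above can be sketched as follows (our own minimal illustration; the node and function names are not PySS3's actual API):

```python
MAX_LVL = 3  # maximum allowed n-gram length

class Node:
    """A prefix-tree node; its level equals the length of the stored k-gram."""
    def __init__(self, level=0):
        self.children, self.freq, self.level = {}, 0, level

def learn(root, terms):
    """Update a category's prefix tree with a single pass over the input."""
    cursors = []
    for term in terms:
        if not term.isalpha():   # punctuation removes all active cursors
            cursors = []
            continue
        cursors.append(root)     # a fresh cursor always starts at the root
        advanced = []
        for node in cursors:
            # one node per learned k-gram; create it on first sight
            child = node.children.setdefault(term, Node(node.level + 1))
            child.freq += 1
            if child.level < MAX_LVL:  # keep the tree at most MAX_LVL deep
                advanced.append(child)
        cursors = advanced

root = Node()
learn(root, "mobile apis , for mobile developers".split())
# root.children["mobile"].freq == 2; "for mobile developers" is a depth-3 path.
```

The punctuation reset mirrors lines 8--10 of Algorithm 1, and the level check is what bounds both the number of cursors and the tree height by $MAX\_LVL$.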
Namely, the training is still incremental (i.e., it supports online learning) since there is no need either to store all the documents or to re-train from scratch every time new training documents become available; instead, it is only necessary to update the already created trees. \subsection{Classification} \begin{algorithm}[t!] \small \caption{\small Sentence classification algorithm. \textproc{Map} applies the $gv$ function to every n-gram in $ngrams$ and returns a list of resultant vectors. \textproc{Reduce} reduces $ngrams\_cvs$ to a single vector by applying the $\oplus_{0}$ operator cumulatively.} \label{alg:classification} \begin{algorithmic}[1] \Statex \Function{Classify-Sentence}{$sentence$} \textbf{returns} a confidence vector \State \textbf{input:} $sentence$, a sequence of lexical units \State \textbf{local variables:} $ngrams$, a sequence of n-grams \State \inputindent\inputindent \hspace{.75em} $ngrams\_cvs$, confidence vectors \State \State $ngrams \gets$ \Call{Parse}{$sentence$} \State $ngrams\_cvs \gets$ \Call{Map}{}($gv$, $ngrams$) \State \Return \Call{Reduce}{}(\Call{$\oplus_{0}$}{}, $ngrams\_cvs$) \EndFunction \Statex \hrulefill \Function{Parse}{$sentence$} \textbf{returns} a sequence of n-grams \State \textbf{input:} $sentence$, a sequence of lexical units \State \textbf{global variables:} $categories$, the learned categories \State \textbf{local variables:} $ngram$, a sequence of words \State \inputindent\inputindent \hspace{.75em} $output$, a sequence of n-grams \State \inputindent\inputindent \hspace{.75em} $bests$, a list of n-grams \State \State $cur \gets$ the first term in $sentence$ \While{$cur$ \textbf{is not} empty} \For{\textbf{each} $cat$ \textbf{in} $categories$} \State $bests[cat] \gets$ \Call{Best-N-Gram}{$cat$, $cur$} \EndFor \State $ngram \gets$ the n-gram with the highest $gv$ in $bests$ \State add $ngram$ to $output$ \State move $cur$ forward $ngram$.\Call{Length}{} positions \EndWhile \State \Return $output$ \EndFunction \Statex
\hrulefill \Function{Best-N-Gram}{$cat$, $term$} \textbf{returns} an n-gram \State \textbf{input:} $cat$, a category \State \hspace*{\algorithmicindent}\hspace*{\algorithmicindent}\hspace{.16667em} $term$, a cursor pointing to a term in the sentence \State \textbf{local variables:} $state$, a node of $cat$.\Call{Prefix-Tree}{} \State \inputindent\inputindent \hspace{.75em} $ngram$, a sequence of words \State \inputindent\inputindent \hspace{.75em} $best\_ngram$, a sequence of words \State \State $state \gets$ $cat$.\Call{Prefix-Tree}{}.\Call{Root}{} \State add $term$ to $ngram$ \State $best\_ngram \gets$ $ngram$ \While{$state$ has a child for $term$} \State $state \gets$ $state$.\Call{Child-Node}{}[$term$] \State $term \gets$ next word in the sentence \State add $term$ to $ngram$ \If{$gv(ngram, cat) > gv(best\_ngram, cat)$} \State $best\_ngram \gets$ $ngram$ \EndIf \EndWhile \State \Return $best\_ngram$ \EndFunction \end{algorithmic} \end{algorithm} The original classification algorithm remains mostly unchanged\footnote{See Algorithm 1 from the original work \cite{burdisso2019}.}; we only need to change the process by which sentences are split: instead of being split into single words, they are now split into variable-length n-grams. Also, these n-grams must be ``the best possible ones'', i.e., those having the maximum $gv$ value. To achieve this goal, we will use the prefix tree of each category as a \emph{deterministic finite automaton} (DFA) to recognize the most relevant sequences. In effect, every node will be considered a final state if its $gv$ is greater than or equal to a small constant $\epsilon$. Thus, every DFA will advance its input cursor until no valid transition can be applied; then, the state (node) with the highest $gv$ value is selected. This process is illustrated in more detail in \autoref{fig:dfa}.
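The DFA walk can be sketched as follows (an illustration under our own simplified tree encoding, with made-up $gv$ values; this is not the actual PySS3 code, and it covers a single category's DFA, whereas \textproc{Parse} compares the best n-gram across all categories):

```python
EPS = 0.0  # threshold above which a node counts as a final state

# A category's prefix tree encoded as nested dicts: word -> (gv, subtree).
# The gv values are hypothetical, chosen to match the running example.
TREE = {
    "machine": (0.05, {"learning": (0.23, {})}),
    "learning": (0.04, {}),
    "is": (0.00, {}),
}

def best_ngram(tree, words, i):
    """Advance the DFA from words[i]; return the best final state visited."""
    node, ngram = tree, []
    best, gv_best = [words[i]], EPS
    j = i
    while j < len(words) and words[j] in node:
        gv_node, node = node[words[j]]
        ngram.append(words[j])
        if gv_node >= gv_best:  # a stronger (possibly longer) final state
            best, gv_best = list(ngram), gv_node
        j += 1
    return best

def parse(tree, sentence):
    """Split a sentence into the best variable-length n-grams."""
    words, i, out = sentence.lower().split(), 0, []
    while i < len(words):
        ngram = best_ngram(tree, words, i)
        out.append(" ".join(ngram))
        i += len(ngram)  # move the input cursor past the recognized n-gram
    return out

# parse(TREE, "Machine learning is being widely used")
# -> ["machine learning", "is", "being", "widely", "used"]
```

Note how, as in the figure, the lookahead feeds the automaton past ``learning'' into ``is'', but the returned n-gram is the best final state seen, so the input cursor only advances two positions.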
Finally, the formal algorithm is given in \autoref{alg:classification}.\footnote{Note that for this algorithm to be included as part of SS3's overall classification algorithm, we only need to modify the definition of \textproc{Classify-At-Level}$(text, n)$ defined in Algorithm 1 of the original paper \cite{burdisso2019} so that, when called with $n \leq 1$, it calls our new function, \textproc{Classify-Sentence}.} Note that instead of splitting the sentences into words simply by using a delimiter, we now call a \textproc{Parse} function on line 6. \textproc{Parse} intelligently splits the sentence into a list of variable-length n-grams. This is done by calling the \textproc{Best-N-Gram} function on line 20, which carries out the process illustrated in \autoref{fig:dfa} to return the best n-gram for a given category. \begin{figure*}[t!] \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=90mm]{images/dfa0} \caption{} \label{fig:dfa_a} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=90mm]{images/dfa1} \caption{} \label{fig:dfa_b} \end{subfigure} \caption{Example of recognizing the best n-gram for the first sentence block of \autoref{fig:t-ss3-example}, ``Machine learning is being widely used''. For simplicity, in this example we only show the \emph{technology} DFA. There are conceptually two cursors: the black one ($\blacktriangle$) represents the input cursor and the white one ($\vartriangle$) the ``lookahead'' cursor used to feed the automata. (a) The lookahead cursor has advanced, feeding the DFA with three words (``machine'', ``learning'', and ``is'') until no more state transitions were available.
There were two possible final states, one for ``machine'' and another for ``machine$\rightarrow{}$learning''; the latter is selected since it has the highest $gv$ (0.23); (b) Finally, after the bigram ``machine$\rightarrow{}$learning'' was recognized (see the first two word blocks painted in gray in \autoref{fig:t-ss3-example}), the input cursor advanced two positions and is ready to start the process again using ``is'' as the first word to feed the automata.} \label{fig:dfa} \end{figure*} \section{Experimental results} \label{sec:results} \subsection{Tasks and datasets} Experiments were conducted on three of the CLEF eRisk open tasks, namely the eRisk 2017 and 2018 early depression detection tasks \cite{losada2017erisk, losada2018overview} and the eRisk 2018 early anorexia detection task \cite{losada2018overview}. These tasks focused on sequentially processing the content posted by users on Reddit. Thus, the datasets that were used in these tasks are collections of writings (submissions) posted by a subset of Reddit users (referred to as ``subjects''). In order to compare the results among different participants, as usual, each dataset is split into a training set and a test set. Participating research teams were given the training set to train and tune their models offline and were allowed to submit up to five models each. To carry out the test phase, the eRisk organizers divided each user's writing history into 10 chunks.\footnote{Thus, each chunk contained 10\% of the complete user's history.} Classifiers were given each user's history, one chunk at a time, and after receiving each chunk, they could either classify the user as depressed/anorexic or wait for the next chunk. Furthermore, models had to make the correct decision \emph{as early as possible} since their performance was measured taking into account not only the effectiveness but also the delay of their decisions. Namely, the evaluation metric that was used is called \emph{Early Risk Detection Error} (ERDE).
The ERDE measure was first introduced by Losada et al. \cite{losada2017erisk}; it was designed to take into account not only the correctness of decisions but also the delay taken to emit them. The delay is measured by counting the number ($k$) of different textual items seen before making the binary decision ($d$), which could be positive ($p$) or negative ($n$). Formally, the ERDE measure is defined as follows: $$ ERDE_o(d,k) = \left\{ \begin{array}{@{}l@{\thinspace}l} c_{fp} &\ if\ d=p\ AND\ truth=n\\ c_{fn} &\ if\ d=n\ AND\ truth=p\\ lc_o(k)\cdot c_{tp} &\ if\ d=p\ AND\ truth=p\\ 0 &\ if\ d=n\ AND\ truth=n\\ \end{array} \right. $$ where the sigmoid \emph{latency cost function}, $lc_o(k)$, is defined by: $$lc_o(k) = 1 - \frac{1}{1+e^{k - o}}$$ Note that the ERDE measure is parameterized by the $o$ parameter, which acts as the ``deadline'' for decision making, i.e., if a correct positive decision is made at a time $k > o$, it is taken by $ERDE_o$ as if it were incorrect (a false positive). In our case, the performance of all participating models was measured using $ERDE_5$ and $ERDE_{50}$. \subsection{Implementation details} The new model implementation was coded in \emph{Python} using only built-in functions and data structures (such as \emph{dict}, and the \emph{map} and \emph{reduce} functions). The implementation was added to the official SS3 PyPI package, PySS3 \cite{burdisso2019pyss3}, and its source code is available at \href{https://github.com/sergioburdisso/pyss3}{\url{https://github.com/sergioburdisso/pyss3}}. During experimentation, in order to avoid wasting memory by letting the digital trees grow unnecessarily large, every million words a ``pruning'' procedure was executed in which all the nodes with a frequency less than or equal to 10 were removed.
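For concreteness, the $ERDE_o$ measure defined in this section can be sketched directly; note that the cost constants $c_{fp}$, $c_{fn}$, $c_{tp}$ are task-dependent and the default values below are placeholders, not the official eRisk settings:

```python
import math

def lc(k, o):
    """Sigmoid latency cost: ~0 for early decisions, ~1 past the deadline o."""
    return 1.0 - 1.0 / (1.0 + math.exp(k - o))

def erde(decision, truth, k, o, c_fp=0.1, c_fn=1.0, c_tp=1.0):
    """ERDE_o for one subject; decision/truth are 'p' (positive) or 'n'."""
    if decision == "p" and truth == "n":
        return c_fp                 # false positive: fixed cost
    if decision == "n" and truth == "p":
        return c_fn                 # false negative: fixed cost
    if decision == "p" and truth == "p":
        return lc(k, o) * c_tp      # true positive: cost grows with delay k
    return 0.0                      # true negative: no cost

# Under these placeholder costs, a very late true positive (k = 100 > o = 50)
# is penalized almost as heavily as a miss, while an early one costs ~0.
late, early = erde("p", "p", 100, 50), erde("p", "p", 1, 50)
```

This makes explicit why a correct positive decision made past the deadline $o$ is effectively counted as an error.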
We also fixed the maximum n-gram length to 3 (i.e., we set $MAX\_LVL = 3$).\footnote{We tried using different values, from 2 to 10, but the best performance was obtained with $MAX\_LVL = 3$.} Finally, since we wanted to perform a direct (and fair) comparison against the original SS3 model, we decided to use the same hyperparameter values that were used in the original SS3 paper \cite{burdisso2019}. Therefore, we set $\lambda=\rho=1$ and $\sigma= 0.455$, which were originally selected by applying a grid search to minimize the $ERDE_{50}$ metric over the training data using 4-fold cross-validation. Furthermore, as in the original work, vector addition was used as the \emph{summary operator}, $\oplus_j$, for all the levels. Likewise, the same policy for early classification was also applied, i.e., users were classified as positive as soon as the positive accumulated \emph{confidence value} exceeded the negative one. Regarding preprocessing, as in the original work, no method was used except for simple accent removal, lowercase conversion, and tokenization.\footnote{ We also tried performing stemming and lemmatization using the Natural Language Toolkit (NLTK), but contrary to what we initially expected, the early classification performance was reduced. } This experimental setting ensured that, other than the addition of the variable-length word n-grams, no other factors influenced the obtained results. \subsection{Results} As described in more detail in the overview of each task \cite{losada2017erisk, losada2018overview} and the CLEF Working Notes,\footnote{Section ``Early risk prediction on the Internet'' of the CLEF Working Notes for 2017 (\url{http://ceur-ws.org/Vol-1866/}) and 2018 (\url{http://ceur-ws.org/Vol-2125/}).} a total of 180 models were submitted to these three eRisk tasks, ranging from simple to more advanced deep learning models.
For instance, some research groups used simple classifiers such as Multinomial Naive Bayes, Logistic Regression, or Support Vector Machine (SVM), while others made use of more advanced methods such as different types of Recurrent Neural Networks with embeddings, graph-based models, or even ensembles of multiple classifiers. Results for each of the three tasks are shown in \autoref{tab:results-d2017}, \autoref{tab:results-d2018}, and \autoref{tab:results-a2018}, respectively. As can be seen, although not significantly, $\tau$-SS3 improves SS3's performance in all three tasks. Furthermore, $\tau$-SS3 obtained the best $ERDE_{50}$ values in both depression detection tasks. However, it obtained the second-best value in the anorexia detection task; the best value was obtained by the FHDO-BCSGD \cite{trotzek2018word} model, which consists of a Convolutional Neural Network (CNN) with \emph{fastText} word embeddings. Regarding $ERDE_{5}$, $\tau$-SS3 also outperformed all the other models in both the anorexia detection and the 2017 depression detection tasks. However, the best value in the 2018 depression detection task was obtained by the UNSLA \cite{funez2018} model, which consists of an SVM classifier using a novel (time-aware) document representation, called FTVT. It is worth mentioning that, although not included in the tables, the new model also improved the original SS3's performance in terms of the standard (timeless) measures: precision, recall, and $F_1$. For instance, in the eRisk 2017 ``Early Depression Detection'' task, $\tau$-SS3's recall, precision and $F_1$ were 0.55, 0.43 and 0.77 respectively, against SS3's 0.52, 0.44 and 0.63. Additionally, although these values were not the best among all participants, they were quite above the average (0.39, 0.36, and 0.51), which is not bad considering that our hyperparameter values were selected to optimize the $ERDE_{50}$ measure.
Results suggest that learned n-grams could contribute to improving the performance of the original model since, although not significantly, $\tau$-SS3 outperformed SS3 in all three tasks. Furthermore, and perhaps more importantly, learned n-grams also contribute to improving visual explanations given by SS3, as illustrated in \autoref{fig:subject9579_descriptive}.\footnote{We have built a live demo to try out $\tau$-SS3 online, available at \href{http://tworld.io/ss3}{\url{http://tworld.io/ss3}}, in which an interactive visual explanation, similar to the one shown in this figure, is given along with the classification result.} \begin{table}[t] \centering \caption{Results on the eRisk 2017 ``Early Depression Detection'' task ordered by ERDE$_{50}$ (the lower, the better). A total of 30 models were submitted by 8 research teams. Here we are only showing the model with best $ERDE_5$ and the model with the best $ERDE_{50}$ of each participating team.} \label{tab:results-d2017} \begin{tabular}{l | c c} \hline Model & $ERDE_{5}$ & $ERDE_{50}\blacktriangle$ \\ \hline \textbf{$\tau$-SS3}$\star$ & \textbf{12.6\%} & \textbf{7.70\%} \\ \textbf{SS3}$\star$ & \textbf{12.6\%} & 8.12\% \\ UNSLA & 13.66\% & 9.68\% \\ FHDO-BCSGA & 12.82\% & 9.69\% \\ UArizonaD & 14.73\% & 10.23\% \\ FHDO-BCSGB & 12.70\% & 10.39\% \\ UArizonaB & 13.07\% & 11.63\% \\ UQAMD & 13.23\% & 11.98\% \\ GPLC & 14.06\% & 12.14\% \\ CHEPEA & 14.75\% & 12.26\% \\ LyRE & 13.74\% & 13.74\% \\ \hline \end{tabular} \end{table} \begin{table}[t] \centering \caption{Results on the eRisk 2018 ``Early Depression Detection'' task ordered by ERDE$_{50}$ (the lower, the better). A total of 44 models were submitted by 11 research teams. 
Here we are only showing the model with best $ERDE_5$ and the model with the best $ERDE_{50}$ of each participating team.} \label{tab:results-d2018} \begin{tabular}{l | c c} \hline Model & $ERDE_{5}$ & $ERDE_{50}\blacktriangle$ \\ \hline \textbf{$\tau$-SS3}$\star$ & 9.48\% & \textbf{6.17\%} \\ \textbf{SS3}$\star$ & 9.54\% & 6.35\% \\ FHDO-BCSGB & 9.50\% & 6.44\% \\ FHDO-BCSGA & 9.21\% & 6.68\% \\ LIIRB & 10.03\% & 7.09\% \\ PEIMEXC & 10.07\% & 7.35\% \\ UNSLA & \textbf{8.78\%} & 7.39\% \\ LIIRA & 9.46\% & 7.56\% \\ UQAMA & 10.04\% & 7.85\% \\ LIRMMD & 11.32\% & 8.08\% \\ UDCA & 10.93\% & 8.27\% \\ UPFA & 10.01\% & 8.28\% \\ RKMVERID & 9.97\% & 8.63\% \\ UDCC & 9.47\% & 8.65\% \\ RKMVERIC & 9.81\% & 9.08\% \\ LIRMMA & 10.66\% & 9.16\% \\ TBSA & 10.81\% & 9.22\% \\ TUA1C & 10.86\% & 9.51\% \\ TUA1A & 10.19\% & 9.70\% \\ \hline \end{tabular} \end{table} \begin{table}[t] \centering \caption{Results on the eRisk 2018 ``Early Anorexia Detection'' task ordered by ERDE$_{50}$ (the lower, the better). A total of 34 models were submitted by 9 research teams. Here we are only showing the model with best $ERDE_5$ and the model with the best $ERDE_{50}$ of each participating team.} \label{tab:results-a2018} \begin{tabular}{l | c c} \hline Model & $ERDE_{5}$ & $ERDE_{50}\blacktriangle$ \\ \hline FHDO-BCSGD & 12.15\% & \textbf{5.96\%} \\ \textbf{$\tau$-SS3}$\star$ & \textbf{11.31\%} & 6.26\% \\ FHDO-BCSGE & 11.98\% & 6.61\% \\ \textbf{SS3}$\star$ & 11.56\% & 6.69\% \\ FHDO-BCSGB & 11.75\% & 6.84\% \\ PEIMEXB & 12.41\% & 7.79\% \\ UNSLB & 11.40\% & 7.82\% \\ RKMVERIA & 12.17\% & 8.63\% \\ LIIRB & 13.05\% & 10.33\% \\ LIIRA & 12.78\% & 10.47\% \\ TBSA & 13.65\% & 11.14\% \\ UPFA & 13.18\% & 11.34\% \\ UPFD & 12.93\% & 12.30\% \\ TUA1C & 13.53\% & 12.57\% \\ LIRMMB & 14.45\% & 12.62\% \\ LIRMMA & 13.65\% & 13.04\% \\ \hline \end{tabular} \end{table} \begin{figure}[t!] 
\small \begin{subfigure}{90mm} \centering \includegraphics[width=90mm]{images/vd-example-a} \caption{Sentence-level explanation given by SS3} \label{fig:subject9579_descriptive_a} \end{subfigure} \begin{subfigure}{90mm} \centering \includegraphics[width=90mm]{images/vd-example-c} \caption{Original word-level explanation, given by $SS3$} \label{fig:subject9579_descriptive_b} \end{subfigure} \begin{subfigure}{90mm} \centering \includegraphics[width=90mm]{images/vd-example-b} \caption{New word-level explanation, given by $\tau$-SS3} \label{fig:subject9579_descriptive_c} \end{subfigure} \caption{This figure shows a fragment of the visual explanation given by SS3 in Figure 9 of the original article \cite{burdisso2019}. It shows the subject 9579's writing 60 of the 2017 depression detection task. Blocks are painted proportionally to the true \emph{confidence values} obtained for the ``\emph{depressed}'' category after experimentation. This visual explanation is shown at two different levels: (a) sentences and (b) words. For comparison purposes, in (c), we now show the new visual explanation given by $\tau$-SS3. Note that more useful information is now shown, namely the trigram ``I was feeling'' and the bigram ``kill myself'', improving the richness of visual explanations.} \label{fig:subject9579_descriptive} \end{figure} \section{Conclusions and future work} \label{sec:conclusion} In this article, we introduced $\tau$-SS3, an extension of the SS3 classification model that allows it to learn and recognize variable-length word n-grams ``on the fly.'' This extension gives $\tau$-SS3 the ability to recognize useful patterns over text streams. The new model uses a \emph{prefix tree} to store variable-length n-grams seen during training. The same data structure is then used as a DFA to recognize important word sequences as the input is read. 
Experimental results showed that, although not significantly, $\tau$-SS3 outperformed SS3 in terms of both standard performance metrics and the ERDE metrics. These results suggest that the learned n-grams contribute positively to the model's performance as well as to the expressiveness of its visual explanations. Future research should focus on evaluating and analyzing the space and computational complexity of the algorithms and data structures. Furthermore, it would be interesting to analyze the impact of pruning procedures on both performance and computational resource savings. \section*{\refname} \bibliographystyle{splncs04}
\section{Introduction and auxiliary results} Let $x \in \mathbb{R}_+$ and let $f \in L_p(\mathbb{R}_+),\ 1\le p< \infty,$ be a complex-valued function. It is known that the classical Laplace transform $$(\mathcal{L} f)(x)= \int_0^\infty e^{-xt}f(t)dt,\quad x >0,\eqno(1)$$ is well defined, and one can compute its iteration simply by changing the order of integration and calculating an elementary integral. This leads us to the Stieltjes transform operator. Namely, we obtain $$(\mathcal{S} f)(x) = (\mathcal{L}^2 f)(x)= \int_0^\infty e^{-xs}\int_0^\infty e^{-st}f(t)dtds = \int_0^\infty \frac{f(t)}{x+t} dt,\quad x >0,\eqno(2)$$ where the change of the order of integration is allowed by Fubini's theorem via the following estimate, based on the H\"{o}lder inequality $$\int_0^\infty e^{-xs}\int_0^\infty e^{-st} |f(t)| dtds \le \int_0^\infty e^{-xs}\left(\int_0^\infty e^{-q st} dt\right)^{1/q} ds \left(\int_0^\infty |f(t)|^p dt\right)^{1/p}$$ $$= q^{-1/q} ||f||_p \Gamma \left(1- q^{-1} \right) x^{q^{-1}-1}, \ {1\over q} + {1\over p} =1.$$ Let us compute, in turn, the iteration of the Stieltjes transform (2) of an arbitrary $f \in L_p(\mathbb{R}_+),\ 1\le p< \infty$. Similar reasoning, with the estimate $$ \int_0^\infty \frac{1}{x+s} \int_0^\infty \frac{|f(t)|}{s+t} dtds \le ||f||_p \int_0^\infty \frac{1}{x+s} \left(\int_0^\infty \frac{1}{(s+t)^q} dt\right)^{1/q} ds $$ $$=\left[ \frac{\Gamma(q-1)}{\Gamma(q)}\right]^{1/q} \Gamma(q^{-1})\Gamma(1-q^{-1}) ||f||_p \ x^{q^{-1}-1}, \ {1\over q} + {1\over p} =1 $$ and relations (2.2.6.24), (7.3.2.148) in \cite{prud}, Vol. 1 and \cite{prud}, Vol. 3, respectively, leads us to the following transformation $$(\mathcal{S}_2 f)(x) \equiv G(x)= \int_0^\infty \frac{\log(x)- \log(t)}{x-t} f(t) dt,\ x >0,\eqno(3)$$ whose kernel has a removable singularity at the point $t=x$ and whose integral exists in the Lebesgue sense.
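The step from (2) to (3) can be checked numerically: integrating the Stieltjes kernel against itself over the intermediate variable reproduces the logarithmic kernel of (3). A small sketch of this check (our own illustration, assuming SciPy is available):

```python
import math
from scipy.integrate import quad

# Iterating the Stieltjes kernel over s in (0, inf),
#   int_0^inf ds / ((x + s)(s + t)),
# should reproduce the kernel of the S_2-transform in (3):
#   (log x - log t) / (x - t).
x, t = 2.0, 3.0
iterated, _ = quad(lambda s: 1.0 / ((x + s) * (s + t)), 0.0, math.inf)
kernel = (math.log(x) - math.log(t)) / (x - t)
print(iterated, kernel)  # both equal log(3/2) ~ 0.4054651
```

The agreement follows from the partial-fraction identity $\frac{1}{(x+s)(s+t)}=\frac{1}{x-t}\left(\frac{1}{s+t}-\frac{1}{s+x}\right)$, which integrates in closed form.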
This transformation was first introduced in \cite{boas} in the form of a Stieltjes integral and is called the iterated Stieltjes transform or the $\mathcal{S}_2$-transform. The $\mathcal{S}_2$-transform (3) was extended to spaces of generalized functions in \cite{dub} (see also \cite{prudni}). In 1990, the first author proposed \cite{con} a new method of convolution constructions for integral transforms, based on double Mellin-Barnes integrals (see \cite{hai}, \cite{luch}). Following this direction, he established for the first time, as an interesting particular case, the convolution operator for the Stieltjes transform (2) (see \cite{hai}, formula (24.38)) $$(f *g)_{\mathcal{S}} (x)= f(x) (Hg)(x)+ g(x) (Hf)(x),\ x >0,\eqno(4)$$ where $$(Hf)(x)= \int_0^\infty \frac{f(t)}{t- x} dt\eqno(5)$$ is the Hilbert transform operator. Moreover, the corresponding convolution theorem $$\left( \mathcal{S} \ (f *g)_{\mathcal{S}} \right)(x)= ( \mathcal{S} f)(x) ( \mathcal{S} g)(x)\eqno(6)$$ was proved in the class of functions associated with the Mellin transform. Later \cite{sri} these results were extended to $L_p$-spaces and applied to a class of singular integral equations of convolution type (4). Our main goal in this paper is to apply the convolution method to the transformation (3) in order to derive the related convolution operator, to prove the convolution theorem and a Titchmarsh-type theorem (the latter concerning the absence of divisors of zero in the convolution product), and to apply these results by finding solutions and solvability conditions for a new class of singular integral equations. We begin, however, with an investigation of the mapping properties of the iterated Stieltjes transform (3), proving a Paley-Wiener-type inversion theorem for this transformation, as was done by M.
Dzrbasjan for the classical Stieltjes transform (2) (see \cite{mhit}, Theorem 2.11), and which differs from the corresponding inversion in \cite{boas}. \section{Inversion theorem for the iterated Stieltjes transformation} Let $f \in L_2(\mathbb{R}_+)$. Then according to \cite{tit} its Mellin transform $f^*(s),\ s \in \sigma_s= \{s \in \mathbb{C}: {\rm Re}\, s={1\over 2} \},$ is defined by the integral $$f^*(s)= \int_0^\infty f(t) t^{s-1}dt,\eqno(7)$$ which converges in the mean square sense with respect to the norm in $L_2(\sigma_s)$. Reciprocally, the inversion formula $$f(x)= {1\over 2\pi i}\int_{\sigma_s} f^*(s) x^{-s} ds\eqno(8)$$ holds, with convergence of the integral in the mean square sense with respect to the norm in $L_2(\mathbb{R}_+)$. Furthermore, for any $f_1,f_2 \in L_2(\mathbb{R}_+)$ the generalized Parseval identity $$\int_0^\infty f_1(xt)f_2(t) dt= {1\over 2\pi i}\int_{\sigma_s} f_1^*(s)f_2^*(1-s) x^{-s} ds, \ x >0\eqno(9)$$ holds, as does Parseval's equality for the squares of the $L_2$-norms $$\int_0^\infty |f(x)|^2 dx = {1\over 2\pi} \int_{-\infty}^{\infty} \left|f^*\left({1\over 2}+ i\tau\right)\right|^2 d\tau.\eqno(10)$$ It is easily seen that $f(x) \in L_2(\mathbb{R}_+)$ if and only if ${1\over x} f\left({1\over x}\right) \in L_2(\mathbb{R}_+)$. Hence, writing operator (3) in the form $$G(x)= \int_0^\infty \frac{\log(x/t)}{x/t -1} f(t) {dt\over t}= \int_0^\infty \frac{\log(xt)}{xt -1} f(1/t){dt\over t}$$ and observing that $\log(x)/(x-1) \in L_2(\mathbb{R}_+)$, we appeal to the generalized Parseval equality (9) and relation (8.4.6.11) in \cite{prud}, Vol.
3 to derive the representation $$G(x)= \int_0^\infty \frac{\log(xt)}{xt -1} f(1/t){dt\over t}= {1\over 2\pi i}\int_{\sigma_s} \left[\Gamma(s)\Gamma(1-s)\right]^2 f^*(s)x^{-s} ds.\eqno(11)$$ As we see from (11), (8) and the reflection formula for Euler's gamma-function, the Mellin transform of $G$ is equal to $$G^*(s)= \frac{\pi^2}{\sin^2(\pi s)} f^*(s),\quad s \in \sigma_s.\eqno(12)$$ In order to prove the inversion theorem for transformation (3), we will employ Dzrbasjan's classes of functions (\cite{mhit}, Chap. 2). Indeed, we have {\bf Definition 1}. Let $\Phi(s)$ be an entire function $$\Phi\left(s\right)= \sum_{k=0}^\infty d_k s^k,$$ having on the line $\sigma_s$ the expansion $$\Phi\left({1\over 2} +i\tau\right)= \sum_{k=0}^\infty c_k\tau^{2k},\eqno(13)$$ where $c_0 >0, \ c_k \ge 0, \ k=1,2,\dots .$ We will say that $f(x) \in L_2^{(\Phi)}(\mathbb{R}_+)$ if 1) $f$ is infinitely differentiable on $\mathbb{R}_+$ and $$\left(-x{d\over dx}\right)^k f(x) \in L_2(\mathbb{R}_+), \ k=0,1,2,\dots\ ;$$ 2) the following equality holds $$\Phi \left(-x{d\over dx}\right)f(x) = \sum_{k=0}^\infty d_k \left(-x{d\over dx}\right)^k f(x),$$ where the latter operator series converges in the mean square with respect to the norm in $L_2(\mathbb{R}_+)$. Returning to (12), we have $$f^*(s)= \frac{\sin^2(\pi s)}{\pi^2}G^*(s),\quad s \in \sigma_s.\eqno(14)$$ Hence, letting $\Phi(s)={1\over \pi^2} \sin^2(\pi s)$ in our case (see (13)), we find $$\Phi\left({1\over 2} +i\tau\right) = \frac{1+\cosh(2\pi\tau)}{2\pi^2} = {1\over \pi^2} + {1\over 2\pi^2} \sum_{k=1}^\infty \frac{(2\pi)^{2k}}{(2k)!} \tau^{2k}.\eqno(15)$$ Now we are ready to prove the inversion theorem for the iterated Stieltjes transform (3). {\bf Theorem 1}. {\it For an arbitrary function $f \in L_2(\mathbb{R}_+)$, formula $(3)$ defines everywhere on $\mathbb{R}_+$ a function $G \in L_2^{(\Phi)}(\mathbb{R}_+)$, where $\Phi(s)={1\over \pi^2} \sin^2(\pi s)$.
Moreover, almost everywhere the reciprocal inversion formula $$f(x)= {1\over \pi^2} \left[ G(x) + {1\over 2 \sqrt x } \sum_{k= 1}^\infty \frac{ (-1)^{k} }{(2k)!} \left(2\pi x{d\over dx}\right)^{2k} \left(\sqrt x \ G(x)\right)\right]\eqno(16)$$ holds, where the operator series converges in the mean square with respect to the norm in $L_2(\mathbb{R}_+)$. Conversely, for any $G \in L_2^{(\Phi)}(\mathbb{R}_+)$ formula $(16)$ defines almost everywhere on $\mathbb{R}_+$ a function $f \in L_2(\mathbb{R}_+)$ and the reciprocal formula $(3)$ holds.} \begin{proof} According to Definition 1, the Parseval equality (10) and identity (14), we see that the right-hand side of (14) belongs to $L_2(\sigma_s)$. Thus, via Lemma 2.4 from \cite{mhit}, $G \in L_2^{(\Phi)}(\mathbb{R}_+)$. Meanwhile, evidently, for each $k=0,1,2,\dots$ $$\sup_{s \in \sigma_s} \left|s^k \Phi^{-1}(s)\right| = a_k < \infty.$$ Hence, due to the inequalities $$\int_{\sigma_s} \left|s^k G^*(s)\right| ^2 |ds| \le a^2_k \int_{\sigma_s} \left|\Phi(s) G^*(s)\right| ^2 |ds|, \ k=0,1,2,\dots $$ we have $s^k G^*(s) \in L_2(\sigma_s) \cap L_1(\sigma_s), \ k=0,1,\dots\ .$ Therefore, differentiating under the integral sign, we immediately find the representations $$\left(x{d\over dx}\right)^{2k} \left(\sqrt x \ G(x)\right) = {1\over 2\pi i} \int_{\sigma_s} (s- 1/2) ^{2k} G^*(s) \ x^{1/2 -s} ds,\ k=0,1,\dots,$$ and therefore, $${1\over \sqrt x} P_n \left(x{d\over dx}\right) \left(\sqrt x\ G(x)\right) = {1\over 2\pi i} \int_{\sigma_s} P_n(s) G^*(s) x^{-s} ds,\eqno(17)$$ where $P_n(s)$ denotes the sum $$P_n(s) = {1\over \pi^2}+ {1\over 2\pi^2} \sum_{k= 1}^n (-1)^{k} \frac{(2\pi)^{2k}}{(2k)!} (s-1/2)^{2k}.$$ But (14) and (8) yield $$f(x)= {1\over 2\pi i} \int_{\sigma_s} \Phi(s) G^*(s) x^{-s} ds$$ and from (17), for each $n \in \mathbb{N}$, $$f(x)- {1\over \sqrt x} P_n \left(x{d\over dx}\right) \left(\sqrt x \ G(x)\right)= {1\over 2\pi i} \int_{\sigma_s} \Phi(s) \left[1- \frac{P_n(s)}{\Phi(s)}\right] G^*(s) x^{-s}
ds.$$ Therefore, appealing to the Parseval equality (10), we derive $$\int_0^\infty \left| f(x)- {1\over \sqrt x} P_n \left(x{d\over dx}\right) \left(\sqrt x \ G(x)\right) \right|^2 dx= {1\over 2\pi} \int_{\sigma_s} |\Phi(s) G^*(s)|^2 \left|1- \frac{P_n(s)}{\Phi(s)}\right|^2 | ds|.$$ However, the right-hand side of the latter equality tends to zero as $n \to \infty$ by virtue of the Lebesgue dominated convergence theorem, since $$\lim_{n\to \infty} \left|1- \frac{P_n(s)}{\Phi(s)}\right|= 0, \quad s \in \sigma_s$$ and via (15) for all $n$ $$ \left|1- \frac{P_n(s)}{\Phi(s)}\right| \le 2, \quad s \in \sigma_s.$$ Thus we arrive at the inversion formula (16), where the operator series converges in the mean square with respect to the norm in $L_2(\mathbb{R}_+)$. In the same way, starting from (16) and using (12), we prove the converse proposition of the theorem. \end{proof} \section{The convolution and Titchmarsh theorems} In this section we will construct and study the mapping properties of the convolution related to the iterated Stieltjes transform (3). In fact, according to formula (12.22) in \cite{luch} for the generalized $G$-convolution, we have {\bf Definition 2}. Let $f, g$ be functions from $\mathbb{R}_+$ into $\mathbb{C}$ and let $f^*,\ g^*$ be their Mellin transforms $(7)$. Then the function $f*g$ defined on $\mathbb{R}_+$ by the double Mellin-Barnes integral $$(f*g)(x)= \frac{\sqrt x}{(2\pi i)^2} \int_{\sigma_s} \int_{\sigma_w}\left[ \frac{ \Gamma(s)\Gamma(1-s) \Gamma(w)\Gamma(1-w)} { \Gamma(s+w-1/2)\Gamma(3/2-s-w)}\right]^2 $$$$\times f^*(s)g^*(w) x^{-s-w} dsdw\eqno(18)$$ is called the convolution of $f$ and $g$ (provided that it exists). Using again the reflection formula for gamma functions and elementary trigonometric identities, we obtain $$ \frac{ \Gamma(s)\Gamma(1-s) \Gamma(w)\Gamma(1-w)} { \Gamma(s+w-1/2)\Gamma(3/2-s-w)}= \pi \left[1- \cot(\pi s)\cot(\pi w)\right].$$ {\bf Lemma 1}.
{\it Let $f, g$ be such that their Mellin transforms $f^*,\ g^*$ satisfy conditions $s f^*(s),\ s g^*(s) \in L_2(\sigma_s)$. Then convolution $(18)$ exists as a continuous function on $\mathbb{R}_+$, $f*g \in L_2(\mathbb{R}_+)$ and the following inequalities hold} $$|(f*g)(x)| \le {2\pi \over \sqrt x} \left(\int_{-\infty}^\infty \left|\left({1\over 2}+ i\tau\right) f^*\left({1\over 2}+ i\tau\right)\right|^2 d\tau\right)^{1/2}$$$$\times \left(\int_{-\infty}^\infty \left|\left({1\over 2}+ i\theta\right) g^*\left({1\over 2}+ i\theta\right)\right|^2 d\theta \right)^{1/2},\eqno(19)$$ $$\int_0^\infty |(f*g)(x)|^2 dx \le 16\pi^2 \int_{-\infty}^\infty \left| \left({1\over 2}+ i\theta \right) f^*\left({1\over 2}+ i\theta\right) \right|^2 d\theta $$$$\times \int_{-\infty}^\infty \left| \left({1\over 2}+ i\tau \right) g^*\left({1\over 2}+ i\tau\right)\right|^2 d\tau .\eqno(20)$$ \begin{proof} In fact, with the Schwarz inequality for double integrals, inequality $|\tanh(\pi\tau)| \le 1, \ \tau \in \mathbb{R}$ and computation of elementary integrals, we obtain $$|(f*g)(x)| \le {1\over 4 \sqrt x} \int_{-\infty}^\infty \int_{-\infty}^\infty \left[\tanh(\pi\tau)\tanh(\pi\theta) +1\right]^2 $$$$\times \left| f^*\left({1\over 2}+ i\tau\right)g^*\left({1\over 2}+ i\theta\right) \right|d\tau d\theta\le {1\over 4 \sqrt x} \left(\int_{-\infty}^\infty \int_{-\infty}^\infty \frac{ \left[\tanh(\pi\tau)\tanh(\pi\theta) +1\right]^2 }{\theta^2+1/4} \right.$$$$\left.\times \left|\left({1\over 2}+ i\tau\right) f^*\left({1\over 2}+ i\tau\right)\right|^2d\theta d\tau\right)^{1/2} \left(\int_{-\infty}^\infty \int_{-\infty}^\infty \frac{ \left[\tanh(\pi\tau)\tanh(\pi\theta) +1\right]^2 }{\tau^2+1/4} \right.$$$$\times \left. 
\left|\left({1\over 2}+ i\theta\right) g^*\left({1\over 2}+ i\theta\right)\right|^2d\theta d\tau\right)^{1/2} \le {2\pi\over \sqrt x} \left(\int_{-\infty}^\infty \left|\left({1\over 2}+ i\tau\right) f^*\left({1\over 2}+ i\tau\right)\right|^2 d\tau\right)^{1/2} $$$$\times \left(\int_{-\infty}^\infty \left|\left({1\over 2}+ i\theta\right) g^*\left({1\over 2}+ i\theta\right)\right|^2 d\theta\right)^{1/2},$$ which leads to (19) and guarantees continuity of the convolution $(f*g)(x)$ on $\mathbb{R}_+$ via the Weierstrass test of the uniform convergence of the double integral (18) for $x \ge x_0 >0$. Furthermore, appealing to the Parseval equality (10) and making a simple change of variables $z= s+w-1/2$ in (18), we get $$\int_0^\infty |(f*g)(x)|^2 dx = {\pi\over 8} \int_{-\infty}^\infty \left| \int_{-\infty}^\infty \left[\tanh(\pi\theta)\tanh(\pi(\tau-\theta)) +1\right]^2 \right. $$$$\left. \times f^*\left({1\over 2}+ i\theta\right)g^*\left({1\over 2}+ i(\tau- \theta)\right) d\theta \right|^2 d\tau \le 2 \pi \int_{-\infty}^\infty d\tau $$$$\times \left|\int_{-\infty}^\infty f^*\left({1\over 2}+ i\theta\right)g^*\left({1\over 2}+ i(\tau- \theta)\right) d\theta \right|^2.$$ Hence we employ the generalized Minkowski inequality to derive $$\left(\int_0^\infty |(f*g)(x)|^2 dx\right)^{1/2} \le \sqrt{2\pi} \int_{-\infty}^\infty \left| f^*\left({1\over 2}+ i\theta\right) \right| \left( \int_{-\infty}^\infty \left| g^*\left({1\over 2}+ i(\tau- \theta)\right)\right|^2 d\tau \right)^{1/2} d\theta$$ $$\le 2\sqrt{2 \pi} \int_{-\infty}^\infty \left| f^*\left({1\over 2}+ i\theta\right) \right| d\theta \left( \int_{-\infty}^\infty \left| \left({1\over 2}+ i\tau \right) g^*\left({1\over 2}+ i\tau\right)\right|^2 d\tau \right)^{1/2} $$ % $$\le 2\sqrt{2 \pi} \left( \int_{-\infty}^\infty {d\theta\over \theta^2+ 1/4}\right)^{1/2} \left( \int_{-\infty}^\infty \left| \left({1\over 2}+ i\theta \right) f^*\left({1\over 2}+ i\theta\right) \right|^2 d\theta\right)^{1/2} $$$$\times \left( 
\int_{-\infty}^\infty \left| \left({1\over 2}+ i\tau \right) g^*\left({1\over 2}+ i\tau\right)\right|^2 d\tau \right)^{1/2} $$ $$= 4 \pi \left( \int_{-\infty}^\infty \left| \left({1\over 2}+ i\theta \right) f^*\left({1\over 2}+ i\theta\right) \right|^2 d\theta\right)^{1/2} \left( \int_{-\infty}^\infty \left| \left({1\over 2}+ i\tau \right) g^*\left({1\over 2}+ i\tau\right)\right|^2 d\tau \right)^{1/2} .$$ Thus, squaring both sides of the inequalities, we arrive at (20), which proves the lemma. \end{proof} Now we are ready to prove the convolution theorem for transformation (3). Precisely, we state {\bf Theorem 2}. {\it Let $f^*,\ g^*$ be the Mellin transforms of $f, g$, respectively, satisfying the conditions $s f^*(s),\ s g^*(s) \in L_2(\sigma_s)$. Then the Mellin transform $(f*g)^*(s)$ of the convolution $(18)$ belongs to $L_2(\sigma_s)$ and is equal to $$(f*g)^*(s) = \frac{\pi}{2i} \int_{\sigma_w} \left[1+ \tan(\pi( s-w))\cot(\pi w)\right]^2 f^*(s-w+1/2 )g^*(w)dw.\eqno(21)$$ Moreover, the factorization equality holds $$(\mathcal{S}_2 (f*g) )(x) = \sqrt x \ (\mathcal{S}_2 f)(x) (\mathcal{S}_2 g)(x), \quad x >0. \eqno(22)$$ Besides, if $s f^*(s), \ s g^*(s) \in L_2(\sigma_s) \cap L_1(\sigma_s)$, then for all $x >0 $ the following representation takes place $$(f*g)(x)= \pi^2 \sqrt x \left[ f(x) g(x) - {2\over \pi^2} (Hf)(x) (Hg)(x) \right.$$$$\left. + {1\over \pi^4} (H^2 f)(x) (H^2 g)(x)\right],\eqno(23) $$ where $H$ is the Hilbert transform operator $(5)$ and $H^2$ denotes the iterated Hilbert transform.} \begin{proof} Formula (21) and the condition $(f*g)^*(s) \in L_2(\sigma_s)$ follow immediately from (18), Lemma 1 and Parseval's equality (10).
Hence, via the generalized Parseval identity (9) (see also (3), (11)), we have $$(\mathcal{S}_2 (f*g) )(x) = \frac{1}{(2\pi i)^2} \int_{\sigma_s} {\pi^4 x^{-s} \over \sin^2(\pi s) } \int_{\sigma_w} \left[1+ \tan(\pi( s-w))\cot(\pi w)\right]^2$$$$ \times f^*(s-w+1/2 )g^*(w)\ dw ds= \frac{1}{(2\pi i)^2} \int_{\sigma_s} x^{-s} \int_{\sigma_w} \frac{ \pi^4 f^*(s-w+1/2 )g^*(w)} {\sin^2(\pi(s-w+1/2))\sin^2(\pi w)} \ dw ds$$ $$= \frac{\sqrt x }{(2\pi i)^2} \int_{\sigma_s} \frac{ \pi^2 f^*(s )} {\sin^2(\pi s)} x^{-s}\ ds \int_{\sigma_w} \frac{\pi^2 g^*(w)} {\sin^2(\pi w)} x^{-w}\ dw = \sqrt x \ (\mathcal{S}_2 f)(x) (\mathcal{S}_2 g)(x), \quad x >0,$$ where the change of the order of integration is justified by Fubini's theorem by virtue of the estimate $$\int_{\sigma_s} \int_{\sigma_w} \left| \frac{ f^*(s-w+1/2 )g^*(w)} {\sin^2(\pi(s-w+1/2))\sin^2(\pi w)} \right| \ | dw ds|$$$$ = \int_{-\infty}^\infty \int_{-\infty}^\infty \left| \frac{ f^*(i(\tau-\theta) +1/2 )g^*(i \theta+1/2)} {\cosh^2(\pi(\tau-\theta)) \cosh^2(\pi \theta)} \right| \ d\tau d\theta \le 4 \left( \int_{-\infty}^\infty \left| ( i\tau +1/2 ) f^*(i\tau +1/2 )\right|^2 d\tau\right)^{1/2} $$$$\times \left( \int_{-\infty}^\infty \left| ( i\theta +1/2 ) g^*(i \theta+1/2)\right|^2 d\theta \right)^{1/2} < \infty. $$ Thus we have established equality (22). In order to prove (23), we return to (18) and write it in the form $$(f*g)(x)= \frac{\sqrt x}{(2\pi i)^2} \int_{\sigma_s} \int_{\sigma_w}\left[1- 2\cot(\pi s)\cot(\pi w) + \cot^2(\pi s)\cot^2(\pi w) \right] $$$$\times \pi^2 f^*(s)g^*(w) x^{-s-w} dsdw = \pi^2 \sqrt x \ \left[ f(x) g(x) + \frac{1}{2\pi^2} \int_{\sigma_s} \cot(\pi s) f^*(s) x^{-s} ds \right.$$$$\left.
\times \int_{\sigma_w} \cot(\pi w) \ g^*(w) x^{-w} dw - \frac{1}{4\pi^2} \int_{\sigma_s} \cot^2(\pi s) f^*(s) x^{-s} ds \right.$$$$\left.\times \int_{\sigma_w} \cot^2(\pi w) \ g^*(w) x^{-w} dw\right]\eqno(24)$$ and the latter equality in (24) is indeed valid since, owing to the conditions of the theorem, $f^*(s),\ g^*(s) \in L_1(\sigma_s)$. Now our goal is to prove the equalities $$\frac{1}{2\pi i} \int_{\sigma_s} \cot(\pi s) f^*(s) x^{-s} ds= {1\over \pi} (Hf)(x),\quad x >0,\eqno(25)$$ $$\frac{1}{2\pi i} \int_{\sigma_s} \cot^2(\pi s) f^*(s) x^{-s} ds= {1\over \pi^2} (H^2 f)(x),\quad x >0.\eqno(26)$$ In order to do this, we employ the known equality (\cite{prud}, Vol. 1, relation (2.2.4.26)) $$PV \ {1\over \pi} \int_0^\infty {t^{s-1}\over 1-t}\ dt = \cot(\pi s), \quad 0 < {\rm Re} s < 1,\eqno(27)$$ where its left-hand side is understood as $$PV \ {1\over \pi} \int_0^\infty {t^{s-1}\over 1-t}\ dt= \lim_{\varepsilon \to 0} \varphi_\varepsilon(s),$$ and $$\pi \varphi_\varepsilon (s)= \left( \int_0^{1-\varepsilon} + \int_{1+\varepsilon}^\infty \right) {t^{s-1}\over 1-t}\ dt, \quad 0 < \varepsilon < 1, \ 0 < {\rm Re} s < 1.\eqno(28)$$ We will treat the following integral $$I_\varepsilon(x)= \frac{1}{2\pi i} \int_{\sigma_s} \varphi_\varepsilon (s) f^*(s) x^{-s} ds,\eqno(29)$$ showing that it is possible to pass to the limit under the integral sign as $\varepsilon \to 0$. This will be done by establishing the uniform estimate $$ \left|\varphi_\varepsilon(s)\right| \le C |s|,\quad s \in \sigma_s,\eqno(30)$$ where $C >0$ is an absolute constant.
So, in order to prove (30), we choose $0 < \varepsilon < 1/2$ and split the integrals in (28) as follows $$\pi \varphi_\varepsilon (s)= \left( \int_0^{1/2}+ \int_{1/2}^{1-\varepsilon} + \int_{1+\varepsilon}^{3/2} + \int_{3/2}^\infty \right) {t^{s-1}\over 1-t}\ dt$$$$= I_1(s)+ I_2(s)+ I_3(s)+ I_4(s), \quad s \in \sigma_s.$$ Clearly, $$ \left|I_1(s)\right| \le \int_0^{1/2}{dt\over (1-t)\sqrt t }= O(1).$$ Analogously, $$ \left|I_4(s)\right| \le \int_{3/2}^\infty {dt\over (t-1)\sqrt t }= O(1).$$ Concerning the integral $I_2$, we have $(s= 1/2 +i\tau)$ $$I_2(s)= \int_{1/2}^{1-\varepsilon} {\cos(\tau\log t) + i\sin(\tau\log t)\over (1-t)\sqrt t}\ dt $$ and via the elementary inequality $|\sin x | \le |x|, \ x \in \mathbb{R}$ $$\left| \int_{1/2}^{1-\varepsilon} {\sin(\tau\log t )\over (1-t)\sqrt t}\ dt \right| \le |\tau| \int_{1/2}^{1} {|\log t | \over (1-t)\sqrt t}\ dt =O(\tau).$$ Further, $$\int_{1/2}^{1-\varepsilon} {\cos(\tau\log t)\over (1-t)\sqrt t}\ dt = \int_{1/2}^{1-\varepsilon} {\cos(\tau\log t) -1\over (1-t)\sqrt t}\ dt + \int_{1/2}^{1-\varepsilon} {1\over (1-t)\sqrt t}\ dt $$ and after integration by parts in the second integral, we find $$ \int_{1/2}^{1-\varepsilon} {1\over (1-t)\sqrt t}\ dt = -{ \log \varepsilon\over \sqrt{1-\varepsilon}}- \sqrt 2 \log 2- {1\over 2} \int_\varepsilon^{1/2} {\log t\over (1-t)^{3/2}}\ dt.$$ Meanwhile, by the Lagrange mean value theorem $$ {\cos(\tau\log t) -1\over t-1} = - \tau \ {\sin(\tau\log (\xi_t)) \over \xi_t},\quad 1/2\le t< \xi_t< 1.
$$ Hence, $$\left| \int_{1/2}^{1-\varepsilon} {\cos(\tau\log t) -1\over (1-t)\sqrt t}\ dt \right| \le 2 |\tau| \int_{1/2}^{1} {dt \over \sqrt t} = O(\tau).$$ Similarly, $$I_3(s)= \int_{1+\varepsilon}^{3/2} {\cos(\tau\log t) + i\sin(\tau\log t)\over (1-t)\sqrt t}\ dt $$ and $$\left| \int_{1+\varepsilon}^{3/2} {\sin(\tau\log t)\over (1-t)\sqrt t}\ dt\right|\le |\tau| \int_{1}^{3/2} {|\log t | \over (t-1)\sqrt t}\ dt =O(\tau).$$ Meanwhile, $$\int_{1+\varepsilon}^{3/2} {\cos(\tau\log t)\over (1-t)\sqrt t}\ dt = \int_{1+\varepsilon}^{3/2} {\cos(\tau\log t)-1 \over (1-t)\sqrt t}\ dt + \int_{1+\varepsilon}^{3/2} {1\over (1-t)\sqrt t}\ dt $$ and, in turn, with the same arguments $$ \int_{1+\varepsilon}^{3/2} {1\over (1-t)\sqrt t}\ dt = { \log \varepsilon\over \sqrt{1+\varepsilon}} + \sqrt {2/3} \log 2- {1\over 2} \int_{\varepsilon}^{1/2} {\log t\over (1+t)^{3/2}}\ dt,$$ $$\left| \int_{1+\varepsilon}^{3/2} {\cos(\tau\log t)-1 \over (1-t)\sqrt t}\ dt \right| \le |\tau| \int_{1}^{3/2} {dt \over \sqrt t} = O(\tau).$$ Thus, $$\left| I_2(s)+I_3(s)\right| \le 2 \varepsilon |\log\varepsilon| + O(1)+ O(\tau)< \log 2 + O(1)+ O(\tau)$$ and, combining this with the estimates above, we complete the proof of inequality (30). Returning to (29) and appealing to the Lebesgue dominated convergence theorem, one can pass to the limit as $\varepsilon \to 0$ under the integral sign.
Consequently, employing (28) and making simple changes of variables by Fubini's theorem $(f^* \in L_1(\sigma_s))$ with the use of (8), we obtain for all $x>0$ $$\lim_{\varepsilon \to 0} \frac{1}{2\pi i}\int_{\sigma_s} \varphi_\varepsilon (s) f^*(s) x^{-s} ds= \frac{1}{2\pi i}\int_{\sigma_s} \cot(\pi s) f^*(s) x^{-s} ds$$ $$= \lim_{\varepsilon \to 0} \frac{1}{2\pi^2 i}\int_{\sigma_s} \left( \int_0^{1-\varepsilon} + \int_{1+\varepsilon}^\infty \right) {t^{s-1}\over 1-t}\ f^*(s) x^{-s} dt ds $$ $$= \lim_{\varepsilon \to 0} \frac{1}{\pi} \left( \int_0^{1-\varepsilon} + \int_{1+\varepsilon}^\infty \right) {f(x/t) dt \over (1-t) t } = \lim_{\varepsilon \to 0} \frac{1}{\pi} \left( \int_0^{x/(1+\varepsilon)} + \int_{x/ (1-\varepsilon) }^\infty \right) {f(t) dt \over t-x } $$ $$= PV \ {1\over \pi} \int_{0 }^\infty {f(t) dt \over t-x } = {1\over \pi} (H f)(x).$$ Therefore we have proved equality (25). Analogously we establish (26), noting that $s h(s)\in L_2(\sigma_s)$ if $s f^*(s) \in L_2(\sigma_s)$, where $h(s)= \cot(\pi s) f^*(s)$. Substituting these results into (24) gives equality (23) and completes the proof of the theorem. \end{proof} It is well known that on $\mathbb{R}$ the following equality involving the iterated Hilbert transform holds: $$ {1\over \pi^2} (\hat{H}^2 f)(x) = - f(x), \quad f \in L_p(\mathbb{R}),\ p > 1,$$ where $$ (\hat{H} f)(x)= \int_{-\infty}^\infty \frac{f(t)}{t-x}\ dt, \quad x \in \mathbb{R}.$$ Here, as an immediate corollary of equality (26), we prove the following relation between the iterated Stieltjes and Hilbert transforms (3) and (5), respectively, on $\mathbb{R}_+$, which seems to be new.
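The full-line identity $\pi^{-2}(\hat{H}^2 f)(x) = -f(x)$ recalled above can also be illustrated numerically. The sketch below (our own check, not part of the original argument, assuming SciPy is available) takes $f(t)=1/(1+t^2)$, for which the first transform has the well-known closed form $(\hat{H} f)(x) = -\pi x/(1+x^2)$ in the convention of $(5)$, and evaluates the principal value by folding the integral symmetrically about the singularity:

```python
import math
from scipy.integrate import quad

def pv_hilbert(g, x):
    # PV int_{-inf}^{inf} g(t)/(t - x) dt, rewritten as
    # int_0^inf (g(x+u) - g(x-u))/u du, which is regular at u = 0.
    val, _ = quad(lambda u: (g(x + u) - g(x - u)) / u, 1e-12, math.inf, limit=200)
    return val

f = lambda t: 1.0 / (1.0 + t * t)
Hf = lambda t: -math.pi * t / (1.0 + t * t)  # closed form of the first transform

x = 0.7
print(pv_hilbert(f, x), Hf(x))                  # first application of H-hat
print(pv_hilbert(Hf, x), -math.pi ** 2 * f(x))  # iterated transform vs -pi^2 f
```

The symmetric folding is the standard way to tame a Cauchy principal value numerically, since the difference quotient tends to $2g'(x)$ as $u\to 0$.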
Precisely, it states {\bf Corollary 1.} {\it Under the condition $s f^*(s) \in L_2(\sigma_s)$, where $f^*$ is the Mellin transform $(7)$ of $f$, the following relation holds for all $x >0$:} $$ (H^2 f)(x) = (\mathcal{S}_2 f)(x) - \pi^2 f(x), \quad x > 0.$$ \begin{proof} The proof is straightforward with the use of (14), (26) and the elementary trigonometric identity $\cot^2(\pi s)= \csc^2(\pi s) -1$. \end{proof} {\bf Corollary 2}. {\it Under the conditions of Theorem 2, the following equality holds for the convolution $(18)$:} $$ (H^2 (f*g) )(x) + \pi^2 (f*g)(x) = \sqrt x \ (\mathcal{S}_2 f)(x) (\mathcal{S}_2 g)(x), \quad x > 0.$$ \begin{proof} The proof is immediate with the use of the factorization equality (22). \end{proof} Finally, in this section we establish an analog of the Titchmarsh theorem on the absence of divisors of zero in the convolution (18). We have {\bf Theorem 3}. {\it Let, under the conditions of Theorem $2$, $f*g= 0$. Then either $f=0$ or $g=0$.} \begin{proof} Indeed, one can consider the iterated Stieltjes transform (3) of a complex variable $$G(z)= \int_0^\infty \frac{\log(z)- \log(t)}{z-t} f(t) dt,\quad z \in \mathbb{C} \backslash \{0\},$$ where we take the principal branch of $\log z$. On the other hand, we can treat $G(z)$ as the Stieltjes transform (2) of an $L_2$-function, which is analytic in the complex plane cut along the nonpositive real axis (see \cite{wid}).
In fact, $G(z)= (\mathcal{S}_2 f)(z) = (\mathcal{S}\ (\mathcal{S} f) ) (z)$, where, similarly to (11), $$(\mathcal{S} f)(x) = {1\over 2\pi i}\int_{\sigma_s} \Gamma(s)\Gamma(1-s) f^*(s)x^{-s} ds, \quad x >0,$$ $$ (\mathcal{S}_2 f)(z)= {1\over 2\pi i}\int_{\sigma_s} \left[\Gamma(s)\Gamma(1-s)\right]^2 f^*(s)z^{-s} ds, \quad |\arg z| < \pi,\ z \neq 0$$ and the right-hand side of the latter equality represents an absolutely and uniformly convergent integral in the domain $D=\{z \in \mathbb{C}, |\arg z| < \pi,\ |z| > a >0\}.$ Indeed, with the Schwarz inequality we have $$\int_{\sigma_s} \left| \left[\Gamma(s)\Gamma(1-s)\right]^2 f^*(s)z^{-s} ds\right| = {\pi^2\over \sqrt{|z|}} \int_{-\infty}^\infty \frac{ e^{\tau\arg z} }{\cosh^2(\pi\tau)} \left| f^*\left({1\over 2}+ i\tau \right) \right| d\tau $$ $$< {\pi^2\over \sqrt{a}} \left( \int_{-\infty}^\infty \frac{ e^{2\pi| \tau|} }{(1/4+ \tau^2) \cosh^4(\pi\tau)} d\tau \right)^{1/2} \left( \int_{-\infty}^\infty \left| \left({1\over 2}+ i\tau \right) f^*\left({1\over 2}+ i\tau \right) \right|^2 d\tau\right)^{1/2}$$$$ < \infty.$$ Moreover, $f(x),\ (\mathcal{S} f)(x) \in L_2(\mathbb{R}_+)$ because, evidently, $f^*(s), \ \Gamma(s)\Gamma(1-s) f^*(s) \in L_2(\sigma_s)$ when $s f^*(s) \in L_2(\sigma_s)$. Thus, if $f*g=0$, then $(\mathcal{S}_2 (f*g))(z) \equiv 0$ and, by virtue of equality (22), which remains valid for complex $z \in D$, either $(\mathcal{S}_2 f)(z)\equiv 0$ or $(\mathcal{S}_2 g)(z)\equiv 0$ everywhere in the complex plane cut along the nonpositive real axis. Therefore, appealing twice to the uniqueness of the Stieltjes transform (cf., for instance, \cite{wid}, p. 336), we conclude that either $f=0$ or $g=0$ almost everywhere on $\mathbb{R}_+$.
\end{proof} \section{A new class of singular integral equations} This section is devoted to an application of the convolution (18) to an interesting class of integral equations, containing a combination of the Hilbert transform and the iterated Stieltjes transform (or the iterated Hilbert transform, taking into account Corollary 1). However, first we apply our method to simplify a solution of the singular integral equation considered in \cite{sri}, which involves the convolution (4) for the Stieltjes transform (2). Moreover, the result is known from Lemma 11.1 in \cite{sam}. But our main goal will be to establish the reciprocal inverse operator associated with the singular integral equation mentioned above, which involves the iterated Hilbert and Stieltjes operators. We begin by considering the convolution (4) with $g(x)= x^{\alpha-1},\ 0 < \alpha < 1/2$. Namely, taking into account the value of the integral (27), we arrive at the equation $$ f(x) \cos(\pi\alpha ) + {\sin(\pi\alpha)\over \pi} (Hf)(x)= h(x),\ x >0,\eqno(31)$$ where $h(x)= \pi^{-1} \sin(\pi\alpha)\ x^{1-\alpha} (f *x^{\alpha-1} )_{\mathcal{S}}$ and $f(x), h(x)$ satisfy the conditions of Theorem 2. Then, applying the Mellin transform (7) to both sides of (31) and taking into account equality (25), we obtain $$h^*(s)= f^*(s)\left[ \cos(\pi\alpha ) + \cot(\pi s )\sin(\pi\alpha)\right]= f^*(s)\frac{\sin(\pi(s+\alpha))}{\sin(\pi s)}.
$$ Therefore, $$f^*(s)= h^*(s)\frac{\sin(\pi s)} {\sin(\pi(s+\alpha))},\quad s \in \sigma_s$$ and, reciprocally, by the inversion formula (8) for the Mellin transform, we arrive at the unique solution of the singular integral equation (31) $$f(x)= {1\over 2\pi i} \int_{\sigma_s} h^*(s)\frac{\sin(\pi s)} {\sin(\pi(s+\alpha))} x^{-s} ds = \cos(\pi\alpha) h(x)- {\sin(\pi\alpha) \over 2\pi i} $$$$\times \int_{\sigma_s} \cot(\pi(s+\alpha)) h^*(s) x^{-s} ds = \cos(\pi\alpha) h(x)- {\sin(\pi\alpha) \over \pi} x^{\alpha} \int_0^\infty \frac{t^{-\alpha} h(t)}{t-x }\ dt.$$ Consequently, we have found a pair of reciprocal formulas for all $x >0$ and $0 < \alpha < 1/2$ $$h(x)= \cos(\pi\alpha ) f(x) + { \sin(\pi\alpha ) \over \pi} \int_0^\infty \frac{f(t)}{t-x }\ dt,\eqno(32)$$ $$f(x)= \cos(\pi\alpha) h(x)- {\sin(\pi\alpha)\over \pi} \int_0^\infty \left({x\over t}\right)^\alpha \frac{ h(t)}{t-x }\ dt,\eqno(33)$$ which is confirmed by Lemma 11.1 in \cite{sam}. Finally, in a similar manner, we apply our technique to investigate the solvability of, and find a solution to, a new singular integral equation associated with the convolution (23) (in fact, $g(x)= x^{\alpha-1}$ does not satisfy the conditions of Theorem 2, and $(f *x^{\alpha-1} )$ is understood in the sense of equality (23)). Precisely, denoting now $h(x)= \pi^{-2} \sin^2 (\pi\alpha)\ x^{1/2 -\alpha} (f *x^{\alpha-1} )$ and invoking Corollary 1, we arrive at the following equation for all $x > 0$ and $0<\alpha < 1$ $${\cos^2(\pi\alpha)\over \pi^2} \int_0^\infty \frac{\log(x)- \log(t)}{x-t} f(t) dt - {\sin(2\pi\alpha)\over \pi} \int_0^\infty \frac{f(t)}{t-x }\ dt$$$$ - \cos(2\pi\alpha) f(x) = h(x).\eqno(34)$$ {\bf Theorem 4}. {\it Let $f, h$ satisfy the conditions of Theorem 2.
Then for all $x >0$ and $0< \alpha < 1$ the singular integral equation $(34)$ has the unique solution $$f(x)= {\cos^2(\pi\alpha)\over \pi^2} \int_0^\infty \frac{\log(x)- \log(t)}{x-t} \left({x\over t}\right)^{\alpha-1/2} h(t) dt$$$$+ {\sin(2\pi\alpha)\over \pi} \int_0^\infty \left({x\over t}\right)^{\alpha-1/2} \frac{h(t)}{t-x }\ dt- \cos(2\pi\alpha) h(x).\eqno(35)$$ Conversely, the singular integral equation $(35)$ has the unique solution in the form $(34)$.} \begin{proof} Taking the Mellin transform of both sides of (34), taking into account (12), (25), (26) and Corollary 1, after simple manipulations we get the equality $$h^*(s)= f^*(s) \left[ \sin(\pi\alpha) - \cos(\pi\alpha)\cot(\pi s)\right]^2,\quad s \in \sigma_s.$$ Hence, reciprocally, $$ f^*(s) = h^*(s) \left[\frac {\sin(\pi s)} {\sin(\pi(s+\alpha-1/2))}\right]^2= h^*(s) \left[ \sin(\pi\alpha)\right.$$$$\left. +\cos(\pi\alpha) \cot(\pi(s+\alpha-1/2))\right]^2.$$ Consequently, inverting the Mellin transform on both sides of the latter equality, we obtain $$f(x)= \sin^2 (\pi\alpha) h(x) + {\sin(2\pi\alpha)\over \pi} \int_0^\infty \left({x\over t}\right)^{\alpha-1/2} \frac{h(t)}{t-x }\ dt$$$$+ {\cos^2 (\pi\alpha)\over \pi^2} \ x^{\alpha-1/2} \int_0^\infty {1\over t-x} \int_0^\infty \frac{u^{1/2- \alpha} \ h(u)}{u-t }\ du dt.$$ Applying Corollary 1 again, we arrive at solution (35). In the same manner we verify the converse statement. \end{proof} {\bf Remark 1}. Letting $\alpha= 1/2$, we find the simplest degenerate case of the pair of singular integral equations (34), (35). It leads us to the unique solution $f=h$ and vice versa. \noindent {{\bf Acknowledgments}}\\ The present investigation was supported, in part, by the ``Centro de Matem{\'a}tica'' of the University of Porto.\\
\section{Introduction} Resonant tunneling often plays an important role in current transport in transmissive superconducting junctions. The presence of impurities in the tunnel barriers may strongly affect Josephson tunneling \cite{Asl2,Tart,Gla,Dev} as well as Andreev transport under applied voltage \cite{Gla2,Golub} due to resonant tunneling through localized impurity levels. A mechanism of resonant tunneling was assumed to be responsible for the unusual properties of junctions with disordered semiconducting barriers \cite{Ovi} and grain boundary Josephson junctions of high-$T_c$ materials \cite{Gross}. Furthermore, in clean superconductor--semiconductor junctions, mobile electrons confined between the Schottky barriers form resonant states which determine specific properties of such junctions \cite{Asl,She}. Similar resonant states are also important in recently fabricated ballistic superconductor--2D electron gas (2DEG) structures \cite{Taka}, where they are formed by electron reflection from the gate potential. Moreover, the possibility to create quantum point contacts and quantum dots in such structures allows one to investigate quantum resonant transport through well-resolved resonant levels \cite{Yacobi}. Quantum resonant transport was also observed in metallic dots \cite{Ralph}, and in contacts containing single-wall nanotubes \cite{Delft1,Rice}. The properties of the dc Josephson current in quantum resonant junctions as well as Andreev quantization have been discussed in various publications \cite{Volk,Bee,Cre,WSh}. The subharmonic gap structure (SGS) in resonant junctions and the effect of the resonance on multiple Andreev reflections (MAR) are less investigated. Meanwhile, the detailed theory of SGS in quantum point contacts \cite{Sh1,Sh2,Ave1,Ye1,BB} in combination with precise experiments on controllable break junctions \cite{Jan,Urb} has been found to be a powerful tool for investigation of intrinsic properties of atomic-size contacts.
Our preliminary results \cite{Joh} have shown that the SGS in resonant junctions drastically differs from the SGS in non-resonant junctions; similar results were obtained by different methods in Ref. \cite{Ye2}. In this paper, we present a detailed study of the interplay of MAR with Breit-Wigner resonances in quantum junctions. The structure of the paper is the following. In Section II we derive equations for the inelastic scattering amplitudes in resonant junctions. In Section III we discuss properties of the normal electron resonance in the proximity region between the superconducting electrodes. Results of numerical calculations of the current-voltage characteristic for different positions and widths of the resonance are presented in Section IV. Section V is devoted to a perturbative analysis of the resonant SGS. \section{Scattering amplitudes.} We will consider a junction consisting of a ballistic normally conducting region separated from the superconducting electrodes by tunnel barriers, Fig. 1. The length of the junction $L$ is assumed to be smaller than the coherence length, and therefore the distance between normal resonances will exceed the superconducting energy gap, $v_{F}/L>\Delta$ ($v_{F}$ is the normal electron Fermi velocity, $\hbar=1$). We will also assume that the resonances are well separated, $\Gamma\ll v_{F}/L$, where $\Gamma$ is the resonance half-width, and that Coulomb charging effects do not dominate in the subgap voltage region, $E_C<2\Delta$, where $E_C$ is the Coulomb gap. We will apply the Landauer-B\"uttiker scattering theory \cite{Lan,But,Imry}, extended to superconducting junctions \cite{Sh2}, for calculating the current.
In voltage biased superconducting junctions, the quasiparticle scattering is inelastic due to the time dependence of the superconducting phase difference across the junction, $\phi(t)=2eVt$, and the scattering state wave functions consist of linear combinations of harmonics (sidebands) with energies $E_n=E+neV$, shifted by an integer number of quanta $eV$ with respect to the energy $E$ of the incoming wave. Below we will consider one transport mode in the junction and choose the scattering state wave functions in the left ($L$) and right ($R$) electrodes in the form \begin{mathletters} \label{psi} \begin{eqnarray} \label{psiL} \psi_L &=& e^{-iEt} \left[ \delta_{j1}u_0^+e^{ik_0^+x}+ \delta_{j2}u_0^-e^{-ik_0^-x}\right]+ \nonumber \\ &&\sum_{n=-\infty}^{\infty} e^{-iE_nt} \left[a_nu_n^-e^{ik_n^-x}+ c_nu_n^+e^{-ik_n^+x}\right] \\ \label{psiR} \psi_R &=& e^{i\sigma_z \phi(t)} \left\{ e^{-iEt} \left[ \delta_{j3}u_0^+e^{-ik_0^+x}+ \delta_{j4}u_0^-e^{ik_0^-x}\right] \right. + \nonumber \\ && \left. \sum_{n=-\infty}^{\infty} e^{-iE_nt}\left[b_nu_n^-e^{-ik_n^-x}+ f_nu_n^+e^{ik_n^+x} \right] \right\}, \end{eqnarray} \end{mathletters} where $u_n^\pm$ are (non-normalized) two-component elementary solutions of the Bogoliubov-de Gennes equations, \begin{equation} u_n^\pm={1\over \sqrt 2} \left( \begin{array}{c} e^{\pm\gamma_n/2}\\ \sigma_n e^{\mp\gamma_n/2} \end{array} \right). \label{u} \end{equation} In this equation $$ e^{\gamma_n}={|E_n|+\xi_n\over\Delta},\;\; \xi_n= \left\{ \begin{array}{lr} \sqrt{E^2_n-\Delta^2}, & |E_n|>\Delta\cr i\sigma_n\sqrt{\Delta^2-E^2_n}, & |E_n|<\Delta \end{array}\right. , $$ $$ \sigma_n=\mbox{sgn} (E_n),\;\;\; k_n^\pm=\sqrt{2m(E_F\pm \sigma_n\xi_n)}. $$ The index $j=1-4$ in Eqs. (\ref{psi}) labels scattering states of the electron- and hole-like quasiparticles incoming from the left ($j=1,2$) and right ($j=3,4$). The form of the wave functions in Eqs.
(\ref{psi}), (\ref{u}) assumes that the superconducting electrodes serve as equilibrium quasiparticle reservoirs, and that the potential difference between the reservoirs is absorbed into the time-dependent factor $e^{i\phi(t)}$ in Eq. (\ref{psiR}) due to an appropriate choice of the gauge. To match the wave functions in Eqs. (\ref{psi}) we will apply a transfer matrix technique. In the present case of an inelastic scattering problem, the connection between $\psi_L$ and $\psi_R$ is non-local in time, and the corresponding transfer matrix ${\bf T}^S_{nm}$ mixes the sidebands, \begin{equation} {A\choose B}_{Ln}= \sum_m {\bf T}^S_{nm}{A\choose B}_{Rm} . \label{matchS} \end{equation} The matrix ${\bf T}^S_{nm}$ is a $4\times 4$ matrix defined on a space of wave function coefficients, $A=(A^+,A^-)$, $B=(B^+,B^-)$, \begin{eqnarray} \psi_{n}&=&e^{-iE_nt}\left[A^+_nu_n^+e^{ik_n^+x}+A^-_nu_n^+e^{-ik_n^+x} \right. \nonumber \\ &+& \left. B^+_n u_n^-e^{ik_n^-x}+B^-_nu_n^-e^{-ik_n^-x}\right] . \label{psiLn} \end{eqnarray} The transfer matrix in Eq. (\ref{matchS}) can be expressed, similarly to the case of unbiased junctions \cite{WSh}, through a transfer matrix $T(E)$ associated with elastic electron scattering by the normal junction. Let us introduce auxiliary normal regions between the superconductors and the junction, whose length will be put equal to zero at the end of the calculations. The wave function in the normal region has the form \begin{equation} \psi_{n}^N= {A_n^{N+}e^{ik_n^{N+}x}+A_n^{N-}e^{-ik_n^{N+}x}\choose B_n^{N+} e^{ik_n^{N-}x} + B_n^{N-}e^{-ik_n^{N-}x}}e^{-iE_nt}, \label{psiLnN} \end{equation} where $k_n^{N\pm}=\sqrt{2m(E_F\pm E_n)}$ is the normal electron wave vector. The wave functions of Eq. (\ref{psiLnN}) in the left and right normal regions are related as \begin{equation} {{ A}^N\choose { B}^N}_{Ln} ={\bf T}^{N}_n{{ A}^N\choose{ B}^N}_{Rn}, \;\;\;{\bf T}^{N}_n=\left( \begin{array}{cc} T(E_n)&0\\ 0&T(-E_n)\\ \end{array} \right).
\label{matchT} \end{equation} We note that the transfer matrix ${ T}(E)$ describes scattering of the normal electrons by the actual potential of the junction at a given voltage, i.e. it includes effects of potential deformation due to the applied voltage, $T(E;V)$. Continuous matching at the left SN interface yields, in the quasiclassical approximation $k_n\approx k_n^N\approx k_F$, \begin{equation} {A^N\choose B^N}_{Ln} ={\bf T}^{NS}_n{ A\choose B}_{Ln},\;\; {\bf T}^{NS}_n=\left( \begin{array}{cc} e^{\gamma_n/2}&e^{-\gamma_n/2}\\ \sigma_n e^{-\gamma_n/2} & \sigma_n e^{\gamma_n/2}\\ \end{array} \right). \label{matchL} \end{equation} A matching condition at the right NS interface is derived in a similar way, but an additional time-dependent factor $e^{i\sigma_z eVt}$ in Eq. (\ref{psiR}) must be taken into account. The latter gives different equations for upper (electron) and lower (hole) components of the coefficient vectors: \begin{equation} { A}_{Rn} ^N={\bf P}^+{\bf T}^{NS}_{n+1}{ A\choose B}_{R(n+1)},\;\; { B}_{Rn}^N={\bf P}^-{\bf T}^{NS}_{n-1}{ A\choose B}_{R(n-1)}. \label{matchR} \end{equation} In this equation, ${\bf P}^\pm$ are projectors on upper/lower vector components. Combination of Eqs. (\ref{matchT})-(\ref{matchR}) gives the following equation for the transfer matrix in Eq. (\ref{matchS}) \begin{equation} {\bf T}^S_{nm}=\sum_\pm ({\bf T}^{NS}_{n})^{-1}{\bf T}^{N}_{n} {\bf P}^\pm{\bf T}^{NS}_{m}\delta_{m(n\pm 1)}. \label{match} \end{equation} The normal-electron transfer matrix enters this equation with different arguments $\pm E_n$. This energy difference introduces effects of electron-hole dephasing during quasiparticle propagation through the junction. In non-resonant short constrictions, the energy dispersion of the transfer matrix is negligible, and Eq. (\ref{match}) is equivalent to the matching equation derived in Ref.\cite{Sh2}. In resonant junctions (and also in long SNS and SIS junctions \cite{WSh2}) dephasing effects are important.
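The elementary BdG parameters defined above can be checked numerically. The following is a minimal sketch (our own function names, in units where $\Delta=1$) of $\xi_n$ and $\gamma_n$ from $e^{\gamma_n}=(|E_n|+\xi_n)/\Delta$; above the gap $\gamma_n$ is real, while inside the gap $e^{\gamma_n}$ lies on the unit circle, so the two BdG amplitudes have equal magnitude.

```python
import cmath
import math

Delta = 1.0  # superconducting gap, used as the energy unit


def sigma(E):
    """sgn(E_n)."""
    return 1.0 if E >= 0 else -1.0


def xi(E):
    """Quasiparticle dispersion xi_n: real above the gap, imaginary inside it."""
    if abs(E) > Delta:
        return complex(math.sqrt(E * E - Delta * Delta), 0.0)
    return complex(0.0, sigma(E) * math.sqrt(Delta * Delta - E * E))


def gamma(E):
    """gamma_n defined by e^{gamma_n} = (|E_n| + xi_n)/Delta."""
    return cmath.log((abs(E) + xi(E)) / Delta)


# Above the gap: gamma real.  Inside the gap: |e^{gamma}| = 1.
for E in (1.5, 0.3, -0.7):
    print(E, gamma(E), abs(cmath.exp(gamma(E))))
```

For subgap energies the unit modulus of $e^{\gamma_n}$ is what makes the Andreev-reflection amplitudes pure phase factors in the recurrences below.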
The matching equations (\ref{matchS}) and (\ref{match}) can be written in an equivalent form, \begin{equation} {\bf P}^\pm{\bf T}^{NS}_{n}{ A\choose B}_{Ln}= T(\pm E_n){\bf P}^\pm{\bf T}^{NS}_{Rn\pm 1}{ A\choose B}_{R(n\pm 1)}. \label{match1} \end{equation} Applied to the scattering state wave functions in Eqs. (\ref{psi}), it yields the following recurrences for the scattering amplitudes: \begin{mathletters} \label{mar} \begin{eqnarray} &e^{\sigma_z\gamma/2} \delta_{n0} {\delta_{j1}\choose \delta_{j2}}+ e^{-\sigma_z\gamma_n/2}{a\choose c}_n= \nonumber \\ &{T}(E_n) \left[ e^{\sigma_z\gamma/2} \delta_{(n+1)0} {\delta_{j3}\choose \delta_{j4}}+ e^{-\sigma_z\gamma_{n+1}/2}{f\choose b}_{n+1} \right] \\ \label{up} &e^{-\sigma_z\gamma/2} \delta_{n0} {\delta_{j1}\choose \delta_{j2}}+ e^{\sigma_z\gamma_n/2}{a\choose c}_n= \nonumber \\ &{ T}(-E_n)\sigma_n\sigma_{n-1} \left[ e^{-\sigma_z\gamma/2} \delta_{(n-1)0} {\delta_{j3}\choose \delta_{j4}}+ e^{\sigma_z\gamma_{n-1}/2}{f\choose b}_{n-1} \right]. \label{down} \end{eqnarray} \end{mathletters} Analytical solutions of the recurrences in Eqs. (\ref{mar}) can be presented in chain-fraction form (see Appendix A), similarly to the case of non-resonant junctions \cite{Sh1,Sh2}. \section{Model for resonance.} Now we will specify the transfer matrix for the resonant junction. We will restrict ourselves to symmetric junctions, $T_{11}=T_{22}^\ast=1/d$ and $T_{21}=T_{12}^\ast=r/d$, and assume the Breit-Wigner resonance form for the transmission and reflection amplitudes $d$ and $r$, respectively, \begin{equation} d(E)={i\Gamma\over E-E_r+i\Gamma},\;\;r(E)=-{E-E_r\over E-E_r+i\Gamma}. \label{bw} \end{equation} The position of the resonance level $E_r$ as well as the resonance half-width $\Gamma$ are generally dependent on the applied voltage. However, while the subharmonic gap structure is affected in an essential way by the position of the resonance, the dependence on the resonance width is less important. Thus we will assume $\Gamma=const$.
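The Breit-Wigner amplitudes (\ref{bw}) can be verified directly. The short sketch below (illustrative parameter values of our choosing) checks unitarity, $|d|^2+|r|^2=1$, and the Lorentzian transmission profile: unit transmission at $E=E_r$ and half transmission one half-width away.

```python
Er, Gam = 0.4, 0.05  # resonance position and half-width (illustrative values)


def d(E):
    """Transmission amplitude, Eq. (bw)."""
    return 1j * Gam / (E - Er + 1j * Gam)


def r(E):
    """Reflection amplitude, Eq. (bw)."""
    return -(E - Er) / (E - Er + 1j * Gam)


# Unitarity |d|^2 + |r|^2 = 1 holds at every energy; the transmission
# probability |d|^2 is a Lorentzian of half-width Gam peaked at E = Er.
for E in (0.0, Er, Er + Gam, 1.0):
    print(E, abs(d(E)) ** 2 + abs(r(E)) ** 2, abs(d(E)) ** 2)
```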
We will not specify the voltage dependence of the resonance level position, but rather present the current as a function of two variables: driving voltage and resonance position, $I(V,E_r)$. The current-voltage characteristics can then be reconstructed from such a dependence by specifying the $E_r(V)$ dependence determined by the self-consistent distribution of the electric potential across the junction. The normal electron resonance, being confined between superconducting electrodes, possesses specific properties which will be important for further analysis of the resonant MAR. Since the transfer matrix $T(E)$ enters the recurrences for the scattering amplitudes in Eq. (\ref{mar}) at two different energies $\pm E$, the proximity resonance consists of two, electron and hole, resonances situated symmetrically with respect to the Fermi level, $E=\pm E_r$. Within the adopted approach, the current is calculated by using the scattering amplitudes defined in the {\em superconducting electrodes} [see further Eq. (\ref{I})] and the recurrences in Eq. (\ref{mar}) are formulated for these amplitudes. Although equivalent, such an approach is different from the discussion of MAR amplitudes in the normal region of the junction (see, e.g., \cite{KBT}). Within our approach, the non-superconducting region of the junction is considered as a black box and is represented by the transfer matrix $T(E)$. Due to the different choices of gauge in the left and right electrodes, the resonance is seen from the left and right electrodes at different energies [cf. Eqs. (\ref{matchL}) and (\ref{matchR})]. Indeed, the resonances are seen from the left electrode at $E=\pm E_r$, i.e. quasiparticles incoming from the left undergo a resonant transition if $E=\pm E_r$, while the resonances are seen from the right electrode at $E=\pm (E_r+eV)$, as shown in Fig. 2. In the scattering diagram in Fig.
2c, the resonance is therefore represented by two segments: $E_n=E_r \leftrightarrow E_{n+1}=E_r+eV$ for the electron resonance, and $E_n=-E_r \leftrightarrow E_{n-1}=-E_r-eV$ for the hole resonance. There is a symmetry between the scattering states originating from the left and right electrodes: \begin{equation} {a\choose c}_{n,3}(\gamma,\Gamma, E_0)=\sigma_0\sigma_n {f\choose b}_{n,1}(-\gamma, -\Gamma, -E_0), \label{sym13} \end{equation} with an analogous relation for the second pair of scattering amplitudes. In Eq. (\ref{sym13}), $E_0=E_r+eV/2$ is the position of the normal resonance level with respect to the midpoint between the chemical potentials in the left and right electrodes. Equation (\ref{sym13}) leads to a symmetry property of the current: it is an even function of the resonance position $E_0$, $I(V,E_0)=I(V,-E_0)$. Below we will indicate the resonance position by means of the energy $E_0$ and abbreviate the Breit-Wigner amplitudes (\ref{bw}), \begin{equation} d_n^{\pm}={i\Gamma\over E_n^{\pm}+i\Gamma},\;\; r_n^{\pm}={E_n^{\pm}\over E_n^{\pm}+i\Gamma}, \;\;\; E_n^{\pm}=E_n\mp (E_0-eV/2). \end{equation} \section{dc Current.} In the quasiclassical approximation, the equation for the current reads \cite{Sh1,Sh2} \begin{eqnarray} &I= {e\Delta\over2\pi}\int^{\infty}_{\Delta} {dE\over \xi} \sum_{n=odd} \cosh(Re \gamma_n)\tanh{E\over 2T} \nonumber \\ &\left[ \sum_{j=1,2}\left(|b_{n,j}|^2-|f_{n,j}|^2\right)- \sum_{j=3,4}\left(|c_{n,j}|^2-|a_{n,j}|^2\right)\right] \label{I} \end{eqnarray} The current in Eq. (\ref{I}) is calculated using transmitted states (in the right and left electrodes for scattering states $j=1,2$ and $j=3,4$, respectively), and it consists of contributions from all odd sidebands.
By virtue of the symmetry equations \begin{eqnarray} {f\choose b}_{n,2}(\gamma, { T}) & = & {b\choose f}_{n,1}(-\gamma, { T^\ast}) , \nonumber \\ {a\choose c}_{n,2}(\gamma, {T}) & = & {c\choose a}_{n,1}(-\gamma, { T^\ast}) , \label{sym12} \end{eqnarray} directly following from Eqs. (\ref{mar}) (analogous relations hold for the scattering states $j=3,4$) and the symmetry equations (\ref{sym13}), the current in Eq. (\ref{I}) can be expressed through the sideband contributions \begin{equation} K_{n}=\left[|b_{n}|^2-|f_{n}|^2\right]\cosh(Re \gamma_n) \label{K} \end{equation} of one single scattering state ($j=1$, index $j$ is omitted), \begin{equation} I= {e\Delta\over2\pi}\int^{\infty}_{\Delta} {dE\over \xi} \sum_{n=odd}[K_{n}-\bar K_{n} + (E_0\rightarrow -E_0)]\tanh{E\over 2T}, \label{I'} \end{equation} where $\bar K_{n}=K_{n}(-\xi_n, -\Gamma)$. Equation (\ref{I'}), together with the recurrences in Eqs. (\ref{mar}), provides a basis for numerical calculation of the current. The calculation of scattering amplitudes should obey the boundary condition at $\pm\infty$, where the amplitudes approach zero. The simplest way to obtain such solutions is to iterate the recurrences from large $|E_{n}|$ towards $E$. The correct solution will then grow exponentially and numerically ``kill'' the solution growing at infinity. By this procedure one gets the correct scattering states for each incoming quasiparticle at every energy. The results of numerical calculation of current-voltage characteristics (IVC) are presented in Fig. 3 for different values of the resonance level position $E_0=const$. This particular case corresponds to a perfectly symmetric distribution of the electric potential across the junction, with $E_0$ indicating the departure of the resonance level from the Fermi level in equilibrium $(V=0)$.
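The backward-iteration trick described above can be illustrated on a toy three-term recurrence (not the MAR recurrences themselves, just the same numerical mechanism): iterating towards small $n$, the desired decaying solution grows and suppresses any admixture of the unwanted one, whereas forward iteration is unstable.

```python
# Toy recurrence y_{n+1} = (10/3) y_n - y_{n-1}, with exact solutions
# 3^n (growing) and 3^(-n) (decaying).  We want the decaying solution,
# the analogue of scattering amplitudes vanishing at large |n|.

N = 60

# Forward iteration from the exact decaying start is unstable: rounding
# errors excite the growing solution 3^n, which eventually dominates.
fwd = [1.0, 1.0 / 3.0]
for n in range(1, N):
    fwd.append((10.0 / 3.0) * fwd[-1] - fwd[-2])

# Backward iteration from arbitrary seeds at large n: iterating towards
# small n, the decaying solution grows and numerically "kills" the other.
y_next, y_cur = 0.0, 1.0          # seeds for y_{N+1}, y_N (arbitrary)
bwd = [y_cur]
for n in range(N, 0, -1):
    y_prev = (10.0 / 3.0) * y_cur - y_next   # y_{n-1} = (10/3) y_n - y_{n+1}
    bwd.append(y_prev)
    y_next, y_cur = y_cur, y_prev
bwd.reverse()                      # bwd[n] now approximates C * 3^(-n)
sol = [y / bwd[0] for y in bwd]    # normalize so sol[0] = 1

print(sol[20], 3.0 ** -20, fwd[40])
```

The backward result reproduces $3^{-n}$ to machine precision, while the forward array is already dominated by the spurious growing solution at moderate $n$.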
The IVC with the resonance level situated at the Fermi level, $E_0=0$, shows an onset of the single-particle current at $eV=2\Delta$ accompanied by a current peak caused by the large density of states near the superconducting gap [see below Eqs. (\ref{I1res}-\ref{I1asym})]. Such behaviour of the single-particle current has been observed in the experiments on metallic dots \cite{Ralph} and carbon-nanotube junctions \cite{Dekker}. A striking feature of this IVC is the absence of current structure at $eV=\Delta$, while the structure at $eV=2\Delta/3$ is pronounced, consisting of a peak similar to the structure of the single-particle current. Calculation of the IVC at lower voltage, presented in Fig. 4, shows the same feature --- only odd subharmonic gap structures are present. If the resonant level departs from the Fermi level and $E_0\neq 0$, the single-particle current onset shifts towards larger voltage, $eV>2\Delta$, and the current peak broadens. A striking feature in this case is the development of a huge current peak at voltages lower than the position of the structure of the single-particle current. This peak, associated with the resonant pair current (see Sec. V below), appears as soon as $E_0>\Delta/2$ and is situated at the voltage $eV=2E_0$, which coincides with the position of the resonant current onset in normal junctions. If the resonance level departs far from the Fermi level, $E_0\gg\Delta$, the IVCs in the subgap voltage region $eV<2\Delta$ approach the form typical for non-resonant point contacts, as could be expected, while a broadening of the resonance, $\Gamma\gg \Delta$, gives rise to SNS-type IVCs, as shown in Fig. 5. A complete description of the current in resonant junctions is given by the function $I(V,E_0)$, as already mentioned in Section III. A plot of this function is presented in Fig. 6. The IVCs plotted in Figs. 3-5 correspond to horizontal cuts ($E_0=const$) of the plot in Fig. 6. In Fig.
6a, the light wedge-like region at $eV>2\Delta$ corresponds to the resonant single-particle current. The resonant peak of the pair current is seen as the light streaks directed along the lines $E_0=\pm eV/2$; the structure starts at $eV=\Delta$. Fig. 6b presents a similar plot for the region of small voltage, $eV<\Delta$. The picture shows a quite complex current structure, consisting of wedge-like plateaux of the resonant current as well as of light streaks corresponding to current peaks. In order to interpret the features of the IVCs one needs to analyze the properties of the side-band currents $K_n$ presented in Eqs. (\ref{K}) and (\ref{Kexp}). \section{Discussion} A convenient expression for analysis of the subharmonic gap structure is derived in Appendix B: $$I_{SGS}(V, E_0)=\sum_{n=1}^\infty I_n(V, E_0),$$ \begin{eqnarray} I_n(V, E_0)={e\Delta\over2\pi}\int^{neV-\Delta}_{\Delta} \tanh{E\over 2T}{dE\over\xi} \nonumber \\ \left[ \tilde K_{-n} - \tilde{\bar K}_{-n} + (E_0\rightarrow -E_0)\right] \label{In} \end{eqnarray} In Eq. (\ref{In}), only the contributions of processes creating real excitations (transitions across the gap, $E>\Delta\rightarrow E_n<-\Delta$), which are responsible for the subharmonic gap structure \cite{Sh2}, are retained, while the contribution of thermal excitations is omitted. Furthermore, the sum over the sideband currents in Eq. (\ref{I'}) is now rearranged in order to explicitly separate the contributions of all inelastic channels [contributions of {\em even} inelastic channels are hidden in Eq. (\ref{I'})]. This is done by proper renormalization of the sideband currents $K_n\rightarrow \tilde K_{n}$ presented in Appendix B, the equation for $\tilde K_{n}$ being given in Eq. (\ref{Ktilde}). We will now develop a perturbative analysis of the current in the limit of a small resonance width, $\Gamma\ll\Delta$, and at zero temperature. \subsection{Single-particle current} The single-particle current is given by the first term in Eq.
(\ref{In}). In accordance with Eq. (\ref{Ktilde}), it has the explicit form \begin{eqnarray} I_1={4e \over \pi} \int_{\Delta}^{eV-\Delta} dE\, {|E_{-1}|\xi\xi_{-1}\over \Delta^3} \nonumber \\ \left\{ D_0^- \left( {e^{-\gamma}\over P_1}+{e^{\gamma}\over \bar P_1} \right) + \left( E_{0}\rightarrow -E_{0} \right) \right\}, \label{I1} \end{eqnarray} where $P_1$ is defined in Eq. (\ref{P'}). This current has no contribution from Andreev reflections and it has only one resonance. It is sufficient to consider only scattering states incoming from the left [the first term in curly brackets in Eq. (\ref{I1})], the resonance equation in this case being $E_0^-=0$. The resonance is only involved if it belongs to the integration interval. This determines the resonance region $eV/2>\Delta+|E_{0}|$ in the plane $(V,E_0)$ (region $I$ in Fig. 7). The resonant scattering diagram is shown in Fig. 8a. In non-resonant junctions, the currents $\tilde K_{-n}$ in Eq. (\ref{In}) have singularities which are responsible for the current onset at $eV=2\Delta$ and for the subharmonic gap structure. In resonant junctions, the singularities are washed out due to strong electron-hole dephasing and the resonant transmissivity is simultaneously renormalized. In the case of the single-particle current in Eq. (\ref{I1}), the onset of the non-resonant current is caused by zeros of the function $P_1$. Calculation of $P_1$ for resonant junctions by using the rule in Eq. (\ref{P'}) and retaining only the resonant scattering amplitude $d^-_0$ yields \begin{equation} {D_0^-\over P_1}\approx {\Delta^4\Gamma^2\over 16\xi^2\xi_{-1}^2} \left| E_0^- +{i\over 2} \left( \Gamma_{0}+\Gamma_{-1} \right)\right|^{-2}, \label{P1} \end{equation} where $\Gamma_{n}=\Gamma |E_{n}|/\xi_{n}$. Equation (\ref{P1}) shows the transformation of the resonant tunneling probability in superconducting junctions: the resonance width is broadened by the superconducting density-of-states factor $E/\xi$. Taking into account Eq.
(\ref{P1}) and similar equations for the other terms in Eq. (\ref{I1}), we present the single-particle current in the form of the Landauer formula, \begin{equation} I_1={e\over\pi} \int_{\Delta}^{eV-\Delta} dE\,\tilde D_1(E), \label{I1'} \end{equation} with the effective single-particle transmission coefficient, \begin{equation} \tilde D_1(E)= {\Gamma_0\Gamma_{-1}\over|E_0^- +(i/2)(\Gamma_{0}+\Gamma_{-1})|^2}. \label{D1} \end{equation} A similar equation has been derived in Ref. \cite{Ye2} using a different method. Equations (\ref{I1'}) and (\ref{D1}) determine the current in the wedge region in Fig. 6a. In the limit of $\Gamma\rightarrow 0$, the resonant current reads \begin{equation} I_1={2e^2\Gamma V_+ V_-\theta[eV-2(\Delta+|E_0|)] \over V_-\sqrt{(eV_+)^2-4\Delta^2}+V_+\sqrt{(eV_-)^2-4\Delta^2}} , \label{I1res} \end{equation} where $eV_{\pm}=eV\pm 2|E_0|$. This equation quantitatively describes the single-particle current feature in Fig. 3. The current has a maximum at the wedge edges and decreases at large $eV$, approaching the value of the resonant current in the normal junction, $I_N=e\Gamma$ (see Fig. 3), \begin{equation} I_1=I_N \left\{ \begin{array}{rl} \displaystyle {2|E_0|+\Delta\over\sqrt{|E_0|(|E_0|+\Delta)}}, & eV=2(\Delta+|E_0|)\\ \displaystyle 1+{2\Delta^2\over (eV)^2} , & eV\gg \Delta, E_0 \end{array} \right. \label{I1asym} \end{equation} The current peak is the result of enhancement of the effective width of the resonance in Eq. (\ref{D1}) at low energy, $\xi=0$. Equation (\ref{I1asym}) is applicable everywhere except at the wedge vertex, $E_0=0$, $eV=2\Delta$, where the current grows without limit. In fact, the current consists of the peak and turns to zero at $eV=2\Delta$ due to the shrinking of the integration interval in Eq. (\ref{I1'}) in this region. The maximum current is achieved when the integration interval becomes comparable with the resonance width, $eV-2\Delta\sim \Gamma\sqrt{\Delta/(eV-2\Delta)}$.
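The Landauer form of the single-particle current can be checked numerically against the $\Gamma\rightarrow 0$ closed form. Below is a minimal sketch (our own function names; units $e=\hbar=1$, $\Delta=1$, resonance at the Fermi level, $E_0=0$) that integrates the effective transmission of Eq. (\ref{D1}) with a tan substitution centred on the narrow resonance at $E=eV/2$ and compares with Eq. (\ref{I1res}), which at $E_0=0$ reduces to $I_1=\Gamma eV/\sqrt{(eV)^2-4\Delta^2}$.

```python
import math

Delta = 1.0   # superconducting gap (energy unit)
Gam = 1e-3    # bare Breit-Wigner half-width, Gam << Delta
eV = 3.0      # bias voltage; single-particle channel open for eV > 2 Delta


def xi(E):
    return math.sqrt(E * E - Delta * Delta)      # |E| > Delta assumed


def Gamma_n(E):
    return Gam * abs(E) / xi(E)                  # width broadened by BCS DOS


def D1(E):
    """Effective single-particle transmission, Eq. (D1), for E_0 = 0."""
    G0, G1 = Gamma_n(E), Gamma_n(E - eV)
    Em = E - eV / 2.0                            # E_0^- with E_0 = 0
    return G0 * G1 / (Em * Em + 0.25 * (G0 + G1) ** 2)


# I_1 = (1/pi) * int_Delta^{eV-Delta} D1(E) dE, with E = Er + c*tan(t)
# to resolve the narrow Lorentzian around the resonance at Er = eV/2.
Er = eV / 2.0
c = Gamma_n(Er)
t1 = math.atan((Delta - Er) / c)
t2 = math.atan((eV - Delta - Er) / c)
N = 20000
h = (t2 - t1) / N
I1 = 0.0
for k in range(N):
    t = t1 + (k + 0.5) * h
    E = Er + c * math.tan(t)
    I1 += D1(E) * c / math.cos(t) ** 2 * h       # dE = c dt / cos^2 t
I1 /= math.pi

# Gamma -> 0 closed form, Eq. (I1res) at E_0 = 0, in units I_N = Gam:
I1_exact = Gam * eV / math.sqrt(eV * eV - 4 * Delta * Delta)
print(I1, I1_exact)
```

The two values agree at the sub-percent level, the residual difference being the finite-$\Gamma$ correction to the $\Gamma\rightarrow 0$ formula.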
This interplay between the shrinking integration interval and the growing resonance width yields the estimate $(I_1)_{max}\sim I_N(\Delta/\Gamma)^{1/3}$ for the maximum current at $eV=2\Delta$. \subsection{Pair current} The pair current has the form \begin{eqnarray} I_2={4e\over \pi} \int_{\Delta}^{2eV-\Delta} dE\, {|E_{-2}|\xi\xi_{-2}\over \Delta^3} \nonumber \\ \left\{ D_0^-D_{-2}^+ \left( {e^{-\gamma}\varphi_{-2}\over P_2} +{e^{\gamma}\bar\varphi_{-2}\over \bar P_2} \right)+ \left( E_0\rightarrow -E_0 \right) \right\}. \label{I2} \end{eqnarray} Restricting the consideration again to quasiparticles incoming from the left, we find that this current receives contributions from two resonances, $E_0^-=0$ and $E_{-2}^+=0$, which simultaneously enter the integration interval within the region $II_1$ in Fig. 7 (region $II_2$ corresponds to resonant quasiparticles incoming from the right). Therefore, the resonant pair current only appears if the normal resonance is sufficiently far from the Fermi level, $E_0>\Delta/2$, while at $E_0<\Delta/2$ the current is non-resonant within the voltage interval $\Delta<eV<2\Delta$. This means in particular that the onset of the pair current at $eV=\Delta$ is small: $I_2\sim I_N(\Gamma/\Delta)^3$ if $E_0=0$. In regions $II$, the pair current undergoes resonant enhancement, $I_2\sim I_N(\Gamma/\Delta)^2$, due to independent contributions of two separate resonances (Fig. 8b), each contribution being described by equations similar to Eqs. (\ref{I1'}), (\ref{D1}). The most interesting phenomenon in the resonant pair current is the overlap of the resonances occurring along the lines $eV=\pm 2|E_0|$ in Fig. 7. The overlap of the resonances produces a huge current peak near these lines (seen as light streaks in Fig. 6a; we note that these lines correspond to the position of the onset of the resonant current in normal junctions). The scattering diagram for this case is presented in Fig. 8c. Applying Eq.
(\ref{P'}) for calculation of $P_2$ and retaining both the resonant amplitudes $d_0^-$ and $d_{-2}^+$, we obtain \begin{eqnarray} {D_0^-D_{-2}^+\over P_2} \approx {\Delta^6\Gamma^4\over |8\xi\xi_{-1}\xi_{-2}|^2} \left| \left[E_0^- +i\left({\Gamma_{0}+\Gamma_{-1}\over 2}\right)\right] \right. \nonumber\\ \left. \left[E_{-2}^+ +i\left({\Gamma_{-1}+\Gamma_{-2}\over 2}\right)\right] -{\Gamma^2\Delta^2\over 4|\xi_{-1}|^2} \right|^{-2}. \label{P2res} \end{eqnarray} Substituting Eq. (\ref{P2res}) into Eq. (\ref{I2}) for the current and collecting the contributions of all scattering modes, we find \begin{equation} I_2={e\over \pi} \int_{\Delta}^{2eV-\Delta} dE\, \tilde D_2(E), \label{I2'} \end{equation} where \begin{eqnarray} &&\tilde D_2(E)= \nonumber \\ &&{\Gamma_{0}\Gamma_{-2}\Gamma^2\Delta^2/4|\xi_{-1}|^2\over |\tilde E_{0}^-\tilde E_{-2}^+ -(\Gamma_{0}\Gamma_{-2} +\Gamma^2\Delta^2/|\xi_{-1}|^2)/4+ i(\Gamma_{-2}E_{0}^- + \Gamma_{0}E_{-2}^+)/2|^2} \nonumber \\ &&\tilde E= E+i\Gamma_{-1}/2. \label{D2} \end{eqnarray} Equation (\ref{D2}) shows a remarkable similarity to the resonant transmissivity of Schr\"odinger three-barrier structures: the probability of leaking outside the superconducting gap through the sidebands $n=0$ and $n=-2$ (Fig. 8c) corresponds to the probability of tunneling through the side barriers, while the probability of Andreev reflection at the sideband $n=-1$ corresponds to the transmissivity of the central barrier. Such three-barrier structures have been investigated, e.g., in connection with normal electron transport properties of coupled quantum dots \cite{Naz,Delft}. The strong overlap of the resonances is explained by the fact that the shift of the resonances is proportional to $\Gamma^2$ due to Andreev reflection, according to Eq. (\ref{D2}), while the resonance width is proportional to the first power of $\Gamma$ (the quantity $\Gamma_{-1}$ is equal to zero at the lines $eV=\pm2|E_0|$).
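The three-barrier analogy can be made concrete with a toy model of two coupled Breit-Wigner levels in the spirit of coupled quantum dots (this is an illustration of the narrowing mechanism only, not the junction formula (\ref{D2}) itself; all parameter values are ours).

```python
# Transmission through two coupled resonant levels e1, e2 with inter-level
# coupling t and lead-induced half-widths GL, GR (coupled-dot toy model).
GL = GR = 0.1
e1 = e2 = 0.0


def T(E, t):
    den = (E - e1 + 0.5j * GL) * (E - e2 + 0.5j * GR) - t * t
    return GL * GR * t * t / abs(den) ** 2


# At aligned levels, unit transmission is reached at t = GL/2.  For weak
# coupling t << GL the resonance width is set by t^2/GL rather than by GL
# itself -- the same narrowing mechanism as for the central MAR resonance,
# where the shift is O(Gamma^2) while the width is O(Gamma).
print(T(0.0, 0.05), T(0.0, 0.02), T(GL, 0.01) / T(0.0, 0.01))
```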
In the vicinity of the lines $eV=\pm 2|E_0|$ and in the limit $\Gamma\rightarrow 0$, the pair current has the following form \begin{equation} I_2=I_N{\Gamma^2 eV\sqrt{(eV)^2-\Delta^2}\over (eV-2|E_0|)^2 [(eV)^2-\Delta^2]+\Gamma^2[2(eV)^2-\Delta^2]}. \label{I2double} \end{equation} (We notice that this formula is valid at all voltages $eV>\Delta$ because the sideband $n=-1$ is inside the energy gap if $eV\approx \pm 2|E_0|$). Equation (\ref{I2double}) describes the current peak in Fig. 3, the height of the peak \begin{equation} (I_2)_{max}=I_N\;{2|E_0|\sqrt{4E_0^2-\Delta^2}\over 8E_0^2-\Delta^2} \label{I2p} \end{equation} being comparable to the magnitude of the resonant single-particle current, in particular, $(I_2)_{max}=I_N/2$ at $E_0\gg\Delta$. According to Eq. (\ref{I2double}), the resonant pair current tends to zero at large voltage $eV\gg\Delta, E_0$, which means that, rigorously speaking, there is no resonant excess current. However, if the resonance is far beyond the gap, $|E_0|\gg\Delta$, the current may strongly deviate from the current in the normal junction in the region $\Delta\ll eV\ll 2|E_0|$ because the single-particle current is non-resonant in this region, while the pair current is resonant. Such an effect is particularly pronounced in junctions where the resonance level follows the chemical potential of one of the electrodes, $E_0(eV)\pm eV/2 \approx \epsilon=const$. The IVC in this case corresponds to cuts in the plot in Fig. 6a parallel to the light streaks. In such a case, the peak of the pair current is very broad, and even transforms into a plateau with a sharp onset at $eV=\Delta$ ($\epsilon=0$), as shown in the inset in Fig. 10. The magnitude of the current at the plateau can be found directly from Eq.
(\ref{Kexp}) when assuming $E_0=\epsilon\pm eV/2$ and $eV=\infty$, \begin{eqnarray} I_2(\epsilon, \Gamma)={2e\over \pi} \int_{0}^{\infty}dE\;\cosh(Re\gamma) \nonumber \\ {2D_0^- \sinh(Re\gamma) + D_0^-D_0^+ e^{-Re\gamma}\over |e^{\gamma}-r_0^{-\ast}r_0^+e^{-\gamma}|^2}, \end{eqnarray} where $D_0^{\pm}=\Gamma^2/[(E\mp\epsilon)^2+\Gamma^2]$. This current as a function of $\epsilon$ is shown in Fig. 10. There is an interesting difference between the properties of the resonance in the single-particle current and those of the individual resonances of the pair current. To be specific, let us consider the resonance $E^-_0$: in the pair current this resonance is narrower because the quantity $\Gamma_{-1}$ is imaginary and causes a resonant shift rather than a contribution to the resonance width. The physical reason for this squeezing of the resonance is that direct leakage of a quasiparticle through the sideband $n=-1$ is blocked, and the only escape from the resonant region into the continuum is through the states of the sideband $n=0$. \subsection{High-order currents} The effect of the resonance narrowing is even more important for the third-order current, \begin{eqnarray} &&I_3={4e\Delta\over \pi} \int_{\Delta}^{3eV-\Delta} dE\, {|E_{-3}|\xi\xi_{-3}\over \Delta^3} \nonumber \\ &&\left\{ D_0^-D_{-2}^+D_{-2}^- \left( {e^{-\gamma}\varphi_{-3} \over P_3}+ {e^{\gamma}\bar\varphi_{-3}\over \bar P_3} \right)+\left( E_0\rightarrow -E_0 \right) \right\}. \label{I3} \end{eqnarray} The third-order current has three resonances at $E_0^-, E_{-2}^{+}, E_{-2}^{-}=0$ which belong to the interval of integration within the regions $III_1, III_2, III_3$ in Fig. 7, respectively. The side resonances at $E_0^-, E_{-2}^{-}=0$ are characterized by an effective transmissivity similar to the effective transmissivity of the resonances of the pair current (times an additional factor $\sim \Gamma^2$). The contribution of these resonances to the current is therefore estimated as $I_3\sim I_N(\Gamma/\Delta)^4$.
The central resonance $E_{-2}^{+}=0$ is much narrower. Indeed, in this case (Fig. 8d), direct leakage of the resonant particle into the continuum is blocked at both side bands $n=-1,-2$, and the particle can escape only through the side band states $n=0,-3$, traversing the junction one more time. The central resonance determines the current in the vicinity of the threshold $eV=2\Delta/3,\, E_0=0$. Calculation of the quantity $P_3$ in region $III_2$ according to Eq. (\ref{P'}) yields \begin{equation} {D_0^-D_{-2}^+D_{-2}^-\over P_3} \approx {\Delta^4\tilde\Gamma_0\tilde\Gamma_{-3}\over |4^2\xi\xi_{-3}EE_{-3}|} \left| \tilde E_{-2}^+ +{i\over2} \left(\tilde\Gamma_{0}+ \tilde\Gamma_{-3} \right)\right|^{-2} \label{P3res} \end{equation} where $\tilde E_{-2}^+=E_{-2}^+ +i(\Gamma_{-1}+\Gamma_{-2})/2 + O(\Gamma^2)$ and $\tilde\Gamma_{0}=\Gamma_{0}D_{0}^-\Delta^2/4|\xi|^2$, $\tilde\Gamma_{-3}=\Gamma_{-3}D_{-2}^-\Delta^2/4|\xi_{-2}|^2$. According to Eq. (\ref{P3res}), the resonance width is of the order of $\tilde\Gamma\sim\Gamma^3$, which yields a giant enhancement of the current, $I_3\sim I_N(\Gamma/\Delta)^2$, exceeding by two orders of $\Gamma$ the contribution of the side resonances. Such narrowing of the central resonance occurs in the quadrangle region in Fig. 7 bounded by the edges of the resonance region $III_2$ and regions $II$. The current in this region has a form similar to Eq. (\ref{I1'}), \begin{equation} I_3={e\over \pi} \int_{\Delta}^{3eV-\Delta} dE\, \tilde D_3(E), \label{I3'} \end{equation} with the effective resonant transmissivity % \begin{equation} \tilde D_3(E)={3\tilde\Gamma_0\tilde\Gamma_{-3} \over \left| \tilde E_{-2}^+ - i \left( \tilde\Gamma_{0}+\tilde\Gamma_{-3} \right)/2 \right|^2}. \label{D3} \end{equation} In the limit $\Gamma\rightarrow 0$, the current reads % \begin{equation} I_3=6e \tilde\Gamma_{0} \tilde\Gamma_{-3}/ \left( \tilde\Gamma_{0}+\tilde\Gamma_{-3}\right)_{E=|E_0|+3eV/2}.
\label{I3r} \end{equation} The phenomenon of resonance narrowing explains the absence of a current structure at the voltage $eV=\Delta$, namely the dominance of the third order current $I_3$ at the threshold of the pair current. The current in Eq. (\ref{I3r}) is responsible for the light wedge-like region at $eV<\Delta$ in Fig. 6b. Similarly to the case of the single-particle current, the third-order current in Eq. (\ref{I3r}) has a peak at the edges of the wedge, with a height increasing as $(eV-2\Delta/3)^{-1/2}$ towards the vertex of the wedge, $eV=2\Delta/3, \, E_0=0$. This growth is again limited by the interplay between the shrinking integration interval and the growing resonance width, $eV-2\Delta/3\sim \Gamma^3[\Delta/(eV-2\Delta/3)]^{1/2}$. This estimate gives a height $(I_3)_{max}\sim I_N(\Gamma/\Delta)$ of the current peak at $eV=2\Delta/3$. As one may see in Fig. 6b, there are no current structures at the edges $eV=2(\Delta-|E_0|)$ of the above-mentioned quadrangle where the narrow resonance of the three-particle current dies: this is because the resonant pair current emerges at the same lines, giving rise to a gradual crossover between the three-particle current and the pair current, both having a magnitude of the order of $I_N(\Gamma/\Delta)^2$. The phenomenon of resonance narrowing results in an enhancement of the central resonances in all higher odd-order currents, giving rise to current peaks at $eV=2\Delta/(2k+1)$, $E_0=0$ with heights $I_{max}\sim I_N(\Gamma/\Delta)^{2k-1}$. The magnitude of the current between neighbouring peaks is $I\sim I_N(\Gamma/\Delta)^{2k}$. Also, the overlap of the narrow resonances of the even-order currents near the lines $eV=\pm 2|E_0|$ yields current peaks with height $I\sim I_N(\Gamma/\Delta)^{2k}$ within the intervals $2\Delta/(2k+2)<eV<2\Delta/2k$. These current peaks are clearly seen in Fig. 6b in the form of light streaks.
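The limiting form (\ref{I3r}) follows from integrating the Lorentzian transmissivity (\ref{D3}) across the narrow central resonance; the consistency of the two expressions can be verified numerically (a sketch with $e=1$ and arbitrary small widths $\tilde\Gamma_0,\tilde\Gamma_{-3}$; all variable names are ours):

```python
import numpy as np

def i3_numeric(g0, g3, e=1.0):
    """(e/pi) * integral of D3(E) = 3*g0*g3 / (E^2 + ((g0+g3)/2)^2)
    across the central resonance, cf. Eqs. (I3') and (D3)."""
    w = (g0 + g3) / 2.0
    E = np.linspace(-2000.0 * w, 2000.0 * w, 400001)
    D3 = 3.0 * g0 * g3 / (E**2 + w**2)
    # trapezoid rule on a uniform grid
    return (e / np.pi) * np.sum(0.5 * (D3[1:] + D3[:-1])) * (E[1] - E[0])

def i3_limit(g0, g3, e=1.0):
    """Closed form of Eq. (I3r): I3 = 6 e g0 g3 / (g0 + g3)."""
    return 6.0 * e * g0 * g3 / (g0 + g3)

g0, g3 = 1e-3, 3e-3
print(i3_numeric(g0, g3), i3_limit(g0, g3))
```

The two values agree to a few parts in $10^4$, the residual discrepancy coming from the truncated Lorentzian tails of the numerical integral.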
\section{Conclusion} In conclusion, we have considered the effect of normal-electron resonant tunneling on the subharmonic gap structure (SGS) in mesoscopic superconducting junctions. In non-resonant tunnel junctions, the SGS consists of sharp onsets and narrow peaks of the current at voltages $eV=2\Delta/n$. In resonant junctions, the SGS is considerably modified depending on the position of the resonance level with respect to the chemical potentials of the electrodes. If the resonance level is situated exactly midway between the chemical potentials of the electrodes, the odd-$n$ current structures are tremendously enhanced, while the even-$n$ current structures are not affected by the resonance. This enhancement is explained by the narrowing of the resonance during multiple Andreev reflections. When the resonance departs from the midpoint between the chemical potentials of the electrodes, new current structures appear at $eV=\pm2E_0$ in the form of current peaks. This feature results from the overlap of electron and hole resonances. In our calculations, the Coulomb charging energy was assumed to be smaller than the superconducting gap, and charging effects were neglected. In experiments on metallic dots \cite{Ralph} and carbon nanotubes \cite{Delft1,Rice,Dekker}, the opposite situation has been observed, with the Coulomb charging energy exceeding the superconducting gap, which led to a suppression of the subgap current. The charging energy in quantum transport experiments can be reduced by enhancing the capacitance of the resonant structure, e.g. by using substrates with large dielectric constants. This would allow direct application of our results to such structures. Another way would be to use high-$T_c$ materials for the fabrication of superconducting electrodes in the nanotube experiments.
Our theory is applicable to ballistic plane junctions with large capacitance, such as resonant junctions in high-mobility S-2DEG-S devices and atomic plane junctions in layered cuprates (intrinsic Josephson junctions \cite{Kleiner}). Current-voltage characteristics of such multimode junctions can be obtained on the basis of our theory by summing the contributions of all transport modes. \section{Acknowledgements} This work has been supported by the Swedish Natural Science Research Council (NFR), the Swedish Board for Technical Development (NUTEK), the Swedish Royal Academy of Sciences (KVA), and the New Energy Development Organization (NEDO), Japan.
\section{Introduction} \label{sec:intro} With the onset of quantum information theory, the weirdness of quantum mechanics has transitioned from being a bug to being a feature, and the first demonstrations of quantum speedup have recently been achieved~\cite{arute2019quantum,zhong2020quantum}, building on inherently non-classical properties of physical systems. While entanglement is used daily for the calibration of current quantum experiments, it was originally perceived as a `spooky action at a distance' by Einstein. This led him, Podolsky and Rosen (EPR) to speculate about the incompleteness of quantum mechanics~\cite{einsteincan1935} and the existence of a deeper theory over `hidden' variables reproducing the predictions of quantum mechanics without its puzzling non-local aspects. During the same period, Wigner was also looking for a more intuitive description of quantum mechanics, and he obtained a phase-space description akin to that of classical theory~\cite{Wigner1932}. However, a major difference with the classical case was that the Wigner function---the quantum equivalent of a classical probability distribution over phase space---could display negative values. These `negative probabilities' seemingly prevented a classical interpretation of phase-space quantum mechanics. More than thirty years later, the seminal results of Bell \cite{belleinstein1964,bell1966} and Kochen and Specker \cite{kochen1975problem} ruled out the possibility of finding the underlying hidden-variable model for quantum mechanics envisioned by EPR, thus establishing non-locality, and its generalisation contextuality, as fundamental properties of quantum systems. At an intuitive level, contextuality and negativity of the Wigner function are properties of quantum states that seek to capture similar characteristics of quantum theory: the non-existence of a classical probability distribution that describes the outcomes of the measurements of a quantum system. 
In more operational terms, contextuality is present whenever any hidden-variable description of the behaviour of a system is inconsistent with the basic assumptions that \begin{enumerate*}[label=(\roman*)] \item all of its observable properties may be assigned definite values at all times, and \item jointly measuring compatible observables does not disturb the global value assignments, or, in other words, these assignments are context-independent. \end{enumerate*} Aside from its foundational importance, contextuality has been increasingly identified as an essential ingredient for enabling a range of quantum-over-classical advantages in information processing tasks, which include the onset of universal quantum computing in certain computational models \cite{raussendorf2013contextuality,howard2014contextuality,abramsky2017contextual,bermejo2017contextuality,abramsky2017quantum}. Similarly, the negativity of the Wigner function, or Wigner negativity for short, is also crucial for quantum computational speedup as quantum computations described by non-negative Wigner functions can be simulated efficiently classically~\cite{Mari2012}. Importantly, quantum information can be encoded with discrete but also continuous variables~\cite{lloyd1999quantum}, using quantum degrees of freedom of physical systems such as position or momentum. The study of contextuality has mostly focused on the simpler discrete-variable setting~\cite{abramsky2011sheaf,csw,Spekkens2005,Xiang2013,Helmut2006,bartosik2009experimental,kirchmair2009state}: for discrete-variable systems of odd prime-power dimension, Howard \textit{et al.} \cite{howard2014contextuality} showed that negativity of the (discrete) Wigner function~\cite{gross2006hudson} corresponds to contextuality with respect to Pauli measurements, and the equivalence was later generalised to odd dimensions in \cite{delfosse2017equivalence} and to qubit systems in \cite{raussendorf2017contextuality,delfosse2015wigner}. 
However, the EPR paradox~\cite{einsteincan1935} and the phase-space description derived by Wigner~\cite{Wigner1932} were originally formulated for continuous-variable systems. Moreover, from a practical point of view, continuous-variable quantum systems are emerging as very promising candidates for implementing quantum informational and computational tasks \cite{braunstein2005quantum,weedbrook2012gaussian,crespi2013integrated,Bourassa2021blueprintscalable,walschaers2021non,chabaud2021continuous} as they offer unrivalled possibilities for quantum error-correction~\cite{Gottesman2001,cai2021bosonic}, deterministic generation of large-scale entangled states over millions of subsystems~\cite{yokoyama2013ultra,yoshikawa2016invited} and reliable and efficient detection methods, such as homodyne or heterodyne detection~\cite{Leonhardt-essential,Ohliger2012}. Since contextuality and Wigner negativity both seem to play a fundamental role as non-classical features enabling quantum-over-classical advantages, a natural question arises in the continuous-variable setting: \begin{center} \textit{What is the precise relationship between contextuality and Wigner negativity?} \end{center} \noindent Here we prove that contextuality and Wigner negativity are equivalent with respect to continuous-variable measurements, thus unifying the quantum quirks that prevented Einstein and Wigner from obtaining a classically intuitive description of quantum mechanics. More precisely, we build on the recent extension of the sheaf-theoretic framework of contextuality~\cite{abramsky2011sheaf} to the continuous-variable setting~\cite{barbosa2019continuous} and prove the equivalence between contextuality and Wigner negativity with respect to generalised position and momentum measurements, \ie continuous-variable Pauli measurements.
These are the most commonly used measurements in continuous-variable quantum information, in particular in quantum optics \cite{adesso2014continuous,walschaers2021non}, and for defining the standard models of continuous-variable quantum computing~\cite{lloyd1999quantum,Gottesman2001}. \section{Continuous-variable contextuality} \label{sec:CVcontextuality} In this section, we briefly review the contextuality formalism from \cite{barbosa2019continuous}, which is the continuous-variable extension of \cite{abramsky2011sheaf}. We introduce the necessary ingredients of measure theory in Appendix \ref{app:measure}. We encourage readers unfamiliar with these notions to read it. We present the two main objects to properly define `contextuality' in a continuous-variable setting. \textit{Empirical models} can be thought of as providing formal descriptions of tables of data specifying the probabilities of joint outcomes for compatible measurements. These empirical models need an underlying abstract description of an experiment which is given by a \textit{measurement scenario}. \subsection{Measurement scenarios} \label{subsec:measscenario} An abstract description of an experimental setup is formalised as a measurement scenario. \begin{definition}[Measurement scenario] \label{def:measurementscenario} A measurement scenario is a triple $\tuple{\Xc,\Mc,\bm \Oc}$ whose elements are specified as follows. \begin{itemize} \item $\Xc$ is a (possibly infinite) set of measurement labels. \item $\Mc$ is a covering family of subsets of $\Xc$, \ie such that $\bigcup\Mc = \Xc$. The elements $C \in \Mc$ are called maximal contexts and represent maximal sets of compatible observables. We therefore require that $\Mc$ be an anti-chain with respect to subset inclusion, \ie that no element of this family is a proper subset of another. Any subset of a maximal context also represents a set of compatible measurements. 
\item $\bm \Oc = \family{\bm \Oc_x}_{x \in \Xc}$ specifies a measurable space of outcomes $\bm \Oc_x = \tuple{\Oc_x,\Fc_x}$ for each measurement $x \in \Xc$. If some set of measurements $U \subseteq \Xc$ is considered together, there is a joint outcome space given by the product of the respective outcome spaces $\bm \Oc_U \defeq \prod_{x \in U} \bm \Oc_x = \tuple{\Oc_U, \Fc_U}$. \end{itemize} \end{definition} As for the discrete-variable case, the \textit{sheaf} $\Ec$ that maps $U \subseteq \Xc$ to $\Ec(U) = \bm \Oc_U$ is called the \emph{event sheaf}. It comes with the notion of restriction, \ie, for $U \subseteq V \in \Pc(\Xc)$ there is a restriction map $\rho^V_U : \bm \Oc_V \rightarrow \bm \Oc_U $ which is measurable. \subsection{Empirical models} \begin{definition}[Empirical model] \label{def_empiricalmodel} An empirical model (or empirical behaviour) on a measurement scenario $\tuple{\Xc,\Mc,\Oc}$ is a family $e = \family{e_C}_{C \in \Mc}$, where $e_C$ is a probability measure on the space $\Ec(C)=\bm \Oc_C$ for each maximal context $C \in \Mc$. It satisfies the compatibility conditions: \[\forall C, C' \in \Mc, \quad e_C|_{C \cap C'} = e_{C'}|_{C \cap C'} \Mdot \] \end{definition} Empirical models capture in a precise way the probabilistic \emph{behaviours} that may arise upon performing measurements on physical systems. The compatibility condition ensures that the empirical behaviour of a given measurement or compatible subset of measurements is independent of which other compatible measurements might be performed along with them. This is known as the \emph{no-disturbance condition}. A special case is \emph{no-signalling}, which applies in multi-party or Bell scenarios. In that case, contexts consist of measurements that are supposed to occur in space-like separated locations, and compatibility ensures for instance that the choice of performing $a$ or $a'$ at the first location does not affect the empirical behaviour at the second location. 
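As a toy illustration of Definition~\ref{def_empiricalmodel} and of the no-disturbance condition, consider a finite Bell-type scenario with four measurements and binary outcomes. The PR-box behaviour below is a standard example (ours, not taken from the text); the sketch stores each $e_C$ as a probability table and checks compatibility by comparing marginals on overlapping contexts:

```python
import itertools
import numpy as np

# Four measurement labels; maximal contexts of a (2,2,2) Bell scenario:
contexts = [("a", "b"), ("a", "b2"), ("a2", "b"), ("a2", "b2")]

def pr_box(C):
    """Empirical table e_C over joint outcomes (x, y) in {0,1}^2.
    PR-box correlations: x XOR y = 1 only in the (a2, b2) context."""
    t = np.zeros((2, 2))
    for x, y in itertools.product((0, 1), repeat=2):
        if (x ^ y) == (C == ("a2", "b2")):
            t[x, y] = 0.5
    return t

def no_disturbance(model):
    """Check e_C|_{C cap C'} = e_{C'}|_{C cap C'} for all pairs of contexts."""
    for C in contexts:
        for C2 in contexts:
            for i, m in enumerate(C):
                if m in C2:
                    j = C2.index(m)
                    lhs = model(C).sum(axis=1 - i)   # marginal of m from C
                    rhs = model(C2).sum(axis=1 - j)  # marginal of m from C'
                    if not np.allclose(lhs, rhs):
                        return False
    return True

print(no_disturbance(pr_box))  # True: the PR box is no-signalling
```

Every single-measurement marginal of the PR box is uniform, so the compatibility condition holds even though, as is well known, the model admits no global description.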
\subsection{Noncontextuality} Informally, contextuality arises from the tension between local and global assignments. This tension was first considered in \cite{vorob1962consistent}. Noncontextuality can be thought of as an extendability property: the empirical data is noncontextual whenever local predictions (within a valid context) can be glued together consistently, so that they can be described by global assignments. \begin{definition}[Noncontextuality] \label{def:nc} An empirical model $e$ on a scenario $\tuple{\Xc,\Mc,\bm \Oc}$ is \emph{extendable} or \emph{noncontextual} if there is a probability measure $\mu$ on the space $\Ec(\Xc)=\bm \Oc_\Xc$ such that $\mu|_C = e_C$ for every $C \in \Mc$. \end{definition} Recall that $\bm \Oc_\Xc$ is the space of global assignments. Of course, it is not always the case that $\Xc$ is a valid context, and if it were then $\mu = e_\Xc$ would trivially extend the empirical model. The question of the existence of such a probability measure that recovers the context-wise empirical content of $e$ is particularly significant. When it exists, it amounts to a way of modelling the observed behaviour as arising stochastically from the behaviours of underlying states, identified with the elements of $\Oc_\Xc$. These elements deterministically assign outcomes to \textit{all} the measurements in $\Xc$, independently of the measurement context that is actually performed. If an empirical model is not extendable, it is said to be \emph{contextual}. \subsection{A Fine--Abramsky--Brandenburger theorem} We have characterised noncontextuality of an empirical model by the extendability property. Global sections are sufficient to capture noncontextual empirical behaviours via deterministic global states that assign predefined outcomes to all measurements. This is precisely the model referred to in the Kochen--Specker theorem \cite{kochen1975problem}.
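Extendability can be probed concretely in a finite Bell-type toy scenario (four binary measurements; a standard example, not from the text): a global probability measure is a weight vector over the $2^4$ deterministic assignments, and any average of these obeys the classical CHSH bound of $2$, which the no-signalling PR box violates, witnessing contextuality. A minimal sketch:

```python
import itertools
import numpy as np

labels = ["a", "a2", "b", "b2"]
contexts = [("a", "b"), ("a", "b2"), ("a2", "b"), ("a2", "b2")]

def pr_box(C):
    """PR-box joint-outcome table over {0,1}^2: x XOR y = 1 only for (a2, b2)."""
    t = np.zeros((2, 2))
    for x, y in itertools.product((0, 1), repeat=2):
        if (x ^ y) == (C == ("a2", "b2")):
            t[x, y] = 0.5
    return t

def correlator(table):
    """<AB> with outcomes 0/1 mapped to eigenvalues +1/-1."""
    return sum((-1) ** (x + y) * table[x, y]
               for x, y in itertools.product((0, 1), repeat=2))

def chsh(corrs):
    return (corrs[("a", "b")] + corrs[("a", "b2")]
            + corrs[("a2", "b")] - corrs[("a2", "b2")])

# Every deterministic global assignment g in {0,1}^4 gives |CHSH| <= 2,
# hence so does any extendable (noncontextual) empirical model:
det_bound = max(abs(chsh({C: (-1) ** (g[labels.index(C[0])] + g[labels.index(C[1])])
                          for C in contexts}))
                for g in itertools.product((0, 1), repeat=4))

pr_value = chsh({C: correlator(pr_box(C)) for C in contexts})
print(det_bound, pr_value)  # classical bound 2 vs PR-box value 4: contextual
```

The bound $2$ is exactly the Bell--CHSH inequality obtained by averaging over global assignments, so its violation rules out any extension in the sense of Definition~\ref{def:nc}.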
On the other hand, Bell's theorem---focused on a multi-party experiment in which the parties may be spacelike separated---identifies another classical feature: factorisability rather than determinism. Fine unified these two notions in the case of the (2,2,2) Bell scenario \cite{Fine1982}. Later, Abramsky and Brandenburger \cite{abramsky2011sheaf} showed that this existential equivalence holds for any measurement scenario with observables having a discrete spectrum. This proof was further generalised to the continuous-variable setting in \cite{barbosa2019continuous}. It establishes an unambiguous, unified treatment of locality and noncontextuality, which is captured in a canonical way by the notion of extendability. We begin by introducing the notion of hidden-variable models (HVM). Note that HVMs are nowadays often referred to as ontological models \cite{Spekkens2005}, a term that has become widely used in quantum foundations in recent years; it indicates that the hidden variables---or the ontic state---are supposed to provide an underlying description of the physical world at perhaps a more fundamental level than the empirical-level description via the quantum state. The idea behind the introduction of HVMs is that there exists some space $\Lambda$ of hidden variables predetermining the empirical behaviour. The motivation is that hidden variables could \textit{explain away} some of the more non-intuitive aspects of the empirical predictions of quantum mechanics, which would then arise from an incomplete knowledge of the true state of a system rather than being a fundamental feature. There is some precedent for this in physical theories: for instance, statistical mechanics---a probabilistic theory---admits a deeper, albeit usually very complex, description in terms of classical mechanics, which is purely deterministic.
It is desirable to further impose constraints on hidden-variable models (HVM) which restrict the set of achievable empirical behaviours and require that the model behave \textit{classically} in some sense. In the case of Bell locality, we require that the hidden-variable model be local, \ie factorisable (in a sense made precise below). However, the hidden variables may not be directly accessible themselves, so we assume only probabilistic information about them, in the form of a probability distribution $p$ on $\Lambda$. The empirical behaviour should then be obtained as an average over the hidden-variable behaviours. \begin{definition}[Hidden-variable model] \label{def:hvmodel} A hidden-variable model on a measurement scenario $\tuple{\Xc,\Mc,\Oc}$ consists of the triple $\tuple{\bm \Lambda, p, (k_C)_{C \in \Mc}}$ where: \begin{itemize} \item $\bfLambda = \tuple{\Lambda,\Fc_\Lambda}$ is the measurable space of hidden variables, \item $p$ is a probability distribution on $\bfLambda$, \item for each context $C \in \Mc$, $k_C$ is a probability kernel between the measurable spaces $\bfLambda$ and $\Ec(C)=\bm \Oc_C$ satisfying the following compatibility condition: \begin{equation}\label{eq:compatibility_hiddenvar} \forall C,C' \in \Mc, \forall \lambda \in \Lambda, \quad k_C(\lambda,-)|_{C \cap C'} = k_{C'}(\lambda,-)|_{C \cap C'} \Mdot \end{equation} \end{itemize} \end{definition} A hidden-variable model $\tuple{\bfLambda,p,(k_C)_{C \in \Mc}}$ on a measurement scenario $\tuple{\Xc,\Mc,\bm \Oc}$ gives rise to an empirical behaviour.
The corresponding empirical model $e$ is such that for any maximal context $C \in \Mc$ and measurable set of joint outcomes $B \in \Fc_C$, \begin{equation} \label{eq:eval_HVM_emp} e_C(B) = \intg{\Lambda}{k_C(\dummy,B)}{p} = \intg{\lambda \in \Lambda}{k_C(\lambda,B)}{p(\lambda)}\Mdot \end{equation} It is important to emphasise that quantum systems \textit{can} be described by such an HVM: we must impose further constraints to ensure that the HVM behaves \textit{classically}.\footnote{Note that this definition of hidden-variable model assumes \textit{$\lambda$-independence} \cite{Dickson1998} (the fact that the distribution $p$ on $\bfLambda$ is independent of the measurement context) and \textit{parameter-independence} \cite{Jarrett1984,Shimony1986} (the compatibility condition at the hidden-variable level), as is described in \cite{barbosa2019continuous}.} A hidden-variable model $\tuple{\bfLambda,p,(k_C)_{C \in \Mc}}$ is said to be \textit{deterministic} if, for all contexts $C \in \Mc$ and every $\lambda \in \Lambda$, $\fdec{k_C(\lambda,\dummy)}{\Fc_C}{[0,1]}$ is a Dirac measure, \ie there is an assignment $\bm o\in \Oc_C$ such that $k_C(\lambda,\dummy) = \delta_{\bm o}$. It is said to be \textit{factorisable} if, writing $h_C^\lambda$ for the distribution $k_C(\lambda,\dummy)$, for every $\lambda \in \Lambda$ and for every maximal context $C \in \Mc$, $h_C^\lambda$ factorises as a product probability distribution, \ie for every $s \in \Oc_C$: \begin{equation*} h_C^\lambda(s) = \prod_{m \in C} h_C^\lambda|_{\{m\}}(s|_{\{m\}}) = \prod_{m \in C} h_m^\lambda(s|_{\{m\}}). \end{equation*} \noindent Due to the assumption of parameter-independence, we can unambiguously write $h_m^\lambda$ for $h_C^\lambda|_{\{m\}}$, as the marginalisation is independent of the context $C$. \begin{theorem}[FAB theorem \cite{abramsky2011sheaf,barbosa2019continuous}] \label{th:FAB} Let $e$ be an empirical model on a measurement scenario $\tuple{\Xc,\Mc,\bm \Oc}$.
The following are equivalent: \begin{enumerate}[label=(\arabic*)] \item\label{it:ext} $e$ is extendable;\index{Extendability!CV} \item\label{it:det} $e$ admits a realisation by a deterministic hidden-variable model;\index{Hidden-variable model!CV!deterministic} \item\label{it:fac} $e$ admits a realisation by a factorisable hidden-variable model.\index{Hidden-variable model!CV!factorisable} \end{enumerate} \end{theorem} Crucially, this theorem provides a canonical form of hidden-variable model for a noncontextual empirical model $e$, where the hidden-variable space can be taken to be the space of global assignments. \begin{corollary}[\cite{barbosa2019continuous}] \label{coro:HVM} For a noncontextual empirical model $e$ on a measurement scenario $\tuple{\Xc,\Mc,\bm \Oc}$ extendable by a measure $\mu$ on $\bm \Oc_{\Xc}$, a canonical hidden-variable model for $e$ is given by: \begin{itemize} \item $\bfLambda \defeq \bm \Oc_{\Xc}$; \item $p \defeq \mu$; \item $k_C(\bm g,\dummy) \defeq \delta_{\bm g|_C}$ for all global outcome assignments $\bm g \in \Oc_\Xc$ and all maximal contexts $C \in \Mc$. \end{itemize} \end{corollary} \section{Wigner functions and the symplectic phase space} \label{sec:wigner} We fix $M \in \N^*$ to be the number of qumodes, that is, $M$ continuous-variable systems. For a single qumode, the corresponding Hilbert space is \(L^2(\R)\), and the total Hilbert space for all \(M\) qumodes is then \(L^2(\R)^{\otimes M} \cong L^2(\R^{M})\). To each qumode, we associate a \emph{position} operator and a \emph{momentum} operator. These are defined on the dense subspace of Schwartz functions (smooth functions which, together with all their derivatives, decay faster than any inverse polynomial) by: \begin{equation} \hat{q} \psi(x) \defeq x \psi(x) \quad \text{and} \quad \hat{p} \psi(x) \defeq -i \frac {\partial \psi(x)}{\partial x} \, , \end{equation} and we extend this definition by linearity to any linear combination thereof. Any \(\R\)-linear combination is called a \emph{quadrature}.
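The canonical commutation relation underlying these quadratures, $[\hat q,\hat p]=i$, can be sanity-checked numerically in a truncated Fock basis (a sketch; the truncation is ours, and it corrupts only the last diagonal entry of the commutator):

```python
import numpy as np

N = 12  # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
q = (a + a.T.conj()) / np.sqrt(2)            # position quadrature
p = 1j * (a.T.conj() - a) / np.sqrt(2)       # momentum quadrature

comm = q @ p - p @ q
# [q, p] = i * Id on the first N-1 levels; the corner entry is a
# truncation artefact equal to i * (1 - N).
print(np.allclose(comm[:-1, :-1], 1j * np.eye(N - 1)))  # True
```

Increasing $N$ pushes the artefact further out without changing the identity block, mirroring the fact that the relation holds exactly on the infinite-dimensional space.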
The Wigner representation for a quantum state in the Hilbert space \(L^2(\R^{M})\) is a function defined on the \emph{phase space} \(\R^{2M}\), which can be intuitively understood as a quantum version of the position and momentum phase space of a classical particle. We equip this phase space with the measure \(\dd{\bm{x}} = (2\pi)^{-M}\dd{Leb}\), where \(\dd{Leb}\) is the Lebesgue measure, and with a symplectic form denoted ${\Omega}$. For $\bm x,\bm y \in \R^{2M}$, \begin{equation} \label{eq:symplecticform} \Omega(\bm x,\bm y) := \bm x \cdot J \bm y \quad \text{where} \quad J = \begin{pmatrix} 0 & \Id_M \\ - \Id_M & 0 \end{pmatrix} \end{equation} in a given basis \((\bm{e}_k,\bm{f}_k)_{k=1}^{M}\) of \(\R^{2M}\) (which is therefore a symplectic basis for the phase space). We have that $J^{-1} = J^T = -J$, and $J$ can be seen as a linear map from $\R^{2M}$ to $\R^{2M}$. A \emph{Lagrangian vector subspace} is defined as a maximal isotropic subspace, that is, a maximal subspace on which the symplectic form $\Omega$ vanishes. For a symplectic space of dimension $2M$, Lagrangian subspaces are of dimension $M$. See \cite{Sudarshan1988} for a concise introduction to the symplectic structure of the phase space and \cite{Gosson2006symplectic} for a detailed review. The importance of the symplectic phase space $\R^{2M}$ comes from its relation to the position and momentum operators. To any \(\bm{x} \in \R^{2M}\) we associate a \emph{generalised quadrature} as follows. Assume w.l.o.g. that \(\bm{x} = \sum_{k}a_k \bm{e}_k + \sum_{k}b_k \bm{f}_k\), then putting \begin{equation} \hat{\bm{x}} = \sum_{k=1}^M a_k \hat{q}_k + \sum_{k=1}^{M} b_k \hat{p}_k, \end{equation} where the indices indicate on which qumode each operator acts, it is straightforward to verify, using the canonical commutation relations, that \begin{equation} [\hat{\bm{x}},\hat{\bm{y}}] = i\Omega(\bm{x},\bm{y}) \hat{\Id}.
\end{equation} We can also associate the elements of \(\R^{2M}\) to translations\footnote{The relationship between the maps \(\hat D\) and \(\hat{-}\) amounts to a representation of a Lie group and its Lie algebra. We do not develop this perspective here as we will not need it.}. Firstly, for any \(s \in \R^{M}\), define the \emph{Weyl operators}, acting on \(L^2(\R^{M})\) by \begin{equation} X(s)\psi(x) = \psi(x-s) \quad \text{and} \quad Z(s)\psi(x) = e^{isx} \psi(x). \end{equation} Then, define the \emph{displacement operator} for any \(\bm x = (q,p) \in \R^{2M}\) in the symplectic basis, by \begin{equation} \hat D(\bm{x}) = e^{-i\frac{q \cdot p}{2}} X(q) Z(p), \end{equation} so that the Weyl commutation relation holds, \begin{equation} \hat D(\bm{x})\hat D(\bm{y}) = e^{i \Omega(\bm{x},\bm{y})}\, \hat D(\bm{y})\hat D(\bm{x}). \end{equation} Finally, to any symplectic transformation \(S\) of \(\R^{2M}\), we can associate a unitary operator \(\tau(S)\) such that\footnote{\(\tau\) can also be given a group-theoretical treatment as a projective representation, but once again we do not need any of the related properties, so we skip these details.} \begin{equation}\label{mudef} \tau(S)^*\hat{\bm{x}}\tau(S) = \hat{\bm y}, \end{equation} for all $\bm x\in\R^{2M}$, where $\bm y=S\bm x$. A quadrature operator such as the position operator $\hat q$ is self-adjoint and it can be expanded via the spectral theorem \cite{hall2013quantum} as: \begin{equation} \hat q = \intg{x \in \mathrm{sp}(\hat q)}{x}{P_{\hat q}(x)} \, , \end{equation} where the spectrum of $\hat q$ is $\mathrm{sp}(\hat q) = \R$ and $P_{\hat q}$ is the spectral measure of $\hat q$ \cite[Th. 8.10]{hall2013quantum}. For any Lebesgue measurable \(E \subseteq \R\), $P_{\hat q}(E)$ is given by \cite[Def. 8.8]{hall2013quantum}: \begin{equation} P_{\hat q}(E) \psi = \bm{1}_E \cdot \psi \Mdot \label{eq:position_pvm} \end{equation} Informally, it assigns 1 whenever measuring $\hat q$ yields an outcome that belongs to $E$.
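The algebraic properties of $J$ quoted after Eq.~\eqref{eq:symplecticform} ($J^{-1}=J^T=-J$, antisymmetry of $\Omega$) are easy to confirm numerically (a sketch for $M=2$; all variable names are ours):

```python
import numpy as np

M = 2
J = np.block([[np.zeros((M, M)), np.eye(M)],
              [-np.eye(M), np.zeros((M, M))]])

Omega = lambda x, y: x @ (J @ y)  # symplectic form Omega(x, y) = x . J y

rng = np.random.default_rng(0)
x, y = rng.standard_normal(2 * M), rng.standard_normal(2 * M)

print(np.allclose(J.T, -J),                   # J^T = -J
      np.allclose(np.linalg.inv(J), J.T),     # J^{-1} = J^T
      np.isclose(Omega(x, y), -Omega(y, x)))  # antisymmetry of Omega
```

The same check works for any $M$, since $J$ is block-built from identity blocks.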
We can view $P_{\hat q}(E)$ as the formal version of a projector $\intg{x \in E}{\ket x \bra x}{x}$, with $\ket x$ a (non-normalisable) eigenvector of $\hat q$. Writing \(P_{\hat{\bm{x}}}\) the spectral measure of the self-adjoint operator \(\hat{\bm{x}}\), we have that \begin{equation}\label{Ptau} \hat D(\bm{y})^*P_{\hat{\bm{x}}}(E)\hat D(\bm{y}) = P_{\hat{\bm{x}}+\hat{\bm y}}(E) \quad \text{and} \quad \tau^*(S)P_{\hat{\bm{x}}}(E) \tau(S) = P_{S\hat{\bm{x}}}(E). \end{equation} This allows us to calculate the PVM for the measurement of any such quadrature from Eq.~\eqref{eq:position_pvm}. The spectral decomposition for a displacement can then be obtained by the functional calculus as: \begin{equation} \hat D({\bm{x}}) = \intg{\R}{e^{i\lambda}}{P_{\hat{J\bm x}}(\lambda)}, \label{eq:functionnal_displacement} \end{equation} where the \(J\) in \(P_{\hat{J\bm x}}\) comes from the fact that we use the convention \(X(s) = e^{-is\hat{p}}\) rather than \(e^{is\hat{p}}\). \subsection*{Wigner functions} There are several equivalent ways of defining the Wigner function of a state \cite{ferraro2005gaussian,cahill1969density, de_gosson_wigner_2017}. We follow the conventions adopted in \cite{de_gosson_symplectic_2011} (see in particular Prop. 175). The \emph{characteristic function} \(\Phi_\rho : \R^{2M} \to \C\) of a density operator \(\rho\) on \(L^2(\R^{M})\) is defined as \begin{equation} \Phi_\rho(\bm{x}) \defeq \Tr(\rho \hat D(-\bm{x})). \label{eq:characfunction} \end{equation} This function satisfies the equation \begin{equation} \int_{\bm{x}\in\R^{2M}} |\Phi_\rho(\bm{x})|^2 \dd{\bm{x}} = \Tr(\rho^2) \leqslant 1, \end{equation} so that \(\Phi_\rho \in L^2(\R^{2M})\) \cite{cahill1969density}. This means that its \(L^2\)-Fourier transform is well-defined, and we can define the \emph{Wigner function} \(W_\rho\) of \(\rho\) as \begin{equation} W_\rho(\bm{x}) \defeq \operatorname{FT}[\Phi_\rho](J\bm{x}). 
\label{eq:WignerCharacFunction} \end{equation} The Wigner function is a real-valued square-integrable function on \(\R^{2M}\). Arguably, its key property is that one can recover the probabilities for quadrature measurements from its marginals \cite[Prop. 6.43]{Gosson2006symplectic}: if \(W\) is the Wigner function for a pure state \(\psi \in L^2(\R^{M})\) such that \(W \in L^1(\R^{2M})\), then once again identifying \(\bm{x}\) with \((q,p)\in\R^{2M}\), \begin{equation} \frac{1}{(\sqrt{2\pi})^M} \intg{\R^M}{W(q,p)}{p} = |\psi(q)|^2 \quad \text{and} \quad \frac{1}{(\sqrt{2\pi})^M} \intg{\R^M}{W(q,p)}{ q} = \left|\operatorname{FT}[\psi](p)\right|^2. \end{equation} It follows that, for any Lebesgue measurable \(E \subseteq \R\), \begin{equation}\label{PW2} \Tr(P_{\bm{e}_1}(E)\rho) = \intg{E \times \R^{2M-1}}{W_\rho(\bm{x})}{\bm{x}}. \end{equation} As described in the introduction, when the Wigner function takes only non-negative values, it can be interpreted as a joint probability distribution for position and momentum measurements (and, in general, for any quadrature obtained as a linear combination of these). The Wigner representation has many other fascinating properties, but we only need one more in this paper: if \(S\) is a symplectic orthogonal transformation, \begin{equation} W_{\tau(S)\rho\tau^*(S)}(\bm x) = W_{\rho}(S \bm x) \Mdot \end{equation} This equation is derived in Appendix~\ref{app:relations} (see Eq.~\eqref{eq:WigSymplTrans}). \section{Measurement scenario under consideration} \label{sec:measurementscenario} As advertised by its title, the main contribution of the present paper is a formal proof that Wigner negativity is equivalent to contextuality with respect to continuous-variable measurements. In that respect, we carefully and precisely derive the measurement scenario under consideration.
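Before doing so, the two key facts used below, negativity of the Wigner function for non-Gaussian states and the recovery of quadrature statistics from marginals, can be illustrated numerically. The sketch uses the common single-mode convention $W_\psi(q,p)=\frac{1}{\pi}\int \overline{\psi(q+y)}\,\psi(q-y)\,e^{2ipy}\,\mathrm{d}y$, whose prefactors may differ from the measure convention adopted above:

```python
import numpy as np

y = np.linspace(-8.0, 8.0, 4001)
dy = y[1] - y[0]

psi0 = lambda x: np.pi ** -0.25 * np.exp(-x**2 / 2)                    # vacuum
psi1 = lambda x: np.sqrt(2) * np.pi ** -0.25 * x * np.exp(-x**2 / 2)   # Fock |1>

def wigner(psi, q, p):
    """W(q,p) = (1/pi) Int dy conj(psi(q+y)) psi(q-y) exp(2ipy)."""
    f = np.conj(psi(q + y)) * psi(q - y) * np.exp(2j * p * y)
    return float(np.real(np.sum(f) * dy / np.pi))

print(wigner(psi0, 0.0, 0.0))  # ~ +1/pi: Gaussian, nonnegative everywhere
print(wigner(psi1, 0.0, 0.0))  # ~ -1/pi: Wigner-negative at the origin

# The p-marginal recovers the position distribution |psi(q)|^2:
p_grid = np.linspace(-8.0, 8.0, 801)
marg = sum(wigner(psi1, 0.7, p) for p in p_grid) * (p_grid[1] - p_grid[0])
print(marg, abs(psi1(0.7)) ** 2)
```

The vacuum gives $W(0,0)=1/\pi$ while the one-photon Fock state gives $-1/\pi$, the textbook example of Wigner negativity; the marginal over $p$ at $q=0.7$ reproduces $|\psi_1(0.7)|^2$ to numerical precision.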
Our result generalises previously known results \cite{howard2014contextuality,delfosse2017equivalence,delfosse2015wigner,raussendorf2017contextuality} as it applies to discrete-variable systems as well as continuous-variable systems. In \cite{bertrand1987tomographic,blass2020negative} it is proven that the Wigner function is the unique phase-space quasiprobability distribution yielding the correct marginals for every quadrature measurement. However, the link to contextuality remains unclear as it is delicate to exhibit the right measurement scenario \cite{Banaszek1998}. In fact, generalising the discrete-variable approach to the sheaf-theoretic framework for continuous-variable contextuality, one runs into several problems of a functional-analytic nature. For example, since we only assume access to measurable properties of the quantum system, the results of \cite{blass2020negative} are of no use since they implicitly assume point-wise equality of marginals whereas we can only guarantee equality almost everywhere. As described in section~\ref{sec:wigner}, the Wigner function representing the total system is a real-valued quasiprobability distribution $\R^{2M} \to \R$. It was already established in \cite{albini2009quantum} that the Wigner function must be at least an $L^1$-function for the generalised quadrature measurements to provide a good characterisation. We assume only minimal structure for our model. In fact, assuming only the (pre-)sheaf structure for a set of measurements on quantum systems, we prove that there is a natural hidden-variable model which relates to the Wigner function on phase space. The proof strategy is to use the extendability property of a noncontextual empirical model from Definition~\ref{def:nc} to exhibit a global probability measure on global value assignments. Then we show that it corresponds to the Wigner function. This step is delicate since it requires careful functional-analytic considerations.
Having achieved this, we conclude that the Wigner function must be everywhere nonnegative since it corresponds to a global probability measure. For the other direction, a nonnegative Wigner function can be used directly as a hidden-variable model so that the corresponding empirical model is noncontextual \cite{Son2009positive}. \subsection*{Measurement scenario} The Wigner function bears a close relationship to displacement operators as emphasised via its link with the characteristic function in Eq.~\eqref{eq:WignerCharacFunction}. Wigner negativity will be shown to be equivalent to contextuality with respect to Pauli measurements in a similar spirit to \cite{howard2014contextuality,delfosse2017equivalence}. Following Definition~\ref{def:measurementscenario}, this measurement scenario corresponds to the setting described below. \begin{definition} \label{def:measscen} We fix the measurement scenario $\tuple{\Xc,\Mc,\bm \Oc}$ as follows: \begin{itemize} \item the set of measurement labels is $\Xc \defeq \R^{2M}$ with the symplectic structure described in section~\ref{sec:wigner}; \item the maximal contexts are Lagrangian subspaces of $\R^{2M}$ so that the set of maximal contexts $\Mc$ is the Lagrangian Grassmannian of $\Xc$; \item for each $\bm x \in \Xc$, $\bm \Oc_{\bm x} \defeq \langle \R,\Bc_\R \rangle$ so that for any set of measurement labels $U \subseteq \Xc$, $\Oc_U \cong \R^U$ can be seen as the set of functions from $U$ to $\R$ with its product $\sigma$-algebra $\Fc_U$\footnotemark. \footnotetext{It is generated by the collection of sets $E$ of functions from $U$ to $\R$ such that $\pi_{\bm x}(E)$ is a real interval for a finite number of $\bm x \in U$ and $\R$ for the rest, where $\pi_{\bm x}$ is the projection given later in Eq.~\eqref{eq:catprojection}.
} \end{itemize} \end{definition} \noindent Each $\bm x = (q_1,\dots,q_M,p_1,\dots,p_M) \in \Xc$ specifies a point in phase-space which corresponds to measuring the associated displacement operator $\hat D(\bm x) = \hat D(q_1,p_1) \otimes \dots \otimes \hat D(q_M,p_M)$. Since a displacement operator is not self-adjoint (\ie Hermitian), we detail in Appendix~\ref{app:measDisp} how measuring displacement operators relates to quadrature measurements to better understand how our result generalises \cite{delfosse2017equivalence}. \subsection*{Corresponding experimental setup} The measurement scenario detailed above requires measuring any linear combination of multimode quadratures, \eg $\hat q_1 + 2 \hat p_{2\theta} + 5 \hat q_{M\alpha}$ for arbitrary angles $\theta, \alpha$. To do so in practice, we first apply phase-shift operators $\hat R(\theta)$ to each individual qumode to obtain the right rotated quadratures for each mode. Then we apply CZ gates of the form $e^{i g \hat q_k \hat q_l}$ for $g \in \R$ to pairs of qumodes $k$ and $l$ to sum them. This permits the construction of the desired linear combination, which is stored in one quadrature of a qumode. It remains to measure it. This can easily be implemented with standard homodyne detection, which consists of a Gaussian measurement of a quadrature of the field, by mixing the state on a beam splitter with a strong coherent state. Then, the intensities of both output arms are measured with photodiode detectors. Their difference yields a value proportional to a quadrature of the input qumode, which can be rotated depending on the phase of the local oscillator. The POVM elements for homodyne detection are given by $\ket{x}_{\phi}\bra{x}$ for all $x \in \R$ where $\ket x_\phi$ is the eigenstate of the rotated quadrature operator $\hat q_\phi$ with eigenvalue $x$. This is represented in Figure~\ref{fig:homodyne}. All of these steps can be implemented experimentally \cite{ferraro2005gaussian,su2013gate}.
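As a sanity check of this circuit picture, one can track a Gaussian state through a phase shift and a CZ gate at the level of covariance matrices. The sketch below is a minimal illustration only: the quadrature ordering $(q_1, q_2, p_1, p_2)$, the vacuum covariance $\tfrac{1}{2}I$ (with $\hbar = 1$), and the gate parameters are conventions and choices of this example rather than prescriptions from the text.

```python
import numpy as np

# Conventions assumed for this sketch: ordering (q1, q2, p1, p2),
# hbar = 1, vacuum covariance I/2; gate parameters are arbitrary choices.
Omega = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-np.eye(2), np.zeros((2, 2))]])

def phase_shift(theta):
    """Symplectic matrix of a phase shift acting on qumode 1 only."""
    c, s = np.cos(theta), np.sin(theta)
    S = np.eye(4)
    S[np.ix_([0, 2], [0, 2])] = [[c, s], [-s, c]]
    return S

def cz(g):
    """Symplectic matrix of exp(i g q1 q2): p1 -> p1 + g q2, p2 -> p2 + g q1."""
    S = np.eye(4)
    S[2, 1] = g
    S[3, 0] = g
    return S

S = cz(2.0) @ phase_shift(0.3)
assert np.allclose(S @ Omega @ S.T, Omega)   # the circuit is symplectic

sigma_in = 0.5 * np.eye(4)                   # two-qumode vacuum
sigma_out = S @ sigma_in @ S.T
var_p1 = sigma_out[2, 2]                     # variance read out by homodyning p1

# Homodyne statistics can be mimicked by sampling the (Gaussian) Wigner function:
rng = np.random.default_rng(0)
samples = rng.multivariate_normal(np.zeros(4), sigma_out, size=200_000)
```

For the vacuum input, the homodyne variance on $p_1$ after the CZ gate is $\tfrac{1}{2}(1 + g^2)$ in this convention, independently of the phase shift, since the vacuum is rotation invariant.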
\begin{figure}[ht!] \centering \input{tikz/homodyne} \caption{Experimental protocol corresponding to the measurement scenario detailed above. It permits measurement of any linear combination of quadratures. After phase-shift operations on individual qumodes and CZ gates on pairs of qumodes, homodyne detection on one qumode of the state is implemented. The dashed line represents a balanced beamsplitter. The local oscillator (LO) is a strong coherent state. At the end of each arm are photodiode detectors. The difference in the intensities yields a value proportional to a quadrature of the input qumode, which can be rotated depending on the phase of the local oscillator. } \label{fig:homodyne} \end{figure} \subsection*{Maximal contexts} This measurement scenario is to be interpreted as follows. The measurement corresponding to the label $\bm r \in \Xc$ is described by the spectral measure $P_{\bm {\hat r}}$ for the quadrature corresponding to $\bm r$. A pair of spectral measures of self-adjoint operators is compatible, in the sense that they admit a joint spectral measure, if and only if they commute, which in turn is true if and only if the operators themselves commute \cite{hall2013quantum}. In the case of our measurement scenario, two spectral measures associated to $\bm x,\bm y\in \Xc$ commute if and only if $\Omega(\bm x,\bm y)=0$. Thus, measurement labels are compatible exactly when they both belong to some isotropic subspace of $\Xc$. The maximal isotropic subspaces are the Lagrangian subspaces, so each context $L \in \Mc$ corresponds to a Lagrangian subspace. Hence, $\Mc$ must be the collection of Lagrangian subspaces of \(\Xc\), also known as the Lagrangian Grassmannian of $\Xc$. \section{Admissible empirical models} \label{sec:empiricalmodels} In this section, we detail which empirical models may arise from the measurement scenario we detailed above.
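Before doing so, the compatibility criterion of the previous subsection can be seen in action numerically. The sketch below is illustrative only: the Fock-space truncation, the amplitudes, and the convention $\alpha = (q + ip)/\sqrt{2}$ relating complex amplitudes to phase-space labels are assumptions of this example. It checks the Weyl relation for truncated displacement operators, whose commutation phase is governed precisely by the symplectic form of the labels:

```python
import numpy as np

# Truncated Fock space; the dimension and amplitudes are arbitrary choices,
# and the truncation makes all identities approximate (here to high accuracy).
N = 80
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)             # annihilation operator

def displacement(alpha):
    """D(alpha) = exp(alpha a^dag - conj(alpha) a), via a Hermitian eigendecomposition."""
    H = 1j * (alpha * a.conj().T - np.conj(alpha) * a)   # Hermitian generator
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w)) @ V.conj().T            # exp(-i H) = exp(alpha a^dag - conj(alpha) a)

alpha, beta = 0.3 + 0.2j, -0.1 + 0.4j
Da, Db = displacement(alpha), displacement(beta)

# Weyl relation: D(alpha) D(beta) = exp(2i Im(alpha conj(beta))) D(beta) D(alpha),
# so the two operators commute iff Im(alpha conj(beta)) vanishes -- i.e. iff the
# symplectic form of the corresponding phase-space labels is zero.
phase = np.exp(2j * np.imag(alpha * np.conj(beta)))
vac = np.zeros(N, dtype=complex)
vac[0] = 1.0
```

Acting on the vacuum keeps the truncation error negligible; the known overlap $\langle 0|\hat D(\alpha)|0\rangle = e^{-|\alpha|^2/2}$ serves as an extra consistency check.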
For each context $L \in \Mc$, the set $\Oc_L = \prod_{\bm x \in L} \R$ can be seen as the set of functions from $L$ to $\R$ with the corresponding product $\sigma$-algebra. For $\bm x \in L$, the associated projections are: \begin{equation} \begin{aligned} \pi_{\bm x} : \Oc_L &\longrightarrow \R \\ f &\longmapsto f(\bm x) \Mdot \end{aligned} \label{eq:catprojection} \end{equation} We are interested in experiments \textit{arising from quadrature measurements of a quantum system}. We thus restrict our attention to empirical models $e = (e_L)_{L \in \Mc}$ which satisfy the Born rule, \ie there exists some quantum state $\rho \in \Dc(\mathscr{H})$ such that for all contexts $L \in \Mc$ and measurable sets $U \in \Fc_L$: \begin{equation} e_L(U) = \Tr\left(\rho \prod_{\bm x \in L} P_{\bm{\hat{x}} } \circ \pi_{\bm x} (U) \right) \Mdot \end{equation} We will therefore use the notation $e^{\rho} = (e_L^\rho)_{L \in \Mc}$ to make explicit the dependence on $\rho$. Because of the compatibility condition, we may unambiguously write, for each $\bm x \in \Xc$ and for each $U \in \Fc_{\bm x}$ (we write $\Fc_{\bm x}$ for simplicity though we mean $\Fc_{\left\{\bm x\right\}}$): \begin{equation} \label{eq:empiricalx} e^{\rho}_{\bm x}(U) = \Tr\left(\rho P_{\bm{\hat{x}} } \circ \pi_{\bm x} (U) \right) \Mdot \end{equation} This comes from the marginalisation $e^{\rho}_L|_{\bm x}$ for each $L \in \Mc$ such that $\bm x \in L$. At this stage, there is a mismatch: the Wigner function is a quasiprobability distribution over $\Xc=\R^{2M}$, while the extendability property of a noncontextual empirical model in the measurement scenario presented above provides a global probability measure on $\Oc_\Xc$, which can be seen as the set of functions $\Xc \to \R$. In general, the latter is much larger than the former. To solve this issue, we show that we can restrict to linear value assignments so that $\Oc_\Xc$ can be taken as $\Xc^*$, the linear dual of $\Xc$.
Because $\Xc^*$ is isomorphic to $\Xc$, this allows us to resolve the mismatch between the Wigner function and the global probability measure corresponding to a noncontextual empirical behaviour. We first show that we can restrict to linear value assignments on Lagrangian subspaces in Lemma~\ref{lemma:outcomes_linear}. We do so by showing that the empirical model assigns mass only to the linear functions $L \to \R$. We then lift this property to global value assignments in Proposition~\ref{prop:linearglobal} in the same spirit as \cite{delfosse2017equivalence}. \begin{lemma} Let $L \subseteq \Xc$ be a Lagrangian subspace. Let $U \in \Fc_L$ be a measurable set of functions $L \to \R$ such that $\pi_{\bm x}(U)$ is distinct from $\R$ for a finite number of $\bm x \in L$. Then there exists a subset $U_{\mathrm{lin}}$ of linear functions $L \to \R$ such that for all $\bm x \in L$, $\pi_{\bm x}(U_{\mathrm{lin}}) \subseteq \pi_{\bm x}(U)$, and \begin{equation} e_L^\rho(U_{\mathrm{lin}}) = e_L^\rho(U) \Mdot \end{equation} \label{lemma:outcomes_linear} \end{lemma} \begin{proof} First let $(\bm e_k)_{k=1,\dots,M}$ be a basis of $L \cong \R^M$. Let $P$ be the joint spectral measure of $\{P_{\bm{ \hat e_1}},\dots,P_{\bm{ \hat e_M}}\}$. For any $ \bm y \in L$, define the function \begin{equation} \begin{aligned} f_{\bm y} : L & \longrightarrow \R \\ \bm x & \longmapsto \bm x \cdot \bm y \, , \end{aligned} \end{equation} where $\dummy \cdot \dummy $ is the usual Euclidean scalar product on $L \cong \R^M$. For any $\bm x \in L$, $P_{\bm {\hat x}}$ is the push-forward of $P$ by the measurable function $f_{\bm x}$ by definition of the functional calculus on $M$ variables. Recall that for $\bm x \in L$, $\pi_{\bm x}(U) = \left\{f(\bm x) \mid f \in U \right\} \subseteq \R$.
\linebreak Then, \begin{align} \Tr\left(\rho \prod_{\bm x \in L} P_{\bm {\hat x}} \circ \pi_{\bm x}(U)\right) &= \Tr\left(\rho \prod_{\bm x \in L} P \left(f_{\bm x}^{-1}\left( \pi_{\bm x}(U)\right)\right)\right) \\ &= \Tr\left(\rho P\left(\bigcap_{\bm x \in L} f_{\bm x}^{-1}\left(\pi_{\bm x}(U)\right)\right) \right) \end{align} with \begin{align} \bigcap_{\bm x \in L} f_{\bm x}^{-1}(\pi_{\bm x}(U)) = \left\{ \bm y \in L \mid \forall \bm x \in L, \, \bm x \cdot \bm y \in \pi_{\bm x}(U) \right\}. \end{align} Now define \begin{equation} U_{\mathrm{lin}} \defeq \left\{ \begin{aligned} L &\longrightarrow \R \\ \bm x &\longmapsto \bm x \cdot \bm y \end{aligned} \bigm\vert \bm y \in \bigcap_{\bm x \in L} f_{\bm x}^{-1}(\pi_{\bm x}(U)) \right\} \Mdot \end{equation} By construction, \begin{align} \bigcap_{\bm x \in L} f_{\bm x}^{-1}&(\pi_{\bm x}(U_{\mathrm{lin}})) \\ &= \bigcap_{\bm x \in L} f_{\bm x}^{-1} \left( \left\{ \bm x \cdot \bm y \mid \bm y \in L \text{ s.t. } \forall \bm z \in L, \, \bm y \cdot \bm z \in \pi_{\bm z}(U) \right\} \right)\\ &= \bigcap_{\bm x \in L} \left\{ \bm \alpha \in L \mid \bm x \cdot \bm \alpha = \bm x \cdot \bm y \text{ with } \bm y \in L \text{ s.t. } \forall \bm z \in L, \, \bm y \cdot \bm z \in \pi_{\bm z}(U) \right\} \\ &= \left\{ \bm \alpha \in L \mid \forall \bm x \in L, \, \bm x \cdot \bm \alpha = \bm x \cdot \bm y \text{ with } \bm y \in L \text{ s.t. } \forall \bm z \in L, \, \bm y \cdot \bm z \in \pi_{\bm z}(U) \right\} \\ &= \left\{ \bm \alpha \in L \mid \forall \bm z \in L, \, \bm \alpha \cdot \bm z \in \pi_{\bm z}(U) \right\} \label{eq:equality}\\ &= \bigcap_{\bm x \in L} f_{\bm x}^{-1}(\pi_{\bm x}(U))\, , \end{align} where Eq.~\eqref{eq:equality} follows from the fact that $\forall \bm x \in L$, $\bm x \cdot \bm \alpha = \bm x \cdot \bm y$ implies $\bm \alpha = \bm y$.
Also for all $\bm x \in L$, $\pi_{\bm x}(U_{\mathrm{lin}}) \subseteq \pi_{\bm x}(U)$ so that we are indeed reproducing all value assignments from linear functions of $U$. Then, as claimed, \begin{align} e_L^\rho (U_{\mathrm{lin}}) &= \Tr\left(\rho \prod_{\bm x \in L} P_{\bm {\hat x}} \circ \pi_{\bm x}(U_{\mathrm{lin}})\right) \\ &= \Tr\left(\rho \prod_{\bm x \in L} P \left( f_{\bm x}^{-1} \left( \pi_{\bm x}(U_{\mathrm{lin}})\right)\right) \right)\\ &= \Tr\left(\rho P\left(\bigcap_{\bm x \in L} f_{\bm x}^{-1}\left( \pi_{\bm x}(U_{\mathrm{lin}}) \right)\right)\right) \\ &= \Tr\left(\rho P\left(\bigcap_{\bm x \in L} f_{\bm x}^{-1}\left(\pi_{\bm x}(U)\right)\right)\right) \\ &= \Tr\left(\rho \prod_{\bm x \in L} P_{\bm {\hat x}}\left(\pi_{\bm x}(U)\right)\right)\\ &= e_L^\rho (U)\Mdot \end{align} \end{proof} Now we prove that the set of global value assignments can be identified with $\Xc^*$, the linear dual space of $\Xc$. \begin{proposition} If $M \geqslant 2$ (\ie for at least two qumodes), global value assignments are linear functions $\Xc \to \R$, and the set of global value assignments forms an $\R$-linear space of dimension $2M$, namely $\mathscr{E}(\Xc) = \Xc^*$. \label{prop:linearglobal} \end{proposition} \begin{proof} The sheaf-theoretic framework for contextuality describes value assignments as a sheaf $\mathscr{E}$ where $\mathscr{E}(U)$ is the set of value assignments for the measurement labels in $U$, which can be viewed as a set of functions $U \to \R$. For any Lagrangian $L \in \Mc$, there is a restriction map $\rho_L^\Xc : \mathscr{E}(\Xc) \to \mathscr{E}(L) : f \mapsto f|_L$ that simply restricts the domain of any function from $\Xc$ to $L$. Then $\mathscr{E}(L)$ must coincide with the set of possible value assignments $\Oc_L$. By Lemma~\ref{lemma:outcomes_linear}, $\mathscr{E}(L)$ consists of linear functions $L \to \R$ so that $\mathscr{E}(\Xc)$ contains only functions $\Xc \to \R$ whose restriction to any Lagrangian subspace is $\R$-linear.
Then, following \cite[Lemma 1]{delfosse2017equivalence} (the lemma is proven for the discrete phase-space $\Z_d^M \times \Z_d^M$ but its proof extends directly to $\R^M \times \R^M$), we conclude that if $M \geqslant 2$, $\mathscr{E}(\Xc)$ contains only $\R$-linear functions $\Xc \to \R$, \ie $\mathscr{E}(\Xc) = \Xc^*$. \end{proof} Therefore, without loss of generality, for any $U \subset \Xc$, we can restrict $\Oc_U$ to be the set of functions from $U$ to $\R$ that extend to \(\R\)-linear functions on the linear space generated by \(U\). Thus, an empirical model will be a collection of probability measures on $L^*$ for each $L \in \Mc$. For a noncontextual empirical model $e^\rho$, the extendability property yields a global probability measure on $\Xc^*$ that we will identify with the Wigner function of $\rho$ in the following section. \section{Equivalence between Wigner negativity and contextuality} \label{sec03:equivalence} We are now ready to tackle the main proof that Wigner negativity is equivalent to contextuality in our measurement scenario. We prove it by essentially identifying the Wigner function with the probability density of a carefully constructed hidden-variable model. We first set up the hidden-variable model via Proposition~\ref{proposition:density} and we prove the equivalence in Theorem~\ref{th:equivalence}. Crucially, for the identification with the Wigner function, we need to ensure that the hidden-variable model is realisable by a probability measure over hidden variables that has a density. We further require that this density is an $L^1$ function. \begin{proposition} If an empirical model $e^\rho$ for the continuous-variable measurement scenario in Definition~\ref{def:measscen} is noncontextual, then $e^\rho$ admits a realisation by a deterministic hidden-variable model $\tuple{\Xc,\mu_r,(k_L)_{L \in \Mc}}$ such that $\mu_r$ has density $w_\mu \in L^1(\Xc)$ with respect to the Lebesgue measure.
\label{proposition:density} \end{proposition} \begin{proof} By the extension of the FAB theorem (see Corollary~\ref{coro:HVM}) and Lemma~\ref{lemma:outcomes_linear}, $e^\rho$ is realised by a canonical hidden-variable model (HVM) $(\bfLambda,\mu,k)$ (see Definition~\ref{def:hvmodel}), for which \begin{itemize} \item $\bfLambda = \bm \Oc_\Xc = \bm \Xc^*$ \ie hidden variables are linear value assignments; \item $\mu$ is a probability measure on $\Xc^*$; \item each probability kernel $k_L : \Xc^* \to L^*$ is deterministic and factorisable. \end{itemize} In the same spirit as the Riesz representation theorem \cite{riesz1909operations}, we pick the following natural isomorphism constructed with the scalar product to identify elements from $\Xc$ to elements from $\Xc^*$: \begin{equation} \begin{aligned} \alpha : \Xc &\longrightarrow \Xc^* \\ \bm x &\longmapsto \bm x \cdot - \Mdot \end{aligned} \end{equation} This will be essential to take the hidden-variable space to be $\Xc$ rather than $\Xc^*$. For all $L\in \Mc$, let \begin{equation} \begin{aligned} \tilde k_L: \Xc \times \Fc_L & \longrightarrow \R \\ (\bm x,U) & \longmapsto k_L( \alpha(\bm x),U) = \delta_{\alpha(\bm x)|_L}(U) \Mdot \end{aligned} \end{equation} Note that for both $(\tilde k_L)_{L \in \Mc}$ and $(k_L)_L$ we can unambiguously write $\tilde k_{\bm x}$ and $k_{\bm x}$ for a measurement label $\bm x \in \Xc$ because of the compatibility condition. Fix $\bm x \in \Xc$. For $E \in \Bc(\R)$, let \begin{equation} \label{eq:prho} p^\rho_{\bm x}(E) \defeq \Tr \left( \rho P_{\bm{\hat x}} (E) \right) \Mdot \end{equation} Fix $U \in \Fc_{\bm x}$ (a measurable set of functions on $\{\bm x\}$ that extends to a set of linear functions on the subspace generated by $\bm x$). We first evaluate $p^\rho_{\bm x} \circ \pi_{\bm x} (U) =e^\rho_{\bm x}(U)$ (see Eq.~\eqref{eq:empiricalx}) with the HVM above (see Eq.~\eqref{eq:eval_HVM_emp}). 
\begin{align} p^\rho_{\bm x} \left(\pi_{\bm x}(U) \right) &= e^\rho_{\bm x}(U) \\ &= \intg{\bfLambda}{k_{\bm x}(\dummy,U)}{\mu} \\ &= \intg{f \in \Xc^*}{k_{\bm x}(f,U)}{\mu(f)} \\ &= \intg{f \in \Xc^*}{\delta_{f|_{\{\bm x\}}}(U)} {\mu(f)} \\ &= \intg{f \in \Xc^*}{\delta_{f(\bm x)}(\pi_{\bm x}(U))}{\mu(f)} \\ &= \intg{f \in \Xc^*}{\delta_{\alpha^{-1}(f) \cdot \bm x}(\pi_{\bm x}(U))} {\mu(f)} \\ &= \intg{\bm y \in \Xc}{\delta_{\bm y \cdot \bm x}(\pi_{\bm x}(U))} {\mu \circ \alpha (\bm y)} \\ &= \intg{\bm y \in (- \cdot \bm x)^{-1} (\pi_{\bm x}(U))}{}{\mu \circ \alpha (\bm y)} \end{align} Thus $p^\rho_{\bm x}$ is the push-forward of the measure $\mu \circ \alpha$ on $\Xc$ by the linear functional $(- \cdot \bm x)$. By the Lebesgue decomposition theorem \cite{billingsley2008probability}, there is a decomposition $\mu \circ \alpha = \mu_r + \mu_s$ where $\mu_r$ is absolutely continuous with respect to the Lebesgue measure $\mathrm{d} \bm x$ on $\Xc$ and $\mu_s$ is singular with respect to $\mathrm{d}\bm x$. It follows that, for any $\bm x \in \Xc$, since $A = (\bm x\cdot-)^{-1}(E)$ has non-zero $\mathrm{d} \bm x$-measure for any Borel-measurable $E \subseteq \R$ of non-zero Lebesgue measure, \begin{equation} \mu_r (A) = \mu \circ \alpha (A) - \mu_s (A) = \mu \circ \alpha(A) = p^\rho_{\bm x}(E). \label{eq:pushforward} \end{equation} Then $(\bm \Xc,\mu_r,(\tilde k_L)_L)$ is a deterministic hidden-variable model for the empirical model $e^\rho$. By the Radon--Nikodym theorem \cite{Nikodym1930}, $\mu_r$ has a density $w_\mu$ with respect to the Lebesgue measure $\mathrm d \bm x$ on $\Xc$. Since $\mu_r$ is a probability measure, $w_\mu \in L^1(\Xc)$. \end{proof} The main result follows from identifying $w_\mu$ (Proposition~\ref{proposition:density}) and the Wigner function $W_\rho$ as $L^1(\Xc)$ functions: \begin{theorem} Assume $\rho$ is a density operator such that its Wigner function $W_\rho \in L^1(\Xc)$ with respect to the Lebesgue measure.
Let $e^\rho$ be an empirical model on the measurement scenario in Definition~\ref{def:measscen} for $\rho$ according to the Born rule, \ie for any $L \in \Mc$, for $U \in \Fc_L$, $e^\rho_L(U) = \Tr(\rho \prod_{\bm x \in L} P_{\bm{\hat x}} \circ \pi_{\bm x}(U))$. Then $e^\rho$ is noncontextual if and only if the Wigner function $W_\rho$ of $\rho$ is nonnegative, and in that case $W_\rho$ describes a hidden variable model for $e^\rho$. \label{th:equivalence} \end{theorem} \begin{proof} ($\implies$) The result holds by identifying the characteristic function of $\rho$, denoted $\Phi_\rho$ (see Eq.~\eqref{eq:characfunction}), with the Wigner function and with the density $w_\mu$ from Proposition~\ref{proposition:density}. For $\bm x \in \Xc$, we have that $\Phi_\rho(\bm x) = \text{FT}^{-1} \left[W_\rho \right](-J\bm x)$ by taking the inverse Fourier transform of Eq.~\eqref{eq:WignerCharacFunction}. On the other hand, fix a noncontextual empirical model $e^\rho$ satisfying the Born rule associated to $\rho$ and the measurement scenario in Definition~\ref{def:measscen}. By Proposition~\ref{proposition:density} we have: \begin{align} \Phi_\rho(\bm x) &= \Tr\left(\hat D(-\bm x)\rho\right) \\ &= \Tr\left(\rho \intg{\lambda\in\R}{e^{-i\lambda}}{P_{\hat{J\bm x}}(\lambda)}\right) \\ &= \intg{\R} {e^{-i\lambda}} {p^\rho_{J\bm{x}}(\lambda)} \\ &= \intg{\Xc} {e^{-i J\bm x \cdot \bm y}}{\mu_r (\bm y)} \\ &= \intg{\Xc} {e^{-i J\bm x \cdot \bm y} w_\mu(\bm y)} {\bm y} \\ &= \text{FT}^{-1} [w_\mu](-J\bm x).
\end{align} where the second line comes from the spectral theorem in Eq.~\eqref{eq:functionnal_displacement}; the third line via Eq.~\eqref{eq:prho} and the fact that the integral and the trace may be interchanged by the definition of the integral with respect to the spectral measure \cite{hall2013quantum}; the fourth line via the push-forward operation in Eq.~\eqref{eq:pushforward}; and the last two lines come from Proposition~\ref{proposition:density} and the definition of the Fourier transform. As a result, for all \(\bm x \in \Xc\), $\text{FT}^{-1}[w_\mu](\bm x) = \text{FT}^{-1}[W_\rho](\bm x)$ and since $w_\mu$ and $W_\rho$ are both in $L^1(\Xc)$, it must hold that $w_\mu = W_\rho$ $\dd{\bm{x}}$-almost everywhere \cite[Corollary 7.1]{folland2009fourier}. $w_\mu$ is the density function of a probability measure, so it follows that both functions must be almost everywhere non-negative. Because the Wigner function is a continuous function from $\Xc$ to $\R$ \cite{cahill1969density}, $W_\rho$ must be non-negative everywhere. ($\impliedby$) Conversely, the Wigner function provides the correct marginals for the quadratures, and can be seen as a global probability density on phase space when it is nonnegative. Via the equivalence demonstrated in the first part of the proof (namely that the density of $\mu_r$ is almost everywhere the Wigner function), the idea is to show that $(\bm \Xc,W_{\rho} \dd{\bm x},(\tilde k_L)_L)$ is a valid deterministic HVM that reproduces the empirical predictions. We thus have to show that Eq.~\eqref{eq:eval_HVM_emp} holds for $W_\rho$. For any \(\bm x \in \Xc\), there is a special orthogonal and symplectic transformation \(S\) such that \(\bm x = \|\bm x\| S\bm e_1\).
For any \(U \in \Fc_{\bm{x}}\), \begin{align} e_{\bm{x}}^\rho(U) &= \Tr(P_{\hat{\bm{x}}}\circ\pi_{\bm{x}}(U)\rho) \\ &= \Tr(\tau(S)^*P_{\|\bm x\| \hat{\bm{e}_1}}\circ\pi_{\bm{x}}(U)\tau(S)\rho) \\ &= \Tr(P_{\hat{\bm e_1}}(\|\bm x\|^{-1} \pi_{\bm{x}}(U))\tau(S)\rho\tau(S)^*) \\ & = \intg{\left( \|\bm x\|^{-1} \pi_{\bm{x}}(U) \right) \times \R^{2M-1}}{W_{\tau(S)\rho\tau(S)^*}(\bm{z})}{\bm{z}} \\ & = \intg{\|\bm x\|^{-1} \pi_{\bm{x}}(U) \times \R^{2M-1}}{W_\rho(S \bm{z})}{\bm{z}} \\ & = \intg{(\bm e_1 \cdot -)^{-1}(\|\bm x\|^{-1} \cdot \pi_{\bm{x}}(U))}{W_\rho(S \bm{z})}{\bm{z}} \\ & = \intg{(\|\bm x\| \bm e_1 \cdot S^{-1}-)^{-1}(\pi_{\bm{x}}(U))}{W_\rho(\bm{z})}{\bm{z}} \label{eq:changeofvariablesJac}\\ & = \intg{(\|\bm x\| S\bm e_1 \cdot -)^{-1}(\pi_{\bm{x}}(U))}{W_\rho(\bm{z})}{\bm{z}} \\ & = \intg{(\bm x \cdot -)^{-1}(\pi_{\bm{x}}(U))}{W_\rho(\bm{z})}{\bm{z}}, \end{align} where we used Eqs.~\eqref{mudef},~\eqref{Ptau},~\eqref{PW2},~\eqref{eq:WigSymplTrans} and the fact that the Jacobian change of variable is 1 in Eq.~\eqref{eq:changeofvariablesJac}. As expected with respect to Eq.~\eqref{eq:eval_HVM_emp}: \begin{align} \intg{\bm z \in \Xc}{\tilde k_x(\bm z,U) W_{\rho}(\bm z)}{\bm z} &= \intg{\bm z \in \Xc}{\delta_{\alpha(\bm z)|_{\{\bm x\}}}(U) W_{\rho}(\bm z)}{\bm z} \\ &= \intg{\bm z \in \Xc}{\delta_{(\bm z \cdot \dummy)|_{\{ \bm x \}}}(U) W_{\rho}(\bm z)}{\bm z}. 
\end{align} $U$ only consists of functions from $\{\bm x\}$ to $\R$ that extend to linear functions on the subspace generated by $\bm x$, so: \begin{align} \{\bm z \in \Xc \mid (\bm z \cdot \dummy)|_{\{\bm x\}} \in U\} &= \{\bm z \in \Xc \mid (\bm z\cdot \bm x) \in \pi_{\bm x}(U) \} \\ &= (\dummy \cdot \bm x)^{-1}(\pi_{\bm x}(U)) \label{eq:cov}\Mdot \end{align} Thus: \begin{align} \intg{\bm z \in \Xc}{\tilde k_x(\bm z,U) W_{\rho}(\bm z)}{ \bm z} &= \intg{(\dummy \cdot \bm x)^{-1}(\pi_{\bm x}(U))}{ W_{\rho}(\bm z) }{\bm z}\\ &= e_{\bm{x}}^\rho(U) \Mdot \end{align} The same computation can be carried out for $e^\rho_L(U)$ for a Lagrangian $L$ and $U \in \Fc_L$ to retrieve the joint probability distributions from the hidden variable model $(\bm \Xc,W_{\rho} \dd{\bm x},(\tilde k_L)_L)$. \end{proof} \section{Discussion and open problems} \label{sec:conclusion} We have shown that Wigner negativity is equivalent to contextuality with respect to continuous-variable generalised Pauli measurements, which may be realised using homodyne detection, a standard detection method in continuous variables \cite{yokoyama2013ultra}, and the basis of several computational models in continuous-variable quantum information \cite{DouceCVIQP2017,Chabaud2017hom,BShomodyne2017,GKP2001}. From a practical perspective, this implies that contextuality is a necessary resource for achieving a computational advantage within the standard model of continuous-variable quantum computation~\cite{lloyd1999quantum}. As in the discrete-variable case \cite{howard2014contextuality}, continuous-variable contextuality supplies the necessary ingredients for continuous-variable quantum computing. From a foundational perspective, the failure of local hidden-variable models to describe quantum-mechanical predictions, as highlighted by Bell regarding the EPR paradox, is very closely related to the impossibility of a non-negative phase-space distribution as described by Wigner.
Hence, our result implies that the negativity of phase-space distributions can be cast as an obstruction to the existence of a noncontextual hidden-variable model. Unlike previous proofs, our generalisation clearly identifies that contextuality with respect to Pauli measurements is at play. In particular, the EPR state \cite{einsteincan1935} describes a continuous-variable state that is Wigner positive and still violates a Bell inequality \cite{Banaszek1998}: this is possible since it requires parity-operator measurements, which do not have a nonnegative Wigner representation \cite{spekkens2008negativity}, and thus is not in contradiction with our result, since our measurement scenario is nonnegatively represented in phase-space. Since homodyne detection (and generalised position and momentum measurement) is Gaussian, any possible quantum advantage is due to Wigner negativity being present before the detection setup. Our results open up a number of possibilities for ongoing research. Firstly, our present argument requires considering a measurement scenario that comprises an uncountable family of measurement labels (the entire phase-space $\Xc$). From an experimental perspective, it is natural to ask what happens if we restrict to a finite family of measurement labels, and whether we can derive a robust version of this theorem. Another question concerns the link between quantifying contextuality and quantifying Wigner negativity. Quantifying contextuality for continuous-variable systems is possible via semidefinite relaxation \cite{barbosa2015contextuality}. Also, there exist various measures of Wigner negativity~\cite{kenfack2004negativity,mari2011directly}. In particular, witnesses for Wigner negativity have been introduced in \cite{chabaud2021witnessing}, whose violation gives a lower bound on the distance to the set of states with non-negative Wigner function.
It would be highly desirable to establish a precise and quantified link between these measures of nonclassicality. On the practical side, this result paves the way for surprisingly simple demonstrations of non-classicality. Contextual states are typically associated with violations of Bell-like inequalities---although this result has only been formally proven in the case of a finite number of measurement settings, and needs to be generalised to continuous variables. In principle, this means that one could violate such an inequality with a setup as simple as a single photon and heterodyne detection, requiring only a single beam splitter. \section*{Acknowledgements} The authors would like to thank M.\ Howard, S.\ Mansfield and T.\ Douce for enlightening discussions. They are also grateful to D.\ Markham, E.\ Kashefi and F.\ Grosshans for their mentorship and the wonderful group they have created at LIP6. U.\,C. acknowledges interesting discussions with S.\ Mehraban and J.\ Preskill. U.\,C. acknowledges funding provided by the Institute for Quantum Information and Matter, an NSF Physics Frontiers Center (NSF Grant PHY-1733907). R.\,I.\,B. was supported by the ANR VanQuTe project (ANR-17-CE24-0035). \section{Necessary tools of measure theory} \label{app:measure} We briefly recall the main ingredients of measure theory needed in this article. To avoid some pathological behaviours when dealing with probability distributions on a continuum, we first need to define $\sigma$-algebras, which will give rise to a good notion of measurability. \begin{definition}[$\sigma$-algebras] A $\sigma$-algebra on a set $U$ is a family $\Bc$ of subsets of $U$ containing the empty set and closed under complementation and countable unions, that is: \begin{enumerate}[label=(\roman*)] \item $\emptyset \in \Bc$. \item for all $E \in \Bc$, $E^c \in \Bc$. \item for all $E_1,E_2, \dots \in \Bc$, $\cup_{i=1}^\infty E_i \in \Bc$.
\end{enumerate} \end{definition} \begin{definition}[Measurable space] A measurable space is a pair $\bm U = \tuple{U,\Fc_U}$ consisting of a set $U$ and a $\sigma$-algebra $\Fc_U$ on $U$. \end{definition} \noindent In some sense, the $\sigma$-algebra specifies the subsets of $U$ that can be assigned a `size', and which are therefore called the \emph{measurable sets} of $\bm U$. We will use the convention of using boldface to refer to measurable spaces and regular font to refer to the underlying set. A trivial example of a $\sigma$-algebra over any set $U$ is its powerset $\Pc(U)$, which gives the discrete measurable space $\tuple{U,\Pc(U)}$, where every set is measurable. This is typically used when $U$ is countable (finite or countably infinite), in which case this discrete $\sigma$-algebra is generated by the singletons. Another example, of central importance in measure theory, is the Borel $\sigma$-algebra $\Bc_\R$ generated from the open sets of $\R$, whose elements are called the Borel sets. This gives the measurable space $\tuple{\R,\Bc_\R}$. Working with Borel sets avoids the problems that would arise if we naively attempted to measure or assign probabilities to points in the continuum. We will also need to deal with measurable spaces formed by taking the product of an uncountably infinite family of measurable spaces. As in the setting of Tychonoff's theorem \cite{tychonoff1930topologische}, we will use the \textit{product $\sigma$-algebra}. \begin{definition}[Infinite product]\index{Measurable!space!infinite product of|textbf} Fix a possibly uncountably infinite index set $I$.
The product of measurable spaces $(\bm U_i = \tuple{U_i,\Fc_i})_{i \in I}$ is the measurable space: \[\prod_{i \in I} \bm U_i = \tuple{\prod_{i \in I} U_i, \prod_{i \in I} \Fc_i} = \tuple{U_I,\Fc_I} \Mcomma\] where $U_I = \prod_{i \in I} U_i$ is the Cartesian product of the underlying sets, and the $\sigma$-algebra $\Fc_I = \prod_{i \in I} \Fc_i$ is obtained via the product construction \ie it is generated by subsets of $\prod_{i \in I} U_i$ of the form $\prod_{i \in I} B_i$ where for all $i \in I$, $B_i \in \Fc_i$ and $B_i \subsetneq U_i$ only for a finite number of $i \in I$. \label{def:productmeasurableinfinite} \end{definition} Remarkably, the product $\sigma$-algebra is the smallest $\sigma$-algebra that makes the projection maps $\fdec{\pi_k}{\prod_{i \in I} U_i }{U_k}$ measurable. This definition reduces straightforwardly to the case of a finite product. We can now formally define measurable functions and measures on those spaces. \begin{definition}[Measurable function]\index{Measurable!function|textbf} A measurable function between measurable spaces $\bm U = \tuple{U,\Fc_U}$ and $\bm V = \tuple{V, \Fc_V}$ is a function $\fdec{f}{U}{V}$ whose preimage preserves measurable sets, \ie such that, for any $E \in \Fc_V$, ${f^{-1}(E) \in \Fc_U}$. \end{definition} \noindent This is similar to the definition of a continuous function between topological spaces. Measurable functions compose as expected.
\begin{definition}[Measure] \label{def:measure} A measure on a measurable space $\bm U = \tuple{U,\Fc_U}$ is a function $\fdec{\mu}{\Fc_U}{\Rext}$ from the $\sigma$-algebra to the extended real numbers $\Rext = \R \cup \enset{-\infty,+\infty}$ satisfying: \begin{enumerate}[label=(\roman*)] \item\label{enum:nonnegativity} (nonnegativity) $\mu(E)\geq 0$ for all $E\in\Fc_U$; \item (null empty set) $\mu(\emptyset)=0$; \item ($\sigma$-additivity) for any countable family $\family{E_i}_{i=1}^\infty$ of pairwise disjoint measurable sets, $\mu(\bigcup_{i=1}^\infty E_i) = \sum_{i=1}^\infty \mu(E_i)$. \end{enumerate} \end{definition} \noindent In particular, a measure $\mu$ on $\bm U = \tuple{U,\Fc_U}$ allows one to integrate well-behaved measurable functions $\fdec{f}{U}{\R}$ (where $\R$ is equipped with its Borel $\sigma$-algebra $\Bc_\R$) to obtain a real value, denoted \begin{equation} \intg{\bm U}{f}{\mu} \; \text{ or } \; \intg{x \in U}{f(x)}{\mu(x)}. \end{equation} A simple example of a measurable function is the \emph{indicator function} of a measurable set $E \in \Fc_U$: \[ \bm 1_E(x) \defeq \begin{cases} 1 & \text{if $x \in E$} \\ 0 & \text{if $x \not\in E$.}\end{cases}\] For any measure $\mu$ on $\bm U$, its integral yields the `size' of $E$: \begin{equation}\label{eq:integralindicator} \intg{\bm U}{\bm 1_{E}}{\mu} = \mu(E) \Mdot \end{equation} A measure $\mu$ on a measurable space $\bm U$ is said to be \emph{finite} if $\mu(U)<\infty$ and it is a \emph{probability measure} if it is of unit mass \ie $\mu(U)=1$. A measurable function $f$ between measurable spaces $\bm U$ and $ \bm V$ carries any measure $\mu$ on $\bm U$ to a measure $f_*\mu$ on $\bm V$. This is known as a \emph{push-forward} operation. This push-forward measure is given by $f_*\mu(E) = \mu(f^{-1}(E))$ for any set $E$ measurable in $\bm V$.
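On a finite set equipped with its powerset $\sigma$-algebra, these notions become elementary and can be checked directly. The following sketch (with an arbitrary toy measure, not taken from the text) illustrates the push-forward just defined:

```python
# Discrete sketch: on a finite set with the powerset sigma-algebra,
# every function is measurable, a measure is a weight per point, and
# the push-forward is f_* mu (E) = mu(f^{-1}(E)).
U = [0, 1, 2, 3]
mu = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}   # a probability measure on U

f = lambda x: x % 2                      # a map U -> {0, 1}

def pushforward(mu, f, E):
    """Compute f_* mu (E) = mu(f^{-1}(E)) for a subset E of the codomain."""
    return sum(p for x, p in mu.items() if f(x) in E)

print(pushforward(mu, f, {0}))  # mu({0, 2}) = 0.4
print(pushforward(mu, f, {1}))  # mu({1, 3}), approximately 0.6
```

In the discrete case the marginal measure of the text is exactly this construction with $f$ a projection.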
An important use of push-forward measures is that for any integrable function $g$ between measurable spaces $\bm V$ and $\tuple{\R,\Bc_\R}$, one can write the following change-of-variables formula: \begin{equation}\label{eq:changeofvariables} \intg{\bm U}{g \circ f}{\mu} = \intg{\bm V}{g}{f_*\mu} \Mdot \end{equation} A case that will be of particular interest to us is the push-forward of a measure $\mu$ on a product space $\bm U_1 \times \bm U_2$ along a projection $\fdec{\pi_i}{ U_1 \times U_2}{U_i}$. This yields the \emph{marginal measure} $\mu|_{\bm U_i}={\pi_i}_*\mu$, where for any $E \in \Fc_{U_1}$ measurable, $\mu|_{\bm U_1}(E) = \mu(\pi_1^{-1}(E)) = \mu(E \times U_2)$. In the opposite direction, given a measure $\mu_1$ on $\bm U_1$ and a measure $\mu_2$ on $\bm U_2$, a \emph{product measure} $\mu_1 \times \mu_2$ is a measure on the product measurable space $\bm U_1 \times \bm U_2$ satisfying $(\mu_1 \times \mu_2)(E_1 \times E_2) = \mu_1(E_1)\mu_2(E_2)$ for all $E_1 \in \Fc_1$ and $E_2 \in \Fc_2$. For probability measures, there is a uniquely determined product measure. The last ingredient we need from measure theory is called a \emph{Markov kernel}. \begin{definition}[Markov kernel] A Markov kernel (or probability kernel) between measurable spaces $\bm U = \tuple{U,\Fc_U}$ and $\bm V = \tuple{V, \Fc_V}$ is a function $\fdec{k}{U \times \Fc_V}{[0,1]}$ (the space $[0,1]$ is assumed to be equipped with its Borel $\sigma$-algebra) such that: \begin{enumerate}[label=(\roman*)] \item for all $E \in \Fc_V$, $\fdec{k(\dummy,E)}{U}{[0,1]}$ is a measurable function; \item for all $x \in U$, $\fdec{k(x,\dummy)}{\Fc_V}{[0,1]}$ is a probability measure. \end{enumerate} \end{definition} \noindent Markov kernels generalise the discrete notion of stochastic matrices. 
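In the discrete case, a Markov kernel is exactly a stochastic matrix: $k(x,\dummy)$ is one probability distribution per input point. A minimal sketch (toy numbers, for illustration only):

```python
# A Markov kernel k between finite spaces reduces to a stochastic matrix:
# k(x, .) is a probability measure on V = {'a', 'b'} for each x in U = {0, 1}.
kernel = {
    0: {'a': 0.3, 'b': 0.7},
    1: {'a': 1.0, 'b': 0.0},
}

# Each row must be a probability measure (condition (ii) of the definition).
for row in kernel.values():
    assert abs(sum(row.values()) - 1.0) < 1e-12

def apply_kernel(mu, k):
    """Push a probability measure mu on U through the kernel k, giving the
    measure E -> integral of k(x, E) d mu(x) on V."""
    out = {}
    for x, px in mu.items():
        for y, pyx in k[x].items():
            out[y] = out.get(y, 0.0) + px * pyx
    return out

mu = {0: 0.5, 1: 0.5}
print(apply_kernel(mu, kernel))  # {'a': 0.65, 'b': 0.35} (up to float rounding)
```

The composition `apply_kernel` is the measure-theoretic analogue of a matrix-vector product, which is what "Markov kernels generalise stochastic matrices" means concretely.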
\section{Wigner function and symplectic transformations} \label{app:relations} Below we show how a symplectic orthogonal transformation $S$ affects the Wigner function: \begin{align} W_{\tau(S)\rho\tau^*(S)}(\bm x) &= \mathrm{TF}[\Phi_{\tau(S)\rho\tau^*(S)}](J\bm x) \\ &= \intg{\Xc}{e^{-i J \bm x \cdot \bm y} \Tr(\tau(S) \rho \tau(S)^* D(-\bm y))}{\bm y} \\ &= \intg{\Xc}{e^{-i J \bm x \cdot \bm y} \Tr(\rho \tau(S)^* D(-\bm y)\tau(S))}{\bm y} \\ &= \intg{\Xc}{e^{-i J \bm x \cdot \bm y} \Tr(\rho D(-S\bm y))}{\bm y} \\ &= \intg{\Xc}{e^{-i J \bm x \cdot (S^{-1}\bm y')} \Tr(\rho D(-\bm y'))}{\bm y'} \\ &= \intg{\Xc}{e^{-i J (S\bm x) \cdot \bm y'} \Tr(\rho D(-\bm y'))}{\bm y'} \\ &= \mathrm{TF}[\Phi_\rho](JS\bm x) \\ &= W_\rho(S\bm x). \label{eq:WigSymplTrans} \end{align} \section{Measuring a displacement operator} \label{app:measDisp} In this appendix, we make the link with previous discrete-variable analogues of this proof \cite{howard2014contextuality,delfosse2017equivalence}. In particular, in \cite{delfosse2017equivalence} it is not clearly specified that contextuality emerges with respect to the measurements of all (discrete) displacement operators. Since it might not be obvious what ``measuring a displacement'' refers to, we detail below how it amounts to performing quadrature measurements. We focus on a single qumode since the argument extends straightforwardly to multimode quantum states. A quadrature operator such as the position operator $\hat q$ is self-adjoint and it can be expanded via the spectral theorem \cite{hall2013quantum} as: \begin{equation} \hat q = \intg{x \in \mathrm{sp}(\hat q)}{x}{P_{\hat q}(x)} \, , \end{equation} where the spectrum of $\hat q$ is $\mathrm{sp}(\hat q) = \R$ and $P_{\hat q}$ is the spectral measure of $\hat q$ \cite[Th. 8.10]{hall2013quantum}. For $E \in \Bc(\R)$, $P_{\hat q}(E)$ is given by \cite[Def.
8.8]{hall2013quantum}: \begin{equation} P_{\hat q}(E) = \bm 1_E (\hat q) \Mdot \end{equation} Informally, it assigns 1 whenever measuring $\hat q$ yields an outcome that belongs to $E$. We can view $P_{\hat q}(E)$ as the formal version of the projector $\intg{x \in E}{\ket x \bra x}{x}$ (with $\ket x$ a non-normalisable eigenvector of $\hat q$). Its functional calculus \cite{hall2013quantum} can be expressed as: \begin{equation} f(\hat q) = \intg{x \in \mathrm{sp}(\hat q)}{f(x)}{P_{\hat q}(x)} \, , \end{equation} for $f$ a bounded measurable function. Then we can write the spectral measure of $f(\hat q)$ via the push-forward operation: \begin{equation} \forall E \in \Bc(\R), \; P_{f(\hat q)}(E) = P_{\hat q}(f^{-1}(E)) \Mdot \end{equation} It follows immediately that, for $s \in \R$, the spectral measure for the diagonal phase operator $e^{is\hat q}$ is given, for $E \in \mathcal{B}(\mathbb{S}_1)$, by \begin{equation} P_{\exp(is \hat q)}(E) \defeq P_{\hat q}(\{x \in \R \mid e^{isx} \in E\}). \end{equation} Define the rotated quadrature $\hat q_\theta \defeq \cos(\theta) \hat q + \sin(\theta) \hat p$ for $\theta \in \left[0,2\pi\right]$ and the phase-shift operator $\hat R(\theta) \defeq \exp(i \frac{\theta}{2} (\hat q^2 + \hat p^2))$. Then \begin{equation} \hat q_\theta = \hat R(\theta) \hat q \hat R(-\theta), \end{equation} so that the spectral measure of $\hat q_{\theta}$ is given by \begin{equation} P_{\hat q_\theta}(E) = \hat R(\theta) P_{\hat q}(E) \hat R(-\theta). \end{equation} Let $(q,p) \in \R^2$. We can find $r \in \R_+$, $\theta \in \left[ 0,2\pi \right]$ such that $(q,p) = (-r\sin(\theta),r\cos(\theta))$. Then: \begin{equation} \hat D(q,p) = e^{i(p \hat q - q \hat p)} = e^{ir(\cos(\theta) \hat q + \sin(\theta) \hat p)} = e^{ir \hat q_{\theta}}. \end{equation} This form allows us to deduce spectral measures for the displacement operators.
For any $E \in \mathcal{B}(\mathbb{S}_1)$, we have \begin{align} P_{\hat D(q,p)}(E) &= P_{\exp(ir \hat q_{\theta})}(E) \\ &= P_{\hat q_\theta}(\{x \in \R \mid e^{irx} \in E\})\\ &= \hat R(\theta) P_{\hat q}(\{x \in \R \mid e^{irx} \in E\}) \hat R(-\theta). \end{align} In conclusion, ``measuring a displacement operator'' amounts to a quadrature measurement.
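As a quick numerical sanity check on the polar decomposition $(q,p) = (-r\sin\theta, r\cos\theta)$ used above, one can verify it for arbitrary phase-space points (a standalone sketch, not part of the original derivation):

```python
import math

def polar_decomposition(q, p):
    """Return r >= 0 and theta in [0, 2*pi) with
    (q, p) = (-r*sin(theta), r*cos(theta)), so that
    p*q_hat - q*p_hat = r*(cos(theta)*q_hat + sin(theta)*p_hat) = r*q_theta."""
    r = math.hypot(q, p)
    theta = math.atan2(-q, p) % (2 * math.pi)
    return r, theta

# Check the decomposition on a few arbitrary phase-space points.
for q, p in [(1.2, -0.7), (0.0, 2.0), (-3.0, 0.0)]:
    r, th = polar_decomposition(q, p)
    assert abs(-r * math.sin(th) - q) < 1e-12
    assert abs(r * math.cos(th) - p) < 1e-12
```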
\section{Introduction} In graphic design, there are many creative applications providing thousands of templates. These design platforms are suitable for creative designers and amateurs such as marketing professionals, bloggers, social media managers, etc. In design workflows, users choose a template and replace the elements using their own assets. The pre-designed templates have coordinated colors in each visual element. When some visual elements are replaced, the color harmony may be destroyed. Selecting appropriate colors is not easy for amateurs, and even designers usually struggle with getting suitable color palettes for vector graphic documents. A color palette refers to a limited number of colors expressed in refined forms. It is widely used in graphic design due to its simplicity, intuitiveness, generality, and easy computation \cite{kim2021dynamic}. Prior research on color palette representation proposed training regression models \cite{o2011color, kita2016aesthetic}. These regression methods extract hundreds of color features manually and learn the weights of each feature. The feature extraction is complex, including palette colors and the mean, standard deviation, median, max, min, and max minus min across a single channel in each color space, i.e., RGB, CIELAB, and HSV. The difficulty with hand-crafted features is that they do not comprehensively encode the semantics of colors, and some features might not have significant effects on the downstream tasks. Learning high-quality representations of color remains an open problem. In this work, we simplify the input without hand-crafted features and propose a data-driven deep learning model for color representation. In recent years, some researchers have explored deep learning techniques for color palette generation and color recommendation.
Previous research focuses on generating a color palette for a single visual target, such as image colorization \cite{bahng2018coloring}, shape colorization in statistical graphics \cite{lu2020palettailor}, or shape and text colorization in infographics \cite{yuan2021infocolorizer}. However, a vector graphic document is much more complex, with multiple visual elements including images, shapes, and texts simultaneously. Each visual element has its own palette. It is challenging for existing color recommendation tools to recommend colors for such multi-palette designs. In this study, we use a color sequence combining multiple palettes of different visual elements and train a masked color model to learn a multi-palette representation by color sequence completion. In summary, our main contributions include: \begin{itemize} \setlength{\itemsep}{-5pt} \item A novel masked color model to represent multiple palettes in vector graphic documents. \item An interactive system for color recommendation which recolors graphic documents with the recommended colors. \item A series of experimental evaluations, covering quantitative experiments and perceptual studies of the recommendation system and the recommended results, that validate the effectiveness of the proposed methods. \end{itemize} \section{Related works} \subsection{Color recommendation} There are mainly two scenarios of color recommendation. The first one is to suggest a color palette for specified themes or semantic requirements. The second one is to expand a color palette based on the given colors. For the first scenario, there are some websites, such as Adobe Color \cite{AdobeColor} and COLOURLovers \cite{ColourLovers}, providing color palette templates classified with various theme names or semantic tags, such as `natural' or `environmental'. These palette templates can be used as references to suit semantic requirements.
Some studies suggest color palette templates based on semantic tags and use fixed harmonic color selection models to generate text colors from image colors in magazine cover design \cite{jahanian2013recommendation, yang2016automatic}. For the second scenario, an early effort by O’Donovan \etal. \cite{o2011color} proposed a linear regression method and suggested the fifth color for the given four colors. Kita \etal. \cite{kita2016aesthetic} used the same regression method to expand a color palette composed of N colors to $N+\alpha$ colors retaining the original color harmony. These regression models depend on hand-crafted feature extraction. In this work, we use a deep learning model to suggest compatible colors for the given colors in a multi-palette design, without hand-crafted color feature extraction. Recently, some researchers have explored deep learning algorithms for color palette recommendation. Yuan \etal. \cite{yuan2021infocolorizer} employed a Variational AutoEncoder with Arbitrary Conditioning (VAEAC) model to generate a color palette for infographic elements. Kim \etal. \cite{kim2022colorbo} trained a color embedding model to predict and recommend other colors that are likely to gather together in the same palette for Mandala Coloring. The color model was trained in a similar way to fastText \cite{bojanowski2017enriching}, an extension of the Word2Vec \cite{mikolov2013efficient} model. They treated a color as a word and a palette as a sentence, and the trained model provides a continuous vector representation of colors. The elements in these design objects are shapes or texts, and each element is limited to a single color. In graphic documents, the elements also include photos and illustrations. Color design in vector graphic documents is more complex, being a multi-palette design. We extend the idea of applying word embeddings to color representation, similarly to the previous Word2Vec-based work \cite{kim2022colorbo}.
However, Word2Vec models are not able to account for the relationships between different palettes in the same graphic document. We explore multi-palette representation by contextual embedding based on the BERT architecture \cite{kenton2019bert}. \subsection{Palette-based image recoloring} There are some image recoloring approaches based on semantic segmentation of images by deep neural networks \cite{afifi2019image, khodadadeh2021automatic}. We focus on palette-based models, since a color palette is a useful and intuitive way to express the color concept of an image. A color palette captures the main colors in an image and is used to adjust the color composition of an image towards the desired color palette. Most approaches for recoloring images involve two steps: extracting a palette from the image and mapping every pixel in the image to the target palette. Many approaches \cite{chang2015palette, zhang2017palette, akimoto2020fast} employ clustering methods to extract palette colors. Several other works \cite{tan2018efficient, wang2019improved} use a geometric method to extract palettes that constructs a convex hull in RGB color space. The convex-hull-based palettes may miss important colors that lie within the convex hull. We use the k-means clustering method to recolor image elements in our system. \section{Multi-palette representation} \subsection{Datasets} We generate a multi-palette dataset from the large-scale Crello dataset \cite{yamaguchi2021canvasvae}. The Crello dataset contains design templates for various display formats, such as social media posts, banner ads, blog headers, or printed posters. It offers the complete document structure and element attributes, including element-specific configurations such as element type, position, size, opacity, color, or a raster image. The element types mainly include imageElement, maskElement, coloredBackground, svgElement, and textElement.
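The clustering-based palette extraction used throughout this work can be sketched with a tiny k-means (a pure-Python toy stand-in for a library routine; the pixel values below are made up for illustration):

```python
def kmeans_palette(pixels, k=5, iters=20):
    """Toy k-means: cluster RGB pixels and return a k-color palette
    (the cluster centroids). Deterministic init with the first k distinct
    pixels, for reproducibility."""
    centers = []
    for p in pixels:
        if p not in centers:
            centers.append(p)
        if len(centers) == k:
            break
    for _ in range(iters):
        # Assign every pixel to its nearest centroid (squared RGB distance).
        clusters = [[] for _ in centers]
        for p in pixels:
            i = min(range(len(centers)),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
            clusters[i].append(p)
        # Move each centroid to the mean of its cluster (keep it if empty).
        centers = [tuple(sum(q[d] for q in cl) / len(cl) for d in range(3)) if cl else c
                   for cl, c in zip(clusters, centers)]
    return centers

# Two clearly separated color clusters -> a 2-color palette.
pixels = [(250, 10, 10)] * 50 + [(10, 10, 250)] * 50
print(kmeans_palette(pixels, k=2))  # [(250.0, 10.0, 10.0), (10.0, 10.0, 250.0)]
```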
We classify the elements into three groups: an image element group including imageElement and maskElement, a scalable vector graphic (SVG) element group including coloredBackground and svgElement, and a text element group including textElement. The color data of each element in the Crello dataset has only 1 color, which is relevant for solid backgrounds and text placeholders. We generate a multi-palette dataset as Image-SVG-Text palettes, where each element group has its own palette, as shown in Figure~\ref{fig:paletteExt}. For image and SVG elements, we merge the elements of the same group into a single image and then extract the color palette using the k-means clustering method. For text elements, we collect text colors and cluster them into a palette. Each palette has up to 5 colors in this work. We get 18,768 / 2,315 / 2,278 valid samples as the train, validation, and test datasets. All design templates in the figures of this paper are from the Crello test dataset. \begin{figure}[h] \begin{center} \includegraphics[width=1.0\linewidth]{Figures/palette_extraction_v3.png} \end{center} \caption{Multi-palette extraction from a design template as Image-SVG-Text palettes. Elements in the same element group are merged into a single image, and then the color palette is extracted.} \label{fig:paletteExt} \end{figure} \subsection{Representation learning with masked color model} \begin{figure*}[h] \begin{center} \includegraphics[width=0.8\linewidth]{Figures/masked_color_model_v2.png} \end{center} \caption{Masked color model for the image-SVG-text color sequence.} \label{fig:model} \end{figure*} We train the color embedding model in a similar way to the word embedding model. In natural language processing, a word embedding model is used to learn distributed representations, where the input is a text corpus and the output is a set of feature vectors that represent words.
Similarly, in the color embedding model, a color signifies a word, a palette signifies a sentence, and multiple palettes in the same design signify a paragraph. For the input color corpus, we adopt the CIELAB color space, which is more perceptually uniform than other color spaces \cite{kim2021dynamic}. The most widely used color space is the 24-bit RGB model. We convert RGB color data to CIELAB with a range of [0, 255], and assign each color to one of the bins in a $b \times b \times b$ histogram (we use $b=16$ in this work). For example, the color white (255, 255, 255) in RGB color space is labeled with the code $'15\_8\_8'$ in CIELAB color space with 16 bins. There are 796 color codes in the vocabulary of the train dataset. The color codes are converted into vectors and embedded in the space during learning. We obtain color embeddings with a masked color model based on the pre-training BERT architecture \cite{kenton2019bert}. The masked color model in Figure~\ref{fig:model} is trained in a similar way to the masked language model (Masked LM) in BERT. The model receives a fixed length of each palette as input. For a palette shorter than this fixed length, we add the token [PAD] to the palette to make up the length. Another artificial token, [SEP], is added to the end of each palette. For a given token, its input representation is constructed by summing the corresponding token, segment, and position embeddings. Here, $\{C_{1_{1}},\ldots C_{1_{5}}\}$ is the image color palette, $\{C_{2_{1}},\ldots C_{2_{4}}\}$ is the SVG color palette, and $\{C_{3_{1}}, C_{3_{2}}\}$ is the text color palette. The palettes of image, SVG, and text are respectively labeled with the segment numbers 1, 2, and 3. The segment embeddings are basically the palette number encoded into a vector. The trained model thus knows which palette a particular color token belongs to. The multi-palette representation can be achieved by segment embeddings.
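The color-code tokenisation described above (CIELAB rescaled to $[0, 255]$ and binned into a $16\times16\times16$ histogram) can be sketched as follows. The sRGB$\to$CIELAB conversion is the standard D65 formula, and the exact rescaling is our reading of the text, so treat the sketch as illustrative:

```python
def srgb_to_lab(r, g, b):
    """Standard sRGB (D65) -> CIELAB conversion."""
    def lin(c):  # gamma-expand an 8-bit sRGB channel
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # linear RGB -> XYZ (D65)
    x = 0.4124564 * rl + 0.3575761 * gl + 0.1804375 * bl
    y = 0.2126729 * rl + 0.7151522 * gl + 0.0721750 * bl
    z = 0.0193339 * rl + 0.1191920 * gl + 0.9503041 * bl
    xn, yn, zn = 0.95047, 1.0, 1.08883   # D65 white point
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def color_code(r, g, b, bins=16):
    """Rescale L*a*b* to [0, 255] and bin into a bins^3 histogram code."""
    L, a, b2 = srgb_to_lab(r, g, b)
    scaled = (L * 255.0 / 100.0, a + 128.0, b2 + 128.0)
    step = 256 // bins
    clip = lambda v: min(max(int(round(v)), 0), 255)
    return "_".join(str(clip(v) // step) for v in scaled)

print(color_code(255, 255, 255))  # '15_8_8', matching the paper's example
```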
The masked color model randomly masks some percentage of the tokens from the input, and then predicts the masked tokens based on their context. In our experiments, we mask $10\%$ of the tokens in each sequence at random and replace the chosen token with the [MASK] token $80\%$ of the time. Then, we use the standard cross entropy loss to optimize the pre-training task. The final embeddings from the Transformer self-attention mechanism can be used in downstream tasks, such as aesthetic rating prediction. In the current work, we recommend colors in a multi-palette by predicting the masked colors with the highest probabilities. \subsection{Color recommendation system} In existing creative applications for graphic documents, users are allowed to choose and edit a design template. However, when some visual elements are replaced, users may struggle with coordinating the colors in the design. To reduce users' work, we propose a system that recommends the specified colors and recolors the elements with the recommended colors automatically. We create a color recommendation engine with the masked color model and develop an interactive user interface that enables users to obtain coordinated colors for their designs. The color recommendation system supports basic selecting, interactive recommendation, and previewing functions, as shown in Figure~\ref{fig:system}. The design template is converted to a JSON file as the system input, which contains the complete element-specific configuration. The system parses the JSON object and reconstructs the design with separated visual elements. The system allows users to change the image elements and extracts the color palette of each element group. Users can select the colors for recoloring and then check the design results with the recommended colors. For SVG recoloring, the original color is changed to the recommended color by a simple interpolation method.
For image recoloring, we use a palette-based photo recoloring method by k-means clustering \cite{chang2015palette} in this system. \begin{figure*} \begin{center} \includegraphics[width=1.0\linewidth]{Figures/system_UI_v2.png} \end{center} \caption{Interactive interface of the color recommendation system for vector graphic documents, which contains six main operations: \textcircled{\raisebox{-0.9pt}{1}} Input a JSON file. \textcircled{\raisebox{-0.9pt}{2}} Replace image elements. \textcircled{\raisebox{-0.9pt}{3}} Get image-SVG-text palettes and select the colors for recoloring. \textcircled{\raisebox{-0.9pt}{4}} Get the recommended colors from the color recommendation engine. \textcircled{\raisebox{-0.9pt}{5}} Choose a recommended color and check the recolored result. \textcircled{\raisebox{-0.9pt}{6}} Mark the preferred results.} \label{fig:system} \end{figure*} \section{Experimental validation} To evaluate the performance of our proposed approach, we compare it with related work and a baseline model by both quantitative and qualitative evaluations. We adapted a Word2Vec-based model which is used in the related work \cite{kim2022colorbo}. The input for this model is the color token without segment embeddings. We also trained a BERT-based model without segment embeddings as a baseline to show the effectiveness of the segment embeddings for multi-palette representation. \subsection{Quantitative evaluation} We use the 2,278 color sequences of our test dataset in the quantitative experiments. We mask a color in each color sequence at random and evaluate the accuracy of the predicted color. We use top-N accuracy, i.e., whether the true color is equal to any of the N most probable colors predicted by each model. The human eye sometimes cannot fully perceive subtle differences in color. In addition to accuracy, we also use visual similarity to measure the recommended colors.
For similarity measurement, we calculate the distance between two colors using CIEDE2000 rather than Euclidean distance, as it has exhibited good performance in predicting visual similarity between color palettes \cite{yang2020predicting, kim2021dynamic}. First, we train each model 20 times and compute the mean accuracy. The results of our model with and without segment and position embeddings are shown in Table~\ref{tab:quantitative_bert}. We find no difference between the models with and without position embeddings on the current dataset; thus we pick the best models trained by our method without position embeddings for the following comparisons. \begin{table}[b] \begin{center} \begin{tabular}{|l|l|c|c|c|c|} \hline \multicolumn{2}{|c|}{Embeddings} & \multicolumn{4}{|c|}{Accuracy$\uparrow$}\\ segment & position & @1 & @3 & @5 & @10 \\ \hline\hline w/ & w/ & 0.27 & 0.45 & 0.53 & 0.63 \\ w/ & w/o & 0.27 & 0.44 & 0.52 & 0.62 \\ w/o & w/o & 0.16 & 0.30 & 0.38 & 0.50 \\ \hline \end{tabular} \end{center} \caption{Quantitative comparison of our model with and without segment and position embeddings on top-N accuracy (N = 1, 3, 5, 10). Values are means over 20 trained models.} \label{tab:quantitative_bert} \end{table} To compare our method with the Word2Vec-based model and the baseline model, we evaluate the accuracy and similarity of the color prediction results of these three models. The comparison results for accuracy are shown in Figure~\ref{fig:quantitative_accuracy} and Table~\ref{tab:quantitative_accuracy}, and the comparison results for similarity are shown in Figure~\ref{fig:quantitative_similarity} and Table~\ref{tab:quantitative_similarity}. Our method with segment embeddings provides significantly better results than the Word2Vec-based method and the baseline model. The results show that the segmentation is effective in multi-palette representation learning and brings better performance to color recommendation.
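The top-N accuracy used in these comparisons can be sketched as follows (the color codes below are made-up examples, not results from the paper):

```python
def top_n_accuracy(ranked_preds, truths, n):
    """Fraction of test cases whose true color appears among the n most
    probable predicted colors (each prediction list is ranked best-first)."""
    hits = sum(true in ranked[:n] for ranked, true in zip(ranked_preds, truths))
    return hits / len(truths)

ranked_preds = [["15_8_8", "0_8_8", "12_9_7"],   # model's ranked guesses, case 1
                ["0_8_8", "15_8_8", "3_8_9"]]    # case 2
truths = ["0_8_8", "3_8_9"]

print(top_n_accuracy(ranked_preds, truths, 1))  # 0.0
print(top_n_accuracy(ranked_preds, truths, 2))  # 0.5
print(top_n_accuracy(ranked_preds, truths, 3))  # 1.0
```

By construction the metric is monotone in N, which is why the tables report increasing values from @1 to @10.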
We suggest providing more than two color candidates in recommendation applications, since the accuracy is high and users are likely to find a desired color among the top-N recommended colors. \begin{figure}[h] \begin{center} \includegraphics[width=1.0\linewidth]{Figures/accuracy_results.png} \end{center} \caption{Quantitative comparison of our models with and without segment embeddings and the Word2Vec-based model on top-N accuracy (N = 1, 2, 3, 4, 5).} \label{fig:quantitative_accuracy} \end{figure} \begin{table}[h] \begin{center} \begin{tabular}{|l|c|c|c|c|c|} \hline & \multicolumn{5}{|c|}{Accuracy$\uparrow$}\\ Models & @1 & @2 & @3 & @4 & @5 \\ \hline\hline Word2Vec & 0.03 & 0.05 & 0.08 & 0.10 & 0.11 \\ Ours w/o segment & 0.23 & 0.32 & 0.39 & 0.43 & 0.46 \\ \setrow{\bfseries} Ours w/ segment & 0.36 & 0.46 & 0.52 & 0.57 & 0.61 \\ \hline \end{tabular} \end{center} \caption{Quantitative comparison of our models with and without segment embeddings and the Word2Vec-based model on top-N accuracy (N = 1, 2, 3, 4, 5).} \label{tab:quantitative_accuracy} \end{table} \begin{figure}[h] \begin{center} \includegraphics[width=1.0\linewidth]{Figures/similarity_results.png} \end{center} \caption{Quantitative comparison of our models with and without segment embeddings and the Word2Vec-based model on top-N similarity (N = 1, 2, 3, 4, 5).} \label{fig:quantitative_similarity} \end{figure} \begin{table}[h] \begin{center} \begin{tabular}{|l|c|c|c|c|c|} \hline & \multicolumn{5}{|c|}{Similarity$\downarrow$}\\ Models & @1 & @2 & @3 & @4 & @5 \\ \hline\hline Word2Vec & 38.4 & 28.3 & 23.9 & 20.3 & 17.8 \\ Ours w/o segment & 30.6 & 18.8 & 14.1 & 11.5 & 9.9 \\ \setrow{\bfseries} Ours w/ segment & 23.8 & 14.5 & 10.7 & 8.7 & 7.4 \\ \hline \end{tabular} \end{center} \caption{Quantitative comparison of our models with and without segment embeddings and the Word2Vec-based model on top-N similarity (N = 1, 2, 3, 4, 5).} \label{tab:quantitative_similarity} \end{table} \subsection{Qualitative evaluation}
\begin{figure}[h] \begin{center} \includegraphics[width=1.0\linewidth]{Figures/eval_samples_hues.png} \end{center} \caption{Hue distribution of the selected colors in the evaluation samples and all colors in the Crello test dataset. Hue orders are based on the Practical Color Co-ordinate System, and we denote the neutral color as -1.} \label{fig:eval_hues} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=1.0\linewidth]{Figures/recomm_samples.png} \end{center} \caption{Color recommendation results with our proposed BERT-based models with and without segment embeddings, and the Word2Vec-based model. The three samples are recolored with one image color, one SVG color, and one text color respectively.} \label{fig:recomm_results} \end{figure} Considering that color performance depends on human perception, we conducted a qualitative evaluation to verify the performance of the recommended results. We randomly selected 80 templates from the Crello test dataset and randomly selected one color for recoloring in the multi-palette, which can be in an image, SVG, or text element. The hue distribution of the selected colors in the evaluation samples and all colors in the test dataset is shown in Figure~\ref{fig:eval_hues}. The neutral colors in some visually imperceptible elements are excluded during random selection; e.g., a text color with a very small font size is ignored in this experiment. The designs recolored with the top-1 recommended colors of each model are shown in Figure~\ref{fig:recomm_results}. We pick one original design (GT) and the top-2 recommended results from the three models: our model with segment embeddings, the baseline model without segment embeddings, and the Word2Vec-based model. These seven designs are arranged together in an evaluation question. The participants are asked to select at most three good designs and three bad designs from the seven designs. We recruited 84 participants: 68 non-designers and 16 graphic designers.
For non-designers, the evaluation results of good and bad design selections are shown in Figure~\ref{fig:eval_nondesigner_g} and Figure~\ref{fig:eval_nondesigner_ng}. Here we report the mean values over the top-2 recommendation results of the three models. We find that although the results of our proposed model with segment embeddings perform worse than GT, they have higher preference and lower dislike than the baseline model without segment embeddings and the Word2Vec-based model ($p<0.1$). There is no significant difference between the baseline model and the Word2Vec-based model. \begin{figure}[h] \begin{center} \includegraphics[width=1.0\linewidth]{Figures/eval_nondesigner_g.png} \end{center} \caption{Evaluation results of good design from non-designers.} \label{fig:eval_nondesigner_g} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=1.0\linewidth]{Figures/eval_nondesigner_ng.png} \end{center} \caption{Evaluation results of bad design from non-designers.} \label{fig:eval_nondesigner_ng} \end{figure} For designers, the evaluation results of good and bad design selections are shown in Figure~\ref{fig:eval_designer_g} and Figure~\ref{fig:eval_designer_ng}. The results are similar to those from non-designers. Moreover, designers rate our model with segment embeddings significantly better than the Word2Vec model (preference: $p<0.001$, dislike: $p<0.01$). From the evaluation results of non-designers and designers, the most obvious finding is that our proposed model has better performance than the Word2Vec-based model. Moreover, the segment embedding is necessary for multi-palette representation.
\begin{figure}[h] \begin{center} \includegraphics[width=1.0\linewidth]{Figures/eval_designer_g.png} \end{center} \caption{Evaluation results of good design from designers.} \label{fig:eval_designer_g} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=1.0\linewidth]{Figures/eval_designer_ng.png} \end{center} \caption{Evaluation results of bad design from designers.} \label{fig:eval_designer_ng} \end{figure} \subsection{Interview study} To assess the color recommendation system in Figure~\ref{fig:system}, we collected qualitative feedback from 12 professional designers aged 20--39. We provided a short tutorial of our system and asked the participants to explore color recommendation for recoloring one color in visual elements. In some templates, an element group has more than one dominant color, so we also asked participants to explore the recommendation of more than one color. Figure~\ref{fig:3color_recomm} shows a sample of recoloring three colors in SVG elements. We prepared design templates and some image samples from the Crello test dataset. \begin{figure}[b] \begin{center} \includegraphics[width=1.0\linewidth]{Figures/3color_recomm.png} \end{center} \caption{Color recommendation results for recoloring three colors in SVG elements.
The number shown on the palette color is the recommended ranking.} \label{fig:3color_recomm} \end{figure} \twocolumn[{ \begin{center} \captionsetup{type=table} \begin{tabular}{|l|l|} \hline \textbf{Q1} Struggle with choosing colors in creative design works&\includegraphics[scale=0.25]{Figures/q1.png} \\ \textbf{Q2} Color recommendation tool is useful&\includegraphics[scale=0.25]{Figures/q2.png} \\ \textbf{Q3} This recommendation system is easy to use&\includegraphics[scale=0.25]{Figures/q3.png} \\ \textbf{Q4} Recommended results for one color look good&\includegraphics[scale=0.25]{Figures/q4.png} \\ \textbf{Q5} Recommended results for more than one color look good&\includegraphics[scale=0.25]{Figures/q5.png} \\ \textbf{Q6} Like to use this recommendation system for work&\includegraphics[scale=0.25]{Figures/q6.png} \\ & \includegraphics[scale=0.25]{Figures/interview_sign.png}\\ \hline \end{tabular} \captionof{table}{Evaluation results from 12 designers in the interview study.} \label{tab:interview_study} \end{center} }] We asked participants to replace the image elements in design templates with image samples; they could also upload their own images. After some trials, the participants filled out a questionnaire of six questions, each with five choices: strongly agree, agree, neither, disagree, and strongly disagree. The results are shown in Table~\ref{tab:interview_study}. Q1 confirms a high demand for color recommendation in creative design work, and $91.7\%$ of participants answered that a color recommendation tool is useful for graphic design work (Q2). For Q3 and Q6, $58.3\%$ of participants answered that our current system is easy to use, and $75\%$ would like to use our recommendation system for work. For the recommendation results, $66.7\%$ of participants obtained satisfactory recommended colors from our system in both the single-color and multi-color recoloring tasks (Q4 and Q5).
Generally, we received positive feedback from designers for our color recommendation system and the recommended results. \subsection{Limitations} \textbf{Accuracy decreases as the number of masked colors increases.} We generate color sequences with a maximum of 15 colors and train our model by masking 10$\%$ of the tokens in each sequence. That is, only one color is masked for prediction in most sequences during training. As the number of masked colors increases, the prediction accuracy decreases significantly, as shown in Table~\ref{tab:accuracy_multi_mask}. \begin{table}[h] \begin{center} \begin{tabular}{|l|c|c|c|c|c|} \hline Masked colors & 1 & 2 & 3 & 4 & 5 \\\hline Accuracy@1$\uparrow$ & 0.36 & 0.29 & 0.24 & 0.20 & 0.17 \\ \hline \end{tabular} \end{center} \caption{Top 1 accuracy for predicting different numbers of masked colors.} \label{tab:accuracy_multi_mask} \end{table} \textbf{Lack of diversity in recommended colors when more than one color is masked.} Although users can freely combine the recommended colors, and designers obtained satisfactory designs with our system in the interview study, recommending a complete palette for each element with a learned model remains an open problem. The recommended colors within the same element group are highly similar, as shown in Figure~\ref{fig:3color_recomm}. Furthermore, neutral colors carry semantic value in a color palette, so we do not filter out these high-frequency colors; because they occur frequently in the dataset, they have a higher probability of being recommended than chromatic colors. \section{Conclusions} We proposed a masked color model for multi-palette representation to recommend colors for vector graphic documents and developed an interactive system for recoloring specified colors in visual elements. The performance of the proposed system is experimentally verified through both quantitative and qualitative evaluations in comparison with the state-of-the-art Word2Vec-based model and the baseline model.
To our knowledge, our method opens the door to recommending colors for vector graphic design based on the multiple palettes of visual elements. In future work, we will explore improving complete-palette recommendation and fine-tune our model for practical applications. {\small \bibliographystyle{ieee_fullname}
\chapter{Spherical atom} \label{ch1} \section{Introduction} Quantum mechanics permits one to calculate with a high accuracy all parameters of an electron in an atom, such as its energy, squared angular momentum, projection of the angular momentum on the $Z$ axis and others. The square of the wave function determines the probability for an electron to be at a certain point in space. It is possible, however, to study the behavior of an electron in an atom in a more detailed way. One may invoke for this purpose methods developed in electrodynamics. Significantly, one can calculate many fundamental parameters of the electron in an atom without resorting to such strictly quantum-mechanical concepts as “wave function” or “operator”. It turns out also that such an approach provides in some cases a \textbf{more detailed} description of the behavior of an electron than would be possible within the framework of quantum mechanics. This opens a way to understanding this behavior more closely. Accordingly, our basic goal here will be not determination of the parameters of an electron in an atom, which are actually well known, but rather an attempt at describing the electron behavior in greater detail. Now if the various parameters of the electron calculated in the context of electrodynamics are found to agree with experimental data and values derived from quantum mechanics, this may be considered as supporting the assumptions formulated below on electron behavior (or, in the case of disagreement, as refuting these ideas). The first Chapter calculates the potential, kinetic, and total energies of the hydrogen atom for three states based on these assumptions and invoking only the concepts of electrodynamics and theoretical mechanics.
The second Chapter calculates, likewise drawing only from the concepts of electrodynamics and theoretical mechanics, the angular momentum of the hydrogen atom in the ground state, which turned out to be $\hbar/2$, in excellent agreement with experimental data. We are going to conduct the relevant reasoning and calculations in the nonrelativistic approximation. \section{Equation for the shape of a distributed electron charge in an atom} Quantum mechanics assumes the electron to be a point charge of magnitude $(-e)$. A point charge at rest can be presented in the form of an expansion in a Fourier integral in scalar spatial harmonics of the {\itshape\bfseries {charge}}: \begin{equation} \label{1.1.1} q(\mathbf{r}) = (-e)\delta(\mathbf{r}) = (-e)\int\limits_{-\infty} ^\infty f(\mathbf{k})e^{i\mathbf{kr}} d\mathbf{k}, \end{equation} where $d\mathbf{k}=dk_xdk_ydk_z$ denotes integration over the three Cartesian wave-vector components $k_x$, $k_y$, $k_z$. Here $k=2\pi/\lambda$ is the wave number, with $\lambda$ the wavelength of the corresponding {\itshape\bfseries {charge}} harmonic, and $\delta(\mathbf {r})$ is Dirac’s delta function. The spectral density $f(\mathbf{k})$ of the Fourier-integral expansion is defined in the following way: \begin{equation} \label{1.1.2} f(\mathbf{k})=\frac{1}{(2\pi)^3}\int\limits_{-\infty}^\infty \delta (\mathbf{r})e^{-i\mathbf{kr}} d\mathbf{r}, \end{equation} where $d\mathbf{r}=dxdydz$ denotes integration over the three Cartesian axes. Each harmonic $f(k_x)e^{ik_xx},$ $f(k_y)e^{ik_yy},$ $f(k_z)e^{ik_zz}$ in Eq.
(\ref{1.1.1}) is essentially a wave at rest with wave numbers $k_x,$ $k_y,$ $k_z.$ These are, however, not electromagnetic but rather {\itshape\bfseries{charge waves.}} If a point charge propagates with a constant velocity $\upsilon_x$ along the $x$ axis, it can be presented in the form of an expansion in moving harmonics: \begin{equation} \label{1.1.3} q({\mathbf{r}},t)=(-e)\delta(\mathbf{r}-\mathbf{v}t)=(-e) \int\limits_{-\infty}^\infty f(\mathbf{k})e^{i(\mathbf{kr}-wt)}d\mathbf{k}. \end{equation} Each harmonic $f(k_x)e^{i(k_xx-wt)}$ is a plane wave moving with a velocity $\upsilon_x$ along the $x$ axis. The frequency $w$ of all these waves is coupled to the wave number $k_x$ through the relation $k_x=w/\upsilon_x$. The $f(k_y)e^{ik_yy}$ and $f(k_z)e^{ik_zz}$ harmonics are, as before, waves at rest with wave numbers $k_y$ and $k_z$. It should be stressed that the above decompositions {\itshape\bfseries {are in no way }} expansions in de Broglie waves. These are nothing more than conventional expansions in spatial or spatial-temporal harmonics. Significantly, expansion of an object in harmonics is not just a mathematical abstraction. Each harmonic is in actual fact a real physical component of a given process. For instance, when sound or electromagnetic waves are decomposed into a spectrum each harmonic corresponding to a harmonic of the mathematical expansion is a real sound or electromagnetic wave exhibiting all the properties of a wave of this nature. The validity of this statement is not questioned any longer. Nevertheless, one could refer in this connection to the book by A. A. Kharkevich \cite{1} where this point is considered at length. In our case, we meet with the same situation in expanding a charge in separate harmonics of the kind of Eq. (\ref{1.1.3}), which represent in this particular case {\itshape\bfseries{charge waves.}} Consider now a point electron that enters an atom and starts rotating about the nucleus. 
When an electron rotates about the nucleus, each {\itshape\bfseries{charge}} harmonic adding up to the charge of the electron must satisfy the periodicity conditions. Because the lengths of the {\itshape\bfseries{charge}} waves making up the electron charge are different, the shape of the point charge of the electron should somehow change. For this reason, the electron, while being point-like in free space, will have to “spread out” in rotation about the nucleus, i.e., assume another shape, different from point-like. Significantly, this will be a real “spread-out”, a real change in the shape of the point charge, rather than the probability of finding a point charge at a certain point in space. Said otherwise, we have here what can be called distribution of the electron charge in space. Now what shape can a charge rotating about a nucleus assume? In other words, what will be the electron charge distribution in an atom? The Schr\"odinger equation permits one to calculate with a high precision the various parameters describing the state of an electron in an atom. This licenses us to assume that the charge of an electron rotating about a nucleus should take on the shape coinciding with that of an eigenfunction of Schr\"odinger’s equation, or of a function related in some way to eigenfunctions of the Schr\"odinger equation. The set of eigenfunctions of the Schr\"odinger equation is usually expressed in a spherical reference frame and is well known. Therefore, we are going to use in what follows the spherical coordinate system $r$, $\vartheta$, $\varphi$. The Cartesian system was used above only to make our reasoning more revealing. What are the eigenfunctions of the Schr\"odinger equation among which one could look for functions describing the shape of a rotating charge? We note immediately that the shape of the distribution of this charge can be looked for only among the $S$-state solutions of the Schr\"odinger equation.
Indeed, charge conservation dictates that the total charge of a distribution rotating about a nucleus must be equal to the electron charge. Only for the $S$ states does integration over the whole space yield a nonzero result; the integral over all other states gives zero. Therefore, only the $S$ state is capable of describing the presence of an electron in an atom. While the atom can contain, besides $S$, other charge states as well, they will be able only to modify the charge shape. \medskip Hence, {\bfseries\underline{in all situations}} the $S$ charge state {\bfseries\underline{ is present}} in an atom. The $S$ charge state {\bfseries\underline{ must be present}} in an atom {\bfseries\underline{always.}} \medskip Besides, it is only the $S$ states of the charge that can contribute to the electron potential energy. The potential energy of an electron is mediated by the Coulomb interaction of the electron with the nucleus. We identify the nucleus with a geometric point placed at the origin of the spherical reference frame. The electron charge is distributed somehow in space. In this case, the potential energy of interaction of this charge with the nucleus can be written as \begin{equation} \label{1.1.4} \mathcal E_{Pot}=\int e\delta(\mathbf{r})\Phi(\mathbf{r})dV, \end{equation} where $e\delta(\mathbf{r})$ is the nuclear charge, and $\Phi(\mathbf{r})$ is the potential generated by the electron charge distribution. Examining Eq. (\ref{1.1.4}), we see that the energy $\mathcal E_{Pot}$ is nonzero for the $S$ states only, because it is solely for the $S$ states that the potential $\Phi(0)$ is nonzero. While besides the $S$ states an atom may contain other charge states as well, they will not be able to contribute to the potential energy of electron interaction with the nucleus. This contribution can come from the $S$ states of charge only. \medskip Hence, {\bfseries\underline{in all situations}} the $S$ charge state {\bfseries\underline{ is present}} in an atom.
The $S$ charge state {\bfseries\underline{ must be present}} in an atom {\bfseries\underline{always.}} \medskip To find the charge shape assumed by an electron rotating about a nucleus, we write the corresponding equation in the form \begin{equation} \label{1.1.5} \bigtriangleup \rho(\mathbf{r})+\frac{4m_e}{\hbar^2} (\mathcal E-\mathcal E_{Pot})\rho(\mathbf{r})=0. \end{equation} In this equation, $\rho(\mathbf{r})$ is the density of the electron charge distribution we are looking for, $m_e$ is the electron mass, $\mathcal E$ is the eigenvalue of energy, $\mathcal E_{Pot}$ is the electron potential energy, and $\hbar$ is the Dirac constant: $\hbar=h/2\pi$, where $h$ is the Planck constant. Equation (\ref{1.1.5}) differs in form from the Schr\"odinger equation in that it is written not for the wave function $\Psi$ but rather for the density of electron charge distribution $\rho(\mathbf{r})$, and in the numerical coefficient 4 in the second term of the equation (in the Schr\"odinger equation, the coefficient is 2). The meaning of this difference will become clear later. Equation (\ref{1.1.5}) may be called {\itshape\bfseries{ “the equation of wave mechanics”}}. This name stresses that an electron, while being point-like in free space and, hence, obeying the laws of mechanics, changes its shape in an atom as a result of manifestation of its wave properties when expanded in charge waves. We are going to find the shape of the electron charge distribution by means of Eq. (\ref{1.1.5}). Equation (\ref{1.1.5}) contains the energy $\mathcal E_{Pot}$. This is the potential energy of an electron in the nuclear field. We do not know, however, the shape of the electron charge distribution and, hence, do not know the energy $\mathcal E_{Pot}$. The only thing we can do is to substitute for $\mathcal E_{Pot}$ the energy of interaction of a point nucleus with a point electron. 
In doing this, we have got ourselves into an ambiguity: indeed, we are looking for the shape of a distributed electron charge but substitute into Eq. (\ref{1.1.5}) the energy of a point electron. One should therefore verify that this will not give rise to an inconsistency. We shall check this later in the particular example of the hydrogen atom. Introducing the parameter $\mu_e=2m_e$, Eq. (\ref{1.1.5}) can be recast in the form \begin{equation} \label{1.1.6} \bigtriangleup \rho_\mu(\mathbf{r})+\frac{2\mu_e}{\hbar^2} (\mathcal E_\mu-\mathcal E_{Pot})\rho_\mu(\mathbf{r})=0. \end{equation} This equation coinciding in form with the Schr\"odinger equation, one can use the results of its solution, keeping in mind all the time that Eq. (\ref{1.1.6}) contains in place of the mass $m_e$ the parameter $\mu_e$ which is not the electron mass. Therefore in what follows we are going to label all the results derived from Eq. (\ref{1.1.6}) with index $\mu$. The quantities labeled by $\mu$ may judiciously be called “modified” to stress that they are related not to the real electron mass $m_e$ but rather to a parameter $\mu_e$. Accordingly, Eq. (\ref{1.1.6}) may be referred to as a “modified” equation of wave mechanics. It appears pertinent to specify now the relations connecting the commonly used quantities with the corresponding “modified” quantities. \begin{center}Modified mass $$ \mu_e=2m_e. $$ Modified Bohr radius \begin{equation} \label{1.1.7} a_\mu=\frac{\hbar^2}{\mu_ee^2}=\frac{\hbar^2}{2m_ee^2}=\frac{a}{2}, \end{equation} $$ \mbox{where}\ a=\frac{\hbar^2}{m_ee^2} \mbox{ is the Bohr radius.} $$ Modified reduced radius $$ \tau_\mu=\frac{r}{a_\mu}=2\tau, $$ \end{center} where $r$ is the radial coordinate of a spherical reference system, and $\tau=r/a$ is the reduced radius. We thus obtain $\tau_\mu a_\mu=\tau a=r$. 
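As a quick numerical sanity check of the equation of wave mechanics (a sketch for illustration only, not part of the derivation, and anticipating the Coulomb energy $\mathcal E_{Pot}=-e^2/r$ of the hydrogen case treated in the next section), one can verify that in units $e=a=m_e=\hbar=1$ the function $\rho=e^{-2r}$, i.e. the $1S$ shape with the modified Bohr radius $a_\mu=a/2$, satisfies Eq. (\ref{1.1.5}) with eigenvalue $\mathcal E=-e^2/a=-1$:

```python
# Sanity check (illustrative only): in units e = a = m_e = hbar = 1 we have
# 4 m_e / hbar^2 = 4, and Eq. (1.1.5) with E_Pot = -e^2/r reads
#   rho'' + (2/r) rho' + 4 (E + 1/r) rho = 0.
# Verify that rho(r) = exp(-2 r) solves it with E = -e^2/a = -1.
import math

def rho(r):
    return math.exp(-2.0 * r)

def residual(r, h=1e-5):
    # central finite differences for rho' and rho''
    d1 = (rho(r + h) - rho(r - h)) / (2 * h)
    d2 = (rho(r + h) - 2 * rho(r) + rho(r - h)) / h**2
    E = -1.0
    return d2 + (2.0 / r) * d1 + 4.0 * (E + 1.0 / r) * rho(r)

for r in (0.25, 0.5, 1.0, 2.0, 4.0):
    assert abs(residual(r)) < 1e-4, (r, residual(r))
print("Eq. (1.1.5) residual vanishes for rho = exp(-2 r) with E = -1")
```

Because the derivatives are taken by finite differences, the residual vanishes only to numerical accuracy.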
\section{Potential energy of distributed electron charge in a hydrogen atom} Let us find the potential energy of interaction of an electron with the nucleus in a hydrogen atom. One should first find for this purpose the shape of the distributed charge of an electron in the potential field of the nucleus and the energy of interaction of this distributed charge with the nucleus by methods of electrodynamics. To calculate the charge distribution we are looking for, one has to substitute into the equation of wave mechanics (\ref{1.1.5}), or Eq. (\ref{1.1.6}), the potential energy of electron interaction with the nucleus. As pointed out in Sec. 1.2, the only possibility open for us here is to treat this energy as interaction of a point nucleus with a point electron. The potential energy of interaction of a point nucleus with a point electron can be written as \begin{equation} \label{1.2.1} \mathcal E_{Pot}=-\frac{e^2}{r} \end{equation} where $r$ is the distance between the nucleus and the electron. We assume the nucleus to be at the origin of the spherical coordinate system. In this case $r$ is nothing else but the radial component of the spherical reference frame. Substituting Eq. (\ref{1.2.1}) into Eq. (\ref{1.1.5}), we come to \begin{equation} \label{1.2.2} \bigtriangleup \rho+\frac{4m_e}{\hbar^2}\left(\mathcal E+\frac{e^2} {r}\right)\rho=0. \end{equation} One has now to solve this equation and find the charge distribution $\rho$. In place of solving Eq. (\ref{1.2.2}), however, we can write the “modified” equation by substituting Eq. (\ref{1.2.1}) into Eq. (\ref{1.1.6}), to obtain \begin{equation} \label{1.2.3} \bigtriangleup \rho_\mu+\frac{2\mu_e}{\hbar^2}\left(\mathcal E_\mu+\frac{e^2}{r}\right)\rho_\mu=0. \end{equation} Recall that it is not the real electron mass $m_e$ but rather parameter $\mu_e=2m_e$ that enters this equation. Equation (\ref{1.2.3}) coincides in form with the Schr\"odinger equation for the hydrogen atom. 
The solutions to this equation are well known. In the form of most relevance to us they are presented in monograph \cite{2}. As pointed out in Sec. 1.2, of all the solutions we are interested in the spherical symmetric $S$ states only. We are writing out these solutions for the quantum numbers $n$ = 1, 2, 3 below (they are normalized against unity): $$ \rho_{\mu1S}=2e^{-\tau_\mu}, $$ \begin{equation} \label{1.2.4} \rho_{\mu2S}=\frac{1}{\sqrt2}e^{-\frac{\tau_\mu}{2}}\left (1-\frac{\tau_\mu}{2}\right), \end{equation} $$ \rho_{\mu3S}=\frac{2}{3\sqrt3}e^{-\frac{\tau_\mu}{3}}\left (1-\frac{2}{3}\tau_\mu+\frac{2}{27}\tau_\mu^2\right). $$ The variable in relations (\ref{1.2.4}) is the modified radius $\tau_\mu=r/a_\mu$, where $r$ is the radial component of the spherical coordinate system, and $a_\mu$ is the modified Bohr radius (see Eqs. (\ref{1.1.7})). Equations (\ref{1.1.7}) can be used to eliminate the $\mu$ index, with the solutions acquiring the form $$ \rho_{1S}=A_{1S}e^{-2\tau}, $$ \begin{equation} \label{1.2.5} \rho_{2S}=A_{2S}e^{-\frac{2\tau}{2}}(1-\tau), \end{equation} $$ \rho_{3S}=A_{3S}e^{-\frac{2\tau}{3}}(27-36\tau+8\tau^2). $$ We have introduced here normalization factors $A_{nS}$, so that there is no sense anymore in retaining the fractional coefficients in the parentheses. The variable in Eqs. (\ref{1.2.5}) is the reduced radius $\tau=r/a$. We are going to use besides $\tau$ in what follows the $r$ variable too. The solutions $\rho_{nS}$ are the densities of electron charge distribution in the atom. The total charge should be equal to the electron charge; therefore, the $\rho_{nS}$ functions should be normalized not to unity, as this is done in quantum mechanics, but rather to the electron charge $(-e)$: \begin{equation} \label{1.2.6} \int \rho_{nS}dV=-e. 
\end{equation} $dV$ is here an element of volume in the spherical coordinate system: \begin{equation} \label{1.2.7} dV=r^2\sin\vartheta d\vartheta d\varphi dr=a^2\tau^2\sin\vartheta d\vartheta d\varphi ad\tau=a^3\tau^2\sin\vartheta d\vartheta d\varphi d\tau. \end{equation} Integrating Eq. (\ref{1.2.6}), we come to the following values for the $A_{nS}$ coefficients: $$ A_{1S}=\frac{-e}{\pi a^3}, $$ \begin{equation} \label{1.2.8} A_{2S}=\frac{+e}{4\pi a^3}\cdot \frac{1}{4}, \end{equation} $$ A_{3S}=\frac{-e}{4\pi a^3}\cdot \frac{4}{27\cdot 27\cdot 3}. $$ Having obtained the solutions (\ref{1.2.5}) for the electron charge distribution in an atom, we can find the energy of interaction of this charge with the nucleus: \begin{equation} \label{1.2.9} \mathcal E_{Pot,nS}=\int \Phi_N\rho_{nS}dV. \end{equation} Here $\Phi_N=e/r=e/a\tau$ is the nuclear potential. Substituting now the expression for the nuclear potential and equations (\ref{1.2.5}) into Eq. (\ref{1.2.9}), and going over to a common variable $\tau$, we obtain after some straightforward calculations \begin{equation} \label{1.2.10} \mathcal E_{Pot,1S}=-\frac{e^2}{a},\qquad \mathcal E_{Pot,2S} =-\frac{e^2}{a}\cdot \frac{1}{4},\qquad \mathcal E_{Pot,3S} =-\frac{e^2}{a}\cdot \frac{1}{9}. \end{equation} We have used in the integration the well known formula (valid for an integer $n$) \begin{equation} \label{1.2.11} \int\limits_0^\infty x^ne^{-ax}dx=\frac{n!}{a^{n+1}}. \end{equation} The expressions for the energy (\ref{1.2.10}) coincide with the eigenvalues of Eq. (\ref{1.2.2}) although they were derived by another method. The expressions (\ref{1.2.10}) can be combined in one formula \begin{equation} \label{1.2.12} \mathcal E_{Pot,nS}=-\frac{e^2}{a}\cdot \frac{1}{n^2}. 
\end{equation} Equation (\ref{1.2.12}) coincides fully with the expression for the {\bfseries potential} energy of interaction between a nucleus and an electron which is well known in quantum mechanics (it could be obtained, for instance, from the expression $\mathcal E_{Pot,nS}=\int \Psi_{nS}^*\hat H \Psi_{nS} dV$, where $\hat H=-\frac{e^2}{r}$ is the potential energy operator). Thus, by normalizing the charge distribution against the electron charge and using standard methods of electrostatics, we have arrived at correct values of the energies of interaction of a distributed charge with the nucleus, which coincide with those known from quantum mechanics. \section{Fields and potentials of a distributed electron charge} Knowing the distribution of electron charge in an atom, we can readily calculate the electric fields and potentials generated by these charges. As pointed out in Sec. 1.2, of all the solutions of the wave mechanics equation we are interested in spherically symmetric states of charge only. This simplifies greatly our task. Let us find now the potentials and fields of a distributed electron charge. The electric field can be derived from the Gauss’ theorem (see, e.g., Ref. \cite{3}, p. 109): \begin{equation} \label{1.3.1} \oint\limits_S \mathbf E(\mathbf r)d\mathbf S=4\pi\int\limits_V\rho (\mathbf r')dV'=4\pi Q', \end{equation} where for the surface $S$ we can take a sphere of radius $r$, $Q'$ is the charge inside this sphere, $\rho(\mathbf r')$ is the charge density, and $V$ is the volume inside the sphere $S$. Because by symmetry of the system the field on the surface of such a sphere is constant (and has only one radial component $E_r(r)$), the Gauss’ theorem takes on the form \begin{equation} \label{1.3.2} E(r)\cdot 4\pi r^2=4\pi\int\limits_V \rho(\mathbf r')dV', \end{equation} whence \begin{equation} \label{1.3.3} E(r)=\frac{1}{r^2}\int\limits_V \rho(\mathbf r')dV'=\frac{4\pi}{r^2} \int\limits_0^r \rho(r'){r'}^2dr'. 
\end{equation} The potential $\Phi(r)$ can be calculated from the formula \begin{equation} \label{1.3.4} \Phi(r)=-\int\limits_\infty^r E(r')dr'. \end{equation} Substituting Eq. (\ref{1.3.3}) for the field $E(r)$ in this equality and integrating by parts, we come to the following expression for the potential \begin{equation} \label{1.3.5} \Phi(r)=\frac{4\pi}{r}\int\limits_0^r \rho(r'){r'}^2dr'+4\pi\int \limits_r^\infty \rho(r')r'dr'. \end{equation} Equations (\ref{1.3.3}) and (\ref{1.3.5}) can be found in monographs \cite{3} (pp. 113, 175) and \cite{4} (pp. 27, 228). The expressions for distributed charge (\ref{1.2.5}) involve the variable $\tau=r/a$. Equation (\ref{1.3.5}) expressed in these variables assumes the form \begin{equation} \label{1.3.6} \Phi(\tau)=\frac{4\pi a^2}{\tau}\int\limits_0^\tau \rho(\tau') {\tau'}^2d\tau'+4\pi a^2\int\limits_\tau^\infty \rho(\tau')\tau'd\tau'. \end{equation} Substituting the expressions for distributed charge (\ref{1.2.5}) in Eq. (\ref{1.3.6}), we come to the following relations for the potential of the distributed electron charge $$ \Phi_{1S}=-\frac{e}{a\tau}+\frac{e}{a\tau}e^{-2\tau}(1+\tau), $$ \begin{equation} \label{1.3.7} \Phi_{2S}=-\frac{e}{a\tau}+\frac{e}{a\tau}e^{-\frac{2\tau}{2}} \cdot\frac{1}{4}(\tau^2+3\tau+4), \end{equation} $$ \Phi_{3S}=-\frac{e}{a\tau}+\frac{e}{a\tau}e^{-\frac{2\tau}{3}} \cdot\frac{1}{27\cdot9}(8\tau^3+36\tau^2+27\cdot5\tau+27\cdot9). $$ Recalling that $\tau=r/a$, these formulas can be made more revealing by a corresponding transformation $$ \Phi_{1S}=-\frac{e}{r}+\frac{e}{r}e^{-2\tau}(1+\tau), $$ \begin{equation} \label{1.3.8} \Phi_{2S}=-\frac{e}{r}+\frac{e}{r}e^{-\frac{2\tau}{2}}\cdot \frac{1}{4}(\tau^2+3\tau+4), \end{equation} $$ \Phi_{3S}=-\frac{e}{r}+\frac{e}{r}e^{-\frac{2\tau}{3}}\cdot \frac{1}{27\cdot9}(8\tau^3+36\tau^2+27\cdot5\tau+27\cdot9). 
$$ Examining expressions (\ref{1.3.8}), we see immediately that the potentials of distributed electron charge in a hydrogen atom are actually a sum of the potential of a point charge $(-e)$ placed at the origin and the potential of a point charge $(+e)$ but with an exponential factor, likewise located at the origin. Consider now the limiting cases of the behavior of the potential. For $r\to\infty$, the second term in expressions (\ref{1.3.8}) vanishes by virtue of the exponential factor, with the potentials reducing to a potential of a point charge $(-e)$ placed at the origin. For $r\to0$, the exponentials can be expanded in a series, with two terms retained. This leads us to \begin{equation} \label{1.3.9} \Phi_{1S}(0)=-\frac{e}{a},\qquad \Phi_{2S}(0)=-\frac{e}{a}\cdot \frac{1}{4},\qquad \Phi_{3S}(0)=-\frac{e}{a}\cdot\frac{1}{9}. \end{equation} Because we assume the nucleus to be point-like, located at the origin, and having a charge $q_N=(+e)$, the potential energy of interaction of the nucleus with a distributed electron charge can be derived without integration \begin{equation} \label{1.3.10} \mathcal E_{nS}=q_{N}\Phi_{nS}(0)=e\Phi_{nS}(0). \end{equation} This yields for the interaction energy \begin{equation} \label{1.3.11} \mathcal E_{1S}=-\frac{e^2}{a},\qquad \mathcal E_{2S}=-\frac{e^2} {a}\cdot\frac{1}{4},\qquad \mathcal E_{3S}=-\frac{e^2}{a}\cdot \frac{1}{9}. \end{equation} These expressions coincide naturally with Eqs. (\ref{1.2.10}) calculated from Eq. (\ref{1.2.9}). Calculate now the fields corresponding to the charge distributions obtained. By virtue of the spherical symmetry of the charge distribution, the fields will have only one radial component $E_r$: $E_r=-\bigtriangledown_r\Phi=-\frac{1}{a} \bigtriangledown_\tau\Phi$. 
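The closed-form results above can be cross-checked numerically (a sketch for illustration only, in units $e=a=1$): the total charge of Eq. (\ref{1.2.6}), the interaction energy of Eq. (\ref{1.2.10}), and the central value $\Phi_{1S}(0)=-e/a$ of Eq. (\ref{1.3.9}) all follow from simple quadratures of $\rho_{1S}$:

```python
# Numerical cross-check (illustrative only, units e = a = 1) of the 1S state:
# total charge (1.2.6) with A_1S = -e/(pi a^3), interaction energy (1.2.10),
# and the central potential Phi_1S(0) = -e/a from the second term of (1.3.6).
import math

A1S = -1.0 / math.pi                      # Eq. (1.2.8) with e = a = 1
rho = lambda t: A1S * math.exp(-2.0 * t)  # rho_1S, Eq. (1.2.5)

def simpson(f, lo, hi, n=4000):
    # composite Simpson rule (n must be even)
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3.0

# Total charge, Eq. (1.2.6): dV = 4 pi tau^2 dtau -> should give -e = -1
Q = simpson(lambda t: rho(t) * 4 * math.pi * t**2, 0.0, 40.0)
# Potential energy, Eq. (1.2.9) with Phi_N = e/(a tau) -> -e^2/a = -1
E_pot = simpson(lambda t: (1.0 / t) * rho(t) * 4 * math.pi * t**2, 1e-12, 40.0)
# Potential at the nucleus, Eq. (1.3.6) at tau = 0 -> -e/a = -1
Phi0 = simpson(lambda t: rho(t) * 4 * math.pi * t, 0.0, 40.0)

for value in (Q, E_pot, Phi0):
    assert abs(value - (-1.0)) < 1e-5, value
print("Q = -e, E_Pot,1S = -e^2/a, Phi_1S(0) = -e/a confirmed numerically")
```

The same quadratures, applied to $\rho_{2S}$ and $\rho_{3S}$, reproduce the factors $1/4$ and $1/9$ of Eqs. (\ref{1.2.10}) and (\ref{1.3.9}).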
Taking a derivative of expressions (\ref{1.3.7}), we come to $$ E_{1S}=-\frac{e}{a^2\tau^2}+\frac{e}{a^2\tau^2}e^{-2\tau} (2\tau^2+2\tau+1), $$ \begin{equation} \label{1.3.12} E_{2S}=-\frac{e}{a^2\tau^2}+\frac{e}{a^2\tau^2}e^{-\tau} \cdot \frac{1}{4}(\tau^3+2\tau^2+4\tau+4), \end{equation} $$ E_{3S}=-\frac{e}{a^2\tau^2}+\frac{e}{a^2\tau^2}e^{-\frac{2\tau}{3}} \cdot\frac{1}{27\cdot27}(16\tau^4+24\tau^3+9\cdot18\tau^2+27 \cdot18\tau+27\cdot27). $$ Just as in the case with the potential, we rewrite Eqs. (\ref{1.3.12}) in a more revealing way $$ E_{1S}=-\frac{e}{r^2}+\frac{e}{r^2}e^{-2\tau}(2\tau^2+2\tau+1), $$ \begin{equation} \label{1.3.13} E_{2S}=-\frac{e}{r^2}+\frac{e}{r^2}e^{-\tau}\cdot \frac{1}{4} (\tau^3+2\tau^2+4\tau+4), \end{equation} $$ E_{3S}=-\frac{e}{r^2}+\frac{e}{r^2}e^{-\frac{2\tau}{3}}\cdot \frac{1}{27\cdot27}(16\tau^4+24\tau^3+9\cdot18\tau^2+27 \cdot18\tau+27\cdot27). $$ Examining Eqs. (\ref{1.3.13}), we see that in all cases the field of a distributed electron charge may be considered as a sum of two fields, more specifically, of a field of a point charge $(-e)$ located at the origin plus that of a point charge $(+e)$ but with an exponential factor, which is likewise placed at the origin. Consider now the extreme cases. For $r\to\infty$, the second term in all expressions vanishes by virtue of the exponential factor, to leave the field of a point charge $(-e)$ at the origin. To study the behavior of the field for $r\to0$, one can expand the exponential to third order. Substituting this expansion in Eqs. (\ref{1.3.12}), we readily see that all fields in this case vanish. The expressions for the potential $\Phi_{1S}$ and field $E_{1S}$ of the $1S$ state can be found in Refs. \cite{3} (pp. 113, 176) and \cite{4} (pp. 27 and 228). The field acting in the atom is actually a sum of the field of the nucleus $E_N=e/r^2$ and of that created by the distributed charge (\ref{1.3.12}), (\ref{1.3.13}). 
At a certain distance from the nucleus this total field will tend to zero by virtue of the exponential factor. As one approaches the nucleus, the total field resembles the field of the nucleus $e/r^2$. Similarly, the potential acting in the atom is a sum of that created by the nucleus $\Phi_N=e/r$ and of the distributed charge potential (\ref{1.3.7}), (\ref{1.3.8}). At some distance from the nucleus, this total potential tends to zero by virtue of the exponential factor. As one comes closer to the nucleus, the total potential approaches in form the nuclear potential $e/r$. \section{Kinetic and total energies of a distributed electron charge in an atom} We have seen that the charge which was point-like in free space spreads out in an atom to become a distributed rather than a point charge. In free space, however, the electron has not only a point charge but a point mass as well. On entering an atom, this point mass has to spread out just as the point charge did. The shape of the mass distribution should be analogous to that of the charge distribution, because it forms by summation of absolutely identical harmonics in both cases. Indeed, a point electron possesses both a point charge and a point mass. The point charge and mass can be expanded in terms of harmonics as this was done in series (\ref{1.1.1}) or (\ref{1.1.3}); a difference may appear only in the common coefficient and the common sign of the harmonics. The velocities and directions of motion of the harmonics also coincide, because both the charge and the mass belong to the same point electron. Therefore, as the electron moves in its circular trajectory, summation of all harmonics in which the mass was expanded should produce the same distribution of mass as that of the charge (to within the common coefficient and sign). In other words, we deal here not with a distribution of charge or that of mass but rather with a distribution of {\itshape\bfseries the charge/mass object}.
Therefore, we can write an equation for the distribution of mass similar to Eq. (\ref{1.1.5}): \begin{equation} \label{1.4.1} \bigtriangleup m(\mathbf{r})+\frac{4m_e}{\hbar^2}(\mathcal E-\mathcal E_{Pot})m(\mathbf{r})=0. \end{equation} For the hydrogen atom, this equation can be recast to the form of Eq. (\ref{1.2.2}): $$ \bigtriangleup m+\frac{4m_e}{\hbar^2}\left(\mathcal E+ \frac{e^2}{r}\right)m=0. $$ We write the solutions to this equation in a form similar to that of Eqs. (\ref{1.2.5}): $$ m_{1S}=B_{1S}e^{-2\tau}, $$ \begin{equation} \label{1.4.2} m_{2S}=B_{2S}e^{-\frac{2\tau}{2}}(1-\tau), \end{equation} $$ m_{3S}=B_{3S}e^{-\frac{2\tau}{3}}(27-36\tau+8\tau^2). $$ To find the coefficients $B_{nS}$, one first has to normalize the $m_{nS}$ functions against the electron mass $m_e$: \begin{equation} \label{1.4.3} \int m_{nS}dV=m_e. \end{equation} Integration yields the following expressions for the $B_{nS}$ coefficients: $$ B_{1S}=\frac{m_e}{\pi a^3}, $$ \begin{equation} \label{1.4.4} B_{2S}=\frac{-m_e}{4\pi a^3}\cdot \frac{1}{4}, \end{equation} $$ B_{3S}=\frac{m_e}{4\pi a^3}\cdot \frac{4}{27\cdot 27\cdot 3}. $$ Thus, we have found the shape of the mass distribution. To be precise, this is not the shape of the mass distribution alone but rather that of {\itshape\bfseries the charge/mass object}. What is more, examining Eqs. (\ref{1.2.8}) and (\ref{1.4.4}) we see that the distributions of charge and mass are related through \begin{equation} \label{1.4.5} m(\mathbf r)=-\frac{m_e}{e} \rho(\mathbf r). \end{equation} In what follows, in all cases where the term “charge” or “mass” appears it will be understood that in actual fact this is {\itshape\bfseries the charge/mass object}. A comment is appropriate here. The mass distributions (\ref{1.4.2}) are normalized against the electron mass, so the total mass is always positive. Equations (\ref{1.4.2}) contain, however, alternating polynomials, with the result that within some intervals the mass density may acquire a negative sign. 
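The coefficients (\ref{1.4.4}) can be confirmed by direct integration (an illustrative sketch, not part of the original text; units $a=m_e=1$, with the angular part of $dV$ contributing $4\pi$):

```python
# Check that Int m_nS dV = m_e for the distributions (1.4.2) with the
# B_nS of Eq. (1.4.4); the angular integration contributes a factor 4*pi.
import sympy as sp

tau = sp.symbols('tau', positive=True)

m_nS = [sp.exp(-2*tau)/sp.pi,                                            # B_1S = m_e/(pi a^3)
        -sp.exp(-tau)*(1 - tau)/(16*sp.pi),                              # B_2S = -m_e/(16 pi a^3)
        sp.exp(-sp.Rational(2, 3)*tau)*(27 - 36*tau + 8*tau**2)/(2187*sp.pi)]  # B_3S

for n, m in enumerate(m_nS, start=1):
    total = 4*sp.pi*sp.integrate(m*tau**2, (tau, 0, sp.oo))
    assert total == 1, n        # total mass equals m_e in these units
print("all three m_nS integrate to m_e")
```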
There is nothing peculiar in this from the mathematical point of view, but theoretical mechanics does not operate with such a notion as negative mass. Most probably, the “negativeness” of the mass may physically manifest itself in combination with some other parameters, for instance, in the momentum or angular momentum. The momentum and angular momentum may have either sign, and the negative sign means only motion or rotation in the opposite direction. Besides the shape of the mass distribution, one can envisage some other parameters as well which are connected with mass. Indeed, an electron in an atom possesses kinetic energy. Hence, its mass should move somehow. The only internal motion allowed for an atom is rotation. In other words, we have to find the form of this rotation. But neither Eq. (\ref{1.4.1}) nor Eq. (\ref{1.1.5}) can yield the form of rotation, because these relations do not contain any parameters of motion at all. Yet some form of rotation has to be present in an atom. There is, however, another consideration. By Earnshaw's theorem, a stable static configuration of electric charges cannot exist without the involvement of forces of non-electric origin. For this reason, the static charge distributions (\ref{1.2.5}) are intrinsically unstable. But the atom is stable. Only rotation can impart stability to the atom. Hence, the charges described by Eqs. (\ref{1.2.5}) should rotate. Motion (considered in the nonrelativistic approximation) cannot change the potential energy of a distributed charge. The potential energy of a rotating charge distribution can therefore be found by treating the charge as if it were at rest. Indeed, any charge which has departed from a given point is replaced immediately by an identical charge at this point. The potential energy depends only on the magnitude of the charge at a given point and is independent of whether this is the charge that has just left or the one that has just arrived. 
\medskip Let us look now for the overall pattern of rotation of a distributed charge and a distributed mass. What are the basic considerations we should start from? First, we know from quantum mechanics the expression for the kinetic energy of a rotating electron $\mathcal E_{Kin}$ (derived, for instance, from the relation $\mathcal E_{Kin,nS}=\int\Psi_{nS}^* \hat H \Psi_{nS} dV$, where $\hat H=\frac{\hat p^2}{2m}$ is the kinetic energy operator, and $\hat p=-i\hbar\bigtriangledown$ is the momentum operator): \begin{equation} \label{1.4.6} \mathcal E_{Kin,nS}=\frac{e^2}{2a}\cdot\frac{1}{n^2}. \end{equation} Second, quantum mechanics offers the following expression for the angular momentum of an electron in the $S$ states, $M_{nS}$: \begin{equation} \label{1.4.7} M_{nS}=const=\hbar/2, \end{equation} because an electron has in the $S$ states only the spin moment $\hbar/2$. (Recall that nonrelativistic quantum mechanics required that the angular momenta of $S$ states be zero, while experiment showed them to be $\hbar/2$ rather than zero.) Thus, rotation should be such as to satisfy at least these two conditions, (\ref{1.4.6}) and (\ref{1.4.7}). The simplest assumption that comes immediately to mind is as follows. The distributions of mass and charge rotate as a whole, i.e., as a continuous body. Let us assume that rotation occurs around a vertical axis $(\vartheta=0)$. Let us call it for convenience the $Z$ axis, and the plane $\vartheta=\pi/2$, the equatorial plane. In this case, the velocity $v_\varphi$ of motion of any point is proportional to the radius and the sine of the angle $\vartheta$: \begin{equation} \label{1.4.8} v_\varphi=Ar\sin\vartheta=Aa\tau\sin\vartheta, \end{equation} where $A$ is a constant. The above suggests that as one moves away from the axis of rotation, $v_\varphi$ {\bfseries increases}, and as one approaches this axis, it {\bfseries decreases}. Let us verify that this assumption meets conditions (\ref{1.4.6}) and (\ref{1.4.7}). 
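This check can also be previewed with a short symbolic computation (an illustrative sketch, not part of the original derivation; units $a=m_e=A=1$, volume element $dV=\tau^2\sin\vartheta\,d\tau\,d\vartheta\,d\varphi$), evaluating the kinetic-energy integral $\int m v_\varphi^2/2\,dV$ and the angular momentum $\int m v_\varphi r\sin\vartheta\,dV$ for the first two states:

```python
# Solid-body rotation v_phi = A*r*sin(theta), with m_1S and m_2S taken from
# Eq. (1.4.2) and the B_nS of Eq. (1.4.4); units a = m_e = A = 1.
import sympy as sp

tau, th = sp.symbols('tau theta', positive=True)

masses = [sp.exp(-2*tau)/sp.pi,                    # m_1S
          -sp.exp(-tau)*(1 - tau)/(16*sp.pi)]      # m_2S

for n, m in enumerate(masses, start=1):
    v = tau*sp.sin(th)                             # v_phi = A r sin(theta)
    dV = tau**2*sp.sin(th)                         # times dtau dtheta dphi
    E = 2*sp.pi*sp.integrate(m*v**2/2*dV, (tau, 0, sp.oo), (th, 0, sp.pi))
    M = 2*sp.pi*sp.integrate(m*v*tau*sp.sin(th)*dV, (tau, 0, sp.oo), (th, 0, sp.pi))
    print(f"n={n}: E_kin = {E}, M = {M}")
# n=1 gives E_kin = 1, M = 2; n=2 gives E_kin = 8, M = 16 -- the energies grow
# instead of falling as 1/n^2, and the momenta are anything but constant.
```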
Substituting the expressions for the mass (\ref{1.4.2}) in the relation \begin{equation} \label{1.4.9} \mathcal E_{Kin,nS}=\int\frac{m_{nS}v_\varphi^2}{2}dV, \end{equation} where the element of volume $dV$ is defined by Eq. (\ref{1.2.7}), we come to \begin{equation} \label{1.4.10} \mathcal E_{Kin,1S}=m_eA^2a^2\cdot1,\quad \mathcal E_{Kin,2S} =m_eA^2a^2\cdot8,\quad \mathcal E_{Kin,3S}=m_eA^2a^2\cdot33. \end{equation} We calculate now the angular momentum around the $Z$ axis $(\vartheta=0)$ \begin{equation} \label{1.4.11} M_{nS}=\int m_{nS}v_\varphi r\sin\vartheta dV. \end{equation} For the momenta $M_{nS}$ we obtain \begin{equation} \label{1.4.12} M_{1S}=m_eAa^2\cdot2,\qquad M_{2S}=m_eAa^2\cdot16,\qquad M_{3S}=m_eAa^2\cdot66. \end{equation} Examining now Eqs. (\ref{1.4.10}) and (\ref{1.4.12}), we see that the energies $\mathcal E_{Kin,nS}$ do not scale as $1/n^2$, and that the angular momenta $M_{nS}$ are in no way constant. Said otherwise, the conditions (\ref{1.4.6}) and (\ref{1.4.7}) are not met. This means that the distributions of charge and mass {\bfseries \underline{cannot rotate as a whole}}, i.e., as a solid body. \medskip To establish the pattern of rotation of the charge and mass that can exist in an atom, the following reasoning appears appropriate. Consider circular rotation of an element of mass $dm$ with a negative charge $dq$ in the equatorial plane about a nucleus with charge $(+e)$ as about the center. Rotation of an element of mass $dm$ along a circle of radius $R$ is driven by the action on the element of mass of a centripetal force $d\mathbf F$ toward the center: $d\mathbf F=-\frac{dmv_\varphi^2}{R}\mathbf n$, where $v_\varphi$ is the velocity of the element of mass, and $\mathbf n$ is the unit vector in the direction of the radius $R$. The negative sign of the centripetal force $d\mathbf F$ signifies that the force is directed oppositely to the unit vector $\mathbf n$. 
In our particular case, the centripetal force is essentially the force of attraction between the charges, $d\mathbf F=\frac{edq}{R^2}\mathbf n$. Equating these two expressions, canceling $R$ and dividing by two, we finally obtain \begin{equation} \label{1.4.13} \frac{dmv_\varphi^2}{2}=\frac{1}{2}\cdot\left(-\frac{edq}{R}\right). \end{equation} We see on the left the kinetic energy of the element of mass, and on the right, one half of the potential energy taken with the opposite sign. This means that circular rotation of an element of mass $dm$ obeys the relation \begin{equation} \label{1.4.14} \mathcal E_{Kin}=\frac{1}{2}(-\mathcal E_{Pot}). \end{equation} The equality (\ref{1.4.14}) expresses the Clausius theorem on the virial of forces for circular motion of an element of mass in a Coulomb potential well (see, e.g., \cite{5}, p. 76). Because the potential, kinetic, and total energies of an element of mass moving circularly are constant, there is no need for averaging the energies, as would be required by the virial theorem in its general form. Let us now write the virial theorem in its general form for an element, or a sum of elements, of mass/charge moving in a Coulomb potential well (see, e.g., \cite{5}): \begin{equation} \label{1.4.15} \overline{\mathcal E_{Kin}}=\frac{1}{2}(-\overline{\mathcal E_{Pot}}), \end{equation} where the bar on top signifies averaging over time. Equation (\ref{1.4.13}) yields the velocity of circular motion of an element $dm$ (at the equator) \begin{equation} \label{1.4.16} v_\varphi=\sqrt{-\frac{e}{R}\cdot\frac{dq}{dm}}, \end{equation} and, combined with Eq. (\ref{1.4.5}), we finally have \begin{equation} \label{1.4.17} v_\varphi=\frac{e}{\sqrt{m_e}}\cdot\frac{1}{\sqrt{R}}, \end{equation} i.e., the velocity of an element of mass is inversely proportional to the square root of the distance between the nucleus and the element. 
Thus, as one approaches the axis of rotation, {\bfseries\underline{the velocity increases}}, and as one moves away from it, {\bfseries\underline{the velocity decreases}}. Because at the equator the radius $R$ of the circle along which the element of mass moves coincides with the coordinate $r$ of the spherical reference frame, Eq. (\ref{1.4.17}) can be recast in the form \begin{equation} \label{1.4.18} v_\varphi=\frac{\alpha c}{\sqrt{\tau}}. \end{equation} In this expression, the radius of the circle $R=r=a\tau$, where $a$ is, as before, the Bohr radius: $a=\frac{\hbar^2}{m_ee^2}$, $\alpha$ is the fine structure constant: $\alpha=\frac{e^2}{\hbar c}$, and $c$ is the velocity of light. Finding the velocity of an element of mass not lying in the equatorial plane meets with some difficulties. In this case, the centripetal force does not coincide in direction with the force of attraction to the nucleus, and this makes the above reasoning invalid there. To describe the rotation of a distributed charge as a whole, two alternative assumptions can be made: 1. Rotation occurs in such a way that the linear velocity $v_\varphi$ of each element depends only on its distance from the nucleus; 2. Rotation occurs in such a way that elements located on a sphere of radius $r$ have the same angular velocity. In the first case, the relation for the velocity of motion of elements of a distributed charge can be written in a way similar to Eq. (\ref{1.4.18}) \begin{equation} \label{1.4.19} v_\varphi=\frac{k'\alpha c}{\sqrt{\tau}}. \end{equation} In the second case, the equation assumes the form \begin{equation} \label{1.4.20} v_\varphi=\frac{k''\alpha c}{\sqrt{\tau}}\sin\vartheta. \end{equation} Here $k'$ and $k''$ are some coefficients. In both cases, {\itshape\bfseries the charge rotates in a layered pattern depending on the radius} $r$, with {\itshape\bfseries the velocity of rotation being the higher, the closer the charge element is to the nucleus}. 
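For scale (an illustrative aside, not part of the original text): at $\tau=1$, i.e. at one Bohr radius, Eq. (\ref{1.4.18}) gives the familiar Bohr-orbit velocity $\alpha c$, so the motion remains nonrelativistic over the region where the charge density is appreciable:

```python
# Numerical value of v_phi = alpha*c/sqrt(tau) at tau = 1 and tau = 1/4,
# using CODATA values from scipy.constants.
from scipy.constants import alpha, c
import math

for tau in (1.0, 0.25):
    v = alpha*c/math.sqrt(tau)
    print(f"tau = {tau}: v_phi = {v:.4g} m/s ({v/c:.3%} of c)")
# tau = 1 gives about 2.188e6 m/s (0.730% of c); even at tau = 1/4 the
# velocity only doubles and is still safely nonrelativistic.
```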
Thus, of the two versions of the velocity, (\ref{1.4.19}) and (\ref{1.4.20}), we have to choose the right one. Substituting the velocity from Eq. (\ref{1.4.19}) in Eq. (\ref{1.4.9}), we obtain \begin{equation} \label{1.4.21} \mathcal E_{Kin,1S}=k'^2\frac{e^2}{2a}\cdot1,\quad \mathcal E_{Kin,2S}=k'^2\frac{e^2}{2a}\cdot\frac{1}{4},\quad \mathcal E_{Kin,3S}=k'^2\frac{e^2}{2a}\cdot\frac{1}{9}. \end{equation} Substituting now the velocity from Eq. (\ref{1.4.20}) in Eq. (\ref{1.4.9}), we come to \begin{equation} \label{1.4.22} \mathcal E_{Kin,1S}=k''^2\frac{e^2}{2a}\cdot\frac{2}{3}\cdot1, \quad \mathcal E_{Kin,2S}=k''^2\frac{e^2}{2a}\cdot\frac{2}{3} \cdot\frac{1}{4},\quad \mathcal E_{Kin,3S}=k''^2\frac{e^2}{2a} \cdot\frac{2}{3}\cdot\frac{1}{9}. \end{equation} An analysis of Eqs. (\ref{1.4.21}) and (\ref{1.4.22}) suggests that the energies $\mathcal E_{Kin,nS}$ scale as $1/n^2$, as required by Eq. (\ref{1.4.6}). The virial theorem (\ref{1.4.15}) permits us now to find the $k'$ and $k''$ coefficients. We come eventually to $k'=1$, $k''=\sqrt{3/2}$. For the kinetic energy we obtain in the two cases the following expressions \begin{equation} \label{1.4.23} \mathcal E_{Kin,1S}=\frac{e^2}{2a}\cdot1,\qquad \mathcal E_{Kin,2S} =\frac{e^2}{2a}\cdot\frac{1}{4},\qquad \mathcal E_{Kin,3S} =\frac{e^2}{2a}\cdot\frac{1}{9}. \end{equation} Equations (\ref{1.4.23}) can be combined to yield \begin{equation} \label{1.4.24} \mathcal E_{Kin,nS}=\frac{e^2}{2a}\cdot\frac{1}{n^2}. \end{equation} Equation (\ref{1.4.24}) coincides exactly with Eq. (\ref{1.4.6}) for the kinetic energy of an electron, which is well known from quantum mechanics. Let us determine now the total energy of the electron. 
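Before doing so, note that the scaling (\ref{1.4.24}) can be confirmed by direct integration (an illustrative sketch, not part of the original text; units $a=e=m_e=1$, in which $\alpha c=1$):

```python
# E_kin,nS = (1/2) Int m_nS * v^2 dV with v = 1/sqrt(tau)  (k' = 1, so
# v^2 = 1/tau in these units), for the mass distributions (1.4.2);
# the angular part of dV contributes 4*pi.
import sympy as sp

tau = sp.symbols('tau', positive=True)

radial = [sp.exp(-2*tau)/sp.pi,                                          # m_1S
          -sp.exp(-tau)*(1 - tau)/(16*sp.pi),                            # m_2S
          sp.exp(-sp.Rational(2, 3)*tau)*(27 - 36*tau + 8*tau**2)/(2187*sp.pi)]  # m_3S

for n, m in enumerate(radial, start=1):
    E = 4*sp.pi*sp.integrate(m*(1/tau)*tau**2, (tau, 0, sp.oo))/2
    assert E == sp.Rational(1, 2*n**2)       # = (e^2/2a) / n^2
print("E_kin,nS = (e^2/2a)/n^2 confirmed for n = 1, 2, 3")
```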
By adding the potential energy (\ref{1.2.10}) found earlier to the kinetic energy (\ref{1.4.23}), we arrive at the total electron energy \begin{equation} \label{1.4.25} \mathcal E_{1S}=-\frac{e^2}{2a}\cdot1,\qquad \mathcal E_{2S}=-\frac{e^2}{2a}\cdot\frac{1}{4},\qquad \mathcal E_{3S}=-\frac{e^2}{2a}\cdot\frac{1}{9}. \end{equation} Expressions (\ref{1.4.25}) can now be combined \begin{equation} \label{1.4.26} \mathcal E_{nS}=-\frac{e^2}{2a}\cdot\frac{1}{n^2}, \end{equation} into an expression for the total electron energy, likewise well known from quantum mechanics. Thus, based on the formulas and methods of electrodynamics and mechanics and applying the above approach, we have arrived at the correct values of the potential, kinetic, and total energies of an electron in a hydrogen atom. Significantly, in so doing we have not invoked such purely quantum-mechanical concepts as the wave function and the operator. \medskip Let us calculate now the angular momentum of a distributed mass about a vertical axis $Z$ $(\vartheta=0)$. Substituting the velocity $v_\varphi$ from Eqs. (\ref{1.4.19}) and (\ref{1.4.20}) into the expression for the angular momentum (\ref{1.4.11}) we come, respectively, to \begin{equation} \label{1.4.27} M_{1S}=0.92\hbar,\qquad M_{2S}=1.63\hbar,\qquad M_{3S}=2.40\hbar, \end{equation} and \begin{equation} \label{1.4.28} M_{1S}=0.96\hbar,\qquad M_{2S}=1.70\hbar,\qquad M_{3S}=2.49\hbar. \end{equation} To calculate the radial integrals, we have to use now, in place of Eq. (\ref{1.2.11}) valid for an integer exponent $n$, the more general formula \begin{equation} \label{1.4.29} \int\limits_0^\infty e^{-ax}x^ndx=\frac{\Gamma(n+1)}{a^{n+1}} \qquad (\mbox{for }a>0,\quad n>-1), \end{equation} because the exponent of $x$ assumes not integer but rather half-integer values. Here $\Gamma(n+1)$ is the gamma function. Examining now Eqs. 
(\ref{1.4.23}) and (\ref{1.4.24}), we see that we have obtained correct values for the kinetic energy of the electron and the correct dependence on the number $n$. As for the angular momenta, they are not equal to $\hbar/2$ and, more than that, they are not equal to a constant value at all. Hence, the assumptions concerning the velocity of rotation of a distributed charge are not accurate, and this problem requires further study. In Chapter 2 of this Paper, the velocity of rotation of a distributed charge is analyzed in more detail, and it is demonstrated that the angular momentum of the ground state is $\hbar/2$, in full agreement with experimental observations. \chapter{Non-spherical atom} \label{ch2} \section{Shape of a non-spherical charge} As shown in Chapter 1 of the present Paper, if the electron charge distribution is assumed to be spherically symmetric, one cannot obtain a correct value of the projection of the angular momentum on the $Z$ axis. One may therefore suggest that the distribution of the charge/mass does depend somehow on the angle $\vartheta$. However, we do not as yet have an equation more accurate than Eq. (\ref{1.1.5}) of wave mechanics and, therefore, we cannot know a more accurate solution. This implies that the assumption concerning the actual form of the dependence of the distribution on the angle $\vartheta$ will have to be chosen intuitively. This also means that we can no longer use Eq. (\ref{1.2.2}). We are going to employ, however, the main conclusions derived by means of this equation. We shall apply, in particular, the radial dependence of the charge density which was derived for a spherically symmetric charge distribution. Consider the hydrogen atom in the ground state. The simplest assumptions that appear reasonable in this case are that the distributed charge scales with $\vartheta$ as $\sin\vartheta$ or $\sin^2\vartheta$. 
Under these conditions, the charge density can be written as follows \begin{equation} \label{2.1.1} \rho'_{1NS}=A'_{1NS}\cdot e^{-2\tau}\cdot\sin\vartheta, \end{equation} or \begin{equation} \label{2.1.2} \rho''_{1NS}=A''_{1NS}\cdot e^{-2\tau}\cdot\sin^2\vartheta. \end{equation} Here the subscript $1NS$ identifies the state which is similar to $1S$ but not spherically symmetric. The coefficients $A'_{1NS}$ and $A''_{1NS}$ are found by normalization against the electron charge $(-e)$ \begin{equation} \label{2.1.3} \int \rho'_{1NS}dV=-e,\qquad \int \rho''_{1NS}dV=-e. \end{equation} We finally arrive at \begin{equation} \label{2.1.4} A'_{1NS}=\frac{-4e}{a^3\pi^2},\qquad A''_{1NS}=\frac{-3e}{2\pi a^3}. \end{equation} One may just as reasonably assume that the angular dependence is a combination of several spherical functions $Y_{lm}$. As shown in Chapter 1, the $l=0$ spherical function must always be present in an atom, because it is only this function that permits description of the charge in an atom. All the other functions are capable of affecting the charge shape only, without adding or subtracting any charge. What other functions could be used to describe a distributed charge in an atom? These functions should be symmetric with respect to the angle $\vartheta=\pi/2$; indeed, there are no grounds to assume that an atom can be asymmetric relative to the equator, unless some additional external fields interfere. Besides, we can select for description of the charge only functions with $m=0$. Functions with $m\ne0$ have no axial symmetry relative to the $Z$ axis; therefore, any rotation of such an asymmetric charge would give rise to emission of radiation, which in actual fact does not happen. The simplest function satisfying these requirements is $Y_{20}$. 
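The normalization constants (\ref{2.1.4}) introduced above can be verified directly (an illustrative sketch, not part of the original text; units $a=e=1$):

```python
# Check Int rho dV = -e for the densities (2.1.1) and (2.1.2), with
# dV = tau^2 sin(theta) dtau dtheta dphi.
import sympy as sp

tau, th = sp.symbols('tau theta', positive=True)

for A, ang in [(-4/sp.pi**2, sp.sin(th)),        # A'_1NS,  Eq. (2.1.1)
               (-3/(2*sp.pi), sp.sin(th)**2)]:   # A''_1NS, Eq. (2.1.2)
    Q = 2*sp.pi*sp.integrate(A*sp.exp(-2*tau)*ang*tau**2*sp.sin(th),
                             (tau, 0, sp.oo), (th, 0, sp.pi))
    assert sp.simplify(Q + 1) == 0               # total charge = -e
print("both A'_1NS and A''_1NS normalize the charge to -e")
```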
Therefore, we write the angular part of the formula for the charge distribution in the following form: \begin{equation} \label{2.1.5} L=D(Y_{00}+D_{20}Y_{20}), \end{equation} $$ \mbox{where}\ Y_{00}=\frac{1}{\sqrt{4\pi}},\ Y_{20} =\sqrt{\frac{5}{4\pi}}\left(\frac{3}{2}\cos^2\vartheta-\frac{1} {2}\right),\ \mbox{(see, for instance, Ref. \cite{2}).} $$ Coefficient $D_{20}$ can be found from the condition that the function $L$ vanish for $\vartheta=0$ and $\vartheta=\pi$ (i.e., on the $Z$ axis). Indeed, if a charge rotates about the $Z$ axis, all elements of charge not on the $Z$ axis are acted upon by both the attraction to the nucleus and the centrifugal force. On the $Z$ axis, the centrifugal force is zero; therefore, the only possibility for the existence of a distributed charge lies in the absence of charge on the $Z$ axis. Coefficient $D$ is found from the condition that the function $L$ be unity for $\vartheta=\pi/2$ (i.e., at the equator). We obtain from these conditions $$ D=\frac{2\sqrt{4\pi}}{3},\qquad D_{20}=-\frac{1}{\sqrt{5}}. $$ Thus, the function $L$ acquires the form $$ L=\frac{2\sqrt{4\pi}}{3}\left[\frac{1}{\sqrt{4\pi}}-\frac{1} {\sqrt5}\cdot\sqrt{\frac{5}{4\pi}}\left(\frac{3}{2}\cos^2 \vartheta-\frac{1}{2}\right)\right]. $$ One can easily verify that this function exactly coincides with the function $\sin^2\vartheta$. In other words, we obtain in this case two forms for the density of the distributed charge: \begin{equation} \label{2.1.6} \rho''_{1NS}=A''_{1NS}e^{-2\tau}\sin^2\vartheta, \end{equation} or \begin{equation} \label{2.1.7} \rho''_{1NS}=A''_{1NS}e^{-2\tau}D(Y_{00}+D_{20}Y_{20}). \end{equation} One can conveniently choose the form most suitable for the actual conditions. Let us see whether we obtain the correct potential energy of interaction of the nucleus with these forms of distributed charge. 
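The coincidence of $L$ with $\sin^2\vartheta$ noted above is easily confirmed symbolically (an illustrative check, not part of the original text):

```python
# L = D*(Y00 + D20*Y20) with D = 2*sqrt(4*pi)/3 and D20 = -1/sqrt(5)
# reduces exactly to sin^2(theta).
import sympy as sp

th = sp.symbols('theta')
Y00 = 1/sp.sqrt(4*sp.pi)
Y20 = sp.sqrt(5/(4*sp.pi))*(sp.Rational(3, 2)*sp.cos(th)**2 - sp.Rational(1, 2))

D, D20 = 2*sp.sqrt(4*sp.pi)/3, -1/sp.sqrt(5)
L = sp.simplify(D*(Y00 + D20*Y20))

assert sp.simplify(L - sp.sin(th)**2) == 0
print("L(theta) == sin^2(theta)")
```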
The potential produced by a distributed charge at the nucleus, i.e., at the origin, can be written as \begin{equation} \label{2.1.8} \Phi'(0)=\int\frac{\rho'_{1NS}}{r}dV=\int\frac{\rho'_{1NS}}{a\tau}dV, \quad \Phi''(0)=\int\frac{\rho''_{1NS}}{r}dV=\int\frac{\rho''_{1NS}} {a\tau}dV. \end{equation} We consider the nucleus to be a point at the origin, with the charge $(+e)$. Therefore, the energy of interaction of the nucleus with the distributed electron charge can be written as \begin{equation} \label{2.1.9} \mathcal E'_{Pot}=e\Phi'(0),\qquad \mathcal E''_{Pot}=e\Phi''(0). \end{equation} One can easily verify that in both cases we arrive at the same result: \begin{equation} \label{2.1.10} \mathcal E'_{Pot}=\mathcal E''_{Pot}=-\frac{e^2}{a}, \end{equation} which coincides with the energy of the spherically symmetric $1S$ state (\ref{1.2.10}), as well as with the results known from quantum mechanics. \medskip Let us calculate now the kinetic energy of a rotating charge. On repeating the arguments concerning the mass distribution formulated for a spherically symmetric charge (Sec. 1.5), we come to the conclusion that the mass distribution corresponding to the distribution of charge (\ref{2.1.1}) has the form \begin{equation} \label{2.1.11} m'_{1NS}=B'_{1NS}e^{-2\tau}\sin\vartheta, \end{equation} and that the distribution of mass corresponding to the charge distribution (\ref{2.1.6}) or (\ref{2.1.7}) reads as \begin{equation} \label{2.1.12} m''_{1NS}=B''_{1NS}e^{-2\tau}\sin^2\vartheta, \end{equation} or \begin{equation} \label{2.1.13} m''_{1NS}=B''_{1NS}e^{-2\tau}D(Y_{00}+D_{20}Y_{20}). \end{equation} The coefficients $B'_{1NS}$ and $B''_{1NS}$ are found by normalizing the mass density against the electron mass $m_e$. In this way we obtain for the $B'_{1NS}$ and $B''_{1NS}$ coefficients \begin{equation} \label{2.1.14} B'_{1NS}=\frac{4m_e}{a^3\pi^2},\qquad B''_{1NS}=\frac {3m_e}{2\pi a^3}. \end{equation} This distribution rotates about the $Z$ axis. 
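Returning for a moment to the potential energy, the result (\ref{2.1.10}) amounts to two elementary integrals (an illustrative sketch, not part of the original text; units $a=e=1$):

```python
# E_pot = e*Phi(0) with Phi(0) = Int rho/(a*tau) dV, Eqs. (2.1.8)-(2.1.9),
# for both non-spherical densities (2.1.1) and (2.1.2).
import sympy as sp

tau, th = sp.symbols('tau theta', positive=True)

for A, ang in [(-4/sp.pi**2, sp.sin(th)),        # rho'_1NS
               (-3/(2*sp.pi), sp.sin(th)**2)]:   # rho''_1NS
    E_pot = 2*sp.pi*sp.integrate(A*sp.exp(-2*tau)*ang/tau*tau**2*sp.sin(th),
                                 (tau, 0, sp.oo), (th, 0, sp.pi))
    assert sp.simplify(E_pot + 1) == 0           # -e^2/a in these units
print("E'_pot = E''_pot = -e^2/a, as in Eq. (2.1.10)")
```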
Let us now find the rotation velocity $v_\varphi$. Two versions of the rotation velocity, (\ref{1.4.19}) and (\ref{1.4.20}), were proposed for the spherical charge distribution. If a charge distribution has no spherical symmetry (Eqs. (\ref{2.1.1}) or (\ref{2.1.2})), there is no charge near the $Z$ axis. This means that we may now drop the assumption that the angular velocity of charges on a sphere of radius $r$ is constant (as was done in Chapter 1 of the Paper) and retain only the dependence of the velocity on radius. The dependence of the velocity on radius alone can be attributed to the fact that the only source of the force propelling the motion of a distributed charge is the nucleus. Therefore (in contrast to Eq. (\ref{1.4.20})), we write the expression for the velocity of a distributed charge, similar to Eq. (\ref{1.4.19}) with $k'=1$, in the following form \begin{equation} \label{2.1.15} v_\varphi=\frac{\alpha c}{\sqrt\tau}, \end{equation} where $\alpha$ is the fine structure constant, $c$ is the velocity of light, and $\tau=r/a$ is the reduced radius. Equation (\ref{2.1.15}) shows that the velocity $v_\varphi$ increases as one approaches the nucleus. The charge rotates, as before, in a stratified fashion, but now it is the linear velocity $v_\varphi$ along the $\varphi$ coordinate rather than the angular velocity that depends on radius. We encounter, however, as before, the difficulty of identifying the factor that causes the motion of charges along the $\varphi$ coordinate if the force center does not lie in the plane of the $\varphi$ orbit (except at the equator). We do not, however, as yet have any other pattern of rotation to compare with. Let us now calculate the kinetic energy of the rotating charge. The kinetic energy $\mathcal E_{Kin}$ can be written as \begin{equation} \label{2.1.16} \mathcal E_{Kin}=\int\frac{mv^2_\varphi}{2}dV. \end{equation} Substituting Eqs. (\ref{2.1.11}), (\ref{2.1.12}), and (\ref{2.1.15}) in Eq. 
(\ref{2.1.16}), we come to \begin{equation} \label{2.1.17} \mathcal E'_{Kin}=\mathcal E''_{Kin}=\frac{1}{2}\cdot\frac{e^2}{a}, \end{equation} which coincides with the formula derived for a spherically symmetric $1S$ charge/mass distribution (\ref{1.4.23}) and with the expression known from quantum mechanics. The total energy $\mathcal E=\mathcal E_{Pot}+\mathcal E_{Kin}$ also turns out to be identical for both charge distribution patterns (see Eqs. (\ref{2.1.10}) and (\ref{2.1.17})): \begin{equation} \label{2.1.18} \mathcal E'=\mathcal E''=-\frac{1}{2}\cdot\frac{e^2}{a}, \end{equation} which coincides with the results derived for a spherically symmetric $1S$ charge distribution (\ref{1.4.25}) and the appropriate formulas of quantum mechanics. Let us now calculate the angular momenta for these two versions of the charge distribution. We assume the charge to rotate about the $Z$ axis, with the velocity determined by Eq. (\ref{2.1.15}). The angular momentum is given by the formula \begin{equation} \label{2.1.19} M_Z=\int mv_\varphi RdV, \end{equation} where $R=r\sin\vartheta=a\tau\sin\vartheta$ is the distance from the element of mass to the $Z$ axis. Formula (\ref{2.1.19}) yields \begin{equation} \label{2.1.20} M'_Z=\hbar\cdot0.998,\qquad M''_Z=\hbar\cdot1.039. \end{equation} The integrals (\ref{2.1.19}) are expressed in terms of the gamma function with a half-integer index, unlike the energy integrals (\ref{2.1.16}), which contain the gamma function with an integer index. Experiment suggests that the angular momentum of the $S$ state is $\hbar\cdot0.5$. This means that the values specified by Eq. (\ref{2.1.20}) disagree with the value known from experiment. One should therefore reconsider the process of charge/mass rotation in more detail. \section{Potentials of a non-spherical charge distribution} In Sec. 2.1, the energies of the $1NS$ state of the hydrogen atom were calculated under the assumption that the distributed charge does not have spherical symmetry. 
Experiment suggests, however, that the atom is spherically symmetric. It appears reasonable, then, to study what consequences would ensue from the assumption of the hydrogen atom being not spherically symmetric, and how such an atom would look to an observer. To do this, we have to calculate the potentials of the distributed charges (\ref{2.1.1}) and (\ref{2.1.2}) and compare them with the Coulomb potential of the nucleus. The potentials of a distributed charge can be calculated from an expansion in spherical harmonics. The formulas for finding the potential with the use of such an expansion can be found, for instance, in Ref. \cite{4}. As demonstrated in Sec. 2.1 of the present Paper, a distribution of the kind of Eq. (\ref{2.1.2}) can be expressed through two spherical harmonics, $Y_{00}$ and $Y_{20}$; therefore, the series is truncated with only four terms left (two terms for $r>r'$ and two terms for $r<r'$). Since the function $\sin\vartheta$ is not a member of the $Y_{lm}$ system of spherical functions, the corresponding series for the potential does not terminate. As shown in Sec. 2.1, a charge distribution can be fully described by functions with $m=0$ only. Apart from this, these functions should be symmetric with respect to the equator. The second condition implies that a spherical function can have only even order $l$. The functions $Y_{l0}$, representing essentially Legendre polynomials, were taken from Ref. \cite{6}. Five harmonics were taken for the calculation of the potential: $Y_{00}$, $Y_{20}$, $Y_{40}$, $Y_{60}$, $Y_{80}$. In this case, the part of the multipole moment which depends on the $Y_{80}$ function amounts to 0.01 of the moment depending on $Y_{00}$. The formulas employed in the calculation are specified in the Appendix. The main results of the calculations are visualized in Figs. 2.1--2.6. The notation accepted is as follows. 
The radius of the spherical reference frame $\mathrm r$ is given in units of the Bohr radius $a$, i.e., $\mathrm r$ in the graphs is actually the parameter $\tau$ in all of the above formulas. The potentials $\mathrm U$ are expressed in units of $e/a$. The potential $\mathrm{U(r,}\vartheta)$ is the total potential deriving from the whole set of the harmonics involved. $\mathrm{U0(r)}$ is the potential deriving only from the spherically symmetric harmonic $Y_{00}$. Expressed in this notation, the Coulomb potential of the nucleus $\mathrm{N(r)}$ reads as $\mathrm{1/r}$. \begin{figure} \begin{center} \includegraphics[scale=0.2]{Fig.2.1.eps} \caption{Potential of the distributed charge (\ref{2.1.1}) vs. distance $\mathrm r$ from the nucleus for the angles $\vartheta=0$, $\vartheta=\pi/4$, and $\vartheta=\pi/2$.} \label{F1} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.2]{Fig.2.2.eps} \caption{Same as Fig. 2.1, but for the distributed charge (\ref{2.1.2}).} \label{F2} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.2]{Fig.2.3.eps} \caption{Potential $\mathrm{U0(r)}$ formed by the spherically symmetric harmonic $Y_{00}$ alone and total potential $\mathrm{U(r,}\vartheta)$ formed by all the harmonics included, for the charge distribution (\ref{2.1.1}) and the angles $\vartheta=0$ and $\vartheta=\pi/2$.} \label{F3} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.2]{Fig.2.4.eps} \caption{Same as Fig. 2.3, but for the charge distribution (\ref{2.1.2}).} \label{F4} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.2]{Fig.2.5.eps} \caption{Total potential of the spherically symmetric harmonic of the charge (\ref{2.1.1}) plus the Coulomb potential of the nucleus $\mathrm{N(r)}$. The dashed curve shows the Coulomb potential of the nucleus $\mathrm{N(r)}$ alone.} \label{F5} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.2]{Fig.2.6.eps} \caption{Same as Fig. 2.5, but for the charge distribution (\ref{2.1.2}).} \label{F6} \end{center} \end{figure} Figure 2.1 plots the potential of the distributed charge, Eq. (\ref{2.1.1}), vs. distance $\mathrm r$ from the nucleus for three angles: $\vartheta=0$, $\vartheta=\pi/4$, $\vartheta=\pi/2$. Shown in Fig. 2.2 is the same graph for the distribution (\ref{2.1.2}). We readily see that the potentials depend very weakly on the angle $\vartheta$. And, as mentioned above, the potentials do not depend on the angle $\varphi$ at all. 
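The weak angular dependence can also be reproduced with a small numerical sketch (not the author's code; units $a=e=1$; the sample radius $\mathrm{r}=1$ and the truncation of the outer radial integral at 50 are arbitrary illustrative choices):

```python
# Potential of rho''_1NS = A''*exp(-2*tau)*sin^2(theta) from its Y00 and Y20
# multipoles, evaluated at r = 1 for theta = 0 and theta = pi/2.
import math
from scipy.integrate import quad

A = -3/(2*math.pi)                        # A''_1NS, Eq. (2.1.4)
c0 = 2*math.sqrt(4*math.pi)/3             # sin^2 = c0*(Y00 - Y20/sqrt(5))
c2 = -c0/math.sqrt(5)

def Y(l, th):                             # Y_l0 for l = 0 and l = 2
    if l == 0:
        return 1/math.sqrt(4*math.pi)
    return math.sqrt(5/(4*math.pi))*(1.5*math.cos(th)**2 - 0.5)

def U(r, th):
    """Standard multipole expansion of the potential of the distributed charge."""
    total = 0.0
    for l, cl in ((0, c0), (2, c2)):
        inner = quad(lambda x: math.exp(-2*x)*x**(l + 2), 0, r)[0]/r**(l + 1)
        outer = quad(lambda x: math.exp(-2*x)*x**(1 - l), r, 50)[0]*r**l
        total += 4*math.pi/(2*l + 1)*A*cl*Y(l, th)*(inner + outer)
    return total

u_pole, u_eq = U(1.0, 0.0), U(1.0, math.pi/2)
print(f"U(1, 0) = {u_pole:.3f},  U(1, pi/2) = {u_eq:.3f}")
# both values are close (about -0.66 vs -0.76 in units e/a): the potential of
# a markedly non-spherical charge is only mildly angle-dependent.
```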
Figure 2.3 illustrates, for the charge distribution (\ref{2.1.1}), the dependence on the distance $\mathrm r$ from the nucleus of both the potential $\mathrm{U0(r)}$ formed by the spherically symmetric harmonic $Y_{00}$ only and of the total potential $\mathrm{U(r,}\vartheta)$ involving all the harmonics included, for two angles, $\vartheta=0$ and $\vartheta=\pi/2$. Figure 2.4 shows the same plot constructed for the charge distribution (\ref{2.1.2}). Examining these graphs we see that the potentials are formed in both cases mostly by the spherically symmetric harmonic, the contribution of the other harmonics being very small. This is in fact immediately evident from Figs. 2.1 and 2.2: indeed, if the potential is not small and the angular dependence is very weak, the only possible conclusion is that the potential derives primarily from the spherically symmetric harmonic. Thus, in spite of the charge distributions (\ref{2.1.1}) and (\ref{2.1.2}) being different from spherical, the potentials produced by these charges deviate very little from the spherical pattern. Therefore, {\itshape\bfseries such an atom would superficially look spherically symmetric}. Consider now the extent to which the spherically symmetric harmonic of the potential of a distributed charge, $\mathrm{U0(r)}$, affects the potential of the atom. An atom carries the total potential, i.e., the potential of the nucleus plus that of the distributed charge. Because the distributed charge potential derives primarily from the spherically symmetric harmonic $\mathrm{U0(r)}$, it is only this harmonic that we shall consider in this sum. Figure 2.5 plots the total potential of the spherically symmetric harmonic of the charge (\ref{2.1.1}) plus the Coulomb potential of the nucleus $\mathrm{N(r)}$. The dashed curve shows only the Coulomb potential of the nucleus $\mathrm{N(r)}$. 
We immediately see that for $\mathrm{r>2}$ the total potential is very small (which reflects the fact that the field vanishes at some distance from the atom), while it approaches the Coulomb potential of the nucleus as one comes closer to the latter. Figure 2.6 plots the same graph for the charge distribution (\ref{2.1.2}). Significantly, the potential generated by the spherically symmetric harmonic $\mathrm{U0(r)}$ coincides with the potential $\Phi_{1S}$ of the spherical charge distribution (see Eqs. (\ref{1.3.7}) and (\ref{1.3.8})). Thus, despite the absence of spherical symmetry in the charge distribution, the atom largely preserves the main features of spherical symmetry; indeed, the deviation of the potential from the spherically symmetric pattern is very small, and as one comes closer to the nucleus, the potential approaches the Coulomb one. \section{Analysis of possible patterns of motion of a distributed charge} As can be seen from Eqs. (\ref{2.1.20}), we have not obtained the correct value for the angular momentum of the hydrogen atom in the ground state. This makes it necessary to reconsider the process of mass/charge rotation in more detail. Equation (\ref{2.1.15}) shows essentially that each element of charge rotates circularly about the $Z$ axis. But why should each element of charge rotate in a circle only? Each element of the distributed charge is confined to the Coulomb potential well of the nucleus (for the moment we disregard the additional potential of the distributed charge itself; as follows from Sec. 2.2, the difference of the total potential in an atom from the Coulomb one is small). The behavior of a charge in a Coulomb potential well is well known. The monograph \cite{7} is best suited to our purposes. Consider the situation in two stages. We shall first be interested in the motion of a charge at the equator. Recall some well-established facts. 
In a Coulomb potential well, a constant element of charge $dq$ with a mass $dm$ can move along a circle or an ellipse, and the ellipse can degenerate into a straight line. The energy of an element of charge depends on the semimajor axis of the ellipse (or on the radius of the circle). The actual shape of the ellipse (i.e., its semiminor axis) depends on the angular momentum of the particle. Thus, all ellipses with the same semimajor axis but different semiminor axes have the same energy but different angular momenta, down to zero momentum (in which case the ellipse degenerates into a straight line). In other words, mass/charge elements of the same energy can move in a Coulomb potential well along trajectories which differ in the value of the angular momentum. Because we have a distributed electron charge in an atom, it appears only logical to assume that each element of charge can move along any allowed trajectory (by an allowed trajectory we understand here any trajectory satisfying the laws of mechanics). Note, however, that different trajectories (different ellipses) intersect. Therefore, when introducing the assumption that elements of charge can move along different allowed trajectories, we have to accept another one as well, namely, that each element of mass/charge can move along its trajectory regardless of those of other charges. In other words, charges {\itshape\bfseries may pass through} one another without an attendant change of trajectory. Put differently, each element of mass/charge moves in the force field independently of other elements of mass/charge. It goes without saying that all elements surrounding the element under consideration contribute to the force field. This would seem to be contrary to the observation that charges interact with one another. 
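The facts just recalled (equal semimajor axes mean equal energies, while the angular momentum is set by the eccentricity) can be checked with the vis-viva relation of classical mechanics; a minimal numerical sketch in illustrative units (the symbols $m$, $k$, $x$ are generic, not the author's notation):

```python
import math

# Kepler/Coulomb orbits with a fixed semimajor axis x: the total energy
# E = -k/(2x) is the same for every eccentricity e, while the angular
# momentum ranges from its circular-orbit maximum down toward zero as the
# ellipse degenerates into a straight line.
m, k, x = 1.0, 1.0, 1.0          # illustrative units

def orbit_at_perihelion(e):
    r = x * (1.0 - e)                             # perihelion distance
    v = math.sqrt(k / m * (2.0 / r - 1.0 / x))    # vis-viva relation
    E = 0.5 * m * v * v - k / r                   # kinetic + potential energy
    L = m * v * r                # velocity is tangential at perihelion
    return E, L

for e in (0.0, 0.5, 0.9):
    E, L = orbit_at_perihelion(e)
    # E stays at -k/(2x) = -0.5 for every e, while L shrinks with growing e
    print(f"e={e}: E={E:+.4f}  L={L:.4f}")
```

The circular orbit ($e=0$) carries the largest angular momentum $L=\sqrt{mkx}$; the straight-line limit ($e\to1$) carries none, exactly as stated above.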
But the statement that each mass/charge element moves in a force field independently of other mass/charge elements is just a consequence of the fact that we consider interaction of charges not directly with one another but rather through the field; indeed, each charge interacts with the field generated by another charge or by all the other charges. In other words, we consider charge motion in a force field created by other charges, leaving the existence of these charges themselves aside. Like charges repel one another. Therefore, the assumption that like-charged elements of charge can pass through one another may seem a far-fetched idea. Indeed, the point charges one usually considers have an infinite density at the point where the charge is located. Therefore, like point charges cannot pass through one another; more than that, they cannot even come close to one another. The distributed charges treated here have a finite charge density. Such charges can penetrate into one another, depending on what external forces act on them and on what forces they themselves create. In actual fact, there is nothing supernatural in the statement that charges can pass through one another. For instance, electromagnetic fields can penetrate into or pass through one another without at the same time affecting one another---this is nothing but the standard principle of superposition. Two radar beams can cross without interaction; this is just penetration of ac fields through one another. Superposition of one dc field on another (the principle of superposition) may be regarded as penetration of one field into another. Significantly, in this process the fields do not act in any way on one another. 
As for the charges, no statement that one charge can pass through another without direct action on one another (the interaction of charges being taken into account through the fields they create) has thus far been made, although the principle of superposition is valid for charges as well. This statement should, however, be made. If electromagnetic fields do pass through one another, there is nothing strange in admitting that charges can do likewise. The difference between these statements lies in that ac electromagnetic fields propagate along rectilinear trajectories (in vacuum, the rays are straight lines), while charges move along their trajectories in a potential field. The actual shape of the trajectory is determined by the potential field in which this element moves, as well as by the parameters of this element. A trajectory can be calculated within the framework of theoretical mechanics. In our case, the trajectories along which an element of distributed charge/mass moves in the field of the nucleus are closed curves rather than straight lines. Consider in more detail the motion of an element of charge in an atom. Assume an element of charge $dq$ located in the equatorial plane. Consider the trajectories along which the charge $dq$ moves with the same total energy (in this case all elliptical trajectories have the same semimajor axes). This element can move in a circular or an elliptical trajectory in the equatorial plane, with the total energy of this element in any trajectory being the same, and only the angular momenta different (see, e.g., Ref. \cite{7}). Each angular momentum can be identified with its own elliptical trajectory. Because in all trajectories the element of charge $dq$ has the same energy, this element of charge can move {\bfseries\underline {along any}} trajectory. Moreover, this element of charge can move {\bfseries\underline {in all trajectories at the same}} {\bfseries\underline {time}}. 
This can be visualized in the following way. Divide the element of charge $dq$ into $k$ parts. Then one element of charge $dq'=dq/k$ can move along one elliptical trajectory, another charge $dq'$, along another trajectory, and so on. As $k$ tends to infinity, the trajectories will fill the entire allowed region containing trajectories of the elements of charge $dq'$ of the same energy but with different angular momenta. Generally speaking, this process may be considered not as motion of elements of charge along trajectories but rather as motion of a continuous medium, of a {\itshape\bfseries charge wave}. Let us analyze the various trajectories along which an element of charge $dq$ with a mass $dm$ can move in the case where the total energy of the element in each trajectory is the same. \begin{figure} \begin{center} \includegraphics[scale=0.2]{Fig.2.7.eps} \caption{} \label{F7} \end{center} \end{figure} Figure 2.7 illustrates several of all the possible orbits: a circular orbit $a$, eight elliptical orbits $(b-k)$ with different eccentricities (and, hence, different angular momenta), and a linear orbit $l$ into which the ellipse degenerates at an eccentricity of unity. This orbit passes through the nucleus of the atom. All the orbits are characterized by identical semimajor axes (if the energies of the elements are equal, the semimajor axes of the ellipses must likewise be equal). All orbits lie in the same plane. The elements of charge in all orbits rotate in the same direction. All orbits share a common focus; at this focus (in our figure, the center $O$ of the circle) the nucleus of the atom is located. Using the focal properties of ellipses, one can readily show that each elliptical trajectory intersects the circular orbit at the point where this ellipse intersects its semiminor axis. The dashed lines confine the region of allowed trajectories along which an element $dm$ can move. 
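The focal property invoked for Fig. 2.7 is easy to verify numerically: for an ellipse with semimajor axis equal to the circle radius and one focus at the center $O$, the endpoint of the semiminor axis lies at exactly that distance from the focus, i.e., on the circle (a minimal sketch with generic variable names):

```python
import math

# For an ellipse with semimajor axis a and one focus at the nucleus, the
# endpoint of the semiminor axis lies at distance a from each focus; with
# a = R (the circle radius) this point therefore lies exactly on the circle.
a = 1.0                               # semimajor axis (= circle radius R)
for ecc in (0.2, 0.5, 0.8):
    b = a * math.sqrt(1.0 - ecc**2)   # semiminor axis
    c = a * ecc                       # center-to-focus distance
    # endpoint of the semiminor axis, with the ellipse center at (0, 0)
    # and the focus (nucleus) at (c, 0):
    dist_to_focus = math.hypot(c, b)
    print(round(dist_to_focus, 10))   # -> 1.0 for every eccentricity
```

The distance $\sqrt{c^2+b^2}=\sqrt{a^2e^2+a^2(1-e^2)}=a$ is independent of the eccentricity, which is the intersection property stated above.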
As already mentioned, an element of charge $dq$ with a mass $dm$ can move along all of the above trajectories simultaneously, forming not a stream of single particles but rather a wave motion. For this to become possible, the element of charge $dq$ has to split into a multitude of parts $dq'$, each of them moving along its own trajectory. An element of mass/charge residing in a Coulomb potential well moves in one plane. We have considered motion in the equatorial plane. Note, however, that through a line connecting the element under study with the nucleus one can pass an infinite number of planes. The trajectory of a given element may lie in any of these planes, because the Coulomb field possesses spherical symmetry. Moreover, this element can be divided into a multitude of parts, and each part of the element can move along a trajectory in its own plane. It thus appears that the element of charge under consideration can move along all of the allowed trajectories in all planes simultaneously. It would apparently be more appropriate to speak here not of the motion of a set of mass/charge elements but rather of that of a wave propagating within a certain {\itshape\bfseries solid angle}. The solid angle is defined by the set of all allowed trajectories. This reasoning can be repeated for any element of charge. Each element can be divided into parts, and these parts will propagate within a certain solid angle. What we actually have is propagation within a certain solid angle of a {\itshape\bfseries charge wave}, or, to express it more properly, a {\itshape\bfseries mass/charge wave}. This reasoning closely resembles the {\itshape\bfseries Huygens--Fresnel principle}. By the Huygens--Fresnel principle, each point of a propagating wave acts as a source of a secondary wave, and the front of the propagating wave may be visualized as an envelope of all the secondary waves. 
The essential difference between the picture offered here and the Huygens--Fresnel principle is that in the latter a wave propagates along straight lines, i.e., the rays of all secondary waves are straight lines. In our case, the trajectories associated with the propagation of charge waves, rather than being straight lines, are determined by the potential of the field in which the element under consideration moves and by the parameters of the element itself. In the cases of interest here, these trajectories are closed curves. The similarity with the Huygens--Fresnel principle lies in that any point of the distributed charge is a center from which a mass/charge wave propagates within a certain solid angle. The above pattern may be considered as an attempt to find common features between the corpuscular and wave patterns of behavior. There are even grounds to suggest that the corpuscular and wave concepts actually merge. The grounds for this statement may be seen in the fact that the behavior of a distributed charge can be studied in two different contexts. The behavior of each single mass/charge element obeys all the laws of theoretical mechanics. Its motion can be calculated with the use of equations derived in theoretical mechanics. On the other hand, when one considers the motion of all elements taken together, it is the motion of a wave. This wave should obey certain partial differential equations involving the effect of potentials on the motion of the “charge waves”. Such equations have not thus far been constructed. But it is with partial differential equations that the motion of “charge waves” is most appropriately analyzed, which stresses the need for constructing such equations. 
When such equations allowing for the effect of potentials on “charge wave” motion are obtained, there will be strong grounds to call the field of science described by these equations {\itshape\bfseries wave mechanics}, because these equations should take proper account both of the wave properties of objects (originating from specific boundary or periodic conditions) and of all the characteristics described by theoretical mechanics. Until this is done, the term “wave mechanics” announced in the title of this paper should be treated rather as an expression of wishful thinking on the part of the author. One should also attempt to apply the vast amount of knowledge amassed in theoretical mechanics for point objects to the description of the behavior of a continuous medium. We have to admit, however, that the behavior of a continuous medium could be described more adequately by the “wave” formalism, i.e., through the use of partial differential equations. \section{Angular momentum of the $1NS$ state} We turn now to calculating the angular momentum of the ground state of a hydrogen atom. We start by dividing the volume of the distributed mass into elements of magnitude $dV$ with a mass $dm=mdV$. Next we calculate the angular momentum of each element separately and then sum the momenta. We have to keep in mind that each element of mass $dm$ can move along different elliptical orbits. Therefore, we first have to consider the angular momentum of one element of mass. A Coulomb potential well is spherically symmetric. The presence of a distributed charge, which is anything but spherically symmetric, distorts the symmetry of this potential. As shown in Sec. 2.2, however, the presence of this charge affects the potential very little. This gives us grounds to assume in what follows that the elements of charge move in a Coulomb potential field. 
Let us calculate the angular momentum of a charge/mass element subject to the condition that the element moves along different orbits in the same plane, the energy of the element being the same in all these orbits. We again use the data given in Ref. \cite{7}. The angular momentum $dM$ of a constant element of mass $dm$ in a Coulomb potential well can be written as \begin{equation} \label{2.3.1} dM=\frac{2dmf}{T}, \end{equation} where $f$ is the area of the orbit, and $T$ is the period of revolution of an element of mass in this orbit. Recall that the period $T$ depends only on the energy of this element with the mass $dm$. Because we consider here orbits of the same energy, the value of $T$ for all the orbits of interest will be the same. In a Coulomb potential well, the orbits are ellipses; therefore, we obtain $f=\pi xy$, where $x$ is the semimajor and $y$ the semiminor axis of the ellipse. Because the elements of mass in all the orbits of interest to us here have the same energy, the semimajor axes $x$ of all the orbits are identical (the semimajor axis depends on energy only), while the semiminor axes $y$ are different. An element placed in a Coulomb potential well moves in one plane only. Consider the motion of an element in one such plane. If an element $dm$ rotates in a {\itshape\bfseries circular} orbit of radius $R=x$, for the angular momentum of this element we can write \begin{equation} \label{2.3.2} dM_R=\frac{2dm\pi x^2}{T}=\frac{2dm\pi R^2}{T}. \end{equation} Calculate now the angular momentum for the case where an element of mass $dm$ moves in this plane along {\itshape\bfseries all orbits at the same time}. To do this, divide the element of mass $dm$ into $k$ parts: $dm'=dm/k$ (see Fig. 2.7). Each element of mass $dm'$ will move along its own elliptical trajectory. For its angular momentum we can write (recall that $x=const=R$ for this energy): \begin{equation} \label{2.3.3} dM'=\frac{2dm'\pi xy}{T}=\frac{2dm'\pi Ry}{T}. 
\end{equation} To calculate the total angular momentum $dM_{dm}$ of an element $dm$ moving in all trajectories in the plane under consideration simultaneously, we have to sum all the momenta $dM'$ (see Eq. (\ref{2.3.3})). Significantly, the parameter $y$ varies in the process from zero (the case in which the ellipse degenerates into a straight line) to $R$ (where the ellipse transforms into a circle of radius $R$). The element of mass $dm'$ can conveniently be written in the form $dm'=dm\frac{dy}{R}$, because the quantity $R/dy$ is nothing but the number of parts $k$ into which the mass $dm$ was divided. The total angular momentum then becomes \begin{equation} \label{2.3.4} dM_{dm}=\frac{1}{T}\int\limits_0^R 2\pi ydmR\frac{dy}{R} =\frac{1}{T}2\pi dm\int\limits_0^R ydy=\frac{1}{T}2dm\pi R^2\cdot\frac{1}{2}. \end{equation} Examining Eqs. (\ref{2.3.2}) and (\ref{2.3.4}), we see that the angular momentum of an element of mass $dm$, in the case where it moves in all trajectories simultaneously, is only one half that of the element of mass $dm$ moving in a circular orbit as a whole (the energies of the $dm$ elements are in both cases the same). In other words, calculation of the angular momentum of an element $dm$ moving along all allowed elliptical orbits may be replaced by calculation of the angular momentum of the same element moving along a circular orbit. The necessary condition for this to be valid is that the total energies of the element in the circular and elliptical orbits be equal (indeed, in both orbits we have the same element). For this to hold, the semimajor axes of all the ellipses considered should be equal to one another and to the radius of the circle. It may be appropriate to recall that we are speaking here about trajectories confined to one plane. Thus, Eq. (\ref{2.3.4}) can be recast in the form \begin{equation} \label{2.3.5} dM_{dm}=dM_R\cdot\frac{1}{2}. 
\end{equation} Here $dM_R$ is the angular momentum of the element $dm$ rotating along the {\itshape\bfseries circle} of radius $R$ in the given plane, and $dM_{dm}$ is the angular momentum of the element $dm$ moving along {\itshape\bfseries all allowed trajectories}, likewise in the same plane, the element having the same energy in both cases. We calculated earlier the angular momentum of the distribution of charges rotating about the $Z$ axis (Eqs. (\ref{2.1.19}) and (\ref{2.1.20})). In that case, each element of charge was rotating in circular orbits whose planes were parallel to the equatorial plane. At the time, no other pattern of rotation for a charge distribution could be conceived. In this version of rotation, however, it was difficult to identify the mechanism accounting for the rotation of charges lying outside the equatorial plane, because the plane of their orbits does not pass through the center of force, i.e., the nucleus. Having allowed for the possibility of charges interpenetrating one another, we could construct a different pattern of rotation, which appears more natural while not contradicting any laws of mechanics. An element of charge, acted upon by the force of attraction, moves in the Coulomb potential well of the nucleus. This element in fact rotates about the nucleus rather than about the $Z$ axis. The trajectories of motion lie in a plane passing through the point where the nucleus is located, i.e., the origin of the coordinate frame. In this case, the trajectories, rather than being parallel to the equatorial plane, can make any angle with it. A Coulomb potential well is spherically symmetric. As a consequence, the orbit of an element $dm$ may lie in different planes. These planes can be visualized by rotating the original plane about the line connecting the position of the element $dm$ with that of the nucleus. The orbit of an element $dm$ may lie in any of these planes. 
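Returning to Eqs. (\ref{2.3.2})--(\ref{2.3.5}): the factor 1/2 can be confirmed by a direct numerical summation over the elliptical orbits (a minimal sketch in illustrative units; the variable names are generic, not the author's notation):

```python
import math

# Split dm into k equal parts dm' = dm/k; the i-th part moves on the ellipse
# with semiminor axis y_i in (0, R), all ellipses sharing the semimajor axis R
# and hence (equal energies) the common period T.
dm, R, T = 1.0, 1.0, 1.0      # illustrative units
k = 100000                    # number of parts

dM_circle = 2.0 * dm * math.pi * R * R / T          # Eq. (2.3.2)
dM_wave = sum(2.0 * (dm / k) * math.pi * R * ((i + 0.5) * R / k) / T
              for i in range(k))                    # summing Eq. (2.3.3)

print(round(dM_wave / dM_circle, 6))   # -> 0.5, the factor of Eq. (2.3.5)
```

The sum over $y$ reproduces $\int_0^R y\,dy = R^2/2$, i.e., exactly half the circular-orbit value, independently of the units chosen.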
Moreover, because the energies of the element in each trajectory are equal, the element $dm$ may move along elliptical trajectories in all these planes at the same time. As already mentioned, it appears only natural to treat this pattern as motion of a charge wave within a certain solid angle rather than as motion of individual elements. \begin{figure} \begin{center} \includegraphics[scale=0.2]{Fig.2.8.eps} \caption{} \label{F8} \end{center} \end{figure} Consider this situation in more detail. We take first the trajectories lying in the plane passing through the $Z$ axis and the vector $r$ (the vector $r$ specifies the direction to the element chosen). We will call it plane $C$ (Fig. 2.8). Let the element be located at point $A$. Figure 2.8 displays a “fan” of velocities $v$, i.e., of directions along which an element $dm$ can move in the $C$ plane (compare with Fig. 2.7). Recall that all elements rotate in one sense. The directions of the velocities of the element $dm$ at point $A$ are defined by the tangents to the elliptical trajectories at point $A$. One such trajectory, i.e., one of the ellipses, is identified in Fig. 2.8 by a dashed line. The direction of motion of an element can be described by the angle $\eta$, which we will reckon from the $r$ axis. The angle $\eta$ defines the direction of motion of an element along the trajectory, i.e., along the ellipse. This angle varies from $0$ to $\pi$, which corresponds to variation of the ellipse shape from a straight line to a circle and back again to the straight line (see Fig. 2.7). Because the velocity fan lies in the $C$ plane, the resultant velocity at point $A$, formed by summing the velocities of the element $dm$ propagating along the different trajectories in the $C$ plane, lies in the same $C$ plane, while the angular momentum $dM_C$ of the element $dm$ is perpendicular to this plane (see Fig. 2.8). 
\begin{figure} \begin{center} \includegraphics[scale=0.2]{Fig.2.9.eps} \caption{} \label{F9} \end{center} \end{figure} The trajectories of the element $dm$ may lie not only in plane $C$. The other planes to which the trajectories may be confined can be obtained by rotating plane $C$ about the axis connecting the origin $O$ with the position of the element $dm$, i.e., point $A$ (Fig. 2.9). We denote the angle of turn of this plane by $\xi$, and will reckon it from the original position of the $C$ plane. The reason why this angle is reckoned from this plane will become clear later. The angular momentum of an element $dm$ moving in any turned plane is perpendicular to this plane. Therefore, as the planes are turned, the tips of the momentum vectors (we will denote them by $dM_\xi$) will trace an arc. This arc will lie in a plane perpendicular to the vector $r$ (Fig. 2.9). Consider the limits within which the angle $\xi$ can vary. Variation of the angle $\xi$ generates new planes in which the element $dm$ can move. We have to keep in mind that the element can move not only in these newly formed planes but in all planes simultaneously. We readily see that if the angle $\xi$ is larger than $\pi$, new planes coinciding with some of the original planes will appear. In these coinciding planes, all allowed trajectories will coincide as well. Significantly, motion along these coinciding trajectories would occur simultaneously in opposite senses, i.e., there would be no net motion in these planes. (This can be readily seen from Fig. 2.8 if we mentally turn plane $C$ through $\pi$, or from Fig. 2.9.) Thus, the angle $\xi$ can vary from $0$ to $\pi$. As a plane turns through $\pi$, the vectors $dM_\xi$ likewise turn through $\pi$. Consider now why the angle $\xi$ should be reckoned from the $C$ plane. 
The overall rotation of the charge in an atom (by overall rotation we understand here rotation of the charge as a whole rather than that of individual elements of charge) occurs in one sense, which accounts for the atom having an angular momentum. We conventionally directed the angular momentum of the atom along the $Z$ axis. Accordingly, the resultant velocity of rotation is directed along the $\varphi$ axis. Now if the trajectories of an element lie in the $C$ plane, all components of the velocity lie in the same plane, with no velocity components left along the $\varphi$ axis. The angular momentum $dM_C$ of this element will in this case have no component along the $Z$ axis (see Fig. 2.8). If we rotate the $C$ plane by increasing the angle $\xi$, velocity components directed along the $\varphi$ axis appear in the turned planes, and the momentum $dM_\xi$ acquires a component along the $Z$ axis (see Fig. 2.9). Both quantities reach a maximum for $\xi=\pi/2$. As the angle $\xi$ increases still further, both quantities decrease, to vanish eventually at $\xi=\pi$. If the angle $\xi$ were to grow beyond $\pi$ (with the corresponding turn of the plane), the elements would rotate in these planes in the opposite sense, producing negative components of the velocity along the $\varphi$ axis and of the angular momentum along the $Z$ axis, which should not exist by our original condition. Thus, reckoning the angle $\xi$ from the $C$ plane and varying $\xi$ within the limits $0$ to $\pi$ provides overall rotation of the charge in one sense and formation of an angular momentum along the $Z$ axis. Significantly, no negative components of the angular momentum appear along the $Z$ axis. The motion of elements in definite planes we have just considered is only an approximation to reality, because only the motion of an element of constant magnitude is strictly confined to one plane. 
In a real atom, an element of mass/charge propagates within a certain solid angle. We now have to calculate the angular momentum of an element $dm$ propagating within a solid angle. We start by constructing a local frame of spherical coordinates $r'$, $\eta$, $\xi$ centered on the element $dm$, i.e., at point $A$. The $\eta=0$ axis will be directed along the $r$ axis and will be called the $Z'$ axis, and the angle $\xi$ will be reckoned from plane $C$ (Fig. 2.9). We see immediately that the angles $\eta$ and $\xi$ of this coordinate system coincide with the angles $\eta$ and $\xi$ considered above. If an element propagates into a solid angle, this actually means that its trajectory, rather than being confined to a certain plane, occupies a sector instead. An analog of element motion in a plane will be motion within a small solid angle $d\xi$, the orientation of this solid angle $d\xi$ (an analog of the position of the plane) being determined by the angle $\xi$. As already demonstrated, the angle $\eta$ varies within the limits $0$ to $\pi$, and the angle $\xi$ varies within the same limits. Thus, the element propagates into a hemisphere; accordingly, the solid angle into which the charge propagates as a wave is $2\pi$. Our problem lies in finding the resultant angular momentum of the element $dm$ which propagates into a solid angle as a wave, for which purpose one has to sum all the components of the angular momentum oriented in different directions. Significantly, any direction of velocity at a given point (and within the allowed solid angle) is equally probable. This conclusion is valid because an element at a given point propagating in different directions has the same energy, i.e., all these trajectories are equally probable. Moreover, the magnitude of the velocity should not depend on the direction of motion of a given element. This conclusion can be substantiated in the following way. 
We consider ellipses of the same energy, i.e., the total energy of an element on any ellipse is the same (although the relative magnitudes of the kinetic and potential energies change as the element moves along the ellipse). The potential energy depends on the position of the element only. Hence, at a given point (for instance, at point $A$) the potential energies of an element moving over any ellipse are the same. But if the total energies are equal, and the potential energies at a given point are equal too, then the kinetic energies at this point will be equal as well. For this reason, the velocities are equal irrespective of their direction. To calculate the angular momentum of the element $dm$ propagating into a solid angle $2\pi$, we divide the element $dm$ into parts \begin{equation} \label{2.3.6} dm'=dm\frac{d\Omega}{2\pi}. \end{equation} Each element $dm'$ propagates into a solid angle $d\Omega=\sin\eta d\eta d\xi$. We shall approach this problem in steps. Isolate a sector $d\xi$. Find the projection of the angular momenta of the elements propagating into the $d\xi$ sector onto a plane perpendicular to the $r$ axis (plane $D$ in Fig. 2.9). To do this, we will have to sum the angular momenta over the coordinate $\eta$. Because at a given point the velocities in any direction are equal in magnitude, the distribution of the angular momenta in the $d\xi$ sector is symmetric relative to the $D$ plane. Denoting this projection by $dp_\xi$, we come to \begin{equation} \label{2.3.7} dp_\xi=\int v\sin\eta dm'=\int v\sin\eta dm\frac{d\Omega}{2\pi}= \int_0^\pi dm\frac{\sin\eta d\xi}{2\pi} v\sin\eta d\eta= \frac{dmv}{2\pi}\cdot\frac{\pi}{2}\cdot d\xi. \end{equation} The angular momentum $dM_\xi$ corresponding to the momentum $dp_\xi$ is shown in Fig. 2.9. We turn now to summation of the vectors $dp_\xi$ obtained over the coordinate $\xi$. We first find the projection of vectors $dp_\xi$ on the plane formed by turning plane $C$ through an angle $\xi=\pi/2$. 
Because at a given point the velocities in any direction are equal, the distribution of the momenta $dp_\xi$ will be symmetric relative to this plane. Because the momenta $dp_\xi$ lie in the plane $D$ perpendicular to the vector $r$, the resultant vector (we denote it by $dp_\varphi$) coincides in direction with the coordinate $\varphi$: \begin{equation} \label{2.3.8} dp_\varphi=\int \sin\xi dp_\xi= \frac{dmv}{2\pi}\cdot\frac{\pi}{2}\cdot \int_0^\pi\sin\xi d\xi= \frac{dmv}{2}, \end{equation} or \begin{equation} \label{2.3.9} d\mathbf p=\frac{dmv}{2}\mathbf n_\varphi, \end{equation} where $\mathbf n_\varphi$ is the unit vector along the $\varphi$ axis. Thus, the momentum of the element $dm$ propagating as a wave into a solid angle of $2\pi$ is oriented along the $\varphi$ axis. The momentum has no other components. This momentum is equal in magnitude to one half of the momentum the same element would have if it moved as a whole in one direction. We can now readily calculate the angular momentum of the element $dm$. Using the conventional expression for the angular momentum, $\mathbf M=[\mathbf r\times\mathbf p]$, and bearing in mind that the resultant momentum of the element $dm$ propagating as a wave is directed along the $\varphi$ axis, i.e., perpendicular to the $C$ plane, we arrive at the following expression for the resultant angular momentum of the element $dm$: \begin{equation} \label{2.3.10} d\mathbf M_\Omega=\frac{dmvr}{2}[\mathbf n_r\times\mathbf n_\varphi], \end{equation} where $\mathbf n_r$ is the unit vector along the $r$ axis. We are not using here the relation for the angular momentum in its conventional form $d\mathbf M=dm[\mathbf r\times\mathbf v]$, because the velocity $v$ of an element propagating into a hemisphere has no specific direction. As evident from Eq. (\ref{2.3.10}), the angular momentum $d\mathbf M_\Omega$ lies in the plane $C$ (see Fig. 2.10). 
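The chain (\ref{2.3.6})--(\ref{2.3.9}) can be checked by summing the $\varphi$ projections of the momenta over the hemisphere directly (a minimal numerical sketch; the variable names are generic):

```python
import math

dm, v = 1.0, 1.0       # illustrative units
n = 400                # midpoint grid over eta, xi in [0, pi]
step = math.pi / n

p_phi = 0.0
for i in range(n):
    eta = (i + 0.5) * step
    for j in range(n):
        xi = (j + 0.5) * step
        # part of dm going into the solid angle d(Omega) = sin(eta) d(eta) d(xi),
        # Eq. (2.3.6):
        dm_part = dm * math.sin(eta) * step * step / (2.0 * math.pi)
        # projection of its momentum onto the phi axis: v * sin(eta) * sin(xi)
        p_phi += dm_part * v * math.sin(eta) * math.sin(xi)

print(round(p_phi, 4))   # -> 0.5 = dm*v/2, in agreement with Eq. (2.3.9)
```

The double sum approximates $\frac{v}{2\pi}\int_0^\pi\sin^2\eta\,d\eta\int_0^\pi\sin\xi\,d\xi=\frac{v}{2\pi}\cdot\frac{\pi}{2}\cdot2=\frac{v}{2}$, confirming that the resultant momentum points along the $\varphi$ axis with magnitude $dm\,v/2$.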
\begin{figure} \label{F10} \begin{center} \includegraphics[scale=0.2]{Fig.2.10.eps} \caption{} \end{center} \end{figure} Compare now Eq. (\ref{2.3.10}) with the standard expression for the angular momentum of an element moving as a whole along a circular trajectory of radius $r$ and see what orbit could be identified with the motion of the element as a wave. To avoid confusion, we shall denote the element of mass $dm$ by $dm_b$ $(dm_{body})$ in the case where we consider the element to move as a whole. It should be stressed, however, that $dm$ and $dm_b$ are one and the same element. Because the angular momentum of an element in a Coulomb potential is conserved, one can calculate the momentum at any point we choose. Let us calculate the angular momentum of the element $dm_b$ at the point of the circular orbit in which this element is farthest from the equatorial plane. At this point, the velocity of the element is directed along the $\varphi$ axis. The angular momentum $d\mathbf M_b$ of this element can be written as \begin{equation} \label{2.3.11} d\mathbf M_b=dm_b[\mathbf r\times\mathbf v]= dm_bvr[\mathbf n_r\times\mathbf n_\varphi]. \end{equation} A cursory inspection of Eqs. (\ref{2.3.10}) and (\ref{2.3.11}) reveals that the angular momenta have the same orientation, while in magnitude the angular momentum of the element propagating into the hemisphere is only one half of that of the element moving along a circular orbit. Significantly, $r$ for the element propagating into the hemisphere is the distance from the element to the nucleus, while for the element in circular motion, $r$ is not only the distance from the element to the nucleus but the radius of the circular orbit as well, with the orbit located such that the element under consideration is at the point of the orbit farthest from the equatorial plane.
Thus, to calculate the angular momentum of the element propagating into a solid angle, one can restrict oneself to finding that of the element rotating in the corresponding orbit and taking one half of it. We shall make use of this observation subsequently. As seen from Eqs. (\ref{2.3.10}) and (\ref{2.3.11}), both the $d\mathbf M_\Omega$ and $d\mathbf M_b$ angular momenta lie in the $C$ plane (note that each element has its own $C$ plane). In both Eqs. (\ref{2.3.10}) and (\ref{2.3.11}), the velocity $v$ is the velocity of motion of an individual element $dm$. Significantly, the velocity of motion of an element does not coincide with that of the wave process (in propagation of the same element into a solid angle). Examining the expression for the momentum (\ref{2.3.9}), we see that the velocity of the wave process at a given point is one half the velocity of motion of elements at the same point, and it is directed along the $\varphi$ axis, whereas the velocities of the elements have the same magnitude but are differently directed within the solid angle of $2\pi$. The fact that an element moves not as a whole but rather propagates into a solid angle is accounted for in the expression for the angular momentum through the coefficient 1/2 (compare Eqs. (\ref{2.3.10}) and (\ref{2.3.11}) for circular motion of the element). Expand the angular momentum of the element $dm$ into two components, along and perpendicular to the $Z$ axis (Fig. 2.10). The axial $Z$ component can be written as \begin{equation} \label{2.3.12} dM_Z=dM_\Omega\cdot\sin\vartheta=\frac{dmvr}{2}\sin\vartheta, \end{equation} where $\vartheta$ is the coordinate of point $A$ (see Figs. 2.8 and 2.10). For the component perpendicular to the $Z$ axis we obtain \begin{equation} \label{2.3.13} dM_{\perp}=dM_\Omega\cdot\cos\vartheta=\frac{dmvr}{2}\cos\vartheta. \end{equation} Let us calculate the angular momentum of the atom as a whole. This can be done by summing up the angular momenta of all the elements.
By virtue of the axial symmetry of the system, the $M_{\perp}$ component vanishes to leave the $M_Z$ one only: \begin{equation} \label{2.3.14} M_Z=\int\frac{1}{2}vr\sin\vartheta dm=\int\frac{1}{2}mvr\sin\vartheta dV. \end{equation} In this expression, one has to substitute for the velocity $v$ the velocity of the element moving over the circle, i.e., actually of $dm_b$. This was substantiated earlier by a comparison of Eqs. (\ref{2.3.10}) and (\ref{2.3.11}). The velocity of a circularly rotating element can be taken from Eq. (\ref{1.4.18}), because this equation describes the motion of an element $dm$ over a circle of radius $r=a\tau$. In this formula, however, one has to drop the index $\varphi$, because the circular motion of an element with which we compare propagation of an element into the hemisphere can occur in any plane passing through the nucleus. As already mentioned, the circular orbit which is associated with propagation of an element into a solid angle should be positioned such that the element under consideration is at the point farthest from the equatorial plane (it is in this case that the angular momentum $d\mathbf M_b$ will be confined to the $C$ plane). This provides a one-to-one correspondence between the position of an element and of the orbit under consideration (excluding the elements at the equator). Indeed, a given element can be crossed by a multitude of other orbits having different angles of tilt. In all of these orbits, however, the point farthest from the equatorial plane will not coincide with the element we are considering. Therefore, these orbits will contribute to other elements. This leaves only one orbit for the given element. Hence, in such a consideration each element will be identified with one and only one orbit, and in integration over elements we will not run the risk of counting some orbits more than once.
As for an element being capable of moving along any tilted orbit, this has already been taken into account in deriving the expression (\ref{2.3.10}) for the angular momentum $d\mathbf M_\Omega$, where the coefficient 1/2 was obtained. Equation (\ref{2.3.14}) has a fairly simple structure. It can be revealed readily by considering an equivalent, circularly rotating element $dm_b$. The factor $\sin\vartheta$ accounts for the tilt of the trajectory of the equivalent circularly rotating element, and the coefficient 1/2 for the fact that the real element propagates into a solid angle rather than rotating circularly. The other terms of Eq. (\ref{2.3.14}) make up the standard expression for the angular momentum. Substituting the expressions for velocity (\ref{1.4.18}) and for the density of mass, (\ref{2.1.11}) or (\ref{2.1.12}), into Eq. (\ref{2.3.14}), we come to \begin{equation} \label{2.3.20} M'_Z=\frac{1}{2}\int\limits_0^\infty \frac{4m_e}{a^3\pi^2}e^{-2\tau} \frac{\alpha c}{\sqrt{\tau}}a^4\tau^3d\tau \int\limits_0^\pi \sin^3 \vartheta d\vartheta \int\limits_0^{2\pi} d\varphi , \end{equation} \begin{equation} \label{2.3.21} M''_Z=\frac{1}{2}\int\limits_0^\infty \frac{3m_e}{a^3 2\pi}e^{-2\tau} \frac{\alpha c}{\sqrt{\tau}}a^4\tau^3d\tau \int\limits_0^\pi \sin^4 \vartheta d\vartheta \int\limits_0^{2\pi} d\varphi . \end{equation} Our calculations finally yield \begin{equation} \label{2.3.22} M'_Z=\hbar\cdot0.499,\qquad M''_Z=\hbar\cdot0.519. \end{equation} The first value fits the available experimental data better. At the same time, we will not yet reject the second value, which corresponds to the charge distribution expressed in terms of spherical functions. We have to bear in mind the following points. First, these values were obtained in the nonrelativistic approximation. Second, in nonrelativistic quantum mechanics the angular momentum of the ground state is zero, which is in conflict with experiment. We are turning now to the virial theorem (\ref{1.4.15}).
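As a cross-check, the numerical values quoted in Eq. (\ref{2.3.22}) can be reproduced directly in a few lines. A minimal Python sketch (its reduction to pure numbers assumes $m_e\alpha c\,a=\hbar$, i.e. $a=\hbar/(m_e\alpha c)$, an assumption about the notation here; the angular and radial integrals are inserted in closed form):

```python
import math

# Numerical check of Eqs. (2.3.20)-(2.3.22), expressed in units of hbar.
# Assumption: a = hbar/(m_e*alpha*c), so the product m_e*alpha*c*a = hbar
# and all dimensional prefactors cancel.
R = math.gamma(3.5) / 2**3.5     # int_0^inf e^{-2 tau} tau^{5/2} d(tau)
S3 = 4.0 / 3.0                   # int_0^pi sin^3(theta) d(theta)
S4 = 3.0 * math.pi / 8.0         # int_0^pi sin^4(theta) d(theta)
Mz1 = 0.5 * (4 / math.pi**2) * R * S3 * 2 * math.pi       # M'_Z / hbar
Mz2 = 0.5 * (3 / (2 * math.pi)) * R * S4 * 2 * math.pi    # M''_Z / hbar
print(round(Mz1, 3), round(Mz2, 3))  # -> 0.499 0.519
```

The printed numbers reproduce Eq. (\ref{2.3.22}) to the quoted precision.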
Strictly speaking, we had no grounds for applying this equation in the first Chapter of the Paper. The virial theorem in the form of Eq. (\ref{1.4.15}) was used to describe the motion of an element of mass/charge in a Coulomb potential well. In the first part of the Paper, however, an additional statement was tacitly introduced, namely that the velocity of an element $v_\varphi$ is directed along the $\varphi$ axis only. This is in clear conflict with the conditions under which the theorem can be applied. The above reasoning and the assumption that charges can interpenetrate lift this additional statement. Charges can move in a Coulomb field along any allowed orbit. In this case, the virial theorem in the form of Eq. (\ref{1.4.15}) is certainly applicable to charges in a Coulomb potential well. On these grounds one more comment can be made. In the analysis of Eq. (\ref{1.1.5}) it was pointed out, in particular, that it resembles in form the Schr\"odinger equation, the only difference being that it contains a coefficient 4, whereas in the Schr\"odinger equation the coefficient is 2. We use Eq. (\ref{1.1.5}) to derive the charge distribution and, hence, the {\itshape\bfseries potential} energy of an electron. This potential energy coincides with the eigenvalue of Eq. (\ref{1.1.5}). If we substituted the coefficient 2 in place of 4 in Eq. (\ref{1.1.5}) and calculated the energy as the eigenvalue of this new Eq. (\ref{1.1.5}), we would have obtained the numerical value of the {\itshape\bfseries total} electron energy. A decrease of the coefficient by a factor of two brings about a corresponding decrease of the calculated energy to one half. In actual fact, this is simply the result of our having tacitly added to Eq. (\ref{1.1.5}) the virial theorem, because by this theorem the total energy of the electron is one half of its potential energy. While this is acceptable if our goal is to numerically calculate the values we are interested in, straightforward logic suggests that Eq.
(\ref{1.1.5}) should have the coefficient 4, because it is the electron {\itshape\bfseries potential} energy that we calculate with this equation. \section{Comment on the absence of emission from a stationary orbit} This is a short comment, but we believe it to be important enough to be presented in a separate paragraph. By quantum mechanics, a point charge rotates around a nucleus. (More precisely, an electron is in a state having a definite energy and a definite projection of its angular momentum on the $Z$ axis.) In the framework of electrodynamics, a rotating electron should emit radiation, lose energy and eventually fall onto the nucleus. In actual fact this just does not happen. To provide a proper explanation for this, it was assumed that the electron does not radiate when in a stationary orbit. Actually, this is one of the {\itshape\bfseries postulates} of quantum mechanics. The stationary orbit (more precisely, the steady state) is calculated by solving the Schr\"odinger equation. Considered in the frame of mathematics, there is nothing that could be questioned; indeed, if, by the Schr\"odinger equation, there is a solution in which an electron is in a steady state with a certain energy, then this energy does not change and, hence, the electron will not radiate. Viewed in the physical context, however, it is not clear in what way a stationary electron orbit differs from a non-stationary one. Why does an electron residing in one orbit radiate, while in another one it does not? The above assumption of the existence of a {\itshape\bfseries distributed} electron charge permits one to {\itshape\bfseries lift this postulate}. Indeed, each element of mass/charge moves in accordance with the laws of theoretical mechanics and electrodynamics. Each individual element rotating about a nucleus is involved in periodic motion and, thus, has to radiate. But the elements make up a distributed charge. Now the motion of a distributed charge is no longer periodic.
Although each element moves along its own separate trajectory, the motion of the distributed charge as a whole is actually a common circular motion of the total charge. The radiation of variable fields by one element is canceled by that of the other elements. Significantly, the overall motion of the distributed charge as a whole (which now is no longer periodic) generates a magnetic field. This is reflected in the atom having a magnetic moment. An analog of such motion could be a set of closed currents, which are known not to radiate periodic fields while maintaining a constant magnetic field. \section{Conclusion} The time has come for summing up the outcome of our reasoning. In the first part of the Paper, we have put forward an assumption that the electron in an atom, rather than being a point object, is a {\itshape\bfseries distributed charge}. Taken as a whole, this charge should be equal to that of the electron; therefore, in all cases there should exist in the atom the $S$ {\itshape\bfseries-state} of the distributed charge. We put forward the {\itshape\bfseries equation of wave mechanics}, i.e., the equation which a charge distribution should obey. The solutions of this equation derived for the Coulomb potential of the nucleus identified the shape of the charge distribution in an atom. These solutions were normalized to the {\itshape\bfseries electron} charge (not to unity). We further invoked standard methods in use in electrostatics to derive the values of the {\itshape\bfseries potential energy} of interaction of a distributed charge with the nucleus for three states of the hydrogen atom, which were found to coincide with those well known from quantum mechanics. More than that, these values coincide with the eigenvalues of the equation of wave mechanics.
Standard methods employed in electrostatics were used to find the {\itshape\bfseries fields and potentials } of distributed charges and to calculate again the energies of interaction of distributed charges with a nucleus. This has led to another assumption, namely that the electron represents actually not only a distributed charge but a {\itshape\bfseries distributed mass} as well. The distributed mass assumes a distribution of the same shape as the distributed charge, and its motion coincides with that of the distributed charge. It thus turns out that the electron is a distributed {\itshape\bfseries charge/mass object}. The distributed mass should be normalized to the {\itshape\bfseries electron mass} (rather than to unity). An analysis of various versions of motion of distributed charge and mass led to the conclusion that the velocity of the elements of charge/mass should {\itshape\bfseries increase} as they {\itshape\bfseries approach} the nucleus (if the charge of an electron behaved as a solid body, the velocity of the motion of its elements would be {\itshape\bfseries increasing with distance} from the nucleus). An assumption was made concerning the velocity distribution for the charge/mass distributions found, thus making it possible to calculate the kinetic and total energies for three states of the hydrogen atom. These values were demonstrated to coincide with those known from quantum mechanics. Next, the same velocity distributions were used to calculate the angular momenta for the same states of the hydrogen atom. It was found that the angular momenta {\itshape\bfseries do not coincide} with the values known from experiment. This suggested that the atom possibly does not possess spherical symmetry. In the second part of the Paper, two versions of a {\itshape\bfseries non-spherical} charge distribution in an atom were advanced. It was shown that while the charge is certainly non-spherical, the potentials of these charges are close to spherical.
Said otherwise, such an atom would look spherical to an observer. Moreover, as one comes closer to the nucleus, this potential approaches ever more nearly the Coulomb potential. These distributions were used to calculate the potential, kinetic, and total energies for three states of the hydrogen atom. All these values were demonstrated to coincide with those well known from quantum mechanics. The angular momenta did not, however, agree with experiment. This initiated a deeper analysis of the behavior of elements of charge/mass in a Coulomb potential well. By the laws of theoretical mechanics, elements of mass in a Coulomb potential well can move along not only circular but also elliptical trajectories. In order for such motion to become realistic, however, two more suggestions had to be made. First: {\itshape\bfseries charges can interpenetrate one another} (as electromagnetic waves pass through one another). Second: if there are orbits in which an element possesses the same energies, this element can move in any of these orbits, and, more than that, {\itshape\bfseries simultaneously along all these orbits}. In other words, this is no longer the motion of individual elements; it is rather the {\itshape\bfseries motion of a wave}. This motion resembles the {\itshape\bfseries Huygens--Fresnel principle}, the only difference being that the trajectories of the elements obey the laws of theoretical mechanics and are in effect closed curves. These assumptions formed a basis on which the angular momentum of the hydrogen atom in the ground state was calculated, and {\itshape\bfseries was found to be equal} to $\hbar/2$, in excellent agreement with the experimental data. The experimental observation that the angular momentum of $S$ states is $\hbar/2$ was introduced into theoretical quantum mechanics as a postulate, as an intrinsic angular momentum of the electron (spin).
(Recall that in non-relativistic quantum mechanics the angular momentum of the $S$ states is zero.) Using this mechanism, it was found possible to {\itshape\bfseries calculate} the angular momentum of the ground state of the hydrogen atom drawing solely on the laws of {\itshape\bfseries theoretical mechanics}. Each element in an atom undergoes periodic motion along a circle or ellipse. The motion of all elements as a whole is, however, no longer periodic, and represents rather a circular motion of the charge as a whole. There being no common periodic motion of the charges, the electron (distributed charge) residing in a steady state {\itshape\bfseries should not radiate}. The only thing that exists is a constant magnetic field generated by the common circular motion of the charges. For quantum mechanics, the statement that the electron in a stationary orbit does not radiate is essentially a postulate. Thus, {\itshape\bfseries this postulate can now be lifted}. Now how could one visualize an atom containing an electron in the form of a distributed charge? The most pictorial way would possibly be to compare it with a drop of a liquid. Elements of the liquid within the drop move in different directions but, when summed, produce rotation of the drop as a whole. This rotation could be detected only by somehow labeling an element of the liquid. If the element is not labeled, rotation of the drop cannot be detected. In other words, an observer would believe this drop to be at rest. One should bear in mind, however, that the density of the drop increases toward the center, as does the velocity of motion of the elements of the liquid. \section{Appendix} {\bfseries{Equations for calculation of the potential}} (the notation used is that of Sec. 2.2). Point of observation -- $\mathrm r$, point of integration -- $\mathrm r'$.
\noindent For $\mathrm r>\mathrm r'$: $$ \mathrm{U}_Q(\mathrm{r,}\vartheta)=\sum_{l=0}^\infty \sqrt{\frac{4\pi}{2l+1}}\cdot \frac{Q_l({\mathrm r}) Y_l(\vartheta)}{\mathrm r^{l+1}}, $$ where the multipole moment $Q_l({\mathrm r})$: $$ Q_l({\mathrm r})=\sqrt\frac{4\pi}{2l+1}\int_0^{\mathrm r}\rho (\mathrm r',\vartheta')\mathrm r'^lY_l^*(\vartheta')dV'. $$ For $\mathrm r<\mathrm r'$: $$ \mathrm{U}_G(\mathrm{r,}\vartheta)=\sum_{l=0}^\infty \sqrt{\frac{4\pi}{2l+1}}\cdot {\mathrm r^l}G_l({\mathrm r}) Y_l(\vartheta), $$ where the multipole moment $G_l({\mathrm r})$: $$ G_l({\mathrm r})=\sqrt\frac{4\pi}{2l+1}\int_{\mathrm r}^\infty\frac{\rho (\mathrm r',\vartheta')}{\mathrm r'^{l+1}}Y_l^*(\vartheta')dV'. $$ Functions $Q_l({\mathrm r})$ and $G_l({\mathrm r})$ depend on $\mathrm r$ in the upper and lower limits of integration as on a parameter. \noindent {\bfseries{Spherical functions}}: $$ Y_l (\vartheta)=\sqrt\frac{2l+1}{4\pi}P_l(\cos\vartheta) $$ {\bfseries{Legendre polynomials}}: $$ P_0=1 $$ $$ P_2(\cos\vartheta)=\frac{1}{2}\cdot(3\cos^2\vartheta-1) $$ $$ P_4(\cos\vartheta)=\frac{1}{8}\cdot(35\cos^4\vartheta- 30\cos^2\vartheta+3) $$ $$ P_6(\cos\vartheta)=\frac{1}{48}\cdot(63\cdot11\cdot\cos^6 \vartheta-15\cdot63\cdot\cos^4\vartheta+15\cdot21\cdot \cos^2\vartheta-15) $$ $$ P_8(\cos\vartheta)=\frac{1}{48\cdot8}\cdot(99\cdot13\cdot15 \cdot\cos^8\vartheta-99\cdot28\cdot13\cdot\cos^6\vartheta+ 99\cdot30\cdot7\cdot\cos^4\vartheta-63\cdot15\cdot4\cdot \cos^2\vartheta+15\cdot7) $$ {\bfseries{Shape of distributed charge in this notation}}: $$ \rho'_{1NS}=-\frac{4}{\pi^2}e^{-2\mathrm r}\sin\vartheta,\qquad \rho''_{1NS}=-\frac{3}{2\pi}e^{-2\mathrm r}\sin^2\vartheta. $$ The potential at a point $\mathrm r$ is a sum of potentials calculated for $\mathrm r>\mathrm r'$ and $\mathrm r<\mathrm r'$: $\mathrm{U(r,}\vartheta)=\mathrm{U}_Q(\mathrm{r,}\vartheta)+ \mathrm{U}_G(\mathrm{r,}\vartheta)$. \addcontentsline{toc}{chapter}{Bibliography}
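The factored coefficients listed above for $P_6$ and $P_8$ can be checked against the standard three-term recurrence $(l+1)P_{l+1}(x)=(2l+1)xP_l(x)-lP_{l-1}(x)$. A short Python sketch (the recurrence is standard; the helper names are ours):

```python
import math

# Verify the factored Legendre polynomials of the Appendix against the
# standard recurrence (l+1) P_{l+1} = (2l+1) x P_l - l P_{l-1}.
def legendre(n, x):
    p0, p1 = 1.0, x
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p0 if n == 0 else p1

def P6_app(x):  # P_6 as printed in the Appendix
    return (63 * 11 * x**6 - 15 * 63 * x**4 + 15 * 21 * x**2 - 15) / 48

def P8_app(x):  # P_8 as printed in the Appendix
    return (99 * 13 * 15 * x**8 - 99 * 28 * 13 * x**6
            + 99 * 30 * 7 * x**4 - 63 * 15 * 4 * x**2 + 15 * 7) / (48 * 8)

for x in [math.cos(t / 10) for t in range(32)]:
    assert abs(legendre(6, x) - P6_app(x)) < 1e-12
    assert abs(legendre(8, x) - P8_app(x)) < 1e-12
print("appendix coefficients consistent")
```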
\section{Introduction} Let $(M,g)$ be a spacetime, i.e. a connected time-oriented Lorentzian manifold. The set of null geodesics of $(M,g)$ naturally carries a topology as the quotient of the null cones $\{v\in TM|\; g(v,v)=0,\; v\neq 0\}$ by the actions of the geodesic flow and the Euler vector field. It is shown in \cite{Low90} that the space of null geodesics $\mathcal{N}_g$ retains a smooth structure from the tangent bundle if the spacetime is strongly causal. In general, though, this smooth structure does not induce a manifold structure since the topology might not be Hausdorff. A simple example is given by Minkowski space from which one point is deleted. Up to this point only two classes of spacetimes were known for which $\mathcal{N}_g$ is a smooth manifold. On the one hand are the globally hyperbolic spacetimes, for which the space of null geodesics is diffeomorphic to the spherical tangent bundle of any Cauchy hypersurface, see \cite{Low}. On the other hand are the {\it Zollfrei} spacetimes, see \cite{guillemin,suhr13}. Zollfrei spacetimes are compact Lorentzian manifolds such that the geodesic flow restricted to the null cones induces a fibration by circles. The geodesic flow thus projects to a free circle action, which readily implies that the orbit space is a smooth manifold. If the space of null geodesics is not Hausdorff, it is shown in \cite{Low90} that the spacetime must admit a {\it naked singularity}, i.e. there exists a PIP that contains a TIP (see \cite{FHS} for definitions). In \cite{Low90-2} it is shown that the Hausdorff property of $\mathcal{N}_g$ is equivalent to the {\it null pseudoconvexity} of the spacetime. Null pseudoconvexity is a causal condition which up to this point has not been placed in the causal hierarchy, see \cite{Sanchez}. In this context it is interesting to determine its precise position in the causal hierarchy.
Motivated by work on the interplay between causal relations in spacetimes and the contact geometry of $\mathcal{N}_g$, Chernov posed in \cite{chernov18} two conjectures on causally simple spacetimes and their spaces of null geodesics. More precisely, he conjectured that (1) every causally simple spacetime admits a conformal embedding into a globally hyperbolic one and (2) if such a conformal embedding exists, the space of null geodesics embeds as an open (contact) submanifold. In Section \ref{results} below counterexamples to both conjectures are discussed. The main purpose of this article, though, is to give a proof of a weaker formulation of the second conjecture (Theorem \ref{cor1} below), saying that if a causally simple spacetime conformally embeds into a globally hyperbolic one, the space of null geodesics is Hausdorff, thus showing that in this case the space of null geodesics is a smooth contact manifold. With the richness of examples of such spacetimes one can expect new classes of contact manifolds to appear, possibly with exotic contact geometric properties. In the contraposition, Theorem \ref{cor1} gives an obstruction to the existence of a conformal embedding of a causally simple spacetime into a globally hyperbolic one. The construction in Theorem \ref{T2} provides examples of causally simple spacetimes whose space of null geodesics is not Hausdorff and which are therefore not conformally embeddable into a globally hyperbolic spacetime. \section{Results}\label{results} Let $(M,g)$ be a spacetime, i.e. a time-oriented Lorentzian manifold. The space of null geodesics $\mathcal{N}_g$ of $(M,g)$ is defined as follows, see \cite{Low}; the basic outline is given here for the convenience of the reader. The metric $g$ induces a Hamiltonian function $$E_g\colon T^*M\to \mathbb R,\; \alpha\mapsto g^*(\alpha,\alpha)$$ where $g^*$ denotes the dual metric of $g$.
The Hamiltonian flow of $E_g$, also called the cogeodesic flow, with respect to the canonical symplectic structure on $T^*M$ is dual to the geodesic flow of $(M,g)$ via the Legendre transform of $g$. Denote with $X_g$ the generator, i.e. the symplectic gradient of $E_g$, of the Hamiltonian flow of $E_g$. It is well known that the cogeodesic flow is tangent to the level sets of $E_g$. Thus the {\it future pointing dual null cones} $$\mathcal{L}^*M:=\{\alpha\in T^*M|\;g^*(\alpha,\alpha)=0,\; \alpha\neq 0, \alpha=g(v,.)\text{ for $v$ future pointing}\}$$ are preserved by the flow. One decisive feature which sets $\mathcal{L}^*M$ apart from the other level sets of $E_g$ is that it is invariant under the homotheties $\alpha\mapsto t\alpha$ for $t> 0$. The Euler vector field $\xi$ is thus tangent to the dual null cones as well. It is easy to see that the commutator of $X_g$ and $\xi$ is collinear with $X_g$, i.e. by Frobenius' Theorem their span forms an integrable distribution on the cotangent bundle and by restriction an integrable distribution on $\mathcal{L}^*M$. Denote with $\mathcal{F}_{\text{null}}$ the induced foliation of $\mathcal{L}^*M$. By construction a leaf of $\mathcal{F}_{\text{null}}$ consists of the cotangents $g(\dot\gamma(t),.)$ to a null geodesic $\gamma$ and all its orientation preserving affine reparameterizations. Denote the leaf space of $\mathcal{F}_\text{null}$ with $\mathcal{N}_g$. The leaf space can be identified with the space of null geodesics that coincide up to affine parametrization. Equip $\mathcal{N}_g$ with the quotient topology relative to $\mathcal{F}_\text{null}$. The quotient topology on $\mathcal{N}_g$ can be characterized via a definition of convergence of sequences: One says that the sequence $\{\kappa_n\}_{n\in\mathbb N}\subset \mathcal{N}_g$ {\it converges to} $\kappa\in \mathcal{N}_g$ if there exist affine parametrizations $\eta_n$ of $\kappa_n$ and $\eta$ of $\kappa$ such that $\dot\eta_n(0)\to \dot\eta(0)$.
If $(M,g)$ is strongly causal every leaf of $\mathcal{F}_\text{null}$ is closed, i.e. the leaves are $2$-dimensional submanifolds of $\mathcal{L}^*M$. In this case the leaf space $\mathcal{N}_g$ inherits a smooth structure from $\mathcal{L}^*M$, see \cite[Proposition 11.4.2]{Brickell}. Recall that a smooth structure on a space $\mathcal{M}$ is by definition a maximal atlas of homeomorphisms $\phi\colon U^\phi\to V^\phi$, called charts, between open sets $U^\phi\subset \mathcal{M}$ and $V^\phi\subset \mathbb R^n$ such that every change of chart is a smooth map between open subsets of Euclidean space. Thus all notions of calculus are well defined in the case of smooth structures as well. \begin{prop}\label{propnull} If the smooth structure of $\mathcal{L}^*M$ descends to $\mathcal{N}_g$, then $\mathcal{N}_g$ inherits a canonical contact structure from the kernel of the canonical $1$-form $\theta$ on $T^*M$. \end{prop} \begin{proof} First note that both $X_g$ and $\xi$ lie in $\ker\theta$ along $\mathcal{L}^*M$. Further note that the cogeodesic flow preserves $\theta$ and the flow of the Euler vector field preserves $\ker\theta$. Thus the distribution $\ker\theta$ induces a well-defined hyperplane distribution on the quotient of $\mathcal{L}^*M$ by the action of $X_g$ and $\xi$. Now fix $\alpha\in\mathcal{L}^*M$. Choose tangent vectors $V_1,\ldots,V_{2n-3}\in T\mathcal{L}^*M_\alpha$ such that $$\{X_g,\xi,V_1,\ldots,V_{2n-3}\}$$ forms a basis of $T\mathcal{L}^*M_\alpha$. Further choose $W\in TT^*M_\alpha$ with $d\theta(W,\xi)=d\theta(W,V_i)=0$ for all $i=1,\ldots,2n-3$. Then one has \begin{align*} 0\neq &(d\theta)^n(W,X_g,\xi,V_1,\ldots,V_{2n-3})\\ =&d\theta(W,X_g)\cdot (d\theta)^{n-1}(\xi,V_1,\ldots,V_{2n-3})\\ =&-dE_g(W)\cdot \theta\wedge (d\theta)^{n-2}(V_1,\ldots,V_{2n-3}), \end{align*} i.e. $\ker\theta\cap T\mathcal{L}^*M$ is a well defined smooth distribution of hyperplanes in $T\mathcal{L}^*M$ which induces a well defined contact structure on $\mathcal{N}_g$.
\end{proof} In case the spacetime $(M,g)$ is globally hyperbolic it is well known \cite{Low} that $\mathcal{N}_g$ with the induced contact structure is contactomorphic to the unit tangent bundle of any smooth Cauchy hypersurface in $(M,g)$ with its canonical contact structure. For a spacetime $(\mathcal{M},\mathcal{G})$ denote with $J^+_\mathcal{G}\subset \mathcal{M}\times \mathcal{M}$ the causal relation in $(\mathcal{M},\mathcal{G})$. Similarly denote with $I^{+}_\mathcal{G}\subset \mathcal{M}\times \mathcal{M}$ the chronological relation and with $E^{+}_\mathcal{G}:=J^{+}_\mathcal{G}\setminus I^{+}_\mathcal{G}$ the horismos relation of $(\mathcal{M},\mathcal{G})$, see \cite{Sanchez}. \begin{prop}\label{prop2} Let $(M,g)$ be a spacetime such that the space of null geodesics $\mathcal{N}_g$ is Hausdorff. Then the horismos $E_g^+$ is closed. \end{prop} \begin{remark} The fact that $E_g^+(p):=E^+_g\cap \{p\}\times M$ and $E^-_g(p):=E^+_g\cap M\times \{p\}$ are closed for every $p\in M$ does not in general imply that $E^+_g$ is closed as a subset of $M\times M$. Consider for example the Minkowski space $\mathbb{L}^2$ with a point removed. Then $E^{\pm}_g(p)$ is closed for every $p$, but there exist sequences $(p_n,q_n)\in E^+_g$ with $(p_n,q_n)\rightarrow (p,q)\in M\times M$ and $(p,q)\notin E^+_g$. \end{remark} \begin{proof}[Proof of Proposition \ref{prop2}] Assume that $\mathcal{N}_g$ is Hausdorff. Let $$\{(p_n,q_n)\}_{n\in \mathbb N}\subset E_g^+$$ be a convergent sequence with limit $(p,q)\in M\times M$. By definition there exists a sequence $$\{\gamma_n\colon [0,T_n]\to M\}_{n\in\mathbb N}$$ of null geodesics connecting $p_n$ with $q_n$. Up to passing to a subsequence one can assume that $\dot\gamma_n(0)$ and $\dot\gamma_n(T_n)$, normalized with respect to a Riemannian metric, converge to null vectors $v\in TM_p$ and $w\in TM_q$, respectively.
That is equivalent to saying that the sequence $\{[\gamma_n]\}_{n\in\mathbb N}$ converges in $\mathcal{N}_g$ to the classes represented by $\gamma_{v}$ and $\gamma_{w}$, where $\dot\gamma_{v}(0):=v$ and $\dot\gamma_w(0):=w$ define the geodesics. Since $\mathcal{N}_g$ is Hausdorff one concludes $[\gamma_{v}]=[\gamma_{w}]$. Thus the point $q$ lies on $\gamma_{v}$, which shows $(p,q)\in J^+_g$. It is obvious that $(p,q)\notin I^+_g$, since otherwise $(p_n,q_n)\in I^+_g$ would follow for sufficiently large $n$, which contradicts the assumption $(p_n,q_n)\in E^+_g=J^+_g\setminus I^+_g$. Therefore one has $(p,q)\in E^+_g$. \end{proof} \begin{definition}[\cite{Sanchez}] A spacetime $(M,g)$ is {\it causally simple} if it is causal and $J^+_g$ is closed. \end{definition} \begin{prop}\label{prop1} Let $(M,g)$ be a simply connected two dimensional spacetime. Then $\mathcal{N}_g$ is a smooth manifold if and only if $(M,g)$ is causally simple. \end{prop} The conformal class of $g$ is defined as $$[g]:=\{e^f g|f\in\mathcal{C}^{\infty}(M,\real{})\}.$$ Let $(M,g)$ and $(N,h)$ be smooth Lorentzian manifolds of the same dimension $m\geq 2$. Assume that $(M,g)$ embeds conformally as an open subset into $N$, i.e. there exists an open embedding $i\colon M\hookrightarrow N$ such that $i^{\ast}h=e^f g$ for some function $\abb{f}{M}{\real{}}$. \begin{theorem}\label{cor1} Let $M,N$ be smooth manifolds of the same dimension. Assume that $(N,h)$ is globally hyperbolic and $(M,g)$ embeds conformally into $(N,h)$. If $(M,g)$ is causally simple, the space of null geodesics $\mathcal{N}_g$ is a smooth contact manifold. \end{theorem} \begin{remark} If $(M,g)$ conformally embeds into $(N,h)$ the canonical map $\mathcal{N}_g\hookrightarrow \mathcal{N}_h$ is an immersion, provided both spaces have a smooth structure. Taking this observation into account, Theorem \ref{cor1} confirms a weaker version of \cite[Conjecture 3.7]{chernov18}.
\end{remark} \begin{proof}[Proof of Theorem \ref{cor1}] According to \cite{Low90} the space $\mathcal{N}_g$ inherits a smooth structure if $(M,g)$ is strongly causal. Any causally simple spacetime is strongly causal, see \cite{Sanchez}. The canonical $1$-form on $T^*M$ induces a contact structure on $\mathcal{N}_g$ by Proposition \ref{propnull}. The topology of $\mathcal{N}_g$ is Hausdorff by the next proposition. \end{proof} \begin{prop}\label{T1} Let $M,N$ be smooth manifolds of the same dimension. Assume that $(N,h)$ is globally hyperbolic and $(M,g)$ embeds conformally into $(N,h)$. If $(M,g)$ is causally simple, the space of null geodesics $\mathcal{N}_g$ is Hausdorff. \end{prop} \begin{exmp} If $(M,g)$ embeds conformally into $(N,h)$, the space $\mathcal{N}_g$ does not in general embed into $\mathcal{N}_h$. Consider the two-dimensional Minkowski spacetime $(\overline{N},\overline{h})$ with $\overline{N}:=\mathbb R^2$ and the Lorentzian inner product $\overline{h}=dx^2-dy^2$. Let $M$ be the open interior of the convex hull of $\{(0,0),(3/2,1/2), (1,1),(1/2,-1/2)\}$. Clearly $(M,dx^2-dy^2)$ is globally hyperbolic, hence causally simple. Next consider the quotient $N:=\overline{N}/\mathbb Z$, where $\mathbb Z$ acts via $\mathbb Z\times\mathbb R^2\to \mathbb R^2$, $(k,(x,y))\mapsto (x+k,y)$. Since the action is isometric for $\overline{h}$, a Lorentzian metric is induced on $N$. This metric is globally hyperbolic as well. Note that the canonical projection $\overline{N}\to N$ is a diffeomorphism from $M$ onto its image, which will be denoted with $M$ as well. With this it follows that $M\subset N$ is globally hyperbolic. The null geodesic in $N$ which lifts to the null geodesic through $(1/4,1/4)$ with direction $(-1,1)$ intersects $M\subset N$ twice. Therefore the map $\mathcal{N}_g\to \mathcal{N}_h$ induced by the inclusion is not injective.
This shows that one cannot expect an embedding of $\mathcal{N}_g$ into $\mathcal{N}_h$ even if $(M,g)$ conformally embeds into $(N,h)$, thus giving a counterexample to \cite[Conjecture 3.7]{chernov18}. \end{exmp} Looking at the assumptions of Theorem \ref{cor1}, one may wonder whether it is necessary to assume the conformal embedding into a globally hyperbolic spacetime, or whether the space of null geodesics of every causally simple spacetime is Hausdorff. The following construction will show that there are causally simple spacetimes which both do not embed conformally into a globally hyperbolic spacetime and whose space of null geodesics is not Hausdorff. The constructed spacetime thus disproves \cite[Conjecture 3.6]{chernov18}. Consider a smooth function $r\colon \mathbb R\to\mathbb R$ with $r|_{(0,1)} >0$, $r(0)=r(1)=0$ and $|r'(0)|,|r'(1)|<\frac{1}{2\pi}$. The graph of $r|_{(0,1)}$ defines a surface of revolution $\Sigma$ parametrized by $$X\colon (0,1)\times \mathbb R\to \mathbb R^3,\; (x,\phi)\mapsto (x,r(x)\cos\phi,r(x)\sin\phi).$$ The induced metric on $\Sigma$ is given by $$k=\left[1+(r'(x))^2\right]dx^2+r^2(x)d\phi^2.$$ \begin{theorem}\label{T2} The spacetime $$(M,g)=(\mathbb R\times \Sigma,-dt^2+k)$$ is causally simple. Further, the space of null geodesics of $(M,g)$ is not Hausdorff and $(M,g)$ does not admit a conformal embedding into a globally hyperbolic spacetime. \end{theorem} \section{Proofs} \subsection{Proof of Proposition \ref{prop1}} For orientable $2$-dimensional spacetimes the co-null cones are the union of two transversal $1$-dimensional co-distributions. Thus the space of null geodesics is the union of the leaf spaces of two transversal foliations of $M$. Assume first that $(M,g)$ is causally simple. By \cite[Proposition 11.4.2]{Brickell} it suffices to show that $\mathcal{N}_g$ is Hausdorff. Since the quotient topology on $\mathcal{N}_g$ is second countable, it suffices to show that limits of sequences are unique.
Let $\kappa^1,\kappa^2\in \mathcal{N}_g$ and $\{\kappa_n\}_{n\in\mathbb N}\subset \mathcal{N}_g$ be a sequence with $\kappa_n\to \kappa^1$ and $\kappa_n\to \kappa^2$. Choose parametrizations $\eta^1\colon J_1\to M$, $\eta^2\colon J_2\to M$ and $\eta_n\colon J_n\to M$ of $\kappa^1$, $\kappa^2$ and $\kappa_n$, respectively, and $s\in J_1\cap J_n$ and $t\in J_2\cap J_n$ with $\dot\eta_n(s)\to \dot\eta^1(s)$ and $\dot\eta_n(t)\to \dot\eta^2(t)$. By relabelling $\eta^1$ and $\eta^2$ one can assume that $s<t$. Since $(M,g)$ is causally simple one has $(\eta^1(s),\eta^2(t))\in J^+_M$. It is well known that in $2$-dimensional spacetimes no pair of points is conjugate along a null geodesic. Further, since $M$ is simply connected and the null geodesics form two transversal foliations of $M$, no null geodesic of $(M,g)$ has cut points. Thus a null geodesic is, up to parametrization, the unique causal curve connecting any pair of points on it. This shows $\eta^1\equiv \eta^2$, i.e. $\kappa^1=\kappa^2$. Thus limits in $\mathcal{N}_g$ are unique, i.e. the quotient topology on $\mathcal{N}_g$ is Hausdorff. Now assume that $\mathcal{N}_g$ is a smooth manifold. Since every leaf of $\mathcal{F}_\text{null}$ is connected, the manifold $\mathcal{N}_g$ is itself simply connected. Thus $\mathcal{N}_g$ is diffeomorphic to the disjoint union of two open intervals $I_1, I_2$. Denote with $f_i\colon M\to I_i$ for $i=1,2$ the canonical maps. Note that both maps are smooth. By switching the orientation if necessary one can assume that $df_i(v)\ge 0$ for future pointing $v\in TM$ and $i=1,2$. It follows that $v\in TM$ is future pointing if and only if $df_1(v)\ge 0$ and $df_2(v)\ge 0$. \begin{lemma} For $p,q\in M$ one has $(p,q)\in J^+_M$ if and only if $f_1(q)\ge f_1(p)$ and $f_2(q)\ge f_2(p)$. \end{lemma} \begin{proof} Assume that $(p,q)\in J^+_M$. Let $\gamma\colon [0,1]\to M$ be a future pointing curve between $p$ and $q$.
Then one has $$f_i(q)-f_i(p)=\int_0^1 df_i(\dot\gamma(t))dt \ge 0$$ for $i=1,2$. Now assume $f_1(q)\ge f_1(p)$ and $f_2(q)\ge f_2(p)$. Let $\alpha\colon [0,1]\to M$ be a curve between $p$ and $q$. Set $$t_1:=\max\{t\in [0,1]|\; f_1\circ\alpha (t)=f_1(p)\text{ or }f_2\circ \alpha (t)=f_2(p)\}.$$ There exists a null geodesic $\beta_1\colon [0,t_1]\to M$ between $p$ and $\alpha(t_1)$ since the leaves of the null foliations are connected. By the intermediate value theorem one has $f_i(\alpha(t_1))\ge f_i(p)$ for $i=1,2$, i.e. $\beta_1$ is future pointing. Replace $\alpha|_{[0,t_1]}$ with $\beta_1$. For the resulting $\alpha_1\colon [0,1]\to M$ one has $f_i\circ \alpha_1\ge f_i(p)$ for $i=1,2$. Next let $$t_2:=\min\{t\in [0,1]|\; f_1\circ\alpha_1 (t)=f_1(q)\text{ or }f_2\circ \alpha_1 (t)=f_2(q)\}.$$ The null geodesic $\beta_2\colon [t_2,1]\to M$ between $\alpha_1(t_2)$ and $q$ exists and is future pointing by the same argument as before. Replace $\alpha_1|_{[t_2,1]}$ with $\beta_2$. The resulting curve $\alpha_2\colon [0,1]\to M$ satisfies $f_i(p)\le f_i\circ \alpha_2\le f_i(q)$ for $i=1,2$. If $f_1\circ \alpha_2$ or $f_2\circ \alpha_2$ is constant, then $p$ and $q$ lie on a common null geodesic. By the assumptions it follows that $(p,q)\in J^+_M$. One can thus assume that both $f_1\circ \alpha_2$ and $f_2\circ \alpha_2$ are non-constant. Perturb $\alpha_2$ to a smooth curve $\alpha_3\colon [0,1]\to M$ between $p$ and $q$ with $f_i(p)\le f_i\circ \alpha_3\le f_i(q)$ for $i=1,2$ and such that $f_1\circ \alpha_3$ has only non-degenerate critical points. Choose a local minimum of $f_1\circ \alpha_3$ and a parameter $s\in [0,1]$ where it is attained. Let $$r:=\min\{r'|\; r'<s,\, f_1\circ \alpha_3(r')=f_1\circ \alpha_3(s)\}.$$ Replace $\alpha_3|_{[r,s]}$ with the future pointing null geodesic between the endpoints. Continue inductively over the set of local minima attained outside of $[r,s]$.
For the obtained Lipschitz continuous curve $\alpha_4\colon[0,1]\to M$ all left- and right-sided differentials $\dot\alpha_4^\pm$ exist and one has $df_1(\dot\alpha_4^\pm)\ge 0$. Perturb $\alpha_4$ to a smooth curve $\alpha_5$ such that $df_1(\dot\alpha_5)\ge 0$ and $f_2\circ \alpha_5$ has only non-degenerate critical points. One distinguishes two cases: First, if $f_1\circ \alpha_5\equiv \text{const}$, the curve $\alpha_5$ can be reparametrized to a future pointing null geodesic, thus showing $q\in J^+_M(p)$. Second, if $df_1(\dot\alpha_5)> 0$ somewhere, one can perturb $\alpha_5$ such that $df_1(\dot\alpha_5)> 0$ everywhere. Choose a local minimum of $f_2\circ \alpha_5$ and a parameter $u\in [0,1]$ where it is attained and repeat the induction as in the last paragraph. For the obtained Lipschitz continuous curve $\alpha_6\colon[0,1]\to M$ all left- and right-sided differentials $\dot\alpha_6^\pm$ exist and one has $df_1(\dot\alpha_6^\pm),df_2(\dot\alpha_6^\pm)\ge 0$. Thus all left- and right-sided derivatives are future pointing, i.e. $\alpha_6$ is a future pointing curve connecting $p$ and $q$. \end{proof} \subsection{Proof of Proposition \ref{T1}} \begin{lemma}\label{L3} Let $(\mathcal{M},\mathcal{G})$ be a spacetime such that $E_{\mathcal{G}}^+$ is closed and non-empty. Consider a sequence $$\{(p_n,q_n)\}_{n\in\mathbb N}\subset E_{\mathcal{G}}^+$$ converging to $(p,q)\in E_{\mathcal{G}}^+$ and a sequence $$\{\abb{\eta_n}{[0,b_n]}{\mathcal{M}}\}_{n\in\mathbb N}$$ of null geodesics with $\eta_n(0)=p_n$ and $\eta_n(b_n)=q_n$. Then there exists a null geodesic $\eta$ connecting $p$ and $q$ such that up to a subsequence $[\eta_n]\rightarrow [\eta]\in\mathcal{N}_{\mathcal{G}}$. \end{lemma} \begin{proof} Choose a complete Riemannian metric on $\mathcal{M}$.
The following properties hold up to a subsequence of $\{\eta_n\}_{n\in\mathbb N}$ due to the limit curve theorem, see \cite{Minguzzi} or \cite{BS1}: Let $\abb{\eta^R_n}{[0,c_n]}{\mathcal{M}}$ be a Riemannian arclength parametrisation of $\eta_n$. The sequence $\{\eta^R_n\}_{n\in\mathbb N}$ converges uniformly on compact subsets with respect to the Riemannian metric to a causal curve $\abb{\eta^R}{[0,c)}{\mathcal{M}}$ with $\eta^R(0)=p$. Since the $\eta_n^R$ are null pregeodesics, so is $\eta^R$. If $c<\infty$ one concludes that $\eta^R$ extends uniquely to $c$ with $\eta^R(c)=q$. Let $\eta\colon[0,b]\to \mathcal{M}$ be an affine parameterisation of $\eta^R$. It follows that $\eta$ is a null geodesic between $p$ and $q$ with $[\eta_n]\rightarrow [\eta] \in\mathcal{N}_{\mathcal{G}}$. Otherwise one has $c=\infty$ and $\eta^R$ is future inextensible. Choose $0<s<t<\infty$ and $n$ sufficiently large such that $t<c_n$. By a standard argument one has $$(\eta_n^R(s),q_n),(\eta_n^R(t),q_n), (\eta^R_n(s),\eta^R_n(t))\in E_{\mathcal{G}}^+.$$ The assumptions that $\eta_n^R(s)\rightarrow \eta^R(s)$ and $\eta_n^R(t)\rightarrow \eta^R(t)$ as well as that $E_{\mathcal{G}}^+$ is closed imply $$(\eta^R(s),q),(\eta^R(t),q),(\eta^R(s),\eta^R(t))\in E_{\mathcal{G}}^+.$$ The causal curve connecting $\eta^R(s)$ and $q$, unique up to parametrisation, has to be $\eta^R$, since $\eta^R$ is the unique causal curve between $\eta^R(s)$ and $\eta^R(t)$; otherwise $q\in I^+_\mathcal{G}(\eta^R(s))$ would follow. This contradicts the future inextensibility of $\eta^R$. \end{proof} Let $C_h^+\subset E_h^+$ denote the future null cut locus in $N$, i.e. $(p,q)\in C_h^+$ if $(p,q)\in E_h^+$ and there exists a null geodesic from $p$ to $q$ that stops being unique at $q$, see \cite[Chapter 9]{Beem}. Let $$L_g^+:=E_g^+\cap( E_h^+\setminus C_h^+).$$ Obviously $L^+_g$ is the set of pairs of points in $M$ that are connected by a future directed causal curve, unique up to parametrisation, in both $(N,h)$ and $(M,g)$.
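To make the null cut locus $C_h^+$ more tangible, the following elementary example (our addition, using only the definitions above, and not part of the argument) computes it on the Lorentzian cylinder, a globally hyperbolic spacetime with Cauchy surface $\{0\}\times S^1$.

```latex
% Added illustration (not from the original argument): the null cut locus
% of the Lorentzian cylinder.
\begin{align*}
 (N,h) &= \big(\mathbb{R}\times S^1,\,-dt^2+d\theta^2\big), \qquad p=(0,0),\\
 I^+_h(p) &= \{(t,\theta)\;|\; t>\mathrm{dist}_{S^1}(0,\theta)\},\\
 E^+_h(p) &= \{(t,\theta)\;|\; t=\mathrm{dist}_{S^1}(0,\theta)\}.
\end{align*}
```

The two future pointing null geodesics $\gamma_\pm(s)=(s,\pm s)$ emanating from $p$ meet again at $q=(\pi,\pi)$, where the connecting null geodesic stops being unique; hence $(p,q)\in C^+_h$, while $(p,\gamma_\pm(s))\in E^+_h\setminus C^+_h$ for $0<s<\pi$. The set $L^+_g$ discards exactly such pairs and keeps only horismos pairs joined by a causal curve that is unique up to parametrisation.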
\begin{lemma}\label{L1} Let $M\subset N$ be open, $h$ a globally hyperbolic Lorentzian metric on $N$ such that $(M,g):=(M,h|_M)$ is causally simple. Then the set $L_g^+$ is a connected component of $$(E_h^+\setminus C_h^+)\cap (M\times M).$$ \end{lemma} \begin{proof} Since $(M,g)$ is causally simple, the set $E_g^+$ is closed in $M\times M$. Hence $$L_g^+=E_g^+\cap ((E_h^+\setminus C_h^+)\cap (M\times M))$$ is closed in $(E_h^+\setminus C_h^+)\cap (M\times M)$ in the subspace topology. To show that it is also open, assume that the complement of $L_g^+$, $$((E_h^+\setminus C_h^+)\cap (M\times M))\setminus L_g^+,$$ is not closed. Then there exists a sequence $(p_n,q_n)\rightarrow (p,q)$ with $$(p_n,q_n)\in ((E_h^+\setminus C_h^+)\cap (M\times M))\setminus L_g^+\text{ and }(p,q)\in L_g^+.$$ Thus there exist null geodesics $\eta_n\colon[a_n,b_n]\to N$ connecting $p_n$ and $q_n$, converging by Lemma \ref{L3} to the unique null geodesic $\eta\colon[a,b]\to M$ connecting $p$ and $q$. Note that the $\eta_n$ are unique up to parametrization since $(p_n,q_n)\in E^+_h\setminus C^+_h$ for all $n\in\mathbb N$. Further, the curves are not contained in $M$, since otherwise $(p_n,q_n)\in E^+_g$ would follow, contradicting the assumption $(p_n,q_n)\notin L^+_g$. Hence there exist points $\eta_n(t_n)\in N\setminus M$. A subsequence of $\eta_n(t_n)$ converges to a point $\eta(t)\in M$. This contradicts $M$ being open in $N$. Hence $L_g^+$ is also open in $(E_h^+\setminus C_h^+)\cap (M\times M)$. This shows that $L^+_g$ is a union of connected components of $(E_h^+\setminus C_h^+)\cap (M\times M)$. It remains to show that $L_g^+$ is path-connected. Since $h$ is globally hyperbolic, the set $E^+_h\setminus C^+_h$ contains the diagonal in $N\times N$. Since $M$ is an open subset of $N$, this implies that $E^+_g$ contains the diagonal of $M\times M$.
Further, for $(p,q)\in E^+_h\setminus C^+_h$ and $\eta\colon [0,1]\to N$ a causal curve from $p$ to $q$ one has $(p,\eta(t))\in E^+_h\setminus C^+_h$ for all $t\in[0,1]$. The same goes for $(p,q)\in E^+_g$ and any causal curve connecting the points in $M$. This shows that $L^+_g$ is path-connected. \end{proof} \begin{lemma}\label{lemma_connect} Let $(N,h)$ be a globally hyperbolic spacetime. Let $\eta\colon I\to N$ be an inextensible null geodesic and $s,u\in I$ with $s<u$ such that $\eta(u)\in E^+_{h}(\eta(s))$. Then for all $t\in [s,u)$ there exists $r\in I$ with $r<s$ such that $\eta|_{[r,t]}$ is up to parametrization the unique causal curve between $\eta(r)$ and $\eta(t)$. \end{lemma} \begin{proof} Let the open set $\mathbb{U}\subset\mathbb R\times TN$ be the maximal domain of the geodesic flow $\abb{\Phi^h}{\mathbb{U}}{TN}$ of $h$. Recall that there exists a neighbourhood $U$ of the zero section in $TN$ such that $\{1\}\times U\subset \mathbb{U}$. Consider $$\mathbb{V}:=\{v\in TN\;|\;(1,v)\in\mathbb{U}\}$$ and define the exponential map of $(N,h)$ as $$\mathrm{Exp}\colon\mathbb{V}\to N\times N, v\mapsto (\pi_{TN}(v),\pi_{TN}\circ\Phi^h(1,v)),$$ where $\abb{\pi_{TN}}{TN}{N}$ denotes the canonical projection. Then the exponential map at a point $p\in N$ is defined as $$\abb{\exp_p}{\mathbb{V}\cap T_pN}{N}, v \mapsto \pi_{TN}\circ\Phi^h(1,v).$$ Both $\mathrm{Exp}$ and $\exp_p$ are smooth, and $\abb{d\,\mathrm{Exp}_v}{T_vTN}{T_{(p,\exp_p(v))}(N\times N)}$ is non-degenerate if and only if $\abb{d(\exp_p)_v}{T_v(TN_p)}{T_{\exp_p(v)}N}$ is non-degenerate, where $p:=\pi_{TN}(v)$, see e.g. \cite{Lee}. If $\eta(u)\in E^+_{h}(\eta(s))$ for some $u>s$, then $\eta|_{[s,\xi]}$ is the unique causal curve in $N$ between $\eta(s)$ and $\eta(\xi)$ for all $s<\xi<u$. Since $\eta$ is the unique causal geodesic between $\eta(s)$ and $\eta(u)$, no $\eta(\xi)$ is conjugate to $\eta(s)$ along $\eta$ for $\xi\in (s,u)$, see \cite{Beem}.
This yields that $d(\exp_{\eta(s)})_{(\xi-s)\dot{\eta}(s)}$ is non-degenerate. By the above equivalence this implies that $d\,\mathrm{Exp}$ is non-degenerate at $v:=(\xi-s)\dot{\eta}(s)$. By the inverse function theorem, $\mathrm{Exp}$ is a local diffeomorphism from a neighborhood $U_v$ of $v$ in $TN$ onto a neighborhood $V_v$ of $(p,\exp_p(v))$ in $N\times N$. Therefore, for $\nu$ sufficiently close to $s$, $\eta|_{[\nu,\xi]}$ is the unique geodesic between $\eta(\nu)$ and $\eta(\xi)$ within a neighborhood of $\eta|_{[s,u]}$. The curve $\eta|_{[\nu,\xi]}$ is in fact the unique causal geodesic between its endpoints in $N$ for $\nu$ close to $s$. Indeed, assume that there exists a causal geodesic $\abb{\tilde{\eta}}{[\nu,\xi]}{N}$ different from $\eta|_{[\nu,\xi]}$ with $\tilde{\eta}(\nu)=\eta(\nu)$ and $\tilde{\eta}(\xi)=\eta(\xi)$. Since $\tilde{\eta}$ has to leave a neighborhood of $\eta|_{[s,u]}$, every limit curve of $\tilde{\eta}$ for $\nu\rightarrow s$ is a causal geodesic between $\eta(s)$ and $\eta(\xi)$ different from $\eta|_{[s,\xi]}$. This contradicts the assumption that $\eta(u)\in E_{h}^+(\eta(s))$. The limit geodesic exists by the limit curve theorem in \cite{Minguzzi,BS1} and the assumption that $N$ is globally hyperbolic. \end{proof} \begin{lemma}\label{L41} Let $N$ be a smooth manifold of dimension at least $3$, $M\subset N$ open and $(N,h)$ globally hyperbolic such that $(M,h|_M)$ is causally simple. Further let $\eta\colon [-1,1]\to N$ be a null geodesic with $\eta|_{[-1,0)}\subset M$ and $\eta_n\colon [-1,1]\to M$ be a sequence of null geodesics with $\dot\eta_n(0)\to \dot \eta(0)$. Assume $$[u_n,1]:=\eta_n^{-1}(J^+_h(\eta(0)))\neq \emptyset$$ for infinitely many $n\in\mathbb N$ and $\liminf u_n=0$. Then one has $\eta(0)\in M$. \end{lemma} \begin{proof} Choose an $h$-convex neighborhood $V$ of $\eta(0)$. Fix $\rho\in [-1,0)$ such that $\eta(\rho)\in V$ and an $h|_M$-convex neighborhood $W\subset M$ of $\eta(\rho)$.
Let $\tau\colon N\to\mathbb R$ be a smooth temporal function. By diminishing $V$ and $W$ one can assume that the intersection of both $E^+_h(p)$ and $E^-_h(p)$ with $$W_\rho:=W\cap\{\tau= \tau(\eta(\rho))\}$$ is path connected for all $p\in V$. Further one can assume that $\tau(\eta_n(\rho))=\tau(\eta(\rho))$ for all $n$. If $\eta(0)$ lies on $\eta_n$ for some $n$, the claim is trivial. Thus one can assume $\eta_n(u_n)\in J^+_h(\eta(0))\setminus \{\eta(0)\}$ for all $n\in\mathbb N$. Then the assumption on $\eta_n$ implies that one can find $\upsilon\in(0,1)$ such that $$\eta_n(\upsilon)\in I^+_h(\eta(0))\cap V$$ for infinitely many $n$. Choose a compact neighborhood $W'_\rho$ of $\eta(\rho)$ in $W_\rho$ such that there exists $$p\in E^-_h(\eta(\upsilon))\cap (W_\rho\setminus W'_\rho)$$ and $p_n\in E^-_h(\eta_n(\upsilon))\cap W_\rho$ with $p_n\to p$. The unique geodesic segment between $p_n$ and $\eta_n(\upsilon)$ belongs to $M$ by Lemma \ref{L1}, since one can find a path in $(E_h^+\setminus C^+_h)\cap (M\times M)$ between $(\eta_n(\rho),\eta_n(\upsilon))$ and $(p_n,\eta_n(\upsilon))$. The existence of such a path follows from the fact that $W_\rho$ is chosen such that $W_\rho\cap E_h^-(\eta_n(\upsilon))$ is path-connected. The intersection $y_n$ of the geodesic segment between $p_n$ and $\eta_n(\upsilon)$ with $E^+_h(\eta(0))$ converges to $\eta(\upsilon)$ because the intersection of the geodesic segment between $p$ and $\eta(\upsilon)$ with $E^+_h(\eta(0))$ is $\eta(\upsilon)$. The unique geodesic in $V$ between $\eta(0)$ and $y_n$ intersects $W_\rho$ to the past in a point $x_n$ since $y_n\to \eta(\upsilon)$ and the unique geodesic between $\eta(0)$ and $\eta(\upsilon)$ is $\eta$. As before, Lemma \ref{L1} implies $x_n\in E^-_g(y_n)$ using a path between $p_n$ and $x_n$ in $W_\rho\cap E_h^-(y_n)\setminus C^-_h(y_n)$. Therefore the geodesic segment between $x_n$ and $y_n$ lies in $M$. It follows that $\eta(0)\in M$.
\end{proof} \begin{lemma}\label{L2} Let $N$ be a smooth manifold of dimension at least $3$, $M\subset N$ open and $(N,h)$ globally hyperbolic such that $(M,h|_M)$ is causally simple. Further let $\eta\colon [-1,1]\to N$ be a null geodesic with $\eta|_{[-1,0)}\subset M$ and $\eta_n\colon [-1,1]\to M$ be a sequence of null geodesics with $\dot\eta_n(0)\to \dot\eta(0)$. Assume that all $\eta_n$ are disjoint from $J^+_h(\eta(0))$. Then there exists a neighborhood $U$ of $\eta(0)$ in $N$ such that $$E^-_h(\eta(0))\cap U\setminus \{\eta(0)\}\subset M.$$ \end{lemma} \begin{proof} Choose an $h$-convex neighborhood $V$ of $\eta(0)$. Fix $\rho\in [-1,0)$ such that $\eta(\rho)\in V$ and an $h|_M$-convex neighborhood $W$ of $\eta(\rho)$ in $M$. Let $\tau\colon N\to\mathbb R$ be a smooth temporal function. By diminishing $V$ and $W$ one can assume that the intersection of both $E^+_h(p)$ and $E^-_h(p)$ with $$W_\rho:=W\cap\{\tau= \tau(\eta(\rho))\}$$ is path connected for all $p\in V$. Choose $\upsilon\in(0,1)$ such that $\eta(\upsilon)\in V$. Let $\beta\colon [0,1]\to V$ be a future pointing null geodesic segment with $\beta(1)=\eta(0)$ not parallel to $\eta$. Thus $\beta|_{[0,1)}\subset I^-_h(\eta(\upsilon))$. Hence for every $t<1$ there exists $n_0$ such that for all $n\ge n_0$ one has $$\beta(t)\in I^-_h(\eta_n(\upsilon)).$$ If $\beta$ is parallel to any $\eta_n$, then $\eta(0)$ lies on $\eta_n$, hence in $M$. Then the claim is trivial since $M$ is an open subset of $N$. Therefore one can assume that $\beta$ is not parallel to any $\eta_n$. This then holds for all null geodesics through $\eta(0)$ sufficiently close to $\beta$. Since $\eta(0)\notin J^-_h(\eta_n(\upsilon))$ there exists $t_n\in (t,1)$ such that $\beta(t_n)\in E^-_h(\eta_n(\upsilon))$ for sufficiently large $n$. Let $[\gamma_{n,\beta}]\in \mathcal{N}_h$ be the unique class of null geodesics whose representatives contain $\beta(t_n)$ and $\eta_n(\upsilon)$. 
Every representative of $[\gamma_{n,\beta}]$ intersects $W_\rho$ for $n$ sufficiently large since $t_n\to 1$ and $\eta_n(\upsilon)\to \eta(\upsilon)$. Thus Lemma \ref{L1} implies that $\beta(t_n)\in M$: Let $x_n:=\gamma_{n,\beta}\cap W_\rho$. Then one can find a path in $W_\rho\cap E^-_h(\eta_n(\upsilon))$ from $\eta_n(\rho)$ to $x_n$ and hence a path in $(E_h^+\setminus C^+_h)\cap (M\times M)$ from $(\eta_n(\rho),\eta_n(\upsilon))$ to $(x_n,\eta_n(\upsilon))$. Since $\gamma_{n,\beta}$ is a causal curve between $x_n$ and $\eta_n(\upsilon)$, unique up to parametrization, it follows that $\beta(t_n)\in M$. For all $\beta$ with $\beta(0)\in W_\rho$ one has $\beta|_{[0,1)}\subset M$: The claim is trivial for $\beta=\eta$. Therefore assume that $\beta$ is not parallel to $\eta$. The subarc of $[\gamma_{n,\beta}]$ between $x_n$ and $\eta_n(\upsilon)$ lies in $M$. Now for every $t_n$ one can choose a path in $W_\rho\cap E_h^-(\beta(t_n))$ from $x_n$ to $\beta(0)$. Therefore, by Lemma \ref{L1} and local uniqueness of geodesics, the geodesic arc $\beta|_{[0,t_n]}$ lies in $M$, and since $t_n\rightarrow 1$ this implies $\beta|_{[0,1)}\subset M$. For geodesics $\beta$ which do not intersect $W_\rho$ to the past, let $\beta(t_1)$ and $\beta(t_2)$ be intersections of $\beta$ with $E^-_h(\eta_{n_1}(\upsilon))$ and $E^-_h(\eta_{n_2}(\upsilon))$, respectively. Assume $t_1<t_2$ and $n_1,n_2$ sufficiently large. Choose $[\gamma_i]:=[\gamma_{n_i,\beta}]\in \mathcal{N}_h$ and $x_i:=x_{n_i}\in W_\rho$ as before. One has $x_i\in J^-_h(\beta(t_2))$. The set $J^-_h(\eta_{n_2}(\upsilon))\cap V\setminus \gamma_2$ is foliated by past-pointing null geodesics emanating from points on $\gamma_2$ prior to $\eta_{n_2}(\upsilon)$. Consequently a path in $W_\rho\cap J^-_h(\beta(t_2))$ from $x_2$ to $x_1$ joined with $\gamma_1|_{[\gamma_1^{-1}(x_1),\gamma_1^{-1}(\beta(t_2))]}$ induces a path in $(E^+_h\setminus C^+_h)\cap (M\times M)$ from $(x_2,x_2)\in L^+_g$ to $(\beta(t_1),\beta(t_2))$, i.e.
$(\beta(t_1),\beta(t_2))\in J^+_g$. As before this implies $\beta|_{[t_1,t_2]}\subset M$. \end{proof} \begin{lemma}\label{L4} Let $N$ be a smooth manifold of dimension at least $3$, $M\subset N$ open and $(N,h)$ globally hyperbolic such that $(M,h|_M)$ is causally simple. Further let $$\eta\colon [-1,1]\to N$$ be a null geodesic with $\eta|_{[-1,0)}\subset M$ and $$\{\eta_n\colon [-1,1]\to M\}_{n\in \mathbb N}$$ be a sequence of null geodesics with $\dot\eta_n(0)\to\dot\eta(0)$. Then the set $\eta^{-1}(M)$ is connected. \end{lemma} \begin{proof} Let $r,w\in \eta^{-1}(M)$ with $r<w$ and assume that there exists $r<s <w$ with $\eta(s)\notin M$. Without loss of generality one can assume that $s$ is minimal in that respect, i.e. $$s=\inf\{s'>r|\,\eta(s')\notin M\}.$$ By Lemma \ref{L41} one knows that the sets $\eta_n^{-1}(J^+_h(\eta(s)))$ are bounded away from $s$. Fix $n$ sufficiently large such that there exists $p\in I_g^+(\eta_n(w))\cap I_g^+(\eta(w))$. Choose a timelike curve $\alpha\colon[w,2]\to M$ from $\eta_n(w)$ to $p$. Define $$\abb{\beta}{[-1,2]}{M},\; \beta(t):=\left\lbrace\begin{array}{cc} \eta_n(t), & t\leq w \\ \alpha(t), & t>w. \end{array}\right.$$ Let $\tau \colon N\to \mathbb R$ be a smooth temporal function. Set $\sigma:=\tau(\eta(s))$ and choose a compact neighborhood $U$ of $\eta(s)$ according to Lemma \ref{L2} such that $U\cap E_h^-(\eta(s))\setminus \{\eta(s)\}\subset M$. Note that by the assumption that $(M,h|_M)$ is causally simple it follows that $\eta(w)\in J^+_g(x)$ for all $x\in U\cap E_h^-(\eta(s))\setminus \{\eta(s)\}$. For $\delta>0$ define $$V_\delta :=\tau^{-1}([\sigma-\delta,\sigma))\cap U\cap E_h^-(\eta(s))\subset M$$ and $$u_{\delta}:=\inf\{u\in \mathbb R|\,\beta(u)\in J_g^+(V_\delta)\}.$$ It follows that the parameter $u_{\delta}$ is bounded from above by $2$ and the function $\delta\mapsto u_{\delta}$ is monotonically decreasing.
For $0<\delta'<\delta$ sufficiently small the set $V_\delta\setminus V_{\delta'}$ is precompact in $M$. Since $(M,g)$ is causally simple, the precompactness of $V_\delta\setminus V_{\delta'}$ and the monotonicity of $u_\delta$ imply that there exists $x_{\delta}\in V_\delta$ such that $\beta(u_{\delta})\in J_g^+(x_{\delta})$. Furthermore $\beta(u_{\delta})\in E_g^+(x_{\delta})$ by minimality of $u_\delta$. Take a sequence $\delta_k\downarrow 0$ and a sequence $x_k:=x_{\delta_k}$. By construction one has $x_k\rightarrow \eta(s)$. Let $\gamma_{k}\colon [0,1]\to M$ be a sequence of null geodesics connecting $x_{k}$ and $\beta(u_{\delta_k})$. Note that the sequence $\{u_{\delta_k}\}_{k\in\mathbb N}$ is monotonically increasing. Since the geodesic flow is smooth one can assume that up to a subsequence the geodesics $\gamma_{k}$ converge in every $\mathcal{C}^l$-norm to a null geodesic $\gamma\colon[0,1]\to N$. This geodesic connects $\eta(s)$ and $\beta(u_\infty)$, where $u_\infty$ denotes the limit of the sequence $\{u_{\delta_k}\}_{k\in\mathbb N}$. Since $(\gamma_{k}(0),\gamma_{k}(1))\in E_g^+$ one concludes that the index form of every $\gamma_k$ is negative semi-definite (see Appendix \ref{A1}). Furthermore the negative semi-definiteness is preserved under convergence of geodesics, i.e. the index form of $\gamma$ is negative semi-definite. This implies that the index form of $\gamma|_{[0,b]}$ is negative definite for all $b\in (0,1)$. Hence no $\gamma(b)$ is conjugate to $\gamma(0)=\eta(s)$ along $\gamma$. Fix $b<1$ such that $\gamma(b)\in M$. Due to Lemma \ref{L2} one can choose $a<0$ such that $\gamma$ can be extended until $a$ and $\gamma(t)\in M$ for all $t\in [a,0)$. Since the index form depends continuously on the geodesic, one can choose $a$ such that the index form of $\gamma|_{[a,b]}$ is negative definite, i.e. no point $\gamma(t)$ is conjugate to $\gamma(a)$ along $\gamma$ for $t\in [a,b]$.
Thus $d\exp_{\gamma(a)}$ is non-singular along the line $[0,b-a]\cdot\dot\gamma(a)$, i.e. $d\text{Exp}$ is non-singular along the line $[0,b-a]\cdot\dot\gamma(a)$. Thus there exists a neighborhood $\mathcal{U}$ of $[0,b-a]\cdot\dot\gamma(a)$ such that $$\text{Exp}\colon \mathcal{U}\to \text{Exp}(\mathcal{U})\subset N\times N$$ is a local diffeomorphism. Since $\text{Exp}|_{[0,b-a]\cdot\dot\gamma(a)}$ is bijective, one can assume, by shrinking $\mathcal{U}$ if necessary, that $\text{Exp}|_\mathcal{U}$ is bijective and $\mathcal{U}$ is fibrewise star-shaped. For $p\in \pi_{TN}(\mathcal{U})$ define $\mathcal{V}_p:=\exp_p(\mathcal{U}\cap TM_p)$. Let $v\in \mathcal{U}\cap TM_p$ be null. Then \cite[Proposition 5.34]{oneill} implies that there does not exist a timelike curve inside $\mathcal{V}_p$ between $p$ and $\exp_p(v)$. Applying this to $v=\dot\gamma_k(a)\in \mathcal{U}$, there exists an open neighborhood $\mathcal{V}$ around $\gamma|_{[a,b]}$ such that for sufficiently large $k$ the geodesics $\gamma_{k}|_{[a,b]}$ lie inside $\mathcal{V}$ and two points that lie on a $\gamma_{k}|_{[a,b]}$ cannot be connected by a timelike curve inside $\mathcal{V}$. The claim is now that there exists $a'\in [a,0)$ such that $(\gamma(a'),\gamma(b))\in E^+_g$. Assuming the claim, one has $(\gamma(a''),\gamma(b))\in E^+_g$ for all $a''\in [a',0)$. By a standard argument it follows that $\gamma|_{[a',b]}\subset M$, in particular implying that $\gamma(0)=\eta(s)\in M$. This contradicts the assumption. The claim is proved if there exists $a'\in [a,0)$ such that $(\gamma_k(a'),\gamma_k(b))\in E^+_g$ for all $k$ sufficiently large. Suppose the claim is false, i.e. there exists a sequence $a'_k\uparrow 0$ with $(\gamma_k(a'_k),\gamma_k(b))\in I^+_g$. Choose a compact neighborhood $W$ of $\gamma(b)$ in $M\cap \mathcal{V}$. Then for sufficiently large $k$ there exists a null geodesic $\zeta_k\colon [0,1]\to M$ from $\gamma_k(a'_k)$ to a point $z_k\in J^-_g(\gamma_k(b))\cap W\setminus \gamma_k$.
This follows from the minimality of $u_{\delta_k}$ and $u_{\delta_k}\uparrow u_\infty$. The geodesics $\zeta_k$ cannot be contained in the neighbourhood $\mathcal{V}$ defined in the previous paragraph, since otherwise $\gamma_k(a)$ and $\gamma_k(b)$ would be connected by a timelike curve inside $\mathcal{V}$. A subsequence of $\{\zeta_k\}_{k\in\mathbb N}$ converges to a null geodesic $\zeta\colon [0,1]\to N$ connecting $\gamma(0)$ with $\gamma(b)$. The second assertion follows since $z_k\to \gamma(b)$ again by the minimality of $u_{\delta_k}$ and $u_{\delta_k}\uparrow u_\infty$. The geodesic $\zeta$ is not a reparameterization of $\gamma$, i.e. $\dot\zeta(1)$ and $\dot\gamma(b)$ are not parallel. Choose $c<1$ with $\zeta(c)\in W$. Then there exists $\epsilon >0$ such that $\beta(u_\infty-\epsilon)\in J^+_g(\zeta(c))$. By continuity it follows that $\beta(u_\infty-\epsilon /2)\in J^+_g(\zeta_k(c))\subset J^+_g(\gamma_k(a'_k))$. Note that for all $\delta >0$ one has either $\gamma_k(a'_k)\in J^+_g(V_\delta)$ or $\gamma_k$ intersects $V_\delta$ for all sufficiently large $k$. Since $a'_k\uparrow 0$ it follows that $u_{\delta_k}\le u_\infty-\epsilon/2$ for all $k$ sufficiently large. This contradicts $\lim_{k\to \infty} u_{\delta_k}=u_\infty$ and finishes the proof. \end{proof} \begin{proof}[Proof of Proposition \ref{T1}] Since the null geodesics of two conformal metrics coincide up to reparametrisations, one can assume without loss of generality that $$i\colon (M,g)\hookrightarrow(N,h)$$ is an isometric embedding; in other words $M$ is an open subset of $N$ and $g=h|_M$. Recall that $\mathcal{N}_g$ was constructed as the quotient of the bundle of null covectors by the action of the Euler vector field and the geodesic flow. By definition the quotient map is open. Hence the obtained topology on $\mathcal{N}_g$ is second countable and the Hausdorff property is equivalent to the uniqueness of limits for converging sequences, see \cite[Proposition 6.5]{Quer}.
Let $[\eta^0], [\eta^1]\in\mathcal{N}_g$ be classes of null geodesics and $$\{[\eta_n]\}_{n\in \mathbb N}\subset \mathcal{N}_g$$ be a sequence such that $$\dot{\eta}_n\rightarrow \dot{\eta}^0\text{ and }\dot{\eta}_n\to \dot\eta^1$$ somewhere. Up to relabelling and reparameterization one can assume that all geodesics are future pointing and $\eta^1\subset J^+_h(\eta^0)$. From the fact that $\mathcal{N}_h$ is Hausdorff it readily follows that $\eta^0$ and $\eta^1$ are subarcs of the same null geodesic $H\colon (\alpha,\omega)\to N$ in $(N,h)$. By Lemma \ref{L4} the set $H^{-1}(M)$ is connected. Therefore the geodesics $\eta^0$ and $\eta^1$ are subarcs of the same geodesic in $M$, i.e. the sequence $\{[\eta_n]\}_{n\in\mathbb N}$ has a unique limit in $\mathcal{N}_g$. \end{proof} \section{Proof of Theorem \ref{T2}} Recall that one considers a smooth function $r\colon \mathbb R\to\mathbb R$ with $r|_{(0,1)}>0$, $r(0)=r(1)=0$ and $|r'(0)|,|r'(1)|<\frac{1}{2\pi}$. The graph of $r|_{(0,1)}$ defines a surface of revolution $\Sigma$ parametrized by $$X\colon (0,1)\times \mathbb R\to \mathbb R^3,\; (x,\phi)\mapsto (x,r(x)\cos\phi,r(x)\sin\phi).$$ The induced metric on $\Sigma$ is given by $$k=\left[1+(r')^2\right]dx^2+r^2\,d\phi^2.$$ \begin{lemma}\label{L10} Every geodesic of $(\Sigma,k)$ is either complete or is asymptotic in both directions to $x=0$ or $x=1$. Further, every pair of points in $\Sigma$ is connected by a minimal geodesic. \end{lemma} \begin{proof} The first part follows directly from Clairaut's integral for the geodesic flow of $(\Sigma,k)$. For the second part note that one has $$\dist\nolimits^k(\{x=x_0\},\{x=x_1\})=\left|\int_{x_0}^{x_1} \sqrt{1+(r')^2}\,dx\right|\ge |x_1-x_0|$$ and $$L^k(\{x=x_0\})=2\pi r(x_0),$$ where $\dist^k$ denotes the distance and $L^k$ denotes the length relative to $k$.
For $x_0$ sufficiently close to $0$ or $1$ one has thus $L^k(\{x=x_0\})<1$, which implies that $$\dist\nolimits^k(\{x=x_0\},\{x=0,1\})>L^k(\{x=x_0\}).$$ Therefore no minimal geodesic between two points in $\Sigma$ intersects the singularities $\{x=0\}$ or $\{x=1\}$. \end{proof} \begin{cor}\label{cor3} The spacetime $$(M,g)=(\mathbb R\times \Sigma, -dt^2+k)$$ is causally simple. \end{cor} \begin{proof} Since the projection $M\to \mathbb R$ onto the first factor is a temporal function, $(M,g)$ is causal. It remains to show that $J^+_g$ is closed: One has \begin{equation}\label{E1} ((\sigma,p),(\tau,q))\in J^+_g\Leftrightarrow \tau-\sigma \ge \dist\nolimits^k(p,q) \end{equation} by Lemma \ref{L10}. The right-hand side of \eqref{E1} is a closed condition. The closedness of $J^+_g$ follows directly. \end{proof} \begin{prop}\label{P10} The space of null geodesics $\mathcal{N}_g$ is not Hausdorff. \end{prop} \begin{proof} Up to parametrization, every null geodesic $\gamma$ of $(M,g)$ is of the form $t\mapsto (t,\eta(t))$, where $\eta$ is a $k$-arclength geodesic. Choose a sequence $\{\eta_n\}_{n\in\mathbb N}$ of complete $k$-arclength geodesics whose tangents approach the meridian tangents $\frac{1}{\sqrt{1+(r')^2}}\partial_x$. The sequence $\{\eta_n\}_n$ then converges locally in every $C^l$-topology to a union of meridians of $(\Sigma,k)$. The induced sequence $\{\gamma_n\}_n$ thus has several limits in the space of null geodesics, i.e. $\mathcal{N}_g$ is not Hausdorff. \end{proof} Theorem \ref{T2} follows from Corollary \ref{cor3} and Proposition \ref{P10} in conjunction with Theorem \ref{cor1}. \begin{appendix} \section{The index form of a null geodesic}\label{A1} Here we recall the definition and the properties of the index form of a null geodesic for the convenience of the reader. The material, with proofs and additional explanations, can be found in \cite[Chapter 10]{Beem} and the references therein. 
Let $(M,g)$ be a spacetime and $\gamma\colon [a,b]\to M$ be a null geodesic. Let $$\gamma^\perp:=\{v\in \gamma^*TM|\; g(v,\dot\gamma)=0\}$$ denote the orthogonal bundle to $\gamma$. Define an equivalence relation $\sim$ on $\gamma^\perp$ by setting $v\sim w$ if $w-v\in \text{span}(\dot\gamma)$. Denote with $\overline{v}$ the equivalence class of $v\in \gamma^\perp$. The quotient bundle $$\overline{\gamma^\perp}:=\gamma^\perp/\sim$$ is a smooth bundle over $[a,b]$. Since $\gamma$ is null one has $g(v_1,w_1)=g(v_2,w_2)$ for all $v_1,v_2,w_1,w_2\in \gamma^\perp$ with $\overline{v_1}=\overline{v_2}$ and $\overline{w_1}=\overline{w_2}$. The same is true for the curvature endomorphism $R(.,\dot\gamma)\dot\gamma$, i.e. $$R(v,\dot\gamma)\dot\gamma=R(w,\dot\gamma)\dot\gamma\in \gamma^\perp$$ if $v-w\in \text{span}(\dot\gamma)$. Thus both the metric $g$ and the curvature endomorphism $R(.,\dot\gamma)\dot\gamma$ descend to a well defined metric $\overline{g}$ on $\overline{\gamma^\perp}$ with $$\overline{g}(\overline{v},\overline{w}):=g(v,w)$$ and a well defined endomorphism field $\overline{R}(.,\dot\gamma)\dot\gamma$ on $\overline{\gamma^\perp}$ with $$\overline{R}(\overline{v},\dot\gamma)\dot\gamma:=\overline{R(v,\dot\gamma)\dot\gamma}.$$ If $X\in \Gamma(\gamma^\perp)$ then the covariant derivative $\nabla_{\dot\gamma}X$ is again a smooth section of $\gamma^\perp$, and if $X-Y\in \text{span}(\dot\gamma)$ everywhere, then $\nabla_{\dot\gamma}(X-Y)\in \text{span}(\dot\gamma)$ everywhere as well. Therefore the covariant derivative $\nabla_{\dot\gamma}$ descends to a covariant derivative on $\overline{\gamma^\perp}$. Abbreviate this covariant derivative by a prime, i.e. $$V':=\overline{\nabla_{\dot\gamma} X},$$ where $V$ denotes the quotient section of $X\in \Gamma(\gamma^\perp)$, i.e. $\overline{X}_t=V_t$ for all $t\in [a,b]$. 
Denote with $\mathfrak{X}(\gamma)$ the piecewise smooth sections of $\overline{\gamma^\perp}$ and let $$\mathfrak{X}_0(\gamma):=\{V\in \mathfrak{X}(\gamma)|\; V_a=0_a\text{ and }V_b=0_b\},$$ where $0_t$ denotes the zero vector in $\overline{\gamma^\perp}_t$. \begin{definition} A smooth section $V\in \mathfrak{X}(\gamma)$ is said to be a {\it Jacobi class} in $\overline{\gamma^\perp}$ if $V$ satisfies the Jacobi equation $$V''+\overline{R}(V,\dot\gamma)\dot\gamma=0.$$ \end{definition} \begin{lemma} Let $W$ be a Jacobi class in $\mathfrak{X}(\gamma)$. Then there exists a Jacobi field $Y\in \Gamma(\gamma^\perp)$ with $\overline{Y_t}=W_t$ for all $t\in [a,b]$. Conversely, if $Y$ is a Jacobi field in $\gamma^\perp$, then $t\mapsto W_t:=\overline{Y_t}$ is a Jacobi class in $\overline{\gamma^\perp}$. \end{lemma} \begin{lemma} Let $W\in \mathfrak{X}(\gamma)$ be a Jacobi class with $W_a=0_a$ and $W_b=0_b$. Then there is a unique Jacobi field $Z\in \Gamma(\gamma^\perp)$ with $\overline{Z_t}=W_t$ for all $t\in [a,b]$ that vanishes at $a$ and $b$. \end{lemma} \begin{definition} For $s\neq t\in [a,b]$, $s$ and $t$ are said to be {\it conjugate along $\gamma$} if there exists a Jacobi class $W\neq 0$ in $\mathfrak{X}(\gamma)$ with $W_{s}=0_s$ and $W_{t}=0_t$. Also $t\in (a,b]$ is said to be a {\it conjugate point of $\gamma$} if $s=a$ and $t$ are conjugate along $\gamma$. \end{definition} \begin{definition} The {\it index form} $\overline{I}\colon \mathfrak{X}(\gamma)\times \mathfrak{X}(\gamma)\to \mathbb R$ is given by $$\overline{I}(V,W)=-\int_a^b [\overline{g}(V',W')-\overline{g}(\overline{R}(V,\dot\gamma)\dot\gamma,W)]dt.$$ \end{definition} \begin{theorem} Let $\gamma\colon [a,b]\to M$ be a null geodesic segment. Then the following are equivalent: \begin{itemize} \item[(a)] The segment $\gamma$ has no conjugate points to $s=a$ in $(a,b]$. \item[(b)] $\overline{I}(W,W)<0$ for all $W\in \mathfrak{X}_0(\gamma)$, $W\neq 0$. \end{itemize} \end{theorem} \end{appendix}
\section{Background}~\label{sec:background} Let $\reals$ denote the set of real numbers, $\reals^+$ denote non-negative real numbers, and $\bools : \{ \mbox{true}, \mbox{false}\}$. A column vector $[x_1, x_2, \ldots, x_n]^t$ is written as $\vx$. We write $\vx \circ \vy$ to denote element-wise multiplication and $\vx \cdot \vy$ to denote the inner product. For a set $X$, let $\vol(X)$ be the volume of $X$ and $\inter(X)$ be its interior. In this paper, we consider the inference of a feedback function (policy) for a given plant model and logical specification. Let $\X \subseteq \reals^n$ be a state-space whose elements are state vectors written as $\vx$, and $\U \subseteq \reals^m$ denote a set of control actions whose elements are control inputs written as $\vu$. The plant model is described by a function $\scr{F}: \X \times \U \rightarrow \reals^n$ that describes the right-hand side of a differential equation for the states: \[ \dot{\vx} = \scr{F}(\vx, \vu),\ \vx \in \X,\ \vu \in \U \,.\] For technical reasons, $\scr{F}$ is assumed to be Lipschitz continuous in $\vx$ and $\vu$. \begin{definition}[Policy] A policy $\pi: \X \rightarrow \U$ maps each state $\vx \in \X$ to an action $\vu \in \U$. \end{definition} In this paper, we will consider policies $\pi$ that are Lipschitz continuous over $\X$. Given a plant $\scr{F}$ and a policy $\pi$, a closed loop system $\Psi(\scr{F},\pi)$ with initial state $\vx_0 \in \X$ yields a trace (time trajectory) $\tr : \reals^+ \rightarrow \X$ that satisfies \[ \tr(0) = \vx_0,\ \mbox{and}\ (\forall\ t \geq 0)\ \dot{\tr}(t) = \plant(\tr(t), \pol(\tr(t))) \,. \] Given a state-space $\X$, a specification $\varphi$ maps time trajectories $\sigma: \reals^+ \rightarrow \X$ to Boolean values $\true/\false$, effectively specifying desirable vs. undesirable trajectories, i.e., $\varphi: (\reals^+ \rightarrow \X) \rightarrow \bools$. 
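The closed-loop trace defined above can be approximated numerically. The following sketch (not part of the formal development: the forward-Euler integrator, the step size, and the one-dimensional plant and policy are all illustrative assumptions) simulates $\Psi(\scr{F},\pi)$ and checks a reachability specification on the resulting trace:

```python
def simulate(F, pi, x0, dt=0.01, t_max=20.0):
    """Forward-Euler approximation of a trace of the closed loop Psi(F, pi)."""
    x, t, trace = list(x0), 0.0, [list(x0)]
    while t < t_max:
        u = pi(x)                      # feedback: u = pi(x)
        dx = F(x, u)                   # plant right-hand side F(x, u)
        x = [xi + dt * di for xi, di in zip(x, dx)]
        t += dt
        trace.append(list(x))
    return trace

def reaches(trace, in_goal):
    """Reachability: does some state on the trace lie in the goal set G?"""
    return any(in_goal(x) for x in trace)

# Illustrative 1-d plant dx/dt = -x + u under the zero policy; G = {|x| <= 0.1}.
trace = simulate(lambda x, u: [-x[0] + u[0]], lambda x: [0.0], [1.0])
assert reaches(trace, lambda x: abs(x[0]) <= 0.1)
```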
There are many useful specification formalisms for time trajectories (e.g., \emph{Metric Temporal Logic} (MTL)~\cite{koymans1990specifying}). While our approach is applicable to specifications written in such formalisms, this work focuses exclusively on reachability properties: \begin{definition}[Reachability] Given a compact initial set $I \subset \X$ ($\tr(0) \in I$) and a compact goal set $G \subset \X$, the system must reach some state in $G$: $(\exists\ t \geq 0)\ \tr(t) \in G$. \end{definition} \begin{definition}[Policy Correctness] A policy $\pi$ is correct w.r.t. a specification $\varphi$, if for all traces $\tr$ of the system $\Psi(\plant,\pi)$, $\varphi(\tr)$ holds. For short, we write $\Psi(\plant, \pi) \models \varphi$. \end{definition} Given a plant $\scr{F}$ and specification $\varphi$, we consider the policy synthesis problem in this paper. \begin{problem}[Policy Synthesis Problem]\label{prob:main} Given a plant $\plant$ and specification $\varphi$, find a policy $\pol$ s.t. $\Psi(\plant, \pol) \models \varphi$. \end{problem} The policy synthesis problem asks for a controller that satisfies a given formal specification. This problem has been considered through many formal synthesis approaches in the recent past~\cite{raman2015reactive,liu2013synthesis,huang2015controller,rungger2016scots,ozay2013computing}. In the subsequent section, we will describe the specific setup considered in this paper. \section{Formal Learning Framework}~\label{sec:framework} We propose a novel approach where the demonstrator provides a set of feasible feedback actions for a given state. Figure~\ref{Fig:formal-learning-framework} shows the three components involved in the formal learning framework and their interactions. The \learner is the core component that iteratively attempts to learn a policy using the \demonstrator and \verifier components. At the end, it either successfully outputs a policy or outputs \textsc{fail}, indicating failure. 
The learner works over a space of policies $\Pi$ fixed a priori and at each iteration, it maintains a finite \emph{sample} set $O_i:\ \{ (\vx_1, U_1), (\vx_2, U_2),\ \ldots, (\vx_i, U_i) \}$, with sample states $\vx_j \in \X$ and corresponding sets of control inputs $U_j \subseteq \U$. The set $O_i$ can be viewed as a constraint over the set of policies in $\Pi$, using the compatibility notion defined below: \begin{definition}[Compatibility Condition]\label{def:learner-compatible} A policy $\pi \in \Pi$ is compatible with $O_i:\ \{ (\vx_j, U_j), j = 1,\ldots,i \}$ if and only if for all $(\vx_j, U_j) \in O_i$, $\pi(\vx_j) \in U_j$. \end{definition} \begin{figure}[t] \vspace{0.2cm} \begin{center} \begin{tikzpicture} \matrix[every node/.style={rectangle, draw=black}, row sep=20pt, column sep=20pt]{ \node(n0){\learner}; & \\ \node(n1){\verifier }; & \node[fill=red!20](n3) {Output: \textsf{fail}}; \\ \node(n2){\demonstrator}; & \node[fill=green!20](n4){Output: \textsf{succ.}}; \\ }; \path[->, line width = 2pt] (n0) edge node[left]{$\pi_j$} (n1) (n1) edge node[left]{ $\vx_{j+1}$} (n2) (n2) edge[in=180, out = 180] node[left]{$(\vx_{j+1}, U_{j+1})$}(n0) (n0) edge node[right]{\textsc{No Candidate}} (n3) (n1) edge node[right]{\ \textsf{SAT}} (n4); \draw (n0.north)++(0,0.4cm) node[rectangle, draw=red, dashed]{$\{ (\vx_1,U_1), \ldots, (\vx_j, U_j) \}$ }; \end{tikzpicture} \end{center} \caption{Schematic diagram of the policy learning framework.}\label{Fig:formal-learning-framework} \end{figure} The \demonstrator $\dem$ inputs a state sample $\vx \in \X$ and outputs a set of possible control inputs $U:\ \dem(\vx) \subseteq \U$ that can be applied instantaneously at $\vx$. We require that the following correctness condition holds: \begin{definition}[Demonstrator Correctness]\label{def:dem-correctness} For any policy $\pi$, if $(\forall \vx) \ \pi(\vx) \in \dem(\vx)$, then $\Psi(\plant, \pi) \models \varphi$. 
\end{definition} Therefore, we conclude that an incorrect policy $\pi$ ``disagrees'' with the demonstrator output $\dem(\vx)$ for some state $\vx$. \begin{lemma} Given a \emph{correct} \demonstrator $\dem$, a policy $\pi$ and a trace $\tr$ of $\Psi(\plant, \pi)$ that violates the specification $\varphi$, there exists a time $t$ s.t. $\pi(\tr(t)) \notin \dem(\tr(t))$. \end{lemma} The \verifier inputs a policy $\pi$ and property $\varphi$, outputting \textsc{SAT} or \textsc{UNSAT}. Here \textsc{SAT} signifies that the closed loop $\Psi(\scr{F}, \pi)$ consisting of the plant and the current policy satisfies the specification $\varphi$, and \textsc{UNSAT} signifies that the closed loop fails the property. In the latter case, the \verifier generates a trace $\tr: \reals^+ \mapsto \X$ of the closed loop that violates the property. \paragraph{Iteration:} At the start, the \learner is instantiated with its initial policy space $\Pi$ (e.g., all policies that are linear combinations of a given set of basis functions) and the initial sample set $O_0 = \emptyset$. At the $i^{th}$ iteration, the sample set is denoted $O_i$. The following steps are carried out: \begin{compactenum} \item The \learner chooses a policy $\pi_i \in \Pi$ that is \emph{compatible} with $O_{i-1}$. Then, $\pi_i$ is fed to the \verifier. \item The \verifier either accepts the policy as \textsc{SAT}, or provides a counterexample trace $\tr_i$. \item Using the \demonstrator, a state $\vx_i : \tr_i(t)$ is found for which $\pi_i$ is not compatible with the \demonstrator, i.e., $\pi_i(\vx_i) \notin \dem(\vx_i)$. \item The \learner updates $O_{i}:\ O_{i-1} \cup \{ (\vx_i, \dem(\vx_i)) \}$. \end{compactenum} \begin{theorem} If the formal learning framework terminates with \textsc{success}, then we obtain a policy $\pi$ such that $\Psi(\plant, \pi)$ satisfies the desired property $\varphi$. 
\end{theorem} \section{Realizing the Oracles}~\label{sec:impl} For simplicity, we will first describe how a demonstrator can be realized. \paragraph{Demonstrator:} Given a state $\vx \in \X$, the demonstrator should output a set of control inputs $U \subseteq \U$ such that each input $\vu \in U$ can be applied at $\vx$ without compromising the desired property $\varphi$. We will focus on describing a demonstrator for reachability properties for now. A more general framework is left for future work. Let $G$ be a set of \emph{goal states} that we wish to reach. We will assume the following properties of our demonstrator: \noindent\textbf{D1:} The demonstrator has an (inbuilt) policy $\pi^*$ that can ensure that starting from any $\vx \in \X$, the resulting trace $\sigma(t)$ reaches $G$ in finite time. We assume that $\pi^*(\vx)$ can be computed for any $\vx \in \X$. However, such a computation can be expensive and we do not know $\pi^*$ in a closed form. \noindent\textbf{D2:} The demonstrator's correctness is certified by a smooth \emph{Lyapunov-like} (value) function $V$ such that \begin{compactenum}[(a)] \item for all $\vx \in \X \setminus \inter(G)$, $V(\vx) \geq 0$, \item $V$ is radially unbounded, and \item for all $\vx \in \X \setminus \inter(G)$, with $\vu = \pi^*(\vx)$, \[ \nabla_x V \cdot \scr{F}(\vx, \vu) \leq - \epsilon \,.\] \end{compactenum} Once again, we assume that $V$ is not available in a closed form but that $V(\vx)$ and $\nabla V(\vx)$ can be computed for each $\vx \in \X$. Consider a demonstrator with a policy $\pi^*$ satisfying \textbf{D2}. \begin{lemma} Starting from any state $\vx \in \X \setminus \inter(G)$, the closed loop trajectory of $\Psi(\plant, \pi^*)$ reaches $\inter(G)$ in finite time. \end{lemma} Let $\lambda \in (0,1]$ be a chosen constant. We will denote $\pi^*(\vx)$ by $\vu^*$. 
For any $\vx \in \X \setminus \inter(G)$, we define the set $U_{\lambda}(\vx) \subseteq \U$ as: \begin{equation}\label{eq:U-extraction} U_{\lambda}(\vx):\ \{ \vu \in \U\ |\ \nabla V \cdot \scr{F}(\vx, \vu) \leq \lambda\, \nabla V \cdot \scr{F}(\vx, \vu^*) \}\,. \end{equation} \begin{theorem} For any $\lambda \in (0,1]$ and any policy $\pi$ such that $ (\forall\ \vx \in \X \setminus \inter(G))\ \pi(\vx) \in U_{\lambda}(\vx)$, the closed loop model $\Psi(\plant,\pi)$ satisfies the reachability property for $G$ (demonstrator correctness). \end{theorem} \paragraph{MPC Based Demonstrator:} We will now briefly consider how to implement a demonstrator for a reachability property and extract a suitable proof $V$. To do so, we will use a standard receding horizon MPC trajectory optimization scheme that discretizes the plant dynamics using a discretization step $\delta > 0$ and a terminal cost function chosen so that the resulting MPC stabilizes the plant $\scr{F}$ to a point $\vx^* \in \inter(G)$. Let $\hat{F}(\vx,\vu)$ be a discretization of $\scr{F}$ for the time-step $\delta$, derived using an Euler or Runge--Kutta scheme. We define $V^*(\vx)$ as the optimal value of the following problem: \begin{equation}\label{eqn:opt-demonstrator} \begin{array}{ll} \underset{\vu(0),\ldots,\vu(N\delta-\delta)}{\mathop{\min}} & \sum\limits_{j=0}^{N-1}Q(\vu(j\delta),\vx(j\delta))+H(\vx(N\delta))\\ \mathsf{s.t.} & \vx(0) = \vx \\ & \vx((j+1)\delta) = \hat{F}(\vx(j\delta), \vu(j\delta)) \\ & \ j = 0,\ldots, N-1 \,. \end{array} \end{equation} Likewise, $\pi^*(\vx)$ is the optimizing value of $\vu(0)$ in~\eqref{eqn:opt-demonstrator}. Under some well-known (and well-studied) conditions, the MPC scheme stabilizes the closed-loop dynamics to $\vx^*$ with the optimal cost-to-go $V^*$ as the desired Lyapunov function~\cite{Mayne+Others/2000/Constrained,Jadbabaie+Hauser/2005/Stability}. 
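The extraction of the permissible input set in Eq.~\eqref{eq:U-extraction} requires only point evaluations of $\nabla V$ and $\scr{F}$. A minimal sketch of the membership test follows; the one-dimensional plant, the quadratic $V$, and all numeric values are illustrative assumptions, not part of the formal construction:

```python
def dot(a, b):
    """Inner product of two vectors given as lists."""
    return sum(ai * bi for ai, bi in zip(a, b))

def in_U_lambda(F, grad_V, x, u, u_star, lam):
    """Check u in U_lambda(x):  grad V . F(x, u) <= lam * grad V . F(x, u*)."""
    g = grad_V(x)
    return dot(g, F(x, u)) <= lam * dot(g, F(x, u_star))

# Illustrative 1-d example: F(x, u) = [u] and V(x) = x^2, so grad V(x) = [2x].
F = lambda x, u: [u[0]]
grad_V = lambda x: [2.0 * x[0]]
# At x = 1 the demonstrator proposes u* = -1; u = -0.6 still decreases V fast
# enough for lambda = 0.5, while u = -0.4 does not.
assert in_U_lambda(F, grad_V, [1.0], [-0.6], [-1.0], 0.5)
assert not in_U_lambda(F, grad_V, [1.0], [-0.4], [-1.0], 0.5)
```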
We consider the following strategy for a demonstrator: \begin{enumerate} \item We design an MPC controller with a suitable cost function (usually $Q$ and $H$ are positive outside $\inter(G)$ and radially unbounded). \item We adjust the cost function by trial and error until the demonstrator works well on sampled initial states $\vx \in \X$ and the cost decreases strictly along each of the resulting trajectories. \item The gradient $\grad V^*$ can be estimated for a given $\vx$ by using the KKT conditions for the optimization problem~\eqref{eqn:opt-demonstrator} or using a numerical scheme that estimates the gradient by sampling around $\vx$. We also note that some methods like iLQR~\cite{li2004iterative} provide a local $V^*$ in closed form (and thus $\grad V^*$) along with the solution $\pi^*$. \end{enumerate} \paragraph{Verifier:} Given a policy $\pi$ and a property $\varphi$, the \verifier checks whether the closed loop $\Psi(\plant, \pi)$ satisfies the property $\varphi$ and, if not, produces a counterexample trace. The problem of verifying non-trivial properties of nonlinear systems is undecidable; therefore a perfect \verifier is not feasible. In this section, we review two main classes of solutions: (a) a \verifier that attempts to approximately solve the verification problem using decision procedures, as described in our earlier work~\cite{ravanbakhsh2018learning}. Such a \verifier concludes that the system satisfies the property or is likely buggy. It produces an \emph{abstract counterexample} that need not correspond to a real trace of $\Psi(\plant,\pi)$ but can nevertheless be used in the learning loop (see~\cite{ravanbakhsh2018learning} for further details); or alternatively (b) a \emph{falsifier} that tests $\Psi(\plant,\pi)$ for a large number of (carefully chosen) initial states $\vx \in \X$, either producing a real counterexample or concluding that the system likely satisfies the property. 
In this paper, we consider falsifiers that ``invert'' the optimization problem from Eq.~\eqref{eqn:opt-demonstrator}, as a search heuristic for a counterexample to the property of reaching the goal $G$. \begin{equation}\label{eqn:opt-falsifier} \begin{array}{ll} \underset{{\vx(0) }}{\mathbf{\max}}\ & \sum\limits_{j=0}^{N-1} (Q(\vu(j\delta), \vx(j\delta))) + H(\vx(N\delta)) \\ \mathsf{s.t.} & \vx((j+1)\delta) = \hat{F}(\vx(j\delta), \vu(j\delta)) \\ & \vu(j\delta) = \pi(\vx(j\delta)) \,. \\ \end{array} \end{equation} Here, the optimization is over the unknown initial state $\vx(0)$, with the control values $\vu(j\delta)$ fixed by the policy $\pi$ to be falsified. We attempt to solve this problem by using a combination of randomly chosen initial states $\vx(0)$ and a second-order gradient descent search. While this is not guaranteed to find a falsifying input, it often does within a few iterations, outperforming random simulations. On the other hand, if $M$ trials (for a suitably large $M$, e.g., $M = 10^6$) do not yield a falsification, we declare that the policy likely satisfies the property. \paragraph{Witness State Generation:} Having found a counterexample trace $\tr$, we still need to find a \emph{state} $\vx_{i} = \tr(t)$ of the trace to return back to the demonstrator. Ideally, we choose a state $\vx:\ \tr(t)$ in the trace at time $t$ such that $\pol_i(\vx) \notin U_{\lambda}(\vx)$, wherein $U_{\lambda}(\vx)$ is the set returned by the \demonstrator (derived from the Lyapunov conditions) for input $\vx$ (cf. Eq.~\eqref{eq:U-extraction}). For this purpose, we discretize time (using a small enough time-step $\delta$). At each discrete time $t$, if the policy $\pi$ is compatible with the \demonstrator at $\tr(t)$, we increment $t$. 
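The witness search above amounts to a scan over the discretized counterexample trace. A minimal sketch follows; the helper standing in for the demonstrator's compatibility test ($\pi(\vx) \in U_\lambda(\vx)$) and the one-dimensional example are illustrative assumptions:

```python
def find_witness(trace, pi, demonstrator_ok):
    """Return the first state on the (discretized) trace where the policy's
    output is rejected by the demonstrator, i.e. pi(x) not in U_lambda(x)."""
    for x in trace:
        if not demonstrator_ok(x, pi(x)):
            return x
    return None  # the trace never disagrees with the demonstrator

# Illustrative: 1-d states; the demonstrator accepts only inputs with u*x < 0
# (pushing toward the origin), while the policy always outputs +1.
trace = [[-1.0], [-0.5], [0.5], [1.0]]
pi = lambda x: 1.0
ok = lambda x, u: u * x[0] < 0.0
assert find_witness(trace, pi, ok) == [0.5]
```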
\section{Introduction}~\label{sec:intro} Policy learning (i.e., learning feedback control laws) is a fundamental problem in control theory and robotics, with applications that include controlling under-actuated robotic systems and autonomous vehicles. The main challenge lies in designing a policy that provably achieves task specifications such as eventually reaching a target set of states. In this paper, we present an automated approach to policy learning with three goals in mind: (a) compute policies that are guaranteed to satisfy a set of formal specifications, expressed in a suitable logic; (b) represent policies as a linear combination of a set of pre-defined basis functions, which can include polynomials, trigonometric functions, or even user-provided functions; and (c) compute policies efficiently, in real time. Finding policies that satisfy all three properties is not easy. In this paper, we provide a partial solution to this problem in the form of an automated method that learns from a demonstrator. Using two case studies, we show that complex controllers can be replaced by much simpler policies that achieve all three desired goals stated above. Our approach relies on a demonstrator component that can be queried for a given starting state and demonstrates control inputs to achieve the desired goals. Specifically, we use nonlinear, receding horizon model-predictive controllers (MPCs) as demonstrators. For a given input, the MPC formulates a nonlinear optimization problem by ``unrolling'' a predictive model of the system to some time horizon $T$. The constraints and objectives ensure that the behavior of the system over the time horizon satisfies the properties of interest, while optimizing some key performance metrics. A common solution to this problem lies in training a policy that ``mimics'' the input-output map of the MPC~\cite{levine2014learning,mordatch2014combining}. Instead, our approach is based on two new ideas. 
First, we extend the demonstrator to provide a range of permissible control inputs for each state by using the gradient of the MPC's cost-to-go function. This allows our search for a simple policy to succeed more often. Second, we use a counterexample-guided approach that iterates between querying the demonstrator to learn a candidate policy compatible with the demonstrator query results thus far, and a verifier that checks if the candidate policy conforms to the specifications, producing a counterexample upon failure. This minimizes the number of demonstrator queries. We demonstrate the applicability of our approach on two case studies that could not be solved previously, comparing the new policy learner with off-the-shelf supervised learning methods. The two case studies involve (i) performing maneuvers on a nonlinear ground vehicle model (illustrating the result in the Webots\texttrademark~\cite{Michel2004} robotics simulator), and (ii) controlling a nonlinear model of a fixed-wing aircraft called the \emph{Caltech ducted fan}~\cite{jadbabaie2002control}. We demonstrate in each case that our approach can learn simple policies that satisfy all the desired requirements of verification, simplicity, and fast computation. The resulting policies are orders of magnitude faster to execute when compared to the original MPCs from which they were learned. \subsection{Related Work} Policy learning from demonstrations is a fundamental problem in robotics, and the subject of much recent work. Argall et al. provide a survey of various learning from demonstration (LfD) approaches~\cite{argall2009survey}. These approaches are primarily distinguished by the nature of the demonstrators. For instance, the demonstrator can be a human expert~\cite{khansari2017learning}, an offline sample-based planning technique (e.g., Monte-Carlo Tree Search~\cite{guo2014deep}), or an offline trajectory optimization based technique~\cite{levine2014learning}. 
In particular, our approach uses an offline receding horizon MPC to provide demonstrations. An alternative to learning policies is to learn value (potential or Lyapunov) functions. It is well known that systems that can be controlled by relatively simple policies can require potential functions that are complex and hard to learn. Thus, a vast majority of approaches, including this paper, focus on policy learning. However, there have been approaches to learning value functions, including Zhong et al.~\cite{zhong2013value}, Khansari-Zadeh et al.~\cite{khansari2017learning}, and our previous work~\cite{ravanbakhsh2018learning}. Notably, our previous work queries demonstrators and uses counterexample-guided learning in a similar manner in order to learn a potential (or control Lyapunov) function described as an unknown polynomial of bounded degree. In contrast, the approach of this paper learns policies directly, and exploits the gradient of the cost-to-go function to make the demonstrator output a range of control inputs. This allows for more policies to be retained at each iterative step. At the same time, the fast convergence properties established in our previous work are retained in our policy learning framework. Another important limitation in iterative policy learning is the lack of an adversarial component that can actively identify and improve wrong policies~\cite{ross2011reduction,he2012imitation,kahn2017plato}. In contrast, our approach includes an adversarial verifier that actively finds mistakes to fix the current policy. However, an important drawback of doing so is our inability to learn on the actual platform in real time, unless the system can recover from violations of the properties we are interested in. Currently, all the learning is performed using mathematical models. The idea of generating controllers from rich temporal property specifications underlies the field of formal synthesis. 
A variety of recent approaches consider this problem, including discretization-based techniques that abstract the dynamics to finite state machines and use automata-theoretic approaches to synthesize controllers~\cite{mazo2010pessoa,ozay2013computing,ravanbakhsh2014infinite,rungger2016scots}, formal parameter synthesis approaches that search for unknown parameters so that the overall system satisfies its specifications~\cite{yordanov2008parameter,taly2011synthesizing,jha-iccps10,abate2017sound}, and deductive approaches that learn controllers and associated certificates such as Lyapunov functions~\cite{ravanbakhsh2015counter,el1994synthesis,tan2004searching,huang2015controller}. Recent work~\cite{raman2015reactive,vazquez-adhs18} presents approaches to controller synthesis for temporal logic based on the paradigm of counterexample-guided inductive synthesis (CEGIS)~\cite{solar2006combinatorial}. Querying for demonstrations can be viewed as a way of actively querying an oracle for positive examples, which is also done in some CEGIS variants. Our approach and these approaches are thus both instances of the abstract framework of {\em oracle-guided inductive synthesis}~\cite{jha2017theory}. \section{Learner} Given a set of observations $O_i$, the learner finds a policy that is compatible with the observations. Formally, the policy is parameterized by $\param$ and the policy space $\Pi$ is represented by the parameter space $\Param$. The learner wishes to find $\param$ s.t. \begin{equation}\label{eq:learner} (\forall (\vx_j, U_j) \in O_i) \ \pol_{\param}(\vx_j) \in U_j \,. \end{equation} This will be posed as a system of constraints over $\param$. Also, let $\Param_i \subseteq \Param$ be the set of all such $\param$. Let $\scr{V}$ be a finite set of basis functions $\{v_1,\ldots,v_K\}$ and let $\pi_{\param}$ be a linear combination of these functions: \[\pi_{\param}(\vx) : \sum_{k=1}^K \param_k v_k(\vx) \,.\] Note that $\pol_\param$ is a linear function of $\param$. 
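Since $\pi_\param$ is linear in $\param$, checking the compatibility condition of Definition~\ref{def:learner-compatible} for a candidate $\param$ amounts to evaluating a few linear inequalities per sample. A minimal sketch for a scalar control input (the basis functions, the polyhedral encoding of each $U_j$ as an $(A_j, \vb_j)$ pair, and all numbers are illustrative assumptions):

```python
def policy(theta, basis, x):
    """pi_theta(x) = sum_k theta_k * v_k(x) for scalar-valued basis functions."""
    return sum(t * v(x) for t, v in zip(theta, basis))

def compatible(theta, basis, samples):
    """Check pi_theta(x_j) in U_j for every sample (x_j, (A_j, b_j)), where
    U_j = { u : A_j u <= b_j }; for a scalar input, A_j and b_j are lists."""
    for x, (A, b) in samples:
        u = policy(theta, basis, x)
        if any(a * u > bi for a, bi in zip(A, b)):
            return False
    return True

# Illustrative affine policy u = theta_0 * x + theta_1 over a 1-d state, with
# one sample demanding u in [-1, -0.5] at x = 1 (encoded as u <= -0.5, -u <= 1).
basis = [lambda x: x, lambda x: 1.0]
samples = [(1.0, ([1.0, -1.0], [-0.5, 1.0]))]
assert compatible([-0.75, 0.0], basis, samples)
assert not compatible([0.0, 0.0], basis, samples)
```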
We will now derive the constraints for the compatibility condition (Def.~\ref{def:learner-compatible}), given a sample set $O_i: \{ (\vx_1, U_1), \ldots, (\vx_i, U_i) \}$. We will assume that each set $U_j$ is a polyhedron \[ U_j : \{\vu \ | \ A_{j} \vu \leq \vb_{j} \} \,. \] Therefore, in Eq.~\eqref{eq:learner}, $\pol_{\param}(\vx_j) \in U_j$ can be replaced with $A_{j} (\sum_{k=1}^K \param_k v_k(\vx_j) ) \leq \vb_{j}$. Since $\vx_j$ is known, the compatibility conditions yield a polyhedron over $\param$. \begin{theorem}\label{thm:linear-model} The compatibility conditions $\Param_i$, given a sample set $O_i$, form a linear feasibility problem. \end{theorem} Using ideas from Ravanbakhsh et al.~\cite{ravanbakhsh2018learning}, we show that the entire formal learning algorithm terminates in polynomial time if the learner selects the new parameters $\param \in \Param_i$ at each iteration carefully. By Theorem~\ref{thm:linear-model}, $\Param_i$ at iteration $i$ is a polyhedron. We consider two realistic assumptions: \begin{enumerate} \item $\Param_0$ is a compact set, where $\Param_0 \subseteq [-\Delta, \Delta]^K$ and $\Delta > 0$ is an arbitrarily large constant. \item The formal learning algorithm terminates whenever a $K$-ball of radius $\delta$ does not fit inside $\Param_i$ for some arbitrarily small $\delta > 0$ (not when $\Param_i = \emptyset$). \end{enumerate} We design the learner in the following way. In the learning process, $O_i$ implicitly defines $\Param_i$. Given $\Param_i$, let the learner return the center of the maximum volume ellipsoid (MVE) inside $\Param_i$. The problem of finding the MVE is equivalent to solving an SDP problem~\cite{vandenberghe1998determinant}. \begin{theorem}\label{thm:convergence} If the learner returns the center of the MVE inside $\Param_i$ at each iteration, the formal learning algorithm terminates in $\frac{K (\log(\Delta) - \log(\delta))}{- \log\left(1 - \frac{1}{K}\right) } = O(K^2)$ iterations. 
\end{theorem} This theorem addresses an important issue in statistical machine learning: if the model is not precise enough to capture a feasible policy, the learning procedure terminates, and one can then use a more complicated model with a larger set of features (basis functions). In other words, one can start from a simple model with a small number of parameters and iteratively add new basis functions. We will now illustrate the results of policy learning for linear combinations of basis functions using two case studies. The \demonstrator is implemented by an MPC with quadratic cost functions: \[Q = [\vu^t \ \vx^t] \ \mbox{diag}(Q') \ [\vu^t \ \vx^t]^t \,, \ H = \vx^t \ \mbox{diag}(H') \ \vx \,.\] A second-order method is used to solve Eq.~\eqref{eqn:opt-demonstrator}. For the falsifier, we use one million ($10^6$) simulations from randomly generated initial states, interspersed with $10^3$ iterations of the adversarial falsifier chosen by solving Eq.~\eqref{eqn:opt-falsifier}. If a counterexample is found, it is reported. Otherwise, the policy is declared \emph{likely correct}. Also, we use the demonstrator to randomly generate a single trace and initialize the dataset with demonstrations along that trace. \paragraph{Case-Study I (Car):} A car with two axles is modeled by the state variables $\vx^t = [x \ y \ v \ \alpha \ \beta]$, where $\beta = \tan(\gamma)$ and $\gamma$ is the angle between the front and back axles (Fig.~\ref{fig:schematics}(a)). The dynamics are defined as follows: \[ \dot{x} = v \cos(\alpha) \,, \dot{y} = v \sin(\alpha) \,, \dot{v} = u_1 \,, \dot{\alpha} = \frac{v}{b}\beta \,, \dot{\beta} = u_2 \,, \] where $b = 3$. Also, $u_1 \in [-1, 1]$ and $u_2 \in [-3, 3]$ are inputs. The goal is to follow a reference curve (the road) by controlling the lateral deviation from the midpoint of the reference, at a constant speed $v_0 = 10 \text{m/s}$. For convenience, the $y$-axis always coincides with this lateral deviation. 
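The bicycle dynamics above can be sketched directly in code. The input saturation to the stated ranges and the Euler stepping are illustrative choices for simulation only (the paper's MPC uses the discretization of Eq.~\eqref{eqn:opt-demonstrator}); $b = 3$ as in the model:

```python
import math

def car_dynamics(x, u, b=3.0):
    """Right-hand side of the bicycle model with state [x, y, v, alpha, beta]."""
    px, py, v, alpha, beta = x
    u1 = max(-1.0, min(1.0, u[0]))   # u1 in [-1, 1]
    u2 = max(-3.0, min(3.0, u[1]))   # u2 in [-3, 3]
    return [v * math.cos(alpha),     # x-dot
            v * math.sin(alpha),     # y-dot
            u1,                      # v-dot
            (v / b) * beta,          # alpha-dot
            u2]                      # beta-dot

def euler_step(x, u, dt=0.2):
    """One forward-Euler step of the closed-loop state."""
    return [xi + dt * di for xi, di in zip(x, car_dynamics(x, u))]

# Driving straight (alpha = beta = 0) at speed 10 leaves the lateral deviation
# y unchanged while x advances by v * dt.
x1 = euler_step([0.0, 0.0, 10.0, 0.0, 0.0], [0.0, 0.0])
assert abs(x1[0] - 2.0) < 1e-9 and abs(x1[1]) < 1e-9
```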
The state variable $x$ is ignored in our model and the velocity $v$ is taken relative to $v_0$ (i.e., $v := v - v_0$). The goal set is $G : (y, v, \alpha, \beta) \in [-0.1, 0.1]^4$ and the initial set is $I : (y, v) \in [-2, 2]^2, \ (\alpha, \beta) \in [-1, 1]^2$. For the MPC, the cost functions are defined by $Q' : [1\ 1\ 9 \ 9 \ 1 \ 1]$ and $H' : [90 \ 90 \ 10 \ 10]$, with $\delta = 0.2$ and $N = 10$. We verify that the MPC can solve the control problem for many random initial states, whereas an LQR controller with the same cost function fails: i.e., starting from $I$, some executions of the LQR controller do not reach the goal $G$. Since there are two inputs, we use two different parameterizations $\pol_\param : [\pol^1_{\param_1}, \pol^2_{\param_2}]$, where $\pol^1_{\param_1}$ ($\pol^2_{\param_2}$) is used to learn $u_{1}$ ($u_{2}$). We consider each input to be an affine combination of states (affine policy). Our approach successfully finds an affine policy with only $16$ demonstrations. To study the scalability of the method, we consider a varying number of cars which do not directly interact. Our goal is for each lateral deviation to converge to a narrow range using a single ``centralized'' policy that controls all of the cars at the same time. For $l$ cars, there are $2l$ inputs, each of which is an affine (linear) feedback with $4l+1$ ($4l$) terms. The results for up to $4$ cars are shown in Table~\ref{tab:bicycle}. The results indicate that the method converges much faster when the policy is linear as opposed to affine. This suggests that the choice of basis functions can significantly affect performance. Nevertheless, termination is guaranteed, and the method scales to higher-dimensional problems since the complexity is polynomial in the number of states: we use local search for the falsifier and the demonstrator, and SDP solvers to implement the learner.
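To make the learner concrete, the sketch below stacks per-sample compatibility conditions into one linear system over $\param$ (Theorem~\ref{thm:linear-model}), specialized to a scalar affine policy and interval input sets of our own choosing. In place of the MVE center (which needs an SDP solver), it computes the Chebyshev center by a single LP with SciPy; this is a simpler stand-in for illustration and does not inherit the convergence guarantee of Theorem~\ref{thm:convergence}:

```python
import numpy as np
from scipy.optimize import linprog

def compatibility_constraints(samples):
    """Stack the conditions A_j (sum_k theta_k v_k(x_j)) <= b_j into one
    system G theta <= h, for a scalar affine policy pi_theta(x) = theta.[x, 1]
    and interval input sets U_j = [lo_j, hi_j] (1-D polyhedra)."""
    G, h = [], []
    for x, lo, hi in samples:
        v = np.append(x, 1.0)        # basis values v(x) = [x, 1]
        G.append(v);  h.append(hi)   #   v . theta <= hi
        G.append(-v); h.append(-lo)  #  -v . theta <= -lo
    return np.asarray(G, float), np.asarray(h, float)

def chebyshev_center(G, h, delta=100.0):
    """Center of the largest ball inside {theta : G theta <= h}, with the
    center confined to the box [-delta, delta]^K; solved as an LP."""
    K = G.shape[1]
    norms = np.linalg.norm(G, axis=1, keepdims=True)
    res = linprog(c=np.r_[np.zeros(K), -1.0],          # maximize radius r
                  A_ub=np.hstack([G, norms]), b_ub=h,  # G theta + r|g_i| <= h
                  bounds=[(-delta, delta)] * K + [(0.0, None)])
    return res.x[:K], res.x[K]  # (center, inscribed radius)

samples = [(np.array([0.0]), -1.0, 1.0),  # pi_theta(x_1) must lie in [-1, 1]
           (np.array([1.0]),  0.0, 2.0)]  # pi_theta(x_2) must lie in [0, 2]
G, h = compatibility_constraints(samples)
center, radius = chebyshev_center(G, h)
```

For this toy data the inscribed radius is limited by the second (diagonal) constraint pair, so the LP returns $r = 1/\sqrt{2}$.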
We note that falsification is quite fast for most iterations, and takes significant time only at the final iteration, where the model is most likely correct. Moreover, witness generation is the most expensive computation, especially for larger problems. However, such an active search for witnesses helps to generate useful data and guarantees convergence of the algorithm. We implemented the controller for a car in the Webots\texttrademark~\cite{Michel2004} simulator. The cars in Webots have a map of the road and simulate GPS-based localization with internal sensors for heading, steering angle, and velocity. The car model in Webots is much more complicated than our simple bicycle model; however, we use the bicycle model to design the policy. The simulation traces for some random initial states, shown in Fig.~\ref{fig:webots}, demonstrate that the learned controller is robust enough to compensate for the model mismatch. The simulations are shown for a straight as well as a curved road segment. \begin{figure}[t] \vspace{0.2cm} \begin{center} \includegraphics[width=0.45\textwidth]{pics/schematics} \end{center} \caption{Schematic View of Case-Study I (a) and II (b)}\label{fig:schematics} \end{figure} \begin{table}[t] \vspace{0.2cm} \caption{Results for Case-Study I} \begin{center} \begin{tabular}{||c|c||c|c|c||c||} \hline $\#$Cars & K & $\#$Itr. & Wit. Gen. T. & Fals. T. & Total T. \\ \hline \multirow{2}{*}{1} & 8 & 2 & 3 & 59 & 66 \\ & 10 & 10 & 22 & 40 & 65\\ \hline \multirow{2}{*}{2} & 32 & 13 & 115 & 98 & 233\\ & 36 & 33 & 423 & 55 & 492\\ \hline \multirow{2}{*}{3} & 72 & 16 & 274 & 364 & 680 \\ & 78 & 44 & 1903 & 141 & 2084 \\ \hline \multirow{2}{*}{4} & 128 & 47 & 1360 & 257 & 1697\\ & 136 & 62 & 6766 & 187 & 7087\\ \hline \end{tabular} \end{center} \label{tab:bicycle} $K$ is the number of model parameters. $\#$Itr. is the number of iterations. Total T.
is the total computation time in seconds on a MacBook Pro with an Intel Core i7 processor (up to 4 GHz).\\ \end{table} \begin{figure*}[t] \vspace{0.2cm} \begin{center} \includegraphics[width=0.9\textwidth]{pics/sim-ducted-fan} \end{center} \caption{Three execution traces of Case-Study II for the learned policy. $h$ is the height and $x$ is the horizontal position (initially, $x = 0$). The wings indicate the value of $\beta$ at some time instants. Initial states are as follows: blue: [1 0.5 0.5 0.5 0.5]$^t$, red: [0 0 0 0 2]$^t$, and yellow: [-1 -0.5 0.5 0.5 -1]$^t$.}\label{fig:sim-ducted-fan} \end{figure*} \begin{figure}[t] \begin{center} \includegraphics[width=0.45\textwidth]{pics/webots} \end{center} \caption{Simulation Results in Webots for Two Road Segments: (a) a straight road, and (b) a curved road. The center of the road is shown with a black line and traces are shown with colored lines starting from different states.}\label{fig:webots} \end{figure} \paragraph{Case-Study II (Caltech Ducted Fan):} We consider an example from Jadbabaie et al.~\cite{jadbabaie2002control}, where the authors develop a mathematical model for the Caltech ducted fan. The state $\vx^t = [v \ \gamma \ \beta \ \dot{\beta} \ h]$ consists of the speed $v$, the angle of the velocity vector $\gamma$, the angle of the ducted fan $\beta$, the angular velocity $\dot{\beta}$, and the height $h$. Fig.~\ref{fig:schematics}(b) shows a schematic view of the ducted fan. The problem here is to stabilize the fan so that it moves steadily forward. The dynamics are: \begin{align*} m \dot{v} = & -D(v, \alpha) - W \sin(\gamma) + u \cos(\alpha + \delta_u) \\ m v \dot{\gamma} = & L(v, \alpha) - W \cos(\gamma) + u \sin(\alpha + \delta_u) \\ J \ddot{\beta} = & M(v, \alpha) - u l_T \sin(\delta_u)\\ \dot{h} = & v \sin(\gamma) \,, \end{align*} where $\alpha = \beta - \gamma$, $u$ is the thrust force, and $\delta_u$ is the angle of the thrust direction ($\vu^t = [u \ \delta_u]$) (cf.~\cite{jadbabaie2002control} for a full description).
The model has a steady state $\vx^{*t} = [v_0 \ 0 \ \beta_0 \ 0 \ 0]$, where $v_0 = 6$ and $\beta_0 = 0.177$ (for input $\vu^{*t} = [3.2 \ -0.138]$), for which the ducted fan steadily moves forward. The fact that the dynamics are not affine in the control is problematic, as we need each $U_j$ to be a polyhedron. To get around this, we use $u_{s} : u \ {\sin}(\delta_u)$ and $u_{c} : u \ {\cos}(\delta_u)$ as our inputs, and rewrite the equations. The dynamics of the new system are affine in the control, and each $U_j$ is then a polyhedron. By taking $\vx^*$ and $\vu^*$ as the origins and scaling $v$ down by a factor of $2.5$, \[ \vx := [0.4 \ 1 \ 1 \ 1 \ 1]^t \circ (\vx - \vx^*) \ , \ \vu := \vu - \vu^* \,, \] the goal is to reach $G : [-0.2, 0.2]^5$ from $I: [-0.5, 0.5]^5$. For the MPC, we use $Q': [0.02 \ 0.02\ 1 \ 1\ 1\ 1\ 1]$, $H':[15\ 15\ 15\ 15\ 15]$, $\delta = 0.3$, $N=15$. Let $\scr{T} : \{\tau_1(\vx),...,\tau_{K'}(\vx)\}$ be a set of terms defined over $\vx$. Basis functions $v_k(\vx):\ \scr{T}^{\alpha_k}$ ($\pol_{\param} : \sum_k \param_k \ \scr{T}^{\alpha_k}$) correspond to monomials of terms in $\scr{T}$, wherein $\alpha_k \in \mathbb{N}^{K'}$ is a vector of natural number powers such that $|\alpha_k|_1 \leq d$ for some degree bound $d > 0$. Both inputs are parameterized with terms $\scr{T}:\{v, \gamma, \beta, \dot{\beta}, h, \sin(\beta), \cos(\beta)\}$ and the degree $d$ is $2$. Using the formal learning algorithm with $\lambda = 0.1$, suitable parameters are found with $46$ demonstrations. Several traces of $\Psi(\plant, \pol)$ for the learned policy $\pol$ are shown in Fig.~\ref{fig:sim-ducted-fan}. The results suggest that our framework can efficiently yield reliable solutions to reachability problems, given an appropriate basis. Nevertheless, if the basis is not rich enough, the method declares failure after a few iterations. For example, when $\scr{T}:\{v, \gamma, \beta, \dot{\beta}, h\}$, the method terminates in $20$ iterations without any solution.
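The input substitution above is invertible whenever the thrust is positive, and it makes the thrust terms affine in the new inputs. A quick numeric check of both facts; the steady-state input is quoted from the text, while the test angle $\alpha = 0.3$ is an arbitrary value of our own:

```python
import numpy as np

def to_affine_inputs(u, delta_u):
    """The substitution u_s = u sin(delta_u), u_c = u cos(delta_u)."""
    return u * np.sin(delta_u), u * np.cos(delta_u)

def from_affine_inputs(u_s, u_c):
    """Recover thrust magnitude and direction (valid for u > 0)."""
    return np.hypot(u_s, u_c), np.arctan2(u_s, u_c)

# Round trip at the steady-state input (u, delta_u) = (3.2, -0.138).
u, d = 3.2, -0.138
u_s, u_c = to_affine_inputs(u, d)
u_back, d_back = from_affine_inputs(u_s, u_c)

# The thrust terms become affine in (u_s, u_c): for any alpha,
#   u sin(alpha + delta_u) = u_c sin(alpha) + u_s cos(alpha).
alpha = 0.3
lhs = u * np.sin(alpha + d)
rhs = u_c * np.sin(alpha) + u_s * np.cos(alpha)
```

The same angle-addition identity handles the $u\cos(\alpha+\delta_u)$ term in the first equation of the dynamics.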
\paragraph{Comparison with Linear Regression:} We consider a simple supervised learning algorithm in which the demonstrator generates the optimal input $\vu_i$ for a given state $\vx_i$. Using the demonstrator, we generate optimal traces starting from $M$ random initial states. Then, the states in the optimal traces (and their corresponding inputs) are added to the training data. Having a dataset $\{(\vx_1, \vu_1), \ldots, (\vx_j, \vu_j)\}$, one wishes to find a policy with low error (at least) on the training dataset. In a typical statistical learning procedure, one would minimize the error between the policy and the demonstrations: \[\min_\theta \sum_{i=1}^j (\pi_\theta(\vx_i)-\vu_i)^2\,.\] For both case studies, we tried this statistical learning approach with different training data sizes: $10 < M < 1000$. In all experiments the falsifier found a trace on which the learned policy fails. We believe that this notion of error used in regression is merely a heuristic and may not be relevant. For example, we noticed that, because of input saturation, many states in the training data have exactly the same corresponding control inputs, and this prevents a simple linear regression from succeeding. \section{Conclusion} We have presented a policy learning approach that combines learning from demonstrations with formal verification, and demonstrated its effectiveness on two case studies. We showed cases where naive supervised learning fails due to its simplicity, whereas our method can solve the problem. Our future work will consider nonlinear models such as deep neural networks, as well as extensions to support a richer set of properties beyond reachability. \section*{Acknowledgments} This work was funded in part by NSF under award numbers SHF 1527075, CPS 1646556, and CPS 1545126, by the DARPA Assured Autonomy grant, and by Berkeley Deep Drive. \bibliographystyle{plain}
\section{Introduction} An \emph{iterated function system (IFS)} on $\mathbb{R}^d$ is a finite collection of strictly contractive self-maps $f_1,\ldots,f_\kappa$. A classical result, formalized by Hutchinson \cite{Hutchinson1981} (although the crucial idea goes back to Moran \cite{Moran1946}), states that for every IFS there is a unique nonempty compact set $E \subset \mathbb{R}^d$ for which \begin{equation*} E = \bigcup_{i=1}^\kappa f_i(E). \end{equation*} When the mappings are similitudes (or conformal) and the pieces $f_i(E)$ do not overlap much, the Hausdorff dimension of $E$ is easily determined by the contraction ratios of the mappings $f_i$, see for example \cite{Hutchinson1981}, \cite{MauldinUrbanski1996}, and \cite{KaenmakiVilppolainen2006}. In the present article, we assume that the mappings $f_i$ are affine; in this case the set $E$ is called a \emph{self-affine set}. In addition, we do not require any non-overlapping condition. Dropping either the conformality or separation hypothesis makes the problem of estimating dimension dramatically more complicated. The main feature of our work is that we are able to drop both, while obtaining results which are valid everywhere, not just generically. The so-called singular value function plays a prominent r\^ole in the study of the dimension of self-affine sets. Following \cite[Proposition 4.1]{Falconer1988}, the singular value function leads to a notion of the singular value dimension, which serves as an upper bound for the upper Minkowski dimension, see \cite{DouadyOesterle1980} and \cite{Falconer1988}. Falconer \cite{Falconer1988} (see also \cite{Solomyak1998}) proved that assuming the norms of the linear parts to be less than $\tfrac12$, this upper bound is sharp, and also equals the Hausdorff dimension, for $\mathcal{L}^{d\kappa}$-almost every choice of translation vectors. Here $\mathcal{L}^{d\kappa}$ denotes the Lebesgue measure on $\mathbb{R}^{d\kappa}$. 
Falconer and Miao \cite{FalconerMiao2007a} have recently shown that the size of the set of exceptional translation vectors is small also in the sense of Hausdorff dimension. The self-affine carpets of McMullen \cite{McMullen1984} show that one cannot replace ``almost all'' by ``all'', even if the pieces do not overlap. Furthermore, it follows from examples in \cite{Edgar1992} that the $\tfrac12$ bound on the norms is essential. These counterexamples are of a very special kind, and it is therefore of interest to find families of self-affine sets for which one can loosen these assumptions. A result in this direction was obtained by Hueter and Lalley in \cite{HueterLalley1995}, where it is proven that for an explicit open class of self-affine sets, the Hausdorff dimension is indeed given by the singular value dimension, as long as the pieces $f_i(E)$ are disjoint. In their result the norms may be greater than $\tfrac12$, but it follows from their hypotheses that the singular value dimension is less than $1$. In a different direction, it was recently proven in \cite{JordanPollicottSimon2006} that for a randomized version of self-affine sets the natural analogue of Falconer's formula holds almost surely regardless of the norms. See also \cite{Hu1998}, \cite{Kaenmaki2004}, \cite{FengWang2005}, \cite{Shmerkin2005}, and \cite{FalconerMiao2007b} for other recent results on the dimensional properties of self-affine sets. For a fixed $\kappa$ and $d$, the class of all IFSs consisting of $\kappa$ affine maps on $\mathbb{R}^d$ inherits a natural topology from $\mathcal{A}_d^\kappa$, where $\mathcal{A}_d=GL_d(\mathbb{R})\times \mathbb{R}^d$ is identified with the space of all invertible affine mappings on $\mathbb{R}^d$. We will say that a family of affine IFS's is \emph{robust} if it is open in this topology, and that a property is \emph{stable} if the set of IFS's where it holds is robust.
We define a class of self-affine sets in which we allow overlapping and the norms of all the maps can be arbitrarily close to $1$; see \S \ref{sec:kakeya} for the details. We show that in this class the Minkowski dimension coincides with the singular value dimension (Theorem \ref{thm:main_result}), and it can be defined dynamically as the zero of a certain pressure function. Even though the family is not itself robust, in \S \ref{sec:examples} we will exhibit robust subsets which preserve all the interesting properties. This is the first instance where the equality of Minkowski dimension and singular value dimension is established for a robust family, without requiring any separation assumptions. Moreover, we prove that the Minkowski dimension is a continuous function of the generating maps within this family. The inspiration for our work arose from the theory of Kakeya sets. Recall that a subset of $\mathbb{R}^d$ is called a \emph{Kakeya set} (sometimes also a \emph{Besicovitch set}) if it contains a unit segment in every direction. The long-standing Kakeya conjecture asserts, in one of its many forms, that the Hausdorff dimension of a Kakeya set in $\mathbb{R}^d$ is precisely $d$. This is wide open for $d\ge 3$; however, for $d=2$ it is known to be true, and indeed the proof is not difficult, see for example \cite{Wolff1999}. This result implies that the overlap between segments pointing in different directions is small, in the sense that the dimension of the union of all segments is the same as if there was no overlap at all. We strove to construct a family of self-affine sets in which the cylinder sets are aligned in different directions, so that the possible overlaps between them would not affect the dimension calculations. Although the technical details may obscure it somewhat, it may be useful to keep this basic idea in mind while going through the definitions and proofs. The paper is structured as follows. 
In \S \ref{sec:self-affine}, we introduce some standard notation and present some preliminary facts on self-affine sets. The family of self-affine sets of Kakeya type is defined in \S \ref{sec:kakeya}, where Theorem \ref{thm:main_result}, the main result of the paper, is stated. The proof of Theorem \ref{thm:main_result} is contained in \S \ref{sec:proof}. In \S \ref{sec:projection}, we study projections of self-affine sets, as part of our preparation to obtain explicit examples of self-affine sets of Kakeya-type. These examples are introduced in \S \ref{sec:examples}, where we finish our discussion with some remarks and open questions. \section{Self-affine sets} \label{sec:self-affine} Throughout the article, we use the following notation: Let $0 < \overline{\alpha} < 1$ and $I = \{ 1,\ldots,\kappa \}$ with $\kappa \ge 2$. Put $I^* = \bigcup_{n=1}^\infty I^n$ and $I^\infty = I^\mathbb{N}$. For each $\mathtt{i} \in I^*$, there is $n \in \mathbb{N}$ such that $\mathtt{i} = (i_1,\ldots,i_n) \in I^n$. We call this $n$ the \emph{length} of $\mathtt{i}$ and we denote $|\mathtt{i}|=n$. The length of elements in $I^\infty$ is infinity. Moreover, if $\mathtt{i} \in I^*$ and $\mathtt{j} \in I^* \cup I^\infty$ then with the notation $\mathtt{i}\mathtt{j}$ we mean the element obtained by juxtaposing the terms of $\mathtt{i}$ and $\mathtt{j}$. For $\mathtt{i} \in I^*$, we define $[\mathtt{i}] = \{ \mathtt{i}\mathtt{j} : \mathtt{j} \in I^\infty \}$ and we call the set $[\mathtt{i}]$ a \emph{cylinder set} of level $|\mathtt{i}|$. If $\mathtt{j} \in I^* \cup I^\infty$ and $1 \le n < |\mathtt{j}|$, we define $\mathtt{j}|_n$ to be the unique element $\mathtt{i} \in I^n$ for which $\mathtt{j} \in [\mathtt{i}]$. We also denote $\mathtt{i}^- = \mathtt{i}|_{|\mathtt{i}|-1}$. With the notation $\mathtt{i} \bot \mathtt{j}$, we mean that the elements $\mathtt{i},\mathtt{j} \in I^*$ are \emph{incomparable}, that is, $[\mathtt{i}] \cap [\mathtt{j}] = \emptyset$.
We call a set $A \subset I^*$ incomparable if all of its elements are mutually incomparable. Finally, with the notation $\mathtt{i} \land \mathtt{j}$, we mean the common beginning of $\mathtt{i} \in I^*$ and $\mathtt{j} \in I^*$, that is, $\mathtt{i} \land \mathtt{j} = \mathtt{i}|_n = \mathtt{j}|_n$, where $n = \min\{ k-1 : \mathtt{i}|_k \ne \mathtt{j}|_k \}$. Defining \begin{equation*} |\mathtt{i} - \mathtt{j}| = \begin{cases} \overline{\alpha}^{|\mathtt{i} \land \mathtt{j}|}, \quad &\mathtt{i} \ne \mathtt{j} \\ 0, &\mathtt{i} = \mathtt{j} \end{cases} \end{equation*} whenever $\mathtt{i},\mathtt{j} \in I^\infty$, the couple $(I^\infty,|\cdot|)$ is a compact metric space. We call $(I^\infty,|\cdot|)$ a \emph{symbol space} and an element $\mathtt{i} = (i_1,i_2,\ldots) \in I^\infty$ a \emph{symbol}. If there is no danger of misunderstanding, we will also call an element $\mathtt{i} \in I^*$ a symbol. Define the \emph{left shift} $\sigma \colon I^\infty \to I^\infty$ by setting \begin{equation} \sigma(i_1,i_2,\ldots) = (i_2,i_3,\ldots). \end{equation} The notation $\sigma(i_1,\ldots,i_n)$ means the symbol $(i_2,\ldots,i_n) \in I^{n-1}$. Observe that to be precise in our definitions, we need to work with ``empty symbols'', that is, symbols with zero length, which will be denoted by $\varnothing$. The singular values $1 > ||A|| = \alpha_1(A) \ge \cdots \ge \alpha_d(A) > 0$ of a contractive invertible matrix $A \in \mathbb{R}^{d\times d}$ are the square roots of the eigenvalues of the positive definite matrix $A^*A$, where $A^*$ is the transpose of $A$. The normalized eigenvectors of $A^*A$ are denoted by $\theta_1(A),\ldots,\theta_d(A)$. These eigenvectors together with singular values give geometric information about the matrix $A$. For example, let $v$ be the unit vector with direction equal to the major axis of the ellipse $A(B)$, where $B$ is any ball. 
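The metric just defined is easy to realize on finite truncations of symbols. The following sketch (with an alphabet of two letters and $\overline{\alpha} = 1/2$, both our own choices) also checks the strong triangle inequality, which makes $(I^\infty, |\cdot|)$ an ultrametric space:

```python
from itertools import product, takewhile

ALPHA = 0.5  # any choice of alpha_bar in (0, 1)

def common_prefix_len(i, j):
    """|i ∧ j|: the length of the common beginning of two symbols."""
    return sum(1 for _ in takewhile(lambda p: p[0] == p[1], zip(i, j)))

def dist(i, j, alpha=ALPHA):
    """The metric |i - j| = alpha^{|i ∧ j|} (on finite truncations)."""
    return 0.0 if i == j else alpha ** common_prefix_len(i, j)

words = list(product((1, 2), repeat=4))  # level-4 truncations of I^infty

# strong triangle inequality: d(i,k) <= max(d(i,j), d(j,k))
ultrametric = all(dist(i, k) <= max(dist(i, j), dist(j, k))
                  for i in words for j in words for k in words)
```

Two symbols sharing a common beginning of length $n$ are at distance $\overline{\alpha}^n$, so cylinder sets are exactly the balls of this metric.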
By definition, the direction of $v$ is the image under $A$ of a vector which maximizes $|A x|$ over all $x$ in the unit ball. But $\theta_1(A)$ is precisely such a vector since $|A x|^2 = A^* A x \cdot x$. Thus, explicitly, $v = A\bigl( \theta_1(A) \bigr)/\alpha_1(A)$. For more detailed information, the reader is referred to \cite[\S V.1.3]{Temam1988}. For a contractive invertible matrix $A \in \mathbb{R}^{d\times d}$, we define the \emph{singular value function} to be \begin{equation*} \varphi^t(A) = \alpha_1(A) \cdots \alpha_l(A) \alpha_{l+1}(A)^{t-l}, \end{equation*} where $0 \le t < d$ and $l$ is the integer part of $t$. For $t \ge d$, we put $\varphi^t(A) = \bigl(\alpha_1(A) \cdots \alpha_d(A)\bigr)^{t/d} = |\det(A)|^{t/d}$. For each $i \in I$, fix a contractive invertible matrix $A_i \in \mathbb{R}^{d\times d}$ such that $||A_i|| \le \overline{\alpha} < 1$. Clearly the products $A_\mathtt{i} = A_{i_1}\cdots A_{i_n}$ are also contractive and invertible as $\mathtt{i} \in I^n$ and $n \in \mathbb{N}$. Denoting $\underline{\alpha} = \min_{i \in I} \alpha_d(A_i) > 0$, for each $t,\delta \ge 0$ we have \begin{equation} \label{eq:cylinder1} \varphi^t(A_\mathtt{i})\underline{\alpha}^{\delta |\mathtt{i}|} \le \varphi^{t+\delta}(A_\mathtt{i}) \le \varphi^t(A_\mathtt{i})\overline{\alpha}^{\delta |\mathtt{i}|} \end{equation} whenever $\mathtt{i} \in I^*$. According to \cite[Corollary V.1.1]{Temam1988} and \cite[Lemma 2.1]{Falconer1988}, the following holds for all $t \ge 0$: \begin{equation} \label{eq:cylinder2} \varphi^t(A_{\mathtt{i}\mathtt{j}}) \le \varphi^t(A_\mathtt{i}) \varphi^t(A_\mathtt{j}) \end{equation} whenever $\mathtt{i},\mathtt{j} \in I^*$. Given $t \ge 0$, we define the \emph{topological pressure} to be \begin{equation} \label{eq:pressure} P(t) = \lim_{n \to \infty} \tfrac{1}{n} \log\sum_{\mathtt{i} \in I^n} \varphi^t(A_\mathtt{i}). 
\end{equation} The limit above exists by the standard theory of subadditive sequences since for each $t \ge 0$, using \eqref{eq:cylinder2}, \begin{equation*} \sum_{\mathtt{i} \in I^{n+m}} \varphi^t(A_\mathtt{i}) \le \sum_{\mathtt{i} \in I^{n+m}} \varphi^t(A_{\mathtt{i}|_n})\varphi^t(A_{\sigma^n(\mathtt{i})}) = \sum_{\mathtt{i} \in I^n}\varphi^t(A_\mathtt{i}) \sum_{\mathtt{j} \in I^m}\varphi^t(A_\mathtt{j}) \end{equation*} whenever $n,m \in \mathbb{N}$. Moreover, as a function, $P \colon [0,\infty) \to \mathbb{R}$ is continuous and strictly decreasing with $\lim_{t \to \infty}P(t) = -\infty$: For $t,\delta \ge 0$ and $n \in \mathbb{N}$, we have, using \eqref{eq:cylinder1}, \begin{equation*} \delta\log\underline{\alpha} + \tfrac{1}{n} \log\sum_{\mathtt{i} \in I^n} \varphi^t(A_\mathtt{i}) \le \tfrac{1}{n} \log\sum_{\mathtt{i} \in I^n} \varphi^{t+\delta}(A_\mathtt{i}) \le \delta\log\overline{\alpha} + \tfrac{1}{n} \log\sum_{\mathtt{i} \in I^n} \varphi^t(A_\mathtt{i}). \end{equation*} Letting $n \to \infty$, we get $0 < -\delta\log\overline{\alpha} \le P(t) - P(t+\delta) \le -\delta\log\underline{\alpha}$. Since $P(0) = \log\kappa$, we have actually shown that there exists a unique $t > 0$ for which $P(t)=0$. The \emph{singular value dimension} is defined to be the zero of the topological pressure. See also \cite[Proposition 4.1]{Falconer1988}. \begin{theorem} \label{thm:semiconformal} Suppose that for each $i \in I$ there is an invertible matrix $A_i \in \mathbb{R}^{d\times d}$ with $||A_i|| \le \overline{\alpha}$. 
If for given $t \ge 0$ there exists a constant $D \ge 1$ such that \begin{equation*} D^{-1}\varphi^t(A_\mathtt{i})\varphi^t(A_\mathtt{j}) \le \varphi^t(A_{\mathtt{i}\mathtt{j}}) \end{equation*} whenever $\mathtt{i},\mathtt{j} \in I^*$ then there exists a Borel probability measure $\mu$ on $I^\infty$, a constant $c \ge 1$, and $1 > \lambda_1(\mu) \ge \cdots \ge \lambda_d(\mu) > 0$ such that \begin{equation} \label{eq:semiconformal} c^{-1} e^{-|\mathtt{i}|P(t)}\varphi^t(A_\mathtt{i}) \le \mu([\mathtt{i}]) \le ce^{-|\mathtt{i}|P(t)}\varphi^t(A_\mathtt{i}) \end{equation} whenever $\mathtt{i} \in I^*$ and \begin{equation*} \lim_{n \to \infty} \alpha_k(A_{\mathtt{i}|_n})^{1/n} = \lambda_k(\mu) \end{equation*} for $\mu$-almost all $\mathtt{i} \in I^\infty$ and for every $k \in \{ 1,\ldots,d \}$. \end{theorem} \begin{proof} Using the assumptions, \eqref{eq:cylinder1}, and \eqref{eq:cylinder2}, the existence of a Borel probability measure $\mu$ satisfying \eqref{eq:semiconformal} follows from \cite[Theorem 2.2]{KaenmakiVilppolainen2006} by a minor modification. More precisely, in \cite{KaenmakiVilppolainen2006} it was assumed that the parameter $t$ is an exponent, but an examination of the proof reveals that this fact is not required. Using \cite[Theorem 2.2]{KaenmakiVilppolainen2006}, \eqref{eq:cylinder2}, and Kingman's subadditive ergodic theorem \cite{Steele1989}, the limit \begin{equation*} E^t(\mu) = \lim_{n \to \infty} \tfrac1n \log\varphi^t(A_{\mathtt{i}|_n}) \end{equation*} exists for $\mu$-almost every $\mathtt{i} \in I^*$ and for every $t \ge 0$. Setting now $\lambda_k(\mu) = \exp\bigl( E^k(\mu)-E^{k-1}(\mu) \bigr)$ for $k \in \{ 1,\ldots,d \}$, we have finished the proof. \end{proof} It may appear that the assumption of Theorem \ref{thm:semiconformal} is very strong. However, it is implied by some simple geometrical conditions; see Remark \ref{P-semiconformal}. 
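For small systems, the singular value function and the pressure \eqref{eq:pressure} are easy to evaluate numerically. The sketch below is restricted to $d = 2$ and a fixed finite level $n$, with toy matrices of our own choosing; for identical diagonal maps the finite-level sums already equal the limit, so the bisection recovers the singular value dimension exactly up to tolerance:

```python
import numpy as np
from itertools import product

def phi(A, t):
    """The singular value function phi^t for a 2x2 matrix."""
    a1, a2 = np.linalg.svd(A, compute_uv=False)  # a1 >= a2
    if t < 1:
        return a1 ** t
    if t < 2:
        return a1 * a2 ** (t - 1)
    return (a1 * a2) ** (t / 2)

def pressure(mats, t, n):
    """Finite-level approximation (1/n) log sum_{i in I^n} phi^t(A_i)."""
    total = sum(phi(np.linalg.multi_dot(w) if n > 1 else w[0], t)
                for w in product(mats, repeat=n))
    return np.log(total) / n

def singular_value_dimension(mats, n=8, tol=1e-6):
    """Approximate the zero of t -> P(t) by bisection; P is strictly
    decreasing with P(0) = log(kappa) > 0."""
    lo, hi = 0.0, 4.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if pressure(mats, mid, n) > 0 else (lo, mid)
    return 0.5 * (lo + hi)
```

For two similitudes with ratio $1/2$ the zero is $t = 1$; for two copies of $\mathrm{diag}(0.6, 0.3)$ it is $1 + \log(1.2)/\log(10/3) \approx 1.151$, in line with the closed-form pressure of diagonal systems.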
Observe also that even if the measure satisfying \eqref{eq:semiconformal} did not exist, the latter claim of Theorem \ref{thm:semiconformal} remains true for the natural measure found in \cite[Theorem 4.1]{Kaenmaki2004}. If for each $i \in I$ an invertible matrix $A_i \in \mathbb{R}^{d \times d}$ with $||A_i|| \le \overline{\alpha}$ and a translation vector $a_i$ are fixed then we define a \emph{projection mapping} $\pi \colon I^\infty \to \mathbb{R}^d$ by setting \begin{equation*} \pi(\mathtt{i}) = \sum_{n=1}^\infty A_{\mathtt{i}|_{n-1}}a_{i_n} \end{equation*} as $\mathtt{i} = (i_1,i_2,\ldots)$. Using the triangle inequality, we have \begin{align*} |\pi(\mathtt{i}) - \pi(\mathtt{j})| &\le \sum_{n=|\mathtt{i} \land \mathtt{j}|+1}^\infty |A_{\mathtt{i}|_{n-1}}a_{i_n} - A_{\mathtt{j}|_{n-1}}a_{j_n}| \\ &\le \sum_{n=|\mathtt{i} \land \mathtt{j}|+1}^\infty 2\overline{\alpha}^{n-1} \max_{i \in I} |a_i| = \frac{2\max_{i \in I}|a_i|}{1-\overline{\alpha}}|\mathtt{i} - \mathtt{j}| \end{align*} for every $\mathtt{i},\mathtt{j} \in I^\infty$. The mapping $\pi$ is therefore continuous. We define $E = \pi(I^\infty)$ and call this set a \emph{self-affine set}. Observe that the compact set $E$ is invariant under the affine mappings $A_i + a_i$, that is, \begin{equation} \label{eq:invariance} E = \bigcup_{i=1}^\kappa (A_i+a_i)(E). \end{equation} This is an immediate consequence of the fact that \begin{equation*} \pi(i\mathtt{i}) = (A_i + a_i)\sum_{n=1}^\infty A_{\mathtt{i}|_{n-1}}a_{i_n} = (A_i + a_i)\pi(\mathtt{i}) \end{equation*} whenever $\mathtt{i} \in I^\infty$ and $i \in I$. In fact, by \cite[\S 3.1]{Hutchinson1981}, there are no other nonempty compact sets satisfying \eqref{eq:invariance} besides $E$. 
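The projection mapping can be approximated by truncating the series, since the tail is geometrically small. The sketch below (a toy IFS of our own choosing) also checks the identity $\pi(i\,\mathtt{i}) = (A_i + a_i)\pi(\mathtt{i})$, which holds exactly even for truncations:

```python
import numpy as np

def pi_truncated(word, As, ts):
    """Truncated projection pi(i) = sum_n A_{i|n-1} a_{i_n}."""
    x = np.zeros(2)
    P = np.eye(2)  # running product A_{i|n-1} (the empty product is I)
    for i in word:
        x = x + P @ ts[i]
        P = P @ As[i]
    return x

# A toy IFS: contractive, invertible linear parts and two translations.
As = [np.diag([0.5, 0.3]), np.array([[0.3, 0.1], [0.0, 0.4]])]
ts = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]

word = (1, 0, 1, 1, 0) * 8  # a long finite word approximating i in I^infty
# self-affinity: pi(i j) = (A_i + a_i)(pi(j)) for each i in I
lhs = pi_truncated((0,) + word, As, ts)
rhs = As[0] @ pi_truncated(word, As, ts) + ts[0]
```

Iterating this identity over all first letters $i \in I$ reproduces the invariance \eqref{eq:invariance} of the attractor.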
If there is no danger of misunderstanding, the image of a cylinder set \[ \pi([\mathtt{i}]) = (A_{i_1} + a_{i_1}) \cdots (A_{i_n} + a_{i_n})(E) = A_\mathtt{i}(E) + A_{\mathtt{i}|_{n-1}}a_{i_n} + \cdots + a_{i_1}, \] as $\mathtt{i} = (i_1,\ldots,i_n) \in I^n$, will also be called a cylinder set, and we will denote $E_\mathtt{i} = \pi([\mathtt{i}])$. When we want to emphasize the dependence of $E$ on the affine mappings, we will say that $E$ is the \emph{invariant set} of the affine IFS $\{ A_i + a_i\}_{i\in I}$. \section{Self-affine sets of Kakeya type} \label{sec:kakeya} In this section, we introduce self-affine sets of Kakeya type. Working in $\mathbb{R}^2$, we state that the Minkowski dimension of such a set is the zero of the topological pressure, see \eqref{eq:pressure}. Given a set $A \subset \mathbb{R}^d$, the upper and lower Minkowski dimensions are denoted by $\dimum(A)$ and $\dimlm(A)$, respectively. For the definition, see \cite[\S 5.3]{Mattila1995}. If $\dimum(A) = \dimlm(A)$, then the common value, the Minkowski dimension, is denoted by $\dimm(A)$. For $\theta \in S^{d-1} = \{ x \in \mathbb{R}^d : |x| = 1 \}$ and $0 \le \beta \le \pi$, we set \begin{equation*} X(\theta,\beta) = \{ x \in \mathbb{R}^d : \cos(\beta/2) < |\theta \cdot x| / |x|,\; x \ne 0 \}. \end{equation*} The closure of a given set $A$ is denoted by $\overline{A}$ and with the notation $\mathcal{L}^d$, we mean the Lebesgue measure on $\mathbb{R}^d$. \begin{definition} \label{def:kakeya} Suppose that for each $i \in I$ there are a contractive invertible matrix $A_i \in \mathbb{R}^{2 \times 2}$ with $||A_i|| \le \overline{\alpha}<1$ and a translation vector $a_i \in \mathbb{R}^2$. 
The collection of affine mappings $\{ A_i + a_i \}_{i \in I}$ is called an \emph{affine iterated function system of Kakeya type}, and the invariant set $E \subset \mathbb{R}^2$ of this affine IFS a \emph{self-affine set of Kakeya type}, provided that the following two conditions hold: \begin{enumerate} \renewcommand{\labelenumi}{(K\arabic{enumi})} \renewcommand{\theenumi}{K\arabic{enumi}} \item \label{K-kakeya} There exist $\theta \in S^1$ and $0 < \beta < \pi/2$ such that \begin{subequations} \renewcommand{\theequation}{\ref{K-kakeya}\alph{equation}} \begin{align} A_i\bigl( \overline{X(\theta,\beta)} \bigr) &\subset X(\theta,\beta), \label{K-coneinvariance} \\ A_i^*\bigl( \overline{X(\theta,\beta)} \bigr) &\subset X(\theta,\beta) \label{K-transpose} \end{align} whenever $i \in I$ and \begin{equation} \label{K-separation} A_i\bigl( \overline{X(\theta,\beta)}\bigr) \cap A_j\bigl( \overline{X(\theta,\beta)} \bigr) = \{0\} \end{equation} for $i \ne j$. \end{subequations} \item \label{K-projection} There exists a constant $\varrho>0$ such that \begin{equation*} \mathcal{L}^1 \bigl( \{ \theta_1(A_\mathtt{i}) \cdot x : x \in E \} \bigr) \ge \varrho \end{equation*} for all $\mathtt{i} \in I^*$. \end{enumerate} \end{definition} Let us make some remarks on these conditions. Our goal is to make the self-affine set look, at a given finite scale, roughly like a rescaled Kakeya set (except that instead of having segments in every direction, there are segments only in a Cantor set of directions). The r\^{o}le of the conditions \eqref{K-coneinvariance} and \eqref{K-separation} is to ensure that cylinder sets are aligned in different directions. Notice the analogy between these conditions and the Hypothesis 3 (``separation'') in \cite{HueterLalley1995}. The hypothesis \eqref{K-transpose} is of technical nature. We underline that \eqref{K-coneinvariance}, \eqref{K-transpose}, and \eqref{K-separation} are all stable properties. 
The projection condition \eqref{K-projection} is needed so that cylinder sets do not have too many ``holes'' and one can approximate them by neighborhoods of segments. It is the only one of the assumptions which involves the translation vectors $\{a_i\}_{i\in I}$ in addition to the linear maps $\{A_i\}_{i\in I}$. In particular, \eqref{K-projection} implies that the Hausdorff dimension of $E$ is at least one. Hence if $t$ is such that $P(t)=0$, then $t\ge 1$ by \cite[Proposition 5.1]{Falconer1988}. An analogous, but stronger, projection condition was introduced by Falconer in \cite{Falconer1992}. We remark that in that article, unlike in our case, the open set condition is also required. The projection condition is obviously satisfied if the invariant set is connected. Unfortunately, determining when a self-affine set is connected in a stable way is a very difficult problem, even when the linear parts commute, see for example \cite{ShmerkinSolomyak2006}. In \S \ref{sec:projection}, we introduce easily checkable, stable conditions which imply the projection condition. We do not need analogues of either Hypothesis 2 (``distortion'') or Hypothesis 5 (``strong separation'') used in \cite{HueterLalley1995}. In that article, Hypothesis 2 plays a crucial r\^{o}le in guaranteeing that the invariant set has dimension less than 1. By our observation that $t\ge 1$, it cannot possibly hold in our setting. In a sense, our examples are more purely self-affine, since both singular values are involved in the dimension calculations, while in \cite{HueterLalley1995} the dimension depends only on the largest one. We stress that our results are only for the Minkowski dimension; estimating the Hausdorff dimension in our setting appears to be a very difficult problem. Before stating our main result, we formulate and prove a Kakeya-type estimate which is a crucial ingredient of the proof. 
Even though it is a minor variant of \cite[Proposition 1.5]{Wolff1999}, complete details are provided for the convenience of the reader. \begin{proposition} \label{thm:kakeya_estimate} Let $R_1,\ldots,R_M \subset \mathbb{R}^2$ be rectangles of size $\alpha_1 \times \alpha_2$, with $\alpha_1 > \alpha_2$. Suppose that the angle between the long sides of any two rectangles is at least $\alpha_2/\alpha_1$. If $F \subset \mathbb{R}^2$ and $\tau > 0$ are such that $\mathcal{L}^2(F \cap R_i) \ge \tau\alpha_1\alpha_2$ for every $i \in \{ 1,\ldots,M \}$, then \begin{equation*} \mathcal{L}^2(F) \ge \frac{M\tau^2 \alpha_1\alpha_2}{2\sqrt{2}\pi \log(2\pi\alpha_1/\alpha_2)}. \end{equation*} \end{proposition} \begin{proof} Given two rectangles $R_i$ and $R_j$, let us denote the (smaller) angle between their long sides by $\varangle(R_i,R_j)$. Since $\alpha_2/\alpha_1 \le \varangle(R_i,R_j) \le \pi/2$, a simple geometric inspection yields $\alpha_2/\alpha_1 + \varangle(R_i,R_j) \le 2\varangle(R_i,R_j) \le \sqrt{2}\pi\sin\bigl( \varangle(R_i,R_j)/2 \bigr) = \sqrt{2}\pi\alpha_2/\diam(R_i \cap R_j)$ and hence \begin{equation*} \mathcal{L}^2(R_i \cap R_j) \le \alpha_2\diam(R_i \cap R_j) \le \frac{\sqrt{2}\pi\alpha_2^2}{\alpha_2/\alpha_1 + \varangle(R_i,R_j)} \end{equation*} whenever $i \ne j$. Thus we have \begin{equation} \label{eq:L2_estimate} \sum_{j=1}^M \mathcal{L}^2(R_i \cap R_j) \le \sum_{j=0}^{\lceil \frac{\pi\alpha_1}{2\alpha_2} \rceil} \frac{\sqrt{2}\pi\alpha_2^2}{\alpha_2/\alpha_1 + j\alpha_2/\alpha_1} \le 2\sqrt{2}\pi\alpha_1\alpha_2\log(2\pi\alpha_1/\alpha_2) \end{equation} whenever $i \in \{ 1,\ldots,M \}$. Here with the notation $\lceil x \rceil$, we mean the smallest integer greater than $x$. 
Since, by H\"older's inequality, \begin{align*} (M\tau\alpha_1\alpha_2)^2 &\le \biggl(\sum_{i=1}^M \mathcal{L}^2(F \cap R_i)\biggr)^2 = \biggl(\int_{\mathbb{R}^2} \chi_F \sum_{i=1}^M \chi_{R_i} d\mathcal{L}^2\biggr)^2 \\ &\le \biggl( \int_{\mathbb{R}^2} \chi_F^2 d\mathcal{L}^2 \biggr) \biggl( \int_{\mathbb{R}^2} \biggl( \sum_{i=1}^M \chi_{R_i} \biggr)^2 d\mathcal{L}^2 \biggr) \\ &= \mathcal{L}^2(F) \sum_{i=1}^M \sum_{j=1}^M \mathcal{L}^2(R_i \cap R_j), \end{align*} the claim follows by applying \eqref{eq:L2_estimate}. Here $\chi_A$ denotes the characteristic function of a given set $A$. \end{proof} We can now state the main result of this article. \begin{theorem} \label{thm:main_result} Suppose $E \subset \mathbb{R}^2$ is a self-affine set of Kakeya type and $P(t)=0$. Then \begin{equation*} \dimm(E) = t \ge 1. \end{equation*} In particular, $\dimm$ is a continuous function when restricted to the class of affine IFS's of Kakeya type. \end{theorem} Let us sketch the main idea of the proof; full details are postponed until \S \ref{sec:proof}. In order to compute the Minkowski dimension, we want to estimate the area of the set $E(\delta)$ for small $\delta>0$, where $E(\delta)$ is the $\delta$-neighborhood of $E$. To do this, we take a small $r$ and decompose $E$ as a union of cylinders $\{ E_\mathtt{i} \}$ with $\varphi^t(A_\mathtt{i})\approx r$ (where $t$ is the singularity dimension). The condition \eqref{K-projection} implies that the projection of $E_\mathtt{i}$ onto the major axis of the ellipse $A_\mathtt{i} B + a_\mathtt{i}$ (where $B$ is some large ball) has positive Lebesgue measure with a uniform lower bound. Hence, for large $K$, the $K\alpha_2(A_\mathtt{i})$-neighborhood of $E_\mathtt{i}$ intersects a rectangle $R_\mathtt{i}$, with small side comparable to $\alpha_2(A_\mathtt{i})$ and long side comparable to $\alpha_1(A_\mathtt{i})$, in a set of area comparable to $\alpha_1(A_\mathtt{i})\alpha_2(A_\mathtt{i})$.
At this point we would like to apply the Kakeya-type estimate of Proposition \ref{thm:kakeya_estimate}. However, for this we need all the rectangles to have the same size, while $\alpha_1(A_\mathtt{i})$ and $\alpha_2(A_\mathtt{i})$ may take many different values. We deal with this using Theorem \ref{thm:semiconformal}: with respect to the measure $\mu$ given by that theorem, the values of $\alpha_1(A_\mathtt{i})$ and $\alpha_2(A_\mathtt{i})$ are roughly constant for ``most'' sequences $\mathtt{i}$. More precisely, we will obtain that $\alpha_k(A_\mathtt{i}) \approx \bigl( \varphi^t(A_\mathtt{i}) \bigr)^{\gamma_k}$ for many sequences $\mathtt{i}$, where $\gamma_1+(t-1)\gamma_2=1$. Also, due to the Gibbs property of $\mu$ expressed in \eqref{eq:semiconformal}, the number of cylinders $[\mathtt{i}]$ with $\varphi^t(A_\mathtt{i})\approx r$ is comparable to $r^{-1}$. By \eqref{K-separation}, the angle between the long sides of any two rectangles $R_\mathtt{i}$ and $R_\mathtt{j}$ in the construction is sufficiently large. Hence we can apply Proposition \ref{thm:kakeya_estimate} and conclude that the union of all such rectangles has Lebesgue measure which is, up to a logarithmic factor, the same as if the union were disjoint. Therefore, letting $\delta \approx r^{\gamma_2}$, we conclude \[ \mathcal{L}^2\bigl( E(\delta) \bigr) \gtrsim r^{\gamma_1+\gamma_2-1+\varepsilon} \approx \delta^{2-t+\varepsilon}, \] where $\varepsilon > 0$ is arbitrarily small, which gives the desired lower estimate (the upper estimate is well known). The latter claim of the theorem is now an immediate consequence of the next lemma. \begin{lemma} \label{thm:continuity} Suppose that for each $i \in I$ there is a contractive invertible matrix $A_i \in \mathbb{R}^{2 \times 2}$ such that the condition \eqref{K-coneinvariance} is satisfied. Then $(A_1,\ldots,A_\kappa)$ is a continuity point for the singular value dimension.
\end{lemma} \begin{proof} After an appropriate rotation we can assume, without loss of generality, that $\theta = \frac{1}{\sqrt{2}}(1,1)$ in the condition \eqref{K-coneinvariance}. This implies that for each $i \in I$, the coefficients of $A_i$ are either all strictly positive or all strictly negative, and this property is preserved under small perturbations. Since multiplying by the scalar $-1$ does not affect the singular values of $A_{\mathtt{i}}$ for $\mathtt{i} \in I^{\ast}$, we will assume that for each $i \in I$, the matrix $A_i$ has coefficients bounded below by some $\delta > 0$. Note that, since $A_i$ is contractive, all of its coefficients are bounded above by $1$. If $M_1, M_2 \in \mathbb{R}^{2 \times 2}$ and $c \in \mathbb{R}$, by $M_1 < M_2$ we mean that the inequality holds for each coefficient, and by $c < M_1$ we will mean that all coefficients of $M_1$ are strictly greater than $c$. In the same way we define $M_1 > M_2$ and $c > M_1$. Note that if $0 < M_1 < M_2$, then $\alpha_1 (M_1) < \alpha_1 (M_2)$ by the Perron-Frobenius Theorem. Fix $0 < \varepsilon < \delta$, and suppose that for each $i \in I$ there is a matrix $B_i \in \mathbb{R}^{2 \times 2}$ such that \[ - \varepsilon < A_i - B_i < \varepsilon . \] Let $\varepsilon_1 = \varepsilon / \delta$, and note that \[ (1 - \varepsilon_1) A_i < B_i < (1 + \varepsilon_1) A_i . \] Iterating this, we get that if $\mathtt{i} \in I^n$, then \[ (1 - \varepsilon_1)^n A_{\mathtt{i}} < B_{\mathtt{i}} < (1 + \varepsilon_1)^n A_{\mathtt{i}}, \] and hence \begin{equation} \label{eq:ineqalpha} (1 - \varepsilon_1)^n \alpha_1 (A_{\mathtt{i}}) < \alpha_1 (B_{\mathtt{i}}) < (1 + \varepsilon_1)^n \alpha_1 (A_{\mathtt{i}}) . 
\end{equation} A straightforward calculation shows that, for $i \in I$, \[ | \det (A_i) | - 8 \varepsilon < | \det (B_i) | < | \det (A_i) | + 8 \varepsilon, \] whence, letting \[ \varepsilon_2 = \max_{i \in I} 8 \varepsilon | \det (A_i) |^{- 1}, \] we obtain \[ (1 - \varepsilon_2) | \det (A_i) | < | \det (B_i) | < (1 + \varepsilon_2) | \det (A_i) |. \] Recall the definition of the pressure function given in \eqref{eq:pressure}. Let $P_\mathcal{A}$ and $P_\mathcal{B}$ denote the pressures corresponding to the matrices $\{A_i \}_{i \in I}$ and $\{B_i \}_{i \in I}$, respectively. Let $t$ be such that $P_\mathcal{A}(t)=0$, and let $s$ be such that $P_\mathcal{B}(s)=0$. Our goal is to show that $s \to t$ as $\varepsilon \downarrow 0$. Let $D=\max_{i\in I} |\det(A_i)|$. Pick any $D' \in (D,1)$, and suppose $\varepsilon$ is so small that $D+8\varepsilon < D'$. If $s \ge 2$, then it is easy to see that the pressure is given by \[ P_\mathcal{B}(s) = \log\biggl(\sum_{i\in I} |\det(B_i)|^{s/2}\biggr) \le \log\kappa - \tfrac{s}{2} |\log D'|. \] Using this, we see that \[ s \le \max(2\log\kappa/|\log D'|,2) =: T. \] Since, for $M \in \mathbb{R}^{2 \times 2}$, $\alpha_2 (M) = |\det(M)| / \alpha_1 (M)$, we obtain from \eqref{eq:ineqalpha} and the multiplicativity of the determinant that, for $\mathtt{i} \in I^n$, \begin{equation} \label{eq:ineqfii} \lambda_1^n \varphi^s (A_{\mathtt{i}}) < \varphi^s (B_{\mathtt{i}}) < \lambda_2^n \varphi^s (A_{\mathtt{i}}), \end{equation} where \begin{align*} \lambda_1 &= (1 - \varepsilon_1) (1 + \varepsilon_1)^{- 1} (1 - \varepsilon_2)^{T/2},\\ \lambda_2 &= (1 + \varepsilon_1) (1 - \varepsilon_1)^{- 1} (1 + \varepsilon_2)^{T/2} . \end{align*} In order to see that \eqref{eq:ineqfii} holds, it is convenient to consider the cases $0\leq s<1$, $1\leq s < 2$, and $2\leq s \leq T$ separately. 
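As an aside, the entrywise monotonicity of $\alpha_1$ invoked above via the Perron-Frobenius Theorem can be illustrated numerically; the sketch below (Python, with arbitrary sample matrices not taken from the text) computes $\alpha_1$ of a $2\times 2$ matrix directly from the eigenvalues of $M^*M$.

```python
import math

def alpha1(M):
    """Largest singular value of a 2x2 matrix M = ((a, b), (c, d))."""
    (a, b), (c, d) = M
    t = a * a + b * b + c * c + d * d   # trace of M^T M
    det2 = (a * d - b * c) ** 2         # determinant of M^T M
    return math.sqrt((t + math.sqrt(t * t - 4 * det2)) / 2)

# If 0 < M1 < M2 entrywise, then M1^T M1 < M2^T M2 entrywise, so the
# Perron root of M^T M (and hence alpha_1) strictly increases.
M1 = ((0.30, 0.10), (0.20, 0.25))
M2 = ((0.35, 0.15), (0.25, 0.30))  # M1 + 0.05 entrywise, still contractive

print(alpha1(M1) < alpha1(M2))  # True
print(alpha1(M2) < 1.0)         # True
```

The closed form used here, $\alpha_1(M)^2 = \tfrac12\bigl(\tr(M^*M) + \sqrt{\tr(M^*M)^2 - 4\det(M)^2}\bigr)$, is simply the larger eigenvalue of $M^*M$.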
From \eqref{eq:ineqfii}, we obtain \[ P_\mathcal{A}(s) + \log (\lambda_1) \le P_\mathcal{B} (s) \le P_\mathcal{A}(s) + \log (\lambda_2), \] yielding \[ P_\mathcal{A}(s) \in [- \log (\lambda_2), - \log (\lambda_1)]. \] Since $P_\mathcal{A}$ is a continuous, strictly decreasing function, so is its inverse $P_\mathcal{A}^{-1}$. But $\lambda_1, \lambda_2 \rightarrow 1$ as $\varepsilon \rightarrow 0$, so the continuity of $P_\mathcal{A}^{-1}$ implies that $s \to t$ as $\varepsilon \downarrow 0$. This is exactly what we wanted to show. \end{proof} \section{Proof of the main result} \label{sec:proof} This section is dedicated to the proof of Theorem \ref{thm:main_result}. We first collect several lemmas which will be used in the proof. These lemmas are geometric consequences of Definition \ref{def:kakeya}. We remark that some of these lemmas are analogous to results in \cite{HueterLalley1995}. \begin{lemma} \label{thm:norm_alpha} Suppose that for each $i \in I$ there is a contractive invertible matrix $A_i \in \mathbb{R}^{2 \times 2}$ such that the conditions \eqref{K-coneinvariance} and \eqref{K-transpose} are satisfied. Then $|A_\mathtt{i} x| \ge \cos(\beta) \alpha_1(A_\mathtt{i}) |x|$ for all $x \in X(\theta,\beta)$ and all $\mathtt{i}\in I^*$. Moreover, $\alpha_1(A_{\mathtt{i}\mathtt{j}}) \ge \cos^2(\beta) \alpha_1(A_\mathtt{i}) \alpha_1(A_\mathtt{j})$ whenever $\mathtt{i},\mathtt{j} \in I^*$. \end{lemma} \begin{proof} Let $\mathtt{i} \in I^*$, $x\in X(\theta,\beta)$, and write $x = x_1 \theta_1(A_\mathtt{i}) + x_2 \theta_2(A_\mathtt{i})$. We may assume that $|x|=1$. 
Since $\theta_1(A_\mathtt{i})$ is, by definition, the eigenvector of $A_\mathtt{i}^* A_\mathtt{i}$ corresponding to the largest eigenvalue, it follows from \eqref{K-coneinvariance}, \eqref{K-transpose}, and the Perron-Frobenius Theorem that $\theta_1(A_\mathtt{i})\in X(\theta,\beta)$ and $|x_1| = x\cdot \theta_1(A_\mathtt{i}) \ge \cos(\beta)$ (note that the Perron-Frobenius Theorem is usually stated for matrices preserving the positive cone, but it holds for any cone by a change of coordinates). Therefore \begin{equation*} |A_\mathtt{i} x|^2 = | A_\mathtt{i}^* A_\mathtt{i} x \cdot x | = \alpha_1(A_\mathtt{i})^2 x_1^2 + \alpha_2(A_\mathtt{i})^2 x_2^2 \ge \alpha_1(A_\mathtt{i})^2 \cos^2(\beta) \end{equation*} giving the first claim. The second claim follows immediately since \begin{equation*} \alpha_1(A_{\mathtt{i}\mathtt{j}}) \ge |A_{\mathtt{i}}A_{\mathtt{j}}\theta| \ge \cos(\beta) \alpha_1(A_{\mathtt{i}}) |A_{\mathtt{j}}\theta| \ge \cos^2(\beta) \alpha_1(A_\mathtt{i}) \alpha_1(A_\mathtt{j}) \end{equation*} whenever $\mathtt{i},\mathtt{j} \in I^*$. \end{proof} \begin{remark} \label{P-semiconformal} Suppose that for each $i \in I$ there is a contractive invertible matrix $A_i \in \mathbb{R}^{2 \times 2}$ such that the conditions \eqref{K-coneinvariance} and \eqref{K-transpose} are satisfied. It follows immediately from Lemma \ref{thm:norm_alpha} that there exists a constant $D \ge 1$ such that for every $t\ge 0$ \begin{equation*} D^{-1}\varphi^t(A_\mathtt{i})\varphi^t(A_\mathtt{j}) \le \varphi^t(A_{\mathtt{i}\mathtt{j}}) \end{equation*} whenever $\mathtt{i},\mathtt{j} \in I^*$. In fact, $D=\cos^{-2}(\beta)$ works. \end{remark} \begin{lemma} \label{thm:angle} Suppose that for each $i \in I$ there is a contractive invertible matrix $A_i \in \mathbb{R}^{2 \times 2}$ such that the conditions \eqref{K-coneinvariance} and \eqref{K-transpose} are satisfied. 
Then \newcounter{savedenumi} \begin{enumerate} \renewcommand{\labelenumi}{(\roman{enumi})} \renewcommand{\theenumi}{(\roman{enumi})} \item \label{atmost} the angle between the vectors $A_\mathtt{i}\bigl( \theta_1(A_\mathtt{i}) \bigr)$ and $A_\mathtt{i} x$ is at most a constant times $\alpha_2(A_\mathtt{i})/\alpha_1(A_\mathtt{i})$ for every $\mathtt{i} \in I^*$ and $x\in X(\theta,\beta)$. \setcounter{savedenumi}{\value{enumi}} \end{enumerate} If in addition the condition \eqref{K-separation} is satisfied, then \begin{enumerate} \setcounter{enumi}{\value{savedenumi}} \renewcommand{\labelenumi}{(\roman{enumi})} \renewcommand{\theenumi}{(\roman{enumi})} \item \label{atleast} the angle between the vectors $A_\mathtt{i} x$ and $A_\mathtt{j} y$ is at least a constant times $\alpha_2(A_{\mathtt{i} \land \mathtt{j}})/\alpha_1(A_{\mathtt{i} \land \mathtt{j}})$ for every $\mathtt{i},\mathtt{j} \in I^*$ and $x,y \in X(\theta,\beta)$. \end{enumerate} \end{lemma} \begin{proof} We first prove \ref{atmost}. Fix $\mathtt{i}\in I^*$. Let $x \in S^1 \cap X(\theta,\beta)$ and denote by $\gamma$ the (smaller) angle between $A_\mathtt{i} x$ and the major axis of the ellipse $A_\mathtt{i}\bigl( B(0,1) \bigr)$, that is, the vector $A_\mathtt{i}\bigl( \theta_1(A_\mathtt{i}) \bigr)$. Since, by Lemma \ref{thm:norm_alpha}, we have $|A_\mathtt{i} x| \ge \cos(\beta)\alpha_1(A_\mathtt{i})$, it follows that $|\sin(\gamma)| \le \alpha_2(A_\mathtt{i})/\bigl(\cos(\beta)\alpha_1(A_\mathtt{i})\bigr)$. We conclude \begin{equation*} |\gamma| \le \tfrac{\pi}{2}|\sin(\gamma)| \le \tfrac{\pi}{2\cos(\beta)} \alpha_2(A_\mathtt{i})/\alpha_1(A_\mathtt{i}). \end{equation*} Next we show \ref{atleast}. Write $\mathtt{i}=\mathtt{k}\mathtt{i}'$ and $\mathtt{j}=\mathtt{k}\mathtt{j}'$, where $\mathtt{k}=\mathtt{i}\wedge\mathtt{j}$, and notice that $\mathtt{i}'$ and $\mathtt{j}'$ start with different symbols. 
Therefore it follows from \eqref{K-separation} that there exists a constant $c>0$ (independent of $\mathtt{i}$ and $\mathtt{j}$) such that the angle between $A_{\mathtt{i}'}x$ and $A_{\mathtt{j}'}y$ is at least $c$ for any $x,y \in X(\theta,\beta)$. Hence it will be enough to prove the following claim: Given $c_1>0$ there is $c_2>0$ such that if $x,y \in S^1 \cap X(\theta,\beta)$ and $|x-y|\ge c_1$, then the angle between $A_\mathtt{k} x$ and $A_\mathtt{k} y$ is at least $c_2 \alpha_2(A_\mathtt{k})/\alpha_1(A_\mathtt{k})$ for all $\mathtt{k}\in I^*$. To prove the claim consider the triangle with vertices $0, A_\mathtt{k} x, A_\mathtt{k} y$. Denote the angle at $0$ by $\gamma$. By Lemma \ref{thm:norm_alpha}, the sides containing $0$ have lengths between $\cos(\beta) \alpha_1(A_\mathtt{k})$ and $\alpha_1(A_\mathtt{k})$, while by the assumption, the length of the third side is at least $c_1 \alpha_2(A_\mathtt{k})$. We compute the area of the triangle in two ways. On the one hand, it is $|A_\mathtt{k} x||A_\mathtt{k} y|\sin(\gamma)/2 \le \alpha_1(A_\mathtt{k})^2 \sin(\gamma)/2$. On the other hand, since one of the other two angles of the triangle must be at least $\pi/6$ (otherwise $\gamma > 2\pi/3$ and there is nothing to prove), the area of the triangle is at least $\cos(\beta) c_1 \alpha_1(A_\mathtt{k}) \alpha_2(A_\mathtt{k})\sin(\pi/6)/2$. Comparing these two estimates yields the claim. The proof is complete. \end{proof} In \cite[\S 3]{HueterLalley1995}, it is claimed that \eqref{K-coneinvariance} implies that the matrices $A_i$ are strict contractions acting on the space of lines through the origin with positive slope, where the metric is the smaller angle between them. This assertion is wrong, as the following example shows: let \[ A = \left(% \begin{array}{cc} 1 & \varepsilon \\ \varepsilon & \varepsilon \\ \end{array}% \right). \] Let $\ell$ be the line through the origin and $(\varepsilon,1)$ and let $\ell'$ be the line through the origin and $(2\varepsilon,1)$.
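The failure of contractivity for this example can be tested numerically; the Python sketch below (with an arbitrary small value of $\varepsilon$) compares the angle between $\ell$ and $\ell'$ with the angle between their images under $A$.

```python
import math

def angle_between(u, v):
    """Smaller angle between the lines spanned by u and v."""
    dot = abs(u[0] * v[0] + u[1] * v[1])
    return math.acos(min(1.0, dot / (math.hypot(*u) * math.hypot(*v))))

def image(A, v):
    return (A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1])

eps = 1e-4
A = ((1.0, eps), (eps, eps))
l1, l2 = (eps, 1.0), (2 * eps, 1.0)  # direction vectors of ell and ell'

before = angle_between(l1, l2)                     # ~ eps
after = angle_between(image(A, l1), image(A, l2))  # ~ arctan(1/2) - arctan(1/3)
print(after / before)  # ~ 0.14 / eps, i.e. of order 1/eps
```

For $\varepsilon = 10^{-4}$ the original angle is $\approx \varepsilon$ while the angle between the images stays near $\arctan(1/2)-\arctan(1/3)\approx 0.142$, so the expansion factor is of order $\varepsilon^{-1}$.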
Then a simple calculation shows that the angle between the lines $A\ell$ and $A\ell'$ is of the order of $\varepsilon^{-1}$ times the angle between $\ell$ and $\ell'$ as $\varepsilon\downarrow 0$. However, the next lemma, and in particular \eqref{eq:repeated_coneinv}, shows that \cite[Proposition 3.1]{HueterLalley1995} is still correct. \begin{lemma} \label{thm:eta_alpha} Suppose that for each $i \in I$ there is a contractive invertible matrix $A_i \in \mathbb{R}^{2 \times 2}$ such that the condition \eqref{K-coneinvariance} is satisfied. Then there exist constants $C \ge 1$ and $0 < \eta < 1$ such that \begin{equation*} \alpha_2(A_\mathtt{i}) \le C \eta^{|\mathtt{i}|}\alpha_1(A_\mathtt{i}) \end{equation*} whenever $\mathtt{i} \in I^*$. \end{lemma} \begin{proof} Let us first show that there exists $C_0 \ge 1$ and $0 < \eta < 1$ such that \begin{equation} \label{eq:repeated_coneinv} A_\mathtt{i}\bigl( X(\theta,\beta) \bigr) \subset X(A_\mathtt{i}\theta/|A_\mathtt{i}\theta|, C_0 \eta^{|\mathtt{i}|}\beta) \end{equation} whenever $\mathtt{i} \in I^*$. Denote the space of all lines through the origin which are contained in $\overline{X(\theta,\beta)}$ by $\mathcal{P}(\theta,\beta)$. The smaller angle between any two lines $\ell_1,\ell_2$ will be denoted by $\varangle(\ell_1,\ell_2)$. Since the maps $A_i$ are not necessarily contractions with respect to the metric $\varangle$, we will make use of a different, but equivalent, metric. This metric is used in some proofs of the Perron-Frobenius Theorem, see for example \cite[Lemma 3.4]{PollicottYuri1998}. Let $\ell_0$ be a line through the origin which is not contained in $X(\theta,\beta)$, and such that $\varangle(\ell_0,\ell)<\pi/2$ for all $\ell\in\mathcal{P}(\theta,\beta)$. 
Define $d \colon \mathcal{P}(\theta,\beta)^2\rightarrow\mathbb{R}$ by setting \[ d(\ell_1,\ell_2) = \bigl|\log\tan\bigl( \varangle(\ell_0,\ell_1) \bigr) - \log\tan\bigl( \varangle(\ell_0,\ell_2) \bigr)\bigr| \] for $\ell_1,\ell_2 \in \mathcal{P}(\theta,\beta)$. It is easy to verify that $d$ is indeed a metric and, moreover, there is a constant $C_0 \ge 1$ such that \begin{equation} \label{eq:equivalentmetrics} C_0^{-1/2} \varangle(\ell_1,\ell_2) \le d(\ell_1,\ell_2) \le C_0^{1/2} \varangle(\ell_1,\ell_2), \end{equation} for all $\ell_1,\ell_2\in\mathcal{P}(\theta,\beta)$. This is true since $\log\tan$ has a bounded derivative on a compact subset of $(0,\pi/2)$. We claim that the maps $A_i$ acting on $\mathcal{P}(\theta,\beta)$ are uniformly contractive with respect to $d$. To prove this, we may fix $i \in I$ and assume that \[ A_i = \left(% \begin{array}{cc} a & b \\ c & d \\ \end{array}% \right). \] Moreover, after an appropriate rotation we can assume that $\ell_0$ is the $x$-axis, and all elements of $\mathcal{P}(\theta,\beta)$ have positive slope. Hence $a,b,c,d$ are nonzero and have the same sign. We will denote the slope of $\ell\in\mathcal{P}(\theta,\beta)$ by $s(\ell)$. After this normalization, we have \[ d(A_i\ell_1,A_i\ell_2) = |\log\bigl(s(A_i\ell_1)\bigr) - \log\bigl(s(A_i\ell_2)\bigr)|, \] where \[ s(A_i\ell) = \frac{c+ds(\ell)}{a+bs(\ell)} \] for any $\ell \in \mathcal{P}(\theta,\beta)$. In order to verify the claim, it suffices to show that the derivative of the function $g \colon \mathbb{R} \to \mathbb{R}$, $g(s) = \log\frac{c+de^s}{a+be^s}$, is strictly less than $1$ in absolute value. It is straightforward to see that \[ |g'(s)| = \frac{|ad-bc|e^s}{(a+be^s)(c+de^s)} \] attains its maximum value at $s_0 = \tfrac12 \log\tfrac{ac}{bd}$. Some elementary algebra shows that \begin{equation*} |g'(s_0)| = \frac{|ad-bc|}{ad+bc+2\sqrt{abcd}} < 1, \end{equation*} which is exactly what we wanted.
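The location and value of the maximum of $|g'|$ can be sanity-checked numerically; the Python sketch below (with arbitrary positive coefficients $a,b,c,d$, not taken from the text) compares a grid maximum with the closed form, which is strictly less than $1$.

```python
import math

# g(s) = log((c + d*e^s)/(a + b*e^s)); |g'| should peak at
# s0 = (1/2)*log(ac/(bd)) with value |ad - bc|/(ad + bc + 2*sqrt(abcd)) < 1.
a, b, c, d = 0.30, 0.20, 0.25, 0.15  # arbitrary positive coefficients

def g_prime_abs(s):
    e = math.exp(s)
    return abs(a * d - b * c) * e / ((a + b * e) * (c + d * e))

closed_form = abs(a * d - b * c) / (a * d + b * c + 2 * math.sqrt(a * b * c * d))
grid_max = max(g_prime_abs(-10 + 20 * k / 100000) for k in range(100001))

print(abs(grid_max - closed_form) < 1e-6)  # True: the maximum is at s0
print(closed_form < 1)                     # True: a strict contraction
```

Note that $ad+bc+2\sqrt{abcd} = (\sqrt{ad}+\sqrt{bc})^2$, while $|ad-bc| = |\sqrt{ad}-\sqrt{bc}|(\sqrt{ad}+\sqrt{bc})$, which makes the strict inequality transparent.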
Using the claim and \eqref{eq:equivalentmetrics}, we see that there exists $0<\eta<1$ such that \[ \varangle(A_\mathtt{i} \ell_1, A_\mathtt{i} \ell_2) < C_0 \eta^{|\mathtt{i}|} \varangle(\ell_1,\ell_2), \] for any $\mathtt{i}\in I^*$ and $\ell_1,\ell_2\in\mathcal{P}(\theta,\beta)$. Taking $\ell_1,\ell_2$ as the two lines which make up the boundary of $X(\theta,\beta)$, the assertion \eqref{eq:repeated_coneinv} follows. To finally prove the lemma, notice that for each $\mathtt{i} \in I^*$, we have \begin{align*} \mathcal{L}^2\bigl( A_\mathtt{i} \bigl( B(0,1) \cap X(\theta,\beta) \bigr) \bigr) &= \mathcal{L}^2\bigl( B(0,1) \cap X(\theta,\beta) \bigr) |\det(A_\mathtt{i})| \\ &= \beta \alpha_1(A_\mathtt{i}) \alpha_2(A_\mathtt{i}). \end{align*} On the other hand, using \eqref{eq:repeated_coneinv}, we have \begin{align*} \mathcal{L}^2\bigl( A_\mathtt{i} \bigl( B(0,1) \cap X(\theta,\beta) \bigr) \bigr) &\le \mathcal{L}^2\bigl( B(0,\alpha_1(A_\mathtt{i})) \cap X(A_\mathtt{i}\theta/|A_\mathtt{i}\theta|,C_0\eta^{|\mathtt{i}|}\beta) \bigr) \\ &= C \eta^{|\mathtt{i}|}\beta \alpha_1(A_\mathtt{i})^2 \end{align*} for some constant $C \ge 1$. Comparing the last two displayed formulas yields the result. \end{proof} Now we are ready to prove the main theorem. \begin{proof}[Proof of Theorem \ref{thm:main_result}] The upper bound $\dimum(E) \le t$ holds in general; see, for example, \cite{DouadyOesterle1980} and \cite{Falconer1988}. Since \eqref{K-projection} implies $\dimh(E) \ge 1$, it is enough to prove that $\dimlm(E) \ge t$. The continuity assertion will then follow from Lemma \ref{thm:continuity}. Recalling Remark \ref{P-semiconformal}, let $\mu$, $1 > \lambda_1(\mu) \ge \lambda_2(\mu) > 0$, and $c,D \ge 1$ be as in Theorem \ref{thm:semiconformal}. Fix $0 < \varepsilon \le \tfrac12 c^{-2}D^{-1}\underline{\alpha} \le \tfrac12$.
Using Egorov's Theorem, we find an integer $n_0$ and a compact set $K \subset I^\infty$ so that $\mu(I^\infty \setminus K) < \varepsilon$ and \begin{equation*} \lambda_k(\mu)^{n(1+\varepsilon)} \le \alpha_k(A_{\mathtt{i}|_n}) \le \lambda_k(\mu)^{n(1-\varepsilon)} \end{equation*} whenever $\mathtt{i} \in K$, $k \in \{ 1,2 \}$, and $n \ge n_0$. Denoting \begin{equation*} \gamma_k = \frac{\log\lambda_k(\mu)}{\log\bigl(\lambda_1(\mu)\lambda_2(\mu)^{t-1}\bigr)} \end{equation*} for $k \in \{ 1,2 \}$, we notice that $\gamma_1 + (t-1)\gamma_2 = 1$ and \begin{equation} \label{eq:alpha_approx} \varphi^t(A_{\mathtt{i}|_n})^{\gamma_k(1+\varepsilon)/(1-\varepsilon)} \le \alpha_k(A_{\mathtt{i}|_n}) \le \varphi^t(A_{\mathtt{i}|_n})^{\gamma_k(1-\varepsilon)/(1+\varepsilon)} \end{equation} whenever $\mathtt{i} \in K$, $k \in \{ 1,2 \}$, and $n \ge n_0$. Since $\varepsilon$ can be arbitrarily small, \eqref{eq:alpha_approx} together with Lemma \ref{thm:eta_alpha} implies that $\gamma_1 < \gamma_2$. For $r > 0$ define \begin{equation*} Z(r) = \{ \mathtt{i} \in I^* : \varphi^t(A_\mathtt{i}) \le r < \varphi^t(A_{\mathtt{i}^-}) \}, \end{equation*} and notice that the set $Z(r)$ is incomparable for every $r>0$. Denote also $Z_K(r) = \{ \mathtt{i} \in Z(r) : [\mathtt{i}] \cap K \ne \emptyset \}$. Since \begin{align*} (cD)^{-1}\underline{\alpha}\sum_{\mathtt{i} \in Z(r) \setminus Z_K(r)} r &\le c^{-1}\sum_{\mathtt{i} \in Z(r) \setminus Z_K(r)} \varphi^t(A_\mathtt{i}) \\ &\le \sum_{\mathtt{i} \in Z(r) \setminus Z_K(r)} \mu([\mathtt{i}]) \le \mu(I^\infty \setminus K) < \varepsilon, \end{align*} and, similarly, \begin{equation*} \sum_{\mathtt{i} \in Z(r)} r \ge \sum_{\mathtt{i} \in Z(r)} \varphi^t(A_\mathtt{i}) \ge c^{-1}, \end{equation*} it follows that \begin{equation} \label{eq:zetacardinality} \# Z_K(r) \ge (c^{-1} - \varepsilon cD\underline{\alpha}^{-1})r^{-1} \ge \tfrac12 c^{-1} r^{-1}.
\end{equation} Hence, choosing $r>0$ small enough so that $|\mathtt{i}| \ge n_0$ for every $\mathtt{i} \in Z(r)$ and denoting $\xi = \min_{k \in \{ 1,2 \}} (D^{-1}\underline{\alpha})^{3\gamma_k}$, it follows from \eqref{eq:alpha_approx} that \begin{equation} \label{eq:alpha_approx2} \xi r^{\gamma_k(1+4\varepsilon)} \le \alpha_k(A_\mathtt{i}) \le r^{\gamma_k(1-2\varepsilon)} \end{equation} whenever $\mathtt{i} \in Z_K(r)$ and $k \in \{ 1,2 \}$. Fix $\mathtt{i} \in I^*$. Let $v_\mathtt{i}$ be the unit vector with direction equal to the major axis of the ellipse $A_\mathtt{i}\bigl(B(0,1)\bigr)$. Explicitly, $v_\mathtt{i} = A_\mathtt{i}\bigl( \theta_1(A_\mathtt{i}) \bigr)/\alpha_1(A_\mathtt{i})$. Since \[ v_\mathtt{i} \cdot A_\mathtt{i} x = A_\mathtt{i}^* v_\mathtt{i} \cdot x = \alpha_1(A_\mathtt{i}) \theta_1(A_\mathtt{i})\cdot x, \] for each $x \in E$, it follows from \eqref{K-projection} that $\mathcal{L}^1\bigl( \{ v_\mathtt{i} \cdot x : x \in E_\mathtt{i} \} \bigr) \ge \varrho\alpha_1(A_\mathtt{i})$. Hence there exists a constant $T \ge 1$ so that for each $\mathtt{i} \in I^*$ there is a rectangle $R_\mathtt{i}$ of size $\alpha_1(A_\mathtt{i})\times\alpha_2(A_\mathtt{i})$ with long side parallel to $A_\mathtt{i}\bigl( \theta_1(A_\mathtt{i}) \bigr)$ such that the $T\alpha_2(A_\mathtt{i})$-neighborhood of $E_\mathtt{i}$ intersects $R_\mathtt{i}$ in a set of $\mathcal{L}^2$-measure at least $\varrho\alpha_1(A_\mathtt{i})\alpha_2(A_\mathtt{i})$. 
Using Lemma \ref{thm:angle}\ref{atleast} and (\ref{eq:alpha_approx}), we get that there exists a constant $0<\omega'<1$ such that if $\mathtt{i},\mathtt{j}\in Z_K(r)$, $\mathtt{i}\neq\mathtt{j}$, and $|\mathtt{i}\land\mathtt{j}|\ge n_0$, then the angle between the long sides of the rectangles $R_\mathtt{i}$ and $R_\mathtt{j}$, denoted by $\varangle(R_\mathtt{i},R_\mathtt{j})$, is at least \begin{equation*} \omega' \alpha_2(A_{\mathtt{i} \land \mathtt{j}})/\alpha_1(A_{\mathtt{i} \land \mathtt{j}}) \ge \frac{\omega' \varphi^t(A_{\mathtt{i} \land \mathtt{j}})^{\gamma_2(1+\varepsilon)/(1-\varepsilon)}}{\varphi^t(A_{\mathtt{i} \land \mathtt{j}})^{\gamma_1(1-\varepsilon)/(1+\varepsilon)}} \ge \omega' r^{\gamma_2(1+4\varepsilon)-\gamma_1(1-2\varepsilon)}. \end{equation*} If $|\mathtt{i}\land\mathtt{j}|<n_0$ then, using Lemma \ref{thm:angle}\ref{atleast} again, \[ \varangle(R_\mathtt{i},R_\mathtt{j}) \ge \omega' \alpha_2(A_{\mathtt{i} \land \mathtt{j}})/\alpha_1(A_{\mathtt{i} \land \mathtt{j}}) \ge \omega' \underline{\alpha}^{n_0}/\overline{\alpha}^{n_0}. \] Thus, in either case, if $\mathtt{i},\mathtt{j}\in Z_K(r)$, $\mathtt{i}\neq\mathtt{j}$, then \begin{equation} \label{eq:anglelowerbound} \varangle(R_\mathtt{i},R_\mathtt{j}) \ge \omega r^{\gamma_2(1+4\varepsilon)-\gamma_1(1-2\varepsilon)}, \end{equation} where $\omega = \omega' \underline{\alpha}^{n_0}/\overline{\alpha}^{n_0} < 1$. In order to apply Proposition \ref{thm:kakeya_estimate}, all the rectangles must have the same size. Let \[ \alpha_1' = r^{\gamma_1(1-2\varepsilon)} \frac{r^{\gamma_2(1-2\varepsilon)}}{\omega r^{\gamma_2(1+4\varepsilon)}} \ge r^{\gamma_1(1-2\varepsilon)} \] and let also $\alpha_2' = r^{\gamma_2(1-2\varepsilon)}$ (bear in mind that both $\alpha_1'$ and $\alpha_2'$ depend on $r$). 
It follows from \eqref{eq:alpha_approx2} that each rectangle $R_\mathtt{i}$, with $\mathtt{i}\in Z_K(r)$, is contained in a rectangle $R_\mathtt{i}'$ of size $\alpha_1' \times \alpha_2'$ with long side still parallel to $A_\mathtt{i}\bigl( \theta_1(A_\mathtt{i}) \bigr)$. Moreover, by \eqref{eq:anglelowerbound}, the angle between any two such rectangles is at least $\alpha_2'/\alpha_1'$. Let $\delta = T r^{\gamma_2 (1-2\varepsilon)}$. We write $E(\delta)$ for the $\delta$-neighborhood of $E$. Using \eqref{eq:alpha_approx2} once again, notice that, whenever $\mathtt{i}\in Z_K(r)$, $E(\delta)$ contains a $T\alpha_2(A_\mathtt{i})$-neighborhood of $E_\mathtt{i}\subset E$. Hence $E(\delta)$ intersects each rectangle $R_\mathtt{i}$, and therefore also each rectangle $R_\mathtt{i}'$, in a set of $\mathcal{L}^2$-measure at least \[ \varrho\alpha_1(A_\mathtt{i})\alpha_2(A_\mathtt{i}) \ge \varrho\xi r^{\gamma_1(1+4\varepsilon)}\xi r^{\gamma_2(1+4\varepsilon)} = \tau \alpha_1'\alpha_2', \] where \begin{equation*} \tau = \varrho\xi^2 r^{(\gamma_1+\gamma_2){(1+4\varepsilon)}} \frac{\omega r^{\gamma_2(1+4\varepsilon)}}{r^{\gamma_1(1-2\varepsilon)} r^{\gamma_2(1-2\varepsilon)}r^{\gamma_2(1-2\varepsilon)}} = \varrho \xi^2 \omega r^{6\varepsilon(\gamma_1+2\gamma_2)}. 
\end{equation*} We can now apply Proposition \ref{thm:kakeya_estimate} to the set $E(\delta)$ and the family $\{ R_\mathtt{i}' : \mathtt{i} \in Z_K(r) \}$ to obtain, for every $r>0$ small enough, that \begin{align*} \mathcal{L}^2\bigl(E(\delta)\bigr) &\ge \frac{\# Z_K(r)\tau (\tau\alpha_1'\alpha_2')} {2\sqrt{2}\pi\log(2\pi\alpha_1'/\alpha_2')} \\ &\ge \frac{\left(\tfrac12 c^{-1}r^{-1}\right)\left(\varrho \xi^2 \omega r^{6\varepsilon(\gamma_1+2\gamma_2)}\right) \left(\varrho\xi^2r^{(\gamma_1+\gamma_2)(1+4\varepsilon)}\right)} {2\sqrt{2}\pi\log(2\pi\omega^{-1} r^{\gamma_1(1-2\varepsilon)-\gamma_2(1+4\varepsilon)})} \\ &= \frac{(4\sqrt{2}\pi c)^{-1}\varrho^2\xi^4 \omega r^{\gamma_1+\gamma_2-1+\varepsilon(10\gamma_1+16\gamma_2)}} {\log(2\pi\omega^{-1} r^{\gamma_1-\gamma_2-\varepsilon(2\gamma_1+4\gamma_2)}) }, \end{align*} where in the second displayed line we used \eqref{eq:zetacardinality}. Recalling the definition of $\delta$, we estimate \begin{align*} \dimlm(E) &= \liminf_{\delta \downarrow 0} \biggl( 2 - \frac{\log\mathcal{L}^2\bigl(E(\delta)\bigr)}{\log\delta} \biggr) \\ &\ge 2 - \limsup_{r \downarrow 0} \biggl( \frac{\log\bigl((4\sqrt{2}\pi c)^{-1}\varrho^2\xi^4\omega r^{\gamma_1+\gamma_2-1+\varepsilon(10\gamma_1+16\gamma_2)}\bigr)} {\log(T r^{\gamma_2 (1-2\varepsilon)})} \\ &\qquad\qquad\quad\;\;\, - \frac{\log\log(2\pi\omega^{-1} r^{\gamma_1-\gamma_2-\varepsilon(2\gamma_1+4\gamma_2)}) } {\log(T r^{\gamma_2 (1-2\varepsilon)})} \biggr) \\ &= 2 - \frac{\gamma_1+\gamma_2-1+\varepsilon(10\gamma_1+16\gamma_2)} {\gamma_2(1-2\varepsilon)}, \end{align*} provided that $\gamma_1-\gamma_2-\varepsilon(2\gamma_1+4\gamma_2) < 0$. By our earlier remark that $\gamma_1 < \gamma_2$, this can be achieved by starting with a very small $\varepsilon>0$. Since $\gamma_1-1 = (1-t)\gamma_2$, we conclude, by letting $\varepsilon\downarrow 0$, that \[ \dimlm(E) \ge 2 - \frac{\gamma_1+\gamma_2-1}{\gamma_2} = t, \] as desired.
\end{proof} \section{On the projection condition} \label{sec:projection} Of all the conditions in the definition of a self-affine set of Kakeya type, the projection condition \eqref{K-projection} is the only one which cannot be checked directly. In this section we prove easily verifiable criteria which will be used to produce examples where \eqref{K-projection} holds. We introduce some notation. Given a set $F\subset\mathbb{R}^d$ and $e\in\mathbb{R}^d$, we will denote \[ F \cdot e = \{ x\cdot e: x\in F\}. \] The convex hull of $F$ will be denoted by $\conv(F)$. Recall that a matrix $M\in\mathbb{R}^{\kappa\times\kappa}$ with nonnegative coefficients is \emph{irreducible} if for all $1\le i,j\le\kappa$ there is $n>0$ such that $M^n_{ij}>0$. Finally, the identity matrix on $\mathbb{R}^{2\times 2}$ will be denoted by $\textrm{Id}_2$. We state two simple lemmas for later reference. \begin{lemma} \label{th:intersectionisinterval} If $\{ \mathcal{I}_\mathtt{i} : \mathtt{i} \in I^* \}$ is a collection of closed intervals such that for any $\mathtt{i} \in I^*$ \[ \bigcup_{i \in I} \mathcal{I}_{\mathtt{i} i} = \mathcal{I}_\mathtt{i}, \] then \[ \bigcap_{k=0}^\infty \bigcup_{\mathtt{i} \in I^k} \mathcal{I}_\mathtt{i} = \mathcal{I}_\varnothing. \] \end{lemma} \begin{proof} Immediate by induction. \end{proof} \begin{lemma} \label{th:unionofintervals} Suppose $\mathcal{I}_1,\ldots, \mathcal{I}_\kappa$ are closed intervals. If the adjacency matrix $M \in \mathbb{R}^{\kappa \times \kappa}$ defined as \begin{equation*} M_{ij} = \begin{cases} 1, &\text{if } \mathcal{I}_i \cap \mathcal{I}_j \ne \emptyset,\\ 0, &\text{otherwise,} \end{cases} \end{equation*} is irreducible, then $\bigcup_{i=1}^\kappa \mathcal{I}_i$ is an interval. \end{lemma} \begin{proof} Left to the reader. \end{proof} The following proposition, which may be of independent interest, provides a simple criterion to guarantee that all the projections of a self-affine set are intervals. 
Even though our application will be in $\mathbb{R}^2$, we state the result for affine IFS's on $\mathbb{R}^d$ since the proof is the same. \begin{proposition} \label{thm:convexprojectioncondition} Suppose that for each $i\in I$ there are a contractive invertible matrix $A_i \in \mathbb{R}^{d \times d}$ with $||A_i|| \le \overline{\alpha}<1$ and a translation vector $a_i\in\mathbb{R}^d$. Assume the adjacency matrix $M \in \mathbb{R}^{\kappa \times \kappa}$ defined as \begin{equation*} M_{ij} = \begin{cases} 1, &\text{if } \conv(E_i) \cap \conv(E_j) \neq \emptyset,\\ 0, &\text{otherwise}, \end{cases} \end{equation*} is irreducible. Then $E \cdot e = \conv(E) \cdot e$ for all $e \in \mathbb{R}^d$ and, in particular, $E \cdot e$ is an interval or a single point. \end{proposition} \begin{proof} We will repeatedly use the fact that the action of taking convex hulls commutes with affine maps. As a first instance of this, observe that for any $\mathtt{i} \in I^*$, \begin{equation} \label{eq:affineconvex} A_\mathtt{i} \bigl( \conv(E) \bigr) + a_\mathtt{i} = \conv(E_\mathtt{i}), \end{equation} where \begin{equation} \label{eq:a_i} a_\mathtt{i} = \sum_{n=1}^{|\mathtt{i}|} A_{\mathtt{i}|_{n-1}}a_{i_n}. \end{equation} Let $D$ denote the Hausdorff distance. Notice that \eqref{eq:affineconvex} implies \[ D\bigl(\conv(E_\mathtt{i}), E_\mathtt{i}\bigr) \le \alpha_1(A_\mathtt{i}) D\bigl(\conv(E), E\bigr). \] Therefore \[ \lim_{k\rightarrow\infty} D\biggl(\bigcup_{\mathtt{i} \in I^k} \conv(E_\mathtt{i}), E\biggr) = 0, \] which in turn yields that \[ E \cdot e = \bigcap_{k=1}^\infty \bigcup_{\mathtt{i} \in I^k} \conv(E_\mathtt{i}) \cdot e. \] Hence in order to prove the proposition it is enough to show that the family $\{ \conv(E_\mathtt{i}) \cdot e:\mathtt{i}\in I^*\}$ satisfies the hypothesis of Lemma \ref{th:intersectionisinterval} for all $e \in \mathbb{R}^d$. We will do so by induction on $|\mathtt{i}|$.
Denote $\mathcal{I}_\mathtt{i} = \conv(E_\mathtt{i}) \cdot e$ as $\mathtt{i} \in I^*$, and note that $\mathcal{I}_i \cap \mathcal{I}_j \neq \emptyset$ whenever $\conv(E_i) \cap \conv(E_j) \neq \emptyset$. Since the matrix $M$ was assumed to be irreducible, the hypothesis of Lemma \ref{th:unionofintervals} is met, whence $\mathcal{J}_\varnothing := \bigcup_{i\in I} \mathcal{I}_i$ is an interval, and thus equal to its convex hull. On the other hand, since \[ E \cdot e \subset \mathcal{J}_\varnothing \subset \conv(E) \cdot e = \conv(E \cdot e), \] we have $\conv(\mathcal{J}_\varnothing)= \conv(E) \cdot e$. Hence $\mathcal{J}_\varnothing = \conv(E) \cdot e$, and this settles the case $|\mathtt{i}|=0$. Now assume the case $|\mathtt{i}|=k$ has been proven, and let $\mathtt{i}$ be a symbol of length $k+1$. Write $\mathcal{J}_\mathtt{i} = \bigcup_{i \in I} \conv(E_{\mathtt{i} i}) \cdot e$ and $\mathtt{i} = j\mathtt{j}$, where $j\in I$ and $|\mathtt{j}|=k$. Then \begin{align*} \mathcal{J}_\mathtt{i} &= \bigcup_{i \in I} \bigl( A_j \bigl( \conv(E_{\mathtt{j} i}) \bigr) + a_j \bigr) \cdot e = A_j \biggl( \bigcup_{i \in I} \conv(E_{\mathtt{j} i}) \biggr) \cdot e + a_j \cdot e \\ &= \biggl( \bigcup_{i \in I} \conv(E_{\mathtt{j} i}) \biggr) \cdot A_j^* e + a_j \cdot e. \end{align*} By the inductive hypothesis, this is an interval. On the other hand, $\mathcal{J}_\mathtt{i}$ contains $E_\mathtt{i} \cdot e$ and is contained in $\conv(E_\mathtt{i}) \cdot e$, whence its convex hull must be $\conv(E_\mathtt{i}) \cdot e$. This shows that $\mathcal{J}_\mathtt{i} = \mathcal{I}_\mathtt{i}$, which is what we wanted to prove. \end{proof} Proposition \ref{thm:convexprojectioncondition} is useful because one can check whether it holds by simply plotting the self-affine set $E$, say using a computer program. It also yields a very simple algebraic criterion which guarantees that all linear projections are stably intervals, as the next corollary shows. 
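The computer check mentioned above can be sketched in code. The following is an illustrative Python script, not part of the paper's argument: the function names, the chaos-game sampling, and the grid of test directions are our own choices. A sampled direction whose projected ranges are disjoint certifies that two convex hulls are disjoint, while the absence of a separating direction among those sampled only suggests that they meet, so the entries $M_{ij}=1$ are heuristic.

```python
import numpy as np

def adjacency_matrix(As, ts, n_pts=20000, n_dirs=360, seed=0):
    """Estimate the adjacency matrix M of the proposition:
    M[i, j] = 1 iff conv(E_i) and conv(E_j) appear to intersect.

    E is approximated by the chaos game; conv(E_i) and conv(E_j) are
    declared disjoint iff some sampled direction separates their
    projections (a separating direction proves disjointness exactly,
    so only the entries M[i, j] = 1 are approximate)."""
    rng = np.random.default_rng(seed)
    kappa = len(As)
    x = np.zeros(2)
    for i in rng.integers(0, kappa, size=100):  # burn-in onto the attractor
        x = As[i] @ x + ts[i]
    idx = rng.integers(0, kappa, size=n_pts)
    pts = np.empty((n_pts, 2))
    for n, i in enumerate(idx):
        x = As[i] @ x + ts[i]   # after applying map i, the point lies in E_i
        pts[n] = x
    angles = np.linspace(0, np.pi, n_dirs, endpoint=False)
    dirs = np.column_stack([np.cos(angles), np.sin(angles)])
    proj = pts @ dirs.T         # projections of every point in every direction
    M = np.ones((kappa, kappa), dtype=int)
    for i in range(kappa):
        for j in range(i + 1, kappa):
            pi, pj = proj[idx == i], proj[idx == j]
            # A separating sampled direction -> the hulls are disjoint.
            sep = np.any((pi.max(0) < pj.min(0)) | (pj.max(0) < pi.min(0)))
            M[i, j] = M[j, i] = 0 if sep else 1
    return M

def is_irreducible(M):
    # A nonnegative k x k matrix is irreducible iff (I + M)^(k-1) > 0 entrywise.
    k = M.shape[0]
    P = np.linalg.matrix_power(np.eye(k, dtype=int) + M, k - 1)
    return bool(np.all(P > 0))
```

For instance, feeding in the matrices of Example \ref{ex:intersection} below with $r=0.4$, $\varepsilon=0.1$, $a_1=(-0.3,-0.3)$ and $a_2=-a_1$, one expects an all-ones, hence irreducible, adjacency matrix, consistent with the claim there that the projection condition holds.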
Given $x,y \in \mathbb{R}^2$, we will denote $[x,y] = \{ \lambda x + (1-\lambda)y : 0 \le \lambda \le 1 \}$ and $(x,y) = [x,y] \setminus \{ x,y \}$. Furthermore, if $i \in I$ then with the notation $i^\infty$, we mean the symbol $(i,i,\ldots) \in I^\infty$. \begin{corollary} \label{thm:intersectionprojectioncondition} Suppose that for each $i\in I$ there are a contractive invertible matrix $A_i \in \mathbb{R}^{2 \times 2}$ with $||A_i|| \le \overline{\alpha}$ and a translation vector $a_i\in\mathbb{R}^2$. Denote by $E$ the invariant set of the affine IFS $\Phi = \{ A_i + a_i\}_{i \in I}$ and let \begin{equation} \label{eq:x_i} \begin{split} x_i &= \pi(i 1^\infty) = a_i + \sum_{n=0}^\infty A_i A_1^n a_1, \\ y_i &= \pi(i \kappa^\infty) = a_i + \sum_{n=0}^\infty A_i A_\kappa^n a_\kappa, \end{split} \end{equation} as $i \in I$. If the adjacency matrix $M \in \mathbb{R}^{\kappa \times \kappa}$ defined as \begin{equation*} M_{ij} = \begin{cases} 1, &\text{if $(x_i,y_i) \cap (x_j,y_j)$ is a single point}, \\ 0, &\text{otherwise}, \end{cases} \end{equation*} is irreducible, then for each affine IFS $\Phi'$ sufficiently close to $\Phi$ there is a constant $\varrho > 0$ such that $E' \cdot e$ is an interval having length at least $\varrho$ for all $e \in \mathbb{R}^2$. Here $E'$ is the invariant set of $\Phi'$. \end{corollary} \begin{proof} Denote by $M'$ the adjacency matrix corresponding to the system $\Phi'$. Since the property that $(x_i,y_i)$ intersects $(x_j,y_j)$ in a single point is stable, we see that $M'_{ij} \ge M_{ij}$ if $\Phi'$ is sufficiently close to $\Phi$. In particular, $M'$ is irreducible whenever $M$ is. Thus it is enough to verify the result for the original system $\Phi$. It follows from the assumptions that $E$ is not contained in a line. Thus there exists $\varrho>0$ such that $\conv(E)$ contains a ball of radius $\varrho$. Since trivially $(x_i,y_i) \subset \conv(E_i)$, the proof is finished by Proposition \ref{thm:convexprojectioncondition}. 
\end{proof} We next present a different, but also stable and easily checkable, condition that guarantees that the projection condition \eqref{K-projection} is met. Let $\mathcal{Q}_2$ denote the family of all vectors $v \in \mathbb{R}^2$ with strictly positive coefficients and define a partial order $\prec$ on $\mathbb{R}^2$ by setting $x \prec y$ if and only if $y-x \in \mathcal{Q}_2$. With the notation $x \preceq y$ we mean that $x \prec y$ or $x=y$. \begin{lemma} \label{thm:orderprojectioncondition} Suppose that for each $i\in I$ there are a contractive invertible matrix $A_i \in \mathbb{R}^{2 \times 2}$ with $||A_i|| \le \overline{\alpha}$ and a translation vector $a_i\in\mathbb{R}^2$. If $A_i$ has strictly positive coefficients for all $i\in I$ and the points $x_i$, $y_i$ defined in \eqref{eq:x_i} satisfy \begin{equation} \label{eq:chain} x_i \prec x_{i+1} \prec y_i \prec y_{i+1} \end{equation} whenever $i \in \{ 1,\ldots,\kappa-1 \}$, then there is a constant $\varrho>0$ such that $E \cdot e$ contains an interval of length $(y_\kappa-x_1)\cdot e \ge \varrho$ for all $e \in \mathcal{Q}_2$. \end{lemma} \begin{proof} The proof runs parallel to that of Proposition \ref{thm:convexprojectioncondition}. Given $\mathtt{i}\in I^*$, write \[ \ell_\mathtt{i} = A_\mathtt{i}([x_1,y_\kappa])+a_\mathtt{i}, \] where $a_\mathtt{i}$ is given by \eqref{eq:a_i}. We set $A_\varnothing = \textrm{Id}_2$ and $a_\varnothing=(0,0)$. Observe that \[ D(\ell_\mathtt{i}, E_\mathtt{i}) \rightarrow 0 \textrm{ as } |\mathtt{i}|\rightarrow\infty, \] whence \[ \lim_{k\rightarrow\infty} D\biggl(\bigcup_{\mathtt{i} \in I^k} \ell_\mathtt{i}, E\biggr) = 0, \] which in turn yields that \[ E \cdot e \supset \bigcap_{k=1}^\infty \bigcup_{\mathtt{i} \in I^k} \ell_\mathtt{i} \cdot e. \] Thus we only need to prove that the family $\{ \ell_\mathtt{i} \cdot e:\mathtt{i}\in I^*\}$ verifies the hypothesis of Lemma \ref{th:intersectionisinterval} for all $e\in\mathcal{Q}_2$. 
Denoting $\mathcal{I}_\mathtt{i} = \ell_\mathtt{i} \cdot e$ as $\mathtt{i} \in I^*$, we will prove by induction on $|\mathtt{i}|$ that \begin{equation} \label{eq:inductionunionints} \mathcal{I}_\mathtt{i} = \bigcup_{i\in I} \mathcal{I}_{\mathtt{i} i} = \bigl(A_\mathtt{i}([x_1,y_\kappa]) + a_\mathtt{i}\bigr) \cdot e, \end{equation} for all $e \in \mathcal{Q}_2$. Consider the case $|\mathtt{i}| = 0$ first. Note that, for $i \in I$, \[ x_i = A_i x_1 + a_i, \quad y_i = A_i y_\kappa + a_i. \] Hence $\ell_i = [x_i,y_i]$. From \eqref{eq:chain} we get that $x_1 \preceq x_i \prec y_i \preceq y_\kappa$ for $i\in I$, whence \begin{equation} \label{eq:unionints1} \bigcup_{i\in I} [x_i,y_i]\cdot e \subset [x_1,y_\kappa]\cdot e. \end{equation} On the other hand, from \eqref{eq:chain} we see that $x_i \prec y_{i+1}$ and $x_{i+1}\prec y_i$. Since $x\cdot e < y\cdot e$ whenever $x\prec y$ and $e\in\mathcal{Q}_2$, we get \begin{equation} \label{eq:unionints2} [x_i,y_i] \cdot e \cap [x_{i+1},y_{i+1}]\cdot e \neq \emptyset, \end{equation} whenever $i \in \{ 1,\ldots,\kappa-1 \}$. From \eqref{eq:unionints1} and \eqref{eq:unionints2}, and recalling that $\ell_i = [x_i,y_i]$, we get \eqref{eq:inductionunionints} in the case $|\mathtt{i}|=0$. The inductive step follows the same pattern as in Proposition \ref{thm:convexprojectioncondition}; details are omitted. \end{proof} \section{Examples and remarks} \label{sec:examples} We are now ready to state easily checkable conditions which guarantee that an affine IFS is stably of Kakeya type. Explicit examples follow below. In the following theorem, we will use the convention that $[x,y]=[y,x]$ if $x>y$. \begin{theorem} \label{thm:example} Suppose that for each $i \in I$ there are a contractive invertible matrix $A_i \in \mathbb{R}^{2 \times 2}$ with $||A_i|| \le \overline{\alpha}$ and a translation vector $a_i \in \mathbb{R}^2$. 
Assume further that for each $i \in I$ there are real numbers $u_i,v_i,w_i,z_i>0$ such that \[ A_i = \left(% \begin{array}{cc} u_i & v_i \\ w_i & z_i \\ \end{array}% \right) \] and the following two conditions hold: \begin{enumerate} \renewcommand{\labelenumi}{(X\arabic{enumi})} \renewcommand{\theenumi}{X\arabic{enumi}} \item \label{X-separation} The intervals $[w_i/u_i,z_i/v_i]$, $i \in I$, are pairwise disjoint. \item \label{X-projection} The affine IFS $\{A_i+a_i\}_{i \in J}$, where $J \subset I$ has cardinality at least $2$, verifies either the hypotheses of Corollary \ref{thm:intersectionprojectioncondition} or the hypotheses of Lemma \ref{thm:orderprojectioncondition}. \end{enumerate} Then the affine IFS $\Phi=\{ A_i+a_i\}_{i \in I}$ is stably of Kakeya type. In particular, the Minkowski dimension of the invariant set is given by the zero of the pressure formula \eqref{eq:pressure}, and is continuous on a neighborhood of $\Phi$. \end{theorem} \begin{proof} Using Theorem \ref{thm:main_result}, we only need to show that \eqref{K-kakeya} and \eqref{K-projection} hold for any small perturbation of $\Phi$. Since both \eqref{X-separation} and \eqref{X-projection} are stable properties, it is in fact enough to check that $\Phi$ is of Kakeya type. Let $\theta=\frac{1}{\sqrt{2}}(1,1)$. Since the $A_i$ have strictly positive coefficients, both $A_i$ and $A_i^*$ map the cone $X(\theta,\pi/2)$ into $X(\theta,\beta')$ for some $\beta'<\pi/2$. Hence there exists $\beta<\pi/2$ such that both \eqref{K-coneinvariance} and \eqref{K-transpose} hold. Suppose that \eqref{K-separation} does not hold for $\Phi$. Then there exist $s>0$ and $i,j \in I$ such that $i \ne j$ and \[ (1,s) = A_i(x,y) = A_j(x',y'), \] for some $x,y,x',y'>0$. Some simple algebra shows that \[ s = \frac{w_i x+z_i y}{u_i x + v_i y} = \frac{w_j x'+z_j y'}{u_j x' + v_j y'}, \] whence $s\in [w_i/u_i,z_i/v_i]\cap [w_j/u_j, z_j/v_j]$, which contradicts \eqref{X-separation}.
Let $F$ be the invariant set of $\Psi=\{ A_i +a_i\}_{i \in J}$. It is clear that $F \subset E$. If $\Psi$ verifies the conditions of Corollary \ref{thm:intersectionprojectioncondition}, then \eqref{K-projection} is immediately satisfied for $\Psi$ and hence also for $\Phi$. Likewise, if $\Psi$ satisfies the hypotheses of Lemma \ref{thm:orderprojectioncondition}, then \eqref{K-projection} holds for $\Phi$. This is true since $\theta_1(A_i)$ has positive coordinates thanks to the Perron-Frobenius Theorem. The proof is complete. \end{proof} We remark that finding an explicit neighborhood to which Theorem \ref{thm:example} applies is an elementary, if tedious, exercise. \begin{example} \label{ex:intersection} We consider our first specific example. Let \[ A_1(r,\varepsilon) = \left(% \begin{array}{cc} r & r+\varepsilon \\ \varepsilon & r \\ \end{array}% \right), \quad A_2(r,\varepsilon) = \left(% \begin{array}{cc} r & \varepsilon \\ r+\varepsilon & r \\ \end{array}% \right). \] \begin{figure} \centering \includegraphics[width=0.6\textwidth]{selfaffine.eps} \caption{A self-affine set of Kakeya type.} \label{fig-sa} \end{figure} The affine IFS $\{A_1(r,0)+a_1, A_2(r,0)+a_2\}$ was studied in \cite{Edgar1992}, where it is proven that the singularity dimension is $1$ when $r=1/3$. This IFS does not verify \eqref{K-coneinvariance}; however, $\{A_1(r,\varepsilon)+a_1, A_2(r,\varepsilon)+a_2\}$ does satisfy \eqref{X-separation}, and hence \eqref{K-kakeya}, for all small $\varepsilon>0$. Figure \ref{fig-sa} depicts the invariant set when $r=0.4$, $\varepsilon = 0.1$ and the translations are $a_1 = (-0.3, -0.3)$ and $a_2=-a_1$. For these values of the parameters the spectral radius of the matrices $A_i(r,\varepsilon)$ is approximately $0.624>1/2$; thus Falconer's Theorem does not apply. However, the conditions of Corollary \ref{thm:intersectionprojectioncondition} are clearly met (this can be verified algebraically without effort).
Thus, by Theorem \ref{thm:example}, this is stably a self-affine set of Kakeya type. We remark that by picking appropriate values of $r$ and $\varepsilon$ one can obtain examples where the norms of the maps are arbitrarily close to $1$. Notice that the invariant set resembles a union of approximately equally long segments pointing in different directions, underlining the Kakeya-type structure. Also observe that this particular example appears to be overlapping, although proving this rigorously looks very difficult. \end{example} \begin{lemma} \label{thm:exampleorder} Suppose that for each $i \in \{1,2\}$ there is a contractive invertible matrix $A_i\in\mathbb{R}^{2\times 2}$ with strictly positive coefficients and $||A_i|| \le \overline{\alpha}$, such that the condition \eqref{X-separation} is satisfied. Let \begin{equation*} B_2 = \sum_{n=0}^\infty A_2^n = (\textrm{Id}_2-A_2)^{-1}. \end{equation*} If both $A_1 B_2 - \textrm{Id}_2$ and $(\textrm{Id}_2-A_1)B_2$ have strictly positive coefficients, then for any vector $a_2$ with strictly positive coefficients, the affine IFS $\{ A_1, A_2+a_2\}$ is stably of Kakeya type. \end{lemma} \begin{proof} Notice that $B_2$ has strictly positive coefficients. The points defined in \eqref{eq:x_i} are now $x_1 = 0$, $y_1 = A_1 B_2 a_2$, $x_2 = a_2$, and $y_2 = B_2 a_2$. Suppose $a_2\in\mathcal{Q}_2$. It is clear that $x_1\prec x_2$. Moreover, $x_2 \prec y_1$ whenever $A_1 B_2 - \textrm{Id}_2$ has strictly positive coefficients, and $y_1 \prec y_2$ whenever $(\textrm{Id}_2-A_1)B_2$ has strictly positive coefficients. Thus we have shown that the hypotheses of Lemma \ref{thm:orderprojectioncondition} hold, whence the lemma is immediate from Theorem \ref{thm:example}.
\end{proof} \begin{example} \label{ex:concreteexample} As a concrete example, let \begin{equation} \label{eq:concreteexample} A_1 = \left(% \begin{array}{cc} 0.35 & 0.40 \\ 0.30 & 0.35 \\ \end{array}% \right), \quad A_2 = \left(% \begin{array}{cc} 0.40 & 0.45 \\ 0.45 & 0.50 \\ \end{array}% \right) . \end{equation} A straightforward calculation shows that $A_1 B_2 -\textrm{Id}_2$ and $(\textrm{Id}_2-A_1)B_2$ have positive coefficients. Hence, by Lemma \ref{thm:exampleorder}, the affine IFS $\{ A_1, A_2 + a_2\}$, as well as any small perturbation, is of Kakeya type for any $a_2\in\mathcal{Q}_2$. In particular, the Minkowski dimension of the invariant set of this IFS is constant for all $a_2\in\mathcal{Q}_2$. \end{example} \begin{example} As a final example, we exhibit an affine IFS of Kakeya type with an arbitrary number of maps. Choose $\kappa \ge 3$ and let $A_1$, $A_2$ be as in Example \ref{ex:concreteexample}. For $j \in \{ 3,\ldots,\kappa \}$, we define \[ A_j = \left(% \begin{array}{cc} \tfrac{1}{2} & \tfrac{1}{2} \\ \tfrac{1}{3j-1} & \tfrac{1}{3j} \\ \end{array}% \right). \] Note that $\{ A_1,\ldots, A_\kappa\}$ satisfies \eqref{X-separation}. Thus Theorem \ref{thm:example}, applied with $J=\{1,2\}$, implies that for any $a_2\in\mathcal{Q}_2$ and any $a_3,\ldots,a_\kappa\in\mathbb{R}^2$, the affine IFS \[ \{ A_1, A_2+a_2, A_3+a_3,\ldots, A_\kappa+a_\kappa\} \] is stably of Kakeya type. \end{example} We finish the paper with some questions and remarks. \begin{remark} \label{remarks} (1) Our techniques do not extend easily to higher dimensions. One source of technical difficulties is having to deal with more than two singular values, but the main obstruction is of course that the Kakeya conjecture is open for dimension $d\ge 3$, and no analogue of Proposition \ref{thm:kakeya_estimate} is known. 
We remark, however, that Lemma \ref{thm:norm_alpha} does hold, with the same proof, in higher dimensions, although one needs to replace the cone $X(\theta,\beta)$ by a cone which is, after a change of coordinates, $\mathcal{Q}_d \cup -\mathcal{Q}_d$. Here $\mathcal{Q}_d$ is the family of all vectors $v \in \mathbb{R}^d$ with strictly positive coefficients. Note that in $\mathbb{R}^2$ both classes of cones agree, but not in higher dimensions. This observation will be useful in the appendix. (2) We do not know if our results hold for nonlinear perturbations of the affine IFS's we study. In studying nonlinear, nonconformal IFS's one usually needs to assume the so-called ``1-bunching'' condition, which guarantees that certain kind of bounded distortion holds, and therefore allows control of the shape of the cylinder sets; see for example \cite{Falconer1994}. For a linear map $A$, 1-bunching is equivalent to $\alpha_2(A) > \bigl(\alpha_1(A)\bigr)^2$. This is exactly Hypothesis 2 in \cite{HueterLalley1995} and, as remarked in \S \ref{sec:kakeya}, it cannot hold in our setting. More specifically, $1$-bunching appears to be necessary to extend Lemma \ref{thm:angle} to nonlinear maps. (3) Computing the singularity dimension of an arbitrary affine IFS is a very difficult problem. Recently Falconer and Miao \cite{FalconerMiao2007b} succeeded in finding a closed formula in the case all the matrices are upper triangular but, as they indicate, in general it is very hard to even obtain good numerical estimates. In our setting, one could use Lemma \ref{thm:norm_alpha} to obtain rigorous upper and lower bounds, but the convergence is extremely slow. (4) It would be of interest to find more general conditions for the validity of \eqref{K-projection}. In particular, is it true that, when $\kappa=2$, \eqref{K-projection} holds whenever the singularity dimension is strictly larger than $1$? 
(5) Falconer's Theorem shows that the equality of Hausdorff dimension and singular value dimension of a self-affine set is typical from the point of view of measure, at least when the norms of the linear maps do not exceed $\frac{1}{2}$, but does not say anything about the topological structure of the exceptional set. In every known counterexample, the linear parts of the affine maps commute; this is of course a nowhere dense condition. Our results provide some support to the conjecture that Minkowski dimension and singular value dimension agree for an open and dense family of affine IFS's. \end{remark} \begin{ack} PS wishes to thank Nuno Luzia and Boris Solomyak for helpful conversations and comments. \end{ack}
\section{Introduction} \label{section:1} For each prime number $p$, there are the mod $p$ cohomology and the Brown-Peterson cohomology. For a compact connected Lie group $G$, the mod $p$ cohomology of the classifying space $BG$ has no nonzero odd degree element if the integral cohomology of $G$ has no $p$-torsion. So does the Brown-Peterson cohomology. On the one hand, if the integral homology of $G$ has $p$-torsion, the mod $p$ cohomology of $BG$ has a nonzero odd degree element. On the other hand, for the Brown-Peterson cohomology, Kono and Yagita conjectured the following: \begin{conjecture}[Kono and Yagita, (1) in Conjecture 4 in \cite{kono-yagita-1993}] \label{conjecture:1.1} There is no nonzero odd degree element in the Brown-Peterson cohomology of the classifying space of a compact Lie group. \end{conjecture} Conjecture~\ref{conjecture:1.1} is interesting in conjunction with Totaro's conjecture on the cycle map from the Chow ring of the classifying space of a complex linear algebraic group $G$ to its Brown-Peterson cohomology. In \cite{totaro-1997}, Totaro showed that the cycle map from the Chow ring of a complex smooth algebraic variety to its ordinary cohomology factors through the Brown-Peterson cohomology after localization at $p$. In \cite{totaro-1999}, he defined the Chow ring $CH^*(BG)$ of a linear algebraic group $G$ and conjectured the following. \begin{conjecture}[Totaro, p.250 in \cite{totaro-1999}]\label{conjecture:1.2} For a complex linear algebraic group $G$, if there is no nonzero odd degree element in the Brown-Peterson cohomology $BP^{*}(BG)$, the cycle map \[ CH^i(BG)_{(p)} \to (\mathbb{Z}_{(p)} \otimes_{BP^{*}} BP^{*}(BG))^{2i} \] is an isomorphism. \end{conjecture} With Conjectures~\ref{conjecture:1.1} and \ref{conjecture:1.2}, we expect a close connection between the Chow ring in algebraic geometry and the Brown-Peterson cohomology in algebraic topology.
In \cite{kono-yagita-1993}, Kono and Yagita confirmed Conjecture~\ref{conjecture:1.1} for some compact connected Lie groups with $p$-torsion by computing the Atiyah-Hirzebruch spectral sequences. The non-triviality of Milnor operations on odd degree elements yields non-trivial differentials sending odd degree elements to non-zero elements, so odd degree elements do not survive to the $E_\infty$-term. With their computational results on the Brown-Peterson cohomology of classifying spaces, Kono and Yagita conjectured the following: \begin{conjecture}[Kono and Yagita, {Conjecture 5} in \cite{kono-yagita-1993}] \label{conjecture:1.3} For each nonzero odd degree element $x$ of the mod $p$ cohomology of the classifying space of a compact connected Lie group, there exists an integer $i$ such that for $m\geq i$, \[ Q_m x\not=0. \] \end{conjecture} Conjecture~\ref{conjecture:1.3} is interesting in the cohomology theory of classifying spaces of non-simply connected Lie groups. In \cite{vavpetic-viruel-2005}, Vavpeti\v{c} and Viruel showed that if $p$ is an odd prime, Conjecture~\ref{conjecture:1.3} holds for the projective unitary group $PU(p)$. Moreover, recently, the cohomology of classifying spaces of non-simply connected Lie groups has enjoyed renewed interest. Many mathematicians have studied it in various contexts. Antieau, Gu and Williams (\cite{antieau-williams-2014}, \cite{gu-2019}, \cite{gu-2020}, \cite{gu-2022}) studied it for the topological period-index problem. Antieau, the author and Tripathy (\cite{antieau-2016}, \cite{kameko-2015}, \cite{kameko-2017}, \cite{tripathy-preprint}) studied it for integral Hodge conjecture modulo torsion. Furthermore, the Atiyah-Hirzebruch spectral sequence is used in theoretical physics to study anomalies, cf. Garc\'{\i}a-Etxebarria and Montero \cite{GM-2019}. In this paper, we give a counterexample for Conjecture~\ref{conjecture:1.3} in the case $p=2$. Our result is as follows: Let $\mathbb{H}$ be the quaternions. 
Let $Sp(1)\subset \mathbb{H}$ be the symplectic group consisting of unit quaternions. Let $G$ be the quotient of the three-fold product $Sp(1)^3$ of the symplectic groups $Sp(1)$ by the subgroup $\Gamma_2$ generated by $(-1, -1, 1)$ and $(-1, 1, -1)$. \begin{theorem}\label{theorem:1.4} In the mod $2$ cohomology of the classifying space of the compact connected Lie group $G$ above, there exists a nonzero element $x_{13}$ of degree $13$ such that \[ Q_m x_{13}=0 \] for $m \geq 1$. \end{theorem} This paper is organized as follows. In Section~\ref{section:2}, we describe the action of Milnor operations on the mod $2$ cohomology of $BSO(3)$. In Section~\ref{section:3}, we prove Theorem~\ref{theorem:1.4} as Proposition~\ref{proposition:3.5}. \section{Milnor operations} \label{section:2} In this section, we recall the mod $2$ cohomology of the classifying space $BSO(3)$. The mod $2$ cohomology of $BSO(3)$ is a polynomial ring \[ H^{*}(BSO(3);\mathbb{Z}/2)=\mathbb{Z}/2[w_2, w_3] \] generated by two elements $w_2$, $w_3$ of degree $2$, $3$, respectively. The action of the Steenrod squares on these elements is given by the Wu formula. In particular, we have \begin{align*} \mathrm{Sq}^1 w_2&=w_3, & \mathrm{Sq}^2 w_2 &=w_2^2, \\ \mathrm{Sq}^1 w_3&=0, & \mathrm{Sq}^2 w_3 &=w_2 w_3. \end{align*} By the Wu formula and by the definition of Milnor operations \[ Q_0=\mathrm{Sq}^1, \quad Q_m=\mathrm{Sq}^{2^{m}} Q_{m-1}+Q_{m-1} \mathrm{Sq}^{2^m} \quad (m\geq 1), \] it is easy to obtain \begin{align*} Q_0 w_2&= w_3, & Q_1 w_2&=w_2 w_3, & Q_0 Q_1 w_2&=w_3^2,\\ Q_0 w_3&= 0, & Q_1 w_3&=w_3^2, & Q_0 Q_1 w_3&=0. \end{align*} This section aims to prove the following lemma on the action of Milnor operations on the mod $2$ cohomology of $BSO(3)$. \begin{lemma}\label{lemma:2.1} For $m\geq 2$, there exists a polynomial $g_m$ in $w_2^2$ and $w_3^2$ such that we have \[ Q_m Q_1w_2=g_m w_3^4 \] in the mod $2$ cohomology of $BSO(3)$.
\end{lemma} To prove Lemma~\ref{lemma:2.1}, we recall the relation between Dickson invariants and Milnor operations as Proposition~\ref{proposition:2.2}. The connection between Dickson invariants and Milnor operations is an exciting subject in algebraic topology. Thus, we refer the reader to the classical work of Adams and Wilkerson (\cite{adams-wilkerson-1980}, \cite{wilkerson-1983}) for more detail on the background of this section. However, to make this paper self-contained as far as possible, we give a detailed proof for Lemma~\ref{lemma:2.1} without mentioning Dickson invariants and the above background. Let $(\mathbb{Z}/2)^2=\mathbb{Z}/2\times \mathbb{Z}/2$ be the elementary abelian $2$-subgroup of $SO(3)$ generated by diagonal matrices \[ \begin{pmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix} , \quad \begin{pmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix}. \] We denote by $\iota\colon (\mathbb{Z}/2)^2 \to SO(3)$ the inclusion map. The mod $2$ cohomology of $B(\mathbb{Z}/2)^2$ is a polynomial ring $\mathbb{Z}/2[s_1, s_2]$ generated by two elements $s_1$, $s_2$ of degree $1$. The induced homomorphism \[ B\iota^{*}\colon H^{*}(BSO(3);\mathbb{Z}/2)\to H^{*}(B(\mathbb{Z}/2)^2;\mathbb{Z}/2) \] is injective, and its image is the subalgebra generated by the following elements. \begin{align*} B\iota^{*}(w_2)&=s_1^2 + s_1 s_2 + s_2^2, \\ B\iota^{*}(w_3)&=s_1^2s_2 + s_1 s_2^2. \end{align*} \begin{proposition}\label{proposition:2.2} Suppose that $m\geq 2$. For an element $x$ of the mod $2$ cohomology of $B(\mathbb{Z}/2)^2$, let \[ D_m x=Q_m x + B\iota^{*}(w_2^{2^{m-1}}) Q_{m-1}x +B\iota^{*}(w_3^{2^{m-1}}) Q_{m-2}x. \] Then, we have \[ D_m x=0. \] \end{proposition} \begin{proof} Here, in the proof of Proposition~\ref{proposition:2.2}, by the mod $2$ cohomology ring, we mean the mod $2$ cohomology ring of $B(\mathbb{Z}/2)^2$.
For $i=1$ and $2$, we have \begin{align*} D_m s_i&=Q_m s_i+ B\iota^*(w_2^{2^{m-1}} ) Q_{m-1} s_i+ B\iota^{*}(w_3^{2^{m-1}}) Q_{m-2} s_i \\ &=s_i^{2^{m+1}} +(s_1^2+s_1s_2+s_2^2)^{2^{m-1}} s_i^{2^{m}} +(s_1^2 s_2+ s_1 s_2^2)^{2^{m-1}} s_i^{2^{m-1}}\\ &=\left( s_i^4 +(s_1^2+s_1s_2+s_2^2) s_i^{2} + (s_1^2 s_2+ s_1 s_2^2) s_i\right)^{2^{m-1}} \\ &=0. \end{align*} For elements $x$, $y$ in the mod $2$ cohomology ring, we have \[ D_m(x\cdot y)=D_m x\cdot y+ x \cdot D_m y. \] Therefore, since the mod $2$ cohomology ring is generated by $s_1$, $s_2$, the fact that $D_m s_i =0$ for $i=1,2$ implies that $D_m x=0$ for each element $x$ in the mod $2$ cohomology ring. \end{proof} Now, for $m\geq 2$, we describe the action of the Milnor operation $Q_m$ in terms of certain polynomials $f_{m,0}$, $f_{m,1}$ in $w_2^2$ and $w_3^2$ and Milnor operations $Q_0$, $Q_1$. Since the induced homomorphism \[ B\iota^{*}\colon H^{*}(BSO(3);\mathbb{Z}/2) \to H^{*}(B(\mathbb{Z}/2)^2;\mathbb{Z}/2) \] is injective, by Proposition~\ref{proposition:2.2}, for each $x$ in the mod $2$ cohomology of $BSO(3)$, we have \[ Q_m x= w_2^{2^{m-1}} Q_{m-1}x+ w_3^{2^{m-1}} Q_{m-2}x. \] We may write it in the following form. \[ \begin{pmatrix} Q_m x\\ Q_{m-1}x \end{pmatrix} = \begin{pmatrix} w_2^{2^{m-1}} & w_3^{2^{m-1}} \\ 1 & 0 \end{pmatrix} \begin{pmatrix} Q_{m-1}x \\ Q_{m-2}x \end{pmatrix}. \] Let us define a matrix $A_m$ whose coefficients are polynomials in $w_2^2$, $w_3^2$ as follows: \[ A_m =\begin{pmatrix} w_2^{2^{m-1}} & w_3^{2^{m-1}} \\ 1 & 0 \end{pmatrix} \begin{pmatrix} w_2^{2^{m-2}} & w_3^{2^{m-2}} \\ 1 & 0 \end{pmatrix} \cdots \begin{pmatrix} w_2^{4} & w_3^{4} \\ 1 & 0 \end{pmatrix} \begin{pmatrix} w_2^{2} & w_3^{2} \\ 1 & 0 \end{pmatrix}. \] Furthermore, let us define polynomials $f_{m,0}$, $f_{m,1}$ by \[ \begin{pmatrix} f_{m,1} & f_{m,0} \end{pmatrix}= \begin{pmatrix}1 & 0 \end{pmatrix} A_m.
\] Then, for $x$ in the mod $2$ cohomology of $BSO(3)$, we have \[ Q_m x= \begin{pmatrix} 1 & 0 \end{pmatrix} \begin{pmatrix} Q_{m} x\\ Q_{m-1} x \end{pmatrix} =\begin{pmatrix} 1 & 0 \end{pmatrix} A_m \begin{pmatrix} Q_1 x\\ Q_0 x \end{pmatrix} =f_{m,1} Q_1x + f_{m,0} Q_0x. \] \begin{proof}[Proof of Lemma~\ref{lemma:2.1}] We have the following congruence. \[ A_m \equiv \begin{pmatrix} w_2^{2^{m-1}} & 0 \\ 1 & 0 \end{pmatrix} \cdots \begin{pmatrix} w_2^{2} & 0 \\ 1 & 0 \end{pmatrix} \equiv \begin{pmatrix} w_2^{2^{m}-2} & 0 \\ w_2^{2^{m-1}-2} & 0 \end{pmatrix} \mod (w_3^2). \] Hence, we have $f_{m,0}\equiv 0 \mod (w_3^2)$. Therefore, there exists a polynomial $g_m$ in $w_2^2$ and $w_3^2$ such that \[ f_{m,0}=g_m w_3^2. \] Recall the fact that $Q_0Q_1 w_2=w_3^2$ and that $Q_1 Q_1=0$. Therefore, we have \begin{align*} Q_mQ_1 w_2&= f_{m,1}Q_1 Q_1w_2 + f_{m,0} Q_0Q_1 w_2 \\ &=f_{m,0} Q_0Q_1 w_2 \\ &=g_m w_3^4. \qedhere \end{align*} \end{proof} \begin{example}For $m=2, 3, 4$, elements $Q_mx$ and polynomials $g_m$ in Lemma~\ref{lemma:2.1} are as follows: \begin{align*} Q_2 x&= w_2^2 Q_1 x+ w_3^2 Q_0 x, & g_2&=1, \\ Q_3 x&= (w_2^6+w_3^4) Q_1x + w_2^4 w_3^2 Q_0 x, & g_3 &= w_2^4, \\ Q_4 x &= (w_2^{14}+w_2^8 w_3^4+ w_2^2 w_3^8) Q_1 x + (w_2^{12}+w_3^8)w_3^2 Q_0 x, & g_4&= w_2^{12}+w_3^8. \end{align*} \end{example} \section{The nonzero odd degree element} \label{section:3} In this section, we prove Theorem~\ref{theorem:1.4} as Proposition~\ref{proposition:3.5}. First, we recall the definition of the connected Lie group $G$ in Section~\ref{section:1} and set up notation. Let us consider the $3$-fold product of symplectic groups $Sp(1)\subset \mathbb{H}$ consisting of unit quaternions. Let \[ \Gamma_3=\{ (\pm 1, \pm 1, \pm 1 )\} \] be the center of $Sp(1)^3$. Let $\Gamma_2$ be its subgroup generated by $(-1, 1, -1)$ and $(1, -1, -1)$, and let \[ G=Sp(1)^3/\Gamma_2. \] Let $\mathbb{Z}/2=\{ (\pm 1, 1, 1) \} \subset \Gamma_3$.
Then, $\mathbb{Z}/2$ and $\Gamma_2$ generate $\Gamma_3$. Moreover, we have \[ Sp(1)^3/\Gamma_3=SO(3)^3. \] Therefore, we have the following fiber sequence: \[ B\mathbb{Z}/2 \to BG\to BSO(3)^3. \] Let \[ B\pi_i\colon BSO(3)^3\to BSO(3) \] be the map induced by the projection to the $i^{\mathrm{th}}$ factor for $i=1, 2, 3$. We denote by $w_i'$, $w_i''$, $w_i'''$ the cohomology classes $B\pi_1^{*}(w_i)$, $B\pi_2^{*}(w_i)$, $B\pi_3^{*}(w_{i})$, respectively. Let $u_1$ be the generator of the mod $2$ cohomology $H^{1}(B\mathbb{Z}/2;\mathbb{Z}/2)\cong\mathbb{Z}/2$ of the fibre $B\mathbb{Z}/2$. It is easy to compute the mod $2$ cohomology of $BG$ using the Leray-Serre spectral sequence associated with this fiber sequence. We quickly review it. See \cite{kameko-preprint} for more detail. The $E_2$-term and the first nontrivial differential $d_2$ are given by \begin{align*} E_2&=\mathbb{Z}/2[ w_2', w_3', w_2'', w_3'', w_2''', w_3''']\otimes \mathbb{Z}/2[u_1] \\ d_2(u_1)&=w_2'+w_2''+ w_2'''. \end{align*} Starting with the above $E_2$ and $d_2$, using the fact that the transgression commutes with Steenrod squares, we have the following $E_r$-terms and differentials. \begin{align*} E_3&=\mathbb{Z}/2[w_2', w_3', w_2'', w_3'', w_3''']\otimes \mathbb{Z}/2[ u_1^2], \\ d_3(u_1^2)&=\mathrm{Sq}^1(w_2' +w_2''+ w_2''') \\ &=w_3' +w_3''+ w_3''', \\ E_4&=\mathbb{Z}/2[w_2', w_3', w_2'', w_3'']\otimes \mathbb{Z}/2[ u_1^4], \\ d_5(u_1^4)&=\mathrm{Sq}^2 (w_3' +w_3''+ w_3''') \\ &=w_2'w_3'+ w_2'' w_3'' + w_2''' w_3''' \\ & =w_2' w_3''+ w_2'' w_3', \\ E_6&=\mathbb{Z}/2[w_2', w_3', w_2'', w_3'']/(w_2' w_3''+ w_2'' w_3') \otimes \mathbb{Z}/2[ u_1^8], \\ d_9(u_1^8)&=\mathrm{Sq}^4 (w_2' w_3''+ w_2'' w_3') \\ &=w_2'^2 w_2'' w_3''+w_3' w_3''^2 +w_2''^2 w_2' w_3'+w_3'' w_3'^2 \\ &=w_3' w_3''^2 + w_3'' w_3'^2.
\end{align*} Since $w_2'w_3''+ w_2'' w_3'$, $w_3' w_3''^2 + w_3'' w_3'^2$ is a regular sequence, we have \begin{align*} E_{10}=& \mathbb{Z}/2[ w_2', w_3', w_2'', w_3'']/(w_2'w_3''+ w_2'' w_3', w_3'w_3''^2+ w_3'' w_3'^2) \otimes \mathbb{Z}/2[u_1^{16}]. \intertext{Finally, we have} d_{17}(u_1^{16})&=\mathrm{Sq}^{8} (w_3'w_3''^2 + w_3'' w_3'^2) \\ &=w_2'w_3' w_3''^4+ w_2'' w_3'' w_3'^4 \\ &=0. \end{align*} Hence, we have $E_\infty=E_{10}$. Let \[ N=\mathbb{Z}/2[ w_2', w_3', w_2'', w_3'']/(w_2' w_3''+w_2''w_3', w_3'w_3''^2 + w_3'' w_3'^2). \] We identify $N$ above with the subalgebra of the mod $2$ cohomology ring of $BG$ generated by $w_2', w_3', w_2'', w_3''$. What we need is the fact that the induced homomorphism \[ N\to H^{*}(BG;\mathbb{Z}/2) \] is injective, and that $N$ is closed under the action of Milnor operations $Q_m$ for $m\geq 0$. For a graded set $\{ x_1, x_2, \dots \}$, we denote by $\mathbb{Z}/2\{ x_1, x_2, \dots \}$ the graded $\mathbb{Z}/2$-module spanned by $\{ x_1, x_2, \dots \}$. We assign weight $0$, $1$, $0$, $1$ to $w_2'$, $w_3'$, $w_2''$, $w_3''$, respectively. Then, the ideal is homogeneous with respect to weight, and $N$ is a direct sum of subspaces $N_k$ generated by weight $k$ monomials. Moreover, it is easy to find their bases for $k=0, 1, 2$ since the only relation we need to take care of is $w_2'w_3''+ w_2'' w_3'$. We will define the element $x_{13}$ as an element in $N_1$. We also need the following Proposition~\ref{proposition:3.1} on the basis for $N_1$ to show that $x_{13}$ is nonzero. \begin{proposition}\label{proposition:3.1} For $N_0, N_1, N_2$, we have \begin{align*} N_0&=\mathbb{Z}/2\{ w_2'^mw_2''^n\;|\; m, n \geq 0\}, \\ N_1&=\mathbb{Z}/2\{ w_2'^m w_3', w_2'^m w_2''^n w_3''\; |\; m, n \geq 0\}, \\ N_2&=\mathbb{Z}/2\{ w_2'^m w_3'^2, w_2'^m w_2''^n w_3' w_3'', w_2''^n w_3''^2 \;|\; m, n \geq 0\}. \end{align*} \end{proposition} We need the following lemma on $N_k$ ($k\geq 3$) to show that $Q_m x_{13}=0$ for $m\geq 2$.
\begin{proposition}\label{proposition:3.2} Suppose that $k\geq 3$. For $1\leq i \leq k-1$, $m\geq 0$, $n\geq 0$, we have \[ w_2'^m w_2''^n w_3'^i w_3''^{k-i} = w_2''^{m+n}w_3' w_3''^{k-1} \] in $N_k$. \end{proposition} \begin{proof} For $i\geq 2$, we have \begin{align*} w_3'^i w_3''^{k-i}&=w_3'^2 w_3'' \cdot w_3'^{i-2} w_3''^{k-i-1} \\ &=w_3' w_3''^2 \cdot w_3'^{i-2} w_3''^{k-i-1} (\because w_3'^2 w_3''=w_3'w_3''^2)\\ &=w_3'^{i-1} w_3''^{k-i+1}. \end{align*} Iterating this process, we have $w_3'^i w_3''^{k-i}=w_3'w_3''^{k-1}$. For $m\geq 1$, we have \begin{align*} w_2'^m w_2''^n w_3' w_3''^{k-1} &= w_2' w_3'' \cdot w_2'^{m-1} w_2''^n w_3' w_3''^{k-2} \\ &=w_3' w_2'' \cdot w_2'^{m-1} w_2''^n w_3' w_3''^{k-2} (\because w_2' w_3''= w_3' w_2'') \\ &=w_2'^{m-1} w_2''^{n+1} w_3'^2 w_3''^{k-2} \\ &=w_2'^{m-1} w_2''^{n+1} w_3' w_3''^{k-1} (\because w_3'^2 w_3''^{k-2}=w_3' w_3''^{k-1}). \end{align*} Iterating this process as well, we obtain the desired result $ w_2'^m w_2''^n w_3'^i w_3''^{k-i} = w_2''^{m+n}w_3' w_3''^{k-1} $. \end{proof} \begin{remark}\label{remark:3.3} With Proposition~\ref{proposition:3.2}, it is easy to find a basis for $N_k$. We have the following: \[ N_k =\mathbb{Z}/2\{ w_2'^m w_3'^k, w_2''^n w_3' w_3''^{k-1}, w_2''^n w_3''^k \;|\; m, n \geq 0 \}. \] \end{remark} \begin{remark}\label{remark:3.4} It is easy to compute the Poincar\'{e} series \[ \dfrac{(1-t^5)(1-t^9)}{(1-t^2)^2(1-t^{3})^2} \] of $N$ since $w_2'w_3''+ w_2'' w_3'$, $w_3' w_3''^2 + w_3'' w_3'^2$ is a regular sequence. To prove the linear independence of the elements in Propositions~\ref{proposition:3.1} and \ref{proposition:3.2}, one may compute the Poincar\'{e} series of each $N_k$ and add them up to obtain the Poincar\'{e} series of $N$ above. \end{remark} \begin{proposition}\label{proposition:3.5} Let us define an element $x_{13}$ of degree $13$ in the mod $2$ cohomology of $BG$ by \[ x_{13}:= B\pi_1^{*}(Q_1 w_2) w_2''^2 (w_2'^2 +w_2''^2). \] Then, $x_{13}$ is nonzero and for $m\geq 1$, we have \[ Q_m x_{13}=0. 
\] \end{proposition} \begin{proof} First, we verify that $x_{13}$ is nonzero. Since $B\pi_1^{*}(Q_1 w_2)=w_2' w_3'$, we have \begin{align*} x_{13}&=w_2'^3 w_2''^2 w_3'+ w_2' w_2''^4 w_3'\\ &=w_2'^4 w_2'' w_3''+ w_2'^2 w_2''^3 w_3'' \\ &\not=0 \end{align*} in $N_1$ by Proposition~\ref{proposition:3.1}. Next, we compute $Q_mx_{13}$. Since $Q_m$ acts trivially on $w_2''^2(w_2'^2+w_2''^2)$, \begin{align*} Q_m x_{13}&= B\pi_1^*(Q_m Q_1 w_2 ) w_2''^2 (w_2'^2 + w_2''^2). \end{align*} For $m=1$, since $Q_1Q_1=0$, we have $Q_1x_{13}=0$. For $m\geq 2$, by Lemma~\ref{lemma:2.1}, we have \begin{align*} B\pi_1^*(Q_m Q_1 w_2 ) w_2''^2 (w_2'^2 + w_2''^2)&=B\pi_1^{*}(g_m w_3^4)w_2''^2 (w_2'^2 + w_2''^2) \\ &=B\pi_1^{*}(g_m) w_3'^4 w_2''^2 (w_2'^2 + w_2''^2). \end{align*} By Proposition~\ref{proposition:3.2}, we obtain \begin{align*} w_3'^4 w_2''^2 w_2'^2 &= w_2''^4 w_3' w_3''^3, \\ w_3'^4 w_2''^2 w_2''^2 &= w_2''^4 w_3' w_3''^3, \end{align*} hence, we have \[ w_3'^4 w_2''^2 (w_2'^2 + w_2''^2)=0. \] Therefore, we obtain $ Q_mx_{13}=0 $. \end{proof} \begin{bibdiv} \begin{biblist} \bib{adams-wilkerson-1980}{article}{ author={Adams, J. F.}, author={Wilkerson, C. W.}, title={Finite $H$-spaces and algebras over the Steenrod algebra}, journal={Ann. of Math. (2)}, volume={111}, date={1980}, number={1}, pages={95--143}, issn={0003-486X}, doi={10.2307/1971218}, } \bib{antieau-williams-2014}{article}{ author={Antieau, Benjamin}, author={Williams, Ben}, title={The topological period-index problem over 6-complexes}, journal={J. Topol.}, volume={7}, date={2014}, number={3}, pages={617--640}, issn={1753-8416}, doi={10.1112/jtopol/jtt042}, } \bib{antieau-2016}{article}{ author={Antieau, Benjamin}, title={On the integral Tate conjecture for finite fields and representation theory}, journal={Algebr. 
Geom.}, volume={3}, date={2016}, number={2}, pages={138--149}, issn={2313-1691}, doi={10.14231/AG-2016-007}, } \bib{GM-2019}{article}{ author={Garc\'{\i}a-Etxebarria, I\~{n}aki}, author={Montero, Miguel}, title={Dai-Freed anomalies in particle physics}, journal={J. High Energy Phys.}, date={2019}, number={8}, pages={003, 77}, issn={1126-6708}, doi={10.1007/jhep08(2019)003}, } \bib{feshbach-1981}{article}{ author={Feshbach, Mark}, title={The image of $H\sp{\ast} (BG,\,{\bf Z})$ in $H\sp{\ast} (BT,\,{\bf Z})$ for $G$ a compact Lie group with maximal torus $T$}, journal={Topology}, volume={20}, date={1981}, number={1}, pages={93--95}, issn={0040-9383}, doi={10.1016/0040-9383(81)90015-X}, } \bib{gu-2019}{article}{ author={Gu, Xing}, title={The topological period-index problem over 8-complexes, I}, journal={J. Topol.}, volume={12}, date={2019}, number={4}, pages={1368--1395}, issn={1753-8416}, doi={10.1112/topo.12119}, } \bib{gu-2020}{article}{ author={Gu, Xing}, title={The topological period-index problem over 8-complexes, II}, journal={Proc. Amer. Math. Soc.}, volume={148}, date={2020}, number={10}, pages={4531--4545}, issn={0002-9939}, doi={10.1090/proc/15112}, } \bib{gu-2022}{article}{ author={Gu, Xing}, author={Zhang, Yu}, author={Zhang, Zhilei}, author={Zhong, Linan}, title={The $p$-primary subgroups of the cohomology of $BPU_n$ in dimensions less than $2p+5$}, journal={Proc. Amer. Math. Soc.}, volume={150}, date={2022}, number={9}, pages={4099--4111}, issn={0002-9939}, review={\MR{4446254}}, doi={10.1090/proc/16000}, } \bib{kameko-2015}{article}{ author={Kameko, Masaki}, title={On the integral Tate conjecture over finite fields}, journal={Math. Proc. Cambridge Philos. Soc.}, volume={158}, date={2015}, number={3}, pages={531--546}, issn={0305-0041}, doi={10.1017/S0305004115000134}, } \bib{kameko-2017}{article}{ author={Kameko, Masaki}, title={Representation theory and the cycle map of a classifying space}, journal={Algebr. 
Geom.}, volume={4}, date={2017}, number={2}, pages={221--228}, issn={2313-1691}, doi={10.14231/AG-2017-011}, } \bib{kameko-preprint}{article}{ author={Kameko, Masaki}, title={Nilpotent elements in the cohomology of the classifying space of a connected Lie group}, journal={J. Topol. Anal. to appear, Preprint, arXiv:1906.04499}, date={2019}, } \bib{kono-yagita-1993}{article}{ author={Kono, Akira}, author={Yagita, Nobuaki}, title={Brown-Peterson and ordinary cohomology theories of classifying spaces for compact Lie groups}, journal={Trans. Amer. Math. Soc.}, volume={339}, date={1993}, number={2}, pages={781--798}, issn={0002-9947}, doi={10.2307/2154298}, } \bib{totaro-1997}{article}{ author={Totaro, Burt}, title={Torsion algebraic cycles and complex cobordism}, journal={J. Amer. Math. Soc.}, volume={10}, date={1997}, number={2}, pages={467--493}, issn={0894-0347}, doi={10.1090/S0894-0347-97-00232-4}, } \bib{totaro-1999}{article}{ author={Totaro, Burt}, title={The Chow ring of a classifying space}, conference={ title={Algebraic $K$-theory}, address={Seattle, WA}, date={1997}, }, book={ series={Proc. Sympos. Pure Math.}, volume={67}, publisher={Amer. Math. Soc., Providence, RI}, }, date={1999}, pages={249--281}, doi={10.1090/pspum/067/1743244}, } \bib{tripathy-preprint}{article}{ author={Tripathy, Arnav}, title={Further counterexamples to the integral Hodge conjecture}, journal={Preprint, arXiv:1601.06170}, date={2016}, } \bib{vavpetic-viruel-2005}{article}{ author={Vavpeti\v{c}, Ale\v{s}}, author={Viruel, Antonio}, title={On the mod $p$ cohomology of $B{\rm PU}(p)$}, journal={Trans. Amer. Math. Soc.}, volume={357}, date={2005}, number={11}, pages={4517--4532}, issn={0002-9947}, doi={10.1090/S0002-9947-05-03983-8}, } \bib{wilkerson-1983}{article}{ author={Wilkerson, Clarence}, title={A primer on the Dickson invariants}, conference={ title={Proceedings of the Northwestern Homotopy Theory Conference}, address={Evanston, Ill.}, date={1982}, }, book={ series={Contemp. 
Math.}, volume={19}, publisher={Amer. Math. Soc., Providence, RI}, }, date={1983}, pages={421--434}, doi={10.1090/conm/019/711066}, } \end{biblist} \end{bibdiv} \end{document}
\section{Introduction} The investigation of $4f$-containing metals by far-infrared optical spectroscopy provides valuable insight into the nature of strong electronic correlations. This in particular holds true for heavy fermion (HF) compounds where at low temperatures a weak $4f$-conduction electron ($cf$-)hybridization generates mass-renormalized quasiparticles with a coherent ground state which in many HF systems is of the Landau Fermi liquid (LFL) type.~\cite{stewart01} The quasiparticles influence thermodynamic quantities which are described in terms of a large effective mass $m^{*}$ exceeding the free electron mass $m_{0}$ by three orders of magnitude. Furthermore, in typical HF materials, below a single-ion Kondo temperature ($T_{\rm K}$), the coherent state is characterized by a dynamical screening of the $4f$ magnetic moments through the conduction electrons. Several highly correlated metals exhibit so-called non-Fermi liquid (NFL) behavior, {\it i.e.}, strong deviations from renormalized LFL behavior when $T\rightarrow0$~K.~\cite{stewart01} The system YbRh$_{2}$Si$_{2}$ studied in this paper is one of a few clean stoichiometric HF metals with pronounced NFL behavior at ambient pressure which is related to both antiferromagnetic (AF) as well as ferromagnetic quantum critical spin fluctuations in close proximity to an AF quantum critical point (QCP).~\cite{geg02,cus03,geg05} These NFL effects manifest themselves as a divergence of the $4f$-derived increment to the specific heat, $\Delta C/T \propto -\ln T$, and in the electrical resistivity $\rho(T)$, which shows a power-law exponent close to 1 in a temperature range substantially larger than one decade and extending up to $T\simeq10$~K.~\cite{tro00} Transport and thermodynamic properties are consistent with a single-ion Kondo temperature $T_{\rm K}=25$~K (associated with the crystalline-electric-field-derived doublet ground state~\cite{sto05}). 
\indent The electrodynamical response of HF systems is characterized by an optical conductivity $\sigma(\omega)$ which follows at room temperature the {\it classical} Drude model [$\sigma(\omega)~=~N e^{2} \tau/{m^{*} (1 + \omega^{2} \tau^{2})}$; $N$: charge carrier density] with frequency independent $m^{*}$ and scattering rate $1/\tau$.~\cite{DG02} At low temperatures, upon entering the coherent state, large deviations are observed which are caused by many-body effects. Then a narrow, renormalized peak at zero photon energy $\hbar\omega$ = 0~eV is formed and a so-called hybridization gap appears which is related to the transition between the bonding and antibonding states resulting from the $cf$-hybridization.~\cite{deg01,mil87a,mil87b} The coherent part of the underlying strong electron-electron correlations is treated in an {\it extended} Drude model by renormalized and frequency dependent $m^{*}(\omega)/m_{0}$ and $1/\tau(\omega)$:~\cite{web86,awa93,kim94,deg99} \[ \frac{m^{*}(\omega)}{m_{0}} = \frac{N e^2}{m_0 \omega} \cdot \mathrm{Im}\left(\frac{1}{\tilde{\sigma}(\omega)}\right), \qquad \frac{1}{\tau(\omega)} = \frac{N e^2}{m_0} \cdot \mathrm{Re}\left(\frac{1}{\tilde{\sigma}(\omega)}\right). \] Here, $\tilde{\sigma}(\omega)$ is the complex optical conductivity derived from the Kramers-Kronig analysis (KKA) of the reflectivity spectrum $R(\omega)$. The LFL theory predicts a dynamical scattering rate $1/\tau(\omega)~\propto~(2\pi k_{\rm B}T)^2+(\hbar\omega)^2$ which also accounts for the electrical resistivity, $\rho(T)$, growing quadratically with temperature~\cite{deg99}. The $(\hbar\omega)^2$ behavior has indeed been observed in $1/\tau(\omega)$ of many renormalized LFL metals, e.g. YbAl$_3$~\cite{oka04}, CePd$_3$~\cite{web86}, and CeAl$_3$~\cite{awa93}. At the same time, $m^*(\omega)$ increases with decreasing $T$ and $\omega$, indicating the formation of heavy quasiparticles at low temperatures. 
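As a numerical cross-check of these definitions (an illustration, not the analysis pipeline used for the measured data), applying the extended Drude inversion to a synthetic {\it classical} Drude conductivity returns frequency-independent values; note that with $m_0$ in the prefactors the recovered rate is the renormalized one, $(m^*/m_0)(1/\tau)$. The parameter values below are the room-temperature Drude-fit values quoted later in the text, and the $e^{+i\omega t}$ sign convention is assumed.

```python
import numpy as np

# SI constants and the 300 K Drude-fit parameters quoted in the text
e, m0 = 1.602176634e-19, 9.1093837015e-31
N = 2.7e28               # carrier density, m^-3 (2.7e22 cm^-3, Hall result)
m_star = 15 * m0         # Drude effective mass
tau = 1 / 4.0e13         # Drude scattering time, s

omega = np.linspace(1e12, 1e15, 1000)         # angular frequency, rad/s

# Classical Drude conductivity; exp(+i*omega*t) convention assumed
sigma = (N * e**2 * tau / m_star) / (1 + 1j * omega * tau)

# Extended Drude inversion as defined above
m_eff = (N * e**2 / (m0 * omega)) * np.imag(1 / sigma)   # -> m*/m0 = 15
rate = (N * e**2 / m0) * np.real(1 / sigma)              # -> (m*/m0)/tau
```

With these values $\sigma_{DC}=Ne^2\tau/m^*\approx1.3\cdot10^6$~S/m, i.e. $\rho\approx80~\mu\Omega$cm, of the order of the room-temperature resistivity implied by the residual resistivity ratio quoted below.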
NFL behavior in optical properties is typically indicated by a linear frequency dependence of $1/\tau(\omega)$.~\cite{deg99} Up to now, optical NFL effects have been explicitly investigated only for correlated materials whose NFL state is believed to be related to disorder (several U-based Kondo alloys) or to two-channel Kondo physics (UBe$_{13}$).~\cite{deg99} \indent To our knowledge, however, the optical properties of a heavy-fermion NFL state due to spin fluctuations in close proximity to a QCP, as is the case for YbRh$_{2}$Si$_{2}$, have not been investigated so far. As shown by our preliminary optical experiments on YbRh$_{2}$Si$_{2}$, the $T$-linear NFL behavior of the zero-frequency resistivity, $\rho_{\rm DC}(T)$, is also reflected in $\sigma(\omega,T)$ for $T<20$~K, $\hbar\omega<20$~meV and $\omega\tau\gg1$, assuming a frequency independent $\tau$ consistent with a {\it classical} Drude approximation of the data.~\cite{kim04} This behavior was interpreted as the temperature dependence of a renormalized scattering rate of a Drude peak whose tail at $T=2.7$~K was observable just above the lowest measured energy of 10~meV. Moreover, a peak at around 0.2~eV, visible already at $T=300$~K and gradually developing with decreasing temperature, appears beyond a pseudogap-like structure similar to that reported for several other Kondo-lattice systems.~\cite{deg01,oka04,web86,men05} \indent Here we report the extension of our optical investigations down to energies of 2~meV and temperatures down to 0.4~K. This allowed us to obtain hitherto inaccessible information on the low-energy HF optical response of YbRh$_{2}$Si$_{2}$ and provides a detailed characterization of the electrodynamic NFL properties. In particular, the low-energy heavy quasiparticle excitations could be analyzed within the {\it extended} Drude model which yields $m^*(\omega,T)$ and $1/\tau(\omega,T)$. 
\indent Near-normal-incidence $R(\omega)$ spectra were acquired in a very wide photon-energy region of 2~meV -- 30~eV to ensure an accurate KKA. We investigated the tetragonal $ab$-plane of two single crystalline samples with as-grown sample surfaces and sizes of $2.2 \times 1.5 \times 0.1$ mm$^3$ and $3.5 \times 4.2 \times 0.5$ mm$^3$, respectively. The preparation as well as the magnetic and transport properties have been described elsewhere.~\cite{tro00,geg02,cus03} The high quality of the single crystals is evidenced by a residual resistivity ratio of $\rho_{\rm 300K}/\rho_0\simeq 65$ $(\rho_0\simeq1\mu\Omega {\rm cm})$ and a very sharp anomaly in the specific heat at $T = T_{\rm N}$.~\cite{cus03} Rapid-scan Fourier spectrometers of Martin-Puplett and Michelson type were used at photon energies of 2--30~meV and 0.01--1.5~eV, respectively, at sample temperatures between 0.4 and 300~K using a $^4$He ($T\rightarrow 5.5$~K) and a $^3$He ($T\rightarrow 0.4$~K) cryostat. To obtain $R(\omega)$, a reference spectrum was measured using the sample surface onto which gold was evaporated {\it in situ}. At $T=300$~K, $R(\omega)$ was measured for energies 1.2--30~eV by using synchrotron radiation.~\cite{fuk01} In order to obtain $\sigma(\omega)$ via a KKA of $R(\omega)$, the spectra were extrapolated below 2~meV with $R(\omega)=1-(2\omega/\pi \sigma_{DC})^{1/2}$ and above 30~eV with a free-electron approximation $R(\omega) \propto \omega^{-4}$.~\cite{DG02} \begin{figure}[t] \begin{center} \includegraphics[width=0.35\textwidth]{fig1} \end{center} \caption{ (Color online) Temperature dependence of the reflectivity spectrum $R(\omega)$ in the photon energy range of 2 -- 500~meV. Inset: $R(\omega)$ at 5.5 and 300~K in the complete accessible range of photon energies up to 30~eV. } \label{fig1} \end{figure} \indent The temperature dependence of the $R(\omega)$ spectra of YbRh$_{2}$Si$_{2}$ is shown in Fig.~\ref{fig1}. 
The inset shows an extended energy region where above 500~meV $R(\omega)$ is dominated by interband transitions. In this study, we focus only on the intraband transition region below 500~meV where the spectra display a strong temperature dependence. With decreasing temperature $R(\omega)$ gets strongly suppressed, creating a dip structure at around 100~meV. Simultaneously, below 12~meV, $R(\omega)$ approaches unity with decreasing temperature. These pronounced temperature dependences are typical for HF compounds.~\cite{deg99} The clearest correspondence is found when comparing the optical properties of YbRh$_{2}$Si$_{2}$ with those of the intermediate-valent compound YbAl$_{3}$.~\cite{oka04} Their low-temperature, low-energy shapes of $R(\omega)$ are very similar, albeit with a weaker temperature dependence for YbAl$_{3}$, reflecting its much stronger $cf$-hybridization which underlies its intermediate-valence behavior. However, very similar to $R(\omega)$ of YbRh$_{2}$Si$_{2}$ at $T=300$~K, the $R(\omega)$ of the non-magnetic reference compound LaAl$_{3}$ does not show any dip structure. Therefore, as already identified for YbAl$_{3}$, the pronounced low-temperature dip in $R(\omega)$ of YbRh$_{2}$Si$_{2}$ can be related to Yb-$4f$ electronic states near the Fermi energy. With decreasing temperature, the character of the $4f$ states changes from localized to itinerant due to $cf$-hybridization, and optical transitions between the $cf$-hybridized states are expected.~\cite{deg01} This is consistent with the observed $R(\omega)$ dip structure and its temperature evolution in YbRh$_{2}$Si$_{2}$. \begin{figure}[t] \begin{center} \includegraphics[width=0.35\textwidth]{fig2} \end{center} \caption{ (Color online) Temperature dependence of the optical conductivity $\sigma(\omega)$ (solid lines) with corresponding direct current conductivity ($\sigma_{DC}$, symbols). Dashed lines: {\it Classical} Drude model with implicit Drude masses $m^*$ as indicated. 
Corresponding $\sigma_{DC}$ and carrier densities (derived from the Hall coefficient) were used. } \label{fig2} \end{figure} \indent The KKA of $R(\omega)$ yields optical quantities as shown in Fig.~\ref{fig2}. At $T=300$~K, $\sigma(\omega)$ shows normal metallic behavior, {\it i.e.}, a monotonic decrease with increasing photon energy, and a zero-energy extrapolation consistent with $\sigma_{DC}$ (symbols at left axis of Fig.~\ref{fig2}). However, as shown by the dashed line in Fig.~\ref{fig2}, the experimental $\sigma(\omega)$ is poorly represented by a {\it classical} Drude fit [with parameters $m^{*} = 15 m_{0} $, $1/\tau = 4.0 \cdot 10^{13}$ sec$^{-1}$, $N = 2.7 \cdot 10^{22}$~cm$^{-3}$ (Hall effect result~\cite{pas04})]. This discrepancy indicates that the scattering rate depends on photon energy, as discussed below and as shown in Fig.~\ref{fig3}b. With decreasing temperature, a pseudogap-like suppression of $\sigma(\omega)$ appears below 100~meV with a simultaneous increase in $\sigma_{DC}$. A minimum of $\sigma(\omega)$ develops whose position shifts continuously to lower energies as the temperature decreases. The onset temperature of pseudogap formation between 80 and 160~K corresponds to the maximum of $\rho_{\rm DC}(T)$ at $T_{\rm coh}=120$~K which marks the onset of coherence effects upon $cf$-hybridization. This suggests that the temperature dependence of $\sigma(\omega)$ is indeed related to the formation of heavy quasiparticles and the formation of a minimum in $\sigma(\omega)$ may be associated with a heavy plasma mode. \indent As already indicated from the above discussion, the energy and temperature behavior of the optical conductivity implies that highly energy dependent $m^{*}$ and $1/\tau(\omega)$ are involved. For example, $\sigma(\omega)$ at $T=5.5$~K cannot be represented by energy-independent values of both $m^{*}$ and $1/\tau$ within a {\it classical} Drude curve as shown in Fig.~\ref{fig2}. 
Moreover, due to the different temperature dependences in $\sigma(\omega)$ and $\sigma_{DC}$, the {\it classical} Drude analysis emphasizes the need for strongly temperature-dependent and, at low temperatures, very heavy effective masses ($m_{\rm Drude}^{*} = 600~m_{0}$, $1/\tau = 1.6 \cdot 10^{11}$ sec$^{-1}$ at 5.5~K). In general, such behavior of the optical mass and scattering rate reflects electron-electron scattering or electron scattering off spin fluctuations. In the case of HF compounds, a many-body effect due to the $cf$-hybridization is effective at low energies and temperatures where the conduction electrons are scattered resonantly off the hybridized charge carriers.~\cite{deg01} \begin{figure}[t] \begin{center} \includegraphics[width=0.3\textwidth]{fig3} \end{center} \caption{ (Color online) Temperature dependence of (a) the effective mass relative to the free electron mass, $m^{*}(\omega)/m_{0}$, and (b) the scattering rate $1/\tau(\omega)$ as a function of photon energy $\hbar\omega$. Inset of (b) is the low energy part of $1/\tau(\omega)$. Dashed line emphasizes a $1/\tau\propto\hbar\omega$ behavior. } \label{fig3} \end{figure} \indent Such a scattering process is reflected in the temperature- and photon-energy dependences of $m^{*}$ and $1/\tau$ which we obtained from an {\it extended} Drude analysis and which are shown in Fig.~\ref{fig3} for energies lower than the interband transition spectrum. At $T=300$~K both $m^{*}(\omega)/m_{0}$ and $1/\tau(\omega)$ are almost constant, with values of about $15m_{0}$ and $1\cdot10^{14}$ sec$^{-1}$, respectively. Therefore, it is not surprising that $\sigma(\omega)$ at 300~K clearly contains the features of a {\it classical} Drude model as shown in Fig.~\ref{fig2}. With decreasing temperature from 300~K to 0.4~K, $m^{*}(\omega)/m_{0}$ below $\simeq20$~meV monotonically increases and exceeds values of 130. 
Clearly, this enhancement can be related to the HF state formation in YbRh$_{2}$Si$_{2}$ as the enhancement of $m^*(\omega)$ occurs at energies comparable to $k_{\rm B}T_{\rm coh}$. Interestingly, below 10~meV, $m^{*}(\omega)/m_{0}$ does not seem to saturate with decreasing temperature and energy but rather increases continuously. We speculate that this behavior is the energy-domain counterpart of the divergence of the effective electron mass with decreasing temperature, as observed in the electronic specific heat.~\cite{tro00,geg02,cus03} The appearance of a negative mass at energies above $\simeq30$~meV and at low $T$ is caused by a positive $\varepsilon_{1}(\omega)$ indicating a heavy plasma mode (not shown). Equivalently, one may relate transitions across the hybridization gap to the observed negative optical mass. Such behavior is observed in many other heavy-fermion materials.~\cite{bon88, men05, dre02, dor01} The $m^{*}(\omega)/m_{0}$ enhancement with decreasing temperature is accompanied by the formation of a broad peak in $1/\tau(\omega)$ in the energy region where the pseudogap-like suppression of $\sigma(\omega)$ appears, as shown in Fig.~\ref{fig3}b. It is related to the process of mass renormalization as, at 0.4~K, $1/\tau(\omega)$ reaches its maximum at $\simeq22$~meV, which corresponds to the onset of the $m^{*}(\omega)/m_{0}$ enhancement. Again, transitions across the hybridization gap lead to such enhanced dynamical scattering rates, reflecting the particular quasiparticle excitation in accord with hybridization-gap scenarios for HF-derived optical properties.~\cite{mil87a,deg01,awa93} As shown in the inset of Fig.~\ref{fig3}b, the HF state is characterized by $1/\tau~\propto~\hbar\omega$ for energies up to $\simeq7$~meV, which is a pronounced NFL behavior; see the dashed line for the data at 5.5~K. 
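To make the contrast explicit, the LFL prediction quoted earlier, $1/\tau(\omega)\propto(2\pi k_{\rm B}T)^2+(\hbar\omega)^2$, can be compared with the linear law observed here. The sketch below uses illustrative prefactors (not fitted to the data): below $\hbar\omega\approx2\pi k_{\rm B}T\approx3$~meV at 5.5~K the LFL form is dominated by its nearly $\omega$-independent thermal term, quite unlike the observed $1/\tau\propto\hbar\omega$.

```python
import numpy as np

kB = 1.380649e-23        # Boltzmann constant, J/K
meV = 1.602176634e-22    # 1 meV in J

T = 5.5                                    # temperature, K
hw = np.linspace(0.5, 7.0, 200) * meV      # photon energies 0.5-7 meV, in J

# LFL form, up to an arbitrary overall prefactor (illustrative only)
lfl = (2 * np.pi * kB * T) ** 2 + hw ** 2

# Observed NFL behavior: linear in photon energy (arbitrary prefactor)
nfl = hw

# Thermal term of the LFL form; it dominates below hbar*omega ~ 2*pi*kB*T,
# where the LFL rate is nearly flat in omega while the NFL rate keeps
# decreasing linearly towards zero.
thermal = (2 * np.pi * kB * T) ** 2
```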
It is worth recalling that in stoichiometric YbRh$_{2}$Si$_{2}$ NFL effects due to disorder can be excluded.~\cite{tro00} Therefore, we attribute the low-energy linear-in-$\omega$ behavior of $1/\tau(\omega)$ to spin fluctuations due to the close vicinity to the QCP. \begin{figure}[t] \begin{center} \includegraphics[width=0.3\textwidth]{fig4} \end{center} \caption{ (Color online) Temperature dependence of (a) the dynamical mass $m^{*}(T)/m_{0}$ and (b) the scattering rate $1/\tau(\omega,T)$ at specific photon energies as indicated. Dashed line in (b) shows a non-Fermi liquid $1/\tau\propto T$ behavior. } \label{fig4} \end{figure} \indent The extended Drude description of the optical properties of correlated electron systems yields the energy dependence of the renormalization effects. In the low-energy limit the frequency dependence of both $m^*$ and $1/\tau$ should resemble their temperature dependence.~\cite{dre02} This expectation is satisfied when comparing the data of Fig.~\ref{fig3} with Fig.~\ref{fig4}. The latter shows the temperature dependence of $m^{*}(T)/m_{0}$ at 5~meV and that of $1/\tau(T)$ at 5~meV and at 18~meV obtained from Fig.~\ref{fig3}. Note that the $m^{*}(T)/m_{0}$ enhancement occurs below 160~K, which roughly corresponds to the onset energy of mass enhancement. Similar to $m^{*}(\omega)/m_{0}$, $m^{*}(T)/m_{0}$ does not saturate even at the lowest accessible temperature below $T_{\rm K}=25$~K. However, in contrast to the divergence of the electronic specific heat coefficient $\Delta C/T \propto -\ln T$, $m^{*}(T,\omega)/m_{0}$ shows an almost linear increase towards low temperatures, at least for photon energies down to 5~meV. From this discrepancy, we anticipate that a divergence of the optical mass renormalization may occur below the single-ion Kondo energy scale of $k_{\rm B}T_{\rm K}=2$~meV. The mass enhancement with decreasing temperature corresponds to a continuous increase of $1/\tau(T)$ at 18~meV as shown in Fig.~\ref{fig4}b. 
However, below $T_{\rm K}$ the increase of $1/\tau(T)$ at 18~meV becomes stronger. At the same time, $1/\tau(T)$ at 5~meV assumes an NFL temperature dependence which is approximately linear, as the dashed line emphasizes in Fig.~\ref{fig4}b. Therefore, at the single-ion Kondo temperature $T_{\rm K}$, the charge dynamics changes, while a single-ion Kondo scenario fails to explain the magnetic properties of YbRh$_{2}$Si$_{2}$ below $T_{\rm K}$ (large fluctuating $4f$-magnetic moments~\cite{cus03} and a sharp electron spin resonance line~\cite{sic03}). \indent At temperatures near 80~K, $m^{*}(T)/m_{0}$ starts to get enhanced and $1/\tau(T)$ shows a kink or a small peak. This temperature range corresponds to that at which both the $^{29}$Si-NMR Knight shift and relaxation rate show an anomaly,~\cite{ish02} indicating a change in the magnetic characteristics at 80~K. For a proper interpretation of the peak in $1/\tau(T)$, carrier scattering by phonons should also be taken into account. \indent In conclusion, we found distinct electrodynamical non-Fermi liquid behavior of the low-energy charge dynamics of clean ({\it i.e.}, atomically ordered, stoichiometric) YbRh$_{2}$Si$_{2}$. We relate our results to the close proximity of YbRh$_{2}$Si$_{2}$ to an antiferromagnetic quantum critical point, as the latter is the origin of the pronounced NFL effects in thermodynamic and transport properties.~\cite{tro00,geg02,cus03} Our findings were accomplished by measuring the temperature dependence of the optical conductivity of YbRh$_{2}$Si$_{2}$ down to $T=0.4$~K in the photon energy range 2~meV -- 30~eV. From an extended Drude analysis, the scattering rate below $\hbar\omega\simeq7$~meV and below $T\simeq20$~K is consistent with an NFL-like linear dependence on both photon energy and temperature. 
Moreover, towards low temperatures, clear signatures of heavy fermion behavior are found: the formation of an interband peak at 0.2~eV and a heavy plasma mode below 30~meV, both of which can be related to $cf$-hybridization. The low-temperature optical effective mass is strongly enhanced below 20~meV and continues to increase down to the lowest accessible energies (2~meV) and temperatures (0.4~K). \indent We would like to thank Q. Si and O. Sakai for fruitful discussions. This work was carried out as a joint studies program of the Institute for Molecular Science and was partially supported by Grants-in-Aid for Scientific Research (Grant No.~18340110) from MEXT of Japan and by DFG under the auspices of SFB 463 of Germany.
\section*{Photonic quantum information processing} Photonic quantum hardware is developing rapidly, exploiting the benefits of advanced photonic chip technology. Quantum photonics therefore appears as a front runner in quantum technology where real-world applications emerge early. Photonics is indispensable in quantum communication where pulses of light are carriers of quantum information through optical fibers. Figure \ref{fig:fig1} illustrates the concepts covered in the present manuscript. Our point of departure is the availability of efficient photon-emitter interfaces providing deterministic and fully quantum coherent light-matter interaction, cf. Fig. \ref{fig:fig1}(a), which form a basis for deterministic single- and multi-photon sources. Subsequently, photonic integrated circuit (PIC) technology can be exploited for scaling up, e.g., by processing many photonic qubits or for synthesizing advanced photonic resources, cf. Fig. \ref{fig:fig1}(b). In contrast, probabilistic sources require heralding for scaling up, which results in an excess resource overhead \cite{Rudolph2017APLphoton}. These PIC-based quantum processors could potentially be implemented in applications, cf. Fig. \ref{fig:fig1}(c), and we will outline specific architectures tailored to quantum-dot (QD) single-photon hardware within quantum communication and photonic quantum computing. \begin{figure} \centering \includegraphics[width=150mm]{Figure1.png} \caption{Scalable and modular quantum photonic technologies based on deterministic single-photon quantum hardware. (a) Illustration of a deterministic photon-emitter interface in a nanophotonic waveguide. (b) Illustration of a PIC configured for synthesizing multi-photon entangled states. 
(c) Artist's view of advanced photonic quantum networks where communication between network links is facilitated by multi-photon entangled pulses.} \label{fig:fig1} \end{figure} \section{Deterministic and quantum coherent photon-emitter interfaces} A single quantum emitter, e.g., an atom, ion, or solid-state emitter, constitutes the fundamental quantum interface between light and matter. It couples a single excitation of light (the photon) to a single atomic excitation. The coupling is usually weak and any incoherent dephasing processes may degrade the inherent quantum character of the interaction. Both challenges have recently been overcome by using quantum emitters in photonic nanostructures after implementing careful shielding of external noise, cf. Fig. \ref{fig:emitterphoton}. Different quantum emitters are considered at optical frequencies, including QDs, atoms, vacancy centers in diamond, molecules, or excitons in two-dimensional materials \cite{Aharonovich2016np}. Furthermore, the underlying physics applies equally to superconducting qubits in resonators and waveguides \cite{Blais2020arXiv}, although their operation at microwave frequencies precludes applications in quantum communication. In order to present precise figures-of-merit and benchmarks that are essential for projecting towards the proposed applications, we will focus on QDs in nanophotonic cavities and waveguides. This platform has recently matured dramatically, leading to the realization of a near-deterministic and coherent photon-emitter interface \cite{senellart2017natnano,Wang2019natphton,uppu2020scienceadv,tomm2021nn}. \begin{figure} \centering \includegraphics[width=160mm]{Figure2.pdf} \caption{Illustration of a deterministic photon-emitter interface for the exemplary case of a QD in a planar nanophotonic waveguide. The concepts are general and apply to other types of emitters and cavity/waveguide implementations as well. 
The relevant decoherence processes for QDs leading to linewidth broadening are shown, including coupling to phonons, charges, and a fluctuating nuclear spin bath that additionally introduces decoherence of the electronic spin degrees of freedom. The devices can be operated either as a single-photon or entanglement source (left illustration) or as a giant nonlinearity that operates at the single-photon level (right illustration).} \label{fig:emitterphoton} \end{figure} A QD in a single-mode waveguide or nanocavity is a prototypical implementation of a deterministic photon-emitter interface. Figure \ref{fig:emitterphoton} illustrates the case of, e.g., an InAs QD placed in a photonic-crystal waveguide where the QD is positioned such that it couples efficiently to the spatially varying fundamental waveguide mode. The QD exhibits two orthogonal linear transition dipoles, and the waveguide can be aligned with one dipole maximally coupled to the fundamental waveguide mode, while the other dipole is suppressed. The leakage to other modes can be suppressed in cavities and waveguides, which is quantified by the $\beta$-factor specifying the probability that the QD emits into the desired mode. Near-unity $\beta$-factors are routinely achieved in nanophotonics \cite{lodahl2015rmp}; however, for quantum applications all quantum decoherence processes must be suppressed as well. To this end, a relevant figure-of-merit is the degree of indistinguishability (ID) of subsequently emitted photons, and for QDs an ID above 95\% has been reported over time scales long enough to produce more than one hundred indistinguishable photons \cite{uppu2020scienceadv,tomm2021nn}. The photon-emitter interface can be operated as an on-demand source of single photons by resonantly exciting the QD, which subsequently emits photons into the waveguide. 
In another configuration, resonant photons are launched into the waveguide, and the QD serves as a giant nonlinearity that introduces strong correlations between individual photons. These two cases are illustrated in Fig. \ref{fig:emitterphoton} and highlight the versatility of the approach. Realizing highly coherent emitters has been an outstanding challenge for solid-state emitters and requires identifying and combating all noise processes affecting photon emission. For QDs, the relevant decoherence processes are sketched in Fig. \ref{fig:emitterphoton}. They include phononic broadening due to a finite temperature, charge noise from electric charges in the vicinity of the QD, and spin noise from the coupling of the electron spin to the randomly-oriented nuclear spins of the atoms making up the QD. Remarkably, charge noise can be fully suppressed in epitaxially grown and electrically-contacted samples, and nuclear spin noise only leads to minor broadening effects \cite{Kuhlmann2013NatPhys}. Consequently, transform-limited QD emission has been demonstrated \cite{kuhlmann2015nc}, which was subsequently realized also in high $\beta$-factor nanophotonic waveguides \cite{pedersen2020ACSphoton}. In this manner, a coherent and deterministic photon-emitter interface is realized. To realize high-fidelity quantum operation, as required for advanced quantum applications, even minor decoherence contributions need to be accounted for. Phonon scattering remains the fundamental decoherence mechanism that contributes even at cryogenic temperatures, which limits the reported values of ID to the high nineties. Going beyond, an experimentally feasible strategy has been identified for increasing ID above $99\%$ through phonon damping by proper clamping of the nanostructures \cite{dreessen2018qst}. An alternative strategy applies strong Purcell enhancement to increase the emission rate relative to the decoherence rate \cite{Santori2002Nature}.
It has already been established that state-of-the-art QD single-photon sources suffice for realizing Quantum Advantage in a boson sampling quantum-simulation algorithm requiring about 50 high-quality photons \cite{uppu2020scienceadv}, see Sec. II for further discussion. Further experiments will likely show that these sources can emit thousands if not millions of highly indistinguishable photons, since the photon emission is much faster (typically 100 ps) than slow residual drift processes (typically milliseconds). It will be exciting to see in the future whether this massive photonic quantum resource delivered by just a single QD can be a key enabler in advanced quantum-information processing applications. This is intimately connected to how efficiently the generated string of photons can be coupled, switched, and processed, which are topics covered in the following section of the present manuscript. Electrically contacted QDs have the additional benefit that various charge states can be deterministically prepared, leading to diverse opportunities. Loading the QD with a single electron or hole introduces a two-fold metastable ground state corresponding to spin up/down relative to an external magnetic field, serving as a quantum memory of the system. The spin coherence is limited, however, due to the coupling of the charge to the nuclear spin bath. The typical spin dephasing time ($T_2^*$) is nanoseconds for single electrons, which can be extended to a hundred nanoseconds for hole spins \cite{huthmacher2018prb}. However, the spin coherence time ($T_2$), which is relevant in protocols where spin-echo refocusing is implemented, can reach the level of microseconds \cite{Stockill2016natcomm}. Importantly, since emission is rapid, a QD can emit many photons within the spin coherence time, which is essential for the scalability of advanced multi-photon entanglement sources, as will be described in a later section. Multi-particle excitations provide further opportunities.
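As a rough sanity check of the scalability argument above, the number of photons available within the spin coherence window follows from simple arithmetic. The emission-cycle time below is an assumed illustrative value (radiative emission of $\sim$100 ps plus excitation-pulse overhead), not a figure from the cited works.

```python
# Back-of-envelope estimate: photons emitted within the spin coherence time.
# The ~1 ns cycle time is an assumed illustrative value (radiative emission
# of ~100 ps plus excitation-pulse overhead), not a measured figure.
def photons_within_coherence(t_coherence_s, t_cycle_s):
    """Number of emission cycles fitting within the spin coherence time."""
    return round(t_coherence_s / t_cycle_s)

print(photons_within_coherence(100e-9, 1e-9))  # ~100 photons within T2* (hole spin)
print(photons_within_coherence(1e-6, 1e-9))    # ~1000 photons within T2 (spin echo)
```

Even with these conservative assumptions, hundreds of photons fit within a microsecond-scale coherence time, which underlies the scalability claim.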
Biexciton states consist of two electrons/holes in the conduction/valence bands of the QD and recombine through a cascaded two-photon process. The availability of two indistinguishable decay paths leads to the generation of polarization-entangled photons on-demand \cite{Benson2000prl,liu2019natnano}. Coupling multiple QDs is another essential requirement, e.g., for the generation of advanced multi-dimensional entangled states. This can be achieved by exploiting coherent electronic tunnel coupling \cite{greilich2011np} or by optical dipole-dipole interaction, possibly engineered by the photonic nanostructure \cite{Grim2019natmat}. QD inhomogeneities introduced during growth remain a major challenge for scaling up from present-day few-QD experiments to many. Importantly, most of the applications that are considered in the present manuscript require only a few, and sometimes even just one, QD. Nonetheless, overcoming QD inhomogeneous broadening would constitute a major breakthrough, and new selective-area growth methods may provide a pathway to realize the required control on the atomic scale \cite{krizek2018prm}. \begin{figure} \centering \includegraphics[width=0.5\columnwidth]{Figure3.pdf} \caption{ Photon quality requirements for scaling up the boson sampling algorithm. The plot shows the trace distance separation $(\Delta)$ \cite{Shchesnovich2015pra} between a real boson sampler with a finite degree of ID (V) of the photons and a perfect boson sampler with $V = 100 \%$ as a function of the number of photons. $\Delta=0$ corresponds to the computationally easy case of distinguishable photons, while $\Delta>0$ is the regime where computational hardness appears.
$V \geq 96\%$ has been realized with QDs over long strings of many photons \cite{uppu2020scienceadv,tomm2021nn}, enabling boson sampling in the QA regime of $\approx 50$ photons with $\Delta>0.4$.} \label{fig:bosonsampling} \end{figure} \section{Quantum hardware enabling a quantum advantage} What are the performance requirements of quantum hardware for carrying out tasks that are impossible with classical hardware? Obviously, this question has no simple answer, since it depends on the specific application targeted among the multitude of potential applications of quantum technology. Nonetheless, this question is essential for the researchers developing the quantum hardware, since clear performance benchmarks are required for guiding future work. Despite the diversity of applications, often the same physical parameters are relevant, since transformative quantum applications rely on similar physical principles, e.g., quantum-superposition states or multi-particle entanglement. In the following, the essential physical parameters for photonic qubits are discussed. Quantitative benchmarking requires zooming in on the precise application. In the context of quantum simulations/computing, a quantitative benchmark is defined that is generally referred to as Quantum Advantage (QA) \footnote{In the literature the term Quantum Supremacy is also prevalent. However, as has been discussed in a commentary (Palacios-Berraquero et al., \textit{Nature} \textbf{576}, 213 (2019)), the term Quantum Advantage captures the importance of this rather technical quantum hardware achievement while avoiding associations to the historical meaning of the term supremacy.}: QA signifies the threshold at which the accessible hardware can implement a specific quantum algorithm that cannot be realized on even the world's largest supercomputer \cite{preskill2012arx}, since the classical algorithm scales exponentially with the size of the system being simulated.
QA was realized with superconducting qubits in 2019 by Google \cite{arute2019nat}. Boson sampling \cite{aaronson2011computational} is an algorithm formulated specifically for photonics, which is realized by linear interference of highly ID single photons and sampling from the photon distribution. About 50 high-quality photons suffice to reach QA, although the exact number may change as optimized classical algorithms are developed. Figure \ref{fig:bosonsampling} quantifies the required quality of the photons for the boson sampling algorithm. So far, a 20-photon boson sampling experiment with a QD source has been performed \cite{Wang2019prl}, while it has been shown that improved QD sources allow scaling further up and into the regime of QA \cite{uppu2020scienceadv}. An explicit photonics QA demonstration was very recently reported \cite{Zhong2020science}, albeit in this case squeezed-light sources rather than single-photon sources were applied, realizing a Gaussian boson sampling algorithm. Nonetheless, this experiment constitutes a very important milestone for photonic quantum computing in explicitly demonstrating a setup that can process and detect the many optical modes. Furthermore, the setup could be generalized to also control single photons from the optimized QD sources for an explicit QA demonstration of photonic qubit technology. What are the next steps beyond the QA demonstrations? This is another essential question. Indeed, the current QA simulators are not solving any relevant problems, and hence, to justify the huge experimental efforts required, it is essential that they constitute stepping stones towards addressing pertinent problems. The present article highlights some of the opportunities identified for quantum photonics based on deterministic photon-emitter interfaces, including the route towards realizing them.
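The classical hardness of boson sampling referred to above can be made concrete: the amplitude for a given $n$-photon output pattern is given by the permanent of an $n \times n$ submatrix of the interferometer unitary, and exact classical evaluation, e.g., via Ryser's formula, sums over all $2^n$ column subsets. A minimal sketch:

```python
# Ryser's formula for the matrix permanent: the quantity whose evaluation
# makes exact classical simulation of boson sampling exponentially costly.
# This naive version runs in O(2^n * n^2) time.
def permanent(A):
    n = len(A)
    total = 0
    for mask in range(1, 1 << n):              # every non-empty column subset S
        sign = (-1) ** bin(mask).count("1")    # (-1)^|S|
        prod = 1
        for row in A:                          # product over rows of row-sums on S
            prod *= sum(row[j] for j in range(n) if (mask >> j) & 1)
        total += sign * prod
    return (-1) ** n * total

print(permanent([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10
```

Already at $n = 50$ the sum runs over roughly $10^{15}$ subsets, which gives a feel for the $\approx 50$-photon QA threshold quoted above.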
The concept of QA, as discussed above, can also be formulated in a broader context than for quantum simulations. In this spirit, any protocol that exploits inherent quantum effects to realize applications that are not possible with classical resources could be referred to as QA. This entails applications such as device-independent quantum key distribution, quantum repeaters, and certain quantum sensing protocols, to mention a few examples. The breakdown of these protocols into actual hardware architectures, including thorough benchmarking of the hardware requirements for entering the QA regime, would constitute important guidelines for the quantum hardware development. \section{Photonic building blocks} The application of photon-emitter interfaces in quantum photonics technologies requires interfacing to additional functionalities. Quantum photonics is favorable since a modular scaling-up approach applies, where fundamental building blocks of sufficiently high quality are combined into complex architectures. Furthermore, the hardware can be fabricated with advanced nanofabrication equipment that has been developed for classical photonics applications. Photonics technology offers high functional stability, mass producibility, and an ultimate level of integration \cite{Sun2015nature}. Importantly, the quantum photonics applications are compatible with classical hardware development of PICs \cite{Bogaerts2020nat}, yet the performance requirements of quantum technology are pushing current boundaries, in particular requiring ultra-low-loss operation. These improvements would lead to significant spill-over of technology into the area of classical ``green IT technology'' \cite{Shaikh2015ieee}, where the rapidly growing energy consumption of the internet is a concern \cite{Morley2018energy}.
Quantum applications require low-loss performance due to the ubiquitous ``no-cloning theorem'' \cite{Wootters1982nature} stating that quantum information cannot be amplified without noise penalties. As a consequence, the scalability of quantum photonics is intimately linked to the loss performance of all involved components. In the following, we will briefly outline the functionality and performance of photonic devices required for scaling up. Figure \ref{fig:photonic_modules} outlines a vision for a general-purpose photonic quantum processor comprising photon sources, couplers, switches, converters, detectors, and more. A hybrid configuration \cite{Elshaari2020np,kim2020optica} consisting of a source chip and a processing chip is sketched, which would be the most flexible approach with currently available technology based on material-compatibility considerations. Long-term, the full integration of photonic quantum processors on a single chip can be envisioned. The source chip is based on a direct-bandgap semiconductor material, e.g., GaAs, hosting high-quality QDs to produce photons. Photon-photon nonlinear interaction can be mediated by scattering off the QDs, which implements photonic two-qubit gates. Additional functionalities can be implemented on the source chip, e.g., filters to remove residual light from pump or spin-control pulses, or switches to de-multiplex the photons. High-efficiency mode converters are required to couple photons out of the source chip and into the processing chip. In between the two chips, frequency-conversion modules could optionally be implemented, e.g., to convert the photon frequency to compensate variations between different QDs or to reach the telecom band as required for quantum communication. Furthermore, optical fiber delays can be inserted for proper timing of the photon stream. The processing chip would carry out the actual quantum operation, e.g., a quantum simulator algorithm, on the resource produced by the source chip.
This generally requires a reconfigurable circuit in order to mutually interfere the photons, combined with low-loss optical delay lines, filters, and integrated single-photon detectors. In addition, fast feed-forward from the detectors to the reconfigurable circuit is essential in many advanced applications \cite{zanin2020opex}. The detectors are preferably integrated in the processing chip, and superconducting nanowire single-photon detectors (SNSPDs) are very well suited for this. Figure \ref{fig:photonic_modules} illustrates the building blocks for general-purpose photonic quantum-information processing. For specific applications, an actual chip design would need to be laid out. For the processing tasks, explicit photon-photon nonlinear interaction may be a major asset; therefore, more extended hybrid configurations can also be envisioned, involving more than the two chips illustrated in Fig. \ref{fig:photonic_modules} or active routing back and forth between the two chips. In the following, we will briefly outline the operational principles and specifications of the various building blocks of the proposed architectures. Our aim is not to exhaustively cover the vast number of developments in integrated photonics, but rather to point to some opportunities for specific hardware that is compatible with the QD photon-emitter interfaces considered here. \begin{figure} \centering \includegraphics[width=\columnwidth]{Figure4.pdf} \caption{Illustration of basic functionalities required to construct a general-purpose quantum processor based on deterministic photon-emitter interfaces. A source chip comprises QD photon sources (single-photon sources and spin-based entanglement sources) together with spectral filters, photon routers, and nonlinear units. The prepared photonic resource is subsequently coupled off-chip with efficient mode converters for various applications either in quantum communication (fiber link) or quantum computing (processing chip).
After the source chip, variable fiber delays and frequency-conversion units can optionally be implemented. The processing chip contains mode converters, on-chip optical delays, reconfigurable circuits to implement unitary optical transformations, filters, and detectors. Feed-forward from detection to the circuit is required as well. The various photonic modules are discussed in the main text. } \label{fig:photonic_modules} \end{figure} \underline{Mode converters:} Routing photons efficiently in and out of the photonic chip is required to realize hybrid architectures. The coupling efficiency from the chip and into a single-mode fiber is a relevant and quantitative figure-of-merit, although chip-to-chip efficiencies are important as well. Different approaches have been researched: end-fire coupling is favoured for its wide bandwidth, and 86\% efficient coupling from an inverse tapered waveguide to a cleaved fiber has been realized \cite{cohen2013oe}. Another approach exploits surface gratings for vertical outcoupling from the chip, where specially apodized grating couplers combined with a metal mirror below a thin substrate have reached 86\% coupling efficiency \cite{Ding2014ol}. Finally, evanescent coupling between waveguides and tapered fibers has demonstrated impressive transfer efficiencies exceeding 95\% \cite{Tiecke2015optica}, although it remains a challenge to scale up this method to many fibers due to the demanding requirements in terms of alignment precision. For large-scale integrated quantum devices, a scalable approach is desirable, which favours the former two approaches. \underline{Photonic switches}: Switches are key components in quantum photonics, since they enable directing single photons into, e.g., different spatial modes, corresponding to single-qubit operations. Essential switching figures-of-merit include the operation speed, switching contrast, insertion loss, and device footprint, and cryogenic compatibility is often a necessity.
Ideally, the overall switching time (switch repetition and on/off time) is shorter than the radiative emission time of the quantum emitter (i.e., sub-ns), since this would allow controlling each emitted photon. However, in many practical cases a much lower switching speed can be tolerated, since the photon source may not be operated at the highest possible internal repetition rate and/or switching of blocks containing, e.g., 10 photon pulses suffices. The switching of blocks instead of each individual photon only linearly decreases the count rate when de-multiplexing a deterministic single-photon source, as opposed to optical loss, which scales exponentially with photon number. Usable switching rates range from tens of MHz to several GHz, which are achievable, e.g., with electro-optical devices \cite{Lenzini2017lpr} or nano-electro-mechanical devices \cite{Papon2019optica}. These methods also allow controlling the splitting ratio, whereby arbitrary photonic qubits can be prepared. As mentioned above, the switching loss budget is essential, and various material platforms are considered featuring low-loss waveguides including silicon nitride (SiN) \cite{Bauters2011oe}, silicon (Si) \cite{Li2012oe}, and lithium niobate (LiNbO$_3$) \cite{Zhang2017optica}. The overall switch footprint, determined by the refractive-index contrast and applied switching method, is essential for low-loss performance as well. Three electro-optical switches in LiNbO$_3$ have been integrated and operated at 80 MHz for realizing single-photon de-multiplexing of a QD into four modes \cite{Lenzini2017lpr}. Recent breakthroughs in thin-film LiNbO$_3$ technology have attracted much attention for building fast and low-loss switches \cite{wang2018nature} that are well suited for quantum applications. Furthermore, combining different materials provides a promising approach to boosting the electro-optic effects, leading to a lower footprint, as demonstrated on the Si platform \cite{he2019natphoton}.
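The contrast between block switching (a linear rate penalty) and optical loss (an exponential penalty in photon number) can be illustrated with a toy model; the functional form and the numbers below are illustrative assumptions, not figures from the cited works.

```python
# Toy model: N-photon rate with per-photon transmission eta and block
# de-multiplexing in blocks of B pulses. Loss enters exponentially in N,
# the block size only as a constant prefactor. (Illustrative model only.)
def n_photon_rate(rep_rate_hz, eta, n, block_size=1):
    return rep_rate_hz / block_size * eta ** n

base = n_photon_rate(1e9, 0.99, 20)
# A 1% drop in per-photon transmission at N = 20 costs ~18% of the rate...
lossy = n_photon_rate(1e9, 0.98, 20)
# ...whereas doubling the block size costs exactly a factor of two.
blocked = n_photon_rate(1e9, 0.99, 20, block_size=2)
print(lossy / base, blocked / base)
```

This is why the loss budget, rather than the switching granularity, ultimately limits scalability.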
Nano-opto-mechanical devices based on electrostatic or piezoelectric controls offer novel opportunities \cite{midolo2018nnano}. These devices feature a small footprint and therefore ultra-low loss, since material-induced electro-optical effects are not required. Furthermore, the capacitive nature of the actuation potentially leads to low electrical noise. Switching speeds of up to 12 MHz have been achieved \cite{Haffner2019science}, and single photons from a QD were routed with only 15\% switch insertion loss \cite{Papon2019optica}. Furthermore, wafer-scale integration of 240 $\times$ 240 switching arrays has been realized \cite{Seok2019optica}, showing the exciting potential for scalability. \underline{Optical filters}: Optical control pulses are generally required for operating photon-emitter interfaces, and on-chip optical filters allow, e.g., rejecting pump stray light or removing phonon sidebands for increased photon ID. Reconfigurability is essential in order to tune the filter to the individual QD frequency. Two approaches can be considered, using either tunable high-quality-factor ($Q$) cavities \cite{elshaari2017natcom,elshaari2018nl,zhou2019chip} or multi-layer gratings \cite{li2019oe}. Along with low insertion loss, ideal filters feature high extinction of stray light, tailored bandpass linewidths, wide-band tunability to cover the spectral inhomogeneity of QDs, and operation at cryogenic temperatures. Spectral filtering of QD single-photon sources has been realized using thermal and strain tuning with a hybrid approach \cite{elshaari2017natcom,elshaari2018nl} and based on nano-opto-mechanical tuning in monolithic GaAs devices \cite{zhou2019chip}. A quantitative benchmark is to realize $Q \approx 10^4$, which suffices for filtering out phonon sidebands.
\underline{Optical delays}: Processing of photonic quantum information typically requires optical delay lines, which can either be realized on-chip with low-loss optical waveguides/cavities or by routing photons off-chip and into an optical fiber delay. Photon propagation generally does not introduce decoherence (apart from residual loss), and therefore a low-loss optical delay line controlled by an optical switch constitutes a practical quantum memory for photons. Ultra-low-loss delay lines of up to 27 m (corresponding to 136 ns) have been realized on a SiN chip featuring $<$0.1 dB/m loss \cite{Bauters2011oe,lee2012natcom}. Such a delay would suffice for coupling individual photons from deterministic chains of about one hundred photons. For comparison, typical fiber loss is 3.5 dB/km at the current operation wavelength of QD sources (about 950 nm), which is improved to 0.18 dB/km at the telecom C-band. \underline{Frequency conversion}: With current growth and fabrication methods, solid-state quantum emitters have limited tunability, and inhomogeneity between emitters is an issue. Nonlinear frequency conversion may be implemented to overcome these issues, and can conveniently translate the photon frequency all the way to the telecom C-band, as required for quantum-communication applications. Frequency conversion applies a strong and tunable pump laser to bridge the energy difference between the initial and target photon frequencies using $\chi^{(2)}$ or $\chi^{(3)}$ nonlinear materials. Frequency conversion of a QD source to the telecom C-band has been reported in an external periodically-poled LiNbO$_3$ crystal, leading to an end-to-end efficiency of $\approx$35\% \cite{weber2019natnano}. The efficiency can likely be improved further, e.g., by engineering the coupling to the nonlinear crystal, since the internal conversion efficiency may reach near unity.
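The loss figures quoted above for delay lines and fibers convert to transmission probabilities through standard dB arithmetic; the helper below simply applies this conversion to the numbers from the text.

```python
# Convert propagation loss (dB per unit length) to power transmission:
# T = 10^(-loss_dB / 10).
def transmission(loss_db_per_unit, length):
    return 10 ** (-loss_db_per_unit * length / 10)

print(transmission(0.1, 27))    # 27 m SiN delay at 0.1 dB/m      -> ~0.54
print(transmission(3.5, 1.0))   # 1 km fiber at 3.5 dB/km (950 nm) -> ~0.45
print(transmission(0.18, 1.0))  # 1 km fiber at 0.18 dB/km (C-band) -> ~0.96
```

The order-of-magnitude gap between the 950 nm and C-band fiber transmission per kilometer is the quantitative motivation for the frequency-conversion modules discussed above.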
It should be noted that the nonlinear conversion process could introduce decoherence, thereby reducing the ID of the converted photons, as has been reported recently; this requires further attention and optimization \cite{morrison2021arx}. Advances in thin-film LiNbO$_3$ \cite{wang2018optica} and modal phase-matching of GaAs waveguides \cite{chang2018lpr} hold promise for realizing on-chip $\chi^{(2)}$ nonlinear conversion. Finally, the $\chi^{(3)}$ nonlinearity of integrated Si or SiN waveguides and cavities has been applied for frequency conversion of QD single-photon sources \cite{Singh2019optica}. \underline{Single-photon detectors}: To scale up quantum photonics, all components need to be low-loss and therefore mutually compatible. This applies as well to the read-out of photonic quantum information. The recent decades have witnessed very important progress on single-photon detectors \cite{hadfield2009np}. Key specifications of single-photon detectors include: low timing jitter, high-speed operation, near-unity efficiency, low dark-count rates, and preferably compatibility with PIC technology. SNSPDs have emerged as a very promising technology matching all these requirements \cite{you2020nanophoton}, reaching $\geq$98\% detection efficiency \cite{Reddy2020optica}, count rates $>$1.5 GHz \cite{zhang2019ras}, $<$10 dark counts/s \cite{marsili2013np}, and $< 3$ ps timing jitter \cite{Korzh2020natphoton}. Furthermore, photon-number-resolving capabilities can be realized using arrayed SNSPDs with advanced configuration schemes \cite{zhu2018natphoton}. All this progress makes SNSPDs a mature technology today that can be readily implemented in the complex architectures considered here. \underline{Reconfigurable photonic circuits}: Advanced PICs can be fabricated in commercial foundries, providing a very mature and flexible platform capable of processing large photonic resources.
Quantum photonic PICs typically comprise an array of input waveguides containing photonic qubits that are subsequently coupled in a complex architecture of Mach-Zehnder interferometers. These circuits are reconfigurable by the use of thermo-optical transducers and can be scaled up to remarkable complexity. For example, a universal linear-optics circuit was constructed based on 26 input waveguides and 88 Mach-Zehnder interferometers \cite{Harris2017natphoton}, although these systems typically use probabilistic photon sources \cite{Carolan2015science}. Rooted in this technology, the perspective of the present manuscript is to add additional quantum photonics resources, notably deterministic single-photon and entanglement sources and quantum nonlinearity (cf. Fig. \ref{fig:photonic_modules}), to go beyond the paradigm of linear quantum optics for advanced applications. To this end, the maturity of PICs is a major asset of photonics as compared to other qubit technologies. With PIC technology, the ultimate scaling up to process thousands and millions of qubits can be foreseen, which is required for the long-term applications of fault-tolerant quantum computing \cite{Rudolph2017APLphoton}. \section{Photonic quantum resources} \begin{figure} \centering \includegraphics[width=\columnwidth]{Figure5.pdf} \caption{(a) Sketch of a demultiplexing setup that switches subsequently emitted photons from a deterministic source into separate spatial modes and compensates for the optical delays to produce $N$ separate single-photon sources. Here $N=4$ is illustrated. (b) Rate of producing $N$ photons in an $N$-channel demultiplexing setup for a deterministic single-photon source (source efficiency $78\%$ operated at $1$ GHz repetition rate) for various values of loss per switching event, including also realistic fiber and mode-matching losses (total efficiency of $90\%$ \cite{Wang2019prl,Hummel2019apl}).
(c), (d) Layout of photonic circuits for realizing heralded entangled Bell and three-photon GHZ states, respectively. (e) Level structure of a biexciton cascaded decay producing a polarization-entangled Bell state on demand, which can be efficiently coupled out of a rotationally symmetric photonic nanostructure such as the indicated ``Bull's eye grating''. (f) Protocol for deterministic generation of a multi-photon cluster state by repeatedly exciting a QD that subsequently emits photons to the waveguide. By implementing coherent spin rotations, entanglement is generated, where the qubit is encoded in either an early (e) or a late (l) time bin.} \label{fig:resources} \end{figure} In this section, we will discuss the photonic quantum resources that can be realized with deterministic photon-emitter interfaces in conjunction with the building blocks considered in the previous section. By fully exploiting the quantum light-matter interface, a wide selection of high-fidelity quantum states can be prepared on demand, which testifies to the flexibility of the approach, a major asset for scalability. \underline{Multi-photon source:} A deterministic single-photon source can be demultiplexed to realize multi-photon sources. A demultiplexing architecture is illustrated in Fig. \ref{fig:resources}(a): the deterministic train of photons is routed to different spatial modes by cascading electro-optical switches, and the photons' mutual delays are compensated by inserting varying optical delays (e.g., fibers) in the output modes. The scalability of this approach is ultimately determined by the residual switch and delay loss, cf. Fig. \ref{fig:resources}(b), while state-of-the-art QD sources are capable of delivering many high-quality qubits, as discussed previously. This highlights the general opportunity for QD sources: even few matter qubits can produce many high-fidelity photonic qubits that subsequently can be demultiplexed and processed.
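The scaling of such a demultiplexed source can be sketched with a simple model: each photon traverses roughly $\log_2 N$ switches in a binary demultiplexing tree, and all $N$ photons must survive. The functional form below is an assumption for illustration; the exact model behind Fig. \ref{fig:resources}(b) may differ.

```python
import math

# Toy estimate of the N-photon rate of an N-channel binary-tree
# demultiplexer: each photon traverses ~log2(N) switches, every photon
# must survive, and one N-photon event needs N consecutive pulses.
# (Assumed model; the exact model behind the figure may differ.)
def n_photon_demux_rate(rep_rate_hz, eta_source, eta_switch, eta_other, n):
    depth = math.ceil(math.log2(n))
    p_photon = eta_source * eta_switch ** depth * eta_other
    return rep_rate_hz / n * p_photon ** n

# Source efficiency 78% at 1 GHz and 90% fiber/mode-matching efficiency
# (numbers from the figure caption); switch loss is the free parameter.
for eta_sw in (1.00, 0.99, 0.95):
    print(eta_sw, n_photon_demux_rate(1e9, 0.78, eta_sw, 0.90, 8))
```

Even modest per-switch loss compounds through both the tree depth and the photon number, which is why the switch loss budget dominates the scalability analysis.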
So far, demultiplexing of up to 20 simultaneous photons from a QD has been experimentally realized using bulk optical components \cite{Wang2019prl}, and the improved QD sources allow scaling into the QA regime, cf. Sec. II. Low-loss chip-integrated demultiplexing schemes could potentially scale the multi-photon sources even further, since the influence of out-coupling loss from the chip could be reduced. \underline{Heralded entanglement sources}: Based on highly ID multi-photon sources, more advanced entanglement sources can be synthesized by quantum interference. Heralding can be incorporated into the scheme, at the cost of additional photons, whereby entanglement can be generated on demand. Specific examples are two- and three-photon entanglement, exemplified by Bell and Greenberger-Horne-Zeilinger (GHZ) states. These are essential building blocks also for more advanced multi-photon entangled states; indeed, it has been shown that three-photon GHZ states suffice for synthesizing a universal multi-photon cluster state by ballistic scattering in a linear-optics circuit \cite{GimenoSegovia2015prl}. Figure \ref{fig:resources}(c) and (d) illustrate the linear-optics circuits required to produce heralded Bell states and three-photon GHZ states starting from four and six photons, respectively \cite{Zhang2008pra, Varnava2008prl}. These protocols have been realized with probabilistic sources \cite{Barz2010natphot}, thus with limited efficiency. However, very recently heralded Bell-state generation was demonstrated with a deterministic QD source as well \cite{li2020arx}. Unfortunately, the linear-optics approach introduces an unavoidable effective loss; for instance, the Bell-pair generation method of Fig. \ref{fig:resources}(c) succeeds with a heralding probability of $3/16$. Nonetheless, entanglement generation rates exceeding 1 MHz are within reach with deterministic QD sources, which would be an important step forward compared to the performance of probabilistic sources.
\underline{Deterministic Bell entanglement sources:} An alternative route to entanglement generation exploits a QD radiative cascade. Biexcitons consist of two electrons and two holes confined to a QD and can be deterministically prepared, thereby alleviating the need for heralding. The radiative recombination of biexcitons has been proposed for Bell-state generation \cite{Benson2000prl}, cf. Fig. \ref{fig:resources}(e). Here the presence of two indistinguishable decay paths implies that entanglement is generated, provided that the intrinsic QD fine-structure splitting can be tuned to zero. The efficiency of the entanglement source can be boosted with photonic nanostructures as well. However, in the present implementation, in-plane rotationally symmetric structures are required to retain the polarization symmetry needed for polarization entanglement. Figure \ref{fig:resources}(e) shows a ``Bull's eye grating'' that has been successfully applied for Bell-state generation \cite{liu2019natnano}. Biexciton cascaded emission can also be exploited for hyper-entanglement generation (entanglement in both time and polarization degrees of freedom) \cite{Prilmuller2018prl}. Such hyper-entangled states could enhance the channel capacity in quantum-communication protocols or enable deterministic entanglement purification \cite{Sheng2010pra}. \underline{Deterministic multi-photon entanglement sources:} Introducing a spin in a QD leads to additional opportunities for the generation of entanglement. The coherent control of an electron's or hole's spin can be utilized to entangle subsequently emitted photons: the `quantum knitting machine' \cite{gershoni2018celo}. A metastable spin ground state in a QD is obtained by tunneling in a single carrier (either an electron or a hole) in electrically gated devices; the corresponding level system is depicted in Fig. \ref{fig:resources}(f), where an external magnetic field is applied to align the spin and Zeeman-tune the energy levels.
Spin-photon entanglement has been demonstrated \cite{gao2012nature}, which, in combination with a repeated and alternating sequence of spin rotations and photon emissions, has led to the explicit demonstration of three-qubit entanglement \cite{Schwartz2016science}. It has been an open question how these encouraging results can be scaled up in future experiments given the physical imperfections of the photon-emitter interfaces. To this end, it was predicted that for realistic physical parameters, QDs in nanophotonic waveguides may generate long (i.e., $>10$ photons) multi-photon cluster states with an infidelity per photon of only $1.6 \%$ \cite{tiurev2020arx}, by implementing a particularly favorable time-bin encoding protocol, cf. the layout of Fig.~\ref{fig:resources}(f). Excitingly, the fidelity of such multi-photon entangled states is approaching the demanding requirements for measurement-based fault-tolerant quantum computing. It remains to be seen whether such sources can break new ground for photonic quantum simulators and advanced quantum communication protocols, possibly even without reaching the threshold of fault tolerance. \underline{Higher-dimensional photonic cluster states:} For the most advanced quantum photonics applications, notably for measurement-based quantum computing \cite{Briegel2009natphys}, photon entanglement along a one-dimensional string is not sufficient. Rather, two- or three-dimensional entangled clusters are required. Such higher-dimensional cluster states can be synthesized by linear-optics fusion gates \cite{GimenoSegovia2015prl}, albeit at the cost of reduced efficiency and a vast number of ancillary photons. Deterministic sources can be developed as well, either by coherently coupling two quantum emitters hosting spins \cite{Economou2010prl} or by routing back a one-dimensional photon string in real-time to the spin to create entanglement links beyond nearest neighbours \cite{pichler2017pnas}. 
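To put the per-photon infidelity quoted above in perspective, a minimal sketch of the fidelity scaling, under the simplifying assumption that each emitted photon contributes an independent error (this is a back-of-the-envelope model, not the error analysis of Ref. \cite{tiurev2020arx}):

```python
# Illustrative scaling of multi-photon cluster-state fidelity, assuming
# each photon contributes an independent infidelity eps. eps = 1.6% is
# the per-photon value quoted in the text; the independence assumption
# is a simplification made here for illustration.

def cluster_fidelity(n_photons, eps=0.016):
    """Overall fidelity of an n-photon string: F = (1 - eps)^n."""
    return (1.0 - eps) ** n_photons

for n in (5, 10, 20, 50):
    print(f"{n:3d} photons: F = {cluster_fidelity(n):.3f}")
```

Under this assumption a 10-photon string retains an overall fidelity of roughly $0.85$, illustrating why per-photon errors at the percent level set the practical limit on achievable cluster sizes.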
Coupled QDs can be realized either via tunnel coupling or optical coupling, as discussed previously. However, an all-optical spin-spin gate may be realized as well by placing the QDs in each separate arm of an interferometer and sending a single photon through. Heralded by the observation of the photon in one output mode from the interferometer, a spin-spin gate operation can be realized \cite{Mahmoodian2016prl}. This gate can be implemented with near-unity fidelity and success probability in the limit where the $\beta$-factor of the photon-emitter coupling approaches unity. \underline{Nonlinear quantum optics with photon-emitter interfaces:} The deterministic photon-emitter interface can also be exploited as a giant photon nonlinearity, leading to novel opportunities. A single quantum emitter can only scatter a single photon at a time, and if a narrow-band (relative to the emitter linewidth) photon interacts coherently with a high-$\beta$-factor emitter, the scattering probability approaches unity. As a consequence, strong photon-photon correlations will be introduced during the scattering process if a pulse contains two or more photons \cite{lejeannic2021prl}. Adding a spin to the quantum emitter realizes a photonic switch controlled by a single spin \cite{javadi2018nn}, and if the spin is coherently controlled, a Schr\"odinger cat state can be produced \cite{chang2007natphys}. Such a nonlinear interaction constitutes a non-Gaussian photonic operation. Interestingly, non-Gaussian operations constitute the missing key functionality in quantum-information processing based on continuous variables \cite{Braunstein2005rmp}, as opposed to the discrete qubit technology otherwise considered here. Hybrid discrete-continuous variable photonic quantum architectures remain an interesting future research direction. 
\begin{figure} \centering \includegraphics[width=\columnwidth]{Figure6.pdf} \caption{Applications of deterministic photon-emitter interfaces in quantum communication and quantum computing. (a) Generic quantum cryptography protocol for sending encrypted keys in single photons. (b) Operation principle of a one-way quantum repeater where a qubit is encoded non-locally in a photonic cluster state, sent through a lossy channel, and re-encoded at the next station. (c) Generic layout of a reconfigurable PIC that is fed by QD sources to realize a NISQ device. (d) Generic layout of a quantum photonics neural network, composed of sources, reconfigurable linear PICs, nonlinear interaction layers, and efficient detectors. (e) Illustration of small-scale photonic cluster state generation with QD sources. (f) Probabilistic fusion of small-scale cluster states into a percolated cluster state for quantum computing. } \label{fig:applications} \end{figure} \section{Applications} Emerging quantum technology offers a plethora of novel applications in various areas. Here we will focus on specific applications for which photon-emitter interfaces seem particularly well suited, with no attempt at being exhaustive. Two main application areas will be discussed: quantum communication and photonic quantum computing. Quantum cryptography exploits encoded quantum information to distribute encrypted messages, and the security of the protocol is guaranteed by the laws of quantum mechanics \cite{Nicolas2002rmp}. A quantum key can be distributed using a stream of single-photon qubits over long distances with the benefit of detecting any eavesdropping attempts on the transmission channel, cf. Fig. \ref{fig:applications}(a). 
Quantum key distribution (QKD) could benefit from a deterministic single-photon source, as opposed to attenuated laser pulses that deliver single photons probabilistically, since the ultimate bit rate is achieved when each communication pulse contains one and only one photon. High-brightness QD single-photon sources are well suited for the task, but since QKD is a rather mature technology area, the higher costs associated with true single-photon sources as opposed to the cheaper alternatives may be an issue for most standard QKD protocols. Consequently, it is likely that deterministic sources will be of relevance in advanced QKD protocols offering ultimate security. Device-independent QKD is such a protocol and requires very efficient sources of highly indistinguishable single photons for entanglement generation. The observation of the violation of the Bell inequality testifies that the system is protected not only against hacking attacks on the communication line but also against side-channel attacks on the receiver/sender hardware \cite{Vazirani2014prl}. High-quality deterministic single-photon sources have been proposed for a fully device-independent QKD implementation \cite{kolodynski2020quantum}, where the challenging requirements in terms of source efficiency and photon ID seem reachable with QD sources. Another related application of single-photon qubits is the generation of a bit stream of random numbers. True random number generators have important applications in computing, e.g., in the context of Monte-Carlo simulation methods, and also enable fundamental tests of quantum physics \cite{Herrero2017rmp}. Physically, randomness can be created by reflecting single photons off a non-polarizing $50/50$ beam splitter to produce a binary bit stream of random numbers. Quantum random number generators can be made device-independent \cite{liu2018nature} in a similar manner as the QKD protocol discussed above. 
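The beam-splitter scheme above can be mocked up in a few lines. This is a toy model, not device code: the single-photon detector clicks are replaced by a pseudo-random Bernoulli trial, and von Neumann debiasing is added to show how an imperfect splitting ratio can be tolerated in post-processing.

```python
import random

# Toy model (illustrative, not device code) of a beam-splitter random
# number generator: each photon yields one raw bit, and von Neumann
# debiasing removes any bias from an imperfect splitting ratio at the
# cost of discarding at least half of the raw bits.

def raw_bits(n, reflectivity=0.5, rng=random.Random(0)):
    """One bit per detected photon: 1 = 'reflected', 0 = 'transmitted'."""
    return [1 if rng.random() < reflectivity else 0 for _ in range(n)]

def von_neumann_debias(bits):
    """Map pairs (0,1) -> 0 and (1,0) -> 1; discard (0,0) and (1,1).
    The output is unbiased whenever successive raw bits are independent,
    even if the splitter is not exactly 50/50."""
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

biased = raw_bits(10_000, reflectivity=0.6)   # a hypothetical 60/40 splitter
clean = von_neumann_debias(biased)
print(len(clean), sum(clean) / len(clean))    # mean close to 0.5
```

The debiasing step illustrates why a modest imbalance in the physical beam splitter need not compromise the randomness of the final bit stream, only its rate.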
Full-blown quantum computing is in principle possible based on single photons and linear optics \cite{knill2001nature}. However, the resource requirements are staggering, and additional hardware is required to make this approach more feasible. The present manuscript has introduced a few opportunities utilizing the nonlinear photon-emitter interface. It is interesting to consider whether specialized photonic quantum simulators can be developed for specific computing tasks within the current era of noisy intermediate-scale quantum (NISQ) processors \cite{preskill2018quantum}. Measurement-based quantum computing protocols \cite{Briegel2009natphys} are generally well suited for photonics; the general idea is to produce a multi-photon entangled state up front and subsequently carry out single-qubit measurements to implement the algorithm. A promising direction is to tailor a multi-photon cluster state to a specific application or with a specific loss tolerance target \cite{Buterakos2017prx}, which could be significantly more resource efficient than starting with a universal cluster state potentially containing many redundant qubits. Quantum photonics is very well suited for simulating the dynamical evolution of complex quantum systems. Photons propagating through PICs emulate the physical system, and the propagation depth of the PIC represents the evolved time, cf. Fig. \ref{fig:applications}(c). This is likely an area where photonics could offer quantum advantage in the near future utilizing current NISQ technology. So far, proof-of-concept quantum simulations of molecular vibrational dynamics have been carried out using probabilistic sources \cite{Sparrow2018nature}. Such simulations could be scaled up further with deterministic single-photon and multi-photon sources; notably, anharmonic vibrational effects, which require a nonlinear interaction, would be of interest. 
Another emerging application area is the simulation of molecular dynamics problems, e.g., the dissociation of molecular bonds resulting from molecule-molecule interactions or the docking of a small molecule onto a larger host \cite{cao2019cr}. Despite being inherently quantum, such processes are today simulated by approximate molecular dynamics methods that rely on solving Newtonian equations of motion \cite{Henriksen2018book}. A photonic quantum simulator could be configured to treat such problems fully quantum mechanically and thereby test the validity of existing methods. Precise simulations of vibrational dynamics and molecular docking are important in order to model complex protein folding problems, and a hybrid quantum/classical processor could be advantageous, where a designated part of the problem is solved quantum mechanically while the rest can be approximated by classical means. Importantly, computation of protein folding problems is a major challenge in drug discovery, and even modest computational advantages could be of major value and impact \cite{cao2018IBMjrd}. Variational quantum algorithms (VQAs) constitute another class of algorithms that are well suited for photonics due to the availability of flexible and reconfigurable PIC hardware. VQAs require only coherent quantum evolution of a very limited depth together with a classical algorithm that subsequently updates the quantum circuit before the next iteration. This makes NISQ hardware promising for VQAs, and proof-of-concept photonics implementations determining molecular ground-state energies have been reported \cite{Peruzzo2014natcomm}. Quantum neural networks \cite{biamonte2017nature} provide another opportunity conveniently utilizing the reconfigurable PIC platform. The overall idea is to exploit the massive amount of information contained in large-scale quantum states as a novel resource for training algorithms. 
Quantum neural networks require access to nonlinearities and could be implemented in photonics via the deterministic photon-emitter interfaces, cf. Fig. \ref{fig:applications}(d) for an illustration of a quantum photonics neural network \cite{steinbrecher2019npjqi}. Such a processor could be trained, e.g., to synthesize a desired multi-photon entangled state for a targeted measurement-based quantum algorithm. The availability of multi-photon entanglement leads to additional opportunities also in quantum-communication applications. A general idea is to encode a qubit of information non-locally in a multi-photon entangled cluster, as opposed to using a single photon. Such multi-photon encoding makes the qubit more robust towards loss and errors. The quantum communication `holy grail' is the quantum repeater \cite{Sangouard2011rmp}, which allows distributing quantum information over extended distances in the presence of unavoidable optical propagation loss. Ultimately the realization of a quantum repeater would pave the way for a quantum internet that could be used to scale up quantum computers \cite{Kimble2008nature,Wehner2018science}. A long-lived quantum memory interfaced to the photonic links for efficient storage of photons would enable repeater architectures; however, this is a challenging yet maturing research direction \cite{Heshami2016Jmodopt}. An alternative architecture is the `one-way quantum repeater' \cite{Fowler2010prl} that circumvents the requirement of a long-lived quantum memory, and is therefore well suited for QD-based photonic hardware. The qubit is encoded non-locally in a cluster state at a transmitter station, and entanglement distribution proceeds by directly transmitting the cluster state. In this case the redundancy of encoding in many photons implies that the encoded qubit is loss tolerant and can be re-encoded in a new photon cluster at the receiver station for further transmission, cf. Fig. \ref{fig:applications}(b). 
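The advantage of re-encoding at intermediate stations can be illustrated with a toy loss model. The attenuation length and the per-station success probability below are assumptions chosen for illustration, not benchmarks of the cited protocols: the point is only the scaling, exponential in total distance for direct transmission versus exponential in the (much smaller) number of stations for the repeater.

```python
import math

# Illustrative loss scaling (not a protocol benchmark): direct photon
# transmission decays exponentially with total distance, while a chain
# of one-way repeater stations pays only a fixed, near-unity
# re-encoding success per station. ATT_LENGTH_KM and p_station are
# assumed numbers chosen for illustration.

ATT_LENGTH_KM = 22.0   # roughly 0.2 dB/km telecom fiber

def direct_success(distance_km):
    """Probability that a single photon survives the whole fiber."""
    return math.exp(-distance_km / ATT_LENGTH_KM)

def one_way_repeater_success(n_stations, p_station=0.99):
    """Probability that every station re-encodes the loss-protected
    cluster-state qubit successfully (p_station is assumed to absorb
    the tolerated segment loss)."""
    return p_station ** n_stations

print(f"direct, 1000 km : {direct_success(1000):.1e}")
print(f"100 stations    : {one_way_repeater_success(100):.2f}")
```

Over 1000 km, the direct single-photon success probability is of order $10^{-20}$, whereas a hundred stations each succeeding with the assumed probability $0.99$ still deliver the encoded qubit more than a third of the time.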
It has been proposed that coupled QDs can be configured to generate photonic cluster states suitable for quantum repeaters \cite{Buterakos2017prx}, and, based on that, a blueprint of a QD-based one-way quantum repeater protocol was put forward and benchmarked \cite{borregaard2020prx}. This protocol was optimized and tailored to QD hardware such that only three QDs per repeater station were required, and it was found to be realizable with experimentally feasible values of photon-emitter coupling, spin coherence, and spin-photon gate fidelity. Large-scale fault-tolerant quantum computing is the ultimate challenge for any quantum computing technology. It has been argued that photonic integration technology is a major \emph{raison d'\^{e}tre} for photonic quantum computing, suggesting a real technological pathway to the daunting requirements for fault tolerance \cite{Rudolph2017APLphoton}. Measurement-based quantum computing architectures \cite{raussendorf2001prl,walther2005nature} currently appear to be the most promising approach. It remains an open challenge whether the metrics of the photonic qubits can be of sufficient quality to reach fault tolerance. In the present manuscript, we have reviewed two approaches for creating the percolated large-scale photonic cluster state required for quantum computation: i) on-demand generation by coupled QDs in photonic nanostructures or ii) fusion of three-photon GHZ states. Approach i) has the advantage that the cluster state is delivered deterministically from the source, but is susceptible to imperfections of the QD sources, which lead to decoherence of the state and hence limit the achievable cluster size. In approach ii), percolation of the cluster is done by linear optics, which does not introduce decoherence, but relies on probabilistic fusion of photons and therefore requires ancillary photons to boost the efficiency \cite{GimenoSegovia2015prl}. 
Consequently, an optimal strategy would likely be a combination of the two approaches, where the QD sources are used for generating small-scale cluster states on demand and linear optics subsequently allows the state to be grown larger, cf. Fig. \ref{fig:applications}(e) and (f). Another opportunity would be to exploit the nonlinear photon-photon interaction mediated by QDs to improve the photon fusion operation beyond the limitations set by linear optics. To this end, an explicit protocol for a Bell state analyzer has been put forward based on deterministic photon-emitter interfaces \cite{witthaut2012epl}. \section{The road ahead} Deterministic and coherent photon-emitter interfaces are now routinely realized in scalable solid-state devices, and we have summarized some of the near-term and long-term applications that this novel `photonic building block' could realize. The compatibility with a host of other photonic functionalities is essential, and we have highlighted the requirements and relevant specifications. Looking ahead, it is obvious that very serious engineering efforts are required to take the next step in this burgeoning technology area and tackle real-world problems. Indeed, in many cases the fundamental principles have been demonstrated for each device/functionality separately, but merging the building blocks together in advanced applications would introduce new tolerances in relation to fabrication yield and reproducibility, coupling loss, and cross-talk between devices. Excitingly, the high performance and thorough fundamental understanding of the building blocks now justify serious technology development, and likely we will see further quantum photonic hardware development gradually shifting towards industrial labs. 
While the all-solid-state platform based on QDs may have a number of appealing features, two main issues require additional attention: i) reducing emitter-emitter inhomogeneity and ii) coupling to a long-lived quantum memory. Although the protocols discussed in the present manuscript have been tailored to sidestep those `Achilles' heels' of QDs, it is clear that overcoming these obstacles would lead to an even more powerful and capable platform. The former challenge pertains primarily to the QD growth, and could be resolved if QDs were reproducibly synthesized with atomic precision at predetermined positions. The latter requires additional degrees of freedom for storage, and one promising candidate is to exploit coupling to the QD nuclear spins \cite{Gangloff2019science}, although this coupling may be difficult to control in present-day QDs due to asymmetric strain profiles. Alternatively, a hybrid approach may be pursued, where QDs are coupled to, e.g., ensembles of atoms or ions \cite{akopian2011np,Meyer2015prl} or ultra-long-lived opto-mechanical oscillators \cite{tsaturyan2017nn}. In these cases, efficient bandwidth and wavelength matching of the two systems is required, which could be pursued with nonlinear conversion, as was discussed previously. Hybrid interfacing to photonics may enable even more opportunities. In many qubit systems, e.g., based on spins or superconductors, qubit-qubit interactions beyond nearest neighbour are usually weak, which limits their scalability. An efficient quantum interface to photons would be a method for establishing such long-range interactions, and photon links have been proposed for scaling up ion-trap quantum computers following a modular approach \cite{Monroe2013science}. Such interfaces require proper quantum coherent transduction between the different qubit operation frequencies, e.g., transduction from microwaves to optical frequencies in the case of superconductor-photon coupling \cite{Mirhosseini2020nature}. 
A QD photon-emitter interface could be configured to implement such a transduction, e.g., by driving a tailored Raman transition in coherently coupled QDs \cite{Elfving2019pra}. Such a hybrid interface could lead to entirely new opportunities utilizing matter degrees of freedom for computation and photons as communication links. The availability of the coherent and deterministic photon-emitter interface today, the point of departure of the present manuscript, implies that such advanced hybrid interfaces are within reach. The ultimate dream of a large-scale quantum internet or a scaled-up quantum computer could be the outcome of such advancements. \begin{acknowledgments} We thank Stefano Paesani for constructive comments on the manuscript. We gratefully acknowledge financial support from Danmarks Grundforskningsfond (DNRF 139, Hy-Q Center for Hybrid Quantum Networks). \end{acknowledgments}
\section*{Motivation} This lecture series is aimed at describing our recent attempts to derive the properties of nuclei using the light-front formalism. Nuclear properties are very well handled within existing conventional nuclear theory, so it is necessary to explain the motivation. It seems to me that understanding experiments involving high energy nuclear reactions requires that light-front dynamics and light-cone variables be used. Consider the EMC experiment \cite{emc}, which showed that there is a significant difference between the parton distributions of free nucleons and nucleons in a nucleus. This difference can be interpreted as a shift in the momentum distribution of valence quarks towards smaller values of the Bjorken variable $x$. This variable is a ratio of the plus-momentum $k^+=k^0+k^3$ of a quark to that of the target. If one uses $k^+$ as a momentum variable, the corresponding canonical spatial variable is $x^-=x^0-x^3$ and the time variable is $x^+=x^0 +x^3$. To do calculations in this framework is to use light front dynamics. Light front dynamics applies to nucleons within the nucleus as well as to partons of the nucleons, and this is a useful approach whenever the momentum of initial or final state nucleons is large compared to their mass \cite{fs}. For example, this technique can be used for $(e,e'p)$ and $(p,2p)$ reactions at sufficiently high energies. The use of light-front variables for nucleons in a nucleus is not sufficient. It is also necessary to include all the relevant features of conventional nuclear dynamics. Combining these two aspects provides the technical challenge which we have been addressing. I'd like to begin by describing how using the light-front approach leads to important simplifications. Consider high energy electron scattering from nucleons in nuclei. 
Let the four-momentum $q$ of the exchanged virtual photon be given by $\left(\nu,0,0,-\sqrt{Q^2+\nu^2}\right)$, with $Q^2=-q^2$; $Q^2$ and $\nu^2$ are both very large but $Q^2/\nu$ is finite (the Bjorken limit). Use the light-cone variables $q^\pm=q^0\pm q^3$ in which $q^+\approx Q^2/2\nu=Mx$, $q^-\approx 2\nu -Q^2/2\nu$, so that $q^-\gg q^+$. Here $M$ is the mass of a nucleon. We neglect $q^+$ in comparison to $q^-$; corrections to this can be handled in a systematic fashion. Then the scattering cross section for $e+A\to e' +(A-1)_f +p$, where $f$ represents the final nuclear eigenstate of $P^-$, and $p$ the four-momentum of the final proton, takes the form \begin{equation} d\sigma\sim \sum_f \int {d^3p_f\over E_f}\int d^4p\, \delta (p^2-M^2)\delta^{(4)}(q+p_i-p_f-p)| \langle p,f\mid J(q)\mid i\rangle\mid^2, \end{equation} with the operator $J(q)$ as a schematic representation of the electromagnetic current. Performing the four-dimensional integral over $p$ leads to the expression \begin{equation} d\sigma\sim \sum_f \int {d^2p_fdp_f^+\over p^+_f} \delta\left((p_i-p_f+q)^2-M^2\right)\mid \langle p,f\mid J(q)\mid i\rangle\mid^2 \label{int}. \end{equation} The argument of the delta function is $(p_i-p_f+q)^2-M^2 \approx -Q^2+q^-(p_i-p_f)^+$. Thus $p_f^-$ does not appear in the argument of the delta function, or anywhere else, so that we can replace the sum over intermediate states by unity. In the usual equal-time representation, the argument of the delta function is $-Q^2+2\nu(E_i-E_f)$. The energy of the final state appears, and one cannot do the sum over states. It is useful to define $\bbox{p_B}\equiv \bbox{p_i}-\bbox{p_f}$, because $ p_B^+=Q^2/2\nu\equiv M x $ (as demanded by the delta function). Then one can re-express Eq.~(\ref{int}) as \begin{equation} d\sigma\sim d^2{p_B}_\perp n(M x,{p_B}_\perp), \label{ds2} \end{equation} where $ n(M x,{p_B}_\perp)$ is the probability for a nucleon in the ground state to have a momentum $(M x,{p_B}_\perp)$. 
Integration in Eq.~(\ref{ds2}) leads to \begin{equation} \sigma \sim\int d^2p_\perp\, n(M x,{p}_\perp)\equiv f(Mx), \end{equation} with $f(Mx)$ as the probability for a nucleon in the ground state to have a plus momentum of $Mx$. The use of light-front dynamics to compute nuclear wave functions should allow us to compute $f(Mx)$ from first principles. We also claim that using light-front dynamics incorporates the experimentally relevant kinematics from the beginning, and therefore is the most efficient way to compute the cross sections for nuclear deep inelastic scattering and nuclear quasi-elastic scattering. Since much of this work is motivated by the desire to understand nuclear deep inelastic scattering and related experiments, it is worthwhile to review some of the features of the EMC effect \cite{emc,emcrevs}. One key experimental result is the suppression of the structure function for $x\sim 0.5$. This means that the valence quarks of bound nucleons carry less plus-momentum than those of free nucleons. This may be understood by postulating that mesons carry a larger fraction of the plus-momentum in the nucleus than in free space. While such a model explains the shift in the valence distribution, one obtains at the same time a meson (i.e. anti-quark) distribution in the nucleus, which is strongly enhanced compared to free nucleons and which should be observable in Drell-Yan experiments \cite{dyth}. However, no such enhancement has been observed experimentally \cite{dyexp}, and the implications are analyzed in Ref.~\cite{missing}. The use of light-front dynamics should allow us to compute the necessary nuclear meson distribution functions using variables which are experimentally relevant. The need for a computation of such functions in a manner consistent with generally known properties of nuclei led us to begin this program. There are other motivations for using the light front formalism that have been emphasized in many reviews\cite{lcrevs}. 
One key feature is that the vacuum of the theory is trivial, because pair creation from the vacuum is forbidden by plus-momentum conservation. Another is that the theory is a Hamiltonian theory, and the many-body techniques of equal-time theory can be used here too. I also like to say: Ask not what the light front can do for nuclear physics; instead ask what nuclear physics can do for the light front. This is to provide a set of non-trivial four-dimensional examples with real physics content. Finally, I quote the review by Geesaman et al.: ``In light front dynamics (LFD), the particles are on mass-shell, and there are no off-shell ambiguities. However, ... we have little or no experience in calculating the wave function of a realistic nucleus in LFD''. The aim here is to provide such wave functions. \subsection*{Outline} We shall begin with a simple description of what light front dynamics is. Then the formal procedures of light front quantization of a hadronic Lagrangian ${\cal L}$ will be discussed. The first application is a study of infinite nuclear matter within the mean field approximation \cite{jerry}. The distribution functions $f(y)$ for nucleons and mesons will be computed. The above topics comprise the first lecture. The next lecture is devoted to a study of finite nuclei \cite{bbm99} using the mean field approximation. Here one must confront a difficulty. The use of $x^-=t-z$ as a spatial variable violates manifest rotational invariance because $x^-$ and $x_\perp$ are different variables. We show that rotational invariance re-emerges after one does the appropriate dynamical calculation. It is necessary to go beyond the mean field approximation, and the third lecture deals with that \cite{rmgm98}. Nucleon-nucleon scattering is studied first and used in the many-body calculation. The influence of nucleon-nucleon correlations on the properties of nuclear matter is studied by making the necessary light front calculations. 
Applications include computing the nuclear pionic content and describing nuclear deep inelastic scattering and Drell-Yan processes. The goal is to provide a series of examples showing that the light front can be used for high energy realistic and relativistic nuclear physics. \section*{ What is light front dynamics? } This is a relativistic treatment of dynamics in which the fields are quantized at a fixed ``time'' $\tau =t+z =x^0+x^3\equiv x^+$. This means that the orthogonal spatial variable must be $x^-\equiv t-z$, so that the canonical momentum is $ p^0+p^3\equiv p^+$. The remaining variables are the usual transverse ones: $ \vec{x}_\perp,\vec{p}_\perp$. The consequence of using $\tau$ as a ``time'' variable is that the canonical energy is $p^-=p^0-p^3$. In general our notation is given by \begin{equation} A^\pm\equiv A^0\pm A^3, \end{equation} with \begin{equation} A\cdot B =A^\mu B_\mu={1\over2}\left(A^+B^- +A^-B^+\right) -\vec{A}_\perp\cdot\vec{B}_\perp .\end{equation} The key reason for using such unusual coordinates is phenomenological. For a particle with $\vec v\approx c\hat{e_3}$, the quantity $p^+$ is BIG. Thus experiments tend to measure quantities associated with $p^+$. Another important feature is the relativistic dispersion relation $p^\mu p_\mu =m^2$, which in light front dynamics takes the form: \begin{equation} p^-={p_\perp^2+m^2\over p^+} .\end{equation} Thus one has a form of relativistic kinematics that avoids using a square root. The main formal consequence of using light front dynamics is that the minus component of the total momentum, $P^-$, is used as a Hamiltonian operator, and the plus component $P^+$ is used as a momentum operator. The procedures to obtain these operators are discussed in the next section. \section*{Light Front Quantization} My intent here is to discuss the basic aspects in as informal a way as possible. For more details see the reviews and the references. I'll start by considering one free field at a time. 
These will be the scalar meson $\phi$, the Dirac fermion $\psi$ and the massive vector meson $V^\mu$. \subsection*{Free Scalar Field} Consider the Lagrangian \begin{eqnarray} {\cal L}_\phi = {1\over 2} (\partial^+\phi \partial^-\phi -\bbox{\nabla}_\perp\phi \cdot\bbox{\nabla}_\perp\phi-m_s^2\phi^2).\label{lagphi} \end{eqnarray} The notation is such that $ \partial^\pm=\partial^0\pm\partial^3=2 {\partial\over \partial x^\mp}$. The Euler-Lagrange equation leads to the wave equation \begin{eqnarray} i\partial^-\phi={-\nabla^2_\perp+m_s^2\over i\partial^+}\;\phi. \end{eqnarray} The most general solution is a superposition of plane waves: \begin{equation} \phi(x)=\int{ d^2k_\perp dk^+ \theta(k^+)\over (2\pi)^{3/2}\sqrt{2k^+}}\left[ a(\bbox{k})e^{-ik\cdot x} +a^\dagger(\bbox{k})e^{ik\cdot x}\right], \label{expp}\end{equation} where $k\cdot x={1\over2}(k^-x^++k^+x^-)-\bbox{k_\perp\cdot x}_\perp$ with $k^-={k_\perp^2+m_s^2\over k^+}$, and $\bbox{k}\equiv(k^+,\bbox{k}_\perp)$. The $\theta$ function restricts $k^+$ to positive values. Note that \begin{equation} i\partial^+e^{-ik\cdot x}=k^+e^{-ik\cdot x} .\end{equation} The value of $x^+$ that appears in Eq.~(\ref{expp}) can be set to zero, but only after taking necessary derivatives. Deriving the equal-$x^+$ commutation relations for the fields is a somewhat obscure procedure \cite{yan12}, but the result can be stated in terms of familiar commutation relations: \begin{equation} [a(\bbox{k}),a^\dagger(\bbox{k}')]= \delta(\bbox{k}_\perp-\bbox{k}'_\perp) \delta(k^+-k'^+),\label{comm} \end{equation} with $[a(\bbox{k}),a(\bbox{k}')]=0$. The next step is to compute the Hamiltonian $P^-$ for this system. The conserved energy-momentum tensor is given in terms of the Lagrangian: \begin{equation} T^{\mu\nu}_\phi=-g^{\mu\nu}{\cal L}_\phi +{\partial{\cal L}_\phi \over\partial (\partial_\mu\phi)}\partial^\nu\phi.\label{tmunup}\end{equation} This brings us to the question: what is $g^{\mu\nu}$? 
This is straightforward, although the results (viewed for the first time) can be surprising: \begin{eqnarray} g^{+\nu}=g^{0\nu}+g^{3\nu}. \end{eqnarray} Thus \begin{eqnarray} g^{++}&=&g^{00}+g^{03}+g^{30}+g^{33}=1+0+0-1=0\nonumber\\ g^{ij}&=&-\delta_{i,j} (i=1,2,j=1,2);\quad g^{+-}=g^{-+}=2. \end{eqnarray} Then one finds that \begin{equation} T^{+-}_\phi={1\over 2}\bbox{\nabla}_\perp \phi \cdot \bbox{\nabla}_\perp\phi+ {1\over 2}m^2_s \phi^2. \end{equation} The term $T^{+-}$ is the density for the operator $P^-$: \begin{equation} P^-={1\over 2}\int d^2x_\perp dx^- T^{+-}. \end{equation} The use of the field expansion (\ref{expp}), along with normal ordering followed by integration, leads to the result: \begin{equation} P^-_\phi=\int d^2k_\perp dk^+ \theta(k^+)a^\dagger(\bbox{k})a(\bbox{k}){k_\perp^2+m_s^2\over k^+}. \end{equation} One defines a vacuum state $\mid0\rangle$ such that $ a(\bbox{p})\mid0\rangle=0. $ Then the creation operators acting on the vacuum give the usual single particle states: \begin{eqnarray} P^-_\phi a^\dagger(\bbox{p})\mid0\rangle={p_\perp^2+m_s^2\over p^+} a^\dagger(\bbox{p})\mid0\rangle.\end{eqnarray} The momentum operator $P^+$ is constructed by integrating $T^{++}$: \begin{equation} P^+_\phi=\int d^2k_\perp dk^+ \theta(k^+)a^\dagger(\bbox{k})a(\bbox{k}) k^+. \end{equation} \subsubsection*{Interactions and Light Front Simplification} Suppose we take the Lagrangian \begin{eqnarray} {\cal L}={1\over 2} (\partial_\mu \phi \partial^\mu \phi-m_s^2\phi^2)+\lambda \phi^4. \end{eqnarray} The operator $\phi$ creates or destroys particles of plus-momentum $k^+>0$. Thus a term in which $\lambda \phi^4$ converts the vacuum $\mid0\rangle$ into a four-particle state vanishes by virtue of plus-momentum conservation. The vacuum, with $P^+=0$, cannot be connected to four particles, each having a positive $k^+$. This vanishing simplifies Hamiltonian ($x^+$-ordered perturbation theory) calculations. 
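The conventions introduced above are easy to check numerically. A minimal sketch (the units and the test momentum are arbitrary choices made here) verifying the light-front metric components and the square-root-free dispersion relation:

```python
import numpy as np

# Numerical sanity checks (illustrative) of the light-front conventions:
# the metric components for A^pm = A^0 +/- A^3, and the dispersion
# relation p^- = (p_perp^2 + m^2)/p^+.

# Metric: transform Minkowski g = diag(1,-1,-1,-1) under the linear map
# (A^0,A^1,A^2,A^3) -> (A^+,A^-,A^1,A^2); the metric goes as T g T^T.
g = np.diag([1.0, -1.0, -1.0, -1.0])
T = np.array([[1.0, 0.0, 0.0,  1.0],   # A^+ = A^0 + A^3
              [1.0, 0.0, 0.0, -1.0],   # A^- = A^0 - A^3
              [0.0, 1.0, 0.0,  0.0],
              [0.0, 0.0, 1.0,  0.0]])
g_lf = T @ g @ T.T
assert g_lf[0, 0] == 0.0                         # g^{++} = 0
assert g_lf[0, 1] == 2.0 and g_lf[1, 0] == 2.0   # g^{+-} = g^{-+} = 2
assert np.all(np.diag(g_lf)[2:] == -1.0)         # g^{ij} = -delta_ij

# Dispersion: an arbitrary on-shell momentum (GeV units assumed here).
m, p3, p_perp2 = 0.938, 2.0, 0.25
p0 = np.sqrt(p3**2 + p_perp2 + m**2)
p_plus, p_minus = p0 + p3, p0 - p3
assert abs(p_minus - (p_perp2 + m**2) / p_plus) < 1e-12
print("light-front conventions check out")
```

The last assertion is just the statement that $p^+p^- - p_\perp^2 = m^2$, i.e. the usual mass-shell condition written without a square root.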
\subsection*{Free Dirac Field} Consider the Lagrangian \begin{eqnarray} {\cal L}_\psi= \bar{\psi}(\gamma^\mu {i\over 2}\stackrel{\leftrightarrow}{\partial}_\mu -M)\psi, \label{lagpsi} \end{eqnarray} and its equation of motion: \begin{equation} \left(i \gamma^\mu\partial_\mu-M\right)\psi=0. \label{dirac0} \end{equation} A fermion has spin 1/2, so there can only be two independent degrees of freedom. The standard Dirac spinor has four components, so two of these must represent dependent degrees of freedom. In the light front formalism one separates the independent and dependent degrees of freedom by using projection operators: $ \Lambda_\pm\equiv {1\over 2} \gamma^0\gamma^\pm .$ Then the independent field is $\psi_+=\Lambda_+\psi$ and the dependent one is $\psi_-=\Lambda_-\psi$. The Dirac equation (\ref{dirac0}) is re-written as \begin{equation} \left({i\over 2}\gamma^+\partial^- +{i\over2}\gamma^-\partial^+ +i\bbox{\gamma}_\perp\cdot\bbox{\nabla}_\perp-M\right)\psi=0. \label{dirac1}\end{equation} Equations for $\psi_\pm$ can be obtained by multiplying Eq.~(\ref{dirac1}) on the left by $\Lambda_\pm$: \begin{eqnarray} i\partial^- \psi_+=(\bbox{\alpha}_\perp\cdot{ \bbox{\nabla}_\perp\over i} +\beta M)\psi_-\nonumber\\ i\partial^+ \psi_-=(\bbox{\alpha}_\perp\cdot{ \bbox{\nabla}_\perp\over i} +\beta M)\psi_+, \end{eqnarray} so that the equation of motion of $\psi_+$ becomes \begin{equation} i\partial^- \psi_+=(\bbox{\alpha}_\perp\cdot{ \bbox{\nabla}_\perp\over i}+\beta M) {1\over i\partial^+}(\bbox{\alpha}_\perp\cdot{ \bbox{\nabla}_\perp\over i}+\beta M)\psi_+ .\end{equation} One can make the field expansion and determine the momenta in a manner similar to the previous section.
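The projector algebra of $\Lambda_\pm$ is easy to verify numerically. The following sketch (an illustration, using the standard Dirac representation of the $\gamma$ matrices) checks that $\Lambda_\pm$ are orthogonal, complementary projectors of rank two, consistent with the counting of two independent fermion degrees of freedom:

```python
import numpy as np

I2, Z2 = np.eye(2), np.zeros((2, 2))
s3 = np.diag([1.0, -1.0])                      # Pauli matrix sigma_3

# Dirac representation of gamma^0 and gamma^3
gamma0 = np.block([[I2, Z2], [Z2, -I2]])
gamma3 = np.block([[Z2, s3], [-s3, Z2]])

Lp = 0.5 * gamma0 @ (gamma0 + gamma3)          # Lambda_+ = (1/2) gamma^0 gamma^+
Lm = 0.5 * gamma0 @ (gamma0 - gamma3)          # Lambda_- = (1/2) gamma^0 gamma^-

assert np.allclose(Lp @ Lp, Lp)                # idempotent
assert np.allclose(Lm @ Lm, Lm)
assert np.allclose(Lp @ Lm, np.zeros((4, 4)))  # mutually orthogonal
assert np.allclose(Lp + Lm, np.eye(4))         # complete: Lambda_+ + Lambda_- = 1
assert np.isclose(np.trace(Lp), 2.0)           # rank 2: two independent components
```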
The key results are \begin{eqnarray} T^{+-}_\psi &=&\psi^\dagger _+\left(\alpha_\perp\cdot{\bbox{\nabla}_\perp\over i} +\beta M\right){1\over i\partial^+} \left(\alpha_\perp\cdot{\bbox{\nabla}_\perp\over i} +\beta M\right)\psi_+, \\ P^-_\psi &=&\sum_\lambda\int d^2p_\perp dp^+\theta(p^+){p_\perp^2+M^2\over p^+} \left[b^\dagger(\bbox{p},\lambda)b(\bbox{p},\lambda)+ d^\dagger(\bbox{p},\lambda) d(\bbox{p},\lambda)\right], \end{eqnarray} where $b(\bbox{p},\lambda),d(\bbox{p},\lambda)$ are nucleon and anti-nucleon destruction operators. \subsection*{Free Vector Meson} The formalism for massive vector mesons was worked out by Soper\cite{des71} and later by Yan\cite{yan34} using a different formulation. I generally follow Yan's approach. The formalism is lengthy and detailed in the references, so I state only the minimum. There are three independent degrees of freedom, even though the Lagrangian depends on $V^\mu$ and $V^{\mu\nu}= \partial ^\mu V^\nu-\partial^\nu V^\mu$. These are chosen to be $V^+$ and $V^{+i}$. The other terms $ V^-,V^i,V^{-i}$ and $V^{ij}$ can be written in terms of $V^+$ and $V^{+i}$. \subsection*{We need a Lagrangian, no matter how bad} It seems to me that one cannot do complete dynamical calculations using the light front formalism without specifying some Lagrangian. One starts\cite{jerry} with $\cal L$ and derives the field equations. These are used to express the dependent degrees of freedom in terms of the independent ones. One also uses $\cal L$ to derive $T^{\mu\nu}$ (as a function of the independent degrees of freedom), which is used to obtain the total momentum operators $P^\pm$. It is $P^-$ that acts as the Hamiltonian operator in light front $x^+$-ordered perturbation theory. We start with a Lagrangian containing scalar and vector mesons and nucleons $\psi'$.
This is the minimal Lagrangian for obtaining a caricature of nuclear physics because the exchange of scalar mesons provides a medium range attraction which can bind the nucleons, and the exchange of vector mesons provides the short-range repulsion which prevents a collapse. Thus we take \begin{eqnarray} {\cal L} &=&{1\over 2} (\partial_\mu \phi \partial^\mu \phi-m_s^2\phi^2) -{1\over 4} V^{\mu\nu}V_{\mu\nu} +{m_v^2\over 2}V^\mu V_\mu \nonumber\\ &+&\bar{\psi}^\prime\left(\gamma^\mu ({i\over 2}\stackrel{\leftrightarrow}{\partial}_\mu -g_v\;V_\mu) - M -g_s\phi\right)\psi', \label{lag} \end{eqnarray} with the effects of other mesons included elsewhere and below. The equations of motion are \begin{eqnarray} \partial_\mu V^{\mu\nu}+m_v^2 V^\nu&=&g_v\bar \psi'\gamma^\nu\psi' \label{vmeson}\\ \partial_\mu\partial^\mu \phi+m_s^2\phi&=&-g_s\bar\psi'\psi', \label{smeson}\\ (i\partial^--g_vV^-)\psi'_+&=&(\bbox{\alpha}_\perp\cdot (\bbox{p}_\perp-g_v\bbox{V}_\perp)+\beta (M +g_s\phi))\psi'_-\label{nfg0}\\ (i\partial^+-g_vV^+)\psi'_-&=&(\bbox{\alpha}_\perp\cdot (\bbox{p}_\perp-g_v\bbox{V}_\perp)+\beta (M +g_s\phi))\psi'_+. \label{nfg} \end{eqnarray} The presence of the interaction term $V^+$ on the left-hand side of the second equation presents a problem because one cannot easily solve for $\psi'_-$ in terms of $\psi'_+$. This difficulty is handled by using the Soper-Yan transformation: \begin{equation}\psi'=e^{-ig_v\Lambda(x)}\psi ,\qquad \partial^+ \Lambda=V^+. \end{equation} Using this in Eqs.~(\ref{nfg0})-(\ref{nfg}) leads to the more usable form \begin{eqnarray} (i\partial^--g_v \bar V^-)\psi_+=(\bbox{\alpha}_\perp\cdot (\bbox{p}_\perp-g_v\bbox{\bar V}_\perp)+\beta(M+g_s\phi))\psi_-\nonumber\\ i\partial^+\psi_-=(\bbox{\alpha}_\perp\cdot (\bbox{p}_\perp-g_v\bbox{\bar V}_\perp)+\beta(M+g_s\phi))\psi_+. \label{yan} \end{eqnarray} The cost of the transformation is that one gets new terms resulting from taking derivatives of $\Lambda(x)$.
One uses $\bar V^\mu$ with $ \bar V^\mu=V^\mu-{1\over\partial^+}\partial^\mu V^+, $ and $\bar V^\mu$ enters in the nucleon field equations, but $V^\mu$ enters in the meson field equations. \section*{Nuclear Matter Mean Field Theory} The philosophy\cite{bsjdw} is that the nucleonic densities, which are the mesonic sources, are large enough to generate a large number of mesons to enable a classical treatment (replacing an operator by an expectation value). In infinite nuclear matter, the volume is taken as infinite so that all positions are equivalent. Thus we make the replacement: \begin{equation} g_s\bar\psi(x)\psi(x)\to g_s\langle\bar\psi(0)\psi(0)\rangle, \quad\phi={-g_s \over m_s^2}\langle\bar \psi(0)\psi(0)\rangle ,\label{sphi}\end{equation} in which the expectation value is in the ground state, and the second part of the equation is obtained from the field equation (\ref{smeson}) with a constant source. Similarly, $g_v\bar\psi(x)\gamma^\mu\psi(x)\to g_v\langle\bar\psi(0) \gamma^\mu\psi(0)\rangle \delta_{\mu,0}$, in which the notion that there is no special direction in space is used. (The nucleus is taken to be at rest.) Again the source is constant, so that the solution of the field equation (\ref{vmeson}) is \begin{equation} \bar{V}^-=V^-=V^0={g_v\over m_v^2}\langle\psi^\dagger(0)\psi(0)\rangle;\quad \bar{V}^{+,i}=0. \label{sv}\end{equation} Since the potentials entering the light-front Dirac equation (\ref{yan}) are constant, the nucleon modes are plane waves $\psi \sim e^{ik\cdot x}$, and the many-body system is a kind of Fermi gas. With these constant potentials, Eq.~(\ref{yan}) reduces to \begin{equation} i\partial^- \psi_+=g_v\bar{V}^-\psi_++{k_\perp^2+(M+g_s\phi)^2\over k^+}\psi_+.\label{sd}\end{equation} Solving the equations (\ref{sphi}), (\ref{sv}) and (\ref{sd}) yields a self-consistent solution. \subsection*{Nuclear Momentum Content} The expectation value of $T^{+\mu}$ is used to obtain the total momentum: \begin{equation} P^\mu= {1\over 2}\int d^2x_\perp dx^- \langle T^{+\mu} \rangle.
\end{equation} The expectation value is constant so that the volume $\Omega={1\over 2}\int d^2x_\perp dx^- $ will enter as a factor. A straightforward evaluation leads to the results \begin{eqnarray} {P^-\over\Omega}&=&m_s^2\phi^2+{4\over (2\pi)^3}\int_F d^2k_\perp dk^+\;{k_\perp^2+(M+g_s\phi)^2\over k^+}\\ {P^+\over\Omega}&=&m_v^2(V^-)^2+{4\over (2\pi)^3}\int_F d^2k_\perp dk^+\;k^+.\label{pplus}\end{eqnarray} To proceed further one needs to define the Fermi surface $F$. The use of a transformation $ k^+\equiv \sqrt{(M+g_s\phi)^2+\vec{k}^2} +k^3\equiv E(k)+k^3$ to define a new variable $ k^3$ enables one to simplify the integrals. Replacing the integral over $k^+$ by one over $k^3$ (including the Jacobian factor ${\partial k^+\over \partial k^3}={k^+\over E}$) leads to: \begin{equation}\int_F d^2k_\perp dk^+\cdots \equiv \int d^3k\,\theta(k_F-|\vec k|)\cdots.\end{equation} The nuclear energy $E$ is the average of $P^+$ and $P^-$: $ E\equiv{1\over 2}\left(P^-+P^+\right)$, and one gets the very same expression as in the original Walecka model. This provides a useful check on the algebra. There is a potential problem: for nuclear matter in its rest frame we need to have $P^+=P^-=M_A$. If one looks at the expressions for $P^\pm$, this result does not seem likely. However, the value of the Fermi momentum has not yet been determined. There is one more condition to be satisfied: \begin{equation}\left({\partial (E/A)\over\partial k_F}\right)_\Omega=0.\end{equation} Satisfying this equation determines $k_F$, and for the value so obtained the values of $P^+$ and $P^-$ turn out to be the same. Thus we see that our light front procedure reproduces standard results for energy and density. We use the parameters of Chin and Walecka\cite{cw}, $g_v^2M^2/m_v^2=195.9$ and $g_s^2M^2/m_s^2=267.1$, to obtain first numerical results. Then $k_F=1.42$ fm$^{-1}$, the binding energy per nucleon is 15.75 MeV and $M+g_s\phi=0.56 M$.
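The quoted effective mass can be checked with a short self-consistency iteration. The sketch below (a rough numerical check, not the original computation, assuming $M=939$ MeV and the standard mean-field scalar density with spin-isospin degeneracy 4) iterates $M^*=M-(g_s^2/m_s^2)\,\rho_s(M^*)$ at $k_F=1.42$ fm$^{-1}$:

```python
import numpy as np

hbarc = 197.327                  # MeV fm
M = 939.0                        # nucleon mass in MeV (assumed value)
Cs2 = 267.1                      # Chin-Walecka coupling g_s^2 M^2 / m_s^2
kF = 1.42 * hbarc                # Fermi momentum converted to MeV

N = 4000
dk = kF / N
k = (np.arange(N) + 0.5) * dk    # midpoint grid for the momentum integral

Mstar = M                        # iterate M* = M - (Cs2/M^2) * rho_s(M*)
for _ in range(200):
    # rho_s = (2/pi^2) \int_0^{kF} dk k^2 M*/sqrt(k^2 + M*^2)
    rho_s = (2.0 / np.pi**2) * np.sum(k**2 * Mstar / np.sqrt(k**2 + Mstar**2)) * dk
    Mstar = M - (Cs2 / M**2) * rho_s

print(Mstar / M)                 # close to the 0.56 quoted in the text
```

The iteration converges because the map has slope of magnitude well below one near the fixed point.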
The last number corresponds to a huge attraction that is nearly cancelled by the huge repulsion. Then one may use Eq.~(\ref{pplus}) to obtain the separate contributions of the vector mesons and nucleons, with spectacular results. Nucleons carry only 65\% of the plus-momentum. This is much less than the 90\% needed to explain the EMC effect for infinite nuclear matter\cite{sdm}. Furthermore, vector mesons carry 35\% of the plus-momentum, which is an amazingly large number. The distribution of this vector meson plus-momentum is an interesting quantity. The mean fields $\phi, V^\mu$ are constants in space and time. Thus $V^-$ has support only at $k^+=0$. The physical interpretation is that an infinite number of mesons each carry a vanishingly small amount of the plus-momentum, but the product is 35\%. One can also show\cite{bm98} that \begin{equation} k^+f_v(k^+)=0.35 M \delta(k^+).\end{equation} There is an important phenomenological consequence: the value $k^+=0$ corresponds to $ x_{Bj}=0$, which cannot be reached in experiments. This means one can't use the momentum sum rule as a phenomenological tool to analyze deep inelastic scattering data to determine the different contributions to the plus-momentum. Of course this result is obtained by solving a simple model for a simple system with a simple mean field approximation. It is necessary to ask if any of the qualitative features of the present results will persist in more detailed treatments. \section*{Mean Field Theory for finite-sized nuclei} It is important to make calculations for finite nuclei because all laboratory experiments are done with such targets or projectiles. The most basic feature of all of nuclear physics is that the shell model is able to explain the magic numbers. Rotational invariance causes the $2j+1$ degeneracy of the single particle orbitals, and full occupation leads to increased binding.
But light front dynamics does not make rotational invariance manifest because the different components of the spatial variable, $x^-$ and $\bbox{x}_\perp$, are treated differently. However, the final results must respect rotational invariance. Therefore, the challenge of making successful calculations of the properties of finite nuclei is important to us. Let's discuss, in a general way, how it is that we will be able to find spectra which have the correct number of degenerate states. Suppose we try to determine eigenstates of a LF Hamiltonian by means of a variational calculation. Simply minimizing the LF energy leads to nonsensical results because $P^-=M_A^2/P^+$. One can easily reach zero energy by letting $P^+$ be infinite. This is not a problem if one is able to use a Fock space basis in which the total plus and $\perp$ momentum of each component are fixed. But in calculations involving many particles, the Fock state approach cannot be used in practice. One needs to find a sensible variational procedure. One such procedure is to perform a constrained variation, in which the total LF momentum is fixed by including a Lagrange multiplier term proportional to the total momentum in the LF Hamiltonian. We minimize the expectation value of $P^+$ subject to the condition that the expectation values of $P^-$ and $P^+$ are equal. This is the same as minimizing the expectation value of the average of $P^-$ and $P^+$. The need to include the plus-momentum along with the minus momentum can be seen in a simple example. Consider a nucleus of $A$ nucleons of momentum $P_A^+=M_A$, ${\bbox{P}_A}_\perp=0$, which consists of a nucleon of momentum $(p^+,\bbox{p}_\perp)$, and a residual $(A-1)$ nucleon system which must have momentum $(P^+_A-p^+,-\bbox{p}_\perp)$. The kinetic energy $K$ is given by the expression \begin{equation} K={p_\perp^2+M^2\over p^+}+{p_\perp^2+M_{A-1}^2\over P^+_A- p^+}.
\end{equation} In the second term, one is tempted to neglect $p^+$ in comparison with $ P^+_A\approx M_A$. This would be a mistake. Instead make the expansion \begin{eqnarray} K&\approx&{p_\perp^2+M^2\over p^+}+{M_{A-1}^2\over P^+_A}\left(1+ {p^+\over P_A^+}\right)\nonumber\\ &\approx&{p_\perp^2+M^2\over p^+}+p^+ +M_{A-1}, \end{eqnarray} because for large $A$, $M^2_{A-1}/(P_A^+)^2\approx 1$. For a free particle of ordinary three-momentum $\bbox{p}$ one has $E^2(p)=\bbox{p}^2+M^2$ and $p^+=E(p)+p^3$, so that \begin{equation} K\approx {\left(E^2(p)-(p^3)^2\right)\over E(p)+p^3} +E(p)+p^3+M_{A-1}=2E(p)+M_{A-1}. \end{equation} We see that $K$ depends only on the magnitude of a three-momentum and rotational invariance is restored. The physical mechanism of this restoration is the inclusion of the recoil kinetic energy of the residual nucleus. \subsection*{Results} The formalism is described in recent papers\cite{bbm99}, so I simply summarize the results. If our solutions are to have any relevance, they should respect rotational invariance. The success in achieving this is examined in Tables I and II of \cite{bbm99}, which give our results for the spectra of $^{16}$O and $^{40}$Ca, respectively. Scalar and vector meson parameters are taken from Horowitz and Serot\cite{hs}, and we have assumed isospin symmetry. We see that the $J_z=\pm1/2$ spectrum contains the eigenvalues of all states, since all states must have a $J_z=\pm1/2 $ component. Furthermore, the essential feature is that the expected degeneracies among states with different values of $J_z$ are reproduced numerically. The obtained eigenvalues of the nucleon mode equation are essentially the same as the single particle energies of the ET formalism, to within the expected numerical accuracy of our program. This equality is not mandated by spherical symmetry alone because the solutions in the equal-time framework have non-vanishing components with negative values of $p^+$.
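The restoration argument can be verified directly: with $p^+=E(p)+p^3$, the identity $p_\perp^2+M^2=(E-p^3)(E+p^3)$ collapses the two terms into $2E(p)+M_{A-1}$, so $K$ is the same for any direction of $\bbox{p}$. A minimal numerical check (an illustration; the residual mass value is arbitrary):

```python
import numpy as np

M = 939.0                 # nucleon mass (MeV)
M_res = 15 * M            # mass of the residual (A-1) system; arbitrary here

def K(p):
    """Kinetic term (p_perp^2 + M^2)/p^+ + p^+ + M_{A-1} with p^+ = E + p^3."""
    px, py, pz = p
    E = np.sqrt(px**2 + py**2 + pz**2 + M**2)
    p_plus = E + pz
    return (px**2 + py**2 + M**2) / p_plus + p_plus + M_res

# three momenta with |p| = 250 MeV pointing in different directions
momenta = [(250.0, 0.0, 0.0), (0.0, 0.0, 250.0), (150.0, 0.0, 200.0)]
E250 = np.sqrt(250.0**2 + M**2)

for p in momenta:
    assert np.isclose(K(p), 2 * E250 + M_res)   # K depends only on |p|
```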
Table III of \cite{bbm99} gives the contributions to the total $P^+$ momentum from the nucleons, scalar mesons, and vector mesons for $^{16}$O, $^{40}$Ca, and $^{80}$Zr, as well as the nuclear matter limit. The vector mesons carry approximately 30\% of the nuclear plus-momentum. The technical reason for the difference with the scalar mesons (which have a negligible effect) is that the evaluation of $a^\dagger(\bbox{k},\omega)a(\bbox{k},\omega)$ counts vector mesons ``in the air'', and the resulting expression contains polarization vectors that give a factor of ${1\over k^+}$, which enhances the distribution of vector mesons at low $k^+$. The results for the vector meson distribution are shown in Fig.~2 of \cite{bbm99}. As the size of the nucleus increases, the enhancement of the distribution at lower values of $k^+$ becomes more evident. \subsection*{Lepton-nucleus deep inelastic scattering} It is worthwhile to see how the present results are related to lepton-nucleus deep inelastic scattering experiments. We find that the nucleons carry only about 70\% of the plus-momentum. The use of our $f_N$ in standard convolution formulae leads to a reduction of the nuclear structure function that is far too large ($\sim$95\% is needed \cite{emcrevs}) to account for the reduction observed \cite{emcrevs} in the vicinity of $x\sim 0.5$. The reason for this is that the quantity $M +g_s\phi$ acts as a nucleon effective mass of about 670 MeV, which is very small. A similar difficulty occurs in the $(e,e')$ reaction \cite{frank} when the mean field theory is used for the initial and final states. The use of a small effective mass and a large vector potential enables a simple reproduction of the nuclear spin-orbit force \cite{bsjdw,hs}. Furthermore, the use of other Lagrangians\cite{zim,qmc} will lead to improved results.
We also expect that including effects beyond the mean field would lead to a significant effective tensor coupling of the isoscalar vector meson \cite{ls}, and to an increased value of the effective mass. Such effects are incorporated in Brueckner theory, and a light-front version \cite{rmgm98} could be applied to finite nuclei with better success in reproducing the data. This is discussed in the next sections. \section*{Correlated Infinite Nuclear Matter} The first step is to derive a light front version of the nucleon-nucleon interaction. This is most easily done within the framework of the one boson exchange approximation. The formalism and philosophy are discussed in \cite{jerry}, and the calculation is discussed in \cite{rmgm98}. The nucleon-nucleon potential $V(NN)$ describes phase shifts reasonably well. The corresponding density is ${\cal V}(NN)$. The basic Lagrangian density contains a free nucleon term ${\cal L}_0(N)$, a free meson term ${\cal L}_0({\rm mesons})$ and an interaction term ${\cal L}_I(N,{\rm mesons})$ but does not contain ${\cal V}(NN)$. Thus one adds this term and subtracts it: \begin{eqnarray} {\cal L}&=&{\cal L}_0(N)-{\cal V}(NN) + {\cal L}_{\rm m}\\ {\cal L}_{\rm m}& =&{\cal L}_I(N,{\rm mesons})+{\cal L}_0({\rm mesons}) +{\cal V}(NN).\end{eqnarray} We use the term ${\cal L}_0(N)-{\cal V}(NN)$ to obtain a first solution $\mid\Phi\rangle$ to the many-body problem. The term ${\cal L}_{\rm m} $ accounts for the mesonic content of Fock space, and we present\cite{rmgm98} a scheme to incorporate the effects of ${\cal L}_{\rm m} $ and calculate the full wave function $\mid\Psi\rangle$. Our procedure allows us to assess whether or not ${\cal V}(NN)$ has been chosen well. If it has, the effects of ${\cal L}_{\rm m} $ can be treated perturbatively. Solving for $\mid\Phi\rangle$ is no easy task -- it demands a separate non-perturbative treatment. One introduces a mean field $U_{MF}$ which acts on single nucleons.
\begin{equation}{\cal L}_0(N)-{\cal V}(NN) ={\cal L}_0(N)-U_{MF} + \left(U_{MF}-{\cal V}(NN)\right).\end{equation} The operator $U_{MF}$ is chosen to minimize the effects of $\langle \Psi|U_{MF}-{\cal V}(NN)|\Psi \rangle.$ There is a well-known procedure, called Brueckner theory, which is used to determine $U_{MF}$. In schematic terms: \begin{equation} U_{MF}\sim G \times \rho,\end{equation} in which $G$ is a nucleon-nucleon scattering matrix, as modified by the Pauli principle, $\rho$ is the nuclear density, and the $\times$ represents a convolution. The result \cite{rmgm98} is a rather complete theory in which the full wave function $|\Psi\rangle$ includes the effects of both NN correlations and explicit mesons. \subsection*{Results} The trivial nature of the vacuum in the light front formalism was exploited in deriving\cite{rmgm98} the necessary equations. With our light front OBEP, the nuclear matter saturation properties are reasonably well reproduced. The binding energy per nucleon is 14.71 MeV with a value of $k_F$ of 1.35 fm$^{-1}$. This is good considering that we have no three-body force. The computed value of the compressibility, 180 MeV, is smaller than that of alternative relativistic approaches to nuclear matter, in which the compressibility usually comes out too large. The replacement of meson degrees of freedom by a NN interaction was shown to be a reasonable approximation, and the formalism allows one to calculate corrections to this approximation in a well-organized manner. When the mesonic Fock space components of the nuclear wave function are studied, we find that there are about 0.05 excess pions per nucleon. The magnitudes of the scalar and vector potentials are far smaller than found in the mean field approximation. Our first calculation neglected the influence of two-particle-two-hole states to obtain an approximate version of $f(k^+)$; the nucleons carry 81\% (as opposed to the 65\% of mean field theory) of the nuclear plus-momentum.
This is a vast improvement in the description of nuclear deep inelastic scattering, as the minimum value of the ratio $F_{2A}/F_{2N}$ is increased by a factor of twenty towards the data. This is not enough to provide a satisfactory description, but it is an excellent start. I am optimistic about future results because including nucleons with momentum greater than $k_F$ can be expected to substantially increase the computed ratio $F_{2A}/F_{2N}$\cite{rmgm98}. Let me discuss the observational aspects, concentrating on the experimental information about the nuclear pionic content. The Drell-Yan experiment on nuclear targets \cite{dyexp} showed no enhancement of nuclear pions within an error of about 5\%-10\% for their heaviest target. Understanding this result is an important challenge for theories of nuclear dynamics~\cite{missing}. Here we have a good description of nuclear dynamics, and our 5\% enhancement is consistent, within errors, with the Drell-Yan data. \section*{SUMMARY} The light front approach has been applied, within the mean field approximation, to both infinite nuclear matter and finite nuclei. Furthermore, LF studies of $\pi N$ and $NN$ scattering have been made. This is input to LF calculations of correlated nucleons in infinite nuclear matter. One can use light front dynamics to compute nuclear energies, wave functions and the experimentally observable plus-momentum distributions for a wide variety of Lagrangians. There are indications that the computed quantities will ultimately be in good agreement with experiment. The use of light front dynamics in nuclear physics is only in its infancy, but it seems to be a tool that can be used for any problem in high energy nuclear physics. \section*{Acknowledgments} These lectures are based on work performed in collaboration with P.G.~Blunden, M.~Burkardt, and R.~Machleidt.
\section{Introduction}\label{sec:intro} Competitive analysis is a well-known technique to measure the quality of online versus offline decisions \cite{borodin1998,sleator1985}. \emph{Online} decisions are irrevocable (i.e.\ we cannot change past decisions) and instantaneous (i.e.\ we cannot use future knowledge). \emph{Offline} decisions are made supposing the entire problem information is available. Competitive analysis has been applied in various areas over the years, e.g.\ online bipartite matching, online stochastic matching, online sequential allocation, online sequential bin packing, online scheduling \cite{albers2012,gyorgy2010,jaillet2014,karp1990,khuller1994}. For some online problems, quite successful algorithms are already known under particular assumptions about the arriving input (e.g.\ \cite{brubach2016}). For other problems, this is unfortunately not the case. For example, in the uniform knapsack problem, any deterministic online algorithm without advice has an unbounded competitive ratio. Interestingly, with just one bit of advice, it is possible to implement a 2-competitive algorithm for this problem \cite{marchetti1995}. In general, we can increase the competitive ratio of any online algorithm by giving it enough advice. This motivates the development of novel frameworks such as \emph{advice complexity}. An online algorithm now has access to an \emph{oracle tape} for the problem of interest and can request an \emph{advice string} when making a decision. The oracle is normally assumed to have unlimited computational power, but the number of bits in the advice string must be polynomially bounded in the size of the offline problem input. For a detailed survey on advice complexity, we refer to \cite{boyar2016}. Advice complexity is also related to \emph{semi-online} and \emph{look-ahead} algorithms that suppose some of the input is available in advance \cite{seiden2000}. This raises a number of questions.
How many advice bits are sufficient to increase the competitive ratio of an online algorithm to a certain threshold? How many bits are needed to match an {\em optimal} offline algorithm? For example, in the popular paging problem, to achieve offline optimality with an online algorithm we need $\lceil\log_2 k\rceil$ bits of advice to specify which page to delete from the buffer of size $k$. This results in an advice complexity of $n\cdot\lceil\log_2 k\rceil$ for instances with $n$ requests, whereas it is shown that $n+k$ bits of advice suffice \cite{bockenhauer2009,dobrev2009}. As another example, in online bipartite matching with a graph of size $n$ (i.e.\ the number of vertices in a partition), a corresponding deterministic online algorithm is optimal (w.r.t.\ the expected matching size) whenever it has access to $\lceil\log_2 n!\rceil$ advice bits, but not fewer \cite{miyazaki2014}. Here, for the first time, we introduce techniques from competitive analysis {\em and} advice complexity into online fair division. Online fair division is an important and challenging problem facing society today due to the uncertainty we may have about future resources, e.g.\ deceased organs to patients, donated food to charities, electric vehicles to charging stations, tenants to houses, even students to courses, etc. We often cannot wait until the end of the year, week or even day before starting to allocate incoming resources. For example, organs cannot be kept too long on ice, and products cannot be stored in the warehouse for long before being distributed to a food bank \cite{walsh2014,walsh2015}. We extend past work by asking how many advice bits are needed to increase the welfare. Advice helps us understand how the competitive ratio depends on uncertainty about the future. It can be based on information about past or future items. For example, consider the allocation of food donations to charities by a central decision maker.
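The per-request advice scheme for paging mentioned above can be sketched concretely. In this illustration (not the more economical $n+k$-bit scheme of the cited papers), the oracle simulates Belady's optimal offline rule -- on a fault, evict the resident page whose next request lies farthest in the future -- and records the index of the buffer slot to evict, which takes $\lceil\log_2 k\rceil$ bits per fault:

```python
import math

def belady_with_advice(requests, k):
    """Oracle side: simulate Belady's rule and record, for every eviction,
    the index of the buffer slot to evict (ceil(log2 k) bits each)."""
    cache, advice, faults = [], [], 0
    for t, page in enumerate(requests):
        if page in cache:
            continue
        faults += 1
        if len(cache) < k:
            cache.append(page)       # cold miss: buffer not yet full
            advice.append(None)
        else:
            def next_use(q):         # position of the next request for q
                for s in range(t + 1, len(requests)):
                    if requests[s] == q:
                        return s
                return float('inf')  # never requested again
            slot = max(range(k), key=lambda i: next_use(cache[i]))
            advice.append(slot)
            cache[slot] = page
    return faults, advice

requests = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
faults, advice = belady_with_advice(requests, k=4)
bits_per_fault = math.ceil(math.log2(4))   # 2 bits identify the evicted slot
print(faults)                              # 6: the optimal offline fault count
```

An online algorithm that simply reads and obeys these slot indices incurs exactly the optimal number of faults.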
A number of contractors usually donate food on a regular basis and at specific times, so the decision maker knows when some of the items will arrive. Also, each item could have a \emph{type}, that is, the set of charities that like the item. The oracle might then keep track of the item types that have arrived in the past and thus bias the allocation of the new item type whenever possible. As another example, consider the allocation of deceased organs to patients. The administrator of a hospital might know what organs will arrive that can be exchanged with a neighboring hospital. They might use this offline information to significantly improve the best local online match for the current organ. Further, the oracle could keep track of how long patients have been on the waiting list and thus bias the future organ matching decisions based on this information under various constraints, e.g.\ a patient should not wait for an organ for more than 30 days, a patient who joined the waiting list at time 10 should receive organs earlier than a patient who arrived after time 10, etc. \emph{Our contributions:} Our work is novel for several reasons. For example, we combine advice complexity and competitive analysis in the context of online fair division. As another example, we study multiple objectives and the online competitiveness of mechanisms. We first observe a 1-to-1 correspondence between online bipartite matching and online fair division. By using this correspondence, we can transfer and significantly extend objectives and algorithms from online bipartite matching to online fair division and vice versa. This is useful for a number of reasons. For example, agents in fair division have preferences and can be strategic, which is an aspect not typically considered in bipartite matching. As a second example, allocations may be more difficult to find than matchings if we want them to satisfy multiple fairness and efficiency criteria.
We thus view algorithms for online bipartite matching as \emph{mechanisms} for online fair division. Following this, we study the competitive performance of the popular matching \textsc{Ra\-nk\-ing}\ mechanism and the attractive \textsc{Li\-ke}, \textsc{Ba\-la\-n\-ced Li\-ke}\ and \textsc{Ma\-xi\-mum Li\-ke}\ allocation mechanisms w.r.t.\ three different objectives: the \emph{expected matching size}, the \emph{utilitarian welfare} and the \emph{egalitarian welfare}. We consider three settings, namely the online fair division setting \emph{without advice}, \emph{with full advice} and \emph{with partial advice}. In each of these settings, we analyse these four mechanisms and present a most competitive mechanism for each objective supposing adversarial input. We further plot their competitive ratios. We finally prove that there is no mechanism that maximizes the expected matching size or the egalitarian welfare and uses less than full advice. The next Section~\ref{sec:pre} provides the notions, the mechanisms and the objectives that we use throughout the paper. In Sections~\ref{sec:on},~\ref{sec:advone} and~\ref{sec:advtwo}, we report our results for the online setting without advice, the online setting with full advice and the online setting with partial advice, respectively. Finally, we discuss related work and future work, and conclude in Section~\ref{sec:rel}. \section{Preliminaries}\label{sec:pre} {\bf Online bipartite matching instance:} An \emph{instance} $\mathcal{G}$ has (1) a set of $n$ ``\emph{boy}'' vertices, (2) a set of $m$ ``\emph{girl}'' vertices, (3) a \emph{weight} matrix where the $(i,j)$-th cell contains the \emph{weight} of the edge between the $i$-th ``boy'' vertex and the $j$-th ``girl'' vertex, and (4) a \emph{sequence} of the ``girl'' vertices. We consider \emph{binary} (i.e.\ unweighted graph) and \emph{non-negative} (i.e.\ weighted graph) weights.
{\bf Online fair division instance:} An \emph{instance} $\mathcal{I}$ has (1) a set $A$ of \emph{agents} $a_1,\ldots,a_n$, (2) a set $O$ of indivisible \emph{items} $o_1,\ldots,o_m$, (3) a \emph{utility} matrix $U=(u_{ij})_{n\times m}$ where $u_{ij}$ is the private \emph{utility} of agent $a_i$ for item $o_j$, and (4) an \emph{ordering} $o$ of the items. We consider \emph{binary} and \emph{general} non-negative rational utilities. {\bf Online setting:} Let $\mathcal{G}_{\mathcal{I}}=(A,O,U,o)$ be the online bipartite graph associated with $\mathcal{I}$. We suppose that the ordering $o$ reveals item $o_j$ in round $j$, when each agent $a_i$ bids a rational non-negative value $v_{ij}$ for item $o_j$ and a \emph{mechanism} allocates item $o_j$ to an agent. Further, we assume that the ordering $o$ is adversarial, which captures the worst-case arrival sequences. {\bf Fair division axioms:} A mechanism is \emph{strategy-proof} if, with complete information, no agent can misreport their utilities and thus increase their expected outcome. Agent $a_1$ \emph{envies (ex ante) ex post} agent $a_2$ if $a_1$ assigns greater (expected) utility to the (expected) allocation of $a_2$ than to their own (expected) allocation. A mechanism is \emph{bounded envy-free (ex ante) ex post with $r$} if no agent envies (ex ante) ex post another one with more than $r$ given the (expected) allocation returned by the mechanism. A mechanism is \emph{(ex ante) ex post Pareto efficient} if its returned (expected) allocation is Pareto optimal. {\bf Mechanisms:} We use an oracle tape to specify some of the behavior of the optimal offline mechanism. An \emph{online} mechanism $M$ does not consult the oracle tape and makes the current decision supposing the past decisions are irrevocable and no information about future items is available. By comparison, its modification {\sc Adviced} $M$ can at each round decide whether to consult the oracle or not.
If ``yes'', the oracle writes the identifier of the agent that should receive the item on the advice tape; the mechanism then reads the tape and allocates the item to the adviced agent. If ``no'', {\sc Adviced} $M$ runs $M$ to allocate the current item. There are two extreme cases. If {\sc Adviced} $M$ does not read the oracle tape at any round, then its performance coincides with that of $M$. If {\sc Adviced} $M$ reads the oracle tape at each round, then its performance coincides with that of an optimal offline mechanism. We consider four online mechanisms. The \textsc{Ma\-xi\-mum Li\-ke}\ mechanism allocates each item $o_j$ uniformly at random to an agent with the greatest bid for $o_j$. The \textsc{Ra\-nk\-ing}\ mechanism from \cite{karp1990} picks a strict priority ordering over the agents uniformly at random and allocates each item $o_j$ to the agent that has a positive bid for it, has not been allocated items previously and has the greatest priority. We further use the \textsc{Li\-ke}\ and \textsc{Ba\-la\-n\-ced Li\-ke}\ mechanisms from \cite{aleksandrov2015ijcai}. The \textsc{Li\-ke}\ mechanism allocates each item $o_j$ uniformly at random to an agent that bids positively for the item. The \textsc{Ba\-la\-n\-ced Li\-ke}\ mechanism allocates each item $o_j$ uniformly at random to an agent among those agents who bid positively for the item and have been allocated the fewest items previously. We modify these four mechanisms to read advice bits from the oracle tape: {\sc Adviced} \textsc{Ma\-xi\-mum Li\-ke}, {\sc Adviced} \textsc{Ra\-nk\-ing}, {\sc Adviced} \textsc{Li\-ke}\ and {\sc Adviced} \textsc{Ba\-la\-n\-ced Li\-ke}. These mechanisms satisfy many nice axioms. For example, \textsc{Ma\-xi\-mum Li\-ke}\ is Pareto efficient. In fact, it is one of only a few Pareto efficient mechanisms but, unfortunately, it is neither strategy-proof nor envy-free. \textsc{Li\-ke}\ is strategy-proof and envy-free ex ante. 
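For concreteness, the four online mechanisms can be sketched in code. The following is our own minimal illustration (the function names and the item-to-agent dictionary interface are ours, not part of the formal model); as in the paper, it assumes that every item has at least one positive bidder:

```python
import random

def like(utils, order):
    """LIKE: each arriving item goes uniformly at random to an agent
    that bids positively for it.  `utils` is an n x m matrix and
    `order` lists item indices in arrival order; returns item -> agent."""
    alloc = {}
    for j in order:
        bidders = [i for i in range(len(utils)) if utils[i][j] > 0]
        alloc[j] = random.choice(bidders)  # assumes bidders is non-empty
    return alloc

def balanced_like(utils, order):
    """BALANCED LIKE: restrict the uniform random choice to positive
    bidders that currently hold the fewest items."""
    alloc, counts = {}, [0] * len(utils)
    for j in order:
        bidders = [i for i in range(len(utils)) if utils[i][j] > 0]
        fewest = min(counts[i] for i in bidders)
        alloc[j] = random.choice([i for i in bidders if counts[i] == fewest])
        counts[alloc[j]] += 1
    return alloc

def maximum_like(utils, order):
    """MAXIMUM LIKE: give each item uniformly at random to an agent
    with the greatest bid for it."""
    alloc = {}
    for j in order:
        top = max(utils[i][j] for i in range(len(utils)))
        alloc[j] = random.choice(
            [i for i in range(len(utils)) if utils[i][j] == top])
    return alloc

def ranking(utils, order):
    """RANKING: draw one uniformly random priority over agents; each
    item goes to the unmatched positive bidder of greatest priority
    (items with no such bidder are discarded)."""
    priority = list(range(len(utils)))
    random.shuffle(priority)
    alloc, matched = {}, set()
    for j in order:
        for i in priority:
            if i not in matched and utils[i][j] > 0:
                alloc[j] = i
                matched.add(i)
                break
    return alloc
```

Each function consumes a utility matrix and an arrival order and returns one random allocation; expected objectives are then obtained by averaging over repeated runs.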
In fact, each envy-free ex ante mechanism assigns probabilities for items to agents as \textsc{Li\-ke}\ does. However, \textsc{Li\-ke}\ is not envy-free ex post. In contrast, the \textsc{Ba\-la\-n\-ced Li\-ke}\ mechanism bounds the envy ex post. Interestingly, with 0/1 utilities, it is also Pareto efficient and envy-free ex ante. We further analyse the matching \textsc{Ra\-nk\-ing}\ mechanism from a fair division point of view. For example, it is strategy-proof, envy-free ex ante and bounds the envy ex post but only with simple 0/1 utilities. However, it is not Pareto efficient in this setting as it may discard items. These axiomatic properties are well-understood. We, therefore, turn attention to the competitive properties of these mechanisms. {\bf Objectives:} Given instance $\mathcal{I}$, each mechanism induces a probability distribution over a set $\Pi_{\mathcal{I}}$ of allocations. The \emph{expected matching size} $\overline{k}(\mathcal{I})$ is equal to $\sum_{\pi\in \Pi_{\mathcal{I}}} p(\pi)\cdot k(\pi)$ where $p(\pi)$ is the probability of allocation $\pi$ and $k(\pi)$ is the number of agents that are allocated items in $\pi$. The \emph{expected utility} $\overline{u}_{i}(\mathcal{I})$ of agent $a_i$ is $\sum_{j=1}^m p_i(j,\mathcal{I})\cdot u_{ij}$ where $p_i(j,\mathcal{I})$ is the probability with which agent $a_i$ receives item $o_j$. The \emph{utilitarian welfare} $\overline{u}(\mathcal{I})$ is equal to $\sum_{i=1}^n \overline{u}_{i}(\mathcal{I})$. The \emph{egalitarian welfare} $\overline{e}(\mathcal{I})$ is equal to $\min_{i=1}^n \overline{u}_{i}(\mathcal{I})$. \begin{myexample}\label{exp:one} $(${\bf Upper-triangular instance}$)$ Consider $\mathcal{I}$ with $n$ agents and $n$ items, and let each agent $a_i$ have utility 1 for items $o_1$ to $o_{n-i+1}$ and utility 0 for the remaining items. 
For a deterministic mechanism that allocates all items to agents that like them, we have that $\overline{k}(\mathcal{I})\in\lbrace 1,2,\ldots,n\rbrace$, $\overline{u}(\mathcal{I})=n$ and $\overline{e}(\mathcal{I})\in\lbrace 0,1\rbrace$. \hfill\mbox{$\Box$} \end{myexample} {\bf Performance measures:} We use the objectives to define three statistics that measure the performance of online mechanisms over all instances. \begin{equation} (ES) \min_{\mathcal{I}}\overline{k}(\mathcal{I}) \end{equation} \begin{equation} (UW) \min_{\mathcal{I}}\overline{u}(\mathcal{I}) \end{equation} \begin{equation} (EW) \min_{\mathcal{I}}\overline{e}(\mathcal{I}) \end{equation} {\bf Online ratios with advice:} We say that an online mechanism $M$ has an \emph{offline (online) competitive ratio} $c(m)$ with $m$ advice bits w.r.t. welfare $W$ if, for each instance $\mathcal{I}$ with an ordering $o$ of $m$ items, we have that $W(OPT_{off(on)})\leq c(m)\cdot W(M(\mathcal{I}))+b(m)$ holds where $b(m)$ is an additive constant and $OPT_{off(on)}$ is the optimal offline (online) mechanism. Note that the $OPT_{off}$ mechanism does not depend on the ordering of the items whilst $OPT_{on}$ does. A mechanism $M$ is \emph{most $c(m)$-competitive} w.r.t. welfare $W$ if $M$ has a competitive ratio $c(m)$ w.r.t. $W$ and each other mechanism has a competitive ratio that is at least $c(m)$. We say that $M_1$ is \emph{strictly better} than $M_2$ on a set of instances if the welfare value of $M_1$ is not lower than that of $M_2$ on all instances from the set, and greater than that of $M_2$ on some instances from the set. We say that $M_1$ and $M_2$ are \emph{incomparable} if $M_1$ is strictly better than $M_2$ on some instances and $M_2$ is strictly better than $M_1$ on some other instances. We suppose throughout the paper that agents \emph{sincerely} report their utilities for items. 
Also, we assume that each agent has positive utility for at least one item and each item is liked by at least one agent. We show our results for the case when $m=n$ and there is a \emph{perfect allocation} in $\mathcal{I}$ (or \emph{perfect matching} in $\mathcal{G}_{\mathcal{I}}$), i.e.\ an allocation in which each agent receives exactly one item that they like. However, we also draw conclusions for the case when $m>n$ and there is an allocation in which each agent receives at least one item that they like. Finally, we have extended all our results to the case when the maximum number of agents that receive items that they like in each possible allocation is $k<n$ (or \emph{maximum matching} in $\mathcal{G}_{\mathcal{I}}$ of cardinality $k<n$). However, we omit these results for reasons of space. \section{Online Fair Division without Advice}\label{sec:on} We study the competitiveness of our four online mechanisms w.r.t. the optimal offline mechanism for the expected matching size (ES), the utilitarian welfare (UW) and the egalitarian welfare (EW). The optimal \emph{offline} mechanism returns an allocation in which each agent receives exactly one item for (ES), an allocation in which each item is received by an agent that values it most for (UW) and a perfect allocation that maximizes the egalitarian welfare for (EW). \subsection{Expected Matching Size}\label{subsec:exp} A mechanism that maximizes the objective $\overline{k}(\mathcal{I})$ also maximizes both $\overline{u}(\mathcal{I})$ and $\overline{e}(\mathcal{I})$ simultaneously when agents have simple binary utilities. By Theorem 2 from \cite{miyazaki2014}, no deterministic online mechanism can maximize (ES). We, therefore, turn our attention to randomized mechanisms for (ES). By \cite{karp1990}, the \textsc{Ra\-nk\-ing}\ mechanism is most competitive for (ES) with an expected matching size of $n\cdot (1-\frac{1}{e})+o(n)$ when the arriving sequence is adversarial. Its competitive ratio is $1+\frac{1}{e-1}$. 
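The expected matching size of a randomized mechanism can be estimated by simulation. The following sketch (our own illustrative code, not the paper's) does so for \textsc{Li\-ke}\ on the upper-triangular instance of Example~\ref{exp:one}:

```python
import random

def like(utils, order):
    # LIKE: each item goes uniformly at random to a positive bidder.
    return {j: random.choice([i for i in range(len(utils)) if utils[i][j] > 0])
            for j in order}

def expected_matching_size(mechanism, utils, order, trials=2000):
    """Monte Carlo estimate of (ES): the expected number of distinct
    agents that receive at least one item."""
    total = 0
    for _ in range(trials):
        total += len(set(mechanism(utils, order).values()))
    return total / trials

# Upper-triangular instance (Example 1) for n = 3:
# agent a_i likes items o_1 to o_{n-i+1} and nothing else.
n = 3
U = [[1 if j < n - i else 0 for j in range(n)] for i in range(n)]
```

For this tiny instance the exact expectation under \textsc{Li\-ke}\ works out to $2$, which the simulation reproduces to within sampling error.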
For this reason, we next report the competitive ratios of \textsc{Ba\-la\-n\-ced Li\-ke}, \textsc{Li\-ke}\ and \textsc{Ma\-xi\-mum Li\-ke}\ with respect to the optimal offline mechanism and \textsc{Ra\-nk\-ing}. The optimal offline mechanism returns a matching of expected size $n$. \begin{mytheorem}\label{thm:one} The \textsc{Li\-ke}\ and \textsc{Ba\-la\-n\-ced Li\-ke}\ mechanisms are $2$-competitive and $2\cdot(1-\frac{1}{e})$-online competitive whereas the \textsc{Ma\-xi\-mum Li\-ke}\ mechanism is $n$-competitive and $n\cdot(1-\frac{1}{e})$-online competitive for (ES). \end{mytheorem} \begin{myproof} For \textsc{Ba\-la\-n\-ced Li\-ke}, consider the \textsc{Ra\-n\-d\-om}\ mechanism that allocates each next item uniformly at random to an agent among those with 0 items. If no such agent exists for the current item, then \textsc{Ra\-n\-d\-om}\ discards the item. The \textsc{Ba\-la\-n\-ced Li\-ke}\ mechanism can be seen as a \emph{completion} of \textsc{Ra\-n\-d\-om}, i.e.\ \textsc{Ba\-la\-n\-ced Li\-ke}\ allocates even the items that \textsc{Ra\-n\-d\-om}\ discards. It is easy to prove that the expected matching sizes of \textsc{Ba\-la\-n\-ced Li\-ke}\ and \textsc{Ra\-n\-d\-om}\ are the same for each instance. By \cite{karp1990}, the minimum such size is equal to $\frac{n}{2}+o(\log_2 n)$. For \textsc{Li\-ke}, consider an instance with $n$ agents and $n$ items. Suppose that each agent likes the first $n/2$ items. The remaining $n/2$ items are chosen by the adversary. We have that $k\in[1,n/2]$ different agents are allocated the first $n/2$ items. The adversary then chooses the next $n/2$ items in such a way that $n/2$ different agents like them and $k$ of these are the agents matched to the first $n/2$ items. The expected matching size is $\frac{n}{2}+o(\log_2 n)$. There could be instances, however, where this size is lower. 
For \textsc{Ma\-xi\-mum Li\-ke}, consider an instance with $n$ agents and $n$ items. Let us suppose that all agents have positive utilities for all items but only agent $a_1$ has the greatest utility for each item. The mechanism gives all items to agent $a_1$ and thus achieves a matching size of 1. Note that this is the worst possible outcome. For each instance, the expected matching size of this mechanism is at least 1 because it allocates all items, so at least one agent receives an item.\hfill \mbox{$\Box$} \end{myproof} \begin{myobservation}\label{obs:one} The \textsc{Ra\-nk\-ing}\ mechanism is strictly better than the \textsc{Ba\-la\-n\-ced Li\-ke}\ mechanism which is strictly better than the \textsc{Li\-ke}\ mechanism for (ES). \end{myobservation} For \textsc{Ra\-nk\-ing}\ and \textsc{Ba\-la\-n\-ced Li\-ke}, the result in Observation~\ref{obs:one} follows immediately from Theorem~\ref{thm:one}. By Lemma~\ref{lem:one}, \textsc{Ba\-la\-n\-ced Li\-ke}\ is at least as competitive as \textsc{Li\-ke}\ for each instance. For some instances, \textsc{Ba\-la\-n\-ced Li\-ke}\ is more competitive than \textsc{Li\-ke}. Hence, \textsc{Ba\-la\-n\-ced Li\-ke}\ is strictly better than \textsc{Li\-ke}. \begin{mylemma}\label{lem:one} Let $\pi_j$ be an allocation of items $o_1$ to $o_j$, and $\rho_j$ and $\sigma_j$ extend $\pi_j$ to all items by using \textsc{Ba\-la\-n\-ced Li\-ke}\ and \textsc{Li\-ke}, respectively. Further, let $b(\rho_j)$ and $l(\sigma_j)$ be their probabilities. For each instance, $j\in[1,n]$ and $\pi_j$, we have that $\sum_{\rho_j} b(\rho_j)\cdot k(\rho_j)\geq \sum_{\sigma_j} l(\sigma_j)\cdot k(\sigma_j)$ holds. \end{mylemma} \textsc{Ra\-nk\-ing}\ outperforms \textsc{Ma\-xi\-mum Li\-ke}\ in the worst case over all instances. In contrast, there are individual instances on which \textsc{Ma\-xi\-mum Li\-ke}\ outperforms \textsc{Ra\-nk\-ing}. We illustrate this in Example~\ref{exp:two}. 
\begin{myexample}\label{exp:two} $(${\bf Expected matching incomparabilities}$)$ Consider the fair division of 2 items between 2 agents. Let $u_{11}=2,u_{12}=0,u_{21}=1,u_{22}=2$. The expected matching sizes of \textsc{Ma\-xi\-mum Li\-ke}\ and \textsc{Ra\-nk\-ing}\ are $2$ and $3/2$, respectively. \hfill\mbox{$\Box$} \end{myexample} If $m>n$, our results hold as well. We conclude that \textsc{Ra\-nk\-ing}\ is more competitive than \textsc{Ba\-la\-n\-ced Li\-ke}, \textsc{Li\-ke}\ and \textsc{Ma\-xi\-mum Li\-ke}\ for (ES) in the worst case. \subsection{Utilitarian Welfare}\label{subsec:util} In general, the utilitarian welfare can be maximized even online with no information about future items. One most competitive online mechanism that achieves the optimal offline welfare is \textsc{Ma\-xi\-mum Li\-ke}. Hence, the offline and online competitive ratios of online mechanisms conflate to just one competitive ratio. \begin{myproposition}\label{prop:one} With general utilities, the \textsc{Ma\-xi\-mum Li\-ke}\ mechanism maximizes (UW). \end{myproposition} \begin{myproof} \textsc{Ma\-xi\-mum Li\-ke}\ allocates each next item in the ordering to an agent with the greatest utility for the item. The returned online welfare value coincides with the maximum possible offline value of this welfare, i.e.\ the maximum utility sum over the items. \hfill\mbox{$\Box$} \end{myproof} The result in Proposition~\ref{prop:one} is straightforward in our setting but there are fair division settings in which optimizing the utilitarian welfare is intractable even \emph{offline}, when the entire problem input is available \cite{nguyen2014}. We therefore consider this result fundamental. On the other hand, with binary utilities, note that each mechanism that gives all items to agents that like them maximizes the utilitarian welfare. Indeed, \textsc{Ba\-la\-n\-ced Li\-ke}\ and \textsc{Li\-ke}\ do maximize it whereas \textsc{Ra\-nk\-ing}\ does not because it might discard items. 
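To see Proposition~\ref{prop:one} at work, note that (UW) decomposes item by item: whatever the tie-breaking, \textsc{Ma\-xi\-mum Li\-ke}\ realizes the sum of the column maxima of the utility matrix, which is exactly the offline optimum. A small numeric check (our own illustrative code):

```python
def maximum_like_alloc(utils, order):
    """MAXIMUM LIKE, derandomized by taking the lowest-index top bidder:
    the realized utilitarian welfare is the same for any tie-breaking."""
    alloc = {}
    for j in order:
        top = max(row[j] for row in utils)
        alloc[j] = next(i for i, row in enumerate(utils) if row[j] == top)
    return alloc

def utilitarian_welfare(utils, alloc):
    # Sum of the receiving agents' utilities over all allocated items.
    return sum(utils[i][j] for j, i in alloc.items())

def offline_optimum_uw(utils):
    # The offline optimum gives every item to a highest bidder,
    # so it equals the sum of column maxima.
    return sum(max(row[j] for row in utils) for j in range(len(utils[0])))
```

On any utility matrix the two welfare values coincide, which is the content of the proposition.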
\begin{myobservation}\label{obs:two} With 0/1 utilities, the \textsc{Ba\-la\-n\-ced Li\-ke}\ and \textsc{Li\-ke}\ mechanisms are strictly better than the \textsc{Ra\-nk\-ing}\ mechanism for (UW). \end{myobservation} With general utilities, \textsc{Li\-ke}\ is $n$-competitive; see the example in the proof of Theorem 9 from \cite{aleksandrov2015ijcai}. By comparison, \textsc{Ra\-nk\-ing}\ and \textsc{Ba\-la\-n\-ced Li\-ke}\ are not competitive from a utilitarian perspective even with just two agents and two items. We illustrate these results in Example~\ref{exp:three}. \begin{myexample}\label{exp:three} $(${\bf Utilitarian non-competitiveness}$)$ Consider the fair division of 2 items between 2 agents. Let $u_{11}=0,u_{12}=1,u_{21}=1,u_{22}=u$. The optimal offline utilitarian welfare is $u+1$ whereas that of \textsc{Ba\-la\-n\-ced Li\-ke}\ and \textsc{Ra\-nk\-ing}\ is $2$. Their ratios go to $\infty$ as $u$ goes to $\infty$. \hfill\mbox{$\Box$} \end{myexample} Our Example~\ref{exp:three} is in line with an impossibility example and an impossibility remark presented by \cite{khuller1994} for online weighted bipartite matching. These show that there is no deterministic or randomized online algorithm that maximizes (or minimizes) the \emph{perfect utilitarian welfare} (the sum of the utilities in a perfect allocation) with a competitive ratio that depends only on the number of agents $n$. In contrast, our utilitarian welfare objective (UW) is different because its maximum value could be obtained by allocating all items to a single agent. As a result, \textsc{Ma\-xi\-mum Li\-ke}\ is a mechanism whose competitive ratio does not depend even on $n$ and \textsc{Li\-ke}\ is a mechanism whose competitive ratio depends solely on $n$. If $m>n$, the \textsc{Ma\-xi\-mum Li\-ke}\ mechanism remains optimal for (UW). 
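Example~\ref{exp:three} can be checked mechanically. The sketch below (our own illustrative code) computes the exact expected utilitarian welfare of \textsc{Ba\-la\-n\-ced Li\-ke}\ by enumerating its uniform random choices; on the instance of Example~\ref{exp:three} every choice is forced, so the recursion returns $2$ while the offline optimum is $u+1$:

```python
from fractions import Fraction

def expected_uw_balanced_like(utils, order):
    """Exact expected utilitarian welfare of BALANCED LIKE, computed by
    recursing over its uniform random choices (feasible for tiny
    instances; assumes every item has a positive bidder)."""
    n = len(utils)

    def rec(t, counts):
        if t == len(order):
            return Fraction(0)
        j = order[t]
        bidders = [i for i in range(n) if utils[i][j] > 0]
        fewest = min(counts[i] for i in bidders)
        pool = [i for i in bidders if counts[i] == fewest]
        total = Fraction(0)
        for i in pool:                       # each choice has prob 1/len(pool)
            counts[i] += 1
            total += Fraction(utils[i][j]) + rec(t + 1, counts)
            counts[i] -= 1                   # undo for the next branch
        return total / len(pool)

    return rec(0, [0] * n)
```

For instance, with $u_{11}=0$, $u_{12}=1$, $u_{21}=1$, $u_{22}=u$ the welfare stays at $2$ however large $u$ grows, so the ratio $(u+1)/2$ is unbounded.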
We use the argument in the proof of Theorem 9 from \cite{aleksandrov2015ijcai} to construct an example and show that \textsc{Li\-ke}\ remains $n$-competitive. Both \textsc{Ra\-nk\-ing}\ and \textsc{Ba\-la\-n\-ced Li\-ke}\ remain non-competitive; see the example in the proof of Theorem 10 from \cite{aleksandrov2015ijcai}. We conclude that \textsc{Ma\-xi\-mum Li\-ke}\ is more competitive than \textsc{Ra\-nk\-ing}, \textsc{Ba\-la\-n\-ced Li\-ke}\ and \textsc{Li\-ke}\ for (UW) in any case. \subsection{Egalitarian Welfare}\label{subsec:egal} In this section, we optimize the egalitarian welfare. It is easy to see that there is no deterministic online mechanism that maximizes the egalitarian welfare. We focus therefore on randomized mechanisms. With binary utilities, both \textsc{Li\-ke}\ and \textsc{Ba\-la\-n\-ced Li\-ke}\ are $n$-competitive from an egalitarian perspective; see Example~\ref{exp:one}. Moreover, \textsc{Ma\-xi\-mum Li\-ke}\ is equivalent to \textsc{Li\-ke}\ and hence it is also $n$-competitive. With general utilities, \textsc{Ma\-xi\-mum Li\-ke}\ is unfortunately not competitive at all even with just two agents and two items. See Example~\ref{exp:four} for this simple result. \begin{myexample}\label{exp:four} $(${\bf Egalitarian non-competitiveness}$)$ Consider the fair division of 2 items between 2 agents. Let $u_{11}=2,u_{12}=2,u_{21}=1,u_{22}=1$. An optimal offline egalitarian mechanism gives, say, item $o_1$ to agent $a_1$ with probability $1$ and item $o_2$ to agent $a_2$ with probability $1$. Its egalitarian welfare is equal to $1$. \textsc{Ma\-xi\-mum Li\-ke}\ gives items $o_1$ and $o_2$ to agent $a_1$ with probability 1. Its welfare is equal to 0. Hence, its ratio is $\infty$. \hfill\mbox{$\Box$} \end{myexample} Interestingly, with general utilities, \textsc{Li\-ke}, \textsc{Ba\-la\-n\-ced Li\-ke}\ and \textsc{Ra\-nk\-ing}\ are all most $n$-competitive from an egalitarian perspective. 
\begin{mytheorem}\label{thm:two} With general utilities, the \textsc{Ba\-la\-n\-ced Li\-ke}, \textsc{Li\-ke}\ and \textsc{Ra\-nk\-ing}\ mechanisms are most $n$-competitive for (EW). \end{mytheorem} \begin{myproof} The mechanisms have competitive ratios of $n$. Consider instance $\mathcal{I}$, agent $a_i$ and the first item $o_j$ in the ordering such that agent $a_i$ has positive utility for it. We show that $\overline{e}(\mathcal{I})$ is at least $\frac{1}{n}$. With \textsc{Li\-ke}, we have that the probability $p_i(j,\mathcal{I})$ of agent $a_i$ for item $o_j$ is equal to $1/n_j$ where $n_j$ is the number of agents that like item $o_j$. Since $n_j\leq n$, we have $p_i(j,\mathcal{I})\geq 1/n$. With \textsc{Ba\-la\-n\-ced Li\-ke}\ and \textsc{Ra\-nk\-ing}, the worst case for agent $a_i$ is when they have been allocated 0 items prior to round $j$ and all agents together have positive utilities for item $o_j$. Therefore, we have $p_i(j,\mathcal{I})\geq 1/n_j\geq 1/n$. Hence, agent $a_i$ receives expected utility of at least $\frac{1}{n}$. This lower bound is achieved in Example~\ref{exp:one}. Next, we confirm that every other mechanism has a competitive ratio of at least $n$. Consider the upper-triangular instance from Example~\ref{exp:one} and a mechanism $M$. If $M$ shares the probability for the first item uniformly at random, then its egalitarian welfare is at most $\frac{1}{n}$ and its competitive ratio is thus at least $n$. If $M$ shares the probability for the first item not uniformly at random, then its welfare on some instance is even lower than $\frac{1}{n}$, as we argue next. Suppose that $M$ gives the first item to agent $a_n$ with probability $p>\frac{1}{n}$. The probability of some other agent must be smaller than $\frac{1}{n}$ as these probability values sum up to at most 1. WLOG, suppose that the probability $q$ of agent $a_1$ for this first item is one such value smaller than $\frac{1}{n}$. The egalitarian welfare on the upper-triangular instance is then $p$. 
However, consider next the \emph{lower-triangular} instance, i.e.\ agent $a_i$ likes items $o_1$ to $o_i$. The mechanism gives expected utility of $q<\frac{1}{n}$ to agent $a_1$. This value is also the welfare on the lower-triangular instance. $M$ has a competitive ratio of $1/q$ because the optimal offline welfare is 1. \hfill\mbox{$\Box$} \end{myproof} \begin{myobservation}\label{obs:three} With 0/1 utilities, the \textsc{Ra\-nk\-ing}\ mechanism is strictly better than the \textsc{Ba\-la\-n\-ced Li\-ke}\ mechanism which is strictly better than the \textsc{Li\-ke}\ mechanism for (EW). \end{myobservation} Observation~\ref{obs:three} can be shown similarly to Observation~\ref{obs:one}. Surprisingly, there are instances on which \textsc{Ma\-xi\-mum Li\-ke}\ outperforms all the other three mechanisms even though it is not competitive in general. See Example~\ref{exp:five} for this result. \begin{myexample}\label{exp:five} $(${\bf Egalitarian incomparabilities}$)$ Let $\mathcal{I}$ have 2 items, 2 agents and $u_{11}=2,u_{12}=1,u_{21}=1,u_{22}=2$. The value of $\overline{e}(\mathcal{I})$ of \textsc{Ma\-xi\-mum Li\-ke}\ is $2$ whereas the value of $\overline{e}(\mathcal{I})$ of \textsc{Ba\-la\-n\-ced Li\-ke}, \textsc{Li\-ke}\ or \textsc{Ra\-nk\-ing}\ is equal to $3/2$.\hfill\mbox{$\Box$} \end{myexample} If $m>n$, \textsc{Ra\-nk\-ing}\ and \textsc{Ba\-la\-n\-ced Li\-ke}\ are no longer competitive; see the example in the proof of Theorem 10 from \cite{aleksandrov2015ijcai}. \textsc{Li\-ke}, however, remains most $n$-competitive; see the example in the proof of Theorem 9 from \cite{aleksandrov2015ijcai}. We conclude that \textsc{Li\-ke}\ is more competitive than \textsc{Ra\-nk\-ing}, \textsc{Ba\-la\-n\-ced Li\-ke}\ and \textsc{Ma\-xi\-mum Li\-ke}\ for (EW) in the worst case. 
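The egalitarian values in Examples~\ref{exp:four} and~\ref{exp:five} follow directly from the per-item allocation probabilities of \textsc{Li\-ke}\ and \textsc{Ma\-xi\-mum Li\-ke}. The following sketch (our own code) computes them exactly with rational arithmetic:

```python
from fractions import Fraction

def egalitarian_like(utils):
    """Exact expected egalitarian welfare of LIKE: agent i receives item j
    with probability 1/(number of positive bidders for j)."""
    n, m = len(utils), len(utils[0])
    expected = []
    for i in range(n):
        e = Fraction(0)
        for j in range(m):
            bidders = sum(1 for r in range(n) if utils[r][j] > 0)
            if utils[i][j] > 0:
                e += Fraction(utils[i][j], bidders)
        expected.append(e)
    return min(expected)

def egalitarian_maximum_like(utils):
    """Exact expected egalitarian welfare of MAXIMUM LIKE: agent i receives
    item j with probability 1/(number of top bidders), if i bids the top."""
    n, m = len(utils), len(utils[0])
    expected = []
    for i in range(n):
        e = Fraction(0)
        for j in range(m):
            top = max(utils[r][j] for r in range(n))
            winners = sum(1 for r in range(n) if utils[r][j] == top)
            if utils[i][j] == top:
                e += Fraction(utils[i][j], winners)
        expected.append(e)
    return min(expected)
```

On Example~\ref{exp:five} this yields $2$ for \textsc{Ma\-xi\-mum Li\-ke}\ versus $3/2$ for \textsc{Li\-ke}, and on Example~\ref{exp:four} it yields $0$ for \textsc{Ma\-xi\-mum Li\-ke}.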
\section{Online Fair Division with Full Advice}\label{sec:advone} We next study most competitive adviced mechanisms for the expected matching size (ES), the utilitarian welfare (UW) and the egalitarian welfare (EW). By Proposition~\ref{prop:one}, there is a deterministic online mechanism that maximizes (UW) even without any advice. We, therefore, focus on (ES) and (EW). We assume that the oracle specifies on the tape a different agent for each of the $n$ items. Such an encoding requires $\lceil\log_2 n!\rceil$ advice bits. By Theorem 1 from \cite{miyazaki2014}, there is a deterministic online mechanism that uses $\lceil\log_2 n!\rceil$ advice bits and maximizes (ES). By Theorem 2 from \cite{miyazaki2014}, no deterministic online mechanism can use less than $\lceil\log_2 n!\rceil$ advice bits and maximize (ES). These two results are inherited for (EW) as well. Interestingly, we next prove that no randomized online mechanism can use less than $\lceil\log_2 n!\rceil$ advice bits and maximize either objective (ES) or (EW). \begin{mytheorem}\label{thm:three} There is {\bf no} randomized online algorithm that uses less than $\lceil\log_2 n!\rceil$ advice bits and maximizes (ES). Even with 0/1 utilities, there is {\bf no} randomized online algorithm that uses less than $\lceil\log_2 n!\rceil$ advice bits and maximizes (EW). \end{mytheorem} \begin{myproof} For (ES), suppose that there is such a mechanism. The maximum value of (ES) is $n$. Let $\pi$ be an allocation returned by the mechanism and $p(\pi)$ its probability. Recall that $k(\pi)\leq n$ denotes the number of different agents that receive items in $\pi$. If $\sum_{\pi} p(\pi)<1$ holds, then we conclude that $\sum_{\pi} p(\pi)\cdot k(\pi)<n$ holds. Therefore, the mechanism does not maximize (ES) which is a contradiction. Consequently, $\sum_{\pi} p(\pi)=1$ holds. But, now we have that $\sum_{\pi} p(\pi)\cdot k(\pi)<n$ iff $k(\pi)<n$ for some $\pi$ returned by the mechanism. 
Therefore, as the mechanism maximizes (ES), we conclude that $k(\pi)=n$ for each $\pi$. To sum up, the mechanism returns only perfect allocations and their probabilities sum up to 1. We can now define a deterministic online mechanism given any $\pi$ returned by the randomized online mechanism. This deterministic online mechanism also uses less than $\lceil\log_2 n!\rceil$ advice bits and maximizes (ES). This is in contradiction with Theorem 2 from \cite{miyazaki2014}. This result holds even with more items than agents. For (EW) and binary utilities, suppose that there is such a mechanism. Hence, each agent receives an expected utility of 1 and the probability of 1 for each item is shared completely between agents that like the item. Given instance $\mathcal{I}$, consider the \emph{random assignment} matrix $P(\mathcal{I})=(p_i(j,\mathcal{I}))_{n\times n}$ of this mechanism. The matrix $P(\mathcal{I})$ is \emph{bistochastic} because $\sum_{i=1}^n p_i(j,\mathcal{I})=1$ for each $j$ and $\sum_{j=1}^n p_i(j,\mathcal{I})=1$ for each $i$ hold. By the famous result of Birkhoff, every bistochastic matrix is a convex combination of permutation matrices \cite{brualdi2006}. Each permutation matrix corresponds to a perfect allocation in $\mathcal{I}$. There could be multiple such combinations for the same bistochastic matrix. For each such combination, we can define a randomized online algorithm that uses less than $\lceil\log_2 n!\rceil$ advice bits and maximizes (ES). This is in contradiction with the previous result. This result holds even with more items than agents.\hfill\mbox{$\Box$} \end{myproof} \section{Online Fair Division with Partial Advice}\label{sec:advtwo} In this section, we report the reciprocal competitive ratios of the mechanisms, i.e.\ the guaranteed fractions of the optimal welfare. We assume that the oracle specifies agents for $k<m$ items. We start with the case when $m=n$. For (ES), the oracle specifies a different agent for each of the first $k$ items. An efficient encoding requires $\lceil\log_2 k!\rceil$ advice bits. 
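The advice encodings above are counted as $\lceil\log_2 k!\rceil$ bits, i.e.\ enough to name one of $k!$ orderings. A quick computation of these quantities (our own illustrative code):

```python
import math

def advice_bits(k):
    """Bits needed to name one of k! orderings: ceil(log2 k!),
    the advice measure used in the text."""
    return math.ceil(math.log2(math.factorial(k)))

def naive_bits(k, n):
    # Naming each adviced agent independently instead costs
    # ceil(log2 n) bits per item.
    return k * math.ceil(math.log2(n))
```

Already for $n=10$, $\lceil\log_2 10!\rceil=22$ bits, noticeably fewer than the $40$ bits of the naive per-item encoding.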
If $k=n-1$, {\sc Adviced} \textsc{Ra\-nk\-ing}\ and {\sc Adviced} \textsc{Ba\-la\-n\-ced Li\-ke}\ are optimal because they keep track of the past allocation whereas {\sc Adviced} \textsc{Ma\-xi\-mum Li\-ke}\ and {\sc Adviced} \textsc{Li\-ke}\ have ratios $1-\frac{1}{n}$ and $1-\frac{1}{n}+\frac{1}{n^2}$. If $k<n-1$, we next report their ratios. \begin{mytheorem}\label{thm:four} With $\lceil\log_2 k!\rceil$ advice bits, {\sc Adviced} \textsc{Ra\-nk\-ing}\ is most $\frac{(e-1)n+k}{en}$-competitive for (ES). \end{mytheorem} \begin{myproof} The mechanism has two components: (1) one that allocates items deterministically and (2) another one that allocates items according to \textsc{Ra\-nk\-ing}. Let the entire input graph be $\mathcal{G}_{\mathcal{I}}$ with $n$ vertices in each partition. Let us remove the $k$ deterministically decided vertices from both partitions together with their edges from $\mathcal{G}_{\mathcal{I}}$. Now, consider the remaining bipartite sub-graph with $(n-k)$ vertices in each partition. This graph has a perfect matching of size $(n-k)$ and \textsc{Ra\-nk\-ing}\ matches vertices in this graph. Therefore, the expected matching size of \textsc{Ra\-nk\-ing}\ on this smaller graph is $(n-k)\cdot(1-\frac{1}{e})+o(n-k)$. We conclude that this size for {\sc Adviced} \textsc{Ra\-nk\-ing}\ is $k+(n-k)\cdot(1-\frac{1}{e})+o(n-k)$. By Theorem 1 from \cite{miyazaki2014}, the deterministic component of {\sc Adviced} \textsc{Ra\-nk\-ing}\ maximizes (ES) on the bipartite sub-graph of $\mathcal{G}_{\mathcal{I}}$ that contains the adviced $2\cdot k$ vertices. 
By \cite{karp1990}, we conclude that the randomized component of {\sc Adviced} \textsc{Ra\-nk\-ing}\ maximizes (ES) on the bipartite sub-graph of $\mathcal{G}_{\mathcal{I}}$ that contains the remaining unadviced $2\cdot(n-k)$ vertices.\hfill\mbox{$\Box$} \end{myproof} By Theorem 2 from \cite{miyazaki2014} and our Theorem~\ref{thm:three}, there is no mechanism that uses less than $\lceil\log_2 k!\rceil$ advice bits and has a greater competitive ratio than {\sc Adviced} \textsc{Ra\-nk\-ing}\ with $\lceil\log_2 k!\rceil$ advice bits. We also obtain that the offline ratios of {\sc Adviced} \textsc{Ma\-xi\-mum Li\-ke}, {\sc Adviced} \textsc{Ba\-la\-n\-ced Li\-ke}\ and {\sc Adviced} \textsc{Li\-ke}\ for (ES) and $k\in[1,n-1)$ are $\frac{k}{n}$, $\frac{k+n}{2n}$ and at most $\frac{k+n}{2n}$. Their online ratios are $\frac{ek}{(e-1)n+k}$, $\frac{e(k+n)}{2(e-1)n+2k}$ and at most $\frac{e(k+n)}{2(e-1)n+2k}$. In Figure~\ref{fig:one}, we plot these ratios for $n=10$ agents and $k\in[0,n]$ oracle calls. \vspace{-0.75cm} \begin{figure}[h] \resizebox{\textwidth}{!}{ \includegraphics[height=3.75cm,width=0.475\textwidth,clip=true,trim=10 0 35 15]{graph-off}% \includegraphics[height=3.75cm,width=0.475\textwidth,clip=true,trim=10 0 35 15]{graph-on}% } \caption{(left) w.r.t.\ the optimal offline mechanism, (right) w.r.t.\ {\sc Adviced} \textsc{Ra\-nk\-ing}} \label{fig:one} \end{figure} For (UW), (EW) and 0/1 utilities, the oracle specifies a different agent for each of the first $k$ items. For (UW) and general utilities, the oracle specifies an agent for each of the $k$ most valued items. The worst case for {\sc Adviced} \textsc{Ra\-nk\-ing}\ and {\sc Adviced} \textsc{Ba\-la\-n\-ced Li\-ke}\ is when the adviced allocation biases the allocation of future items towards agents who receive negligibly small utilities for these items. Instead, {\sc Adviced} \textsc{Li\-ke}\ allocates each such unadviced item to an agent with probability at least $\frac{1}{n}$. 
{\sc Adviced} \textsc{Ma\-xi\-mum Li\-ke}\ optimizes (UW) by Proposition~\ref{prop:one}. For (EW) and general utilities, the oracle computes an allocation of $k$ items to agents that maximizes the egalitarian welfare and then specifies the $k$ agents for the $k$ items in this computed allocation. {\sc Adviced} \textsc{Ra\-nk\-ing}\ and {\sc Adviced} \textsc{Ba\-la\-n\-ced Li\-ke}\ focus on agents with zero and fewest items whereas {\sc Adviced} \textsc{Li\-ke}\ and {\sc Adviced} \textsc{Ma\-xi\-mum Li\-ke}\ perform as \textsc{Li\-ke}\ and \textsc{Ma\-xi\-mum Li\-ke}. We next consider the case when $m>n$. For (ES), we conclude the same results as above. For (UW), (EW) and 0/1 utilities, we assume that the oracle specifies $k$ pairwise different agents for the first $k$ items in the ordering. For (UW), (EW) and general utilities, the oracle specifications are as in the case when $m=n$. We summarize all ratios in Table~\ref{tab:one}. \vspace{-0.25cm} \begin{table}[h] \captionsetup{justification=centering} \caption{Ratios for $k\in[0,m)$ adviced items and $l\in[1,n)$ adviced agents:\hspace{1cm} (b) binary utilities, (g) general utilities.} \resizebox{\textwidth}{!}{ \begin{tabular}{|C|L|L|L|L|L|} \hline \multirow{2}{1.5cm}{\bf Mechanism} & {\bf (UW)-b} & {\bf (UW)-g} & {\bf (EW)-b} & {\bf (EW)-g} & {\bf (EW)-g} \\ & $m\geq n$ & $m\geq n$ & $m\geq n$ & $m=n$ & $m>n$ \\ \hline \multicolumn{1}{|c|}{\sc Adv.Max.Like} & $1$ & $1$ & $\frac{1}{n}$ & $0$ & $0$ \\ \multicolumn{1}{|c|}{\sc Adv.Bal.Like} & $1$ & $\frac{k}{m}$ & $\frac{1}{n-l}$ & $\frac{1}{n-l}$ & $0$ \\ \multicolumn{1}{|c|}{\sc Adv.Like} & $1$ & $\frac{k}{m}+\frac{1}{n}-\frac{k}{nm}$ & $\frac{1}{n}$ & $\frac{1}{n}$ & $\frac{1}{n}$ \\ \multicolumn{1}{|c|}{\sc Adv.Ranking} & $\leq \frac{n}{m}$ & $\frac{k}{m}$ & $\frac{1}{n-l}$ & $\frac{1}{n-l}$ & $0$ \\ \hline \end{tabular} } \label{tab:one} \end{table} \vspace{-0.75cm} \section{Related Work and Conclusions}\label{sec:rel} We combined 
competitive analysis, advice complexity and online fair division. Our results are simple but fundamental for a better understanding of the interface between matching and fair division problems. In conclusion, the chair might use {\sc Adviced} \textsc{Ra\-nk\-ing}\ for (ES), {\sc Adviced} \textsc{Ma\-xi\-mum Li\-ke}\ for (UW) and {\sc Adviced} \textsc{Li\-ke}\ or {\sc Adviced} \textsc{Ba\-la\-n\-ced Li\-ke}\ for (EW). We quantified the offline and online performance of these mechanisms with respect to the number of advice bits they can read from an oracle tape. We also presented two impossibility results and closed an open question from \cite{miyazaki2014}. In future work, we will analyse other $b$-matching mechanisms from a fair division viewpoint \cite{jaillet2014,kalyanasundaram2000}. We could also explore more objectives (e.g.\ the Nash welfare) or competitive measures (e.g.\ the price of anarchy) \cite{aleksandrov2015ijcai,vincent2016}. There are more general matching models with weights attached to the ``boy'' vertices, or with ``girl'' vertices arriving from a known distribution or in a random order \cite{mehta2013}. It would be interesting to see if our mechanisms remain most competitive in such models. \bibliographystyle{splncs}
1,314,259,994,441
arxiv
\section{Introduction} This paper considers the problem of estimating an average treatment effect from observational or experimental data, provided that a sufficient set of control variables are available. We pose the question: might the statistical precision of our estimates improve if we used only a subset of the available controls or possibly a dimension reduced transformation of them? This question is evergreen in the applied social sciences (see \cite{leamer1983let} or \cite{hernan2020causal}, page 195), but is surprisingly tricky to navigate for many applied researchers. In this paper, we break the problem down by considering the somewhat stylized situation of discrete covariates with finite support, where we are able to conduct a thorough variance analysis. This paper examines this question in detail using tools from three distinct formalisms: potential outcomes, causal diagrams, and structural equations. We show (Section 2) that a key condition licensing valid causal inference from observational data can be expressed equivalently in each of the three distinct frameworks (conditional unconfoundedness, the back-door criterion, and exogenous errors), allowing us to alternate between perspectives as is convenient pedagogically. Importantly, this equivalence is established in terms of a generic function of observed covariates, meaning that it covers not only variable selection, but ``feature selection''; this generality means that insights built on this equivalence apply seamlessly to modern methods such as regression trees or neural networks, which implicitly introduce potentially non-invertible transformations of the observed covariates. For clarity, we focus on the simplified (yet fairly common in practice) setting of discrete covariates with finite support, which allows us to derive finite sample properties of common stratification estimators, including widely-used linear regression and propensity score methods. 
Section 3 presents two novel-but-elementary results that will be used to re-analyze earlier theoretical results pertaining to regression adjustment for causal effect estimation. The first result defines the notion of a minimal control function, allowing us to distinguish between necessary and sufficient statistical control for causal effect estimation. The second result is a finite-sample analysis of stratification estimators of average causal effects in the setting of discrete control variables with finite support. This finite-sample analysis, presented in Theorem \ref{theorem2}, articulates the conditions by which a control function may be viewed as optimal in the sense of minimum variance. Section 4 collects concrete examples illustrating practical implications of the theory presented in Section 3, detailing how these results relate to previous literature, both classic and contemporary. By bringing together these profound results in the context of a common statistical framework, we hope to harmonize their insights for practitioners. Section 5 concludes by discussing further connections to previous literature. \section{Formal frameworks for causal inference} Let $\RV{Y}$ be the outcome/response of interest, $\RV{Z}$ be a binary treatment assignment, and $\RV{X}$ be a vector of covariates drawn from covariate space $\mathcal{X}$, all denoted here as random variables. For a sample of size $n$, observations are assumed to be drawn independently as triples $(X_i, Y_i, Z_i)$, for $i = 1, \dots, n$. The goal of causal effect estimation is to understand how the response variable $Y$ changes according to hypothetical manipulations of the treatment assignment variable, $Z$. For simplicity, we will refer to our observational units as ``individuals'', although of course in applications that need not be the case. 
The essential challenge to causal estimation is that only one of the two possible treatment assignments can be observed. As a consequence, if individuals who happen to receive the treatment differ systematically from those who do not, either in terms of their likely response value or in terms of how they respond to treatment, naive comparisons between the treated and untreated units will not simply reflect the causal impact of the treatment --- the treatment effect is said to be {\em confounded} with other aspects of the population. The field of causal inference has proposed and developed a variety of techniques for coping with this difficulty, the most common of which is some form of regression adjustment (meant here to include propensity score estimators, matching estimators, etc.), which entails estimating average causal effects as (weighted) averages of (estimated) conditional expectations. The key assumption that justifies this process is referred to as {\em conditional unconfoundedness}, which asserts that the measured covariates adequately account for all of the systematic differences between the treated and untreated individuals in our observational sample; formalizing this assumption can be approached in a number of ways, which we turn to now. Only after the notation of these formalisms has been introduced can our causal estimand, and the class of estimators we will study, be precisely defined. \subsection{Potential outcomes} \label{potential_outcome_section} The potential outcomes framework casts causal inference as a missing data problem: causal estimands are contrasts between pairs of outcomes that are mutually unobservable --- when we see one, we cannot see the other. At present, the standard reference for the potential outcomes framework is \cite{imbens2015causal}, which contains extensive citations to the primary literature. Let $\RV{Y}^1$ and $\RV{Y}^0$ refer to the ``potential outcomes'' when $\RV{Z}=1$ and $\RV{Z}=0$, respectively. 
For individual $i$, the {\em individual treatment effect} will be defined as the difference between the potential outcomes: $$\tau_i = Y^1_i - Y^0_i.$$ Other treatment effects, such as a ratio rather than a difference, are sometimes considered, but in this paper we focus on the difference. Because the potential outcomes $(\RV{Y}^1, \RV{Y}^0)$ are never observed simultaneously, individual treatment effects can never be estimated directly. However, {\em average} treatment effects can be identified (learned from data) provided certain assumptions are satisfied. The causal estimand this paper will focus on is the average treatment effect, or ATE: \begin{equation} \bar{\tau} \equiv \mathbb{E}[\RV{Y}^1 - \RV{Y}^0]. \end{equation} The precise population over which this expectation is taken will be discussed in more detail in Section \ref{estimands}. The standard assumptions that allow this average effect to be estimated are: \begin{enumerate} \item Stable unit treatment value assumption (SUTVA), which consists of two conditions: \begin{enumerate} \item {\em Consistency}: The observed data is related to the potential outcomes via the identity \begin{equation}\label{gating} \RV{Y} = \RV{Y}^1 Z + \RV{Y}^0 (1 - Z), \end{equation} which describes the ``gating'' role of the observed treatment assignment, $Z$. \item {\em No Interference}: for any sample of size $\scalarobs{n}$ with $\RV{Y} \in \mathcal{Y}$ and $\RV{Z} \in \mathcal{Z}$, $(\RV{Y}_i^1, \RV{Y}_i^0) \independent \RV{Z}_j$ for all $i,j \in \{1, ..., \scalarobs{n}\}$ with $j \neq i$, which rules out interference between observational units. \end{enumerate} \item Positivity: $0 < \mathbb{P}(\RV{Z}=1 \mid \RV{X}= x) < 1$ for all $x \in \mathcal{X}$ \item Conditional unconfoundedness: $(\RV{Y}^1, \RV{Y}^0) \independent \RV{Z} \mid \RV{X}$ \end{enumerate} It is instructive to imagine concrete violations of these conditions. 
Consistency can be violated under non-compliance, so that treatment assignment does not match the treatment actually received. No interference can be violated, for example, if we were studying the effect of individual tutoring on student grades in a certain classroom and students study together; Jimmy's treatment assignment may impact Sally's grade. Positivity is violated if certain individuals can never receive treatment, rendering their contribution to the average treatment effect unlearnable. And finally, conditional unconfoundedness can be violated, for example, if both treatment assignment and the outcome variable share a common cause. However, this is not the only way conditional unconfoundedness can be violated, and exploring other possibilities in full generality is the topic of the remainder of the paper. Taken together, the above assumptions enable identification of average treatment effects because they imply the following equality, the left-hand side of which is estimable: \begin{equation*} \begin{aligned} \mathbb{E}_X[\mathbb{E}[\RV{Y} \mid \RV{X}, \RV{Z} = 1] - \mathbb{E}[\RV{Y} \mid \RV{X}, \RV{Z} = 0]] & = \mathbb{E}[\RV{Y}^1 - \RV{Y}^0]. \end{aligned} \end{equation*} In more detail, the equivalence is established as follows, using consistency, then conditional unconfoundedness, then iterated expectation: \begin{equation*} \begin{aligned} \mathbb{E}_X[\mathbb{E}[\RV{Y} \mid \RV{X}, \RV{Z} = 1]] &= \mathbb{E}_X[\mathbb{E}[\RV{Y}^1 Z + \RV{Y}^0 (1-Z) \mid \RV{X}, \RV{Z} = 1]]\\ &= \mathbb{E}_X[\mathbb{E}[\RV{Y}^1 \mid \RV{X}, \RV{Z} = 1]] = \mathbb{E}_X[\mathbb{E}[\RV{Y}^1 \mid \RV{X}]] = \mathbb{E}[\RV{Y}^1].\\ \mathbb{E}_X[\mathbb{E}[\RV{Y} \mid \RV{X}, \RV{Z} = 0]] &= \mathbb{E}_X[\mathbb{E}[\RV{Y}^1 Z + \RV{Y}^0 (1-Z) \mid \RV{X}, \RV{Z} = 0]]\\ &= \mathbb{E}_X[\mathbb{E}[\RV{Y}^0 \mid \RV{X}, \RV{Z} = 0]] = \mathbb{E}_X[\mathbb{E}[\RV{Y}^0 \mid \RV{X}]] = \mathbb{E}[\RV{Y}^0]. 
\end{aligned} \end{equation*} An alternative parametrization is: $$Y_i = Y_i^0 + \tau_i Z_i$$ where $$\tau_i = Y_i^1 - Y_i^0,$$ which emphasizes that $\tau_i$ itself can differ across units and, as a random variable, can be {\em dependent} on the treatment assignment so that $\tau \not\independent Z$. This treatment effect parametrization will be used extensively in our exposition. This paper is focused on the following question: If $X$ satisfies conditional unconfoundedness, might there be a function of $X$ with a reduced range that also satisfies conditional unconfoundedness? That is, can $X$ be reduced in dimension while still providing valid causal effect estimation? Answering this question requires a more detailed examination of {\em how} conditional unconfoundedness is achieved in any particular data generating process, which is facilitated by the introduction of causal diagrams. \subsection{Causal diagrams} \label{DAG_section} \subsubsection{Graph theory for causal identification} Causal diagrams provide a more fine-grained look at confounding, as they consider the full joint distribution of the response, treatment, and control variables. The graphical approach to causality has its earliest roots in the work of Sewall Wright \citep{wright1918nature, wright1920relative, wright1921correlation}, but attained its mature modern form in the prodigious work of Judea Pearl \citep{pearl1987embracing, pearl1987logic, pearl1995theory, pearl1995causal}. See \cite{pearl2009causality} for a textbook treatment and comprehensive references. The presentation here loosely follows the expository treatment in \cite{shalizi2021advanced}. 
Recall that any joint density over $p$ random variables may be expressed in {\em compositional form}, as a product of conditional densities: $$f(x_1, x_2, \dots, x_p) = f(x_1)f(x_2 \mid x_1)f(x_3 \mid x_1, x_2) \dots f(x_p \mid x_1, x_2, \dots, x_{p-1}),$$ where the density functions $f(\cdot)$ and $f(\cdot \mid \cdot)$ refer to different densities depending on their arguments. The labeling of the variables is arbitrary, and so we can chain together these marginal and conditional distributions in any order (though of course that will lead to different forms). Some of these variables might exhibit {\em conditional independence}, meaning that, for example, $$f(x_1 \mid x_2, x_3) = f(x_1 \mid x_2),$$ which is equivalently expressed as $$X_1 \independent X_3 \mid X_2.$$ The relationship to {\em directed (acyclic) graphs} (DAG) is straightforward: draw a node for each variable and draw an edge from $X_j$ into $X_i$ if $X_j$ appears in the conditional distribution of $X_i$. This graph is {\em directed}, with the arrow pointing from $X_j$ {\em to} $X_i$. We say that $X_j$ is a ``parent'' of $X_i$ and that $X_i$ is the ``child'' of $X_j$. From the graph, the joint distribution may be expressed as $$f(x_1, \dots, x_p) = \prod_{j = 1}^p f(x_j \mid \mbox{parents}(x_j)).$$ This leads us to the {\em Markov property}, which is $$X_j \independent \mbox{non-descendants}(X_j) \mid \mbox{parents}(X_j),$$ where ``descendant'' refers to children, grandchildren, great-grandchildren, etc. We can see this by conditioning on $\mbox{parents}(X_j)$ (that is, dividing the joint density by the marginal density of $\mbox{parents}(X_j)$) and observing that the resulting conditional density is a product of terms involving either $X_j$ or $\mbox{non-descendants}(X_j)$, but not both. The Markov property allows one to efficiently deduce conditional independence relationships and underpins Pearl's algorithm (which will be described shortly). 
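The factorization and the Markov property can be checked numerically on a small example. The sketch below (the probability tables are hypothetical, chosen only for illustration) builds the joint density of the chain $X_1 \rightarrow X_2 \rightarrow X_3$ from its DAG factorization and confirms that $X_3 \independent X_1 \mid X_2$, since $X_2$ is the sole parent of $X_3$:

```python
import numpy as np

# Hypothetical conditional probability tables for the binary chain
# X1 -> X2 -> X3.
p_x1 = np.array([0.6, 0.4])                  # P(X1)
p_x2_given_x1 = np.array([[0.8, 0.2],        # P(X2 | X1 = 0)
                          [0.3, 0.7]])       # P(X2 | X1 = 1)
p_x3_given_x2 = np.array([[0.9, 0.1],        # P(X3 | X2 = 0)
                          [0.4, 0.6]])       # P(X3 | X2 = 1)

# Joint via the DAG factorization: f(x1) f(x2 | x1) f(x3 | x2).
joint = (p_x1[:, None, None]
         * p_x2_given_x1[:, :, None]
         * p_x3_given_x2[None, :, :])
assert np.isclose(joint.sum(), 1.0)

# Markov property: P(X3 | X1, X2) does not depend on X1.
for x2 in (0, 1):
    cond = joint[:, x2, :] / joint[:, x2, :].sum(axis=1, keepdims=True)
    assert np.allclose(cond[0], cond[1])  # same for X1 = 0 and X1 = 1
print("X3 is independent of X1 given X2 under this factorization")
```

The same three-dimensional-array construction extends to any discrete DAG, with one axis per variable.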
Finally, a complete treatment of confounding in the causal diagram framework requires the following definition: \begin{definition} A {\em collider} is a node/variable $V$ in a DAG that sits on an undirected path between two other nodes/variables, $X_j$ and $X_i$, such that the two edges adjacent to $V$ on the path both have arrows pointing {\em into} $V$. \end{definition} Conditioning on a collider induces dependence between its parents. For a classic example of this phenomenon, suppose that a certain college grants admission only to applicants with high test scores and/or athletic talent. Even if these talents are independent in the general population, among admitted students the two attributes become highly dependent. If we know that an admitted student is not athletic, then they must have high test scores, and vice-versa. While this is a basic result in probability theory, Pearl's work emphasized its significance to the problem of regression adjustment for causal effect estimation. With a DAG in hand, it is possible to deduce -- rather than assume -- conditional unconfoundedness: Pearl developed an algorithm for determining subsets of variables in $X$ (i.e., its coordinate dimensions) that define valid regression estimators. The inputs to this algorithm are a directed acyclic graph (DAG) that characterizes the causal relationships between variables; such a graph describes a particular compositional representation of the joint distribution, reflecting conditional independences that are implied by the {\em stipulated} causal relationships. The prohibition on cycles rules out feedback loops, in which a variable would be (indirectly) a cause of itself. Here we present Pearl's algorithm in a somewhat simplified form, assuming that the graph contains no descendants of $Z$ other than $Y$. Given an input DAG $\mathcal{G}$ and a subset of nodes $S$, the ``backdoor'' algorithm proceeds as follows: \begin{enumerate} \item Identify all (undirected) paths between $Z$ and $Y$. 
\item For each such path, determine whether it is ``blocked'' by at least one variable $W$ along it. \begin{enumerate} \item A variable $W$ blocks a path if \begin{enumerate} \item $W$ is not a collider and is in the set $S$, or \item $W$ is a collider and neither $W$ nor any of its descendants is in the set $S$. \end{enumerate} \end{enumerate} \item Return {\tt TRUE} if every ``backdoor'' path between $Z$ and $Y$ (all paths except the direct causal arrow from $Z$ to $Y$) is blocked. Otherwise return {\tt FALSE}. \end{enumerate} Sets of variables satisfying the backdoor criterion --- those sets where the algorithm returns {\tt TRUE} --- are valid adjustment sets in the sense that $Y$ and $Z$ {\em would be} conditionally independent, given those variables, {\em if there were no causal relationship} between $Y$ and $Z$. Because all other possible sources of association have been ruled out, any observed association may be interpreted as arising from a causal relationship. \subsubsection{Functional causal models.}\label{fcm} Causal DAGs may be associated with a functional causal model, a set of deterministic functions that take as inputs elements of $X$ as well as independent (``exogenous'') error terms. The basic triangle confounding graph corresponding to an $(X, Y, Z)$ triple satisfying conditional unconfoundedness is shown in Figure \ref{graph1}. \begin{figure} \ctikzfig{graph1} \caption{A simple triangle confounding diagram, where a control variable $X$ causally influences both the treatment $Z$ and the response $Y$. 
This graph does not clarify what information contained in (the potentially multidimensional) $X$ is relevant for $Z$ or $Y$ or both or neither, only that knowing the value of $X$ in its entirety permits causal estimation.}\label{graph1} \end{figure} The corresponding functional causal model can be expressed as \begin{equation} \begin{split} Z &\leftarrow G(X,\epsilon_z)\\ Y & \leftarrow F(X,Z,\epsilon_y) \end{split} \end{equation} where $X$, $\epsilon_z$ and $\epsilon_y$ are mutually independent (though all three may be vector-valued with non-independent elements). The exogenous errors ($\epsilon_z$ and $\epsilon_y$) that appear in a single equation are suppressed in the graph. All of the stochasticity is inherited from the exogenous variables, while all of the deterministic relationships are reflected in the functions $G(\cdot)$ and $F(\cdot)$, which are explicitly endowed with a causal interpretation. Specifically, the potential outcomes are given by: \begin{equation} \begin{split} Y^1 &\leftarrow F(X,1,\epsilon_y)\\ Y^0 & \leftarrow F(X,0,\epsilon_y) \end{split} \end{equation} where $(X, \epsilon_y)$ are drawn from their marginal distributions, irrespective of the value of the treatment argument. As was mentioned previously, throughout this paper we assume that $X$ does not contain any causal descendants of $Z$. Consider two ways to conceptualize the data generating process for both the potential outcome pairs, $(Y^0, Y^1)$, and the observed response $Y$. On the one hand, the potential outcomes can be generated from the functional causal model, by fixing the $Z$ argument to 0 or 1, irrespective of the implied distribution of $Z \mid X$. Procedurally, this would look like drawing $X$ from its marginal distribution, drawing $\epsilon_y$, and evaluating $F(X, 0, \epsilon_y)$ and $F(X, 1, \epsilon_y)$. The observed data can then be constructed via the consistency assumption $Y = F(X, 1, \epsilon_y)Z + F(X, 0, \epsilon_y)(1-Z)$. 
Equivalently, $Y$ may be drawn directly via $F(X, Z, \epsilon_y)$, where $Z$ (the observed treatment assignment) was drawn according to $Z \mid X$ (as specified by the CDAG). This equivalence is especially instructive as to why $Y \mid Z = z$ and $Y^z$ do not generally have the same distribution and, furthermore, why $Y \mid Z = z, X = x$ and $Y^z \mid X= x$ do have the same distribution (assuming, as we have above, that $X$ is causally exhaustive). The role of $\epsilon_y$ in defining the distribution of the potential outcomes is worth considering in more detail. Note that for a binary $Z$, any functional causal model $F$ may be rewritten as $$F(X, Z , \epsilon_y) = F(X, 0, \epsilon_y) + Z \left[ F(X, 1, \epsilon_y) - F(X, 0, \epsilon_y) \right] = \mu(X, \epsilon_y) + Z \tau(X, \epsilon_y).$$ This formulation invites us to consider that $\epsilon_y$ may be multivariate, distinct elements of which may affect $\mu(X, \epsilon_y)$ and $\tau(X, \epsilon_y)$. Three particular cases are especially notable: \begin{enumerate} \item $\mu(X, \epsilon_y) = \mu(X) + \epsilon_y$ and $\tau(X, \epsilon_y) = \tau(X)$: here, $\epsilon_y$ has the same effect on the two potential outcomes $F(X, 1, \epsilon_y)$ and $F(X, 0, \epsilon_y)$, so that their joint distribution is singular. \item $\mu(X, \epsilon_y) = \mu(X) + \epsilon_{y,0}$ and $\tau(X, \epsilon_y) = \tau(X) + \left( \epsilon_{y,1} - \epsilon_{y,0} \right)$ where the exogenous error is partitioned as $\epsilon_y = (\epsilon_{y, 0}, \epsilon_{y, 1})$. Here, $\epsilon_{y,0}$ and $\epsilon_{y,1}$ are distinct random variables that separately define the potential outcome distributions so that one effect of the treatment is in changing {\em which} exogenous influences affect the response. \item $\mu(X, \epsilon_y) = \mu(X) + \epsilon_{y, \mu}$, $\tau(X, \epsilon_y) = \tau(X) + \epsilon_{y, \tau}$, where the exogenous error is partitioned as $\epsilon_y = (\epsilon_{y, \mu}, \epsilon_{y, \tau})$. 
In this case, a distinct set of causal factors dictate exogenous variation in the prognostic (baseline) response and exogenous variation in the treatment effect itself. For example, variation in the baseline response may be due to environmental factors that are independent from genetic factors dictating one's response to a new drug. \end{enumerate} These three cases are visualized in Figure \ref{errors} with $\tau(X) = 1$. Empirically, these cases are indistinguishable in that they are ``observationally equivalent'' --- because the potential outcomes are never jointly observed, most aspects of their joint distribution are fundamentally unidentified. \begin{figure} \includegraphics[width=2.5in]{Error_Comparison_a.png} \includegraphics[width=2.5in]{Error_Comparison_b.png} \caption{Left panel: Potential outcome distributions with a common additive univariate error and a homogeneous treatment effect (which shifts the line up one unit from the diagonal), articulated in Case 1 in the text. Right panel: Potential outcome distributions with a homogeneous treatment effect and distinct additive bivariate errors, $\epsilon_{y,0}$ and $\epsilon_{y,1}$, shown here with a positive correlation less than one, articulated in Case 2 in the text.}\label{errors} \end{figure} With a more detailed causal graph, a more detailed assessment of conditional unconfoundedness can be made. For instance, consider Figure \ref{graph2}, which is equivalent to the standard triangle diagram in the sense that controlling for all of the elements of $X = (X_1, X_2, X_3, X_4)$ indeed satisfies conditional unconfoundedness. However, Pearl's algorithm reveals that $(X_1, X_2)$ would suffice. By positing more information about the joint distribution of $X$, it is possible to absorb $X_3$ into $\epsilon_z$ and $X_4$ into $\epsilon_y$, while redefining $X = (X_1, X_2)$, bringing us back to the triangle graph, but with a reduced set of control variables. 
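This conclusion for Figure \ref{graph2} can be reproduced mechanically. The following sketch is our own encoding of the simplified backdoor algorithm above (the graph dictionary and all helper names are illustrative assumptions, not a reference implementation); it confirms that $S = \{X_1, X_2\}$ is a valid adjustment set while $S = \{X_1\}$ alone is not:

```python
from itertools import chain

# The graph of Figure graph2, encoded child -> set of parents:
# X1, X2 confound Z and Y; X3 -> Z (instrument); X4 -> Y (prognostic).
parents = {
    "Z": {"X1", "X2", "X3"},
    "Y": {"X1", "X2", "X4", "Z"},
    "X1": set(), "X2": set(), "X3": set(), "X4": set(),
}

def children(node):
    return {c for c, ps in parents.items() if node in ps}

def descendants(node):
    out, frontier = set(), {node}
    while frontier:
        frontier = set(chain.from_iterable(children(v) for v in frontier)) - out
        out |= frontier
    return out

def undirected_paths(src, dst):
    """All simple paths from src to dst, ignoring edge direction."""
    stack = [[src]]
    while stack:
        path = stack.pop()
        for nb in parents[path[-1]] | children(path[-1]):
            if nb in path:
                continue
            if nb == dst:
                yield path + [nb]
            else:
                stack.append(path + [nb])

def is_collider(path, i):
    """path[i] is a collider if both adjacent edges point into it."""
    return path[i - 1] in parents[path[i]] and path[i + 1] in parents[path[i]]

def backdoor(S, treat="Z", outcome="Y"):
    """Simplified check: assumes no descendants of treat other than outcome."""
    for path in undirected_paths(treat, outcome):
        if path == [treat, outcome]:  # the direct causal arrow
            continue
        blocked = False
        for i in range(1, len(path) - 1):
            w = path[i]
            if is_collider(path, i):
                if w not in S and not (descendants(w) & S):
                    blocked = True
            elif w in S:
                blocked = True
        if not blocked:
            return False
    return True

print(backdoor({"X1", "X2"}))  # True: {X1, X2} suffices
print(backdoor({"X1"}))        # False: the backdoor path via X2 stays open
```

Note that conditioning on the instrument $X_3$ or the prognostic variable $X_4$ neither helps nor hurts validity here, since neither sits on a backdoor path.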
\begin{figure} \centering \begin{tikzpicture}[baseline=-0.25em,scale=0.5] \begin{pgfonlayer}{nodelayer} \node [style=myvar] (1) at (2.5, 2.5) {$X_1$}; \node [style=myvar] (2) at (2.5, -2.5) {$X_2$}; \node [style=myvar] (3) at (-2.5, -2.5) {$X_3$}; \node [style=myvar] (4) at (7.5, 2.5) {$X_4$}; \node [style=myvar] (5) at (-2.5, 0) {$Z$}; \node [style=myvar] (6) at (7.5, 0) {$Y$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=arrow] (1) to (5); \draw [style=arrow] (1) to (6); \draw [style=arrow] (2) to (5); \draw [style=arrow] (2) to (6); \draw [style=arrow] (3) to (5); \draw [style=arrow] (4) to (6); \draw [style=arrow] (5) to (6); \end{pgfonlayer} \end{tikzpicture} \caption{An elaboration of the triangle graph, depicting $X_1$ and $X_2$ as confounders, $X_4$ as a pure prognostic variable, and $X_3$ as an instrument.}\label{graph2} \end{figure} \subsection{Structural equations: Mean regression models with exogenous additive errors} \label{structural_model_section} Finally, the classic econometric literature approaches causality in terms of mean regression models with additive (but not necessarily homoskedastic) error terms, which are referred to as ``structural'' models (although the term is often used informally and imprecisely in the applied literature). \cite{heckman2005structural} reviews the structural model approach in econometrics in depth, noting that such methods have their origin in the study of dynamic macroeconomic systems. A seminal reference is \cite{haavelmo1943statistical}. The mean regression perspective arises naturally if one takes a linear regression model as a starting point, but is straightforward to motivate starting from a generic functional causal model. 
Define \begin{equation} \begin{split} \mu(x) &\equiv \mathbb{E}(F(x,0,\epsilon_y)), \\ \tau(x) &\equiv \mathbb{E}(F(x,1,\epsilon_y)) - \mu(x),\\ \upsilon(x,\epsilon_y) &\equiv F(x,0,\epsilon_y) - \mu(x),\\ \delta(x,\epsilon_y) &\equiv F(x,1,\epsilon_y) - F(x,0,\epsilon_y) - \tau(x) \end{split} \end{equation} giving a ``structural model'' \begin{equation}\label{structural_eq} Y = \mu(x) + \upsilon(x,\epsilon_y) + (\tau(x) + \delta(x,\epsilon_y)) Z \end{equation} where $\upsilon(x, \epsilon_y)$ and $\delta(x, \epsilon_y)$ are deterministic functions, both of which are mean zero integrating over $\epsilon_y$ (for any $x$): $\mathbb{E}(\upsilon(x, \epsilon_y)) = 0$ and $\mathbb{E}(\delta(x, \epsilon_y)) = 0$. In this formulation, conditional unconfoundedness may be expressed in terms of independence of the treatment, $Z$, and the error terms $\upsilon(x,\epsilon_y)$ and $\delta(x, \epsilon_y)$. Such models are commonly used in a simplified form, where $\delta(x, \epsilon_y)$ is assumed to be identically zero and $\tau(x)$ is assumed to be constant in $x$, but such assumptions are not intrinsic to the formalism. \subsection{Relating the three frameworks}\label{equivalence} If every node in a causal diagram is observable, all remaining factors determining $Y$ are attributable to the exogenous errors, which are, by definition, independent of the treatment assignment. In that case, it is easy to forge a connection between the three formalisms, as they all assert that \begin{equation} Y^z \mid X=x \;\;\; \,{\buildrel d \over \sim}\, \;\;\; Y \mid X = x, Z = z, \end{equation} where (recall) $Y^z = F(x, z, \epsilon_y)$, with distribution induced by the distribution over $\epsilon_y$. The above assertion essentially declares that the estimable conditional distributions which appear on the right hand side warrant a causal interpretation. 
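As a sanity check on these definitions, the decomposition can be verified numerically for a toy functional causal model. The function $F$ below and all numbers are hypothetical choices of ours, used only to illustrate the bookkeeping:

```python
import numpy as np

# A sketch verifying the structural decomposition for a toy F(x, z, eps_y).
rng = np.random.default_rng(1)

def F(x, z, eps):
    # treatment shifts the mean and amplifies the exogenous noise
    return x + 0.5 * z * (1 + x) + (1 + z) * eps

x = 1.0
eps = rng.normal(0.0, 1.0, size=200_000)

mu_x  = F(x, 0, eps).mean()                  # Monte Carlo estimate of mu(x)
tau_x = F(x, 1, eps).mean() - mu_x           # estimate of tau(x)
ups   = F(x, 0, eps) - mu_x                  # upsilon(x, eps_y)
delt  = F(x, 1, eps) - F(x, 0, eps) - tau_x  # delta(x, eps_y)

# Both error terms have (sample) mean zero by construction, matching the
# requirement E[upsilon] = E[delta] = 0, and the structural form
# reproduces F exactly for any treatment assignment z:
z = rng.integers(0, 2, size=eps.size)
y = mu_x + ups + (tau_x + delt) * z
assert abs(ups.mean()) < 1e-6 and abs(delt.mean()) < 1e-6
assert np.allclose(y, F(x, z, eps))
```

For this particular $F$, $\mu(1) = 1$ and $\tau(1) = 1$, which the Monte Carlo estimates recover up to sampling error.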
For sets of control variables that are {\em not} exhaustive, more care is needed in translating the formalisms, but a precise relationship can be obtained, as spelled out in the following lemma. \begin{lemma}\label{synthesis} The assertions below (with their corresponding causal framework labeled in brackets) stand in the following logical relationship: $1 \Rightarrow 2 \Leftrightarrow 3$. \begin{enumerate} \item $S = s(X)$ satisfies the back-door criterion. [Causal DAGs] \item $S= s(X)$ satisfies conditional unconfoundedness: $(Y^0, Y^1) \independent Z \mid S$. [Potential Outcomes] \item The response $Y$ can be represented in terms of a mean regression model with error terms $(\upsilon(s,X,\epsilon_y), \delta(s,X, \epsilon_y) ) \independent Z \mid s(X) = s$. [Structural Equations] \end{enumerate} \end{lemma} \begin{proof} Let $X$ denote all of the variables in a complete causal diagram with the exception of the treatment variable $Z$ and response variable $Y,$ and consider the following causal model, written in terms of functional equations, potential outcomes, and a structural mean regression with additive exogenous errors: \begin{equation} \begin{split} Z &\leftarrow G(X, \epsilon_z),\\ Y^z &\leftarrow F(X, z, \epsilon_y) = \mu(X) + \upsilon(X,\epsilon_y) + (\tau(X) + \delta(X,\epsilon_y)) z,\\ \begin{pmatrix} Y^0 \\ Y^1 \end{pmatrix} &\leftarrow \begin{pmatrix} \mu(X) + \upsilon(X,\epsilon_y) \\ \mu(X) + \tau(X) + \upsilon(X,\epsilon_y) + \delta(X, \epsilon_y) \end{pmatrix}. \end{split} \end{equation} To see that 1 implies 2, recall that 1 means that $S$ renders the treatment and response conditionally independent in the modified DAG with no causal arrow between $Z$ and $Y$. But it is precisely such a graph that defines the relationship between $Z$ and the potential outcomes $Y^0 = F(X, 0, \epsilon_y)$ and $Y^1 = F(X, 1, \epsilon_y)$, as shown in Figure \ref{po_graph}. 
To see that 2 and 3 are equivalent, re-parametrize the additive error model in terms of $S$, as follows: \begin{equation} \begin{split} Y^z &\leftarrow \mu(s) + \upsilon(s,X,\epsilon_y) + (\tau(s) + \delta(s, X, \epsilon_y))z\\ \mu(s) &\equiv \mathbb{E}(\mu(X) \mid S(X) = s)\\ \tau(s) &\equiv \mathbb{E}(\tau(X) \mid S(X) = s)\\ \upsilon(s,X,\epsilon_y) &\equiv \mu(X) - \mu(s) + \upsilon(X,\epsilon_y)\\ \delta(s,X,\epsilon_y) &\equiv \tau(X) - \tau(s) + \delta(X,\epsilon_y). \end{split} \end{equation} For a fixed value of $s$, the mean terms $\mu(s)$ and $\tau(s)$ are constant, so that $(Y^0, Y^1)$ stands in a one-to-one relationship with $\upsilon(s,X,\epsilon_y)$ and $\delta(s,X,\epsilon_y)$; therefore if the former are independent of $Z$, then so must be the latter, and vice-versa. \end{proof} \begin{figure} \centering \begin{tikzpicture}[baseline=-0.25em,scale=0.5] \begin{pgfonlayer}{nodelayer} \node [style=myvar] (1) at (2.5, 2.5) {$X_1$}; \node [style=myvar] (2) at (2.5, -2.5) {$X_2$}; \node [style=myvar] (5) at (-2.5, 0) {$Z$}; \node [style=myvar] (6) at (7.5, 0) {$Y$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=arrow] (1) to (5); \draw [style=arrow] (1) to (6); \draw [style=arrow] (2) to (5); \draw [style=arrow] (2) to (6); \draw [style=arrow] (5) to (6); \end{pgfonlayer} \end{tikzpicture} \hfill \begin{tikzpicture}[baseline=-0.25em,scale=0.5] \begin{pgfonlayer}{nodelayer} \node [style=myvar] (1) at (2.5, 2.5) {$X_1$}; \node [style=myvar] (2) at (2.5, -2.5) {$X_2$}; \node [style=myvar] (5) at (-2.5, 0) {$Z$}; \node [style=myvar] (6) at (7.5, 0) {$Y^*$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=arrow] (1) to (5); \draw [style=arrow] (1) to (6); \draw [style=arrow] (2) to (6); \draw [style=arrow] (2) to (5); \end{pgfonlayer} \end{tikzpicture} \caption{A typical causal DAG (CDAG) and its potential outcome counterpart, where $Y^* = (Y^0, Y^1)$.}\label{po_graph} \end{figure} \begin{figure} \begin{minipage}[b]{180pt} \centering 
\begin{tikzpicture}[baseline=-0.25em,scale=0.5] \begin{pgfonlayer}{nodelayer} \node [style=myvar] (1) at (-2.5, 2.5) {$X_1$}; \node [style=myvar] (3) at (2.5, 2.5) {$X_2$}; \node [style=myvar] (5) at (-2.5, 0) {$Z$}; \node [style=myvar] (6) at (2.5, 0) {$Y$}; \node [style=myvar] (7) at (-2.5, -2.5) {$X_3$}; \node [style=myvar] (8) at (2.5, -2.5) {$X_4$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=arrow] (1) to (5); \draw [style=arrow] (3) to (6); \draw [style=arrow] (5) to (6); \draw [style=arrow] (1) to (3); \draw [style=arrow] (7) to (5); \draw [style=arrow] (8) to (7); \draw [style=arrow] (8) to (6); \end{pgfonlayer} \end{tikzpicture} \caption{The ``box diagram'', which implies several valid control sets: any set containing at least one of $\{ X_1, X_2\}$ and at least one of $\{ X_3, X_4\}$.} \label{rectangle} \end{minipage} \hfill \begin{minipage}[b]{180pt} \centering \begin{tikzpicture}[baseline=-0.25em,scale=0.5] \begin{pgfonlayer}{nodelayer} \node [style=myvar] (1) at (2.5, 2.5) {$X_2$}; \node [style=myvar] (2) at (2.5, -2.5) {$X_3$}; \node [style=myvar] (5) at (-2.5, 0) {$Z$}; \node [style=myvar] (6) at (7.5, 0) {$Y$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=probedge] (1) to (5); \draw [style=arrow] (1) to (6); \draw [style=arrow] (2) to (5); \draw [style=arrow] (5) to (6); \draw [style=probedge] (2) to (6); \end{pgfonlayer} \end{tikzpicture} \caption{The ``box diagram'' with $X_1$ and $X_4$ omitted; a CDAG representation is no longer possible.\vspace{0.2in}} \label{triangle} \end{minipage} \end{figure} \subsection{Estimands, estimators, and sampling distributions}\label{estimands} As described previously, by {\em treatment effect}, we mean the difference between the treated and untreated potential outcomes. By {\em average} treatment effect, we mean the average of this difference over some population of individuals. 
The functional causal model and a distribution over the exogenous errors define an infinite hypothetical {\em population} from which the observed data is assumed to be a random sample. From this perspective, the population average treatment effect (PATE) may be expressed as $$\mathbb{E}(\tau(X) + \delta(X, \epsilon)) = \mathbb{E}(\tau(X)),$$ where $\tau$ is a fixed-but-unknown function and the expectation is taken with respect to the data generating process defined by the CDAG and the associated functional causal model, so that $X$ and $\epsilon$ are both being averaged over. Other average causal effects, differing in terms of the (sub)population over which the average is taken, are likewise readily defined in terms of the functional causal model (FCM). For instance, if we wish to restrict our attention to the average treatment effect among individuals in our observed sample, we may define our estimand as the {\em sample average treatment effect}, or SATE: $$\frac{1}{N} \sum_{i = 1}^{N} \left( \tau(x_i) + \delta(x_i, \epsilon_i) \right ).$$ Note that the SATE and the PATE differ from one another in that, in general, $$\mathbb{E}(\tau(X)) \neq \frac{1}{N} \sum_{i = 1}^{N} \tau(x_i)$$ and $$\frac{1}{N} \sum_{i = 1}^{N} \delta(x_i, \epsilon_i) \neq \mathbb{E}(\delta(X, \epsilon)) = 0.$$ In this paper, we will compare stratification estimators of the PATE, evaluating them in terms of their finite sample variance over repeated sampling of independent draws of $(X_i, Y_i, Z_i)$. While it would be possible to consider the sampling distribution over $(Y_i, Z_i)$ for a fixed vector of observed covariates $x_i$, doing so would make cross-comparison of different stratifications impossible, because the sampling distribution would be over-specified relative to the coarser stratification. Because the PATE is of wide applied interest, we argue that averaging over observed control variables $X_i$ is sensible and all of our results are derived in this setting. 
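The gap between the two estimands is easy to see in simulation. The data generating process below is a hypothetical choice of ours (not from this paper): the SATE varies from sample to sample around the fixed PATE, even though both target the same function $\tau$:

```python
import numpy as np

# Toy model: X ~ Bernoulli(0.5), tau(x) = 1 + x, delta ~ N(0, 1).
# PATE = E[tau(X)] = 1 + E[X] = 1.5; each SATE is a sample average.
rng = np.random.default_rng(0)
n = 50
pate = 1.5
sates = []
for _ in range(2000):
    x = rng.integers(0, 2, size=n)
    delta = rng.normal(0.0, 1.0, size=n)   # exogenous effect heterogeneity
    sates.append(np.mean((1 + x) + delta)) # SATE for this sample
sates = np.asarray(sates)

print(round(sates.mean(), 2))  # close to the PATE of 1.5
print(round(sates.std(), 2))   # but individual SATEs scatter around it
```

The spread of the SATEs shrinks at the usual $1/\sqrt{n}$ rate, which is why the distinction matters most in small samples.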
Another average treatment effect of broad interest is the {\em conditional average treatment effect} (CATE), which defines an average treatment effect conditional on a set of covariate values. The population CATE, $$\mathbb{E}(\tau(X) + \delta(X, \epsilon) \mid X = x) = \mathbb{E}(\tau(X) \mid X = x),$$ takes an expectation with respect to a conditional sampling distribution $\tau(X) \mid X = x$, where $\left\{X = x\right\}$ may denote a set of covariate values rather than a single value. While the focus of this paper is on the PATE, its insights extend automatically to the population CATE. The CATE is sometimes mistakenly reported in the literature as the {\em individual treatment effect} (ITE), which is a separate estimand that is only identified with more restrictive assumptions. The ITE is defined at the unit level as the difference in potential outcomes. For unit $i$, the ITE is given by $$F(X_i, Z_i = 1, \epsilon_{i,y,1}) - F(X_i, Z_i = 0, \epsilon_{i,y,0}).$$ This is unidentified without further assumptions on the nature of the error term, as in general $\epsilon_{i,y,1} \neq \epsilon_{i,y,0}$; see Figure \ref{errors}. \section{Minimal and optimal statistical control} \subsection{The principal deconfounding function} Although conditional unconfoundedness is central to our conception of causal effect estimation, in fact it is stronger than necessary for identifying the ATE. More specifically, one only needs a function $s(x)$ that satisfies {\em mean conditional unconfoundedness}. \begin{definition} A function $s$ on covariate space $\mathcal{X}$ is said to satisfy {\em mean conditional unconfoundedness} if \begin{equation} Z \independent (\mu(X), \tau(X)) \mid s(X). \end{equation} \end{definition} \begin{lemma}\label{MCU} Mean conditional unconfoundedness is a sufficient condition for estimating average treatment effects. 
\end{lemma} \begin{proof} Denote the causal model as $$Y^z \leftarrow \mu(X) + \upsilon(X,\epsilon_y) + (\tau(X) + \delta(X,\epsilon_y)) z$$ where $\epsilon_y \independent (Z, X)$, $\mathbb{E}(\upsilon(x,\epsilon_y)) = 0$, and $\mathbb{E}( \delta(x,\epsilon_y)) = 0$ for all $x$. We aim to show that $$\mathbb{E}(Y^z \mid s(X) = s) = \mathbb{E}(Y \mid s(X) = s, Z = z),$$ from which the result follows by the estimability of the right hand side for both $z = 0$ and $z = 1$. Recalling the relationship between $Y^z$ and $Y \mid Z = z$ described in Section \ref{fcm}, this is equivalent to showing that \begin{align*} \mathbb{E}(\mu(X) + &\upsilon(X,\epsilon_y) + (\tau(X) + \delta(X,\epsilon_y)) z \mid s(X) = s) =\\ & \mathbb{E}(\mu(X) + \upsilon(X,\epsilon_y) + (\tau(X) + \delta(X,\epsilon_y)) z \mid s(X) = s, Z = z), \end{align*} where the expectation over $(X, \epsilon_y)$ is with respect to its marginal distribution on the left hand side and with respect to its conditional distribution, given $Z = z$, on the right hand side. By the independence of $\epsilon_y$, the mean zero errors for each $x$, and the linearity of expectation, this reduces to showing that $$\mathbb{E}(\mu(X) + \tau(X)z \mid s(X) = s) = \mathbb{E}(\mu(X) + \tau(X)z \mid s(X) = s, Z = z).$$ By the assumption of mean conditional unconfoundedness, $Z \independent (\mu(X), \tau(X)) \mid s(X)$, and the result follows. \end{proof} Mean conditional unconfoundedness can be used to define a {\em minimal} control function, but first we must recall the definition of the propensity score \citep{rosenbaum1983central}, which we will denote by $\pi(\cdot)$. \begin{definition} The {\em propensity score}, based on a vector of control variables $x$, is the conditional probability of receiving treatment: \begin{equation}\label{propscore} \pi(x) \equiv \mathbb{P}(\RV{Z} = 1 \mid \RV{X} = x). 
\end{equation} \end{definition} \noindent It is common to interchangeably refer to the propensity {\em score}, which emphasizes a specific numerical value, $\pi(x)$, and the propensity {\em function}, which emphasizes the mapping, $\pi: \mathcal{X} \rightarrow (0, 1)$. In turn, we have: \begin{definition} The {\em principal deconfounding function} is given by the following conditional expectation: $$\lambda(x) = \mathbb{E}(\pi(X) \mid \mu(X) = \mu(x), \tau(X) = \tau(x)).$$ \end{definition} \begin{theorem} \label{theorem1} The principal deconfounding function is the coarsest function satisfying mean conditional unconfoundedness. \end{theorem} \begin{proof} By iterated expectation, $Z \mid \mu(X), \tau(X)$ is a Bernoulli random variable with probability $\lambda(X)$, therefore $$\mathbb{E}(Z \mid \tau(X), \mu(X), \lambda(X)) = \mathbb{E}(Z \mid \lambda(X)),$$ which shows that $Z \independent \left ( \mu(X), \tau(X)\right ) \mid \lambda(X)$ because $Z$ is binary. Furthermore, $|\lambda(\mathcal{X})|$ is minimal: it takes exactly as many values as there are unique conditional distributions of $Z \mid \mu(X), \tau(X)$. In more detail, suppose $s(x)$ is strictly coarser than $\lambda(x)$ so that there exist $x_1$ and $x_2$ such that $s(x_1) = s(x_2)$ but $\lambda(x_1) \neq \lambda(x_2)$. But $\lambda(x_1) \neq \lambda(x_2)$ implies $(\mu(x_1), \tau(x_1)) \neq (\mu(x_2), \tau(x_2))$, which in turn shows that $$Z \not \independent (\mu(X), \tau(X)) \mid s(X)$$ so mean conditional unconfoundedness is violated. \end{proof} \subsection{Optimal stratification for causal effect estimation} \label{trueprop} Recognizing that valid control features are non-unique raises the question: which control features are the best ones? To make this question precise, we study the finite sample variance of fixed-strata estimators, restricting our attention to a vector of discrete control variables. 
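In a discrete covariate space, the principal deconfounding function is simply a weighted average of $\pi$ over the level sets of the pair $(\mu, \tau)$. The hedged sketch below (Python; the propensity, prognostic, and effect values are our own illustrative assumptions, and `principal_deconfounding` is a hypothetical helper, not a function from the text) shows $\lambda$ collapsing four distinct propensity values into two strata, illustrating the coarseness claimed in Theorem \ref{theorem1}:

```python
import numpy as np

# Illustrative discrete DGP on X in {0, ..., 5}
p_x = np.full(6, 1 / 6)                             # marginal P(X = x)
pi  = np.array([0.2, 0.4, 0.4, 0.6, 0.6, 0.8])      # propensity pi(x)
mu  = np.array([1.0, 1.0, 1.0, 2.0, 2.0, 2.0])      # prognostic mu(x)
tau = np.array([0.5, 0.5, 0.5, 0.5, 0.5, 0.5])      # effect tau(x)

def principal_deconfounding(p_x, pi, mu, tau):
    """lambda(x) = E[pi(X) | mu(X) = mu(x), tau(X) = tau(x)]:
    average pi over each level set of (mu, tau), weighted by P(X = x)."""
    lam = np.empty_like(pi)
    keys = list(zip(mu, tau))
    for k in set(keys):
        idx = np.array([key == k for key in keys])
        lam[idx] = np.average(pi[idx], weights=p_x[idx])
    return lam

lam = principal_deconfounding(p_x, pi, mu, tau)
# pi takes four distinct values here, but lam takes only two:
# lam merges strata whose (mu, tau) values agree.
```

Within each level set of $\lambda$, the propensity still varies, but $(\mu, \tau)$ does not, so no confounding remains.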
Without loss of generality, discrete control variables with finite support can be represented as a single covariate taking $K = |\mathcal{X}|$ distinct values. For example, a length $d$ vector of binary covariates would be represented as a single variable taking $2^d$ values. This assumption is mathematically convenient and, by setting $K$ large enough, can capture most empirical applications to a satisfactory degree of realism. (We revisit the plausibility of this assumption in the discussion section.) In the mathematical formalism and discussion of this paper, we will use the words ``strata" and ``features" interchangeably, to refer to functions of this single categorical variable. In detail, this paper considers the following data generating process: \begin{equation}\label{dgp_equation} \begin{split} \mathcal{X} &= \{1, \dots, K\},\\ \pi: \mathcal{X} &\mapsto (0, 1),\\ \RV{Z} &\sim \mbox{Bernoulli}(\pi(\RV{X})),\\ \RV{Y} &\leftarrow \mu(X) + \upsilon_X + (\tau(X) + \delta_X) Z \end{split} \end{equation} where $\mathbb{E}(\upsilon_x) = 0$ and $\mathbb{E}(\delta_x) = 0$ for all $x$ so that $\mu(\scalarobs{x}) = \mathbb{E}(\RV{Y} \mid \RV{X} = \scalarobs{x}, \RV{Z} = 0)$ and $\mu(\scalarobs{x}) + \tau(\scalarobs{x}) = \mathbb{E}(\RV{Y} \mid \RV{X} = \scalarobs{x}, \RV{Z} = 1)$. Lastly, let the random variable $\RV{N}$ denote the overall sample size and define subset-specific sample sizes as follows: \begin{itemize} \item $\RV{N}_{x}$: the number of observations with $\RV{X} = x$, \item $\RV{N}_{x, z}$: the number of observations with $\RV{X} = x$ and $\RV{Z} = z$. \end{itemize} We define the stratification estimator using a stratification function $s\left(\mathcal{X}\right)$, which returns $J \leq K$ discrete function values. 
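As a concrete illustration of the data generating process in display (\ref{dgp_equation}), the hedged sketch below (Python; the particular $K$, $\pi$, $\mu$, $\tau$, and noise values are our own assumptions) simulates confounded data and shows that a fully stratified contrast recovers the treatment effect while the raw difference in means does not:

```python
import numpy as np

def strat_estimate(x, z, y, s):
    """Stratification estimator: within each level set of s, take the
    treated-minus-control mean difference, then weight by stratum share."""
    strata = s(x)
    n = len(y)
    est = 0.0
    for j in np.unique(strata):
        in_j = strata == j
        treated = in_j & (z == 1)
        control = in_j & (z == 0)
        est += (in_j.sum() / n) * (y[treated].mean() - y[control].mean())
    return est

# Hypothetical confounded DGP: both pi and mu increase with x; true ATE = 1
rng = np.random.default_rng(2)
x = rng.integers(0, 3, size=20_000)
z = rng.binomial(1, np.array([0.2, 0.5, 0.8])[x])
y = 2.0 * x + 1.0 * z + rng.normal(0.0, 0.5, size=20_000)

naive = y[z == 1].mean() - y[z == 0].mean()        # confounded contrast
strat = strat_estimate(x, z, y, s=lambda v: v)     # stratify on all of x
```

The naive contrast is badly biased upward because treated units over-represent high-$\mu$ strata, while stratifying on all levels of $x$ recovers the effect.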
We compute the average difference in outcomes between the treated and control groups separately for individuals in each of the $J$ strata, so that \begin{equation*} \begin{aligned} \bar{\tau}^{s}_{strat} &= \sum_{j \in s(\mathcal{X})} \frac{N_{j}}{n} \left( \bar{Y}_{j,1} - \bar{Y}_{j, 0} \right)\\ N_{j, 0} &= \sum_{i=1}^n \mathbf{1}\left\{s(X_i) = j\right\} \mathbf{1}\left\{Z_i = 0\right\}\\ \bar{Y}_{j,0} &= \frac{1}{N_{j, 0}} \sum_{i=1}^n Y_i \mathbf{1}\left\{s(X_i) = j\right\} \mathbf{1}\left\{Z_i = 0\right\} \end{aligned}\;\;\;\;\;\; \begin{aligned} N_{j} &= \sum_{i=1}^n \mathbf{1}\left\{s(X_i) = j\right\}\\ N_{j, 1} &= \sum_{i=1}^n \mathbf{1}\left\{s(X_i) = j\right\} \mathbf{1}\left\{Z_i = 1\right\}\\ \bar{Y}_{j,1} &= \frac{1}{N_{j, 1}} \sum_{i=1}^n Y_i \mathbf{1}\left\{s(X_i) = j\right\} \mathbf{1}\left\{Z_i = 1\right\}\\ \end{aligned} \end{equation*} Note that if we choose the trivial stratification $s(x) = x$, we stratify completely on all $K$ unique levels of $\mathcal{X}$. The following theorem describes when stratification beyond the minimal valid stratification, $\lambda(X)$, is beneficial, in terms of conditions on the underlying data generating process. \begin{theorem} \label{theorem2} Assume we have stratified on $\lambda(X)$ so that the average treatment effect is identified using a minimal deconfounding set. Consider a {\em refinement} of $\lambda$, $s(X)$, which also identifies the ATE: $s(x) \neq s(x')$ while $\lambda(x) = \lambda(x')$ for at least two $x, x' \in \mathcal{X}$. Define $\bar{\tau}_{\textrm{strat}}^{\lambda}$ as a stratification estimator which uses level sets of $\lambda(X)$ to define strata and $\bar{\tau}_{\textrm{strat}}^{s}$ as a stratification estimator which uses level sets of $s(X)$. 
Then $\mathbb{V} \left( \bar{\tau}_{\textrm{strat}}^{s} \right) < \mathbb{V} \left( \bar{\tau}_{\textrm{strat}}^{\lambda} \right)$ if $\nu < \eta$ where \begin{equation*} \begin{aligned} m(j) &= \lvert \left\{ s(x) : x \in \mathcal{X}\mbox{ such that } \lambda(x) = j \right\} \rvert\\ \mathcal{B} &= \left\{j \in \lambda(\mathcal{X}): m(j) > 1 \textrm{ and all sub-strata means and variances are constant} \right\}\\ \mathcal{C} &= \left\{j \in \lambda(\mathcal{X}): m(j) > 1 \textrm{ and either the sub-strata means or variances are non-constant} \right\}\\ \nu &= \sum_{b \in \mathcal{B}} \left[ \mathbb{V}\left( \frac{N_{b}}{n} \left( \bar{Y}_{b,1} - \bar{Y}_{b, 0} \right) \right) - \mathbb{V}\left( \sum_{\ell=1}^{m(b)} \frac{N_{b\ell}}{n} \left( \bar{Y}_{b\ell,1} - \bar{Y}_{b\ell, 0} \right) \right)\right]\\ \eta &= \sum_{c \in \mathcal{C}} \left[ \mathbb{V}\left( \sum_{\ell=1}^{m(c)} \frac{N_{c\ell}}{n} \left( \bar{Y}_{c\ell,1} - \bar{Y}_{c\ell, 0} \right) \right) - \mathbb{V}\left( \frac{N_{c}}{n} \left( \bar{Y}_{c,1} - \bar{Y}_{c, 0} \right) \right)\right]\\ \end{aligned} \end{equation*} and $\mathbb{V} \left( \bar{\tau}_{\textrm{strat}}^{s} \right) \geq \mathbb{V} \left( \bar{\tau}_{\textrm{strat}}^{\lambda} \right)$ otherwise. \end{theorem} A detailed proof is provided in Appendix \ref{appA}, but here we offer a sketch of the proof to build intuition. 
In comparing two stratifications, $\lambda$ and $s$, across discrete covariates $X$, we can partition the level sets of the two stratification functions as follows: \begin{enumerate} \item $\mathcal{A}$: values of $x \in \mathcal{X}$ for which both $\lambda$ and $s$ agree \item $\mathcal{B}$: values of $x \in \mathcal{X}$ for which $s$ substratifies $\lambda$ but the mean and variance of $Y \mid Z$ are constant across substrata formed by $s$ \item $\mathcal{C}$: values of $x \in \mathcal{X}$ for which $s$ substratifies $\lambda$ and either the mean of $Y \mid Z$, the variance of $Y \mid Z$, or both vary across substrata formed by $s$ \end{enumerate} We ignore $\mathcal{A}$ and focus on $\mathcal{B}$ and $\mathcal{C}$. In the case of $\mathcal{B}$, $s$ performs ``unnecessary'' stratification, estimating and re-aggregating conditional means which are the same in the underlying data generating process, and thus incurs additional variance over the $\lambda$ stratification estimator. On the other hand, when we consider $\mathcal{C}$, $\lambda$ incurs additional variance over $s$ by failing to control for differences in the distribution of $Y \mid Z$. In summary, $\mathcal{B}$ induces a variance penalty on $s$ relative to $\lambda$ by ``overstratification'', while $\mathcal{C}$ induces a variance penalty on $\lambda$ relative to $s$ by ``understratification.'' Which estimator is preferred depends on the magnitude of these competing effects, as articulated in the $\nu < \eta$ inequality above. The practical upshot of this theorem is that stratification that accounts for substantial variation in the response will tend to reduce the variance of the treatment effect estimator (whether or not it is confounded in the sense of covarying with propensity to receive treatment), while stratification that accounts only for variation in treatment assignment will increase the variance of the treatment effect estimator. This conclusion is illustrated in the examples of the following section. 
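The competition between over- and under-stratification can be reproduced in a small simulation. In the hedged sketch below (our own illustrative setup; the constants are assumptions, not taken from the text), an extra binary covariate $w$ is either purely prognostic (it shifts $\mu$ only) or purely instrumental (it shifts $\pi$ only); stratifying on $w$ reduces variance in the first case and inflates it in the second:

```python
import numpy as np

def strat_var(n_reps, n, stratify_on_w, prognostic, rng):
    """Monte Carlo variance of the effect estimate when a binary covariate w
    is prognostic (shifts mu only) or purely instrumental (shifts pi only)."""
    ests = []
    for _ in range(n_reps):
        w = rng.integers(0, 2, size=n)
        pi = np.full(n, 0.5) if prognostic else (0.1 + 0.8 * w)
        z = rng.binomial(1, pi)
        mu = 3.0 * w if prognostic else np.zeros(n)
        y = mu + 1.0 * z + rng.normal(0.0, 1.0, size=n)
        if stratify_on_w:
            est = sum((w == j).mean()
                      * (y[(w == j) & (z == 1)].mean() - y[(w == j) & (z == 0)].mean())
                      for j in (0, 1))
        else:
            est = y[z == 1].mean() - y[z == 0].mean()
        ests.append(est)
    return np.var(ests)

rng = np.random.default_rng(3)
v_prog_coarse = strat_var(400, 500, stratify_on_w=False, prognostic=True,  rng=rng)
v_prog_fine   = strat_var(400, 500, stratify_on_w=True,  prognostic=True,  rng=rng)
v_inst_coarse = strat_var(400, 500, stratify_on_w=False, prognostic=False, rng=rng)
v_inst_fine   = strat_var(400, 500, stratify_on_w=True,  prognostic=False, rng=rng)
```

All four estimators are unbiased; only their variances differ, in the pattern the theorem predicts.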
\section{Vignettes}\label{examples} This section collects examples illustrating the statistical trade-offs underlying feature selection for causal effect estimation that are articulated in Theorem \ref{theorem2}. Many of the examples are interesting in their own right; connections to previous literature are provided throughout. \subsection{In what sense is randomization the ``gold standard'' for causal effect estimation?} It has become boilerplate in reports on observational studies to remark that ``in the absence of the gold standard of a randomized clinical trial, one may pursue statistical methods to control for confounding''. But in what sense is randomized treatment assignment the gold standard? Surely solid-state physicists do not randomize their lab conditions and hope their sample size is large enough to reveal interesting results. Famously, esteemed physicist Ernest Rutherford quipped ``If your experiment needs statistics, you ought to have done a better experiment'' (\cite{hammersley1962monte}). The intuition behind this remark is that it is {\em control} that is central, not randomization. See section \ref{constantcontrol} for a definition of a control feature that evokes the experimental notion of ``control''. Indeed, randomization is simply a way to guarantee control {\em on average} in the event that exact control is impossible, such as when crucial confounding factors are unobserved. This perspective in turn suggests that controlling for factors that we {\em can} observe and randomizing only for factors that we cannot observe would be the ideal approach. The following thought experiment amplifies this intuition. Consider studying the effect of treatment $Z$ on outcome $Y$ in a sample of $n$ pairs of identical twins and deciding how to allocate treatment across the $2n$ study participants. Completely randomized treatment assignment satisfies the assumptions outlined above and thus identifies the treatment effect. 
However, a naive randomization would sometimes accidentally treat both twins and leave other twin pairs untreated. This violates most people's intuition about why twin studies are interesting and useful, which is that giving one twin the treatment and the other a placebo implicitly ``controls for'' all of the shared biological and environmental factors that may impact the treatment effect. Randomization within each twin pair can protect against unmeasured factors that may confound the result, such as (perhaps) which twin was born first. In this case, both $Z$ and the twin pair index, $X$, are informative about the expected value of $Y$. Now consider four possible approaches to study the effect of $Z$ on $Y$: \begin{center} \begin{tabular}{c | c | c} & Design & Estimator \\ \hline 1 & Complete randomization & Unadjusted mean difference \\ 2 & Twin pair randomization & Unadjusted mean difference \\ 3 & Complete randomization & \;\;Adjusted mean difference \\ 4 & Twin pair randomization & \;\;Adjusted mean difference \\ \end{tabular} \end{center} where the unadjusted mean difference estimator is defined as $$\bar{\tau}_U = \bar{Y}_{Z=1} - \bar{Y}_{Z=0}$$ and the adjusted mean difference estimator is defined as $$\bar{\tau}_A = \sum_{x \in \mathcal{X}} \frac{n_x}{n} \left( \bar{Y}_{X=x, Z=1} - \bar{Y}_{X=x, Z=0} \right)$$ where $\mathcal{X}$ is the set of twin pairs and $X$ is a variable that indexes twin pairs. Each of the four approaches above identifies the ATE. However, adjusting for twin pairs (approaches 3 and 4) will tend to reduce variance over the unadjusted alternatives (1 and 2) and, similarly, designs that incorporate twin pairs in randomization (2 and 4) will also see a reduction in variance over the completely randomized alternatives (1 and 3). These results are implicit in Theorem \ref{theorem2}, which can be applied even if the propensity function is constant, as in a randomized trial. 
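This ranking can be checked by simulation. The hedged sketch below (illustrative Python, with an assumed large shared pair effect) compares approach 1 (complete randomization with the unadjusted estimator) against approach 4 (within-pair randomization with the adjusted estimator):

```python
import numpy as np

def twin_var(n_pairs, n_reps, paired, rng):
    """Monte Carlo variance of the treatment effect estimate in a twin study.
    paired=True: one treated twin per pair, pair-adjusted estimator;
    paired=False: complete randomization, unadjusted mean difference."""
    ests = []
    for _ in range(n_reps):
        alpha = rng.normal(0.0, 3.0, size=n_pairs)   # shared pair effect
        if paired:
            # twins are exchangeable here, so treat one twin per pair;
            # the adjusted estimator is the mean within-pair difference
            y1 = alpha + 1.0 + rng.normal(0.0, 1.0, size=n_pairs)  # treated twin
            y0 = alpha + rng.normal(0.0, 1.0, size=n_pairs)        # control twin
            ests.append(np.mean(y1 - y0))
        else:
            a = np.repeat(alpha, 2)                    # both twins share alpha
            z = rng.permutation(np.repeat([0, 1], n_pairs))
            y = a + 1.0 * z + rng.normal(0.0, 1.0, size=2 * n_pairs)
            ests.append(y[z == 1].mean() - y[z == 0].mean())
    return np.var(ests)

rng = np.random.default_rng(4)
v_paired   = twin_var(100, 500, paired=True,  rng=rng)
v_complete = twin_var(100, 500, paired=False, rng=rng)
```

Both estimators are unbiased for the unit effect of 1, but exploiting the pair structure removes the shared variation $\alpha$ and yields a far smaller sampling variance.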
As intuitive as this example may be, and despite its lesson being a straightforward implication of Theorem \ref{theorem2}, regression adjustment for randomized trial data remains controversial. Freedman \citep{freedman2008regression, freedman2008randomization} criticized regression adjustment on the grounds that linear or linear logistic regression is potentially biased. Unfortunately, many researchers took this advice without first considering non-linear alternatives. \cite{lin2013agnostic} shows that regression adjustment in experimental data is not asymptotically biased, provided one entertains a richer set of interacted or saturated models rather than a basic linear model. Of course, the stratification estimators studied here are fundamentally nonparametric and so are consistent with the conclusions of \cite{lin2013agnostic}. At the same time, Theorem \ref{theorem2} concedes that for some data generating processes, undertaking a regression adjustment (via stratification) would simply produce unnecessary variability, specifically for data generating processes where the available control factors are not sufficiently predictive of the response. In many applied problems we find ourselves somewhere in between this case of mostly useless controls and the twin experiment situation of profoundly informative controls. \subsection{Propensity scores}\label{propensity} Following the work of \cite{rosenbaum1983central}, the propensity score (expression \ref{propscore}) has become a central element in many applied analyses of causal effects. In that paper, it was first shown that $\pi(x)$ satisfies conditional unconfoundedness, from which it follows that \begin{equation} \textrm{ATE} = \mathbb{E}[\RV{Y}^1 - \RV{Y}^0] = \mathbb{E}_{\pi(\RV{X})}[\mathbb{E}[\RV{Y} \mid \pi(\RV{X}), \RV{Z} = 1] - \mathbb{E}[\RV{Y} \mid \pi(\RV{X}), \RV{Z} = 0]]. 
\end{equation} This differs from the more general form of conditional unconfoundedness in that $\pi(\RV{X})$ is one-dimensional, while $\RV{X}$ itself typically involves many controls. An especially common use of the propensity score in practice is via the inverse-propensity weighted (IPW) estimator \begin{equation} \label{ipw} \bar{\tau}_{\textrm{ipw}} = \frac{1}{\RV{N}} \sum_{i=1}^{\RV{N}} \left( \frac{\RV{Y}_i \RV{Z}_i}{\pi(\RV{X}_i)} - \frac{\RV{Y}_i (1-\RV{Z}_i)}{1-\pi(\RV{X}_i)} \right), \end{equation} which is known to be consistent and has been widely studied theoretically. Here we re-examine a curious result of \cite{hirano2003efficient} which shows that an IPW estimator based on estimated propensity scores attains lower asymptotic variance than one based on the true propensity function. We can apply the finite-sample results of Theorem \ref{theorem2} to re-evaluate the meaning of this widely-known result by noting the following correspondence between IPW estimators and stratification estimators: \begin{lemma}\label{ipw_strat} The empirical inverse propensity weighting (IPW) estimator is equivalent to $\bar{\tau}^{x}_{strat}$ under the following conditions: \begin{enumerate} \item $\mathcal{X}$ is discrete, \item For all $x \in \mathcal{X}$, $N_{x,1} > 0$ and $N_{x,0} > 0$, \item The propensity weighting function is estimated nonparametrically as $\hat{\pi}(x) = N_{x, 1} / N_{x}$ for each $x \in \mathcal{X}$. 
\end{enumerate} \end{lemma} \begin{proof} By direct calculation, \begin{equation*} \begin{aligned} \bar{\tau}^{x}_{ipw} &= \frac{1}{n} \sum_{i=1}^n \left(\frac{Y_i Z_i}{\hat{\pi}(X_i)} - \frac{Y_i(1-Z_i)}{1-\hat{\pi}(X_i)}\right) \\ &= \frac{1}{n} \sum_{i=1}^n \left(\frac{Y_i Z_i}{N_{x_i, 1} / N_{x_i}} - \frac{Y_i(1-Z_i)}{N_{x_i, 0} / N_{x_i}}\right) = \frac{1}{n} \sum_{i=1}^n \left(\frac{Y_i Z_i N_{x_i}}{N_{x_i, 1}} - \frac{Y_i (1-Z_i) N_{x_i}}{N_{x_i, 0}}\right)\\ &= \frac{1}{n} \sum_{x \in \mathcal{X}} \left( \frac{N_x}{N_{x,1}} \left( \sum_{i: X_i = x} Y_i Z_i \right) - \frac{N_x}{N_{x,0}} \left( \sum_{i: X_i = x} Y_i (1 - Z_i) \right)\right)\\ &= \frac{1}{n} \sum_{x \in \mathcal{X}} \left( \frac{N_x}{N_{x,1}} \left( N_{x,1} \bar{Y}_{x,1} \right) - \frac{N_x}{N_{x,0}} \left( N_{x,0} \bar{Y}_{x,0} \right)\right)\\ &= \sum_{x \in \mathcal{X}} \frac{N_x}{n} \left(\frac{N_{x,1} \bar{Y}_{x,1}}{N_{x,1}} - \frac{N_{x,0} \bar{Y}_{x,0}}{N_{x,0}}\right) = \sum_{x \in \mathcal{X}} \frac{N_x}{n} \left(\bar{Y}_{x,1} - \bar{Y}_{x,0}\right) = \bar{\tau}^{x}_{strat}. \end{aligned} \end{equation*} \end{proof} First, we give a finite-sample analogue of the \cite{hirano2003efficient} result in the stratification context. Then, we demonstrate a modified estimator based on a known propensity score that improves upon the estimated propensity score IPW. \subsubsection{``Noisy estimates of one''.} Denote a candidate propensity function by $q: \mathcal{X} \mapsto (0, 1)$, so that the corresponding IPW estimator is \begin{equation} \bar{\tau}_{\textrm{ipw}}^q = \sum_x \left( \frac{\RV{N}_x}{\RV{N}} \right) \bar{\tau}_{\textrm{ipw}}^{q,x} \end{equation} where \begin{equation} \bar{\tau}_{\textrm{ipw}}^{q,x} = \left( \frac{\hat{\pi}(x)}{q(x)} \bar{\RV{Y}}_{x, \RV{Z}=1} - \frac{1-\hat{\pi}(x)}{1-q(x)} \bar{\RV{Y}}_{x, \RV{Z}=0} \right) \end{equation} and $\hat{\pi}(x) = \left( \RV{N}_{x, \RV{Z}=1} / \RV{N}_{x} \right)$ is the proportion of treated units in each stratum. 
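Both Lemma \ref{ipw_strat} and the role of the candidate function $q$ can be examined numerically. The hedged sketch below (Python; all data and constants are our own illustrative assumptions) first verifies that IPW with the nonparametric propensity estimate $\hat{\pi}(x) = N_{x,1}/N_x$ reproduces the stratification estimator exactly, and then compares the Monte Carlo variance of the $q = \pi$ and $q = \hat{\pi}$ weightings on a single stratum:

```python
import numpy as np

rng = np.random.default_rng(5)

# Part 1: IPW with empirical propensities equals the stratification estimator.
# Fixed stratum compositions guarantee N_{x,1} > 0 and N_{x,0} > 0.
x = np.repeat([0, 1, 2], [30, 40, 30])
z = np.concatenate([np.repeat([1, 0], [10, 20]),    # stratum x = 0
                    np.repeat([1, 0], [20, 20]),    # stratum x = 1
                    np.repeat([1, 0], [15, 15])])   # stratum x = 2
y = 2.0 * x + 1.5 * z + rng.normal(size=100)

pi_hat = np.array([z[x == k].mean() for k in range(3)])[x]   # N_{x,1} / N_x
ipw = np.mean(y * z / pi_hat - y * (1 - z) / (1 - pi_hat))
strat = sum((x == k).mean()
            * (y[(x == k) & (z == 1)].mean() - y[(x == k) & (z == 0)].mean())
            for k in range(3))                                # identical to ipw

# Part 2: q = true pi versus q = empirical pi_hat on a single stratum.
def ipw_var(n_reps, n, use_true_pi, rng):
    """Monte Carlo variance of the single-stratum q-weighted estimator."""
    pi_true, mu, tau = 0.5, 10.0, 1.0
    ests = []
    for _ in range(n_reps):
        zz = rng.binomial(1, pi_true, size=n)
        if zz.sum() in (0, n):        # skip degenerate draws
            continue
        yy = mu + tau * zz + rng.normal(0.0, 1.0, size=n)
        ph = zz.mean()
        q = pi_true if use_true_pi else ph
        ests.append((ph / q) * yy[zz == 1].mean()
                    - ((1 - ph) / (1 - q)) * yy[zz == 0].mean())
    return np.var(ests)

v_true = ipw_var(2000, 200, use_true_pi=True, rng=rng)
v_est  = ipw_var(2000, 200, use_true_pi=False, rng=rng)
```

With $q = \hat{\pi}$ the ratios cancel exactly, collapsing each stratum term to a simple mean difference; with $q = \pi$ the random factor $\hat{\pi}/\pi$ multiplies the large baseline mean and only adds variability.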
Taking $q(x) = \pi(x)$ is the ``true propensity score'' case, while letting $q(x) = \hat{\pi}(x)$ is the ``estimated propensity score'' case. In the former case, the treated and untreated stratum averages are weighted by $\hat{\pi}(x) / \pi(x)$ and $\left( 1-\hat{\pi}(x) \right) / \left(1-\pi(x) \right)$, respectively; in the latter case the weights are identically one. This difference in weights leads to the following analogue of the result of \cite{hirano2003efficient}: \begin{theorem} \label{theorem3} There exists some $\epsilon > 0$ such that if $\lvert \mu(x) \rvert + \lvert \tau(x) \rvert > \epsilon$ for at least one $x \in \mathcal{X}$, $$\mathbb{V} \left( \sum_{x \in \mathcal{X}} \bar{\tau}_{\textrm{ipw}}^{\hat{\pi},x} \right) \leq \mathbb{V} \left( \sum_{x \in \mathcal{X}} \bar{\tau}_{\textrm{ipw}}^{\pi,x} \right).$$ \end{theorem} Essentially, the random weights in the true propensity IPW are only adding variability, compared to the IPW based on estimated weights, where exact cancellation occurs. A proof may be found in the appendix. Of course, there are many other possible IPW estimators, such as those based on parametric estimates. However, any parametric form will have a similar problem to the true propensity IPW if exact cancellation is not obtained. \subsubsection{The dimension reduction benefit of known propensity scores.} An understanding of why the estimated propensity weights outperform the true propensity weights permits us to consider a modified estimator that is able to make use of knowledge of the true propensity scores (should they be known). Suppose $K_{\pi} = |\pi(\mathcal{X})| < |\mathcal{X}| = K$. If $\pi$ were known exactly prior to estimating the average treatment effect, this reduction in the strata should confer a benefit in terms of variance reduction --- there are simply fewer conditional expectations to estimate and there are more data available for estimating each one. 
Moreover, it is still possible to avoid the noisy-estimation-of-one effect by estimating the propensity score values on the level sets of $\pi$; letting $\rho \in \pi(\mathcal{X})$ denote a specific value in the range of $\pi$ gives \begin{equation} \begin{split} \bar{\tau}_{\textrm{ipw}}^{\hat{\pi},\rho} &= \left( \frac{\hat{\pi}(\rho)}{\hat{\pi}(\rho)} \bar{\RV{Y}}_{\rho, \RV{Z}=1} - \frac{1-\hat{\pi}(\rho)}{1-\hat{\pi}(\rho)} \bar{\RV{Y}}_{\rho, \RV{Z}=0} \right)\\ &= \bar{\RV{Y}}_{\rho, \RV{Z}=1} - \bar{\RV{Y}}_{\rho, \RV{Z}=0} . \end{split} \end{equation} More precisely: \begin{corollary} \label{corollary1} Suppose the following conditions hold: \begin{enumerate} \item If $\pi(x) = \pi(x')$, then $\mu(x) = \mu(x')$ and $\tau(x) = \tau(x')$, \item $\mathbb{V}(\epsilon \mid \RV{X} = x) = \sigma_{j}^2$ for all $x$ with $\pi(x) = j$ and for all $j \in \pi(\mathcal{X})$, and \item $|\pi(\mathcal{X})| < |\mathcal{X}|$. \end{enumerate} Then $$\mathbb{V} \left( \sum_{j \in \pi(\mathcal{X})} \bar{\tau}_{\textrm{ipw}}^{\hat{\pi},j} \right) \leq \mathbb{V} \left( \sum_{j \in \pi(\mathcal{X})} \left( \sum_{x: \pi(x) = j} \frac{N_x}{N_j} \bar{\tau}_{\textrm{ipw}}^{\hat{\pi},x} \right) \right).$$ \end{corollary} This result formalizes the intuition that fewer strata imply a greater degree of aggregation and that, with larger sample sizes in the remaining strata, estimation should be accordingly more efficient. In other words, knowledge of the true propensity function permits feature selection, after which the empirical propensities can be used in an IPW estimator (which is equivalent to the stratification estimator on the selected features). Condition one requires some explanation: in the fixed-strata case ``over-stratification'' can actually be beneficial if the additional strata are predictive of the response itself and condition one rules out this possibility, as it states that $\mu\mbox{-}\tau$ is at least as coarse as $\pi$. 
That is, in addition to the ``noisy-estimation-of-one'' phenomenon, empirical estimates of the propensity score can benefit from being defined on strata that are predictive of the response, but {\em not} the treatment assignment; this benefit is not directly related to the true-versus-estimated propensity score question, but merely reflects the fact that controlling for prognostic factors can benefit treatment effect estimation. \subsubsection{The inefficiency of instrumental controls.} While a known propensity score can potentially aid IPW estimation by preventing unnecessary stratification, an additional corollary of Theorem \ref{theorem2} tells us that stratification based on a known propensity function may produce unnecessary stratification as a result of {\em unconfounded} variation in propensity scores, which we refer to as ``instrumental'' stratification. \begin{corollary} \label{corollary2} Define a stratification $s$ such that $\lvert s(\mathcal{X}) \rvert < \lvert \pi(\mathcal{X}) \rvert$ and define $g: \pi(\mathcal{X}) \rightarrow s(\mathcal{X})$ as a function that collapses level sets of $\pi$ into level sets of $s$. 
Let $m(j) = \lvert \left\{ \pi(x): g(\pi(x)) = j \right\} \rvert$ and suppose the following conditions hold: \begin{enumerate} \item There exist $x, x'$ such that $\pi(x) \neq \pi(x')$ while $s(x) = s(x')$, $\mu(x) = \mu(x')$, and $\tau(x) = \tau(x')$, \item $\mathbb{V}(\epsilon \mid \pi(\RV{X}) = p) = \sigma_{j}^2$ for all $x$ with $\pi(x) = p$ and $g(\pi(x)) = j$ and for all $j \in s(\mathcal{X})$ \end{enumerate} Then, $$\mathbb{V} \left( \sum_{j \in s(\mathcal{X})} \bar{\tau}_{\textrm{ipw}}^{\hat{\pi},j} \right) \leq \mathbb{V} \left( \sum_{j \in s(\mathcal{X})} \left( \sum_{\pi: g(\pi) = j} \frac{N_{\pi}}{N_j} \bar{\tau}_{\textrm{ipw}}^{\hat{\pi},\pi} \right) \right).$$ \end{corollary} This corollary and the other results of this subsection provide rigorous finite-sample corroboration of the advice offered in \cite{hernan2020causal} quoted in the introduction. \subsection{Generalized prognostic scores} In data generating processes where variation in $\tau$ is independent of $Z$, the {\em prognostic score}, $\mathbb{E}(Y^0 \mid X = x) = \mu(x)$, is a sufficient control function. This follows because mean conditional unconfoundedness is satisfied trivially by $s(X) = \mu(X)$ when $\tau(X) \independent Z$; see Lemma \ref{MCU}. Like the propensity score, the prognostic score can be estimated from partially observed data --- the propensity score can be estimated from $(X, Z)$ pairs and the prognostic score can be estimated from control units only, $(X, Z = 0, Y)$, which in many contexts are more readily available than treated observations. See \cite{hansen2008prognostic} for a rigorous exposition of prognostic scores. The vector-valued function $(\mu, \tau)$ is a ``generalized'' prognostic score, containing both the usual prognostic score, as well as the treatment effect itself. This version of the prognostic score has received little attention, presumably because it ``begs the question'', in that one of its elements is the very estimand of interest. 
However, note that conditioning on a random variable is not about the values of that variable per se, but is rather about the level sets of the function defining that random variable. In particular, any one-to-one function of $\mu$-$\tau$ also satisfies mean conditional unconfoundedness; knowledge of the treatment effect itself is not required, merely knowledge of which strata have distinct treatment effects. Note also that Theorem \ref{theorem2} suggests that prognostic strata are more desirable from an estimation variance perspective, suggesting, perhaps counterintuitively, that large control groups may be advantageous in practice and that investing in data collection of prognostic factors should be prioritized in cases where randomization of treatment assignment is not possible. \subsection{Constant control function}\label{constantcontrol} The previous two examples showed that propensity scores and prognostic scores are sufficient control functions; this example demonstrates a function that may be coarser than either one. Consider a function on $\mathcal{X}$ defined as follows: \begin{definition} A function $s$ on $\mathcal{X}$ is a {\em constant control} function if for all $x, x' \in \mathcal{X}$ such that $s(x) = s(x')$ at least one of the following holds \begin{itemize} \item $\pi(x) = \pi(x')$, \item $\mu(x) = \mu(x')$ and $\tau(x) = \tau(x')$. \end{itemize} \end{definition} In other words, a constant control function is a coarsening of $\mathcal{X}$ such that on each level set defined by $s$, either $\pi(x)$ or $(\mu(x), \tau(x))$ are constant. The following lemma shows that a constant control function defines a random variable $S = s(X)$ such that $\mathbb{E}(Y \mid Z = z, S) = \mathbb{E}(Y^z \mid S)$. \begin{lemma} \label{lemma4} Assume $X$ satisfies conditional unconfoundedness and consider the random variable $S = s(X)$, where $s$ is a constant control function; then $S$ satisfies conditional unconfoundedness. 
\end{lemma} \begin{figure} \tikzfig{tikgraph}\tikzfig{tikgraph2} \caption{Causal graphical model and partially causal graphical model after integrating out $X$.} \label{decomposed_graphs} \end{figure} \begin{proof} Consider the causal diagram of $(X, Z, Y)$ expanded to include random variables $X_p = \pi(X)$, $X_y = (\mu(X), \tau(X))$, and $X_c = s(X)$ for $s$ defined above, depicted in panel (a) of Figure \ref{decomposed_graphs}. Integrating out $X$ leads to a probabilistic graphical model as shown in panel (b) of Figure \ref{decomposed_graphs}; dashed lines denote not-necessarily causal probabilistic dependence and solid arrows denote causal relationships. The result follows by showing that $X_p \independent X_y \mid X_c$; in terms of the diagram this means that the curved dashed line does not exist. But this follows immediately from the definition of $X_c$. For any value of $X_c$, either $X_p$ or $X_y$ is constant, and so the conditional distribution of $X_p$ and $X_y$ is trivially a product distribution. \end{proof} \noindent The intuition behind a constant control function is that one way to control for ``systematic co-variation'' is simply to remove all variation. Clearly, both $\pi(X)$ and $(\mu(X), \tau(X))$ are themselves constant control functions, as is $X$ itself. However, a constant control function may be coarser than either, as illustrated in Figure \ref{CCDR}, which shows an example of a simple data generating process that has a constant control function. In this example, the minimal constant control function comprises just two strata, although $\mu$ and $\pi$ take 10 and 11 unique values, respectively, and $|\mathcal{X}| = 20$. The treatment effect is heterogeneous but unconfounded: $\tau \sim \mbox{U}(5,10)$. 
The second panel of Figure \ref{CCDR} shows the sampling distributions of four different stratification estimators: one using level sets of $\mu$, one using level sets of $\pi$, one using all 20 values of $x$, and one using the two values of the minimal constant control function, indicating whether $x \leq 11$. All four stratification estimators are unbiased, but their differing variances exhibit a pattern consistent with Theorem \ref{theorem2}: $\mu$ gives the lowest variance, followed by $x$, followed by the constant control function, followed by $\pi$. \begin{figure} \hspace*{-0.5cm} \includegraphics[width=3in]{DGP_level_sets.png}\includegraphics[width=3in]{Estimator_distribution.png} \caption{An example of a DGP admitting a simple constant control function, $\lambda = \mathds{1}(x \leq 11)$. Here $\tau \sim \mbox{U}(5,10)$ is heterogeneous and $x \in \lbrace 1, \dots 20 \rbrace$. The left panel shows the $\mu$ values in black and the $\pi$ values in gray. The right panel shows the sampling distributions of stratification estimators based on the level sets of different functions: $\mu$ (solid black), $x$ (solid gray), $\lambda$ (dashed gray) and $\pi$ (dashed black). 
All four estimators are unbiased, with variances that differ in line with the results of Theorem \ref{theorem2}.} \label{CCDR} \end{figure} \begin{figure} \begin{tikzpicture} \begin{scope} \fill[lightgray] (330:1.25cm) circle (1.5cm); \fill[lightgray] (210:1.25cm) circle (1.5cm); \end{scope} \begin{scope} \fill[white] (90:1.25cm) circle (1.5cm); \end{scope} \begin{scope} \clip (330:1.25cm) circle (1.5cm); \fill[draw=black, pattern=north east lines] (90:1.25cm) circle (1.5cm); \end{scope} \begin{scope} \clip (210:1.25cm) circle (1.5cm); \fill[draw=black, pattern=north east lines] (90:1.25cm) circle (1.5cm); \end{scope} \draw (90:1.25cm) circle (1.5cm) node[text=black,above] {$\pi$}; \draw (210:1.25cm) circle (1.5cm) node [text=black,below left] {$\mu$}; \draw (330:1.25cm) circle (1.5cm) node [text=black,below right] {$\tau$}; \end{tikzpicture} \caption{If the coordinate dimensions of $\RV{X}$ are independent, then which variables $X_j$ appear (or not) in the eight possible combinations of $\pi$, $\mu$, and $\tau$ can be used to characterize four relevant variable types with respect to treatment effect estimation: necessary controls, pure prognostic variables, instruments, and extraneous (or noise) variables. The above Venn diagram depicts these eight regions, shaded according to these designations. Variables in the cross-hatched region are necessary controls, as they appear in both $\pi$ and either $\mu$ or $\tau$ (or both). The gray shaded region corresponds to pure prognostic variables, appearing in $\mu$ or $\tau$ (or both), but not appearing in $\pi$. The white region corresponds to instruments, variables which appear in $\pi$, but neither in $\mu$ nor $\tau$. Variables outside of the three circles are entirely irrelevant to either the outcome or the treatment. Such designations become considerably more complicated when the elements of $X$ are not independent (cf. 
Example \ref{noncausalSEM}).} \label{venn} \end{figure} \subsection{Independent variables in both $\pi$ and $(\mu, \tau)$.}\label{independent} When the coordinates of $\RV{X}$ (the nodes in the graph) are all mutually independent, a valid control set consists of the elements $X_j$ occurring in {\em both} the propensity model and (at least one of) the prognostic and moderation models. For example, this was the strategy used in concocting the example DGP presented in section \ref{partial}. As a more general example, if $\pi(x_1, \dots, x_d) = \pi(x_1, x_2)$, $\mu(x_1, \dots, x_d) = \mu(x_2, x_3)$, and $\tau(x_1, \dots, x_d) = \tau(x_4, x_5)$, and $X_1 \independent X_{j}$ for $j \neq 1$, then $X_2$ is a sufficient control. This is because $X_1$ can be integrated out of the model without inducing dependence between $\pi(x_2) = \mathbb{E}(\pi(X_1, x_2) \mid X_2 = x_2)$ and $(\mu, \tau)$, because $X_1$ is independent of variables appearing in $\mu$ and/or $\tau$ (and does not itself appear). A similar integration could be performed for variables in $\mu$ and/or $\tau$ that do not appear in $\pi$, so long as they too were independent. In fact, only conditional independence is necessary; in the present example, $X_1 \independent (X_3, X_4, X_5) \mid X_2$. Figure \ref{venn} depicts the characterization of variables into four categories (necessary controls, pure prognostic variables, instruments, and extraneous) in the case that they are mutually independent. \subsection{Sets satisfying the back-door criterion according to a given CDAG.}\label{noncausalSEM} Consider the causal diagram in Figure \ref{rectangle}. Either the propensity controls $(X_1, X_3)$ or the prognostic-moderation controls $(X_2, X_4)$ are adequate for statistical control. However, Pearl's algorithm tells us that ``mixed'' variables also suffice, such as $(X_1, X_4)$ or $(X_2, X_3)$. Interestingly, such examples show that the notions of ``instrumental'' and ``prognostic'' variables are context dependent.
Specifically, relative to a conditioning set of $(X_2, X_3)$, additional stratification using $X_4$ is prognostic, while additional stratification on $X_1$ would be instrumental. Theorem \ref{theorem2} suggests that adding prognostic controls is often desirable, while adding instruments should be avoided, but such designations will fluctuate depending on what has already been included. This example also illustrates a limitation of the triangle graph. Suppose that only $(X_2, X_3)$ were available for measurement. The resulting diagram for just those two controls (Figure \ref{triangle}) is {\em not} the usual causal diagram, because $X_2$ has no causal impact on $Z$, while $X_3$ has no causal impact on $Y.$ Accordingly, there is no unaugmented CDAG describing $(X_2, X_3, Z, Y)$; instead, we must denote merely statistical relationships using dashed lines. When a practitioner invokes conditional unconfoundedness in the potential outcomes framework, it therefore does not imply the triangular CDAG. Similarly, invoking (conditionally) exogenous errors does not imply that the resulting mean components of the structural model are causal. 
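This can be checked numerically. The sketch below simulates one hypothetical parametrization consistent with the rectangle graph ($X_1 \rightarrow X_2$, $X_3 \rightarrow X_4$, with $Z$ driven by $(X_1, X_3)$ and $Y$ by $(X_2, X_4)$); all coefficient values are assumed purely for illustration. Stratifying on the mixed, non-causal set $(X_2, X_3)$ recovers the treatment effect, while the unadjusted contrast does not:

```python
import numpy as np

# Hypothetical parametrization of the rectangle graph; the paper specifies
# only the graph structure, not these coefficient values.
rng = np.random.default_rng(0)
n = 200_000
tau = 2.0  # constant treatment effect (assumed)

x1 = rng.binomial(1, 0.5, n)
x3 = rng.binomial(1, 0.5, n)
x2 = np.where(rng.random(n) < 0.8, x1, 1 - x1)  # X2 tracks X1 80% of the time
x4 = np.where(rng.random(n) < 0.8, x3, 1 - x3)  # X4 tracks X3 80% of the time
z = rng.binomial(1, 0.2 + 0.3 * x1 + 0.3 * x3)  # propensity uses (X1, X3)
y = x2 + x4 + tau * z + rng.normal(0.0, 1.0, n)  # outcome uses (X2, X4)

# Naive contrast: biased, because (X2, X4) are correlated with (X1, X3).
naive = y[z == 1].mean() - y[z == 0].mean()

# Stratify on the "mixed" set (X2, X3), weighting strata by their frequency.
strat = 0.0
for a in (0, 1):
    for b in (0, 1):
        s = (x2 == a) & (x3 == b)
        strat += s.mean() * (y[s & (z == 1)].mean() - y[s & (z == 0)].mean())
```

Neither $X_2$ nor $X_3$ is causal for both $Z$ and $Y$, yet together they block both back-door paths, so the stratified estimate converges to $\tau$.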
In more detail, if the potential outcomes are defined in terms of the CDAG on the full set $(X_1, X_2, X_3, X_4)$, a structural model can be derived that only involves $(X_2, X_3)$, as follows: \begin{equation} \begin{split} Y^0 &= F(x_1, x_2, x_3, x_4, z = 0, \epsilon_y) = F(x_2, x_4, z = 0, \epsilon_y)\\ Y^1 &= F(x_1, x_2, x_3, x_4, z = 1, \epsilon_y) = F(x_2, x_4, z = 1, \epsilon_y)\\ \mu(x_2, x_3) &\equiv \mathbb{E}(Y^0 \mid X_2 = x_2, X_3 = x_3)\\ \tau(x_2, x_3) &\equiv \mathbb{E}(Y^1 \mid X_2 = x_2, X_3 = x_3) - \mathbb{E}(Y^0 \mid X_2 = x_2, X_3 = x_3)\\ \upsilon(X_1, x_2, x_3, X_4, \epsilon_y) &\equiv F(X_1, x_2, x_3, X_4, z = 0, \epsilon_y) - \mu(x_2, x_3)\\ & = F(x_2, X_4, z = 0, \epsilon_y) - \mu(x_2, x_3)\\ \delta(X_1, x_2, x_3, X_4, \epsilon_y) &\equiv F(X_1, x_2, x_3, X_4, z = 1, \epsilon_y) - F(X_1, x_2, x_3, X_4, z = 0, \epsilon_y) - \tau(x_2, x_3)\\ &= F(x_2, X_4, z = 1, \epsilon_y) - F(x_2, X_4, z = 0, \epsilon_y) - \tau(x_2, x_3). \end{split} \end{equation} Noting that the resulting error terms now depend not only on $\epsilon_y$, but also on $X_4$, it is necessary to show that $$(X_4, \epsilon_y) \independent Z \mid (X_2, X_3).$$ But this follows from the fact that $\mathbb{E}(Z \mid X_2 = x_2, X_3 = x_3) = \mathbb{E}(\pi(X_1, X_3) \mid X_2 = x_2, X_3 = x_3) \equiv \pi(x_2, x_3)$ is free of $X_4$. In this model, $\mu(x_2, x_3)$, $\tau(x_2, x_3)$ and $\pi(x_2, x_3)$ must not be interpreted as causal functions, despite yielding the required exogenous errors; specifically, from the graph we know that $X_2$ has no causal impact on $Z$ and $X_3$ has no causal impact on $Y,$ as depicted in Figures \ref{rectangle} and \ref{triangle}. \subsection{Sets satisfying the back-door criterion according to a transformed CDAG.} \label{transformedCDAG} This example considers a data generating process that admits distinct CDAGs, depending on how the control variables are parametrized.
This scenario is not commonly discussed, presumably because observed measurements are taken to be designated by ``nature'', so to speak. However, reflecting on invertible transformations such as $(x_1, x_2) \rightarrow (x_1, x_1/x_2)$ highlights that functional causal models are certainly subject to changes of variables. More concretely, consider the following DGP: \begin{equation*} \begin{aligned} X_j &\stackrel{iid}{\sim}\mbox{Bernoulli}\left(p_j\right)\\ \pi(X) &= \beta_0 + \beta_1 (2X_1X_2 - X_1 - X_2 + 1) + \beta_2 X_3\\ Z &\sim \mbox{Bernoulli}\left(\pi(X)\right)\\ \mu(X) &= \alpha_0 + \alpha_1 (2X_1X_2 - X_1 - X_2 + 1) + \alpha_2 X_4\\ \tau(X) &= \tau \;\; \mbox{(constant treatment effect)}\\ Y &= \mu(X) + \tau(X) Z + \epsilon, \;\;\; \epsilon \sim \mathcal{N}\left(0, \sigma^2_{\epsilon}\right)\\ \end{aligned} \end{equation*} Next, define the random variable $W = (2X_1X_2 - X_1 - X_2 + 1) = \mathds{1}(X_1 = X_2)$, regarding $X_2$ as the exogenous variable in the functional model for $W \mid X_1$. Additionally, suppressing $X_3$ and $X_4$, as they represent exogenous variation, yields the causal graph in Figure \ref{graph5}.
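The deconfounding role of $W$ can also be checked by simulation. The sketch below fixes hypothetical values for $p_j$, the $\beta$ and $\alpha$ coefficients, and $\tau$ (none of which are pinned down above) and compares the unadjusted contrast with stratification on the two values of $W$:

```python
import numpy as np

# Simulation of the DGP above with W = 1(X1 == X2). All coefficient values
# (p_j, beta, alpha, tau, sigma) are assumed for illustration only.
rng = np.random.default_rng(1)
n = 200_000
tau = 1.0

x1, x2, x3, x4 = rng.binomial(1, 0.5, (4, n))
w = 2 * x1 * x2 - x1 - x2 + 1          # equals 1 when x1 == x2, else 0
pi = 0.2 + 0.4 * w + 0.2 * x3          # beta_0, beta_1, beta_2 assumed
z = rng.binomial(1, pi)
y = 2.0 * w + 1.0 * x4 + tau * z + rng.normal(0.0, 1.0, n)  # alphas assumed

naive = y[z == 1].mean() - y[z == 0].mean()   # confounded through W

# Stratifying on the single binary feature W removes all confounding,
# even though no individual X_j is an adequate control on its own.
strat = sum(
    (w == v).mean() * (y[(w == v) & (z == 1)].mean() - y[(w == v) & (z == 0)].mean())
    for v in (0, 1)
)
```

Here $X_3$ is an instrument and $X_4$ is purely prognostic, so conditioning on $W$ alone suffices; the stratified estimate converges to $\tau$ while the naive contrast does not.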
\begin{figure} \begin{minipage}[b]{180pt} \centering \begin{tikzpicture}[baseline=-0.25em,scale=0.45] \begin{pgfonlayer}{nodelayer} \node [style=myvar] (1) at (2.5, 2.5) {$X_1$}; \node [style=myvar] (2) at (2.5, -2.5) {$X_2$}; \node [style=myvar] (3) at (-2.5, -2.5) {$X_3$}; \node [style=myvar] (4) at (7.5, 2.5) {$X_4$}; \node [style=myvar] (5) at (-2.5, 0) {$Z$}; \node [style=myvar] (6) at (7.5, 0) {$Y$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=arrow] (1) to (5); \draw [style=arrow] (1) to (6); \draw [style=arrow] (2) to (5); \draw [style=arrow] (2) to (6); \draw [style=arrow] (3) to (5); \draw [style=arrow] (4) to (6); \draw [style=arrow] (5) to (6); \end{pgfonlayer} \end{tikzpicture} \caption{Causal graph in terms of original covariates} \label{graph5a} \end{minipage} \hfill \begin{minipage}[b]{180pt} \centering \begin{tikzpicture}[baseline=-0.25em,scale=0.75] \begin{pgfonlayer}{nodelayer} \node [style=myvar] (1) at (-2.5, 2.5) {$X_1$}; \node [style=myvar] (2) at (2.5, 2.5) {$W$}; \node [style=myvar] (7) at (0.66, 0) {$Z$}; \node [style=myvar] (8) at (4.33, 0) {$Y$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=arrow] (1) to (2); \draw [style=arrow] (2) to (7); \draw [style=arrow] (2) to (8); \draw [style=arrow] (7) to (8); \end{pgfonlayer} \end{tikzpicture} \caption{Causal graph under transformed covariates} \label{graph5} \end{minipage} \end{figure} From this graph, it is clear that conditioning on $W$ satisfies conditional unconfoundedness. Most interestingly, $\lvert \mu(\mathcal{X}) \rvert = \lvert \pi(\mathcal{X}) \rvert = 4$, while $\lvert \mathcal{W} \rvert = 2$; thus $W$ is a valid control with the smallest possible support.
Indeed, the level sets of $W$ are exactly the level sets of $\lambda$: $$\mathbb{E}(\pi(X) \mid \mu(X)) = \mathbb{E}(\pi(X) \mid W, X_4) = \mathbb{E}(\pi(X) \mid W).$$ \subsection{Sets that induce collider bias in a graph without colliders}\label{pseudoCollider} We saw in Section \ref{transformedCDAG} that conditioning on synthetic ``features'' that combine existing variables can lead to smaller control sets than their component variables. It is thus perhaps natural to consider using machine learning to search for and construct such sets. It is true that \textit{certain} combinations of confounding variables may create a synthetic, minimal deconfounder. However, it is also possible to combine two independent variables to create a ``collider'' (defined in Section \ref{DAG_section}) which confounds the causal effect of $Z$ on $Y$ after conditioning. Consider the graph in Figure \ref{pseudo_collider_graph} and define its data generating equations as \begin{equation*} \begin{aligned} Y &\sim \mathcal{N}\left(\alpha X_2 + \tau Z , \sigma^2 \right)\\ Z &\sim \mbox{Bernoulli}\left(1 / 4 + X_1 / 2\right)\\ X_1, X_2 &\sim \mbox{Bernoulli}\left(1 / 2\right) \end{aligned} \end{equation*} \begin{figure} \centering \begin{tikzpicture}[baseline=-0.25em,scale=0.45] \begin{pgfonlayer}{nodelayer} \node [style=myvar] (1) at (-3, 3) {$X_1$}; \node [style=myvar] (2) at (3, 3) {$X_2$}; \node [style=myvar] (3) at (-3, 0) {$Z$}; \node [style=myvar] (4) at (3, 0) {$Y$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=arrow] (1) to (3); \draw [style=arrow] (2) to (4); \draw [style=arrow] (3) to (4); \end{pgfonlayer} \end{tikzpicture} \caption{Graph with no confounding and no colliders} \label{pseudo_collider_graph} \end{figure} From this graph, we can see that the average causal effect of $Z$ on $Y$ is identified unconditional of $X_1$ and $X_2$, though we may condition on either or both variables.
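A simulation sketch (with assumed values for $\alpha$, $\tau$, and $\sigma$) confirms that the unadjusted contrast recovers $\tau$ here, and previews the problem: stratifying on the engineered feature $\min(X_1, X_2)$, which combines the two independent variables, distorts the estimate:

```python
import numpy as np

# Simulation of the pseudo-collider DGP above; alpha, tau, sigma are assumed.
rng = np.random.default_rng(2)
n = 400_000
alpha, tau = 2.0, 1.0

x1 = rng.binomial(1, 0.5, n)
x2 = rng.binomial(1, 0.5, n)
z = rng.binomial(1, 0.25 + 0.5 * x1)
y = alpha * x2 + tau * z + rng.normal(0.0, 1.0, n)

# No confounding: the unadjusted contrast is unbiased for tau.
naive = y[z == 1].mean() - y[z == 0].mean()

# Conditioning on min(X1, X2) ties the instrument X1 to the prognostic X2
# within strata, biasing the stratified estimate.
xa = np.minimum(x1, x2)
strat = sum(
    (xa == v).mean() * (y[(xa == v) & (z == 1)].mean() - y[(xa == v) & (z == 0)].mean())
    for v in (0, 1)
)
```

Within the stratum $\min(X_1, X_2) = 0$, knowing $X_1 = 1$ forces $X_2 = 0$, so the stratified contrast is pulled away from $\tau$.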
Suppose we construct two synthetic variables \begin{equation*} \begin{aligned} \tilde{X}_A &= \min\left\{X_1, X_2\right\}\\ \tilde{X}_B &= a \left[ \mathbf{1}\left\{X_1 = 1\right\} \mathbf{1}\left\{X_2 = 1\right\} + \mathbf{1}\left\{X_1 = 0\right\}\mathbf{1}\left\{X_2 = 0\right\} \right]\\ &\;\;\;\;\; + b \left( \mathbf{1}\left\{X_1 = 1\right\} \mathbf{1}\left\{X_2 = 0\right\} \right) + c \left( \mathbf{1}\left\{X_1 = 0\right\} \mathbf{1}\left\{X_2 = 1\right\} \right) \end{aligned} \end{equation*} where the unique values of categorical $\tilde{X}_B$ may be treated as strata of a conditioning set. We show in the simulation results in Table \ref{tab:table2} that conditioning on either $\tilde{X}_A$ or $\tilde{X}_B$ biases the estimate of the average treatment effect, while conditioning on both $X_1$ and $X_2$ does not. Note that both $\tilde{X}_A$ and $\tilde{X}_B$ are constructed in a manner not unlike the ``feature learning'' step of common machine learning algorithms, such as neural networks and decision trees. \input{pseudo_collider.tex} \subsection{Sets satisfying the back-door criterion with respect to a mean CDAG.}\label{meanCDAG} The structural model perspective permits us to produce, starting from a given CDAG, a modified causal diagram that reflects only the mean dependencies. For estimation of average causal differences, such a graph suffices to identify valid control variable sets that are potentially smaller than any control set satisfying the back-door criterion on the original CDAG.
For example, consider the following data generating process: \begin{equation*} \begin{aligned} X &\sim \mbox{Bernoulli}(1/2),\\ Z &\sim \mbox{Bernoulli}\left(1/4 + X/2 \right)\\ Y &\sim \mathcal{N}\left(\tau Z, (\sigma + X)^2\right) \end{aligned} \end{equation*} For this DGP, $\mu(X) = 0$ and $\tau(X) = \tau$ are both constant in $X$, which implies that the empty set satisfies mean conditional unconfoundedness; even though $X$ is a common cause of $Z$ and $Y$, it only affects the variance of $Y$, but not the mean. Therefore, the full joint distribution of $X, Z, Y$ is described by the triangle diagram of Figure \ref{triangle}, while Figure \ref{graph16} depicts the joint distribution of $(X, Z, \mathbb{E} (Y \mid X, Z))$, in which $X$ is unconnected to $\mathbb{E}(Y \mid X, Z) = \mathbb{E}(Y \mid Z)$. Note that while mean conditional unconfoundedness identifies the ATE, it does not identify other causal estimands. For instance, consider the quantile treatment effect (QTE), for $q \in (0,1)$: $$F^{-1}_{Y^1}(q) - F^{-1}_{Y^0}(q)$$ where $F^{-1}$ denotes an inverse cumulative distribution function. Integrating out $X$, $Y \mid Z = z$ is a mixture of two normal random variables, with PDF and CDF given by \begin{equation*} \begin{aligned} f(y \mid Z = z) &= w_z \phi(y, \tau z, (\sigma + 1)^2) + (1 - w_z) \phi(y, \tau z, \sigma^2),\\ F(y \mid Z = z) &= w_z \Phi(y, \tau z, (\sigma + 1)^2) + (1 - w_z) \Phi(y, \tau z, \sigma^2) \end{aligned} \end{equation*} where $w_z = \mathbb{P}\left( X = 1 \mid Z = z\right)$. By contrast, the marginal PDF and CDF of the potential outcome $Y^z$ are given by \begin{equation*} \begin{aligned} f_{Y^z}(y) &= \frac{1}{2} \phi(y, \tau z, (\sigma + 1)^2) + \frac{1}{2} \phi(y, \tau z, \sigma^2),\\ F_{Y^z}(y) &= \frac{1}{2} \Phi(y, \tau z, (\sigma + 1)^2) + \frac{1}{2} \Phi(y, \tau z, \sigma^2).
\end{aligned} \end{equation*} Because $X \not\independent Z$, $w_z \neq 1/2$ and therefore \begin{equation*} \begin{aligned} F^{-1}_{Y^1}(q) - F^{-1}_{Y^0}(q) &\neq F^{-1}_{Y \mid Z=1}(q) - F^{-1}_{Y \mid Z=0}(q), \end{aligned} \end{equation*} as illustrated in Figure \ref{QTE}. \begin{figure}[h] \centering \begin{tikzpicture}[baseline=-0.5em,scale=0.5] \begin{pgfonlayer}{nodelayer} \node [style=myvar] (1) at (-2, -1) {$Z$}; \node [style=myvar] (3) at (0, 1) {$X$}; \node [style=myvar] (2) at (2, -1) {$\mathbb{E} (Y \mid X, Z)$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=arrow] (3) to (1); \draw [style=arrow] (1) to (2); \end{pgfonlayer} \end{tikzpicture}\vspace{-0.6cm} \caption{Mean causal graph}\label{graph16} \end{figure} \begin{figure} \includegraphics[width=2.5in]{example4point9a.png} \includegraphics[width=2.5in]{example4point9b.png}\\ \includegraphics[width=2.5in]{example4point9c.png} \includegraphics[width=2.5in]{example4point9d.png}\\ \includegraphics[width=2.5in]{example4point9e.png} \includegraphics[width=2.5in]{example4point9f.png} \caption{An illustration of a confounded quantile treatment effect with unconfounded ATE. The top two panels depict the density and CDF functions of the DGP from section \ref{meanCDAG} for the four combinations of $X \in \{ 0, 1 \}$ and $Z \in \{ 0, 1 \}$. For each value of $X$ the change in the quantile is a constant shift to the right. The second row shows the densities of the potential outcome distributions and the conditional distribution of $Y \mid Z$, respectively, with $X$ integrated out. In both cases, the resulting density is a mixture of two normals with different variances and a common mean. However, the potential outcomes densities are just translations of the same mixture density, whereas the conditional distribution of $Y \mid Z$ also differs in terms of the mixture weights. The bottom row depicts the same relationship, but in terms of the CDFs. 
Attempts to estimate the quantile treatment effect --- shown here as the distance between the black and grey curves at the horizontal dashed line in the left panel --- using the analogous distance from the right panel would misestimate the effect.}\label{QTE} \end{figure} \subsection{Partial randomization}\label{partial} Some estimands require weaker assumptions than the average treatment effect over the whole population does. For example, the {\em average treatment effect among the treated}, or ATT, is defined as $\mathbb{E}(Y^1 - Y^0 \mid Z = 1) = \mathbb{E}(Y^1 \mid Z = 1) - \mathbb{E}(Y^0 \mid Z = 1)$\footnote{In our experience, this potential outcomes notation for the ATT can give students fits, particularly the $\mathbb{E}(Y^0 \mid Z = 1)$ term. Such students may find the structural equation notation to be somewhat more transparent: $\mathbb{E}(\tau(X) \mid Z = 1)$ makes it clear that the probabilistic impact of conditioning on $Z = 1$ is to modify the distribution over $X$ defining the expectation; there is no opportunity for cognitive interference from the fact that the ``$z$'' in $Y^z$ is different from that in the condition $Z = z$.}. This estimand is important in the program evaluation literature, see for example \cite{heckman1996identification} and \cite{heckman1997matching}. Here we use structural model notation to compare the ATT to the ATE, as they relate to the ``naive'' contrast that compares the average response among treated individuals to the average response among the untreated individuals. In terms of the population, the naive contrast estimates $\mathbb{E}(Y \mid Z = 1) - \mathbb{E}(Y \mid Z = 0)$. In terms of the structural model, this is equivalent to $$\mathbb{E}(\mu(X) + \tau(X) \mid Z = 1) - \mathbb{E}(\mu(X) \mid Z = 0).$$ By definition, the exogenous errors are mean zero and vanish from the above expression.
Now, randomization of $Z$ implies that $(\mu(X), \tau(X)) \independent Z$, which in turn implies that $\mathbb{E}(\mu(X) \mid Z = 1) = \mathbb{E}(\mu(X) \mid Z = 0)$ and therefore that $$\mathbb{E}(\mu(X) + \tau(X) \mid Z = 1) - \mathbb{E}(\mu(X) \mid Z = 0) = \mathbb{E}(\tau(X) \mid Z = 1),$$ the ATT. Randomization further implies that $\mathbb{E}(\tau(X) \mid Z = 1) = \mathbb{E}(\tau(X))$, so that the ATE and the ATT are the same. However, the above derivation also reveals that to estimate the ATT one only needs $\mathbb{E}(\mu(X) \mid Z = 1) = \mathbb{E}(\mu(X) \mid Z = 0)$, or what we might call {\em mean prognostic unconfoundedness}, which itself follows from $\mu(X) \independent Z$, or {\em prognostic unconfoundedness}. Thus, when the ATT is the sole interest, one only needs to rule out prognostic confounding. Meanwhile, treatment effect confounding, $\tau(X) \not \independent Z$, entails that the ATT and ATE are different, so that the ATE remains unknown even with the ATT in hand. Note that a similar argument works for $\mathbb{E}(\tau(X) \mid Z = 0)$, the average effect of the treatment on the control (untreated) population, or ATC. This is easiest to see by reparametrizing the structural model in terms of: $Z^* = 1 - Z$, $\mu^*(X) = \mu(X) + \tau(X)$, and $\tau^*(X) = -\tau(X)$. It then follows that the ATC may be estimated from the naive contrast so long as $\mu^*(X) \independent Z$. As it relates to feature selection, it is notable that a smaller feature set may suffice for estimating the ATT than is required for estimating the ATE. The following DGP is a concrete example: \begin{align*} X_1 \sim \mbox{Bernoulli}(1/2)&,\;\;\;X_2 \sim \mbox{Bernoulli}(1/2),\\ Z \mid X_1, X_2 &\sim \mbox{Bernoulli}(0.25 + 0.5 X_2),\\ Y \mid X_1, X_2, Z &\sim \mathcal{N}(X_1 + (1 + 2 X_2)Z, \sigma^2). \end{align*} In this example, $\tau(X) = \tau(X_2) = 1 + 2 X_2$, $\mu(X) = \mu(X_1) = X_1$, and the ATE is $\mathbb{E}(\tau(X)) = 1 + 2\mathbb{E}(X_2) = 2$.
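This DGP is simple to simulate directly; the sketch below (with $\sigma = 1$ assumed) shows the naive contrast settling near the ATT while stratification on $X_2$ recovers the ATE:

```python
import numpy as np

# Simulation of the ATT example DGP above; sigma = 1 is assumed.
rng = np.random.default_rng(3)
n = 400_000
x1 = rng.binomial(1, 0.5, n)
x2 = rng.binomial(1, 0.5, n)
z = rng.binomial(1, 0.25 + 0.5 * x2)
y = x1 + (1 + 2 * x2) * z + rng.normal(0.0, 1.0, n)

# Naive contrast: consistent for the ATT (mu(X) = X1 is independent of Z).
naive = y[z == 1].mean() - y[z == 0].mean()

# Stratifying on X2 and weighting by its marginal frequency recovers the ATE.
strat_ate = sum(
    (x2 == v).mean() * (y[(x2 == v) & (z == 1)].mean() - y[(x2 == v) & (z == 0)].mean())
    for v in (0, 1)
)
```

The naive contrast converges to $\mathbb{E}(\tau(X_2) \mid Z = 1)$, which differs from the ATE of 2 because treated units oversample $X_2 = 1$.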
The ATT, on the other hand, is $\mathbb{E}(\tau(X) \mid Z = 1) = 1 + 2\mathbb{E}(X_2 \mid Z = 1) = 5/2$. It is a nice simulation exercise to demonstrate that the naive contrast is consistent for the ATT, but not the ATE. \subsection{A two-stage estimator using two distinct control features.}\label{split_sample} This example builds upon the ideas presented in the previous one, but returns to the goal of regression adjustments for the ATE. Suppose we know that $\mu(X) \independent Z \mid s_1(X)$ and $\tau(X) \independent Z \mid s_2(X)$, for distinct functions (features) $s_1$ and $s_2$. One approach to estimating the ATE under this assumption would be to stratify on the common refinement of $s_1(X)$ and $s_2(X)$, thus guaranteeing that $(\mu(X), \tau(X)) \independent Z \mid s(X) = s_1(X) \vee s_2(X)$. But an alternative two-stage approach is possible, which requires estimating fewer individual strata means. The procedure is: \begin{enumerate} \item Estimate $\mu(s_1(X)) = \mathbb{E}(Y \mid Z = 0, s_1(X))$ from the control data. \item Define $R = Y - \mu(s_1(X))$. \item Estimate $\mathbb{E}(R \mid Z = 1, s_2(X))$ from the treated data. \item Compute the ATE as $\mathbb{E}_X(\mathbb{E}(R \mid Z = 1, s_2(X)))$, where the outer expectation is over $X$, with respect to its marginal distribution.
\end{enumerate} We may verify the validity of this estimator by first expressing the procedure as the following iterated expectation: \begin{equation*} \begin{aligned} &\mathbb{E}_{X}\left( \mathbb{E}(Y - \mathbb{E}(Y \mid Z = 0, s_1(X)) \mid Z = 1, s_2(X)) \right)\\ &\quad= \mathbb{E}_{X}\left( \mathbb{E}(Y \mid Z = 1, s_2(X)) \right) - \mathbb{E}_{X}\left( \mathbb{E}(Y \mid Z = 0, s_1(X)) \right)\\ &\quad= \mathbb{E}_{X}\left( \mathbb{E}(\mu(X) + \tau(X) \mid Z = 1, s_2(X)) \right) - \mathbb{E}_{X}\left( \mathbb{E}(\mu(X) \mid Z = 0, s_1(X)) \right)\\ &\quad= \mathbb{E}_{X}\left( \mathbb{E}(\tau(X) \mid Z = 1, s_2(X)) \right) + \mathbb{E}_{X}\left( \mathbb{E}(\mu(X) \mid Z = 1, s_2(X)) \right) - \mathbb{E}_{X}\left( \mathbb{E}(\mu(X) \mid Z = 0, s_1(X)) \right). \end{aligned} \end{equation*} By the assumption that $\mu(X) \independent Z \mid s_1(X)$, we find that $\mathbb{E}(\mu(X) \mid Z = 0, s_1(X)) = \mathbb{E}(\mu(X) \mid Z = 1, s_1(X))$, which in turn implies that the second and third terms above are both equal to $\mathbb{E}_X(\mu(X) \mid Z = 1)$ (just expressed as distinct iterated expectations) and thus cancel. By the assumption that $\tau(X) \independent Z \mid s_2(X)$, the remaining term is equal to $\mathbb{E}(\tau(X) \mid s_2(X))$ and the desired result follows after taking the outer expectation: $\mathbb{E}(\tau(X)) = \mathbb{E}_X(\mathbb{E}(\tau(X) \mid s_2(X)))$. \section{Discussion} To conclude, we synopsize our results and discuss further relationships to previous literature. \subsection{Famous results or debates revisited} The discrete covariate setting studied here allowed us to revisit several important existing results from a unique perspective. \subsubsection*{Virtues of the propensity score.} \cite{rosenbaum1983central} is often cited in support of propensity score methods for causal inference, but its results are often overstated. First, there is not one propensity score, but many, one corresponding to each valid set of control features.
Second, a propensity score need not be minimal; it is the minimal balancing score for the complete set of features used to create it, but balancing on those features is not necessary to estimate causal effects. Third, a propensity score method that disregards important prognostic features can be much less efficient than a method that does incorporate such features. \subsubsection*{Estimated versus true propensity scores.} In practice, the propensity score (corresponding to a given set of control features) is rarely known and so must be estimated. \cite{hirano2003efficient} is sometimes cited to put a positive spin on this state of affairs: estimating a propensity function is better than knowing it exactly! But the actual situation is more nuanced. The asymptotic analysis of \cite{hirano2003efficient} comparing the IPW estimator using true versus estimated propensity scores conceals the variety of specific ways the two estimators differ. Viewing the IPW as a stratification estimator in the discrete covariate setting brings these distinctions into immediate relief. One, the IPW using the true propensity scores uses different strata weights than the one using the estimated propensity scores, resulting in a higher-variance estimator. Two, the IPW based on a true propensity score is able to collapse unnecessary strata, which can reduce the variance of the estimator. Three, collapsing unnecessary strata does not {\em always} reduce the variance, because the ``extraneous'' strata may be informative about {\em unconfounded} variation in the response. That is, an IPW estimator based on estimated propensity scores can have lower variance than one based on a true propensity score because it performs an implicit regression adjustment that is essentially unrelated to the propensity score.
To be sure, the mathematics of \cite{hirano2003efficient} are consistent with our analysis, and one can parse their expressions for such meaning, but their analysis does not expose the importance of either variable selection or prognostic stratification. \subsubsection*{Regression adjustments for randomized experiments.} \cite{freedman2008regression} is sometimes cited as a reason to avoid regression adjustment for causal effect estimation altogether. However, Freedman's result was more about model specification --- or {\em mis}specification --- than it was about regression adjustment per se. Provided that one undertakes a nonparametric adjustment, as advocated by \cite{lin2013agnostic}, Freedman's main concerns are addressed. However, nonparametric adjustment poses its own challenges, in the form of high-variance estimators. Whether or not the inclusion of strong prognostic features is enough to offset the increased variability that comes with estimating a nonparametric model with limited data is impossible to say in any generality. Theorem \ref{theorem2} approaches this question quantitatively. \subsubsection*{The peril of colliders.} \cite{greenland1999causal} introduce the ``M-Graph'' and the problem of conditioning on unblocked colliders. The issue was vigorously debated in a series of articles and replies in {\it Statistics in Medicine} between 2007 and 2009. \cite{rubin2007design} suggested that all available pre-treatment covariates should be included in the conditioning set of any observational causal analysis, while others (\cite{shrier2008letter}; \cite{sjolander2009propensity}; \cite{pearl2009remarks}) contended that such a strategy could incur collider bias. \cite{rubin2009should} responded that unblocked colliders are a stylized problem that has few practical ramifications. This exchange in turn motivated further research, including \cite{ding2015adjust}, \cite{rohde2019bayesian}, and \cite{cinelli2020crash}. 
Here, we observed that should colliders appear in a set of control variables --- along with the associated blocking variables --- regularization can unintentionally induce collider bias, revealing that colliders are not only a problem when their parents are unobserved. In particular, regularized regression approaches will struggle with colliders that are blocked by only a propensity-side ancestor. Additionally, Section \ref{pseudoCollider} demonstrated that composite features that combine non-collider variables can ``feature engineer'' a pseudo-collider; how likely this is to occur in practice for particular supervised learning algorithms is an interesting open question. \subsubsection*{Conditional unconfoundedness versus mean conditional unconfoundedness.} In a discussion of \cite{angrist1996identification}, Heckman \citep{heckman1996identification} makes a point similar to the one we make in section \ref{meanCDAG}, that conditional unconfoundedness is stronger than necessary for estimating certain treatment effects. Angrist rejoins that identification based on ``functional form'' is undesirable. Here, we have taken the perspective of Heckman, as mean conditional unconfoundedness is the key notion for defining the principal deconfounding function, so it is perhaps worthwhile to unpack why. Our interest was in understanding the conditions according to which a particular set of control variables would yield a valid stratification estimator. From this perspective, a more {\em specific} assumption is {\em weaker} than a more general one: Conditional unconfoundedness implies mean conditional unconfoundedness, but not the other way around. It is the specificity of the {\em estimand} that permits the weaker (more general) assumption on the DGP. As we explored in section \ref{meanCDAG}, mean conditional unconfoundedness does not permit estimation of quantile treatment effects. 
In order for mean conditional unconfoundedness to license estimation of quantile treatment effects, one would need to impose additional restrictions on the DGP, such as a fixed distributional shape around the unconfounded mean. But that is not our suggestion (nor do we believe it was Heckman's). Interestingly, this distinction between conditional unconfoundedness and mean conditional unconfoundedness is at the heart of the difference between general causal diagrams and more traditional path analysis. By focusing on correlations, the path diagram must only respect the mean causal relationships. Sometimes this is described by saying that path analysis ``has a structural model, but no measurement model'' (Wikipedia). \\ Additionally, a number of elementary, but easily-overlooked, facts were clarified: regression, propensity score weighting (and, {\em a fortiori}, doubly robust estimators based thereon) are identical in the case of discrete covariates (cf. Lemma \ref{ipw_strat}); CDAGs are non-unique (cf. Section \ref{transformedCDAG}), and instrumental and prognostic variable designations are inherently contingent (cf. Section \ref{noncausalSEM}). \subsection{Methodological ecumenicalism}\label{ecumenicalism} In section \ref{equivalence}, it was shown that the potential outcomes, CDAG, and exogenous errors definitions of conditional unconfoundedness are substantively equivalent. This result allows us to conveniently move between the conventions of these alternative frameworks, which implicitly emphasize distinct aspects of the problem they all address --- estimating treatment effects from data. For example, the causal graph approach reminds us that sets of valid control variables are not unique and, consequently, we must not speak of {\em the} propensity score, but rather {\em a} propensity score and, perhaps, many candidate propensity scores (cf. section \ref{propensity}).
This observation is fundamental to understanding how regularization will impact bias due to feature selection on graphs including colliders and instruments. The potential outcomes approach reminds us that the exogenous errors need not be common among the treatment arms (cf. figure \ref{graph1}). More generally, because the potential outcome notation is intrinsically individualized, it emphasizes the idea that some individuals in a population may have distinct causal diagrams; in particular, some arrows may not appear in every individual's graph. This is not at odds with the graphical formalism; rather it emerges simply because the graph alone does not fully determine the data generating process. In this paper, this distinction is not particularly important, but in estimation techniques relying on instrumental variables, it becomes critical \citep{angrist1996identification}. From the exogenous errors approach, we are reminded that full conditional unconfoundedness is not actually necessary to estimate particular causal effects (cf. section \ref{meanCDAG}); we leverage this result in defining the principal deconfounding function. Synthesizing the three methods also clarifies common misunderstandings that can occur when operating solely within a single framework; for example, a mean regression model with exogenous additive errors need not be structural (e.g., causal) in all of its arguments --- rather, the exogeneity of the errors narrowly licenses a causal interpretation in the treatment variable (cf. section \ref{noncausalSEM}). \subsection{On discrete covariates with finite support} The approach in this paper has been to consider stratification estimators in the case of discrete control variables with finite support. Discrete covariates are both common in practice (indeed, more common than continuous covariates) and pedagogically illuminating, and therefore worthy of careful study.
We are aware that not everyone agrees; we read in the textbook of Imbens and Rubin (Section 12.2.2): \small \begin{quote} If...we view the covariates as having a discrete distribution with finite support, the implication of unconfoundedness is simply that one should stratify by the values of the covariates. In that case there will be, with high probability, in sufficiently large samples, both treated and control units with the exact same values of the covariates. In this way we can immediately remove all biases arising from differences between covariates, and many adjustment methods will give similar, or even identical, answers. \\ However, as we stated before, this case rarely occurs in practice. In many applications it is not feasible to stratify fully on all covariates, because too many strata would have only a single unit. \\ The differences between various adjustment methods arise precisely in such settings where it is not feasible to stratify on all values of the covariates, and mathematically these differences are most easily analyzed in settings with random samples from large populations using effectively continuous distributions for the covariates...[Therefore] for the purpose of discussing various frequentist approaches to estimation and inference under unconfoundedness...it is helpful to view the covariates as having been randomly drawn from an approximately continuous distribution. \end{quote} \normalsize To paraphrase, the two main premises of this quote are: a) confounding --- and, more specifically, {\em de}confounding --- is relatively easy to understand in the case of discrete covariates with finite support, and b) complete stratification is infeasible in many applications. We agree with these statements. But the conclusion --- that the stylized setting of continuous covariates is therefore better suited to studying statistical methods for causal inference --- does not necessarily follow. 
Indeed, we employ a different stylized mathematical assumption --- that each stratum has at least one treated-control contrast --- and find that, even in that case, bias-variance trade-offs emerge. More importantly, these trade-offs can be studied directly, without resorting to asymptotic arguments, which may be untrustworthy guides to a method's operating characteristics in practice. For example, \cite{hahn2004functional} concludes that foreknowledge of which variables are instruments is asymptotically irrelevant for regression estimators of average treatment effects. As we have seen in Section \ref{examples} of this paper, being able to distinguish instruments from confounders is certainly relevant for finite-sample performance. \subsection{Relationship to semi-supervised learning} This paper considers the problem of feature selection for causal effect estimation when a propensity function is available, but a causal diagram is not. This assumption is of course implausible in many practical scenarios, although there are cases where it may be approximately true. For example, suppose that a researcher has a dataset with $n$ complete observations of $(X, Z, Y)$ and $m$ ``partially observed'' samples, where $m \gg n$. Partial samples of $(X,Z)$ pairs could be used to more accurately estimate $\pi(X)$, bringing their applied problem closer to the setting studied above. Similarly, partial samples on $(X, Z = 0, Y)$ could be used to better estimate $\mu(X)$, which is particularly useful in the situation described in Section \ref{split_sample}. Such scenarios may be plausible in electronic health records, for instance, in which a treatment (say, a new blood pressure medicine) is rarely administered but an outcome (say, blood pressure) is very commonly measured. The idea of using large auxiliary datasets is common in machine learning, where it is known as semi-supervised learning \citep{zhu2009introduction, belkin2006manifold, liang2007use}.
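A minimal numpy sketch of this idea, in the discrete-covariate setting of this paper: the simulated data-generating process and all variable names below are invented for illustration. A large auxiliary sample of $(X, Z)$ pairs sharpens the stratum-wise propensity estimates, which are then used for inverse-propensity weighting on the small complete sample.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n):
    # Discrete covariate with four strata; true propensity varies by stratum.
    x = rng.integers(0, 4, size=n)
    z = rng.binomial(1, 0.2 + 0.15 * x)           # treatment assignment
    y = 1.0 * z + 0.5 * x + rng.normal(size=n)    # outcome; true ATE = 1
    return x, z, y

# Small complete sample with (X, Z, Y) ...
x_c, z_c, y_c = simulate(500)
# ... and a large auxiliary sample where only (X, Z) are recorded.
x_a, z_a, _ = simulate(50_000)

# Stratum-wise propensity estimates from the pooled (X, Z) data; the
# auxiliary sample dominates and sharpens the estimate of pi(x).
x_all = np.concatenate([x_c, x_a])
z_all = np.concatenate([z_c, z_a])
pi_hat = np.array([z_all[x_all == k].mean() for k in range(4)])

# Inverse-propensity-weighted ATE on the complete sample.
w = pi_hat[x_c]
ate_ipw = np.mean(z_c * y_c / w - (1 - z_c) * y_c / (1 - w))
```

With the auxiliary sample the propensity is estimated from roughly a hundred times more observations than the complete sample alone would allow, which is the sense in which partial $(X,Z)$ samples bring the applied problem closer to the known-propensity setting.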
Using unlabeled data to estimate a propensity function in conjunction with machine learning or other regularization methods represents an exciting application of semi-supervised learning to the problem of causal effect estimation. While it is often easier to formalize and motivate the use of auxiliary data for prediction, rather than estimation, this paper shows that there is a role for machine-learning function estimation techniques in causal inference. \newpage \section*{Acknowledgements} This work was partially supported by NSF Grant DMS-1502640. \bibliographystyle{imsart-nameyear}
\section{Introduction} Unpredictability of play calls is widely accepted to be a key ingredient to success in the NFL. For example, according to several players of the 2017 Dallas Cowboys, being too predictable regarding their play calling may have been one reason for their elimination from the playoff contention of the 2017 NFL season. Being unpredictable is hence desirable, and, vice versa, it is clearly also of interest to be able to accurately predict the opponent's next play call. In earlier studies, play call predictions were carried out by simple arithmetic, such as calculating the relative frequencies of runs and passes of previous matches \citep{heiny2011predicting}. Driven by the availability of play-by-play NFL data, several studies considered statistical models for play call predictions. These studies can be divided into those where play-by-play data only is considered (see \citealp{heiny2011predicting, teich2016nfl}) and those that consider additional data on the players on the field, such as the number of offensive players for a certain position and player ratings (see \citealp{leepredicting, joashpredicting}). The former report a prediction accuracy of about 0.67, whereas the latter achieve a prediction accuracy of about 0.75. However, most of these studies use basic statistical models, e.g.\ linear discriminant analysis, logistic regression, or decision trees, which do not account for the time series structure of the data at hand. This contribution considers hidden Markov models (HMMs) for modelling and forecasting NFL play calls. In the recent past, HMMs have been applied in different areas of research for forecasting, including stock markets (see, e.g., \citealp{de2013dynamic, dias2015clustering}), environmental science (see, e.g., \citealp{chambers2012earthquake, tseng2020forecasting}) and political conflicts \citep{schrodt2006forecasting}. Within HMMs, the observations are assumed to be driven by an underlying state variable.
In the context of play calling, the underlying states serve as a proxy for the team's current propensity to make a pass (as opposed to a run). The state sequence is modelled as a Markov chain, thereby inducing correlation in the observations and hence accounting for the time series structure of the data. HMMs are fitted to data from seasons 2009 to 2017 to predict the play calls for season 2018. In practice, these predictions are helpful for defense coordinators to make adjustments in real time on the field. Offense coordinators may also benefit from these models, since they allow them to check the predictability of their own play calls. This paper is organised as follows: Section \ref{chap:data} describes the play-by-play data and provides an exploratory data analysis. Section \ref{chap:methods} explains HMMs in further detail, and Section \ref{chap:results} presents the results. \section{Data}\label{chap:data} The data for predicting play calls in the NFL were taken from \url{www.kaggle.com}, covering (almost) all plays of regular season matches from 2009 to 2018. In total, $m = 2,526$ matches are considered\footnote{The data comprises 2,526 regular-season matches out of 2,560 matches which have taken place in the time period considered.}, each of which is split up into two time series (one for each team's offense), totalling 5,052 time series containing 318,691 plays. The observed time series $\{y_{m,p}\}_{p=1,\ldots,P_m}$ indicates whether a run or a pass play has been called in the $p$-th play in match $m$, with $$ y_{m,p} = \begin{cases} 1, & \text{if $p$-th play is a pass;} \\ 0, & \text{otherwise} \end{cases} $$ and $P_m$ denoting the total number of plays in match $m$. For all matches considered, other plays such as field goals and kickoffs, which typically occur at the beginning or the end of drives, are ignored here. Since the main goal is to predict play calls, we divide the data into a training and a test data set.
The data set for training the models covers all matches from seasons 2009 -- 2017, comprising 2,302 matches and 289,191 plays. The test data covers 224 matches, totalling 29,500 plays. For the full data set, about 58.4\% of play calls were passes. Since the play of the offense is likely affected by intermediate information on the match (such as the current score), several covariates are included, which have also been considered by the previous studies on predicting play calls summarised above: a dummy indicating whether the match is played at home (\textit{home}), the yards to go for a first down (\textit{ydstogo}), the current down number (\textit{down1}, \textit{down2}, \textit{down3}, and \textit{down4}), a dummy indicating whether the formation is shotgun (\textit{shotgun}), a dummy indicating whether the play is a no-huddle play (\textit{no-huddle}), the difference in the intermediate score (own score minus the opponent's score) (\textit{scorediff}), a dummy indicating whether the current play is a goal-to-go play (\textit{goaltogo}), and a dummy indicating whether the team is starting within 10 yards of their own end zone (\textit{yardline90}). Table \ref{tab:nfl_descriptives} summarises the covariates and displays corresponding descriptive statistics (for the full data set).
\begin{table}[h] \centering \caption{Descriptive statistics of the covariates.} \label{tab:nfl_descriptives} \scalebox{0.8}{ \begin{tabular}{@{\extracolsep{5pt}}lcccc} \\[-1.8ex]\hline \hline \\[-1.8ex] & \multicolumn{1}{c}{mean} & \multicolumn{1}{c}{st.\ dev.} & \multicolumn{1}{c}{min.} & \multicolumn{1}{c}{max.} \\ \hline \\[-1.8ex] \textit{pass} (response) & 0.584 & 0.493 & 0 & 1 \\ \textit{home} & 0.503 & 0.500 & 0 & 1 \\ \textit{ydstogo} & 8.634 & 3.931 & 1 & 50 \\ \textit{down1} & 0.443 & 0.497 & 0 & 1 \\ \textit{down2} & 0.333 & 0.471 & 0 & 1 \\ \textit{down3} & 0.209 & 0.407 & 0 & 1 \\ \textit{down4} & 0.015 & 0.121 & 0 & 1 \\ \textit{shotgun} & 0.525 & 0.499 & 0 & 1 \\ \textit{no-huddle} & 0.087 & 0.282 & 0 & 1 \\ \textit{scorediff} & $-$1.458 & 10.84 & $-$59 & 59 \\ \textit{goaltogo} & 0.057 & 0.232 & 0 & 1 \\ \textit{yardline90} & 0.033 & 0.178 & 0 & 1 \\ \hline \\[-1.8ex] \end{tabular}} \end{table} To investigate how the play calling varies with different downs and the shotgun formation, Figure \ref{fig:down_shotgun} shows the empirical proportions for a pass found in the data, separated for the different downs and the shotgun formation. As indicated by the figure, a pass becomes more likely with increasing number of downs, and there is a substantial increase in passes observed if the team is in shotgun formation. However, whether a run or a pass is called is also likely to depend on the yards to go for a first down, which is shown in Figure \ref{fig:scorediff}, indicating that a pass becomes more likely the more yards are needed for a first down. The colours in Figure \ref{fig:scorediff} indicate the (categorised) score difference, suggesting that a pass becomes more likely if teams are trailing. 
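The empirical proportions underlying Figure \ref{fig:down_shotgun} amount to group-wise means of the binary pass indicator. A minimal pandas sketch illustrates the computation; the rows are invented and the column names need not match those of the Kaggle data.

```python
import pandas as pd

# Invented play-by-play rows; real column names in the data may differ.
df = pd.DataFrame({
    "down":    [1, 1, 2, 3, 3, 4, 2, 1],
    "shotgun": [0, 1, 1, 1, 0, 0, 0, 1],
    "pass":    [0, 1, 1, 1, 0, 1, 0, 1],
})

# Empirical pass proportions by down and by formation, as in the figure.
by_down = df.groupby("down")["pass"].mean()
by_formation = df.groupby("shotgun")["pass"].mean()
```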
In addition to the covariates potentially affecting the decision to call a pass or a run, one example time series from the data set, corresponding to the play calls observed for the New Orleans Saints in the match against the New York Giants played in November 2015, is shown in Figure \ref{fig:timeseries}. With 101 points scored in total, this match is one of the highest-scoring NFL games. The plays shown in the figure underline that there are periods with a fairly high number of passing plays (e.g.\ around play 20), and those where more runs are called (e.g.\ around play 30). \begin{figure} \centering \includegraphics[scale = 0.75]{figure_down_shotgun.pdf} \caption{Empirical proportions for a pass found in the data for different downs and the shotgun formation.} \label{fig:down_shotgun} \end{figure} \begin{figure} \centering \includegraphics[scale = 0.75]{figure_scorediff.pdf} \caption{Empirical proportions for a pass found in the data for the different yards to go for a first down. Colours indicate the (categorised) score difference. The proportion for a pass for 10 yards to go is relatively low, since most of these observations correspond to a first down, where a run is more likely.
Observations with more than 25 yards to go are excluded (the number of observations for each of these categories is less than 100).} \label{fig:scorediff} \end{figure} \begin{figure} \centering \includegraphics[scale = 0.9]{figure_data2.pdf} \caption{Example time series found in the data: the play calls of the New Orleans Saints observed for the match against the New York Giants played on November 1, 2015.} \label{fig:timeseries}\vspace*{-9pt} \end{figure} \section{Modelling and forecasting play calls}\label{chap:methods} To account for the periods of passes and runs as indicated by Figure \ref{fig:timeseries}, HMMs are considered for modelling and forecasting play calls. The underlying states can be interpreted as the team's current propensity to make a pass (as opposed to a run). An HMM involves two components, namely an observed state-dependent process and an unobserved Markov chain with $N$ states, assuming that the observations are generated by one of $N$ pre-specified state-dependent distributions.
The dependence structure of the HMM considered is shown in Figure \ref{fig:HMM}. Here, the observed time series are the play calls $\{y_{m,p}\}_{p=1,\ldots,P_m}$, which are denoted from now on by $y_p$ for notational simplicity. The unobserved state process, modelled by an $N$-state Markov chain, is denoted by $\{s_p\}_{p=1,\ldots,P_m}$. For the state transitions, a transition probability matrix (t.p.m.) $\boldsymbol{\Gamma} = (\gamma_{ij})$ is defined, with $\gamma_{ij}=\Pr(s_p = j \,|\, s_{p-1}=i)$, i.e.\ the probability of switching from state $i$ at play $p-1$ to state $j$ at play $p$. To complete the model formulation of an HMM, the number of states $N$ and the class of the state-dependent distributions have to be selected. Since the play calls are binary, the Bernoulli distribution is chosen here. The corresponding probability of the observation given state $i$, i.e.\ $f(y_p \,|\, s_p = i)$, is contained in the $i$-th diagonal element of the $N \times N$ diagonal matrix $\mathbf{P}(y_{p})$. Since assuming a team to start in its stationary distribution at the beginning of an American football match is fairly unrealistic, we estimate the initial distribution $\boldsymbol{\delta}= \big(\Pr (s_{1} = 1),\ldots,\Pr (s_{1} = N) \big)$. To include the covariates introduced above, which may lead to state-switching, we allow the transition probabilities $\gamma_{ij}$ to depend on covariates at play $p$. This is done by linking $\gamma_{ij}^{(p)}$ to covariates (denoted by $x_1^{(p)},\ldots,x_K^{(p)}$) using the multinomial logit link: $$ \gamma_{ij}^{(p)} = \dfrac{\exp(\eta_{ij}^{(p)})}{\sum_{k=1}^N \exp(\eta_{ik}^{(p)})} $$ with $$ \eta_{ij}^{(p)} = \begin{cases} \beta_0^{(ij)} + \sum_{l=1}^K \beta_l^{(ij)} x_l^{(p)} & \text{if }\, i\ne j; \\ 0 & \text{otherwise}.
\end{cases} $$ Since the transition probabilities depend on covariates, the t.p.m.\ as introduced above is not constant across time, and is hence denoted by $\boldsymbol{\Gamma}^{(p)}$. To formulate the likelihood, we apply the forward algorithm, which allows the likelihood to be calculated recursively at low computational cost \citep{zucchini2016hidden}. The likelihood for a single match $m$ is then given by: \begin{equation*} L = \boldsymbol{\delta} \mathbf{P}(y_{m,1}) \boldsymbol{\Gamma}^{(m,2)}\mathbf{P}(y_{m,2}) \dots \boldsymbol{\Gamma}^{({m,P_m})}\mathbf{P}(y_{m,P_m}) \mathbf{1} \end{equation*} with column vector $\mathbf{1}=(1,\ldots,1)' \in \mathbb{R}^N$ \citep{zucchini2016hidden}. To obtain the likelihood for the full data set, we assume independence between the individual matches, such that the likelihood is given by the product of the likelihoods for the individual matches: \begin{equation*} L = \prod_{m=1}^{M} \boldsymbol{\delta} \mathbf{P}(y_{m,1}) \boldsymbol{\Gamma}^{(m,2)}\mathbf{P}(y_{m,2}) \dots \boldsymbol{\Gamma}^{({m,P_m})}\mathbf{P}(y_{m,P_m}) \mathbf{1}, \end{equation*} where $M$ denotes the total number of matches. The model parameters are estimated by numerically maximising the likelihood using \texttt{nlm()} in R \citep{rcoreteam}. Subsequently, we predict play calls for the test data using the fitted models.
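To make the recursion concrete, the likelihood evaluation and the resulting one-step-ahead pass probability can be sketched in Python for the two-state Bernoulli case. All parameter values below are invented for illustration, not fitted estimates, and for long series the forward vector would additionally be rescaled at each step to avoid numerical underflow.

```python
import numpy as np

def tpm(x, beta):
    """Covariate-dependent 2x2 t.p.m. via the multinomial logit link.

    beta[i] holds the coefficients (beta_0, beta_1, ...) of the single
    off-diagonal predictor eta_{ij} in row i (N = 2 states).
    """
    gamma = np.empty((2, 2))
    for i, j in [(0, 1), (1, 0)]:
        eta = beta[i][0] + beta[i][1:] @ x
        gamma[i, j] = np.exp(eta) / (1.0 + np.exp(eta))
        gamma[i, i] = 1.0 - gamma[i, j]
    return gamma

def forward_likelihood(y, X, delta, p_pass, beta):
    """Forward algorithm for one match: likelihood and final forward vector.

    y: binary play calls; X: covariate rows per play; delta: initial
    distribution; p_pass: state-dependent Bernoulli pass probabilities.
    """
    def P(yp):  # diagonal matrix of state-dependent densities f(y_p | s_p = i)
        return np.diag(np.where(yp == 1, p_pass, 1.0 - p_pass))

    alpha = delta @ P(y[0])
    for p in range(1, len(y)):
        alpha = alpha @ tpm(X[p], beta) @ P(y[p])
    return alpha.sum(), alpha

def forecast_pass_prob(alpha, x_new, p_pass, beta):
    """One-step-ahead P(pass) as a ratio of likelihoods, given the final
    (unnormalised) forward vector of the match so far."""
    return (alpha @ tpm(x_new, beta) @ p_pass) / alpha.sum()

# Toy usage with invented data and parameter values.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=30)       # play calls of one match
X = rng.normal(size=(30, 1))          # one standardised covariate per play
delta = np.array([0.5, 0.5])
p_pass = np.array([0.3, 0.8])         # run-prone vs. pass-prone state
beta = [np.array([-2.0, 0.5]), np.array([-2.0, -0.5])]

L, alpha = forward_likelihood(y, X, delta, p_pass, beta)
prob_pass = forecast_pass_prob(alpha, np.array([0.0]), p_pass, beta)
```

Because the forward vector is carried along anyway, the one-step-ahead forecast comes essentially for free once the likelihood of the preceding plays has been evaluated.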
Specifically, to forecast play calls, the forecast distribution is considered, which for a single match is given as a ratio of likelihoods (dropping the subscript $m$ for notational simplicity): $$ \Pr(y_{P+1} = y \,|\, \mathbf{y}^{(P)}) = \dfrac{\boldsymbol{\delta} \mathbf{P}(y_{1}) \boldsymbol{\Gamma}^{({2})} \mathbf{P}(y_{2}) \cdots \boldsymbol{\Gamma}^{({P})} \mathbf{P}(y_{P}) \boldsymbol{\Gamma}^{(y)} \mathbf{P}(y) \mathbf{1}}{\boldsymbol{\delta} \mathbf{P}(y_{1}) \boldsymbol{\Gamma}^{({2})} \mathbf{P}(y_{2}) \cdots \boldsymbol{\Gamma}^{({P})} \mathbf{P}(y_{P}) \mathbf{1}}, $$ where $\boldsymbol{\Gamma}^{(y)}$ and $\mathbf{y}^{(P)}$ denote the t.p.m.\ as implied by the new covariates and the vector of all preceding observations of the match considered, respectively \citep{zucchini2016hidden}. The play which is most likely under the forecast distribution is then taken as the one-step-ahead forecast. To address heterogeneity between teams, the models are fitted to the data of each team individually instead of pooling the data of all teams. The corresponding results are presented in the next section. \begin{figure}[h!]
\centering \begin{tikzpicture} \node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (A) at (2, -5) {$s_{p-1}$}; \node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (A1) at (-0.5, -5) {...}; \node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (B) at (4.5, -5) {$s_{p}$}; \node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (C) at (7, -5) {$s_{p+1}$}; \node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (C1) at (9.5, -5) {...}; \node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (Y1) at (2, -2.5) {$y_{p-1}$}; \node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (Y2) at (4.5, -2.5) {$y_{p}$}; \node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (Y3) at (7, -2.5) {$y_{p+1}$}; \draw[-{Latex[scale=2]}] (A)--(B); \draw[-{Latex[scale=2]}] (B)--(C); \draw[-{Latex[scale=2]}] (A1)--(A); \draw[-{Latex[scale=2]}] (C)--(C1); \draw[-{Latex[scale=2]}] (A)--(Y1); \draw[-{Latex[scale=2]}] (B)--(Y2); \draw[-{Latex[scale=2]}] (C)--(Y3); \end{tikzpicture} \caption{Dependence structure of the HMM considered. Each observation $y_{p}$ is assumed to be generated by one of $N$ distributions according to the state process $s_{p}$, which serves as a proxy for the team's current propensity to make a pass (as opposed to a run).} \label{fig:HMM} \end{figure} \section{Results}\label{chap:results} Before presenting the results on the prediction of play calls, the number of states $N$ and the covariates have to be selected. As the number of parameters (due to the inclusion of covariates) increases quickly compared to the number of observations per team, we select $N=2$ states here to avoid numerical instability. We apply a forward selection of the covariates described in Section \ref{chap:data} based on the AIC.
In addition, we include several interactions between the covariates, such as an interaction between \textit{ydstogo} and \textit{scorediff}, as already indicated in Figure \ref{fig:scorediff}. Based on further exploratory data analysis, the following additional interaction terms are considered: interactions between the different downs and \textit{ydstogo}, between \textit{shotgun} and \textit{ydstogo}, between \textit{no-huddle} and \textit{scorediff}, and between \textit{no-huddle} and \textit{shotgun}. The AIC-based forward covariate selection is then applied for each team individually, with the selected covariates differing slightly across teams. The play call forecasts are evaluated by the prediction accuracy (i.e.\ the proportion of correct predictions), the precision (i.e.\ the proportion of predicted runs/passes that were actually correct) and the recall (i.e.\ the proportion of actual runs/passes that were identified correctly). The weighted average of the prediction accuracy over all teams is obtained as 0.715. This is a substantial improvement compared to existing studies that were also based on play-by-play data only (i.e.\ without including information on the players on the field). Moreover, the prediction accuracy obtained here is only slightly lower than the ones reported by \citet{leepredicting} and \citet{joashpredicting} (which are about 75\%), notably \textit{without} taking into account information about the players on the field. The prediction accuracy for the individual teams is shown in Figure \ref{fig:predteams}, indicating that the lowest and highest prediction accuracies are obtained for the Seattle Seahawks (0.602) and the New England Patriots (0.779), respectively.
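These three measures reduce to simple counts over the predicted and actual play calls; a short numpy sketch with invented toy labels makes the definitions concrete.

```python
import numpy as np

def play_call_metrics(y_true, y_pred):
    """Overall accuracy, plus precision and recall for the pass class (1)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))  # correctly predicted passes
    accuracy = np.mean(y_pred == y_true)
    precision = tp / np.sum(y_pred == 1)  # predicted passes that were passes
    recall = tp / np.sum(y_true == 1)     # actual passes that were predicted
    return accuracy, precision, recall

# Invented toy labels: 1 = pass, 0 = run.
acc, prec, rec = play_call_metrics([1, 0, 1, 1, 0, 1], [1, 0, 0, 1, 1, 1])
```

The run-class precision and recall quoted below are obtained analogously, with the roles of the labels 0 and 1 exchanged.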
In addition, the precision rates for a run range from 0.532 (Green Bay Packers) to 0.763 (Houston Texans), which can be interpreted as follows:\ when our model predicts a run for the Houston Texans (Green Bay Packers), it is correct in about 76.3\% (53.2\%) of all predicted runs. The recall rates for a run range from 0.324 (Baltimore Ravens) to 0.886 (Los Angeles Rams) --- in other words, our model correctly predicts 88.6\% of all runs for the Los Angeles Rams. For passing plays, precision and recall range from 0.559 (Seattle Seahawks) to 0.9 (Los Angeles Rams), and from 0.664 (Los Angeles Rams) to 0.922 (Pittsburgh Steelers), respectively. These summary statistics on the predicted play calls reveal that there are substantial differences in the predictive power with regard to the individual teams. Section \ref{chap:discussion} discusses practical implications following from these summary statistics. It took us on average 7 hours to conduct the AIC-based forward selection for the covariates on a standard desktop computer. However, using the fitted models to predict play calls takes less than a second for a single match, thus rendering the approach considered suitable for application in practice. \begin{figure}[!t] \centering \includegraphics[width=0.99\textwidth]{nfl_figure_pred_teams.pdf} \caption{Prediction accuracy for the individual teams. The number of out-of-sample observations (i.e.\ of predicted plays) is shown at the top of the bars.} \label{fig:predteams}\vspace*{-9pt} \end{figure} \section{Discussion}\label{chap:discussion} The use of HMMs to predict play calls in the NFL indicates that the accuracy of the predictions is increased -- compared to similar previous studies -- by accounting for the time series structure of the data. We split the data into a training set (seasons 2009--2017) and a test set (season 2018), and fitted HMMs to the (training) data of all teams individually, which yields 71.5\% correctly predicted out-of-sample play calls.
The prediction accuracy for the individual teams ranges from 60.2\% to 77.9\%, with the highest prediction accuracy obtained for the New England Patriots (see Figure \ref{fig:predteams}). Practitioners have to take into account the variation in the prediction accuracy across teams and plays. For example, if a pass is predicted for the Los Angeles Rams, it is fairly likely that the actual play will indeed be a pass (according to our model), since the corresponding precision is obtained as 90\%. On the other hand, if a pass is predicted for the Seattle Seahawks, this forecast has to be treated with caution, as the precision is obtained as 55.9\%. A further aspect practitioners have to consider is the cost of an incorrect decision. For example, if teams want to avoid anticipating a pass when the actual play of the opponent's offense is a run, then coaches should carefully consider the corresponding precision rates. Since the models presented here provide probabilistic forecasts and not only binary classifications, coaches could consult the forecasts only if the predicted probability exceeds a chosen threshold. In any case, practitioners should not regard these models as a tool which delivers defense adjustments for each play automatically, but rather as an additional help to make better defense and offense plays, respectively. Further research could focus on including additional covariates to improve the predictive power, such as the personnel of the team, i.e.\ the information on how many running backs/fullbacks, tight ends and wide receivers are on the field. In addition, the current strength of the team is not captured yet. This could be quantified by, for instance, the player ratings provided by the video game Madden, as was done by \citet{leepredicting} and \citet{joashpredicting}. However, it is at least questionable whether information on players can indeed be used on the field in practice, since players are substituted fairly frequently during a match.
Finally, dynamically updating the model throughout the 2018 season, rather than using the model fitted on data up to season 2017 for all out-of-sample predictions, could further improve the predictive power. \newpage \bibliographystyle{apalike}
\setcounter{tocdepth}{3} \begingroup \let\cleardoublepage\clearpage \tableofcontents \endgroup \markboth{}{} \section{Introduction} Black hole perturbation theory has a long and rich history, dating back at least as far as Regge and Wheeler's study of odd-parity perturbations of Schwarzschild spacetime in the late 1950s \cite{Regge:1957td}. This was followed up in 1970 by Zerilli's study of even-parity perturbations \cite{Zerilli:1970se,Zerilli:1971wd}. Soon afterwards, Vishveshwara \cite{Vishveshwara:1970zz} identified quasinormal modes in perturbations of Schwarzschild spacetime, Press \cite{Press:1971wr} studied the associated quasinormal mode frequencies, and Chandrasekhar and Detweiler \cite{Chandrasekhar:1975zza} numerically computed the frequencies. Teukolsky's success in deriving decoupled and separable equations for perturbations of Kerr spacetime \cite{Teukolsky:1972my,Teukolsky:1973ha} paved the way for similar progress in the Kerr case. The idea of a self-force has an even longer history, having been studied by Dirac in 1938 in his relativistic generalization of the Abraham–Lorentz self-force to the context of an electric charge undergoing acceleration in flat spacetime \cite{Dirac:1938nz}. In the 1960s this was extended by DeWitt and Brehme to the curved spacetime case \cite{DeWitt:1960fc}. The gravitational self-force acting on a point mass was studied in the late 1990s by Mino, Sasaki and Tanaka \cite{Mino:1996nk} and by Quinn and Wald \cite{Quinn:1996am}, leading to the MiSaTaQuWa equation that is named after the authors of those first papers. Subsequent work has put gravitational self-force theory on a very strong theoretical footing \cite{Gralla:2008fg,Pound:2009sm,Pound:2010wa} and has extended the formalism to second order in perturbation theory \cite{Rosenthal:2006iy,Detweiler:2011tt,Pound:2012nt,Gralla:2012db}.
The last 20 years have seen increasingly intense focus on the study of gravitational self-force in perturbations of black hole spacetimes. This has been motivated to a large extent by the European Space Agency's LISA mission, which is scheduled for launch in the 2030s \cite{LISA} and which will observe gravitational waves in the millihertz frequency band. One of the key sources for LISA will be extreme-mass-ratio inspirals (EMRIs), binary systems consisting of a compact solar-mass object orbiting a massive black hole. The presence of a small parameter (the mass ratio, which is expected to be in the region of $10^{-6}$) makes black hole perturbation theory an ideal tool for the development of theoretical models of the gravitational waveforms from EMRIs. Over the several-year timescale that the LISA mission is expected to run, the smaller body in an EMRI will execute $\sim 10^4$--$10^5$ intricate orbits in the strong-field regime around the central black hole, acting as a precise probe and enabling high-precision measurements of the black hole's parameters, tests of its Kerr nature, and tests of general relativity. Radiation reaction will cause the orbit to significantly evolve and possibly plunge into the black hole in that time, meaning that self-force effects will be important to include in waveform models. Indeed, in order to extract the maximum information from the observation of EMRIs by LISA it has been established that it will be necessary to incorporate information at second order in perturbation theory by computing the second-order gravitational self-force \cite{Hinderer:2008dm,Isoyama:2012bx,Burko:2013cca}. Aside from EMRIs, gravitational self-force is also potentially highly accurate for intermediate-mass-ratio inspirals (IMRIs)~\cite{vandeMeent:2020xgc}, in which the mass ratio may be as large as $\sim10^{-2}$.
This makes black hole perturbation theory and self-force also relevant for the current generation of ground-based gravitational wave detectors including LIGO \cite{LIGO}, Virgo \cite{Virgo} and Kagra \cite{Kagra}. There are already numerous reviews of these topics in the literature. The classic text by Chandrasekhar~\cite{Chandrasekhar:1985kt} provides a comprehensive introduction to black hole physics, linear black hole perturbation theory, and geodesic motion in black hole spacetimes. Ref.~\cite{Sasaki:2003xr} reviews linear black hole perturbation theory with an emphasis on analytical post-Newtonian expansions of the perturbation equations. Ref.~\cite{Berti:2009kk} provides a thorough introduction to quasinormal modes of black holes. Ref.~\cite{Barack:2018yvs} offers a broad introduction to self-force calculations for non-experts, including a survey of concrete physical results through 2018. Refs.~\cite{Poisson:2011nh,Pound:2015tma} cover the foundations of self-force theory, and Ref.~\cite{Harte:2014wya} provides a complementary view of the foundations from a fully nonlinear perspective. Finally, Refs.~\cite{Barack:2009ux,Wardell:2015kea} provide detailed introductions to methods of computing the self-force. Our aim is to complement rather than reiterate these existing reviews. We keep our description of self-force theory brief, only summarizing the key ideas and methods, and we forgo a survey of physical results. Instead, we focus on detailing the main perturbative methods required to model waveforms from small-mass-ratio binaries, leading ultimately to a multiscale expansion of the Einstein equations with a small-body source. At the same time, we keep much of the material sufficiently general to apply to other scenarios of interest. Our aim is also not to provide detailed descriptions of the numerical approaches to solving the many equations detailed in this review. 
Open source codes implementing state-of-the-art numerical algorithms for solving the equations of black hole perturbation theory and self-force are available through the Black Hole Perturbation Toolkit \cite{BHPToolkit}. The Black Hole Perturbation Toolkit also acts as a repository for collating data (typically in the form of numerical tables or analytical post-Newtonian series expansions) produced by the research community. Our discussion is divided into three distinct parts. Sections 2 and 3 briefly introduce relevant background material on perturbation theory in general relativity and the Kerr spacetime. Sections 4, 5, and 6 review three disjoint topics: black hole perturbation theory; geodesics and accelerated orbits in Kerr spacetime; and the foundations of the ``local problem'' in self-force theory. These three sections are written to be largely independent of one another, and they can be read in any order. Finally, in Section 7 we bring together all three topics in a description of black hole perturbation theory with a (skeletal) small-body source, focusing on the multiscale formulation. The multiscale expansion of the Einstein equation for generic (nonresonant) orbits in Kerr, and the post-adiabatic waveform-generation framework that comes along with it, appears here for the first time. \section{Perturbation theory in General Relativity} The overarching framework for our review is perturbation theory in general relativity. In self-force calculations, this is typically applied to the specific case of a small object in the spacetime of a Kerr black hole, and in much of the review we specialize to that scenario. But to allow for generality in some sections, we first consider the more generic case of smooth perturbations of an arbitrary vacuum spacetime. 
We assume the metric can be expanded in powers of a small parameter $\epsilon$, \begin{equation} g^{\rm exact}_{\mu\nu} = g_{\mu\nu} + \epsilon h^{(1)}_{\mu\nu} + \epsilon^2 h^{(2)}_{\mu\nu} + O(\epsilon^3),\label{g expansion} \end{equation} where $g_{\mu\nu}$ is a vacuum metric, and that the stress-energy can be similarly expanded as \begin{equation} T_{\mu\nu} = \epsilon T^{(1)}_{\mu\nu} + \epsilon^2 T^{(2)}_{\mu\nu} + O(\epsilon^3).\label{T expansion} \end{equation} For later convenience, we define the total metric perturbation $h_{\mu\nu}=\sum_{n>0}\epsilon^n h^{(n)}_{\mu\nu}$. We also warn the reader that we will later treat $\epsilon$ as a formal counting parameter that can be set equal to 1. To expand the Einstein equations $G_{\mu\nu}[g+h]=8\pi T_{\mu\nu}$ in powers of $\epsilon$, we first note that the Einstein tensor of a metric $g_{\mu\nu}+h_{\mu\nu}$ can be expanded in powers of the exact perturbation $h_{\mu\nu}$: $G_{\mu\nu}[g+h] = G_{\mu\nu}[g]+G^{(1)}_{\mu\nu}[h] + G^{(2)}_{\mu\nu}[h,h] + O(|h|^3)$. The quantities $G^{(n)}_{\mu\nu}$ are easily obtained from the exact Riemann tensor (see, e.g., Ch. 7.5 of Ref.~\cite{Wald:1984rg}). 
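The order-by-order structure of this expansion can be seen in miniature with a computer algebra sketch. The following toy model (a scalar operator $G[u] = u'' + u^2$, not the Einstein tensor itself; the names are purely illustrative) shows how a quadratic nonlinearity generates, at each order in $\epsilon$, the same linear operator acting on the $n$th-order field, with quadratic combinations of lower-order fields as sources:

```python
import sympy as sp

eps, x = sp.symbols('epsilon x')
u0, u1, u2 = (sp.Function(f'u{n}')(x) for n in range(3))

def G(u):
    # toy quadratically nonlinear operator standing in for G_{mu nu}[g + h]
    return sp.diff(u, x, 2) + u*u

expansion = sp.expand(G(u0 + eps*u1 + eps**2*u2))

# O(eps): the linearized operator acting on the first-order field
order1 = expansion.coeff(eps, 1)
assert sp.simplify(order1 - (sp.diff(u1, x, 2) + 2*u0*u1)) == 0

# O(eps^2): the SAME linearized operator acting on u2, now sourced by
# the quadratic term built from u1 (the analogue of -G^(2)[h^(1), h^(1)])
order2 = expansion.coeff(eps, 2)
assert sp.simplify(order2 - (sp.diff(u2, x, 2) + 2*u0*u2 + u1**2)) == 0
```

The same collection of powers of $\epsilon$, applied to the full Einstein tensor, produces the perturbative field equations discussed below.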
For a vacuum background, the first two terms are \begin{align} G^{(1)}_{\mu\nu}[h] &= \left(g_\mu{}^\alpha g_\nu{}^\beta-\tfrac{1}{2}g_{\mu\nu}g^{\alpha\beta}\right)R^{(1)}_{\alpha\beta},\label{Einstein1}\\ G^{(2)}_{\mu\nu}[h,h] &= \left(g_\mu{}^\alpha g_\nu{}^\beta-\tfrac{1}{2}g_{\mu\nu}g^{\alpha\beta}\right)R^{(2)}_{\alpha\beta} -\tfrac{1}{2}\left(h_{\mu\nu}g^{\alpha\beta}-g_{\mu\nu}h^{\alpha\beta}\right)R^{(1)}_{\alpha\beta},\label{Einstein2} \end{align} where the linear and quadratic terms in the Ricci tensor are \begin{align} R^{(1)}_{\alpha\beta}[h] &= -\tfrac{1}{2}\left(\Box h_{\alpha\beta}+2R_\alpha{}^\mu{}_\beta{}^\nu h_{\mu\nu}-2\bar h_{\mu(\alpha}{}^{;\mu}{}_{\beta)}\right),\\ R^{(2)}_{\alpha\beta}[h,h] &= \tfrac{1}{4}h^{\mu\nu}{}_{;\alpha}h_{\mu\nu;\beta} + \tfrac{1}{2}h^{\mu}{}_{\beta}{}^{;\nu}\left(h_{\mu\alpha;\nu} - h_{\nu\alpha;\mu}\right) - \tfrac{1}{2}\bar h^{\mu\nu}{}_{;\nu}\left(2h_{\mu(\alpha;\beta)}-h_{\alpha\beta;\mu}\right)\nonumber\\ &\quad -\tfrac{1}{2} h^{\mu\nu}\left(2h_{\mu(\alpha;\beta)\nu} - h_{\alpha\beta;\mu\nu} - h_{\mu\nu;\alpha\beta}\right). \end{align} Here we have defined the trace-reversed perturbation $\bar h_{\mu\nu}:=h_{\mu\nu}-\tfrac{1}{2}g_{\mu\nu}g^{\alpha\beta}h_{\alpha\beta}$ and the d'Alembertian $\Box:=g^{\mu\nu}\nabla_{\!\mu}\nabla_{\!\nu}$. A semicolon and $\nabla$ both denote the covariant derivative compatible with $g_{\mu\nu}$. 
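As a quick check on the trace-reversal operation just defined, one can verify numerically that it is an involution in four dimensions, $\bar{\bar h}_{\mu\nu} = h_{\mu\nu}$, since the trace of $\bar h_{\mu\nu}$ is minus that of $h_{\mu\nu}$. The sketch below uses a flat metric and a random symmetric matrix purely as stand-ins for the background and the perturbation:

```python
import numpy as np

rng = np.random.default_rng(42)
g = np.diag([-1.0, 1.0, 1.0, 1.0])   # flat metric, a stand-in for g_{mu nu}
g_inv = np.linalg.inv(g)

A = rng.normal(size=(4, 4))
h = A + A.T                          # generic symmetric perturbation

def trace_reverse(h):
    """hbar_{mu nu} = h_{mu nu} - (1/2) g_{mu nu} g^{ab} h_{ab}."""
    return h - 0.5*g*np.einsum('ab,ab->', g_inv, h)

hbar = trace_reverse(h)
# the trace flips sign, and trace reversal is an involution in four dimensions
assert np.isclose(np.einsum('ab,ab->', g_inv, hbar), -np.einsum('ab,ab->', g_inv, h))
assert np.allclose(trace_reverse(hbar), h)
```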
So, substituting the expansions~\eqref{g expansion} and \eqref{T expansion} into the Einstein equations and equating powers of $\epsilon$, we obtain \begin{align} G^{(1)}_{\mu\nu}[h^{(1)}] &= 8\pi T^{(1)}_{\mu\nu},\label{EFE1}\\ G^{(1)}_{\mu\nu}[h^{(2)}] &= 8\pi T^{(2)}_{\mu\nu} - G^{(2)}_{\mu\nu}[h^{(1)},h^{(1)}].\label{EFE2} \end{align} This perturbative expansion comes with the freedom to perform gauge transformations \begin{align} h^{(1)}_{\mu\nu} &\to h^{(1)}_{\mu\nu} + \pounds_{\xi_{(1)}}g_{\mu\nu},\\ h^{(2)}_{\mu\nu} &\to h^{(2)}_{\mu\nu} + \pounds_{\xi_{(2)}}g_{\mu\nu} + \tfrac{1}{2}\pounds^2_{\xi_{(1)}}g_{\mu\nu} +\pounds_{\xi_{(1)}}h^{(1)}_{\mu\nu}, \end{align} where $\pounds_{\xi}$ is a Lie derivative, and $\xi^\alpha_{(n)}$ are freely chosen vector fields. In self-force theory, this freedom is commonly used to impose the Lorenz gauge condition, \begin{equation} \label{eq:LorenzGauge} \nabla_\alpha\bar h^{\alpha\beta}=0, \end{equation} in which case the linearized Einstein tensor simplifies to \begin{equation} \label{eq:LorenzField} G^{(1)}_{\mu\nu}[h] = -\tfrac{1}{2}\left(\Box \bar h_{\mu\nu}+2R_\mu{}^\alpha{}_\nu{}^\beta\bar h_{\alpha\beta}\right). \end{equation} A perturbed metric will come hand in hand with a perturbed equation of motion for objects in the spacetime: \begin{equation}\label{perturbed geodesic equation} \frac{D^2z^\mu}{d\tau^2} = f_{(0)}^\mu + \epsilon f_{(1)}^\mu + \epsilon^2 f_{(2)}^\mu + O(\epsilon^3). \end{equation} Here $z^\mu(\tau)$ is a perturbed worldline, $\tau$ is its proper time as measured in the background $g_{\mu\nu}$, $\frac{D^2 z^\mu}{d\tau^2} = \frac{dz^\nu}{d\tau}\nabla_{\nu}\frac{dz^\mu}{d\tau}:=a^\mu$ is its covariant acceleration with respect to $g_{\mu\nu}$, and $f_{(n)}^\mu$ is the $n$th-order covariant force (per unit mass) driving the acceleration. In our review, we will consider the general case including a zeroth-order force, but we will focus primarily on cases with $f^\mu_{(0)}=0$. 
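A useful consistency check on these formulas is the gauge invariance of the linearized Einstein tensor: substituting a pure-gauge perturbation $h_{\mu\nu} = \pounds_\xi g_{\mu\nu}$ into $G^{(1)}_{\mu\nu}$ must give zero. The sympy sketch below verifies this on a flat background, chosen so that the curvature term in $R^{(1)}_{\alpha\beta}$ drops out and $\pounds_\xi \eta_{\mu\nu} = 2\partial_{(\mu}\xi_{\nu)}$, with an arbitrary gauge vector (the component names are our own):

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
X = [t, x, y, z]
eta = sp.diag(-1, 1, 1, 1)           # flat background, so curvature terms vanish
eta_inv = eta.inv()

# arbitrary gauge vector field xi_mu(t, x, y, z)
xi = [sp.Function(f'xi{mu}')(*X) for mu in range(4)]

# pure-gauge perturbation h_{mu nu} = d_mu xi_nu + d_nu xi_mu
h = sp.Matrix(4, 4, lambda a, b: sp.diff(xi[b], X[a]) + sp.diff(xi[a], X[b]))
tr = sum(eta_inv[a, b]*h[a, b] for a in range(4) for b in range(4))
hbar = h - sp.Rational(1, 2)*eta*tr  # trace reverse

box = lambda f: sum(eta_inv[a, b]*sp.diff(f, X[a], X[b])
                    for a in range(4) for b in range(4))
div = lambda mu: sum(eta_inv[a, b]*sp.diff(hbar[b, mu], X[a])
                     for a in range(4) for b in range(4))

# flat-space form of the linearized Ricci tensor given in the text
for a in range(4):
    for b in range(4):
        R1ab = -sp.Rational(1, 2)*(box(h[a, b])
                                   - sp.diff(div(a), X[b]) - sp.diff(div(b), X[a]))
        assert sp.expand(R1ab) == 0   # pure gauge modes carry no linearized curvature
```

Along the way one also sees the Lorenz-gauge structure: for a pure gauge mode, $\partial^\mu \bar h_{\mu\nu} = \Box \xi_\nu$, so imposing the Lorenz condition amounts to a wave equation for the residual gauge vector.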
The forces $f^\mu_{(n)}$ will arise from (parts of) the metric perturbations $h^{(n)}_{\mu\nu}$ as well as from coupling of $g_{\mu\nu}$ to the matter that creates those perturbations. Here we have limited the treatment to first- and second-order perturbations, which are expected to be necessary and sufficient for modelling small-mass-ratio binaries. In some sections we will restrict the context to first (linearized) order. \section{Isolated, stationary black hole spacetimes} \label{sec:bh} In most of our review, we take the background spacetime to be that of an isolated, stationary black hole. In this section we provide an overview of the properties of these spacetimes. \subsection{Metric} \label{sec:metric} The Schwarzschild spacetime is a static, spherically symmetric solution of the vacuum Einstein equations representing a non-rotating black hole with mass $M$. It has a line element given by \begin{equation} \label{eq:SchwMetric} ds^2 = - f(r) dt^2 + f(r)^{-1} dr^2 + r^2 \big(d\theta^2 + \sin^2\theta d\phi^2\big), \end{equation} where $f(r) := 1 - \frac{2M}{r}$. The Schwarzschild spacetime may be generalized to allow the black hole to have a charge $Q$, resulting in the Reissner-Nordstr\"{o}m solution of the Einstein-Maxwell equations, with line element \begin{equation} \label{eq:RNMetric} ds^2 = - \left( 1-\frac{2M}{r} + \frac{Q^2}{r^2}\right) dt^2 + \left( 1-\frac{2M}{r} + \frac{Q^2}{r^2}\right)^{-1} dr^2 + r^2 \big(d\theta^2 + \sin^2\theta d\phi^2\big). \end{equation} The spacetime of a spinning black hole is given by the Kerr metric with angular momentum per unit mass $a$. 
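For the charged solution above, the horizon structure follows from the roots of the radial metric function. A short sympy check (assuming the sub-extremal case $Q < M$) confirms that the Reissner-Nordstr\"{o}m horizons sit at $r_\pm = M \pm \sqrt{M^2 - Q^2}$, reducing to the Schwarzschild horizon $r = 2M$ when the charge vanishes:

```python
import sympy as sp

r, M, Q = sp.symbols('r M Q', positive=True)

# radial metric function of the Reissner-Nordstrom line element
f_RN = 1 - 2*M/r + Q**2/r**2

# candidate horizon radii (sub-extremal case Q < M assumed)
r_plus = M + sp.sqrt(M**2 - Q**2)
r_minus = M - sp.sqrt(M**2 - Q**2)

# both radii are roots of f_RN
assert sp.simplify(f_RN.subs(r, r_plus)) == 0
assert sp.simplify(f_RN.subs(r, r_minus)) == 0

# Q -> 0 recovers the Schwarzschild horizon r = 2M
assert r_plus.subs(Q, 0) == 2*M
```

The analogous factorization, $\Delta = (r - r_+)(r - r_-)$, appears for the Kerr metric below.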
In Boyer-Lindquist coordinates, the Kerr line element is \begin{multline} \label{eq:KerrMetric} ds^2 = - \left[1-\frac{2Mr}{\Sigma}\right]\,dt^2 - \frac{4aMr\sin^2\theta}{\Sigma}\,dt\,d\phi + \frac{\Sigma}{\Delta}\,dr^2 \\ + \Sigma\,d\theta^2 + \left[\Delta+\frac{2Mr(r^2+a^2)} {\Sigma}\right] \sin^2\theta\, d\phi^2, \end{multline} where $\Sigma := r^2+a^2\cos^2\theta$ and $\Delta := r^2-2Mr+a^2 = (r-r_+)(r-r_-)$ with $r_\pm := M \pm \sqrt{M^2-a^2}$ the locations of the inner and outer horizons. As was the case with Schwarzschild spacetime, the Kerr spacetime may be generalized to allow the black hole to have a charge $Q$, giving the Kerr-Newman solution of the Einstein-Maxwell equations. In Boyer-Lindquist coordinates, the Kerr-Newman metric is: \begin{multline} \label{eq:KerrNewmanMetric} ds^2 = - \left[1-\frac{2Mr-Q^2}{\Sigma} \right]\,dt^2 - \frac{2(2Mr - Q^2) a \sin^2\theta}{\Sigma}\,dt\,d\phi + \frac{\Sigma}{\Delta+Q^2}\,dr^2\\ + \Sigma \,d\theta^2 + \left[\Delta+Q^2 + \frac{(2Mr-Q^2)(a^2+r^2)}{\Sigma}\right] \sin^2\theta\,d\phi^2. \end{multline} In astrophysical scenarios, a charged black hole will quickly be neutralized. For that reason, in later sections we will restrict our attention to the Kerr spacetime. We will also later use $Q$ to denote the Carter constant, associated with the Kerr metric's third, hidden symmetry discussed below. However, we include the charged black hole metrics here for completeness. \subsection{Null tetrads} \label{sec:null-tetrads} The black hole spacetimes above are all of Petrov type D and thus have two non-degenerate principal null directions. This gives us a natural way to define a complex null tetrad by having two of the tetrad legs aligned with the principal null directions. 
Choosing $l^\alpha := e^\alpha_{(1)}$ to align with the outward null direction and $n^\alpha := e^\alpha_{(2)}$ to align with the inward null direction, there is still residual freedom in the choice of scaling of each tetrad leg, and also in the relative orientation of the remaining two tetrad legs, $m^\alpha := e^\alpha_{(3)}$ and ${\bar{m}}^\alpha := e^\alpha_{(4)}$. The two most common choices in Kerr spacetime are Carter's canonical tetrad \cite{Carter:1987hk}\footnote{Carter's original tetrad had interchanged $l^\mu \leftrightarrow n^\mu$ and $m^\mu \leftrightarrow \bar{m}^\mu$. Carter also worked in different coordinates $(\tilde{t} = t - a \phi, r, q = a \cos \theta, \tilde{\phi}= \phi/a)$ which more fully reflect the inherent symmetries of Kerr. We deviate from that here and keep with the convention of having $l^\alpha$ point outwards and working in the more common Boyer-Lindquist coordinates.}, \begin{alignat}{4} l^\alpha &= \frac{1}{\sqrt{2\Delta \Sigma}}\Big[r^2+a^2,\Delta,0,a\Big], \quad & n^\alpha &= \frac{1}{\sqrt{2\Delta \Sigma}}\Big[r^2+a^2,-\Delta,0,a\Big], \nonumber \\ m^\alpha &= \frac{1}{\sqrt{2\Sigma}}\Big[i a \sin \theta,0,1,\frac{i}{\sin \theta}\Big], \quad & \bar{m}^\alpha &= \frac{1}{\sqrt{2\Sigma}}\Big[-i a \sin \theta,0,1,-\frac{i}{\sin \theta}\Big], \end{alignat} and the Kinnersley tetrad \cite{Kinnersley:1969zza}, which is related to Carter's canonical tetrad by a simple rescaling: $l^\alpha = \sqrt{\frac{\Delta}{2\Sigma}}\,l^\alpha_\mathrm{K}$, $n^\alpha = \sqrt{\frac{2\Sigma}{\Delta}}\,n^\alpha_\mathrm{K}$, $m^\alpha = \frac{\bar{\zeta}}{\sqrt{\Sigma}} m^\alpha_\mathrm{K}$ and ${\bar{m}}^\alpha = \frac{\zeta}{\sqrt{\Sigma}} {\bar{m}}^\alpha_\mathrm{K}$, where \begin{equation} \zeta := r-i a \cos \theta \end{equation} is an important quantity that we will encounter again later (note that $\Sigma = \zeta \bar{\zeta}$). 
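Carter's tetrad components can be checked directly against the Boyer-Lindquist metric. With the conventions adopted here ($l^\alpha n_\alpha = -1$, $m^\alpha \bar{m}_\alpha = 1$, and all other inner products vanishing), the following sympy sketch verifies the normalisation, and as an extra sanity test also the standard determinant identity $\det g = -\Sigma^2 \sin^2\theta$:

```python
import sympy as sp

r, th, a, M = sp.symbols('r theta a M', positive=True)
Sigma = r**2 + a**2*sp.cos(th)**2
Delta = r**2 - 2*M*r + a**2
s2 = sp.sin(th)**2

# Kerr metric in Boyer-Lindquist coordinates (t, r, theta, phi)
g = sp.Matrix([
    [-(1 - 2*M*r/Sigma), 0, 0, -2*a*M*r*s2/Sigma],
    [0, Sigma/Delta, 0, 0],
    [0, 0, Sigma, 0],
    [-2*a*M*r*s2/Sigma, 0, 0, (Delta + 2*M*r*(r**2 + a**2)/Sigma)*s2],
])

# Carter's canonical tetrad (contravariant components)
l = sp.Matrix([r**2 + a**2, Delta, 0, a])/sp.sqrt(2*Delta*Sigma)
n = sp.Matrix([r**2 + a**2, -Delta, 0, a])/sp.sqrt(2*Delta*Sigma)
m = sp.Matrix([sp.I*a*sp.sin(th), 0, 1, sp.I/sp.sin(th)])/sp.sqrt(2*Sigma)
mb = m.conjugate()   # all symbols are real, so only the factors of I flip sign

dot = lambda u, v: sp.simplify((u.T*g*v)[0, 0])

# normalisation l.n = -1 and m.mbar = 1; other inner products vanish
assert dot(l, n) == -1 and dot(m, mb) == 1
assert dot(l, l) == 0 and dot(n, n) == 0 and dot(l, m) == 0 and dot(n, m) == 0

# determinant identity det g = -Sigma^2 sin^2(theta)
assert sp.simplify(g.det() + Sigma**2*s2) == 0
```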
The Carter tetrad transforms as $l \leftrightarrow - n$, $m \leftrightarrow {\bar{m}}$ under $\{t,\phi\} \to \{-t, -\phi\}$. Although the Kinnersley tetrad formed a crucial part of Teukolsky's separability result for perturbations of the Weyl tensor \cite{Teukolsky:1972my}, it has two unfortunate features that make it less than ideal for elucidating the symmetry structure of Kerr spacetime: (i) it violates the $\{t, \phi\} \to \{-t,-\phi\}$ symmetry; and (ii) it destroys a symmetry in $\{r, \theta\}$. Carter's canonical tetrad does not suffer from either of these deficiencies and is slightly preferable from that point of view. Note, however, that all of the results that follow can be derived using either tetrad. \subsection{Symmetries} Much of the success in studying Kerr spacetime has arisen from the inherent symmetries it possesses. Two of these are associated with the existence of two Killing vectors, $\xi^\alpha$ and $\eta^\alpha$, which satisfy Killing's equation,\footnote{Note that the Killing vector $\delta^\alpha_\phi = \frac{1}{a}\eta^\alpha - a \delta^\alpha_t$ is often used in place of $\eta^\alpha$ when working in Boyer-Lindquist coordinates.} \begin{equation} \xi_{(\alpha;\beta)} = 0 = \eta_{(\alpha;\beta)}. \end{equation} In Kerr spacetime these are related to the timelike and axial symmetries, \begin{equation} \xi^\alpha = \delta_t^\alpha, \quad \eta^\alpha = \delta_{\tilde{\phi}}^\alpha = a (\delta_\phi^\alpha + a \delta_t^\alpha). \end{equation} The spacetime also admits a conformal Killing-Yano tensor, \begin{equation} f_{\alpha \beta} = (\zeta + \bar{\zeta}) n_{[\alpha} l_{\beta]} - (\zeta - \bar{\zeta}) {\bar{m}}_{[\alpha} m_{\beta]}, \end{equation} which satisfies \begin{equation} f_{\alpha(\beta;\gamma)} = g_{\beta\gamma} \xi_\alpha - g_{\alpha (\beta} \xi_{\gamma)}. \end{equation} Here, we have introduced the Killing spinor coefficient, $\zeta$, which we previously encountered as a coordinate expression in Sec.~\ref{sec:null-tetrads}. 
Its appearance here can be considered more fundamental, and does not depend on any particular coordinate choice. The divergence of this conformal Killing-Yano tensor is a Killing vector, \begin{equation} \xi_\alpha = \tfrac13 f_{\alpha \beta}{}^{;\beta}, \end{equation} and its Hodge dual, \begin{equation} {}^\star f_{\alpha \beta} = \tfrac12 \epsilon_{\alpha\beta}{}^{\gamma\delta} f_{\gamma\delta} = i (\zeta - \bar{\zeta}) n_{[\alpha} l_{\beta]} - i (\zeta + \bar{\zeta}) {\bar{m}}_{[\alpha} m_{\beta]}, \end{equation} is a Killing-Yano tensor satisfying \begin{equation} {}^\star f_{\alpha(\beta;\gamma)} = 0. \end{equation} The products of these Killing-Yano tensors generate two conformal Killing tensors, \begin{align} K_{\alpha \beta} = f_{\alpha \gamma} f_\beta{}^\gamma &= \tfrac12 (\zeta + \bar{\zeta})^2 n_{(\alpha} l_{\beta)} - \tfrac12 (\zeta - \bar{\zeta})^2 {\bar{m}}_{(\alpha} m_{\beta)}, \\ \overset{\star}{K}_{\alpha \beta} = f_{\alpha \gamma} \,{}^\star f_\beta{}^\gamma &= \tfrac12 i (\zeta^2 - \bar{\zeta}^2) (n_{(\alpha} l_{\beta)} + {\bar{m}}_{(\alpha} m_{\beta)}), \end{align} which satisfy \begin{equation} K_{(\alpha\beta;\gamma)} = g_{(\alpha\beta} K_{\gamma)}, \quad \overset{\star}{K}_{(\alpha\beta;\gamma)} = g_{(\alpha\beta} \overset{\star}{K}_{\gamma)}, \end{equation} where $K_\alpha = \tfrac16 (2 K_{\beta\alpha}{}^{;\beta} + K_{\beta}{}^{\beta}{}_{;\alpha})$ and $\overset{\star}{K}_\alpha = \tfrac16 (2 \overset{\star}{K}_{\beta\alpha}{}^{;\beta} + \overset{\star}{K}_{\beta}{}^{\beta}{}_{;\alpha})$. They also generate a Killing tensor, \begin{equation} \overset{\star\star}{K}_{\alpha \beta} = {}^\star f_{\alpha \gamma} \,{}^\star f_\beta{}^\gamma = - \tfrac12 (\zeta - \bar{\zeta})^2 n_{(\alpha} l_{\beta)} + \tfrac12 (\zeta + \bar{\zeta})^2 {\bar{m}}_{(\alpha} m_{\beta)}, \end{equation} satisfying \begin{equation} \overset{\star\star}{K}_{(\alpha\beta;\gamma)} = 0. 
\end{equation} This Killing tensor gives a relationship between the two Killing vectors, \begin{equation} \eta^\alpha = - \overset{\star\star}{K} {}^{\alpha \beta} \xi_\beta. \end{equation} \section{Black hole perturbation theory}\label{sec:black hole perturbation theory} We now consider perturbations of the isolated black hole spacetimes. We describe, in a unified notation, how to calculate metric perturbations in the most commonly used gauges: radiation gauges, Regge-Wheeler-Zerilli gauges, and the Lorenz gauge. Our focus is particularly on reconstruction methods, in which most or all of the metric perturbation is reconstructed from decoupled scalar variables. Since the left-hand sides of the perturbative Einstein equations~\eqref{EFE1} and \eqref{EFE2} are the same at every order, we specialize to the first-order case. We refer the reader to Refs.~\cite{Campanelli:1998jv,Brizuela:2009qd} for general discussions of second-order perturbation theory in Schwarzschild and Kerr spacetimes. \subsection{The Teukolsky formalism and radiation gauge} \label{sec:Teukolsky} Teukolsky \cite{Teukolsky:1972my} showed that the equations governing perturbations of rotating black hole spacetimes can be recast as decoupled equations. These equations further have the remarkable property of being separable, reducing the problem to the solution of a set of uncoupled ordinary differential equations. In the case of metric perturbations, Teukolsky's results yield solutions for the spin-weight $\pm2$ components of the perturbed Weyl tensor, but do not give a method for obtaining a corresponding metric perturbation. 
Subsequent works (and their equivalents for electromagnetic perturbations) \cite{Chrzanowski:1975wv,Kegeles:1979an,Wald:1978vm,Whiting:2005hr,Pound:2013faa,Stewart:1978tm,Green:2019nam,Hollands:2020vjg} derived a method for reconstructing a metric perturbation from a Hertz potential, which in turn can be obtained from the spin-weight $\pm2$ components of the Weyl tensor. \subsubsection{Geroch-Held-Penrose formalism} Our exposition makes use of the formalism of Geroch, Held and Penrose (GHP) \cite{Geroch:1973am}, which is a simplified and more explicitly covariant version of the Newman-Penrose (NP) \cite{Newman:1961qr} formalism originally used by Teukolsky. Here we provide a concise review of the key features of the formalism needed to understand metric perturbations of black hole spacetimes; see Refs.~\cite{Price:Thesis,Aksteiner:2014zyp,Penrose:1987uia} for more thorough treatments. The GHP formalism prioritises the concepts of spin- and boost-weights; within the formalism, everything has a well-defined type $\{p,q\}$, which is related to its spin-weight $s=(p-q)/2$ and its boost-weight $b=(p+q)/2$. Only objects of the same type can be added together, providing a useful consistency check on any equations. Multiplication of two quantities yields a resulting object with type given by the sum of the types of its constituents. We first introduce a null tetrad $\{e_{(a)}^\alpha\} = \{l^\alpha, n^\alpha, m^\alpha, {\bar{m}}^\alpha\}$ with normalisation \begin{equation} l^\alpha n_\alpha = -1, \quad m^\alpha {\bar{m}}_\alpha = 1, \end{equation} and with all other inner products vanishing. In terms of the tetrad vectors, the metric may be written as \begin{equation} g_{\alpha\beta} = -2 l_{(\alpha} n_{\beta)} + 2 m_{(\alpha} {\bar{m}}_{\beta)}. 
\end{equation} There are three discrete transformations that reflect the inherent symmetry in the GHP formalism, corresponding to simultaneous interchange of the tetrad vectors: \begin{enumerate} \item $'$: $l^\alpha \leftrightarrow n^\alpha$ and $m^\alpha \leftrightarrow \bar{m}^\alpha$, $\{p,q\} \rightarrow \{-p, -q\}$; \item $\bar{\phantom{m}}$: $m^\alpha \leftrightarrow \bar{m}^\alpha$, $\{p,q\} \rightarrow \{q, p\}$; \item $\ast$: $l^\alpha \rightarrow m^\alpha$, $n^\alpha \rightarrow -\bar{m}^\alpha$, $m^\alpha \rightarrow -l^\alpha$, ${\bar{m}}^\alpha \rightarrow n^\alpha$. \end{enumerate} We next introduce the {\it spin coefficients}, defined to be the 12 directional derivatives of the tetrad vectors. Of these, the 8 with well-defined GHP type are \begin{alignat}{2} \kappa = - l^\mu m^\nu \nabla_\mu l_\nu, & \quad \sigma = - m^\mu m^\nu \nabla_\mu l_\nu, &\quad \rho = -{\bar{m}}^\mu m^\nu \nabla_\mu l_\nu, & \quad \tau = - n^\mu m^\nu \nabla_\mu l_\nu, \end{alignat} along with their primed variants, $\kappa'$, $\sigma'$, $\rho'$ and $\tau'$. These have GHP type given by \begin{alignat*}{6} \kappa : \{3,1\}, & \quad \sigma : \{3,-1\}, & \quad \rho : \{1,1\},\quad \tau : \{1,-1\}. 
\end{alignat*} The remaining 4 spin coefficients are used to define the GHP derivative operators (that depend on the GHP type of the object on which they are acting), \begin{alignat}{3} \hbox{\ec\char'336} &:= (l^\alpha \nabla_\alpha - p \epsilon - q \bar{\epsilon}), & \quad \hbox{\ec\char'336}' &:= (n^\alpha \nabla_\alpha + p \epsilon' + q \bar{\epsilon}'), \nonumber \\ \hbox{\ec\char'360} &:= (m^\alpha \nabla_\alpha - p \beta + q \bar{\beta}'),& \quad \hbox{\ec\char'360}' &:= (\bar{m}^\alpha \nabla_\alpha + p \beta' - q\bar{\beta}), \end{alignat} where \begin{align} \beta &= \frac{1}{2} (m^\mu {\bar{m}}^\nu \nabla_\mu m_\nu-m^\mu n^\nu \nabla_\mu l_\nu), & \quad \epsilon &= \frac{1}{2} (l^\mu {\bar{m}}^\nu \nabla_\mu m_\nu-l^\mu n^\nu \nabla_\mu l_\nu), \end{align} along with their primed variants, $\beta'$ and $\epsilon'$. These spin coefficients have no well-defined GHP type and never appear explicitly in covariant equations. The action of a GHP derivative causes the type to change by an amount $\{p,q\}\to\{p+r,q+s\}$ where $\{r,s\}$ for each of the operators is given by \begin{alignat*}{4} \hbox{\ec\char'336} : \{1,1\}, & \quad \hbox{\ec\char'336}' : \{-1,-1\}, & \quad \hbox{\ec\char'360} : \{1,-1\},\quad \hbox{\ec\char'360}' : \{-1,1\}. \end{alignat*} In this sense we interpret $\hbox{\ec\char'336}$ and $\hbox{\ec\char'336}'$ as boost raising and lowering operators, respectively, while we interpret $\hbox{\ec\char'360}$ and $\hbox{\ec\char'360}'$ as spin raising and lowering operators, respectively. 
The adjoints of the GHP operators are given by \begin{alignat}{3} \hbox{\ec\char'336}^\dag &:= -(\hbox{\ec\char'336}- \rho - \bar{\rho}), & \quad \hbox{\ec\char'336}'^\dag &:= -(\hbox{\ec\char'336}'- \rho' - \bar{\rho}'),\nonumber \\ \hbox{\ec\char'360}^\dag &:= -(\hbox{\ec\char'360}- \tau -\bar{\tau}'),& \quad \hbox{\ec\char'360}'^\dag &:= -(\hbox{\ec\char'360}'- \tau' - \bar{\tau}), \end{alignat} or, alternatively, \begin{equation} \mathscr{D}^\dag = - (\Psi_2 \bar{\Psi}_2)^{1/3} \mathscr{D} (\Psi_2 \bar{\Psi}_2)^{-1/3}, \quad \mathscr{D}\in\{\hbox{\ec\char'336}, \hbox{\ec\char'336}', \hbox{\ec\char'360}, \hbox{\ec\char'360}'\}. \end{equation} In vacuum spacetimes, the only non-zero components of the Riemann tensor are given by the tetrad components of the Weyl tensor, which can be represented by five complex Weyl scalars, \begin{gather} \Psi_0 = C_{lmlm},\quad \Psi_1 = C_{lnlm}, \quad \Psi_2 = C_{lm{\bar{m}} n},\quad \Psi_3 = C_{ln{\bar{m}} n},\quad \Psi_4 = C_{n{\bar{m}} n{\bar{m}}}, \end{gather} with types inherited from the tetrad vectors that appear in their definition, \begin{gather*} \Psi_0 : \{4,0\}, \quad \Psi_1 : \{2,0\}, \quad \Psi_2 : \{0,0\},\quad \Psi_3 : \{-2,0\}, \quad \Psi_4 : \{-4,0\}. \end{gather*} Many of the results that follow will be specialised to type-D spacetimes with $l^\mu$ and $n^\mu$ aligned to the two principal null directions, in which case the Goldberg-Sachs theorem implies that 4 of the spin coefficients vanish, \begin{equation} \kappa = \kappa' = \sigma = \sigma' = 0, \end{equation} and also that all of the Weyl scalars except $\Psi_2$ vanish, \begin{equation} \Psi_0 = \Psi_1 = \Psi_3 = \Psi_4 = 0. \end{equation} The GHP equations give relations between the Weyl scalars and the directional derivatives of the spin coefficients. 
For type-D spacetimes they are given by \begin{alignat}{4} \hbox{\ec\char'336} \rho &= \rho^2, & \quad \hbox{\ec\char'336} \tau &= \rho(\tau-\bar{\tau}'), \nonumber \\ \hbox{\ec\char'360} \tau &= \tau^2, & \quad \hbox{\ec\char'360} \rho &= \tau(\rho-\bar{\rho}), \nonumber \\ \hbox{\ec\char'336}' \rho &= \rho\bar{\rho}' - \tau \bar{\tau} - \Psi_2 + \hbox{\ec\char'360}' \tau, \end{alignat} along with the Bianchi identity, \begin{equation} \hbox{\ec\char'336} \Psi_2 = 3 \rho \Psi_2, \quad \hbox{\ec\char'360} \Psi_2 = 3 \tau \Psi_2, \end{equation} and the conjugate, prime, and prime conjugate of these equations. Similarly, the commutator of any pair of directional derivatives can be written in terms of a linear combination of spin coefficients multiplying single directional derivatives. Again for type-D, they are given by \begin{subequations} \begin{align} [\hbox{\ec\char'336}, \hbox{\ec\char'336}'] &= (\bar{\tau} - \tau')\hbox{\ec\char'360} + (\tau - \bar{\tau}')\hbox{\ec\char'360}' - p (\Psi_2 - \tau \tau') - q (\bar{\Psi}_2 - \bar{\tau}\bar{\tau}'), \\ [\hbox{\ec\char'336}, \hbox{\ec\char'360}] &= \bar{\rho}\hbox{\ec\char'360} - \bar{\tau}'\hbox{\ec\char'336} + q \bar{\rho}\bar{\tau}', \\ [\hbox{\ec\char'360}, \hbox{\ec\char'360}'] &= (\bar{\rho}' - \rho')\hbox{\ec\char'336} + (\rho-\bar{\rho})\hbox{\ec\char'336}' + p (\Psi_2 + \rho \rho') - q (\bar{\rho}\bar{\rho}' + \bar{\Psi}_2), \end{align} \end{subequations} along with the conjugate, prime, and prime conjugate of these. 
If we further restrict to spacetimes that admit a Killing tensor, $\overset{\star\star}{K}_{\alpha \beta}$, the associated symmetries lead to additional identities relating the spin coefficients, \begin{equation} \frac{\rho}{\bar{\rho}} = \frac{\rho'}{\bar{\rho}'} = -\frac{\tau}{\bar{\tau}'}= -\frac{\tau'}{\bar{\tau}} = \frac{\bar{M}^{1/3}}{M^{1/3}}\frac{\Psi_2^{1/3}}{\bar{\Psi}_2^{1/3}} = \frac{\bar{\zeta}}{\zeta}, \end{equation} for some complex function $M$ that is annihilated by $\hbox{\ec\char'336}$.\footnote{In the case of Kerr spacetime, $M$ is the mass of the spacetime as one might anticipate.} Here, we have used the fact that the Killing spinor coefficient is related to $\Psi_2$ by \begin{equation} \zeta = -M^{1/3} \Psi_2^{-1/3}. \end{equation} These identities can be used along with the GHP equations to obtain a complementary set of identities, \begin{subequations} \begin{gather} \hbox{\ec\char'336} \tau' = 2\rho \tau' = \hbox{\ec\char'360}' \rho, \\ \hbox{\ec\char'336}' \rho = \rho \rho' + \tau' (\tau - \bar{\tau}') -\frac12 \Psi_2 - \frac{\bar{\zeta}}{2\zeta} \bar{\Psi}_2, \\ \hbox{\ec\char'360}' \tau = \tau \tau' + \rho (\rho' - \bar{\rho}') + \frac12 \Psi_2 - \frac{\bar{\zeta}}{2\zeta} \bar{\Psi}_2, \end{gather} \end{subequations} along with the conjugate, prime, and prime conjugate of these equations. A consequence of these additional relations is that there is an operator \begin{equation} \mathcal{\pounds}_\xi = -\zeta \big( - \rho' \hbox{\ec\char'336} + \rho \hbox{\ec\char'336}' + \tau' \hbox{\ec\char'360} - \tau \hbox{\ec\char'360}') - \frac{p}{2} \zeta \Psi_2 - \frac{q}{2} \bar{\zeta} \bar{\Psi}_2, \end{equation} associated with the Killing vector \begin{equation} \xi^\alpha = -\zeta(-\rho' l^\alpha + \rho n^\alpha + \tau' m^\alpha - \tau {\bar{m}}^\alpha). 
\end{equation} There is a second operator \begin{align} \mathcal{\pounds}_\eta &= -\tfrac{\zeta}{4} \big[(\zeta-\bar{\zeta})^2(\rho' \hbox{\ec\char'336} - \rho \hbox{\ec\char'336}') - (\zeta+\bar{\zeta})^2(\tau' \hbox{\ec\char'360} - \tau \hbox{\ec\char'360}') \big] + p\, {}_\eta h_1 + q\, {}_\eta \bar{h}_1 \end{align} where \begin{align} {}_\eta h_1 &= \tfrac18 \zeta(\zeta^2+\bar{\zeta}^2)\Psi_2 - \tfrac14 \zeta\bar{\zeta}^2 \bar{\Psi}_2 + \tfrac12 \rho \rho' \zeta^2 (\bar{\zeta}-\zeta) + \tfrac12 \tau \tau' \zeta^2 (\bar{\zeta}+\zeta). \end{align} This is associated with the second Killing vector \begin{equation} \eta^\alpha = -\tfrac{\zeta}{4}\big[(\zeta-\bar{\zeta})^2 (\rho' l^\alpha - \rho n^\alpha) -(\zeta+\bar{\zeta})^2 (\tau' m^\alpha - \tau {\bar{m}}^\alpha) \big]. \end{equation} Both $\mathcal{\pounds}_\xi$ and $\mathcal{\pounds}_\eta$ commute with all of the GHP operators and annihilate all of the spin coefficients and $\Psi_2$. \subsubsection{Teukolsky equations} \label{sec:TeukolskyEquations} We now consider perturbations of vacuum type-D spacetimes. Teukolsky \cite{Teukolsky:1972my} showed that the perturbations to $\Psi_0$ and $\Psi_4$ (which we will denote by $\psi_0$ and $\psi_4$) are gauge invariant and satisfy decoupled and fully separable second order equations. 
These perturbations may be written in GHP form as \begin{align} \psi_0 = C^{(1)}_{lmlm}[h] = \mathcal{T}_0 h, \quad \psi_4 = C^{(1)}_{n{\bar{m}} n {\bar{m}}}[h] = \mathcal{T}_4 h,\label{eq:Weyl-scalars-definition} \end{align} where the operators $\mathcal{T}_I$ are given by \begin{subequations} \begin{align} \mathcal{T}_0 h &= - \frac12 \Big[(\hbox{\ec\char'360} - \bar{\tau}')(\hbox{\ec\char'360}-\bar{\tau}') h_{ll} + (\hbox{\ec\char'336}-\bar{\rho})(\hbox{\ec\char'336}-\bar{\rho}) h_{mm} \nonumber \\ & \qquad \qquad- \big((\hbox{\ec\char'336}-\bar{\rho})(\hbox{\ec\char'360}-2\bar{\tau}')+ (\hbox{\ec\char'360}-\bar{\tau}')(\hbox{\ec\char'336}-2\bar{\rho})\big) h_{(lm)} \Big],\label{eq:T0} \\ \mathcal{T}_4 h &= - \frac12 \Big[(\hbox{\ec\char'360}' - \bar{\tau})(\hbox{\ec\char'360}'-\bar{\tau}) h_{nn} + (\hbox{\ec\char'336}'-\bar{\rho}')(\hbox{\ec\char'336}'-\bar{\rho}') h_{{\bar{m}}\mb} \nonumber \\ & \qquad \qquad - \big((\hbox{\ec\char'336}'-\bar{\rho}')(\hbox{\ec\char'360}'-2\bar{\tau})+ (\hbox{\ec\char'360}'-\bar{\tau})(\hbox{\ec\char'336}'-2\bar{\rho}')\big) h_{(n{\bar{m}})} \Big].\label{eq:T4} \end{align} \end{subequations} We will later also need the adjoints of these, which are given by \begin{subequations} \begin{align} (\mathcal{T}_0^\dag \Psi)_{\alpha \beta} &= - \frac12 \Big[l_\alpha l_\beta (\hbox{\ec\char'360} - \tau)(\hbox{\ec\char'360} - \tau) + m_\alpha m_\beta (\hbox{\ec\char'336}-\rho)(\hbox{\ec\char'336}-\rho) \nonumber \\ & - l_{(\alpha} m_{\beta)} \big((\hbox{\ec\char'360}-\tau+\bar{\tau}')(\hbox{\ec\char'336}-\rho)+ (\hbox{\ec\char'336}-\rho+\bar{\rho})(\hbox{\ec\char'360}-\tau)\big) \Big]\Psi,\label{eq:T0dag} \\ (\mathcal{T}_4^\dag \Psi)_{\alpha \beta} &= - \frac12 \Big[n_\alpha n_\beta (\hbox{\ec\char'360}' - \tau')(\hbox{\ec\char'360}' - \tau') + {\bar{m}}_\alpha {\bar{m}}_\beta (\hbox{\ec\char'336}'-\rho')(\hbox{\ec\char'336}'- \rho')\nonumber \\ & - n_{(\alpha} {\bar{m}}_{\beta)} 
\big((\hbox{\ec\char'360}'-\tau'+\bar{\tau})(\hbox{\ec\char'336}'-\rho') + (\hbox{\ec\char'336}'-\rho'+\bar{\rho}')(\hbox{\ec\char'360}'-\tau')\big) \Big]\Psi.\label{eq:T4dag} \end{align} \end{subequations} The scalars $\psi_0$ and $\psi_4$ satisfy the Teukolsky equations,\footnote{Note that $\mathcal{O}' \psi_4= \zeta^{-4} \mathcal{O} \zeta^4 \psi_4$ and $\mathcal{O} \psi_0= \zeta^{-4} \mathcal{O} \zeta^4 \psi_0$.} \begin{alignat}{4} \mathcal{O} \psi_0 &= 8 \pi \mathcal{S}_0 T,& \qquad \mathcal{O}' \psi_4 &= 8 \pi \mathcal{S}_4 T, \end{alignat} where \begin{align} \mathcal{O} &:= \big(\hbox{\ec\char'336} - 2 \,s\, \rho - \bar{\rho}\big)\big(\hbox{\ec\char'336}'-\rho'\big) - \big(\hbox{\ec\char'360} - 2\, s \,\tau - \bar{\tau}'\big)\big(\hbox{\ec\char'360}' - \tau'\big) + \tfrac12 \big[\big(6 s-2\big)-4s^2\big] \Psi_2 \end{align} is the spin-weight $s$ Teukolsky operator.\footnote{Some authors (e.g. \cite{Wald:1978vm,Green:2019nam}) define $\mathcal{O}$ to be the operator with $s=+2$. Then, the operator for the negative $s$ fields is its adjoint $\mathcal{O}^\dag$.} The decoupling operators \begin{subequations} \label{eq:S} \begin{align} \mathcal{S}_0 T &= \tfrac12 (\hbox{\ec\char'360}-\bar{\tau}'-4\tau) \big[(\hbox{\ec\char'336}-2\bar{\rho}) T_{(lm)} -(\hbox{\ec\char'360}-\bar{\tau}') T_{ll} \big] \nonumber \\ & \quad + \tfrac12(\hbox{\ec\char'336}-4\rho-\bar{\rho}) \big[(\hbox{\ec\char'360}-2\bar{\tau}') T_{(lm)} - (\hbox{\ec\char'336}-\bar{\rho}) T_{mm} \big], \\ \mathcal{S}_4 T &= \tfrac12(\hbox{\ec\char'360}'-\bar{\tau}-4\tau') \big[(\hbox{\ec\char'336}'-2\bar{\rho}') T_{(n{\bar{m}})} -(\hbox{\ec\char'360}'-\bar{\tau}) T_{nn} \big] \nonumber \\ & \quad + \tfrac12(\hbox{\ec\char'336}'-4\rho'-\bar{\rho}') \big[(\hbox{\ec\char'360}'-2\bar{\tau}) T_{(n{\bar{m}})} - (\hbox{\ec\char'336}'-\bar{\rho}') T_{{\bar{m}}\mb} \big], \end{align} \end{subequations} allow the sources for the Teukolsky equations to be constructed from the stress-energy tensor. 
We will later also need the adjoints of these, which are given by \begin{subequations} \label{eq:Sdag} \begin{align} \label{eq:Sdag0} (\mathcal{S}_0^\dag \Psi)_{\alpha \beta} &= - \tfrac12 l_\alpha l_\beta (\hbox{\ec\char'360}-\tau)(\hbox{\ec\char'360}+3\tau) \Psi - \tfrac12 m_{\alpha}m_\beta (\hbox{\ec\char'336}-\rho)(\hbox{\ec\char'336} + 3\rho) \Psi \nonumber \\ & \quad + \tfrac12 l_{(\alpha}m_{\beta)} \big[(\hbox{\ec\char'336}-\rho+\bar{\rho})(\hbox{\ec\char'360}+3\tau) +(\hbox{\ec\char'360}-\tau+\bar{\tau}')(\hbox{\ec\char'336}+3\rho)] \Psi, \\ \label{eq:Sdag4} (\mathcal{S}_4^\dag \Psi)_{\alpha \beta} &= - \tfrac12 n_\alpha n_\beta (\hbox{\ec\char'360}'-\tau')(\hbox{\ec\char'360}'+3\tau') \Psi - \tfrac12 {\bar{m}}_{\alpha}{\bar{m}}_\beta (\hbox{\ec\char'336}'-\rho')(\hbox{\ec\char'336}' + 3\rho') \Psi \nonumber \\ & \quad + \tfrac12 n_{(\alpha}\bar{m}_{\beta)} \big[(\hbox{\ec\char'336}'-\rho'+\bar{\rho}')(\hbox{\ec\char'360}'+3\tau') +(\hbox{\ec\char'360}'-\tau'+\bar{\tau})(\hbox{\ec\char'336}'+3\rho')] \Psi. \end{align} \end{subequations} Introducing the index-free linearised Einstein operator $(\mathcal{E}h)_{\alpha\beta} := G_{\alpha \beta}^{(1)}[h]$, we see that Teukolsky's result for decoupling the equations is a consequence of the operator identities \begin{align} \label{eq:SEOT} \mathcal{S}_0 \mathcal{E} = \mathcal{O} \mathcal{T}_0, \quad \mathcal{S}_4 \mathcal{E} = \mathcal{O}' \mathcal{T}_4. 
\end{align} In vacuum Kerr-NUT spacetimes, the Teukolsky operator may be written in manifestly separable form by rewriting it in terms of the commuting operators \cite{Aksteiner:2014zyp} \begin{align} \mathscr{R} &:= \zeta \bar{\zeta} (\hbox{\ec\char'336} - \rho - \bar{\rho})(\hbox{\ec\char'336}' - 2 b \rho') + \frac{2b-1}{2} (\zeta + \bar{\zeta}) \mathcal{\pounds}_\xi, \end{align} and \begin{align} \mathscr{S} &:= \zeta \bar{\zeta} (\hbox{\ec\char'360} - \tau - \bar{\tau}')(\hbox{\ec\char'360}' - 2 s \tau') + \frac{2s-1}{2} (\zeta - \bar{\zeta}) \mathcal{\pounds}_\xi. \end{align} Then, the Teukolsky operator is given by \begin{equation} \zeta \bar{\zeta} \mathcal{O} = \mathscr{R} - \mathscr{S}. \end{equation} The symmetry operators satisfy the commutation relation $\big[\mathscr{R}, \mathscr{S}\big] = 0$ when acting on a type $\{p,0\}$ object. We will see later that, when written as coordinate expressions in Boyer-Lindquist coordinates in Kerr spacetime, the operators $\mathscr{R}$ and $\mathscr{S}$ reduce to the radial Teukolsky and spin-weighted spheroidal operators (with a common eigenvalue). \subsubsection{Teukolsky-Starobinsky identities} In regions where they satisfy the homogeneous Teukolsky equations, the scalars $\psi_0$ and $\psi_4$ are not independent. Instead, they are related by the Teukolsky-Starobinsky identities, which are given in GHP form by \begin{subequations} \begin{align} \hbox{\ec\char'336}^4 \zeta^4 \psi_4 &= \hbox{\ec\char'360}'^4 \zeta^4 \psi_0 - 3 M \mathcal{\pounds}_\xi \bar{\psi}_0, \\ \hbox{\ec\char'336}'^4 \zeta^4 \psi_0 &= \hbox{\ec\char'360}^4 \zeta^4 \psi_4 + 3 M \mathcal{\pounds}_\xi \bar{\psi}_4, \end{align} \end{subequations} where we recall that $M = - \zeta^3 \Psi_2$. 
From these, we can also derive eighth-order Teukolsky-Starobinsky identities that do not mix the scalars, \begin{subequations} \begin{align} \hbox{\ec\char'336}^4 \bar{\zeta}^4 \hbox{\ec\char'336}'^4 \zeta^4 \psi_0 &= \hbox{\ec\char'360}'^4 \bar{\zeta}^4 \hbox{\ec\char'360}^4 \zeta^4 \psi_0 - 9 M^2 \mathcal{\pounds}_\xi^2 \psi_0, \\ \hbox{\ec\char'336}'^4 \bar{\zeta}^4 \hbox{\ec\char'336}^4 \zeta^4 \psi_4 &= \hbox{\ec\char'360}^4 \bar{\zeta}^4 \hbox{\ec\char'360}'^4 \zeta^4 \psi_4 - 9 M^2 \mathcal{\pounds}_\xi^2 \psi_4. \end{align} \end{subequations} \subsubsection{Reconstruction of a metric perturbation in radiation gauge} \label{sec:metric-reconstruction} Solutions of the Teukolsky equations can be related back to solutions for the metric perturbation $h_{\alpha \beta}$ by use of a Hertz potential \cite{Wald:1978vm, Chrzanowski:1975wv, Kegeles:1979an, Lousto:2002em, Whiting:2005hr}. In fact, there are two different Hertz potentials: $\psi^{\rm IRG}$, which produces a metric perturbation in the ingoing radiation gauge; and $\psi^{\rm ORG}$, which produces a metric perturbation in the outgoing radiation gauge. In the ingoing radiation gauge (IRG), the metric perturbation may be reconstructed by applying a second-order differential operator to a scalar Hertz potential $\psi^{\rm IRG}$ of type $\{-4,0\}$ (i.e. the same type as $\psi_4$). In terms of this Hertz potential, the IRG metric perturbation is given explicitly by \begin{equation} h_{\alpha\beta}^{\rm IRG} = 2 \Re \big[(\mathcal{S}_0^\dag \psi^{\rm IRG})_{\alpha \beta}\big],\label{eq:reconstruction} \end{equation} where $\mathcal{S}_0^\dag$ is the operator given in Eq.~\eqref{eq:Sdag0}. The IRG Hertz potential satisfies $\mathcal{O} \psi^{\rm IRG} = \eta^\text{IRG}$, where $\eta^\text{IRG}$ satisfies $2\Re(\mathcal{T}_0^\dag \eta^\text{IRG})_{\alpha\beta} = 8\pi T_{\alpha\beta}$.
In other words, $\psi^{\rm IRG}$ is a solution of the equation satisfied by $\zeta^{4} \psi_4$ (equivalently, the adjoint of the equation satisfied by $\psi_0$), but with a different source. The reconstructed metric perturbation manifestly satisfies the gauge conditions $l^\alpha h_{\alpha\beta}=0$ and $h=0$, and consistency necessarily requires that $(\mathcal{E} h^{\rm IRG})_{ll} = 0 = T_{ll}$. Computing the perturbed Weyl scalars from it, we find \begin{subequations} \begin{align} \psi_0 &= \frac14 \hbox{\ec\char'336}^4 \overline{\psi^\text{IRG}}, \label{eq:psi0-IRG}\\ \psi_4 &= \frac14 \hbox{\ec\char'360}'^4\overline{\psi^\text{IRG}} - \frac34 M \zeta^{-4} \mathcal{\pounds}_\xi \psi^\text{IRG} \nonumber \\ & \quad + \frac14 \Big[ \zeta^{-2}\mathcal{O}\zeta^{2} + 2 \zeta^{-1}\mathcal{\pounds}_\xi - 2(\tau' \tau-\rho' \rho - \Psi_2)\Big] \eta^\text{IRG}.\label{eq:psi4-IRG} \end{align} \end{subequations} The IRG Hertz potential may therefore be obtained either by solving the sourced (adjoint) Teukolsky equation or by solving either of the fourth-order equations sourced by the perturbed Weyl scalars. The equations involving $\psi_0$ and $\psi_4$ are often referred to as the ``radial'' and ``angular'' inversion equations, respectively.
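The operator adjoints used here (e.g. $\mathcal{T}_0^\dag$, $\mathcal{S}_0^\dag$) are formal adjoints, defined by repeatedly integrating by parts and discarding boundary terms. As a toy illustration of this notion (a one-dimensional sketch, not the GHP operators themselves; the coefficients and test functions below are arbitrary choices), the following verifies numerically that for $L g = a g'' + b g' + c g$ the formal adjoint $L^\dag f = (a f)'' - (b f)' + c f$ satisfies $\int f\,(L g)\,dx = \int (L^\dag f)\,g\,dx$ whenever $f$ and $g$ vanish sufficiently rapidly at the boundary:

```python
import math

# Check the formal-adjoint identity  int f (L g) dx = int (L^dag f) g dx
# for L g = a g'' + b g' + c g,  L^dag f = (a f)'' - (b f)' + c f,
# with f, g vanishing to high order at the boundary so that all
# surface terms from integration by parts are negligible.

N = 4001
h = 1.0 / (N - 1)
xs = [i * h for i in range(N)]

def deriv(vals):
    """First derivative by central differences (one-sided at the ends)."""
    out = [0.0] * N
    for i in range(1, N - 1):
        out[i] = (vals[i + 1] - vals[i - 1]) / (2 * h)
    out[0] = (vals[1] - vals[0]) / h
    out[-1] = (vals[-1] - vals[-2]) / h
    return out

def integrate(vals):
    """Trapezoidal rule on the uniform grid."""
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

# Arbitrary smooth coefficients and compactly supported test functions.
a = [1 + x * x for x in xs]
b = [math.sin(3 * x) for x in xs]
c = [math.cos(2 * x) for x in xs]
f = [(x * (1 - x)) ** 4 for x in xs]
g = [(x * (1 - x)) ** 3 * math.sin(5 * x) for x in xs]

g1 = deriv(g)
g2 = deriv(g1)
Lg = [a[i] * g2[i] + b[i] * g1[i] + c[i] * g[i] for i in range(N)]

af2 = deriv(deriv([a[i] * f[i] for i in range(N)]))
bf1 = deriv([b[i] * f[i] for i in range(N)])
Ldag_f = [af2[i] - bf1[i] + c[i] * f[i] for i in range(N)]

lhs = integrate([f[i] * Lg[i] for i in range(N)])
rhs = integrate([Ldag_f[i] * g[i] for i in range(N)])
assert abs(lhs - rhs) < 1e-5
```

The same mechanism, applied tensorially, is what allows the sourced equation for the Hertz potential to be read off from the adjoint of the Teukolsky decoupling identity.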
Acting on the perturbed Weyl scalars with the Teukolsky operator and commuting operators, we find \begin{subequations} \begin{align} \mathcal{O}\psi_0 &= \frac14 (\hbox{\ec\char'336}-\rho-\bar{\rho})^4 \overline{\eta^\text{IRG}}, \label{eq:Opsi0-IRG}\\ \mathcal{O}'\psi_4 &= \frac14 (\hbox{\ec\char'360}'-\tau'-\bar{\tau})^4 \overline{\eta^\text{IRG}} - \frac34 M \zeta^{-4} \mathcal{\pounds}_\xi \eta^\text{IRG} \nonumber \\ & \quad + \frac14 \mathcal{O}'\Big[ \zeta^{-2}\mathcal{O}\zeta^{2} + 2 \zeta^{-1}\mathcal{\pounds}_\xi - 2(\tau' \tau-\rho' \rho - \Psi_2)\Big] \eta^\text{IRG}.\label{eq:Opsi4-IRG} \end{align} \end{subequations} Thus, in regions where the Hertz potential satisfies the homogeneous equation $\mathcal{O}\psi^\text{IRG}=0$, the second line of Eq.~\eqref{eq:psi4-IRG} vanishes and the perturbed Weyl scalars satisfy the homogeneous Teukolsky equations. A similar procedure also works in the outgoing radiation gauge (ORG), the prime of the IRG. There, we have\footnote{Some authors \cite{vandeMeent:2015lxa} define a slightly different ORG Hertz potential related to the one here by $\hat{\psi}^{\rm ORG} = \zeta^{-4} \psi^{\rm ORG}$ and $(\hat{\mathcal{S}}_4^\dag)_{\alpha\beta} = (\mathcal{S}_4^\dag \zeta^{4})_{\alpha\beta}$. Both conventions yield the same metric perturbation, $(\hat{\mathcal{S}}_4^\dag\hat{\psi}^{\rm ORG})_{\alpha\beta} = (\mathcal{S}_4^\dag \psi^{\rm ORG})_{\alpha\beta}$.} \begin{equation} h_{\alpha\beta}^{\rm ORG} = 2\Re \big[ (\mathcal{S}_4^\dag \psi^{\rm ORG})_{\alpha \beta}\big], \end{equation} where the ORG Hertz potential, $\psi^{\rm ORG}$, is of type $\{4,0\}$ (i.e. the same as $\psi_0$). The ORG Hertz potential satisfies $\mathcal{O}' \psi^\text{ORG} = \eta^{\text{ORG}}$, where $\eta^\text{ORG}$ satisfies $2\Re(\mathcal{T}_4^\dag \eta^\text{ORG})_{\alpha\beta} = 8\pi T_{\alpha\beta}$.
In other words, $\psi^{\rm ORG}$ is a solution of the equation satisfied by $\zeta^4\psi_0$ (equivalently, the adjoint of the equation satisfied by $\psi_4$), but with a different source. The reconstructed metric perturbation manifestly satisfies the gauge conditions $n^\alpha h_{\alpha\beta}=0$ and $h=0$, and consistency necessarily requires that $(\mathcal{E} h^{\rm ORG})_{nn} = 0 = T_{nn}$. Computing the perturbed Weyl scalars from it, we find \begin{subequations} \begin{align} \psi_0 &= \frac14 \hbox{\ec\char'360}^4 \overline{\psi^{\rm ORG}} + \frac34 M \zeta^{-4} \mathcal{\pounds}_\xi\psi^{\rm ORG} \nonumber \\ & \quad + \frac14 \Big[ \zeta^{2}\mathcal{O}\zeta^{-2} - 2 \zeta^{-1}\mathcal{\pounds}_\xi - 2(\tau' \tau-\rho' \rho - \Psi_2)\Big] \eta^\text{ORG},\label{eq:psi0-ORG}\\ \psi_4 &= \frac14 \hbox{\ec\char'336}'^4 \overline{\psi^{\rm ORG}}.\label{eq:psi4-ORG} \end{align} \end{subequations} The ORG Hertz potential may therefore be obtained either by solving the sourced (adjoint) Teukolsky equation or by solving either of the fourth-order equations sourced by the perturbed Weyl scalars. The equations involving $\psi_0$ and $\psi_4$ are often referred to as the ``angular'' and ``radial'' inversion equations, respectively.
Acting on the perturbed Weyl scalars with the Teukolsky operator and commuting operators, we find \begin{subequations} \begin{align} \mathcal{O}\psi_0 &= \frac14 (\hbox{\ec\char'360}-\tau-\bar{\tau}')^4 \overline{\eta^{\rm ORG}} + \frac34 M \zeta^{-4} \mathcal{\pounds}_\xi\eta^{\rm ORG} \nonumber \\ & \quad + \frac14 \mathcal{O} \Big[ \zeta^{2}\mathcal{O}\zeta^{-2} - 2 \zeta^{-1}\mathcal{\pounds}_\xi - 2(\tau' \tau-\rho' \rho - \Psi_2)\Big] \eta^\text{ORG}, \label{eq:Opsi0-ORG}\\ \mathcal{O}'\psi_4 &= \frac14 (\hbox{\ec\char'336}'-\rho'-\bar{\rho}')^4 \overline{\eta^{\rm ORG}}.\label{eq:Opsi4-ORG} \end{align} \end{subequations} Thus, in regions where the Hertz potential satisfies the homogeneous equation $\mathcal{O}'\psi^\text{ORG}=0$, the second line of Eq.~\eqref{eq:psi0-ORG} vanishes and the perturbed Weyl scalars satisfy the homogeneous Teukolsky equations. As with the Weyl scalars, the IRG and ORG Hertz potentials are not independent in the homogeneous case. By demanding that they produce the same $\psi_0$ and $\psi_4$, we obtain Teukolsky-Starobinsky identities relating them, \begin{align} \hbox{\ec\char'336}^4 \psi^{\rm IRG} &= \hbox{\ec\char'360}'^4 \psi^{\rm ORG} + 3 \overline{M \zeta^{-4} \mathcal{\pounds}_\xi \psi^{\rm ORG}}, \\ \hbox{\ec\char'336}'^4 \psi^{\rm ORG} &= \hbox{\ec\char'360}^4 \psi^{\rm IRG} - 3 \overline{M \zeta^{-4} \mathcal{\pounds}_\xi \psi^{\rm IRG}}.
\end{align} The fact that the Hertz potentials yield solutions of the homogeneous linearised Einstein equations was succinctly summarised by Wald \cite{Wald:1978vm} using the method of adjoints: since the operators satisfy the identity $\mathcal{S} \mathcal{E} = \mathcal{O} \mathcal{T}$, by taking the adjoint and using the fact that $\mathcal{E}$ is self-adjoint, we find that $\mathcal{E} \mathcal{S}^\dag = \mathcal{T}^\dag \mathcal{O}^\dag$, so we have a homogeneous solution of the linearised Einstein equations provided the Hertz potential satisfies the (adjoint) homogeneous Teukolsky equation. Finally, we note that in addition to imposing conditions on the stress-energy, the standard radiation gauge reconstruction procedure fails to reproduce certain ``completion'' portions of the metric perturbation associated with small shifts in the central mass and angular momentum, along with residual gauge freedom. A more generally valid metric perturbation may be obtained by supplementing the reconstructed piece described here with completion pieces and with a ``corrector'' tensor $x_{\alpha \beta}$ that is designed to eliminate any restrictions on the stress-energy, \begin{equation} h_{\alpha \beta} = 2 \Re (\mathcal{S}^\dag \Psi)_{\alpha \beta} + x_{\alpha \beta} + \dot{g}_{\alpha \beta} + (\mathcal{\pounds}_X g)_{\alpha \beta}. \end{equation} The interested reader may refer to \cite{Wald:1978vm,Kegeles:1979an,Chrzanowski:1975wv} for the original derivations of the reconstruction procedure, to \cite{Barack:2017oir} for an analysis of the sourced equation satisfied by the Hertz potential, to \cite{Merlin:2016boc,vandeMeent:2017fqk} for details of metric completion, and to \cite{Green:2019nam,Hollands:2020vjg} for a thorough explanation of the corrector tensor approach. \subsubsection{Gravitational waves} In order to determine the gravitational wave strain, we require the metric perturbation far from the source.
If we consider the metric perturbation reconstructed in radiation gauge, then to leading order in a large-distance expansion from the source the components $h_{mm}$ and $h_{{\bar{m}}\mb}$ dominate, with both falling off as $(\text{distance})^{-1}$. It is common to write these in terms of the two gravitational wave polarizations, \begin{equation} \label{eq:Strain} h_{mm} = h_+ + i h_\times, \quad h_{{\bar{m}}\mb} = h_+ - i h_\times. \end{equation} Furthermore, at large radius the operator $\mathcal{T}_4$ of Eq.~\eqref{eq:T4} reduces to a second derivative along the $l^\mu$ null direction, leading to a simple relationship between $\psi_4$ and the second time derivative of the strain, \begin{equation} \label{eq:Psi4-Strain} \psi_4 \sim -\frac12 \ddot{h}_{{\bar{m}}\mb}. \end{equation} This gives us a straightforward way to determine the strain by computing two time integrals of $\psi_4$. Further mathematical details on the relationship between $\psi_4$ and outgoing gravitational radiation are given in Refs.~\cite{Newman:1961qr,Newman:1962cia,Szekeres:1965ux}, on the equivalent relationship between $\psi_0$ and incoming radiation in Ref.~\cite{Walker:1979zk}, and on numerical implementation considerations in Refs.~\cite{Reisswig:2010di,Lehner:2007ip}. \subsubsection{GHP formalism in Kerr spacetime} We now give explicit expressions for the various quantities defined in the previous sections specialized to Kerr spacetime. The spin coefficients are tetrad dependent.
When working with the Carter tetrad, the non-zero spin coefficients have a particularly symmetric form given by \begin{alignat}{4} \rho &= -\rho' = -\frac{1}{\zeta} \sqrt{\frac{\Delta}{2 \Sigma}}, \quad & \tau &= \tau' = -\frac{i a \sin \theta}{\zeta\sqrt{2 \Sigma}}, \nonumber\\ \beta &= \beta' = -\frac{i}{\zeta} \frac{a+i r \cos \theta}{2\sin\theta\sqrt{2\Sigma}},\quad & \epsilon &= - \epsilon' = \frac{M r - a^2 - i a (r-M) \cos \theta}{2\zeta\sqrt{2 \Sigma \Delta}}, \end{alignat} while for the Kinnersley tetrad they are given by \begin{gather} \rho_K = -\frac{1}{\zeta}, \quad \rho_K' = \frac{\Delta}{2 \zeta^2 \bar{\zeta}}, \quad \tau_K = -\frac{i a \sin \theta}{\sqrt{2}\zeta \bar{\zeta}}, \quad \tau_K' = -\frac{i a \sin \theta}{\sqrt{2}\zeta^2},\nonumber \\ \beta_K = \frac{\cot \theta}{2\sqrt{2} \bar{\zeta}}, \quad \beta_K' = \frac{\cot \theta}{2\sqrt{2} \zeta} -\frac{i a \sin \theta}{\sqrt{2}\zeta^2},\quad \epsilon_K = 0, \quad \epsilon_K' = \frac{\Delta}{2 \zeta^2 \bar{\zeta}} - \frac{r-M}{2 \zeta \bar{\zeta}}. \end{gather} The commuting GHP operators have the same form in both tetrads, \begin{equation} \mathcal{\pounds}_{\xi} = \partial_t,\quad \mathcal{\pounds}_{\eta} = a^2 \partial_t + a \partial_\phi = \partial_{\tilde{\phi}}. \end{equation} \subsubsection{Mode decomposed equations in Kerr spacetime} \label{sec:KerrModes} In addition to decoupling the equations, Teukolsky further showed that the Teukolsky equations are fully separable using a mode ansatz. The specific form of the ansatz depends on the choice of null tetrad.
Teukolsky worked with the Kinnersley tetrad \cite{Kinnersley:1969zza}, in which case the Teukolsky equations are separable using the ansatz\footnote{A similar separability result also holds when working with the Carter tetrad by replacing the left-hand sides as follows: \begin{equation*} \psi_0 \to \zeta^2 \Delta^{-1} \psi_0, \quad \zeta^4 \psi_4 \to \zeta^2 \Delta \psi_4, \quad (\mathcal{S}_0 T) \to \zeta^2 \Delta^{-1} (\mathcal{S}_0 T), \quad \zeta^4 (\mathcal{S}_4 T) \to \zeta^2 \Delta (\mathcal{S}_4 T). \end{equation*} The factors of $\Delta$ here are not required for separability, but are included so that the radial functions are consistent with Teukolsky's original radial functions.} \begin{align} \psi_0 &= \int_{-\infty}^\infty \sum_{\ell=2}^\infty \sum_{{\mathscr m}=-\ell}^\ell \, {}_2 \psi_{\ell {\mathscr m} \omega}(r) \, {}_2 S_{\ell {\mathscr m}}(\theta, \phi; a \omega) e^{-i \omega t}\, d\omega, \label{eq:psi0-FD}\\ \zeta^4 \psi_4 &= \int_{-\infty}^\infty \sum_{\ell=2}^\infty \sum_{{\mathscr m}=-\ell}^\ell \,{}_{-2} \psi_{\ell {\mathscr m} \omega}(r) \, {}_{-2} S_{\ell {\mathscr m}}(\theta, \phi; a \omega) e^{-i \omega t} d\omega, \label{eq:psi4-FD} \end{align} with the functions ${}_{s} \psi_{\ell {\mathscr m} \omega}(r)$ and ${}_{s} S_{\ell {\mathscr m}}(\theta, \phi; a \omega)$ satisfying the spin-weighted spheroidal harmonic and Teukolsky radial equations, respectively, \begin{equation} \label{eq:SWSH} \bigg[\dfrac{d}{d\chi} \bigg((1-\chi^2)\dfrac{d}{d\chi} \bigg) +a^2 \omega^2 \chi^2 -\frac{(m+s \chi)^2}{1-\chi^2} - 2 a s \omega \chi +s + A\bigg] {}_{s} S_{\ell {\mathscr m}} = 0, \end{equation} and \begin{equation} \label{eq:TeukolskyR} \bigg[\Delta^{-s} \dfrac{d}{dr}\bigg( \Delta^{s+1}\dfrac{d }{dr}\bigg) +\frac{K^2 - 2 i s (r-M)K}{\Delta} + 4 i s \omega r - {}_s \lambda_{\ell {\mathscr m}} \bigg]{}_{s} \psi_{\ell {\mathscr m} \omega} = {}_{s} T_{\ell {\mathscr m} \omega}, \end{equation} where $\chi := \cos \theta$, $A:= {}_s \lambda_{\ell {\mathscr m}}+2 a m \omega
-a^2 \omega^2$ and $K:=(r^2+a^2)\omega-a m$, and where the eigenvalue ${}_s \lambda_{\ell {\mathscr m}}$ depends on the value of $a\omega$. As with the standard spherical harmonics, the dependence of the spin-weighted spheroidal harmonics on the azimuthal coordinate is exclusively through an overall complex exponential factor, \begin{equation} {}_{s} S_{\ell {\mathscr m}}(\theta, \phi; a \omega) = {}_{s} S_{\ell {\mathscr m}}(\theta, 0; a \omega) e^{i {\mathscr m} \phi}. \end{equation} With this definition, the spin-weighted spheroidal harmonics are orthonormal, \begin{equation} \int {}_{s} S_{\ell {\mathscr m}}(\theta, \phi; a \omega) {}_{s} \bar{S}_{\ell' {\mathscr m}'}(\theta, \phi; a \omega) d\Omega = \delta_{\ell \ell'} \delta_{{\mathscr m} {\mathscr m}'}, \end{equation} where $d \Omega = \sin \theta d\theta d\phi$ is the volume element on the two-sphere. They also satisfy two symmetry identities, \begin{subequations} \begin{align} {}_{s} S_{\ell {\mathscr m}}(\theta, \phi; a \omega) &= (-1)^{\ell+{\mathscr m}}\, {}_{-s} S_{\ell {\mathscr m}}(\pi - \theta, \phi; a \omega), \\ {}_{s} S_{\ell {\mathscr m}}(\theta, \phi; a \omega) &= (-1)^{\ell+s}\, {}_{s} \bar{S}_{\ell -{\mathscr m}}(\pi - \theta, \phi; -a \omega) \end{align} \end{subequations} which can be combined to obtain the useful identity \begin{equation} {}_{s} S_{\ell {\mathscr m}}(\theta, \phi; a \omega) = (-1)^{{\mathscr m}+s} {}_{-s} \bar{S}_{\ell -{\mathscr m}}(\theta, \phi; -a \omega), \end{equation} which relates an $(s, \ell, {\mathscr m}, a \omega)$ harmonic to the conjugate of an $(-s, \ell, -{\mathscr m}, -a \omega)$ harmonic. 
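The eigenvalue satisfies analogous identities, given next. In the spherical limit $a\omega \to 0$, where ${}_s \lambda_{\ell {\mathscr m}} = \ell(\ell+1) - s(s+1)$, both the eigenvalue identities and the sign-independence of the shifted eigenvalue ${}_{|s|}\lambda := {}_s\lambda + |s| + s$ can be checked by direct arithmetic (a minimal sketch):

```python
# Spherical limit a*omega -> 0 of the spin-weighted spheroidal eigenvalue.
def lam(s, l):
    return l * (l + 1) - s * (s + 1)

for l in range(2, 10):
    for s in (-2, -1, 0, 1, 2):
        # identity: lam_s = lam_{-s} - 2s
        assert lam(s, l) == lam(-s, l) - 2 * s
        # lam_{|s|} := lam_s + |s| + s is independent of the sign of s
        assert lam(s, l) + abs(s) + s == lam(-s, l) + abs(s) - s
print("eigenvalue identities hold in the spherical limit")
```

For general $a\omega$ the eigenvalue has no closed form and must be computed numerically, but the identities above continue to hold.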
Similarly, the eigenvalue satisfies the identities \begin{subequations} \begin{align} {}_{s} \lambda_{\ell {\mathscr m}}(a \omega) &= {}_{-s} \lambda_{\ell {\mathscr m}}(a \omega) - 2 s, \\ {}_{s} \lambda_{\ell {\mathscr m}}(a \omega) &= {}_{s} \lambda_{\ell -{\mathscr m}}(-a \omega), \end{align} \end{subequations} which can be combined to obtain \begin{equation} {}_{s} \lambda_{\ell {\mathscr m}}(a \omega) = {}_{-s} \lambda_{\ell -{\mathscr m}}(-a \omega) - 2 s. \end{equation} The sources for the radial Teukolsky equation are defined by \begin{subequations} \begin{align} 8 \pi (\mathcal{S}_0 T) &= -\frac{1}{2\Sigma}\int_{-\infty}^\infty \sum_{\ell=2}^\infty \sum_{{\mathscr m}=-\ell}^\ell {}_2 T_{\ell {\mathscr m} \omega}(r) {}_2 S_{\ell {\mathscr m}}(\theta, \phi; a \omega) e^{-i \omega t} d\omega, \label{eq:T0-FD}\\ 8 \pi \zeta^4 (\mathcal{S}_4 T) &= -\frac{1}{2 \Sigma} \int_{-\infty}^\infty \sum_{\ell=2}^\infty \sum_{{\mathscr m}=-\ell}^\ell {}_{-2} T_{\ell {\mathscr m} \omega}(r) {}_{-2} S_{\ell {\mathscr m}}(\theta, \phi; a \omega) e^{-i \omega t} d\omega.\label{eq:T4-FD} \end{align} \end{subequations} Finally, when acting on a single mode of the mode-decomposed Weyl scalars, the symmetry operators yield \begin{alignat}{4} \mathscr{S} \psi_0 &= - \frac12 {}_{|2|} \lambda_{\ell {\mathscr m}} \psi_0,& \quad \mathscr{R} \psi_0 &= - \frac12 {}_{|2|} \lambda_{\ell {\mathscr m}} \psi_0 + \zeta \bar{\zeta} \mathcal{S}_0 T, \nonumber \\ \mathscr{S}' \psi_4 &= - \frac12 {}_{|2|} \lambda_{\ell {\mathscr m}} \psi_4,& \quad \mathscr{R}' \psi_4 &= - \frac12 {}_{|2|} \lambda_{\ell {\mathscr m}} \psi_4 + \zeta \bar{\zeta} \mathcal{S}_4 T, \end{alignat} where ${}_{|s|} \lambda_{\ell {\mathscr m} \omega} := {}_{s} \lambda_{\ell {\mathscr m} \omega} + |s| + s$ is independent of the sign of $s$.\footnote{This is distinct from Chandrasekhar's eigenvalue, which is given in Eq.~\eqref{eq:lambdaCh}.} Solutions to the radial Teukolsky equation may be written in terms of a pair of
homogeneous mode basis functions chosen according to their asymptotic behavior at the four null boundaries to the spacetime. For radiative ($\omega \ne 0$) modes, the four common choices are denoted \begin{itemize} \item ``in'': representing waves coming in from $\mathcal{I}^-$ then partially falling into the horizon and partially scattering back out to $\mathcal{I}^+$; these modes are purely ingoing into the horizon; \item ``up'': representing waves coming up from $\mathcal{H}^-$ then partially travelling out to $\mathcal{I}^+$ and partially scattering back into $\mathcal{H}^+$; these modes are purely outgoing at infinity; \item ``out'': representing waves coming from $\mathcal{I}^-$ and $\mathcal{H}^-$ then travelling out to $\mathcal{I}^+$; these modes are purely outgoing from the horizon; \item ``down'': representing waves coming from $\mathcal{I}^-$ and $\mathcal{H}^-$ then travelling down to $\mathcal{H}^+$; these modes are purely incoming at infinity; \end{itemize} These have asymptotic behaviour given by \begin{subequations} \begin{alignat}{5} \label{eq:bcRin} {}_s R^{\text{in}}_{\ell {\mathscr m} \omega}(r) &\sim \Big\{& \begin{array}{c} 0 \\ {}_s R^{\text{in,ref}}_{\ell {\mathscr m} \omega} r^{-1-2s} e^{+i\omega r_*} \end{array}& \begin{array}{c} + \\ + \end{array} \begin{array}{c} {}_s R^{\text{in,trans}}_{\ell {\mathscr m} \omega} \Delta^{-s} e^{-i k r_*} \\ {}_s R^{\text{in,inc}}_{\ell {\mathscr m} \omega} r^{-1} e^{-i\omega r_*} \end{array} &\qquad \begin{array}{l} r \to r_+\\ r \to \infty \end{array} \\ \label{eq:bcRup} {}_s R^{\text{up}}_{\ell {\mathscr m} \omega}(r) &\sim \Big\{& \begin{array}{c} {}_s R^{\text{up,inc}}_{\ell {\mathscr m} \omega} e^{+i k r_*} \\ {}_s R^{\text{up,trans}}_{\ell {\mathscr m} \omega} r^{-1-2s} e^{+i\omega r_*} \end{array}& \begin{array}{c} + \\ + \end{array} \begin{array}{c} {}_s R^{\text{up,ref}}_{\ell {\mathscr m} \omega} \Delta^{-s} e^{-i k r_*} \\ 0 \end{array} &\qquad \begin{array}{l} r \to r_+\\ r \to \infty 
\end{array} \\ \label{eq:bcRout} {}_s R^{\text{out}}_{\ell {\mathscr m} \omega}(r) &\sim \Big\{& \begin{array}{c} {}_s R^{\text{out,trans}}_{\ell {\mathscr m} \omega} e^{+i k r_*} \\ {}_s R^{\text{out,inc}}_{\ell {\mathscr m} \omega} r^{-1-2s} e^{+i \omega r_*} \end{array}& \begin{array}{c} + \\ + \end{array} \begin{array}{c} 0 \\ {}_s R^{\text{out,ref}}_{\ell {\mathscr m} \omega} r^{-1} e^{-i \omega r_*} \end{array} &\qquad \begin{array}{l} r \to r_+\\ r \to \infty \end{array} \\ \label{eq:bcRdown} {}_s R^{\text{down}}_{\ell {\mathscr m} \omega}(r) &\sim \Big\{& \begin{array}{c} {}_s R^{\text{down,ref}}_{\ell {\mathscr m} \omega} e^{+i k r_*} \\ 0 \end{array}& \begin{array}{c} + \\ + \end{array} \begin{array}{c} {}_s R^{\text{down,inc}}_{\ell {\mathscr m} \omega} \Delta^{-s} e^{-i k r_*} \\ {}_s R^{\text{down,trans}}_{\ell {\mathscr m} \omega} r^{-1} e^{-i \omega r_*} \end{array} & \begin{array}{l} r \to r_+\\ r \to \infty \end{array} \end{alignat} \end{subequations} where $k := \omega - m \Omega_+$ with $\Omega_+ := \frac{a}{2 M r_+}$ the angular velocity of the horizon, and where $r_* := r + \frac{1}{2\kappa_+} \ln \frac{r-r_+}{2M} + \frac{1}{2\kappa_-} \ln \frac{r-r_-}{2M}$ with $\kappa_\pm := \frac{r_\pm - r_\mp}{2(r_\pm^2+a^2)}$ the surface gravity on the outer/inner horizon. This behaviour is depicted graphically in Fig.~\ref{fig:BCs}. 
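All of the quantities entering these boundary conditions are closed-form functions of $M$ and $a$. As a quick numerical sanity check (with illustrative parameter values), the following sketch verifies that the tortoise coordinate defined above satisfies $dr_*/dr = (r^2+a^2)/\Delta$, and that $\Omega_+$ and $\kappa_+$ reduce to the familiar Schwarzschild values $0$ and $1/(4M)$ as $a \to 0$:

```python
import math

def horizon_quantities(M, a):
    """Horizon radii r_pm, angular velocity Omega_+ and surface gravities kappa_pm."""
    rp = M + math.sqrt(M * M - a * a)
    rm = M - math.sqrt(M * M - a * a)
    Omega_p = a / (2 * M * rp)
    kappa_p = (rp - rm) / (2 * (rp * rp + a * a))
    kappa_m = (rm - rp) / (2 * (rm * rm + a * a))
    return rp, rm, Omega_p, kappa_p, kappa_m

def r_star(r, M, a):
    """Tortoise coordinate with the surface-gravity logarithmic terms."""
    rp, rm, _, kp, km = horizon_quantities(M, a)
    return (r + math.log((r - rp) / (2 * M)) / (2 * kp)
              + math.log((r - rm) / (2 * M)) / (2 * km))

M, a = 1.0, 0.7

# dr_*/dr should equal (r^2 + a^2)/Delta; check by central finite difference.
r, h = 10.0, 1e-6
deriv = (r_star(r + h, M, a) - r_star(r - h, M, a)) / (2 * h)
exact = (r * r + a * a) / (r * r - 2 * M * r + a * a)
assert abs(deriv - exact) < 1e-8

# Schwarzschild limit: Omega_+ -> 0 and kappa_+ -> 1/(4M) as a -> 0.
_, _, Om0, kp0, _ = horizon_quantities(M, 1e-8)
assert abs(Om0) < 1e-8 and abs(kp0 - 1 / (4 * M)) < 1e-8
print("tortoise coordinate and Schwarzschild limits check out")
```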
\begin{figure}[t] \begin{center} \includegraphics[width=11cm]{BCs.pdf} \caption{ Left to right: boundary conditions satisfied by the ``in'', ``up'', ``out'' and ``down'' solutions.} \label{fig:BCs} \end{center} \end{figure} Inhomogeneous solutions of the radial Teukolsky equation can then be written in terms of a linear combination of the basis functions, \begin{align} \label{eq:TeukolskyInhomogeneousModes} {}_2 \psi_{\ell {\mathscr m} \omega}(r) &= {}_2 C^{\text{in}}_{\ell {\mathscr m} \omega}(r) {}_2 R^{\text{in}}_{\ell {\mathscr m} \omega}(r)+{}_2 C^{\text{up}}_{\ell {\mathscr m} \omega}(r) {}_2 R^{\text{up}}_{\ell {\mathscr m} \omega}(r), \\ {}_{-2} \psi_{\ell {\mathscr m} \omega}(r) &= {}_{-2} C^{\text{in}}_{\ell {\mathscr m} \omega}(r) {}_{-2} R^{\text{in}}_{\ell {\mathscr m} \omega}(r)+{}_{-2} C^{\text{up}}_{\ell {\mathscr m} \omega}(r) {}_{-2} R^{\text{up}}_{\ell {\mathscr m} \omega}(r), \end{align} where the weighting coefficients are determined by variation of parameters, \begin{subequations} \label{eq:weighting-coefficients} \begin{align} {}_s C_{\ell {\mathscr m} \omega}^{\text{in}}(r) &= \int^{\infty}_r \frac{{}_s R^{\text{up}}_{\ell {\mathscr m} \omega}(r')}{W(r')\Delta} {}_s T_{\ell {\mathscr m} \omega}(r') dr', \\ {}_s C_{\ell {\mathscr m} \omega}^{\text{up}}(r) &= \int_{r_+}^r \frac{{}_s R^{\text{in}}_{\ell {\mathscr m} \omega}(r')}{W(r')\Delta} {}_s T_{\ell {\mathscr m} \omega}(r') dr', \end{align} \end{subequations} with $W(r) = {}_s R^{\text{in}}_{\ell {\mathscr m} \omega}(r) \partial_r [{}_s R^{\text{up}}_{\ell {\mathscr m} \omega}(r)] - {}_s R^{\text{up}}_{\ell {\mathscr m} \omega}(r) \partial_r [{}_s R^{\text{in}}_{\ell {\mathscr m} \omega}(r)]$ the Wronskian [in practice, it is convenient to use the fact that $\Delta^{s+1} W(r) = \text{const}$]. 
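The variation-of-parameters construction has the same structure for any second-order ODE. As a toy illustration (a flat-space analogue with $u'' + u = f$, not the Teukolsky equation itself), the sketch below assembles the inhomogeneous solution from ``in''-like and ``up''-like homogeneous solutions $\cos x$ and $\sin x$ (unit Wronskian) using the same pair of one-sided integrals as the weighting coefficients above, and checks that the result satisfies the ODE:

```python
import math

# Toy variation of parameters for u'' + u = f on [0, L]:
#   u(x) = u2(x) I1(x) + u1(x) I2(x),
#   I1(x) = int_0^x u1 f / W dx',   I2(x) = int_x^L u2 f / W dx',
# mirroring the structure of the "up" and "in" weighting coefficients.
L, N = 3.0, 3001
h = L / (N - 1)
xs = [i * h for i in range(N)]
u1 = [math.cos(x) for x in xs]   # "in"-like homogeneous solution
u2 = [math.sin(x) for x in xs]   # "up"-like homogeneous solution
W = 1.0                          # Wronskian u1 u2' - u2 u1' = 1
f = [math.exp(-4 * (x - 1.5) ** 2) for x in xs]  # smooth, localised source

# Cumulative trapezoid rules for the two integrals.
I1 = [0.0] * N
J = [0.0] * N
for i in range(1, N):
    I1[i] = I1[i - 1] + 0.5 * h * (u1[i - 1] * f[i - 1] + u1[i] * f[i]) / W
    J[i] = J[i - 1] + 0.5 * h * (u2[i - 1] * f[i - 1] + u2[i] * f[i]) / W
I2 = [J[-1] - J[i] for i in range(N)]

u = [u2[i] * I1[i] + u1[i] * I2[i] for i in range(N)]

# Check that u'' + u = f at interior grid points.
max_res = max(abs((u[i - 1] - 2 * u[i] + u[i + 1]) / (h * h) + u[i] - f[i])
              for i in range(1, N - 1))
assert max_res < 1e-4
print("max ODE residual:", max_res)
```

By construction, the solution built this way is purely ``up''-like outside the support of the source on the right and purely ``in''-like on the left, exactly as for the retarded Teukolsky solution.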
If one computes the ``in'' and ``up'' mode functions with normalisation such that transmission coefficients are unity, ${}_s R^{\text{in,trans}}_{\ell {\mathscr m} \omega} = 1 = {}_s R^{\text{up,trans}}_{\ell {\mathscr m} \omega}$, then the gravitational wave strain can be determined directly from $\psi_4$ using Eq.~\eqref{eq:Psi4-Strain} to give \begin{equation} \lim_{r\to \infty} r(h_+ - i h_\times) = 2 \int_{-\infty}^\infty \sum_{\ell=2}^\infty \sum_{{\mathscr m}=-\ell}^\ell \, \frac{{}_{-2} C_{\ell {\mathscr m} \omega}^{\text{up}}}{\omega^2} \, {}_{-2} S_{\ell {\mathscr m}}(\theta, \phi; a \omega) e^{-i \omega (t-r_*)} d\omega, \label{eq:Teukolsky-waveform} \end{equation} where the weighting coefficient ${}_{-2} C_{\ell {\mathscr m} \omega}^{\text{up}}$ is to be evaluated in the limit $r\to \infty$. Similarly, the time-averaged flux of energy carried by gravitational waves\footnote{Strictly speaking, the horizon fluxes given here have been derived from the rates of change of the black hole parameters due to shear of the horizon generators~\cite{Teukolsky:1974yv}. It is generally assumed that these are equivalent to the gravitational wave fluxes, although this has not, to our knowledge, been shown explicitly.} passing through infinity and the horizon can be computed from the ``in'' and ``up'' normalization coefficients \cite{Hughes:1999bq}, \begin{align} \mathcal{F}_E^\mathcal{H} &= \lim_{r \to r_+} \sum_{\ell{\mathscr m}\omega} \frac{2\pi \alpha_{\ell {\mathscr m} \omega}}{\omega^2} |{}_{-2} C^{\text{in}}_{\ell {\mathscr m} \omega}|^2, \label{EdotH v1} \\ \mathcal{F}_E^\mathcal{I} &= \lim_{r \to \infty} \sum_{\ell{\mathscr m}\omega} \frac{2\pi}{\omega^2}|\, {}_{-2} C^{\text{up}}_{\ell {\mathscr m} \omega}|^2, \end{align} where $\alpha_{\ell {\mathscr m} \omega} := \frac{256(2M r_+)^5 k(k^2+4\varepsilon^2)(k^2+16\varepsilon^2)\omega^3}{|\mathcal{C}_{\ell {\mathscr m} \omega}|^2}$ with $\varepsilon := \sqrt{M^2-a^2}/(4 M r_+)$.
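Note that the sign of the horizon energy flux is set by the factor $k = \omega - {\mathscr m}\Omega_+$ in $\alpha_{\ell {\mathscr m} \omega}$: superradiant modes with $0 < \omega < {\mathscr m}\Omega_+$ carry negative energy flux into the horizon, i.e. they extract rotational energy from the black hole. The sketch below checks this sign numerically; purely for illustration, the eigenvalue entering the Teukolsky-Starobinsky constant is approximated by its spherical-limit value $\ell(\ell+1) - s(s+1)$ (an assumption that only affects the positive denominator $|\mathcal{C}|^2$, not the sign, which is controlled entirely by $k$):

```python
import math

def alpha_coefficient(M, a, l, m, omega):
    """Horizon-flux prefactor alpha_{l m omega}.  The eigenvalue in the
    Teukolsky-Starobinsky constant is approximated by its spherical-limit
    value (illustrative only: it affects |C|^2 > 0, not the sign of alpha,
    which is set by k = omega - m Omega_+)."""
    rp = M + math.sqrt(M * M - a * a)
    Omega_p = a / (2 * M * rp)
    k = omega - m * Omega_p
    eps = math.sqrt(M * M - a * a) / (4 * M * rp)
    lamCh = l * (l + 1)          # spherical-limit Chandrasekhar eigenvalue
    aw = a * omega
    D2 = (lamCh**2 * (lamCh - 2)**2
          + 8 * aw * (m - aw) * (lamCh - 2) * (5 * lamCh - 4)
          + 48 * aw**2 * (2 * (lamCh - 2) + 3 * (m - aw)**2))
    C2 = D2 + 144 * M * M * omega * omega   # |C|^2, with D real for these values
    return (256 * (2 * M * rp)**5 * k * (k * k + 4 * eps * eps)
            * (k * k + 16 * eps * eps) * omega**3 / C2)

M, a, l, m = 1.0, 0.9, 2, 2
Omega_p = a / (2 * M * (M + math.sqrt(M * M - a * a)))

# Superradiant mode (0 < omega < m Omega_+): energy is extracted from the hole.
assert alpha_coefficient(M, a, l, m, 0.5 * m * Omega_p) < 0
# Higher-frequency mode (omega > m Omega_+): positive horizon flux.
assert alpha_coefficient(M, a, l, m, 2.0 * m * Omega_p) > 0
print("horizon-flux prefactor changes sign at omega = m Omega_+")
```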
Similarly, the flux of angular momentum is given by \begin{align} \mathcal{F}_{L_z}^\mathcal{H} &= \lim_{r \to r_+} \sum_{\ell{\mathscr m}\omega} \frac{2\pi {\mathscr m} \alpha_{\ell {\mathscr m} \omega}}{\omega^3} |{}_{-2} C^{\text{in}}_{\ell {\mathscr m} \omega}|^2, \label{LdotH v1} \\ \mathcal{F}_{L_z}^\mathcal{I} &= \lim_{r \to \infty} \sum_{\ell{\mathscr m}\omega} \frac{2\pi {\mathscr m}}{\omega^3}|\, {}_{-2} C^{\text{up}}_{\ell {\mathscr m} \omega}|^2.\label{LdotI v1} \end{align} Similar expressions can be obtained in terms of the modes ${}_2 \psi_{\ell{\mathscr m}\omega}$ of $\psi_0$ by using the Teukolsky-Starobinsky identities to relate ${}_{-2} C^{\text{up}}_{\ell {\mathscr m} \omega}$ to ${}_{2} C^{\text{up}}_{\ell {\mathscr m} \omega}$. The necessary details of how these asymptotic amplitudes are related can be found in Refs.~\cite{Ori:2002uv,vandeMeent:2015lxa}. When decomposed into modes, each of the Teukolsky-Starobinsky identities separate to yield identities relating the positive spin-weight spheroidal and radial functions to the negative spin-weight ones, \begin{subequations} \label{eq:TS-mode} \begin{align} \mathcal{D}_0^4 ({}_{-2} \psi_{\ell {\mathscr m} \omega}) &= \tfrac14 \mathcal{C}_{\ell {\mathscr m} \omega} \, {}_{2} \psi_{\ell {\mathscr m} \omega}, \\ \Delta^{2} (\mathcal{D}^\dag_0)^4 (\Delta^{2}\, {}_{2} \psi_{\ell {\mathscr m} \omega}) &= 4 \bar{\mathcal{C}}_{\ell {\mathscr m} \omega} \,{}_{-2} \psi_{\ell {\mathscr m} \omega}, \\ \mathcal{L}_{-1}\mathcal{L}_{0}\mathcal{L}_{1}\mathcal{L}_{2} ({}_{2} S_{\ell {\mathscr m} \omega}) &= D \, {}_{-2} S_{\ell {\mathscr m} \omega}, \\ \mathcal{L}^\dag_{-1}\mathcal{L}^\dag_{0}\mathcal{L}^\dag_{1}\mathcal{L}^\dag_{2} ( {}_{-2} S_{\ell {\mathscr m} \omega}) &= D \, {}_{2} S_{\ell {\mathscr m} \omega}, \end{align} \end{subequations} where \begin{subequations} \label{eq:DandL} \begin{alignat}{4} \mathcal{D}_n &:= \partial_r - \frac{i K}{\Delta} + 2 n \frac{r-M}{\Delta}, \quad& \mathcal{D}^\dag_n &:= 
\partial_r + \frac{i K}{\Delta} + 2 n \frac{r-M}{\Delta}, \\ \mathcal{L}_n &:= \partial_\theta + Q + n \cot \theta, \quad& \mathcal{L}^\dag_n &:= \partial_\theta - Q + n \cot \theta, \end{alignat} \end{subequations} (with $K$ defined above and $Q:=-a \omega \sin \theta + m \csc \theta$) are essentially mode versions of the GHP differential operators. The constants of proportionality are given by \begin{subequations} \begin{align} \label{eq:TS_C} \mathcal{C}_{\ell {\mathscr m} \omega} &= D + (-1)^{\ell+m} 12 i M \omega,\\ \label{eq:TS_D} D^2 &= ({}_{s}\lambda^{\rm Ch}_{\ell {\mathscr m}})^2 ({}_{s}\lambda^{\rm Ch}_{\ell {\mathscr m}}-2)^2 + 8 a \omega(m-a \omega)({}_{s}\lambda^{\rm Ch}_{\ell {\mathscr m}}-2)(5 {}_s \lambda^{\rm Ch}_{\ell {\mathscr m}}-4) \nonumber \\ & \qquad +48 (a\omega)^2\big[2({}_{s}\lambda^{\rm Ch}_{\ell {\mathscr m}}-2)+3(m-a\omega)^2\big], \end{align} \end{subequations} where \begin{equation} \label{eq:lambdaCh} {}_{s} \lambda^{\rm Ch}_{\ell {\mathscr m} \omega} := {}_{s} \lambda_{\ell {\mathscr m} \omega} + s^2 + s \end{equation} is the eigenvalue used by Chandrasekhar \cite{Chandrasekhar:1985kt}. This particular choice of $\mathcal{C}_{\ell {\mathscr m} \omega}$ ensures that the $s=+2$ and $s=-2$ modes represent the same physical perturbation.\footnote{An alternative proportionality constant can be derived such that the $s=+2$ and $s=-2$ modes have the same transmission coefficient; see \cite{Ori:2002uv} for details.} Finally, when written in terms of modes the homogeneous radiation gauge angular inversion equations can be algebraically inverted to give the modes of the Hertz potentials in terms of the modes of the Weyl scalar, \begin{align} \label{eq:angular-inversion-ORG} \psi^{\text{ORG}}_{\ell {\mathscr m} \omega} & = 16\frac{(-1)^m D\;\! {}_2\bar{\psi}_{-\omega\ell-m}+12 i M \omega\;\! 
{}_2\psi_{\ell {\mathscr m} \omega} }{|\mathcal{C}_{\ell {\mathscr m} \omega}|^2}, \\ \label{eq:angular-inversion-IRG} \psi^{\text{IRG}}_{\ell {\mathscr m} \omega} & = 16\frac{(-1)^m D\;\! {}_{-2}\bar{\psi}_{-\omega\ell-m}-12 i M \omega\;\! {}_{-2}\psi_{\ell {\mathscr m} \omega} }{|\mathcal{C}_{\ell {\mathscr m} \omega}|^2}, \end{align} where the separability ansatz for the Hertz potentials differs by a factor of $\zeta^{-4}$ from that of the Weyl scalars, \begin{align} \zeta^{-4} \psi^{\text{ORG}} &= \int_{-\infty}^\infty \sum_{\ell=2}^\infty \sum_{{\mathscr m}=-\ell}^\ell \, \psi^{\text{ORG}}_{\ell {\mathscr m} \omega}(r) \, {}_2 S_{\ell {\mathscr m}}(\theta, \phi; a \omega) e^{-i \omega t}\, d\omega, \\ \psi^{\text{IRG}} &= \int_{-\infty}^\infty \sum_{\ell=2}^\infty \sum_{{\mathscr m}=-\ell}^\ell \, \psi^{\text{IRG}}_{\ell {\mathscr m} \omega}(r) \, {}_{-2} S_{\ell {\mathscr m}}(\theta, \phi; a \omega) e^{-i \omega t} d\omega. \end{align} Alternatively, one can use the radial inversion equations to relate the asymptotic amplitudes of $\psi^{\text{IRG}}_{\ell {\mathscr m} \omega}$ to the asymptotic amplitudes of ${}_{2}\psi_{\ell {\mathscr m} \omega}$ and to relate the asymptotic amplitudes of $\psi^{\text{ORG}}_{\ell {\mathscr m} \omega}$ to the asymptotic amplitudes of ${}_{-2}\psi_{\ell {\mathscr m} \omega}$. Further details are given in \cite{Ori:2002uv} for the IRG case and in \cite{vandeMeent:2015lxa} for the ORG case. Note that in order to transform back to the time-domain solution, as a final step we must perform an inverse Fourier transform. This poses a challenge in gravitational self-force calculations, where non-smoothness of the solutions in the vicinity of the worldline leads to the Gibbs phenomenon of non-convergence of the inverse Fourier transform. Resolutions to this problem typically rely on avoiding directly transforming the inhomogeneous solution by using the methods of extended homogeneous or extended particular solutions.
For further details, see \cite{Hopper:2010uv,Hopper:2012ty}. \subsubsection{Sasaki-Nakamura transformation} In numerical implementations, the Teukolsky equation can be problematic to work with due to the presence of a long-ranged potential. One approach to this problem is to transform to an alternative master function that satisfies an equation with a more short-ranged potential. The Sasaki-Nakamura transformation is designed to do exactly this. It introduces a new function of the form \begin{equation} X \sim \begin{cases} \frac{\bar{\zeta}^{2}}{\zeta^{2}} (r^2+a^2)^{1/2} r^{2} \hbox{\ec\char'336}' \hbox{\ec\char'336}' \frac{1}{r^2} \zeta^{4} \psi_0 \\ (r^2+a^2)^{1/2} r^{2} \hbox{\ec\char'336} \hbox{\ec\char'336} \frac{1}{r^2} \zeta^{4} \psi_4 \end{cases}, \end{equation} where the factors of $\zeta$ ensure that these are purely radial operators.\footnote{This expression is appropriate when working with the Kinnersley tetrad; for the Carter tetrad both definitions for $X$ need to be scaled by a common factor of $\frac{\bar{\zeta}}{\zeta}$ to obtain a radial operator.} There is considerable freedom to rescale these expressions by inserting appropriate functions of $r$; see Ref.~\cite{Hughes:2000pf} for more details. In the conventions of that reference, the $X$ given here corresponds to $\sqrt{r^2+a^2} r^2 J_- J_- \frac{1}{r^2} R$ in the $s=-2$ case and to $\frac14 \sqrt{r^2+a^2} r^2 J_+ J_+ \frac{\Delta^2}{r^2} R$ in the $s=+2$ case. \subsection{Metric perturbations of Schwarzschild spacetime} \label{sec:schw-perturbations} On a Schwarzschild background spacetime, separability is readily achieved without having to rely on the Teukolsky formalism.
Writing the metric perturbation in terms of its null tetrad components, the components have GHP type \begin{alignat*}{5} s&=0: &\qquad h_{ln} &: \{0,0\}, &\quad h_{m{\bar{m}}} &: \{0,0\}, &\quad h_{ll} &: \{2,2\}, &\quad h_{nn} &: \{-2,-2\} \nonumber \\ s&=\pm1: &\qquad h_{lm} &: \{2,0\}, &\quad h_{l{\bar{m}}} &: \{0,2\}, &\quad h_{nm} &: \{0,-2\}, &\quad h_{n{\bar{m}}} &: \{-2,0\} \nonumber \\ s&=\pm2: &\qquad h_{mm} &: \{2,-2\}, &\quad h_{{\bar{m}}\mb} &: \{-2,2\}. & & & & \end{alignat*} Here we have gathered the components into scalar ($s=0$), vector ($s=\pm1$) and tensor ($s=\pm2$) sectors. In some instances, it is convenient to work with the trace-reversed metric perturbation, $\bar{h}_{\alpha\beta} = h_{\alpha\beta} - \frac12 h \,g_{\alpha \beta}$. In terms of null tetrad components, the trace is given by $h = -2 (h_{ln} - h_{m{\bar{m}}})$ so a trace reversal simply corresponds to the interchange $h_{ln} \leftrightarrow h_{m{\bar{m}}}$: $\bar{h}_{ln} = h_{m{\bar{m}}}$ and $\bar{h}_{m{\bar{m}}} = h_{ln}$, with all other components unchanged. The tetrad components may be decomposed into a basis of spin-weighted spherical harmonics \begin{align} h_{ab} &= \sum_{\ell=|s|}^\infty \sum_{m=-\ell}^\ell \, h_{ab}^{\ell {\mathscr m}}(t, r) \, {}_s Y_{\ell {\mathscr m}}(\theta, \phi) \end{align} where $s=0$ for $h_{ln}$, $h_{m{\bar{m}}}$, $h_{ll}$ and $h_{nn}$; $s=+1$ for $h_{lm}$ and $h_{nm}$; $s=-1$ for $h_{l{\bar{m}}}$ and $h_{n{\bar{m}}}$; $s=+2$ for $h_{mm}$; and $s=-2$ for $h_{{\bar{m}}\mb}$. Here, we have introduced the spin-weighted spherical harmonics ${}_s Y_{\ell {\mathscr m}}(\theta, \phi) = {}_s S_{\ell {\mathscr m}}(\theta, \phi; 0)$ with the associated eigenvalue ${}_s \lambda_\ell := {}_s \lambda_{{\ell\mathscr{m}}} (a\omega = 0) = \ell(\ell+1) - s(s+1)$.
In the Schwarzschild case the GHP derivative operators split into operators that (up to an overall factor of $\frac{1}{r}$) act only on the two-sphere, \begin{equation} \label{eq:edth-Schw} \hbox{\ec\char'360} = \tfrac{1}{\sqrt{2}r}( \partial_\theta + i \csc \theta \partial_\phi - s \cot\theta),\quad \hbox{\ec\char'360}' = \tfrac{1}{\sqrt{2}r}( \partial_\theta - i \csc \theta \partial_\phi + s \cot\theta), \end{equation} and operators that act only in the $t-r$ subspace\footnote{These expressions are obtained when working with the Carter tetrad. The equivalent operators for the Kinnersley tetrad are \begin{equation} \label{eq:th-Schw-Kinnersley} \hbox{\ec\char'336} = f^{-1} \partial_t + \partial_r, \quad \hbox{\ec\char'336}' = \tfrac12 (\partial_t -f \partial_r-2 b M/r^2). \end{equation} } \begin{equation} \label{eq:th-Schw-Carter} \hbox{\ec\char'336} = \frac{1}{\sqrt{2 f}}\bigg[\partial_t + f \partial_r - \frac{b M}{r^2}\bigg], \quad \hbox{\ec\char'336}' = \frac{1}{\sqrt{2 f}}\bigg[\partial_t - f \partial_r - \frac{b M}{r^2}\bigg]. \end{equation} The two-sphere operators act as spin-raising and lowering operators to relate spin-weighted spherical harmonics of different spin-weight \begin{subequations} \label{eq:spin-raising-lowering} \begin{align} \sqrt{2} r \,\hbox{\ec\char'360} \big[{}_s Y_{\ell {\mathscr m}}(\theta, \phi)\big] &= -\big[\ell(\ell+1)-s(s+1)\big]^{1/2} \, {}_{s+1} Y_{\ell {\mathscr m}} (\theta, \phi),\\ \sqrt{2} r \, \hbox{\ec\char'360}' \big[{}_s Y_{\ell {\mathscr m}}(\theta, \phi)\big] &= \big[\ell(\ell+1)-s(s-1)\big]^{1/2} \, {}_{s-1} Y_{\ell {\mathscr m}} (\theta, \phi). \end{align} \end{subequations} In particular, this provides a relationship between the spin-weighted spherical harmonics and the scalar spherical harmonics.
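As a quick numerical sanity check (not part of the original development), the spin-raising and spin-lowering relations can be verified for an axisymmetric mode: applying the spin-raising operator followed by the spin-lowering operator to a scalar harmonic should return $-\ell(\ell+1)$ times that harmonic. A minimal Python sketch, assuming NumPy and using the explicit harmonic $Y_{2,0}=\sqrt{5/(16\pi)}(3\cos^2\theta-1)$:

```python
import numpy as np

# Axisymmetric (m = 0) scalar spherical harmonic Y_{2,0}
def Y20(theta):
    return np.sqrt(5 / (16 * np.pi)) * (3 * np.cos(theta)**2 - 1)

def deriv(g, theta, h=1e-5):
    # second-order central finite difference
    return (g(theta + h) - g(theta - h)) / (2 * h)

def raise_op(g, s):
    # sqrt(2) r times the spin-raising operator on a spin-s, m = 0 function:
    # d/dtheta - s cot(theta)   (the phi-derivative drops out for m = 0)
    return lambda th: deriv(g, th) - s * g(th) / np.tan(th)

def lower_op(g, s):
    # sqrt(2) r times the spin-lowering operator on a spin-s, m = 0 function:
    # d/dtheta + s cot(theta)
    return lambda th: deriv(g, th) + s * g(th) / np.tan(th)

# Raising (s = 0 -> 1) then lowering (s = 1 -> 0) should give
# -ell(ell+1) = -6 times the original harmonic for ell = 2
composite = lower_op(raise_op(Y20, 0), 1)
theta = 0.8
assert abs(composite(theta) + 6 * Y20(theta)) < 1e-3
```

The tolerance reflects the accuracy of the nested finite differences; the two prefactors $-[\ell(\ell+1)-s(s+1)]^{1/2}$ and $+[\ell(\ell+1)-s(s-1)]^{1/2}$ combine to give the expected $-\ell(\ell+1)$.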
It is convenient to split the six vector and tensor sector components of the metric perturbation into real (even parity) and imaginary (odd parity) parts, representing whether they are even or odd under the transformation $(\theta,\phi)\rightarrow (\pi-\theta, \phi+\pi)$: \begin{alignat*}{6} h_{lm}^{\ell {\mathscr m}} &= h_{l,{\rm even}}^{\ell {\mathscr m}} + i\, h_{l,{\rm odd}}^{\ell {\mathscr m}},\qquad & h_{l{\bar{m}}}^{\ell {\mathscr m}} &= -h_{l,{\rm even}}^{\ell {\mathscr m}} + i\, h_{l,{\rm odd}}^{\ell {\mathscr m}},\\ h_{nm}^{\ell {\mathscr m}} &= h_{n,{\rm even}}^{\ell {\mathscr m}} + i\, h_{n,{\rm odd}}^{\ell {\mathscr m}},\qquad & h_{n{\bar{m}}}^{\ell {\mathscr m}} &= -h_{n,{\rm even}}^{\ell {\mathscr m}} + i\, h_{n,{\rm odd}}^{\ell {\mathscr m}}, \\ h_{mm}^{\ell {\mathscr m}} &= h_{2,{\rm even}}^{\ell {\mathscr m}} + i\, h_{2,{\rm odd}}^{\ell {\mathscr m}},\qquad & h_{{\bar{m}}\mb}^{\ell {\mathscr m}} &= h_{2,{\rm even}}^{\ell {\mathscr m}} - i\, h_{2,{\rm odd}}^{\ell {\mathscr m}}. \end{alignat*} The four scalar sector components are necessarily even parity, so we therefore have seven fields in the even-parity sector and three in the odd-parity sector. The even and odd parity sectors decouple, meaning that they can be solved for independently. In instances where there is symmetry under reflection about the equatorial plane this decoupling is more explicit in that the even parity sector only contributes for $\ell+{\mathscr m}$ even and the odd parity sector only contributes for $\ell+{\mathscr m}$ odd. Finally, we can also optionally further decompose into the frequency domain, \begin{align} h_{ab}^{\ell {\mathscr m}}(t,r) &= \int_{-\infty}^\infty h_{ab}^{\ell {\mathscr m} \omega}(r) e^{-i\omega t}\, d\omega \end{align} in order to obtain functions of $r$ only. 
This has the advantage of reducing the problem of computing the metric perturbation to that of solving systems of $7+3$ coupled ordinary differential equations, one for each $(\ell,{\mathscr m},\omega)$. \subsubsection{Alternative tensor bases} There is some freedom in the specific choice of basis into which tensors are decomposed. In particular, the relative scaling of the $l^\mu$ and $n^\mu$ tetrad vectors leads to a slightly different basis if one works with the Kinnersley tetrad rather than the Carter tetrad. It is also possible to work with alternative basis vectors spanning the $t-r$ space. In some instances it is convenient to work with coordinate basis vectors $\delta^\mu_t$ and $\delta^\mu_r$ rather than null vectors. One can also choose to omit the factor of $\frac{1}{r}$ in the definition of $m^\mu$ and ${\bar{m}}^\mu$. The choice of basis does not have a fundamental impact, but some choices lead to more straightforward or natural interpretations of the resulting equations. Additionally, as an alternative to a spin-weighted harmonic basis, one could equivalently work with an orthonormal basis of vector and tensor spherical harmonics, which are related to the spin-weighted spherical harmonics by \begin{subequations} \begin{align} Z_A^{\ell {\mathscr m}} &:= \big[\ell(\ell+1)\big]^{-1/2} D_A Y^{\ell {\mathscr m}} = \frac{1}{\sqrt{2}} \Big( {}_{-1}Y_{\ell {\mathscr m}} m_A - {}_{1}Y_{\ell {\mathscr m}} {\bar{m}}_A \Big) \\ Z_{AB}^{\ell {\mathscr m}} &:= \bigg[2\frac{(\ell-2)!}{(\ell+2)!}\bigg]^{1/2} \big[D_{A} D_B + \tfrac12 \ell(\ell+1) \Omega_{AB}\big] Y^{\ell {\mathscr m}} \nonumber \\ &= \frac{1}{\sqrt{2}} \Big( {}_{-2}Y_{\ell {\mathscr m}} m_A m_B + {}_{2}Y_{\ell {\mathscr m}} {\bar{m}}_A {\bar{m}}_B \Big).
\end{align} \end{subequations} for the even-parity sector and \begin{subequations} \begin{align} X_A^{\ell {\mathscr m}} &:= -\big[\ell(\ell+1)\big]^{-1/2}\epsilon_A{}^B D_B Y^{\ell {\mathscr m}} = -\frac{i}{\sqrt{2}} \Big( {}_{-1}Y_{\ell {\mathscr m}} m_A + {}_{1}Y_{\ell {\mathscr m}} {\bar{m}}_A \Big) \\ X_{AB}^{\ell {\mathscr m}} &:= -\bigg[2\frac{(\ell-2)!}{(\ell+2)!}\bigg]^{1/2} \epsilon_{(A}{}^C D_{B)} D_C Y^{\ell {\mathscr m}} = -\frac{i}{\sqrt{2}} \Big( {}_{-2}Y_{\ell {\mathscr m}} m_A m_B - {}_{2}Y_{\ell {\mathscr m}} {\bar{m}}_A {\bar{m}}_B \Big). \end{align} \end{subequations} for the odd-parity sector. Here, $m_A = \frac{1}{\sqrt{2}}[1, i \sin \theta]$ and ${\bar{m}}_A$ form a complex orthonormal basis on the two-sphere and are related to the two-sphere components of the tetrad vectors $m_\alpha$ and ${\bar{m}}_\alpha$ by a factor of $r$. The differential operator $D$ is the covariant derivative on the two-sphere with metric $\Omega_{AB} = {\rm diag}(1,\sin^2 \theta)$. Note that the definitions of the vector and tensor harmonics given here differ from those of Martel and Poisson \cite{Martel:2005ir} in their overall normalisation but are otherwise the same. The choice here ensures that the harmonics are unit-normalised, \begin{subequations} \begin{align} \int Z^A_{\ell {\mathscr m}}(\theta, \phi) \bar{Z}_A^{\ell' {\mathscr m}'}(\theta, \phi) d\Omega = \delta_{\ell \ell'} \delta_{{\mathscr m} {\mathscr m}'}, \\ \int X^A_{\ell {\mathscr m}}(\theta, \phi) \bar{X}_A^{\ell' {\mathscr m}'}(\theta, \phi) d\Omega = \delta_{\ell \ell'} \delta_{{\mathscr m} {\mathscr m}'}, \\ \int Z^{AB}_{\ell {\mathscr m}}(\theta, \phi) \bar{Z}_{AB}^{\ell' {\mathscr m}'}(\theta, \phi) d\Omega = \delta_{\ell \ell'} \delta_{{\mathscr m} {\mathscr m}'}, \\ \int X^{AB}_{\ell {\mathscr m}}(\theta, \phi) \bar{X}_{AB}^{\ell' {\mathscr m}'}(\theta, \phi) d\Omega = \delta_{\ell \ell'} \delta_{{\mathscr m} {\mathscr m}'}.
\end{align} \end{subequations} For simplicity we opt to work exclusively with a spin-weighted spherical harmonic basis, but point out that equivalent results hold for other choices of basis. In particular, the expressions that follow can be transformed to the commonly-used Barack-Lousto-Sago \cite{Barack:2005nr,Barack:2007tm,Wardell:2015ada} basis, $h^{(i)}_{\ell {\mathscr m}}$, the $A$--$K$ basis \cite{Chen:2016plo,Thompson:2018lgb}, the Martel-Poisson basis \cite{Martel:2005ir}, and the Berndtson basis \cite{Berndtson:2009hp} using the relations given in Table \ref{tab:bases}. The table also gives the translation between a Carter null-tetrad basis $(l^\alpha,n^\alpha,m^\alpha,{\bar{m}}^\alpha)$ and a $t$--$r$ coordinate basis $(\delta_t^\alpha,\delta_r^\alpha,m^\alpha,{\bar{m}}^\alpha)$. Note that the Barack-Lousto-Sago expressions involve the non-trace-reversed metric. A trace reversal in the Barack-Lousto-Sago basis corresponds to the interchange $h^{(3)}_{\ell{\mathscr m}} \leftrightarrow h^{(6)}_{\ell{\mathscr m}}$, consistent with the trace reversal in the null tetrad basis corresponding to the interchange $h_{ln}^{\ell{\mathscr m}} \leftrightarrow h_{m{\bar{m}}}^{\ell{\mathscr m}}$. \begin{table} \label{tab:bases} \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|} \hline Tetrad & Barack-Lousto-Sago & $A$--$K$ & Martel-Poisson & Berndtson & Coord.
\\ \hline \hline $\frac{f}{2} \Big(h_{ll}^{\ell{\mathscr m}}+h_{nn}^{\ell{\mathscr m}} + 2 h_{ln}^{\ell{\mathscr m}}\Big)$ & $\frac{1}{2 r}\Big(h^{(1)}_{\ell{\mathscr m}}+f h^{(3)}_{\ell{\mathscr m}}\Big)$ & $A^{\ell{\mathscr m}}$ & $h^{\ell{\mathscr m}}_{tt}$ & $f H_0$ & $h_{tt}^{\ell{\mathscr m}}$ \\ \hline $\frac{1}{2f} \Big(h_{ll}^{\ell{\mathscr m}}+h_{nn}^{\ell{\mathscr m}} - 2 h_{ln}^{\ell{\mathscr m}}\Big)$ & $\frac{1}{2 r f^2}\Big(h^{(1)}_{\ell{\mathscr m}}- fh^{(3)}_{\ell{\mathscr m}}\Big)$ & $K^{\ell{\mathscr m}}$ & $h^{\ell{\mathscr m}}_{rr}$ & $\frac{1}{f} H_2$ & $h_{rr}^{\ell{\mathscr m}}$ \\ \hline $\frac{1}{2} \Big(h_{ll}^{\ell{\mathscr m}}-h_{nn}^{\ell{\mathscr m}}\Big)$ & $\frac{1}{2 r f}h^{(2)}_{\ell{\mathscr m}}$ & $-D^{\ell{\mathscr m}}$ & $h^{\ell{\mathscr m}}_{tr}$ & $H_1$ & $h_{tr}^{\ell{\mathscr m}}$ \\ \hline $h_{m{\bar{m}}}^{\ell{\mathscr m}}$ & $\frac{1}{2 r} h^{(6)}_{\ell{\mathscr m}}$ & $E^{\ell{\mathscr m}}$ & $K^{\ell{\mathscr m}}$ & $K$ & $h_{m{\bar{m}}}^{\ell{\mathscr m}}$ \\ \hline $\sqrt{\frac{f}{2}}\Big(h_{l,\rm{even}}^{\ell{\mathscr m}} + h_{n,\rm{even}}^{\ell{\mathscr m}}\Big)$ & $-\frac{h^{(4)}_{\ell{\mathscr m}}}{2 r \sqrt{2} \sqrt{\ell(\ell+1)}}$ & $\frac{\sqrt{\ell(\ell+1)}}{\sqrt{2}} B^{\ell{\mathscr m}}$ & $-\frac{\sqrt{\ell(\ell+1)}}{r \sqrt{2}} h_0$ & $-\frac{\sqrt{\ell(\ell+1)}}{r \sqrt{2}} j_t^{\ell{\mathscr m}}$ & $h_{t,\rm{even}}^{\ell{\mathscr m}}$ \\ \hline $\sqrt{\frac{1}{2f}}\Big(h_{l,\rm{even}}^{\ell{\mathscr m}} - h_{n,\rm{even}}^{\ell{\mathscr m}}\Big)$ & $-\frac{h^{(5)}_{\ell{\mathscr m}}}{2 r f \sqrt{2} \sqrt{\ell(\ell+1)}} $ & $-\frac{\sqrt{\ell(\ell+1)}}{\sqrt{2}} H^{\ell{\mathscr m}}$ & $-\frac{\sqrt{\ell(\ell+1)}}{r \sqrt{2}} h_1$ & $-\frac{\sqrt{\ell(\ell+1)}}{r \sqrt{2}} j_r^{\ell{\mathscr m}}$ & $h_{r,\rm{even}}^{\ell{\mathscr m}}$ \\ \hline $\sqrt{\frac{f}{2}}\Big(h_{l,\rm{odd}}^{\ell{\mathscr m}} + h_{n,\rm{odd}}^{\ell{\mathscr m}}\Big)$ & $\frac{h^{(8)}_{\ell{\mathscr m}}}{2 r \sqrt{2} 
\sqrt{\ell(\ell+1)}} $ & $\frac{\sqrt{\ell(\ell+1)}}{\sqrt{2}} C^{\ell{\mathscr m}}$ & $-\frac{\sqrt{\ell(\ell+1)}}{r \sqrt{2}} h_t^{\ell{\mathscr m}}$ & $\frac{\sqrt{\ell(\ell+1)}}{r \sqrt{2}} h_0$ & $h_{t,\rm{odd}}^{\ell{\mathscr m}}$ \\ \hline $\sqrt{\frac{1}{2f}}\Big(h_{l,\rm{odd}}^{\ell{\mathscr m}} - h_{n,\rm{odd}}^{\ell{\mathscr m}}\Big)$ & $\frac{h^{(9)}_{\ell{\mathscr m}}}{2 r f \sqrt{2} \sqrt{\ell(\ell+1)}}$ & $-\frac{\sqrt{\ell(\ell+1)}}{\sqrt{2}} J^{\ell{\mathscr m}}$ & $-\frac{\sqrt{\ell(\ell+1)}}{r \sqrt{2}} h_r^{\ell{\mathscr m}}$ & $\frac{\sqrt{\ell(\ell+1)}}{r \sqrt{2}} h_1$ & $h_{r,\rm{odd}}^{\ell{\mathscr m}}$ \\ \hline $h_{2,\rm{even}}^{\ell{\mathscr m}}$ & $\frac{1}{2r} \sqrt{\frac{(\ell-2)!}{(\ell+2)!}} h^{(7)}_{\ell{\mathscr m}}$ & $\frac12 \sqrt{\frac{(\ell+2)!}{(\ell-2)!}} F^{\ell{\mathscr m}}$ & $\frac12 \sqrt{\frac{(\ell+2)!}{(\ell-2)!}} G^{\ell{\mathscr m}}$ & $\sqrt{\frac{(\ell+2)!}{(\ell-2)!}} G^{\ell{\mathscr m}}$ & $h_{2,\rm{even}}^{\ell{\mathscr m}}$ \\ \hline $h_{2,\rm{odd}}^{\ell{\mathscr m}}$ & $-\frac{1}{2r} \sqrt{\frac{(\ell-2)!}{(\ell+2)!}} h^{(10)}_{\ell{\mathscr m}}$ & $\frac12 \sqrt{\frac{(\ell+2)!}{(\ell-2)!}} G^{\ell{\mathscr m}}$ & $\frac{1}{2r^2} \sqrt{\frac{(\ell+2)!}{(\ell-2)!}} h_2^{\ell{\mathscr m}}$ & $\frac{1}{r^2} \sqrt{\frac{(\ell+2)!}{(\ell-2)!}} h_2$ & $h_{2,\rm{odd}}^{\ell{\mathscr m}}$ \\ \hline \end{tabular}} \caption{Relationship between choices of basis for perturbations of Schwarzschild spacetime.} \end{table} \subsection{Regge-Wheeler formalism and Regge-Wheeler gauge} The Regge-Wheeler formalism is based on the idea of constructing solutions to the linearised Einstein equations from solutions to the scalar wave equation with a potential. 
In the case of the Regge-Wheeler master function, it is a solution of \begin{equation} \label{eq:RW} \bigg[\Box + \frac{2Ms^2}{r^3} \bigg] \psi^{\rm RW}_s = S_s, \end{equation} where $s$ is the spin of the field ($s=0$ for scalar fields, $s=1$ for electromagnetic fields and $s=2$ for gravitational fields). Equation \eqref{eq:RW} is separable in Schwarzschild spacetime using the ansatz \begin{align} \psi^{\rm RW}_s &= \sum_{\ell=0}^\infty \sum_{m=-\ell}^\ell \, \frac{1}{r} \psi^{\rm RW}_{s \ell {\mathscr m}}(t, r) \, {}_0 Y_{\ell {\mathscr m}}(\theta, \phi), \end{align} with $\psi^{\rm RW}_{s \ell {\mathscr m}}(t,r)$ satisfying the Regge-Wheeler equation, \begin{equation} \label{eq:RWl} \bigg[ \dfrac{\partial}{\partial r} \bigg(f \dfrac{\partial }{\partial r}\bigg) - \frac{1}{f}\frac{\partial^2 }{\partial t^2} - \bigg(\frac{\ell(\ell+1)}{r^2} + \frac{2M(1-s^2)}{r^3} \bigg)\bigg]\psi^{\rm RW}_{s\ell {\mathscr m}} = S^{\rm RW}_{s\ell {\mathscr m}}. \end{equation} In order to study metric perturbations of Schwarzschild spacetime, we consider the $s=2$ case. The Regge-Wheeler master function is then defined in terms of the metric perturbation by \begin{equation} \psi^{\rm RW}_{2 \ell {\mathscr m}} := -\frac{f}{r}\bigg[\frac{\sqrt{2} r\, h^{\ell{\mathscr m}}_{r,{\rm odd}}}{\sqrt{\ell(\ell+1)}} + \frac{r^2 \partial_r h^{\ell{\mathscr m}}_{2,{\rm odd}} }{\sqrt{(\ell-1)\ell(\ell+1)(\ell+2)}} \bigg]. \end{equation} It satisfies the $s=2$ Regge-Wheeler equation with source derived from the mode-decomposed stress-energy tensor,\footnote{Different conventions for the source exist in the literature.
For example, the source given in Ref.~\cite{Hopper:2010uv} differs from that given here by a factor of $f$; this is a consequence of the left hand side of their Regge-Wheeler equation [Eq.~(2.13) in Ref.~\cite{Hopper:2010uv}] also differing from our Eq.~\eqref{eq:RWl} by a factor of $f$.} \begin{equation} S^{\rm RW}_{2\ell {\mathscr m}} = 16\pi \bigg[\frac{\sqrt{2} f T^{\ell{\mathscr m}}_{r,{\rm odd}}}{\sqrt{\ell(\ell+1)}} + \frac{r \,\partial_r (f T^{\ell{\mathscr m}}_{2,{\rm odd}})}{\sqrt{2 (\ell-1)\ell(\ell+1)(\ell+2)}} \bigg]. \end{equation} Rather than working with the Regge-Wheeler master function itself, it is often preferable to introduce two closely related functions: the Cunningham-Price-Moncrief (CPM) master function defined by \begin{equation} \psi^{\rm CPM}_{\ell {\mathscr m}} := -\frac{\sqrt{2}}{\sqrt{\ell(\ell+1)}} \frac{2r}{(\ell-1)(\ell+2)}\bigg[\partial_r (r h^{\ell{\mathscr m}}_{t,{\rm odd}}) - \partial_t (r h^{\ell{\mathscr m}}_{r,{\rm odd}}) - 2 h^{\ell{\mathscr m}}_{t,{\rm odd}} \bigg], \end{equation} and the Zerilli-Moncrief (ZM) master function defined by \begin{equation} \psi^{\rm ZM}_{\ell {\mathscr m}} := \frac{2r}{\ell(\ell+1)}\bigg[ \tilde{K}^{\ell\mathscr{m}} + \frac{2}{\Lambda} (f^2 \tilde{h}_{rr}^{\ell\mathscr{m}}- r f \partial_r \tilde{K}^{\ell\mathscr{m}}) \bigg], \end{equation} where $\Lambda := (\ell-1)(\ell+2) + \frac{6M}{r}$ and \begin{subequations} \begin{align} \tilde{K}^{\ell\mathscr{m}} &:= h_{m{\bar{m}}}^{\ell{\mathscr m}} - \frac{2 f \, h^{\ell{\mathscr m}}_{r,{\rm even}}}{\sqrt{\ell(\ell+1)}} + \frac{\Big[\ell(\ell+1) - 2 r\, f\, \partial_r \Big] h^{\ell{\mathscr m}}_{2,{\rm even}}}{\sqrt{2(\ell-1)\ell(\ell+1)(\ell+2)}}, \\ \tilde{h}_{rr}^{\ell\mathscr{m}} &:= h_{rr}^{\ell\mathscr{m}} - \frac{2 \partial_{r} (r\, h^{\ell{\mathscr m}}_{r,{\rm even}})}{\sqrt{\ell(\ell+1)}} + \frac{\partial_{r} \big( r^2 \partial_{r} h^{\ell{\mathscr m}}_{2,{\rm even}} \big)}{\sqrt{2(\ell-1)\ell(\ell+1)(\ell+2)}} \end{align} \end{subequations} are gauge
invariant fields. The CPM master function satisfies the same $s=2$ Regge-Wheeler equation as the Regge-Wheeler master function, but with a different source given by \begin{equation} S^{\rm CPM}_{\ell {\mathscr m}} = 16 \pi \frac{\sqrt{2}}{\sqrt{\ell(\ell+1)}} \frac{2r}{(\ell-1)(\ell+2)} \bigg[\partial_r (r\, T^{\ell{\mathscr m}}_{t,{\rm odd}}) - \partial_t (r\, T^{\ell{\mathscr m}}_{r,{\rm odd}}) \bigg]. \end{equation} The RW and CPM master functions are related by a time derivative (plus source terms), \begin{equation} \psi^{\rm RW}_{2 \ell {\mathscr m}} = \frac12 \partial_t \psi^{\rm CPM}_{\ell {\mathscr m}} - \frac{16 \pi r^2 f}{(\ell-1)(\ell+2)} \frac{\sqrt{2}}{\sqrt{\ell(\ell+1)}} T^{\ell{\mathscr m}}_{r,{\rm odd}}. \end{equation} The ZM master function satisfies the Zerilli equation (the Regge-Wheeler equation with a different potential), \begin{equation} \bigg[ \dfrac{\partial}{\partial r} \bigg(f \dfrac{\partial }{\partial r}\bigg) - \frac{1}{f}\frac{\partial^2}{\partial t^2} - V^{\rm ZM} \bigg]\psi^{\rm ZM}_{\ell {\mathscr m}} = S^{\rm ZM}_{\ell {\mathscr m}}, \end{equation} where \begin{equation} V^{\rm ZM} = \frac{\ell(\ell+1)}{r^2}- \frac{6M}{r^3} + \frac{72 M^2 f}{\Lambda^2 r^4} - \frac{24 M (r-3M)}{\Lambda r^4 }, \end{equation} and where the ZM source is \begin{align} S^{\rm ZM}_{\ell {\mathscr m}} &= \frac{4f}{\Lambda} \frac{16\pi}{\sqrt{\ell(\ell+1)}} T^{\ell{\mathscr m}}_{r,{\rm even}} - \frac{16\pi\sqrt{2}}{\sqrt{(\ell-1)\ell(\ell+1)(\ell+2)}} r \, T^{\ell{\mathscr m}}_{2,{\rm even}} \nonumber \\ &+ \frac{2}{\ell(\ell+1)\Lambda}\Big\{\Big[\frac{r}{\Lambda}\Big((\ell-1)(\ell+2)(\ell^2+\ell-4) + 12 (\ell^2+\ell-5)\frac{M}{r} + 84 \frac{M^2}{r^2}\Big)\nonumber \\ & \qquad\qquad -2r^2 f \partial_r \Big](f\, T^{\ell{\mathscr m}}_{rr}-f^{-1}\, T^{\ell{\mathscr m}}_{tt}) + \frac{24M}{\Lambda} f^2 T^{\ell{\mathscr m}}_{rr} + 2 r\,f\,T^{\ell{\mathscr m}}_{m{\bar{m}}} \Big\}.
\end{align} \subsubsection{Regge-Wheeler formalism in the frequency domain} Transforming to the frequency domain, the Regge-Wheeler and Zerilli equations become a set of ordinary differential equations, one for each $(\ell,{\mathscr m},\omega)$ mode. Solutions to these equations may be written in terms of a pair of homogeneous mode basis functions chosen according to their asymptotic behaviour at the four null boundaries to the spacetime. For radiative ($\omega \ne 0$) modes, the four common choices are denoted ``in'', ``up'', ``out'' and ``down'', with the same interpretation as described in Sec.~\ref{sec:KerrModes} for the Teukolsky equation. These have asymptotic behaviour given by \begin{subequations} \begin{alignat}{5} \label{eq:bcXin} {}_s X^{\text{in}}_{\ell {\mathscr m} \omega}(r) &\sim \Big\{& \begin{array}{c} 0 \\ {}_s X^{\text{in,ref}}_{\ell {\mathscr m} \omega} e^{+i\omega r_*} \end{array}& \begin{array}{c} + \\ + \end{array} \begin{array}{c} {}_s X^{\text{in,trans}}_{\ell {\mathscr m} \omega} e^{-i \omega r_*} \\ {}_s X^{\text{in,inc}}_{\ell {\mathscr m} \omega} e^{-i\omega r_*} \end{array} &\qquad \begin{array}{l} r \to 2M\\ r \to \infty \end{array} \\ \label{eq:bcXup} {}_s X^{\text{up}}_{\ell {\mathscr m} \omega}(r) &\sim \Big\{& \begin{array}{c} {}_s X^{\text{up,inc}}_{\ell {\mathscr m} \omega} e^{+i \omega r_*} \\ {}_s X^{\text{up,trans}}_{\ell {\mathscr m} \omega} e^{+i\omega r_*} \end{array}& \begin{array}{c} + \\ + \end{array} \begin{array}{c} {}_s X^{\text{up,ref}}_{\ell {\mathscr m} \omega} e^{-i \omega r_*} \\ 0 \end{array} &\qquad \begin{array}{l} r \to 2M\\ r \to \infty \end{array} \\ \label{eq:bcXout} {}_s X^{\text{out}}_{\ell {\mathscr m} \omega}(r) &\sim \Big\{& \begin{array}{c} {}_s X^{\text{out,trans}}_{\ell {\mathscr m} \omega} e^{+i \omega r_*} \\ {}_s X^{\text{out,inc}}_{\ell {\mathscr m} \omega} e^{+i \omega r_*} \end{array}& \begin{array}{c} + \\ + \end{array} \begin{array}{c} 0 \\ {}_s X^{\text{out,ref}}_{\ell {\mathscr m} \omega}
e^{-i \omega r_*} \end{array} &\qquad \begin{array}{l} r \to 2M\\ r \to \infty \end{array} \\ \label{eq:bcXdown} {}_s X^{\text{down}}_{\ell {\mathscr m} \omega}(r) &\sim \Big\{& \begin{array}{c} {}_s X^{\text{down,ref}}_{\ell {\mathscr m} \omega} e^{+i \omega r_*} \\ 0 \end{array}& \begin{array}{c} + \\ + \end{array} \begin{array}{c} {}_s X^{\text{down,inc}}_{\ell {\mathscr m} \omega} e^{-i \omega r_*} \\ {}_s X^{\text{down,trans}}_{\ell {\mathscr m} \omega} e^{-i \omega r_*} \end{array} & \begin{array}{l} r \to 2M\\ r \to \infty \end{array} \end{alignat} \end{subequations} where $r_* = r + 2 M \ln (\frac{r}{2M}-1)$ is the Regge-Wheeler tortoise coordinate. Inhomogeneous solutions of the Regge-Wheeler equation can then be written in terms of a linear combination of the basis functions, \begin{align} \label{eq:RWInhomogeneousModes} {}_s \psi_{\ell {\mathscr m} \omega}(r) &= {}_s C^{\text{in}}_{\ell {\mathscr m} \omega}(r) {}_s X^{\text{in}}_{\ell {\mathscr m} \omega}(r)+{}_s C^{\text{up}}_{\ell {\mathscr m} \omega}(r) {}_s X^{\text{up}}_{\ell {\mathscr m} \omega}(r), \end{align} where the weighting coefficients are determined by variation of parameters, \begin{subequations} \label{eq:weighting-coefficients-RW} \begin{align} {}_s C_{\ell {\mathscr m} \omega}^{\text{in}}(r) &= \int_{r}^{\infty} \frac{{}_s X^{\text{up}}_{\ell {\mathscr m} \omega}(r')}{W(r')f(r')} {}_s S_{\ell {\mathscr m} \omega}(r') dr', \\ {}_s C_{\ell {\mathscr m} \omega}^{\text{up}}(r) &= \int^{r}_{2M} \frac{{}_s X^{\text{in}}_{\ell {\mathscr m} \omega}(r')}{W(r')f(r')} {}_s S_{\ell {\mathscr m} \omega}(r') dr', \end{align} \end{subequations} with $W(r) = {}_s X^{\text{in}}_{\ell {\mathscr m} \omega}(r) \partial_r [{}_s X^{\text{up}}_{\ell {\mathscr m} \omega}(r)] - {}_s X^{\text{up}}_{\ell {\mathscr m} \omega}(r) \partial_r [{}_s X^{\text{in}}_{\ell {\mathscr m} \omega}(r)]$ the Wronskian [in practice, it is convenient to use the fact that $f(r) W(r) = \text{const}$].
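The construction of the homogeneous basis functions can be sketched numerically. The following Python code (an illustrative sketch, assuming NumPy and SciPy; the parameter values $M=1$, $s=2$, $\ell=2$, $\omega=0.5$ are arbitrary choices) integrates the frequency-domain Regge-Wheeler equation in the tortoise coordinate, posing ingoing data near the horizon for the ``in'' solution and outgoing data at large radius for the ``up'' solution, and then checks that $f(r)W(r)$ is indeed independent of $r$; since $dX/dr_* = f\,dX/dr$, this combination is simply the Wronskian formed with tortoise-coordinate derivatives:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (arbitrary choices): s = 2, ell = 2 mode
M, s, ell, omega = 1.0, 2, 2, 0.5

def f(r):
    return 1.0 - 2.0 * M / r

def V(r):
    # Regge-Wheeler potential; in tortoise form it carries an extra factor f(r)
    return ell * (ell + 1) / r**2 + 2.0 * M * (1 - s**2) / r**3

def rstar(r):
    return r + 2.0 * M * np.log(r / (2.0 * M) - 1.0)

def rhs(rs, y):
    # y = [Re X, Im X, Re dX/dr*, Im dX/dr*, r];  dr/dr* = f(r)
    X = y[0] + 1j * y[1]
    dX = y[2] + 1j * y[3]
    r = y[4]
    d2X = (f(r) * V(r) - omega**2) * X  # d^2 X / dr*^2
    return [dX.real, dX.imag, d2X.real, d2X.imag, f(r)]

def integrate(r0, sign, rs_end, rs_eval):
    # sign = -1: ingoing e^{-i omega r*};  sign = +1: outgoing e^{+i omega r*}
    X0 = np.exp(sign * 1j * omega * rstar(r0))
    dX0 = sign * 1j * omega * X0
    y0 = [X0.real, X0.imag, dX0.real, dX0.imag, r0]
    sol = solve_ivp(rhs, (rstar(r0), rs_end), y0, t_eval=rs_eval,
                    method="DOP853", rtol=1e-10, atol=1e-12)
    return sol.y[0] + 1j * sol.y[1], sol.y[2] + 1j * sol.y[3]

rs_eval = np.array([10.0, 20.0, 30.0])
# "in" solution: purely ingoing just outside the horizon, integrated outwards
Xin, dXin = integrate(2.0 * M * (1 + 1e-6), -1, 35.0, rs_eval)
# "up" solution: purely outgoing at large radius, integrated inwards
Xup, dXup = integrate(300.0, +1, 5.0, rs_eval[::-1])
Xup, dXup = Xup[::-1], dXup[::-1]

# f(r) W(r) expressed with tortoise-coordinate derivatives; must be constant
fW = Xin * dXup - Xup * dXin
assert np.allclose(fW, fW[0], rtol=1e-5)
```

Carrying the radius $r$ along as an auxiliary variable (via $dr/dr_*=f$) avoids having to invert the tortoise-coordinate relation explicitly.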
\subsubsection{Transformation between Regge-Wheeler and Zerilli solutions} Homogeneous solutions to the Zerilli equation can be obtained from homogeneous solutions to the Regge-Wheeler equation by applying differential operators, \begin{subequations} \begin{align} X^{\text{ZM,up}}_{\ell{\mathscr m}\omega}&=\frac{\Big[(\ell-1)\ell(\ell+1)(\ell+2)+\tfrac{72 M^2 f} {r^2 \Lambda}\Big]{}_2 X^{\text{RW,up}}_{\ell{\mathscr m}\omega} +12 M f \frac{d{}_2 X^{\text{RW,up}}_{\ell{\mathscr m}\omega}}{dr}}{(\ell-1)\ell(\ell+1)(\ell+2)+12 i \omega M}, \\ X^{\text{ZM,in}}_{\ell{\mathscr m}\omega}&=\frac{\Big[(\ell-1)\ell(\ell+1)(\ell+2)+\tfrac{72 M^2 f} {r^2 \Lambda}\Big]{}_2 X^{\text{RW,in}}_{\ell{\mathscr m}\omega} +12 M f \frac{d{}_2 X^{\text{RW,in}}_{\ell{\mathscr m}\omega}}{dr}}{(\ell-1)\ell(\ell+1)(\ell+2)-12 i \omega M}. \end{align} \end{subequations} The constant of proportionality here is chosen such that the transmission coefficients of the two Zerilli solutions are the same as those of the corresponding Regge-Wheeler solutions. \subsubsection{Transformation between Regge-Wheeler and Teukolsky formalism} The modes of the CPM master function are related to the modes of the Teukolsky radial function by the Chandrasekhar transformation, \begin{subequations} \label{eq:ChandrasekharTransformation} \begin{align} {}_2 \psi_{\ell {\mathscr m} \omega} &= -\tfrac{i D}{4 r^2} \mathcal{D}^\dag \mathcal{D}^\dag \big(r \psi^{\rm CPM}_{\ell{\mathscr m}\omega}\big), \\ {}_{-2} \psi_{\ell {\mathscr m} \omega} &= -\tfrac{i D}{16} r^2 f^2 \mathcal{D} \mathcal{D} \big(r \psi^{\rm CPM}_{\ell{\mathscr m}\omega}\big), \end{align} \end{subequations} where $D = \sqrt{(\ell-1)\ell(\ell+1)(\ell+2)}$ is the Schwarzschild limit of the constant that appears in the Teukolsky-Starobinsky identities, Eq.~\eqref{eq:TS_D}.
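The claim that the Regge-Wheeler-to-Zerilli map sends solutions to solutions can be checked symbolically. The following sketch (assuming SymPy) applies the transformation with the derivative term $12Mf\,d/dr$ of Chandrasekhar's relation, substitutes the result into the frequency-domain Zerilli equation, eliminates second and higher derivatives using the Regge-Wheeler equation, and spot-checks that the residual vanishes at exact rational parameter values; the constant denominator is dropped, since an overall constant cannot affect whether the linear Zerilli equation is satisfied:

```python
import sympy as sp

r, M, w, l = sp.symbols('r M omega ell', positive=True)
X = sp.Function('X')(r)

f = 1 - 2 * M / r
Lam = (l - 1) * (l + 2) + 6 * M / r
D4 = (l - 1) * l * (l + 1) * (l + 2)

# s = 2 Regge-Wheeler and Zerilli potentials in the conventions used here
V_RW = l * (l + 1) / r**2 - 6 * M / r**3
V_ZM = (l * (l + 1) / r**2 - 6 * M / r**3
        + 72 * M**2 * f / (Lam**2 * r**4)
        - 24 * M * (r - 3 * M) / (Lam * r**4))

# The frequency-domain Regge-Wheeler equation fixes X'' in terms of X, X'
rw = sp.expand((f * X.diff(r)).diff(r) + (w**2 / f) * X - V_RW * X)
X2 = sp.solve(sp.Eq(rw, 0), X.diff(r, 2))[0]
X3 = X2.diff(r).subs(X.diff(r, 2), X2)

# Transformed function (constant denominator dropped)
XZ = (D4 + 72 * M**2 * f / (r**2 * Lam)) * X + 12 * M * f * X.diff(r)

# Residual of the Zerilli equation, reduced on-shell
res = (f * XZ.diff(r)).diff(r) + (w**2 / f) * XZ - V_ZM * XZ
res = res.subs(X.diff(r, 3), X3).subs(X.diff(r, 2), X2)

# Replace X, X' by symbols and check their coefficients vanish at
# exact rational sample values of (r, M, ell, omega)
a, b = sp.symbols('a b')
res_ab = sp.expand(res.subs(X.diff(r), b).subs(X, a))
vals = {r: sp.Rational(7, 2), M: 1, l: 2, w: sp.Rational(1, 3)}
assert res_ab.coeff(a).subs(vals) == 0
assert res_ab.coeff(b).subs(vals) == 0
```

Because the coefficients are rational functions evaluated at exact rationals, the check involves no floating-point tolerance.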
In the absence of sources this can be inverted to give \begin{align} \psi^{\rm CPM}_{\ell{\mathscr m}\omega} &= \tfrac{1}{D \mathcal{C}_{\ell{\mathscr m}\omega}} r^3 \mathcal{D}^\dag \mathcal{D}^\dag \big(\tfrac{1}{r^2} {}_{-2} \psi_{\ell {\mathscr m} \omega} \big), \\ \psi^{\rm CPM}_{\ell{\mathscr m}\omega} &= \tfrac{1}{4D \mathcal{C}^\dag_{\ell{\mathscr m}\omega}} r^2 f^{-1} \mathcal{D} r^2 f^2 \mathcal{D} \big(r f {}_{2} \psi_{\ell {\mathscr m} \omega} \big), \end{align} where $\mathcal{C}_{\ell{\mathscr m}\omega} = D - 12 i M \omega$ is the Schwarzschild limit of the second constant that appears in the Teukolsky-Starobinsky identities, Eq.~\eqref{eq:TS_C}. \subsubsection{Gravitational waves} As in the radiation gauge case, the gravitational wave strain can be determined directly from $\psi^{\rm ZM}$ and $\psi^{\rm CPM}$. There is a slight subtlety in that the Regge-Wheeler-Zerilli gauge in which the metric is typically reconstructed is not compatible with the transverse-traceless gauge in which gravitational waves are normally defined (it is easy to see this since $h_{mm} = 0 = h_{{\bar{m}}\mb}$ in the Regge-Wheeler-Zerilli gauge). Instead, we can use the Chandrasekhar transformation in Eq.~\eqref{eq:ChandrasekharTransformation} to first transform to $\psi_4$ and then compute the strain from that as we did in radiation gauge. Doing so we have \begin{equation} r(h_+ - i h_\times) = \sum_{\ell=2}^\infty \sum_{m=-\ell}^\ell \, \frac{D}{2} (\psi^{\rm ZM}_{\ell {\mathscr m}}-i \psi^{\rm CPM}_{\ell {\mathscr m}}) {}_{-2} Y_{\ell {\mathscr m}}(\theta, \phi), \label{eq:RW-waveform} \end{equation} where it is understood that equality holds in the limit $r \to \infty$ (at fixed $u=t-r^*$). 
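To make the bookkeeping in the waveform formula concrete, here is a minimal sketch (assuming NumPy, with made-up mode amplitudes, and using the explicit $m=0$ harmonic ${}_{-2}Y_{2,0}=\sqrt{15/(32\pi)}\sin^2\theta$) evaluating the two polarisations for a single mode:

```python
import numpy as np

# Toy (made-up) mode amplitudes for the ell = 2, m = 0 mode
psi_ZM, psi_CPM = 0.3, 0.1
ell = 2
D = np.sqrt((ell - 1) * ell * (ell + 1) * (ell + 2))

def sY_m2_20(theta):
    # {}_{-2}Y_{2,0}(theta, phi) = sqrt(15/(32 pi)) sin^2(theta)
    return np.sqrt(15 / (32 * np.pi)) * np.sin(theta)**2

theta = np.pi / 3
rh = 0.5 * D * (psi_ZM - 1j * psi_CPM) * sY_m2_20(theta)  # r (h_+ - i h_x)
h_plus, h_cross = rh.real, -rh.imag

# For an m = 0 mode the harmonic is real, so the two polarisations are
# separately proportional to the even and odd master functions
assert np.isclose(h_cross / h_plus, psi_CPM / psi_ZM)
```

The final assertion checks only the complex bookkeeping of the formula; a full waveform would of course sum over all $(\ell,{\mathscr m})$ modes.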
If we work in the frequency domain and compute the ``in'' and ``up'' mode functions with normalisation such that transmission coefficients are unity, ${}_s X^{\text{in,trans}}_{\ell {\mathscr m} \omega} = 1 = {}_s X^{\text{up,trans}}_{\ell {\mathscr m} \omega}$, then $\psi^{\rm ZM}_{\ell {\mathscr m}}$ and $\psi^{\rm CPM}_{\ell {\mathscr m}}$ are given by the ``up'' weighting coefficients $ C_{\ell {\mathscr m} \omega}^{\text{ZM,up}}$ and $ C_{\ell {\mathscr m} \omega}^{\text{CPM,up}}$ evaluated in the limit $r\to \infty$. Similarly, the time-averaged flux of energy carried by gravitational waves passing through infinity and the horizon can be computed from the ``in'' and ``up'' weighting coefficients, \begin{subequations} \begin{align} \mathcal{F}_E^\mathcal{H} &= \lim_{r \to 2M} \sum_{{\ell\mathscr{m}}\omega} \frac{D^2}{64\pi} \omega^2 \bigg[| C^{\text{ZM,in}}_{\ell {\mathscr m} \omega}|^2 + | C^{\text{CPM,in}}_{\ell {\mathscr m} \omega}|^2 \bigg], \label{EdotH v2}\\ \mathcal{F}_E^\mathcal{I} &= \lim_{r \to \infty} \sum_{{\ell\mathscr{m}}\omega} \frac{D^2}{64\pi} \omega^2 \bigg[| C^{\text{ZM,up}}_{\ell {\mathscr m} \omega}|^2 + | C^{\text{CPM,up}}_{\ell {\mathscr m} \omega}|^2 \bigg]. \end{align} \end{subequations} Likewise, the flux of angular momentum through infinity and the horizon can be computed from the ``in'' and ``up'' weighting coefficients, \begin{subequations} \begin{align} \mathcal{F}_{L_z}^\mathcal{H} &= \lim_{r \to 2M}\sum_{{\ell\mathscr{m}}\omega} \frac{D^2}{64\pi} {\mathscr m}\omega \bigg[| C^{\text{ZM,in}}_{\ell {\mathscr m} \omega}|^2 + | C^{\text{CPM,in}}_{\ell {\mathscr m} \omega}|^2 \bigg], \label{LdotH v2}\\ \mathcal{F}_{L_z}^\mathcal{I} &= \lim_{r \to \infty} \sum_{{\ell\mathscr{m}}\omega} \frac{D^2}{64\pi} {\mathscr m}\omega \bigg[| C^{\text{ZM,up}}_{\ell {\mathscr m} \omega}|^2 + | C^{\text{CPM,up}}_{\ell {\mathscr m} \omega}|^2 \bigg].
\end{align} \end{subequations} \subsubsection{Metric reconstruction in Regge-Wheeler gauge} Much like in the Teukolsky formalism, the CPM master function is gauge invariant and may be used to reconstruct the metric perturbation in a chosen gauge. In the Regge-Wheeler gauge, defined by the choice $h^{\ell\mathscr{m}}_{a,{\rm even}} = h^{\ell\mathscr{m}}_{2,{\rm even}} = h^{\ell\mathscr{m}}_{2,{\rm odd}} = 0$, the odd parity metric perturbation is given by \begin{align} h^{{\ell\mathscr{m}}}_{l,{\rm odd}} &= \frac{\sqrt{f}\sqrt{\ell(\ell+1)}}{2 \sqrt{2} r}\big(\partial_r + f^{-1} \partial_t\big) \big(r \psi^{\rm CPM}_{{\ell\mathscr{m}}}\big) + \frac{16 \pi r^2}{(\ell-1)(\ell+2)} T^{\ell\mathscr{m}}_{l,{\rm odd}}, \\ h^{{\ell\mathscr{m}}}_{n,{\rm odd}} &= \frac{\sqrt{f}\sqrt{\ell(\ell+1)}}{2 \sqrt{2} r}\big(\partial_r - f^{-1} \partial_t\big) \big(r \psi^{\rm CPM}_{{\ell\mathscr{m}}}\big) + \frac{16 \pi r^2}{(\ell-1)(\ell+2)} T^{\ell\mathscr{m}}_{n,{\rm odd}}, \end{align} and the even parity metric perturbation is given by \cite{Hopper:2010uv} \begin{align} \begin{split} h_{m{\bar{m}}}^{\ell\mathscr{m}} &= f \partial_r \psi^{\rm ZM}_{\ell\mathscr{m}} + A \psi^{\rm ZM}_{\ell\mathscr{m}} - \frac{32\pi r^2}{\ell(\ell+1) \Lambda} T^{\ell\mathscr{m}}_{tt}, \\ h_{rr}^{\ell\mathscr{m}} &= \frac{\Lambda}{2f^2} \left[ \frac{\ell(\ell+1)}{2r} \psi^{\rm ZM}_{\ell\mathscr{m}} - h^{\ell\mathscr{m}}_{m{\bar{m}}} \right] + \frac{r}{f} \partial_r h^{\ell\mathscr{m}}_{m{\bar{m}}}, \\ h_{tr}^{\ell\mathscr{m}} &= r \partial_t \partial_r \psi^{\rm ZM}_{\ell\mathscr{m}} + r B \, \partial_t \psi^{\rm ZM}_{\ell\mathscr{m}} + \frac{16 \pi r^2}{\ell(\ell+1)} \left[ T^{\ell\mathscr{m}}_{tr} - \frac{2r}{f\Lambda} \partial_t T^{{\ell\mathscr{m}}}_{tt} \right], \\ h_{tt}^{\ell\mathscr{m}} &= f^2 h_{rr}^{\ell\mathscr{m}} + \frac{8\pi f }{\sqrt{2(\ell-1)\ell(\ell+1)(\ell+2)}}T^{\ell\mathscr{m}}_{2,{\rm even}}, \end{split} \end{align} where \begin{align} A(r) &:= \frac{2}{r \Lambda} \left[ \frac14
(\ell-1)\ell(\ell+1)(\ell+2) + \frac{3M}{2r} \Big((\ell-1)(\ell+2) + \frac{4M}{r} \Big) \right], \\ B(r) &:= \frac{2}{r f \Lambda} \left[ \frac{(\ell-1)(\ell+2)}{2} \left( 1 - \frac{3M}{r} \right) - \frac{3M^2}{r^2} \right]. \end{align} As in the Teukolsky case, in order to transform back to the time-domain solution, as a final step we must perform an inverse Fourier transform. This poses a challenge in gravitational self-force calculations, where non-smoothness of the solutions in the vicinity of the worldline leads to the Gibbs phenomenon of non-convergence of the inverse Fourier transform. Resolutions to this problem typically rely on avoiding directly transforming the inhomogeneous solution by using the methods of extended homogeneous or extended particular solutions. For further details, see \cite{Hopper:2010uv,Hopper:2012ty}. \subsection{Lorenz gauge} In the case of perturbations of a Schwarzschild black hole, the equations for the metric perturbation itself are separable. This makes it practical to work in the Lorenz gauge and to directly solve the Lorenz gauge field equations for the metric perturbation. Rewriting the Lorenz gauge condition, Eq.~\eqref{eq:LorenzGauge}, in terms of null tetrad components we have four gauge equations, \begin{subequations} \begin{align} (\hbox{\ec\char'336}' - 2 \rho') h_{ll} + (\hbox{\ec\char'336} - 2 \rho) h_{m{\bar{m}}} - 2 \rho h_{ln} - (\hbox{\ec\char'360} h_{l{\bar{m}}} + \hbox{\ec\char'360}' h_{lm}) &= 0, \\ (\hbox{\ec\char'336} - 2 \rho) h_{nn} + (\hbox{\ec\char'336}' - 2 \rho') h_{m{\bar{m}}} - 2 \rho' h_{ln} - (\hbox{\ec\char'360}' h_{nm} + \hbox{\ec\char'360} h_{n{\bar{m}}}) &= 0, \\ (\hbox{\ec\char'336}' - 3 \rho') h_{lm} + (\hbox{\ec\char'336} - 3 \rho) h_{nm} - \hbox{\ec\char'360} h_{ln} - \hbox{\ec\char'360}' h_{mm} &= 0, \\ (\hbox{\ec\char'336}' - 3 \rho') h_{l{\bar{m}}} + (\hbox{\ec\char'336} - 3 \rho) h_{n{\bar{m}}} - \hbox{\ec\char'360}' h_{ln} - \hbox{\ec\char'360} h_{{\bar{m}}\mb} &= 0.
\end{align} \end{subequations} These decouple into 3 even parity equations (the first two and the real part of either the third or fourth) and 1 odd-parity equation (the imaginary part of either the third or fourth equation). Similarly, the Lorenz gauge linearised Einstein equation, Eq.~\eqref{eq:LorenzField}, yields ten field equations (7 even and 3 odd) given by \begin{subequations} \begin{align} &\hat{\Box} (h_{m{\bar{m}}} - h_{ln}) = 8 \pi T, \\ &(\hat{\Box} - 8 \psi_2 + 8 \rho \rho') (h_{ln} + h_{m{\bar{m}}}) + 4 \rho^2 h_{nn} + 4 \rho'^2 h_{ll} \nonumber \\ & \qquad + 4 \rho(\hbox{\ec\char'360} h_{n{\bar{m}}} + \hbox{\ec\char'360}' h_{nm}) + 4 \rho'(\hbox{\ec\char'360} h_{l{\bar{m}}} + \hbox{\ec\char'360}' h_{lm}) = -16 \pi (T_{ln} + T_{m{\bar{m}}}), \\ &(\hat{\Box} + 4 \rho \rho') h_{ll} + 4 \rho^2 (h_{ln} + h_{m{\bar{m}}}) + 4 \rho (\hbox{\ec\char'360} h_{l{\bar{m}}}+ \hbox{\ec\char'360}' h_{lm}) = -16 \pi T_{ll}, \\ &(\hat{\Box}' + 4 \rho \rho') h_{nn} + 4 \rho'^2 (h_{ln} + h_{m{\bar{m}}}) + 4 \rho' (\hbox{\ec\char'360}' h_{nm}+ \hbox{\ec\char'360} h_{n{\bar{m}}}) = -16 \pi T_{nn}, \\ &(\hat{\Box} - 6 \psi_2 + 4 \rho \rho') h_{lm} + 4 \rho^2 h_{nm} + 2 \rho \hbox{\ec\char'360} (h_{ln}+ h_{m{\bar{m}}}) \nonumber \\ & \qquad + 2 \rho' \hbox{\ec\char'360} h_{ll} + 2 \rho \hbox{\ec\char'360}' h_{mm} = -16 \pi T_{lm}, \\ &(\bar{\hat{\Box}} - 6 \psi_2 + 4 \rho \rho') h_{l{\bar{m}}} + 4 \rho^2 h_{n{\bar{m}}} + 2 \rho \hbox{\ec\char'360}' (h_{ln}+ h_{m{\bar{m}}}) \nonumber \\ & \qquad + 2 \rho' \hbox{\ec\char'360}' h_{ll} + 2 \rho \hbox{\ec\char'360} h_{{\bar{m}}\mb} = -16 \pi T_{l{\bar{m}}}, \\ &(\bar{\hat{\Box}}' - 6 \psi_2 + 4 \rho \rho') h_{nm} + 4 \rho'^2 h_{lm} + 2 \rho' \hbox{\ec\char'360} (h_{ln}+ h_{m{\bar{m}}}) \nonumber \\ & \qquad + 2 \rho \hbox{\ec\char'360} h_{nn} + 2 \rho' \hbox{\ec\char'360}' h_{mm} = -16 \pi T_{nm}, \\ &(\hat{\Box}' - 6 \psi_2 + 4 \rho \rho') h_{n{\bar{m}}} + 4 \rho'^2 h_{l{\bar{m}}} + 2 \rho' \hbox{\ec\char'360}' (h_{ln}+ 
h_{m{\bar{m}}}) \nonumber \\ & \qquad + 2 \rho \hbox{\ec\char'360}' h_{nn} + 2 \rho' \hbox{\ec\char'360} h_{{\bar{m}}\mb} = -16 \pi T_{n{\bar{m}}}, \\ &\hat{\Box} h_{mm} + 4 \rho \hbox{\ec\char'360} h_{nm} + 4 \rho' \hbox{\ec\char'360} h_{lm} = -16 \pi T_{mm}, \\ &\bar{\hat{\Box}} h_{{\bar{m}}\mb} + 4 \rho \hbox{\ec\char'360}' h_{n{\bar{m}}} + 4 \rho' \hbox{\ec\char'360}' h_{l{\bar{m}}} = -16 \pi T_{{\bar{m}}\mb} \end{align} \end{subequations} where the operators \begin{align*} \hat{\Box} := - 2 \hbox{\ec\char'336} \hbox{\ec\char'336}' + 2 \rho' \hbox{\ec\char'336} + 2 \rho \hbox{\ec\char'336}' + 2 \hbox{\ec\char'360} \hbox{\ec\char'360}',\qquad \hat{\Box}' := - 2 \hbox{\ec\char'336}' \hbox{\ec\char'336} + 2 \rho \hbox{\ec\char'336}' + 2 \rho' \hbox{\ec\char'336} + 2 \hbox{\ec\char'360}' \hbox{\ec\char'360},\\ \bar{\hat{\Box}} := - 2 \hbox{\ec\char'336} \hbox{\ec\char'336}' + 2 \rho' \hbox{\ec\char'336} + 2 \rho \hbox{\ec\char'336}' + 2 \hbox{\ec\char'360}' \hbox{\ec\char'360},\qquad \bar{\hat{\Box}}' := - 2 \hbox{\ec\char'336}' \hbox{\ec\char'336} + 2 \rho \hbox{\ec\char'336}' + 2 \rho' \hbox{\ec\char'336} + 2 \hbox{\ec\char'360} \hbox{\ec\char'360}', \end{align*} all coincide with the scalar wave operator when acting on type $\{0,0\}$ objects (but differ when acting on objects of generic GHP type). Note that we have chosen here to work with the non-trace-reversed metric perturbation; equivalent equations for the trace-reversed perturbation can be obtained by noting that a trace-reversal corresponds to the interchange $h_{ln} \leftrightarrow h_{m{\bar{m}}}$. The Lorenz gauge equations can be decomposed into the same basis of spin-weighted spherical harmonics as for the metric perturbation itself. 
The mode decomposed equations follow immediately from the above GHP expressions along with Eqs.~\eqref{eq:spin-raising-lowering} and either \eqref{eq:th-Schw-Kinnersley} or \eqref{eq:th-Schw-Carter} for the GHP derivative operators (the specific form for the mode decomposed equations depends on the choice of tetrad). \subsubsection{Lorenz gauge formalism in the frequency domain} Following a procedure much like in the Regge-Wheeler and Teukolsky cases, one can construct solutions to the Lorenz gauge equations by working in the frequency domain and solving ordinary differential equations \cite{Akcay:2013wfa,Osburn:2014hoa,Wardell:2015ada}. The only additional complexity is that for each $(\ell,{\mathscr m},\omega)$ mode we must now work with a system of $k$ coupled second order radial equations with $2k$ linearly independent homogeneous solutions.\footnote{There are $k=7$ even parity equations and $k=3$ odd parity equations in general, although these can be reduced to $4+2$ equations using the $3+1$ gauge conditions. The number of equations is also further reduced in certain special cases such as static or low multipole modes.} As we did in the Regge-Wheeler and Teukolsky cases, it is natural to divide these into $k$ ``in'' solutions and $k$ ``up'' solutions satisfying appropriate boundary conditions at the horizon or radial infinity. Then, using variation of parameters, the inhomogeneous solutions are given by \begin{align} h^{(i)}_{\ell{\mathscr m}\omega}(r) = \mathbf{C}^{\rm in}_{\ell{\mathscr m}\omega}(r) \cdot \mathbf{h}^{(i),{\rm in}}_{\ell{\mathscr m}\omega}(r) + \mathbf{C}^{\rm up}_{\ell{\mathscr m}\omega}(r) \cdot \mathbf{h}^{(i),{\rm up}}_{\ell{\mathscr m}\omega}(r) \end{align} where $i=1,\ldots,k$ labels the $k$ components of the metric perturbation and where $\mathbf{h}^{(i),{\rm in}}_{\ell{\mathscr m}\omega}(r)$ are vectors of $k$ linearly independent homogeneous solutions for a given $i$.
To compute the weighting coefficient vectors $\mathbf{C}^{\rm in/up}_{\ell{\mathscr m}\omega}(r)$ we define a $2k\times2k$ matrix of homogeneous solutions by \begin{eqnarray}\label{eq:Phi_matrix} \arraycolsep=1.4pt\def\arraystretch{1.5} \Phi(r) = \left(\begin{array}{c | c}-\mathbf{h}^{(i),{\rm in}}_{\ell{\mathscr m}\omega} & \mathbf{h}^{(i),{\rm up}}_{\ell{\mathscr m}\omega} \\ \hline -\partial_r \mathbf{h}^{(i),{\rm in}}_{\ell{\mathscr m}\omega} & \partial_r \mathbf{h}^{(i),{\rm up}}_{\ell{\mathscr m}\omega} \end{array}\right). \end{eqnarray} The vectors of weighting coefficients are then obtained with the standard variation of parameters prescription: \begin{align} \left(\begin{array}{c} \mathbf{C}^{\rm in}(r) \\ \mathbf{C}^{\rm up}(r)\end{array}\right) = \int \Phi^{-1}(r')\left(\begin{array}{c} \mathbf{0} \\ \mathbf{T}(r')\end{array}\right)\,dr', \end{align} where $\mathbf{T}(r')$ represents the vector of $k$ sources constructed from the components of the stress energy tensor projected onto the basis and decomposed into modes. The limits on the integral depend upon whether the ``in'' or ``up'' weighting coefficients are being solved for, in the same way as for the Regge-Wheeler and Teukolsky cases. As in the Regge-Wheeler and Teukolsky cases, in order to transform back to the time-domain solution, as a final step we must perform an inverse Fourier transform. This poses a challenge in gravitational self-force calculations, where non-smoothness of the solutions in the vicinity of the worldline leads to the Gibbs phenomenon of non-convergence of the inverse Fourier transform. Resolutions to this problem typically rely on avoiding directly transforming the inhomogeneous solution by using the methods of extended homogeneous or extended particular solutions. For further details, see \cite{Hopper:2010uv,Hopper:2012ty}.
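The variation-of-parameters construction above can be illustrated numerically. The following sketch applies the $\Phi$-matrix prescription in the simplest $k=1$ case to the toy equation $\psi'' - \psi = T(r)$, whose ``in'' and ``up'' homogeneous solutions are $e^{r}$ and $e^{-r}$; the equation, Gaussian source, and grid are illustrative stand-ins of our own choosing, not the actual Lorenz gauge system.

```python
import numpy as np

# Toy k = 1 instance of the variation-of-parameters construction:
# psi'' - psi = T(r), with "in" solution e^{+r} (regular as r -> -infinity)
# and "up" solution e^{-r} (regular as r -> +infinity).
r = np.linspace(-10.0, 10.0, 4001)
h = r[1] - r[0]
psi_in, dpsi_in = np.exp(r), np.exp(r)
psi_up, dpsi_up = np.exp(-r), -np.exp(-r)
T = np.exp(-r**2)  # smooth, effectively compact source

# det(Phi) for the 2x2 matrix Phi = [[-psi_in, psi_up], [-dpsi_in, dpsi_up]]
det = -psi_in * dpsi_up + psi_up * dpsi_in  # constant (= 2 here)
# Phi^{-1} . (0, T)^T gives the integrands for the weighting coefficients
dC_in = -psi_up * T / det
dC_up = -psi_in * T / det

def cumtrapz(y, h):
    """Cumulative trapezoid-rule integral starting from the first grid point."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1])) * h
    return out

C_up = cumtrapz(dC_up, h)              # integrate from r_min up to r
C_in = cumtrapz(dC_in[::-1], h)[::-1]  # integrate from r up to r_max
psi = C_in * psi_in + C_up * psi_up

# Consistency check: psi should satisfy psi'' - psi = T in the interior
d2psi = (psi[2:] - 2.0 * psi[1:-1] + psi[:-2]) / h**2
residual = d2psi - psi[1:-1] - T[1:-1]
print(np.max(np.abs(residual)))  # small (finite-difference error)
```

The same structure carries over to $k>1$, with $\Phi$ built from the $k$ ``in'' and $k$ ``up'' solution vectors and the integrands obtained by solving the linear system $\Phi\,\mathbf{C}' = (\mathbf{0},\mathbf{T})^{T}$ at each radius.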
\subsubsection{Lorenz gauge metric reconstruction from Regge-Wheeler master functions} As an alternative to directly solving the $7+3$ coupled Lorenz gauge field equations, Berndtson \cite{Berndtson:2009hp} showed that the solutions could instead be reconstructed from particular solutions to the $s=0$, $1$ and $2$ Regge-Wheeler-Zerilli equations, along with a fourth field obtained by solving the $s=0$ Regge-Wheeler equation sourced by the other $s=0$ field. The explicit expressions are quite unwieldy, particularly when sources are included. Focusing only on the relatively simple odd sector and ignoring special cases such as low multipoles or $\omega=0$ modes, Berndtson's expressions may be written as \begin{align} h_{l,{\rm odd}}^{\ell {\mathscr m}} &= -\frac{\sqrt{f}\sqrt{\ell(\ell+1)}}{2(i \omega)^2 r} \bigg[r^2 \mathcal{D}_0 \bigg(\frac{ \psi^{\rm RW}_{1\ell{\mathscr m}}}{r^{2}}\bigg)+ \frac{2\lambda}{3 r}\mathcal{D}_0 \big(r \psi^{\rm RW}_{2\ell{\mathscr m}}\big) \bigg] + \frac{8 \pi f}{(i\omega)^2} (T^{\ell{\mathscr m}}_{l,\rm odd}-T^{\ell{\mathscr m}}_{n,\rm odd}), \\ h_{n,{\rm odd}}^{\ell {\mathscr m}} &= \frac{\sqrt{f}\sqrt{\ell(\ell+1)}}{2(i \omega)^2 r} \bigg[r^2 \mathcal{D}^\dag_0 \bigg(\frac{ \psi^{\rm RW}_{1\ell{\mathscr m}}}{r^{2}}\bigg)+ \frac{2\lambda}{3 r}\mathcal{D}^\dag_0 \big(r \psi^{\rm RW}_{2\ell{\mathscr m}}\big) \bigg] - \frac{8 \pi f}{(i\omega)^2} (T^{\ell{\mathscr m}}_{l,\rm odd}-T^{\ell{\mathscr m}}_{n,\rm odd}), \\ h_{2,{\rm odd}}^{\ell {\mathscr m}} &= \frac{\sqrt{(\ell-1)\ell(\ell+1)(\ell+2)}}{(i \omega)^2 r^2} \Big[ \psi^{\rm RW}_{1\ell{\mathscr m}} + f \partial_r \big( r \psi^{\rm RW}_{2\ell{\mathscr m}} \big) + \frac{2\lambda}{3} \psi^{\rm RW}_{2\ell{\mathscr m}} \Big] + \frac{16 \pi f}{(i\omega)^2} T^{\ell{\mathscr m}}_{2,\rm odd} , \end{align} where $\mathcal{D}_0$ and $\mathcal{D}_0^\dag$ are the operators defined in Eq.~\eqref{eq:DandL} specialized to the Schwarzschild ($a=0$) case.
Equivalent expressions for the even sector are significantly more complicated and are given in Appendix A of Ref.~\cite{Berndtson:2009hp}, while expressions for low multipoles and $\omega=0$ modes are given elsewhere in the same reference. \subsubsection{Gravitational waves} As in the Regge-Wheeler-Zerilli and Teukolsky cases, the flux of gravitational wave energy and angular momentum may be computed from the asymptotic values of the fields. In the Lorenz gauge case where one solves for the metric perturbation directly, the gravitational wave strain is simply given by $h_{mm}$ as in Eq.~\eqref{eq:Strain}, \begin{equation} r(h_+ + i h_\times) = r \, h_{mm} = \sum_{\ell=2}^\infty \sum_{{\mathscr m}=-\ell}^\ell \int_{-\infty}^\infty r\, h_{mm}^{\ell{\mathscr m}\omega} {}_{2} Y_{\ell{\mathscr m}}(\theta,\phi) e^{-i \omega (t-r_*)} d \omega,\label{eq:LorenzGauge-waveform} \end{equation} where it is understood that the equality holds in the limit $r\to\infty$. Similarly, the energy fluxes are given explicitly by \cite{Barack:2007tm} \begin{subequations} \begin{align} \mathcal{F}_E^\mathcal{I} &= \lim_{r \to \infty} \sum_{{\ell\mathscr{m}}\omega} \frac{\omega^2 r^2}{16\pi} |h_{mm}^{\ell {\mathscr m} \omega} |^2,\\ \mathcal{F}_E^\mathcal{H} &= \lim_{r \to 2M} \sum_{{\ell\mathscr{m}}\omega} \frac{1}{256\pi M^2(1+16 M^2 \omega^2)} \times \nonumber \\ & \qquad \bigg| \sqrt{(\ell-1)\ell(\ell+1)(\ell+2)} r f h_{ll}^{\ell {\mathscr m} \omega} \nonumber \\ & \qquad \, - 2 \sqrt{(\ell-1)(\ell+2)} (1+4 i M \omega) r \sqrt{f} h_{lm}^{\ell {\mathscr m} \omega} \nonumber \\ & \qquad \, + 4 i M \omega (1+4 i M \omega) r h_{mm}^{\ell {\mathscr m} \omega} \bigg|^2,\label{EdotH v3} \end{align} \end{subequations} and the angular momentum fluxes are given by \begin{subequations} \begin{align} \mathcal{F}_{L_z}^\mathcal{I} &= \lim_{r \to \infty} \sum_{{\ell\mathscr{m}}\omega} \frac{{\mathscr m}\omega r^2}{16\pi} |h_{mm}^{\ell {\mathscr m} \omega} |^2,\\ \mathcal{F}_{L_z}^\mathcal{H} &=
\lim_{r \to 2M} \sum_{{\ell\mathscr{m}}\omega} \frac{{\mathscr m}}{256\pi M^2 \omega (1+16 M^2 \omega^2)} \times \nonumber \\ & \qquad \bigg| \sqrt{(\ell-1)\ell(\ell+1)(\ell+2)} r f h_{ll}^{\ell {\mathscr m} \omega} \nonumber \\ & \qquad \, - 2 \sqrt{(\ell-1)(\ell+2)} (1+4 i M \omega) r \sqrt{f} h_{lm}^{\ell {\mathscr m} \omega} \nonumber \\ & \qquad \, + 4 i M \omega (1+4 i M \omega) r h_{mm}^{\ell {\mathscr m} \omega} \bigg|^2.\label{LdotH v3} \end{align} \end{subequations} \section{Small objects in General Relativity} In the previous section we reviewed black hole perturbation theory with a generic source term. In this section, we consider how to formulate the source describing a small object. This is the {\em local problem} in self-force theory: In a spacetime perturbed by a small body, what are the sources in the field equations~\eqref{EFE1} and \eqref{EFE2}? Moreover, if the body's bulk motion is described by an equation of motion~\eqref{perturbed geodesic equation}, what are the forces on the right-hand side? The result of the analysis is (i) a skeletonization of the small body, in which the body is reduced to a singularity equipped with the body's multipole moments, together with (ii) an equation of motion governing the singularity's trajectory. The setting here is very general: the background can be any vacuum spacetime. Our coverage of the subject is terse, and we refer to Refs.~\cite{Poisson:2011nh,Pound:2015tma} for detailed reviews or to Ref.~\cite{Barack:2018yvs} for a non-expert introduction. \subsection{Matched asymptotic expansions} For simplicity, we assume that outside of the small object, the spacetime is vacuum, and that the perturbations are due solely to the object. Over most of the spacetime, the metric is well described by the external background metric $g_{\alpha\beta}$. However, very near the object, in a region comparable to the object's own size, the object's gravity dominates. 
In this region, which we call the {\em body zone}, the approximation~\eqref{g expansion} breaks down. This problem is usually overcome in one of two ways: using effective field theory~\cite{Galley:2008ih} (common in post-Newtonian and post-Minkowskian theory~\cite{Porto:2016pyg}) or using the method of matched asymptotic expansions (see, e.g., Refs.~\cite{Eckhaus:79,Kevorkian-Cole:96} for broad introductions, Refs.~\cite{Damour:83,Futamase:2007zz,Poisson:2020vap} for applications in post-Newtonian theory, and Refs.~\cite{DEath:1975jps,Kates:1980zz,Thorne:1984mz,Mino:1996nk,Mino:1997wh,Detweiler:2000gt,Poisson:2003wz,Detweiler:2005kq,Gralla:2008fg,Pound:2009sm,Detweiler:2011tt,Pound:2012nt,Gralla:2012db,Pound:2017psq} and the reviews~\cite{Poisson:2011nh,Pound:2015tma} for the work most relevant here). Here we adopt the latter approach. We let $\epsilon=m/{\cal R}$, where $m$ is the small object's mass and ${\cal R}$ is a characteristic length scale of the external universe; in a small-mass-ratio binary, ${\cal R}$ will be the mass $M$ of the primary, while in a weak-field binary it can be the orbital separation. We then assume Eq.~\eqref{g expansion}, which we dub the {\em outer expansion}, is accurate outside the body zone. Near the object, we assume the metric is well approximated by a second expansion, called an {\em inner expansion}, that effectively zooms in on the body zone. To make this ``zooming in'' precise, we first choose some measure, $\mathscr{r}$, of radial distance from the object, with $\mathscr{r}$ an order-1 function of the external coordinates $x^\alpha$. We then define the scaled distance $\tilde{ \mathscr{r}} := \mathscr{r}/\epsilon$. The body zone corresponds to $\mathscr{r}\sim \epsilon{\cal R}$, but to $\tilde{\mathscr{r}}\sim {\cal R}$. The outer expansion~\eqref{g expansion} is an approximation in the limit $\epsilon\to0$ at fixed coordinate values and therefore at fixed $\mathscr{r}$. 
The inner expansion is instead an approximation in the limit $\epsilon\to0$ at fixed $\tilde{\mathscr{r}}$, \begin{equation} g^{\rm exact}_{\mu\nu}(\tilde{\mathscr{r}},\epsilon) = g^{\rm obj}_{\mu\nu}(\tilde{\mathscr{r}}) + \epsilon H^{(1)}_{\mu\nu}(\tilde{\mathscr{r}}) + \epsilon^2 H^{(2)}_{\mu\nu}(\tilde{\mathscr{r}})+O(\epsilon^3). \end{equation} (We suppress other coordinate dependence.) In the body zone, the coefficients $g^{\rm obj}_{\mu\nu}$ and $H^{(n)}_{\mu\nu}$ are order unity. The background metric $g^{\rm obj}_{\mu\nu}$ represents the metric of the small object's spacetime as if it were isolated, and the perturbations $H^{(n)}_{\mu\nu}$ arise from the tidal fields of the external universe and nonlinear interactions between those tidal fields and the body's own gravity. In our construction of the inner expansion, we have assumed that there is only one scale that sets the size of the body zone: the object's mass $m$. This implicitly assumes that the object is compact, such that its typical diameter $d$ is comparable to $m$. That in turn implies that the object's $\ell$th multipole moment scales as \begin{equation}\label{moments scaling} m d^\ell\sim m^{\ell+1} = \epsilon^{\ell+1}{\cal R}^{\ell+1}. \end{equation} For a noncompact object, we would need to introduce additional perturbation parameters in the outer expansion, and additional scales in the inner expansion. Our inner expansion also assumes that while there is a small length scale associated with the object, there is no analogous time scale; in other words, the object is not undergoing changes on its own internal time scale $\sim m$. This is equivalent to assuming the object is in quasi-equilibrium with its surroundings. In practice it corresponds to a spatial derivative near the object dominating over a time derivative by one power of $\mathscr{r}$ (in the outer expansion) or by one power of $\epsilon$ (in the inner expansion). 
To date, inner expansions have been calculated for tidally perturbed Schwarz\-schild and Kerr black holes as well as nonrotating or slowly rotating neutron stars; see Refs.~\cite{Damour:2009vw,Binnington:2009bb,Landry:2014jka,Poisson:2014gka,Pani:2015hfa,Pani:2015nua,Landry:2015zfa,Poisson:2018qqd,LeTiec:2020bos,Poisson:2020vap} for recent examples of such work.\footnote{Ref.~\cite{Poisson:2020mdi} alerts readers to a significant error in some of the work on slowly rotating bodies.} These calculations represent one of the major applications of the methods of black hole perturbation theory reviewed in the previous section, and they form part of an ongoing endeavour to include tidal effects in gravitational-wave templates and to infer properties of neutron stars from observed signals~\cite{Flanagan:2007ix,Yagi:2013awa}. However, in self-force applications we require only a minimal amount of information from the inner expansion, often much less than is provided in the above references. The necessary information is extracted from a {\em matching condition}: because the two expansions are expansions of the same metric, they must match one another when appropriately compared. The most pragmatic formulation of this condition is that the inner and outer expansions must commute. If we perform an outer expansion of the inner expansion (or equivalently, re-expand it for $\mathscr{r}\gg\epsilon{\cal R}$), and if we perform an inner expansion of the outer expansion (or equivalently, expand for $\mathscr{r}\ll{\cal R}$), and express the end results as functions of $\mathscr{r}$, then both procedures yield a double series for small $\epsilon$ and small $\mathscr{r}$. We assume that these two double expansions agree with one another, order by order in $\epsilon$ and $\mathscr{r}$.
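The commuting of the two expansions can be made concrete with a toy function that, like the metric, has structure on the scale $\mathscr{r}\sim\epsilon$. The symbolic sketch below (the function $f$ and the truncation orders are our own illustrative choices, not drawn from the self-force literature) verifies that expanding for small $\epsilon$ at fixed $\mathscr{r}$ and then for small $\mathscr{r}$ agrees, term by term, with expanding at fixed $\tilde{\mathscr{r}}=\mathscr{r}/\epsilon$ and then for large $\tilde{\mathscr{r}}$:

```python
import sympy as sp

r, rt, u, eps = sp.symbols('r rtilde u epsilon', positive=True)
f = sp.exp(-r) / (r + eps)  # toy "exact" field with structure at r ~ eps

# Outer expansion: eps -> 0 at fixed r, then re-expand each term for small r
outer = f.series(eps, 0, 3).removeO()
outer2 = sp.expand(sum(t.series(r, 0, 2).removeO()
                       for t in outer.as_ordered_terms()))

# Inner expansion: eps -> 0 at fixed rtilde = r/eps, then re-expand for
# large rtilde (via u = 1/rtilde) and rewrite in terms of r = eps*rtilde
inner = f.subs(r, eps * rt).series(eps, 0, 3).removeO()
inner2 = sp.expand(sum(
    t.subs(rt, 1 / u).series(u, 0, 4).removeO().subs(u, eps / r)
    for t in inner.as_ordered_terms()))

# Coefficients of eps^m r^n in the region common to both double expansions
def common_coeffs(expr):
    Q = sp.expand(expr * r**3)  # clear the 1/r^3 pole
    return {(m, j): Q.coeff(eps, m).coeff(r, j)
            for m in range(3) for j in range(4)}

match = all(sp.simplify(a - b) == 0
            for a, b in zip(common_coeffs(outer2).values(),
                            common_coeffs(inner2).values()))
print(match)  # True: the double expansions agree order by order
```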
A primary consequence of this matching condition is that near the small object, the metric perturbations in the outer expansions must behave as \begin{equation}\label{hn behavior} h^{(n)}_{\mu\nu} = \frac{h^{(n,-n)}_{\mu\nu}}{\mathscr{r}^n}+\frac{h^{(n,-n+1)}_{\mu\nu}}{\mathscr{r}^{n-1}} +\frac{h^{(n,-n+2)}_{\mu\nu}}{\mathscr{r}^{n-2}} + \ldots, \end{equation} growing large at small $\mathscr{r}$. If $h^{(n)}_{\mu\nu}$ grew more rapidly (for example, if $h^{(n)}_{\mu\nu} \sim \frac{1}{\mathscr{r}^{n+1}}$), then the outer expansion could not match an inner expansion. Moreover, the coefficient of $\frac{1}{\mathscr{r}^n}$ matches a term in the $\mathscr{r}\gg\epsilon{\cal R}$ expansion of $g^{\rm obj}_{\mu\nu}$: \begin{equation}\label{gobj behavior} g^{\rm obj}_{\mu\nu} = \eta_{\mu\nu} + \frac{\epsilon h^{(1,-1)}_{\mu\nu}}{\mathscr{r}} + \frac{\epsilon^2 h^{(2,-2)}_{\mu\nu}}{\mathscr{r}^2} + \frac{\epsilon^3 h^{(3,-3)}_{\mu\nu}}{\mathscr{r}^3} +\ldots, \end{equation} where $\eta_{\mu\nu}$ is the metric of flat spacetime. The terms in this series are in one-to-one correspondence with the multipole moments of $g^{\rm obj}_{\mu\nu}$, which in turn can be interpreted as the multipole moments of the object itself. This allows us to write $h^{(n,-n)}_{\mu\nu}$ in terms of the object's first $n$ moments; one new moment arises at each new order in $\epsilon$, just as one would expect from the scaling~\eqref{moments scaling}. The moments, together with the general form~\eqref{hn behavior}, are all we require from the inner expansion. After it is obtained, we can effectively ``integrate out'' the body zone from the problem, as described below. To intuitively understand the meanings of the double expansions, and of expressions such as~\eqref{hn behavior} and \eqref{gobj behavior}, we can interpret them as being valid in the {\em buffer region} $\epsilon{\cal R}\ll \mathscr{r}\ll\cal R$. 
This region is the large-$\tilde{\mathscr{r}}$ limit of the body zone but the small-$\mathscr{r}$ limit of the external universe. \subsection{Tools of local analysis} To determine more than just the general form of the perturbations, we substitute Eq.~\eqref{hn behavior} into the Einstein equations~\eqref{EFE1}--\eqref{EFE2} and then solve order by order in $\epsilon$ and $\mathscr{r}$. These types of local calculations are carried out using two tools: covariant near-coincidence expansions and expansions in local coordinate systems. Ref.~\cite{Poisson:2011nh} contains a thorough, pedagogical introduction to both methods. Here we summarize only the basic ingredients. Covariant expansions are based on Synge's world function, \begin{equation} \sigma(x^\alpha,x^{\alpha'}) = \frac{1}{2}\left(\int_\beta ds\right)^2, \end{equation} which is equal to 1/2 the square of the proper distance $s$ (as measured in $g_{\mu\nu}$) between the points $x^{\alpha'}$ and $x^\alpha$ along the unique geodesic $\beta$ connecting the two points; for a given $x^{\alpha'}$, this is a well-defined function of $x^\alpha$ so long as $x^\alpha$ is within the convex normal neighbourhood of $x^{\alpha'}$. The other necessary tool is the bitensor $g_{\mu}^{\mu'}(x^\alpha,x^{\alpha'})$, which parallel propagates vectors from $x^{\alpha'}$ to $x^\alpha$. A smooth tensor field $A_\mu{}^\nu$ at $x^\alpha$ can be expanded around $x^{\alpha'}$ as \begin{equation} A_\mu{}^\nu(x^\alpha) = g_\mu^{\mu'}g^\nu_{\nu'}\left[A_{\mu'}{}^{\nu'} - A_{\mu'}{}^{\nu'}{}_{;\alpha'}\sigma^{\alpha'} + \tfrac{1}{2}A_{\mu'}{}^{\nu'}{}_{;\alpha'\beta'}\sigma^{\alpha'}\sigma^{\beta'}+O(\lambda^3)\right],\label{covariant expansion} \end{equation} where we use $\lambda:=1$ to count powers of distance between $x^{\alpha'}$ and $x^\alpha$. The vector $\sigma_{\alpha'}:=\nabla_{\!\alpha'}\sigma$ is tangent to $\beta$ and has a magnitude $\sqrt{2\sigma}$ equal to the proper distance between $x^{\alpha'}$ and $x^\alpha$. 
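In flat spacetime, where $\sigma = \frac{1}{2}\eta_{\mu\nu}(x-x')^\mu(x-x')^\nu$ exactly, the basic identities can be checked symbolically. The short sketch below (a flat-space special case only, with coordinate names of our own choosing) verifies $\sigma^{\alpha'}\sigma_{\alpha'} = 2\sigma$ and that $\sigma_{;\mu\mu'}$ reduces to $-\eta_{\mu\mu'}$ when the curvature corrections vanish:

```python
import sympy as sp

# Flat-spacetime world function: sigma = (1/2) eta_{mn} (x - x')^m (x - x')^n
eta = sp.diag(-1, 1, 1, 1)
xprime = sp.Matrix(sp.symbols('t0 x0 y0 z0', real=True))  # base point x^{alpha'}
xfield = sp.Matrix(sp.symbols('t1 x1 y1 z1', real=True))  # field point x^{alpha}
dx = xfield - xprime
sigma = sp.Rational(1, 2) * (dx.T * eta * dx)[0]

# sigma_{alpha'} := d(sigma)/d(x^{alpha'}), tangent to the connecting geodesic
sigma_p = sp.Matrix([sp.diff(sigma, xprime[i]) for i in range(4)])

# Identity: sigma^{alpha'} sigma_{alpha'} = 2 sigma
lhs = (sigma_p.T * eta.inv() * sigma_p)[0]
print(sp.simplify(lhs - 2 * sigma))  # 0

# In flat spacetime sigma_{;mu mu'} = -eta_{mu mu'} (curvature terms vanish)
mixed = sp.Matrix(4, 4, lambda i, j: sp.diff(sigma, xfield[i], xprime[j]))
print(sp.simplify(mixed + eta))  # zero matrix
```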
The (perhaps unexpected) minus sign in Eq.~\eqref{covariant expansion} arises because $\sigma_{\alpha'}$ points {\em away} from $x^\alpha$ rather than toward it. When a derivative, either at $x^\alpha$ or at $x^{\alpha'}$, acts on an expansion like~\eqref{covariant expansion}, it involves derivatives of $g^{\mu}_{\mu'}$ and $\sigma_{\alpha'}$. These can then be re-expanded using, for example, \begin{equation} g^{\mu}_{\mu';\nu} = \frac{1}{2}g^\mu_{\rho'}g_{\nu}^{\nu'}R^{\rho'}{}_{\!\!\mu'\nu'\delta'}\sigma^{\delta'}+O(\lambda^2) \end{equation} and \begin{equation} \sigma_{;\mu\mu'} = -g_\mu^{\nu'}\left(g_{\mu'\nu'}+\tfrac{1}{6}R_{\mu'\alpha'\nu'\beta'}\sigma^{\alpha'}\sigma^{\beta'}\right)+O(\lambda^3); \end{equation} see Eqs.~(6.7)--(6.11) of Ref.~\cite{Poisson:2011nh}. To make use of these tools, we install a curve $\gamma$, with coordinates $z^\alpha(\tau)$, in the spacetime of $g_{\mu\nu}$, which will be a representative worldline for the small object. Recall that $\tau$ is proper time as measured in $g_{\mu\nu}$. If the object is a material body, the worldline will be in its physical interior. If the object is a black hole, the worldline will only serve as a reference point for the field {\em outside} the black hole; mathematically, $\gamma$ resides in the manifold on which $g_{\mu\nu}$ lives, not the manifold on which $g^{\rm obj}_{\mu\nu}$ lives. In either case, we only analyze the metric in the object's exterior, never in its interior. A suitable measure of distance from $\gamma$ is \begin{equation} {\sf s}(x^\alpha,x^{\alpha'}):=\sqrt{P_{\mu'\nu'}\sigma^{\mu'}\sigma^{\nu'}}, \end{equation} where $x^{\alpha'}=z^\alpha(\tau')$ is a point on $\gamma$ near $x^\alpha$, and $P_{\mu\nu}:=g_{\mu\nu}+u_\mu u_\nu$ projects orthogonally to $\gamma$. Here we have introduced $\gamma$'s four-velocity $u^\mu=\frac{dz^\mu}{d\tau}$, normalized to $g_{\mu\nu}u^\mu u^\nu=-1$. 
Note that ${\sf s}$ remains positive regardless of whether $x^{\alpha'}$ and $x^{\alpha}$ are connected by a spacelike, timelike, or null geodesic. In terms of these covariant quantities, the expansion~\eqref{hn behavior} can be written more concretely as \begin{equation}\label{hn covariant} h^{(n)}_{\mu\nu}(x^{\alpha}) = g_\mu^{\mu'}g_\nu^{\nu'}\left[\frac{h^{(n,-n)}_{\mu'\nu'}(x^{\alpha'},\sigma^{\alpha'}/{\sf s})}{{\sf s}^n}+\frac{h^{(n,-n+1)}_{\mu'\nu'}(x^{\alpha'},\sigma^{\alpha'}/{\sf s})}{{\sf s}^{n-1}}+O(\lambda^{-n+2})\right]\!. \end{equation} ${\sf s}$ represents the distance from $x^{\alpha'}$ to $x^{\alpha}$ [playing the role of $\mathscr{r}$ in~\eqref{hn behavior}], and the vector $\sigma^{\alpha'}/{\sf s}$ represents the direction of the geodesic connecting $x^{\alpha'}$ to $x^{\alpha}$. Generically, $\log({\sf s})$ terms also appear~\cite{Pound:2012dk}, but we suppress them for simplicity. Rather than directly substituting an ansatz of the form~\eqref{hn covariant} into the vacuum field equations and solving for the coefficients $h^{(n,p)}_{\mu'\nu'}$, it is typically more convenient to adopt a local coordinate system centred on $\gamma$ and afterward recover~\eqref{hn covariant}. Here we adopt Fermi-Walker coordinates $(\tau,x^a)$, which are quasi-Cartesian coordinates constructed from a tetrad $(u^\alpha,e^\alpha_a)$ on $\gamma$. The spatial triad $e^\alpha_a$ is Fermi-Walker transported along the worldline according to \begin{equation} \frac{De^\alpha_a}{d\tau}=a_au^\alpha, \end{equation} where $\frac{D}{d\tau} :=u^\mu \nabla_\mu$. $a_a:= a_\mu e^\mu_a$ is a spatial component of the covariant acceleration $a^\mu:=\frac{Du^\mu}{d\tau}$; this will eventually become the left-hand side of Eq.~\eqref{perturbed geodesic equation}. At each value of proper time $\tau$, we send a space-filling family of geodesics orthogonally outward from $\bar x^{\alpha}=z^{\alpha}(\tau)$, generating a spatial hypersurface $\Sigma_{\tau}$.
Each such surface is labelled with a coordinate time $\tau$, and each point on the surface is labelled with spatial coordinates \begin{equation}\label{xa_def} x^a=-e^a_{\bar \alpha}\sigma^{\bar \alpha}, \end{equation} where $\sigma_{\bar\alpha}:=\nabla_{\!\bar\alpha}\sigma$ is tangent to $\Sigma_{\tau}$, satisfying $\sigma_{\bar\alpha} u^{\bar\alpha} = 0$. The magnitude of these coordinates, given by $s := \sqrt{\delta_{ab}x^ax^b} = \sqrt{g_{\bar\alpha\bar\beta}\sigma^{\bar\alpha}\sigma^{\bar\beta}}$, is the proper distance from $\bar x^{\alpha}$ to $x^{\alpha}$. In the special case that $x^{\alpha'}=\bar x^{\alpha}$, ${\sf s}$ and $s$ are identical. The analog of Eq.~\eqref{covariant expansion} is the coordinate Taylor series \begin{equation} A_\mu{}^\nu(\tau,x^a) = A_\mu{}^\nu(\tau,0) + A_\mu{}^\nu{}_{,a}(\tau,0)x^a + \tfrac{1}{2}A_\mu{}^\nu{}_{,ab}(\tau,0)x^a x^b + O(s^3). \end{equation} In these coordinates, the four-velocity reduces to $u^\mu=(1,0,0,0)$, and the acceleration to $a^\mu=(0,a^i)$. The external background metric, which is smooth at $x^a=0$, is given by \begin{subequations}\label{FW metric} \begin{align} g_{\tau\tau} &= -1-2a_ix^i-\left(R_{\tau i\tau j}+a_ia_j\right)x^ix^j+O(s^3),\\ g_{\tau a} &= -\tfrac{2}{3}R_{\tau iaj}x^ix^j+O(s^3),\\ g_{ab} &= \delta_{ab}-\tfrac{1}{3}R_{aibj}x^ix^j+O(s^3), \end{align} \end{subequations} reducing to the Minkowski metric on $\gamma$, and the only nonzero Christoffel symbols on $\gamma$ are $\Gamma^a_{\tau\tau}=a^a$ and $\Gamma^\tau_{\tau a}=a_a$. If the worldline is not accelerating, the coordinates become inertial along $\gamma$. The Riemann tensor components in Eq.~\eqref{FW metric} are evaluated on the worldline. Higher powers of $x^a$ in the expansion come with higher powers of the acceleration, derivatives of the Riemann tensor, and nonlinear combinations of the Riemann tensor. In a vacuum background, the Riemann tensor on the worldline is commonly decomposed into tidal moments.
The quadrupolar moments are defined as \begin{align} {\cal E}_{ab} &:= R_{\tau a\tau b}, \label{Eab}\\ {\cal B}_{ab} &:= \tfrac{1}{2}\epsilon^{pq}{}_{(a}R_{b)\tau pq}. \label{Bab} \end{align} Higher moments involve derivatives of the Riemann tensor. Equations~(44)--(48) of Ref.~\cite{Pound:2014xva} display the background metric~\eqref{FW metric} through order $s^3$ and the octupolar tidal moments. Ref.~\cite{Poisson:2009qj} presents the background metric in an alternative, lightcone-based coordinate system through order $\lambda^4$ and the hexadecapolar moments. Given the local Fermi-Walker coordinates, one can adopt a coordinate analog of Eq.~\eqref{hn covariant}, \begin{equation}\label{hn FW} h^{(n)}_{\mu\nu} = \frac{h^{(n,-n)}_{\mu\nu}(\tau,n^a)}{s^{n}} + \frac{h^{(n,-n+1)}_{\mu\nu}(\tau,n^a)}{s^{n-1}} + \frac{h^{(n,-n+2)}_{\mu\nu}(\tau,n^a)}{s^{n-2}} +O(s^{-n+3}). \end{equation} Here $n^a = \frac{x^a}{s}=\delta^{ab}\partial_b s$ is a radial unit vector. To facilitate solving the field equations, we can expand the coefficients in angular harmonics: \begin{equation} h^{(n,p)}_{\mu\nu}(\tau,n^a) = \sum_{\ell\geq0}h^{(n,p,\ell)}_{\mu\nu L}(\tau)\hat n^L, \end{equation} where $L:=i_1\cdots i_\ell$ is a multi-index, and $\hat n^L:=n^{\langle L\rangle}$, where $n^L:=n^{i_1}\cdots n^{i_\ell}$. The angular brackets denote the symmetric, trace-free (STF) combination of indices, where the trace is defined with $\delta_{ab}$. This is equivalent to expanding the coefficients $h^{(n,p)}_{\mu\nu}$ in scalar spherical harmonics: \begin{equation} h^{(n,p)}_{\mu\nu}(\tau,n^a) = \sum_{\ell=0}^\infty\sum_{{\mathscr m}=-\ell}^\ell h^{(n,p,\ell {\mathscr m})}_{\mu\nu}(\tau)Y^{\ell {\mathscr m}}(\vartheta,\varphi), \end{equation} where the angles $(\vartheta,\varphi)$ are defined in the natural way from \begin{equation} n^a=(\sin\vartheta\cos\varphi,\sin\vartheta\sin\varphi,\cos\vartheta).
\end{equation} Like spherical harmonics, $\hat n^L$ is an eigenfunction of the flat-space Laplacian, satisfying $\delta^{ab}\partial_a\partial_b \hat n^L = -\frac{\ell(\ell+1)}{s^2}\hat n^L$. One can further decompose $h^{(n,p,\ell)}_{\mu\nu L}$ into irreducible STF pieces that are in one-to-one correspondence with the coefficients in a tensor spherical harmonic decomposition. We refer the reader to Appendix A of Ref.~\cite{Blanchet:1985sp} for a detailed introduction to such expansions and a collection of useful identities. The general local solution in the buffer region can be found by substituting the expansions~\eqref{FW metric} and~\eqref{hn FW} into the vacuum field equations and working order by order in $\epsilon$ and $s$. Because spatial derivatives increase the power of $1/s$, dominating over $\tau$ derivatives, this process reduces to solving a sequence of stationary field equations. An alternative approach is to instead solve for the perturbations $H^{(n)}_{\mu\nu}$ in the inner expansion, starting with a large-$\tilde{\mathscr{r}}$ ansatz complementary to Eq.~\eqref{hn behavior}, and then translate the results into the small-$\mathscr{r}$ expansions for $h^{(n)}_{\mu\nu}$. This approach can draw on existing, high-order inner expansions (e.g., Refs.~\cite{Poisson:2009qj,Poisson:2018qqd,Poisson:2020vap}), though doing so often requires transformations of the coordinates and of the perturbative gauge to arrive at a practical form for the outer expansion (see, e.g., Ref.~\cite{Pound:2017psq}). 
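The eigenfunction property of $\hat n^L$ is easy to verify explicitly in the $\ell=2$ case, where $\hat n^{ab} = n^a n^b - \frac{1}{3}\delta^{ab}$. The short symbolic check below is purely illustrative and contains nothing specific to the self-force problem:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True, positive=True)
s = sp.sqrt(x**2 + y**2 + z**2)
n = [x / s, y / s, z / s]  # radial unit vector n^a = x^a / s

def nhat(a, b):
    """STF combination n^<ab> = n^a n^b - (1/3) delta^ab (the l = 2 case)."""
    return n[a] * n[b] - (sp.Rational(1, 3) if a == b else 0)

def lap(F):
    """Flat-space Laplacian delta^ab d_a d_b."""
    return sum(sp.diff(F, v, 2) for v in (x, y, z))

# Eigenvalue relation: lap(nhat) = -l(l+1)/s^2 nhat with l = 2
ok = all(sp.simplify(lap(nhat(a, b)) + 6 * nhat(a, b) / s**2) == 0
         for a in range(3) for b in range(3))
print(ok)  # True
```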
\subsection{Local solution: self-field and an effective external metric} The general solutions for $h^{(1)}_{\mu\nu}$ and $h^{(2)}_{\mu\nu}$ in the buffer region are known to varying orders in $\epsilon$ and $\mathscr{r}$ in a variety of gauges, including classes of ``rest gauges'' (terminology from Ref.~\cite{Pound:2017psq}), ``P smooth'' gauges~\cite{Gralla:2012db}, ``highly regular'' gauges~\cite{Pound:2017psq} (in which no $1/s^2$ term appears in $h^{(2)}_{\mu\nu}$), radiation gauges~\cite{Pound:2013faa}, and the Regge-Wheeler-Zerilli gauge~\cite{Thompson:2018lgb}. (In the last two cases, the gauge choices are restricted to particular classes of external backgrounds.) However, nearly all covariant expressions, and expansions to the highest order in $\mathscr{r}$, are in the Lorenz gauge. Ref.~\cite{Pound:2012dk} provides an algorithm for generating the local solution in the Lorenz gauge, and a large class of similar gauges, to arbitrary order in $\epsilon$. In all gauges, the general solution is typically divided into two pieces: \begin{equation}\label{S-R split} h^{(n)}_{\mu\nu} = h^{\S(n)}_{\mu\nu} + h^{{\rm R}(n)}_{\mu\nu}. \end{equation} This is akin to the usual split of a general solution into a particular and a homogeneous solution. $h^{\S}_{\mu\nu} = \sum_n\epsilon^n h^{\S(n)}_{\mu\nu}$ is the object's {\em self-field}, encoding all the local information about the object's multipole structure (including the entirety of $g^{\rm obj}_{\mu\nu}$). Although this field is defined only outside the object, it would be singular at $s=0$ if the expansion~\eqref{hn FW} were taken to hold for all ${\cal R}\gg s>0$; it contains all the negative powers of $s$ in~\eqref{hn FW}, as well as all non-negative powers of $s$ with finite differentiability (e.g., all terms proportional to $s^p n^L$ with $p\geq0$ but $p\neq\ell$). For that reason, it is also known as the {\em singular field}. 
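The distinction can be illustrated with an elementary electrostatic analogy (ours, for orientation only, not drawn from the references): the general static, spherically symmetric solution of Laplace's equation is $\phi=A/r+B$. The $A/r$ piece plays the role of the self-field, with $A$ measurable locally via a Gauss integral over any small sphere, while the constant $B$ plays the role of the regular field, fixed only by boundary data:

```python
import sympy as sp

# Electrostatic analogy for the S-R split: the general static, spherically
# symmetric vacuum solution of Laplace's equation is phi = A/r + B.  The
# "singular" piece A/r carries the locally measurable monopole (charge),
# while the constant B stands in for the regular, boundary-determined piece.
r, theta, phi_ang, A, B = sp.symbols('r theta varphi A B', positive=True)
phi = A/r + B

# Radial Laplacian in spherical symmetry: (1/r^2) d/dr (r^2 dphi/dr)
lap = sp.simplify(sp.diff(r**2*sp.diff(phi, r), r)/r**2)
assert lap == 0   # vacuum solution away from r = 0

# Gauss flux through a sphere of any radius recovers A, independent of B:
flux = sp.integrate(sp.integrate(-sp.diff(phi, r)*r**2*sp.sin(theta),
                                 (theta, 0, sp.pi)), (phi_ang, 0, 2*sp.pi))
assert sp.simplify(flux - 4*sp.pi*A) == 0
print("flux =", flux)
```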
The second piece of the general solution, $h^{\rm R}_{\mu\nu}=\sum_n\epsilon^n h^{{\rm R}(n)}_{\mu\nu}$, encodes effectively {\em external} information linked to global boundary conditions. It takes the form of a power series, \begin{equation}\label{hRn FW} h^{{\rm R}(n)}_{\mu\nu} = \sum_{\ell} c^{(n)}_{\mu\nu L}(\tau)x^L; \end{equation} unlike $h^\S_{\mu\nu}$, which involves the locally determined multipole moments, every coefficient $c^{(n)}_{\mu\nu L}$ is an unknown that can only be determined when external boundary conditions are imposed. Although, once again, the field is defined {\em outside} the object, we can identify $\sum_{\ell} c^{(n)}_{\mu\nu L}(\tau)x^L$ with a Taylor series, where the coefficients $c^{(n)}_{\mu\nu L}(\tau)=\frac{1}{\ell!}\partial_L h^{{\rm R}(n)}_{\mu\nu}(\tau,0)$ define $h^{{\rm R}(n)}_{\mu\nu}$ and its derivatives on the worldline. Moreover, $h^{{\rm R}}_{\mu\nu}$ can be combined with the external background to form an {\em effective metric} \begin{equation} \breve{g}_{\mu\nu} := g_{\mu\nu} + h^{\rm R}_{\mu\nu} \end{equation} that is a vacuum solution, satisfying \begin{equation} G_{\mu\nu}[\breve g] = 0 \end{equation} even on $\gamma$. $\breve{g}_{\mu\nu}$ characterizes the object's rest frame and local tidal environment. Because $h^{{\rm R}}_{\mu\nu}$ is smooth at $x^a=0$, it is also referred to as the {\em regular field}. This type of division of the local solution into $h^\S_{\mu\nu}$ and $h^{{\rm R}}_{\mu\nu}$ was first emphasized by Detweiler and Whiting at first order in $\epsilon$~\cite{Detweiler:2000gt,Detweiler:2002mi}. There is considerable freedom in the specific division, as smooth vacuum perturbations can be interchanged between the two pieces, and multiple distinct choices have been made in practice, particularly beyond linear order~\cite{Rosenthal:2006nh,Harte:2011ku,Pound:2012nt,Gralla:2012db}. 
However, one can always choose the division such that (i) $\breve{g}_{\mu\nu}$ is a smooth vacuum metric, and (ii) $\breve{g}_{\mu\nu}$ is effectively the ``external'' metric, in the sense that the object moves as a test body in it, as described in the next section. Here for concreteness we adopt the choice introduced in Ref.~\cite{Pound:2012nt} (see also Refs.~\cite{Pound:2012dk,Pound:2014xva,Pound:2015tma}), and we provide the explicit forms of the first- and second-order self-fields in the Lorenz gauge, as presented in Ref.~\cite{Pound:2014xva}. For the purpose of explicitly displaying factors of the object's multipole moments, from this point forward we take $\epsilon$ to be a formal counting parameter that can be set equal to unity. At first order, the self-field is determined by the object's mass. It is given in Fermi-Walker coordinates by \begin{subequations}\label{hS1 FW} \begin{align} h^{\S(1)}_{\tau\tau} &= \frac{2m}{s}+3ma_i n^i+\tfrac{5}{3} ms\mathcal{E}_{ab}\hat n^{ab} + O(s^2),\\ h^{\S(1)}_{\tau a} &= 2ms\left(\tfrac{1}{3}\mathcal{B}^{bc} \epsilon_{acd}\hat n_{b}{}^{d}-\dot a_a\right)+O(s^2),\\ h^{\S(1)}_{ab} &= \frac{2m\delta_{ab}}{s}-m\delta_{ab}a_in^i+ms\left( \tfrac{4}{3} \mathcal{E}_{(a}{}^{c} \hat{n}_{b)c} - \tfrac{38}{9} \mathcal{E}_{ab} - \mathcal{E}_{cd}\delta_{ab} \hat{n}^{cd}\right)+O(s^2) \end{align} \end{subequations} and in covariant form by \begin{align}\label{hS1 covariant} h^{\S(1)}_{\mu\nu} &=\frac{2 m}{\lambda{\sf s}} g^{\alpha'}_{\mu} g^{\beta'}_{\nu} \left(g_{\alpha'\beta'} + 2 u_{\alpha'} u_{\beta'}\right) +\frac{m\lambda^0}{{\sf s}^3} g^{\alpha'}_\mu g^{\beta'}_{\nu} \left[ \left({\sf s}^2- \r^2\right) a_{\sigma} (g_{\alpha'\beta'}+2u_{\alpha'} u_{\beta'})\right.\nonumber\\ &\quad \left.+8\r{\sf s}^2 a_{(\alpha'} u_{\beta')}\right] + \lambda\frac{mg^{\alpha'}_\mu g^{\beta'}_{\nu}}{3{\sf s}^3} \Big[\!\left(\r^2 - {\sf s}^2\right) \left(g_{\alpha'\beta'}+2u_{\alpha'} u_{\beta'}\right)R_{u\sigma u\sigma} \nonumber\\ &\quad - 12{\sf 
s}^4 R_{\alpha' u\beta' u}- 12\r{\sf s}^2 u_{(\alpha'}R_{\beta')u\sigma u}+12{\sf s}^2 (\r^2 + {\sf s}^2)\dot{a}_{(\alpha'}u_{\beta')}\nonumber\\ &\quad + \r (3{\sf s}^2-\r^2)\dot{a}_{\sigma}(g_{\alpha'\beta'}+ 2u_{\alpha'} u_{\beta'})\Big] +O(\lambda^2), \end{align} where $x^{\alpha'}$ is an arbitrary point on $\gamma$ near the field point $x^{\alpha}$. In the covariant expressions we have adopted the notation $a_\sigma:=a_{\alpha'}\sigma^{\alpha'}$, $R_{u\sigma u\sigma}:=R_{\mu'\alpha'\nu'\beta'}u^{\mu'}\sigma^{\alpha'}u^{\nu'}\sigma^{\beta'}$, etc. The quantity $\r:=u_{\mu'}\sigma^{\mu'}$ is a measure of the proper time between $x^{\alpha'}$ and $x^\alpha$. Equations~\eqref{hS1 FW} and \eqref{hS1 covariant} are given in Ref.~\cite{Pound:2014xva} through order $\lambda^2$. Equation~(4.7) of Ref.~\cite{Heffernan:2012su} presents the covariant expansion of $h^{\S(1)}_{\mu\nu}$ through order $\lambda^4$ (omitting acceleration terms). At second order, the self-field involves both the mass and spin of the object. It can be written as the sum of three pieces,\footnote{Ref.~\cite{Pound:2014xva} further divides $h^{\S{\rm R}}_{\mu\nu}$ into two pieces, labeled $h^{\S{\rm R}}_{\mu\nu}$ and $h^{\delta m}_{\mu\nu}$.} \begin{equation}\label{hS2} h^{\S(2)}_{\mu\nu}=h^{\S\S}_{\mu\nu}+h^{\S{\rm R}}_{\mu\nu}+h^{\rm spin}_{\mu\nu}. \end{equation} The spin contribution is \begin{equation} h^{\rm spin}_{\tau a} = \frac{2S_{ai}n^i}{s^2}+O(s^0), \end{equation} where other components are $O(s^0)$, and where $S_{ab}=\epsilon_{abi}S^i$ is the spin tensor and $S^i$ the spin vector. 
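As a quick consistency check (our own \texttt{sympy} sketch, not taken from the references), the leading buffer-region profiles displayed above, the monopole $\sim m/s$ in Eq.~\eqref{hS1 FW} and the dipole-type $\sim S_{ai}n^i/s^2$ in $h^{\rm spin}_{\tau a}$, are static harmonic functions for $s>0$, as expected from the sequence of stationary field equations described earlier:

```python
import sympy as sp

x, y, z, m, S = sp.symbols('x y z m S', positive=True)
s = sp.sqrt(x**2 + y**2 + z**2)

def laplacian(f):
    # flat-space Laplacian delta^ab d_a d_b
    return sum(sp.diff(f, c, 2) for c in (x, y, z))

# Monopole profile of the first-order self-field, h_tautau ~ 2m/s:
assert sp.simplify(laplacian(2*m/s)) == 0

# Dipole-type profile of the spin term, h_tau a ~ S_ai n^i / s^2;
# e.g. a single component with n^i -> n^x gives S x / s^3:
assert sp.simplify(laplacian(S*x/s**3)) == 0
print("leading 1/s and 1/s^2 buffer-region profiles are harmonic for s > 0")
```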
The other two pieces are either quadratic in the mass, \begin{subequations}\label{hSS FW}% \allowdisplaybreaks\begin{align} h^{\S\S}_{\tau\tau} &= -\frac{2m^2}{s^2} - \tfrac{7}{3}m^2\mathcal{E}_{ab} \hat{n}^{ab}+O(s\ln s),\\ h^{\S\S}_{\tau a} &= - \tfrac{10}{3} m^2\mathcal{B}^{bc} \epsilon_{acd} \hat{n}_{b}{}^{d} +O(s\ln s),\\ h^{\S\S}_{ab} &= \frac{\tfrac{8}{3}m^2\delta_{ab} - 7m^2\hat{n}_{ab}}{s^2} + m^2\left(4 \mathcal{E}_{c(a} \hat{n}_{b)}{}^{c} - \tfrac{4}{3} \mathcal{E}_{cd} \delta_{ab} \hat{n}^{cd} + \tfrac{7}{5}\mathcal{E}_{cd} \hat{n}_{ab}{}^{cd}\right)\nonumber\\ &\quad-\tfrac{16}{15}m^2\mathcal{E}_{ab}\ln s +O(s\ln s), \end{align} \end{subequations} or involve products of the mass with the regular field, \begin{subequations}\label{hSR FW}% \begin{align} h^{\S{\rm R}}_{\tau\tau} &= -\frac{m}{s}\left(h^{\R1}_{ab} \hat{n}^{ab}+ \tfrac{1}{3} h^{\R1}_{ab}\delta^{ab}+ 2 h_{\tau\tau}^{\R1}\right)+O(s^0),\\ h^{\S{\rm R}}_{\tau a} &= -\frac{m}{s}\left(h_{\tau b}^{\R1}\hat{n}_a{}^b+ \tfrac{4}{3}h_{\tau a}^{\R1}\right)+O(s^0),\\ h^{\S{\rm R}}_{ab} &= \frac{m}{s}\Big[2h^{\R1}_{c(a}\hat{n}_{b)}{}^c -\delta_{ab} h^{\R1}_{cd} \hat{n}^{cd} - \left(h^{\R1}_{ij}\delta^{ij}+h_{\tau\tau}^{\R1}\right)\hat{n}_{ab}\nonumber\\ &\quad +\tfrac{2}{3} h^{\R1}_{ab} + \tfrac{1}{3}\delta_{ab} h^{\R1}_{cd}\delta^{cd} + \tfrac{2}{3} \delta_{ab} h_{\tau\tau}^{\R1}\Big] +O(s^0). \end{align} \end{subequations} On the right, the components of $h^{\R1}_{\mu\nu}$ are evaluated at $s=0$. At order $s^0$, $h^{\S{\rm R}}_{\mu\nu}$ also depends on first derivatives of $h^{\R1}_{\mu\nu}$ evaluated at $s=0$; at order $s$, it depends on second derivatives of $h^{\R1}_{\mu\nu}$ evaluated at $s=0$; and so on. $h^{\S(2)}_{\alpha\beta}$ is given in Fermi coordinates through order $s$ in Appendix D of Ref.~\cite{Pound:2012dk}. 
In covariant form, these fields are \begin{equation} h^{\rm spin}_{\mu\nu} = \frac{4g^{\alpha'}_{\mu} g^{\beta'}_{\nu}u_{(\alpha'}S_{\beta')\gamma'}\sigma^{\gamma'}}{\lambda^2{\sf s}^3} + O(\lambda^0), \end{equation} with $S_{\alpha'\beta'}:=e^a_{\alpha'}e^b_{\beta'}S_{ab}$, \begin{align}\label{hSS covariant} h^{\S\S}_{\mu\nu}&= \frac{m^2}{\lambda^2{\sf s}^4} g^{\alpha'}_{\mu} g^{\beta'}_{\nu} \Bigl\{ 5{\sf s}^2g_{\alpha'\beta'} -7 \sigma_{\alpha'}\sigma_{\beta'} - 14 \r \sigma_{(\alpha'} u_{\beta')}- (7 \r^2 - 3 {\sf s}^2) u_{\alpha'} u_{\beta'}\Bigr\} \nonumber\\ &\quad - \frac{16}{15} m^2 g^{\alpha'}_{\mu} g^{\beta'}_{\nu} \ln(\lambda{\sf s}) R_{\alpha' u\beta' u} + \frac{m^2\lambda^0}{150 {\sf s}^6} g^{\alpha'}_{\mu} g^{\beta'}_{\nu} \bigg\{10{\sf s}^2 g_{\alpha'\beta'}\left(25 \r^2+{\sf s}^2\right)R_{\sigma u\sigma u}\nonumber\\ &\quad + 20 \r {\sf s}^2 \left[35 \r \sigma_{(\alpha'} R_{\beta')u \sigma u} + \left(35 \r^2 - 31 {\sf s}^2\right) u_{(\alpha'} R_{\beta')u\sigma u} - {\sf s}^2 R_{\sigma(\alpha' \beta') u}\right] \nonumber\\ &\quad + 10 {\sf s}^4 R_{\alpha' \sigma \beta' \sigma} - 350 \r{\sf s}^2\sigma_{(\alpha'}R_{\beta')\sigma u\sigma} - 10{\sf s}^2\bigl(35 \r^2 - 17 {\sf s}^2\bigr) u_{(\alpha'}R_{\beta')\sigma u\sigma} \nonumber\\ &\quad + 2{\sf s}^4 \left(5 \r^2 + 26 {\sf s}^2\right)R_{\alpha' u\beta' u} - 70 \bigl[\left(10 \r^2 - 3 {\sf s}^2\right) \sigma_{\alpha'}\sigma_{\beta'} \nonumber\\ &\quad + 4 \r \left(5 \r^2 - 4 {\sf s}^2\right) u_{(\alpha'}\sigma_{\beta')}\bigr] R_{\sigma u\sigma u} - 20\left(35 \r^4 - 53 \r^2 {\sf s}^2 - 6 {\sf s}^4\right) u_{\alpha'}u_{\beta'} R_{\sigma u\sigma u} \bigg\}\nonumber\\ &\quad +O\left(\lambda\ln \lambda\right), \end{align} and \begin{align}\label{hSR covariant} h^{\S{\rm R}}_{\mu\nu} &= \frac{m}{\lambda{\sf s}^3} g^{\alpha'}_{\mu} g^{\beta'}_{\nu} \Biggl\{g_{\alpha'\beta'}\left[\frac{2}{3}{\sf s}^2h^{\R1}_{\mu'\nu'}g^{\mu'\nu'} - \left(\r^2 - {\sf s}^2\right)h^{\R1}_{uu} - h^{\R1}_{\sigma\sigma} - 
2 \r h^{\R1}_{u\sigma}\right]\nonumber\\ &\quad+{\sf s}^2\delta m_{\alpha'\beta'} -\frac{2}{3} h^{\R1}_{\alpha' \beta'} {\sf s}^2 + 2h^{\R1}_{\sigma(\alpha'} \sigma_{\beta')} + 2\r h^{\R1}_{\sigma(\alpha'}u_{\beta')} - 2 h^{\R1}_{\sigma\sigma} u_{\alpha'} u_{\beta'} \nonumber\\ &\quad - h^{\R1}_{\mu'\nu'} g^{\mu'\nu'}\Bigl[\sigma_{\alpha'}\sigma_{\beta'} + 2\r \sigma_{(\alpha'}u_{\beta')} + (\r^2 - {\sf s}^2) u_{\alpha'}u_{\beta'}\Bigr] + 2\r h^{\R1}_{u(\alpha'}\sigma_{\beta')} \nonumber\\ &\quad+ 2(\r^2-{\sf s}^2)h^{\R1}_{u(\alpha'}u_{\beta')} + 4 h^{\R1}_{u\sigma} \sigma_{(\alpha'} u_{\beta')}- 2 h^{\R1}_{uu} \sigma_{\alpha'} \sigma_{\beta'}\Biggr\} + O\left(\lambda^0\right), \end{align} where \begin{align}\label{dm_SC_cov} \delta m_{\alpha\beta} &= \frac{1}{3}m\left(2h^{\R1}_{\alpha\beta}+g_{\alpha\beta}g^{\mu\nu}h^{\R1}_{\mu\nu}\right) +4mu_{(\alpha}h^{\R1}_{\beta)\mu}u^\mu\nonumber\\ &\quad +m(g_{\alpha\beta}+2u_{\alpha} u_{\beta})u^\mu u^\nu h^{\R1}_{\mu\nu}. \end{align} The covariant expressions for $h^{\S\S}_{\mu\nu}$ and $h^{\S{\rm R}}_{\mu\nu}$ are known through order $\lambda$~\cite{Pound:2014xva} and are available upon request to the authors. The covariant expansion of $h^{\rm spin}_{\mu\nu}$ appears explicitly here for the first time, but it is known to higher order in $\lambda$~\cite{Mathews:2020}. In this section, we have stated results from the so-called {\em self-consistent} expansion of the metric~\cite{Pound:2009sm}. In this framework, the metric is not expanded in an ordinary Taylor series in $\epsilon$. Instead, it takes the form \begin{equation} g^{\rm exact}_{\mu\nu}(x^\alpha,\epsilon) = g_{\mu\nu}(x^\alpha) + \epsilon h^{(1)}_{\mu\nu}(x^\alpha,{\cal P}) + \epsilon^2 h^{(2)}_{\mu\nu}(x^\alpha,{\cal P}) + O(\epsilon^3),\label{self-consistent expansion} \end{equation} where ${\cal P}$ represents a list of system parameters: the worldline $\gamma$ and multipole moments of the small object, along with any evolving external parameters. 
If the small object is orbiting a black hole that is approximately Kerr, the external parameters will consist of small, slowly evolving corrections to the black hole's mass and spin~\cite{Miller:2020bft}. These parameters all evolve with time in an $\epsilon$-dependent way, meaning that Eq.~\eqref{self-consistent expansion} is not a Taylor series; this allowance for $\epsilon$-dependent coefficients is a hallmark of singular perturbation theory~\cite{Kevorkian-Cole:96}. In the self-force problem, it must be allowed in order to construct a uniformly accurate approximation on large time scales~\cite{Pound:2010pj}. It will lead naturally into the multiscale expansion described in the later sections of this review. If we use an ordinary Taylor series in place of Eq.~\eqref{self-consistent expansion}, then $z^\mu$ is replaced with the series expansion $z^\mu(\tau,\epsilon) = z^\mu_0(\tau) + \epsilon z^\mu_1 (\tau) + \ldots$ (referred to as a Gralla-Wald expansion after the authors of Ref.~\cite{Gralla:2008fg}). Here $z^\mu_0$ is a geodesic of the external background spacetime, and the local analysis described above is carried out with series expansions in powers of distance from this geodesic. The acceleration $a^\mu$ in this approach is thus set to zero in all the above formulas. The $\epsilon$ dependence of $z^\mu$ then manifests itself in $h^{\S(2)}_{\mu\nu}$ through an additional term, \begin{align}\label{dipole term} h^{\rm dipole}_{\mu\nu} &= \frac{2m_i n^i(g_{\mu\nu}+2u_\mu u_\nu)}{s^2}+O(1/s)\\ &= g^{\alpha'}_{\mu} g^{\beta'}_{\nu}\left[-\frac{2m_{\mu'} \sigma^{\mu'}}{\lambda^2{\sf s}^3}(g_{\alpha'\beta'}+2u_{\alpha'} u_{\beta'})+O(1/\lambda)\right], \end{align} proportional to a mass dipole moment $m^\alpha = e_a^\alpha m^a = m z^\alpha_1$. $m^\alpha$ describes the position of the object's center of mass relative to $z^\mu_0$. It appears in the second-order metric perturbation in the outer expansion but in the zeroth-order inner metric, $g^{\rm obj}_{\mu\nu}$. 
By setting $m_i$ to zero in the self-consistent expansion, one defines $\gamma$ to be the center of mass at this order. A correction to $m_i$ generically appears in $h^{(3)}_{\mu\nu}$ and in $g^{\rm obj}_{\mu\nu}+\epsilon H^{(1)}_{\mu\nu}$, and it is likewise set to zero in a self-consistent expansion~\cite{Pound:2012nt,Pound:2017psq}. In a Gralla-Wald expansion, $m_i$ and corrections to it are allowed to be nonzero; for that case, $h^{\rm dipole}_{\mu\nu}$ is given through order $\lambda^0$ in Fermi coordinates in Sec.~IVC of Ref.~\cite{Pound:2009sm} (where $m_i$ is denoted $M_i$). Explicit expressions, in both Fermi-coordinate and covariant form, are known through order $\lambda$~\cite{Pound:2014xva} and are available upon request. In the context of a binary, the small object inspirals, eventually moving very far from any initially nearby background geodesic. This causes $z^\mu_1$ and higher corrections to grow large with time, spelling the breakdown of the Gralla-Wald expansion. For this reason, we have focused on the self-consistent formulation in this review. Refs.~\cite{Pound:2009sm,Pound:2015fma,Pound:2015tma} provide detailed explications of the relationship between the two types of expansions. \subsection{Equations of motion} Along with the local form of the metric perturbations, the Einstein equations determine the motion of the small object and the evolution of its mass and spin. 
Specifically, if we let $\gamma$ be the object's center of mass (by setting the mass dipole moment in $h^{(2)}_{\mu\nu}$ to zero), then the vacuum field equations uniquely determine the first-order equations of motion~\cite{Mino:1996nk,Gralla:2008fg,Pound:2009sm} \begin{equation} \frac{D^2 z^\alpha}{d\tau^2} = -\frac{1}{2}P^{\alpha\delta}\!\left(2h^{{\rm R}(1)}_{\delta\beta;\gamma}-h^{{\rm R}(1)}_{\beta\gamma;\delta}\right)\!u^\beta u^\gamma -\frac{1}{2m}R^\alpha{}_{\beta\gamma\delta}u^\beta S^{\gamma\delta}+ O(\epsilon^2)\label{EOM spin} \end{equation} and \begin{equation}\label{mass and spin} \frac{dm}{d\tau} = O(\epsilon^2) \quad \text{and}\quad \frac{DS^{\alpha\beta}}{d\tau} = O(\epsilon^3). \end{equation} The first term on the right of Eq.~\eqref{EOM spin} is referred to as the first-order gravitational self-force (per unit mass) or as the MiSaTaQuWa force (after the authors of Refs.~\cite{Mino:1996nk,Quinn:1996am}); the second term on the right is the Mathisson-Papapetrou spin force~\cite{Mathisson:1937zz,Papapetrou:1951pa}. Equation~\eqref{EOM spin} represents the leading correction to geodesic motion for a gravitating, extended, compact object.\footnote{For a non-compact object, finite-size effects from higher multipole moments will dominate over self-force effects.} However, these equations are equivalent to those of a test body, not in the background or in the physical spacetime but in the effective metric $\breve{g}_{\mu\nu}$. In particular, Eq.~\eqref{EOM spin} can be rewritten as \begin{equation} \frac{\breve{D}^2 z^\mu}{d\breve{\tau}^2} = -\frac{1}{2m}\breve{R}^\alpha{}_{\beta\gamma\delta}\breve{u}^\beta S^{\gamma\delta}+ O(\epsilon^2), \end{equation} where $\breve{\tau}$ is proper time in $\breve g_{\mu\nu}$, $\frac{\breve{D}}{d\breve\tau}:=\breve{u}^\alpha \breve\nabla_{\!\alpha}$, $\breve\nabla$ is a covariant derivative compatible with $\breve g_{\mu\nu}$, and $\breve u^\mu = \frac{dz^\mu}{d\breve\tau}$. 
This is the equation of motion of a spinning test particle. Similarly, the evolution equations~\eqref{mass and spin} are the equations of a test mass and spin, which are constant and parallel propagated, respectively. If we specialize to a spherical, nonspinning object (and set the subleading mass dipole moment to zero), the field equations determine the second-order equation of motion~\cite{Pound:2012nt,Pound:2017psq} \begin{equation} \frac{D^2 z^\alpha}{d\tau^2} = -\frac{1}{2}P^{\alpha\mu}\left(g_\mu{}^\delta-h^{{\rm R}\ \delta}_\mu\right)\!\left(2h^{{\rm R}}_{\delta\beta;\gamma}-h^{{\rm R}}_{\beta\gamma;\delta}\right)\!u^\beta u^\gamma +O(\epsilon^3)\label{EOM2} \end{equation} and $\frac{dm}{d\tau} = O(\epsilon^3)$. This can be rewritten as the geodesic equation in $\breve{g}_{\mu\nu}$, \begin{equation}\label{EOM2 v2} \frac{\breve{D}^2 z^\mu}{d\breve{\tau}^2} = O(\epsilon^3). \end{equation} See Sec.~IIIA of Ref.~\cite{Pound:2015fma} for the (simple) steps involved in rewriting Eq.~\eqref{EOM2} as Eq.~\eqref{EOM2 v2}. For a generic compact object, the spin and quadrupole moments will both appear in Eq.~\eqref{EOM2}. Although the second-order equations of motion have not been derived directly from the field equations in that case, it is known that at least through this order, the motion remains that of a test body in {\em some} effective metric~\cite{Thorne:1984mz}. At least for a material body, this remains true even in the fully nonlinear setting~\cite{Harte:2011ku}. The spin's evolution and its contribution to the acceleration through second order, extracted from the nonlinear results for a material body, are given in Eq.~(2.11) of Ref.~\cite{Akcay:2019bvk}. In this section we have again presented results for the self-consistent expansion. In the Gralla-Wald approach, one instead obtains evolution equations for the mass dipole moment. 
Such equations are derived at first order in Refs.~\cite{Gralla:2008fg,Pound:2009sm,Gralla:2011zr} and at second order in Ref.~\cite{Gralla:2012db} (see also Ref.~\cite{Pound:2015fma}, which derives such second-order equations in a more compact, parametrization-invariant form). We stress that the equations in this section follow directly from the vacuum Einstein equations, together with a center-of-mass condition, {\em outside} the small object. There is no assumption about the object's internal composition, nor is there any regularization of singular quantities. We refer to Refs.~\cite{Detweiler:2000gt,Rosenthal:2006iy,Detweiler:2011tt} for variants of the approach described here and to Refs.~\cite{Quinn:1996am,Galley:2008ih,Harte:2011ku} for alternatives to the matched-expansions approach. \subsection{Skeleton sources: punctures and particles} After having derived the local form of the metric, and the equations of motion, we can effectively remove the body zone from the problem. We do so by allowing the local forms~\eqref{hRn FW}--\eqref{hSR covariant} to hold all the way down to $\gamma$. This causes the self-field to diverge at $\gamma$, artificially introducing a singular field. However, this does not alter the physics in the buffer region or external universe, and the singularity is more easily handled than the small-scale physics of the small object. Once the fields have been extended to $\gamma$, one can solve the field equations throughout the spacetime using either a puncture scheme or point-particle methods. The puncture scheme is the more general of the two approaches. 
We define the {\em puncture field} \begin{equation} h^{\P(n)}_{\mu\nu} := h^{\S(n)}_{\mu\nu}{\cal W} \end{equation} as the local expansion of $h^{\S(n)}_{\mu\nu}$ truncated at some order $\lambda^k$, multiplied by a window function ${\cal W}$ that is equal to 1 in a neighbourhood of $z^\alpha$ and transitions to zero at some finite distance from $z^\alpha$.\footnote{Our description may seem (incorrectly) to imply that the puncture field is only defined in a convex normal neighbourhood of the body. For numerical purposes, the puncture is extended over a region of any convenient size. Typically this is done by converting the local, covariant expressions in terms of Synge's world function into expansions in coordinate distance, using, e.g., the Boyer-Lindquist coordinates of the background spacetime. The punctures can then be extended as these coordinate functions. The end result for the combined field $h^{(n)}_{\mu\nu}=h^{{\cal R}(n)}_{\mu\nu}+h^{\P(n)}_{\mu\nu}$ is insensitive to the choice of extension.} This implies that $h^{\P(n)}_{\mu\nu}=h^{\S(n)}_{\mu\nu}+O(\lambda^{k+1})$. We then define the {\em residual field} \begin{equation} h^{{\cal R}(n)}_{\mu\nu}:=h^{(n)}_{\mu\nu}-h^{\P(n)}_{\mu\nu}, \end{equation} which satisfies $h^{{\cal R}(n)}_{\mu\nu}=h^{{\rm R}(n)}_{\mu\nu} + O(\lambda^{k+1})$, making $h^{{\cal R}(n)}_{\mu\nu}$ a $C^k$ field at $\gamma$. Outside the support of $h^{\P(n)}_{\mu\nu}$, $h^{{\cal R}(n)}_{\mu\nu}$ becomes identical to the full field $h^{(n)}_{\mu\nu}$. Moving $h^{\P(1)}_{\mu\nu}$ to the right-hand side of the vacuum field equations, we obtain field equations for $h^{{\cal R}(n)}_{\mu\nu}$:\footnote{In the self-consistent approach, some care is required in formulating these equations. Specifically, they can only be split into a sequence of equations, one at each order in $\epsilon$, after imposing a gauge condition~\cite{Pound:2009sm}; this is required in order to allow the puncture to move on an accelerated trajectory. 
We do not belabour this point because we ultimately formulate the equations in a somewhat different, multiscale form tailored to binary inspirals.} \begin{align} G^{(1)}_{\mu\nu}[h^{{\cal R}(1)}] &= - G^{(1)}_{\mu\nu}[h^{\P(1)}]=:S^{\rm eff(1)}_{\mu\nu},\\ G^{(1)}_{\mu\nu}[h^{{\cal R}(2)}] &= - G^{(2)}_{\mu\nu}[h^{(1)},h^{(1)}] - G^{(1)}_{\mu\nu}[h^{\P(2)}]=:S^{\rm eff(2)}_{\mu\nu}. \end{align} These equations hold at all points off $\gamma$. The $C^k$ behaviour of the solution is then enforced by defining the effective sources $S^{\rm eff(n)}_{\mu\nu}$ as ordinary integrable functions at $\gamma$, rather than treating $G^{(1)}_{\mu\nu}[h^{\P(n)}]$ in the distributional sense of a linear operator acting on an integrable function; this distinction is important to rule out delta functions in the source, which would create spurious singularities in the residual field. If $k\geq1$, then we can replace $h^{{\rm R}(n)}_{\mu\nu}$ with $h^{{\cal R}(n)}_{\mu\nu}$ in the equations of motion~\eqref{EOM spin} and \eqref{EOM2}. The total field $h^{(n)}_{\mu\nu}=h^{{\cal R}(n)}_{\mu\nu}+h^{\P(n)}_{\mu\nu}$ is also guaranteed to satisfy the physical boundary condition in the buffer region (i.e., the matching condition) and at the outer boundaries of the problem. An alternative to the puncture scheme is to solve directly for the total fields $h^{(n)}_{\mu\nu}$. Once extended to $\gamma$, they satisfy \begin{align} G^{(1)}_{\mu\nu}[\epsilon h^{(1)} + \epsilon^2h^{(2)}] + \epsilon^2 G^{(2)}_{\mu\nu}[h^{(1)},h^{(1)}] = 8\pi T_{\mu\nu} + O(\epsilon^3),\label{skeleton EFE} \end{align} where here we {\em do} interpret each term on the left-hand side in a distributional sense. The stress-energy tensor is then defined by the left-hand side. 
Through second order, it can be shown to be the stress-energy of a spinning particle in the effective metric~\cite{DEath:1975jps,Gralla:2008fg,Pound:2009sm,Pound:2012dk,Upton:2021}:\footnote{At second order, this is true in a class of highly regular gauges. In other gauges, it requires a direct use of the puncture via a particular distributional definition of the nonlinear quantity $G^{(2)}_{\mu\nu}[h^{(1)},h^{(1)}]$~\cite{Upton:2021}.} \begin{equation} T_{\mu\nu} = m\int_\gamma \breve{u}_\mu \breve{u}_\nu \breve{\delta}(x,z(\breve\tau))d\breve{\tau} + \int_\gamma \breve u_{(\mu}S_{\nu)}{}^{\alpha}\breve{\nabla}_{\!\alpha}\breve{\delta}(x,z(\breve\tau))d\breve\tau,\label{skeleton Tab} \end{equation} where $\breve{\delta}(x,x')=\frac{\delta^4(x^\alpha-x'^\alpha)}{\sqrt{-\breve{g}}}$ and $\breve{u}_\mu:=\breve{g}_{\mu\nu}\breve{u}^\nu$. We refer to this point-particle stress-energy as the {\em Detweiler stress-energy} after the author of Ref.~\cite{Detweiler:2011tt}. Like the equations of motion, the point-particle approximation is a derived consequence of the vacuum Einstein equations and the matching condition, rather than an input. In cases where the point-particle method is well defined, it and the puncture scheme yield identical full fields $h^{(n)}_{\mu\nu}$. However, unlike a puncture scheme, a point-particle method does not yield the regular fields $h^{{\rm R}(n)}_{\mu\nu}$ as output. The regular fields, and self-forces, must instead be extracted from $h^{(n)}_{\mu\nu}$. This is most often done using the method of {\em mode-sum regularization}~\cite{Barack:2001gx,Barack:2002mh} reviewed in detail in Refs.~\cite{Barack:2009ux,Wardell:2015kea} and sketched in Sec.~\ref{mode decompositions of hS} below. We will refer to both the effective sources $S^{\rm eff(n)}_{\mu\nu}$ and the point-particle source $T_{\mu\nu}$ as {\em skeleton sources}. 
This terminology follows Mathisson's notion~\cite{Mathisson:1937zz,Dixon:2015vxa} of a ``gravitational skeleton'' (see also Refs.~\cite{Dixon:1970zza,Dixon:1970zz,Dixon:74}): an extended body can be represented by a singularity equipped with an infinite set of multipole moments. Punctures provide a generalization of this concept to settings where the singularities are too strong to be represented by distributions. For that reason, although the nomenclature of punctures and effective sources originated from methods of solving the first-order field equations in Refs.~\cite{Barack:2007we,Vega:2007mc}, punctures have a more fundamental role at second and higher orders~\cite{Rosenthal:2006nh,Rosenthal:2006iy,Detweiler:2011tt,Gralla:2012db,Pound:2012dk,Pound:2015tma}. For the same reason, we have presented punctures as a more primitive concept than the point-particle stress-energy. In either approach, the skeleton sources presented here apply equally for all compact objects, whether black holes or material bodies. The only distinguishing feature of a material body would be a spin that surpasses the Kerr bound (i.e., $|S^i|>m^2$). However, at third order in perturbation theory, the quadrupole moment will appear in the perturbation $h^{(3)}_{\mu\nu}$. Unlike the mass and dipole moments, the quadrupole moment is not governed by the Einstein equation~\cite{Dixon:74,Harte:2011ku,Harte:2014wya}, and its evolution must be determined from the object's equation of state. Hence, at third order the interior composition of the object begins to influence the external metric, and we can begin to distinguish between black holes and material bodies. But note that the quadrupole moments of compact objects differ primarily due to their differing tidal deformability, and this difference is suppressed by an additional five powers of $\epsilon$~\cite{Binnington:2009bb}, suggesting it is almost certainly irrelevant for small-mass-ratio binaries. 
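The essential mechanism of a puncture scheme can also be seen in a toy model (our illustrative analogy, not drawn from the references): a static scalar point charge in flat space, with exact field $\phi=q/r$. Multiplying the singular field by a window function produces a puncture whose effective source is finite at the particle, while the residual field is finitely differentiable there and satisfies the puncture-scheme field equation away from $r=0$:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
q, r0 = sp.symbols('q r_0', positive=True)

# Exact field of a static scalar point charge in flat space: phi = q/r.
# Puncture: the singular field times a window W that is ~1 near r = 0 and
# decays away from the particle.  Here 1 - W = O(r^4), so the residual
# field ~ q r^3 / r0^4 is C^2 at the particle.
W = sp.exp(-(r/r0)**4)
phi_S = q/r
phi_P = phi_S*W
phi_R = phi_S - phi_P          # residual field = q(1 - W)/r

def radial_laplacian(f):
    # spherically symmetric Laplacian (1/r^2) d/dr (r^2 df/dr)
    return sp.diff(r**2*sp.diff(f, r), r)/r**2

# Effective source: S_eff = -lap(phi_P) for r > 0.  The delta function of
# the point source cancels, leaving a source that is finite at r = 0:
S_eff = sp.simplify(-radial_laplacian(phi_P))
assert sp.limit(S_eff, r, 0) == 0      # finite (in fact vanishing) at r = 0
assert sp.limit(phi_R, r, 0) == 0      # residual field regular at r = 0

# The residual field solves lap(phi_R) = S_eff away from the particle:
assert sp.simplify(radial_laplacian(phi_R) - S_eff) == 0
print("toy effective-source identities verified")
```

Here the window is chosen so that $1-{\cal W}=O(r^4)$; truncating the puncture at higher orders in distance, as in the $O(\lambda^k)$ truncation described above, raises the differentiability of the residual field accordingly.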
\section{Orbital dynamics in Kerr spacetime} The previous section summarized the local problem in self-force theory: the reduction of an extended body to a skeleton source in the Einstein equations, along with an equation of motion for that source. In the remaining sections, we turn to the {\em global problem}: solving the perturbative Einstein equations, coupled to the equation of motion~\eqref{EOM spin} or \eqref{EOM2}, globally in a specific background metric. In the context of a small-mass-ratio binary, the background geometry is the Kerr spacetime of the central black hole. According to the equations of motion, the small body in the binary is only slightly accelerated away from geodesic motion in that background. This section summarizes (i) properties of bound geodesic motion in Kerr spacetime and (ii) how to exploit those properties to analyze accelerated orbits. We emphasize action-angle methods that mesh specifically with our treatment of the Einstein equations in the final section of this review. However, much of our treatment is valid for a more generic acceleration. We warn the reader that the notation in this section differs in several ways from that of the preceding section. The differences are noted in the first subsubsection below. 
\subsection{Geodesic motion} \subsubsection{Constants of motion, separable geodesic equation, and conventions} Geodesics in Kerr spacetime are integrable, with three constants of motion associated with the spacetime's three Killing symmetries: (specific) energy $E=-u_\alpha \xi^\alpha$, (specific) azimuthal angular momentum $L_z = u_\alpha \delta^\alpha_\phi$, and the {\em Carter constant} $Q=u_\alpha u_\beta (\overset{\star\star}{K}{}^{\alpha\beta} - \frac{1}{a^2}\eta^\alpha\eta^\beta)$.\footnote{The constant $K=u_\alpha u_\beta \overset{\star\star}{K}{}^{\alpha\beta}$ is also sometimes referred to as Carter's constant.} Inverting these three equations, together with $g^{\alpha\beta}u_\alpha u_\beta=-1$, for the four-velocity components, we obtain~\cite{Fujita:2009bp} \begin{align} \Sigma^2\left(\frac{dr}{d\tau}\right)^2 &= R(r),\label{drdtau}\\ \Sigma^2\left(\frac{dz}{d\tau}\right)^2 &= Z(z),\label{dzdtau}\\ \Sigma\frac{dt}{d\tau} &= T_r(r) + T_z(z) + aL_z:=\mathscr{f}_t,\label{dtdtau}\\ \Sigma\frac{d\phi}{d\tau} &= \Phi_r(r) + \Phi_z(z) - a E:=\mathscr{f}_\phi.\label{dphidtau} \end{align} Here $(t,r,z:=\cos\theta,\phi)$ refer to Boyer-Lindquist coordinates,\footnote{Refs.~\cite{Drasco:2003ky,Fujita:2009bp} and many other references instead define $z$ as $\cos^2\theta$, with analogous differences in their definitions of the roots $z_n$ defined below.} and \begin{align} R(r) &:= [P(r)]^2-\Delta\left[r^2+(aE-L_z)^2+Q\right],\\ Z(z) &:= Q-\left(Q+a^2\gamma+L_z^2\right)z^2+a^2\gamma\, z^4,\\ T_r(r) &:= \frac{r^2+a^2}{\Delta}P(r),\label{Tr}\\ T_z(z) &:= -a^2E(1-z^2),\label{Tz}\\ \Phi_r(r) &:= \frac{a}{\Delta}P(r),\\ \Phi_z(z) &:= \frac{L_z}{1-z^2},\label{Phiz} \end{align} with $P(r):=E(r^2+a^2)-aL_z$ and $\gamma:=1-E^2$. We opt to use $z$ rather than $\theta$ throughout this section. 
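As a simple check of Eq.~\eqref{drdtau} (our own sketch; the circular-orbit formulas for $E$ and $L_z$ are standard Schwarzschild results, quoted rather than derived here), specialize to $a=Q=0$ with $M=1$. A circular orbit at radius $r_0$ must then be a double root of the radial potential $R(r)$:

```python
import sympy as sp

r, r0 = sp.symbols('r r_0', positive=True)

# Schwarzschild specialization (M = 1, a = 0, Q = 0) of the radial potential:
# R(r) = P(r)^2 - Delta*(r^2 + L^2), with P = E r^2 and Delta = r^2 - 2 r.
E = (r0 - 2)/sp.sqrt(r0*(r0 - 3))      # circular-orbit energy (standard result)
L = r0/sp.sqrt(r0 - 3)                 # circular-orbit angular momentum
Delta = r**2 - 2*r
R = (E*r**2)**2 - Delta*(r**2 + L**2)

# A circular orbit at r = r0 is a double root of R: R(r0) = R'(r0) = 0.
assert sp.simplify(R.subs(r, r0)) == 0
assert sp.simplify(sp.diff(R, r).subs(r, r0)) == 0
print("circular-orbit double root verified; e.g. E at the ISCO r0 = 6:",
      sp.simplify(E.subs(r0, 6)))
```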
The equations for $r(\tau)$ and $z(\tau)$ are coupled, but they are immediately decoupled by adopting a new parameter $\lambda$, called {\rm Mino time}~\cite{Mino:2003yg}, that satisfies \begin{equation}\label{Mino time} \frac{d\lambda}{d\tau} = \Sigma^{-1}. \end{equation} (This is not to be confused with the bookkeeping parameter used in the local expansions of the previous section.) The equations also take a hierarchical form: once $r(\lambda)$ and $z(\lambda)$ are known, Eqs.~\eqref{dtdtau} and \eqref{dphidtau} can be straightforwardly integrated to obtain $t(\lambda)$ and $\phi(\lambda)$. Given this hierarchical form, we will focus on the $r$--$z$ dynamics. In Eq.~\eqref{drdtau}, $R(r)$ is a fourth-order polynomial in $r$, meaning it can also be written as $R(r)=-\gamma(r-r_1)(r-r_2)(r-r_3)(r-r_4)$, with $r_1\geq r_2\geq r_3\geq r_4$. Similarly, in Eq.~\eqref{dzdtau}, $Z(z)=a^2\gamma(z^2-z_1^2)(z^2-z_2^2)$, with $|z_1|>|z_2|$. For bound orbits, the radial motion oscillates between the turning points $r_a=r_1$ (apoapsis) and $r_p=r_2$ (periapsis), and the polar motion between $z_{\rm max}=|z_2|$ and $z_{\rm min}=-|z_2|$.\footnote{The other roots ($r_3$, $r_4$, and $z_1$) do not correspond to physical turning points. In particular, $|z_1|>1$.} Hence, the geodesic is confined to a torus-like region $r_p\leq r\leq r_a$, $|z|<z_{\rm max}$. If $Q=0$, the motion is confined to the equatorial plane $z=0$. If $a=0$ (i.e., in Schwarzschild spacetime), the geodesic is likewise confined to a plane, which, due to Schwarzschild's spherical symmetry, can be freely chosen as $z=0$. However, a generic orbit ergodically fills the torus-like region. For convenience in the remaining sections, we use lowercase Latin indices from the beginning of the alphabet ($a,b,c$) to denote $r$ or $z$ and define $\bm{x} = (r,z)$. However, repeated indices, as in an expression such as $f_a x^a$, are not summed over; instead, such sums will be written as $\bm{f}\cdot \bm{x} := f_r x^r + f_z x^z$. 
An expression such as $f_a(x^a)$ will denote either one of $f_r(r)$ or $f_z(z)$, while an expression such as $f_a(\bm{x})$ will denote either one of $f_r(r,z)$ or $f_z(r,z)$. $f_\alpha(x^\beta)$ will denote $f_\alpha(t,r,z,\phi)$. We use lowercase Latin indices from the middle of the alphabet ($i,j,k$) to label elements of a set of orbital parameters. For example, $P^i=(E,L_z,Q)$. For these indices, unlike $a,b,c$, we use Einstein summation. We use $f$ throughout this section to denote a generic function, not the specific function $f(r)$ that appears in the Schwarzschild metric~\eqref{eq:SchwMetric}. An overdot will denote a derivative with respect to $\lambda$. Finally, we preemptively refer the reader to Refs.~\cite{Schmidt:2002qk,Mino:2003yg,Drasco:2003ky,Drasco:2005kz,Fujita:2009bp,Warburton:2013yj,Stein:2019buj} for additional details about geodesic orbits in Kerr. \subsubsection{Quasi-Keplerian parametrization} Unlike Keplerian orbits, generic geodesics in Kerr do not close; the periods of radial, polar, and azimuthal motion are all, generically, incommensurate. Nevertheless, because of their doubly oscillatory nature, it is often useful in applications to express the geodesic trajectories in a quasi-Keplerian form, replacing the constants $\{E,L_z,Q\}$ with an alternative set $\{p,e,z_{\rm max}\}$. In terms of these, $r$ and $z$ can be written in the manifestly periodic form~\cite{Drasco:2003ky} \begin{align} r(\psi_r) &= \frac{pM}{1+e\cos\psi_r},\label{r(psi)}\\ z(\psi_z) &= z_{\rm max}\cos\psi_z,\label{z(psi)} \end{align} where, for a bound orbit, $0 \leq e < 1$. The phases $(\psi_r,\psi_z)$ are multiples of $2\pi$ at periapsis and at $z=z_{\rm max}$, respectively. Unlike $r$ and $z$, which change direction every half cycle, $\psi_r$ and $\psi_z$ grow monotonically, leading to better numerical behavior at the turning points. Because none of the periods are commensurate, $\psi_r$ and $\psi_z$ evolve independently (of each other and of $\phi$). 
Using $\frac{d\psi_a}{d\lambda} =\frac{dx^a}{d\lambda}/\frac{dx^a}{d\psi_a}$, one finds~\cite{Drasco:2003ky} \begin{align} \frac{d\psi_r}{d\lambda} &= \frac{M\sqrt{\gamma[(p-p_3)-e(p+p_3\cos\psi_r)][(p-p_4)+e(p-p_4\cos\psi_r)]}}{1-e^2}\nonumber\\ &:=\mathscr{f}_r,\label{psi_r dot}\\ \frac{d\psi_z}{d\lambda} &= \sqrt{a^2\gamma(z^2_1-z^2_{\rm max}\cos^2\psi_z)}:=\mathscr{f}_z,\label{psi_theta dot} \end{align} where $p_3:=r_3(1-e)/M$ and $p_4:=r_4(1+e)/M$. These can be integrated subject to arbitrary choices of initial phase $\psi_a(0)=\psi^0_a$. The parameters $\{p,e,z_{\rm max}\}$, unlike $\{E,L_z,Q\}$, are related directly to the coordinate shape of the orbit, specifically to its turning points. Equation~\eqref{r(psi)} is the formula for an ellipse, and it implicitly defines $p$ and $e$ to be the semi-latus rectum and eccentricity of that ellipse, related to the periapsis and apoapsis by \begin{equation}\label{rp and ra} r_p = \frac{pM}{1+e} \quad \text{and}\quad r_a = \frac{pM}{1-e}. \end{equation} As stated above, $z_{\rm max}=z_2$, but to further the analogy with Keplerian orbits, we can also define an inclination angle $\iota$ such that\footnote{Ref.~\cite{Drasco:2003ky} and some other authors use the alternative, inequivalent definition $\cos\iota = \frac{L_z}{\sqrt{L_z^2+Q}}$. This does not describe the maximum coordinate inclination angle but has other useful properties~\cite{Hughes:2002ei}.} \begin{equation}\label{zmax} z_2 = z_{\rm max} = \sin\iota. \end{equation} The remaining roots of $R(r)$ and $Z(z)$ are also compactly expressed in terms of these parameters~\cite{Fujita:2009bp}: \begin{equation} r_3 = \frac{1}{2}\left(\alpha+\sqrt{\alpha^2-4\beta}\right) \quad \text{and}\quad r_4 = \beta/r_3, \end{equation} where $\alpha:=2M/\gamma-(r_a+r_p)$ and $\beta:=a^2 Q/(\gamma r_a r_p)$, and \begin{equation} z_1 = \sqrt{\frac{Q}{a^2\gamma z_{\rm max}^2}}. \end{equation} These expressions are in a ``mixed'' form that involves both sets of constants.
However, $\{E,L_z,Q\}$ can be written in terms of $\{p,e,\iota\}$ as~\cite{Drasco:2005kz}\footnote{Note that $r_1$ and $r_2$ have the opposite meaning in Ref.~\cite{Drasco:2005kz} than their meaning here. Our notation for the roots $r_n$ follows Ref.~\cite{Fujita:2009bp}.} \begin{align} E^2 &= \frac{2|d,g,h|-|d,h,f|-2\chi\sqrt{|d,g,h|^2 +|h,d,g,h,f|+|h,d,h,g,f|}}{|f,h|^2+4|f,g,h|},\label{E(pi)}\\ L_z &= -\frac{g_p ME}{h_p}+M\chi\sqrt{\frac{g_p^2 E^2}{h_p^2}+\frac{f_pE^2-d_p}{h_p}},\label{L(pi)}\\ Q &= z_{\rm max}^2\left(a^2\gamma+\frac{L_z^2}{\cos^2\iota}\right),\label{Q(pi)} \end{align} where $\chi:={\rm sgn}(L_z)$ is $+1$ for prograde orbits and $-1$ for retrograde, \begin{align} d(r) &:=\Delta(r^2+z_{\rm max}^2a^2)/M^4,\\ f(r) &:=(r/M)^4+a^2[r(r+2M)+z_{\rm max}^2\Delta]/M^4,\\ g(r) &:=2ar/M^2, \\ h(r) &:=[r(r-2M)+\Delta\tan^2\iota]/M^2, \end{align} and a subscript $a$ or $p$ indicates evaluation at $r_a$ or $r_p$. The quantities $|\cdot|$ appearing in $E^2$ are determinants or products of determinants that we define recursively as $| f,g|:=f_p g_a-f_ag_p$ and $|f,g,\ldots|:=|f,g| |g,\ldots|$. Given the parametrizations~\eqref{r(psi)} and \eqref{z(psi)} and the equations of motion~\eqref{dtdtau} and \eqref{dphidtau}, $t(\lambda)$ and $\phi(\lambda)$ can be written as \begin{align} t(\lambda) &= t_0 + t_r(\psi_r(\lambda)) + t_z(\psi_z(\lambda)) +aL_z\lambda,\label{t(psi)}\\ \phi(\lambda) &= \phi_0 + \phi_r(\psi_r(\lambda)) + \phi_z(\psi_z(\lambda)) -aE\lambda,\label{phi(psi)} \end{align} with \begin{equation} t_a(\psi_a) = \int_{\psi^0_{a}}^{\psi_a}\frac{T_a(\psi'_a)}{\mathscr{f}_a(\psi'_a)}d\psi'_a \quad \text{and} \quad \phi_a(\psi_a) = \int_{\psi^0_{a}}^{\psi_a}\frac{\Phi_a(\psi'_a)}{\mathscr{f}_a(\psi'_a)}d\psi'_a.\label{t_a(psi_a)} \end{equation} Here $t_0$ and $\phi_0$ are integration constants. This completes the quasi-Keplerian description of geodesic orbital motion.
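In the Schwarzschild, equatorial limit ($a=0$, $z_{\rm max}=0$), Eqs.~\eqref{E(pi)}--\eqref{Q(pi)} reduce to simple closed forms, which provides a quick consistency check: the turning points~\eqref{rp and ra} must be simple roots of $R(r)$. A minimal Python sketch (the reduced expressions for $E$ and $L_z$ are standard results that we quote without derivation; $M=1$):

```python
import math

def E_L_schwarzschild(p, e):
    """Standard Schwarzschild (a = 0, equatorial) values of E and L_z, M = 1.
    These closed forms are assumed, not derived in the text."""
    E = math.sqrt((p - 2 - 2*e)*(p - 2 + 2*e)/(p*(p - 3 - e*e)))
    Lz = p/math.sqrt(p - 3 - e*e)
    return E, Lz

def R_schwarzschild(r, E, Lz):
    """Radial potential R(r) with a = 0, Q = 0 and M = 1."""
    return (E*r*r)**2 - (r*r - 2*r)*(r*r + Lz*Lz)

p, e = 9.0, 0.3
E, Lz = E_L_schwarzschild(p, e)
rp, ra = p/(1 + e), p/(1 - e)   # periapsis and apoapsis, Eq. (rp and ra)
```

Both turning points make the radial potential vanish, which pins down $E$ and $L_z$ uniquely for given $(p,e)$.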
Trajectories are described by the three constants of motion $p^i:=(p,e,\iota)$ and the four secularly growing phase variables $\psi_\alpha:=(t,\psi_r,\psi_z,\phi)$. A given trajectory is uniquely specified by the set of seven constants $\{p,e,\iota, t_0,\psi_r^0,\psi_z^0,\phi_0\}$, called orbital elements. The solution to the geodesic equation can also be put in closed, analytical form~\cite{Fujita:2009bp} by expressing $\psi_\alpha(\lambda)$ in terms of elliptic integrals and their inverses (the Jacobi elliptic functions). \subsubsection{Fundamental Mino frequencies and action angles}\label{geodesic Mino angle variables} It is often essential to decompose quantities on the worldline into Fourier series, particularly when solving the perturbative Einstein equations in the frequency domain. This procedure is expedited by knowing the orbit's fundamental frequencies. In this section, we summarize the calculation of frequencies and of phase variables (action angles) associated with those frequencies. Unlike the phases $\psi_\alpha$, the angle variables are strictly linear in $\lambda$, facilitating Fourier expansions in that time variable. In the right-hand sides of Eqs.~\eqref{dtdtau}, \eqref{dphidtau}, \eqref{psi_r dot}, and \eqref{psi_theta dot}, we have defined the ``frequencies'' $\mathscr{f}_\alpha(\bm{\psi})$ as the rates of change of $\psi_\alpha$, \begin{equation} \frac{d\psi_\alpha}{d\lambda} = \mathscr{f}_\alpha(\bm{\psi}). \end{equation} The true frequencies $\Upsilon_\alpha$ associated with $\lambda$ are the average rates of change of $\psi_\alpha$, \begin{equation}\label{Upsilon = <f>} \Upsilon_\alpha = \left\langle\mathscr{f}_\alpha\right\rangle_\lambda, \end{equation} and the corresponding action angles are \begin{equation} q_\alpha = \Upsilon_\alpha \lambda + q^0_\alpha, \end{equation} with arbitrary constants $q^0_\alpha$.
For a function $f[r(\lambda),z(\lambda)]$ on the worldline, the average is defined as \begin{align} \left\langle f\right\rangle_\lambda := \lim_{\Lambda\to\infty}\frac{1}{2\Lambda}\int_{-\Lambda}^\Lambda f d\lambda. \label{lambda average} \end{align} For a generic, nonresonant orbit, this average agrees with the {\em torus average}\footnote{We focus only on functions of $r$ and $z$, which are automatically periodic functions of the intrinsic phases $\bm{\psi}$ and $\bm{q}$. The averaging operation immediately generalizes in the natural way to functions $f[z^\alpha(\lambda)]$ that are periodic in $t$ and $\phi$.} \begin{equation} \left\langle f\right\rangle_{\bm{q}} = \frac{1}{(2\pi)^2}\oint f d^2q. \label{q average} \end{equation} We use $\oint d^2q$ to denote $\int_0^{2\pi}dq_r\int_0^{2\pi}dq_z$ and $\oint d^2\psi$ for the analogous integral over $\bm{\psi}$. To simplify the analysis, we choose our phase space coordinates $\bm{q}$ such that $q_r$ vanishes at some periapsis and $q_z$ vanishes at some $z=z_{\rm max}$. We furthermore choose $q_t$, $q_\phi$, our spacetime coordinates $t$ and $\phi$, and our parameter $\lambda$ such that they all vanish at some particular passage through periapsis. These choices, which do not represent any loss of generality, imply \begin{subequations}\label{q and psi ICs} \begin{align} q^0_\alpha &= \psi_\alpha^0 = 0 \quad\text{for } \alpha=t,r,\phi, \\ q^0_z &= - \Upsilon_z\lambda^0_z, \end{align} \end{subequations} where $\lambda^0_z$ is the first value of $\lambda$ at which $z=z_{\rm max}$. $\psi^0_z$ can be inferred from $q^0_z$. One can easily do without these specifications if desired. With our choices, $q_a$ represents the mean growth of $\psi_a$ from the first radial or polar turning point, and we can express it in terms of $\psi_a$ as \begin{equation}\label{q_a(psi_a)} q_a(\psi_a) = \Upsilon_a \int_{0}^{\psi_a}\frac{d\psi'_a}{\mathscr{f}_a(\psi_a')}.
\end{equation} This allows us to straightforwardly write the torus average as an integral over $\bm{\psi}$, \begin{equation} \left\langle f\right\rangle_{\bm{q}} = \frac{1}{\Lambda_r\Lambda_z}\displaystyle\oint \frac{ f d^2\psi}{\mathscr{f}_r(\psi_r) \mathscr{f}_z(\psi_z)}, \label{psi average} \end{equation} where $\Lambda_a = \int_{0}^{2\pi}\frac{d\psi_a}{\mathscr{f}_a(\psi_a)}$ is the radial or polar period with respect to $\lambda$. Although they agree generically, $\langle f\rangle_\lambda$ and $\langle f\rangle_{\bm{q}}$ differ in the special case of resonant orbits, discussed in later sections. For the $r$ and $z$ motion, the frequencies reduce to $\Upsilon_a = 2\pi/\Lambda_a$, which can be analytically evaluated to~\cite{Fujita:2009bp} \begin{align} \Upsilon_r &= \frac{\pi\sqrt{\gamma(r_a-r_3)(r_p-r_4)}}{2{\sf K}(k_r)},\\ \Upsilon_z &= \frac{\pi\sqrt{a^2\gamma}\,z_1}{2{\sf K}(k_z)}, \end{align} where \begin{align} {\sf K}(x)&:=\int_0^{\pi/2}\frac{d\theta}{\sqrt{1-x\sin^2\theta}} \end{align} is the complete elliptic integral of the first kind, and its arguments are $k_r:=\frac{r_a-r_p}{r_a-r_3}\frac{r_3-r_4}{r_p-r_4}$ and $k_z:=(z_{\rm max}/z_1)^2$. The frequencies of $t$ and $\phi$ motion can also be found analytically. Because of the additive forms of $\frac{dt}{d\lambda}$ and $\frac{d\phi}{d\lambda}$ in Eqs.~\eqref{dtdtau} and \eqref{dphidtau}, the averages reduce to a sum of one-dimensional integrals.
Evaluating those integrals leads to~\cite{Warburton:2013yj} \begin{align} \Upsilon_t &= \frac{E}{2}\left[r_3(r_a+r_p+r_3) - r_a r_p + (r_a + r_p + r_3 + r_4)F_r +(r_a-r_3)(r_p-r_4)G_r\right] \nonumber\\ &\quad + \frac{2M}{r_+ - r_-}\left[ \frac{(4M^2E - aL_z)r_+ - 2Ma^2E}{r_3 - r_+}\left(1-\frac{F_+}{r_p-r_+}\right) - (+\leftrightarrow -) \right]\nonumber\\ &\quad +4M^2E + \frac{EQ(1-G_z)}{\gamma\, z_{\rm max}^2}+ 2ME(r_3+F_r) ,\label{Gamma}\\ \Upsilon_\phi &= \frac{a}{r_+-r_-}\left[\frac{2MEr_+-aL_z}{r_3-r_+}\left(1-\frac{F_+}{r_p-r_+}\right)-(+\leftrightarrow-)\right]\nonumber\\ &\quad +\frac{L_z\Pi(z^2_{\rm max},k_z)}{{\sf K}(k_z)}, \end{align} where we have introduced $G_a := \frac{{\sf E}(k_a)}{{\sf K}(k_a)}$ and $F_A:=(r_p-r_3)\frac{\Pi(h_A,k_r)}{{\sf K}(k_r)}$ for $A=\{r,+,-\}$, with $h_r = \frac{r_a-r_p}{r_a-r_3}$ and $h_\pm:=\frac{(r_a-r_p)(r_3-r_{\pm})}{(r_a-r_3)(r_p-r_{\pm})}$. Here $r_\pm = M \pm \sqrt{M^2-a^2}$ denote the inner and outer horizon radii, and \begin{align} {\sf E}(x)&:=\int_{0}^{\pi/2}d\theta\sqrt{1-x\sin^2\theta},\\ \Pi(x,y)&:=\int_0^{\pi/2}\frac{d\theta}{(1-x\sin^2\theta)\sqrt{1-y\sin^2\theta}} \end{align} are the complete elliptic integrals of the second and third kind, respectively. In terms of the angle variables, a quantity $f(r,z)$ on the worldline can be expanded in the Fourier series \begin{equation}\label{Fourier q} f[r(\lambda),z(\lambda)] = \sum_{\bm{k}}f^{(q)}_{\bm{k}} e^{-iq_{\bm{k}}(\lambda)}, \end{equation} where $q_{\bm{k}} := \bm{k}\cdot\bm{q}= k_r q_r + k_z q_z$, and unless stated otherwise, sums range over $\bm{k}\in\mathbb{Z}^2$. 
The coefficients are given by \begin{equation}\label{Fourier q coeffs} f^{(q)}_{\bm{k}} = \frac{1}{(2\pi)^2}\oint f e^{iq_{\bm{k}}}d^2q, \end{equation} which can also be calculated as \begin{equation}\label{Fourier q coeffs from psi} f^{(q)}_{\bm{k}} = \frac{\Upsilon_r\Upsilon_z}{(2\pi)^2}\oint \frac{f e^{iq_{\bm{k}}(\bm{\psi})}}{\mathscr{f}_r(\psi_r)\mathscr{f}_z(\psi_z)} d^2\psi \end{equation} with $q_{\bm{k}}(\bm{\psi}) = k_r q_r(\psi_r)+k_z q_z(\psi_z)$ given by Eq.~\eqref{q_a(psi_a)}. The torus average of the function (and infinite $\lambda$ average for nonresonant orbits) is the zero mode in the Fourier series: $\left\langle f\right\rangle_{\bm{q}} = f^{(q)}_{00}$. Using such Fourier expansions, we can invert Eq.~\eqref{q_a(psi_a)} to write the phases $\psi_\alpha$ in terms of the angle variables. The transformation $q_\alpha\to \psi_\alpha(q_\beta)$ must satisfy $\frac{\partial\psi_\alpha}{\partial q_\beta}\Upsilon_\beta = \frac{d\psi_\alpha}{d\lambda} = \mathscr{f}_\alpha$ together with our choice $\psi_\alpha(q_\beta=0)=0$. The solution is the sum of a secular and a purely oscillatory piece: \begin{equation} \psi_\alpha(q_\beta) = q_\alpha - \Delta\psi_\alpha(0) + \Delta \psi_\alpha(\bm{q}),\label{psi to q} \end{equation} where \begin{align} \Delta\psi_a &= \sum_{k\neq0} \frac{\mathscr{f}^{k}_a e^{-ikq_a}}{-ik\Upsilon_a},\label{Dpsia}\\ \Delta t &=\Delta t_r + \Delta t_z := \sum_{k\neq0}\left(\frac{T^{k}_r e^{-ikq_r}}{-ik\Upsilon_r}+\frac{T^{k}_z e^{-ikq_z}}{-ik\Upsilon_z}\right),\label{Dt}\\ \Delta \phi &=\Delta\phi_r + \Delta\phi_z := \sum_{k\neq0}\left(\frac{\Phi^{k}_r e^{-ikq_r}}{-ik\Upsilon_r}+\frac{\Phi^{k}_z e^{-ikq_z}}{-ik\Upsilon_z}\right).\label{Dphi} \end{align} $T_a$ and $\Phi_a$ are given in Eqs.~\eqref{Tr}--\eqref{Phiz}, and we have written them, along with $\mathscr{f}_a(q_a)$, as one-dimensional Fourier series in $q_a$; e.g., $T_a = \sum_k T_a^k e^{-ikq_a}$.
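The following minimal Python sketch (our own illustration; it uses the standard Schwarzschild, equatorial closed forms for $E^2$, with $M=1$, quoted here as assumptions) builds $q_r(\psi_r)$ from Eq.~\eqref{q_a(psi_a)}, computes Fourier coefficients of $r$ via the one-dimensional reduction of Eq.~\eqref{Fourier q coeffs from psi}, and verifies that the truncated series~\eqref{Fourier q} reconstructs the trajectory:

```python
import math, cmath

# Schwarzschild equatorial setup (assumed standard closed form for E^2; M = 1).
p, e = 9.0, 0.3
E2 = (p - 2 - 2*e)*(p - 2 + 2*e)/(p*(p - 3 - e*e))
gam = 1.0 - E2
rp, ra = p/(1 + e), p/(1 - e)
r3 = 2.0/gam - ra - rp                    # third root of R(r); r4 = 0 when Q = 0
p3, p4 = r3*(1 - e), 0.0

def f_r(psi_val):                         # Eq. (psi_r dot), M = 1
    return math.sqrt(gam*((p - p3) - e*(p + p3*math.cos(psi_val)))
                        *((p - p4) + e*(p - p4*math.cos(psi_val))))/(1 - e*e)

N = 2048
h = 2*math.pi/N
psi = [(i + 0.5)*h for i in range(N)]
w = [h/f_r(x) for x in psi]               # d(lambda) = d(psi_r)/f_r
Lambda_r = sum(w)                         # Mino-time radial period
Ups_r = 2*math.pi/Lambda_r

# q_r(psi) from Eq. (q_a(psi_a)), by cumulative midpoint integration.
q, acc = [], 0.0
for wi in w:
    q.append(Ups_r*(acc + 0.5*wi))
    acc += wi

r = [p/(1 + e*math.cos(x)) for x in psi]  # Eq. (r(psi))

def coeff(k):
    # 1D (equatorial) reduction of Eq. (Fourier q coeffs from psi).
    return Ups_r/(2*math.pi)*sum(ri*cmath.exp(1j*k*qi)*wi
                                 for ri, qi, wi in zip(r, q, w))

# Truncated series, Eq. (Fourier q), compared with the exact r at one node.
j = 137
rec = sum(coeff(k)*cmath.exp(-1j*k*q[j]) for k in range(-15, 16)).real
```

The zero mode is the torus average of $r$, which necessarily lies between $r_p$ and $r_a$, and the truncated series converges rapidly because $r$ is a smooth periodic function of the angle variable.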
We will consistently use $\Delta$ to indicate that a quantity is periodic in $\bm{q}$ with zero average. We can conveniently write the coordinate trajectory in terms of the action angles as the sum of a secular term and an oscillatory term: \begin{equation}\label{zG(lambda)} z^\alpha(\lambda) = z^\alpha_{\rm sec}[q_\beta(\lambda)] + \Delta z^\alpha[\bm{q}(\lambda)], \end{equation} where the secular piece is \begin{align} z^\alpha_{\rm sec}(q_\beta) &= (q_t,0,0,q_\phi) + \left[-\Delta t(0),r^{(q)}_{0},z^{(q)}_{0},-\Delta\phi(0)\right], \end{align} and the purely oscillatory pieces are given by Eqs.~\eqref{Dt}, \eqref{Dphi}, and \begin{equation} \Delta x^a(q_a) = \sum_{k\neq0}x^{a}_{(q)k}\,e^{-ikq_{a}}, \end{equation} with coefficients readily calculated from Eq.~\eqref{Fourier q coeffs from psi}. Ref.~\cite{Fujita:2009bp} gives $x^a(q_a)$ in closed form in their Eqs.~(26) and (38), $\Delta t$ as the sum of their $t^{(r)}$ and $t^{(\theta)}$ in their Eqs.~(28) and (39), and $\Delta\phi$ as the sum of their $\phi^{(r)}$ and $\phi^{(\theta)}$ in those same equations. (We caution the reader that the notation in Ref.~\cite{Fujita:2009bp} differs from ours in several ways.) Our description here has followed the constructive approach of Refs.~\cite{Drasco:2003ky,Drasco:2005kz,Fujita:2009bp}, finding the frequencies and angle variables by directly solving the geodesic equation. There is an alternative, historically prior approach~\cite{Schmidt:2002qk} based on the Hamiltonian description of geodesics, which builds on Carter's original proof~\cite{Carter:1968rr} of integrability using the Hamilton-Jacobi equation. That approach derives action angles and their associated fundamental frequencies from a canonical transformation $(z^\alpha,u_\alpha)\to(q^\alpha,J_\alpha)$, where the actions $J_\alpha$ are the canonical momenta conjugate to the action angles $q^\alpha$. 
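Before moving to Boyer-Lindquist frequencies, the fundamental-frequency formulas can be tested numerically. The sketch below (again the Schwarzschild, equatorial limit with an assumed standard closed form for $E^2$, and $M=1$) compares the analytic expression for $\Upsilon_r$ in terms of ${\sf K}(k_r)$ against direct quadrature of the radial period $\Lambda_r=\oint d\psi_r/\mathscr{f}_r$, with $\mathscr{f}_r$ from Eq.~\eqref{psi_r dot}:

```python
import math

def agm(a, b):
    """Arithmetic-geometric mean, used to evaluate K(m)."""
    while abs(a - b) > 1e-15*a:
        a, b = 0.5*(a + b), math.sqrt(a*b)
    return a

def ellipK(m):
    # Complete elliptic integral of the first kind in the parameter
    # convention of the text: K(m) = int_0^{pi/2} dtheta/sqrt(1 - m sin^2).
    return math.pi/(2.0*agm(1.0, math.sqrt(1.0 - m)))

# Schwarzschild equatorial orbit (assumed standard closed form for E^2; M = 1).
p, e = 9.0, 0.3
E2 = (p - 2 - 2*e)*(p - 2 + 2*e)/(p*(p - 3 - e*e))
gam = 1.0 - E2
rp, ra = p/(1 + e), p/(1 - e)
r3, r4 = 2.0/gam - ra - rp, 0.0          # remaining roots of R(r) when Q = 0
p3, p4 = r3*(1 - e), r4*(1 + e)

# Analytic Mino-time radial frequency from the elliptic-integral formula.
kr = (ra - rp)/(ra - r3)*(r3 - r4)/(rp - r4)
Ups_analytic = math.pi*math.sqrt(gam*(ra - r3)*(rp - r4))/(2.0*ellipK(kr))

# Direct quadrature of the period: Lambda_r = oint dpsi_r/f_r, Ups = 2 pi/Lambda_r.
def f_r(psi):                             # Eq. (psi_r dot), M = 1
    return math.sqrt(gam*((p - p3) - e*(p + p3*math.cos(psi)))
                        *((p - p4) + e*(p - p4*math.cos(psi))))/(1 - e*e)

N = 2000
Lambda_r = sum(2*math.pi/N/f_r((i + 0.5)*2*math.pi/N) for i in range(N))
Ups_numeric = 2*math.pi/Lambda_r
```

Because the integrand is smooth and periodic, the midpoint rule converges spectrally, and the two values agree to near machine precision.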
\subsubsection{Fundamental Boyer-Lindquist frequencies and action angles}\label{geodesic BL angle variables} For the purpose of decomposing fields, such as the metric perturbation, into Fourier modes, it is more useful to know the frequencies with respect to coordinate time $t$. These are the frequencies observed at infinity, which appear in the gravitational waveform. They are given by \begin{equation} \Omega_\alpha = \frac{\Upsilon_\alpha}{\Upsilon_t}. \end{equation} The angle variables associated with them are \begin{equation}\label{varphi geodesic} \varphi_\alpha = \Omega_\alpha t +\varphi^0_\alpha \end{equation} with $\Omega_t=1$. We choose the origin of this phase space in analogy with Eq.~\eqref{q and psi ICs}: \begin{equation} \varphi^0_\alpha = 0 \quad\text{for }\alpha=t,r,\phi, \quad \text{and }\varphi^0_z = -\Omega_z t^0_z, \end{equation} where $t^0_z$ is the first value of $t$ at which $z=z_{\rm max}$. These new angle variables are related to $q_\alpha$ by a transformation that must satisfy $\frac{\partial\varphi_\alpha}{\partial q_\beta}\Upsilon_\beta = \frac{d\varphi_\alpha}{d\lambda} = \Omega_\alpha\mathscr{f}_t(\bm{q})$. Such a transformation, with our choice of origin $\varphi_\alpha(q_\beta=0)=0$, is \begin{equation}\label{varphi(q)} \varphi_\alpha(q_\beta) = q_\alpha - \Omega_\alpha \Delta t(0) + \Omega_\alpha \Delta t(\bm{q}) \end{equation} with $\Delta t$ given by Eq.~\eqref{Dt}. In analogy with Eq.~\eqref{Fourier q}, a function of $r$ and $z$ on the worldline can be expanded in a Fourier series \begin{equation} f[r(t),z(t)] = \sum_{\bm{k}} f^{(\varphi)}_{\bm{k}} e^{-i\varphi_{\bm{k}}(t)}, \end{equation} with $\varphi_{\bm{k}}:=\bm{k}\cdot\bm{\varphi}=k_r\varphi_r+k_z\varphi_z$ and with coefficients given by the analog of Eq.~\eqref{Fourier q coeffs}.
Using the Jacobian ${\rm det}\left(\frac{\partial\varphi_a}{\partial q_b}\right)=\mathscr{f}_t/\Upsilon_t$, we can also write the coefficients as integrals over $\bm{q}$, \begin{equation}\label{Fourier coeff relationship} f^{(\varphi)}_{\bm{k}} = \frac{e^{-i\Omega_{\bm{k}}\Delta t(0)}}{(2\pi)^2\Upsilon_t}\oint \mathscr{f}_t\,e^{iq_{\bm{k}}+i\Omega_{\bm{k}}\Delta t(\bm{q})}f d^2q, \end{equation} where $\Omega_{\bm{k}}:=k_r\Omega_r+k_z\Omega_z$. Or we can write them as integrals over $\bm{\psi}$, \begin{equation}\label{Fourier coeff relationship psi} f^{(\varphi)}_{\bm{k}} = \frac{\Upsilon_r\Upsilon_z }{(2\pi)^2\Upsilon_t}\oint \frac{\mathscr{f}_t(\bm{\psi})\,e^{iq_{\bm{k}}(\bm{\psi})+i\Omega_{\bm{k}}[\delta t_r(\psi_r)+\delta t_z(\psi_z)]}f}{\mathscr{f}_r(\psi_r)\mathscr{f}_z(\psi_z)} d^2\psi, \end{equation} where $q_a(\psi_a)$ is given by Eq.~\eqref{q_a(psi_a)}, and \begin{equation} \delta t_a(\psi_a) := \Delta t_a[q_a(\psi_a)] - \Delta t_a(0) = \int^{\psi_a}_0 \frac{T_a(\psi'_a)-\left\langle T_a\right\rangle_\lambda}{\mathscr{f}_a(\psi'_a)}d\psi'_a,\label{delta t} \end{equation} with $T_a$ given by Eq.~\eqref{Tr} and \eqref{Tz}. If $f$ is separable [i.e., if it can be written as a sum of products of the form $f_r(r)f_z(z)$], then expressing the integrals in terms of $\bm{q}$ or $\bm{\psi}$ allows one to evaluate the two-dimensional integral as a product of one-dimensional integrals. We can further define an average over $t$, \begin{equation} \left\langle f[r(t),z(t)]\right\rangle_t := \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^T f dt, \end{equation} which for nonresonant orbits is equal to the torus average \begin{equation} \langle f\rangle_{\bm{\varphi}} := \frac{1}{(2\pi)^2}\oint f d^2\varphi = f^{(\varphi)}_{00}.
\end{equation} Note that the meaning of a time average (and associated torus average) inherently depends on one's choice of time parameter~\cite{Pound:2007ti}, and that $\left\langle f\right\rangle_t$ differs from $\left\langle f\right\rangle_\lambda$: \begin{equation} \left\langle f\right\rangle_t = \frac{1}{\Upsilon_t}\left\langle \mathscr{f}_t f \right\rangle_\lambda = \left\langle f\right\rangle_\lambda + \frac{1}{\Upsilon_t}\sum_{\bm{k}\neq0}\mathscr{f}^{(q)}_{t\bm{k}}f^{(q)}_{-\bm{k}} . \end{equation} The relevance of each average depends on context. Using these Fourier expansions, we can express the phases $\psi_\alpha$ in terms of $\varphi_\alpha$. The two are related by a transformation satisfying $\frac{\partial\psi_\alpha}{\partial\varphi_\beta}\Omega_\beta = \frac{d\psi_\alpha}{dt}$. With our choice of origin $\psi_\alpha(\varphi_\beta=0) = 0$, the solution is \begin{equation}\label{psi(phi)} \psi_\alpha(\varphi_\beta) = \varphi_\alpha -\Delta_\varphi \psi_\alpha(0) + \Delta_\varphi \psi_\alpha(\bm{\varphi}). \end{equation} The oscillatory terms are $\Delta_\varphi \psi_t = 0$ and \begin{equation} \Delta_\varphi\psi_\alpha = \sum_{\bm{k}\neq0}\frac{\left(\frac{d\psi_\alpha}{dt}\right)^{\!(\varphi)}_{\!\bm{k}}}{-i\Omega_{\bm{k}}}e^{-i\varphi_{\bm{k}}}\quad \text{for }\alpha=r,z,\phi.\label{Dpsi varphi} \end{equation} Here we use $\Delta_\varphi$ rather than $\Delta$ to indicate that a quantity is purely oscillatory (i.e., periodic with zero mean) with respect to $\bm{\varphi}$ rather than $\bm{q}$. $(d\psi_\alpha/dt)^{(\varphi)}_{\bm{k}}$ can be calculated using Eq.~\eqref{Fourier coeff relationship psi} with $d\psi_\alpha/dt = \mathscr{f}_\alpha/\mathscr{f}_t$. 
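As an elementary consistency check, for a circular, equatorial orbit in the Schwarzschild limit the coordinate-time azimuthal frequency $\Omega_\phi=\Upsilon_\phi/\Upsilon_t$ must reduce to the Keplerian value $\sqrt{M/r_0^3}$. A minimal sketch ($M=1$; the circular-orbit $E$ and $L_z$ are standard results assumed here, not derived in the text):

```python
import math

# Circular equatorial orbit in the Schwarzschild limit (a = 0, M = 1).
r0 = 10.0
E = (1 - 2/r0)/math.sqrt(1 - 3/r0)       # standard circular-orbit energy
Lz = r0*math.sqrt(1.0/(r0 - 3))          # and angular momentum (assumed)

# On a circular orbit the Mino-time rates are constant, so the frequencies
# equal the rates themselves:
# f_t = T_r(r0) = E r0^3/(r0 - 2)  and  f_phi = Phi_z(0) = L_z  (a = 0).
Ups_t = E*r0**3/(r0 - 2)
Ups_phi = Lz
Omega_phi = Ups_phi/Ups_t                # Omega_phi = Ups_phi/Ups_t
```

The ratio collapses analytically to $r_0^{-3/2}$, the familiar Kepler law, which also fixes the waveform frequency $2\Omega_\phi$ of the dominant quadrupole mode for such orbits.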
Just as we did with $q_\alpha$, we can express the coordinate trajectory in terms of $\varphi_\alpha$ as the sum of a secular and an oscillatory piece, \begin{equation}\label{zG(t)} z^\alpha(t) = z^\alpha_{(\varphi)\rm sec}[\varphi_\beta(t)] + \Delta_\varphi z^\alpha[\bm{\varphi}(t)], \end{equation} where the secular piece is \begin{align} z^\alpha_{(\varphi)\rm sec}(\varphi_\beta) &= \left(\varphi_t,0,0,\varphi_\phi\right) +\left[0,r^{(\varphi)}_{00},z^{(\varphi)}_{00},-\Delta_\varphi\phi(0)\right], \end{align} and the oscillatory pieces are $\Delta_\varphi t = 0$, $\Delta_\varphi \phi$ given by Eq.~\eqref{Dpsi varphi} (recalling $\psi_\phi:=\phi$), and \begin{equation} \Delta_\varphi x^a(\bm{\varphi}) = \sum_{\bm{k}\neq0}x^{a}_{(\varphi)\bm{k}}\,e^{-i\varphi_{\bm{k}}}, \end{equation} with coefficients calculated from Eq.~\eqref{Fourier coeff relationship psi}. \subsubsection{Resonant orbits} Recall that the radial and polar motions are restricted to a torus-like region $r_p\leq r\leq r_a$ and $|z|\leq z_{\rm max}$ in physical space. If the periods of radial and polar motion are incommensurate, then the orbit ergodically fills this region. The transformation $x^a\to q^a$ maps the $r$--$z$ motion onto the surface of a torus in phase space, which the angles $q^a$ ergodically cover. However, for some values of the orbital parameters, the periods are commensurate, meaning $k^{\rm res}_r\Upsilon_r+k^{\rm res}_z\Upsilon_z=0$ for some nonzero integers $(k^{\rm res}_r,k^{\rm res}_z)$. [Since integer multiples of $(k^{\rm res}_r,k^{\rm res}_z)$ will also satisfy this condition, we take $(k^{\rm res}_r,k^{\rm res}_z)$ to be the smallest such pair of integers.] In these cases, rather than having two independent frequencies, the $r$--$z$ motion has a single frequency, $\Upsilon=\Upsilon_z/|k^{\rm res}_r|=\Upsilon_r/|k^{\rm res}_z|$, and rather than ergodically covering the torus, the orbit closes on the torus and in the $r$--$z$ plane.
The actual shape of this closed orbit is not uniquely specified by its frequencies but depends strongly on the relative initial phase $\psi^0_r-\psi^0_z$. Such orbits are referred to as {\em resonant}~\cite{Flanagan:2010cd}. For resonant orbits, the average over the torus no longer represents a meaningful average over the orbit. Rather than having the single stationary mode $f^{(q)}_{00}$, a function $f(r,z)$ on the worldline has an infinite set of stationary modes corresponding to all integer multiples of $\bm{k}^{\rm res}$. The infinite Mino-time average in Eq.~\eqref{lambda average} is then \begin{equation} \left\langle f[r(\lambda),z(\lambda)]\right\rangle_\lambda = \lim_{\Lambda\to\infty}\frac{1}{2\Lambda}\int_{-\Lambda}^\Lambda f d\lambda = \sum_{N=-\infty}^\infty f^{(q)}_{N\bm{k}^{\rm res}}; \end{equation} for a resonant orbit, the infinite $\lambda$ average does not, generically, agree with the torus average $f^{(q)}_{00}$. More broadly, the Fourier series~\eqref{Fourier q} becomes degenerate: $q_{\bm{k}}(\lambda) = q_{\bm{k}+N\bm{k}^{\rm res}}(\lambda)$ for all integers $N$. However, since there is a common period, we can replace the two action angles $q_a$ with a single angle $q(\lambda) =\Upsilon \lambda$ and rewrite the two-dimensional Fourier series~\eqref{Fourier q} as a non-degenerate one-dimensional one, \begin{equation} f[r(\lambda),z(\lambda)] = \sum_{k\in\mathbb{Z}}f^{(q)}_k e^{-ikq(\lambda)}. \end{equation} The coefficients are related to those in Eq.~\eqref{Fourier q} by $f^{(q)}_k = \sum_{\bm{k}} f^{(q)}_{\bm{k}}$, where the sum ranges over all $(k_r,k_z)$ satisfying $k_r |k^{\rm res}_z| +k_z |k^{\rm res}_r|=k$. We then have $\left\langle f\right\rangle_\lambda = f^{(q)}_0$. The set of resonant orbits is dense in the space of frequencies, though it is of measure zero. A given resonant ratio $\Upsilon_r/\Upsilon_z=|k^{\rm res}_z|/|k^{\rm res}_r|$ describes a surface in the parameter space spanned by $p^i$.
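The degenerate relabeling can be made concrete with a toy numerical example (the base frequency and mode numbers below are arbitrary assumed values, not tied to a particular orbit):

```python
# Toy illustration of the resonant relabeling. For a 3:2 resonance,
# (k_res_r, k_res_z) = (2, -3) satisfies k_res_r*Ups_r + k_res_z*Ups_z = 0,
# with Ups = Ups_r/|k_res_z| = Ups_z/|k_res_r|.
Ups = 0.17                                  # arbitrary base frequency (assumed)
k_res = (2, -3)
Ups_r = abs(k_res[1])*Ups                   # Ups_r = 3*Ups
Ups_z = abs(k_res[0])*Ups                   # Ups_z = 2*Ups

# Pick one mode and shift it by multiples of k_res: all shifted modes
# share the growth rate of q_k, labeled by k = k_r|k_res_z| + k_z|k_res_r|.
k = (4, 1)
label = k[0]*abs(k_res[1]) + k[1]*abs(k_res[0])   # here label = 14
rates = [(k[0] + N*k_res[0])*Ups_r + (k[1] + N*k_res[1])*Ups_z
         for N in range(-2, 3)]
```

Every mode $\bm{k}+N\bm{k}^{\rm res}$ grows at the same rate $k\Upsilon$, which is what collapses the two-dimensional series to a one-dimensional one.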
We refer to Ref.~\cite{Brink:2015roa} for the characterization of the locations of these surfaces and to Refs.~\cite{Grossman:2011im,Flanagan:2012kg,vandeMeent:2013sza,Brink:2013nna} for further discussion of resonant geodesic orbits. \subsection{Accelerated motion} \subsubsection{Evolution of orbital parameters} We now consider an accelerated orbit satisfying the equation of motion~\eqref{perturbed geodesic equation}, which we write compactly as \begin{equation}\label{eq mot} \frac{D^2 z^\alpha}{d\tau^2} = f^{\alpha}. \end{equation} The normalization $u^{\alpha}u_{\alpha} = -1$ implies that $f^\alpha$ is orthogonal to the worldline: $f^{\alpha}u_{\alpha} = 0$. If we continue to define $E=-u_t$, $L_z=u_\phi$, and $Q=u_\alpha u_\beta \overset{\star\star}{K}{}^{\alpha\beta} - \frac{1}{a^2}(u_{\tilde\phi})^2$ on the accelerated orbit, then \begin{align} \frac{dE}{d\tau} = -f_t, \quad \frac{dL_z}{d\tau} = f_\phi, \quad \frac{dQ}{d\tau} = 2\overset{\star\star}{K}{}^{\alpha\beta}u_\alpha f_\beta - \frac{2}{a^2}u_{\tilde \phi}f_{\tilde\phi},\label{dEdtau} \end{align} where $f_{\tilde\phi} = a(f_\phi + a f_t)$ and $u_{\tilde\phi} = a(L_z - aE)$. In other words, the ``constants'' of motion are no longer constant. However, if $f^\alpha$ is small, each parameter will change only slowly or oscillate slightly around a slowly varying mean. Our treatment of accelerated orbits mirrors that of geodesics, beginning with quasi-Keplerian methods and then describing the calculation of fundamental frequencies and perturbed angle variables. In the quasi-Keplerian treatment we place no restriction on $f^\alpha$, and in particular we do not assume it is small. In the treatment of perturbed angle variables we restrict the analysis to a small perturbing force, setting $f^\mu_{(0)}=0$ in Eq.~\eqref{perturbed geodesic equation}. 
\subsubsection{Method of osculating geodesics} In celestial mechanics, perturbed Keplerian orbits have historically been described using the method of osculating orbits. The idea in this method is, given an exact solution to the unperturbed problem in terms of a set of orbital elements $p^i=\{p,e,\iota,\ldots\}$, to write the perturbed orbit in precisely the same form but to promote the orbital elements to functions of time. At each instant $t$, the perturbed orbit with elements $\{p(t),e(t),\iota(t),\ldots\}$ is tangent to a Keplerian ellipse (called the osculating orbit) with those same values of orbital elements. In general relativity, this idea is referred to as the method of osculating geodesics \cite{Mino:2003yg,Pound:2007th,Gair:2010iv,Warburton:2017sxk}. Our geodesics in Kerr are described by Eqs.~\eqref{r(psi)}, \eqref{z(psi)}, \eqref{t(psi)}, and \eqref{phi(psi)}, which involve the seven orbital elements $I^A =\{p,e,\iota,t_0,\psi_r^0,\psi_z^0,\phi_0\}$. If we let $z^\alpha_G(I^A,\lambda)$ denote a geodesic with these orbital elements, and $\dot z^\alpha_G(I^A,\lambda)=\partial z^\alpha_G/\partial\lambda$ its tangent vector, then the {\em osculation conditions} are \begin{align} z^\alpha(\lambda) = z^\alpha_G[I^A(\lambda),\lambda]\quad\text{and}\quad \frac{dz^\alpha}{d\lambda}(\lambda) = \dot z^\alpha_G[I^A(\lambda),\lambda]. \end{align} These conditions define a one-to-one transformation $(z^\alpha,\dot z^\alpha)\to I^A$. Such a transformation is possible because the number of orbital elements is equal to the number of degrees of freedom on the orbit: the eight degrees of freedom $\{z^\alpha,\dot z^{\alpha}\}$ minus the constraint $\dot z_\alpha f^\alpha=0$. The osculation conditions can be used to transform the equation of motion~\eqref{eq mot} into evolution equations for $I^A(\lambda)$. 
Appealing to the chain rule $\frac{dz^\alpha}{d\lambda} = \frac{\partial z_G^\alpha}{\partial I^A}\frac{d I^A}{d\lambda} + \frac{\partial z_G^\alpha}{\partial \lambda}$, to the geodesic equation for $z_G^\alpha$ (in terms of the non-affine parameter $\lambda$), and to the equation of motion~\eqref{eq mot} for $z^\alpha$ (converted to the non-affine parametrization), we find~\cite{Pound:2007th} \begin{align} \frac{\partial z^\alpha_G}{\partial I^A}\frac{d I^A}{d\lambda} & = 0,\label{diffI 1b} \\ \frac{\partial\dot z_G^{\alpha}}{\partial I^A}\frac{d I^A}{d\lambda} & = f^{\alpha} \left(\frac{d\tau}{d\lambda}\right)^2 + [\kappa(\lambda)-\kappa_G(\lambda)]\dot z^\alpha_G, \label{diffI 2b v0} \end{align} where $\kappa = \left(\frac{d\tau}{d\lambda}\right)^{-1}\frac{d^2\tau}{d\lambda^2}$. If we define $\lambda$ to satisfy Eq.~\eqref{Mino time} on both the geodesic and accelerated orbit, then $\kappa=\kappa_G = \Sigma^{-1}\frac{d\Sigma}{d\lambda}$, simplifying Eq.~\eqref{diffI 2b v0} to \begin{align} \frac{\partial\dot z_G^{\alpha}}{\partial I^A}\frac{d I^A}{d\lambda} & = \Sigma^2 f^{\alpha} \label{diffI 2b}. \end{align} These equations are exact, and $f^\alpha$ need not be small. Equations~\eqref{diffI 1b} and \eqref{diffI 2b} can be straightforwardly inverted to solve for $\frac{d I^A}{d\lambda}$, providing a system of first-order ordinary differential equations for the orbital elements. However, working with the initial phases $\{t_0,\psi^0_r, \psi^0_z, \phi_0\}$ is cumbersome in practice. In the above evolution equations, the phases $\psi_a$ are given by their geodesic values, meaning the solutions to Eqs.~\eqref{psi_r dot} and \eqref{psi_theta dot} with fixed $I^A$. That is, at each value of $\lambda$, in Eqs.~\eqref{psi_r dot} and \eqref{psi_theta dot} we replace $d\psi_a/d\lambda$ with $d\psi_a/d\lambda'$, then integrate from $\lambda'=0$, with initial values $\psi^0_a(\lambda)$, up to $\lambda'=\lambda$.
Similarly, in Eqs.~\eqref{t_a(psi_a)}, the integrals are evaluated with fixed orbital elements in the integrands. The evolution equations also involve derivatives of these integrals with respect to the orbital elements. Evaluating all these integrals at every time step would be computationally expensive. In applications, it is therefore preferable to work with the variables $\{p,e,\iota,\psi_\alpha\}$ instead of $I^A$. We write a geodesic trajectory and its tangent vector as $z^\alpha_G[p^i,\psi_\beta(\lambda)]$ and $\dot z^\alpha_G[p^i,\psi_\beta(\lambda)]=\dot\psi^G_\beta\frac{\partial z^\alpha_G}{\partial\psi_\beta}$, where $\dot\psi^G_\alpha=\mathscr{f}_\alpha(p^i,\bm{\psi})$ are the geodesic ``frequencies'' given by Eqs.~\eqref{dtdtau}, \eqref{dphidtau}, \eqref{psi_r dot}, and \eqref{psi_theta dot}. The osculation conditions then read \begin{align} z^\alpha(\lambda) = z^\alpha_G[p^i(\lambda),\psi_\beta(\lambda)]\quad\text{and}\quad \frac{dz^\alpha}{d\lambda}(\lambda) & = \dot z^\alpha_G[p^i(\lambda),\psi_\beta(\lambda)]. \end{align} Appealing to the chain rule $\frac{dz^\alpha}{d\lambda} = \frac{d\psi_\beta}{d\lambda}\frac{\partial z^\alpha_G}{\partial\psi_\beta}+\frac{d p^i}{d\lambda}\frac{\partial z^\alpha_G}{\partial p^i}$, to the geodesic equation for $z_G^\alpha$ (in terms of $\lambda$), and to the equation of motion~\eqref{eq mot} for $z^\alpha$ (in terms of $\lambda$), we find that the osculation conditions imply \begin{align} \frac{\partial z^\alpha_G}{\partial p^i}\frac{d p^i}{d\lambda} +\frac{\partial z^\alpha_G}{\partial\bm{\psi}}\cdot \bm{\delta\mathscr{f}} & = 0,\label{diffI 1.1} \\ \frac{\partial\dot z_G^{\alpha}}{\partial p^i}\frac{d p^i}{d\lambda} +\frac{\partial\dot z_G^{\alpha}}{\partial \bm{\psi}}\cdot\bm{\delta\mathscr{f}} & = \Sigma^2 f^{\alpha} \label{diffI 2.1}.
\end{align} Here $\frac{\partial z^\alpha_G}{\partial\bm{\psi}}\cdot \bm{\delta\mathscr{f}} = \frac{\partial z^\alpha_G}{\partial\psi_r}\delta\mathscr{f}_r+\frac{\partial z^\alpha_G}{\partial\psi_z}\delta\mathscr{f}_z$, and we have defined \begin{equation} \delta\mathscr{f}_a := \frac{d\psi_a}{d\lambda} - \dot\psi^G_a. \end{equation} Eqs.~\eqref{diffI 1.1} and \eqref{diffI 2.1} provide eight equations for the seven derivatives $\frac{d\psi_\alpha}{d\lambda}$ and $\frac{dp^i}{d\lambda}$; any one of the four equations represented by~\eqref{diffI 2.1} may be eliminated using $f^\alpha u_\alpha = 0$. The $t$ and $\phi$ components of Eq.~\eqref{diffI 1.1} are simply the osculation conditions \begin{equation}\label{tdot and phidot} \frac{d\psi_\alpha}{d\lambda} = \mathscr{f}_\alpha(\bm{\psi},p^i) \quad \text{for } \alpha=t,\phi. \end{equation} The $r$ and $z$ components of Eq.~\eqref{diffI 1.1} can be rearranged to obtain \begin{equation} \delta\mathscr{f}_a = - \frac{\partial z_G^a/\partial p^i}{\partial z^a_G/\partial\psi_a}\frac{d p^i}{d\lambda},\label{Df} \end{equation} where we have used the fact that $r$ is independent of $\psi_z$ and that $z$ is independent of $\psi_r$.
Substituting this into Eq.~\eqref{diffI 2.1} yields \begin{equation} \frac{dp^i}{d\lambda}{\cal L}_i(z^\alpha_G) = \Sigma^2 f^\alpha,\label{pi dot} \end{equation} where \begin{equation} {\cal L}_i(x) := \frac{\partial \dot{x}}{\partial p^i} - \frac{\partial r/\partial p^i}{\partial r/\partial\psi_r} \frac{\partial\dot{x}}{\partial \psi_r} - \frac{\partial z/\partial p^i}{\partial z/\partial\psi_z} \frac{\partial\dot{x}}{\partial \psi_z}.\label{L_i(x)} \end{equation} One can easily invert Eq.~\eqref{pi dot} to obtain equations for $\frac{dp^i}{d\lambda}$: \begin{align} \frac{dp}{d\lambda} &= \frac{\Sigma^2}{D} \left\{ \left[{\cal L}_e(z),{\cal L}_\iota(\phi)\right]f^r+\left[{\cal L}_e(\phi),{\cal L}_\iota(r)\right]f^z + \left[{\cal L}_e(r),{\cal L}_\iota(z)\right]f^\phi \right\}\!,\label{pdot}\\ \frac{de}{d\lambda} &= \frac{\Sigma^2}{D} \left\{ \left[{\cal L}_\iota(z),{\cal L}_p(\phi)\right]f^r +\left[{\cal L}_\iota(\phi),{\cal L}_p(r)\right]f^z +\left[{\cal L}_\iota(r),{\cal L}_p(z)\right]f^\phi \right\}\!,\label{edot}\\ \frac{d\iota}{d\lambda} &= \frac{\Sigma^2}{D} \left\{ \left[{\cal L}_p(z),{\cal L}_e(\phi)\right]f^r + \left[{\cal L}_p(\phi),{\cal L}_e(r)\right]f^z + \left[{\cal L}_p(r),{\cal L}_e(z)\right]f^\phi \right\}\!,\label{idot} \end{align} with $[{\cal L}_i(x),{\cal L}_j(y)]:={\cal L}_i(x){\cal L}_j(y)-{\cal L}_j(x){\cal L}_i(y)$ and \begin{align} D := {\cal L}_p(r)[{\cal L}_e(z),{\cal L}_\iota(\phi)]+{\cal L}_e(r)[{\cal L}_\iota(z),{\cal L}_p(\phi)] +{\cal L}_\iota(r)[{\cal L}_p(z),{\cal L}_e(\phi)]. 
\end{align} Finally, the evolution equations for $\psi_a$ are obtained by substituting Eqs.~\eqref{pdot}--\eqref{idot} into Eq.~\eqref{Df}, yielding \begin{equation}\label{psidot osc} \frac{d\psi_a}{d\lambda} = \mathscr{f}_a(p^i,\bm{\psi})+\delta\mathscr{f}_a(p^i,\bm{\psi}), \end{equation} where \begin{align} \delta\mathscr{f}_r &= -\frac{1}{pe\sin\psi_r} \left[(1+e\cos\psi_r) \frac{d p}{d\lambda} - p\frac{d e}{d\lambda}\cos\psi_r\right], \label{Df_r}\\ \delta\mathscr{f}_z &= \frac{d\iota}{d\lambda}\cot\iota\cot\psi_z.\label{dfz} \end{align} There are superficial singularities in these formulas when $\psi_a$ is an integer multiple of $\pi$. However, the divergences are analytically eliminated when the formulas are explicitly evaluated. The full set of evolution equations is given by Eqs.~\eqref{pdot}--\eqref{idot}, \eqref{psidot osc}, and \eqref{tdot and phidot}. In these equations, $x^a(\lambda)=x^a_G[p^i(\lambda),\psi_a(\lambda)]$ is given by Eqs.~\eqref{r(psi)}--\eqref{z(psi)}, $\dot x^a(\lambda) = \dot x^a_G[p^i(\lambda),\psi_a(\lambda)]$ by \begin{align} \dot r = \frac{p M e\mathscr{f}_r\sin\psi_r}{(1+e\cos\psi_r)^2} \quad\text{and}\quad \dot z = -z_{\rm max}\mathscr{f}_z\sin\psi_z,\label{dot xa} \end{align} with Eqs.~\eqref{psi_r dot}--\eqref{psi_theta dot} for $\mathscr{f}_a$, and $(\dot t,\dot\phi) = (\mathscr{f}_t,\mathscr{f}_\phi)$ by Eqs.~\eqref{dtdtau} and \eqref{dphidtau}. Wherever $E$, $L_z$, and $Q$ appear, they are given in terms of $p^i$ by their geodesic expressions~\eqref{E(pi)}--\eqref{Q(pi)}. The quantities $[{\cal L}_i(x),{\cal L}_j(y)]$, when explicitly evaluated, constitute lengthy analytical formulas in terms of $p^i$ and $\bm{\psi}$. However, for several ${\cal L}_i(x)$, the second and third terms in Eq.~\eqref{L_i(x)} vanish.
Specifically, ${\cal L}_\iota(r) = \frac{\partial \dot r}{\partial \iota} = \frac{\partial r}{\partial \psi_r}\frac{\partial\mathscr{f}_r}{\partial\iota}$, and ${\cal L}_j(z) = \frac{\partial \dot z}{\partial p^j}=\frac{\partial z}{\partial \psi_z}\frac{\partial\mathscr{f}_z}{\partial p^j}$ for $j=p,e$. The evolution can be slightly simplified by adopting $\psi_r$ or $\psi_z$ as the parameter along the trajectory. That is easily done by using, e.g., $\frac{dp^i}{d\psi_a} = \frac{1}{d\psi_a/d\lambda} \frac{dp^i}{d\lambda}$. However, for a sufficiently large perturbing force, $\frac{d\psi_a}{d\lambda}$ can vanish at some points in the evolution, making $\psi_a$ an invalid parameter. In that case, we can split $\psi_a$ into $\psi_a = \psi^G_a - \psi^0_a$, where $d\psi^G_a/d\lambda = \mathscr{f}_a$ and $d\psi^0_a/d\lambda = -\delta\mathscr{f}_a$. $\psi^G_a$ is then a convenient, monotonic parameter along the worldline. Alternatively, $t$ can be used, applying, e.g., $\frac{dp^i}{dt}=\mathscr{f}_t^{-1} \frac{d p^i}{d\lambda}$. The evolution equations simplify more dramatically in the special case of equatorial orbits, for which $z=f^z=0$. In this case, $\iota$ and $\psi_z$ do not appear, and Eqs.~\eqref{pdot}--\eqref{edot} reduce to \begin{align} \frac{dp}{d\lambda} &= \frac{r^4\left[{\cal L}_e(\phi)f^r - {\cal L}_e(r)f^\phi\right]}{{\cal L}_p(r){\cal L}_e(\phi)-{\cal L}_e(r){\cal L}_p(\phi)},\label{pdot - equatorial}\\ \frac{de}{d\lambda} &= \frac{r^4\left[{\cal L}_p(r)f^\phi - {\cal L}_p(\phi)f^r\right]}{{\cal L}_p(r){\cal L}_e(\phi)-{\cal L}_e(r){\cal L}_p(\phi)}.\label{edot - equatorial} \end{align} If $\psi^G_r$ is used as the independent parameter along the orbit, then the other three evolution equations are $d\psi^0_r/d\psi^G_r = -\delta\mathscr{f}_r/\mathscr{f}_r$ and Eq.~\eqref{tdot and phidot} for $t(\psi^G_r)$ and $\phi(\psi^G_r)$. In our treatment we have left the evolution equations in a highly implicit form even in the relatively simple equatorial case.
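Equations~\eqref{pdot}--\eqref{idot} are simply Cramer's rule applied to the $3\times3$ system~\eqref{pi dot}. As a sanity check, the following sketch compares the bracket formulas against a direct linear solve; the numerical values standing in for the evaluated ${\cal L}_i(x)$ and $\Sigma^2 f^\alpha$ are arbitrary placeholders, not Kerr expressions.

```python
import numpy as np

# Placeholder values for L_i(x) (rows i = p, e, iota; columns x = r, z, phi)
# and for Sigma^2 * (f^r, f^z, f^phi); real values come from Eq. (L_i(x)).
L = np.array([[2.0, 1.0, 0.5],
              [0.3, 1.5, 0.2],
              [0.1, 0.4, 1.2]])
F = np.array([1.0, 2.0, 3.0])

# Direct solve of sum_i (dp^i/dlam) L_i(x) = Sigma^2 f^x for x = r, z, phi:
dp_direct = np.linalg.solve(L.T, F)

def br(i, x, j, y):
    """[L_i(x), L_j(y)] = L_i(x) L_j(y) - L_j(x) L_i(y)."""
    return L[i, x]*L[j, y] - L[j, x]*L[i, y]

P, E, I = 0, 1, 2       # element indices p, e, iota
R, Z, PH = 0, 1, 2      # component indices r, z, phi

# Determinant D of Eq. (pi dot), as quoted below Eq. (idot):
D = L[P, R]*br(E, Z, I, PH) + L[E, R]*br(I, Z, P, PH) + L[I, R]*br(P, Z, E, PH)

# Bracket formulas of Eqs. (pdot)--(idot):
dp = (br(E, Z, I, PH)*F[0] + br(E, PH, I, R)*F[1] + br(E, R, I, Z)*F[2])/D
de = (br(I, Z, P, PH)*F[0] + br(I, PH, P, R)*F[1] + br(I, R, P, Z)*F[2])/D
di = (br(P, Z, E, PH)*F[0] + br(P, PH, E, R)*F[1] + br(P, R, E, Z)*F[2])/D

assert np.allclose([dp, de, di], dp_direct)
```

The quantity $D$ is just the determinant of the $3\times3$ matrix of the ${\cal L}_i(x)$, so the inversion fails only where that matrix degenerates.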
Refs.~\cite{Pound:2007th} and \cite{Warburton:2017sxk} provide explicit formulas in the cases of planar and nonplanar orbits in Schwarzschild spacetime. Ref.~\cite{Gair:2010iv} details the generic case in Kerr spacetime and describes several alternative formulations of the osculating evolution. Before proceeding, we note again that the equations in this section are valid for arbitrary forces,\footnote{However, the method has most often been applied~\cite{Warburton:2011fk,Osburn:2015duj,Warburton:2017sxk,vandeMeent:2018rms} in the context of an approximation in which the self-force at each instant is approximated by the value it would take if the particle had spent its entire prior history moving on the osculating geodesic. Since the force is then constructed from the field generated by the osculating geodesic particle, this approximation might more properly be dubbed the method of osculating sources.} though the orbital elements are most meaningful when the force is small and the orbit is close to a geodesic. In the next section, we restrict to the case of a small perturbing force. 
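As a final consistency check on the explicit formulas of this section, the closed forms~\eqref{Df_r} and \eqref{dfz} can be verified symbolically from Eq.~\eqref{Df} using the quasi-Keplerian parametrizations $r=pM/(1+e\cos\psi_r)$ and $z=z_{\rm max}\cos\psi_z$; the sketch below assumes $z_{\rm max}=\sin\iota$.

```python
import sympy as sp

p, e, io, psr, psz, M = sp.symbols('p e iota psi_r psi_z M', positive=True)
pd, ed, iod = sp.symbols('pdot edot iotadot')  # dp/dlam, de/dlam, diota/dlam

# Quasi-Keplerian parametrizations (z_max = sin(iota) assumed here):
r = p*M/(1 + e*sp.cos(psr))
z = sp.sin(io)*sp.cos(psz)

# Eq. (Df): delta_f_a = -(dx^a/dp^i)(dp^i/dlam)/(dx^a/dpsi_a).
# r is independent of iota, and z is independent of p and e, so only
# the surviving derivatives appear in each case.
df_r = -(sp.diff(r, p)*pd + sp.diff(r, e)*ed)/sp.diff(r, psr)
df_z = -sp.diff(z, io)*iod/sp.diff(z, psz)

# Closed forms quoted in Eqs. (Df_r) and (dfz):
df_r_doc = -((1 + e*sp.cos(psr))*pd - p*ed*sp.cos(psr))/(p*e*sp.sin(psr))
df_z_doc = iod*(sp.cos(io)/sp.sin(io))*(sp.cos(psz)/sp.sin(psz))  # cot*cot

assert sp.simplify(df_r - df_r_doc) == 0
assert sp.simplify(df_z - df_z_doc) == 0
```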
\subsubsection{Perturbed Mino frequencies and action angles}\label{perturbed Mino frequencies} In the unperturbed case, the equations of geodesic motion could be written in terms of the orbital parameters and angle variables as \begin{align} \frac{d q_\alpha}{d\lambda} &= \Upsilon_\alpha(p^i),\label{qdot geodesic}\\ \frac{d p^i}{d\lambda} &= 0.\label{pidot geodesic} \end{align} If the perturbing force is small, with an expansion \begin{equation}\label{small force1} f^\alpha = \epsilon f^\alpha_{(1)}(z^\mu,\dot z^\mu)+\epsilon^2 f^\alpha_{(2)}(z^\mu,\dot z^\mu)+O(\epsilon^3), \end{equation} and is periodic in $t$ and $\phi$, then the equations of forced motion can still be written in terms of orbital parameters and angle variables: \begin{align} \frac{d q_\alpha}{d\lambda} &= \Upsilon^{(0)}_\alpha(p_q^j) +\epsilon\Upsilon^{(1)}_\alpha(p_q^j)+ O(\epsilon^2),\label{qdot perturbative}\\ \frac{d p_q^i}{d\lambda} &= \epsilon G_{(1)}^i(p_q^j) + \epsilon^2 G_{(2)}^i(p_q^j) + O(\epsilon^3).\label{pqidot} \end{align} Note that the subscript $q$ on the orbital parameters $p_q^i=(p_q,e_q,\iota_q)$ is a label, not an index. The form~\eqref{small force1} is mildly restrictive, and it does not include the Mathisson-Papapetrou spin force, for example; for a spinning body, we must introduce additional parameters and action angles describing the spin's magnitude and direction~\cite{Ruangsri:2015cvg,Witzany:2019nml}. For our purposes we adopt a more restrictive form, \begin{equation}\label{small force} f^\alpha = \epsilon f^\alpha_{(1)}(\bm{x},\dot z^\mu)+\epsilon^2 f^\alpha_{(2)}(\bm{x},\dot z^\mu)+O(\epsilon^3), \end{equation} which assumes that the force inherits the background spacetime's symmetries. We explain in Sec.~\ref{expansion of source} that the form~\eqref{small force} still needs further, minor alteration to describe the self-force, but it is sufficiently general as a starting point. 
Unlike in the unperturbed case, the orbital parameters $p^i_q$ and frequencies are no longer constant; they evolve slowly with time. However, the variables $(q_\alpha,p_q^i)$ cleanly separate the two scales in the orbit's evolution: the variables $p_q^i$ only change slowly, over the long time scale $\sim1/\epsilon$, while the angle variables $q_\alpha$ change on the orbital time scale $\sim 2\pi/\Upsilon^{(0)}_\alpha$. In the context of a small-mass-ratio binary, where the inspiral is driven by gravitational-wave emission, the long time scale $\sim 1/\epsilon$ is referred to as the {\em radiation-reaction time}. The division of the orbital dynamics into slowly and rapidly varying functions has the same utility as in the geodesic case: it enables convenient Fourier expansions of functions on the worldline, which mesh with a Fourier expansion of the field equations (described in the final section of this chapter). Functions $f(r,z)$ on the accelerated worldline can be expanded in the Fourier series \begin{equation}\label{Fourier q perturbed} f[r(\lambda),z(\lambda)] = \sum_{\bm{k}}f^{(q)}_{\bm{k}}(p_q^j)e^{-iq_{\bm{k}}(\lambda)}, \end{equation} with a clean separation between slowly varying amplitudes and rapidly varying phases. The coefficients remain given by Eq.~\eqref{Fourier q coeffs}. By eliminating oscillatory driving terms in the orbital evolution equations, the transformation to $(q_\alpha,p_q^i)$ also facilitates more rapid numerical evolutions~\cite{vandeMeent:2018rms} and, ultimately, more rapid generation of waveforms~\cite{Chua:2020stf}. In this and the next section, for visual simplicity we shall omit the label ``$(q)$'' from mode coefficients associated with $\bm{q}$. Now, to put the equations of motion in the form~\eqref{qdot perturbative}--\eqref{pqidot}, we begin with the evolution equations~\eqref{tdot and phidot}, \eqref{pdot}--\eqref{idot}, and \eqref{psidot osc}. 
Given the expansion~\eqref{small force}, these equations take the form \begin{align} \frac{d \psi_\alpha}{d\lambda} &= \mathscr{f}_\alpha^{(0)}(\bm{\psi},p^j) + \epsilon\mathscr{f}_{\alpha}^{(1)}(\bm{\psi},p^j) + O(\epsilon^2),\label{psidot perturbative}\\ \frac{d p^i}{d\lambda} &= \epsilon g_{(1)}^i(\bm{\psi},p^j) + \epsilon^2 g_{(2)}^i(\bm{\psi},p^j) + O(\epsilon^3).\label{pidot perturbative} \end{align} Here $g_{(n)}^i$ is given by Eqs.~\eqref{pdot}--\eqref{idot} with $f^\alpha\to f^\alpha_{(n)}$. We have renamed $\mathscr{f}_\alpha$ to $\mathscr{f}_\alpha^{(0)}$; $\mathscr{f}_a^{(1)}$ is given by $\delta\mathscr{f}_a$ with $f^\alpha\to f^\alpha_{(1)}$, and $\mathscr{f}_\alpha^{(1)}=0$ for $\alpha=t,\phi$. In this form of the equations, every term on the right is a periodic, oscillatory function of the phases. However, one can transform to the new variables $(q_\alpha,p_q^i)$, which have no oscillatory driving terms, using an averaging transformation~\cite{Kevorkian-Cole:96,vandeMeent:2018rms},\footnote{Here we combine a near-identity averaging transformation with a zeroth-order one.} \begin{align} \psi_\alpha(q_\beta,p_q^j,\epsilon) &= \psi^{(0)}_\alpha(q_\beta,p_q^j)+\epsilon \psi^{(1)}_\alpha(q_\beta,p_q^j)+O(\epsilon^2),\label{near-identity psi}\\ p^i(q_\beta,p_q^j,\epsilon) & = p_q^i+\epsilon p_{(1)}^i(\bm{q},p_q^j)+\epsilon^2p^i_{(2)}(\bm{q},p_q^j)+O(\epsilon^3),\label{near-identity pi} \end{align} where \begin{equation} \psi^{(0)}_\alpha(q_\beta,p_q^j) = q_\alpha - \Delta \psi^{(0)}_\alpha(0,p_q^i) + \Delta \psi^{(0)}_\alpha(\bm{q},p_q^j)\label{psi0 from q} \end{equation} is the geodesic relationship, and the corrections $\psi^{(n)}_\alpha$ and $p_{(n)}^i$ for $n>0$ are $2\pi$-periodic in each $q_a$ (with a potentially nonzero mean). In analogy with the geodesic case, we have chosen the origin of phase space such that $\psi^{(0)}_\alpha(q_\beta=0)=0$.
This choice will ensure that at fixed $p^i_q$, $\psi^{(0)}_\alpha$ and $q_\alpha$ satisfy all the relationships in Sec.~\ref{geodesic Mino angle variables}. Note that we could replace $\Delta \psi^{(0)}_\alpha(0,p_q^i)$ with any other $q_\alpha$-independent function of $p^i_q$; this would still represent a geodesic relationship between $\psi^{(0)}_\alpha$ and $q_\alpha$, but with different choices of initial phases for different values of $p^i_q$. For convenience in later expressions, we define \begin{equation} A_\alpha(p^i_q):=- \Delta \psi^{(0)}_\alpha(0,p_q^i).\label{a} \end{equation} By substituting the expansions~\eqref{near-identity psi} and \eqref{near-identity pi} into Eqs.~\eqref{psidot perturbative} and \eqref{pidot perturbative}, appealing to \eqref{qdot perturbative} and~\eqref{pqidot}, and equating coefficients of powers of $\epsilon$, one can solve for the frequencies $\Upsilon_\alpha^{(n)}$ and driving forces $G^i_{(n)}$, as well as for the functions in the averaging transformation. Explicitly, the leading-order terms in Eqs.~\eqref{psidot perturbative} and \eqref{pidot perturbative} are \begin{align} \frac{\partial\psi^{(0)}_\alpha}{\partial q_\beta}\Upsilon^{(0)}_\beta = \Upsilon^{(0)}_\alpha + \bm{\Upsilon}^{(0)}\cdot\frac{\partial \Delta\psi^{(0)}_\alpha}{\partial\bm{q}} &= \mathscr{f}^{(0)}_\alpha(\bm{\psi}^{(0)},p_q^j),\label{NITpsi0}\\ G^i_{(1)} + \bm{\Upsilon}^{(0)}\cdot\frac{\partial p^i_{(1)}}{\partial\bm{ q}} &= g^i_{(1)}(\bm{\psi}^{(0)},p_q^j).\label{NITp1} \end{align} Equation~\eqref{NITpsi0} is simply the geodesic relationship between $\psi_\alpha$ and $q_\alpha$. It follows that we can use the geodesic solution~\eqref{q_a(psi_a)} for $q_a(\psi^{(0)}_a,p^i_q)$. 
Concretely, we may write \begin{equation} q_a(\psi^{(0)}_a,p^i_q) = \Upsilon^{(0)}_a(p^i_q)\int_{0}^{\psi^{(0)}_a}\frac{d\psi'_a}{\mathscr{f}^{(0)}_a(\psi_a',p^i_q)},\label{qa(psia) perturbed} \end{equation} implying that the Fourier coefficients in Eq.~\eqref{Fourier q perturbed} can be computed as the integrals over $\bm{\psi}^{(0)}$ in Eq.~\eqref{Fourier q coeffs}, with the replacements $\Upsilon_a\to \Upsilon_a^{(0)}$ and $\psi_a\to\psi_a^{(0)}$. This relies on our particular choice of $A_\alpha$ in Eq.~\eqref{a}; different choices would lead to different $p^i_q$-dependent lower limits of integration in Eq.~\eqref{qa(psia) perturbed}, which in turn would lead to $p^i_q$-dependent phase factors appearing in Eq.~\eqref{Fourier q coeffs}. Using either of the forms~\eqref{Fourier q coeffs} or~\eqref{Fourier q coeffs from psi}, we can easily decompose Eqs.~\eqref{NITpsi0} and \eqref{NITp1} into Fourier series, with $\Delta\psi^{(0)}_\alpha = \sum_{\bm{k}\neq0}\Delta\psi^{(0,\bm{k})}_\alpha(p^j_q)e^{-iq_{\bm{k}}}$ and $p^i_{(1)}=\sum_{\bm{k}}p^i_{(1,\bm{k})}(p^j_q)e^{-iq_{\bm{k}}}$. From the $\bm{k}=0$ terms in the equations, we find \begin{align} \Upsilon^{(0)}_\alpha(p_q^j) &= \left\langle\mathscr{f}^{(0)}_\alpha(\bm{\psi}^{(0)},p_q^j)\right\rangle_{\bm{q}},\label{Upsilon0}\\ G_{(1)}^i(p_q^j) &= \left\langle g^i_{(1)}(\bm{\psi}^{(0)},p_q^j)\right\rangle_{\bm{q}},\label{G1} \end{align} and from the $\bm{k}\neq0$ terms we find \begin{align} \Delta\psi^{(0,\bm{k})}_\alpha(p_q^j) &= -\frac{\mathscr{f}^{(0,\bm{k})}_\alpha(p_q^j)}{i\Upsilon^{(0)}_{\bm{k}}(p_q^j)},\label{Dpsi0}\\ p^i_{(1,\bm{k})}(p_q^j) &= -\frac{g^i_{(1,\bm{k})}(p_q^j)}{i\Upsilon^{(0)}_{\bm{k}}(p_q^j)},\label{Dp1} \end{align} where $\Upsilon^{(0)}_{\bm{k}}:=\bm{k}\cdot \bm{\Upsilon}^{(0)} = k_r\Upsilon^{(0)}_r + k_z\Upsilon^{(0)}_z$. Note that these equations leave $p^i_{(1,00)}$ arbitrary. 
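The content of Eqs.~\eqref{Upsilon0}--\eqref{Dp1} can be illustrated with a one-dimensional toy model (an illustrative assumption, not derived from a Kerr force): take $d\psi/d\lambda=\Upsilon^{(0)}$ constant and $dp/d\lambda=\epsilon(1+\cos\psi)$. Then Eq.~\eqref{G1} gives $G_{(1)}=1$, and Eq.~\eqref{Dp1}, with $g_{(1,\pm1)}=1/2$, gives the oscillatory correction $p_{(1)}=\sin\psi/\Upsilon^{(0)}$, so that $p=p_q+\epsilon\sin\psi/\Upsilon^{(0)}$ with $dp_q/d\lambda=\epsilon$. A direct numerical integration confirms this:

```python
import numpy as np

# Toy model: dpsi/dlam = Ups0 (constant), dp/dlam = eps*(1 + cos psi).
# Averaged equations: dp_q/dlam = eps, with correction p_(1) = sin(psi)/Ups0.
eps, Ups0, p0 = 1e-2, 2.0, 7.0

def rhs(y):
    psi, p = y
    return np.array([Ups0, eps*(1.0 + np.cos(psi))])

# Classical RK4 integration of the full, oscillatory system:
y, h, lam = np.array([0.0, p0]), 1e-3, 0.0
while lam < 50.0:
    k1 = rhs(y); k2 = rhs(y + h/2*k1); k3 = rhs(y + h/2*k2); k4 = rhs(y + h*k3)
    y = y + h/6*(k1 + 2*k2 + 2*k3 + k4)
    lam += h

# Averaged solution p_q = p0 + eps*lam plus the oscillatory correction:
p_full = p0 + eps*lam + eps*np.sin(Ups0*lam)/Ups0
assert abs(y[1] - p_full) < 1e-8
```

In this toy case the averaged variable $p_q$ absorbs the entire secular drift, exactly as the transformation~\eqref{near-identity pi} is designed to do.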
As foreshadowed above, Eqs.~\eqref{Upsilon0} and \eqref{Dpsi0} are precisely the same as the geodesic formulas~\eqref{Upsilon = <f>} and \eqref{psi to q} (with the replacement $p^i\to p^i_q$). The only change is that the orbital parameters $p^i_q$, which determine the frequencies and amplitudes, now adiabatically evolve with time. Importantly, Eq.~\eqref{Dp1} requires $\Upsilon^{(0)}_{\bm{k}}\neq0$. This condition fails at resonances, where $\Upsilon^{(0)}_{\bm{k}^{\rm res}}=0$. Therefore, the averaging transformation is impossible when there is a resonance. We discuss this resonant case in the next section. Eq.~\eqref{Dpsi0} also superficially appears to encounter a singularity at resonance, but this is skirted by the particular form of $\mathscr{f}^{(0,\bm{k})}_\alpha$, as we see from the more explicit formula~\eqref{psi to q}. Moving on to the first subleading order in Eqs.~\eqref{psidot perturbative} and \eqref{pidot perturbative}, we have \begin{align} \Upsilon^{(1)}_\alpha + G^j_{(1)}\frac{\partial\psi^{(0)}_\alpha}{\partial p_q^j} +\bm{\Upsilon}^{(1)}\cdot&\frac{\partial\Delta\psi^{(0)}_\alpha}{\partial\bm{ q}}+\bm{\Upsilon}^{(0)}\cdot\frac{\partial\psi^{(1)}_\alpha}{\partial\bm{ q}} \nonumber\\ &= \mathscr{f}^{(1)}_\alpha + p^j_{(1)}\frac{\partial\mathscr{f}^{(0)}_\alpha}{\partial p_q^j} + \bm{\psi}^{(1)}\cdot\frac{\partial\mathscr{f}^{(0)}_\alpha}{\partial\bm{\psi}^{(0)}},\label{psi1 eqn}\\ G^i_{(2)} +G^j_{(1)}\frac{\partial p^i_{(1)}}{\partial p_q^j}+\bm{\Upsilon}^{(1)}\cdot&\frac{\partial p^i_{(1)}}{\partial \bm{q}}+\bm{\Upsilon}^{(0)}\cdot\frac{\partial p^i_{(2)}}{\partial \bm{q}} \nonumber\\ &= g^i_{(2)} + p^j_{(1)}\frac{\partial g^i_{(1)}}{\partial p_q^j} + \bm{\psi}^{(1)}\cdot\frac{\partial g^i_{(1)}}{\partial \bm{\psi}^{(0)}},\label{p2 eqn} \end{align} where all quantities on the left are functions of $(\bm{q},p_q^j)$, and all those on the right are functions of $(\bm{\psi}^{(0)},p_q^j)$.
Taking the average of these equations yields \begin{align} \Upsilon^{(1)}_\alpha(p_q^i) &= \left\langle\mathscr{f}^{(1)}_\alpha\right\rangle_{\!\bm{q}}+\left\langle p^i_{(1)}\frac{\partial\mathscr{f}^{(0)}_\alpha}{\partial p_q^i} +\bm{\psi}^{(1)}\cdot\frac{\partial\mathscr{f}^{(0)}_\alpha}{\partial \bm{\psi}^{(0)}}\right\rangle_{\!\bm{q}} - G^j_{(1)}\frac{\partial A_\alpha}{\partial p^j_q}, \label{Upsilon1}\\ G_{(2)}^i(p_q^j) &= \left\langle g^i_{(2)}\right\rangle_{\bm{q}} + \left\langle p^j_{(1)}\frac{\partial g^i_{(1)}}{\partial p_q^j} +\bm{\psi}^{(1)}\cdot\frac{\partial g^i_{(1)}}{\partial\bm{ \psi}^{(0)}}\right\rangle_{\!\bm{q}} - G^j_{(1)}\frac{\partial p^i_{(1,00)}}{\partial p^j_q}.\label{G2} \end{align} We see from Eq.~\eqref{Upsilon1} that a judicious choice of $p^i_{(1,00)}$ allows us to set \begin{equation} \Upsilon^{(1)}_\alpha = 0 \quad\text{for }\alpha=r,z,\phi.\label{Upsilon1_rzphi} \end{equation} Such a $p^i_{(1,00)}$ is determined from \begin{equation} p^i_{(1,00)}\frac{\partial\Upsilon^{(0)}_\alpha}{\partial p_q^i} = -\left\langle\mathscr{f}^{(1)}_\alpha\right\rangle_{\!\bm{q}} - \sum_{\bm{k}\neq0}p^i_{(1,\bm{k})}\frac{\partial\mathscr{f}^{(0,-\bm{k})}_\alpha}{\partial p_q^i} -\left\langle\bm{\psi}^{(1)}\cdot\frac{\partial\mathscr{f}^{(0)}_\alpha}{\partial \bm{\psi}^{(0)}}\right\rangle_{\!\bm{q}}+ G^j_{(1)}\frac{\partial A_\alpha}{\partial p^j_q}\label{p100} \end{equation} for $\alpha=r,z,\phi$. We could alternatively set $\Upsilon^{(1)}_\alpha = 0$ for a different trio of components, but this choice will be particularly useful in the final section of this review. This freedom is in addition to the freedom discussed above regarding the choice of $A_\alpha$; i.e., the functions $A_\alpha$ and $p^i_{(1,00)}$ in the averaging transformation are degenerate with $\Upsilon^{(1)}_\alpha$. Ref.~\cite{vandeMeent:2018rms} provides a more thorough discussion of the freedom within near-identity averaging transformations. 
The averages in Eqs.~\eqref{Upsilon1}--\eqref{G2} involve $\psi^{(1)}_a$, which can be obtained from Eq.~\eqref{psi1 eqn}. A $2\pi$-biperiodic solution to that equation is\footnote{This seems to be the unique $2\pi$-biperiodic solution. Any other solution can only differ by a homogeneous solution to Eq.~\eqref{psi1 eqn}, which must take the form $\exp(\int \mathscr{f}'_a dq_a/\Upsilon^{(0)}_a)f(q_b-q_a\Upsilon^{(0)}_b/\Upsilon^{(0)}_a)$ for some function $f$, with $b\neq a$. It appears that such a function cannot simultaneously be $2\pi$ periodic in both $q_a$ and $q_b$.} \begin{align} \psi_a^{(1)}(\bm{q}, p^j_q) = \frac{1}{Y_a(q_a, p^j_q)}\sum_{\bm{k}}\sum_k \frac{S^{\bm{k}}_a(p^j_q)Y^k_a(p^j_q)}{-i\Upsilon^{(0)}_{\bm{k}}-ik\Upsilon^{(0)}_a - \langle \mathscr{f}^{\prime}_a\rangle_{\bm{q}}} e^{-i q_{\bm{k}} - i k q_a},\label{psi1_a soln} \end{align} where $Y_a(q_a, p^j_q) := \exp[-F_a(q_a, p^j_q)/\Upsilon^{(0)}_a(p^j_q)] = \sum_k Y^k_a e^{-ikq_a}$, $F_a := \sum_{k\neq0}\frac{\mathscr{f}^{\prime\,k}_a}{-ik} e^{-ikq_a}$ is the antiderivative of the purely oscillatory part of $\mathscr{f}^\prime_a:=\partial\mathscr{f}^{(0)}_a/\partial\psi^{(0)}_a$, and \begin{equation} S_a(\bm{q},p^i_q) := - G^j_{(1)}\frac{\partial\psi^{(0)}_a}{\partial p_q^j} + \mathscr{f}^{(1)}_a + p^j_{(1)}\frac{\partial\mathscr{f}^{(0)}_a}{\partial p_q^j} = \sum_{\bm{k}}S^{\bm{k}}_a(p^i_q) e^{-i q_{\bm{k}}}. \end{equation} The remaining pieces of Eqs.~\eqref{psi1 eqn} and \eqref{p2 eqn} determine the purely oscillatory parts of $\psi^{(1)}_t$, $\psi^{(1)}_\phi$, and $p^i_{(2)}$. 
Specifically, \begin{align} \psi^{(1,\bm{k})}_\alpha &= \frac{1}{-i\Upsilon_{\bm{k}}^{(0)}} \left(\mathscr{f}^{(1,\bm{k})}_\alpha + P_\alpha^{\bm{k}} - G^j_{(1)}\frac{\partial\Delta\psi^{(0,\bm{k})}_\alpha}{\partial p_q^j}\right),\label{Dpsi1}\\ p^i_{(2,\bm{k})} &= \frac{1}{-i\Upsilon_{\bm{k}}^{(0)}} \left(g^i_{(2,\bm{k})} + Q^i_{\bm{k}} - G^j_{(1)}\frac{\partial p^i_{(1,\bm{k})}}{\partial p_q^j}\right) \end{align} for $\alpha=t,\phi$, where $P_\alpha := p^j_{(1)}\frac{\partial\mathscr{f}^{(0)}_\alpha}{\partial p_q^j} + \bm{\psi}^{(1)}\cdot\frac{\partial\mathscr{f}^{(0)}_\alpha}{\partial\bm{\psi}^{(0)}}$ and $Q^i := p^j_{(1)}\frac{\partial g^i_{(1)}}{\partial p_q^j} + \bm{\psi}^{(1)}\cdot\frac{\partial g^i_{(1)}}{\partial\bm{\psi}^{(0)}}$. This averaging transformation can be carried to any order. Analogous calculations also apply if we use $P^i = (E,L_z,Q)$ rather than $p^i =(p,e,\iota)$. Ultimately, the coordinate trajectory $z^\alpha$ can be expressed in terms of $(q_\alpha,p^i_q)$ as \begin{equation} z^\alpha(q_\beta,p^i_q) = z^\alpha_{(0)}(q_\beta,p^i_q) + \epsilon z^\alpha_{(1)}(\bm{q},p^i_q) +O(\epsilon^2). \end{equation} The leading-order trajectory has the same dependence on $q_\alpha$ and $p^i_q$ as a geodesic. That is, if we write a geodesic as $z^\alpha_G(q_\beta,p^i)$, given by Eq.~\eqref{zG(lambda)}, then $z^\alpha_{(0)}(q_\beta,p^i_q)=z^\alpha_G(q_\beta,p^i_q)$. Wherever the geodesic expressions involve $P^i$, they are here evaluated at $P^i_q=(E_q,L_q,Q_q)$, which are related to $p^i_q$ by the geodesic relationships. (We suppress the subscript $z$ on $L_z$.) The difference between the geodesic and the accelerated trajectory lies entirely in the evolution of their arguments: rather than evolving according to Eqs.~\eqref{qdot geodesic} and \eqref{pidot geodesic}, $q_\alpha$ and $p^i_q$ now evolve according to Eqs.~\eqref{qdot perturbative} and \eqref{pqidot}. 
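The practical payoff is that, once $\Upsilon^{(0)}_\alpha$ and $G^i_{(1)}$ are known, the inspiral can be evolved using only the slow equations for $p^i_q$ together with the phase equations for $q_\alpha$, with no oscillatory driving terms. A toy illustration (the frequency law and flux below are invented for the example, not Kerr results):

```python
import numpy as np

# Toy averaged system: dp_q/dlam = -eps (constant "flux"),
# dq/dlam = Ups(p_q) = p_q**(-3/2) (an invented frequency law).
eps, p0 = 1e-3, 10.0

def rhs(y):
    q, pq = y
    return np.array([pq**-1.5, -eps])

# RK4 evolution of the smooth, averaged equations only:
y, h, lam = np.array([0.0, p0]), 1e-2, 0.0
while lam < 1000.0:
    k1 = rhs(y); k2 = rhs(y + h/2*k1); k3 = rhs(y + h/2*k2); k4 = rhs(y + h*k3)
    y = y + h/6*(k1 + 2*k2 + 2*k3 + k4)
    lam += h

# This toy system integrates in closed form, giving a check on the phase:
pq_exact = p0 - eps*lam
q_exact = (2/eps)*(pq_exact**-0.5 - p0**-0.5)
assert abs(y[1] - pq_exact) < 1e-9
assert abs(y[0] - q_exact) < 1e-6
```

Because the right-hand sides are smooth functions of $p^i_q$ alone, step sizes can be taken on the radiation-reaction scale rather than the orbital scale, which is the source of the speedups reported in Ref.~\cite{vandeMeent:2018rms}.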
In the context of a binary, the small corrections $\epsilon z^\alpha_{(1)}(\bm{q},p^i_q)$ to the trajectory remain uniformly small over the entire inspiral until the transition to plunge~\cite{Ori:2000zn}; because they are periodic functions of $\bm{q}$, they have no large secular terms. $t^{(1)}$ and $ \phi^{(1)}$ are given by Eq.~\eqref{Dpsi1}, with $t^{(1,00)}$ and $\phi^{(1,00)}$ left arbitrary. $r^{(1)}$ and $z^{(1)}$ are given by \begin{align} x^a_{(1)} = \bm{\psi}^{(1)}\cdot\frac{\partial x^a_G}{\partial\bm{\psi}^{(0)}} + p^i_{(1)}\frac{\partial x^a_G}{\partial p^i_q}, \end{align} with $\psi^{(1)}_a$ given by Eq.~\eqref{psi1_a soln}, the oscillatory part of $p^i_{(1)}$ by Eq.~\eqref{Dp1}, and $p^i_{(1,00)}$ by Eq.~\eqref{p100}. Refs.~\cite{Hinderer:2008dm,vandeMeent:2013sza,Fujita:2016igj,Isoyama:2018sib,vandeMeent:2018rms} contain more detailed action-angle treatments of perturbed orbits. With the exception of Ref.~\cite{vandeMeent:2018rms}, these treatments have not begun with equations of the form~\eqref{psidot perturbative} and \eqref{pidot perturbative}. Instead, they began with approximate angle variables, which we will denote $\hat q_\alpha$ and which satisfy $\hat q_\alpha = q_\alpha + O(\epsilon)$. The equations of motion then take the form \begin{align} \frac{d\hat q_\alpha}{d\lambda} &= \Upsilon^{(0)}_\alpha(p^j) + \epsilon U^{(1)}_\alpha(\hat{\bm{q}},p^j)+O(\epsilon^2),\label{q0dot}\\ \frac{dp^i}{d\lambda} &= \epsilon F^i_{(1)}(\hat{\bm{q}},p^j) + \epsilon^2 F^i_{(2)}(\hat{\bm{q}},p^j)+O(\epsilon^3).\label{pdot(q0)} \end{align} Ref.~\cite{Hinderer:2008dm} derives concrete equations of this form in the case that proper time $\tau$ is used instead of Mino time and that $\{E,L_z,Q\}$ are used instead of $\{p,e,\iota\}$.\footnote{The notation in Ref.~\cite{Hinderer:2008dm} differs in several significant ways from ours. 
In particular, Ref.~\cite{Hinderer:2008dm} uses $\lambda$ to denote a rescaled $\tau$, $q_\alpha$ to denote an analogue of our $\hat q_\alpha$ (and associated with $\tau$ rather than Mino time), and $\psi_\alpha$ to denote an analogue of our $q_\alpha$ (again associated with $\tau$).} [The driving forces $F^i_{(n)}$ are then given by Eq.~\eqref{dEdtau}.] The averaging transformation $(\hat q_\alpha,p^i)\to (q_\alpha,p^i_q)$ can be found as we did above, with substantial simplifications arising from the fact that $\frac{d\hat q_\alpha}{d\lambda}$ is constant at leading order; the transformation is given by Eqs.~\eqref{q0 to q} and \eqref{p to pq} below (without the restriction $\bm{k}\neq N\bm{k}_{\rm res}$ in the nonresonant case). Our particular construction in this section and the next is instead designed to link the action-angle description with the quasi-Keplerian one. It appears here for the first time. However, Ref.~\cite{vandeMeent:2018rms} considers more general sets of coupled differential equations that involve variables analogous to our $\psi_\alpha$ as well as variables analogous to $\hat q_\alpha$, though without providing a solution analogous to our~\eqref{psi1_a soln}. \subsubsection{Perturbed Boyer-Lindquist frequencies and action angles}\label{perturbed BL frequencies} Because we solve field equations and extract waveforms using Boyer-Lindquist time $t$, it is once again useful to construct variables $(\varphi_\alpha,p^i_{\varphi})$ associated with $t$, where $\varphi_t=\psi_t=t$ and $p^i_\varphi=(p_\varphi,e_\varphi,\iota_\varphi)$. 
The construction of the variables $(\varphi_\alpha,p^i_{\varphi})$ (and of their evolution equations) is analogous to the construction based on $\lambda$: the osculating-geodesic equations for $d\psi_\alpha/dt$ and $dp^i/dt$ have the same form as Eqs.~\eqref{psidot perturbative} and \eqref{pidot perturbative}, simply with $\mathscr{f}^{(n)}_\alpha\to \mathscr{f}^{(n)}_\alpha/\mathscr{f}^{(0)}_t$ and $g^i_{(n)}\to g^i_{(n)}/\mathscr{f}^{(0)}_t$, and after a near-identity averaging transformation we arrive at the equations of motion \begin{align} \frac{d\varphi_\alpha}{dt} &= \Omega^{(0)}_\alpha(p^j_{\varphi}),\label{dphidt}\\ \frac{dp^i_{\varphi}}{dt} &= \epsilon \Gamma_{(1)}^i(p^j_{\varphi}) + \epsilon^2 \Gamma_{(2)}^i(p^j_{\varphi}) + O(\epsilon^3).\label{dpphidt} \end{align} $\Omega^{(0)}_\alpha$ are the geodesic frequencies, and $\Omega^{(0)}_t=1$. The corrections $\Omega^{(n)}_\alpha$ for $\alpha=r,z,\phi$ and $n>0$ are eliminated just as in the previous section, while $\Omega^{(n)}_t=0$ trivially for $n>0$ because $d\varphi_t/dt=1$. The two sets of variables $(\varphi_\alpha,p^i_\varphi)$ and $(q_\alpha,p^i_q)$ are related by a transformation \begin{align} \varphi_\alpha(q_\beta,p_q^j,\epsilon) &= \varphi^{(0)}_\alpha(q_\alpha,p^i_q) + \epsilon \Phi_\alpha^{(1)}(\bm{q},p_q^j) +O(\epsilon^2),\label{q to phi}\\ p^i_{\varphi}(q_\beta, p_q^j,\epsilon) &= p_q^i + \epsilon \pi^i_{(1)}(\bm{q},p_q^j)+O(\epsilon^2),\label{pq to pphi} \end{align} where the leading term in $\varphi_\alpha$ is given by the geodesic relationship~\eqref{varphi(q)}, which we restate as \begin{equation} \varphi^{(0)}_\alpha(q_\alpha,p^i_q) := q_\alpha+B_\alpha(p^i_q)+\Omega_\alpha^{(0)}(p^i_q)\Delta t^{(0)}(\bm{q},p_q^j),\label{phi0} \end{equation} defining \begin{equation} B_\alpha(p^i_q):=- \Omega_\alpha^{(0)}(p^i_q)\Delta t^{(0)}(0,p_q^j).
\end{equation} Like in the geodesic case, this value for $B_\alpha$ imposes that $\varphi_\alpha$ and $q_\alpha$ (and $\psi^{(0)}_\alpha$) have the same origin in phase space. As discussed around Eqs.~\eqref{a} and \eqref{qa(psia) perturbed}, this means that we can immediately utilize all the relationships from Sec.~\ref{geodesic BL angle variables}. The corrections $\Phi^{(1)}_\alpha$ and $\pi^i_{(1)}$ are $2\pi$-biperiodic in $\bm{q}$. The terms in this transformation, as well as the driving forces $\Gamma^i_{(n)}$, can be derived by substituting Eqs.~\eqref{q to phi} and \eqref{pq to pphi} into Eqs.~\eqref{dphidt} and \eqref{dpphidt}. Taking the average of the resulting equations and appealing to Eqs.~\eqref{qdot perturbative} and \eqref{pqidot}, we obtain \begin{align} \Omega^{(0)}_\alpha = \frac{\Upsilon^{(0)}_\alpha}{\Upsilon^{(0)}_t} \quad \text{and} \quad \Gamma_{(1)}^i = \frac{G^i_{(1)}}{\Upsilon^{(0)}_t} \end{align} at leading order and \begin{align} \Omega^{(1)}_\alpha =0 &= -\frac{1}{\Upsilon_t^{(0)}}\left(G^j_{(1)}\partial_{p^j}B_\alpha+\langle R^i\rangle_{\bm{q}} \partial_{p^i}\Omega^{(0)}_\alpha + \langle P_t\rangle_{\bm{q}} \Omega^{(0)}_\alpha\right),\label{Omega1}\\ \Gamma_{(2)}^i &= \frac{1}{\Upsilon^{(0)}_t}\left(G^i_{(2)} +G^j_{(1)}\partial_{p^j}\langle \pi^i_{(1)}\rangle_{\bm{q}} - \langle R^j\rangle_{\bm{q}}\partial_{p^j}\Gamma^i_{(1)} - \langle P_t\rangle_{\bm{q}}\Gamma^i_{(1)} \right)\label{Gamma2} \end{align} at the first subleading order, where $R^i:=\mathscr{f}^{(0)}_t \pi^i_{(1)}$ and $P_t=\bm{\psi}^{(1)}\cdot\partial_{\bm{\psi}}\mathscr{f}^{(0)}_t + p^i_{(1)}\partial_{p^i}\mathscr{f}^{(0)}_t$. The average $\langle \pi^i_{(1)}\rangle_{\bm{q}}$ is chosen to enforce $\Omega^{(1)}_\alpha=0$ in Eq.~\eqref{Omega1}. 
The oscillatory parts of the equations yield \begin{align} \Delta t^{(0,\bm{k})} &= \frac{\mathscr{f}^{(0,\bm{k})}_t}{-i\Upsilon^{(0)}_{\bm{k}}},\\ \pi^i_{(1,\bm{k})} &= \frac{\mathscr{f}^{(0,\bm{k})}_t}{-i\Upsilon^{(0)}_{\bm{k}}}\Gamma^i_{(1)} \end{align} at leading order and \begin{align} \Phi^{(1,\bm{k})}_\alpha &= \frac{1}{-i\Upsilon^{(0)}_{\bm{k}}}\left(R^i_{\bm{k}}\partial_{p^i}\Omega^{(0)}_\alpha+P^{\bm{k}}_t\Omega^{(0)}_\alpha -G^i_{(1)}\partial_{p^i} \Delta\varphi^{(0,\bm{k})}_\alpha\right) \end{align} at the first subleading order. In all of the above expressions, $\bm{k}$ refers to a Fourier decomposition into $e^{-iq_{\bm{k}}}$ modes. All functions of $p^i$ are evaluated at $p^i_q$, and inside the integrals~\eqref{Fourier q coeffs}, all functions of $\bm{\psi}$ are evaluated at $\bm{\psi}^{(0)}(\bm{q},p^i_q)$. When solving the field equations, we shall require Fourier decompositions with respect to $\bm{\varphi}$, \begin{equation} f[r(t),z(t)] = \sum_{\bm{k}} f^{(\varphi)}_{\bm{k}}(p^j_\varphi) e^{-i\varphi_{\bm{k}}}. \end{equation} We can calculate the coefficients as integrals over $\bm{q}$ using the transformation~\eqref{q to phi}. However, it is simpler to use the geodesic change of variables defined by the leading term in the transformation. The coefficients are then given by Eq.~\eqref{Fourier coeff relationship} with the replacements $p^i\to p^i_\varphi$, $\Upsilon_\alpha\to\Upsilon^{(0)}_\alpha$, $\mathscr{f}_\alpha\to\mathscr{f}^{(0)}_\alpha$, or by Eq.~\eqref{Fourier coeff relationship psi} with the additional replacement $\bm{\psi}\to\bm{\psi}^{(0)}$. 
We will also require the transformation from $(\psi_\alpha,p^i)$ to $(\varphi_\alpha,p^i_\varphi)$: \begin{align} \psi_\alpha(\varphi_\beta,p^i_\varphi,\epsilon) &= \psi^{(0)}_\alpha(\varphi_\beta,p^i_\varphi) + \epsilon\psi^{(\varphi,1)}_\alpha(\bm{\varphi},p^i_\varphi) + O(\epsilon^2),\label{psi from phi perturbative}\\ p^i(\varphi_\alpha,p^j_\varphi,\epsilon) &= p^i_\varphi + \epsilon p^i_{(\varphi,1)}(\bm{\varphi},p^j_\varphi)+O(\epsilon^2).\label{p from p_phi perturbative} \end{align} Following the same steps as in the previous section, at leading order we recover the geodesic frequencies and find $\psi^{(0)}_\alpha(\varphi_\beta,p^i_\varphi)$ is given by the geodesic relationship~\eqref{psi(phi)}. Solving the subleading-order equations is made difficult because the analogue of Eq.~\eqref{psi1 eqn} has the form \begin{equation} \bm{\Omega}^{(0)}\cdot\frac{\partial\psi^{(\varphi,1)}_\alpha}{\partial\bm{\varphi}} - \bm{\psi}^{(\varphi,1)}\cdot\frac{\partial}{\partial\bm{\psi}^{(0)}}\left(\frac{\mathscr{f}^{(0)}_\alpha}{\mathscr{f}^{(0)}_t}\right) = \ldots \end{equation} The $\alpha=r,z$ components of this equation, unlike those of Eq.~\eqref{psi1 eqn}, are coupled partial differential equations for $\bm{\psi}^{(\varphi,1)}$, which do not have a solution of the form~\eqref{psi1_a soln}. However, we can find the subleading terms in Eqs.~\eqref{psi from phi perturbative} and \eqref{p from p_phi perturbative} by combining our knowledge of $(\varphi_\alpha,p^i_\varphi)$ and $(\psi_\alpha,p^i)$ as functions of $(q_\alpha,p^i_q)$. 
Substituting the expansions~\eqref{q to phi} and \eqref{pq to pphi} into the right-hand sides of Eqs.~\eqref{psi from phi perturbative} and \eqref{p from p_phi perturbative} and equating the results with Eqs.~\eqref{near-identity psi} and \eqref{near-identity pi}, we find \begin{align} \psi^{(\varphi,1)}_\alpha(\varphi^{(0)}_\beta,p^i_\varphi) &= \psi^{(1)}_\alpha(q_\beta,p^i_\varphi) - \Phi^{(1)}_\beta\frac{\partial\psi^{(0)}_\alpha}{\partial\varphi_\beta}(\varphi^{(0)}_\gamma,p^i_\varphi) -\pi^i_{(1)}\frac{\partial\psi^{(0)}_\alpha}{\partial p^i_\varphi}(\varphi^{(0)}_\gamma,p^i_\varphi),\label{psi(varphi,1)}\\ p^i_{(\varphi,1)}(\bm{\varphi}^{(0)},p^j_\varphi) &= p^i_{(1)}(\bm{q},p^i_\varphi) - \pi^{i}_{(1)}(\bm{\varphi}^{(0)},p^j_\varphi),\label{p(varphi,1)} \end{align} where $\varphi^{(0)}_\alpha$ is given by Eq.~\eqref{phi0} with $p^i_q\to p^i_\varphi$. The inverse transformation, which we will also need, is \begin{align} \varphi_\alpha(\psi_\beta,p^i,\epsilon) &= \varphi^{(0)}_\alpha(\psi_\beta,p^i) +\epsilon\Phi^{(1)}_\alpha(\bm{q}^{(0)},p^i)-\epsilon\psi^{(1)}_\beta\frac{\partial \varphi^{(0)}_\alpha}{\partial\psi_\beta}-\epsilon p^i_{(1)}\frac{\partial \varphi^{(0)}_\alpha}{\partial p^i},\label{varphi(psi) perturbative}\\ p^i_\varphi(\psi_\beta,p^i,\epsilon) &= p^i +\epsilon\pi^i_{(1)}(\bm{\varphi}^{(0)},p^j) - \epsilon p^i_{(1)}(\bm{q}^{(0)},p^j),\label{pvarphi(psi) perturbative} \end{align} where $\varphi^{(0)}_\alpha$ and $q^{(0)}_\alpha$ are the geodesic functions of $\psi_\alpha$ and $p^i$. 
Finally, we can reconstruct the coordinate trajectory $z^\alpha$ in the form \begin{equation} z^\alpha(\varphi_\beta,p^i_\varphi) = z^\alpha_{(0)}(\varphi_\beta,p^i_{\varphi}) + \epsilon z^\alpha_{(\varphi,1)}(\bm{\varphi},p^i_\varphi) + O(\epsilon^2).\label{z expansion - varphi} \end{equation} The zeroth-order trajectory has the same functional dependence as a geodesic; that is, $z^\alpha_{(0)}(\varphi_\beta,p^i_\varphi) = z^\alpha_G(\varphi_\beta,p^i_\varphi)$, where $z^\alpha_G(\varphi_\beta,p^i)$ is given in Eq.~\eqref{zG(t)}. In analogy with the Mino-time solution, wherever the geodesic expressions involve $P^i$, they are here evaluated at $P^i_\varphi=(E_\varphi,L_\varphi,Q_\varphi)$, which are related to $p^i_\varphi$ by the geodesic relationships. The first-order corrections $z^\alpha_{(\varphi,1)}$ are $t_{(\varphi,1)} = 0$, $\phi_{(\varphi,1)}=\psi^{(\varphi,1)}_\phi$ given by Eq.~\eqref{psi(varphi,1)}, and \begin{align} x^a_{(\varphi,1)} &=\bm{\psi}^{(\varphi,1)}\cdot\frac{\partial x^a_G}{\partial\bm{\psi}^{(0)}}+ p^i_{(\varphi,1)}\frac{\partial x^a_G}{\partial p^i_\varphi},\label{xa(varphi,1)} \end{align} with $\bm{\psi}^{(\varphi,1)}$ and $p^i_{(\varphi,1)}$ given by Eqs.~\eqref{psi(varphi,1)} and \eqref{p(varphi,1)}. \subsubsection{Multiscale expansions, adiabatic approximation, and post-adiabatic approximations}\label{multiscale expansion of orbit} Self-accelerated orbits are often described with a multiscale (or two-timescale) expansion~\cite{Pound:2007ti,Hinderer:2008dm,Mino:2008rr,Pound:2010wa,Pound:2015wva,Hughes:2016xwf,Bonga:2019ycj,Miller:2020bft}. This is essentially equivalent to the averaging transformation described above. To illustrate the method, we return to Eqs.~\eqref{psidot perturbative} and \eqref{pidot perturbative}. We introduce a {\em slow time} variable $\tilde \lambda := \epsilon\lambda$; this changes by an amount $\sim \epsilon^0$ on the time scale $\sim 1/\epsilon$. 
In place of the transformations~\eqref{near-identity psi} and \eqref{near-identity pi}, we adopt expansions \begin{align} \psi_\alpha(q_\beta,\tilde\lambda,\epsilon) &= q_\alpha + \tilde A_\alpha(\tilde \lambda) + \Delta\tilde\psi^{(0)}_\alpha(\bm{q},\tilde \lambda) + \epsilon\tilde\psi^{(1)}_\alpha(\bm{q},\tilde\lambda) + O(\epsilon^2),\label{multiscale psi}\\ p^i(q_\alpha,\tilde\lambda,\epsilon) &= \tilde p^i_{(0)}(\tilde \lambda) + \epsilon \tilde p^i_{(1)}(\bm{q},\tilde\lambda) + O(\epsilon^2),\label{multiscale pi} \end{align} where $q_\alpha$ satisfies \begin{equation} \frac{dq_\alpha}{d\lambda} = \tilde\Upsilon^{(0)}_\alpha(\epsilon \lambda) + \epsilon \tilde \Upsilon^{(1)}_\alpha(\epsilon \lambda) + O(\epsilon^2):=\tilde\Upsilon_\alpha(\epsilon \lambda,\epsilon). \end{equation} We then substitute these expansions into Eqs.~\eqref{psidot perturbative} and \eqref{pidot perturbative}, applying the chain rule \begin{equation} \frac{d}{d\lambda} = \tilde\Upsilon_\alpha \frac{\partial}{\partial q_\alpha} +\epsilon\frac{\partial}{\partial\tilde\lambda}. \end{equation} $q_\alpha$ and $\tilde\lambda$ are then treated as independent variables, making Eqs.~\eqref{psidot perturbative} and \eqref{pidot perturbative} into a sequence of equations, one set at each order in $\epsilon$. These equations are essentially equivalent to Eqs.~\eqref{NITpsi0} and \eqref{NITp1} at leading order and to Eqs.~\eqref{psi1 eqn} and \eqref{p2 eqn} at first subleading order, with tildes placed over all quantities and the following replacements: $p^i_q\to \tilde p^i_{(0)}$, $G^i_{(n)}\to d\tilde p^i_{(n-1)}/d\tilde\lambda$, $G^j_{(1)}\frac{\partial\psi^{(0)}_\alpha}{\partial p^j_q}\to d(\tilde A_\alpha+\Delta\tilde\psi^{(0)}_\alpha)/d\tilde\lambda$, and $G^j_{(1)}\frac{\partial p^i_{(1)}}{\partial p^j_q}\to 0$. These equations can be solved just as we solved Eqs.~\eqref{NITpsi0}, \eqref{NITp1}, \eqref{psi1 eqn}, and \eqref{p2 eqn}. 
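As a concrete toy illustration of this averaging (the force and frequency below are invented for the demonstration; this is not the Kerr system), consider $d\psi/d\lambda = p$, $dp/d\lambda = -\epsilon(1+\cos\psi)$. Since $\langle\cos\psi\rangle=0$, the leading-order averaged equation is $dp_q/d\lambda = -\epsilon$, and the full solution should track the averaged one to $O(\epsilon)$ over the radiation-reaction time $\lambda\sim 1/\epsilon$. A minimal numerical sketch:

```python
import math

def rk4_step(state, h, deriv):
    """One classical Runge-Kutta step for a list-valued state."""
    k1 = deriv(state)
    k2 = deriv([s + 0.5*h*k for s, k in zip(state, k1)])
    k3 = deriv([s + 0.5*h*k for s, k in zip(state, k2)])
    k4 = deriv([s + h*k for s, k in zip(state, k3)])
    return [s + h*(a + 2*b + 2*c + d)/6
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

eps = 1e-3

def deriv(state):
    psi, p = state
    # toy osculating equations: fast phase psi, slowly forced "parameter" p
    return [p, -eps*(1.0 + math.cos(psi))]

# integrate over half a radiation-reaction time, lambda in [0, 0.5/eps]
state, h, nsteps = [0.0, 1.0], 0.01, 50_000
for _ in range(nsteps):
    state = rk4_step(state, h, deriv)

lam = h*nsteps
p_averaged = 1.0 - eps*lam   # solution of the averaged equation dp_q/dlambda = -eps
# the oscillatory force contributes only O(eps) deviations, with no secular growth
assert abs(state[1] - p_averaged) < 10*eps
```

The assertion checks the defining property of the averaged variables: the oscillatory part of the force never accumulates, so $p$ stays within $O(\epsilon)$ of $p_q$ even after $\sim\!100$ orbital periods.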
The only difference between this expansion and Eqs.~\eqref{near-identity psi} and \eqref{near-identity pi} is how each parameterizes the orbit's slow evolution, whether with slowly evolving parameters $p^i_q$ or with slow time $\tilde\lambda$. Indeed, the solutions are easily related. The solutions to Eqs.~\eqref{qdot perturbative} and \eqref{pqidot} can be expanded as \begin{align} q_\alpha(\tilde \lambda,\epsilon) &= \frac{1}{\epsilon}\left[\tilde q^{(0)}_\alpha(\tilde \lambda) + \epsilon \tilde q^{(1)}_\alpha(\tilde \lambda) + O(\epsilon^2)\right],\label{multiscale q}\\ p^i_q(\tilde \lambda,\epsilon) &= \tilde p^i_{q(0)}(\tilde\lambda) + \epsilon \tilde p^i_{q(1)}(\tilde \lambda) + O(\epsilon^2),\label{multiscale pq} \end{align} where $\tilde q^{(n)}_\alpha(\tilde\lambda) = \int_0^{\tilde\lambda}\tilde \Upsilon^{(n)}_\alpha(\tilde\lambda^\prime) d\tilde\lambda^\prime + \tilde q^{(n)}_\alpha(0)$ with \begin{align} \tilde\Upsilon^{(0)}_\alpha(\tilde \lambda) &= \Upsilon^{(0)}_\alpha,\\ \tilde\Upsilon^{(1)}_\alpha(\tilde \lambda) &= \Upsilon^{(1)}_\alpha + \tilde p^i_{q(1)}(\tilde\lambda)\partial_{p^i}\Upsilon^{(0)}_\alpha. \end{align} On the right, $\Upsilon^{(n)}_\alpha$ and its derivatives are evaluated at $\tilde p^i_{q(0)}(\tilde\lambda)$. Substituting these expansions into Eqs.~\eqref{near-identity psi} and \eqref{near-identity pi} and comparing to Eqs.~\eqref{multiscale psi} and \eqref{multiscale pi}, we read off \begin{align} \Delta\tilde\psi^{(0)}_\alpha(\bm{q},\tilde\lambda) &= \Delta\psi^{(0)}_\alpha,\\ \tilde p^i_{(0)}(\tilde\lambda) &= \tilde p^i_{q(0)}(\tilde\lambda),\\ \tilde\psi^{(1)}_\alpha(\bm{q},\tilde\lambda) &= \psi^{(1)}_\alpha + \tilde p^j_{q(1)}(\tilde\lambda)\partial_{p^j}\psi^{(0)}_\alpha,\\ \tilde p^i_{(1)}(\bm{q},\tilde\lambda) &= p^i_{(1)}+\tilde p^i_{q(1)}(\tilde\lambda). \end{align} Here $\psi^{(0)}_\alpha$, $p^i_{(1)}$, $\psi^{(1)}_\alpha$, and their derivatives are evaluated at $[\bm{q},\tilde p^j_{q(0)}(\tilde\lambda)]$. 
These particular relationships rely on choosing $\tilde A_\alpha(\tilde \lambda)=A_\alpha[\tilde p^i_{q(0)}(\tilde\lambda)]$. Just as the averaging transformation did, the multiscale expansion has considerable degeneracy between $\tilde A_\alpha$, $\tilde\Upsilon^{(1)}_\alpha$, and $\langle\tilde p^i_{(1)}\rangle$. If different choices are made, then we cannot identify $q_\alpha$ between the two methods. However, regardless of choices, both methods will ultimately output identical solutions of the form $\psi_\alpha(\tilde \lambda,\epsilon)$ and $p^i(\tilde\lambda,\epsilon)$ (assuming identical initial conditions), and when written in that form they can be unambiguously related. All the same relationships apply if we instead use $t$-based variables with a slow time $\tilde t :=\epsilon t$. When considering the multiscale expansion of the Einstein equation, it will be useful to have at hand the expansions \begin{align} \varphi_\alpha(\tilde t,\epsilon) &= \frac{1}{\epsilon}\left[\tilde\varphi^{(0)}_\alpha(\tilde t) + \epsilon\tilde\varphi^{(1)}_\alpha(\tilde t) + O(\epsilon^2)\right],\label{multiscale varphi}\\ p^i_\varphi(\tilde t,\epsilon) &= \tilde p^i_{\varphi(0)}(\tilde t) + \epsilon\tilde p^i_{\varphi(1)}(\tilde t) + O(\epsilon^2).\label{multiscale pivarphi} \end{align} It follows from Eqs.~\eqref{dphidt} and \eqref{dpphidt} that the coefficients in these expansions satisfy \begin{align} \frac{d\tilde\varphi^{(0)}_\alpha}{d\tilde t} &= \Omega^{(0)}_\alpha(\tilde p^j_{\varphi(0)}),\label{0PA varphi}\\ \frac{d\tilde p^i_{\varphi(0)}}{d\tilde t} &= \Gamma^i_{(1)}(\tilde p^j_{\varphi(0)}),\label{0PA pivarphi}\\ \frac{d\tilde\varphi^{(1)}_\alpha}{d\tilde t} &= \tilde p^j_{\varphi(1)}\partial_j \Omega^{(0)}_\alpha(\tilde p^j_{\varphi(0)}),\label{1PA varphi}\\ \frac{d\tilde p^i_{\varphi(1)}}{d\tilde t} &= \Gamma^i_{(2)}(\tilde p^j_{\varphi(0)}) +\tilde p^j_{\varphi(1)}\partial_j \Gamma^i_{(1)}(\tilde p^j_{\varphi(0)}).\label{1PA pivarphi} \end{align} We can also write 
Eq.~\eqref{multiscale varphi} as \begin{equation} \varphi_\alpha(\tilde t,\epsilon) = \frac{1}{\epsilon}\int_0^{\tilde t}\Omega_\alpha(\tilde t',\epsilon)d\tilde t' + \varphi_\alpha(0,\epsilon), \end{equation} with \begin{equation}\label{Omega tilde expansion} \Omega_\alpha(\tilde t,\epsilon) = \tilde\Omega^{(0)}_\alpha(\tilde t) +\epsilon \tilde\Omega^{(1)}_\alpha(\tilde t) +O(\epsilon^2), \end{equation} where $\tilde\Omega^{(0)}_\alpha(\tilde t) = \Omega^{(0)}_\alpha[\tilde p^i_{\varphi(0)}(\tilde t)]$ and $\tilde\Omega^{(1)}_\alpha(\tilde t) = \tilde p^j_{\varphi(1)}(\tilde t)\partial_j \Omega^{(0)}_\alpha[\tilde p^j_{\varphi(0)}(\tilde t)]$. There is a tradeoff in solving Eqs.~\eqref{0PA varphi}--\eqref{1PA pivarphi} rather than Eqs.~\eqref{dphidt} and \eqref{dpphidt}: Eqs.~\eqref{0PA varphi}--\eqref{1PA pivarphi} double the number of numerical variables, but they are independent of $\epsilon$, meaning they can be solved for all values of $\epsilon$ simultaneously. Eqs.~\eqref{dphidt} and \eqref{dpphidt} have half as many variables, but they cannot be solved without first specifying a value of $\epsilon$. Since the waveform phase in a binary is directly related to the orbital phase, the expansion~\eqref{multiscale varphi} provides a simple means of assessing the level of accuracy of a given approximation. The approximation that includes only the first term, $\tilde\varphi_\alpha^{(0)}$, is called the {\em adiabatic approximation} (denoted 0PA); it consists of the coupled equations~\eqref{0PA varphi} and \eqref{0PA pivarphi}, which describe a slow evolution of the geodesic frequencies. An approximation that includes all terms through $\tilde\varphi_\alpha^{(n)}$ is called an {\em $n$th post-adiabatic approximation} (denoted $n$PA); for $n=1$, it consists of the coupled equations~\eqref{0PA varphi}--\eqref{1PA pivarphi}. We return to the efficacy of 0PA and 1PA approximations in the final section of this review. 
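The structure of Eqs.~\eqref{0PA varphi}--\eqref{1PA pivarphi} can be made concrete with a toy inspiral (the forcing functions are invented for illustration, not the Kerr ones): a single parameter $p$ with $\Gamma_{(1)}=-1$, $\Gamma_{(2)}=-p$, and frequency $\Omega^{(0)}(p)=p$. The $\epsilon$-independent multiscale coefficients integrate in closed form, and the reassembled 1PA phase tracks the directly integrated phase to $O(\epsilon)$, while the 0PA phase is off by $O(1)$:

```python
eps, P, t_end, h = 1e-2, 2.0, 100.0, 0.01   # t_end = 1/eps, one radiation-reaction time

# direct integration of dphi/dt = p, dp/dt = eps*Gamma1 + eps^2*Gamma2 (RK4)
phi, p = 0.0, P
def deriv(phi, p):
    return p, -eps - eps**2 * p             # Gamma1 = -1, Gamma2 = -p
for _ in range(int(t_end/h)):
    k1 = deriv(phi, p)
    k2 = deriv(phi + 0.5*h*k1[0], p + 0.5*h*k1[1])
    k3 = deriv(phi + 0.5*h*k2[0], p + 0.5*h*k2[1])
    k4 = deriv(phi + h*k3[0], p + h*k3[1])
    phi += h*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6
    p   += h*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6

# epsilon-independent multiscale coefficients, solved once in closed form:
tt = eps*t_end                              # slow time
phi0 = P*tt - tt**2/2                       # dphi0/dtt = p0, with p0 = P - tt
phi1 = -(P*tt**2/2 - tt**3/6)               # dphi1/dtt = p1, with p1 = -(P*tt - tt**2/2)

phase_0pa = phi0/eps
phase_1pa = (phi0 + eps*phi1)/eps
assert abs(phi - phase_1pa) < 10*eps        # 1PA phase error is O(eps)
assert abs(phi - phase_0pa) > 50*eps        # 0PA phase error is O(1)
```

The same four coefficient functions serve every mass ratio; only the final reassembly $\varphi=(\tilde\varphi^{(0)}+\epsilon\tilde\varphi^{(1)})/\epsilon$ depends on $\epsilon$.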
Ref.~\cite{Hinderer:2008dm} determined what inputs are required for a 0PA or 1PA approximation. To describe these inputs, we define the time-reversal $\psi_\alpha\to-\psi_\alpha$, $f^\alpha(\bm{\psi})\to \varepsilon^\alpha f^\alpha(-\bm{\psi})$, where $f^\alpha$ is the accelerating force, $\varepsilon^\alpha := (-1,1,1,-1)$, and there is no summation over $\alpha$. We then define the dissipative and conservative pieces of the force: \begin{align} f^\alpha_{\rm diss} &= \frac{1}{2}f^\alpha(\bm{\psi}) - \frac{1}{2}\varepsilon^\alpha f^\alpha(-\bm{\psi}),\label{fdiss}\\ f^\alpha_{\rm con} &= \frac{1}{2}f^\alpha(\bm{\psi}) + \frac{1}{2}\varepsilon^\alpha f^\alpha(-\bm{\psi}).\label{fcon} \end{align} These definitions imply that under time reversal, $f^\alpha_{\rm diss}\to - f^\alpha_{\rm diss}$ and $f^\alpha_{\rm con}\to +f^\alpha_{\rm con}$. It is straightforward to see from Eqs.~\eqref{pdot}--\eqref{dfz} and the definition of ${\cal L}_i$ that $dp^i/dt$ only receives a direct contribution from $f^\alpha_{\rm diss}$, while $d\psi_a/dt$ only receives a direct contribution from $f^\alpha_{\rm con}$. At 0PA order, $f^\alpha$ enters the evolution through Eq.~\eqref{0PA pivarphi}, in the quantity \begin{equation} \Gamma^i_{(1)}=\left\langle\left.\frac{dp^i}{dt}\right|_{f^\alpha\to f^\alpha_{(1)}}\right\rangle_{\bm{\varphi}} = \frac{1}{\Upsilon^{(0)}_t}\left\langle\left.\frac{dp^i}{d\lambda}\right|_{f^\alpha\to f^\alpha_{(1)}}\right\rangle_{\bm{q}}. \end{equation} Hence, the 0PA approximation only requires $f^\alpha_{(1)\rm diss}$. At 1PA, $f^\alpha$ enters the evolution through both $\Gamma^i_{(1)}$ and $\Gamma^i_{(2)}$ in Eq.~\eqref{1PA pivarphi}, where $\Gamma^i_{(2)}$ is given by Eq.~\eqref{Gamma2} with \eqref{G2}, \eqref{Dp1}, \eqref{psi1_a soln}, and with $p^i_{(1,00)}$ chosen such that $\Upsilon^{(1)}_\alpha=0$. 
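The split~\eqref{fdiss}--\eqref{fcon} is easy to verify mechanically. The sketch below applies it to an arbitrary sample force (a made-up function of the phases, not an actual self-force) and checks that the split is exhaustive and has the claimed behavior under time reversal:

```python
import math

EPSA = (-1.0, 1.0, 1.0, -1.0)   # parity factors for alpha = (t, r, z, phi)

def sample_force(psi):
    """An arbitrary illustrative force f^alpha(psi_r, psi_z); not a self-force."""
    pr, pz = psi
    return tuple(math.sin(pr + 0.3*a) + 0.5*math.cos(pz - 0.1*a) for a in range(4))

def f_diss(f, psi):
    plus = f(psi)
    minus = f(tuple(-x for x in psi))
    return tuple(0.5*plus[a] - 0.5*EPSA[a]*minus[a] for a in range(4))

def f_con(f, psi):
    plus = f(psi)
    minus = f(tuple(-x for x in psi))
    return tuple(0.5*plus[a] + 0.5*EPSA[a]*minus[a] for a in range(4))

def time_reverse(f):
    """Time-reversed force: f^alpha(psi) -> eps^alpha f^alpha(-psi)."""
    return lambda psi: tuple(EPSA[a]*f(tuple(-x for x in psi))[a] for a in range(4))

psi = (0.7, -1.2)
total = sample_force(psi)
d, c = f_diss(sample_force, psi), f_con(sample_force, psi)
# the split is exhaustive: f = f_diss + f_con
assert all(abs(d[a] + c[a] - total[a]) < 1e-12 for a in range(4))
# under time reversal, f_diss flips sign and f_con is invariant
rd = f_diss(time_reverse(sample_force), psi)
rc = f_con(time_reverse(sample_force), psi)
assert all(abs(rd[a] + d[a]) < 1e-12 for a in range(4))
assert all(abs(rc[a] - c[a]) < 1e-12 for a in range(4))
```

The transformation properties follow from $(\varepsilon^\alpha)^2=1$, which is why the final two assertions hold for any sample force.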
These quantities involve $f^\alpha_{(2)\rm diss}$ [via $\langle g^i_{(2)}\rangle_{\bm{q}}=\langle\frac{dp^i}{d\lambda}|_{f^\alpha\to f^\alpha_{(2)}}\rangle_{\bm{q}}$ in Eq.~\eqref{G2}], $f^\alpha_{(1)\rm con}$ (via $\psi^{(1)}_\alpha$ and $p^i_{(1,00)}$, which are both partially determined by $\mathscr{f}^{(1)}_a = \delta\mathscr{f}_a|_{f^\alpha\to f^\alpha_{(1)}}$), and $f^\alpha_{(1)\rm diss}$. Hence, the 1PA approximation requires the entirety of $f^\alpha_{(1)}$ as well as $f^\alpha_{(2)\rm diss}$. The fact that dissipative effects dominate over conservative ones on the long time scale of an inspiral is important in practical simulations of binaries. At least at first order, the dissipative self-force is substantially easier to compute than the conservative self-force. We discuss this in the final section of this review. We refer to Ref.~\cite{Kevorkian-Cole:96} for a pedagogical introduction to multiscale expansions in more general contexts. Ref.~\cite{Hinderer:2008dm} contains a detailed discussion of the multiscale approximation for self-accelerated orbits in Kerr spacetime. Refs.~\cite{Pound:2007ti,Pound:2010wa} present variants of the method in simpler binary scenarios. \subsubsection{Transient resonances} Given that the orbital frequencies slowly evolve, they will occasionally encounter a resonance. Typically~\cite{vandeMeent:2013sza}, the frequencies will continue to evolve, transitioning out of the resonance. These transient resonances can have a significant impact on the orbital evolution. The near-identity averaging transformation $(\psi_\alpha,p^i)\to (q_\alpha,p_q^i)$ becomes singular at a resonance, as described below Eq.~\eqref{Dp1}. Specifically, it becomes singular for the mode numbers $N\bm{k}_{\rm res}$ for which $\Upsilon_{N\bm{k}_{\rm res}}=0$. To assess the effect of a resonance, we start from the equations of motion in the form~\eqref{q0dot}--\eqref{pdot(q0)}. 
The driving forces $F^i_{(n)}$ and ``frequencies'' $U^{(1)}_\alpha$ can be expanded in Fourier series such as $F^i_{(n)} = \sum_{\bm{k}}F^i_{(n,\bm{k})}(p^j)e^{-i \hat q_{\bm{k}}}$. However, near a resonance, a set of apparently oscillatory terms becomes approximately stationary. Specifically, near a resonance where $\Upsilon^{(0)}_{\rm res}:=k^{\rm res}_r\Upsilon_r^{(0)}+ k^{\rm res}_z\Upsilon_z^{(0)}=0$, the phase $q_{\rm res} = \bm{k}_{\rm res}\cdot\hat{\bm{q}}$, and all integer multiples of it, ceases to evolve on the orbital time scale. To see this, suppose the resonance occurs at a time $\lambda_{\rm res}$. Near that time, $q_{\rm res}$ can be expanded in a Taylor series \begin{equation} q_{\rm res}(\lambda) = q_{\rm res}(\lambda_{\rm res}) + \dot q_{\rm res}(\lambda_{\rm res})(\lambda-\lambda_{\rm res}) + \frac{1}{2}\ddot q_{\rm res}(\lambda_{\rm res})(\lambda-\lambda_{\rm res})^2 + \ldots \end{equation} Since $\dot q_{\rm res}(\lambda_{\rm res}) \approx \Upsilon^{(0)}_{\rm res}(\lambda_{\rm res})=0$ and $\ddot q_{\rm res} \approx d\Upsilon^{(0)}_{\rm res}/d\lambda$, we see that $q_{\rm res}$ changes on the time scale \begin{equation} \delta \lambda = \sqrt{\frac{1}{d\Upsilon^{(0)}_{\rm res}/d\lambda}}\sim\frac{1}{\sqrt{\epsilon}},\label{resonant time scale} \end{equation} which is much longer than the orbital time scale (but much shorter than the radiation-reaction time). During the passage through a resonance, these additional quasistationary driving forces cause secular changes to the orbital parameters. We isolate these effects by performing a partial near-identity averaging transformation that eliminates all oscillations from the evolution equations {\em except} those depending on the resonant angle variable $q_{\rm res}$. 
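The scaling~\eqref{resonant time scale} is simple to check numerically. With an illustrative resonant frequency $\Upsilon^{(0)}_{\rm res}(\tilde\lambda)=\tilde\lambda-1$ (a made-up linear profile) that crosses zero at $\lambda_{\rm res}=1/\epsilon$, the phase $q_{\rm res}$ stalls in a window of width $\sim 1/\sqrt{\epsilon}$ around the resonance while sweeping rapidly elsewhere:

```python
import math

eps = 1e-4
lam_res = 1.0/eps                 # resonance: Upsilon_res(eps*lam) = eps*lam - 1 vanishes

def q_res(lam):
    """Phase from integrating dq/dlam = eps*lam - 1 (toy resonant frequency)."""
    return 0.5*eps*lam**2 - lam

w = 1.0/math.sqrt(eps)            # the 1/sqrt(eps) resonance-crossing window
near = abs(q_res(lam_res + w) - q_res(lam_res))
far  = abs(q_res(0.0 + w) - q_res(0.0))
assert near < 1.0                 # phase advances by only ~1/2 radian across the resonance...
assert far > 50.0                 # ...but by ~1/sqrt(eps) radians over an equal window far away
```

Analytically, the change across the window is $\tfrac{1}{2}\epsilon\dot\Upsilon\, w^2 = 1/2$ radian, independent of $\epsilon$, which is exactly the stalling that makes the resonant driving terms quasistationary.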
An appropriate transformation, through 1PA order, is given by \begin{align} \hat q_\alpha(\bm{q},p^j_q,\epsilon) &= q_\alpha +\epsilon B_\alpha{}^\beta(p^i_q)q_\beta+\epsilon\sum_{\bm{k}\neq N\bm{k}_{\rm res}}\frac{U^{(1,\bm{k})}_\alpha-\frac{F^j_{(1,\bm{k})}}{i\Upsilon^{(0)}_{\bm{k}}}\frac{\partial\Upsilon^{(0)}_\alpha}{\partial p^j_q}}{-i\Upsilon^{(0)}_{\bm{k}}(p^j_q)}e^{-iq_{\bm{k}}},\label{q0 to q}\\ p^i(\bm{q},p^j_q,\epsilon) &= p^i_q + \epsilon C^i(p^j_q) +\epsilon\sum_{\bm{k}\neq N\bm{k}_{\rm res}}\frac{F^i_{(1,\bm{k})}}{-i\Upsilon^{(0)}_{\bm{k}}}e^{-iq_{\bm{k}}},\label{p to pq} \end{align} where functions of $p^j$ inside the sums are evaluated at $p^j_q$, and $B_\alpha{}^\beta$ and $C^i$ are any functions satisfying \begin{equation} B_\alpha{}^\beta \Upsilon^{(0)}_\beta = C^j\partial_{p^j_q}\Upsilon^{(0)}_\alpha + \langle U^{(1)}_\alpha\rangle_{\bm{q}}. \end{equation} These transformations satisfy the analogs of Eqs.~\eqref{NITp1} and \eqref{psi1 eqn} but with resonant modes excluded, with $B_\alpha{}^\beta$ and $C^i$ chosen to eliminate the frequency corrections $\Upsilon_\alpha^{(1)}$, and with the simplifications that $\mathscr{f}^{(0)}_a$ is replaced by $\Upsilon_a^{(0)}$ and $\Delta\psi^{(0)}_a$ by 0 (consequences of $q^{(0)}_\alpha$ already being a leading-order action angle). Together, the transformations~\eqref{q0 to q} and \eqref{p to pq} bring the equations of motion to the form \begin{align} \frac{dq_\alpha}{d\lambda} &= \Upsilon^{(0)}_\alpha(p^j_q) + O(\epsilon),\\ \frac{dp^i_q}{d\lambda} &= \epsilon G^i_{(1)}(p_q^j) + \epsilon \sum_{N\neq0}F^i_{(1,N\bm{k}_{\rm res})}(p_q^j)e^{-iNq_{\rm res}}+ O(\epsilon^2),\label{dot pqi with resonance} \end{align} with $G^i_{(1)} = \langle F^i_{(1)}\rangle_{\bm{q}}$. For simplicity, we suppress 1PA terms. How much does the second term in Eq.~\eqref{dot pqi with resonance} contribute to the evolution of $p^i_q$? Far from resonance, the additional term averages to zero. 
If we denote the term as $\epsilon\delta G^i$, then across resonance, it contributes an amount $\delta p^i_q = \epsilon \int \delta G^i d\lambda $. Applying the stationary phase approximation to the integral, we find \begin{equation} \delta p^i = \sum_{N\neq0}F^i_{(1,N\bm{k}_{\rm res})}\sqrt{\frac{2\pi \epsilon}{|N\dot\Upsilon^{(0)}_{{\rm res}}|}}\exp\left[{\rm sgn}\left(N\dot\Upsilon^{(0)}_{{\rm res}}\right)\frac{i\pi}{4}+iNq_{\rm res}(\lambda_{\rm res})\right]+o(\sqrt{\epsilon}),\label{resonant shifts} \end{equation} where we use a dot to denote a derivative with respect to $\tilde\lambda$, $\dot\Upsilon^{(0)}_{{\rm res}}:=\frac{d\Upsilon^{(0)}_{\rm res}}{d\tilde\lambda}=G^i_{(1)}\frac{\partial\Upsilon^{(0)}_{\rm res}}{\partial p^i_q}$, and both $\dot\Upsilon^{(0)}_{{\rm res}}$ and $F^i_{(1,N\bm{k}_{\rm res})}$ are evaluated at $p^j_q(\lambda_{\rm res})$. The magnitude of $\delta p^i$ is $\sim \sqrt{\epsilon}$; intuitively, this corresponds to a quasistationary driving force of size $\sim \epsilon$ multiplied by the resonance-crossing time $\delta \lambda\sim 1/\sqrt{\epsilon}$. But $\delta p^i$ is not a simple product of the two; each quasistationary driving force is weighted by a phase factor, such that $\delta p^i$ depends sensitively on the value of the resonant phase at resonance, $q_{\rm res}(\lambda_{\rm res})$. This implies that in order to determine $\delta p^i$ at leading order, one must know the 1PA phase evolution prior to resonance. A proper accounting of the passage through resonance requires matching a near-resonance expansion to an off-resonance, multiscale expansion; see Sec. III of Ref.~\cite{vandeMeent:2013sza} or Appendix B of Ref.~\cite{Berry:2016bit} for demonstrations of this matching procedure. 
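For a single mode with constant amplitude, the stationary-phase result~\eqref{resonant shifts} reduces to the Fresnel integral $\int_{-\infty}^{\infty} e^{-ia\lambda^2/2}\,d\lambda = \sqrt{2\pi/a}\,e^{-i\pi/4}$ (for $a>0$). A brute-force check, with a hypothetical constant forcing amplitude $F$ standing in for $F^i_{(1,N\bm{k}_{\rm res})}$:

```python
import cmath
import math

def kick(F, a, L=50.0, n=100_000):
    """Trapezoid-rule integral of F*exp(-i*a*lam^2/2) over [-L, L].

    Models the resonant driving term F*exp(-i*N*q_res) with
    q_res ~ (1/2)(dUpsilon_res/dlam)*(lam - lam_res)^2, i.e. a = N*dUpsilon_res/dlam.
    """
    h = 2.0*L/n
    total = 0j
    for k in range(n + 1):
        lam = -L + k*h
        w = 0.5 if k in (0, n) else 1.0
        total += w*cmath.exp(-0.5j*a*lam*lam)
    return F*h*total

a, F = 1.0, 0.3
numeric = kick(F, a)
analytic = F*math.sqrt(2.0*math.pi/a)*cmath.exp(-0.25j*math.pi)
# finite-L truncation leaves an oscillatory tail of size ~ F/(a*L)
assert abs(numeric - analytic) < 0.05
```

The $\pi/4$ phase offset and the $1/\sqrt{a}$ scaling are exactly the factors appearing in Eq.~\eqref{resonant shifts}; the dependence on the phase at resonance enters through the $e^{iNq_{\rm res}(\lambda_{\rm res})}$ prefactor, which is constant here.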
Because a resonance shifts the orbital parameters by an amount $\sim\sqrt\epsilon$, and the shifted parameters subsequently evolve over the long time scale $\sim 1/\epsilon$, the resonance introduces half-integer powers into the multiscale expansion. For example, after a resonance, Eqs.~\eqref{multiscale varphi}--\eqref{multiscale pivarphi} become \begin{align} \varphi_\alpha(\tilde t,\epsilon) &= \frac{1}{\epsilon}\left[\tilde\varphi^{(0)}_\alpha(\tilde t) +\epsilon^{1/2}\tilde\varphi_\alpha^{(1/2)}(\tilde t) + \epsilon\tilde\varphi^{(1)}_\alpha(\tilde t) + O(\epsilon^{3/2})\right],\\ p^i_\varphi(\tilde t,\epsilon) &= \tilde p^i_{\varphi(0)}(\tilde t) + \epsilon^{1/2}\tilde p^i_{\varphi(1/2)}(\tilde t)+ \epsilon\tilde p^i_{\varphi(1)}(\tilde t) + O(\epsilon^{3/2}). \end{align} The effect of a single resonance therefore dominates over all other post-adiabatic effects. However, determining the resonant corrections~$\tilde\varphi^{(1/2)}_\alpha$ and $\tilde p^i_{\varphi(1/2)}$ requires the shifts~\eqref{resonant shifts}, which in turn require the resonant phase $q_{\rm res}$ through 1PA order. This means that the 1/2-post-adiabatic-order corrections can be thought of as outsize 1PA corrections. Further discussions of transient resonances can be found in Refs.~\cite{Flanagan:2010cd,Gair:2011mr,Flanagan:2012kg,Ruangsri:2013hra,vandeMeent:2013sza,Isoyama:2013yor,Berry:2016bit,Mihaylov:2017qwn,Isoyama:2018sib}. Because resonances are dense in the parameter space, an inspiraling body will pass through an infinite number of them. However, because the forcing coefficients $F^i_{(1,\bm{k})}$ decay with increasing $\bm{k}$, the only resonances with significant impact are ``low-order'' resonances, such as $\Upsilon_r/\Upsilon_z=1/2$. A large fraction of inspiraling orbits will encounter such a resonance in the late inspiral~\cite{Ruangsri:2013hra,Berry:2016bit}, but neglecting the effect of resonance in EMRIs may lead to only a small loss of detectable signals~\cite{Berry:2016bit}. 
In addition to the intrinsic $r$--$z$ orbital resonances discussed here, resonances can also occur due to a variety of other effects. There can be extrinsic resonances in which $k^{\rm res}_r\Omega_r+k^{\rm res}_z\Omega_z +k^{\rm res}_\phi\Omega_\phi =0$ for some triplet $(k^{\rm res}_r,k^{\rm res}_z,k^{\rm res}_\phi)$; these lead to non-isotropic emission of gravitational waves, causing possibly observable kicks to the system's center of mass~\cite{Hirata:2010xn,vandeMeent:2014raa}, but their effects are subdominant relative to $r$--$z$ resonances. If the secondary is spinning, its spin can also create resonances~\cite{Zelenka:2019nyp}, as can the presence of an external matter source, such as a third body~\cite{Bonga:2019ycj,Yang:2019iqa}. \section{Solving the Einstein equations with a skeleton source} In this section we describe how to combine the methods of the previous sections to model small-mass-ratio binaries. This consists of solving the global problem in a Kerr background: the perturbative Einstein equations with a skeleton source (i.e., a point particle or effective source) moving on a trajectory governed by Eq.~\eqref{EOM spin} or \eqref{EOM2}. The first part of the section summarizes a multiscale expansion of the field equations, building directly on our treatment of orbital dynamics. At adiabatic order, waveforms can be generated by solving the linearized Einstein or Teukolsky equation with a point particle source and calculating the dissipative first-order self-force. At 1PA order, one must solve the second-order Einstein equation and compute the first-order conservative self-force and second-order dissipative self-force. These 1PA calculations require, as a central ingredient, a mode decomposition of the singular field; this is the subject of the second part of the section. For simplicity, we assume the small object is spherical and nonspinning and that it does not encounter any significant orbital resonances. 
\subsection{Multiscale expansion} \subsubsection{Structure of the expansion} Like the orbital dynamics, the metric in a binary has two distinct time scales: the orbital periods $T_\alpha = 2\pi/\Omega_\alpha$ and the long radiation-reaction time $\sim T_\alpha/\epsilon$. The evolution on the orbital time scale is characterized by periodic dependence on the orbital action angles $\varphi_\alpha$, which satisfy Eq.~\eqref{dphidt}. The evolution on the radiation-reaction time is characterized by a slow change of the orbital parameters $p^i_\varphi$, governed by Eq.~\eqref{dpphidt}, and of the central black hole parameters $(M_{BH},J_{BH})$, which evolve due to absorption of energy and angular momentum according to Eqs.~\eqref{EdotH v1} and \eqref{LdotH v1}. If we did not neglect the small object's spin and higher moments, they would come with additional parameters and phases~\cite{Ruangsri:2015cvg,Witzany:2019nml}. The black hole parameters change at a rate ${\mathcal{F}}^{\cal H}\propto |h|^2\sim \epsilon^2$. Over the radiation-reaction time, this accumulates to a change $\sim \epsilon$, allowing us to write the evolving parameters as $M_{BH}=M + \epsilon \delta M $ and $J_{BH} = J + \epsilon \delta J $, where $M$ and $J$ are constant and $M_A:=(\delta M,\delta J)$ evolve on the radiation-reaction time. We then work on the fixed Kerr background with parameters $M$ and $a=J/M$, with a set of slowly evolving system parameters ${\cal P}^\alpha = \{p_\varphi^i, M_A\}$. 
We will use the split into action angles $\varphi_\alpha$ and system parameters ${\cal P}^\alpha$ to expand the metric perturbation and stress-energy as \begin{align} h_{\mu\nu} &= \sum_{n=1}^2\sum_{{\mathscr m},\bm{k}}\epsilon^n h^{(n{\mathscr m}\bm{k})}_{\mu\nu}({\cal P}^\alpha,\bm{x})e^{i{\mathscr m}\phi -i\varphi_{{\mathscr m}\bm{k}}}+O(\epsilon^3),\label{h multiscale} \\ T_{\mu\nu} &= \sum_{n=1}^2\sum_{{\mathscr m},\bm{k}}\epsilon^n T_{\mu\nu}^{(n{\mathscr m}\bm{k})}({\cal P}^\alpha,\bm{x}) e^{i{\mathscr m} \phi -i\varphi_{{\mathscr m}\bm{k}}}+O(\epsilon^3),\label{T multiscale} \end{align} where ${\mathscr m},k_r,k_z$ all run from $-\infty$ to $\infty$, $\bm{x} = (r,z)$, and $\varphi_{{\mathscr m}\bm{k}} := {\mathscr m}\varphi_{\phi} + k_r\varphi_r + k_z\varphi_z$. Here $\varphi_\alpha$ and ${\cal P}^\alpha$ are functions of $t$ and $\epsilon$ governed by Eqs.~\eqref{dphidt}, \eqref{dpphidt}, and \begin{equation} \frac{dM_A}{dt} = \epsilon{\mathcal{F}}^{(1)}_A(p^i_\varphi) +O(\epsilon^2),\label{dMAdt} \end{equation} where ${\mathcal{F}}_A^{(1)} = ({\mathcal{F}}_E^{\cal H}/\epsilon^2,{\mathcal{F}}_{L_z}^{\cal H}/\epsilon^2)$ is given by any of Eqs.~\eqref{EdotH v1} and \eqref{LdotH v1}, Eqs.~\eqref{EdotH v2} and \eqref{LdotH v2}, or Eqs.~\eqref{EdotH v3} and \eqref{LdotH v3}; the reason this depends only on $p^i_\varphi$ at leading order is explained in Sec.~\ref{adiabatic approximation} below. The decomposition into azimuthal modes $e^{i{\mathscr m}\phi}$ is not strictly necessary here, but it simplifies the analysis of the stress-energy in the next subsection, and it dovetails with the decompositions into angular harmonics in Sec.~4, as all the bases of harmonics involve $\phi$ only through the factor $e^{i{\mathscr m}\phi}$. 
The expansions~\eqref{h multiscale} and \eqref{T multiscale} differ from the ``self-consistent expansion''~\eqref{self-consistent expansion} in that the parameters ${\cal P}$ in the self-consistent expansion include the complete trajectory $z^\mu$ and its derivatives. We can therefore move from Eqs.~\eqref{self-consistent expansion} and \eqref{skeleton Tab} to Eqs.~\eqref{h multiscale} and \eqref{T multiscale} by substituting the expansion of $z^\alpha(t)$ from Eq.~\eqref{z expansion - varphi}. To fully motivate our multiscale expansion, we work through this expansion of $T_{\mu\nu}$ in the next subsection. But first, we focus on the overall structure and efficacy of the multiscale expansion. Given Eqs.~\eqref{h multiscale} and \eqref{T multiscale}, the perturbative field equations become equations for the Fourier coefficients $h^{(n{\mathscr m}\bm{k})}_{\mu\nu}$. These are identical, at leading order, to the usual frequency-domain field equations of black hole perturbation theory, with discrete frequencies \begin{equation} \frac{d\varphi_{{\mathscr m}\bm{k}}}{dt} = \omega_{{\mathscr m}\bm{k}}(p^i_\varphi) := k_r\Omega^{(0)}_r(p^i_\varphi) + k_z\Omega^{(0)}_z(p^i_\varphi) + {\mathscr m}\Omega^{(0)}_\phi(p^i_\varphi). 
\end{equation} More concretely, if we substitute the expansions~\eqref{h multiscale} and \eqref{T multiscale} into the Einstein equations, then $t$ derivatives act as \begin{align} \frac{\partial}{\partial t} &= \Omega^{(0)}_\alpha(p^j_\varphi)\frac{\partial}{\partial\varphi_\alpha} + \frac{d{\cal P}^\alpha}{dt} \frac{\partial}{\partial{\cal P}^\alpha}\label{ddt multiscale}\\ &\to -i\omega_{{\mathscr m}\bm{k}}(p^j_\varphi)+\epsilon\left[\Gamma^i_{(1)}(p^j_\varphi)\frac{\partial}{\partial p^i_\varphi} + {\cal F}^{(1)}_A(p^j_\varphi)\frac{\partial}{\partial M_A}\right] +O(\epsilon^2).\label{ddt multiscale modes} \end{align} Using this, we can write covariant derivatives as \begin{equation} \nabla_\alpha \to \tilde\nabla^{0{\mathscr m}\bm{k}}_{\alpha} + \epsilon\delta_\alpha^t \tilde\partial_t^{1{\mathscr m}\bm{k}} + O(\epsilon^2),\label{nabla multiscale} \end{equation} where $\tilde\nabla^{0{\mathscr m}\bm{k}}_{\alpha}$ is an ordinary covariant derivative with $\partial_\phi\to i{\mathscr m}$ and $\partial_t \to -i\omega_{{\mathscr m}\bm{k}}$, and $\tilde\partial_t^{1{\mathscr m}\bm{k}}$ is the operator in square brackets in Eq.~\eqref{ddt multiscale modes}. If we then treat $\varphi_\alpha$ and ${\cal P}^\alpha$ as independent variables, we can split the field equations into coefficients of $e^{i{\mathscr m}\phi -i\varphi_{{\mathscr m}\bm{k}}}$ and of explicit powers of $\epsilon$. 
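As a small illustration of the discrete spectrum, the following Python sketch assembles $\omega_{{\mathscr m}\bm{k}}={\mathscr m}\Omega^{(0)}_\phi+k_r\Omega^{(0)}_r+k_z\Omega^{(0)}_z$ over a truncated range of mode numbers; the numerical values of the fundamental frequencies are invented placeholders, not those of any particular Kerr geodesic:

```python
# Assemble the discrete frequencies omega_{m,kr,kz} = m*Omega_phi
# + kr*Omega_r + kz*Omega_z from the leading-order fundamental frequencies.
# The frequency values below are invented placeholders, not a real orbit.

def mode_frequencies(Omega_r, Omega_z, Omega_phi, m_max, k_max):
    """Return {(m, kr, kz): omega} for |m| <= m_max and |kr|, |kz| <= k_max."""
    return {
        (m, kr, kz): m * Omega_phi + kr * Omega_r + kz * Omega_z
        for m in range(-m_max, m_max + 1)
        for kr in range(-k_max, k_max + 1)
        for kz in range(-k_max, k_max + 1)
    }

spec = mode_frequencies(Omega_r=0.021, Omega_z=0.027, Omega_phi=0.033,
                        m_max=2, k_max=2)
```

Away from resonances the fundamental frequencies are incommensurate, so each triple $({\mathscr m},k_r,k_z)$ labels a distinct frequency, and the coefficient of each $e^{i{\mathscr m}\phi-i\varphi_{{\mathscr m}\bm{k}}}$ can be isolated individually.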
This results in a sequence of differential equations in $(r,z)$ for the coefficients $h^{(n{\mathscr m}\bm{k})}_{\mu\nu}$: \begin{align} G^{(1{\mathscr m}\bm{k})}_{\mu\nu}[h^{(1{\mathscr m}\bm{k})}] &= 8\pi T^{(1{\mathscr m}\bm{k})}_{\mu\nu},\label{multiscale EFE1}\\ G^{(1{\mathscr m}\bm{k})}_{\mu\nu}[h^{(2{\mathscr m}\bm{k})}] &= 8\pi T^{(2{\mathscr m}\bm{k})}_{\mu\nu} - \sum_{{\mathscr m}'{\mathscr m}''}\sum_{\bm{k}'\bm{k}''}G^{(2{\mathscr m}\bm{k})}_{\mu\nu}[h^{(1{\mathscr m}'\bm{k}')},h^{(1{\mathscr m}''\bm{k}'')}] \nonumber\\ &\quad - \Gamma^i_{(1)}\dot G^{(1{\mathscr m}\bm{k})}_{\mu\nu}[\partial_{p^i_\varphi}h^{(1{\mathscr m}\bm{k})}] - {\cal F}^{(1)}_A\dot G^{(1{\mathscr m}\bm{k})}_{\mu\nu}[\partial_{ M_A}h^{(1{\mathscr m}\bm{k})}].\label{multiscale EFE2} \end{align} Here $G^{(1{\mathscr m}\bm{k})}_{\mu\nu}$ and $G^{(2{\mathscr m}\bm{k})}_{\mu\nu}$ are the linearized and quadratic Einstein tensors~\eqref{Einstein1} and \eqref{Einstein2} with the replacement $\nabla_\alpha \to \tilde\nabla^{0{\mathscr m}\bm{k}}_{\alpha}$. $\dot G^{(1{\mathscr m}\bm{k})}_{\mu\nu}$ is the piece of $G^{(1)}_{\mu\nu}$ that, after applying the rule~\eqref{nabla multiscale}, is linear in $\tilde\partial_t^{1{\mathscr m}\bm{k}}$. Explicit expressions for these quantities can be found in Sec. VC of Ref.~\cite{Miller:2020bft} in a Schwarzschild background in the Lorenz gauge.\footnote{The field equations in Ref.~\cite{Miller:2020bft} are further specialized to quasicircular orbits, with frequencies $\omega_{\mathscr m} = {\mathscr m} \Omega^{(0)}_\phi$, but they remain valid under the replacement $\omega_{\mathscr m}\to \omega_{{\mathscr m}\bm{k}}$. In Sec.~VC of Ref.~\cite{Miller:2020bft} they also include frequency corrections $\Omega^{(1)}_\phi$, which we have eliminated here with our choice of averaged variables $(\varphi_\alpha,p^i_\varphi)$; the analogue of our choice is described in their Appendix A. 
Beyond these minor differences, they more substantially differ by allowing the phases and system variables to depend on $r$ in addition to $t$. We discuss the reason for this in Sec.~\ref{snapshots}.} The left-hand side of the field equations~\eqref{multiscale EFE1} and \eqref{multiscale EFE2} is identical to what it would be if we expanded $h_{\mu\nu}$ in Fourier modes $e^{i{\mathscr m}\phi-i\omega_{{\mathscr m}\bm{k}}t}$. Such a Fourier expansion is what has been implemented historically in first-order frequency-domain calculations with geodesic sources (e.g.,~\cite{Detweiler:1978ge,Hughes:1999bq,Hughes:2005qb,Drasco:2005kz,Detweiler:2008ft,Fujita:2009us,Akcay:2010dx,Hopper:2010uv,Keidl:2010pm,Shah:2010bi,Flanagan:2012kg,Hopper:2012ty,Akcay:2013wfa,Osburn:2014hoa,Hopper:2015jxa,vandeMeent:2015lxa,vandeMeent:2017bcc}), and we can now immediately re-interpret those computations as leading-order implementations of the expansion~\eqref{h multiscale}.\footnote{First-order implementations in the time domain~\cite{Barack:2005nr,Barack:2007tm,Sundararajan:2008zm,Barack:2010tm,Dolan:2012jg,Harms:2014dqa,Barack:2017oir} do not mesh quite so readily with a multiscale expansion. We discuss their utility within a multiscale expansion in later subsections.} This is a principal advantage of using the variables $(\varphi_\alpha,p^i_\varphi)$ instead of $(q_\alpha,p^i_q)$. Importantly, Eqs.~\eqref{multiscale EFE1} and \eqref{multiscale EFE2} can be solved for any values of the parameters ${\cal P}^\alpha$, without having to simulate complete inspirals. At each point in the parameter space, the solution, comprising the set of amplitudes $h^{(n{\mathscr m}\bm{k})}_{\mu\nu}$, can loosely be thought of as a ``snapshot'' of the spacetime in the frequency domain. These solutions can be used to calculate the driving forces in the evolution equations~\eqref{dpphidt} and \eqref{dMAdt} for $d{\cal P}^\alpha/dt$. 
After populating the space of snapshots, one can then use these evolution equations, together with the phase evolution equation~\eqref{dphidt}, to evolve any particular binary spacetime through the space. Note that even though each snapshot is determined by an ``instantaneous'' value ${\cal P}^\alpha$, each snapshot fully accounts for dissipation and for the nongeodesic past history of the binary: because the evolution is slow compared to the orbital time scale, these effects are suppressed by a power of $\epsilon$ and are incorporated through the $\dot G^{(1{\mathscr m}\bm{k})}_{\mu\nu}$ source terms in Eq.~\eqref{multiscale EFE2}. What would go wrong if, rather than using this multiscale expansion, we were to actually use $h^{(1)}_{\mu\nu}=\sum_{{\mathscr m}\bm{k}}h^{(1{\mathscr m}\bm{k})}_{\mu\nu}(r,z)e^{i{\mathscr m}\phi-i\omega_{{\mathscr m}\bm{k}}t}$ as our first-order metric perturbation? This would be approximating the trajectory of the companion as a geodesic of the background black hole spacetime. As explained in the discussion around Eq.~\eqref{dipole term}, such an approximation would accumulate large errors with time: the ``small'' corrections to the trajectory would grow large as the object spirals inward. The growing correction, represented by $z^\alpha_1=m^\alpha/m$ in Eq.~\eqref{dipole term}, would manifest itself as a dipole term in $h^{(2)}_{\mu\nu}$ that would grow until $h^{(2)}_{\mu\nu}$ became larger than $h^{(1)}_{\mu\nu}$, spelling the breakdown of regular perturbation theory. We can now understand this behavior directly from the orbital phases. 
If we were to use the geodesic phases $\omega_{{\mathscr m}\bm{k}}t$, we would be implicitly expanding the phase \begin{equation}\label{varphi-soln} \varphi_\alpha(t,\epsilon) = \int_0^{ t}\Omega^{(0)}_\alpha[p^j_\varphi(\epsilon t',\epsilon)]d t' + \varphi_\alpha^0 \end{equation} in powers of $\epsilon$, as \begin{equation} \varphi_\alpha(t,\epsilon) = \tilde\Omega^{(0)}_\alpha(0) t + \epsilon\left[\frac{1}{2} \frac{d\tilde\Omega^{(0)}_\alpha}{d\tilde t}(0)t^2 + \tilde\Omega^{(1)}_\alpha(0)t\right]+ O(\epsilon^2) +\varphi_\alpha^0, \end{equation} where we have used Eq.~\eqref{Omega tilde expansion}. Such an expansion would be accurate on the orbital timescale but would accumulate large errors on the dephasing time $\sim 1/\sqrt{\epsilon}$, which is much shorter than the radiation reaction time. Moreover, the order-$\epsilon$ terms in this expansion would appear as non-oscillatory, linear- and quadratic-in-$t$ terms in $h^{(2)}_{\mu\nu}$, implying that $h^{(2)}_{\mu\nu}$ would not admit a discrete Fourier expansion or correctly describe the system's approximate triperiodicity. The multiscale expansion avoids these errors and maintains uniform accuracy prior to the transition to plunge (and excluding resonances). The basic idea of this multiscale expansion of the field equations was first put forward in Ref.~\cite{Hinderer:2008dm}. It is described in detail in Ref.~\cite{Miller:2020bft} for the special case of quasicircular orbits in Schwarzschild spacetime. Our presentation here, building on our particular treatment of orbital motion in the preceding section, is the most complete description to date of the generic case. We provide additional details below. A thorough description, focused on the mathematical foundations of the expansion, is in preparation~\cite{two-timescale-0}. 
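The scaling argument above can be checked with a few lines of arithmetic. In this sketch the frequency drifts linearly on the slow time, $\Omega=\Omega_0+\Omega_1\epsilon t$ (an invented law, chosen only for transparency), so the exact phase is $\Omega_0 t+\tfrac{1}{2}\Omega_1\epsilon t^2$ and the zeroth-order phase $\Omega_0 t$ errs by $\tfrac{1}{2}\Omega_1\epsilon t^2$:

```python
# If the frequency drifts linearly on the slow time, Omega(t) = Omega0
# + Omega1*eps*t (an invented law), the exact phase is
#     phi(t) = Omega0*t + 0.5*Omega1*eps*t**2,
# while the zeroth-order ("geodesic") phase keeps only Omega0*t.

import math

Omega1 = 0.3   # invented slow drift rate

def phase_error(eps, t):
    """Phase error of the zeroth-order approximation after time t."""
    return 0.5 * Omega1 * eps * t**2

eps = 1e-4
err_dephasing = phase_error(eps, 1.0 / math.sqrt(eps))  # t ~ 1/sqrt(eps)
err_radreact = phase_error(eps, 1.0 / eps)              # t ~ 1/eps
```

The error is of order unity at the dephasing time $t\sim1/\sqrt{\epsilon}$ but enormous at the radiation-reaction time $t\sim1/\epsilon$, consistent with the estimates above.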
\subsubsection{Multiscale expansion of source terms and driving forces}\label{expansion of source} We illustrate, and further motivate, the multiscale expansion by examining the multiscale form of the source terms in the coupled equations~\eqref{skeleton EFE} and \eqref{EOM2}: the Detweiler stress-energy and the self-force. We start with the stress-energy~\eqref{skeleton Tab}. Writing the trajectory as \begin{equation} z^\alpha(t) = [t,r_o(t),z_o(t),\phi_o(t)] \end{equation} (where the subscript stands for ``object's orbit''), setting the spin to zero, using the $\delta$ function to evaluate the integral, and expanding the factors of $\sqrt{-\breve g}$ and $\frac{d\tau}{d\breve{\tau}}$, we express $T_{\mu\nu}$ as \begin{align} T_{\mu\nu} &= \frac{m\breve{g}_{\mu\alpha} \breve{g}_{\nu\beta}u^\alpha u^\beta}{u^t\Sigma}\left[1+\frac{\epsilon}{2}\left(u^\gamma u^\delta - g^{\gamma\delta}\right)h^{{\rm R}(1)}_{\gamma\delta}\right]\nonumber\\ &\quad\times\delta^2[\bm{x}-\bm{x}_o(t)]\delta[\phi-\phi_o(t)] + O(\epsilon^3). \end{align} We now take as a given our multiscale expansion~\eqref{z expansion - varphi} of $z^\alpha(t)$; this assumed the form~\eqref{small force} for the force, which we return to below. Substituting~\eqref{z expansion - varphi} and using $u^\mu = \dot z^\mu/\Sigma$, we obtain the coefficients in the expansion \begin{equation} T_{\mu\nu} = \epsilon T^{(1)}_{\mu\nu}(\varphi_\alpha,p^i_\varphi) + \epsilon^2 T^{(2)}_{\mu\nu}(\varphi_\alpha,p^i_\varphi) +O(\epsilon^3). 
\end{equation} The leading term is \begin{equation} T^{(1)}_{\mu\nu}(\varphi_\alpha,p^i_\varphi) = \frac{m \dot z^{(0)}_\mu \dot z^{(0)}_\nu}{\mathscr{f}_t^{(0)}\Sigma^2_{(0)}}\delta^2[\bm{x}-\bm{x}_{(0)}(\bm{\varphi},p^i_\varphi)]\delta[\phi-\phi_{(0)}(\varphi_\alpha,p^i_\varphi)],\label{T1} \end{equation} where $\Sigma^2_{(0)} := r^2_{(0)}+a^2 z^2_{(0)}$, $\dot z^{(0)}_\mu := g_{\mu\nu}(\bm{x}_{(0)})\dot z^\nu_{(0)}$, $\dot z^\mu_{(0)}=\mathscr{f}^{(0)}_\mu(\bm{\psi}^{(0)},p^i_\varphi)$ for $\mu=t,\phi$, and $\dot{\bm{x}}_{(0)}$ is given by Eq.~\eqref{dot xa} with $p^i\to p^i_\varphi$ and $\psi_a\to\psi_a^{(0)}$. The second-order term is \begin{align} T^{(2)}_{\mu\nu}(\varphi_\alpha,p^i_\varphi) &= \frac{m}{\mathscr{f}_t^{(0)}\Sigma^2_{(0)}}\left[2\dot z^{(0)}_{(\mu} h^{{\rm R}(1)}_{\nu)\beta} \dot z_{(0)}^\beta+\frac{1}{2}\dot z^{(0)}_\mu \dot z^{(0)}_\nu\left(u^\gamma_{(0)} u^\delta_{(0)} - g^{\gamma\delta}_{(0)}\right)h^{{\rm R}(1)}_{\gamma\delta}\right]\nonumber\\ &\quad \times\delta^2[\bm{x}-\bm{x}_{(0)}(\bm{\varphi},p^i_\varphi)]\delta[\phi-\phi_{(0)}(\varphi_\alpha,p^i_\varphi)] + z^\alpha_{(\varphi,1)}\frac{\partial T^{(1)}_{\mu\nu}}{\partial x^\alpha_{(0)}},\label{T2} \end{align} where $u^\mu_{(0)} = \dot z^\mu_{(0)}/\Sigma_{(0)}$, $g^{\gamma\delta}_{(0)}:=g^{\gamma\delta}(\bm{x}_{(0)})$, and $h^{{\rm R}(1)}_{\gamma\delta}$ is evaluated at $z^\alpha_{(0)}$. The last term in Eq.~\eqref{T2} involves the action of $z^\alpha_{(\varphi,1)}\frac{\partial}{\partial x^\alpha_{(0)}}$ on $\dot z^\mu_{(0)}(\bm{\psi}^{(0)},p^i_\varphi)$; this can be evaluated using \begin{equation} \bm{x}_{(\varphi,1)}\cdot\frac{\partial}{\partial \bm{x}_{(0)}} = \bm{\psi}^{(\varphi,1)}\cdot\frac{\partial}{\partial \bm{\psi}^{(0)}} + p^i_{(\varphi,1)}\frac{\partial}{\partial p^i_{\varphi}}, \end{equation} with $\bm{\psi}^{(\varphi,1)}$ and $p^i_{(\varphi,1)}$ given by Eqs.~\eqref{psi(varphi,1)} and \eqref{p(varphi,1)}. 
Next, we consider the mode decomposition of the expanded stress-energy. We first define the mode coefficients \begin{equation} T_{\mu\nu}^{(n{\mathscr m}\emm'\bm{k})} := \frac{1}{(2\pi)^4}\oint T^{(n)}_{\mu\nu}e^{i\varphi_{{\mathscr m}\bm{k}}-i{\mathscr m}'\phi}d^2\varphi d\varphi_\phi d\phi,\label{Tnmm'k} \end{equation} which assume no relationship between the dependence on $\phi$ and $\varphi_\phi$. Substituting $T^{(1)}_{\mu\nu}$ from Eq.~\eqref{T1}, using the azimuthal $\delta$ function to evaluate the integral over $\phi$, inserting Eq.~\eqref{psi(phi)} for $\phi_{(0)}$, and using $\oint e^{i({\mathscr m}-{\mathscr m}')\varphi_\phi}d\varphi_\phi=2\pi \delta_{{\mathscr m}\emm'}$, we obtain \begin{align} T_{\mu\nu}^{(1{\mathscr m}\emm'\bm{k})} &=\frac{\delta_{{\mathscr m} {\mathscr m}'}}{(2\pi)^3}\oint \frac{m \dot z^{(0)}_\mu \dot z^{(0)}_\nu}{\mathscr{f}_t^{(0)}\Sigma^2_{(0)}}\delta^2(\bm{x}-\bm{x}_{(0)})e^{i\varphi_{\bm{k}} -i{\mathscr m}\Delta_\varphi \phi^{(0)}}d^2\varphi.\label{T1mm'k} \end{align} This enforces ${\mathscr m}'={\mathscr m}$, establishing that the stress-energy only depends on $\phi$ and $\varphi_\phi$ in the combination $e^{i{\mathscr m}(\phi-\varphi_\phi)}$. We can now do away with the ${\mathscr m}'$ label and evaluate the integral in Eq.~\eqref{T1mm'k} in the form~\eqref{Fourier coeff relationship} or \eqref{Fourier coeff relationship psi}. 
The result is \begin{align} T_{\mu\nu}^{(1{\mathscr m}\bm{k})} &= \frac{m\Upsilon^{(0)}_r\Upsilon^{(0)}_z}{(2\pi)^3\Upsilon^{(0)}_t}\sum_{\substack{\sigma_r=\pm\\\sigma_z=\pm}}\frac{\dot z^{(0)}_\mu(\bm{\psi}^{\sigma}) \dot z^{(0)}_\nu(\bm{\psi}^\sigma)}{\Sigma^2(\bm{x})\dot r_{(0)}(\psi^{\sigma_r}_{r})\dot z_{(0)}(\psi^{\sigma_z}_{z})} e^{i\varphi_{\bm{k}}(\bm{\psi}^{\sigma}) -i{\mathscr m}\Delta_\varphi \phi^{(0)}(\bm{\psi}^{\sigma})}\nonumber\\ &\quad\times[\theta(r-r_p) - \theta(r-r_a)]\theta(z_{\rm max}-|z|),\label{T1 modes} \end{align} where the various quantities have been defined as functions of the field point $\bm{x}=(r,z)$, and $\sigma_a=\pm$ refers to a portion of the orbit in which $x^a$ is increasing ($\sigma_a=+$) or decreasing ($\sigma_a=-$). $\psi_r^{\pm}(r)$ is the value of $\psi_r$ satisfying Eq.~\eqref{r(psi)} (with $p^i\to p^i_\varphi$) on an outgoing ($+$) or ingoing ($-$) leg of the radial motion; $\psi_z^{\pm}(z)$ is defined analogously from Eq.~\eqref{z(psi)}. $\varphi_{\bm{k}}(\bm{\psi}^\sigma)$ is given by \begin{equation} \varphi_{\bm{k}}(\bm{\psi}^\sigma) = q_{\bm{k}}(\bm{\psi}^{\sigma})+\Omega_{\bm{k}}^{(0)}\cdot[\delta t_r(\psi^{\sigma_r}_{r})+\delta t_z(\psi^{\sigma_z}_{z})], \end{equation} with $q_a(\psi_a)$ by Eq.~\eqref{q_a(psi_a)}, and $\delta t_a(\psi_a)$ by Eq.~\eqref{delta t}. $\Delta_\varphi \phi^{(0)}(\bm{\psi}^\sigma)$ is given by Eq.~\eqref{Dpsi varphi}. The Mino-time velocities are $\dot z^{(0)}_\mu(\bm{\psi}^\sigma) = g_{\mu\nu}(\bm{x}) \dot z^\nu_{(0)}(\bm{\psi}^\sigma)$ with $\dot x^a_{(0)}(\bm{\psi})$ given by Eq.~\eqref{dot xa} [or \eqref{drdtau} and \eqref{dzdtau}] and $\dot t$ and $\dot\phi$ by Eqs.~\eqref{dtdtau} and \eqref{dphidtau}. We can also use $\dot z^{(0)}_t = - E_\varphi \Sigma$ and $\dot z^{(0)}_\phi = L_\varphi\Sigma$; recall that we suppress the subscript $z$ on $L_z$ in $P^i_\varphi=(E_\varphi,L_\varphi,Q_\varphi)$. 
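The way the $\varphi_\phi$ integral in Eq.~\eqref{Tnmm'k} enforces ${\mathscr m}'={\mathscr m}$ can be seen in a stripped-down numerical check, with the stress-energy replaced by a single harmonic $e^{i{\mathscr m}_0(\phi-\varphi_\phi)}$ and the angle integrals replaced by discrete sums (exact here, since the integrand is a pure harmonic):

```python
# Mode projection as in Eq. (Tnmm'k): project a single azimuthal harmonic
# exp(i*m0*(phi - varphi_phi)) onto e^{i*m*varphi_phi - i*m'*phi}. For a
# pure harmonic, sums over N equally spaced angles reproduce the integrals
# exactly (for mode numbers below N), so the Kronecker deltas emerge to
# machine precision.

import cmath, math

def coeff(F, m, mprime, N=16):
    """(2*pi)^-2 times the double angle integral of F * e^{i(m*vphi - m'*phi)}."""
    total = 0j
    for a in range(N):
        for b in range(N):
            vphi = 2.0 * math.pi * a / N
            phi = 2.0 * math.pi * b / N
            total += F(vphi, phi) * cmath.exp(1j * (m * vphi - mprime * phi))
    return total / N**2

m0 = 2
F = lambda vphi, phi: cmath.exp(1j * m0 * (phi - vphi))
```

The projection is nonzero only for ${\mathscr m}={\mathscr m}'={\mathscr m}_0$, mirroring how the stress-energy depends on $\phi$ and $\varphi_\phi$ only through $e^{i{\mathscr m}(\phi-\varphi_\phi)}$.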
This calculation demonstrates how $T^{(1)}_{\mu\nu}$ inherits the form~\eqref{T multiscale} from the trajectory $z^\mu$. Given this form of $T^{(1)}_{\mu\nu}$, the linearized Einstein equation preserves it (in an appropriate class of gauges), justifying our ansatz for $h^{(1)}_{\mu\nu}$. Given that form of $h^{(1)}_{\mu\nu}$, the second-order stress-energy~\eqref{T2} inherits the same form, as do the other sources in Eq.~\eqref{multiscale EFE2}, and so, finally, does $h^{(2)}_{\mu\nu}$. All of this relies on the presumed form~\eqref{small force} for the force, from which we derived the form~\eqref{z expansion - varphi} for $z^\mu$. Our force on the right-hand side of Eq.~\eqref{EOM2} is not quite of that form. To derive its form, first note that, assuming Eq.~\eqref{z expansion - varphi}, the puncture field $h^{\P}_{\mu\nu}$ has a form analogous to Eq.~\eqref{h multiscale}, and therefore $h^{{\rm R}}_{\mu\nu}$ does as well. If we write this as $h^{{\rm R}}_{\mu\nu}({\cal P}^\alpha,\bm{x},\varphi_\phi-\phi,\bm{\varphi},\epsilon)$, apply a covariant derivative using~\eqref{nabla multiscale}, and evaluate the result on the trajectory $z^\mu(t)$, then the right-hand side of Eq.~\eqref{EOM2} takes the form \begin{equation} f^\mu({\cal P}^\alpha,\bm{x}_o,\dot z^\alpha,\bm{\varphi},\epsilon), \end{equation} where we have used $\varphi_\phi-\phi_o = - \Delta_{\varphi}\phi^{(0)}(\bm{\varphi}) - \epsilon\phi^{(\varphi,1)}(\bm{\varphi})$ to eliminate dependence on $\varphi_\phi$. This differs from Eq.~\eqref{small force} in two ways: it depends explicitly on $(\bm{\varphi},p^i_\varphi)$, and it depends on the additional parameters $M_A$. With respect to the first difference, we can use Eqs.~\eqref{varphi(psi) perturbative} and \eqref{pvarphi(psi) perturbative} to write the force in the form \begin{equation} f^\mu = \epsilon f^\mu_{(1)}(\bm{\psi},p^i,M_A) + \epsilon^2 f^\mu_{(2)}(\bm{\psi},p^i,M_A)+O(\epsilon^3). 
\end{equation} The system of equations~\eqref{psidot perturbative} and \eqref{pidot perturbative} thus becomes \begin{align} \frac{d \psi_\alpha}{d\lambda} &= \mathscr{f}_\alpha^{(0)}(\bm{\psi},p^j) + \epsilon\mathscr{f}_{\alpha}^{(1)}(\bm{\psi},p^j,M_A) + O(\epsilon^2),\label{psidot amended}\\ \frac{dp^i}{d\lambda} &= \epsilon g_{(1)}^i(\bm{\psi},p^j,M_A) + \epsilon^2 g_{(2)}^i(\bm{\psi},p^j,M_A) + O(\epsilon^3),\\ \frac{dM_A}{d\lambda} &= \epsilon\mathscr{f}^{(0)}_t(\bm{\psi},p^i){\mathcal{F}}^{(1)}_A(p^i)+O(\epsilon^2).\label{Mdot amended} \end{align} The analysis of these equations then proceeds essentially unchanged from Secs.~\ref{perturbed Mino frequencies}--\ref{multiscale expansion of orbit}. To see why the use of Eqs.~\eqref{psidot perturbative} and \eqref{pidot perturbative} does not lead to vicious circularity, note that their subleading terms only affect $f^\mu_{(2)}$, which only enters into the dynamics in Eq.~\eqref{G2}. The nongeodesic functions appearing in Eqs.~\eqref{psidot perturbative} and \eqref{pidot perturbative} are therefore determined from lower-order equations before $f^\mu_{(2)}$ is ever required. Finally, how does $M_A$ influence the orbital dynamics? It enters into the driving forces $g^i_{(n)}$ and $\mathscr{f}^{(1)}_\alpha$. However, it does not enter into $\langle g^i_{(1)}\rangle$. This follows from the fact that $f^\mu_{(1)\rm con}$ depends on $M_A$ but $f^\mu_{(1)\rm diss}$ does not, as explained in Sec.~\ref{adiabatic approximation} below. $M_A$ therefore contributes to the action-angle dynamics at 1PA order via Eq.~\eqref{G2}, as well as to the coordinate trajectory correction $z^{\mu}_{(\varphi,1)}$ at 1PA order through $\psi_{\alpha}^{(\varphi,1)}$ and $p^i_{(\varphi,1)}$. This is the only material change to our treatment of the orbital dynamics in Secs.~\ref{perturbed Mino frequencies}--\ref{multiscale expansion of orbit}.
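The hierarchy of Eqs.~\eqref{psidot amended}--\eqref{Mdot amended} — angles driven at order unity, parameters drifting at order $\epsilon$ — can be mimicked with a toy one-parameter system, written here in terms of a generic evolution time. The driving functions below are invented placeholders for the snapshot-derived forces, not the actual $g^i_{(n)}$ or ${\mathcal F}^{(1)}_A$:

```python
# Toy analogue of the two-timescale hierarchy: a fast angle psi driven at
# order unity and a slow parameter p driven at order eps. Omega and F are
# invented stand-ins for the snapshot-derived driving forces.

def evolve(p0, eps, dt, n_steps, Omega, F):
    """Forward-Euler integration of dpsi/dt = Omega(p), dp/dt = -eps*F(p)."""
    p, psi = p0, 0.0
    for _ in range(n_steps):
        psi += Omega(p) * dt      # fast angle: O(1) rate set by current p
        p -= eps * F(p) * dt      # slow parameter: O(eps) secular drift
    return p, psi

Omega = lambda p: p**1.5          # invented frequency law
F = lambda p: p**2                # invented dissipative driving force

p_end, psi_end = evolve(1.0, 1e-3, 0.01, 10_000, Omega, F)
```

Setting $\epsilon=0$ freezes the parameter and reduces the angle to uniform rotation, the analogue of the leading-order (geodesic) motion.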
Together, the analyses of this section establish the consistency of our multiscale treatments of the field equations and orbital motion. In the following sections, we describe more concretely how to utilize these treatments. \subsubsection{Snapshot solutions and evolving waveforms}\label{snapshots} Snapshot solutions, consisting of the mode amplitudes $h^{(n{\mathscr m}\bm{k})}_{\mu\nu}({\cal P}^\alpha,\bm{x})$, can be computed using any of the frequency-domain methods reviewed in Sec.~\ref{sec:black hole perturbation theory}. As an example, in this section we sketch how this is done at first order using the method of metric reconstruction in the radiation gauge, starting from the Teukolsky equation. This summarizes work from Ref.~\cite{vandeMeent:2017bcc}, which provided the first calculation of the full first-order self-force for generic bound orbits in Kerr spacetime. Our summary also appeals to methods and results from Refs.~\cite{Ori:2002uv,Drasco:2005kz,Pound:2013faa,Merlin:2016boc,vandeMeent:2017fqk}. We first define leading-order Weyl scalars $\psi_0$ and $\psi_4$ related to the $h^{(1)}_{\mu\nu}$ of Eq.~\eqref{h multiscale} by Eqs.~\eqref{eq:Weyl-scalars-definition}--\eqref{eq:T4} with the replacements $\partial_t\to-i\omega_{{\mathscr m}\bm{k}}$ and $\partial_\phi\to i{\mathscr m}$. For concreteness, we use $\psi_0$. In analogy with Eqs.~\eqref{eq:psi0-FD} and \eqref{eq:psi4-FD}, it can be written as \begin{equation} \psi_0 = \sum_{\ell=2}^\infty\sum_{{\mathscr m}=-\ell}^\ell\sum_{\bm{k}}{}_2\psi_{\ell{\mathscr m}\bm{k}}(p^i_\varphi,r){}_2S_{\ell{\mathscr m}}(\theta,\phi;a\omega_{{\mathscr m}\bm{k}})e^{-i\varphi_{{\mathscr m}\bm{k}}}. \end{equation} Note that the radial coefficients depend on $p^i_\varphi$ but not on $M_A$; this is because the linearized $\psi_0$ and $\psi_4$ are insensitive to linear perturbations of the central black hole's mass or spin~\cite{Wald:1973}.
The coefficients ${}_{2}\psi_{\ell{\mathscr m}\bm{k}}(p^i_\varphi,r)$ satisfy the radial Teukolsky equation~\eqref{eq:TeukolskyR} with $\omega\to\omega_{{\mathscr m}\bm{k}}$ and ${}_s\psi_{\ell{\mathscr m}\omega}\to {}_s\psi_{\ell{\mathscr m}\bm{k}}$. The source in that equation is constructed from the stress-energy~\eqref{T1} or its modes~\eqref{T1 modes} using the analog of Eq.~\eqref{eq:T0-FD}, \begin{equation} {}_2T_{\ell{\mathscr m}\bm{k}} = -32\pi^2\Sigma\int_{-z_{\rm max}}^{z_{\rm max}} (\tilde{\cal S}^{{\mathscr m}\bm{k}}_0T) (r,z){}_2S_{\ell{\mathscr m}}(\theta,0;a\omega_{{\mathscr m}\bm{k}}) dz, \label{T0-lmk} \end{equation} where the integral ranges over the support of $T^{(1{\mathscr m}\bm{k})}_{\mu\nu}$, $\theta$ is related to $z$ by $z=\cos\theta$, and we have suppressed the dependence on $p^i_\varphi$. The source $\tilde{\cal S}^{{\mathscr m}\bm{k}}_0T$ in the integrand is given by Eq.~\eqref{eq:S} with $T_{ll}\to T^{(1{\mathscr m}\bm{k})}_{ll}$ (and the same for other tetrad components), $\partial_t\to-i\omega_{{\mathscr m}\bm{k}}$, and $\partial_\phi\to i{\mathscr m}$. What may appear to be an extra factor of $2\pi$ in Eq.~\eqref{T0-lmk} accounts for the factor of $1/(2\pi)$ introduced in the integration over $\phi$ in Eq.~\eqref{Tnmm'k}. The retarded solution to the Teukolsky equation, as given in the variation-of-parameters form ~\eqref{eq:TeukolskyInhomogeneousModes}, is \begin{align}\label{psi0 nonvacuum} {}_2 \psi_{\ell {\mathscr m} \bm{k}}(r) &= {}_2 C^{\text{in}}_{\ell {\mathscr m} \bm{k}}(r) {}_2 R^{\text{in}}_{\ell {\mathscr m} \bm{k}}(r)+{}_2 C^{\text{up}}_{\ell {\mathscr m} \bm{k}}(r) {}_2 R^{\text{up}}_{\ell {\mathscr m} \bm{k}}(r), \end{align} where we have defined the homogeneous solutions ${}_2 R^{\text{in/up}}_{\ell {\mathscr m} \bm{k}}(r):={}_2 R^{\text{in/up}}_{\ell {\mathscr m} \omega_{{\mathscr m}\bm{k}}}(r)$. 
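The structure of Eq.~\eqref{psi0 nonvacuum} — ``in'' and ``up'' homogeneous solutions weighted by source integrals — is common to any variation-of-parameters construction. A flat-space toy, with $e^{\mp i\omega r}$ standing in for ${}_2R^{\rm in/up}_{\ell{\mathscr m}\bm{k}}$ and a unit $\delta$-function source standing in for the Teukolsky source, makes it concrete:

```python
# Flat-space toy of Eq. (psi0 nonvacuum): solve psi'' + omega**2 * psi = S
# with a unit delta source at r = r0, using homogeneous solutions
# exp(-/+ i*omega*r) in place of the Teukolsky R^in/R^up. Outside the
# source, only the "in" branch survives for r < r0 and only the "up"
# branch for r >= r0, as in Eq. (psi0 vacuum region).

import cmath

def retarded_solution(omega, r0):
    """Green-function solution psi(r) built by variation of parameters."""
    W = 2j * omega                            # Wronskian of the in/up pair
    R_in = lambda r: cmath.exp(-1j * omega * r)
    R_up = lambda r: cmath.exp(+1j * omega * r)

    def psi(r):
        if r < r0:
            return R_in(r) * R_up(r0) / W     # C^in constant, C^up zero
        return R_up(r) * R_in(r0) / W         # C^up constant, C^in zero
    return psi

psi = retarded_solution(omega=2.0, r0=5.0)
```

The solution is continuous at the source with a unit jump in its radial derivative, the toy analogue of the junction conditions enforced by the weighting coefficients.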
The weighting coefficients are given by Eq.~\eqref{eq:weighting-coefficients}, which we restate here as \begin{subequations} \begin{align} {}_2 C_{\ell {\mathscr m} \bm{k}}^{\text{in}}(r) &:= \int^{r_a}_r \frac{{}_2 R^{\text{up}}_{\ell {\mathscr m} \bm{k}}(r')}{W(r')\Delta} {}_2 T_{\ell {\mathscr m} \bm{k}}(r') dr', \\ {}_2 C_{\ell {\mathscr m} \bm{k}}^{\text{up}}(r) &:= \int_{r_p}^r \frac{{}_2 R^{\text{in}}_{\ell {\mathscr m} \bm{k}}(r')}{W(r')\Delta} {}_2 T_{\ell {\mathscr m} \bm{k}}(r') dr'. \end{align} \end{subequations} In the vacuum regions $r>r_a$ and $r<r_p$, outside the support of ${}_2 T_{\ell {\mathscr m} \bm{k}}$, the weighting coefficients become constants, \begin{subequations}\label{mode amplitudes} \begin{align} {}_2 \hat C_{\ell {\mathscr m} \bm{k}}^{\text{in}} &= \int^{r_a}_{r_p} \frac{{}_2 R^{\text{up}}_{\ell {\mathscr m} \bm{k}}(r')}{W(r')\Delta} {}_2 T_{\ell {\mathscr m} \bm{k}}(r') dr',\\ {}_2 \hat C_{\ell {\mathscr m} \bm{k}}^{\text{up}} &= \int_{r_p}^{r_a} \frac{{}_2 R^{\text{in}}_{\ell {\mathscr m} \omega_{{\mathscr m}\bm{k}}}(r')}{W(r')\Delta} {}_2 T_{\ell {\mathscr m} \bm{k}}(r') dr', \end{align} \end{subequations} and in those regions the solution becomes \begin{align}\label{psi0 vacuum region} {}_2 \psi_{\ell {\mathscr m} \bm{k}}(r) &= \begin{cases} {}_2\hat C^{\text{in}}_{\ell {\mathscr m} \bm{k}}\, {}_2 R^{\text{in}}_{\ell {\mathscr m} \bm{k}}(r) & \text{for } r<r_p\\ {}_2\hat C^{\text{up}}_{\ell {\mathscr m} \bm{k}}\, {}_2 R^{\text{up}}_{\ell {\mathscr m} \bm{k}}(r) & \text{for } r>r_a. \end{cases} \end{align} We can evaluate the $r$ and $z$ integrals in ${}_2 C_{\ell {\mathscr m} \bm{k}}^{\text{in/up}}$ as integrals over $\psi_r$ and $\psi_z$ by using appropriate changes of variables for each value of $\sigma_a$ in Eq.~\eqref{T1 modes}. 
For $\sigma_r=+$ and a generic function $f(r)$, the radial integrals are $\int^r_{r_p}f dr' = \int_0^{\psi^+_r(r)}f \frac{dr}{d\psi_r}d\psi_r$ and $\int_r^{r_a}f dr' = \int^\pi_{\psi^+_r(r)}f \frac{dr}{d\psi_r}d\psi_r$; for $\sigma_r=-$, they are $\int^r_{r_p}f dr' = -\int^{2\pi}_{\psi^-_r(r)}f \frac{dr}{d\psi_r}d\psi_r$ and $\int_r^{r_a}f dr' = -\int_\pi^{\psi^-_r(r)}f \frac{dr}{d\psi_r}d\psi_r$. The transformations for $\sigma_z=\pm$ are analogous. We can also write the $r$ and $z$ derivatives in $\tilde{\cal S}^{{\mathscr m}\bm{k}}_0T$ as $\frac{\partial}{\partial x^a} = \frac{\partial\psi_a}{\partial x^a}\frac{\partial}{\partial\psi_a}$ (with no sum over $a$). For more explicit formulas for the integrands, see Sec. 3B of Ref.~\cite{Drasco:2005kz}. See also, e.g., Refs.~\cite{Fujita:2009us,Hopper:2015jxa} for discussion of practical methods of numerically evaluating such integrals. The modes of $\psi_0$ (or $\psi_4$) are by themselves sufficient to calculate many quantities, such as gravitational-wave fluxes. But for other purposes, such as the calculation of the self-force and the needed input for the second-order field equations, one must compute the entire metric perturbation. Starting from the modes of $\psi_0$ or $\psi_4$, this can be done using the method of metric reconstruction reviewed in Sec.~\ref{sec:metric-reconstruction}. In the presence of a source, metric reconstruction typically yields a metric perturbation that has a gauge singularity extending in a ``shadow'' from the matter source to the black hole horizon or from the matter to infinity~\cite{Ori:2002uv,Green:2019nam}. In the case of a point particle, this shadow becomes a string singularity. However, we can more usefully reconstruct the metric perturbation in a ``no-string'' radiation gauge~\cite{Pound:2013faa}, in which it has no string but does have a jump discontinuity and radial $\delta$ function on a sphere of varying radius $r=r_{(0)}(t)$. 
To construct the no-string solution in practice, we first find a Hertz potential $\psi^{IRG}$ satisfying Eq.~\eqref{eq:psi0-IRG} (at fixed $p^i_\varphi$) in the disjoint vacuum regions $r<r_p$ and $r>r_a$, subject to regularity at infinity and the horizon. The appropriate solution in each region is given by Eq.~(15) of Ref.~\cite{Ori:2002uv}. In the libration region $r_p<r<r_a$, the radial source ${}_2 T_{\ell {\mathscr m} \bm{k}}$ is nonzero, as the Fourier decomposition smears the point particle source over the entire toroidal region $\{r_p<r<r_a,|z|<z_{\rm max}\}$. The solution~(15) of Ref.~\cite{Ori:2002uv} therefore cannot be used in the libration region. However, it can be analytically extended into that region, using Eq.~\eqref{psi0 vacuum region} in place of Eq.~\eqref{psi0 nonvacuum}. Because the time-domain solution is analytic everywhere except on the sphere at $r_{(0)}(t)$, the sum over ${\mathscr m}\bm{k}$ of the analytically continued functions from $r<r_p$ yields the correct result for all $r\leq r_{(0)}(t)$, and the sum over ${\mathscr m}\bm{k}$ of the analytically continued functions from $r>r_a$ yields the correct result for all $r\geq r_{(0)}(t)$~\cite{vandeMeent:2015lxa,vandeMeent:2017bcc}; this is the method of extended homogeneous solutions~\cite{Barack:2008ms,Hopper:2010uv}.\footnote{See also Ref.~\cite{Hopper:2012ty} for a generalization of this method to problems with sources that are nowhere vanishing.} As alluded to in Sec.~\ref{sec:KerrModes}, this method was originally devised to alleviate another problem that arises in frequency-domain calculations for eccentric orbits: the sum over $\bm{k}$ modes of the inhomogeneous solution converges slowly within the libration region. In the context of metric reconstruction, the method allows one to avoid the complexities of nonvacuum reconstruction. 
From the extended modes of the Hertz potential, we can reconstruct modes of an incomplete metric perturbation, $h^{(1{\mathscr m}\bm{k})\rm rec}_{\mu\nu}$, using Eq.~\eqref{eq:reconstruction} (as ever, with $\partial_t\to-i\omega_{{\mathscr m}\bm{k}}$ and $\partial_\phi\to i{\mathscr m}$). To complete this perturbation, in the region $r>r_{(0)}(t)$ we add mass and spin perturbations, $E_\varphi \frac{\partial g_{\mu\nu}}{\partial M}$ and $L_\varphi \frac{\partial g_{\mu\nu}}{\partial J}$, where the $M$ derivative is taken at fixed $J=Ma$, and the $J$ derivative at fixed $M$; these account for the mass and spin that the particle contributes to the spacetime~\cite{Merlin:2016boc,vandeMeent:2017fqk}. In general we must also add mass and spin perturbations $\delta M \frac{\partial g_{\mu\nu}}{\partial M}$ and $\delta J \frac{\partial g_{\mu\nu}}{\partial J}$ throughout the spacetime (at fixed $p^i_\varphi$); these account for the slowly evolving corrections to the central black hole's mass and spin.\footnote{These corrections proportional to $M_A$ have not been added historically because for any specific snapshot with parameters ${\cal P}^\alpha$, call them ${\cal P}^\alpha_0$, they can be absorbed with a redefinition $M\to M+\epsilon\delta M_0$ and $J\to J+\epsilon\delta J_0$, setting $M^0_A=0$. However, in the context of an evolution, which moves through the space of ${\cal P}^\alpha$ values, they must always be included at 1PA order. Even at a single value of ${\cal P}^\alpha$ where $M^0_A=0$, their time derivatives must be included in Eq.~\eqref{multiscale EFE2}. See Ref.~\cite{Miller:2020bft} for a discussion.} Finally, in the region $r<r_{(0)}(t)$, we must add gauge perturbations that ensure the coordinates $t$ and $\phi$, and therefore the frequencies $\Omega^{(0)}_\alpha$, have the same meaning in the two regions $r<r_{(0)}(t)$ and $r>r_{(0)}(t)$~\cite{Shah:2015nva} (see also~\cite{Bini:2019xwn}). 
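The mass-completion term is simple enough to verify directly. In the Schwarzschild limit $a\to0$, for example, $\partial g_{tt}/\partial M=2/r$, and a finite difference of the background metric reproduces it; this nearly trivial check is included only to make the meaning of $\delta M\,\partial g_{\mu\nu}/\partial M$ concrete:

```python
# In the a -> 0 (Schwarzschild) limit of the background, g_tt = -(1 - 2M/r),
# so the mass-completion term is delta_M * dg_tt/dM = 2*delta_M/r. A finite
# difference of the background metric reproduces the derivative, since g_tt
# is linear in M. Units with G = c = 1, as in the text.

def g_tt(M, r):
    return -(1.0 - 2.0 * M / r)

def dgtt_dM(r):
    return 2.0 / r

M, r, dM = 1.0, 10.0, 1e-6
finite_difference = (g_tt(M + dM, r) - g_tt(M, r)) / dM
```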
With the completed modes $h^{(1{\mathscr m}\bm{k})}_{\mu\nu}({\cal P}^\alpha,\bm{x})$ in hand, one can calculate any quantity of interest on the orbital timescale with fixed ${\cal P}^\alpha$. In particular, one can calculate the first-order self-force and its dynamical effects using the mode-sum regularization formula derived in Ref.~\cite{Pound:2013faa}; the formula in the no-string gauge is given by Eq.~(125) in that reference. To date, Ref.~\cite{vandeMeent:2017bcc} is the only work to carry out the entire calculation we have just described for generic bound orbits in Kerr spacetime. However, for orbits in Schwarzschild spacetime and for equatorial orbits in Kerr, snapshot frequency-domain calculations of the complete $h^{(1)}_{\mu\nu}$ and $f^\mu_{(1)}$ are now routine, whether in the Lorenz gauge, Regge-Wheeler-Zerilli gauge, or no-string radiation gauge \cite{Barack:2007tm,Detweiler:2008ft,Barack:2010tm,Akcay:2010dx,Shah:2010bi,Dolan:2012jg,Akcay:2013wfa,Osburn:2014hoa,Wardell:2015ada,vandeMeent:2015lxa,Thompson:2018lgb}. Numerical implementations at second order, which are necessary for post-adiabatic accuracy, are still in an early stage but have computed some physical quantities for quasicircular orbits in Schwarzschild spacetime~\cite{Pound:2019lzj}. We can use the output of these snapshot calculations to obtain the true, evolving gravitational waveforms. Once the snapshot mode amplitudes are calculated, from them we can calculate the inputs for the evolution equations~\eqref{dphidt}, \eqref{dpphidt}, and \eqref{dMAdt}. 
In analogy with Eqs.~\eqref{eq:Teukolsky-waveform}, \eqref{eq:RW-waveform}, and \eqref{eq:LorenzGauge-waveform}, the waveforms are then given by any of \begin{subequations}\label{multiscale waveform} \begin{align} h_+ - i h_\times &= 2 \sum_{\ell{\mathscr m}\bm{k}} \, \frac{{}_{-2} \hat C_{1\ell {\mathscr m} \bm{k}}^{\text{up}}[p^i_\varphi(\tilde u,\epsilon)]}{\omega_{{\mathscr m}\bm{k}}^2} \, {}_{-2} S_{\ell {\mathscr m}}(\theta, \phi; a \omega_{{\mathscr m}\bm{k}}) e^{-i\varphi_{{\mathscr m}\bm{k}}(\tilde u,\epsilon)}+O(\epsilon^2), \\ &= \sum_{\ell{\mathscr m}\bm{k}} \, \frac{D}{2} \left(\hat C_{1\ell {\mathscr m} \bm{k}}^{\text{ZM,up}}[p^i_\varphi(\tilde u,\epsilon)]-i\,\hat C_{1\ell {\mathscr m} \bm{k}}^{\text{CPM,up}}[p^i_\varphi(\tilde u,\epsilon)]\right) \nonumber\\ &\quad\qquad \times{}_{-2} Y_{\ell {\mathscr m}}(\theta, \phi)e^{-i\varphi_{{\mathscr m}\bm{k}}(\tilde u,\epsilon)} +O(\epsilon^2),\\ &= \sum_{\ell{\mathscr m}\bm{k}} \hat C_{mm}^{1\ell{\mathscr m}\bm{k}}[p^i_\varphi(\tilde u,\epsilon)] {}_{2} Y_{\ell{\mathscr m}}(\theta,\phi) e^{-i\varphi_{{\mathscr m}\bm{k}}(\tilde u,\epsilon)} +O(\epsilon^2), \end{align} \end{subequations} where $D=\sqrt{(\ell-1)\ell(\ell+1)(\ell+2)}$, and $\omega_{{\mathscr m}\bm{k}}=\omega_{{\mathscr m}\bm{k}}[p^i_\varphi(\tilde u,\epsilon)]$. Here we have written the waveform in terms of $\tilde u:=\epsilon(t-r^*)$; we return to this point below. We also note that we have given the waveform in terms of modes of $\psi_4$ rather than the less natural (for this purpose) $\psi_0$. In analogy with Eq.~\eqref{mode amplitudes}, we have defined the amplitudes $\hat C_{1\ell {\mathscr m} \bm{k}}^{\text{ZM,up}}$ and $\hat C_{1\ell {\mathscr m} \bm{k}}^{\text{CPM,up}}$ as the relevant weighting coefficients for $r>r_p$, and we have defined the Lorenz-gauge amplitudes $\hat C_{mm}^{1\ell{\mathscr m}\bm{k}}:=\lim_{r\to\infty}(r e^{i\omega_{{\mathscr m}\bm{k}}r^*} h_{mm}^{1\ell{\mathscr m}\bm{k}})$. 
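As a toy illustration of how such a mode sum is assembled in practice, the following Python sketch evaluates the dominant $\ell=2$, ${\mathscr m}=\pm2$ contribution using the closed-form $s=-2$ spin-weighted spherical harmonics. The amplitudes, the frequency, and the linear-in-$u$ phase are invented placeholders rather than the output of a self-force calculation; in a real pipeline the amplitudes would be the stored $\hat C$ coefficients and the phase would come from the slow-time evolution equations.

```python
import numpy as np

def sY2m(theta, phi, m):
    """Closed-form s = -2 spin-weighted spherical harmonics for l = 2, m = +/-2."""
    pref = np.sqrt(5.0 / (64.0 * np.pi))
    if m == 2:
        return pref * (1 + np.cos(theta))**2 * np.exp(2j * phi)
    if m == -2:
        return pref * (1 - np.cos(theta))**2 * np.exp(-2j * phi)
    raise ValueError("only m = +/-2 implemented in this toy example")

def waveform(theta, phi, u, amps, omega):
    """Toy mode sum: h_+ - i h_x = sum_m C_m * (-2)Y_{2m} * exp(-i m omega u)."""
    return sum(C * sY2m(theta, phi, m) * np.exp(-1j * m * omega * u)
               for m, C in amps.items())

# Placeholder amplitudes (standing in for snapshot Teukolsky amplitudes).
amps = {2: 0.3 + 0.1j, -2: 0.3 - 0.1j}
h = waveform(theta=0.0, phi=0.0, u=10.0, amps=amps, omega=0.02)
```

At the pole $\theta=0$ only the ${\mathscr m}=2$ mode survives, since ${}_{-2}Y_{2,-2}\propto(1-\cos\theta)^2$ vanishes there; this is a convenient consistency check on the harmonic conventions.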
We have also intentionally inserted a label ``1'' onto the mode amplitudes and omitted $O(\epsilon^2)$ amplitudes. This is because even if we determine the phase $\varphi_{{\mathscr m}\bm{k}}(\tilde u,\epsilon)$ through 1PA order, the second-order amplitudes do not increase the waveform's order of accuracy; an order-$\epsilon^2$ amplitude in the waveform is indistinguishable from a 2PA (order-$\epsilon$) correction to the phase. The waveform \eqref{multiscale waveform} is in the time domain, but it is almost trivially related to the frequency-domain waveform. Defining $h(\omega) := \frac{1}{2\pi}\int_{-\infty}^\infty (h_+ - i h_\times )e^{i\omega u}du$ and applying the stationary-phase approximation, we obtain, e.g., \begin{align} h(\omega) &= \frac{1}{2\pi}\sum_{\ell{\mathscr m}\bm{k}}\sqrt{\frac{2\pi \epsilon}{|d\omega_{{\mathscr m}\bm{k}}/d\tilde t|}}\hat C_{mm}^{1\ell{\mathscr m}\bm{k}}[\tilde t_{{\mathscr m}\bm{k}}(\omega)] {}_{2} Y_{\ell{\mathscr m}}\nonumber\\ &\quad\times\exp\left\lbrace i[\omega\, \tilde t_{{\mathscr m}\bm{k}}(\omega)-\varphi_{{\mathscr m}\bm{k}}(\omega)]+{\rm sgn}\left(d\omega_{{\mathscr m}\bm{k}}/d\tilde t\right)\frac{i\pi}{4}\right\rbrace+o(\sqrt{\epsilon}). \end{align} Here $\tilde t_{{\mathscr m}\bm{k}}(\omega)$ is the solution to $\omega=\omega_{{\mathscr m}\bm{k}}(\tilde t)$, and the phase as a function of $\omega$ is $\varphi_{{\mathscr m}\bm{k}}(\omega)=\varphi_{{\mathscr m}\bm{k}}[\tilde t_{{\mathscr m}\bm{k}}(\omega)]$. Before proceeding, we return to the dependence on $\tilde u$ rather than $\tilde t$. In Eq.~\eqref{multiscale waveform}, all functions of $\tilde u$ are the functions obtained by solving~\eqref{dphidt}, \eqref{dpphidt}, and \eqref{dMAdt}, simply evaluated as functions of $\tilde u$. For example, from Eq.~\eqref{varphi-soln}, \begin{equation} \varphi_\alpha(\tilde u,\epsilon) = \frac{1}{\epsilon}\int_0^{\tilde u}\Omega^{(0)}_\alpha[p^j_\varphi(\tilde t',\epsilon)]d\tilde t' + \varphi_\alpha^0.
\end{equation} This dependence on $u$ is not a trivial consequence of the multiscale expansion~\eqref{h multiscale}. To justify it, one must adopt a hyperboloidal choice of time that asymptotes to $u$ at ${\cal I}^+$ or perform a matched-expansions calculation, matching the solution~\eqref{h multiscale} to an outgoing-wave solution near ${\cal I}^+$. Ref.~\cite{Miller:2020bft} discusses these points along with several additional advantages of using a hyperboloidal slicing. To see why the replacement $t\to u$ is intuitively sensible, note that with it, Eq.~\eqref{multiscale waveform} correctly reduces to a snapshot waveform on the orbital timescale if we fix $p^i_\varphi$ and replace $\varphi_{{\mathscr m}\bm{k}}(\tilde u,\epsilon)$ with its geodesic approximation $\Omega^{(0)}_{{\mathscr m}\bm{k}}u$ (with fixed $\Omega^{(0)}_{{\mathscr m}\bm{k}}$); without the replacement, the multiscale waveform would not correctly reduce in this way. In the next two subsections, we outline the steps required to generate multiscale waveforms at adiabatic (0PA) and 1PA order, whether in the time or frequency domain. \subsubsection{Adiabatic approximation}\label{adiabatic approximation} At this stage we consider the evolution equations in the form~\eqref{0PA varphi}--\eqref{1PA pivarphi}. There is no difference between that form and Eqs.~\eqref{dphidt}--\eqref{dpphidt} at adiabatic order, but we adopt the notation of Eqs.~\eqref{0PA varphi}--\eqref{1PA pivarphi} here for consistency with our discussion of the 1PA approximation in the next subsection. 
For convenience, we transcribe the adiabatic evolution equations~\eqref{0PA varphi} and \eqref{0PA pivarphi}: \begin{align} \frac{d\tilde\varphi^{(0)}_\alpha}{d\tilde t} &= \Omega^{(0)}_\alpha(\tilde p^j_{\varphi(0)}),\label{0PA varphi repeat}\\ \frac{d\tilde p^i_{\varphi(0)}}{d\tilde t} &= \Gamma^i_{(1)}(\tilde p^j_{\varphi(0)}).\label{0PA pivarphi repeat} \end{align} An adiabatic waveform-generation scheme consists of the following steps: \begin{enumerate} \item Solve the field equation~\eqref{multiscale EFE1} or the associated Teukolsky equation or Regge-Wheeler-Zerilli equations, on a grid of $p^i_\varphi$ values. At each grid point in parameter space, compute and store two things: the driving forces $\Gamma^i_{(1)}(p^i_\varphi)$ in Eq.~\eqref{0PA pivarphi} and the asymptotic mode amplitudes at ${\cal I}^+$ [e.g., ${}_{\pm2}\hat C^{\rm up}_{1\ell{\mathscr m}\bm{k}}(p^i_\varphi)$ in the Teukolsky case]. \item Using the stored values of $\Gamma^i_{(1)}$, evolve through the parameter space by solving the coupled equations~\eqref{0PA varphi repeat} and \eqref{0PA pivarphi repeat} to obtain the adiabatic parameters $\tilde p^i_{\varphi(0)}$ and phases $\tilde\varphi^{(0)}_r$, $\tilde\varphi^{(0)}_z$, $\tilde\varphi^{(0)}_\phi$ as functions of $\tilde t=\epsilon t$. \item Construct the adiabatic waveform using, e.g., \begin{equation} h_+ - i h_\times = 2 \sum_{\ell{\mathscr m}\bm{k}} \, \frac{{}_{-2} \hat C_{1\ell {\mathscr m} \bm{k}}^{\text{up}}[\tilde p^i_{\varphi(0)}(\tilde u)]}{\omega_{{\mathscr m}\bm{k}}^2} \, {}_{-2} S_{\ell {\mathscr m}}(\theta, \phi; a \omega_{{\mathscr m}\bm{k}})e^{-i\varphi^{(0)}_{{\mathscr m}\bm{k}}(\tilde u)/\epsilon}, \end{equation} where $\omega_{{\mathscr m}\bm{k}}=\omega_{{\mathscr m}\bm{k}}[\tilde p^i_{\varphi(0)}(\tilde u)]$. 
\end{enumerate} Starting from seminal work in Refs.~\cite{Galtsov:1982hwm,Mino:2003yg}, two groups of authors have developed practical implementations of this scheme~\cite{Hughes:1999bq,Sago:2005gd,Sago:2005fn,Hughes:2005qb,Drasco:2005is,Ganz:2007rf,Hughes:2016xwf,Isoyama:2018sib}. One of the convenient aspects of the adiabatic approximation is that it can be implemented entirely in terms of the Teukolsky equation with a point-particle source, with no requirement to calculate a reconstructed and completed metric or to extract the regular fields $h^{{\rm R}(n)}_{\mu\nu}$. The reason is that, as explained around Eqs.~\eqref{fdiss} and \eqref{fcon}, only the first-order dissipative force $f^\mu_{(1)\rm diss}$ is needed to calculate the driving force $\Gamma^i_{(1)}$. This force is entirely due to the half-retarded minus half-advanced piece of $h^{(1)}_{\mu\nu}$~\cite{Mino:2003yg}, \begin{equation} h^{(1)\rm rad}_{\mu\nu} = \frac{1}{2}h^{(1)\rm ret}_{\mu\nu} - \frac{1}{2}h^{(1)\rm adv}_{\mu\nu}. \end{equation} Because $h^{(1)\rm rad}_{\mu\nu}$ is a vacuum solution to the linearized Einstein equation, it can be reconstructed from the half-retarded minus half-advanced piece of $\psi_0$ or $\psi_4$, using the radiation-gauge reconstruction method reviewed in Sec.~\ref{sec:metric-reconstruction} (as translated to the multiscale expansion in the previous section). Again because it is a vacuum solution, it is smooth at the particle, and it is equal there to the relevant part of $h^{{\rm R}(1)}_{\mu\nu}$ that creates $f^\mu_{(1)\rm diss}$. Furthermore, $h^{(1)\rm rad}_{\mu\nu}$ can contain no stationary perturbations, implying it cannot contain any contribution from the mass and spin perturbations $M_A$, so it needs no completion. Hence, one can evolve the orbit and generate the waveform entirely from mode amplitudes of $\psi_0$ or $\psi_4$. 
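The adiabatic scheme outlined in steps 1--3 above can be sketched concretely. In the toy Python code below, a single parameter $p$ stands in for $p^i_\varphi$; the tabulated driving force (`Gamma1_grid`) and the frequency law are invented power laws rather than real Teukolsky output, but the interpolate-and-integrate structure is the same as in an actual implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import CubicSpline

# Step 1 (offline): driving force Gamma_(1) tabulated on a grid of the orbital
# parameter p (a single toy parameter playing the role of p^i_phi).
p_grid = np.linspace(0.1, 1.0, 50)
Gamma1_grid = -0.05 * p_grid**4            # placeholder flux data, not real Teukolsky output
Gamma1 = CubicSpline(p_grid, Gamma1_grid)  # interpolant over the parameter grid
Omega0 = lambda p: p**1.5                  # placeholder "geodesic frequency" Omega^(0)(p)

# Step 2: evolve the coupled slow-time equations
#   dphi/dt~ = Omega^(0)(p),   dp/dt~ = Gamma_(1)(p).
def rhs(t_tilde, y):
    phi, p = y
    return [Omega0(p), Gamma1(p)]

sol = solve_ivp(rhs, [0.0, 5.0], [0.0, 1.0], rtol=1e-10, atol=1e-12)
phi_of_t, p_of_t = sol.y[0], sol.y[1]

# Step 3 would substitute p(t~) into the stored mode amplitudes and the rapid
# phase phi(t~)/epsilon to assemble the adiabatic waveform mode sum.
```

In a real implementation $\Gamma^i_{(1)}$ is interpolated over a multi-dimensional grid of orbital parameters, and step 3 reattaches the stored asymptotic mode amplitudes; only the dimensionality changes, not the structure.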
Concrete formulas for adiabatic driving forces in terms of Teukolsky amplitudes were first derived in Ref.~\cite{Galtsov:1982hwm}, which showed that the average rates of change of $E$ and $L_z$ due to $f^\alpha_{(1)\rm diss}$ satisfy a balance law: \begin{align} \frac{d\tilde E^{(0)}_\varphi}{dt} &= -{\mathcal{F}}_E^{\cal H} - {\mathcal{F}}_E^{\cal I},\label{E balance}\\ \frac{d\tilde L^{(0)}_\varphi}{dt} &= -{\mathcal{F}}_{L_z}^{\cal H} - {\mathcal{F}}_{L_z}^{\cal I},\label{L balance} \end{align} where $\tilde P^i_{\varphi(0)} = (\tilde E^{(0)}_\varphi,\tilde L^{(0)}_\varphi, \tilde Q^{(0)}_\varphi)$ are related to $\tilde p^i_{\varphi(0)}$ by the geodesic relationships~\eqref{E(pi)}--\eqref{Q(pi)} between $P^i$ and $p^i$. The fluxes are those due to the retarded field, which we can translate from Eqs.~\eqref{EdotH v1}--\eqref{LdotI v1} as \begin{align} \mathcal{F}_E^\mathcal{H} &= \sum_{\ell{\mathscr m}\bm{k}} \frac{2\pi \alpha_{\ell {\mathscr m} \omega_{{\mathscr m}\bm{k}}}}{\omega_{{\mathscr m}\bm{k}}^2} |{}_{-2} \hat C^{\text{in}}_{1\ell {\mathscr m} \bm{k}}|^2 := \sum_{\ell{\mathscr m}\bm{k}}{\mathcal{F}}_E^{{\cal H}\ell{\mathscr m}\bm{k}}, \\ \mathcal{F}_E^\mathcal{I} &= \sum_{\ell{\mathscr m}\bm{k}} \frac{2\pi}{\omega_{{\mathscr m}\bm{k}}^2}|\, {}_{-2} \hat C^{\text{up}}_{1\ell {\mathscr m} \bm{k}}|^2:= \sum_{\ell{\mathscr m}\bm{k}}{\mathcal{F}}_E^{{\cal I}\ell{\mathscr m}\bm{k}}, \end{align} and similarly for $\mathcal{F}_{L_z}^\mathcal{H}$ and $\mathcal{F}_{L_z}^\mathcal{I}$. Equation~\eqref{E balance} states that the change in the particle's orbital energy is equal at leading order to the sum total of energy carried out of the system (into the black hole and out to infinity). Equation~\eqref{L balance} states the analog about the particle's angular momentum. 
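The flux sums above translate directly into code. The sketch below evaluates $\mathcal{F}_E^{\cal I}$ for a handful of invented mode amplitudes; for the angular-momentum flux it uses the standard per-mode relation $\mathcal{F}_{L_z}^{{\cal I}\,\ell{\mathscr m}\bm{k}} = ({\mathscr m}/\omega_{{\mathscr m}\bm{k}})\,\mathcal{F}_E^{{\cal I}\,\ell{\mathscr m}\bm{k}}$, which follows from the $e^{i({\mathscr m}\phi-\omega t)}$ dependence of each mode. All numbers here are placeholders, not output of a Teukolsky solver.

```python
import numpy as np

def energy_flux_infinity(C_up, omega):
    """F_E^I = sum over (l,m,k) modes of 2*pi*|C|^2 / omega^2."""
    return np.sum(2.0 * np.pi * np.abs(C_up)**2 / omega**2)

def angmom_flux_infinity(C_up, omega, m):
    """F_Lz^I = sum of (m/omega) * per-mode energy flux."""
    return np.sum((m / omega) * 2.0 * np.pi * np.abs(C_up)**2 / omega**2)

# Toy mode data: m and k labels with amplitudes C^up and frequencies omega_mk.
m     = np.array([2, 2, -2])
omega = np.array([0.06, 0.07, -0.06])
C_up  = np.array([1e-3 + 2e-4j, 5e-4j, 1e-3 - 2e-4j])

FE  = energy_flux_infinity(C_up, omega)
FLz = angmom_flux_infinity(C_up, omega, m)
```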
Some time later, Ref.~\cite{Sago:2005fn} derived a similar formula for the average rate of change of the Carter constant due to $f^\alpha_{(1)\rm diss}$: \begin{align}\label{adiabatic Qdot} \frac{d\tilde Q^{(0)}_\varphi}{d t} = - \left(\frac{dQ}{dt}\right)^{\cal H} - \left(\frac{dQ}{dt}\right)^{\cal I},\\ \end{align} where\footnote{Note that Ref.~\cite{Sago:2005fn} uses $C$ to denote our $Q$ and $Q$ to denote our $K$. We give here the expression for $\left(\frac{dQ}{dt}\right)^{\star}$ as presented in Ref.~\cite{Flanagan:2012kg}. In all cases in the literature, expressions such as these are written in terms of averages $\langle\cdot\rangle$, which we can omit because we work with already averaged orbital variables.} \begin{equation} \left(\frac{dQ}{dt}\right)^{\star} = 2\sum_{\ell{\mathscr m}\bm{k}}\frac{L_{{\mathscr m}\bm{k}}+k_z\tilde\Upsilon^{(0)}_z}{\omega_{{\mathscr m}\bm{k}}}{\mathcal{F}}_E^{\star\,\ell{\mathscr m}\bm{k}} \end{equation} with \begin{equation} L_{{\mathscr m}\bm{k}} = {\mathscr m}\langle \cot^2\theta_{(0)}\rangle_\lambda L_\varphi^{(0)} - a^2\omega_{{\mathscr m}\bm{k}}\langle\cos^2\theta_{(0)}\rangle_\lambda E^{(0)}_\varphi. \end{equation} While this evolution equation for $Q$ superficially resembles those for $E$ and $L_z$, it is of fundamentally different character. The quantities ${\mathcal{F}}_E^{\star}$ and ${\mathcal{F}}_{L_z}^{\star}$ are true physical fluxes across the horizon and out to infinity; they are defined entirely in terms of the metric on the surfaces ${\cal H}^+$ and ${\cal I}^+$. The quantities $\left(\frac{dQ}{dt}\right)^{\star}$, on the other hand, directly involve orbital parameters; they are not locally measurable fluxes. Thus, although Eq.~\eqref{adiabatic Qdot} is sometimes referred to as a flux-balance law, there is no known sense in which it can be meaningfully described as such. 
However, the evolution equations for $E$, $L_z$, and $Q$ all share the same practical advantage: they can be evaluated directly from the retarded solution to the Teukolsky equation with a point-particle source, with no need to reconstruct the complete metric perturbation or to extract the regular field. Combining Eqs.~\eqref{E balance}, \eqref{L balance}, and \eqref{adiabatic Qdot}, we can compute the adiabatic driving forces \begin{equation} \Gamma^i_{(1)}(\tilde p^i_{\varphi(0)}) = \frac{\partial \tilde p^i_{\varphi(0)}}{\partial \tilde P^j_{\varphi(0)}}\frac{d\tilde P^j_{\varphi(0)}}{dt} \end{equation} from the Teukolsky amplitudes ${}_{-2} \hat C^{\text{in/up}}_{1\ell {\mathscr m} \bm{k}}$ given by Eq.~\eqref{mode amplitudes}. We can then follow the prescription outlined at the beginning of the section. Alternatively, we can express the geodesic frequencies in terms of $\tilde P^i_{\varphi(0)} = (\tilde E^{(0)}_\varphi,\tilde L^{(0)}_\varphi, \tilde Q^{(0)}_\varphi)$ and work directly with those variables, treating $\tilde p^i_{\varphi(0)}$ as a function of $\tilde P^i_{\varphi(0)}$ by inverting the relationships~\eqref{E(pi)}--\eqref{Q(pi)}. The adiabatic approximation has been used to evolve equatorial orbits in Kerr spacetime~\cite{Fujita:2020zxe} and to generate waveforms in Schwarzschild spacetime~\cite{Chua:2020stf}. Yet, despite the approximation's efficient formulation, to date no adiabatic waveforms have been generated for orbits in Kerr spacetime, nor have orbital evolutions been performed for generic (eccentric and inclined) orbits. There are two main obstacles. One is generating sufficiently dense data on the $p^i_\varphi$ space to perform accurate interpolation or fitting. The second is the very large ($\sim 10^4$) number of mode amplitudes that are required to achieve an accurate waveform. 
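As an aside on the change of variables just described: the Jacobian $\partial \tilde P^j_{\varphi(0)}/\partial \tilde p^i_{\varphi(0)}$ need not be inverted analytically. One can build it by finite differences and solve a linear system for the driving forces. A toy sketch, with an invented polynomial map `P_of_p` standing in for the geodesic relations \eqref{E(pi)}--\eqref{Q(pi)}:

```python
import numpy as np

def P_of_p(p):
    """Placeholder for the geodesic map P^i(p^j); not the real Kerr relations."""
    return np.array([p[0]**2 + 0.1 * p[1],
                     p[1] + 0.05 * p[0] * p[2],
                     p[2]**3])

def driving_force(p, dP_dt, h=1e-6):
    """Gamma^i = (dp^i/dP^j) dP^j/dt, with dP/dp built by central differences."""
    n = len(p)
    J = np.zeros((n, n))
    for j in range(n):
        dp = np.zeros(n); dp[j] = h
        J[:, j] = (P_of_p(p + dp) - P_of_p(p - dp)) / (2 * h)  # J[i,j] = dP^i/dp^j
    # Gamma solves J @ Gamma = dP/dt, i.e. Gamma = (dp/dP) dP/dt.
    return np.linalg.solve(J, dP_dt)

p = np.array([0.8, 0.3, 0.5])
Gamma = driving_force(p, dP_dt=np.array([-1e-4, -2e-5, -5e-6]))
```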
Both obstacles are expected to be soon overcome~\cite{Fujita:2020zxe,Chua:2020stf}, but as of this writing, the gold standard for generic orbits remains snapshot waveforms~\cite{Drasco:2005is} that use geodesic phases. \subsubsection{First post-adiabatic approximation}\label{1PA approximation} The 1PA evolution equations~\eqref{0PA varphi}--\eqref{1PA pivarphi}, as extended following the discussion around Eqs.~\eqref{psidot amended}--\eqref{Mdot amended}, are \begin{align} \frac{d\tilde\varphi^{(0)}_\alpha}{d\tilde t} &= \Omega^{(0)}_\alpha(\tilde p^j_{\varphi(0)}),\label{0PA varphi 3}\\ \frac{d\tilde p^i_{\varphi(0)}}{d\tilde t} &= \Gamma^i_{(1)}(\tilde p^j_{\varphi(0)}),\label{0PA pivarphi 3}\\ \frac{d\tilde\varphi^{(1)}_\alpha}{d\tilde t} &= \tilde p^j_{\varphi(1)}\partial_j \Omega^{(0)}_\alpha(\tilde p^j_{\varphi(0)}),\label{1PA varphi 2}\\ \frac{d\tilde p^i_{\varphi(1)}}{d\tilde t} &= \Gamma^i_{(2)}(\tilde p^j_{\varphi(0)},\tilde M^{(1)}_A) +\tilde p^j_{\varphi(1)}\partial_j \Gamma^i_{(1)}(\tilde p^j_{\varphi(0)}),\label{1PA pivarphi 2}\\ \frac{d\tilde M^{(1)}_A}{d\tilde t} &= {\mathcal{F}}^{(1)}_A(\tilde p^j_{\varphi(0)}).\label{1PA dMdt} \end{align} Here we have assumed $M_A= \tilde M^{(1)}_A(\tilde t)+O(\epsilon)$. Because (i) $M_A$ only contributes stationary modes to $h^{(1{\mathscr m}\bm{k})}_{\mu\nu}$, (ii) any source term for $h^{(2{\mathscr m}\bm{k})}_{\mu\nu}$ that is quadratic in these modes will also be stationary, and (iii) a stationary mode of $h^{(2{\mathscr m}\bm{k})}_{\mu\nu}$ will not contribute to $\Gamma^i_{(2)}$, it follows that $\Gamma^i_{(2)}$ is linear in $M_A$, implying we can write it in the form \begin{equation} \Gamma^i_{(2)}(\tilde p^j_{\varphi(0)},\tilde M^{(1)}_A) = \Gamma^i_{(2)}(\tilde p^j_{\varphi(0)},0) + \tilde M^{(1)}_A\gamma^i_A(\tilde p^j_{\varphi(0)}), \end{equation} where $A$ is summed over. 
$\gamma^i_A(\tilde p^j_{\varphi(0)})$ here is defined as the coefficient of $\tilde M^{(1)}_A$ in $\Gamma^i_{(2)}(\tilde p^j_{\varphi(0)},\tilde M^{(1)}_A)$. A 1PA waveform-generation scheme then consists of the following steps: \begin{enumerate} \item Solve the field equations~\eqref{multiscale EFE1} and \eqref{multiscale EFE2} on a grid of $p^i_\varphi$ values. At each grid point, compute and store the following: (i) the driving forces $\Gamma^i_{(1)}(p^i_\varphi)$, $\Gamma^i_{(2)}(p^i_\varphi,0)$, and $\gamma^i_A(p^i_\varphi)$, (ii) the asymptotic first-order mode amplitudes at ${\cal I}^+$ [e.g., ${}_{-2}\hat C^{\rm up}_{1\ell{\mathscr m}\bm{k}}(p^i_\varphi)$]. \item Using the stored values of the driving forces, evolve through the parameter space by solving the coupled equations~\eqref{0PA varphi 3}--\eqref{1PA dMdt} to obtain $\tilde p^i_{\varphi(0)}$ and the phases $\tilde\varphi^{(0)}_\alpha$ and $\tilde\varphi^{(1)}_\alpha$ as functions of $\tilde t=\epsilon t$. \item Construct the 1PA waveform \begin{equation} h_+ - i h_\times = 2 \sum_{\ell{\mathscr m}\bm{k}} \, \frac{{}_{-2} \hat C_{1\ell {\mathscr m} \bm{k}}^{\text{up}}[\tilde p^i_{\varphi(0)}(\tilde u)]}{\omega_{{\mathscr m}\bm{k}}^2} \, {}_{-2} S_{\ell {\mathscr m}}(\theta, \phi; a \omega_{{\mathscr m}\bm{k}})e^{-i\left[\varphi^{(0)}_{{\mathscr m}\bm{k}}(\tilde u)+\epsilon\varphi^{(1)}_{{\mathscr m}\bm{k}}(\tilde u)\right]/\epsilon}, \end{equation} where $\omega_{{\mathscr m}\bm{k}}=\omega_{{\mathscr m}\bm{k}}[\tilde p^i_{\varphi(0)}(\tilde u)]$. \end{enumerate} We make two potentially clarifying remarks about these steps. First, even though the 1PA dynamics depend on the black hole parameters $M_A$, we need not include these parameters in our storage grid. This is because the 1PA effect of $M_A$ is linear in $M_A$, allowing us to only store its coefficient. 
However, note that the background spin parameter $a$ must be included in the grid (the background parameter $M$ need not be, as we can measure all lengths in units of $M$). Our second remark is that though $p^i_\varphi\neq\tilde p^i_{\varphi(0)}$ at a given value of $\tilde t$ and $\epsilon$, we can still freely solve~\eqref{multiscale EFE1} and \eqref{multiscale EFE2}, working with $p^i_\varphi$, in order to determine the driving forces as functions; it is precisely those functions, simply with $p^i_\varphi\to\tilde p^i_{\varphi(0)}$, that appear in Eqs.~\eqref{0PA varphi 3}--\eqref{1PA dMdt}. A scheme of this sort was first sketched in Ref.~\cite{Pound:2019lzj} and detailed in Ref.~\cite{Miller:2020bft} for the special case of quasicircular orbits into a Schwarzschild black hole. Fig.~3 of Ref.~\cite{Miller:2020bft} gives a more thorough breakdown, though the structure of the multiscale expansion differs slightly from our formulation here. The general case for generic bound orbits in Kerr appears here for the first time. At its core, the scheme requires three key ingredients for each set of orbital parameter values: the full first-order self-force, the asymptotic mode amplitudes of the first-order waveform, and the second-order dissipative self-force. As we summarized in Sec.~\ref{expansion of source}, the first two ingredients have been calculated for generic bound orbits in Kerr spacetime~\cite{vandeMeent:2017bcc} and are routinely calculated for less generic configurations. The main obstacle to including these ingredients in an evolution scheme is the computational cost and runtime of sufficiently covering the parameter space. The third ingredient has not yet been calculated in even the simplest configurations, though there is ongoing development of a practical implementation~\cite{Pound:2015wva,Miller:2016hjv,Miller:2020bft}, which led to the recent calculation of a second-order conservative effect~\cite{Pound:2019lzj}. 
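The structure of the coupled 1PA system \eqref{0PA varphi 3}--\eqref{1PA dMdt} can be illustrated with a minimal sketch. As in the adiabatic example, a single toy parameter replaces $p^i_\varphi$, and every driving force is an invented placeholder function (in a real scheme, each would be an interpolant over the stored parameter-space grid):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder driving forces on a 1-parameter toy orbit.
Omega0   = lambda p: p**1.5
dOmega0  = lambda p: 1.5 * p**0.5      # d(Omega^(0))/dp
Gamma1   = lambda p: -0.05 * p**4
dGamma1  = lambda p: -0.2 * p**3       # d(Gamma_(1))/dp
Gamma2_0 = lambda p: -0.01 * p**3      # Gamma_(2)(p, M_A = 0)
gammaA   = lambda p: -0.002 * p**2     # coefficient of M^(1)_A in Gamma_(2)
FluxA    = lambda p: 0.03 * p**5       # F^(1)_A, sourcing dM_A/dt~

def rhs(t_tilde, y):
    phi0, p0, phi1, p1, MA = y
    return [Omega0(p0),                                          # d(phi^(0))/dt~
            Gamma1(p0),                                          # d(p^(0))/dt~
            p1 * dOmega0(p0),                                    # d(phi^(1))/dt~
            Gamma2_0(p0) + MA * gammaA(p0) + p1 * dGamma1(p0),   # d(p^(1))/dt~
            FluxA(p0)]                                           # d(M^(1)_A)/dt~

sol = solve_ivp(rhs, [0.0, 5.0], [0.0, 1.0, 0.0, 0.0, 0.0], rtol=1e-10, atol=1e-12)

eps = 0.01  # illustrative mass ratio
phase = (sol.y[0] + eps * sol.y[2]) / eps   # [phi^(0) + eps*phi^(1)]/eps, as in the 1PA waveform
```

Note how the 0PA subsystem (the first two components) is unchanged from the adiabatic case; the 1PA corrections ride on top of it and feed back only through the subleading equations.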
\subsection{Mode decompositions of the singular field}\label{mode decompositions of hS} In our description so far, we have largely glossed over the pivotal step in almost all self-force calculations beyond the adiabatic approximation: the calculation of $h^{{\rm R}(n)}_{\mu\nu}$ and its derivatives, which are required for the conservative first-order self-force and the second-order self-force, as inputs for the second-order sources (whether the Detweiler stress-energy, the effective source, or the second-order Einstein tensor~\cite{Miller:2016hjv}), and as the essential ingredient in most dynamical quantities of interest. In order to compute $h^{{\rm R}(n)}_{\mu\nu}$ (either using a puncture, or the point-particle method with regularization) in a mode-decomposed calculation, a crucial component is a mode-decomposed form for the puncture field. This can be obtained by expanding the puncture field into the same basis as is used in the calculation of the retarded field and can typically be done analytically, or at least semi-analytically. The specific details depend on the context (e.g. choice of gauge, whether the mode decomposition needs to be exact or if it can be an approximation, whether the harmonics are spheroidal or spherical and scalar, vector, tensor or spin-weighted). However, the essential ingredients in the method are common to all cases: \begin{enumerate} \item Introduce a rotated angular coordinate system $(\theta',\phi')$ such that the worldline is instantaneously at the pole, $\theta' = 0$. This makes the mode decomposition integrals analytically tractable and in some instances reduces the number of spherical harmonic ${\mathscr m}$ modes that need to be considered. \item Expand the relevant quantity in a coordinate series about the worldline. In doing so, it is important to ensure that the series approximation is well behaved away from the worldline, in particular at $\theta' = \pi/2$.
This can be achieved by multiplying by an appropriate window function in the $\theta'$ direction \cite{Wardell:2015ada}.\\ The resulting expansion can always\footnote{In some instances (e.g. eccentric orbits) obtaining an expression for $\rho$ in this form requires the definition of the rotated coordinates to include a dependence not just on the unrotated angular coordinates, $(\theta,\phi)$, but also on the other coordinates (e.g. $\Delta r$ for the eccentric case).} be algebraically manipulated into the form of a power series (including $\log$ terms in some cases) in \begin{equation} \rho := k_1 \chi^{1/2} \big[\delta^2 + 1 - \cos \theta'\big]^{1/2}. \end{equation} Here, $\delta^2 = \frac{k_2 \Delta r^2}{\chi}$, $\Delta r := r - r_{0}$ and $\chi := 1 - k_3^2 \sin^2 \phi'$, and $k_1$, $k_2$ and $k_3$ depend on the orbital parameters and can be treated as constants in the mode decomposition. The coefficients in the power series contain powers of $\Delta r$ and $\chi$ and also depend on the orbital parameters. Apart from that, the dependence on $\phi'$ is only via one of four possibilities: a. independent of $\phi'$; b. $\cos \phi' \sin \phi'$; c. $\cos \phi'$; d. $\sin \phi'$. The resulting dependence on $\phi'$ will then combine in the next step with the $e^{-i {\mathscr m}' \phi'}$ factor from the harmonic to produce a dependence on $\phi'$ only via powers of $\chi$.\\ When decomposing tensors, certain tensor components may also include an overall factor of $\sin \theta'$, but only ever in such a way that it cancels a singularity in the harmonic at $\theta' = 0$ so that the final integrand is non-singular away from $\Delta r = 0$. \item Integrate against (the conjugate of) the relevant harmonic to obtain a mode decomposition in $(\ell,{\mathscr m}')$ modes with respect to the rotated coordinate system.
In the case of spin-weighted or vector and tensor harmonics, we must also be careful to account for the rotation, $\mathcal{R}$, of the frame, either by including the appropriate factor of $e^{i s \gamma'(\theta', \phi', \mathcal{R})}$ in the spin-weighted case \cite{Boyle:2016tjj}, or by including the appropriate tensor transformation in the case of vector and tensor harmonics \cite{Wardell:2015ada}.\\ In performing the integrals, we can exploit the fact that only certain integrals over $\phi'$ are non-vanishing. In particular for the four possibilities listed in the previous step: \begin{enumerate} \item \label{poss1} only contributes for ${\mathscr m}'$ even and only for the real part of $e^{i {\mathscr m}' \phi'}$; \item only contributes for ${\mathscr m}'$ even and only for the imaginary part of $e^{i {\mathscr m}' \phi'}$; \item only contributes for ${\mathscr m}'$ odd and only for the real part of $e^{i {\mathscr m}' \phi'}$; \item only contributes for ${\mathscr m}'$ odd and only for the imaginary part of $e^{i {\mathscr m}' \phi'}$. \end{enumerate} The integrals over $\theta'$ can all be done analytically and result in expressions of the form \begin{alignat}{3} \delta^{n+2}(\delta^2 + 2)^{(n+2)/2} \sum_{i=0}^{\ell-\frac{n+4}{2}} a_i \delta^{2i}\, +\, &\log \Big(\frac{\delta^2+2}{\delta^2}\Big) \sum_{i=0}^{\ell+\frac{n+2}{2}} b_i \delta^{2i} & \quad \text{$n$ even} \\ (\delta^2 + 2)^{(n+2)/2} \sum_{i=0}^{\ell} c_i \delta^{2i}\, +\, &|\delta| \delta^{n+1} \sum_{i={\mathscr m}'}^{\ell} d_i \delta^{2i} & \quad \text{$n$ odd} \end{alignat} where $n$ is the power of $\rho$ and where the coefficients $a_i$, $b_i$, $c_i$ and $d_i$ are $\ell$-dependent rational numbers. The specific limits on the sums given here are for the ${\mathscr m}'=0$ scalar harmonic case.
Structurally similar expressions also appear for $\log \rho$ terms, for ${\mathscr m}'\ne 0$ and for spin-weighted harmonics, but with the sums running over different ranges of $i$.\\ The integrals over $\phi'$ can also be done analytically and result in power series (for integer powers of $\chi$), elliptic integrals (for half-integer powers), or the derivative of a hypergeometric function with respect to its argument (for $\log$ terms). In all three cases, they are functions of $k_3$ and potentially also $\Delta r$. \item Transform back to the $(\ell,{\mathscr m})$ modes with respect to the unrotated $(\theta,\phi)$ coordinate system using the Wigner-D matrix, $D^\ell_{{\mathscr m}{\mathscr m}'}(\mathcal{R})$. With a moving worldline the rotation is time dependent, but this complication is not relevant in many cases, the notable exception being in the effective source method where it is necessary to take time derivatives when computing the source from the puncture \cite{Miller:2016hjv,Heffernan:2017cad}. \end{enumerate} In many practical applications, an exact mode decomposition is not necessary and an approximation is sufficient. For example, in the mode-sum regularization scheme one is only interested in the modes of the puncture (or its radial derivative in the case of the self-force) evaluated in the limit $\Delta r \to 0$. Similarly, in the effective source scheme a series expansion to some power in $\Delta r$ suffices. Then, the exact expression for the mode-decomposed puncture field has a series expansion in $\Delta r$ of the form \begin{equation} \sum_{{\mathscr m}',ijk} c_{1,i} \Delta r^i + c_{2,j} \Delta r^j |\Delta r| + c_{3,k} \Delta r^k \log \Delta r \end{equation} where the coefficients $c_{1,i}$, $c_{2,j}$ and $c_{3,k}$ depend on the orbital parameters. In those cases, the mode decomposition procedure simplifies significantly and one need only compute up to some maximum value for $i$, $j$ and $k$.
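The $\phi'$ selection rules listed in step 3 are easy to verify numerically: integrating each of the four angular dependences against $\cos({\mathscr m}'\phi')$ and $\sin({\mathscr m}'\phi')$ (the real and imaginary parts of $e^{i{\mathscr m}'\phi'}$), weighted by a representative half-integer power of $\chi$, shows which combinations survive. The value of $k_3^2$ below is arbitrary.

```python
import numpy as np
from scipy.integrate import quad

k3sq = 0.3
chi = lambda p: (1.0 - k3sq * np.sin(p)**2)**1.5  # representative half-integer power of chi

cases = {                       # the four phi' dependences listed in step 2
    "a": lambda p: 1.0,
    "b": lambda p: np.cos(p) * np.sin(p),
    "c": lambda p: np.cos(p),
    "d": lambda p: np.sin(p),
}

def integral(f, m):
    """Real and imaginary parts of the phi' integral against exp(i m phi')."""
    re, _ = quad(lambda p: f(p) * chi(p) * np.cos(m * p), 0.0, 2.0 * np.pi)
    im, _ = quad(lambda p: f(p) * chi(p) * np.sin(m * p), 0.0, 2.0 * np.pi)
    return re, im

# Case (c), cos(phi'), should survive only for odd m' and only in the real part:
re_odd, im_odd = integral(cases["c"], 3)
re_even, im_even = integral(cases["c"], 2)
```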
Similarly, another simplification arises from the fact that one may only be interested in the decomposition of the puncture accurate to some order in distance from the worldline in the angular directions. This is reflected in the number of ${\mathscr m}'$ modes that must be included: for a puncture accurate to $n$ derivatives one must include up to $|{\mathscr m}'| = |s| \pm n$ for the spin-weighted case (the vector and tensor cases similarly follow from their relation to the spin-weighted harmonics: $|s|=1$ for the vector case and $|s|=2$ for the tensor case). One particularly important special case is that of mode-sum regularization, where one is only interested in the result for a given quantity summed over ${\mathscr m}$ and with $\Delta r=0$. This leads to so-called mode-sum regularization formulas. For example, in the case of the first-order gravitational self-force this is given by \begin{equation} F^\alpha = \sum_{\ell} \left(F^{\alpha\ell}_{\rm ret} - A_{\pm} (2\ell+1) - B \right) + D \end{equation} where $F^{\alpha\ell}_{\rm ret}$ denotes the $\ell$-mode of the retarded-field force and $A_\pm$, $B$ and $D$ are ``regularization parameters'' that depend on the orbital parameters. Here, the value of $A_\pm$ depends on whether the limit $\Delta r \to 0$ is taken from above or below; this is because it comes from the derivative of the $|\Delta r|$ piece of the puncture. The parameter $B$ does not have this property as it comes from the piece of the puncture that does not involve $|\Delta r|$ (in particular, for the self-force it comes from the derivative of the $\Delta r^1$ piece of the puncture). The parameter $D$ accounts for the possibility that the subtraction does not exactly capture the behaviour of the contribution from the singular field (and only the singular field) and can often be set to $0$ by appropriately defining the subtraction \cite{Wardell:2015ada,Pound:2013faa}.
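A minimal numerical illustration of the mode-sum formula: generate synthetic $\ell$-modes consisting of the divergent structure $A(2\ell+1)+B$ plus a convergent remainder, subtract the regularization parameters, and check that the regularized sum converges to the known value. The parameter values and the remainder $c/[(\ell+1)(\ell+2)]$ (whose sum telescopes to $c$) are invented for the demonstration.

```python
import numpy as np

A, B, c = 0.7, -1.3, 0.25     # made-up regularization parameters and remainder scale
lmax = 2000
ell = np.arange(lmax + 1)

# Synthetic retarded l-modes: divergent part + convergent remainder.
F_ret = A * (2 * ell + 1) + B + c / ((ell + 1) * (ell + 2))

# Mode-sum regularization: subtract the parameters, then sum over l.
F_reg = np.sum(F_ret - A * (2 * ell + 1) - B)

# The remainder telescopes: sum_{l>=0} 1/((l+1)(l+2)) = 1, so F_reg -> c
# up to the O(1/lmax) truncation tail.
```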
\subsubsection{Example: leading order puncture for circular orbits in Schwarzschild spacetime} As a simple representative example, consider the problem of decomposing the leading-order piece of the Lorenz-gauge puncture [i.e. the first term in Eq.~\eqref{hS1 covariant}] into the spin-weighted spherical harmonic basis introduced in Sec.~\ref{sec:schw-perturbations}. For concreteness, we consider a circular geodesic orbit of radius $r_\orbit$ with four-velocity $u^\alpha = u^t[1, 0, 0, \Omega]$, where $\Omega = \sqrt{\frac{M}{r_\orbit^3}}$ and $u^t = \sqrt{\frac{r_\orbit}{r_\orbit-3M}}$. As a first step, we expand the covariant expression in a coordinate series. Keeping only the leading term in the coordinate expansion, we have $g^{\alpha'}_{\mu} = \delta^{\alpha}_{\mu} + \mathcal{O}(\Delta x)$ and ${\sf s} = \rho + \mathcal{O}(\Delta x^2)$, where $\rho^2 := B^2 (\delta^2 + 1 - \cos \theta')$, $\delta^2 := \frac{r_\orbit \Delta r^2}{B^2(r_\orbit-2M)}$, $\chi := 1 - \frac{M}{r_\orbit-2M} \sin^2 \phi'$, $B^2 := \frac{2 r_\orbit^2 (r_\orbit - 2 M) \chi}{(r_\orbit - 3M)}$ and $\Delta r = r-r_\orbit$. Here, we have made the standard choice of identifying a point on the worldline with the point where the puncture is evaluated by setting $\Delta t = t-t_\orbit = 0$. Then, working with the Carter tetrad, the tetrad components of the puncture are \begin{align} h_{ll} = h_{nn} &= \frac{2}{\rho}\frac{r_\orbit-2M}{r_\orbit - 3M},\nonumber \\ h_{ln} = \frac{M}{r_\orbit-2M} h_{m{\bar{m}}} &= \frac{1}{\rho} \frac{2M}{r_\orbit-3M},\nonumber \\ h_{lm} = h_{nm} = -h_{l{\bar{m}}} = -h_{n{\bar{m}}} &= - \frac{\cos^2 \big(\tfrac{\theta'}{2}\big)}{\rho} \frac{2i (r_\orbit-2M) r_\orbit \Omega}{\sqrt{f_\orbit}(r_\orbit-3M)},\nonumber \\ h_{mm} = h_{{\bar{m}}\mb} &= -\frac{\cos^4 \big(\tfrac{\theta'}{2}\big)}{\rho}\frac{2 M}{r_\orbit - 3M}.
\end{align} Note that we have included factors of $\cos^2 \big(\tfrac{\theta'}{2}\big)$ and $\cos^4 \big(\tfrac{\theta'}{2}\big)$ to ensure that the puncture is sufficiently regular at $\theta'=\pi$ while not altering its leading-order behaviour near the worldline at $\theta'=0$. We now integrate these against the appropriate spin-weighted spherical harmonic to obtain mode-decomposed versions. In doing so, we must take account of the fact that our integration is with respect to a rotated coordinate system by including a factor of $e^{i s \gamma'(\theta', \phi', \mathcal{R})} \approx i^s e^{i s \phi'} + \mathcal{O}(\theta'^2)$. Since we are only interested in the leading-order behaviour near the worldline, we consider only the ${\mathscr m}' + s = 0$ modes, series expanded through $\mathcal{O}(\Delta r^1)$. Then, we encounter the following integrals over $\theta'$: \begin{subequations} \begin{equation} \int_0^\pi \frac{1}{\rho} {}_0 \bar{Y}_{\ell0} (\theta', 0) \sin\theta' d\theta' \approx \frac{1}{B} \frac{1}{\sqrt{2\pi(2\ell+1)}}\bigg[2 - \sqrt{2}(2\ell+1)|\delta|\bigg], \end{equation} \begin{gather} \int_0^\pi \frac{\cos^2 \big(\tfrac{\theta'}{2}\big)}{\rho} {}_1 \bar{Y}_{\ell,-1} (\theta', 0) \sin\theta' d\theta' = \int_0^\pi \frac{\cos^2 \big(\tfrac{\theta'}{2}\big)}{\rho} {}_{-1} \bar{Y}_{\ell1} (\theta', 0) \sin\theta' d\theta' \nonumber \\ \approx -\frac{1}{B} \frac{1}{\sqrt{2\pi(2\ell+1)}}\bigg[8 \Lambda_1 - \sqrt{2} (2\ell+1) |\delta|\bigg], \end{gather} \begin{gather} \int_0^\pi \frac{\cos^2 \big(\tfrac{\theta'}{2}\big)}{\rho} {}_1 \bar{Y}_{\ell1} (\theta', 0) \sin\theta' d\theta' = \int_0^\pi \frac{\cos^2 \big(\tfrac{\theta'}{2}\big)}{\rho} {}_{-1} \bar{Y}_{\ell,-1} (\theta', 0) \sin\theta' d\theta' \nonumber \\ \approx -\frac{1}{B} \frac{1}{\sqrt{2\pi(2\ell+1)}}\bigg[8 \Lambda_1 - \frac{12}{(2\ell-1)(2\ell+3)}\bigg], \end{gather} \begin{gather} \int_0^\pi \frac{\cos^4 \big(\tfrac{\theta'}{2}\big)}{\rho} {}_2 \bar{Y}_{\ell,-2} (\theta', 0) \sin\theta'
d\theta' = \int_0^\pi \frac{\cos^4 \big(\tfrac{\theta'}{2}\big)}{\rho} {}_{-2} \bar{Y}_{\ell2} (\theta', 0) \sin\theta' d\theta' \nonumber \\ \qquad \approx \frac{1}{B} \frac{1}{\sqrt{2\pi(2\ell+1)}}\bigg[32 \Lambda_2 - \sqrt{2} (2\ell+1) |\delta|\bigg]. \end{gather} \begin{gather} \int_0^\pi \frac{\cos^4 \big(\tfrac{\theta'}{2}\big)}{\rho} {}_2 \bar{Y}_{\ell2} (\theta', 0) \sin\theta' d\theta' = \int_0^\pi \frac{\cos^4 \big(\tfrac{\theta'}{2}\big)}{\rho} {}_{-2} \bar{Y}_{\ell,-2} (\theta', 0) \sin\theta' d\theta' \nonumber \\ \qquad \approx \frac{1}{B} \frac{1}{\sqrt{2\pi(2\ell+1)}}\bigg[32 \Lambda_2 - \frac{80}{(2\ell-1)(2\ell+3)} \bigg], \end{gather} \end{subequations} where $\Lambda_1 := \frac{\ell(\ell+1)}{(2\ell-1)(2\ell+3)}$ and $\Lambda_2 := \frac{(\ell-1)\ell(\ell+1)(\ell+2)}{(2\ell-3)(2\ell-1)(2\ell+3)(2\ell+5)}$ . Next, performing the integrals over $\phi'$ the integrands all involve integer (for the $|\delta|$ terms) and half-integer (for the $\delta^0$ terms) powers of $\chi$, producing elliptic integrals or polynomial functions of $\frac{M}{r_\orbit - 2M}$, respectively. 
Putting everything together, transforming to the frequency domain (which in this case amounts to simply dividing by $2\pi$) and transforming back to the unrotated frame using the Wigner-D matrix, we then obtain expressions for the mode-decomposed punctures, \begin{subequations} \begin{equation} h_{ll}^{\ell{\mathscr m}\omega} = \frac{D^\ell_{{\mathscr m},0}(\mathcal{R})}{\sqrt{(2\ell+1)\pi}}\bigg[ \frac{4 \mathcal{K}}{\pi r_\orbit}\sqrt{\frac{r_\orbit-2M}{r_\orbit-3M}} - \frac{(2\ell+1)}{r_\orbit^{3/2}\sqrt{r_\orbit-3M}}|\Delta r| \bigg], \end{equation} \begin{equation} h_{ln}^{\ell{\mathscr m}\omega} = \frac{D^\ell_{{\mathscr m},0}(\mathcal{R})}{\sqrt{(2\ell+1)\pi}}\bigg[ \frac{4 M \mathcal{K}}{\pi r_\orbit \sqrt{r_\orbit-2M}\sqrt{r_\orbit-3M}} - \frac{(2\ell+1)M}{r_\orbit^{3/2}\sqrt{r_\orbit-3M}(r_\orbit-2M)}|\Delta r| \bigg], \end{equation} \begin{align} h_{lm}^{\ell{\mathscr m}\omega} &= \frac{D^\ell_{{\mathscr m},-1}(\mathcal{R})}{\sqrt{(2\ell+1)\pi}}\bigg[ -\frac{16 \Lambda_1 \Omega \mathcal{K}}{\pi} \sqrt{\frac{r_\orbit}{r_\orbit-3M}} + \frac{(2\ell+1)\Omega}{\sqrt{r_\orbit-3M}\sqrt{r_\orbit-2M}} |\Delta r| \bigg] \nonumber \\ & + \frac{D^\ell_{{\mathscr m},1}(\mathcal{R})}{\sqrt{(2\ell+1)\pi}}\bigg[4\Lambda_1 - \tfrac{6}{(2\ell-1)(2\ell+3)}\bigg]\frac{4[(2r_\orbit-5M) \mathcal{K}-2(r_\orbit-2M)\mathcal{E}]}{M^{1/2} r_\orbit \pi\sqrt{r_\orbit-3M} } , \end{align} \begin{align} h_{mm}^{\ell{\mathscr m}\omega} &= \frac{D^\ell_{{\mathscr m},-2}(\mathcal{R})}{\sqrt{(2\ell+1)\pi}}\bigg[\frac{64 M \Lambda_2 \mathcal{K}}{\pi r_\orbit \sqrt{r_\orbit-2M}\sqrt{r_\orbit-3M}} - \frac{(2\ell+1)M}{r_\orbit^{3/2}\sqrt{r_\orbit-3M}(r_\orbit-2M)} |\Delta r| \bigg] \nonumber \\ & + \frac{D^\ell_{{\mathscr m},2}(\mathcal{R})}{\sqrt{(2\ell+1)\pi}}\bigg[16\Lambda_2 - \tfrac{40}{(2\ell-1)(2\ell+3)}\bigg] \times \nonumber \\ & \qquad \frac{4[(4r_\orbit-9M)(4r_\orbit-11M) \mathcal{K}-8(r_\orbit-2M)(2r_\orbit-5M)\mathcal{E}]}{3 M\pi r_\orbit \sqrt{r_\orbit-3M}\sqrt{r_\orbit-2M} } , 
\end{align} \end{subequations} with the other components given either by $h_{ll}^{\ell{\mathscr m}\omega} = h_{nn}^{\ell{\mathscr m}\omega}$, $h_{m{\bar{m}}}^{\ell{\mathscr m}\omega} = \frac{r_\orbit-2M}{M} h_{ln}^{\ell{\mathscr m}\omega}$, $h_{nm}^{\ell{\mathscr m}\omega} = h_{lm}^{\ell{\mathscr m}\omega}$, or by complex conjugation. Here, \begin{subequations} \begin{align} \mathcal{K} := \frac14 \int_0^{2\pi} \bigg(1-\frac{M}{r_\orbit-2M} \sin^2 \phi'\bigg)^{-1/2} \, d\phi', \\ \mathcal{E} := \frac14 \int_0^{2\pi} \bigg(1-\frac{M}{r_\orbit-2M} \sin^2 \phi'\bigg)^{1/2} \, d\phi', \end{align} \end{subequations} are complete elliptic integrals of the first and second kind, respectively. Higher order circular-orbit punctures including the contribution at $\mathcal{O}(\lambda^0)$ are available in Ref.~\cite{Wardell:2015ada}. Yet higher orders and punctures for more generic cases are available upon request to the authors. \section{Conclusion} We stated in the introduction to this review that we aimed to summarize the key methods of black hole perturbation theory and self-force theory rather than summarizing the status of the field, leaving that task to existing reviews. However, it is worth putting this review in the context of the field's current state, and it is worth mentioning key topics that we did {\em not} cover. Regarding topics we neglected, we first state the obvious: we did not cover any applications of black hole perturbation theory other than small-mass-ratio binaries. Although the bulk of the review is intended to provide general treatments of black hole perturbation theory, orbital dynamics in black hole spacetimes, and self-force theory in generic spacetimes, without specializing to binaries, it is undoubtedly slanted toward our application of interest. For that reason, we will also focus exclusively on the state of small-mass-ratio binary modelling. 
Hinderer and Flanagan's two-timescale analysis of orbital motion~\cite{Hinderer:2008dm} long ago made clear that 1PA waveforms are almost certainly required to perform high-precision measurements of these binaries. Such measurements will require phase errors much smaller than 1 radian, while 0PA waveforms will have errors of $O(\epsilon^0)$ [or $O(1/\sqrt{\epsilon})$ in the case of a resonance], which could be 1 or many more radians. Ref.~\cite{vandeMeent:2020xgc} has recently provided strong numerical evidence that a 0PA waveform will have significant errors for all mass ratios. Conversely, the same reference shows that a 1PA waveform should be not only highly accurate for EMRIs and IMRIs, but reasonably accurate even for comparable-mass binaries. This bolsters a long line of evidence that perturbative self-force theory is surprisingly accurate well outside its expected domain of validity; see Ref.~\cite{Rifat:2019ltp} for other recent evidence, as well as the reviews~\cite{Tiec:2014lba,Barack:2018yvs}. Our presentation here provides the first framework for 1PA waveform generation. There are two main hurdles to overcome on the way to implementing this framework. One is the difficulty of efficiently covering the parameter space. Once a region is well covered by snapshots, recent advances make it possible to generate long, accurate waveforms extremely rapidly in that region, with generation times of a few tens of milliseconds for eccentric orbits in Schwarzschild spacetime~\cite{Chua:2020stf}. However, covering the parameter space of generic orbits in Kerr is highly expensive even for adiabatic codes, let alone calculations of the first-order self-force. The second main hurdle is calculating the necessary second-order inputs for the 1PA evolution. There has been steady progress in developing practical methods of computing these inputs, but only recently have results begun to materialize~\cite{Pound:2019lzj}.
To date, these calculations have been restricted to quasicircular orbits in Schwarzschild spacetime; they must be extended to Kerr and to generic orbits. In lieu of accurate evolving waveforms, the development of data analysis methods has so far been based on ``kludge'' waveforms constructed using a host of additional approximations (primarily, post-Newtonian approximations for the fluxes)~\cite{Glampedakis:2002cb,Barack:2003fp,Gair:2005ih,Babak:2006uv,Sopuerta:2011te,Chua:2017ujo}. These kludges will be very far from accurate enough to enable precise parameter estimation, but they are sufficiently similar to accurate waveforms to serve as testbeds for analysis methods. They may also be sufficiently accurate for detection of loud signals. There is also ongoing work to improve the accuracy of post-Newtonian 0PA approximations to enable them to accurately fill out the weak-field region of the small-mass-ratio parameter space~\cite{Sago:2015rpa}. Our summary of multiscale evolution has also omitted some important ingredients in an accurate model. We must correctly account for passages through resonance, and we may need to include the transition to plunge for mass ratios $\sim 1:50$. We must also account for the secondary's spin, which enters into the 1PA dynamics in three ways: (i) through the Mathisson-Papapetrou spin force~\eqref{EOM spin}, which contributes to $f^\mu_{(1)\rm con}$, (ii) through the spin's contribution to $T^{(2)}_{\mu\nu}$ in Eq.~\eqref{skeleton Tab}, which generates a perturbation that contributes to $f^\mu_{(2)\rm diss}$, and (iii) through a coupling between $h^{{\rm R}(1)}_{\mu\nu}$ and the spin, which again contributes a second-order dissipative effect. We refer to Refs.~\cite{Warburton:2017sxk,Akcay:2019bvk,Witzany:2019nml,Zelenka:2019nyp} for a sample of recent work on calculating these effects and incorporating them into waveform-generation schemes. 
Specifically, Ref.~\cite{Warburton:2017sxk} generated waveforms from inspirals into a Schwarzschild black hole including first-order conservative (but not second-order dissipative) spin effects; Ref.~\cite{Akcay:2019bvk} derived balance laws incorporating spin; Ref.~\cite{Witzany:2019nml} derived the spin correction to the fundamental frequencies; and Ref.~\cite{Zelenka:2019nyp} computed the spin's contribution to fluxes from spinning particles on generic orbits in Schwarzschild spacetime. We also note that while we have focused on a multiscale expansion built on frequency-domain methods, there has been considerable development of time-domain snapshot calculations of $h^{(1)}_{\mu\nu}$ and $f^\mu_{(1)}$ using fixed geodesic sources~\cite{Barack:2005nr,Barack:2007tm,Barack:2010tm,Dolan:2012jg,Barack:2017oir}. The quantities $h^{(1)}_{\mu\nu}$ and $f^\mu_{(1)}$ output from such computations cannot be directly fed into the second-order field equations~\eqref{multiscale EFE2} or into the multiscale evolution scheme. However, if we decompose the outputs into Fourier modes, as in $h^{(1)}_{\mu\nu} = \sum_{{\mathscr m}\bm{k}}h^{(1{\mathscr m}\bm{k})}_{\mu\nu}e^{i({\mathscr m}\phi - \omega_{{\mathscr m}\bm{k}}t)}$, then the coefficients $h^{(1{\mathscr m}\bm{k})}_{\mu\nu}$ are identical to those in a multiscale expansion, and these can be used as inputs for the multiscale scheme. Moreover, any first-order quantity that depends only on ${\cal P}^\alpha$ will be identical whether computed in the time domain with a geodesic source or in the multiscale expansion; this includes any quantity constructed as an average over the orbit, which includes most physical quantities of interest~\cite{Barack:2018yvs}. Because time-domain methods are typically more efficient than frequency-domain ones for highly eccentric orbits, certain dynamical quantities entering into the evolution may be more usefully computed in the time domain.
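As a schematic illustration of the mode extraction just described (a minimal sketch, not tied to any self-force code; the signal, frequencies, and coefficients below are invented for the example), the coefficients of a biperiodic time series can be recovered by projecting onto the known phases:

```python
import numpy as np

# Invented biperiodic "snapshot" signal with two fundamental frequencies,
# mimicking a first-order quantity sampled along an orbit. The complex
# coefficients play the role of the mode amplitudes h^(1mk).
w_phi, w_r = 1.0, 0.3183   # made-up, incommensurate fundamental frequencies
coeffs = {(2, 0): 0.5 + 0.1j, (2, 1): 0.2 - 0.05j, (1, -1): -0.1 + 0.3j}

t = np.linspace(0.0, 20000.0, 400001)
signal = sum(c * np.exp(-1j * (m * w_phi + k * w_r) * t)
             for (m, k), c in coeffs.items())

def project(m, k):
    """Recover c_{mk} by time-averaging signal * (conjugate phase).

    Cross terms oscillate at the frequency differences and average away
    over a long enough span."""
    return np.mean(signal * np.exp(1j * (m * w_phi + k * w_r) * t))

for mk, c in coeffs.items():
    assert abs(project(*mk) - c) < 1e-3
```

The same projection applied to time-domain snapshot output would yield the coefficients that enter the multiscale scheme, which is the point made in the text.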
Time-domain calculations also offer an alternative framework for waveform generation: rather than using Eq.~\eqref{multiscale waveform}, one can perform a multiscale evolution of ${\cal P}^\alpha$ to generate a self-accelerated trajectory and then solve the Teukolsky equation in the time domain with an accelerated point-particle source~\cite{Sundararajan:2008zm,Harms:2014dqa}. This may seem redundant, given that in the process of generating the multiscale evolution one must already compute all the inputs for Eq.~\eqref{multiscale waveform}. However, it offers significant flexibility, in that it can take as input trajectories generated with any method, such as previously produced inspirals that include the full first-order self-force but omit second-order dissipative effects~\cite{Warburton:2011fk,Osburn:2015duj,vandeMeent:2018rms}. This gives it the additional advantage of being able to easily evolve through different dynamical regimes, such as the evolution from the adiabatic inspiral to the transition to plunge~\cite{Taracchini:2014zpa,Rifat:2019ltp}. Beyond these alternative methods of wave generation, we have also passed over what has been the main application of self-force calculations. Although such calculations were originally motivated by modelling EMRI waveforms (and more recently, the prospect of using them to model IMRIs), they have also enabled the calculation of numerous physical effects in binaries. These, in turn, have facilitated a rich interaction with other binary models: post-Newtonian and post-Minkowskian theory, effective one body theory, and fully nonlinear numerical relativity~\cite{Tiec:2014lba}. Sections 7 and 8 of Ref.~\cite{Barack:2018yvs} provide a summary of the physical effects that have been computed and the synergies with other models. We highlight Refs.~\cite{Bini:2019nra, Damour:2019lcq,Bini:2020wpo} for more recent discussions of the power of such synergies and of the potential future impact of self-force calculations.
\section*{Cross-References} \begin{itemize} \item \emph{Introduction to gravitational wave astronomy}, N.~Bishop \item \emph{Space-based laser interferometers}, J.~Gair, M.~Hewitson, A.~Petiteau \item \emph{The gravitational capture of compact objects by massive black holes}, P.~Amaro Seoane \item \emph{Post-Newtonian templates for gravitational waves from compact binary inspiral}, S.~Isoyama, R.~Sturani, H.~Nakano \item \emph{Non-linear effects in EMRI dynamics and the imprints on gravitational waves}, G.~Lukes-Gerakopoulos, V.~Witzany, O.~Semer\'ak \end{itemize} \section*{Acknowledgements} AP is grateful to Jordan Moxon and Eanna Flanagan for numerous helpful discussions. BW thanks Andrew Spiers for independently checking several equations. AP also acknowledges the support of a Royal Society University Research Fellowship. \bibliographystyle{spbasic}
\section{Introduction} There is a well-known relation between the existence of a bifurcate Killing horizon on certain spacetime manifolds and the temperature of some specially privileged equilibrium thermal states. The Rindler, the de Sitter, and the Schwarzschild spacetimes are examples of such a situation \cite{kaywald}. While there is a vast literature devoted to the study of the above states, little attention has been paid to other possible thermal or pseudo-thermal states, let alone to what happens when interactions are switched on: does the fluctuation--dissipation theorem still work as in flat space? Are the privileged states attractor solutions of some sort of kinetic equations with exact modes instead of plane waves? Does thermalization of a given initial state happen in {\it strongly} curved space--times without substantial backreaction on the gravitational background? The backreaction during thermalization seems to be negligible for practically any reasonable (Hadamard) initial state in flat space--time \cite{LL10,Kamenev}. When attempting to address some of the above questions one encounters many hidden pitfalls; for example, describing thermal states other than the Gibbons--Hawking one \cite{HG} in the static de Sitter--Rindler wedge \cite{deSitter,Akhmedov:2020qxd} is already non-trivial. Moreover, what would appear to be a thermal state does not necessarily possess all the properties of a regular thermal state in flat space \cite{Popov:2017xut}. Furthermore, if time translations are not a symmetry, as is the case in the global Lanczos spherical coordinate system \cite{lanczos}, secular divergences arise both in distributions and in anomalous averages \cite{Krotov:2010ma,Akhmedov:2013vka,Akhmedov:2012dn,Akhmedov:2019cfd}. The latter can be resummed for fields whose mass exceeds a critical value, via analogues of kinetic equations for both the distributions and the anomalous averages. Such kinetic equations do not have Planckian or Boltzmannian solutions.
Moreover, they have exploding solutions. Such a situation, in which thermalization does not happen without taking into account the backreaction on the background field, is similar to the one encountered in a constant electric field \cite{Akhmedov:2014hfa,Akhmedov:2014doa}. As regards the Schwarzschild geometry and black hole radiation, three distinguished states are usually considered: the Boulware \cite{Boulware:1974dm}, Unruh \cite{Unruh:1976db} and Hartle--Hawking \cite{Hartle:1976tp,Candelas:1980zt} states. The Boulware state is the vacuum of both the in--going and out--going modes of the Schwarzschild background; the Unruh state is the vacuum of the in--going modes and has the Planckian distribution at the Hawking temperature for the out--going modes; finally, in the Hartle--Hawking state both the in--going and the out--going modes are thermally distributed at the Hawking temperature. Can any of the aforementioned quantum states actually describe the fate of the quantum field at the end of the collapse \cite{Hawking:1974sw}? Actually, in the process of black hole formation one has to consider a different basis of modes \cite{Akhmedov:2015xwa}, in which the in--going and out--going modes are not treated as independent; rather, one uses a linear combination of them which is regular at the center of the collapsing star. The corresponding state is inequivalent to any of the Boulware, Unruh or Hartle--Hawking states. The question then arises whether the initial state before the collapse can thermalise to any of the above-mentioned states. A second set of questions regards the behaviour of black holes surrounded by a gas with a temperature different from the Hawking one. Can one heat up a black hole by surrounding it with a gas of temperature different from the Hawking one? And how does the heating work in detail? In concrete astrophysical situations black holes are indeed surrounded by accretion disks, which definitely have nothing to do with the Hawking radiation.
The same applies, needless to say, to primordial black holes in the early universe. In this paper, even without performing loop calculations (where the heating process is actually seen), we argue that the answer to the first question in the last paragraph seems to be negative. Both in the static de Sitter space (see also \cite{Akhmedov:2020qxd}) and in the black hole background, all correlation functions with temperatures different from the Hawking one have anomalous singularities at the horizon. To simplify the discussion, we begin with a real scalar field theory in two dimensions, in either the Rindler, the static de Sitter, or the two--dimensional analog of the Schwarzschild background. We study the properties of the tree--level Wightman functions for a class of time-translation-invariant states, which include states having different temperatures for the ingoing and the outgoing modes. In all cases, whenever the temperatures differ from the Unruh, Gibbons--Hawking or Hawking ones appropriate to the corresponding situation, the correlation functions have anomalous singularities at the horizon. \section{Rindler space--time} To put the results of the paper in perspective we start by discussing the Rindler spacetime. This section mainly contains a recapitulation of known facts; however, some of the observations are new.
\subsection{Geometry, modes and Wightman function} \label{rindler} The coordinate system for the Rindler right wedge of the Minkowski spacetime is obtained by applying the one-parameter subgroup of boosts, which leaves the wedge invariant, to points of, say, the half-line $t = 0$, $x = e^{\alpha\xi}/\alpha>0$: \begin{eqnarray} \label{coordinates1} X(\eta,\xi)=\left( \begin{array}{c} t \\ x \\ \end{array} \right) = \left( \begin{array}{cc} \cosh (\alpha \eta ) & \sinh (\alpha \eta ) \\ \sinh (\alpha \eta ) & \cosh (\alpha \eta ) \\ \end{array} \right) \left( \begin{array}{c} 0 \\ \frac{1 }{\alpha } e^{\alpha \xi}\\ \end{array} \right) =\left( \begin{array}{c} \frac 1 \alpha e^{\alpha \xi } \sinh (\alpha \eta ) \\ \frac 1 \alpha e^{\alpha \xi } \cosh (\alpha \eta ) \\ \end{array} \right) . \end{eqnarray} Here $\eta$ is the parameter of the subgroup and is interpreted as the Rindler time, $\xi$ is the space coordinate and $\alpha$ is the proper acceleration. For real values of $\eta$ and $\xi$ the Rindler coordinates \eqref{coordinates1} cover only the right wedge; this is causally disconnected from the left wedge. The half-lines $x=\pm t$ with $x>0$ are the past and the future horizons. When $\eta$ and $\xi$ are complex they cover the full Minkowski spacetime; in the following we will set the acceleration to one ($\alpha=1$). In the above coordinates the metric is static and conformal to Minkowski \begin{align} \label{metricRindler} ds^2=e^{2 \xi}\Big(d\eta^2-d\xi^2\Big)\, ; \end{align} the invariant interval between two events in the wedge is given by \begin{align} \label{geodistRindler} L_{12}= (X_1-X_2)^2= 2e^{(\xi_1+\xi_2)}\cosh(\eta_2-\eta_1) -e^{2\xi_2}-e^{2\xi_1}. \end{align} For future reference, note the obvious symmetry of the interval under the exchange \begin{equation} \xi_1\longleftrightarrow \xi_2.
\label{symmetry} \end{equation} Lorentz transformations of the wedge correspond to (time) translations in the $\eta$ variable: $ \eta\to\eta +\gamma$; dilatations in Minkowski space $X\to e^{\beta}X$ correspond to the shift $ \xi \to \xi + \beta$. \begin{figure}\centering\includestandalone[width=0.6\textwidth]{picRindl}\caption{Penrose diagram of the Rindler space. The Rindler space is bordered by a Killing horizon.}\label{dssp}\end{figure} As regards the light cone variables \begin{eqnarray} \label{coordinatesuv} u = t-x = - e^{ (\xi -\eta)} = - e^{- U } \ \ \ \ \ v = t+x = e^{ (\xi +\eta)} = e^{ V}, \end{eqnarray} they are transformed as follows: \begin{eqnarray} \begin{array}{l} u \to e^{- \gamma } u,\\ \\ v \to e^{ \gamma} v , \end{array} \ \ \begin{array}{l} u \to e^{\beta } u,\\ \\ v \to e^{ \beta} v, \end{array}\ \ \begin{array}{l} U \to U+\gamma,\\ \\ V \to V+\gamma, \end{array}\ \ \begin{array}{l} U \to U-\beta,\\ \\ V \to V+ \beta. \end{array} \label{lorentz0} \end{eqnarray} Though geodesically incomplete, the Rindler wedge is a globally hyperbolic manifold in itself; of course a Cauchy surface, say $\eta=0$, is not a Cauchy surface for the whole Minkowski spacetime. As a consequence the modes constructed by canonical quantization in the Rindler wedge do not constitute a basis for the whole Minkowski space--time. It is well-known that to obtain general Hilbert space representations of the fields, including the ones carrying unitary representations of the Poincar\'e group, one also needs to construct the modes defined in the left wedge \cite{Unruh:1983ms}. A less-known but powerful alternative is to resort to the theory of generalized Bogoliubov transformations\footnote{Starting from pure states generalized Bogoliubov transformations may produce mixed states, while standard Bogoliubov transformations cannot.} which makes use only of the modes of the right wedge \cite{ms,ms2}.
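The embedding \eqref{coordinates1} and the interval formula \eqref{geodistRindler} are easy to verify numerically (a small sketch with $\alpha=1$ and signature $(+,-)$; the sample points are arbitrary):

```python
import numpy as np

def embed(eta, xi):
    """Rindler wedge -> Minkowski, Eq. (coordinates1) with alpha = 1."""
    return np.array([np.exp(xi) * np.sinh(eta), np.exp(xi) * np.cosh(eta)])

def interval(X1, X2):
    """(X1 - X2)^2 with signature (+, -)."""
    d = X1 - X2
    return d[0]**2 - d[1]**2

rng = np.random.default_rng(0)
for _ in range(100):
    eta1, xi1, eta2, xi2 = rng.uniform(-2, 2, size=4)
    L_direct = interval(embed(eta1, xi1), embed(eta2, xi2))
    # Eq. (geodistRindler)
    L_formula = (2 * np.exp(xi1 + xi2) * np.cosh(eta2 - eta1)
                 - np.exp(2 * xi2) - np.exp(2 * xi1))
    assert np.isclose(L_direct, L_formula)
```

The symmetry under $\xi_1\leftrightarrow\xi_2$ is manifest in the closed form.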
The Klein--Gordon equation for a massive scalar field in two dimensions is as follows: \begin{align} \label{eqmassive} \Big(\partial_\eta^2-\partial_\xi^2+e^{2\xi}m^2\Big)\varphi(\eta,\xi)=0. \end{align} By separating the variables one gets a textbook Schr\"odinger eigenvalue problem in an exponential potential $V(\xi)=m^2 e^{2\xi}$. Normalizable modes are proportional to Macdonald functions $K_{i\omega}(me^\xi)$ which are linear combinations of left-moving and right-moving waves. The canonical field operator may be expanded as follows \begin{align} \label{operatormassiveRindler} \hat{\varphi}(X) =\frac 1 {\pi}\int_{0}^{+\infty} \, \bigg(e^{-i\omega \eta}\hat{b}_{\omega} + e^{i\omega \eta}\hat{b}_{\omega}^\dagger \bigg)\,K_{i\omega}\big(m e^{\xi}\big)\sqrt{\sinh\pi \omega}\, {d \omega}, \end{align} where the creation and annihilation operators obey the standard commutation relations: \begin{align*} [\hat{b}^{}_{\omega},\hat{b}^\dagger_{\omega'}]=\delta(\omega-\omega'), \qquad [\hat{b}^{}_{\omega},\hat{b}^{}_{\omega'}]=0. \end{align*} The so-called Fulling vacuum \cite{ful1,ful2} is identified by the condition \begin{equation}\label{fullingvac} \hat b_\omega |0_R \rangle = 0, \ \ \ \ \ \omega \geq 0. \end{equation} It is a pure state and the corresponding two-point function is given by \begin{equation} W_\infty(X_1,X_2)= \langle 0_R |\, \hat \varphi(X_1) \hat \varphi(X_2) |0_R \rangle = \frac 1{\pi^2}\int_0^\infty e^{-i\omega (\eta_1-\eta_2) } \, K_{i\omega}(m e^{\xi_1}) K_{i\omega}(m e^{\xi_2}) \,{\sinh \pi \omega}\ d\omega. \label{rindler1} \end{equation} The thermal equilibrium average of an operator ${\cal O}$ at temperature $T=\beta^{-1}$ is defined in quantum mechanics as follows: \begin{equation}\label{defW} \langle\hat{\cal O}\rangle=\frac{\text{Tr}\, e^{-\beta \hat{H}} \hat{\cal O}}{\text{Tr} \, e^{-\beta \hat{H}} }, \end{equation} where $\hat{H}$ is the Hamiltonian of the system.
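Although the definition \eqref{defW} does not carry over directly to field theory, the KMS fingerprint it implies, $G(t-i\beta)=G(-t)$, is easy to exhibit for a single oscillator mode (an illustrative sketch; the values of $\omega$ and $\beta$ are arbitrary):

```python
import numpy as np

omega, beta = 0.7, 2.3               # arbitrary mode frequency, inverse temperature
n = 1.0 / np.expm1(beta * omega)     # Bose-Einstein occupation number

def G(t):
    """Thermal Wightman function <x(t)x(0)>_beta of one oscillator mode."""
    return ((n + 1) * np.exp(-1j * omega * t) + n * np.exp(1j * omega * t)) / (2 * omega)

# KMS condition: G(t - i*beta) = G(-t), using (n+1)e^{-beta omega} = n
for t in np.linspace(-5, 5, 11):
    assert np.isclose(G(t - 1j * beta), G(-t))
```

The identity $(n+1)e^{-\beta\omega}=n$ is what converts the shift by $-i\beta$ into time reversal, and it is this analytic periodicity that survives in the field-theoretic KMS states discussed next.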
This definition does not directly work in Quantum Field Theory, but there are states with the same general properties, expressed in terms of analyticity and periodicity in the complex time variable; they are known as the Kubo-Martin-Schwinger (KMS) states \cite{haag}. In the Rindler wedge, a Wightman function having the KMS property may be obtained by a generalized Bogoliubov transformation \cite{ms,ms2} of the Fulling vacuum. In the case under consideration it is given by: \begin{equation} W_{\beta}(X_1(\eta_1,\xi_1),X_2(\eta_2,\xi_2)) = \frac 1{ \pi^2}\int_0^\infty \left[ \frac{e^{-i\omega (\eta_1-\eta_2) }}{1-e^{-\beta\omega}} + \frac{e^{i\omega (\eta_1-\eta_2)}}{ e^{\beta \omega}-1}\right] K_{i\omega}(m e^{\xi_1}) K_{i\omega}(m e^{\xi_2}) {\sinh \pi \omega}\, d\omega. \label{propbetamassive} \end{equation} The two-point function (\ref{propbetamassive}) is time-translation invariant (and therefore it provides an equilibrium state) and {\em respects the exchange symmetry} (\ref{symmetry}). The formal proof of the KMS periodicity makes use of the exchange symmetry (\ref{symmetry}) but is otherwise straightforward. When $\beta= 2\pi $ an explicit calculation of the integral shows that the two-point function \eqref{propbetamassive} can be extended to the whole complex Minkowski spacetime (minus the causal cut) and is actually Poincar\'e invariant \cite{Unruh:1976db,ms,ms2}: \begin{align} W_{2\pi}(X_1, X_2)=\frac{1}{2\pi}K_0\left(m \sqrt{-(X_1-X_2)^2}\right). \end{align} When $mL\to 0$ it has the standard ultraviolet (Hadamard) divergence with the correct coefficient $1/4\pi$: \begin{align} \label{k0log} \frac{1}{2\pi}K_0\left(m\sqrt{-L}\right) \approx -\frac{1}{4\pi}\log(-m^2 \, L). \end{align} Inside the Rindler wedge, the main contributions to the integral \eqref{propbetamassive} for light--like separations come from high energies $\omega \gg me^{\xi_{1,2}}$ ($\xi_{1,2}$ fixed) and the divergence does not depend on the temperature.
This is true for any $\beta$. However, when $\beta \neq 2\pi$ there are extra (anomalous) singularities at the horizon --- the boundary of the wedge, which of course is also light--like. We will show this now. When the temperature is an integer multiple of $(2\pi)^{-1}$ a simple formula is available \cite{Akhmedov:2020qxd,Akhmedov:2019esv}: \begin{align} \label{2pin} W_{\frac{2\pi}{N}} \left(X_1 , X_2\right)=\sum_{k=0}^{N-1} W_{2\pi} \left(X_1\left(\eta_1 - \frac{2\pi i\, k }{N},\,\xi_1\right) , \ X_2\left(\eta_2, \,\xi_2\right) \right). \end{align} Let us consider the simplest case $\beta=\pi$: \begin{align} \label{proppi} W_{\pi}=\frac{1}{2\pi}K_0\left(m\sqrt{e^{2\xi_1}+e^{2\xi_2}-2e^{\xi_1+\xi_2}\cosh\Delta\eta}\right)+\frac{1}{2\pi}K_0\left(m\sqrt{e^{2\xi_1}+e^{2\xi_2}+2e^{\xi_1+\xi_2}\cosh\Delta\eta}\right). \end{align} Points of the horizons may be attained as follows: \begin{eqnarray} \label{coordinateshorizon} \lim_{\lambda \to \pm\infty} X(\lambda ,\chi\mp \lambda) =\lim_{\lambda \to \pm\infty}\left( \begin{array}{c} e^{\chi \mp \lambda } \sinh \lambda \\ e^{ \chi \mp \lambda } \cosh \lambda \\ \end{array} \right) = \frac 12 \left( \begin{array}{c} \pm e^{ \chi } \\ e^{ \chi } \end{array} \right). \end{eqnarray} The interval between two points having the same coordinate $\lambda$ is spacelike; for instance \begin{eqnarray} \label{coordinateshorizon1} L_{12}=\left(X_1(\lambda ,\chi_1-\lambda) -X_2(\lambda ,\chi_2-\lambda)\right)^2 = -e^{-2 \lambda} \left(e^{\chi_1}-e^{\chi_2}\right)^2<0; \end{eqnarray} furthermore \begin{eqnarray} \left(X_1(\lambda -i \pi ,\chi_1-\lambda) -X_2(\lambda ,\chi_2-\lambda)\right)^2 = -e^{-2 \lambda} \left(e^{\chi_1}+e^{\chi_2}\right)^2 = L_{12} - 4 e^{-2 \lambda} e^{\chi_1 + \chi_2}. \end{eqnarray} The first term in \eqref{proppi} is singular for any two light-like separated points in the Rindler wedge.
When the two points both approach either the future or the past horizon, the second term also diverges, and it does so exactly as the first term: \begin{align} W_{\pi}\big[X(\lambda ,\chi_1-\lambda),X(\lambda ,\chi_2-\lambda)\big] \approx -\frac{2}{4\pi}\log(-m^2 \, L_{12}), \ \ \text{as} \ \ \lambda \to +\infty. \end{align} Similarly, for $\beta=\frac{2\pi}{N}$ and $ \lambda \to +\infty$, \begin{align}\label{betaL} W_{\frac{2\pi}{N}}\big[X(\lambda ,\chi_1-\lambda),X(\lambda ,\chi_2-\lambda)\big] \approx -\frac{N}{4\pi}\log(- m^2 \, L_{12}) = -\frac{1}{2\beta}\log(-m^2 \, L_{12}). \end{align} In the horizon limit \eqref{coordinateshorizon} the dominant contribution to the integral (\ref{propbetamassive}) comes from the infrared region $\omega \to 0$. Using appendix A and the asymptotic form of the modes near the horizon one can show that \eqref{betaL} remains true for general $\beta$. The calculation is similar to the one performed in \cite{Akhmedov:2020qxd}. Such a $\beta$--dependence of the coefficient of the singularity at light--like separation (at the horizon) implies that the thermal state cannot be continued to the entire Minkowski space--time. It is possible to introduce more general time-translation-invariant (at tree--level) states by letting the temperature depend on the energy: \begin{equation} {\cal W} (X_1,X_2) = \frac 1{ \pi^2}\int_0^\infty \left[ \frac{e^{-i\omega (\eta_1-\eta_2) }}{1-e^{-\beta(\omega)\, \omega}} + \frac{e^{i\omega (\eta_1-\eta_2)}}{ e^{\beta(\omega) \, \omega}-1}\right] K_{i\omega}(m e^{\xi_1}) K_{i\omega}(m e^{\xi_2}) {\sinh \pi \omega}\, d\omega. \label{propbetamassive2} \end{equation} These states also respect the exchange symmetry (\ref{symmetry}). However, when $m\neq 0$ it is not possible to disentangle two independent temperatures for the left-- and right--movers. That is because the modes $e^{-i\omega \eta} \, K_{i\omega}$ are normalizable linear combinations of left-- and right--movers.
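The doubling of the logarithmic coefficient near the horizon, i.e. Eq. \eqref{betaL} with $N=2$, can be checked directly from the closed form \eqref{proppi} (a numerical sketch using SciPy's $K_0$; the values of $m$ and $\chi_{1,2}$ are arbitrary):

```python
import numpy as np
from scipy.special import k0

m, chi1, chi2 = 0.5, 0.3, 1.1   # arbitrary parameters

def W_pi(lam):
    """Eq. (proppi) at the points X(lam, chi_i - lam), i.e. Delta eta = 0."""
    xi1, xi2 = chi1 - lam, chi2 - lam
    a = np.exp(2 * xi1) + np.exp(2 * xi2)
    b = 2 * np.exp(xi1 + xi2)
    return (k0(m * np.sqrt(a - b)) + k0(m * np.sqrt(a + b))) / (2 * np.pi)

def horizon_log(lam):
    """-(2 / 4 pi) log(-m^2 L12), with L12 from Eq. (coordinateshorizon1)."""
    L12 = -np.exp(-2 * lam) * (np.exp(chi1) - np.exp(chi2))**2
    return -2 / (4 * np.pi) * np.log(-m**2 * L12)

# The ratio tends to 1 as both points approach the horizon (lam -> infinity).
ratios = [W_pi(lam) / horizon_log(lam) for lam in (5.0, 10.0, 20.0)]
assert abs(ratios[-1] - 1) < 0.05
assert abs(ratios[-1] - 1) < abs(ratios[0] - 1)   # improves with lam
```

Away from the horizon only the first $K_0$ develops the light-cone logarithm, so the same ratio with generic light-like separated interior points tends to $1/2$ instead.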
We will see below that for de Sitter and Schwarzschild fields the situation is different and such a possibility does exist. \subsection{General dimension} If there are $d$ extra flat transverse spatial dimensions $\vec{x}$ then the Rindler metric is: \begin{align} ds_d^2=e^{2\xi}\big(d \eta^2-d\xi^2\big)-d\vec{x}^2. \end{align} The modes can be represented as $\varphi(\eta,\xi,\vec{x})= e^{i \vec{k}\vec{x}} \varphi_{\vec{k}} (\eta,\xi) $ where $\varphi_{\vec{k}} (\eta,\xi) $ obeys Eq. (\ref{eqmassive}) with the effective mass squared $m^2 + k^2$. Therefore the field operator can be expanded as \begin{align} \label{operatormassiveRindlerD} \hat{\varphi}(\eta,\xi,\vec{x}) =\int_{-\infty}^{+\infty} \frac{d^dk}{(2\pi)^\frac{d}{2}} \int_{0}^{+\infty} \frac{d \omega}{\pi}\sqrt{\sinh\pi \omega}\bigg[e^{-i\omega \eta+i\vec{k}\vec{x}}\hat{b}_{\omega,\vec{k}}^{} +e^{i\omega \eta- i\vec{k}\vec{x}}\hat{b}_{\omega,\vec{k}}^\dagger\bigg]K_{i\omega}\big(\sqrt{m^2+k^2} e^{\xi}\big). \end{align} The Wightman function at inverse temperature $\beta$ is as follows: \begin{eqnarray}\label{genericbetarind} && W_{\beta}(X_1,X_2)=\cr&& = \int_{-\infty}^{+\infty} \frac{d^dk}{(2\pi)^d} \int_{-\infty}^{+\infty} \frac{d\omega}{\pi^2} \frac{\sinh(\pi \omega)}{1-e^{-\beta \omega}} e^{-i \omega (\eta_1-\eta_2)}e^{i\vec{k} (\vec{x}_1-\vec{x}_2)} K_{i\omega}\big(\sqrt{m^2+k^2} e^{\xi_1}\big)K_{i\omega}\big(\sqrt{m^2+k^2} e^{\xi_2}\big) \cr && \end{eqnarray} Enforcing Poincar\'e invariance gives $\beta = 2\pi$ \cite{ms}; this is the well-known Bisognano--Wichmann theorem, valid also for interacting quantum fields \cite{bisognano}. The anomalous divergence on the horizons for generic $\beta \neq 2\pi$ goes precisely as in the previous section. \subsection{Stress energy tensor in 2D} Here we complete the discussion of the massive scalar field in 2D Rindler spacetime by examining the renormalized stress-energy tensor at various temperatures (some technical details can be found in appendix \ref{appendixD}).
To set up the notations let us summarise the standard expression resulting from point splitting regularization in the Poincar\'e invariant case $\beta = 2\pi$: \begin{eqnarray} && \langle T_{VV}\rangle_{2\pi} = -\frac{t_V t_V}{4\pi\epsilon^2 }, \quad \langle T_{UU}\rangle_{2\pi} = -\frac{t_U t_U}{4\pi\epsilon^2 },\cr && \langle T_{VU}\rangle_{2\pi} =\langle T_{UV}\rangle_{2\pi} =-\frac{e^{V-U}}{8 \pi } m^2 \left[ \gamma_e + \log ( m ) +\log \left(\epsilon\sqrt{t_\alpha t^\alpha}\right) \right], \end{eqnarray} where $t_\mu$ is the vector separating the two points of the Wightman function \eqref{propbetamassive}. The above expressions lead to the covariantly conserved stress--energy tensor \cite{birreldavies}: \begin{align} \label{2piset1} \langle :T_{\mu\nu}: \rangle_{2\pi}=-\frac{1}{4 \pi } m^2 \left[ \gamma_e + \log ( m ) \right] g_{\mu\nu}, \end{align} where $\gamma_e$ is the Euler--Mascheroni constant. This is obviously related to the expectation value in Minkowski space by the coordinate transformations \eqref{coordinatesuv}. Similarly, for $\beta = 2\pi/N$ point splitting regularization in \eqref{2pin} gives \begin{align} \langle :T_{\mu\nu}: \rangle _{\frac{2\pi}{N}}= \sum_{n=1}^{N-1} \frac{m^2}{4} e^{V - U} K_2\left( 2 m e^{\frac{V - U}{2}} \sin \left( \frac{n \pi}{N}\right) \right) \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + \cr + \left( -\frac{1}{4 \pi } m^2 \left[ \gamma_e + \log ( m ) \right]+\frac{m^2}{2} \sum_{n=1}^{N-1} K_0\left( 2 m e^{\frac{V - U}{2}} \sin \left( \frac{n \pi }{N} \right)\right)\right) g_{\mu\nu},\label{betaNsttenz} \end{align} where $K_0(x)$ and $K_2(x)$ are Macdonald functions. Violation of Poincar\'e invariance is manifest.
Near the horizon this expression simplifies to: \begin{eqnarray} \label{TbetaN} \langle :T_{\mu\nu }:\rangle _{\frac{2\pi}{N}}= \frac{1}{24}\left(N^2-1\right) \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + {\mathcal O}(e^{V-U}), \end{eqnarray} while at spatial infinity it gives: \begin{align*} \langle :T_{\mu\nu }:\rangle _{\frac{2\pi}{N}} \approx -\frac{1}{4 \pi } m^2 \left[ \gamma_e + \log ( m ) \right] g_{\mu\nu}, \end{align*} which coincides with the $\beta = 2 \pi$ case. These two types of asymptotic behaviour of the stress--energy tensor are regular. Furthermore, the second one does not depend on $\beta$. On the other hand, the expectation values of the mixed components of the stress--energy tensor $T_\mu^\nu$ diverge at the horizon. For generic values of $\beta$, when both points in \eqref{genericbetarind} are taken to the horizon we get (see appendix B) \begin{multline} W_{\beta}(X^+,X^-) \approx \int_{-\infty}^{\infty}\frac{d\omega}{\pi\omega}\frac{e^{-\frac{i\omega}{2} {(V^+ +U^+-V^--U^-)}}}{1-e^{-\beta\omega}}\sin\left(\omega\log(me^{\frac{(V^+-U^+)}{2}}/2) +\arg\Gamma (1-i\omega)\right) \times \\ \times \sin\left(\omega\log(me^{\frac{(V^--U^-)}{2}}/2)+\arg\Gamma (1-i\omega)\right). \end{multline} The expectation value may be obtained by taking into account \begin{multline} \partial _{V_+}\partial _{V_-} W_{\beta}(X^+,X^-)= \int_{-\infty}^{\infty}\frac{d\omega}{4\pi} \frac{\omega}{1-e^{-\beta\omega}} e^{-i\omega (V^+ -V^-)}= \\ =-\frac{1}{4 \pi (V^+-V^-)^2 }+\frac{\pi}{12 \beta^2} =-\frac{t_V t_V}{4\pi\epsilon^2 }+\frac{1}{24\pi}+\frac{\pi}{12 \beta^2}. \end{multline} At the horizon for arbitrary temperatures we get \begin{align} \langle :T_{\mu\nu }:\rangle _\beta= \frac{1}{24}\left( \left(\frac{2 \pi}{\beta}\right)^2-1\right) \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + {\mathcal O}(e^{V-U}) \end{align} to be compared with \eqref{TbetaN}.
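The frequency integral above can be cross--checked numerically: summing over the Matsubara poles gives the closed form $\int\frac{d\omega}{4\pi}\,\frac{\omega\,e^{-i\omega\Delta}}{1-e^{-\beta\omega}}=-\frac{\pi}{4\beta^{2}\sinh^{2}(\pi\Delta/\beta)}$ (with $\Delta\to\Delta-i\epsilon$), whose small--$\Delta$ expansion reproduces $-\frac{1}{4\pi\Delta^{2}}+\frac{\pi}{12\beta^{2}}$. A numerical sketch, with the $i\epsilon$ regulator kept finite and the parameter values chosen arbitrarily:

```python
import numpy as np

beta, delta, eps = 2.0, 0.3, 0.05   # arbitrary test values; eps damps the UV oscillations

w = np.arange(-80.0, 600.0, 0.01) + 0.005        # grid dodging the removable point w = 0
# stable evaluation of w/(1 - e^{-beta w}): use the tail form -w e^{beta w} for w << 0
bose = np.where(w < -50.0,
                -w * np.exp(beta * np.clip(w, None, 0.0)),
                w / (1.0 - np.exp(-beta * np.clip(w, -50.0, None))))
integrand = bose * np.exp(-1j * w * (delta - 1j * eps)) / (4.0 * np.pi)

trapezoid = getattr(np, 'trapezoid', None) or np.trapz   # numpy 2.x renamed trapz
numeric = trapezoid(integrand, w)
closed = -np.pi / (4.0 * beta**2 * np.sinh(np.pi * (delta - 1j * eps) / beta)**2)
print(abs(numeric - closed) / abs(closed))       # small relative error
```

At $\beta=2\pi$ the constant term $\frac{\pi}{12\beta^2}$ combines with the point-splitting subtraction to restore the Poincar\'e invariant answer.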
\subsection{Massless case} We complete this section with a few remarks on the massless case, which are necessary to understand the novel features of the de Sitter case presented in the next section. The discussion is kept short; details will be discussed elsewhere. The massless Klein-Gordon equation in the two--dimensional Rindler spacetime is \begin{equation} \Box \phi = \partial_\eta^2\phi -\partial_\xi^2 \phi = 0. \end{equation} As regards the vacuum two-point function, following the Minkowskian case \cite{Wightman,klaiber} one would set in Fourier space \begin{equation} \widetilde W_0(k) = \theta(k^0)\delta(k^2) = \widetilde W_R(k) + \widetilde W_L(k) = \frac{1}{k^+} \theta(k^+) \delta(k^-) + \frac{1}{k^-} \theta(k^-) \delta(k^+) \end{equation} where we introduced the lightcone variables $k^\pm = k^0\pm k^1$. Unfortunately $\theta(k^0)$ is not a multiplier for $\delta(k^2)$. The standard regularization (see e.g. \cite{Wightman,klaiber,gelfand,strocchi}) involves an arbitrary infrared regulator having the dimension of a mass: \begin{eqnarray} W_R(\eta,\xi )= W_R(U) =- \frac 1 {4\pi} { \log \left({i \mu (U -i \epsilon)}\right)}, \\ W_L(\eta,\xi) =W_L(V) = - \frac 1 {4\pi} { \log \left({i \mu (V-i \epsilon)}\right)}. \end{eqnarray} We see here that $ W_R$ (resp. $ W_L$) is analytic in the lower half-plane of the complex variable $U$ (resp. $V$). We may therefore introduce the regularized thermal massless right two-point function as the following formal series \begin{eqnarray} W_{R,\beta} (U) = \sum_{n=0}^{\infty} W_R(U -i n \beta ) + \sum_{n=1}^{\infty} W'_R(U + in \beta) \end{eqnarray} and similarly for $ W_{L,\beta} (V)$. The total two-point function is the sum \begin{eqnarray} W_{0,\beta} (\eta,\xi) = W_{L,\beta} (V)+W_{R,\beta} (U). \end{eqnarray} But the left and right movers are independent fields and may have independent temperatures.
We may thus formally introduce the states characterized by functions of the form \begin{eqnarray} W_{0,\beta_L \beta_R} (\eta,\xi) = W_{L,\beta_L} (V)+ W_{R,\beta_R} (U). \end{eqnarray} This choice of course does not change the commutators, which do not depend on the temperatures. In conclusion, we may introduce two independent temperatures for the left and right moving fields \begin{equation} \phi = \phi_R+\phi_L \end{equation} while of course leaving the commutator untouched. In Rindler space this possibility exists only for the massless field, not for the massive one. \section{Static patch of de Sitter space--time} \label{static} \subsection{Geometry, modes and Wightman functions} \begin{figure}[!h] \centering \includestandalone[width=0.8\textwidth]{picStatic} \caption{Penrose diagram of the de Sitter manifold with Cauchy surfaces of different patches. The static patch is bordered by a bifurcate Killing horizon.} \label{fig2} \label{staticpic} \end{figure} The two-dimensional de Sitter space can be most easily visualized as the one-sheeted hyperboloid embedded in a three dimensional ambient Minkowski space: \begin{align} dS_2 = \{X\in {\bf R}^3, \ \ X^\mu X_\mu=X_{0}^2-X_{1}^2-X_{2}^2=-R^2\} . \end{align} $X^{\mu}$ denote the coordinates of a given Lorentzian frame of the ambient spacetime; we set the radius $R$ of the de Sitter space equal to one. A suitable coordinate system for the static patch is \begin{equation} \label{coordinates} X\left(t , x \right)=\begin{cases} X^0= \sinh t \ { \rm sech}\ x \\ X^1= \tanh x = u \\ X^2= \cosh t \ {\rm sech}\ x \end{cases}, \qquad t \in(-\infty,\infty), \ x \in(-\infty,\infty). \end{equation} The static coordinates cover a causal diamond of the entire two dimensional de Sitter space (see Fig. \ref{fig2}; see also \cite{Akhmedov:2020qxd} for a full description); we refer to it as the static patch or the Rindler--de Sitter wedge.
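As an elementary cross--check of the parametrization \eqref{coordinates}, one can verify numerically that the image lies on the unit hyperboloid and that the ambient scalar product of two static--patch points reproduces the invariant combination $-\frac{\cosh(t_2-t_1)+\sinh x_1\sinh x_2}{\cosh x_1\cosh x_2}$ used below. A minimal sketch (the sample points are arbitrary):

```python
import numpy as np

def X(t, x):
    """Static-patch embedding of dS_2 from Eq. (coordinates)."""
    return np.array([np.sinh(t) / np.cosh(x), np.tanh(x), np.cosh(t) / np.cosh(x)])

def dot(A, B):
    """Ambient Minkowski product with signature (+, -, -)."""
    return A[0] * B[0] - A[1] * B[1] - A[2] * B[2]

t1, x1, t2, x2 = 0.4, -1.1, 1.7, 0.8   # arbitrary sample points in the static patch
X1, X2 = X(t1, x1), X(t2, x2)

zeta = -(np.cosh(t2 - t1) + np.sinh(x1) * np.sinh(x2)) / (np.cosh(x1) * np.cosh(x2))
print(dot(X1, X1))          # -1: the image lies on the unit hyperboloid
print(dot(X1, X2) - zeta)   # 0: the invariant variable is the ambient scalar product
```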
The metric and the massive scalar Klein-Gordon equation in these coordinates are written as follows: \begin{eqnarray} && ds^2 = \frac{dt^2-dx^2}{\cosh^2 x}, \\ \label{sdseq} && \partial_{t}^2 \varphi - \partial_x^2 \varphi+ \frac{m^2\varphi }{\cosh^2 x} =0. \end{eqnarray} The scattering eigenfunctions of the Schr\"odinger operator with potential $m^2/\cosh^2 x$ \cite{Akhmedov:2020qxd} \begin{align*} \psi_\omega(x)=\sqrt{\sinh(\pi\omega)} \, \Gamma\left(\frac{1}{2}+i\mu-i\omega\right) \, \Gamma\left(\frac{1}{2}-i\mu-i\omega \right) \, \mathsf{P}^{i\omega}_{-\frac{1}{2}+i\mu}(\tanh x), \ \ \ \mu^2=m^2-\frac{1}{4} \end{align*} provide two modes for each energy level $\omega$, namely $e^{-i \omega t} \, \psi_\omega(\pm x)$. The asymptotic behaviour of the modes at $x\to \infty$ is governed by \begin{eqnarray} \label{plus} && \mathsf{P}^{i\omega}_\ind\left(\tanh x\right) \underset{x\to\infty}{\approx} \frac{e^{i\omega x}}{\Gamma(1-i\omega)}, \\ \label{minus} && \mathsf{P}^{i\omega}_\ind\left(-\tanh x\right) \underset{x\to\infty}{\approx} \bigg[ \frac{\Gamma\big(-i\omega\big) e^{-i\omega x}}{\Gamma\big(\frac12+i\mu-i\omega\big)\Gamma\big(\frac12-i\mu-i\omega\big)}+\frac{\cosh(\mu\pi)\Gamma\big(i\omega\big) \, e^{i\omega x}}{\pi }\bigg]; \end{eqnarray} this shows that $e^{-i \omega t} \, \psi_\omega(x)$ is asymptotically right--moving and $e^{-i \omega t} \, \psi_\omega(-x)$ left--moving.
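The asymptotics \eqref{plus} is easy to probe numerically, since mpmath implements the Ferrers function $\mathsf{P}^{\mu}_{\nu}$ on the cut $(-1,1)$ as legenp with type=2. A sketch with arbitrary parameter values (the neglected corrections are $O(e^{-2x})$):

```python
import mpmath as mp

mp.mp.dps = 25
omega, mu, x = mp.mpf('0.6'), mp.mpf('1.3'), mp.mpf('12')

# Ferrers function P^{i omega}_{-1/2 + i mu}(tanh x); type=2 is the branch on (-1, 1)
P = mp.legenp(mp.mpc(-0.5, mu), 1j * omega, mp.tanh(x), type=2)
asym = mp.exp(1j * omega * x) / mp.gamma(1 - 1j * omega)

rel_err = abs(P - asym) / abs(asym)
print(rel_err)  # small: corrections are exponentially suppressed at large x
```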
The expansion of the field operator written in terms of the above modes {\em naturally splits into two commuting fields: a left mover ${\phi_L}$ and a right mover ${\phi_R}$ for all values of the mass $m$}: \begin{eqnarray} \label{fieldoperator} {\phi}(t,x)&=&{\phi_R}(t,x)+ {\phi_L}(t,x), \cr && \cr {\phi_R}(t,x)&=& \frac{1}{2\pi}\int_{0}^\infty \left[e^{-i \omega t} \psi_\omega(x)a_\omega +e^{i \omega t} \psi^*_\omega(x) a^\dagger_\omega \right] {d\omega},\cr && \cr {\phi_L}(t,x)&=& \frac{1}{2\pi}\int_{0}^\infty \left[ e^{-i \omega t} \psi_\omega(-x) b_\omega +e^{i \omega t} \psi^*_\omega(-x) b^\dagger_\omega \right] d\omega; \end{eqnarray} the ladder operators obey the standard commutation relations \begin{align*} \big[a_{\omega_1}, a^\dagger_{\omega_2} \big]=\delta(\omega_1-\omega_2), \ \ \ \big[b_{\omega_1}, b^\dagger_{\omega_2} \big]= \delta(\omega_1-\omega_2), \qquad \big[a_{\omega_1}, b_{\omega_2} \big]=\big[a_{\omega_1}, b^\dagger_{\omega_2} \big]=0. \end{align*} It may be worthwhile to stress that the above separation into left and right movers is possible only in the static coordinate system, because of the symmetry of the effective potential, and that, once more, it holds true for massive fields. Here the left and right moving modes depend on one of the two lightcone variables $t\pm x$ only asymptotically, near the corresponding side of the horizon. In \cite{Akhmedov:2020qxd} we constructed general time translation invariant states \begin{align} \label{bithermalst} \langle\hat{a}_\omega^\dagger \hat{a}_{\omega'}^{} \rangle=\delta(\omega-\omega')\frac{1}{e^{\beta_R(\omega) \,\omega}-1} \qquad \text{and} \qquad \langle\hat{b}_\omega^\dagger \hat{b}_{\omega'}^{} \rangle=\delta(\omega-\omega')\frac{1}{e^{\beta_L(\omega)\, \omega}-1}.
\end{align} We gave in particular a full treatment for states of arbitrary global (inverse) temperature \begin{equation} \beta_L(\omega)= \beta_R(\omega)=\beta \end{equation} and provided new integral representations for their correlation functions. Taking inspiration from the Unruh state for black holes, we enlarge that study and consider different global temperatures for the left-- and the right--moving modes: \begin{equation} \beta_L(\omega)=\beta_L, \ \ \ \beta_R(\omega)=\beta_R. \end{equation} The Wightman function is the sum of two contributions \begin{equation} \label{propstatic} W_{\beta_L\beta_R}(X_1,X_2 ) = W_{L,\beta_L}(X_1,X_2 )+W_{R,\beta_R}(X_1,X_2 ), \end{equation} where \begin{eqnarray} \label{propstatic2a} W_{L,\beta}(X_1,X_2 ) =\int_{0}^{\infty}\frac{d\omega }{4\pi^2} \left[ e^{-i\omega (t_1-t_2)} \frac{\psi_{\omega}(-x_1) \psi_{\omega}^*(-x_2)}{1-e^{-\beta \omega }} + e^{i\omega( t_1- t_2)} \frac{\psi^*_{\omega}(-x_1) \psi_{\omega}(-x_2)}{e^{\beta \omega }-1} \right], \\ W_{R,\beta}(X_1,X_2 ) = \int_{0}^{\infty}\frac{d\omega }{4\pi^2} \left[ e^{-i\omega (t_1-t_2)} \frac{\psi_{\omega}(x_1) \psi_{\omega}^*(x_2)}{1-e^{-\beta \omega }} + e^{i\omega( t_1- t_2)} \frac{\psi^*_{\omega}(x_1) \psi_{\omega}(x_2)}{e^{\beta \omega }-1} \right].
\label{propstatic2} \end{eqnarray} The formal proof of the KMS periodicity property goes as follows: \begin{eqnarray} && W_{R,\beta}(X_2(t_2,x_2),X_1(t_1,x_1)) =\frac 1{4 \pi^2}\sum_{n=0}^\infty\int_0^\infty {e^{-i\omega (t_2-t_1-i n \beta) }}\psi_{\omega}(x_2) \psi_{\omega}^*(x_1) d\omega +\cr && \cr && +\frac 1{4 \pi^2}\sum_{n=1}^\infty\int_0^\infty {e^{i\omega (t_2-t_1+in\beta)}}\psi^*_{\omega}(x_2) \psi_{\omega}(x_1) \, d\omega= W_{R,\beta}(X_1(t_1-i\beta,x_1),X_2(t_2,x_2))\cr && \end{eqnarray} There holds the exchange symmetry \begin{equation} \label{symmetry22} W_{R,\beta_R}(X_1(t_1,x_1),X_2(t_2,x_2) ) = W_{L,\beta_R}(X_1(t_1,-x_1),X_2(t_2,-x_2) ) \end{equation} When $\beta_L = \beta_R = 2 \pi$ the Wightman function (\ref{propstatic}) respects the de Sitter isometry \cite{HG,Akhmedov:2020qxd,fhkn,sewell,bgm,bm,nt}, i.e. it is a function of the complex de Sitter invariant variable $$ \zeta = - \frac{\cosh(t_2-t_1) + \sinh x_1 \sinh x_2}{\cosh x_1 \cosh x_2} $$ with the locality cut on the negative reals. The variable $\zeta$ and the geodesic distance $L$ are related as follows: $\zeta = - \cosh L \ $ for time-like geodesics, $\zeta = \cos L\ $ for space-like ones; $\zeta = - 1$ for light-like separations. Let us consider now the behavior at the horizon of Eq. (\ref{propstatic}). Points of the right (left) future horizon are obtained in the following limit \begin{eqnarray} \label{coordinateshorizonds} \lim_{\lambda \to + \infty} X(\lambda ,\pm (\lambda - \chi)) =\lim_{\lambda \to +\infty}\left( \begin{array}{l} { \rm sech\, }(\lambda -\chi) \sinh \lambda \\ \pm \tanh (\lambda-\chi ) \\ { \rm sech\, }(\lambda-\chi ) \cosh \lambda \\ \end{array} \right) = \left( \begin{array}{r} e^{ \chi } \\ \pm1 \\ e^{ \chi } \end{array} \right) \end{eqnarray} Points of the left (right) past horizon are obtained in the limit $\lambda \to -\infty$ of the above expression. 
In all cases the interval between two points having the same finite coordinate $\lambda$ is spacelike: \begin{eqnarray} \label{coordinateshorizon1} L_{12}= -\frac{2 (\cosh (\chi_1 -\chi_2 )-1)} { \cosh (\lambda -\chi_1 ) \cosh (\lambda -\chi_2 ) } <0, \end{eqnarray} becoming light--like only in the limit $\lambda \to \pm \infty$. Using the asymptotics of the modes \eqref{plus} and \eqref{minus}, together with Eq. \eqref{helpfull}, one can obtain the behaviour of $W_{R,\beta_R}$ and $W_{L,\beta_L}$ separately, e.g. at the right side of the horizon. As we can see from Eq. \eqref{plus}, in this region $W_{R,\beta_R}$ depends only on the difference $x_1-x_2$, which does not grow when both points are taken to the same side of the horizon. This means that this contribution to the Wightman function is regular near the right side of the horizon. At the same time, in the same region $W_{L,\beta_L}$ depends on both $x_1-x_2$ and $x_1+x_2$, as we can see from Eq. \eqref{minus}. The latter sum grows without bound near the horizon. As a result, using \eqref{helpfull} one obtains that \begin{align} W_{\beta_L\beta_R}\Big(X(\lambda ,\chi_1-\lambda) , X(\lambda ,\chi_2-\lambda) \Big) \approx W_{L,\beta_L}\Big(X(\lambda ,\chi_1-\lambda) , X(\lambda ,\chi_2-\lambda) \Big) \approx \frac{1}{\beta_L}\lambda, \qquad \lambda \to +\infty. \end{align} The behavior near the left horizon follows from the symmetry \eqref{symmetry22}, which implies that $$ W_{\beta_L\beta_R}\left(X(t_1,x_1 ), X( t_2, x_2)\right)= W_{\beta_R\beta_L}\left(X(t_1,-x_1 ), X( t_2, -x_2)\right). $$ Hence parity $x\to -x$ together with the rearrangement of temperatures $\beta_L\leftrightarrow\beta_R$ leaves the two-point function invariant. As a result, it follows that for $\lambda \to -\infty$ \begin{align} W_{\beta_L\beta_R}\Big(X(\lambda ,\chi_1-\lambda) , X(\lambda ,\chi_2-\lambda) \Big) \approx W_{R,\beta_R}\Big(X(\lambda ,\chi_1-\lambda) , X(\lambda ,\chi_2-\lambda) \Big) \approx\frac{1}{\beta_R}|\lambda|.
\end{align} The light--like singularity at the horizons depends on the state of the theory. In particular, at the right horizon it depends only on $\beta_R$ while at the left horizon it depends on $\beta_L$. This shows that such a peculiar behavior arises due to the interplay between the waves falling onto and those reflected from the $m^2/\cosh^2 x$ potential. \subsection{General dimension} The $(D+1)$ embedding coordinates and the invariant scalar product for the $D$--dimensional static patch are given by \begin{align} X_0 = \sinh(t) \, \text{sech}(x), \quad X_i = \tanh(x) \, \vec{y}_i, \quad X_D = \cosh(t) \, \text{sech}(x), \quad \vec{y}_i\vec{y}_i=1, \\ Z = \eta_{\mu\nu} \, X_1^\mu \, X_2^\nu = -\frac{\cosh(t_2-t_1) + \vec{y}_1\cdot \vec{y}_2 \ \sinh x_1 \, \sinh x_2 \, }{\cosh x_1 \, \cosh x_2}. \end{align} The Bunch-Davies Wightman function \cite{HG,thir,nach,tagirov,ss,Bunch,bgm,bm} corresponding to the inverse temperature $\beta_L = \beta_R = 2\pi$ \cite{HG,Akhmedov:2019esv,sewell,bgm,bm} is given by \begin{align} W_{2\pi}(Z)=\frac{\Gamma\left(\frac{D-1}{2} + i \, \mu\right)\Gamma\left(\frac{D-1}{2} - i \, \mu\right)}{ 2 (2\pi)^{\frac{D}{2}}} (Z^2-1)^{-\frac{D-2}{4}} P^{-\frac{D-2}{2}}_{-\frac{1}{2}+ i\mu}(Z), \end{align} where $\mu = \sqrt{m^2 - (D-1)^2/4}$. It has the standard Hadamard singularity near $Z=-1$. Points of the future and past horizons are attained in the following limits \begin{eqnarray} \lim_{\lambda \to \pm \infty} X(\lambda , (\lambda - \chi)) =\lim_{\lambda \to \infty}\left( \begin{array}{l} { \rm sech\, }(\lambda -\chi) \sinh \lambda \\ \tanh (\lambda-\chi ) \vec y \\ { \rm sech\, }(\lambda-\chi ) \cosh \lambda \\ \end{array} \right) \label{lorentz0} = \left( \begin{array}{l} \pm e^{ \chi } \\ \pm \vec y \\ e^{ \chi } \end{array} \right) \end{eqnarray} Two events on the horizons are spacelike separated unless $\vec{y}_1=\vec{y}_2$. As in Eq.
\eqref{2pin} for $\beta=\frac{2\pi}{N}$ in the horizon limit one gets: \begin{align} W_{\frac{2\pi}{N}}(\lambda\to + \infty) \approx N \, W_{2\pi}(\lambda \to + \infty) \approx - N \, \frac{\Gamma\Big(\frac{D-2}{2}\Big)}{2^{2+(D-2)\frac{3}{2}}\pi^\frac{D}{2}}e^{(D-2)\lambda}. \end{align} As in Rindler space the singularity of the propagator on the horizon depends on the temperature. \subsection{Stress-energy tensor in 2D} Let us introduce the light-cone coordinates of the static patch: \begin{eqnarray} && V=t+x,\qquad U=t-x, \nonumber \\ \label{metric} && ds^2=\frac{1}{\cosh^2 (\frac{V-U}{2}) }dUdV \equiv C(U,V) dU dV. \end{eqnarray} To set up notations let us discuss first the de Sitter invariant case $\beta = 2\pi$. When the two arguments of the Wightman function are taken very close to each other, one has that \begin{align} W _{2\pi} (X_+, X_-) \approx - \frac{1}{4\pi}\left( H_{-\frac{1}{2}+i\mu}+ H_{-\frac{1}{2}-i\mu}+\log\left[\frac{(V_--V_+)(U_+-U_-)}{4 \cosh^2(V-U)}\right]\right), \end{align} where $H_{-\frac{1}{2}+i \mu}=\psi\left(\frac{1}{2}+i \mu\right)+ \gamma_e $ are the harmonic numbers; the definitions of $X_\pm$, $V_\pm$ and $U_\pm$ can be found in appendix B. Since \begin{align} \partial _{V_+}\partial _{V_-}W_{2\pi}(Z) \approx -\frac{1}{4\pi(V_+-V_-)^2}+\frac{1}{48 \pi}, \quad {\rm and} \quad \partial _{U_+}\partial _{U_-}W_{2\pi}(Z) \approx -\frac{1}{4\pi(U_+-U_-)^2}+\frac{1}{48 \pi}. 
\end{align} the covariant point splitting regularization gives \begin{eqnarray} \langle T_{UV}\rangle_{2\pi} =-\frac{m^2}{8 \pi \cosh^2 (\frac{V-U}{2} )}\left( \psi\left(\frac{1}{2}+i \mu\right)+\psi\left(\frac{1}{2}-i \mu\right)+ 2 \gamma_e+\log\left[\epsilon^2 \ t_\alpha t^\alpha\right]\right),\\ \langle T_{UU}\rangle_{2\pi} = -\left(\frac{1}{4\pi\epsilon^2 (t_\alpha t^\alpha)}+\frac{R}{24 \pi}\right) \frac{t_U t_U}{t_\alpha t^\alpha}, \\ \langle T_{VV}\rangle_{2\pi} =-\left(\frac{1}{4\pi\epsilon^2 (t_\alpha t^\alpha)}+\frac{R}{24 \pi}\right) \frac{t_V t_V}{t_\alpha t^\alpha}. \end{eqnarray} After regularization we obtain the well known answer \cite{Bunch} \begin{align} \label{2piset} \langle :T_{\mu\nu}:\rangle_{2\pi}=-\frac{1}{4 \pi } m^2 \left[ \psi\left(\frac{1}{2}+i \mu\right)+\psi\left(\frac{1}{2}-i \mu\right)+ 2 \gamma_e \right] g_{\mu\nu} + \frac{R}{48 \pi } g_{\mu \nu}. \end{align} The expectation value of the stress--energy tensor with two temperatures $\beta_L$ and $\beta_R$ can be obtained starting from Eq. \eqref{propstatic}. The most interesting case in this situation is the near horizon limit. For instance, close to the right horizon a lengthy but not difficult calculation gives \begin{align} \partial_{V^+} \partial_{V^-} W_{\beta_L \beta_R} (X^+, X^-) \approx \int^{\infty}_{-\infty} \frac{d \w}{ 4 \pi } \frac{\w}{e^{\beta_L \w}-1} e^{i \w (V^+-V^-)} = \frac{\pi}{12 \beta_L^2} - \frac{1}{4\pi} \frac{1}{(V^+-V^-)^2} \end{align} and \begin{multline} \label{desitter_integral} \partial_{U^+} \partial_{U^-} W_{\beta_L \beta_R} (X^+, X^-) \approx \int^{\infty}_{-\infty} \frac{d \w}{ 4 \pi } \frac{\w \sinh^2 \pi \w \ e^{i \w (U^+-U^-)}}{\cosh \pi (\w- \mu) \cosh \pi (\w+\mu)} \bigg[ \frac{1}{e^{\beta_R \w}-1} + \frac{\cosh^2 \mu \pi}{\sinh^2 \pi \w} \frac{1}{e^{\beta_L \w}-1} \bigg]. \end{multline} The above expressions simplify when the temperatures of the left-- and right--movers coincide: $\beta_R = \beta_L = \beta$.
Then the regularized stress-energy tensor in the near horizon limit takes the form: \begin{align} \label{set_static} \langle :T_{\mu \nu}: \rangle \approx \Theta_{\mu \nu} + \frac{R}{48 \pi } g_{\mu \nu}, \end{align} where \begin{align*} \Theta_{UU} &= -\frac{1}{12 \pi} C^{1/2} \partial^2_U C^{-1/2}+ \frac{\pi}{12 \beta^2} = \frac{\pi}{12} \bigg( \frac{1}{\beta^2} - \frac{1}{(2\pi)^2} \bigg) ,\\ \Theta_{VV} &= -\frac{1}{12 \pi} C^{1/2} \partial^2_V C^{-1/2} + \frac{\pi}{12 \beta^2}=\frac{\pi}{12} \bigg( \frac{1}{\beta^2} - \frac{1}{(2\pi)^2} \bigg) ,\\ \Theta_{UV} &= \Theta_{VU} =0. \end{align*} de Sitter covariance is recovered only when $\beta = 2\pi$. \section{Schwarzschild black hole} \label{SchwarzschildVARIANT2} \begin{figure}[!h] \centering \includestandalone[width=0.8\textwidth]{picSch} \caption{Penrose diagram of the Schwarzschild black hole.} \label{Schpic} \end{figure} Here we consider the radial part of the Schwarzschild metric (we call it the two--dimensional black hole): \begin{equation} ds^2 = \left(1 - \frac{r_g}{r}\right) \, dt^2 - \frac{dr^2}{1 - \frac{r_g}{r}} = \bigg[1-\frac{r_g}{r(r^*)}\bigg]\Big(dt^2-d{r^*}^2\Big).\label{BHmetr} \end{equation} The tortoise coordinate \begin{align} r^*=r+r_g\log\Big(\frac{r}{r_g}-1\Big), \end{align} is such that $r^*\approx r$ when $r\to+\infty$ while $r^* \rightarrow -\infty$ when $r\to r_g$; near the horizon the metric looks like the Rindler one \eqref{metricRindler}: \begin{align} \label{Schmetrichor} ds_{\text{nh}}^2\approx e^\frac{r^*}{r_g}\Big(dt^2-d{r^*}^2\Big) \end{align} with the acceleration $\alpha=\frac{1}{2r_g}$. \subsection{Modes and Wightman function} \label{modwight} In the tortoise coordinates the massive Klein-Gordon field equation, \begin{equation} \label{Scheq} \partial^2_t \varphi- \partial^2_{r^*} \varphi +m^2 g_{00} \, \varphi=0, \end{equation} is such that the mass term vanishes at the horizon, where $g_{00}$ itself vanishes.
By separating the variables $\varphi(t,r^*)=e^{-i\omega t}\varphi_{\w}(r^*)$ we obtain \begin{align} \label{ScheqS} -\partial_{r^*}^2 \varphi_{\w}(r^*) + m^2 g_{00}(r^*) \, \varphi_{\w}(r^*)=\w^2\varphi_{\w}(r^*). \end{align} When $\w \leq m$ the modes decay exponentially at large $r^* $ and are localized near the horizon \cite{Akhmedov:2016uha}. There is no double degeneracy, just as for the massive field in Rindler space. The classically allowed region lies to the left of the turning point, which solves the equation $\w^2-m^2 g_{00}(r^*_\text{turning})=0$. Near the horizon, where the effective potential vanishes, the modes behave approximately as \begin{align} \label{assymtoticsABC} \varphi_\omega(r^*) \approx \sqrt{\frac{2}{\pi}}\cos(\omega r^*+\delta_\omega), \quad \omega < m, \quad \left|\omega r^*\right| \gg 1. \end{align} We do not need their exact form for what follows. They do not propagate at spatial infinity and do not contribute to the Hawking radiation \cite{Akhmedov:2015xwa,Akhmedov:2016uha}. On the other hand, they play an important role in the vicinity of the horizon. When $\w > m$ the situation is similar to the static de Sitter case: there are outgoing modes (right movers) $R_\omega(r^*)$ and ingoing modes (left movers) $L_\omega(r^*)$, which can be represented in terms of special functions. We will instead apply semi-classical approximation methods, which in two dimensions\footnote{Indeed one has that \begin{align*} k(x) = \sqrt{(\w r_g)^2 - (m r_g)^2 \, g_{00}(x)}, \ \ \ \ \ \left|\frac{d}{dx} \frac{1}{k(x)}\right| = \frac{1}{2 m r_g} \, \frac{1}{ \big( \frac{\w}{m} - g_{00}(x) \big)^{3/2}} \, \frac{dg_{00}}{dx}. \end{align*} Here $x = r_g /r^*$; $k(x)$ is the wave vector of the problem \eqref{ScheqS}; the following inequality holds for all values of $\omega > m$: $$ \left|\frac{d}{dx} \frac{1}{k(x)}\right| < 1. 
$$ Thus, if $m r_g \gg 1$ the semiclassical approximation is applicable for all values of $r^*$ and there is no reflection from the potential barrier in \eqref{ScheqS}. Such a reflection is inevitable in four dimensions. We would like to thank Dmitriy Trunin for bringing to our attention this important simplification in 2D.} work well for all values of $r^*$; they lead to the following solutions: \begin{align} \label{WKBR} R_\w(r^*)=A_\w \sqrt[4]{\frac{\w^2}{{\w^2-m^2g_{00} (r^*)}}} \exp\Big(i\ \text{sgn}(\w) \int_{r_0}^{r^*}{\sqrt{\w^2-m^2g_{00} (x)}}dx\Big), \\ \label{WKBL} L_\w(r^*)=B_\w \sqrt[4]{\frac{\w^2}{{\w^2-m^2g_{00} (r^*)}}} \exp\Big(-i\ \text{sgn}(\w)\int_{r_0}^{r^*}\sqrt{\w^2-m^2g_{00} (x)}dx\Big), \end{align} where $r_0$ is a reference point. The mode expansion of the field operator reads \begin{eqnarray} \label{fieldopBH} && \hat{\varphi}(t,r^*)=\int_0^m \frac{d\omega}{\sqrt{2\omega}} \, e^{-i\omega t} \varphi_\omega(r^*) \hat{c}_\omega+ \int_m^{+\infty}\frac{d\omega}{\sqrt{2\omega}} e^{-i\omega t}\left[ R_\omega(r^*) \hat{a}_\omega+ L_\omega(r^*) \hat{b}_\omega\right]+h.c. \end{eqnarray} where \begin{eqnarray} && [\hat{a}^{}_{\w},\hat{a}^\dagger_{\w'}]=\delta(\w-\w'), \qquad [\hat{b}^{}_{\w},\hat{b}^\dagger_{\w'}]=\delta(\w-\w'), \qquad [\hat{c}^{}_{\w},\hat{c}^\dagger_{\w'}]=\delta(\w-\w'), \qquad \end{eqnarray} all the other commutators being zero. One may show that in the approximation $m r_g \gg 1$ the canonical commutation relations give the normalization \begin{align} \label{ABCWKB} |A_\w|^2=|B_\w|^2 \approx \frac{1}{2\pi}.
\end{align} Taking inspiration from the Rindler and de Sitter cases, we may now introduce a Wightman function depending on three (inverse) temperatures as follows: \begin{multline} \label{WBH} W ((t_1,r^*_1) ,\, (t_2, r^*_2))_{\beta_0 \beta_L \beta_R} = \int_{|\omega|<m}\frac{d\w}{2\w} \frac{e^{-i\w(t_1-t_2)}}{1-e^{-\beta_0 \omega}} \, \varphi_\w(r^*_2) \, \varphi_\w(r^*_1) + \\ + \int_{|\omega|>m}\frac{d\w}{2\w}\bigg[ \frac{e^{-i\w(t_1-t_2)}}{1-e^{-\beta_L \omega}}L_\w(r^*_1)L^*_\w(r^*_2)+\frac{e^{-i\w(t_1-t_2)}}{1-e^{-\beta_R \omega}}R_\w(r^*_1)R^*_\w(r^*_2)\bigg]. \end{multline} The first term on the RHS of (\ref{WBH}) again shows a double pole at $\omega = 0$. The way we treat it is explained in Appendix A. In Appendix C it is shown that when $m=0$ and $\beta_{R} = \beta_{L}= 4 \pi r_g$ the above expression coincides with the Hartle--Hawking Wightman function; when $\beta_L = \infty$ and $\beta_R = 4 \pi r_g$ it corresponds to the Unruh state; finally, when $\beta_{R,L} = \infty$ it reproduces the Boulware state. \subsection{Singularity at the horizon} The additional singularity at the horizon is an infrared effect; the main contribution to it comes from the term \begin{equation} \varphi_\w(r^*_1)\,\varphi_\w(r^*_2)\approx\frac{1}{\pi} \cos\Big(\omega(r^*_1+r^*_2)+2\delta_\omega\Big) \end{equation} in \eqref{WBH}, with $\w \to 0$. To fix the phase we consider space--like separated points near the horizon parametrized as follows: \begin{align}\label{horlimit} r^*_1=\lambda, \quad r^*_2=\lambda + \text{const}, \quad t_1=t_2 = - \lambda. \end{align} The (future) horizon limit corresponds to $\lambda\to-\infty$; in this limit the Wightman function has the following asymptotics: \begin{align} \label{wm} W\big((t_1,r^*_1),(t_2, r^*_2)\big)\approx\frac{\lambda}{\beta_0}\,e^{2i\delta_0}, \qquad\text{as} \ \ \lambda\to-\infty. \end{align} As anticipated, the limiting expression depends on the phase and goes to zero in the zero temperature limit.
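To see why only the phase--carrying piece matters, note that the product of two near--horizon modes \eqref{assymtoticsABC} splits by the product--to--sum identity into $\frac1\pi\cos\big(\omega(r^*_1+r^*_2)+2\delta_\omega\big)$, which carries the growing combination $r^*_1+r^*_2$ and produces the singularity, plus $\frac1\pi\cos\big(\omega(r^*_1-r^*_2)\big)$, which stays regular. A trivial numerical check of the identity (values arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
omega, r1, r2, delta = rng.uniform(0.1, 2.0, size=4)   # arbitrary sample values

# product of two modes sqrt(2/pi) cos(omega r + delta)
lhs = (2.0 / np.pi) * np.cos(omega * r1 + delta) * np.cos(omega * r2 + delta)
# product-to-sum: the first term is the singular (phase-carrying) piece
rhs = (np.cos(omega * (r1 + r2) + 2.0 * delta) +
       np.cos(omega * (r1 - r2))) / np.pi
print(abs(lhs - rhs))   # 0 up to rounding
```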
At low energies the turning point \begin{align} r^*_\text{turning} \approx r_g \log\frac{\w^2}{m^2},\qquad\text{as} \ \ \w\to0, \end{align} is shifted to minus infinity. Since in this limit the Rindler space asymptotics should be reproduced, we have to set $\delta_0=\frac{\pi}{2}$. In Appendix D we present another derivation of this equality. \subsection{Stress-energy tensor} In the lightcone coordinates $V = t+r^*, \ \ U = t-r^*$ the metric \eqref{BHmetr} takes the form \begin{align} ds^2 = C(U,V) dU dV, \ \ \ \ C(U,V) = \frac{{\mathcal W}(e^{\frac{V-U}{4M}-1})}{1 + {\mathcal W}(e^{\frac{V-U}{4M}-1})}, \end{align} where ${\mathcal W}$ is the Lambert function. Near the horizon \begin{multline} W((V^+,U^+),(V^-,U^-)) \approx \int^m_{-m} \frac{d \w }{4 \pi \w} \frac{1}{e^{\beta_0 \w}-1} \bigg( e^{i \w (V^+ - U^-)+2i \delta_\w }+ e^{i \w (V^+-V^-)} + e^{i \w (U^+-U^-)}+\\ +e^{i\w (U^+-V^-)-2i \delta_\w} \bigg) + \int_{|\w|>m} \frac{d \w }{4 \pi \w} \bigg[ \frac{e^{i\w (V^+-V^-) }}{e^{\beta_R \w}-1} + \frac{e^{i\w (U^+-U^-) }}{e^{\beta_L \w}-1} \bigg]. \end{multline} By taking the limit of coinciding points one gets \begin{multline} \partial_{U^+} \partial_{U^-} W \approx - \frac{1}{4\pi} \frac{1}{(U^+-U^-)^2} + \frac{\pi}{12 \beta_0^2} + \frac{1}{2\pi} \bigg( \frac{Li_2 (e^{-m \beta_L})}{\beta_L^2} - \frac{Li_2 (e^{-m \beta_0})}{\beta_0^2} \bigg) + \\ +\frac{m}{2\pi} \bigg( \frac{\log (1-e^{-m \beta_0} )}{\beta_0} - \frac{\log (1-e^{-m \beta_L} )}{\beta_L} \bigg), \end{multline} where $Li_2(x)$ is the dilogarithm.
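The Lambert--function form of the conformal factor can be verified directly: the definition of the tortoise coordinate gives $(r/r_g-1)\,e^{r/r_g-1}=e^{r^*/r_g-1}$, i.e. $r/r_g=1+{\mathcal W}\big(e^{r^*/r_g-1}\big)$, and hence $C=\frac{\mathcal W}{1+\mathcal W}=1-\frac{r_g}{r}$, with $V-U=2r^*$ and $r_g=2M$. A numerical sketch using scipy's lambertw:

```python
import numpy as np
from scipy.special import lambertw

rg = 1.0                                      # Schwarzschild radius, with M = rg / 2
r = np.array([1.001, 1.5, 3.0, 10.0]) * rg    # sample radii outside the horizon

rstar = r + rg * np.log(r / rg - 1.0)         # tortoise coordinate
V_minus_U = 2.0 * rstar                       # V - U = 2 r^*, and 4M = 2 rg

w = lambertw(np.exp(V_minus_U / (2.0 * rg) - 1.0)).real
C = w / (1.0 + w)                             # conformal factor in Lambert form
print(np.max(np.abs(C - (1.0 - rg / r))))     # ~0: agrees with g_00 = 1 - rg/r
```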
Then for the components of the stress-energy tensor we obtain \begin{multline} T_{UU} \approx - \bigg[ \frac{1}{4 \pi \epsilon^2 (t_{\alpha} t^{\alpha} )} + \frac{R}{24 \pi} \bigg] \frac{t_U t_U}{t_{\alpha} t^{\alpha}} + \frac{\pi}{12} \bigg( \frac{1}{\beta_0^2} - \frac{1}{(8 \pi M)^2} \bigg) + \frac{1}{2\pi} \bigg( \frac{Li_2 (e^{-m \beta_L})}{\beta_L^2} - \frac{Li_2 (e^{-m \beta_0})}{\beta_0^2} \bigg) +\\ +\frac{m}{2\pi} \bigg( \frac{\log (1-e^{-m \beta_0} )}{\beta_0} - \frac{\log (1-e^{-m \beta_L} )}{\beta_L} \bigg), \\ T_{VV} \approx - \bigg[ \frac{1}{4 \pi \epsilon^2 (t_{\alpha} t^{\alpha} )} + \frac{R}{24 \pi} \bigg] \frac{t_V t_V}{t_{\alpha} t^{\alpha}} + \frac{\pi}{12} \bigg( \frac{1}{\beta_0^2} - \frac{1}{(8 \pi M)^2} \bigg) + \frac{1}{2\pi} \bigg( \frac{Li_2 (e^{-m \beta_R})}{\beta_R^2} - \frac{Li_2 (e^{-m \beta_0})}{\beta_0^2} \bigg) + \\ +\frac{m}{2\pi} \bigg( \frac{\log (1-e^{-m \beta_0} )}{\beta_0} - \frac{\log (1-e^{-m \beta_R} )}{\beta_R} \bigg). \end{multline} The non-diagonal component near the horizon goes to zero \begin{align} \label{nondiag_schw} T_{UV} \approx \frac{m^2}{4} e^{{\lambda}/{2 r_g}} \frac{| \lambda|}{\beta_0} \to 0. \end{align} Near the horizon the stress--energy tensor is similar to the original result of \cite{Davies:1976ei,Davies:1977} \begin{align} \label{set_schwar} T_{\mu \nu} \approx \Theta_{\mu \nu} + \frac{R}{48 \pi } g_{\mu \nu}, \end{align} where \begin{align*} \Theta_{UU} &= -\frac{1}{12 \pi} C^{1/2} \partial^2_U C^{-1/2}+ \frac{\pi}{12 \beta_0^2} + L(\beta_L, \beta_0),\\ \Theta_{VV} &= -\frac{1}{12 \pi} C^{1/2} \partial^2_V C^{-1/2} + \frac{\pi}{12 \beta_0^2} + L(\beta_R,\beta_0),\\ \Theta_{UV} &= \Theta_{VU} =0, \end{align*} and \begin{align} L(\beta_1, \beta_2) = \frac{1}{2\pi} \bigg( \frac{Li_2 (e^{-m \beta_1})}{\beta_1^2} - \frac{Li_2 (e^{-m \beta_2})}{\beta_2^2} \bigg) +\frac{m}{2\pi} \bigg( \frac{\log (1-e^{-m \beta_2} )}{\beta_2} - \frac{\log (1-e^{-m \beta_1} )}{\beta_1} \bigg). \end{align} Some comments are in order here.
First, when the three temperatures coincide, the finite logarithmic and dilogarithmic contributions vanish; furthermore, there are no finite contributions at all when they are all equal to the Hawking temperature $\beta = 4 \pi r_g$. Second, while the additional singularity of the propagators affects only the non-diagonal components of the stress-energy tensor (in $(U,V)$ coordinates), the exponential damping protects the covariant components \eqref{nondiag_schw}. The additional singularity does arise in the mixed components of the stress--energy tensor: \begin{align} T^{V}_{\ \ V } = T^{U}_{\ \ U} =\frac{m^2}{2} \langle \varphi \varphi \rangle \sim \frac{m^2 }{2 \beta_0} | \lambda |, \qquad\text{as} \ \ \lambda\to-\infty. \end{align} \section{Outlook} Heating and thermalization are, however, non--stationary processes. To calculate the correlation functions in non--stationary situations one has to exploit the Schwinger--Keldysh diagrammatic technique \cite{LL10,Kamenev}. The starting point is to choose an initial Cauchy surface\footnote{This method applies to globally hyperbolic spacetimes. In non globally hyperbolic space--times one should also deal with boundary conditions \cite{Akhmedov:2018lkp}.} and an initial value of the correlation functions, i.e. an initial state. The Schwinger--Keldysh technique provides the time evolution towards the future of the correlation function in question. Different types of Cauchy surfaces and initial values may, and in general do, lead to substantially different physical behaviours \cite{Akhmedov:2013vka,Akhmedov:2019cfd}. Even in highly symmetric curved space--times (such as de Sitter) the tree--level correlators of a generic state are not functions of geodesic distances. Needless to say, the same is true in generic space--times, which only partly resemble de Sitter space \cite{Akhmedov:2019cfd}.
In this sense the situation in strongly curved space--times is similar to condensed matter phenomena rather than to high energy physics ones. There is no a priori reason for the initial state in the early universe or in the vicinity of primordial black holes to be the ground state or a thermal state at the canonical temperature. Here we considered a class of time translation invariant states in Rindler, static de Sitter and two--dimensional black hole space--times. They can be thought of as initial states for thermalization or heating problems. They may also appear as attractor equilibrium states at the end of some process. We have shown that when the various temperatures do not coincide with the canonical ones, the two--point Wightman functions have anomalous singularities at the horizons. That may affect the loop corrections. The latter have to be calculated and resummed to trace the fate of the initial state and of the correlation functions. See e.g. \cite{LL10,Kamenev,Akhmedov:2013vka} for various related situations and \cite{Mirbabayi:2020vyt} for a recent study of the resummation of loop corrections in the static patch for a particular initial state. Loop corrections for various initial states in the static patch will be considered elsewhere. \section{Acknowledgements} We would like to acknowledge valuable discussions with O.~Diatlyk, F.~Popov, A.~Semenov and D.~Trunin. The work of ETA was supported by the grant from the Foundation for the Advancement of Theoretical Physics and Mathematics ``BASIS'' and by RFBR grant 18-01-00460. The work of ETA, PAA, KVB and DVD is supported by the Russian Ministry of Education and Science (project 5-100).
\newpage \begin{appendices} \numberwithin{equation}{section} \setcounter{equation}{0} \renewcommand\theequation{A.\arabic{equation}} \section{Leading infrared contribution} \label{appendixA} The behavior at the horizons of the various Wightman functions discussed above is governed by an integral of the form: \begin{align*} \int_{-\infty}^{+\infty} \frac{d \omega}{\omega+i\epsilon}\frac{e^{i \omega \theta}}{e^{\beta(\omega+i\epsilon)}-1}, \quad \text{where} \ \ |\theta|\gg 1. \end{align*} The choice of the shifts of the poles here reproduces the results in the case $\beta = 2\pi/N$, but it can also be justified by general distributional methods \cite{gelfand}. The contour is closed in the upper half-plane for positive values of $\theta$ and in the lower half for negative ones. In the first case the double pole at $\omega=-i\epsilon$ does not contribute. Contributions from other poles are suppressed. For negative $\theta$ the leading contribution in the limit $\theta \to - \infty$ comes from the double pole at $\omega=-i\epsilon$: \begin{align} \label{helpfull} \int_{-\infty}^{+\infty} \frac{d\omega}{\omega+i\epsilon}\frac{e^{i\omega\theta}}{e^{\beta(\omega+i\epsilon)}-1}\approx\begin{cases} 0 \quad \text{if} \ \ \theta>0\\ \frac{2\pi}{\beta} \theta \quad \text{if} \ \ \theta<0 \end{cases} \quad \text{as} \ \ |\theta|\gg 1. \end{align} Thus the answer depends on the sign of $\theta$. \section{Point-splitting regularization} \renewcommand\theequation{B.\arabic{equation}} \setcounter{equation}{0} \label{appendixD} To make the paper self-contained and to set up the notation, in this appendix we summarize the standard point-splitting regularization procedure \cite{Davies:1976ei} for the expectation value of the stress--energy tensor in curved space--time: \begin{align*} \langle \hat{T}_{\mu\nu}(x) \rangle = \left. D_{\mu\nu} \langle \hat{\varphi}(x^+)\hat{\varphi}(x^-) \rangle\right|_{x^+=x^-=x}.
\end{align*} Here $D_{\mu\nu}$ is a differential operator; $x^\pm$ are points which are separated from $x$ along a \textcolor{black}{spacelike} geodesic and $t^\mu$ is the tangent vector (see Fig. \ref{geopic}). \begin{figure}[h] \centering \begin{tikzpicture}[ scale=2] \coordinate (x) at (0, 0); \coordinate (xp) at (1.5, 0.75); \coordinate (xpp) at (3, 0.5); \coordinate (xm) at (-1.5, -0.5); \coordinate (xmm) at (-3, 0.5); \coordinate (xt) at (0.5,0.86); \draw[thick] (xmm) to [out=20,in=150] (xm) node[circle,fill,inner sep=1.5pt]{} to [out=-30,in=-120] (x) node[circle,fill,inner sep=1.5pt]{} to [out=60,in=140] (xp) node[circle,fill,inner sep=1.5pt]{} to [out=-40,in=-170] (xpp); \node at (x) [left]{\LARGE $x$ \normalsize}; \node at (1.75,0.75) [above]{\LARGE $x^+$ \normalsize}; \node at (xm) [below]{\LARGE $x^-$ \normalsize}; \draw[thick,->] (x) -- (xt); \node at (xt) [above]{\LARGE $t^\mu$ \normalsize}; \draw [thick, ->] (-3,-1) -- node[midway, right] {geodesic line} (-2.75,0.25); \end{tikzpicture} \caption{Point-splitting} \label{geopic} \end{figure} A point close enough to $x^\mu$ can be represented as follows \begin{align} \label{eqgeo} x^\mu(\tau)=x^\mu+\tau t^\mu+\frac{1}{2} \tau^2 a^\mu+\frac{1}{6}\tau^3 b^\mu+..., \end{align} where $\tau$ is the proper length, and the coordinates of $x^\pm$ are $x^{\mu}{}^\pm=x^{\mu}(\tau=\pm \epsilon).$ A general two-dimensional conformally flat metric can be written as $ds^2=C(u,v)dudv$. The geodesic equations provide the relations between the parameters $t^\mu, a^\mu, b^\mu$: \begin{align} a^\mu=-\Gamma^\mu_{\nu\lambda}t^\nu t^\lambda, \quad b^\mu=-\Gamma^\mu_{\nu\lambda}(a^\nu t^\lambda+t^\nu a^\lambda)-t^\sigma \partial_\sigma \Gamma^\mu_{\nu\lambda}t^\nu t^\lambda. \end{align} It is enough to express $a^\mu$ and $b^\mu$ in terms of $t^\mu$ to find the finite part of the expectation value of the stress--energy tensor.
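The relations $a^\mu=-\Gamma^\mu_{\nu\lambda}t^\nu t^\lambda$ and $b^\mu=-\Gamma^\mu_{\nu\lambda}(a^\nu t^\lambda+t^\nu a^\lambda)-t^\sigma \partial_\sigma \Gamma^\mu_{\nu\lambda}t^\nu t^\lambda$ can be verified symbolically: substituting the expansion \eqref{eqgeo} into the geodesic equation, the residual must vanish through $O(\tau)$. A SymPy sketch for an illustrative conformal factor (the choice $C=e^{uv}$ is arbitrary):

```python
import sympy as sp

tau, tu, tv, u, v, u0, v0 = sp.symbols('tau t_u t_v u v u0 v0')
C = sp.exp(u*v)  # illustrative conformal factor for ds^2 = C(u,v) du dv

# only non-vanishing Christoffels: Gamma^u_{uu} = d_u ln C, Gamma^v_{vv} = d_v ln C
Gam = [[[sp.Integer(0) for _ in range(2)] for _ in range(2)] for _ in range(2)]
Gam[0][0][0] = sp.diff(sp.log(C), u)
Gam[1][1][1] = sp.diff(sp.log(C), v)

xs, t, P = [u, v], [tu, tv], {u: u0, v: v0}
# a^mu = -Gamma^mu_{nu lam} t^nu t^lam
a = [-sum(Gam[m][n][l].subs(P)*t[n]*t[l]
          for n in range(2) for l in range(2)) for m in range(2)]
# b^mu = -Gamma(a t + t a) - t^sigma d_sigma Gamma t t
b = [-sum(Gam[m][n][l].subs(P)*(a[n]*t[l] + t[n]*a[l])
          for n in range(2) for l in range(2))
     - sum(t[s]*sp.diff(Gam[m][n][l], xs[s]).subs(P)*t[n]*t[l]
           for s in range(2) for n in range(2) for l in range(2))
     for m in range(2)]

# geodesic x(tau) = x + tau t + tau^2 a/2 + tau^3 b/6
x = [u0 + tau*t[0] + tau**2*a[0]/2 + tau**3*b[0]/6,
     v0 + tau*t[1] + tau**2*a[1]/2 + tau**3*b[1]/6]

# residual of x''^mu + Gamma^mu_{nu lam}(x) x'^nu x'^lam, expanded in tau
res = [sp.expand(sp.diff(x[m], tau, 2)
       + sum(Gam[m][n][l].subs({u: x[0], v: x[1]})
             * sp.diff(x[n], tau)*sp.diff(x[l], tau)
             for n in range(2) for l in range(2)))
       for m in range(2)]
coeffs = [sp.expand(r).coeff(tau, k) for r in res for k in (0, 1)]
```

Both the $O(\tau^0)$ and $O(\tau^1)$ coefficients of the residual cancel identically, confirming the quoted expressions for $a^\mu$ and $b^\mu$.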
Another building block is the parallel transport matrix $e^\mu_\nu(\tau)$, solving the following equation: \begin{align} \frac{d e^\mu_\nu}{d\tau}+\Gamma^\mu_{\rho \sigma} \frac{dx^\rho}{d\tau}e^{\sigma}_{\nu} =0, \ \ e^{\mu}_\nu(\tau=0)=\delta_\nu^\mu. \end{align} Again, one expands the parallel transport matrix in powers of $\tau$: \begin{align*} e^\mu_\nu=\delta^\mu_\nu+\tau t^\mu_\nu+\frac{1}{2}\tau^2 a^\mu_\nu+... \end{align*} where \begin{align*} t^\mu_\nu=-\Gamma^\mu_{\rho \nu}t^\rho, \quad a^\mu_\nu=\Gamma^\mu_{\rho \nu}\Gamma^\rho_{\alpha\beta}t^\alpha t^\beta+\Gamma^\mu_{\rho\sigma}\Gamma^\sigma_{\alpha\nu}t^\rho t^\alpha-t^\alpha t^\rho \partial_\alpha \Gamma^\mu_{\rho \nu}. \end{align*} The expectation value of the stress--energy tensor in a thermal state with inverse temperature $\beta$ is given by \begin{align} \label{genT} \langle \hat{T}_{\mu\nu} \rangle_\beta=\langle \partial_\alpha \varphi(x^+)\partial_\beta \varphi(x^-)\rangle_\beta\left( e^{+\alpha}_\mu e^{-\beta}_\nu - \frac{1}{2}g_{\mu\nu} g^{\sigma\rho}e^{+\alpha}_\sigma e^{-\beta}_\rho\right)+\frac{1}{2}m^2g_{\mu\nu} \langle\varphi(x^+)\varphi(x^-)\rangle_\beta, \end{align} where $e_\mu^{\pm \alpha}=e_\mu^\alpha(\tau=\pm\epsilon)$ and the limit $\epsilon\to 0$ is taken. The result will contain terms that depend on $\epsilon$ and direction-dependent terms.
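The expansion coefficients $t^\mu_\nu$ and $a^\mu_\nu$ of the parallel transport matrix can be checked the same way, by substituting the power series into the transport equation and confirming that the residual vanishes through $O(\tau)$. A SymPy sketch (the conformal factor $C=e^{uv}$ is again an arbitrary illustrative choice):

```python
import sympy as sp

tau, tu, tv, u, v, u0, v0 = sp.symbols('tau t_u t_v u v u0 v0')
C = sp.exp(u*v)  # illustrative conformal factor

Gam = [[[sp.Integer(0) for _ in range(2)] for _ in range(2)] for _ in range(2)]
Gam[0][0][0] = sp.diff(sp.log(C), u)   # Gamma^u_{uu}
Gam[1][1][1] = sp.diff(sp.log(C), v)   # Gamma^v_{vv}

xs, t, P = [u, v], [tu, tv], {u: u0, v: v0}
G = lambda m, n, l: Gam[m][n][l].subs(P)

# geodesic data: a^mu = -Gamma^mu_{nu lam} t^nu t^lam
a = [-sum(G(m, n, l)*t[n]*t[l] for n in range(2) for l in range(2))
     for m in range(2)]

# expansion coefficients of e^mu_nu as quoted in the text
t_mn = [[-sum(G(m, r, n)*t[r] for r in range(2)) for n in range(2)]
        for m in range(2)]
a_mn = [[sum(G(m, r, n)*G(r, al, be)*t[al]*t[be]
             for r in range(2) for al in range(2) for be in range(2))
         + sum(G(m, r, s)*G(s, al, n)*t[r]*t[al]
               for r in range(2) for s in range(2) for al in range(2))
         - sum(t[al]*t[r]*sp.diff(Gam[m][r][n], xs[al]).subs(P)
               for al in range(2) for r in range(2))
         for n in range(2)] for m in range(2)]

# x(tau) to the order needed, and e(tau) = delta + tau t_mn + tau^2 a_mn / 2
x = [u0 + tau*t[0] + tau**2*a[0]/2, v0 + tau*t[1] + tau**2*a[1]/2]
e = [[sp.KroneckerDelta(m, n) + tau*t_mn[m][n] + tau**2*a_mn[m][n]/2
      for n in range(2)] for m in range(2)]

# transport equation residual: de/dtau + Gamma^m_{rs}(x) dx^r/dtau e^s_n
res = [[sp.expand(sp.diff(e[m][n], tau)
        + sum(Gam[m][r][s].subs({u: x[0], v: x[1]})
              * sp.diff(x[r], tau)*e[s][n]
              for r in range(2) for s in range(2)))
        for n in range(2)] for m in range(2)]
coeffs = [sp.expand(res[m][n]).coeff(tau, k)
          for m in range(2) for n in range(2) for k in (0, 1)]
```

All eight low-order coefficients cancel, confirming the quoted formulas for $t^\mu_\nu$ and $a^\mu_\nu$.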
For example, for the massless field in the generic conformally flat background one has \cite{Davies:1977}: \begin{align} \langle T_{\mu \nu}\rangle = - \bigg[ \frac{1}{4 \pi \epsilon^2 (t_{\alpha} t^{\alpha})} + \frac{R}{24 \pi} \bigg] \bigg[ \frac{t_{\mu} t_{\nu}}{t_{\alpha} t^{\alpha}} - \frac{1}{2} g_{\mu \nu} \bigg] + \Theta_{\mu \nu}, \end{align} and the regularized stress--energy tensor reads: \begin{align} \langle :T_{\mu \nu}: \rangle =\Theta_{\mu \nu} + \frac{R }{48 \pi } g_{\mu \nu}, \end{align} with \begin{align} \Theta_{uu} &= -\frac{1}{12 \pi} C^{1/2} \partial^2_u C^{-1/2}+ \text{state dependent terms},\\ \Theta_{vv} &= -\frac{1}{12 \pi} C^{1/2} \partial^2_v C^{-1/2} + \text{state dependent terms},\\ \Theta_{uv} &= \Theta_{vu} =0. \end{align} The tensor is conserved for an invariant state only if one omits the direction--dependent terms, while averaging over directions leads to quantities which are not covariantly conserved. \section{Boulware, Unruh and Hartle--Hawking states} \label{appendixB} \setcounter{equation}{0} \renewcommand\theequation{C.\arabic{equation}} There are several different ways to define Boulware, Unruh and Hartle--Hawking states for {\it massless scalar fields in four dimensions}. Not all of them can be straightforwardly generalized to the massive case. Here we repeat the standard constructions and consider their generalizations to the massive case. \subsection{Analytic continuation of the positive frequency modes} We look for a complete set of solutions of the massless Klein-Gordon equation in either the left or right (Schwarzschild) quadrant of the entire black--hole space--time in four dimensions (see fig. \ref{Schpic}).
We require these functions to have a definite sign of frequency with respect to the time-like Killing vector $\frac{\partial}{\partial t}$ (in the left quadrant $ - \frac{\partial}{\partial t}$): \begin{eqnarray} \label{set1} &\overrightarrow{u}_{\omega lm} (x) = (4 \pi \omega)^{-1/2} e^{-i \omega t} \overrightarrow{R}_l(\omega | r) Y_{lm} (\theta, \varphi), \nonumber \\ &\overleftarrow{u}_{\omega lm} (x) = (4 \pi \omega)^{-1/2} e^{-i \omega t} \overleftarrow{R}_l(\omega | r) Y_{lm} (\theta, \varphi), \end{eqnarray} where $\overrightarrow{R}_l(\omega | r)$ and $\overleftarrow{R}_l(\omega | r)$ are solutions of the radial equation corresponding to outgoing and incoming waves respectively \cite{Candelas:1980zt} and $Y_{lm} (\theta, \varphi)$ are the standard spherical harmonics; $x = (t,r,\theta, \varphi)$. The Boulware two-point function is \begin{equation} \label{boulware4d} W_B (x,x')= \sum_{lm} \int^{\infty}_0 \frac{d \omega}{4 \pi \omega} e^{-i \omega (t-t')} Y_{lm}(\theta,\phi) Y^*_{lm}(\theta',\phi') \left[ \overrightarrow{R}_l (\omega|r) \overrightarrow{R}^*_l (\omega|r')+\overleftarrow{R}_l (\omega|r) \overleftarrow{R}^*_l (\omega|r') \right]. \end{equation} To define the Unruh state the Kruskal extension is needed: \begin{eqnarray} \label{kruskal} U=t-r-2M \log(r/2M-1), \ \ \Tilde{U}=-4Me^{-U/4M}, \nonumber \\ V=t+r+2M \log(r/2M-1), \ \ \ \ \ \ \ \Tilde{V}=4M e^{V/4M}. \nonumber \end{eqnarray} The Unruh modes are positive--frequency w.r.t. $\Tilde{U}$ and near the past horizon behave as follows: \begin{equation} y_{\omega lm} \sim e^{-i \omega \Tilde{U}} Y_{lm}(\theta, \varphi).\label{set2} \end{equation} They are analytic functions in the lower half-plane of the complex variable $\Tilde{U}$.
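The Boltzmann-type weights $e^{\pm 2\pi M\omega}$ that enter the normalized Unruh combinations originate from this analyticity requirement: continuing $|\Tilde{U}|^{i4M\omega}$ through the lower half-plane of $\Tilde{U}$ produces a relative factor $e^{4\pi M\omega}$ between the two sides of the horizon. A minimal numerical illustration (the values of $M$ and $\omega$ are arbitrary):

```python
import numpy as np

M, omega = 1.0, 0.7      # arbitrary illustrative values
a = 4.0*M*omega
delta = 1e-12            # approach the real axis from the lower half-plane

# U^{i 4 M omega} evaluated just below the real axis on both sides of U = 0
right = complex(2.5, -delta)**(1j*a)    # U > 0
left = complex(-2.5, -delta)**(1j*a)    # U < 0
ratio = left/right
# continuation through the lower half-plane gives e^{pi a} = e^{4 pi M omega},
# i.e. the relative weights e^{+-2 pi M omega} after normalization
```

The ratio is real and equal to $e^{4\pi M\omega}$ up to terms of order $\delta$.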
On the other hand the behaviour of the modes (\ref{set1}), on the past horizon of the right patch and, respectively, on the future horizon of the left patch, is as follows \begin{eqnarray} &&\overrightarrow{u}_{\omega lm}^R (x) \approx (4 \pi \omega)^{-1/2} \left|\frac{\Tilde{U}}{4M}\right|^{i4M\omega} Y_{lm} (\theta, \varphi), \\ && \overrightarrow{u}_{\omega lm}^L (x) \approx (4 \pi \omega)^{-1/2} \left|\frac{\Tilde{U}}{4M}\right|^{-i4M\omega} Y_{lm} (\theta, \varphi). \end{eqnarray} Then the normalized combinations \begin{equation} \overrightarrow{y}_{\omega lm} = \frac{1}{\sqrt{|2 \sinh (4 \pi M \omega)|}} \Big[e^{2 \pi M \omega} \overrightarrow{u}^R_{\omega lm} + e^{-2 \pi M \omega} (\overrightarrow{u}^{L}_{\omega lm})^*\Big], \end{equation} have the same analyticity properties as the modes (\ref{set2}) and are equivalent to them. We can then compute the Wightman function of the Unruh state when the two points are located in the right Schwarzschild patch: \begin{equation} \label{unruh4d} W_U (x,x') = \sum_{lm} \int^{+\infty}_{-\infty} d \omega \bigg[ \frac{\overrightarrow{u}_{\omega lm}(x) \overrightarrow{u}^*_{\omega lm}(x')}{1-e^{-\frac{2 \pi \omega}{\kappa}}} +\overleftarrow{u}_{\omega lm}(x) \overleftarrow{u}^*_{\omega lm}(x') \theta(\omega) \bigg], \end{equation} where $\kappa = (4M)^{-1}$ is the surface gravity. In a similar manner, the modes that are positive frequency w.r.t. $ \frac{\partial}{\partial \Tilde{V}}$ are: \begin{equation} \overleftarrow{y}_{\omega lm} = \frac{1}{\sqrt{2 \sinh (4 \pi M \omega)}} \Big[e^{-2 \pi M \omega} (\overleftarrow{u}^R_{\omega lm})^* + e^{2 \pi M \omega} \overleftarrow{u}^L_{\omega lm}\Big].
\end{equation} They give rise to the Hartle-Hawking Wightman function: \begin{equation} \label{hh4d} W_H (x,x') = \sum_{lm} \int^{+\infty}_{-\infty} {d \omega} \bigg[ \frac{\overrightarrow{u}_{\omega lm}(x) \overrightarrow{u}^*_{\omega lm}(x')}{1-e^{-\frac{2 \pi \omega}{\kappa}}} + \frac{\overleftarrow{u}^*_{\omega lm}(x) \overleftarrow{u}_{\omega lm}(x')}{e^{\frac{2 \pi \omega}{\kappa}}-1} \bigg]. \end{equation} The outgoing waves of the Unruh state are thermally distributed at temperature $T=\frac{\kappa}{2 \pi} = \frac{1}{8 \pi M}$; both the outgoing and incoming waves of the Hartle-Hawking state are thermally distributed at the same temperature. In the two dimensional case these formulae reduce to: \begin{eqnarray} W_U (x,x') = \int^{+\infty}_{-\infty} d \omega \bigg[ \frac{\overrightarrow{u}_{\omega}(x) \overrightarrow{u}^*_{\omega}(x')}{1-e^{-\frac{2 \pi \omega}{\kappa}}} +\overleftarrow{u}_{\omega}(x) \overleftarrow{u}^*_{\omega}(x') \theta(\omega) \bigg], \nonumber \\ W_H (x,x') = \int^{+\infty}_{-\infty} {d \omega} \bigg[ \frac{\overrightarrow{u}_{\omega}(x) \overrightarrow{u}^*_{\omega}(x')}{1-e^{-\frac{2 \pi \omega}{\kappa}}} + \frac{\overleftarrow{u}^*_{\omega}(x) \overleftarrow{u}_{\omega}(x')}{e^{\frac{2 \pi \omega}{\kappa}}-1} \bigg]. \label{WUWH1} \end{eqnarray} \subsection{Two dimensions again} The following construction is valid for any stationary background, provided there are left and right movers.
Stationarity implies that a Wightman function with zero anomalous quantum averages depends only on the difference of times: \begin{multline} W (x,x') = \int^{+\infty}_{0} {d \omega} \int^{+\infty}_{0} {d \omega'} \bigg[ \langle a_{\omega} a_{\omega'}^{\dagger} \rangle \overrightarrow{u}_{\omega}(x) \overrightarrow{u}^*_{\omega'}(x') + \langle a_{\omega}^{\dagger} a_{\omega'} \rangle \overrightarrow{u}^*_{\omega}(x) \overrightarrow{u}_{\omega'}(x') + \\ +\langle b_{\omega} b_{\omega'}^{\dagger} \rangle\overleftarrow{u}_{\omega}(x) \overleftarrow{u}^*_{\omega'}(x') + \langle b_{\omega}^{\dagger} b_{\omega'} \rangle \overleftarrow{u}^*_{\omega}(x) \overleftarrow{u}_{\omega'}(x') \bigg]. \end{multline} In this general setting the Unruh and the Hartle-Hawking states correspond respectively to the following choices \begin{equation} \langle a_{\omega}^{\dagger} a_{\omega'} \rangle = \frac{1}{e^{\frac{2 \pi \omega}{\kappa}}-1} \delta(\w -\w'), \quad \langle b_{\omega}^{\dagger} b_{\omega'} \rangle = 0 \end{equation} \begin{equation} \langle a_{\omega}^{\dagger} a_{\omega'} \rangle = \langle b_{\omega}^{\dagger} b_{\omega'} \rangle = \frac{1}{e^{\frac{2 \pi \omega}{\kappa}}-1}\delta(\w -\w'), \end{equation} so that \begin{eqnarray} \label{WUWH} && W_U (x,x') = \int^{+\infty}_{0} d \omega \bigg[ \frac{\overrightarrow{u}_{\omega}(x) \overrightarrow{u}^*_{\omega}(x')}{1-e^{-\frac{2 \pi \omega}{\kappa}}} + \frac{\overrightarrow{u}^*_{\omega}(x) \overrightarrow{u}_{\omega}(x')}{e^{\frac{2 \pi \omega}{\kappa}}-1} +\overleftarrow{u}_{\omega}(x) \overleftarrow{u}^*_{\omega}(x') \bigg] , \\ && W_H (x,x') = \int^{+\infty}_{0} {d \omega} \bigg[ \frac{\overrightarrow{u}_{\omega}(x) \overrightarrow{u}^*_{\omega}(x')}{1-e^{-\frac{2 \pi \omega}{\kappa}}} + \frac{\overrightarrow{u}^*_{\omega}(x) \overrightarrow{u}_{\omega}(x')}{e^{\frac{2 \pi \omega}{\kappa}}-1} +\frac{\overleftarrow{u}_{\omega}(x) \overleftarrow{u}^*_{\omega}(x')}{1-e^{-\frac{2 \pi \omega}{\kappa}}} + 
\frac{\overleftarrow{u}^*_{\omega}(x) \overleftarrow{u}_{\omega}(x')}{e^{\frac{2 \pi \omega}{\kappa}}-1} \bigg] \cr && \end{eqnarray} These expressions are equivalent to (\ref{WUWH1}) if the following condition holds: \begin{align} \label{condition} u_{-\w}^* (x) u_{-\w} (x') = - u_\w (x) u_\w^*(x') \end{align} for both outgoing and incoming waves; this condition is verified in the two-dimensional Schwarzschild spacetime. In the massive case, gluing the modes \eqref{assymtoticsABC} in the classically permitted and forbidden regions gives the relation \begin{equation} e^{2 i \delta_{\w}} = \frac{i \w - \sqrt{m^2-\w^2}}{i \w + \sqrt{m^2-\w^2}} e^{-2 i \w r^*_\text{turning}}. \end{equation} From here one can show that \begin{align*} \varphi_\w(r^*_1)\varphi_\w(r^*_2) = \varphi_{-\w}(r^*_1)\varphi_{-\w}(r^*_2), \end{align*} and this provides another justification of Eq. \eqref{WBH}. \section{Massive modes near the horizon} \label{appendixC} \setcounter{equation}{0} \renewcommand\theequation{D.\arabic{equation}} In the horizon limit the modes with $|\w| < m$ behave as follows \begin{equation} \phi_\w (r^*) = C(\w) K_{4 i \w M} (4M m \xi) \sim \cos(\w r^* + \delta_\w), \ \ \ \xi^2 = \frac{r}{2M}-1 \end{equation} with \begin{equation} \delta_\w = \frac{\pi}{2} + r_g \w \big( 2 \log(m r_g)-1 \big) - {\rm arg}\, \Gamma (1+ i \w r_g). \end{equation} It follows that $\delta_0 = \frac{\pi}{2} $. Noting that ${\rm arg}\, \Gamma (1+ i \w r_g) = - {\rm arg}\, \Gamma (1- i \w r_g)$ again points towards \eqref{WBH}. \end{appendices}
\section{Introduction} Accretion-powered millisecond pulsars (AMPs) are neutron stars (NS) that experience transient accretion episodes and show millisecond pulsations corresponding to the stellar rotational period. Presently, 13 objects of this class are known \citep[see reviews by][]{P06,W06,2010arXiv1007.1108P}. The pulsations arise because the NS magnetic field channels the accretion flow on to the NS magnetic poles. Such an accretion flow produces nearly sinusoidal pulse profiles in most AMPs, but several of them show strong pulse-shape evolution and peculiar double-peaked profiles (e.g. \citealt{2008ApJ...675.1468H, 2009MNRAS.400..492I}). The energy spectra of AMPs contain 0.5--1 keV blackbody emission from the neutron star surface and a Comptonization component dominating at higher energies, probably associated with the accretion shock (e.g., \citealt*{2002MNRAS.331..141G}; \citealt{2003MNRAS.343.1301P, 2005A&A...444...15F, 2007A&A...464.1069F}). In addition, emission from the accretion disc at soft energies ($\lesssim 2$ keV) has also been detected in {\it XMM-Newton} observations of XTE J1751--305 \citep{2005MNRAS.359.1261G}, XTE J1807--294 \citep{2005A&A...436..647F}, SAX J1808.4--3658 \citep{2009MNRAS.396L..51P} and recently in IGR J17511--3057 \citep[hereafter IGR17511, ][]{2010MNRAS.407.2575P}. A reflection component of moderate amplitude has also been detected in AMPs \citep{2005MNRAS.359.1261G, 2009MNRAS.400..492I}. The pulse profiles are often rather sinusoidal with a slight skewness \citep{P06} and show clear energy dependence. Pulses at soft energies peak at a later phase, resulting in ``soft lags'', as was first seen in SAX J1808.4--3658 \citep*{1998ApJ...504L..27C}.
The origin of the soft lag is most likely related to the different angular emission patterns of the blackbody and Comptonized components, which naturally explains why the phase lags seem to saturate at $\sim 7$--$10$ keV, where the emission of the blackbody component becomes negligible \citep{2002MNRAS.331..141G, 2003MNRAS.343.1301P, 2005MNRAS.359.1261G}. However, some AMPs also show a (not fully understood) decrease in the lag above $\sim 10$ keV (seen in IGR J00291+5934 as indicated by \citealt{2005ApJ...622L..45G} and \citealt{2005A&A...444...15F}; IGR17511\ might also have this decrease above $\sim 20$ keV, see \citealt{2010arXiv1012.0229F}). Information about this lag can be used to study the properties of the accretion shock and the structure of the hotspot. \subsection{IGR J17511--3057} IGR17511\ was discovered on 2009 September 12 (MJD 55087) by the \textit{INTEGRAL} observatory \citep{2009ATel.2196....1B} during the Galactic bulge monitoring program \citep{2007A&A...466..595K}. The 245 Hz pulsations were detected by {\it RXTE}\ \citep{2009ATel.2197....1M}, confirming the AMP nature of IGR17511. A {\it Chandra}/HETG observation provided the source position of (J2000) RA=$17^{\rm h}51^{\rm m}08\fs66$, Dec=$-30\degr 57\arcmin 41\farcs0$ ($1\sigma$ error of $0\farcs6$, \citealt{2009ATel.2215....1N}). A near infrared counterpart of magnitude $K_{\textrm s} = 18\fm0 \pm 0.1$ was identified by \citet{2009ATel.2233....1T} within the {\it Chandra} error box, but no radio counterpart was detected with a $3\sigma$ upper limit of 0.10 mJy \citep*{2009ATel.2232....1M}. The source faded beyond the {\it RXTE}\ detection limit after 2009 October 11 (MJD 55113, \citealt{2009ATel.2237....1M}). Type I X-ray bursts were observed in IGR17511\ with {\it Swift} \citep{2009ATel.2198....1B} and burst oscillations immediately after with {\it RXTE}\ (\citealt{2009ATel.2199....1W}, see \citealt{2010arXiv1012.0229F} for the analysis of all detected bursts).
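Photospheric radius expansion bursts of this kind yield distance upper limits by equating the peak burst flux to the empirical Eddington luminosity, $d \le \sqrt{L_{\rm edd}/(4\pi F_{\rm peak})}$. A minimal sketch (the peak flux used here is a hypothetical placeholder, not a measured value for this source):

```python
import numpy as np

KPC_CM = 3.086e21   # cm per kpc
L_EDD = 3.79e38     # erg/s, empirical Eddington limit (Kuulkers et al. 2003)

def distance_upper_limit_kpc(peak_flux_cgs):
    """Distance at which a PRE burst with the given peak bolometric flux
    [erg cm^-2 s^-1] reaches the empirical Eddington luminosity."""
    return np.sqrt(L_EDD/(4.0*np.pi*peak_flux_cgs))/KPC_CM

# hypothetical peak flux of 4e-8 erg/cm^2/s gives d_max ~ 8.9 kpc
d_max = distance_upper_limit_kpc(4.0e-8)
```

The published limits quoted below follow this scaling, with the measured peak fluxes of the individual bursts.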
Several distance constraints have been reported based on these data. The analysis of {\it Swift} data by \citet{2010A&A...509L...3B} yielded an upper limit on the distance of $10.1 \pm 0.5$ kpc, derived using the empirical Eddington limit $L_{\rm edd} \approx (3.79 \pm 0.15) \times 10^{38}$ erg s$^{-1}$ for photospheric radius expansion bursts \citep{2003A&A...399..663K}. Another upper limit of 7.5 kpc was derived by \citet{2010arXiv1012.0229F} via an independent analysis of the type I bursts. Using the same method, \citet{2010MNRAS.407.2575P} reported a similar upper limit as \citet{2010A&A...509L...3B} from {\it XMM-Newton} data, but the analysis of {\it RXTE}\ data by \citet{2010MNRAS.tmp.1363A} gave a tighter upper limit of $6.9$ kpc. \citet{2010MNRAS.tmp.1363A} also used the distance approximation of \citet{2008ApJS..179..360G}, which resulted in an upper limit of 4.4 kpc for a NS of mass $1.4 \ensuremath{{\rm M}_{\odot}}$, radius $10$ km and hydrogen mass fraction $X = 0.7$. The corresponding upper limit for $X = 0$ would be 5.76 kpc, but the absence of hydrogen is inconsistent with the fact that the companion of IGR17511\ seems to be a main sequence star \citep{2010MNRAS.407.2575P, 2011A&A...526A..95R}. In light of these constraints, we adopt a distance of 5 kpc. The source light curve shows an exponential flux decay, commonly seen in AMPs (a ``slow decay'' in the terminology of \citealt{2008ApJ...675.1468H}). The pulse profiles are single-peaked, indicating that we probably see the emission coming mainly from one emitting spot on the neutron star surface (i.e. the contribution from the secondary spot does not produce a distinct secondary feature; however, ``flattened'' pulse profile minima may suggest the presence of secondary spot emission). In this paper, we present the results of our spectral and timing analysis of IGR17511\ based on \textit{Swift} and \textit{RXTE} observations.
We study the evolution of the phase-averaged and phase-resolved spectra, phase lags and pulse profile changes during the outburst. \section{Observational data} The \textit{RXTE} data covering the outburst of IGR17511\ (ObsID 94041) were reduced using {\sc heasoft} 6.8 and the {\sc CALDB}. We used data taken both by \textit{RXTE}/PCA (3--25 keV) and HEXTE (25--200 keV). A standard 0.5 per cent systematic error was applied to the PCA spectra \citep{2006ApJS..163..401J}. To keep the calibration uniform, we used data from PCA unit 2 only. The source spectrum is contaminated by the Galactic ridge emission \citep{2009Natur.458.1142R}. To take it into account for the {\it RXTE}/PCA spectra, we produced a spectrum from the observations where both IGR17511\ and the nearby AMP XTE J1751--305 were in a quiescent state (MJD 55115 -- 55126). This spectrum, mainly affecting channels below 15 keV, was subtracted from all spectral and timing data. For the \textit{Swift}/XRT observations, we only considered window-timing mode data, because the photon-counting mode data suffered from photon pile-up \citep{2010A&A...509L...3B}. We reduced the {\it Swift}/XRT data with {\sc xrtpipeline} v.0.12.3 using standard filtering and screening criteria for the event selection. We used circular regions of 20 pixel radius to extract the spectral data. The {\sc xrtexpomap} task was used to generate the exposure maps and the ancillary response files were generated with the {\sc xrtmkarf} task to account for different extraction regions, vignetting and point spread function corrections. Ancillary response files (ARFs) of individual {\it Swift}/XRT snapshots were averaged together; each ARF was co-added with a ``weight'' equal to the relative contribution of photons detected in the snapshot to the overall photon number collected from all snapshots. The {\it Swift}/XRT redistribution matrices (v.011) were taken from the {\sc CALDB}.
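The count-weighted averaging of the per-snapshot ARFs described above can be sketched as follows (the arrays are illustrative; real effective-area curves come from {\sc xrtmkarf} output):

```python
import numpy as np

# per-snapshot effective area [cm^2] on a common energy grid (illustrative)
arfs = np.array([[120.0, 95.0, 40.0],
                 [118.0, 96.0, 42.0],
                 [121.0, 94.0, 39.0]])
# photons detected in each snapshot
counts = np.array([1500.0, 400.0, 2100.0])

# weight = relative contribution of each snapshot to the total photon number
weights = counts / counts.sum()
arf_avg = weights @ arfs   # count-weighted average ARF, bin by bin
```

Each bin of the averaged ARF lies between the per-snapshot extremes, as expected for a convex combination.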
{\it Swift}/XRT and {\it RXTE}/HEXTE spectra were grouped such that each bin contained at least 200 counts. The type I X-ray bursts \citep{2010A&A...509L...3B, 2010MNRAS.407.2575P, 2010MNRAS.tmp.1363A, 2011A&A...526A..95R, 2010arXiv1012.0229F} were screened out from our analysis. The spectral analysis was done using {\sc XSPEC} v.12 \citep{1996ASPC..101...17A}. Uncertainties of spectral and timing best-fitting parameters correspond to the 90 per cent confidence level, unless otherwise stated. \begin{table} \caption{Data groupings} \begin{tabular}{|lll|} \hline Group code & MJD interval/\textit{RXTE} & MJD interval/\textit{SWIFT} \\ \hline T & 55087.9--55109.3 & 55087.8--55107.5 \\ 1 & 55087.9--55088.9 & 55087.8--55088.7\\ 2 & 55089.2--55090.5 & 55089.6--55090.8 \\ 3 & 55091.2--55094.0 & 55092.8--55093.9 \\ 4 & 55094.0--55096.8 & 55094.4--55095.5\\ 5 & 55097.2--55099.5 & -- \\ 6 & 55100.3--55101.9 & -- \\ 7 & 55102.2--55104.9 & 55102.4--55105.0\\ 8 & 55105.4--55109.3 & 55107.0--55107.5\\ \hline \end{tabular} \label{t:groups} \end{table} \section{Spectral analysis} \label{spec_anal} In this section, we describe the results of our spectral analysis of the source. In order to improve statistics, we have grouped individual spectra as described in Table \ref{t:groups}. \subsection{Spectral model} \label{spec_model} The spectrum of IGR17511\ is typical for an AMP and can be described as a composition of accretion disc emission (around 1--2 keV, not visible in the \textit{RXTE} range), a blackbody originating from the hotspot (2--10 keV) and a hard X-ray tail generated by thermal Comptonization in the accretion shock located above the neutron star surface (contributing in the whole range of 1--200 keV and dominating above 10 keV), similar to other objects of this kind (e.g. \citealt{2002MNRAS.331..141G,2005MNRAS.359.1261G,2005A&A...436..647F,2005A&A...444...15F,2007A&A...464.1069F}; see Fig. \ref{f:spectrum}).
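The minimum-counts grouping applied to the XRT and HEXTE spectra above can be sketched as a simple channel-merging pass (a hypothetical helper for illustration; in practice this is done with standard FTOOLS):

```python
def group_min_counts(counts, min_counts=200):
    """Merge consecutive spectral channels until each bin holds at least
    `min_counts` counts; leftover channels are folded into the last bin.
    Returns a list of inclusive (start, stop) channel index pairs."""
    edges, start, acc = [], 0, 0
    for i, c in enumerate(counts):
        acc += c
        if acc >= min_counts:
            edges.append((start, i))
            start, acc = i + 1, 0
    if acc > 0 and edges:
        s, _ = edges[-1]
        edges[-1] = (s, len(counts) - 1)   # merge the underfilled tail
    return edges

# example: channel counts for a toy spectrum
edges = group_min_counts([120, 90, 40, 60, 300, 25, 25], 200)
```

Every resulting bin then carries enough counts for Gaussian statistics to be a reasonable approximation in the $\chi^2$ fit.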
To model the thermal Comptonization continuum we used the \textsc{compps} model of \citet{1996ApJ...470..249P}. A fluorescent iron line at 6.4 keV and the Compton reflection of the \textsc{compps} component \citep{1995MNRAS.273..837M} were also included in the fitting. The spectral model also includes the interstellar absorption model \textsc{phabs}, parametrized by the hydrogen column density $N_\mathrm{H}$. The described approach corresponds to the \textsc{phabs*(diskbb+bbodyrad+ compps+diskline)} model in {\sc XSPEC}. For \textsc{compps} we assumed a slab geometry; the model is characterized by the Thomson optical depth $\tau_{\rm T}$ and the temperature of the hot electrons $\ensuremath{T_\mathrm{e}}$. The seed photons for Comptonization have a temperature $\ensuremath{T_\mathrm{seed}}$ and the surface area is denoted as $\Sigma_\mathrm{shock}$. The blackbody component has a temperature \ensuremath{T_\mathrm{bb}}\ and its surface area is denoted as $\Sigma_\mathrm{spot}$. The apparent spot radii at infinity can be computed from the \textsc{bbodyrad} and \textsc{compps} model normalizations: $\Sigma=\pi R^2= \pi K D^2_{10}$, where $K$ is the normalization obtained from the fits, $R$ is the apparent radius in kilometres and $D_{10}$ is the distance in units of 10 kpc. We denote these radii as $R_\mathrm{spot}$ and $R_\mathrm{shock}$ for \textsc{bbodyrad} and \textsc{compps}, respectively. The Compton reflection from the accretion disc is parametrized by the amplitude ${\Re} = \Omega / 2\pi$, where $\Omega$ is the solid angle covered by the reflecting medium \citep{1995MNRAS.273..837M}. We used the \textsc{diskline} model for the iron line and fixed the inner disc radius at $r_{\rm in} = 10 r_{\rm s}$, where $r_{\rm s} = 2GM / c^2$ is the Schwarzschild radius. We also assumed that the radial emissivity profile of the illuminating continuum flux is $\propto r^{-3}$. For \textsc{diskline}, as well as for \textsc{compps}, we assumed an inclination of $i = 60\degr$.
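The conversion from the fitted normalizations to apparent emission-region sizes, $\Sigma=\pi R^2=\pi K D_{10}^2$, is a one-liner; a sketch using the adopted distance of 5 kpc (the normalization value in the example is chosen only to illustrate the scaling):

```python
import numpy as np

def apparent_radius_km(K_norm, distance_kpc):
    """R = sqrt(K) * D10 [km] from the bbodyrad/compps normalization."""
    return np.sqrt(K_norm) * distance_kpc / 10.0

def radius_from_area_km(sigma_km2):
    """Invert Sigma = pi R^2 for the emitting areas quoted in the tables."""
    return np.sqrt(sigma_km2 / np.pi)

r1 = apparent_radius_km(100.0, 5.0)   # K = 100 is an illustrative value
r2 = radius_from_area_km(49.0)        # Sigma ~ 49 km^2 (group T) -> R ~ 3.9 km
```

An emitting area of $\sim 49$ km$^2$ thus corresponds to a spot radius of roughly 4 km, a small fraction of the stellar surface.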
The \textsc{diskbb} model component was only included in the joint analysis of {\it Swift}\ and {\it RXTE}\ data (Section \ref{s:swift}) to account for the soft X-ray accretion disc emission. The respective model parameters are the inner disc temperature \ensuremath{T_\mathrm{in}}\ and the apparent inner disc radius $\ensuremath{R_\mathrm{in}}$, which can be computed from the model normalization $K_\mathrm{dbb}$ as $\ensuremath{R_\mathrm{in}} = D_{10} \sqrt{K_\mathrm{dbb} / \cos i}$. \begin{figure} \centerline{\epsfig{file=fig_spectra.eps,width=8cm}} \caption{The joint observed spectrum of IGR17511\ from the \textit{RXTE} and \textit{Swift} satellites collected during the entire outburst (group T). Red, green and blue data points represent {\it Swift}/XRT, {\it RXTE}/PCA and {\it RXTE}/HEXTE, respectively. Solid, dotted, long-dashed, dashed and dot-dashed curves represent the total spectrum, Comptonization continuum, reflection and iron line, blackbody emission from the hotspot and the disc blackbody, respectively. The lower panel shows the residuals of the fit. The fit parameters are given in Table \ref{t:fits} (group T for joint {\it RXTE}\ and {\it Swift}\ spectra). Error bars correspond to 1$\sigma$.} \label{f:spectrum} \end{figure} \begin{table*} \centering \begin{minipage}{150mm} \caption{Results of spectral fitting with the Comptonization model with constant areas for the blackbody and Comptonization components (Sect. \ref{s:swift}, \ref{c:rxte}). Left column indicates the data group.
Letter ``F'' indicates a fixed parameter.\label{t:fits} } \begin{tabular}{|lllllllllll|} \hline Group & \ensuremath{T_\mathrm{e}} & $\tau_{\rm T}$ & \ensuremath{T_\mathrm{seed}} & ${\Re}$ & \ensuremath{T_\mathrm{bb}} & $\Sigma$ & $EW$ & \ensuremath{T_\mathrm{in}} & $\ensuremath{R_\mathrm{in}}$ & $\chi^2$/d.o.f \\ & keV & & keV & & keV & km$^2$ & eV & keV & km & \\ \hline T \footnote{{\it RXTE}\ and {\it Swift}, interstellar absorption yields $\ensuremath{N_\mathrm{H}}=(0.88_{-0.24}^{+0.21}) \times 10^{22} \mathrm{cm}^{-2}$.} & $30 \pm 2$ & $1.80 _{-0.10}^{+0.12 }$ &$ 1.05 _{- 0.11 }^{+ 0.10 }$ & $0.34\pm0.09$ & $0.62 \pm 0.07 $ & $49 _{- 19 }^{+ 25 }$ & $ 62 _{- 24 }^{+ 25 }$ & $0.24\pm 0.07$ & $40_{-27}^{+46} $ & 271/347 \\ T \footnote{{\it RXTE}\ only.} & $ 31 \pm 2 $ & $ 1.77 _{- 0.08 }^{+ 0.11 }$&$ 1.00 _{- 0.17 }^{+ 0.11 }$ & $ 0.36 \pm 0.08 $ & $ 0.58 _{- 0.11 }^{+ 0.08 }$ & $ 59 _{- 19 }^{+ 64 }$& $ 57 _{- 23 }^{+ 25 }$ & & & 193/161 \\ 1--4 &$ 33 _{- 3 }^{+ 2 }$ & $ 1.70_{- 0.08 }^{+ 0.12 }$ &$ 1.01 _{- 0.17 }^{+ 0.11 }$ & $ 0.40 _{- 0.10 }^{+ 0.09 }$& $ 0.59 _{- 0.12 }^{+ 0.08 }$ & $ 93 _{- 23 }^{+ 77 }$ & $ 73 _{- 23 }^{+ 25 }$ & & & 151/161\\ 5--8 & $ 28 \pm 2 $ & $ 1.88 _{- 0.10}^{+ 0.13 }$ &$ 1.00 _{- 0.20 }^{+ 0.13 }$ & $ 0.33 _{- 0.11 }^{+ 0.12 }$& $ 0.60 _{- 0.13 }^{+ 0.09 }$ & $ 46 _{- 17 }^{+ 65 }$ & $ 44 _{- 26 }^{+ 30 }$ & & & 208/161 \\ \hline 1 & 30F& $ 1.85 _{- 0.02 }^{+ 0.01 }$ & $ 1.09 _{- 0.10}^{+ 0.08 }$ & 0.3F & $ 0.66 _{- 0.07 }^{+ 0.05 }$ & $ 63 _{- 14 }^{+ 31 }$ & $ 95 \pm 22 $ & & & 139/163 \\ 2 & 30F& $ 1.86 _{- 0.01 }^{+ 0.02 }$ & $ 1.16 _{- 0.07 }^{+ 0.06 }$ & 0.3F & $ 0.71 _{- 0.05 }^{+ 0.04 }$ & $ 48 _{- 8 }^{+ 13 }$ & $ 106 _{- 20 }^{+ 19 }$ & & & 163/163 \\ 3 & 30F& $ 1.86 _{- 0.02 }^{+ 0.01 }$ & $ 1.06 _{- 0.10}^{+ 0.07 }$ & 0.3F & $ 0.64 _{- 0.07 }^{+ 0.05 }$ & $ 61 _{- 14 }^{+ 29 }$ & $ 102 \pm 20 $ & & & 149/163 \\ 4 & 30F& $ 1.82 \pm 0.02 $ & $ 1.09 _{- 0.13 }^{+ 0.08 }$ & 0.3F & $ 0.65 _{- 0.09 }^{+ 0.05 }$ & $ 46
_{-11}^{+29}$ & $ 76 \pm 21 $ & & & 163/163 \\ 5 & 30F& $ 1.84 _{- 0.01 }^{+ 0.02 }$ & $ 1.11 _{- 0.13 }^{+ 0.07 }$ & 0.3F & $ 0.66 _{- 0.09 }^{+ 0.05 }$ & $ 38 _{-9 }^{+22}$ & $ 72 \pm 20 $ & & & 167/163 \\ 6 & 30F& $ 1.85 _{- 0.03 }^{+ 0.08 }$ & $ 1.02 _{- 0.45 }^{+ 0.12 }$ & 0.3F & $ 0.60_{- 0.09 }^{+0.07 }$ & $46_{-16}^{+40}$ & $ 53 \pm 23 $ & & & 171/163 \\ 7 & 30F& $ 1.80 \pm 0.03 $ & $ 1.01 _{- 0.14 }^{+ 0.11 }$ & 0.3F & $ 0.57 _{- 0.09 }^{+0.08 }$ & $ 39 _{-13 }^{+30 }$ & $ <39 $ & & & 213/163 \\ 8 & 30F& $ 1.74 \pm 0.03 $ & $ 0.98 _{- 0.15 }^{+ 0.09 }$ & 0.3F & $ 0.56 _{- 0.10}^{+0.06 }$ & $ 37 _{-12 }^{+32 }$ & $ <30 $ & & & 175/163 \\ \hline \end{tabular} \end{minipage} \end{table*} \begin{table*} \caption{Results of spectral fitting with the Comptonization model with independent areas of the hotspot blackbody and the Comptonized component, using {\it RXTE}\ data only (Sect. \ref{c:rxte}). Left column indicates the data group. \label{t:diffareas} } \begin{tabular}{|llllllllll|} \hline Group & \ensuremath{T_\mathrm{e}} & $\tau_{\rm T}$ & \ensuremath{T_\mathrm{seed}} & ${\Re}$ & \ensuremath{T_\mathrm{bb}} & $EW$ & $\Sigma_\mathrm{spot}$ & $\Sigma_\mathrm{shock}$ & $\chi^2$/d.o.f \\ & keV & & keV & & keV & eV & km$^2$ & km$^2$ & \\ \hline T & $ 33 _{- 3 }^{+ 4 }$ & $ 1.61 \pm 0.18 $ & $ 1.15 _{- 0.21 }^{+ 0.30}$ & $ 0.45 _{- 0.12 }^{+ 0.13 }$ & $ 0.60 \pm 0.13 $ & $ 46 _{- 31 }^{+ 34 }$& $ 63 _{- 26 }^{+ 124 }$ & $ 34 _{- 21 }^{+ 42 }$ & 190/160 \\ 1--4 & $ 35 \pm 3 $ & $ 1.54 _{- 0.15 }^{+ 0.18 }$ & $ 1.16 _{- 0.19 }^{+ 0.25 }$ & $ 0.49 _{- 0.13 }^{+ 0.14 }$ & $ 0.61 _{- 0.13 }^{+ 0.11 }$ & $ 61 _{- 31 }^{+ 33 }$ & $ 80 _{- 31 }^{+ 134 }$ & $ 43 _{- 24 }^{+ 48 }$ & 148/160 \\ 5--8 & $ 31 _{- 3 }^{+ 6 }$ & $ 1.66 _{- 0.29 }^{+ 0.27 }$ & $ 1.21 _{- 0.29 }^{+ 0.34 }$ & $ 0.46 _{- 0.18 }^{+ 0.22 }$ & $ 0.63 _{- 0.15 }^{+ 0.11 }$ & $ <110 $ & $ 44 _{- 17 }^{+ 88 }$ & $ 23 _{- 14 }^{+ 42 }$ & 206/160 \\ \hline \end{tabular} \end{table*} \subsection{Phase-averaged
spectra from \textit{RXTE} and \textit{Swift}} \label{s:swift} \begin{figure} \centerline{\epsfig{file=fig_compps.eps,width=8cm}} \caption{The best-fitting parameters for the Comptonization-based model of Section \ref{c:rxte}. The absorption-corrected flux in the 3--20 keV band is in units of $10^{-9}$ erg cm$^{-2}$ s$^{-1}$. } \label{f:compps} \end{figure} \begin{figure*} \centerline{\epsfig{file=fig_sinefits.eps,width=16cm}} \caption{Results of fitting the per-orbit pulse profiles in (a) 2.1--3.7 keV and (b) 9.7--23.1 keV with expression (\ref{m:cosines}). Top panels: the amplitudes of the fundamental $a_1$ (squares, red) and overtone $a_2$ (diamonds, blue). Bottom panels: phases of the fundamental $\phi_1$ (squares, red) and overtone $\phi_2$ (diamonds, blue, shifted by phase 0.5 for clarity). Typical error bars are shown for a few points. Note the changes in amplitude and phase of the fundamental around MJD 55112, addressed in Section \ref{c:mjd55112}. } \label{f:sinefits} \end{figure*} We used the \textit{Swift}/XRT data (in the range 0.6--8.0 keV) to study the soft X-ray emission of IGR17511. Initially, we fitted group T to constrain the disc parameters and the absorption column. The emitting areas of the hotspot and the shock were assumed to be equal (while the statistics of the spectrum would allow these areas to be fitted independently, the larger number of free parameters would lead to large uncertainties; in fact, these areas must be similar on physical grounds, see \citealt{2009MNRAS.400..492I}). The best-fitting results are shown in Table \ref{t:fits}; in particular, we find that the emission at $\lesssim 3$ keV has clear signatures of the accretion disc \citep[see Fig. \ref{f:spectrum} and also][fig. 9]{2010MNRAS.407.2575P}. Fits with free interstellar absorption lead to $\ensuremath{N_\mathrm{H}}=(0.88_{-0.24}^{+0.21})\times 10^{22} \mathrm{cm}^{-2}$, $\ensuremath{T_\mathrm{in}} =0.24\pm0.07$ keV and $\ensuremath{R_\mathrm{in}}=40_{-27}^{+46}$ km.
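As a back-of-the-envelope consistency check (our own sketch, not part of the published analysis), the fitted $\ensuremath{R_\mathrm{in}}\sim40$ km can be compared with the corotation radius of the 245 Hz pulsar, assuming a 1.4 \ensuremath{{\rm M}_{\odot}}\ star:

```python
import math

# Back-of-the-envelope sketch (ours, not the paper's fitting code):
# the corotation radius R_co = (G M / Omega^2)^(1/3) for the 245 Hz
# pulsar, assuming a 1.4 solar-mass star.  The fitted R_in ~ 40 km
# should lie inside R_co for accretion to proceed.
G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
M = 1.4 * 1.989e30         # stellar mass, kg
omega = 2.0 * math.pi * 245.0   # angular spin frequency, rad/s

r_co = (G * M / omega**2) ** (1.0 / 3.0)   # corotation radius, m
print(f"R_co = {r_co / 1e3:.1f} km")       # about 43 km
```

The fitted inner radius thus sits just inside corotation, as required for the accretion picture adopted in the text.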
The aforementioned best-fitting values are subject to several uncertainties. The values of the disc parameters \ensuremath{T_\mathrm{in}}\ and \ensuremath{R_\mathrm{in}}\ are tightly correlated with the absorption value $N_\mathrm{H}$. In addition, the derived value of \ensuremath{R_\mathrm{in}}\ depends on the assumed distance, and it should be corrected for two effects in order to obtain a realistic radius. As discussed in \citet{1998PASJ...50..667K} and \citet{1999MNRAS.309..496G}, the derived radius should be increased by the square of the colour correction factor $f_{\rm c}$. The value $f_{\rm c} = 1.7$ was computed for accretion discs around black holes by \citet{1995ApJ...445..780S}. In the case of AMPs, however, there is a stark difference in that the disc is irradiated by the emission from the hotspot, which casts uncertainty on this value. Furthermore, \ensuremath{R_\mathrm{in}}\ is also affected by a correction due to the inner boundary condition \citep{1999MNRAS.309..496G}, but this factor (of the order of unity) is not accurately known in the case of accretion onto a magnetized star. Therefore, the value of the inner disc radius should be taken as an order-of-magnitude estimate. The other spectral parameters (see Table \ref{t:fits}) are consistent with the findings of \citet{2010arXiv1012.0229F}. However, there are small differences between our results and those of \citet{2010MNRAS.407.2575P}, especially in the values of $\tau_{\rm T}$ and $\ensuremath{T_\mathrm{seed}}$. The most likely reason for these differences is that the high-energy cutoff cannot be accurately determined in short HEXTE exposures. Cross-calibration issues between the {\it Swift}/XRT and {\it XMM}/EPIC instruments \citep[see][]{2011A&A...525A..25T} might also contribute to the differences. In further analysis, we found that the \textit{Swift}/XRT data do not have enough statistics to reliably constrain the disc component in individual fits of groups $1$--$8$.
Therefore, we could not look for changes in the disc parameters during the outburst, and in the following sections we only consider {\it RXTE}\ data when we study the evolution of the spectral parameters. \subsection{Phase-averaged spectra from \textit{RXTE}} \label{c:rxte} We began our {\it RXTE}-only spectral analysis by fitting the model with independent emitting areas for the hotspot and the shock (free blackbody and Comptonized component normalizations). Because the accretion disc does not contribute to the flux in the PCA band 3--25 keV, we omit the {\sc diskbb} component from the following spectral fits and use our best-fitting value of $\ensuremath{N_\mathrm{H}}=0.88\times 10^{22} \mathrm{cm}^{-2}$ in the fitting of the {\it RXTE}-only data. The spectra for group T and the ``joined'' groups 1--4 and 5--8 have sufficient statistics to fit these areas separately. The results of the fitting are shown in Table \ref{t:diffareas}. We find that the ratio between the areas is compatible with being constant, although the statistical errors are too large to allow a firm conclusion. We note that analysis of the same source by \citet{2010MNRAS.407.2575P} and of SAX J1808.4--3658 by \citet{2009MNRAS.400..492I} both suggest, in agreement with our result, that the blackbody area is 2--3 times larger than the area of Comptonized emission. These fits indicate a decrease of the emitting areas, as expected from a gradually increasing inner disc radius \citep*{2009ApJ...706L.129P}. The simultaneous decrease of the iron line equivalent width supports the expected physical picture (note that the reflection amplitude should decrease as well, but observational uncertainties do not allow us to constrain it reliably). Group T and the ``joined'' groups 1--4 and 5--8 also allow for independent fitting of \ensuremath{T_\mathrm{e}}\ and the reflection amplitude ${\Re}$. The best-fitting parameters are shown in Table \ref{t:fits}.
We note that the actual value of ${\Re}$ is subject to the chosen continuum model (e.g., \citealt*{2007A&ARv..15....1D}). Spectra for the individual groups $1$--$8$ do not have enough statistics to constrain \ensuremath{T_\mathrm{e}}\ and ${\Re}$ (the latter turns out to be uncertain and compatible with zero). Therefore, we fixed $\ensuremath{T_\mathrm{e}}=30$ keV and ${\Re}=0.3$ (as found in the \textit{Swift} and \textit{RXTE} fits, Sect. \ref{s:swift}) and obtained fits for the individual data groups. Furthermore, we adopted equal emitting areas for the Comptonization and blackbody components for these groups, because the data quality does not allow us to fit them independently and they should be similar on physical grounds (see \citealt{2005MNRAS.359.1261G, 2009MNRAS.400..492I}). The time evolution of the spectral parameters is shown in Fig. \ref{f:compps} and in Table \ref{t:fits}. The optical depth is initially roughly constant, $\tau_{\rm T} \approx 1.85$ for groups $1$--$7$, but later it drops to $\tau_{\rm T} = 1.74\pm0.03$ for group $8$. This shows (given the fixed \ensuremath{T_\mathrm{e}}\ value) that the spectrum softens at the end of the outburst. It is also noticeable that $\Sigma_\mathrm{spot}$, \ensuremath{T_\mathrm{seed}}\ and \ensuremath{T_\mathrm{bb}}\ decrease slightly during the outburst. The decrease in $\Sigma_\mathrm{spot}$ is most likely caused by the change of $\ensuremath{R_\mathrm{in}}$ in the course of the outburst \citep{2009ApJ...706L.129P}. When the flux drops as the mass accretion rate goes down, $\ensuremath{R_\mathrm{in}}$ increases (it is likely proportional to the Alfv{\'e}n radius, which has a $\dot{M}^{-2/7}$ dependence, see e.g. \citealt*{FKR02}). Assuming that the magnetic field is a dipole, the outer boundary of the hotspot is controlled by the current position of $\ensuremath{R_\mathrm{in}}$, and therefore the increase in $\ensuremath{R_\mathrm{in}}$ leads to a decrease in $\Sigma_\mathrm{spot}$.
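The $\dot{M}^{-2/7}$ scaling above is easy to quantify; a minimal illustrative sketch (values assumed, not fitted):

```python
# Illustrative scaling only: if R_in tracks the Alfven radius,
# R_in is proportional to Mdot^(-2/7), so a drop in the accretion
# rate pushes the inner disc edge outwards and shrinks the hotspot.
def inner_radius_ratio(mdot_ratio):
    """R_in(new)/R_in(old) for a given Mdot(new)/Mdot(old)."""
    return mdot_ratio ** (-2.0 / 7.0)

# A factor-of-two drop in Mdot moves R_in outwards by about 22 per cent.
print(inner_radius_ratio(0.5))
```

The weak exponent explains why the spot-area evolution in Table \ref{t:fits} is modest even though the flux changes substantially.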
\section{Timing analysis} The pulse shape of an accreting pulsar contains important information about the physics of emission and the geometrical parameters of the system (\citealt{2003MNRAS.343.1301P}, \citealt*{2008ApJ...672.1119L}; \citealt{2008AIPC.1068...77P, 2009MNRAS.400..492I, 2009ApJ...706L.129P}). To obtain the pulse profiles, we used the ephemeris of \citet{2009ATel.2221....1R}. In general, the pulse profiles of IGR17511\ are single-peaked, close to symmetric, and without prominent secondary maxima. \begin{figure*} \centerline{\epsfig{file=fig_pulses_and_sinefits.eps,width=16cm}} \caption{Left panels: evolution of the harmonic content for groups 1--8. Stars and triangles represent $2.1$--$3.7$ keV, diamonds and squares -- $9.7$--$23.1$ keV. (a) The amplitudes of the fundamental $a_1$ (upper points) and overtone $a_2$ (lower points); (b) phases of the fundamental $\phi_1$ and (c) overtone $\phi_2$ versus time. Right panels: pulse profiles for $2.1$--$3.7$ and $9.7$--$23.1$ keV (blue and red histograms, respectively). For amplitudes and phases (panels a--c) the errors are at the 90 per cent confidence level, while for pulse profiles the error bars correspond to $1\sigma$.} \label{f:pulses} \end{figure*} \begin{figure*} \centerline{\epsfig{file=fig_lags.eps,width=16cm}} \caption{Left panels: pulse profile fits with equation (\ref{m:cosines}) for energies 3.3--3.7, 4.5--4.9, 6.5--8.1 and 15.5--23.1 keV (solid, dotted, dashed and dash-dotted lines, respectively). Middle panels: pulse maximum lags (stars) and the lags of the fundamental (diamonds). The overtone lags have large errors and are therefore not shown here. The observational groups are indicated on the panels. Right panel: evolution of the lags between the 2.1--3.7 and 15.5--23.1 keV energy bands of the pulse maximum and of the fundamental.
The pulse maximum lag was computed as the phase difference between the maxima at different energies, determined via fitting the observed pulse shape with expression (\ref{m:cosines}). } \label{f:lags} \end{figure*} \subsection{Harmonic content evolution} From our spectral analysis (e.g., Fig. \ref{f:spectrum}) we see that the hard X-ray part of the spectrum above 10 keV is dominated by thermal Comptonization, which probably takes place in the accretion shock. The soft part (below 10 keV) includes blackbody emission from the hotspot. Furthermore, around 2 keV there is emission from the accretion disc \citep[see Fig. \ref{f:spectrum} and][fig. 9]{2010MNRAS.407.2575P}. Consequently, we chose the energy ranges of interest as $2.1$--$3.7$ and $9.7$--$23.1$ keV: the first interval contains a large fraction of the hotspot blackbody radiation (and some part of the disc emission, which is non-pulsating), while the second band contains only Comptonized emission. While a thorough analysis by \citet{2011A&A...526A..95R} demonstrated that one needs three (and, sometimes, even four) harmonics to fully describe the wide-energy (2--25 keV) pulses, we find it acceptable to utilize two Fourier harmonics to describe the pulses in narrow energy bands and trace the pulse profile changes. We note that the wide-energy pulse profile is in fact a superposition of the different pulse shapes seen at different energies, and this might affect the harmonic decomposition. Indeed, the comparison between the two-harmonic fits and the actual pulse profiles reveals some deviations from a smooth fit near the pulse profile minima, which can be due to the antipodal spot contribution or to additional absorption at certain phases (e.g. by the accretion column), which \citet{2011A&A...526A..95R} modelled with a third harmonic (as follows from their fig. 2).
We fitted the pulse profiles collected from each \textit{RXTE} orbit with the following expression \begin{eqnarray} \label{m:cosines} F(\phi)=\overline{F}\{ 1+a_1\cos[2\pi(\phi-\phi_1)] +a_2\cos[4\pi(\phi-\phi_2)] \} , \end{eqnarray} where $a_1$, $a_2$ are the amplitudes and $\phi_1, \phi_2$ the phases of the fundamental and the overtone, respectively. The fitting results for the aforementioned energy bands are shown in Fig. \ref{f:sinefits}. The amplitude of the fundamental decreases with time (and with flux). While the general trend is rather smooth, some irregularity in the fundamental amplitude can be noticed around MJD 55098, and the phase of the overtone experiences a small shift around MJD 55093. After MJD 55100, we observe a ``drift'' in the phase of the fundamental, seen clearly in Fig. \ref{f:pulses} (see also fig. 4 in \citealt{2011A&A...526A..95R}). This drift is larger at soft energies, which in turn affects the value of the phase lag (see Sect. \ref{s:obslags}). The harmonic content at soft and hard energies behaves very similarly, as is also seen in SAX J1808.4--3658 during its ``slow decay'' stage \citep{2009MNRAS.400..492I}. After the slow decay, SAX J1808.4--3658 showed a quite different evolution of the pulse shapes at different energies. In our case, there is clear evidence that the source experienced a likely transition to the ``rapid drop'' stage (see Section \ref{c:mjd55112}). However, its flux dropped very quickly below the detection level, making it impossible to study the pulse profile evolution further. \subsection{Evolution of phase lags} \label{s:obslags} In AMPs the pulses at soft and hard energies do not arrive in phase, but there is an energy-dependent phase lag (e.g., \citealt{1998ApJ...504L..27C}). We determined the phase lags by fitting the pulse profiles at a given energy with expression (\ref{m:cosines}) and finding the phase difference relative to the reference energy band 2.1--3.7 keV.
In this way we find the phase lags in each harmonic as well as the pulse maximum lag, corresponding to the phase difference between the fitted pulse maxima. In IGR17511, the phase lag is negative (i.e. the pulse at soft energies peaks at a later phase) and shows a gradual increase from 3 keV to approximately 10 keV, where the lag value nearly ``saturates''. This behaviour is typical for AMPs (see e.g. \citealt{1998ApJ...504L..27C, 2002MNRAS.331..141G}, \citealt*{2009ApJ...697.2102H}, \citealt{2010arXiv1012.0229F}). The phase lag of the overtone is poorly constrained: for our groups 1--8 the best-fitting value for the overtone decreases from 0 to $\sim-100$ $\mu$s in the energy range 3--7 keV, and then remains constant or decreases slightly. In all cases, however, it is compatible with zero and is determined with an uncertainty of more than $100\ \mu$s. \begin{figure} \centerline{\epsfig{file=fig_mjd.eps,width=7cm}} \caption{The pulse profiles (2--60 keV) from MJD 55111.01 (blue triangles) and MJD 55112.07 (red asterisks), demonstrating a significant shift of the pulse maximum and an abrupt drop in the amplitude (due to the coincident outburst of the neighbouring AMP XTE J1751--305). Blue solid and red dashed curves are the respective best-fitting approximations with expression (\ref{m:cosines}). Error bars correspond to 1$\sigma$.} \label{f:mjd55112} \end{figure} The most noticeable effect we saw in the data is a gradual increase of the phase lag (measured between 2.1--3.7 and 15.5--23.1 keV) from 200 to 400 $\mu$s during the outburst, as illustrated in the right panel of Fig. \ref{f:lags}. Interestingly, the fundamental shows a monotonic trend, while the pulse maximum lag changes noticeably only close to the end of the outburst.
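For an equally binned profile, the amplitudes and phases of expression (\ref{m:cosines}) can be recovered by direct Fourier projection; the following sketch (our own illustration, not the actual fitting code, which also propagates the count-rate errors) shows the decomposition:

```python
import numpy as np

# Recover Fbar, a1, a2, phi1, phi2 of the two-harmonic model
# F(phi) = Fbar*{1 + a1*cos[2pi(phi - phi1)] + a2*cos[4pi(phi - phi2)]}
# from an equally binned pulse profile via Fourier projection.
def harmonic_decompose(flux):
    n = len(flux)
    phi = (np.arange(n) + 0.5) / n        # bin centres in phase
    fbar = flux.mean()
    out = {"fbar": fbar}
    for k in (1, 2):
        c = 2.0 / n * np.sum(flux * np.cos(2 * np.pi * k * phi))
        s = 2.0 / n * np.sum(flux * np.sin(2 * np.pi * k * phi))
        out[f"a{k}"] = np.hypot(c, s) / fbar
        # phase of harmonic k, defined modulo 1/k
        out[f"phi{k}"] = (np.arctan2(s, c) / (2 * np.pi * k)) % (1.0 / k)
    return out

# Round-trip check on a synthetic 32-bin profile
phi = (np.arange(32) + 0.5) / 32
flux = 100.0 * (1 + 0.2 * np.cos(2 * np.pi * (phi - 0.1))
                  + 0.05 * np.cos(4 * np.pi * (phi - 0.3)))
res = harmonic_decompose(flux)
```

For equally spaced bins the projection is exact; the recovered phases then convert to time lags through division by the 245 Hz spin frequency.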
A comparison of the pulse profiles obtained at various dates indicates that the lag increases because the pulse maximum at soft energies (2.1--3.7 keV in our case) shifts to a later phase, while the maximum of the high-energy pulse (9.7--23.1 keV) shifts in parallel, but in a less pronounced way. To illustrate this, in Fig. \ref{f:pulses} we plot a set of pulse profiles from a few time intervals and the evolution of their harmonic content. \subsection{Pulse profile changes at MJD 55112} \begin{figure} \centerline{\epsfig{file=fig_lightcurve.eps,width=8cm}} \caption{The lightcurve of IGR17511\ during the 2009 outburst. The flux in the 3--10 keV range is corrected for absorption. Blue stars correspond to the {\it RXTE}/PCA data, and red diamonds are for {\it Swift}/XRT (Photon Counting mode). Two vertical dotted lines denote the observations centered at MJD 55111.01 and MJD 55112.07 that are shown in Fig. \ref{f:mjd55112}. Although the {\it RXTE}/PCA observations are contaminated after MJD 55112 by the outburst of the nearby AMP XTE J1751--305, {\it Swift}\ allows us to determine the flux of IGR17511\ unambiguously, revealing an abrupt flux fall consistent with being the start of the ``rapid drop'' outburst stage. Error bars correspond to 1$\sigma$. } \label{f:lc} \end{figure} \label{c:mjd55112} The pulsations became completely undetectable shortly after MJD 55112, and the last useful observations do not contain reliable statistics to obtain a well-defined energy-resolved pulse shape. However, the shape in the whole \textit{RXTE}/PCA range (approximately 2--60 keV) can be used to locate the pulse maximum. In Fig. \ref{f:mjd55112} we show two pulse profiles from the adjacent observations centered at MJD 55111.01 and MJD 55112.07 together with the respective harmonic fits.
The relative amplitude abruptly decreases because the average X-ray emission is modified by the simultaneous outburst of the AMP XTE J1751--305 in the field of view of \textit{RXTE}/PCA \citep{2009ATel.2237....1M}. It is clear that the phase of the pulse maximum has shifted forward by about 0.1--0.2. By analogy with SAX J1808.4--3658, we can speculate that the source began to shift into the rapid drop stage \citep{2009MNRAS.400..492I}. As noted by \citet{2010arXiv1012.0229F}, the exponential trend typical of the slow decay outburst stage is followed by a faster, linear drop of flux around MJD 55107, a few days prior to the detected pulse evolution. Fig. \ref{f:sinefits} reveals that, in parallel with the linearly decreasing flux trend, the phase of the fundamental starts to increase slowly, ending in a sharp phase jump at MJD 55112. We can speculate that the accretion disc starts to recede from the neutron star and at some point the antipodal spot comes into our view, changing the pulse shape. While {\it RXTE}/PCA is a non-imaging instrument, making it impossible to separate the fluxes of IGR17511\ and XTE J1751--305, we were able to estimate the flux received from IGR17511\ using {\it Swift}/XRT. The {\it Swift}\ count rate lightcurve was produced using the online XRT product generator\footnote{http://www.swift.ac.uk/user\_objects/} \citep{2009MNRAS.397.1177E} and converted to energy flux in the 3--10 keV band with the webPIMMS tool\footnote{http://heasarc.nasa.gov/Tools/w3pimms.html}, assuming a power law with photon index 1.7, which is suitable for our object in this energy interval. We also obtained the {\it RXTE}/PCA light curve in the same energy range for cross-calibration. The resulting lightcurve of the outburst is shown in Fig. \ref{f:lc}. An abrupt fall of the flux was observed on October 9 (MJD 55113, \citealt{2009ATel.2237....1M}), followed by a non-detection of the source in the subsequent pointings.
This strongly supports the conclusion that the source entered the rapid drop stage. Since there is no {\it Swift}\ observation around MJD 55112, it is not possible to reliably correct the amplitude of the respective pulse profile shown in Fig. \ref{f:mjd55112}. \subsection{Phase-resolved spectrum} \label{c:phares} \begin{figure*} \centerline{\epsfig{file=fig_phares_Group_T.eps,width=15cm}} \caption{Results of the phase-resolved spectral analysis. The best-fitting parameters (except the normalizations) are frozen at the values obtained in the corresponding phase-averaged fit (Sect. \ref{c:rxte}, Table \ref{t:fits}). Left panel: apparent emitting areas of the two components (blue and red curves represent the blackbody $\Sigma_\mathrm{spot}$ and Comptonization $\Sigma_\mathrm{shock}$ components, respectively). Right panels: $\Sigma_\mathrm{spot}$ vs $\Sigma_\mathrm{shock}$. The parameters of the fits with a harmonic function are shown in Table \ref{t:normfits}. } \label{f:phares} \end{figure*} Phase-resolved spectroscopy was performed for the 1998 outburst of SAX J1808.4--3658 by \citet{2002MNRAS.331..141G}, for the 2002 outburst by \citet{2009MNRAS.400..492I} and \citet{2010MNRAS.tmp.1560W}, and for XTE J1751--305 by \citet{2005MNRAS.359.1261G}. In the first of these works it was concluded that the energy dependence of the pulse profiles and phase lags can be explained by a simple model where only the normalizations of the hotspot blackbody and the Comptonization tail vary. Similarly to these papers, we have generated the phase-resolved spectra for group T and used the phase-averaged spectrum (Section \ref{c:rxte}) as a reference. We fixed all the parameters except the blackbody and Comptonization normalizations. The results are shown in Fig. \ref{f:phares}. The coefficients for expression (\ref{m:cosines}) that describe the modulation in the apparent areas of the blackbody $\Sigma_\mathrm{spot}$ and Comptonized tail $\Sigma_\mathrm{shock}$ are shown in Table \ref{t:normfits}.
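A phase offset between the fitted component waveforms translates into a time lag simply through the 245 Hz spin frequency; a one-line sketch of the conversion (plain arithmetic, not from the analysis pipeline):

```python
# Convert a phase difference (in cycles) into a time lag in microseconds
# at the nu = 245 Hz spin frequency.
def phase_to_time_lag_us(dphi, nu=245.0):
    return dphi / nu * 1e6

# e.g. a 0.18 phase offset between the component fundamentals
print(phase_to_time_lag_us(0.18))   # about 730 microseconds
```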
The lag between the components is large: the fundamental components have a phase difference of 0.18, which corresponds to $\sim$730 $\mu$s, while the lag of the overtone is difficult to determine precisely. Phase-resolved fits of groups 1--4 and 5--8 suggest a slight increase of the component lags, reflecting the changes in the pulse profile. The determined effective areas are shifted compared to the values in Table \ref{t:diffareas}, since we use the continuum shape determined with equal areas of the blackbody and Comptonized emission (i.e., as given in Table \ref{t:fits}). Utilizing the continuum parameters obtained with independent emitting areas yields area values compatible with those in Table \ref{t:diffareas}; the observed phase difference, however, remains the same. It is interesting to note that the measured $\sim$730 $\mu$s lag between the components is much larger than the one observed in the actual pulsations. The effect is also seen in similar studies, e.g., of SAX J1808.4--3658 \citep{2002MNRAS.331..141G, 2009MNRAS.400..492I}. It is a natural consequence of the fact that in phase-resolved spectroscopy we decompose the observed joint spectrum into separate physical components, while the pulse that we observe in the soft X-ray band is a mix of these components (in nearly equal proportion, see Fig. \ref{f:spectrum}). Modelling of the time lags (described below in Section \ref{s:lags}) confirms that the lag between the components is indeed much larger than the one between the observed pulses at different energies. \begin{table} \caption{Harmonic fits by expression (\ref{m:cosines}) to the phase-resolved apparent areas.
The average apparent areas $\overline{\Sigma}$ (in units of km$^2$) are computed for a distance of 5 kpc.} \begin{tabular}{|lllllll|} \hline Group & Model component & $\overline{\Sigma}$ & $a_1$ & $a_2$ & $\phi_1$ & $\phi_2$ \\ \hline T & Spot & 51 & 0.42 & 0.02 & 0.50 & 0.33\\ T & Shock & 57 & 0.23 & 0.03 & 0.31 & 0.31 \\ 1--4 & Spot & 63 & 0.43 & 0.04 & 0.48 & 0.40 \\ 1--4 & Shock & 70 & 0.23 & 0.03 & 0.31 & 0.30 \\ 5--8 & Spot & 40 & 0.45 & 0.03 & 0.52 & 0.27 \\ 5--8 & Shock & 46 & 0.23 & 0.02 & 0.31 & 0.31 \\ \hline \end{tabular} \label{t:normfits} \end{table} \section{Discussion} \subsection{Spot size} The apparent spot size at infinity $R_\mathrm{spot}$ can be related to the actual (circular) spot radius by taking into account relativistic light bending, as described in \citet{2003MNRAS.343.1301P} and \citet{2009MNRAS.400..492I}.\footnote{Note that the relation is derived for a slowly rotating pulsar and blackbody emission.} We can use the emitting areas shown in Table \ref{t:fits} for group T to obtain estimates of the spot size. We take a star with \ensuremath{M_\ast}=1.4 \ensuremath{{\rm M}_{\odot}}\ and \ensuremath{R_\ast}= 10.3 km (2.5 Schwarzschild radii). The actual spot radius depends on the system inclination $i$ and the spot colatitude $\theta$ (see sect. 5.1 of \citealt{2009MNRAS.400..492I}). The smallest and the largest possible radii correspond to $i=\theta=0\degr$ and $i=\theta=90\degr$, respectively. For the best-fitting normalizations of the {\it RXTE}-only fits, the interval of radii is 4.5--7.1 km, and for the joint {\it RXTE}\ and {\it Swift}\ spectrum it is 4.0--6.4 km. Besides the uncertainty in the areas due to the weakly constrained distance to the object, the numbers quoted are subject to the uncertainty residing in the unknown relation between the emitting areas of the blackbody and Comptonization components (which we assumed to be equal).
For SAX J1808.4--3658, the time-averaged spectrum (for the slow decay outburst stage) suggested that the blackbody area is twice as large as the Comptonized one, and the likely error in the area is about 50 per cent; an additional uncertainty due to a colour correction appears if the emission differs from a blackbody; however, for atmospheres heated from above this correction should not play a significant role \citep{2003MNRAS.343.1301P,2009MNRAS.400..492I}. Finally, the estimate is valid for a filled circular spot, while in reality the spot shape can be rather different (see Section \ref{s:ampl}). \subsection{Oscillation amplitude and geometry} \label{s:ampl} \citet{2003MNRAS.343.1301P} have derived an expression that relates the oscillation amplitude of the pulsar to the system inclination and spot colatitude for the case of a large, blackbody-emitting, always visible spot on the surface of a slowly rotating star (see equation (10) there and sect. 5.2 in \citealt{2009MNRAS.400..492I}). While our hotspot is non-blackbody and the star rotates rapidly (and there are arguments against a filled circular spot shape, see below), we can compare the observed relative amplitude of the fundamental with this analytical relation to obtain a zero-order estimate of the system's geometrical parameters. Taking the apparent spot radius $R_{\rm \infty}=5$ km, we can obtain the dependence of the amplitude on the inclination and spot colatitude, which is shown in Fig. \ref{f:ampl} for typical neutron star parameters. We recall that in our case the observed amplitude of the fundamental is about 22 per cent at the beginning of the outburst, gradually decreasing to 15 per cent (Fig. \ref{f:sinefits}). Assuming a 60\degr\ inclination, we get a spot colatitude of about 15\degr. A decrease in the spot colatitude can cause a corresponding decrease of the amplitude.
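A still simpler, point-spot version of this amplitude--geometry relation can be written in closed form (our own sketch; the contours in Fig. \ref{f:ampl} use the full finite-spot expression, so the numbers differ somewhat). With light bending treated in the commonly used approximation $\cos\alpha \approx u + (1-u)\cos\psi$, $u=R_{\rm S}/R_\ast$, the fundamental amplitude of a blackbody point spot is $A = (1-u)\sin i\sin\theta / [u + (1-u)\cos i\cos\theta]$:

```python
import math

# Point-spot, bolometric-blackbody estimate of the fundamental amplitude,
# with light bending in the cos(alpha) ~ u + (1 - u) cos(psi)
# approximation; u = R_S/R = 0.4 for a star at 2.5 Schwarzschild radii.
# A simplified sketch: the finite-spot relation used in the text gives
# somewhat different numbers.
def fundamental_amplitude(i_deg, theta_deg, u=0.4):
    i = math.radians(i_deg)
    th = math.radians(theta_deg)
    return ((1 - u) * math.sin(i) * math.sin(th)
            / (u + (1 - u) * math.cos(i) * math.cos(th)))

print(fundamental_amplitude(60.0, 15.0))   # roughly 0.2, i.e. ~20 per cent
print(fundamental_amplitude(60.0, 10.0))   # smaller colatitude, smaller amplitude
```

Even this crude estimate lands near the observed 15--22 per cent for $i=60\degr$, $\theta\approx15\degr$, and shows explicitly that a shrinking colatitude lowers the amplitude.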
\begin{figure} \centerline{\epsfig{file=fig_amplitudes_25.eps,width=7cm}} \caption{Contour plots of constant oscillation amplitude in the inclination -- spot colatitude plane. The curves are computed using expression (22) in \citet{2003MNRAS.343.1301P} (see also Sect. 5.1 in \citealt{2009MNRAS.400..492I}), assuming a 1.4\ensuremath{{\rm M}_{\odot}}\ neutron star with a radius of 10.3 km and a spot size of $R_{\rm \infty}=5$ km. } \label{f:ampl} \end{figure} What could be the reason for a decreasing spot colatitude during the outburst? The present understanding of the AMP geometry is that of an inclined dipole with the accretion disc disrupted by the magnetic field at some truncation radius (likely proportional to the Alfv{\'e}n radius, see e.g. \citealt{2005ApJ...634.1214L}). Matter flows along the magnetic field lines and falls on the neutron star surface, forming a hotspot. As the accretion rate drops, the Alfv{\'e}n radius increases and the disc recedes from the neutron star. The matter then accretes along the magnetic field lines that touch the star closer to the magnetic pole, and thus the outer radius of the hotspot decreases. In the case of a hypothetical filled circular spot, a decrease of the spot size $\rho$ would result in an \textit{increase} of the amplitude (expression 22 in \citealt{2003MNRAS.343.1301P}), which contradicts the observations. A filled circular spot, however, is not what is seen in MHD simulations; instead, ring- or crescent-shaped spots are observed \citep{2004ApJ...610..920R}. In such a case, the emission is generated in a preferred sector situated closer to the disc. When the outer spot radius decreases, the preferred sector shifts closer to the magnetic pole of the star, reducing the effective spot colatitude and leading to a decrease of the pulse amplitude, as observed.
\subsection{Origin of the lag evolution} \label{s:lags} The phase lags can be explained by a difference in the emissivity patterns at different energies \citep{2002MNRAS.331..141G,2003MNRAS.343.1301P}. At relatively high energies (above 10 keV) Comptonization dominates the emission. But as we proceed downwards from 10 keV to a few keV, we observe a gradually increasing contribution from the blackbody component, which alters the emissivity pattern (towards more isotropic emission). Therefore the flux maximum is observed at different pulse phases for different energies, and this effect creates energy-dependent time lags. In agreement with this explanation, the observed lag saturates at the energy where the blackbody component becomes insignificant. IGR17511\ demonstrates a smooth increase in the time lags of the fundamental during the outburst (Fig. \ref{f:lags}) and nearly constant time lags of the overtone (rather weakly constrained by observations). Several sources show that the lag does not remain constant during the outburst: in SAX J1808.4--3658 the lag value increases during the ``slow decay'', while in the ``flaring tail'' stage it starts to decrease \citep{2009ApJ...697.2102H}. In XTE J1814--338 there is also observational evidence of the lag increasing at the end of the outburst \citep{2006MNRAS.373..769W}. While the detailed modelling of the actual pulse profiles is beyond the scope of this work, below we discuss the likely cause of the observed lag evolution. The pulse shape depends on many factors, such as the spot shape (which can be different for the different emission components) and size, the colatitude of the magnetic pole and the inner radius of the accretion disc \citep[see][]{2008AIPC.1068...77P}. Changes in these parameters will affect the pulse shape, but not every one of them can generate the observed increase of the lags.
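This lag mechanism can be demonstrated with a deliberately minimal numerical sketch of our own (a point spot, light bending in the $\cos\alpha\approx u+(1-u)\cos\psi$ approximation, and first-order Doppler boosting; the anisotropy value $h=0.6$ and the geometry are assumed for illustration, and the full toy model described in this section treats finite spots instead):

```python
import numpy as np

# Minimal illustration of how different emissivity patterns plus Doppler
# boosting produce an energy-dependent lag: the Comptonized component is
# beamed as (1 - h cos alpha), the soft band is an equal mix of blackbody
# and Comptonized light.  All parameter values are illustrative.
def pulse_profiles(i=np.radians(60), th=np.radians(15), h=0.6, u=0.4,
                   nu=245.0, r_star=10.3e3, n=4096):
    phi = np.linspace(-0.5, 0.5, n, endpoint=False)   # rotational phase
    cospsi = (np.cos(i) * np.cos(th)
              + np.sin(i) * np.sin(th) * np.cos(2 * np.pi * phi))
    cosalp = u + (1.0 - u) * cospsi                   # bending approximation
    beta = 2 * np.pi * nu * r_star * np.sin(th) / 3e8   # spot speed / c
    # spot approaches the observer before meridian crossing (phi < 0)
    dopp = (1.0 + beta * np.sin(i) * np.sin(2 * np.pi * phi)) ** (-4)
    bb = dopp * cosalp                                # blackbody (h = 0)
    comp = dopp * cosalp * (1.0 - h * cosalp)         # beamed Comptonization
    soft = bb / bb.mean() + comp / comp.mean()        # equal-contribution mix
    return phi, soft, comp

phi, soft, hard = pulse_profiles()
p_soft = phi[np.argmax(soft)]
p_hard = phi[np.argmax(hard)]
lag_us = (p_soft - p_hard) / 245.0 * 1e6   # positive: soft peaks later
```

The flatter, beamed hard profile is dragged to an earlier phase by the Doppler boost more strongly than the soft mixture, so the soft pulse peaks later, reproducing the sign of the observed lags; the magnitude depends strongly on the assumed geometry and on $h$.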
To study the lag evolution, we employed the following toy model (following the framework developed by \citealt{2003MNRAS.343.1301P,2004A&A...426..985V, 2006MNRAS.373..836P,2009MNRAS.400..492I}). We have assumed a system inclination of 60\degr\ and a 1.4 \ensuremath{{\rm M}_{\odot}}\ star with a radius of 10.3 km rotating at the pulsar frequency of 245 Hz. To model the emission anisotropy, we have chosen the angular dependence of the emitted radiation in the form $I(\alpha)=I_0(1-h \cos \alpha)$, where $h$ is the anisotropy parameter and $\alpha$ is the angle between the spot normal and the photon direction in the spot frame. For blackbody radiation $h=0$, while for Comptonization in a slab of Thomson optical depth of about unity $h\gtrsim0.5$ \citep*{2004A&A...426..985V,2007MNRAS.381..723I}, and for the observed AMP pulse profiles $h\sim0.7$--$0.8$ is required \citep{2003MNRAS.343.1301P,2009ApJ...706L.129P}. We generate two pulse profiles: one in the hard energy band, where only the Comptonization component contributes, and another in the soft band, which is a mixture of the blackbody and Comptonization components in equal proportions (we neglect changes of the Comptonization emissivity pattern with energy, which is a good approximation, see \citealt{2007MNRAS.381..723I}). We consider two geometries: a circular spot and a crescent spot, mimicking the shape obtained in MHD modelling \citep{2004ApJ...610..920R}. \begin{figure} \centerline{\epsfig{file=figlagmodel.eps,width=7cm}} \caption{The dependence of the time lags on the anisotropy parameter $h$, varied from 0.5 to 0.7. The pulse maximum lags and the lags of the fundamental and the overtone are shown as the solid, dotted and dashed curves, respectively. The dash-dotted line represents the pulse maximum lag between the blackbody and the Comptonization component lightcurves.
The contributions of the two components to the soft X-ray band are assumed to be equal, the spot colatitude $\theta$ was set to 15\degr, the power-law photon index of the continuum spectrum was assumed to be 1.9, and the angular radii of the blackbody and Comptonization emitting spots were assumed to be 30\degr\ and 15\degr, respectively. } \label{f:modellag} \end{figure} For both assumed spot geometries, we arrived at a similar conclusion. A rather small change of the anisotropy parameter (from 0.5 to 0.6--0.7) turns out to be the best candidate for the lag evolution, as it can reproduce the observed lag increase of $\sim$200 $\mu$s (see Fig. \ref{f:modellag}). On the other hand, a physically realistic change of the ratio between the Comptonized and blackbody emission in the ``soft'' light curve, a change of the relative size of the Comptonization and blackbody spots, a change in the spot size, or a change of the effective spot colatitude were not able to produce a lag evolution large enough to match the observed value. Additionally, we can compare the aforementioned time lags with the lags between the \textit{physical} components (the blackbody and Comptonization light curves). Very similarly to what is observed with the phase-resolved spectroscopy, we obtain ``component lags'' that are nearly two times larger than the lags computed from the mixture of the two components. For the case shown in Fig. \ref{f:modellag}, the ``component lag'' changes from $-$250 to $-$740 $\mu$s. A variation of $h$ could in principle be related to the decreasing optical depth of the Comptonizing slab (see Table \ref{t:fits}), which would allow more blackbody emission to contribute to the soft energy band. However, a decrease of $\tau_{\rm T}$ from 1.85 to 1.74 would result in only a $\sim$10 per cent larger blackbody flux, making it unlikely to account for the observed increase of the phase lag. To achieve a lag increase of 200 $\mu$s for a typical case shown in Fig.
\ref{f:modellag}, the ratio of the blackbody flux to the Comptonized one in the soft band should change by a factor of several, which contradicts the remarkably similar spectra observed during the outburst. This would mean that the intrinsic angular emissivity of the emitting plasma should not change much. However, the vertical structure of the accretion shock might change with variations of the accretion rate, causing changes in the emissivity pattern. Thus, if the anisotropy parameter $h$ varies during the outburst, it might indicate variations in the accretion shock structure. Alternatively, a physical displacement of the centroid of the Comptonization component (the accretion shock) relative to the blackbody emitting spot can cause the lag increase. \section{Conclusions} In this paper, we have analysed the data from IGR17511\ obtained by {\it RXTE}\ and {\it Swift}\ during its September--October 2009 outburst. For the largest part of the outburst the source was in the ``slow decay'' outburst stage. However, 24 days after the beginning of the outburst, right before the source faded beyond the {\it RXTE}\ detection limit (MJD 55112), a very sharp decrease of the source flux was detected, accompanied by a considerable phase shift of the pulse profile. This clearly indicates the end of the ``slow decay'' stage at that date. The energy spectra of IGR17511\ can be well described by Comptonization in the accretion shock ($\ensuremath{T_\mathrm{e}} \sim 30$ keV, $\tau_{\rm T} \sim 2$) and soft $\sim$1 keV blackbody emission originating from the NS surface. The {\it Swift}/XRT data allowed us to estimate the accretion disc parameters. We obtained an inner disc temperature of about 0.24 keV and reasonable inner disc radius estimates located within the corotation radius. We also detected weak reflection and an iron line ($EW\sim60$--$100$ eV). The spectral evolution was rather weak during the outburst.
Only at the very end of the outburst (after MJD 55105) did we detect a slight change in the optical depth $\tau_{\rm T}$ of the Comptonized emission; such spectral stability is known to be common for AMPs. The pulse profiles of the source were smooth and did not show any prominent narrow secondary features, which indicates that the observed emission comes mainly from one emitting spot. The amplitudes of the pulse harmonics dropped during the course of the outburst. The study of the phase lag behaviour revealed a considerable increase of the lag from 150 to 400 $\mu$s, an effect seen also in a few other AMPs. A change in the anisotropy pattern of the Comptonized radiation could explain the observed time lag evolution, implying changes in the accretion column geometry or a physical displacement of the centroid of the accretion shock relative to the blackbody spot. \section*{Acknowledgements} We thank Alessandro Patruno for helpful discussions. AI was supported by the EU FP6 Transfer of Knowledge Project ``Astrophysics of Neutron Stars'' (MTKD-CT-2006-042722). JJEK acknowledges the Finnish Graduate School in Astronomy and Space Physics and JP the Academy of Finland grant 127512.
\section{Introduction} The oscillation phenomenon between the neutron and the antineutron, $n \to {\bar n}$, was suggested in the early 1970s by Kuzmin \cite{Kuzmin:1970nx}. The first theoretical scheme for $n-{\bar n}$ oscillation was suggested in Ref. \cite{Mohapatra:1980qe}, followed by other models, e.g. \cite{Babu:2001qr,Berezhiani:2005hv,Babu}. Experimental observation of the transformation of a neutron into an antineutron, $n\to {\bar n}$, would be a demonstration of baryon number violation by two units, from $B=+1$ for the neutron to $B=-1$ for the antineutron. This would be an experimental demonstration that one of Sakharov's conditions \cite{Sakharov} required for the generation of the baryon asymmetry of the universe is indeed realized in nature. The $n\to {\bar n}$ conversion has not yet been observed experimentally. However, this does not exclude the possibility that it is a rare/suppressed process. For a review of the present theoretical and experimental situation on $n-{\bar n}$ oscillation, see \cite{Phillips:2014fgb}. Apart from the (large) baryon-conserving Dirac mass term $m_n \overline{n} n$, the neutron may acquire a small Majorana mass term, $\varepsilon_{n{\bar n} } (n C n + {\rm h.c.})$, which violates baryon number by two units and induces the neutron--antineutron mass mixing. Since the neutron is a composite particle, $n-{\bar n}$ mixing can be induced by the effective six-fermion operators involving the first family quarks $u$ and $d$: % \be{nn} {\cal O}_9 = \frac{1}{{\cal M}^5} (udd udd ) \end{equation} where ${\cal M}$ is some large mass scale of new physics beyond the Standard Model. These operators can have different convolutions of the Lorentz, color and weak isospin indices, which we do not specify. (More generally, having in mind that all quark families can be involved, such operators can induce mixing phenomena also for other neutral baryons, e.g. between the hyperon $\Lambda$ and the anti-hyperon $\bar\Lambda$.) The models of Refs.
\cite{Mohapatra:1980qe,Babu:2001qr,Berezhiani:2005hv,Babu} are just different field-theoretical realizations of the operators (\ref{nn}). Taking the matrix elements of these operators between the neutron and antineutron states, one can estimate the neutron Majorana mass, modulo Clebsch factors, as \be{dm} \varepsilon_{n{\bar n} } \sim \frac{\Lambda_{\rm QCD}^6}{{\cal M}^5} \sim \left(\frac{1 \, {\rm PeV} }{{\cal M}}\right)^5 \times 10^{-25} \, {\rm eV} \; . \end{equation} The coefficients of the matrix elements $\langle {\bar n} \vert uddudd \vert n\rangle$ for different Lorentz and color structures of the operators (\ref{nn}) were studied in Ref. \cite{Rao}; we do not concentrate on these particularities here and take them as $O(1)$ factors. Concerning the experimental limits on the $n-{\bar n}$ oscillation time $\tau_{n{\bar n} } = 1/\varepsilon_{n{\bar n} }$, the direct limit on free neutron oscillations implies $\tau_{n{\bar n} } > 0.86 \times 10^8$~s \cite{Grenoble}. The nuclear stability limits, with uncertainties in the evaluation of the nuclear matrix elements, translate into $\tau_{n{\bar n} } > 1.3 \times 10^8$~s \cite{Soudan} and $\tau_{n{\bar n} } > 2.7 \times 10^8$~s \cite{SK2015}. The latter implies the strongest upper limit on $n-{\bar n}$ mixing, $\varepsilon_{n{\bar n} } < 2.5 \times 10^{-24}$~eV. The future long-baseline direct experiment at the European Spallation Source (ESS) can reach a sensitivity down to $10^{-25}$~eV and thus improve the existing limits on the $n-{\bar n}$ oscillation time by more than an order of magnitude \cite{Phillips:2014fgb}. One can consider a situation when baryon number $B$ is broken not explicitly but spontaneously. Such a baryon symmetry can be global or local, with different physical implications. The possibility of spontaneous violation of the global lepton symmetry, after which the neutrinos can get non-zero Majorana masses, is widely discussed in the literature.
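As a quick numerical cross-check of the estimate above and of the conversion between a mixing energy and an oscillation time, the following sketch (assuming a representative $\Lambda_{\rm QCD}=0.2$ GeV, $\hbar = 6.58\times10^{-16}$ eV s, and $O(1)$ Clebsch factors; function names are ours) reproduces both the $\sim 10^{-25}$ eV scale for ${\cal M}\sim1$ PeV and the $\tau_{n{\bar n}}\approx 2.6\times10^8$ s corresponding to the quoted mixing bound:

```python
# Order-of-magnitude check: eps ~ Lambda_QCD^6 / M^5, and tau = hbar/eps.
HBAR_EV_S = 6.582e-16          # hbar in eV*s
LAMBDA_QCD_GEV = 0.2           # assumed representative QCD scale, GeV

def eps_nnbar_ev(M_gev):
    """n-nbar Majorana mass from the D=9 operator, in eV (Clebsch factors ~ 1)."""
    return LAMBDA_QCD_GEV**6 / M_gev**5 * 1e9   # GeV -> eV

def tau_seconds(eps_ev):
    """Oscillation time tau = hbar / eps."""
    return HBAR_EV_S / eps_ev

eps = eps_nnbar_ev(1e6)        # M = 1 PeV = 1e6 GeV
print(f"eps(M = 1 PeV) ~ {eps:.1e} eV")                 # ~1e-25 eV, up to O(1) factors
print(f"tau for eps = 2.5e-24 eV: {tau_seconds(2.5e-24):.1e} s")  # ~2.6e8 s
```

The second line confirms that the Super-Kamiokande bound $\varepsilon_{n{\bar n}} < 2.5\times10^{-24}$ eV is the same statement as $\tau_{n{\bar n}} > 2.7\times10^8$ s, up to rounding.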
As a result, a Goldstone boson should appear in the particle spectrum, called the Majoron \cite{Chikashige:1980ui}. Spontaneous violation of the global baryon number in connection with the Majorana mass of the neutron was first discussed in Ref. \cite{Barbieri}, in the context of the Mohapatra--Marshak model for $n-{\bar n}$ oscillation \cite{Mohapatra:1980qe}. Recently the discussion was revived by one of us in Ref. \cite{baryo-majoron}, where a seesaw model for the $n-{\bar n}$ transition with low-scale spontaneous violation of baryon number was also suggested. The associated Goldstone particle, the baryo-majoron, can have observable effects in neutron to antineutron transitions in nuclei or dense nuclear matter. The low-scale baryo-majoron model \cite{baryo-majoron} has many analogies with the low-scale Majoron model for the neutrino masses \cite{Berezhiani:1992cd}. By extending baryon number to $B-L$ symmetry, the baryo-majoron can be identified with the ordinary majoron associated with the spontaneous breaking of lepton number, with interesting implications for neutrinoless $2\beta$ decay with majoron emission \cite{Georgi:1981pg}, for matter-induced effects of neutrino decay \cite{Berezhiani:1987gf} and for the Majoron field effects in the early Universe \cite{Bento:2001xi}. In this paper we discuss a situation when baryon number is related to a local gauge symmetry. The idea to describe the conservation of baryon number $B$ and lepton number $L$, similarly to the conservation of electric charge, by introducing gauge symmetries $U(1)_B$ and $U(1)_L$, i.e. in terms of baryon or lepton charges coupled to massless vector fields of leptonic or baryonic photons with tiny coupling constants, was suggested a long time ago \cite{OkunL&Q}. Their effects on neutron oscillations were studied in Refs. \cite{Lamoreaux}. Nowadays the limits on such interactions are very stringent.
The best limits on the coupling strength of baryonic and leptonic photons were obtained from E\"{o}tv\"{o}s-type experiments testing the equivalence principle~\cite{Adelberger}. The common-sense argument here was that the coupling of such photons is many orders of magnitude weaker than the gravitational interaction between baryons or leptons, and therefore such photons are likely non-existent. We will try to revise this concept. Since baryon number $B$ and lepton number $L$ separately are not conserved due to non-perturbative effects, it is difficult to promote them to gauge symmetries without altering the particle content of the Standard Model, i.e. without introducing new exotic particles. Therefore, we discuss not baryonic and leptonic photons separately, but the vector fields associated with the $U(1)_{B-L}$ gauge symmetry. In the Standard Model this symmetry is anomaly free and the $B-L$ current is conserved at the perturbative as well as the non-perturbative level. On the other hand, the existence of neutron--antineutron oscillation or other similar phenomena would imply that this gauge symmetry, if it exists, should be spontaneously broken. $n-{\bar n}$ mixing cannot be induced without violating $B$ and thus $B-L$, which should also render the $B-L$ baryophoton massive. However, if its gauge coupling constant is very small, such a baryophoton can remain extremely light and mediate observable long-range forces (a fifth force) between material bodies. Clearly, such $B-L$ baryophotons couple with opposite charges not only the baryons and anti-baryons (and the leptons and anti-leptons), but also the baryons and leptons. Thus, the $B-L$ charge of the neutral hydrogen atom is zero, while the $B-L$ charge of heavier neutral atoms is determined by the number of neutrons in the nuclei. Therefore, regular matter built of nuclei heavier than hydrogen is $B-L$ charged.
In principle, at the scale of the universe the $B-L$ charge might be compensated by the relic neutrino component, which so far remains experimentally undetected. \section{Experimental limits on B-L photons} The baryophoton $b_\mu$ associated with the $U(1)_{B-L}$ gauge symmetry interacts with the fermion (neutron, proton, electron and neutrino) currents as $g b_\mu (\overline{n} \gamma^\mu n + \overline{p} \gamma^\mu p - \overline{e} \gamma^\mu e - \overline{\nu} \gamma^\mu \nu)$. As far as the existence of neutron--antineutron mixing implies the violation of $U(1)_{B-L}$, this gauge boson cannot remain exactly massless. In particular, since the $D=9$ effective operators (\ref{nn}) are now forbidden by the $U(1)_{B-L}$ symmetry, they can be replaced by the effective $D=10$ operators \be{chi} \frac{\chi}{M^6}(udd udd) \end{equation} involving the complex scalar field $\chi$ bearing two units of $B-L$ number, $Q_\chi=-2$. Its vacuum expectation value (VEV) $\langle \chi \rangle = \upsilon_\chi$ spontaneously breaks $U(1)_{B-L}$. By substituting $\chi \to \langle \chi \rangle$ in (\ref{chi}), the operator (\ref{chi}) reduces to (\ref{nn}) with ${\cal M}^5 = M^6/\upsilon_\chi$, and thus the induced $n-{\bar n}$ mixing mass can be estimated as \be{dm-chi} \varepsilon_{n{\bar n} } \sim \frac{\upsilon_\chi \Lambda_{\rm QCD}^6}{M^6} \sim \left(\frac{\upsilon_\chi}{1 \, {\rm keV} }\right) \left(\frac{10 \, {\rm TeV} }{M}\right)^6 \times 10^{-25} \, {\rm eV} \; . \end{equation} Therefore, taking the scale $M$ to be as large as 10 TeV, the neutron--antineutron oscillation can be within the experimental reach of the ESS, i.e. $ \varepsilon_{n{\bar n} } > 10^{-25}$~eV, if $\upsilon_\chi > 1$~keV or so. We take this as a benchmark value for the $B-L$ symmetry breaking scale. If the scale $M$ is taken ad extremis as small as $M\sim 1$~TeV, then one would get $\upsilon_\chi \sim 1$~meV.
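The benchmark formula above can be checked numerically. In the sketch below (function name is ours; Clebsch factors are again taken as $O(1)$) the two quoted points are reproduced: $\upsilon_\chi = 1$ keV with $M = 10$ TeV gives $\varepsilon_{n{\bar n}}\sim10^{-25}$ eV, and the same mixing with $M = 1$ TeV needs only $\upsilon_\chi \sim 1$ meV:

```python
def eps_nnbar_ev(v_chi_kev, M_tev):
    """n-nbar mixing induced by the D=10 operator after U(1)_{B-L} breaking,
    using the normalized benchmark formula of the text (up to O(1) factors)."""
    return 1e-25 * v_chi_kev * (10.0 / M_tev)**6

print(eps_nnbar_ev(1.0, 10.0))   # benchmark: v_chi = 1 keV, M = 10 TeV -> 1e-25 eV
print(eps_nnbar_ev(1e-6, 1.0))   # extreme:   v_chi = 1 meV, M = 1 TeV  -> 1e-25 eV
```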
However, such a tiny scale does not seem realistic: a huge hierarchy problem between the scale $\upsilon_\chi \sim 1$~meV and the electroweak scale $\sim 100$~GeV would be a headache. More importantly, it is very unlikely that the violation of $B-L$ at such a small $\upsilon$, which is just about 3~K (the cosmic microwave background (CMB) temperature today), can be relevant for primordial baryogenesis. The possibility of a very small $\upsilon$ is excluded in the realistic models which we discuss later.\footnote{The lower limit on the $U(1)_{B-L}$ breaking scale becomes even more stringent if instead of the $D=10$ operators one introduces the $D=11$ operator $\frac{\eta^2}{M^7}(udd udd)$ involving a scalar $\eta$ with the charge $Q=-1$. Then for achieving e.g. $ \varepsilon_{n{\bar n} } \sim 10^{-24}$~eV, it should have a larger VEV, $\upsilon_\eta \sim 10$~keV, even if $M\sim 1$~TeV. } In principle, the VEVs $v_i$ of other scalars $\eta_i$ with non-zero $B-L$ charges $Q_i$ can also participate in breaking $U(1)_{B-L}$. As a result, the baryophoton should acquire the mass \begin{equation}\label{eq:7} M_b= 2 \sqrt 2 g \upsilon, \quad \quad \upsilon = \big[ \upsilon_\chi^2 + (Q_i/Q_\chi)^2 v_i^2 \big]^{1/2} \geq \upsilon_\chi \end{equation} where $g$ is the gauge coupling constant of the baryophoton. Thus, the value of $\upsilon_\chi$ defines the minimal possible value of $M_b$ for a given constant $g$. If there are other scalar fields $\eta_i$ with non-zero $Q=B-L$ charges and non-zero VEVs, their contribution would make $M_b$ larger. Therefore, baryophotons should mediate a Yukawa-like fifth force between material bodies. Vector boson $b_\mu$ exchange induces a spin-independent potential energy of interaction between a test particle with $B-L$ charge $Q_i$, in our case the neutron or antineutron, and an attractor (a massive body such as the
Earth or the sun) with overall $B-L$ charge $Q_A$: \begin{equation}\label{eq:6} V_i = \alpha_{B-L}\frac{Q_i Q_A}{r}\, e^{-r/\lambda}, \quad \quad \quad \lambda = \frac{1}{M_b} \simeq \left(\frac{10^{-49}}{\alpha_{B-L} }\right)^{1/2} \left(\frac{1~\rm keV}{\upsilon }\right) \times 0.6 \cdot 10^{16} ~ {\rm cm} \end{equation} where $\alpha_{B-L} = g^2/4\pi$, in addition to the gravitational potential energy $V_i^{\rm gr} = - G m_i M_A/r$, $G$ being the Newton constant. The overall $B-L$ charge of a gravitating body of mass $M_A$ is defined by its chemical composition: $Q_A \simeq Y_n M_A/m_n$, $m_n$ being the neutron mass. Due to electric neutrality, the numbers of protons and electrons are equal and thus their contributions cancel each other. Hence, the value of $Q_A$ is determined by the neutron fraction $Y_n$. In particular, $Q=0$ for hydrogen and $Q\approx 0.5$ for typical heavy nuclei. The maximal possible range of the Yukawa radius $\lambda$ for a given constant $g$ is limited by the minimal value of the symmetry breaking scale, $\upsilon = \upsilon_\chi$. If there are other scalar fields $\eta_i$ with different $Q=B-L$ charges and non-zero VEVs, their contribution would make the mass $M_b$ larger, and thus would shorten the range $\lambda$. The results of the torsion-balance tests of the weak equivalence principle from Ref.~\cite{Adelberger} can be interpreted as limits on fifth forces and, in particular, for the force mediated by the $B-L$ baryophotons, as limits on the dimensionless constant $\alpha_{B-L}$ for a given radius $\lambda$, as shown in Fig. 1. In contrast to universal gravity, the baryophoton exchange would induce potential energies of opposite sign for the neutron ($Q_n=1$) and the antineutron ($Q_{{\bar n} }=-1$), $V_{{\bar n} } = - V_{n}$.
It is convenient to relate the values $V_{n,{\bar n} }$ to the neutron (and antineutron) gravitational potential energy $V^{\rm gr} = -G m_n M_A/r$: \be{adelb} V_{n,{\bar n} } = \pm \tilde\alpha q_A e^{-r/\lambda} \times V^{\rm gr} \end{equation} introducing the dimensionless parameter $\tilde\alpha = \alpha_{B-L}/G m_n^2$, with $q_{A} = Q_A/(M_A/m_n)$ being the massive object's $B-L$ charge per neutron mass unit. The upper limits on the parameter $\tilde\alpha$ as a function of the radius $\lambda$ are given in Fig. 6 of Ref. \cite{Adelberger} (there these values are normalized per atomic mass unit, 1~amu $= 0.99\, m_n$). In Fig. 1 these limits of Ref. \cite{Adelberger} are shown directly translated into limits on $\alpha_{B-L}$. As we see, if the Yukawa radius is larger than the Earth diameter, $\lambda > 10^{9}$~cm or so, then the upper limit on $\alpha_{B-L}$ becomes practically independent of $\lambda$ and corresponds to $\alpha_{B-L} < 10^{-49}$, or $\tilde\alpha < 1.7\times 10^{-11}$ \cite{Adelberger}. \begin{figure}[t] \centering \includegraphics[width=400pt, trim=40 340 40 60, clip=true]{Fig1.pdf} \caption{Limits on the $B-L$ dimensionless interaction constant (see text). Different values of the VEV $\upsilon_\chi$ responsible for generating the baryophoton mass are shown for the sake of demonstrating the possible scale of the mechanism. }\label{fig:1} \end{figure} Now we can estimate the neutron potential energy $V_n$ produced by the baryophoton potentials of the Earth, the sun and the Galaxy, relative to the corresponding gravitational potential energies. The Earth induces a gravitational potential energy for the neutron at its surface of magnitude $|V^{\rm gr}_E| = m_n |\phi_{\rm gr}| = Gm_n M_\oplus/R_\oplus \approx 0.66$ eV. The sun gives a bigger contribution, $|V^{\rm gr}_S| = Gm_n M_\odot/{\rm AU} \approx 10$ eV.
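The numbers just quoted follow directly from standard values of the constants. A minimal sketch (assuming standard values for $G$, $\hbar c$, the neutron mass and the astronomical parameters; variable names are ours) reproduces both $\tilde\alpha \approx 1.7\times10^{-11}$ for $\alpha_{B-L}=10^{-49}$ and the gravitational potential energies:

```python
# Dimensionless coupling tilde_alpha = alpha_{B-L}/(G m_n^2) and the neutron's
# gravitational potential energies |V^gr| at the Earth's surface and at 1 AU.
G = 6.674e-11          # m^3 kg^-1 s^-2
HBAR_C = 3.1615e-26    # J*m
M_N = 1.675e-27        # neutron mass, kg
M_N_EV = 939.565e6     # neutron rest energy, eV
C2 = 8.988e16          # c^2, m^2/s^2

alpha_grav = G * M_N**2 / HBAR_C            # gravitational analogue, ~5.9e-39
tilde_alpha = 1e-49 / alpha_grav            # ~1.7e-11 for alpha_{B-L} = 1e-49

V_earth = G * 5.972e24 / (6.371e6 * C2) * M_N_EV    # |V^gr_E| ~ 0.66 eV
V_sun   = G * 1.989e30 / (1.496e11 * C2) * M_N_EV   # |V^gr_S| ~ 10 eV

print(f"tilde_alpha = {tilde_alpha:.2e}, |V_E| = {V_earth:.2f} eV, |V_S| = {V_sun:.1f} eV")
```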
Finally, the Galaxy itself induces an even bigger value, $V^{\rm gr}_G \sim 1$ keV.\footnote{ Notice that we are dealing with the gravitational potentials, which fall as $\propto 1/r$, and not with the gravitational forces testable by torsion-balance experiments. The latter are $\propto 1/r^2$ and their hierarchy between the Earth, the sun and the Galaxy is reordered in the opposite way. This is the reason why, for $\lambda$ exceeding the Earth diameter, the experimental limits of Ref. \cite{Adelberger} become independent of $\lambda$. } Since the Earth is built of heavy nuclei, the $B-L$ charge of the Earth is approximately 50\% of the number of baryons in it, i.e. $q_E \simeq 0.5$. The sun dominantly consists of hydrogen, which has vanishing $B-L$, and thus its fifth force is essentially determined by the mass fraction of heavier nuclei (helium, etc.) with $Y_n \simeq 0.5$. Therefore, $q_{S} \simeq 0.13$, as one can estimate from the known chemical composition of the sun~\cite{Suncomp}. The same applies to the Milky Way contribution, $q_G \simeq 0.13$. Thus, assuming that $\lambda$ is larger than the Earth diameter, $\lambda > 2R_\oplus$, the value of $V_n$ at the surface of the Earth can be estimated as:\footnote{We neglect the annual modulation of $V^{\rm gr}_S$ due to the small variation of the sun--Earth distance, as well as the potentials induced by other planets and the Moon. The latter could also be responsible for a time variation of the total potential. We also neglect contributions from neighboring galaxies and galaxy clusters, since they are exponentially suppressed for the relevant values of $\lambda$. } \begin{equation}\label{eq:8} {V_n} = \tilde \alpha \times \big( 0.5 \, V_E^{\rm gr} e^{-R_\oplus/\lambda} + 0.13\, V_S^{\rm gr} e^{-1~{\rm AU}/\lambda} + 0.13\, V_G^{\rm gr} e^{-10~{\rm kpc}/\lambda} \big ) \end{equation} From (\ref{eq:6}), taking $\alpha_{B-L} = 10^{-49}$, i.e.
$\tilde\alpha = 1.7 \times 10^{-11}$, we see that for our benchmark value $\upsilon_\chi = 1$~keV we obtain $\lambda \sim 10^{16}$~cm, which is much larger than the sun--Earth distance (1 AU $\approx 1.5 \times 10^{13}$~cm). Therefore, in this case the contributions of the Earth and the sun to $V_n$ can be as large as $0.56\times 10^{-11}$~eV and $2.2\times 10^{-11}$~eV, respectively, amounting in total to $2.8 \times 10^{-11}$~eV. For $\lambda$ smaller than the Earth diameter, larger values of $\alpha_{B-L}$ are allowed (see Fig. 1), but the available volume of the source drops as $(\lambda/R_\oplus)^3$ and thus the upper limit on $V_n$ sharply decreases. It is interesting to ask how large the values of $\lambda$ can be and how large a potential $V_n$ can be induced by the Galaxy. For the benchmark value $\upsilon_\chi = 1$~keV, we see from (\ref{eq:6}) that for having $\lambda > 20~{\rm kpc} = 6\times 10^{22}$~cm one has to take $\alpha_{B-L} < 10^{-62}$ or so, in which case the baryophoton-induced potential will be less than $10^{-21}$~eV and therefore would have no influence on the experimental search for $n-{\bar n}$ oscillations. Even taking the value of the VEV as small as $\upsilon_\chi = 1$~meV, i.e. at its extreme dictated by the value $\varepsilon_{n{\bar n} } \sim 10^{-24}$~eV from the operators (\ref{chi}) with $M = 1$~TeV, we find that $\lambda > 10~{\rm kpc} = 3\times 10^{22}$~cm requires $\alpha_{B-L} < 6\times 10^{-51}$ or so. In this marginal situation, the Galaxy contribution to $V_n$ could amount up to $10^{-10}$~eV. In any case, the contribution of more distant objects such as neighboring galaxies and galaxy clusters is exponentially suppressed, since a very small $B-L$ breaking scale, $\upsilon_\chi < 1$~meV, is not of interest.
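The Yukawa range and the resulting neutron potential can be sketched together (a minimal numerical sketch of eqs. (\ref{eq:6}) and (\ref{eq:8}), using the charges $q$ and the $|V^{\rm gr}|$ magnitudes from the text; function names are ours):

```python
import math

def lambda_cm(alpha_BL, v_chi_kev):
    """Yukawa range of the baryophoton, following eq. (6) of the text."""
    return (1e-49 / alpha_BL)**0.5 * (1.0 / v_chi_kev) * 0.6e16

def V_n_ev(tilde_alpha, lam_cm):
    """Neutron potential energy, eq. (8), with q_E=0.5, q_S=q_G=0.13
    and |V^gr| = 0.66 eV (Earth), 10 eV (sun), 1 keV (Galaxy)."""
    R_E, AU, KPC10 = 6.4e8, 1.5e13, 3.1e22       # cm
    return tilde_alpha * (0.5 * 0.66 * math.exp(-R_E / lam_cm)
                          + 0.13 * 10.0 * math.exp(-AU / lam_cm)
                          + 0.13 * 1e3 * math.exp(-KPC10 / lam_cm))

lam = lambda_cm(1e-49, 1.0)      # benchmark point: 6e15 cm >> 1 AU
print(f"lambda = {lam:.1e} cm, V_n = {V_n_ev(1.7e-11, lam):.1e} eV")   # ~2.8e-11 eV
```

At the benchmark point the galactic term is killed by the exponential, and the Earth plus sun terms reproduce the $2.8\times10^{-11}$ eV total quoted above.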
\section{$n-{\bar n} $ oscillation in the presence of $B\!-\!L$ fifth force } The non-relativistic Hamiltonian that describes $n-{\bar n}$ oscillation in the presence of the fifth force and magnetic fields can be presented as a $4\times 4$ matrix acting on the state vector $(n_+,n_-,{\bar n} _+, {\bar n} _-)$ describing the neutron and antineutron states with two spin polarizations:\footnote{Let us recall that CPT invariance implies that the neutron and antineutron must have exactly equal masses and magnetic moments of opposite sign. } \begin{equation}\label{eq:1} H = \left(\begin{array}{cc} m_n(1 - \phi_{\rm gr}) + V_n + \mu_n B \sigma_3 & \varepsilon_{n{\bar n} } \\ \varepsilon_{n{\bar n} } & m_n(1 - \phi_{\rm gr}) + V_{{\bar n} } - \mu_n B \sigma_3 \end{array}\right) , \end{equation} where $\mu_n = -6 \times 10^{-12}$ eV/G is the magnetic moment of the neutron, $B$ is the magnetic field and $\sigma_3$ is the third Pauli matrix, the spin quantization axis being chosen along the direction of the magnetic field. In this basis there is no spin precession and the Hamiltonian (\ref{eq:1}) splits into independent blocks for the two polarizations. Omitting the universal terms and taking $V = V_n = -V_{{\bar n} }$, it can be rewritten as \be{mat44} H_I = \PM{ V - \Omega_B & 0 & \varepsilon_{n{\bar n} } & 0 \\ 0 & V + \Omega_B & 0 & \varepsilon_{n{\bar n} } \\ \varepsilon_{n{\bar n} } & 0 & - V + \Omega_B & 0 \\ 0 & \varepsilon_{n{\bar n} } & 0 & - V - \Omega_B } . \end{equation} where $\Omega_B = \vert \mu_n B \vert = 6\cdot 10^{-12} (B/1\,{\rm G})$~eV is the Zeeman energy shift induced by the magnetic field.
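The block structure of this Hamiltonian can be verified numerically. In the sketch below (illustrative values in arbitrary units with $\hbar=1$; variable names are ours) the $4\times4$ matrix is evolved exactly and the resulting $n_+\to{\bar n}_+$ probability is compared with the two-level formula $P = \varepsilon^2/(\varepsilon^2+\Delta^2)\,\sin^2(t\sqrt{\varepsilon^2+\Delta^2})$ with $\Delta = V-\Omega_B$:

```python
import numpy as np

eps, V, Om, t = 1.0, 1.5, 0.7, 0.9           # illustrative values, hbar = 1

# 4x4 Hamiltonian in the basis (n+, n-, nbar+, nbar-)
H = np.array([[ V - Om, 0.0,     eps,      0.0    ],
              [ 0.0,    V + Om,  0.0,      eps    ],
              [ eps,    0.0,    -V + Om,   0.0    ],
              [ 0.0,    eps,     0.0,     -V - Om ]])

w, U = np.linalg.eigh(H)                     # exact diagonalization
evol = U @ np.diag(np.exp(-1j * w * t)) @ U.conj().T
P_plus = abs(evol[2, 0])**2                  # <nbar,+| exp(-iHt) |n,+>

Delta = V - Om                               # Delta_+ for the + polarization
P_analytic = eps**2 / (eps**2 + Delta**2) * np.sin(t * np.sqrt(eps**2 + Delta**2))**2
print(P_plus, P_analytic)                    # the two values agree
```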
In the general case, with $V_n$ and $\Omega_B$ both non-zero, the $n$ and ${\bar n}$ oscillation probabilities are different for the $+$ and $-$ polarization states: \begin{equation}\label{eq:3} P^{\pm}_{n{\bar n} }(t) =\frac{{\varepsilon_{n{\bar n} }}^2}{\varepsilon_{n{\bar n} }^2+ \Delta_{\pm}^2} \sin^2\left(t \sqrt{\varepsilon_{n{\bar n} }^2+ \Delta_{\pm}^2 } \right), \quad\quad \Delta_{\pm} = V \mp \Omega_B \end{equation} where $t$ is the neutron free-flight time. In realistic experimental conditions $t$ cannot be very large: e.g. it was $\sim 0.1$~s in the experiment \cite{Grenoble}, it can be up to $\sim 1$~s in the experimental setup for cold neutrons at the ESS, and in principle it could reach $\sim 10$~s in experiments where the neutrons fall vertically down a deep mine. Let us first discuss the case when the fifth force is absent, $V=0$, and only the magnetic field contribution remains, $\Delta_{\pm}^2 = \Omega_B^2$. Then the neutron oscillation probabilities of the $+$ and $-$ polarization states are equal, \begin{equation}\label{eq:3a} P^{\pm}_{n{\bar n} }(t) = \frac{{\varepsilon_{n{\bar n} }}^2}{\varepsilon_{n{\bar n} }^2+ \Omega_{B}^2} \sin^2\left(t \sqrt{\varepsilon_{n{\bar n} }^2+ \Omega_{B}^2 } \right), \end{equation} and for the oscillation to be effective during a time $t$, the magnetic field should be suppressed to the needed degree. Namely, if $\Omega_B t \gg 1$, the oscillations average out in time and one gets $P^{\pm}_{n{\bar n} } = \varepsilon_{n{\bar n} }^2/2\Omega_B^2$. However, for small free-flight times, $t < 1$~s, the magnetic field can be suppressed so that $\Omega_B t < 1$.
The argument of the sine is then small and the oscillation probability $P(t)$ becomes practically independent of $\Omega_B$: \begin{equation}\label{eq:4} P \approx \left(\varepsilon_{n{\bar n} } t \right)^2= \left(t/\tau_{n{\bar n} } \right)^2 \end{equation} The latter condition is known as the ``quasi-free'' condition. The needed level of magnetic field suppression depends on the neutron free-flight time in the experimental conditions. For $t=0.1$~s, the condition $\Omega_B < t^{-1}$ implies $\Omega_B < 10^{-15}$~eV, and thus $B < 10^{-4}$~G. Suppressing the fields to the level of 1 nT would be sufficient for realistic future experimental times of order 1 s. \begin{figure} \centering \includegraphics[width=400pt, trim=40 340 40 100, clip=true]{Fig2.pdf} \caption{Potential energy $V_n$ of the neutron in the $B-L$ field of the sun and the Earth. The region of potentials $V_n$ above the red curve is excluded by the torsion-balance experiment \cite{Adelberger}.}\label{fig:2} \end{figure} Let us now consider the case with non-zero $V_n$. From (\ref{eq:6}), taking $\alpha_{B-L} = 10^{-49}$, we see that for our benchmark value $\upsilon_\chi = 1$~keV we obtain $\lambda \sim 10^{16}$~cm, which is much larger than the sun--Earth distance (1 AU $= 1.5 \times 10^{13}$~cm). We see from Fig. 1 that for $\lambda > 1$ AU, $V_n$ can reach values up to $3\times 10^{-11}$~eV, equivalent to the $\Omega_B$ of a magnetic field $B \simeq 5$~G. This would lead to a strong suppression of $n-{\bar n}$ oscillation even if the magnetic field vanishes: the $n-{\bar n}$ oscillation would not be discovered at the ESS even if $\varepsilon_{n{\bar n} } > 10^{-24}$~eV. Therefore, to achieve the quasi-free condition for $n-{\bar n}$ oscillation, allowing the discovery of $n-{\bar n}$ conversion, the magnetic field should be tuned with a precision of a few nT to the resonance value, so that $\Omega_B = V_n$ with a precision of $10^{-16}$ eV or so.
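The field scales involved in the two regimes just discussed can be checked with a short sketch (order-of-magnitude estimates; $\hbar$ and $|\mu_n|$ as in the text, function names are ours). Note that the quasi-free bound is a coarse condition, so the exact prefactor depends on whether one writes $\Omega_B t < 1$ or $\Omega_B t < \hbar$ in the chosen units:

```python
HBAR = 6.582e-16      # hbar, eV*s
MU_N = 6e-12          # |mu_n|, eV/G

def quasi_free_B_gauss(t_s):
    """Field below which Omega_B * t < hbar, i.e. the quasi-free regime holds."""
    return HBAR / t_s / MU_N

def resonance_B_gauss(V_n_ev):
    """Field at which Omega_B = V_n, compensating the fifth-force potential."""
    return V_n_ev / MU_N

print(f"quasi-free: B < {quasi_free_B_gauss(0.1):.1e} G for t = 0.1 s")
print(f"quasi-free: B < {quasi_free_B_gauss(1.0):.1e} G for t = 1 s (1 nT = 1e-5 G is safe)")
print(f"resonance:  B = {resonance_B_gauss(3e-11):.1f} G for V_n = 3e-11 eV")
```

The last line reproduces the $B\simeq5$ G equivalent of the maximal $V_n$, and the $10^{-16}$ eV tuning precision corresponds to $10^{-16}/|\mu_n| \approx 1.7\times10^{-5}$ G, i.e. a few nT.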
Let us note that, since the oscillation probabilities of the $+$ and $-$ polarization states are different, see eq. (\ref{eq:3}), the resonance can occur only for one polarization. The levels of the potential energy $V_n$ corresponding to the quasi-free conditions for $n \to {\bar n} $ observation times $\Delta{t}=0.1$ and $1.0$ s are shown in Fig. 2, together with the $\Omega_B$ corresponding to a magnetic field of 1 nT. We see that $V_n$ can exceed the quasi-free condition limit in the range of $\lambda$ between $\sim 10^{4}$ and $10^{13}$ m. In this region the $n \to {\bar n} $ oscillation can be suppressed. However, tiny fifth forces have no effect on intranuclear $n \to {\bar n} $ transformations. One can envisage a scenario where $n \to {\bar n} $ is discovered in intranuclear transformations in large underground experiments although it is not observed in transformations with free neutrons at the corresponding level, e.g. at the ESS. This can be an indication that some extra potential, different for the neutron and the antineutron, is in play, which can be induced by the $B-L$ photons under consideration. This situation can be checked by applying, in free-neutron experiments, a magnetic field with programmed magnitude and direction along the whole neutron flight path, and by varying this field to find the resonance value at which it compensates the effect of the $B-L$-induced potential $V_n$. An example of such a variation of the magnetic field is shown in Fig. 3, assuming $V_n=10^{-12}$ eV.
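The compensation strategy can be illustrated with a toy scan (a sketch under our own assumptions: $V_n = 3\times10^{-11}$ eV from the earlier estimate, $\varepsilon_{n{\bar n}} = 10^{-24}$ eV, $t = 1$ s; the actual experimental procedure and field values are not specified in the text). The probability of eq. (\ref{eq:3}) for the favourable polarization is restored only near $\Omega_B = V_n$:

```python
import numpy as np

HBAR, MU_N = 6.582e-16, 6e-12      # eV*s, eV/G
eps, V, t = 1e-24, 3e-11, 1.0      # assumed eps (eV), V_n (eV), flight time (s)

def P_nnbar(B_gauss):
    """n->nbar probability for the + polarization, Delta = V - Omega_B."""
    Delta = V - MU_N * B_gauss
    w = np.sqrt(eps**2 + Delta**2)
    return eps**2 / w**2 * np.sin(w * t / HBAR)**2

B = np.linspace(4.0, 6.0, 200001)  # scan the applied field around V/mu_n
P = P_nnbar(B)
print(f"resonance near B = {B[np.argmax(P)]:.2f} G")     # ~5 G, since V/mu_n = 5 G
print(f"P(B=0) = {P_nnbar(0.0):.1e}  vs quasi-free (eps*t/hbar)^2 = {(eps*t/HBAR)**2:.1e}")
```

Away from the resonance the probability is suppressed by many orders of magnitude relative to the quasi-free value, which is the suppression discussed above.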
\begin{figure}[t] \centering \includegraphics[width=400pt, trim=40 340 40 60, clip=true]{Fig3.pdf} \caption{Variation of the external magnetic field that would compensate the suppressing effect of the $B-L$ potential (see text).}\label{fig:3} \end{figure} \section{Low scale seesaw model} Is it possible to build a consistent model in which baryon number, or $B-L$, is spontaneously broken at a rather low scale, in which case the baryophoton couplings to the neutron can have an effect on the laboratory search for $n-{\bar n} $ oscillation? One can discuss a simple seesaw-like scenario for the generation of the terms (\ref{chi}), along the lines suggested in Refs. \cite{Berezhiani:2005hv,baryo-majoron}. Let us introduce gauge-singlet Weyl fermions, ${\cal N}$ with $Q=-1$ and ${\cal N}'$ with $Q=1$. These two together form a heavy Dirac particle with a large mass $M_D$. Both ${\cal N}$ and ${\cal N}'$ can be coupled to the scalar $\chi$ ($Q_\chi=-2$) and get Majorana mass terms $\sim \langle \chi \rangle =\upsilon_\chi$ from the VEV of the latter. We also introduce a color-triplet scalar $S$ with mass $M_S$ and $Q=-2/3$, having precisely the same gauge quantum numbers as the right-handed down quark $d_{R}$. Consider now the Lagrangian terms \be{NNpr} S u d + S^\dagger d \, {\cal N} + M_D {\cal N} {\cal N}' + \chi^\dagger {\cal N}^2 + \chi {\cal N}^{\prime 2} + {\rm h.c.} \end{equation} In this way, the diagram shown in Fig. \ref{fig2}, after integrating out the heavy fermions ${\cal N} + {\cal N}'$, induces the $D=10$ operators (\ref{chi}), with $M^6 \sim M_D^2 M_S^4$. \begin{figure}[t] \begin{center} \vspace{-1.cm} \includegraphics[width=8cm]{Z4.pdf} \\ \vspace{-2.4cm} \includegraphics[angle=270,width=8cm]{NNpr-Dirac.pdf} \vspace{-1.2cm} \caption{ \label{fig2} The upper diagram generates $n - {\bar n}$ mixing in the low-scale baryo-majoron model via the exchange of the heavy Dirac fermion ${\cal N}+{\cal N}'$ when ${\cal N}$ and ${\cal N}'$ get small Majorana masses $\tilde{M}, \tilde{M}' \sim \langle \chi\rangle $.
In the presence of a mirror sector containing the twin quarks $u',d'$ coupled to ${\cal N}'$, the lower diagram would generate $n-n'$ mixing, which conserves the combination of baryon numbers $B-B'$, without an insertion of the $\chi$ field. } \end{center} \end{figure} Low-scale baryon number violation was suggested in Ref. \cite{Berezhiani:2005hv}, in a model which was mainly designed for inducing neutron--mirror neutron oscillation $n-n'$. This model treats the ${\cal N}$ and ${\cal N}'$ states symmetrically: their Majorana masses $\tilde{M}$ and $\tilde{M}'$ are equal, while in addition to the couplings (\ref{NNpr}), there are terms that couple ${\cal N}'$ to the $u', d'$ and $S'$ states from a hidden mirror sector with a particle content identical to that of the ordinary one (for a review, see e.g. \cite{Berezhiani:2003xm}). Hence, the lower diagram of Fig. \ref{fig2} induces the $D=9$ operator $(1/{\cal M}^5)\, udd\, u'd'd'$ with ${\cal M}^5 = M_D M_S^4$, and thus $n-n'$ mixing with % \be{nn'} \varepsilon_{nn'} \sim \frac{\Lambda_{\rm QCD}^6}{M_D M_S^4} \sim \left(\frac{10~\rm TeV}{{\cal M}} \right)^5 \times 10^{-15}~{\rm eV} \end{equation} which corresponds to an $n-n'$ oscillation time $\tau_{nn'} \sim 1$~s. Hence, in this case $n-n'$ mixing should be the dominant effect, since the two sectors share the common $Q=B-L$ between ordinary and mirror particles, while $n-{\bar n}$ mixing, which breaks $Q$, is suppressed by the small VEV $\upsilon_\chi$: \be{delta-ratio} \varepsilon_{n{\bar n} } \leq \frac{\upsilon_\chi}{M_D} \varepsilon_{nn'} \end{equation} Therefore, assuming that $\varepsilon_{nn'} < 10^{-15}$~eV and $M_D > 1$ TeV, for obtaining $\varepsilon_{n{\bar n} } > 10^{-25}$~eV one needs $\upsilon_\chi > 100$~eV. In this case the Galactic contribution to $V_n$ becomes irrelevant, but the possibility of having $\lambda < 1$~AU remains robust.
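The $n-n'$ estimate can be cross-checked in the same way as the $n-{\bar n}$ one (sketch with a representative $\Lambda_{\rm QCD}=0.2$ GeV and $O(1)$ factors; function name is ours): for ${\cal M}\sim10$ TeV the mixing is $\sim10^{-15}$ eV, i.e. an oscillation time of order one second, as quoted above.

```python
HBAR = 6.582e-16                  # hbar, eV*s
LAMBDA_QCD = 0.2                  # GeV, assumed representative value

def eps_nnprime_ev(M_gev):
    """n-n' mixing from the D=9 operator u d d u' d' d', up to O(1) factors."""
    return LAMBDA_QCD**6 / M_gev**5 * 1e9    # GeV -> eV

eps = eps_nnprime_ev(1e4)         # M ~ 10 TeV
print(f"eps_nn' ~ {eps:.1e} eV, tau_nn' ~ {HBAR/eps:.1f} s")   # ~1e-15 eV, ~1 s
```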
Let us remark that, since $n-n'$ mixing conserves $Q=B-L$, baryophotons interact symmetrically with ordinary and mirror neutrons, and thus should have no effect on $n-n'$ oscillation. As a matter of fact, $n-n'$ mixing can indeed be much larger than $n-{\bar n} $ mixing. Existing experimental limits on the $n-n'$ transition allow the neutron--mirror neutron oscillation time to be less than the neutron lifetime, with interesting implications for astrophysics and particle phenomenology \cite{Berezhiani:2005hv,nnpr}. \section{Conclusions} The neutron--antineutron transformation searched for with free neutrons can be suppressed by the presence of the vector field of baryophotons coupled to $B-L$ charges. Due to the assumed baryon number non-conservation, these photons should be massive, with mass in the range $10^{-11} - 10^{-21}$ eV. This corresponds to a region of the $B-L$ potential that is not excluded by experimental tests of the weak equivalence principle (WEP), so that it could suppress the free neutron $n \to {\bar n} $ transformation. However, if one learns from the nuclear instability searches that the $n \to {\bar n} $ transformation exists but is suppressed for free neutrons, then this suppression can in principle be removed by tuning the external magnetic field in the experiment. Weaker $B-L$ fields inducing a potential energy smaller than $10^{-16}$ eV, i.e. below the quasi-free condition limit, will in practice not be sensed by the $n \to {\bar n} $ transformation and therefore cannot be observed in this way. The STEP experiment (Satellite Test of the Equivalence Principle)~\cite{STEP}, proposed some years ago, claimed a WEP-testing sensitivity at the level of $10^{-18}$. The STEP mission was not pursued. The corresponding magnitude of the $B-L$ potential energy that could be excluded by the STEP test is also shown in Figure 2. Let us also comment on the possibility of a kinetic mixing of the $B-L$ photons with the regular QED photons.
Such mixing could make the equivalence principle tests potentially different for electrically neutral and charged objects; e.g., neutrons, and also neutrinos, having non-zero $B-L$, could acquire tiny electric charges. As a matter of fact, the considered $B-L$ potentials can have no effect on the oscillations between the three neutrinos $\nu_e, \nu_\mu$ and $\nu_\tau$, since their $B-L$ charges are equal, but they can be relevant for active-sterile neutrino (e.g. mirror neutrino) oscillations and can suppress them in certain situations. Also, the $B-L$ charge of the Earth would create a $B-L$ magnetic field due to the Earth's rotation. The question is whether this can lead to any observable effect. Concluding, if neutron--antineutron oscillation is discovered in free neutron oscillation experiments, this will imply limits on the $B-L$ photon coupling constant and interaction radius which are considerably stronger than the present limits from the tests of the equivalence principle. The potential $V$ induced by these forces can be excluded down to values of about $10^{-16}$ eV, independently of the interaction radius $\lambda$ of these baryophotons. Instead, if $n-{\bar n} $ oscillation is discovered via nuclear instability, but not in free neutron oscillations at the corresponding level, this would indicate the presence of a fifth force mediated by such baryophotons. \section{Acknowledgments} Z.B. and Y.K. thank Arkady Vainshtein for useful discussions. The work of A.A. and Z.B. was partially supported by the MIUR triennial grant for the Research Projects of National Interest PRIN 2012CPPYP7 ``Astroparticle Physics'', and the work of Y.K. was supported in part by US DOE Grant DE-SC0014558. This work was reported by Y.K. at the 3rd Workshop ``NNbar at ESS'', 27-28 August 2015, Gothenburg, Sweden. \bigskip {\bf Note Added:} After this work was completed, it was communicated to us by R.~N.~Mohapatra and K.~S.~Babu that they are preparing a work on a similar subject.
\section{Introduction} In quantum dot spectroscopy, rather simple, idealized theoretical approaches have been applied to discuss which confined interband optical transitions are formally allowed and which are formally forbidden. But one {\em expects} the simple rules not to work. Yet, the mechanisms for failure have only been assessed within extensions of the simple models. Understanding these mechanisms demands a high-level approach that naturally includes the complexity of the dots. Such approaches to the calculation of the optical properties are rare, with 8-band ${\bf k}\cdot{\bf p}$ calculations being among the most sophisticated approaches used so far. Here, we discuss the nature of confined transitions in lens-shaped (In,Ga)As/GaAs quantum dots by using an atomistic pseudopotential-based approach.\cite{zunger_pssb_2001,williamson_PRB_2000,wang_PRB_1999} Specifically, we study three mechanisms that turn transitions that are nominally forbidden in lower approximations into allowed transitions within more realistic approximations: (i) ``$2S$-to-$1S$'' and ``$2P$-to-$1P$'' ``crossed'' transitions allowed by finite band-offset effects and orbital mixing; (ii) transitions involving mixed heavy-hole and light-hole states, enabled by the confinement of light-hole states; and (iii) many-body configuration-mixing intensity enhancement enabled by the electron-hole Coulomb interaction. We also compare our results with those of 8-band ${\bf k}\cdot{\bf p}$ calculations. Our atomistic pseudopotential theory explains recent spectroscopic data.
\section{Interband optical spectrum of (In,Ga)As/GaAs dots} \subsection{Method of calculation} In our approach, the atomistic single-particle energies ${\cal E}_i$ and wave functions $\psi_i$ are solutions to the atomistic Schr\"odinger equation\cite{zunger_pssb_2001} \begin{equation} \label{schrodinger} \{-\frac{1}{2}\nabla^2+V_{SO}+\sum_{l,\alpha}\,v_{\alpha}({\bf r}-{\bf R}_{\,l,\alpha})\}\psi_i={\cal E}_{i}\,\psi_i , \end{equation} \noindent where $v_{\alpha}$ is a pseudopotential for an atom of type $\alpha$, with $l$-th site position ${\bf R}_{\,l,\alpha}$ in either the dot or the GaAs matrix. These positions are relaxed by minimizing the total elastic energy, consisting of bond-bending plus bond-stretching terms, via a valence force field functional.\cite{williamson_PRB_2000} This results in a realistic strain profile $\tilde\varepsilon({\bf R})$ in the nanostructure.\cite{pryor_JAP_1998} In addition, $v_{\alpha}$ depends explicitly on the isotropic component of the strain ${\rm Tr}[\tilde\varepsilon({\bf R})]$.\cite{kim_PRB_2002} $V_{SO}$ is a non-local (pseudo) potential that accounts for spin-orbit coupling.\cite{williamson_PRB_2000} In the single-particle approximation, the transition intensity for light polarized along $\hat{\bf e}$ is \begin{equation} \label{absorption.SP} I^{(SP)}(\omega;\hat{\bf e})=\sum_{i,j}\, |\langle\psi^{(e)}_i|\hat{\bf e}\cdot{\bf p}|\psi^{(h)}_{j}\rangle|^2\,\delta[\hbar\omega-{\cal E}^{(e)}_{i}+{\cal E}^{(h)}_{j}], \end{equation} \noindent where ${\bf p}$ is the electron momentum.\cite{cardona_book} In addition to the single-particle effects, many-particle effects cause each of the monoexciton states $\Psi^{(\nu)}(X^0)$ to be a mixture of several electron-hole pair configurations (Slater determinants) $e_ih_j$. Namely, \begin{equation} \label{X0.states} |\Psi^{(\nu)}(X^0)\rangle=\sum_{i,j}\,C^{\,(\nu)}_{i,j}|e_ih_j\rangle.
\end{equation} \noindent The coefficients $C^{\,(\nu)}_{i,j}$ are determined by the degree of configuration mixing allowed by the electron-hole Coulomb and exchange interactions.\cite{franceschetti_PRB_1999} This mixing is determined by the symmetry of the $e$-$h$ orbitals and by their single-particle energy separation. The many-body optical absorption for (incoherent) unpolarized light\cite{note_unpolarized} is given by \begin{equation} \label{absorption} I^{(MP)}(\hbar\omega)=\frac{1}{3}\sum_{\nu}\,\sum_{\hat{\bf e}=\hat{\bf x},\hat{\bf y},\hat{\bf z}} |M^{(\nu)}(\hat{\bf e})|^2\,\delta[\hbar\omega-E^{(\nu)}(X^0)], \end{equation} \noindent where $M^{(\nu)}(\hat{\bf e})=\langle\Psi^{(\nu)}(X^0)|\hat{\bf e}\cdot{\bf p}|0\rangle$. Thus, configuration mixing can make transitions that are forbidden in the single-particle single-band approximation become allowed in the many-particle representation of Eq. (\ref{absorption}) by borrowing oscillator strength from bright transitions. \begin{figure} \includegraphics[width=8.5cm]{./Fig_1.eps} \caption{{\label{Fig_1}}Optical absorption spectrum of $X^0$ in a lens-shaped In$_{\rm 0.6}$Ga$_{\rm 0.4}$As/GaAs quantum dot (base diameter $b=200\;${\AA}, height $h=20\;${\AA}) calculated (a) in the single-particle approximation [Eq. (\ref{absorption.SP})] under in-plane ($\hat{\bf e}\parallel [100]$; top) and out-of-plane ($\hat{\bf e}\parallel [001]$; bottom) polarization; and (b) at the many-particle level [Eq. (\ref{absorption})] for unpolarized light. Energy is shown relative to (a) the single-particle gap ${\cal E}^{\,(e)}_0-{\cal E}^{\,(h)}_0=1333\;{\rm meV}$ and (b) the ground-state energy $E^{(0)}(X^0)=1309\;{\rm meV}$ of $X^0$.} \end{figure} \subsection{Results} Figure \ref{Fig_1} shows our calculated single-particle [Eq. (\ref{absorption.SP}); Fig. \ref{Fig_1}(a)] and many-particle [Eq. (\ref{absorption}); Fig.
\ref{Fig_1}(b)] absorption spectrum of $X^0$ for a lens-shaped In$_{\rm 0.6}$Ga$_{\rm 0.4}$As/GaAs quantum dot with base diameter $b=200\;${\AA} and height $h=20\;${\AA} that confines two shells of electron states: $\{1S_e;1P_e\}$. The energy of the transitions is shown as a shift $\Delta {\cal E}$ from the single-particle exciton gap ${\cal E}^{(e)}_0-{\cal E}^{(h)}_0$ [in Fig. \ref{Fig_1}(a)] or the ground-state energy of the monoexciton $E^{(0)}(X^0)$ [in Fig. \ref{Fig_1}(b)]. Figure \ref{Fig_2.verynew} shows equivalent results for two dots with $b=252\;${\AA} and heights $h=20\;${\AA} and $35\;${\AA}, which confine two $\{1S_e;1P_e\}$ and three $\{1S_e;1P_e;1D_e+2S_e \}$ shells of electron states, respectively.\cite{narvaez_JAP_2005} As expected, we find nominally-allowed single-particle transitions, including (i) the fundamental transition $1S_{hh}$-$1S_{e}$ at $\Delta{\cal E}=0\;{\rm meV}$; (ii) the $1P_{hh}$-$1P_e$ transitions with energy shifts $\Delta {\cal E}\sim 75\;{\rm meV}$ and $65\;{\rm meV}$ for the dots with $b=200\;${\AA} and $252\;${\AA}, respectively; and (iii) the transitions $1D_{hh}$-$1D_e$ and $2S_{hh}$-$2S_e$ at $\Delta {\cal E}\sim 130\;{\rm meV}$ for the three-shell dot ($b=252\;${\AA}, $h=35\;${\AA}). Note that the underlying atomistic $C_{2v}$ symmetry of the circular-base lens-shaped dot splits the electron and hole $1P$ and $1D$ states into 3 and 5 levels, respectively, and causes these states to be a mixture of $L_z=\pm 1$ and $L_z=\pm 2$, respectively. [$L_z$ is the projection of the angular momentum along the cylindrical ($[001]$, out-of-plane) axis of the dot.] Thus, in contrast to the predictions of simplified models that assume $C_{\infty v}$ shape symmetry, transitions involving the $1P$ and $1D$ states are split into four and nine lines, respectively (Figs. \ref{Fig_1} and \ref{Fig_2.verynew}). We next discuss the {\em nominally-forbidden} transitions.
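For orientation, a spectrum such as Eq. (\ref{absorption.SP}) is a sum of delta functions weighted by dipole matrix elements; for plotting, each line is convolved with a narrow broadening. A schematic numpy sketch with placeholder energies and strengths (illustrative values only, not the pseudopotential results):

```python
import numpy as np

# Placeholder transition energies (meV) and strengths mimicking the level
# structure discussed in the text: 1S-1S at 0, split 1P-1P lines near 75 meV.
energies = np.array([0.0, 72.0, 75.0, 77.0, 80.0])
strengths = np.array([1.0, 0.3, 0.5, 0.4, 0.2])   # |<e|p|h>|^2, arbitrary units
sigma = 1.0                                       # Gaussian broadening (meV)

grid = np.linspace(-20.0, 120.0, 4001)
lines = np.exp(-0.5 * ((grid[:, None] - energies[None, :]) / sigma) ** 2)
spectrum = (lines / (sigma * np.sqrt(2.0 * np.pi))) @ strengths

# The integrated spectral weight equals the summed oscillator strength.
weight = spectrum.sum() * (grid[1] - grid[0])
print(weight)   # ~ 2.4
```

Because the normalized Gaussians each carry unit area, broadening redistributes but does not create intensity; only the matrix elements set the total weight.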
\begin{figure} \includegraphics[width=8.5cm]{./Fig_2.eps} \caption{{\label{Fig_2.verynew}}{\em Idem} Fig. \ref{Fig_1}(b) for two In$_{\rm 0.6}$Ga$_{\rm 0.4}$As/GaAs quantum dots that confine two (left panel) and three (right) shells of electron states, with heights $h=20\;${\AA} and $30\;${\AA}, respectively, and base $b=252\;${\AA}.} \end{figure} \subsubsection{Band-offset and orbital-mixing induced 1S-2S transitions} If the electron and hole envelope wave functions are identical, the envelope-function selection rules indicate that only $\Delta i=0$ ($i\rightarrow i$) transitions are allowed, as assumed e.g. in Refs. [\onlinecite{woggon_book,narvaez_PRB_2001,hawrylak_PRL_2000,findeis_SSC_2000,honester_APL_1999}]. This is the case in the single-band effective mass approximation if the confinement potentials (band offsets between dot and environment) for electron and hole are infinite. In contrast, we find a few $\Delta i\neq 0$ transitions with significant intensity: (i) $2S_{hh}$-$1S_e$ [Figs. \ref{Fig_1}(a) and \ref{Fig_2.verynew}(a)], which we find {\em below} $1P_{hh}$-$1P_e$; (ii) four transitions that involve the electron states $1P_e$ and hole states $2P_{hh}$ (also found in Ref. \onlinecite{vasanelli_PRL_2002}) and $2P_{hh}+1F_{hh}$ [Fig. \ref{Fig_1}(a)]; and (iii) the transitions $1S_{hh}$-$2S_e$, $2S_{hh}$-$1D_e$, and $1D_{hh}$-$2S_e$ in the three-shell dot (Fig. \ref{Fig_2.verynew}). There are two reasons why $\Delta i\neq 0$ transitions are allowed. First, in the case of {\em finite} band offsets or, equivalently, when the electron and hole wavefunctions are not identical, the condition $\Delta i=0$ is relaxed and transitions $j\rightarrow i$ may be allowed even in the effective-mass approximation. The latter happens to be the case in the work of Vasanelli {\em et al.} (Ref.
\onlinecite{vasanelli_PRL_2002}) in which the $2S_{hh}$-$1S_e$ and $2P_{hh}$-$1P_e$ transitions between confined electron and hole levels were found to have finite, non-negligible oscillator strength. Second, orbital mixing also makes such transitions allowed: For example, a dot made of a zinc-blende material and having a lens or cylindrical shape has the atomistic symmetry $C_{2v}$, while spherical dots have $T_d$ symmetry. In contrast, continuum-like effective-mass based theories for dots use artificially higher symmetries. In fact, the ability of the envelope-function approximation to recognize the correct point-group symmetry depends on the number $N$ of $\Gamma$-like bands used in the expansion.\cite{zunger_pssa_2002} $N=1$ corresponds to the ``particle-in-a-box'' or to the parabolic single-band effective mass approximation; $N=6$ corresponds to including the valence band maximum (VBM) states only; and $N=8$ corresponds to considering the VBM states plus the conduction band minimum. Higher values of $N$ have also been considered.\cite{14-band_k.p,richard_PRB_2004} In particular, (a) within the $N=1$ single-band effective-mass approximation one uses the symmetry of the {\em macroscopic shape} (lens, cylinder, pyramid, sphere) rather than the true {\em atomistic} symmetry. For example, for zinc-blende lenses and cylinders one uses $C_{\infty v}$ symmetry rather than the correct $C_{2v}$. (b) The 8-band ${\bf k}\cdot{\bf p}$ Hamiltonian assumes cubic ($O_h$) symmetry to describe the electronic structure of the dots;\cite{8-band_k.p} the resulting symmetry group is dictated by {\em both} the symmetry of the macroscopic shape and the cubic symmetry. In the case of a square-pyramid-shaped dot the symmetry is $C_{4v}$ rather than the correct $C_{2v}$.\cite{note_Td_symmetry} In single-band effective-mass approaches, a transition $i$-$j$ is allowed as long as the overlap between the respective envelope functions is non-zero.
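The role of finite offsets can be checked with a minimal one-dimensional toy model (a sketch with illustrative masses and depths, not the atomistic calculation): for identical electron and hole Hamiltonians the ``$2S$''-like hole state is exactly orthogonal to the ``$1S$''-like electron state, whereas a heavier hole mass yields a finite ``crossed'' overlap.

```python
import numpy as np

def well_states(mass, depth, n_grid=600, L=12.0, a=3.0):
    """Eigenstates of a 1D finite square well via finite differences
    (units with hbar^2/2m = 1 for mass = 1)."""
    x = np.linspace(-L, L, n_grid)
    dx = x[1] - x[0]
    V = np.where(np.abs(x) < a, 0.0, depth)
    # Tridiagonal -d^2/dx^2 scaled by 1/mass, plus the diagonal potential.
    H = (np.diag(np.full(n_grid, 2.0)) - np.diag(np.ones(n_grid - 1), 1)
         - np.diag(np.ones(n_grid - 1), -1)) / (mass * dx**2) + np.diag(V)
    _, vecs = np.linalg.eigh(H)
    return vecs  # columns are normalized eigenvectors, ordered by energy

ve = well_states(mass=1.0, depth=5.0)       # "electron"
vh_same = well_states(mass=1.0, depth=5.0)  # identical "hole"
vh_diff = well_states(mass=3.0, depth=5.0)  # heavier "hole"

# index 0: ground state ("1S"-like); index 2: second even state ("2S"-like)
o_same = abs(ve[:, 0] @ vh_same[:, 2])
o_diff = abs(ve[:, 0] @ vh_diff[:, 2])
print(o_same)   # ~ 0: orthogonal eigenstates of the same Hamiltonian
print(o_diff)   # finite: the "2S"-"1S" crossed transition is allowed
```

The finite overlap arises because the two particles penetrate the barrier differently, so their effective envelopes are no longer mutually orthogonal.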
For example, for spherical quantum dots one expects $S$-$S$ transitions to be allowed but not $D$-$S$ transitions. Yet, in the true point group symmetry of the {\em zinc-blende sphere} the highest occupied hole state has mixed $S+D$ symmetry, which renders the transition $1S_h$-$1D_e$ allowed,\cite{xia_PRB_1989,fu_PRB_1997} and similarly the ``$2S$-$1S$'' transition is allowed because the $2S_{hh}$ state also contains $1S_{hh}$ character.\cite{note_mixing} \begin{figure}[h] \includegraphics[width=8.5cm]{./Fig_3.eps} \caption{{\label{Fig_2.new}} (Color) (a) First (thick line) and second (thin line) strain-modified valence-band offsets along a line parallel to $[001]$ that pierces a lens-shaped In$_{\rm 0.6}$Ga$_{\rm 0.4}$As/GaAs quantum dot through its center. The dot size is $b=252\;${\AA} and $h=20\;${\AA}. Position is measured in units of the lattice parameter of GaAs ($a_{\rm GaAs}$) and the energies are relative to the GaAs VBM [$E_v({\rm GaAs})=-5.620\;{\rm eV}$]. (b) Wave functions of the first hole state with significant $lh$ character for different In$_{\rm 0.6}$Ga$_{\rm 0.4}$As/GaAs dots. Isosurfaces enclose $75\%$ of the charge density, while contours are taken at $1\;{\rm nm}$ above the base. The energy ${\cal E}^{\,(h)}_{j}-E_v({\rm GaAs})$ of the state appears in each panel.} \end{figure} \subsubsection{Strong light-hole--electron transitions} In {\em bulk} zinc-blende semiconductors the valence-band maximum is made of degenerate heavy-hole ($hh$, $|3/2,\pm 3/2\rangle$) and light-hole ($lh$, $|3/2,\pm 1/2\rangle$) states.\cite{bastard_monography} While {\em both} optical transitions $hh$-$\Gamma_{1c}$ and $lh$-$\Gamma_{1c}$ are polarized in the $\hat{\bf x}$-$\hat{\bf y}$ plane, only the latter transition exhibits polarization along $\hat{\bf z}$ ($\parallel [001]$). Under biaxial strain these $hh$ and $lh$ states split. In bulk, the relative energy of these states and their splitting depend on the strain: for compressive strain (e.g.
InAs on GaAs) the $lh$ is below the $hh$, while for tensile strain (e.g. GaAs on InAs) the $lh$ is above the $hh$.\cite{kent_APL_2002} In {\em quantum dots} the energy of the $lh$ states is unknown. More importantly, these states are generally assumed to be unconfined; so the $lh$-$\Gamma_{1c}$ transition is expected to be absent from spectroscopic data. Nonetheless, Minnaert {\em et al.}\cite{minnaert_PRB_2001} have speculated that despite the compressive strain in InAs/GaAs dots the $lh$ state is above the $hh$ states, while Ribeiro {\em et al.}\cite{ribeiro_JAP_2000} have suggested the presence of a $lh$-derived state {\em below} the $hh$ states by measuring photo-reflectance and photo-absorption in (In,Ga)As/GaAs dots. Adler {\em et al.}\cite{adler_JAP_1996} and Akimov {\em et al.}\cite{lh_CdSe/ZnSe} have also suggested the presence of $lh$-derived transitions in photoluminescence excitation (PLE) experiments in InAs/GaAs and CdSe/ZnSe self-assembled quantum dots, respectively. In addition, based on a 6-band ${\bf k}\cdot{\bf p}$ calculation, Tadi\'c {\em et al.}\cite{tadic_PRB_2002} have predicted that in disk-shaped InP/In$_{0.51}$Ga$_{0.49}$P dots the light-hole states are confined at the interface of the disk and become higher in energy than the heavy-hole states as the thickness of the disk is increased. We show in Fig. \ref{Fig_2.new}(a) the strain-modified valence-band offsets of a lens-shaped In$_{\rm 0.6}$Ga$_{\rm 0.4}$As/GaAs dot with $b=252\;${\AA} and $h=20\;${\AA}, calculated along a line normal to the dot base that pierces the dot through its center. The energy is presented relative to the GaAs VBM [$E_v({\rm GaAs})=-5.620\;{\rm eV}$]. We find that inside the dot the heavy-hole ($hh$) potential is above the light-hole ($lh$) one, while outside this order is reversed, although the $lh$ character of the lower-energy band offset leaks slightly into the barrier close to the dot. Because the dot is alloyed, the band offsets inside the dot are jagged.
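The bulk strain ordering quoted above follows from the textbook Bir--Pikus expressions for a biaxially strained (001) layer. A sketch in one common sign convention (the deformation potential $b\approx-1.8$~eV and the elastic-constant ratio are representative InAs-like values, not inputs of our calculation):

```python
# hh/lh band-edge splitting for in-plane biaxial strain e_par on (001):
#   e_zz = -2 (C12/C11) e_par
#   Q    = -(b/2) (e_xx + e_yy - 2 e_zz)
#   E_hh - E_lh = -2 Q   (the hydrostatic part P shifts both edges equally)
def hh_minus_lh(e_par, b=-1.8, c12_over_c11=0.54):
    e_zz = -2.0 * c12_over_c11 * e_par
    Q = -(b / 2.0) * (2.0 * e_par - 2.0 * e_zz)
    return -2.0 * Q  # eV

compressive = hh_minus_lh(-0.067)  # InAs-like layer on GaAs
tensile = hh_minus_lh(+0.067)      # tensile counterpart
print(compressive > 0, tensile < 0)  # hh above lh, then reversed
```

The splitting is linear in the strain, so reversing its sign exactly interchanges the $hh$/$lh$ ordering, reproducing the compressive/tensile behavior stated in the text.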
In agreement with Ribeiro {\em et al.},\cite{ribeiro_JAP_2000} but in contrast with Minnaert {\em et al.},\cite{minnaert_PRB_2001} our atomistic pseudopotential calculations reveal weakly {\em confined} light-hole states at energies deeper than the first $hh$ state. The wave functions of the first of these states are shown in Fig. \ref{Fig_2.new}(b). The energy spacing between the highest hole state [HOMO ($\psi^{(h)}_0$)] and the deep $lh$-type states increases from $92.4\;{\rm meV}$ to $101.1\;{\rm meV}$ and $111.7\;{\rm meV}$ for HOMO-11 [$\psi^{(h)}_{11}$], HOMO-18 [$\psi^{(h)}_{18}$], and HOMO-20 [$\psi^{(h)}_{20}$], respectively. These states give rise to two $lh$-derived transitions in the absorption spectra: (i) $1S_{lh}$-$1S_e$ [Fig. \ref{Fig_1}], with the deep $lh$-type state being a mixture of $71\% \;lh$ and $22\% \;hh$. As seen in Fig. \ref{Fig_1}, this transition has a large intensity in both $\hat{\bf e}\parallel [100]$ and $\hat{\bf e}\parallel [001]$ polarizations. (ii) $(S+P)_{lh+hh}$-$1S_e$ (Fig. \ref{Fig_2.verynew}), with $hh$/$lh$ percentages of $43$/$51$ and $51$/$42$ for the two-shell and three-shell dots, respectively. In these dots with $b=252\;${\AA}, the larger base size reduces the spacing between confined hole states and promotes character mixing. In the two-shell dot, the offset energy of this transition with respect to $1P_{hh}$-$1P_e$ is $36.0\;{\rm meV}$, in excellent agreement with the observed value.\cite{preisler_private} Note that simple models that follow the common assumption of unconfined $lh$ states do not explain the observed feature. \subsubsection{Coulomb-induced transitions that are forbidden in the single-particle description} Due to the electron-hole Coulomb interaction, each monoexciton state $\Psi (X^0)$ is a mixture of electron-hole configurations [Eq. (\ref{X0.states})]. This mixing results in an enhancement or diminishment of the intensity of both allowed and nominally-forbidden transitions in the absorption spectra [Fig.
\ref{Fig_2.verynew}(b)]. These effects are seen by comparing Fig. \ref{Fig_1}(a) {\em vs} Fig. \ref{Fig_1}(b) and Fig. \ref{Fig_2.verynew}(a) {\em vs} Fig. \ref{Fig_2.verynew}(b). The many-body effects include (i) enhancement of the intensity of the nominally forbidden transition $2S_{hh}$-$1S_{e}$, particularly in the three-shell dot (Fig. \ref{Fig_2.verynew}); (ii) redistribution of the intensity of both the nominally allowed $1P_{hh}$-$1P_{e}$ transitions and the $1D_{hh}$-$1D_e$ and $2S_{hh}$-$2S_e$ transitions; and (iii) change of the intensity of the transitions involving deep hole states with significant light-hole character. The mixing enhancement is $\eta(2S_{hh}-1S_e)=I^{(CI)}(2S_{hh}-1S_e)/I(2S_{hh}-1S_e)=3.2$ and $\eta(1S_{hh}-1S_e)=1.1$ for the two-shell dot, while $\eta(2S_{hh}-1S_e)=8.2$ and $\eta(1S_{hh}-1S_e)=1.3$ for the three-shell dot. For both dots, the enhancement of the transition $2S_{hh}$-$1S_e$ arises mainly from configuration mixing with the four configurations $|1P_{hh}1P_{e}\rangle$. The degree of mixing is {\em small}, $\sim 2\%$ for both dots, due to the {\em large} ($\sim 26\;{\rm meV}$) energy splitting between these electron-hole configurations at the single-particle (non-interacting) level, yet sufficient to cause a sizeable enhancement of the intensity. We find that the larger $\eta(2S_{hh}-1S_e)$ for the three-shell dot arises from a larger mixing with the configuration $|1S_{hh}1S_{e}\rangle$. \begin{figure}[t] \includegraphics[width=8.5cm]{./Fig_4.eps} \caption{{\label{Fig_3}}Optical absorption spectrum of $X^0$ in two lens-shaped In$_{\rm 0.6}$Ga$_{\rm 0.4}$As/GaAs quantum dots that confine two (left panel) and three (right) shells of electron states. In both cases, the spectra are calculated within a model single-particle (M-SP) [Eq. (\ref{absorption.SP})] (a) and configuration-interaction (M-CI) approach [Eq.
(\ref{absorption})] (b), assuming degenerate $1P$, and $1D$ and $2S$ (in the three-shell dot) electron and hole states but retaining the atomistic wavefunctions. Vertical scales are different for each dot.} \end{figure} {\em Comparison with experiment:} The calculated $2S_{hh}$-$1S_e$ transition is $\sim 26\;{\rm meV}$ {\em below} the strongest $1P_{hh}$-$1P_e$ transition, in excellent agreement with the value of $25\;{\rm meV}$ observed by Preisler {\em et al.}\cite{preisler_private} in magneto-photoluminescence, and in contrast to the effective-mass approximation results of Vasanelli {\em et al.}, which place $2S_{hh}$-$1S_e$ {\em above} $1P_{hh}$-$1P_e$. The calculated $1S_{lh}$-$1S_e$ transition is $18.3\;{\rm meV}$ below $1P_{hh}$-$1P_e$, in only rough agreement with the value of $35\;{\rm meV}$ observed by Preisler {\em et al.}\cite{preisler_private} The effect of configuration mixing on the optical spectrum was previously discussed within the simplified single-band 2D-EMA parabolic model.\cite{narvaez_PRB_2001,hawrylak_PRL_2000} Such continuum theories assume macroscopic shapes that lead to significant degeneracies among the single-particle states of Eq. (\ref{schrodinger}): the $P$ states are twofold degenerate; the $D$ and $2S$ states are degenerate; and the $S$-$P$ and $P$-$D$ energy spacings are equal. As a result, there is an artificially strong many-body mixing in Eq. (\ref{X0.states}).
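The oscillator-strength borrowing behind these enhancements can be illustrated with a two-configuration toy model (the coupling value below is illustrative, not the CI calculation): a dark configuration coupled to a bright one acquires an intensity of order $(V/\Delta E)^2$, while the total strength is conserved.

```python
import numpy as np

E_H, E_B = 0.0, 26.0        # configuration energies (meV); splitting as in text
V = 4.0                     # Coulomb coupling (meV), illustrative value
mu = np.array([0.0, 1.0])   # dipoles: |H> is dark, |B> is bright

H = np.array([[E_H, V], [V, E_B]])
w, c = np.linalg.eigh(H)    # columns of c are the mixed exciton states
I = (c.T @ mu) ** 2         # line intensities |<Psi|e.p|0>|^2

print(I)          # the dark-derived (lower) line carries finite intensity
print(I.sum())    # ~ 1: strength is borrowed from |B>, not created
```

Even a $\sim 2\%$ admixture (here $(V/\Delta E)^2 \approx 0.02$) turns a strictly forbidden line into a visible satellite, which is the mechanism quantified by $\eta$ above.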
The many-particle exciton states with allowed Coulomb mixing are: \begin{equation} \label{X.configs} \begin{array}{rcl} |\Psi_{A}\rangle &=&|1S_{hh}\,1S_e\rangle \\ |\Psi_{B}\rangle &=&\frac{1}{\sqrt{2}}(|1P^{(+)}_{hh}\,1P^{(+)}_{e}\rangle+|1P^{(-)}_{hh}\,1P^{(-)}_{e}\rangle) \\ |\Psi_{H}\rangle &=&\frac{1}{\sqrt{2}}(|2S_{hh}\,1S_e\rangle+|1S_{hh}\,2S_e\rangle) \\ |\Psi_{D}\rangle &=&\frac{1}{\sqrt{3}}(|1D^{(+)}_{hh}\,1D^{(+)}_{e}\rangle+|1D^{(-)}_{hh}\,1D^{(-)}_{e}\rangle+|2S_{hh}\,2S_e\rangle) \\ |\Psi_{F}\rangle &=&\frac{1}{\sqrt{6}}(|1D^{(+)}_{hh}\,1D^{(+)}_{e}\rangle+|1D^{(-)}_{hh}\,1D^{(-)}_{e}\rangle-2|2S_{hh}\,2S_e\rangle). \end{array} \end{equation} \noindent Here, the $(\pm)$ labels indicate $L_z=\pm 1$ and $\pm 2$ for the $P$ and $D$ states, respectively. The Coulomb interaction couples the states $|\Psi_B\rangle$ and $|\Psi_H\rangle$. Thus the $P$-$P$ transition is split into two lines $\Psi^{(s,p)}_a \simeq |\Psi_B\rangle+|\Psi_H\rangle$ and $\Psi^{(s,p)}_b \simeq |\Psi_B\rangle-|\Psi_H\rangle$. The states $|\Psi_D\rangle$ and $|\Psi_F\rangle$, which arise from nominally allowed electron-hole configurations, are also coupled; consequently, the $D$-$D$ transition splits into two lines $\Psi^{(d,d)}_a \simeq |\Psi_D\rangle+|\Psi_F\rangle$ and $\Psi^{(d,d)}_b \simeq |\Psi_D\rangle-|\Psi_F\rangle$. The mixing enhancement $\eta(|\Psi_H\rangle)$ within this model is $\infty$ (because the transitions $2S_{hh}$-$1S_e$ and $1S_{hh}$-$2S_e$ are {\em forbidden} at the single-particle level). To compare our atomistic predictions with the model calculations, we {\em deliberately neglect} in the pseudopotential-based calculation the atomistically induced splitting of the $1P$, and $1D$ and $2S$ states (but preserve their atomistically calculated wavefunctions). We calculate the absorption spectrum at the single-particle level [Eq. (\ref{absorption.SP})] and separately in the many-particle approximation [Eq.
(\ref{absorption})].\cite{note.00,note_many-body} Figures \ref{Fig_3}(a) and \ref{Fig_3}(b) show, respectively, the atomistic model calculation of the {\em single-particle} and {\em many-particle} absorption spectra. By comparing the results of Fig. \ref{Fig_3} (atomistic wavefunctions; no $P$ or $D$ splittings) with the expectations from Eq. (\ref{X.configs}) (continuum wavefunctions; no $P$ or $D$ splittings), we find that (i) in the atomistic calculation the CI-enhanced transition corresponds to a mixture of the states $|2S_{hh}\,1S_e\rangle$ and $|\Psi_B\rangle$, instead of a mixture of $|\Psi_H\rangle$ and $|\Psi_B\rangle$ as in the model of Eq. (\ref{X.configs}); and (ii) the $D$-shell transition peak [$|\Psi_D\rangle$, Fig. \ref{Fig_3}(a)] splits into two transitions that correspond to a mixture of $|\Psi_D\rangle$ and $|\Psi_F\rangle$, as in the 2D-EMA model. \subsection{Comparison with 8-band ${\bf k}\cdot{\bf p}$ calculations of the interband optical spectrum} Other authors have calculated the absorption spectrum of pure, non-alloyed InAs/GaAs quantum dots using the 8-band ${\bf k}\cdot{\bf p}$ method with cubic symmetry. A comparison with our atomistic, pseudopotential-based predictions for alloyed (In,Ga)As/GaAs dots shows the following main features.
(i) Our prediction of the transition $2S_{hh}$-$1S_{e}$ between $1S$-$1S$ and $1P$-$1P$ is consistent with the findings of nominally-forbidden transitions between $1S$-$1S$ and $1P$-$1P$ by Heitz {\em et al.},\cite{heitz_PRB_2000} who calculated the (many-body) absorption spectrum of a monoexciton in pyramid-shaped non-alloyed InAs/GaAs dots with a base length of $d=170\;${\AA} (height unspecified); Guffarth {\em et al.},\cite{guffarth_PRB_2003} who calculated the (many-body) absorption spectra of truncated-pyramid InAs/GaAs dots ($d=180\;${\AA}, $h=35\;${\AA}); and the single-particle calculations of Sheng and Leburton\cite{sheng_PSSB_2003} in the case of a pure non-alloyed lens-shaped InAs/GaAs dot with $d=153\;${\AA} and $h=34\;${\AA}. Conversely, other 8-band ${\bf k}\cdot{\bf p}$ plus CI calculations did not predict nominally-forbidden transitions between $1S$-$1S$ and $1P$-$1P$, such as those of Stier {\em et al.}\cite{stier_PSSA_2002} for a truncated-pyramid InAs/GaAs dot with height $h=34\;${\AA} (base length unspecified), who found three groups of transitions: $1S$-$1S$, $1P$-$1P$, and $1D$-$1D$, {\em without} the presence of satellites around the $1P$-$1P$ transitions. Similarly, recent calculations by Heitz {\em et al.}\cite{heitz_PRB_2005} of the absorption spectrum for small, flat ($d=136\;${\AA} and heights from $3$-$7\;{\rm ML}$) truncated-pyramid InAs/GaAs dots also predicted the {\em absence} of nominally-forbidden transitions between $1S$-$1S$ and $1P$-$1P$. (ii) We predict that the $1P$-$1P$ and $1D$-$1D$ transitions are split and span about $10\;{\rm meV}$ and $15\;{\rm meV}$, respectively. Instead, the ${\bf k}\cdot{\bf p}$-based calculations of Heitz {\em et al.}\cite{heitz_PRB_2000} predict that the $1P$-$1P$ and $1D$-$1D$ transitions are much more heavily split, each group spanning about $50\;{\rm meV}$.
(iii) Sheng and Leburton\cite{sheng_APL_2002} calculated the single-particle dipole oscillator strength for a truncated-pyramid InAs/GaAs dot with $d=174\;${\AA} and $h=36\;${\AA} and found strong nominally-forbidden $1D$-$1P$ transitions nearly $50\;{\rm meV}$ above the $1P$-$1P$ transitions, as well as a HOMO-7-to-$2S$ transition. Our predictions differ from these in that we find the transitions $(2P_{hh}+1F_{hh})$-$1P_e$ above $1P$-$1P$ [Fig. \ref{Fig_1}(a)]. In addition, in this energy interval ($\sim 50\;{\rm meV}$ above $1P$-$1P$) we do not predict hole states with nodes along the $[001]$ axis of the dots. (iv) None of the 8-band ${\bf k}\cdot{\bf p}$ plus CI calculations of Refs. \onlinecite{heitz_PRB_2000,guffarth_PRB_2003,stier_PSSA_2002,heitz_PRB_2005} predicted strong light-hole--to--conduction transitions originating from deep, weakly-confined hole states with predominant $lh$ character lying between the $1P$-$1P$ and $1D$-$1D$ transitions. \section{Conclusion} Atomistic, pseudopotential-based calculations of the excitonic absorption of lens-shaped (In,Ga)As/GaAs quantum dots predict nontrivial spectra that show nominally-forbidden transitions allowed by single-particle band-offset effects as well as enhanced by many-body effects, and transitions involving deep, weakly confined hole states with significant light-hole character. These transitions explain the satellites of the nominally-allowed $P$-$P$ transitions recently observed in PLE. \begin{acknowledgments} The authors thank G. Bester and L. He for valuable discussions, and V. Preisler (ENS, Paris) for making the data in Ref. \onlinecite{preisler_private} available to them prior to publication. This work was funded by U.S. DOE-SC-BES-DMS, under Contract No. DE-AC3699GO10337 to NREL. \end{acknowledgments}
\section{Introduction} Accurate modeling of non-Newtonian fluid flows has been a long-standing problem. Existing hydrodynamic models have to resort to ad hoc assumptions, either directly at the macro-scale level when writing down constitutive laws, or as closure assumptions when deriving macro-scale models from some underlying micro-scale description. A variety of empirical constitutive models \cite{Larson88,Owens_Phillips_2002} of both integral and derivative types have been developed, including Oldroyd-B \cite{Oldroyd_Wilson_PRSLA_1950}, Giesekus \cite{Giesekus_JNNFM_1982}, finitely extensible nonlinear elastic Peterlin (FENE-P) \cite{Peterlin_Polymer_Science_1966, Bird_Doston_JNNFM_1980}, and Rivlin-Sawyers \cite{Riv_Sawy_AnnFluid_1971}. These models are designed such that proper frame-indifference is satisfied, but are otherwise subject to few physical constraints. Despite their broad applications, the robustness and universal applicability of these models are still in doubt. In principle, viscoelastic effects are determined by the polymer configuration distribution, which can be obtained by directly solving the micro-scale Fokker-Planck equation coupled with the macro-scale hydrodynamic equation \cite{Fan_Acta_1989}. However, the cost of such an approach becomes prohibitive for large scale simulations due to the high dimensionality of the Fokker-Planck equation. Semi-analytical closures \cite{Warner_IECF_1972, Warner_PhD_1971, FENE_L_S_JNNFM_1999, Yu_Du_mms_2005, Hyon_Du_mms_2008} based on moment approximations of the configuration distribution were developed for dumbbell systems. Applications to non-steady flows \cite{Warner_IECF_1972, Warner_PhD_1971} and to more complex intramolecular potentials \cite{FENE_L_S_JNNFM_1999, Yu_Du_mms_2005, Hyon_Du_mms_2008} remain largely open due to the high dimensionality of the configuration space.
Several alternative approaches \cite{Laso_Ottinger_JNNFM_1993, Hulsen_Heel_JNNFM_1997, REN_E_HMM_complex_fluid_2005} based on sophisticated coupling between the micro- and macro-scale models have been proposed. However, the efficiency and accuracy of these approaches rely on a separation between the relevant macro- and micro-scales, which does not usually exist in practice. Motivated by the recent successes in applying machine learning (ML) to construct reduced dynamics of complex systems \cite{ma2018model, Vlachas_Byeon_PRSA_2018, Han_Ma_PNAS_2019, ling_kurzawski_JFM_2016, Wang_Wu_Xiao_PRF_2017, Lusch_Kutz_Brunton_Nature_2018, Linot_Graham_2019, Raissi_Kar_2020}, we aim to learn accurate and admissible non-Newtonian hydrodynamic models directly from a micro-scale description. However, directly applying machine learning to construct such first-principle-based fluid models is highly non-trivial. The first challenge lies in how to formulate the micro-macro correspondence in a natural way, such that the constructed macro-scale model faithfully retains the viscoelastic properties with molecular-level fidelity. Moreover, the deep neural network (DNN) representations need to rigorously preserve the physical symmetries. Second, to construct the governing reduced dynamics, most current ML-based approaches rely on time-series samples and delicate numerical treatments to evaluate the time derivatives (e.g., see the discussion in \cite{Rudy_Kutz_Science_Ad_2017}). However, micro-scale simulation data of non-Newtonian fluids are often limited by the available computational resources and contaminated by noise (e.g., due to thermal fluctuations). It is generally impractical to obtain accurate macro-scale time-derivative information from the training data. 
Moreover, the objective tensor derivative in existing models is chosen empirically, e.g., upper-convected \cite{Oldroyd_Wilson_PRSLA_1950}, covariant \cite{Oldroyd_Wilson_PRSLA_1950}, or corotational \cite{Zaremba_1903}, to ensure the rotational symmetry constraint. Such ambiguities will be inherited if we directly learn the dynamics from time-series samples. A third challenge is model interpretability, a well-known weakness of machine-learning-based models. In this study, we present a machine-learning-based approach, the deep non-Newtonian model (DeePN$^2$), for learning the non-Newtonian hydrodynamic model directly from the micro-scale description. To address the aforementioned challenges, in DeePN$^2$ we learn a set of encoder functions directly from micro-scale simulation data, which can be used to extract the ``features'' of the sub-grid polymer configuration. Such features are essentially the macro-scale conformation tensors used in the construction of the constitutive laws. To retain the molecular-level fidelity, the second idea is to formulate the ansatz of the reduced dynamics directly from the micro-scale Fokker-Planck equation. Learning with this ansatz only requires micro-scale configuration samples, without the need for time-series training data. Third, to ensure the model admissibility, we propose a general symmetry-preserving DNN structure to represent the terms in the reduced dynamics. All of this is done in an end-to-end fashion, by simultaneously learning the micro-scale encoders, the polymer stress, and the evolution dynamics of the macro-scale conformation tensors. The constructed model takes a form similar to the traditional hydrodynamic models and retains clear physical interpretations for the individual terms. The conformation tensors are a natural extension of the end-to-end orientation tensor used in classical rheological models. A new objective tensor derivative naturally arises in this way. 
It takes a different form from the current choices in those empirical macro-scale models. It has a unique micro-scale interpretation and can be systematically constructed without ambiguity. Numerical results demonstrate the accuracy of this machine-learning-based model as well as the crucial role of the constructed tensor derivatives encoded with the molecular structure. \section{Machine-learning based non-Newtonian hydrodynamic model} \subsection{Generalized hydrodynamic model} Let us start with the continuum-level description of the dynamics of incompressible non-Newtonian flow in the following generalized form \begin{equation} \begin{split} \nabla \cdot \mathbf u &= 0 \\ \rho \frac{\rm d \mathbf u}{\rm d t} &= -\nabla p + \nabla \cdot (\boldsymbol\tau_{s} + \boldsymbol\tau_{\rm p}) + \mathbf f_{\rm ext}, \end{split} \label{eq:momentum_transport_close} \end{equation} where $\rho$, $\mathbf u$ and $p$ represent the fluid density, velocity and pressure field, respectively. $\mathbf f_{\rm ext}$ is the external body force and $\boldsymbol\tau_s$ is the solvent stress tensor with shear viscosity $\eta_s$, which is assumed to take the Newtonian form $\boldsymbol \tau_s = \eta_s(\nabla \mathbf u + \nabla \mathbf u^T)$. $\boldsymbol\tau_{\rm p}$ is the polymer stress tensor whose constitutive law is generally unknown. To close Eq. \eqref{eq:momentum_transport_close}, traditional models, e.g., Oldroyd-B, Giesekus, and FENE-P, are generally based on the approximation of $\boldsymbol \tau_{\rm p}$ in terms of an empirically chosen conformation tensor (e.g., the end-to-end orientation tensor), along with some heuristic closure assumption for the dynamics of such a tensor. 
To map the microscopic model to the continuum model \eqref{eq:momentum_transport_close}, we assume that (\Rmnum{1}) the polymer solution can be treated as nearly incompressible on the continuum scale; and (\Rmnum{2}) the polymer solution is semi-dilute, i.e., the polymer stress $\boldsymbol \tau_{p}$ is dominated by the intramolecular interaction $V_{\rm b}(r)$, where $r = \vert \mathbf r\vert$ and $\mathbf r$ is the end-to-end vector between the two beads of a dumbbell molecule. The form of $V_{\rm b}(r)$ will be specified later. The current approach can be applied to more complicated systems; see the Appendix for results of a three-bead suspension model. Theoretically, the instantaneous $\boldsymbol \tau_{p}$ can be determined by the probability density function $\rho(\mathbf r, t)$. In DeePN$^2$, instead of directly constructing $\rho(\mathbf r,t)$, we seek a micro-macro correspondence that maps the polymer configurations to a set of conformation tensors, by which we construct the stress model and the evolution dynamics, i.e., \begin{subequations} \begin{align} &\boldsymbol \tau_p = \mathbf G(\mathbf c_1, \mathbf c_2, \cdots \mathbf c_n) \label{eq:stress_model} \\ &\frac{\mathcal{D}{\mathbf c}_i}{\mathcal{D} t} = \mathbf H_i(\nabla \mathbf u, \mathbf c_1, \cdots, \mathbf c_n),\label{eq:c_evolution} \end{align} \label{eq:moment_model} \end{subequations} where $\mathbf c_i \in \mathbb{R}^{3\times3}$ represents the $i\mhyphen$th conformation tensor of the polymer configurations within the local volume unit. It represents the macro-scale features by which we construct the polymer stress $\boldsymbol \tau_{\rm p}$ and the evolution dynamics. The detailed formulation will be specified later. In particular, if we choose $n=1$, take $\mathbf c_1$ to be the end-to-end orientation tensor, and approximate $\mathbf G(\mathbf c_1)$ with a linear or mean-field approximation, \eqref{eq:stress_model} recovers the empirical Hookean and FENE-P models. 
Moreover, we emphasize that $\left\{\mathbf c_i\right\}_{i=1}^{n}$ are \emph{not the standard high-order moments for the closure approximations of the microscale configuration density} $\rho(\mathbf r, t)$ (e.g., see \cite{Warner_IECF_1972, Warner_PhD_1971, FENE_L_S_JNNFM_1999, Yu_Du_mms_2005, Hyon_Du_mms_2008}). They are nonlinear conformation tensors learnt directly from the microscale samples for the approximation of the stress $\boldsymbol\tau_{\rm p}$, rather than for the recovery of the high-dimensional distribution $\rho(\mathbf r, t)$. In principle, with certain pre-assumptions on the formulation of $\mathbf c_i$, a straightforward approach is to learn \eqref{eq:moment_model} on the macro-scale level as a ``black-box'' using time-series samples from microscale simulations. However, this requires the explicit form of the objective tensor derivative $\frac{\mathcal{D}{\mathbf c}_i}{\mathcal{D} t}$, as well as the accurate evaluation of time derivatives from the time-series samples. Unfortunately, both requirements are impractical for micro-scale non-Newtonian fluid simulations. Alternatively, we employ machine learning to establish a micro-macro correspondence and derive the ansatz of \eqref{eq:moment_model} directly from the micro-scale descriptions. \subsection{Modeling ansatz derived from the micro-scale description} To faithfully retain the micro-scale molecular fidelity, we construct $\left\{\mathbf c_i\right\}_{i=1}^n$ by directly learning a set of encoders from microscale samples, i.e., \begin{equation} \begin{split} \mathbf c_i = \langle \mathbf B_i(\mathbf r) \rangle \quad \mathbf B_i = \mathbf f_i (\mathbf r) \mathbf f_i^T(\mathbf r) \quad \quad i = 1, 2, \cdots, n, \end{split} \label{eq:encoder_model} \end{equation} where $\mathbf B_i$ is a microscale encoder function that maps the microscale polymer configuration to the macroscale feature $\mathbf c_i$. 
It has an explicit micro-scale interpretation --- the average of the $i\mhyphen$th second-order tensor $\mathbf B_i$ built from the encoder vector $\mathbf f_i(\mathbf r): \mathbb{R}^3 \to \mathbb{R}^{3}$. One reason for choosing $\mathbf B_i(\mathbf r)$ to be a second-order tensor is as follows. The stress model $\mathbf G(\cdot)$ needs to retain rotational symmetry, so its input $\mathbf B_i(\mathbf r)$ needs to transform consistently with the polymer configuration $\mathbf r$. However, a vector-valued $\mathbf B_i(\mathbf r)$ would need to satisfy $\mathbf B_i(\mathbf Q\mathbf r) = \mathbf Q\mathbf B_i(\mathbf r)$ for any unitary matrix $\mathbf Q$, which forces $\left\langle \mathbf B_i(\mathbf r)\right\rangle \equiv 0$ (see Appendix). A simple non-trivial choice is a second-order tensor taking the form of Eq. \eqref{eq:encoder_model}, so that $\mathbf B_i(\mathbf r)$ satisfies $\mathbf B_i(\mathbf Q\mathbf r) = \mathbf Q \mathbf B_i(\mathbf r) \mathbf Q^T$ and the rotational symmetry of $\mathbf G(\cdot)$ can be imposed accordingly. The model defined by Eqs. \eqref{eq:moment_model} and \eqref{eq:encoder_model} aims at extracting a set of configuration ``features'', represented by the micro-scale encoders $\left\{\mathbf f_i\right\}_{i=1}^n$ and the macro-scale conformation tensors $\left\{\mathbf c_i\right\}_{i=1}^n$, such that the polymer stress $-\langle \mathbf r\nabla V_{\rm b}(r)^{T}\rangle$ can be well approximated by $\mathbf G(\cdot)$ and the evolution of $\{\mathbf c_i \}_{i=1}^{n}$ can be modeled by $\{\mathbf H_i(\cdot)\}_{i=1}^n$ self-consistently. As a special case, if $n = 1$ and $\mathbf f_1(\mathbf r) = \mathbf r$, $\mathbf c_1$ recovers the end-to-end orientation tensor, and the stress model recovers the aforementioned rheological models under special choices of $\mathbf G(\cdot)$. In practice, to accurately capture the nonlinear effects in $V_{\rm b}$, multiple nonlinear conformation tensors are needed. 
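As a concrete illustration, the conformation tensors $\mathbf c_i = \langle \mathbf B_i(\mathbf r)\rangle$ are plain ensemble averages over sampled end-to-end vectors. The NumPy sketch below uses the radially symmetric form $\mathbf f(\mathbf r) = g(r)\mathbf r$ adopted later in the text; the isotropic Gaussian samples and the scalar encoder $g_2$ are hypothetical stand-ins for MD configurations and the learned encoders, not the trained model.

```python
import numpy as np

def conformation_tensor(r_samples, g):
    """Estimate c = <B(r)> with B = f f^T and f(r) = g(|r|) r from
    sampled end-to-end vectors r_samples of shape (N, 3)."""
    rnorm = np.linalg.norm(r_samples, axis=1, keepdims=True)
    f = g(rnorm) * r_samples                        # encoder vectors f(r)
    return np.einsum('na,nb->ab', f, f) / len(f)    # ensemble average <f f^T>

rng = np.random.default_rng(0)
r = rng.normal(size=(100000, 3))                    # toy configuration samples

c1 = conformation_tensor(r, lambda s: np.ones_like(s))  # g_1 = 1: <r r^T>
c2 = conformation_tensor(r, lambda s: s)                # hypothetical g_2(r) = r

print(np.round(c1, 2))   # close to the 3x3 identity for isotropic samples
```

For isotropic samples $\mathbf c_1$ is close to the identity; anisotropy induced by a flow shows up directly in the off-diagonal entries.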
To learn $\mathbf G(\cdot)$ and $\mathbf H(\cdot)$, one important constraint comes from rotational symmetry. Let $\widetilde{\mathbf r} = \mathbf Q \mathbf r$, where $\mathbf Q$ is unitary. We must have \begin{subequations} \begin{align} &\mathbf f_i(\widetilde{\mathbf r}) = \mathbf Q \mathbf f_i(\mathbf r) \label{eq:rotation_B}\\ &\mathbf G(\widetilde{\mathbf c}_1, \cdots, \widetilde{\mathbf c}_n) = \mathbf Q \mathbf G(\mathbf c_1, \cdots, \mathbf c_n) \mathbf Q^T \\ &\mathbf H_i(\widetilde{\mathbf c}_1, \cdots, \widetilde{\mathbf c}_n) = \mathbf Q \mathbf H_i(\mathbf c_1, \cdots, \mathbf c_n) \mathbf Q^T, \label{eq:rotation_G_H} \end{align} \label{eq:rotation_B_G_H} \end{subequations} where $\widetilde{\mathbf c}_i = \mathbf Q \mathbf c_i \mathbf Q^T$. For the tensor derivative $\mathcal{D}\mathbf c_i/\mathcal{D} t$, we should have \begin{equation} \widetilde{\frac{\mathcal{D} \mathbf c_i}{\mathcal{D}t}} = \mathbf Q \frac{\mathcal{D} \mathbf c_i}{\mathcal{D}t}\mathbf Q^T \quad i = 1, 2, \cdots, n. \label{eq:rotation_c_evolution} \end{equation} This constraint is satisfied by the various objective tensor derivatives in most existing rheological models, such as the upper-convected \cite{Oldroyd_Wilson_PRSLA_1950}, the covariant \cite{Oldroyd_Wilson_PRSLA_1950} and the Zaremba-Jaumann \cite{Zaremba_1903} derivatives, but these forms are not suitable for us since they lack the desired accuracy. Fortunately these constraints are satisfied automatically if we formulate our macro-scale model based on the underlying micro-scale model. 
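The equivariance constraint \eqref{eq:rotation_B} can be verified numerically for the quadratic encoder: since $\vert\mathbf Q\mathbf r\vert = \vert\mathbf r\vert$ for unitary $\mathbf Q$, any $\mathbf B(\mathbf r) = g(r)^2\,\mathbf r\mathbf r^T$ transforms as $\mathbf Q\mathbf B(\mathbf r)\mathbf Q^T$. A short sketch, with an arbitrary stand-in for the scalar encoder $g$:

```python
import numpy as np

def B(r, g):
    """Second-order tensor encoder B(r) = f(r) f(r)^T with f(r) = g(|r|) r."""
    f = g(np.linalg.norm(r)) * r
    return np.outer(f, f)

def random_rotation(rng):
    """Draw a random orthogonal matrix Q via QR decomposition."""
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    return q

rng = np.random.default_rng(1)
g = lambda s: 1.0 / (1.0 + s**2)     # hypothetical scalar encoder g(r)
r = rng.normal(size=3)
Q = random_rotation(rng)

lhs = B(Q @ r, g)                    # B(Q r)
rhs = Q @ B(r, g) @ Q.T              # Q B(r) Q^T
print(np.allclose(lhs, rhs))         # prints True: |Q r| = |r|, so g is unchanged
```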
We start from the Fokker-Planck equation \cite{Bird_Curtiss_book_vol_2}, \begin{equation} \frac{\partial \rho(\mathbf r, t)}{\partial t} = -\nabla\cdot \left[(\boldsymbol\kappa\cdot\mathbf r)\rho - \frac{2k_BT}{\gamma}\nabla\rho - \frac{2}{\gamma}\nabla V_{\rm b}(r) \rho\right], \label{eq:FK_dumbbell} \end{equation} where $k_BT$ is the thermal energy, $\gamma$ is the solvent friction coefficient, and $\boldsymbol\kappa := \nabla \mathbf u^T$ is the velocity gradient of the fluid. Instead of solving \eqref{eq:FK_dumbbell}, we consider the evolution of $\mathbf c_i$, \begin{equation} \frac{\rm d}{\rm dt} \mathbf c_i - \boldsymbol\kappa:\left\langle \mathbf r\nabla_{\mathbf r}\otimes \mathbf B_i(\mathbf r) \right\rangle = \frac{2k_BT}{\gamma}\left\langle \nabla^2_{\mathbf r} \mathbf B_i(\mathbf r) \right\rangle + \frac{2}{\gamma} \left\langle \nabla V_{\rm b}(r) \cdot \nabla_{\mathbf r} \mathbf B_i(\mathbf r) \right\rangle, \label{eq:FK_B_evoluation} \end{equation} where $:$ is the double-dot product. We can prove that Eqs. (\ref{eq:FK_dumbbell}) and (\ref{eq:FK_B_evoluation}) are rotationally invariant. In particular, the combined left-hand-side terms of \eqref{eq:FK_B_evoluation} satisfy the symmetry condition in \eqref{eq:rotation_c_evolution} (see proof in Appendix). Therefore, the combined terms establish a generalized objective tensor derivative. It takes a different form from the ones \cite{Oldroyd_Wilson_PRSLA_1950, Zaremba_1903} in existing models and rigorously preserves the rotational symmetry condition (\ref{eq:rotation_c_evolution}). Accordingly, the hydrodynamic model (\ref{eq:moment_model}) takes the following ansatz \begin{subequations} \begin{align} \frac{\mathcal{D} \mathbf c_i}{\mathcal{D}t} &= \frac{\rm d}{\rm dt} \mathbf c_i - \boldsymbol\kappa:\mathcal{E}_i \label{eq:c_tensor_def} \\ \mathbf H_i &= \frac{2k_BT}{\gamma} \mathbf H_{1,i} + \frac{2}{\gamma} \mathbf H_{2,i}. 
\end{align} \label{eq:c_evolution_ansatz} \end{subequations} Each term of \eqref{eq:c_evolution_ansatz} has a micro-scale correspondence, i.e., \begin{equation} \begin{split} &\mathcal{E}_i(\mathbf c_1, \cdots, \mathbf c_n) = \left\langle \mathbf r\nabla_{\mathbf r}\otimes \mathbf B_i(\mathbf r) \right\rangle \\ &\mathbf H_{1,i} (\mathbf c_1, \cdots, \mathbf c_n) = \left\langle \nabla^2_{\mathbf r} \mathbf B_i(\mathbf r) \right\rangle \\ &\mathbf H_{2,i} (\mathbf c_1, \cdots, \mathbf c_n) = \left\langle \nabla V_{\rm b}(r) \cdot \nabla_{\mathbf r} \mathbf B_i(\mathbf r) \right\rangle, \end{split} \label{eq:E_H_DNN} \end{equation} where $\mathcal{E}_i$ is a $4\mhyphen$th order tensor function and $\mathbf H_{1,i}$, $\mathbf H_{2,i}$ are $2$nd order tensor functions. They will be approximately represented by DNNs. To collect the training data, we use microscale simulations to evaluate these terms; no time-series samples are needed. Note that $\mathcal{D}\mathbf c_i/\mathcal{D}t$ depends on $\mathcal{E}_i$, which encodes micro-scale information from $\mathbf B_i(\mathbf r)$. Different from the common choices of the objective tensor derivatives in existing models, it takes a more general formulation and has a \emph{clear} micro-scale interpretation without the conventional ambiguities; it recovers the standard upper-convected derivative as a special case. To the best of our knowledge, this is the first study that establishes such a direct micro-scale linkage for the objective tensor derivative in non-Newtonian fluid modeling. As shown later, such a formulation that faithfully accounts for the micro-scale polymer configuration is crucial for the accuracy of the constitutive model for $\mathbf c_i$. 
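Because the targets in Eq. \eqref{eq:E_H_DNN} are equal-time ensemble averages, they can be estimated from independent configuration snapshots. A minimal sketch for the simplest encoder $\mathbf B_1 = \mathbf r\mathbf r^T$ (i.e., $g_1 \equiv 1$), using, for illustration only, a Hookean bond $V_{\rm b}(r) = k r^2/2$ in place of the FENE potential so the targets can be checked against closed forms:

```python
import numpy as np

# Monte-Carlo estimates of the training targets for the encoder B_1 = r r^T.
# A Hookean bond V_b = k r^2 / 2 (an illustrative stand-in) admits closed forms.
k = 0.5
rng = np.random.default_rng(6)
r = rng.normal(size=(50000, 3))                  # independent snapshots, no time series

c1 = np.einsum('na,nb->ab', r, r) / len(r)       # c_1 = <r r^T>

# Laplacian target: grad^2 (r_a r_b) = 2 delta_ab holds per sample, so H_1 = 2 I
H1 = 2.0 * np.eye(3)

# Potential target: grad V_b = k r, so <(grad V)_a r_b + r_a (grad V)_b> = 2 k c_1
gradV = k * r
H2 = (np.einsum('na,nb->ab', gradV, r)
      + np.einsum('na,nb->ab', r, gradV)) / len(r)

print(np.allclose(H2, 2.0 * k * c1))
```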
\subsection{Symmetry-preserving DNN representations} Special rotation-symmetry-preserving DNNs are needed for the encoder functions $\left\{\mathbf f_i\right\}_{i=1}^{n}$, the $2$nd order tensors $\mathbf G$ and $\left\{\mathbf H_{1,i}, \mathbf H_{2,i}\right\}_{i=1}^n$ and the $4$th order tensors $\left\{\mathcal{E}_i\right\}_{i=1}^{n}$ such that the symmetry conditions \eqref{eq:rotation_B_G_H} and \eqref{eq:rotation_c_evolution} are satisfied. For Eq. \eqref{eq:rotation_B} to hold, one can show that $\mathbf f_i(\mathbf r)$ has to take the form \begin{equation} \mathbf f_i(\mathbf r) = g_i(r)\mathbf r, \label{eq:encoder_DNN} \end{equation} where $g_i(r)$ is a scalar encoder function (see Appendix). We always set $g_1(r) \equiv 1$, yielding $\mathbf G \propto \mathbf H_{2,1}$. To construct the DNNs for $\mathbf G$ and $\left\{\mathbf H_{1,i}, \mathbf H_{2,i}\right\}_{i=1}^n$ that satisfy Eq. \eqref{eq:rotation_G_H}, we can transform $\left\{\mathbf c_i\right\}_{i=1}^n$ into a fixed frame for the DNN input. One natural choice is the eigen-space of the conformation tensor $\mathbf c_1= \langle \mathbf r\mathbf r^T\rangle$. Let $\mathbf V$ be the matrix composed of the eigenvectors of $\mathbf c_1$. Define \begin{equation} \begin{split} &\mathbf H_{j,i}(\mathbf c_1, \cdots \mathbf c_n) = \mathbf V \widehat{\mathbf H}_{j,i}(\widehat{\mathbf c}_1, \cdots \widehat{\mathbf c}_n) \mathbf V^T\\ &\widehat{\mathbf c}_i = \mathbf V^T \mathbf c_i \mathbf V \quad j = 1,2~~i = 1, \cdots, n, \end{split} \label{eq:G_H_ansatz} \end{equation} where $\widehat{\mathbf c}_1$ is a diagonal matrix composed of the eigenvalues of $\mathbf c_1$. The DNNs will be constructed to learn $\widehat{\mathbf H}_{j,i}$. 
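The frame construction of Eq. \eqref{eq:G_H_ansatz} can be sketched numerically: rotate the inputs into the eigen-frame of $\mathbf c_1$, apply $\widehat{\mathbf H}$, and rotate back. Here $\widehat{\mathbf H}$ is an arbitrary stand-in (not a trained DNN) built from matrix products and elementwise odd nonlinearities, a choice that also absorbs the residual sign ambiguity of numerically computed eigenvectors; the actual network must handle that ambiguity as well.

```python
import numpy as np

def H_hat(c1h, c2h):
    """Stand-in for the learned network H-hat; matrix products and elementwise
    odd nonlinearities commute with the eigenvector sign ambiguity."""
    return c1h @ c2h + np.tanh(c1h)

def H(c1, c2):
    """Frame-equivariant map: V H_hat(V^T c V) V^T with V the eigenvectors of c1."""
    _, V = np.linalg.eigh(c1)                  # eigenvalues in ascending order
    c1h, c2h = V.T @ c1 @ V, V.T @ c2 @ V
    return V @ H_hat(c1h, c2h) @ V.T

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 3)); c1 = A @ A.T      # random SPD conformation tensors
Bm = rng.normal(size=(3, 3)); c2 = Bm @ Bm.T
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal frame change

lhs = H(Q @ c1 @ Q.T, Q @ c2 @ Q.T)
rhs = Q @ H(c1, c2) @ Q.T
print(np.allclose(lhs, rhs))                   # equivariance of the construction
```

The design choice is that all rotation dependence is carried by $\mathbf V$, so $\widehat{\mathbf H}$ itself only ever sees frame-fixed inputs.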
The learning of the $4$th order tensors $\left\{\mathcal{E}_i\right\}_{i=1}^{n}$ is based on the following decomposition: \begin{equation} \begin{split} &\mathcal{E}_i(\mathbf c_1,\cdots, \mathbf c_n) = \left\langle g_i(r)^2 \mathbf r\nabla_{\mathbf r} \otimes \mathbf r \mathbf r^T \right\rangle \\ &+ \sum_{k=1}^6 \mathbf E_{1,i}^{(k)} (\mathbf c_1, \cdots, \mathbf c_n) \otimes \mathbf E_{2,i}^{(k)} (\mathbf c_1, \cdots, \mathbf c_n). \end{split} \label{eq:E_tensor_ansatz} \end{equation} $\mathbf E_{1,i}, \mathbf E_{2,i} \in \mathbb{R}^{3\times3}$ are second-order tensors satisfying a rotational symmetry condition similar to Eq. \eqref{eq:rotation_G_H}, i.e., \begin{equation} \begin{split} &\mathbf E_{1,i}(\tilde{\mathbf c}_1, \cdots, \tilde{\mathbf c}_n) = \mathbf Q \mathbf E_{1,i} (\mathbf c_1, \cdots, \mathbf c_n) \mathbf Q^T \\ &\mathbf E_{2,i}(\tilde{\mathbf c}_1, \cdots, \tilde{\mathbf c}_n) = \mathbf Q \mathbf E_{2,i} (\mathbf c_1, \cdots, \mathbf c_n) \mathbf Q^T. \end{split} \label{eq:E_F_symmetry} \end{equation} It is shown in the Appendix that this decomposition satisfies Eq. \eqref{eq:rotation_c_evolution}. Accordingly, $\mathcal{E}_i$ can be constructed from a set of second-order tensors $\mathbf E_{1,i}$ and $\mathbf E_{2,i}$, which can be represented similarly to Eq. \eqref{eq:G_H_ansatz}. Note that, upon contraction with $\boldsymbol\kappa$, the first term on the RHS of Eq. \eqref{eq:E_tensor_ansatz} becomes $\boldsymbol\kappa \mathbf c_i + \mathbf c_i \boldsymbol\kappa ^T$, similar to the upper-convected derivative. In summary, the DNNs are designed to parametrize $\{g_i(r)\}_{i=2}^n, \{\widehat{\mathbf H}_{1,i}, \widehat{\mathbf H}_{2,i}, \widehat{\mathcal{E}}_i \}_{i=1}^n$. 
Finally, the DNNs are trained by minimizing the loss \begin{equation} L = \lambda_{H_1} L_{H_1} + \lambda_{H_2} L_{H_2} + \lambda_{\mathcal{E}} L_{\mathcal{E}}, \end{equation} where $L_{H_1}$, $L_{H_2}$ and $L_{\mathcal{E}}$ are the empirical risks associated with $\left\{\mathbf H_{1,i}\right\}_{i=1}^n$, $\left\{\mathbf H_{2,i}\right\}_{i=1}^n$ and $\left\{\mathcal{E}_i\right\}_{i=1}^n$, respectively, and $\lambda_{H_1}$, $\lambda_{H_2}$ and $\lambda_{\mathcal{E}}$ are hyper-parameters (see Appendix). Note that the encoders $\left\{g_i(r)\right\}_{i=2}^n$ do not explicitly appear in $L$; they are trained through the learning of $\widehat{\mathbf H}$ and $\mathcal{E}$. \subsection{DeePN$^2$} The DeePN$^2$ model is made up of Eqs. \eqref{eq:momentum_transport_close}, \eqref{eq:moment_model} and \eqref{eq:c_evolution_ansatz}. Note that the model takes the form of classical empirical models. The only differences are that some new conformation tensors and a new form of objective tensor derivative are introduced, and that some of the equation terms are represented as function subroutines in the form of NN models. The latter is no different from the situation commonly found in gas dynamics \cite{Molecular_gas_bird_1994}, where the equations of state are given as tables or function subroutines. Also, we note that such conformation tensors are learned from the micro-scale simulations for the best approximation of the polymer stress and the constitutive dynamics. This allows us to bypass evaluating the polymer configuration distribution, whether by directly solving the high-dimensional FK equation or by coupling to micro-scale simulations. Meanwhile, the micro-scale viscoelastic effects can be faithfully captured beyond the empirical closures based on linear/mean-field approximations. \section{Numerical results} To demonstrate the model accuracy, we consider a polymer solution with polymer number density $n_p = 0.5$. 
The bond potential $V_{\rm b}(r)$ is chosen to be of FENE type, i.e., $V_{\rm b}(r) =-\frac{k_s}{2}r^2_{0} \log\left[ 1-\frac{r^2}{r^2_{0}}\right]$, where $k_s$ is the spring constant and $r_0$ is the maximum bond extension. The continuum model is constructed using $n=3$ encoder functions. We also experimented with larger values of $n$ but did not see appreciable improvement. That being said, the choice of $n$ needs to be looked into more carefully in the future. \begin{figure*} \centering \includegraphics[trim=60 20 100 70,clip,scale=0.21]{./figure_1a.pdf} \includegraphics[trim=60 20 100 70,clip,scale=0.21]{./figure_1b.pdf} \includegraphics[trim=60 20 100 70,clip,scale=0.21]{./figure_1c.pdf} \caption{Quasi-equilibrium relaxation process of a dumbbell suspension obtained from direct MD simulation, the present DeePN$^2$, and the canonical Hookean and FENE-P models. \textbf{Left}: Encoder function $g(r)$. \textbf{Middle}: Evolution of $\boldsymbol \tau_{\rm p}$. \textbf{Right}: Evolution of $\mathbf c_1 = \left\langle \mathbf r\mathbf r^T\right\rangle$. Two parameter sets of the FENE-P model are examined: (\Rmnum{1}) the initial conditions of both $\mathbf c_1$ and $\boldsymbol\tau_{\rm p}$ are consistent with MD; (\Rmnum{2}) the initial and final conditions of $\mathbf c_1$ are consistent with MD. The Hookean model parameters are set following (\Rmnum{1}). } \label{fig:quasi_equilibrium_MD_ML} \end{figure*} Fig. \ref{fig:quasi_equilibrium_MD_ML} shows the encoder functions $g(r)$ with $r_0 = 2.4$, $k_s = 0.1$. To validate the DeePN$^2$ model, we consider a quasi-equilibrium relaxation of the polymer solution with $k_B T = 0.25$, where the initial polymer configuration is taken from the equilibrium state with $k_B T = 0.6$. The relaxation process is simulated using both MD and DeePN$^2$. Fig. \ref{fig:quasi_equilibrium_MD_ML} shows the evolution of the trace of $\mathbf c_1$ and $\boldsymbol\tau_{\rm p}$. The predictions from DeePN$^2$ agree well with the MD results. 
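The strong nonlinearity of the FENE force near full extension makes ensemble averages sensitive to the spread of bond lengths, which is precisely what the additional conformation tensors are meant to capture. A toy comparison of the exact average FENE virial against a FENE-P-type mean-field closure (the bond-length distribution below is synthetic, not MD data; $r_0$ and $k_s$ follow the values quoted above):

```python
import numpy as np

ks, r0 = 0.1, 2.4
rng = np.random.default_rng(5)
# synthetic bond vectors with dispersed lengths, some approaching r0
u = rng.normal(size=(20000, 3)); u /= np.linalg.norm(u, axis=1, keepdims=True)
lengths = rng.uniform(0.2, 0.95 * r0, size=(20000, 1))
r = lengths * u

c = np.einsum('na,nb->ab', r, r) / len(r)       # end-to-end orientation tensor
r2 = np.sum(r**2, axis=1)

# exact virial average: k_s <r r^T / (1 - r^2/r0^2)>
exact = ks * np.einsum('n,na,nb->ab', 1.0 / (1.0 - r2 / r0**2), r, r) / len(r)
# FENE-P-type mean-field closure: k_s c / (1 - Tr(c)/r0^2)
fene_p = ks * c / (1.0 - np.trace(c) / r0**2)

print(np.trace(exact) > np.trace(fene_p))
```

By Jensen's inequality the mean-field closure systematically underestimates the stress once bond lengths disperse toward $r_0$, since $x \mapsto x/(1 - x/r_0^2)$ is convex.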
In contrast, the predictions from the Hookean and FENE-P models show clear deviations. Next, we consider the non-equilibrium process of a reverse Poiseuille flow (RPF) in a domain $[0, 40]\times[0, 80]\times [0, 40]$ (reduced units), with periodic boundary conditions imposed in each direction. Starting from $t = 0$, an external field $\mathbf f_{\rm ext} = (f_b, 0, 0)$ is applied in the region $y < 40$ and an opposite field $\mathbf f_{\rm ext} = (-f_b, 0, 0)$ is applied in the region $y > 40$. Fig. \ref{fig:velocity_profile_conformation} shows the instantaneous velocity profiles with $r_0 = 3.8$ and $f_b = 0.02$. The predictions from DeePN$^2$ agree well with MD while FENE-P yields clear deviations. For the velocity evolution at $y = 6$ and $y = 14$, the Hookean and FENE-P models markedly overestimate both the magnitude and the duration of the oscillations. Such limitations of the FENE-P model have already been noted in Ref. \cite{Laso_Ottinger_JNNFM_1993}. From the microscopic perspective, the discrepancy arises from the mean-field approximation, $\boldsymbol\tau_{\rm p} \approx \mathbf c/(1 - \Tr(\mathbf c)/r_0^2)$. Such an approximation cannot capture the nonlinear response when individual polymer bond lengths approach $r_0$. In contrast, DeePN$^2$ can capture such micro-scale ``bond length dispersion'' via the additional macro-scale nonlinear conformation tensors $\mathbf c_2, \cdots, \mathbf c_n$. \begin{figure}[htbp] \includegraphics[trim=60 20 100 70,clip,scale=0.25]{./figure_2a.pdf} \includegraphics[trim=60 20 100 70,clip,scale=0.25]{./figure_2b.pdf} \caption{Evolution of the reverse Poiseuille flow of a dumbbell suspension obtained from MD and various models. \textbf{Left}: velocity profiles at $t = 20, 60, 220$. \textbf{Right}: velocity evolution at $y = 6$ and $y = 14$. The parameters of the Hookean and FENE-P models are chosen such that the equilibrium bond length matches the MD results. 
} \label{fig:velocity_profile_conformation} \end{figure} Shown in Fig. \ref{fig:micro_macro_link}(a) is the evolution of $\mathbf c_1$ at $y = 6$. The DeePN$^2$ model faithfully predicts the responses of the polymer configurations under the external flow field. The instantaneous $\boldsymbol \tau_{\rm p}$ is also accurately predicted by the conformation tensors, as shown in Fig. \ref{fig:micro_macro_link}(b-c). The responses can also be examined via the shear-rate-dependent viscosity. As shown in Fig. \ref{fig:micro_macro_link}(d), the predictions by DeePN$^2$ agree well with the MD results. In contrast, the FENE-P model yields clear deviations. \begin{figure}[htbp] \includegraphics[trim=60 20 100 20,clip,scale=0.25]{./figure_3a.pdf} \includegraphics[trim=60 20 100 20,clip,scale=0.25]{./figure_3b.pdf}\\ \includegraphics[trim=60 20 100 20,clip,scale=0.25]{./figure_3c.pdf} \includegraphics[trim=60 20 100 20,clip,scale=0.25]{./figure_3d.pdf} \caption{The micro-macro correspondence during the evolution of the reverse Poiseuille flow of the dumbbell suspension presented in Fig. \ref{fig:velocity_profile_conformation} with the same line scheme. (a) Evolution of $\mathbf c_1$ at $y = 6$. (b-c) Normal stress difference ${\boldsymbol \tau_{\rm p}}_{xx}- {\boldsymbol\tau_{\rm p}}_{yy}$ and shear stress ${\boldsymbol \tau_{\rm p}}_{xy}$ at $y = 6$ (upper lines) and $y = 14$ (lower lines). (d) Shear-rate-dependent viscosity. 
The predictions of the Hookean model show large deviations from the MD results and are not shown in (a), (c), and (d) for clarity.} \label{fig:micro_macro_link} \end{figure} Besides the first-principle-based stress model and dynamic closure, another distinctive feature of the DeePN$^2$ model is the generalized objective tensor derivative $\mathcal{D}\mathbf c_i/\mathcal{D}t$: \begin{equation} \frac{\mathcal{D}\mathbf c_i}{\mathcal{D}t} = \overset{\triangledown}{\mathbf c}_i - \boldsymbol\kappa:\left[\sum_{k=1}^6 \mathbf E_{1,i}^{(k)} (\mathbf c_1,\cdots,\mathbf c_n) \otimes \mathbf E_{2,i}^{(k)} (\mathbf c_1,\cdots,\mathbf c_n)\right], \label{eq:tensor_derivative} \end{equation} where $\overset{\triangledown}{\mathbf c}_i$ is the standard upper-convected derivative and the second term arises from the source term $\left\langle \mathbf r \nabla_{\mathbf r} g(r)^2 \otimes \mathbf r\mathbf r^T \right\rangle$ in Eq. \eqref{eq:FK_B_evoluation}. Therefore, the second term of $\mathcal{D}\mathbf c_i/\mathcal{D}t$ encodes the nonlinear response to the external field $\boldsymbol\kappa$ inherited from the encoder $g_i(r)$. As a numerical test, we use the present model to simulate the RPF, where $\mathcal{D}\mathbf c_i/\mathcal{D}t$ is chosen to be the upper-convected derivative $\overset{\triangledown}{\mathbf c}_i$ and the other modeling terms remain the same. Fig. \ref{fig:corroational_derivative} shows the evolution of the velocities and $\mathbf c_1$. By ignoring the second term in Eq. \eqref{eq:tensor_derivative}, the predictions show clear deviations from the MD results. This indicates that the empirical choices of the objective tensor derivative are not sufficiently accurate. To achieve the desired accuracy, these derivatives have to retain some information from the specific conformation tensors. 
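The reduction of the first (upper-convected) part of Eq. \eqref{eq:tensor_derivative} can be checked on samples: for $g_1 \equiv 1$, the affine drift $\dot{\mathbf r} = \boldsymbol\kappa\cdot\mathbf r$ contributes exactly $\boldsymbol\kappa \mathbf c_1 + \mathbf c_1\boldsymbol\kappa^T$ to ${\rm d}\langle \mathbf r\mathbf r^T\rangle/{\rm d}t$. A short sketch with toy Gaussian samples:

```python
import numpy as np

rng = np.random.default_rng(4)
r = rng.normal(size=(5000, 3))                 # sampled end-to-end vectors
kappa = rng.normal(size=(3, 3))                # velocity gradient (transposed grad u)

c1 = np.einsum('na,nb->ab', r, r) / len(r)     # c_1 = <r r^T>

# drift contribution of dr/dt = kappa r to d<r r^T>/dt, sample by sample:
drift = np.einsum('aj,nj,nb->ab', kappa, r, r) / len(r) \
      + np.einsum('na,bj,nj->ab', r, kappa, r) / len(r)

print(np.allclose(drift, kappa @ c1 + c1 @ kappa.T))
```

The nonlinear encoders $g_i$ ($i \ge 2$) add the source term discussed above on top of this upper-convected contribution.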
\begin{figure}[htbp] \centering \includegraphics[trim=60 20 100 20,clip,scale=0.25]{./figure_4a.pdf} \includegraphics[trim=60 20 100 20,clip,scale=0.25]{./figure_4b.pdf} \caption{The effectiveness of the objective tensor derivative constructed by \eqref{eq:tensor_derivative}. The additional source term plays a vital role in the accurate modeling of the fluid systems. The model that uses the canonical upper-convected derivative shows clear deviations from the MD results for the evolution of the velocities (\textbf{left}) and $\mathbf c_1$ (\textbf{right}) at $y = 6$. } \label{fig:corroational_derivative} \end{figure} \section{Discussion} The present DeePN$^2$ directly learns the stress model and the constitutive dynamics from microscale simulation data and avoids dealing with the high-dimensional microscale configuration density function $\rho(\mathbf r, t)$. A main observation is that explicit knowledge of $\rho(\mathbf r, t)$ is a sufficient, but not a necessary, condition for constructing the full constitutive equation. We note that DeePN$^2$ differs from the previous moment-closure studies \cite{Warner_IECF_1972, Warner_PhD_1971, Armstrong_JCP_1974_1} based on empirical approximations of $\rho(\mathbf r, t)$. In these semi-analytical studies \cite{Warner_IECF_1972, Warner_PhD_1971, Armstrong_JCP_1974_1}, the steady-state FK solution $\rho_{s}(\mathbf r)$ of a dumbbell is approximated by series expansion, yielding the stress-strain relationship only at equilibrium \cite{Bird_Curtiss_book_vol_2}. In Refs. \cite{Du_Liu_MMS_2005, Yu_Du_mms_2005, Hyon_Du_mms_2008}, a set of high-order moments is proposed to capture the peak region of $\rho(\mathbf r, t)$, yielding good predictions for the two-dimensional dumbbell system. However, it is not straightforward to generalize such approximations to complex systems due to the lack of a general relationship between these moments and the stress tensor $\boldsymbol\tau_{\rm p}$. 
On the other hand, the conformation tensors constructed in the present DeePN$^2$ \emph{are not} the standard moments for the approximation of $\rho(\mathbf r, t)$; they are directly learnt from the micro-scale samples that best capture the dynamics of $\boldsymbol\tau_{\rm p}$, rather than recovering the high-dimensional $\rho(\mathbf r, t)$. As a numerical example, we apply DeePN$^2$ to a three-bead suspension with intramolecular interactions governed by both bond and angle potentials; see the Appendix. Generalization of the learning framework to complex polymer fluids will be presented in future studies. \section{Summary} In this study, we presented a machine-learning-based approach for constructing hydrodynamic models for polymer fluids, DeePN$^2$, directly from the micro-scale descriptions. While this is only the first step in a long program, the results we obtained have already demonstrated the potential of such an approach for achieving accuracy and efficiency at the same time. The construction is based on an underlying micro-scale model. It respects the symmetries of the underlying physical system. It is end-to-end, and requires little ad hoc human intervention. Contrary to conventional wisdom about machine learning models, the model obtained here is quite interpretable, and in fact offers real physical insight. It has already demonstrated much better accuracy than existing hydrodynamic models in several tests. Different from the common ML-based approaches for learning the reduced dynamics of complex systems, the present approach does not require time-series samples and provides a generalized form of the objective tensor derivative with a \emph{clear} micro-scale interpretation. This enables us to avoid heuristic choices of the objective tensor derivative and ``black-box'' representations based on the numerical evaluation of time derivatives. 
These unique features are well-suited to multi-scale fluid systems where accurate time-series samples from the micro-scale simulations are often limited. While we focused on polymer solutions, the new form of the objective tensor derivative and the present learning framework are quite general and can be adapted to other systems of complex fluids and soft matter. It should also be noted that what we discussed is only a first step towards constructing accurate and robust hydrodynamic models for non-Newtonian fluids. Admittedly, dumbbell suspensions are polymer models with a simplified intramolecular potential and viscoelasticity; applications to more realistic micro-scale models will be carried out in future work. Among the other issues that remain to be addressed, let us mention coupling the training process with the adaptive selection of the training data as was done in MD \cite{Zhang_Lin_PRM_2019}, the automatic choice of the model complexity (e.g. the choice of $n$), the improvement of the underlying micro-scale model \cite{Lei_Li_PNAS_2016}, and the enhancement of the micro-scale sampling efficiency. While some of these will take time, there is no doubt that machine learning, used in the right way, can help us to tackle the long-standing problem of developing truly reliable hydrodynamic models for complex fluids. \clearpage
\section{Introduction} Light-cone distribution amplitudes (LCDAs) are of great importance in exclusive $B$-meson decays like $B \rightarrow \pi \pi$ or $B \rightarrow \pi K$ in the heavy quark limit and allow for the study of $CP$-violation in weak interactions. They parametrize matrix elements of nonlocal heavy-light currents separated along the light-cone at leading order in the heavy-quark effective theory (HQET) \cite{Georgi:1990um} in terms of expansions in wave functions of increasing twist \cite{Beneke:2003pa,Grozin:1996pq}. In particular, LCDAs appear in factorization theorems such as QCD factorization \cite{Beneke:2000ry, Beneke:1999br,Beneke:2003pa}, since these amplitudes encode the nonperturbative nature of the strong interactions and are crucial in $B$-meson decay form factor computations. General definitions have been obtained in \cite{Beneke:2003pa,Grozin:1996pq}. In contrast to light-meson distribution amplitudes, which also appear in factorization theorems, the properties of the $B$-meson distribution amplitudes are less well known. However, they have been studied extensively in recent years. Their evolution equations have been investigated for the leading twist two-particle LCDA in \cite{Lange:2003ff,Bell:2013tfa,Braun:2019zhp,Braun:2019wyx,Galda:2020epp} and for higher twist amplitudes in \cite{Braun:2017liq}. Moreover, the decay $B \rightarrow \gamma \ell \nu$ is of particular interest, because it provides a simple example to probe the light-cone structure of the $B$-meson. Here, the photon has a large energy compared to the strong interaction scale $\Lambda$, so QCD factorization can be used to study parameters like the inverse moment $\lambda_B$ \cite{Beneke:2011nf,Beneke:2018wjp,Shen:2018abs,Braun:2012kp,Khodjamirian:2020hob,Wang:2016qii,Wang:2018wfj}. Three-particle LCDAs have also been investigated e.g. 
in \cite{Grozin:1996pq,Kawamura:2001jm}, where the corresponding Mellin moments have been defined and identities between two-particle and three-particle LCDAs have been found. In general, these three-particle LCDAs occur in higher dimensional vacuum-to-meson matrix elements of nonlocal quark operators. But in the case of local quark operators, these matrix elements can be expressed in terms of the parameters $\lambda_{E,H}^2$, which also contribute to the second Mellin moments of the three-particle $B$-meson distribution amplitudes. These are the parameters of particular interest in this work. They were first investigated by Grozin and Neubert \cite{Grozin:1996pq} within the framework of QCD sum rules \cite{Novikov:1983gd,Shifman:1978by,Shifman:1978bx}. All contributions to the operator-product expansion (OPE) \cite{Wilson:1969zs} in local vacuum condensates up to mass dimension five have been considered there. Up to mass dimension four, the leading order contribution is of $\mathcal{O} (\alpha_s)$, while the leading order of the mass dimension five condensate contributes at $\mathcal{O} (\alpha_s^0)$. The extraction of these parameters carries a rather large uncertainty, because the sum rules turn out to be unstable with respect to the variation of the Borel parameter. Notice that such a dependence is not unexpected, since it is well known \cite{Braun:1989iv,Ball:1998sk,Nishikawa:2011qk} that higher dimensional condensates tend to give large contributions to correlation functions including higher dimensional operators. Further study by Nishikawa and Tanaka \cite{Nishikawa:2011qk} led to deviations from the original values for $\lambda_{E,H}^2$. These authors argued in their work that a consistent treatment of all $\mathcal{O} (\alpha_s)$ contributions should resolve the stability problem, which is related to the fact that the OPE does not converge for the parameters $\lambda_{E,H}^2$ in \cite{Grozin:1996pq}. 
For this analysis, they included the $\mathcal{O}(\alpha_s)$ corrections to the coupling constant $F(\mu)$ as well, which, albeit leading to good convergence of the OPE, is subject to large higher-order perturbative corrections \cite{Broadhurst:1991fc,Penin:2001ux}. Moreover, they included as an additional nonperturbative correction the dimension six diagram of $\mathcal{O}(\alpha_s)$ in order to check the convergence of the OPE beyond mass dimension five and calculated the $\mathcal{O} (\alpha_s)$ corrections for the dimension five condensate. After performing a resummation of the large logarithmic contributions, which results in more stable sum rules and a more convergent OPE compared to \cite{Grozin:1996pq}, they obtained new estimates for the parameters $\lambda_{E,H}^2$. If we compare the estimates from \cite{Grozin:1996pq} and \cite{Nishikawa:2011qk} in Table \ref{tab::finalresult}, we see that the values for $\lambda_{E,H}^2$ differ by approximately a factor of three, although the ratio $\lambda_E^2/\lambda_H^2$ gives nearly the same value. It is therefore timely to investigate new alternative sum rules which also allow for a prediction of $\lambda_{E,H}^2$. Instead of analysing a correlation function with a three-particle and a two-particle current, we consider sum rules based on a diagonal correlation function of two quark-antiquark-gluon three-particle currents. We include all leading order contributions up to mass dimension seven. The advantage of this sum rule is that it is positive definite, and hence we expect the quark-hadron duality to be more accurate compared to \cite{Grozin:1996pq,Nishikawa:2011qk}. However, due to the high mass dimension of the correlation function, the OPE does not show better convergence than in the nondiagonal case. Moreover, the continuum and higher excited states dominate the sum rule. 
This problem will be resolved by considering combinations of the parameters $\lambda_{E,H}^2$, in particular the $\mathcal{R}$-ratio $\mathcal{R} = \lambda_E^2/\lambda_H^2$. The paper is organized as follows: In Sec. \ref{chp: DerivationSumRule} we derive the sum rules for the parameters $\lambda_{E,H}^2$ and the sum $(\lambda_H^2 + \lambda_E^2)$. Sec. \ref{chp: Contributions} is devoted to the computation of the OPE contributions which enter the sum rules. In Sec. \ref{chp: NumericalAnalysis} we present the numerical analysis of the sum rules and state our final results for the parameters $\lambda_{E,H}^2$. Additionally, we investigate the ratio of these parameters. Finally, we conclude in Sec. \ref{chp:Conclusion}. \hspace{-1.5cm} \section{Derivation of the QCD Sum Rules in HQET} \label{chp: DerivationSumRule} \noindent In this section we derive the sum rules for the diagonal quark-antiquark-gluon three-particle correlation function. Before we start, the definitions of the HQET parameters $\lambda_{E,H}^2$ are in order \cite{Grozin:1996pq}: \begin{align} \bra{0} {g_s \bar{q} \; \vec{\alpha} \cdot \vec{E}} \; \gamma_5 h_v\ket{\bar{B}(v)} &= F(\mu) \, \lambda_E^2 \, , \label{eq:DefLamE} \\ \bra{0} g_s \bar{q} \;\vec{\sigma} \cdot \vec{H} \; \gamma_5 h_v \ket{\bar{B}(v)} &= i F(\mu) \, \lambda_H^2. \label{eq:DefLamH} \end{align} From a physical point of view, these quantities parametrize the local vacuum-to-$\bar{B}$-meson matrix elements, which contain the chromoelectric and chromomagnetic fields in HQET. The chromoelectric field is given by $E^i = G^{0i}$ and $H^i = -\frac{1}{2} \epsilon^{ijk} G^{jk}$ denotes the chromomagnetic field, with $G_{\mu \nu} = G_{\mu \nu}^{a} T^{a}$. Here, $G^{\mu \nu} = \frac{i}{g_s} [D^{\mu},D^{\nu}]$ is the field strength tensor, while $g_s$ corresponds to the strong coupling constant. Furthermore, the fields $\bar{q}$ in Eq. 
\eqref{eq:DefLamE} and \eqref{eq:DefLamH} denote light quark fields, whereas the field $h_v$ denotes the HQET heavy quark field. Moreover, $v$ is the velocity of the heavy $\bar{B}$-meson. The Dirac matrices $\alpha^i$ are given by $\gamma^0 \gamma^i$ and $\sigma^i = \gamma^{i} \gamma^{5}$. In addition, the HQET decay constant $F(\mu)$ is defined via the matrix element \begin{align} \bra{0} \bar{q} \gamma_{\mu} \gamma_5 h_v \ket{\bar{B}(v)} = i F(\mu) v_{\mu} \end{align} and can be related to the $B$($\bar{B}$)-meson decay constant in QCD up to one loop order \cite{Neubert:1991sp}: \begin{align} f_B \sqrt{m_B} = F(\mu) K(\mu) &= F(\mu) \Big[ 1 + \frac{C_F \alpha_s}{4 \pi} \Big(3 \cdot \mathrm{ln} \frac{m_b}{\mu} - 2 \Big) \nonumber \\ & + ... \Big] + \mathcal{O}\Big(\frac{1}{m_b}\Big). \label{eq:RelationFb} \end{align} Its explicit scale dependence has to cancel against that of the matching prefactor in order to yield the scale-independent constant $f_B$. Values for $f_B$ can be found in \cite{Aoki:2016frl}, which estimates this decay constant to be: \begin{align} f_B = (192.0 \pm 4.3) \; \mathrm{MeV} \, . \label{eq::physicaldecayconstant} \end{align} The coupling constant $F(\mu)$ will be of particular importance for the derivation of the relevant low-energy parameters in the following QCD sum rule analysis. Since we investigate the sum rules at leading-order accuracy, corrections of order $\mathcal{O}(\alpha_s)$ and $\mathcal{O}\Big(\frac{1}{m_b}\Big)$ will be neglected. As already discussed before, Grozin and Neubert \cite{Grozin:1996pq} introduced the parameters $\lambda_{E,H}^2$. For this, they considered the correlation function shown in Eq. (\ref{eq:CorrFuncOffDiag}). The starting point for our calculation is the correlation function given in Eq. (\ref{eq:CorrelationFunc}). 
\begin{align} \Pi_{\text{GN}} =& \; i \int \mathrm{d}^d x e^{-i \omega v \cdot x} \bra{0} T\{\bar{q}(0) \Gamma_1^{\mu \nu} g_s G_{\mu \nu}(0) h_v(0) \nonumber \\ &\times \bar{h}_v(x) \gamma_5 q(x) \} \ket{0} \, , \label{eq:CorrFuncOffDiag} \\ \Pi_{\text{diag}} =& \; i \int \mathrm{d}^d x \; e^{-i \omega v \cdot x} \bra{0} T\{\bar{q}(0) \Gamma_1^{\mu \nu} g_s G_{\mu \nu}(0) h_v(0) \nonumber \\ &\times \bar{h}_v(x) \Gamma_2^{\rho \sigma} g_s G_{\rho \sigma}(x) q(x)\} \ket{0} \, . \label{eq:CorrelationFunc} \end{align} \noindent Notice that at this point we do not require a specific choice of the quantities $\Gamma_1^{\mu \nu}$ and $\Gamma_2^{\rho \sigma}$, which denote an arbitrary combination of Dirac $\gamma$-matrices, but in the following steps it is convenient to choose these matrices such that combinations of the HQET parameters $\lambda_{E,H}^2$ are projected out. This requires that the perturbative and nonperturbative contributions to the OPE in Sec. \ref{chp: Contributions} are computed for general $\Gamma_1^{\mu \nu}$ and $\Gamma_2^{\rho \sigma}$. Since we are considering a diagonal Green's function, the structure of $\Gamma_2^{\rho \sigma}$ is directly related to $\Gamma_1^{\mu \nu}$ by replacing indices. From now on we use the notation: \begin{align} \Gamma_1 &\equiv \Gamma_1^{\mu \nu} \, , \\ \Gamma_2 &\equiv \Gamma_2^{\rho \sigma} \, . \end{align} Moreover, we are working in the $\bar{B}$-meson rest frame, where $ v = (1,\vec{0})^T$, in order to simplify the calculations. 
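As a brief numerical aside, the one-loop matching relation in Eq. \eqref{eq:RelationFb} is easy to evaluate. The sketch below uses the value of $f_B$ from Eq. \eqref{eq::physicaldecayconstant} together with assumed reference values for $m_B$, $m_b$ and $\alpha_s(m_b)$ (these inputs are illustrative and not taken from this paper):

```python
import math

# Illustrative evaluation of Eq. (eq:RelationFb):
#   f_B * sqrt(m_B) = F(mu) * K(mu),
#   K(mu) = 1 + C_F*alpha_s/(4*pi) * (3*ln(m_b/mu) - 2)  at one loop.
# m_B, m_b and alpha_s below are assumed reference values.

C_F = 4.0 / 3.0
f_B = 0.1920      # GeV, from Eq. (eq::physicaldecayconstant)
m_B = 5.27966     # GeV, assumed B-meson mass
m_b = 4.8         # GeV, assumed b-quark mass
alpha_s = 0.22    # assumed alpha_s at mu = m_b

def F_hqet(mu):
    """One-loop matched HQET coupling F(mu) in GeV^(3/2)."""
    K = 1.0 + C_F * alpha_s / (4.0 * math.pi) * (3.0 * math.log(m_b / mu) - 2.0)
    return f_B * math.sqrt(m_B) / K

# At mu = m_b the logarithm vanishes and K = 1 - 2*C_F*alpha_s/(4*pi) < 1,
# so F(m_b) comes out slightly above f_B*sqrt(m_B) ~ 0.44 GeV^(3/2).
```

The percent-level shift between $F(m_b)$ and $f_B\sqrt{m_B}$ illustrates why, at the leading-order accuracy adopted here, the distinction between the two can be neglected.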
The next step in the derivation of the sum rules will be to exploit the unitarity condition, where the ground state $\bar{B}$-meson is separated from the continuum and excited states: \\ \begin{widetext} \begin{align} \frac{1}{\pi} \mathrm{Im} \Pi_{\text{diag}}(\omega) =& \sum_n (2\pi)^3 \delta(\omega - v \cdot p_n) \bra{0} \bar{q}(0) \Gamma_1 g_s G_{\mu \nu}(0) h_v(0)\ket{n} \bra{n} \bar{h}_v(x) \Gamma_2 g_s G_{\rho \sigma}(x) q(x) \ket{0} \mathrm{d} \Phi_n \nonumber \\ =& \, \; \delta(\omega - \bar{\Lambda}) \bra{0} \bar{q}(0) \Gamma_1 g_s G_{\mu \nu}(0) h_v(0)\ket{\bar{B}} \bra{\bar{B}} \bar{h}_v(0) \Gamma_2 g_s G_{\rho \sigma}(0) q(0) \ket{0} \nonumber \\& \; + \rho^{\text{hadr.}}(\omega) \Theta(\omega - \omega^{th}) \, . \label{eq:UnitarityCond} \end{align} \end{widetext} In Eq. (\ref{eq:UnitarityCond}), we introduced the binding energy $\bar{\Lambda} = m_B - m_b$, which is one of the important low-energy parameters in this formalism. Furthermore, we separated the full $n$-particle contribution in the first line into a ground state contribution, which will be the dominant contribution in our chosen stability window, and a continuum contribution including broad higher resonances. In the case of QCD correlation functions, the exponential in Eq. \eqref{eq:CorrelationFunc} would generally take the form $e^{-iqx}$ with $q$ denoting the external momentum. Since the external momentum has no spatial component in the $B$-meson rest frame, transitions from the ground state to the excited states in Eq. \eqref{eq:UnitarityCond} are possible by injecting energy $q^0$ into the system. In this work we explicitly choose $q = \omega \cdot v$ such that we end up with the correlation function shown in Eq. \eqref{eq:CorrelationFunc}. 
The matrix elements occurring in \eqref{eq:UnitarityCond} can be decomposed in the following way \cite{Grozin:1996pq,Nishikawa:2011qk}: \begin{align} & \bra{0} \bar{q}(0) \Gamma_1 g_s G_{\mu \nu}(0) h_v(0) \ket{\bar{B}} = \; \frac{-i}{6} F(\mu) \{\lambda_H^2(\mu) \nonumber \\ & \times \mathrm{Tr}[\Gamma_1 P_+ \gamma_5 \sigma_{\mu \nu}] + [\lambda_H^2(\mu) - \lambda_E^2(\mu)] \nonumber \\ & \times \mathrm{Tr}[\Gamma_1 P_+ \gamma_5 (i v_{\mu} \gamma_{\nu} - i v_{\nu} \gamma_{\mu})]\}. \label{eq:Decomp} \end{align} Notice that the second decomposition is indeed valid since the $B$-meson ground state explicitly depends on the velocity $v$ and $\sigma_{\mu \nu} = \frac{i}{2} [\gamma_{\mu}, \gamma_{\nu}]$ corresponds to the usual antisymmetric Dirac tensor. In \eqref{eq:Decomp} we made use of the covariant trace formalism, further investigated in \cite{Grozin:1996pq,Falk:1992fm}. The next step will be to use the standard dispersion relation, after applying the residue theorem and the Schwarz reflection principle \footnote{For more details on QCD sum rules or HQET sum rules, see \cite{Neubert:1993mb, Colangelo:2000dp}}: \\ \begin{align} \Pi_{\text{diag}}(\omega) =& \; \frac{1}{\pi} \int_0^{\infty} \mathrm{d}s \frac{\mathrm{Im} \Pi_{\text{diag}}(s)}{s - \omega - i0^+} \nonumber \\ &= \frac{1}{\bar{\Lambda} - \omega - i0^+} \bra{0} \bar{q}(0) \Gamma_1 g_s G_{\mu \nu}(0) h_v(0)\ket{\bar{B}} \nonumber \\ & \times \bra{\bar{B}} \bar{h}_v(0) \Gamma_2 g_s G_{\rho \sigma}(0) q(0) \ket{0} \nonumber \\& \; + \int_{s^{th}}^{\infty} \mathrm{d} s \frac{\rho^{\text{hadr.}}(s)}{s - \omega - i0^+} \, . \label{eq::ground-higher} \end{align} \\ In Eq. (\ref{eq::ground-higher}) we introduce the threshold parameter $s^{th}$, which is another relevant low-energy parameter that separates the ground state contribution from higher resonances and continuum contributions. 
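The pole structure of the dispersion relation in Eq. \eqref{eq::ground-higher} can be checked numerically in a toy setting: a single ground-state pole with $\frac{1}{\pi}\mathrm{Im}\,\Pi(s) \to \delta(s - \bar{\Lambda})$ must reproduce $1/(\bar{\Lambda} - \omega)$ at spacelike $\omega < 0$. In the sketch below the delta function is smeared into a narrow Lorentzian (a purely numerical device; parameter values are illustrative):

```python
import math

# Toy check of Eq. (eq::ground-higher): a single pole in the spectral
# density reproduces Pi(omega) = 1/(Lambda_bar - omega) for omega < 0.
# delta(s - Lambda_bar) is approximated by a Lorentzian of width eps.

Lambda_bar = 0.5   # GeV, illustrative binding energy
eps = 1e-3         # smearing width of the delta function
omega = -1.0       # spacelike evaluation point

def im_pi_over_pi(s):
    # (1/pi) Im Pi(s): Lorentzian approximation to delta(s - Lambda_bar)
    return (1.0 / math.pi) * eps / ((s - Lambda_bar) ** 2 + eps ** 2)

# dispersion integral  Pi(omega) = int_0^inf ds [Im Pi(s)/pi] / (s - omega)
ds, s_max = 1e-4, 20.0
pi_disp = sum(im_pi_over_pi(k * ds) / (k * ds - omega) * ds
              for k in range(1, int(s_max / ds)))

pi_pole = 1.0 / (Lambda_bar - omega)   # exact single-pole result
# pi_disp agrees with pi_pole up to small smearing/truncation corrections
```

The agreement degrades only by terms of order `eps`, confirming that the ground-state pole term in Eq. \eqref{eq::ground-higher} is exactly the dispersive image of a delta-function spectral density.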
We can now move on and evaluate the ground state contribution: \begin{widetext} \begin{align} \bra{0} \bar{q}(0) \Gamma_1 g_s G_{\mu \nu}(0) h_v(0)\ket{\bar{B}} \bra{\bar{B}} \bar{h}_v(0) \Gamma_2 g_s G_{\rho \sigma}(0) q(0) \ket{0} &= \; \frac{-i}{6} F(\mu) \Big[\lambda_H^2(\mu) \mathrm{Tr}[\Gamma_1 P_+ \gamma_5 \sigma_{\mu \nu}] \nonumber \\ & + [\lambda_H^2(\mu) - \lambda_E^2(\mu)] \mathrm{Tr}[\Gamma_1 P_+ \gamma_5 (i v_{\mu} \gamma_{\nu} - i v_{\nu} \gamma_{\mu})]\Big] \label{eq:GroundStateContr} \\& \times \frac{-i}{6} F^{\dagger}(\mu) \Big[ \lambda_H^2(\mu) \mathrm{Tr}[\gamma_5 P_+ \Gamma_2 \sigma_{\rho \sigma}] \nonumber \\ & - [\lambda_H^2(\mu) - \lambda_E^2(\mu)] \mathrm{Tr}[\gamma_5 P_+ \Gamma_2 (i v_{\rho} \gamma_{\sigma} - i v_{\sigma} \gamma_{\rho})] \Big] \, \nonumber. \end{align} \end{widetext} Notice that the term involving the difference of the two HQET parameters $(\lambda_H^2 - \lambda_E^2)$ does not change its sign under complex conjugation. In order to derive the sum rules which ultimately determine the parameters $\lambda_{E,H}^2$, we make an explicit choice for the matrices $\Gamma_1$ and $\Gamma_2$ \cite{Grozin:1996pq}. Following the same approach as \cite{Grozin:1996pq}, we choose our gamma matrices $\Gamma_{1,2}$ as: \begin{align} \Gamma_{1} &= \frac{i}{2} \sigma_{\mu \nu} \gamma_5 \, \label{eq:G1G2E} \end{align} to obtain the $(\lambda_{H}^2 + \lambda_{E}^2)^2$ sum rule. Furthermore, for the projection of the $\lambda_H^4$ sum rule we choose \begin{align} \Gamma_{1} &= i \Bigg( \frac{1}{2} \delta_{\alpha}^{\; \nu} - v_{\nu} v^{\alpha}\Bigg) \sigma_{\mu \alpha} \gamma_5 \label{eq:G1G2H} \end{align} and for $\lambda_E^4$: \begin{align} \Gamma_{1} &= i v_{\nu} v^{\alpha} \sigma_{\mu \alpha} \gamma_5 \, . \end{align} Notice that these choices are Lorentz covariant in comparison to Eq. \eqref{eq:DefLamE} and \eqref{eq:DefLamH}. 
The corresponding expressions for $\Gamma_2$ can be obtained from $\Gamma_1$ by replacing $\mu \rightarrow \rho$, $\nu \rightarrow \sigma$. \\ Using the relation in Eq. \eqref{eq:GroundStateContr}, we can obtain expressions for $\Pi_{E,H}$ and $\Pi_{HE}$: \\ \begin{align} \Pi_{E,H}(\omega) =& \; F(\mu)^2 \cdot \lambda_{E,H}^4 \cdot \frac{1}{\bar{\Lambda}- \omega - i0^+} + \int_{s^{th}}^{\infty} \mathrm{d}s \frac{\rho_{E,H}^{\text{hadr.}}(s)}{s - \omega - i0^+} \label{eq:SpectralFuncH} \\ \Pi_{HE}(\omega) =& \; F(\mu)^2 \cdot (\lambda_H^2 + \lambda_E^2)^2 \cdot \frac{1}{\bar{\Lambda} - \omega - i0^+} \nonumber \\ & + \int_{s^{\text{th}}}^{\infty} \mathrm{d}s \frac{\rho_{HE}^{\text{hadr.}}(s)}{s - \omega - i0^+} \label{eq:SpectralFuncE} \end{align} \\ Note that the threshold parameter $s^{th}$ in Eq. (\ref{eq:SpectralFuncH}) does not necessarily coincide with the threshold parameter in Eq. (\ref{eq:SpectralFuncE}). To parametrize the hadronic spectral density, we make use of global and semilocal quark-hadron duality (QHD) \cite{Poggio:1975af, Hofmann:2003qf} in order to connect the hadronic spectral density with the one described by the OPE \cite{Wilson:1969zs,Novikov:1983gd,Neubert:1991sp,Colangelo:2000dp}. This is the essential idea of this formalism. However, power-suppressed nonperturbative effects become dominant in comparison to the perturbative contribution for $|\omega| \approx \Lambda_{\text{QCD}}$. In the QCD sum rule approach \cite{Novikov:1983gd}, these effects are parametrized in terms of a power series of local condensates as a consequence of the non-trivial QCD vacuum structure. These condensates carry the quantum numbers of the QCD vacuum. 
For convenience, we show explicitly in Appendix \ref{chp:Condensate} the expansion and averaging of the vacuum matrix element \eqref{eq:CorrelationFunc} in order to obtain the quark condensate $\bra{0}\bar{q}q\ket{0}$, the gluon condensate $\bra{0} G_{\mu \nu}^{a} G_{\rho \sigma}^{a} \ket{0}$, the quark-gluon condensate $\bra{0}\bar{q}g_s \sigma \cdot G q\ket{0}$ and the triple-gluon condensate $\bra{0} g_{s}^3 f^{a b c} G_{\mu \nu}^{a} G_{\rho \sigma}^{b} G_{\alpha \lambda}^{c} \ket{0}$. Although we can work in the Euclidean region, the physical states described by the spectral functions in Eqs. \eqref{eq:SpectralFuncH} and \eqref{eq:SpectralFuncE} are defined for $\omega \in \mathbb{R}$. Since there is no direct estimate for the hadronic spectral density $\rho_{X}^{\text{hadr.}}(s)$, we need to make use of two statements. First, we exploit that for $\omega \ll 0$ the hadronic and the OPE correlation functions coincide at the global level: \begin{align} \Pi_{X}^{\text{hadr}.} = \Pi_{X}^{\text{OPE}} \; \; \; \; \text{for} \, \, \, X \in \{H,E,H\hspace{-0.1cm}E\}. \label{GlobalQHD} \end{align} Asymptotic freedom guarantees that this equality holds. Moreover, we need to employ the semilocal quark-hadron duality, which connects the spectral densities: \begin{align} \int_{s_{X}^{th}}^{\infty} \mathrm{d}s \frac{\rho_{X}^{\text{hadr.}}(s)}{s - \omega - i0^+} = \int_{s_{X}^{th}}^{\infty} \mathrm{d}s \frac{\rho_{X}^{\text{OPE}}(s)}{s - \omega - i0^+}, \label{LocalQHD} \end{align} where $X$ needs to be chosen according to \eqref{GlobalQHD}. In the low-energy region, where nonperturbative effects dominate, the duality relation is strongly violated due to pronounced resonance peaks, while in the high-energy region these peaks become broad and overlap. Once a sum rule is obtained, the consistency of the QHD approximations can be checked (see Sec. \ref{chp: NumericalAnalysis} for more details). 
It is therefore necessary to work in the transition region where the condensates are important, but still small and local enough that perturbative methods can be applied. Based on the relations in Eqs. \eqref{GlobalQHD} and \eqref{LocalQHD}, we separate the integral over the OPE spectral density by introducing the threshold parameter $s^{th}$. Hence, we end up with the following form for the sum rules: \begin{align} F(\mu)^2 \cdot \lambda_{E,H}^4 \frac{1}{\bar{\Lambda} - \omega - i0^+} =& \int_{0}^{s^{th}} \mathrm{d}s \frac{\rho_{E,H}^{\text{OPE}}(s)}{s - \omega - i0^+} \, \label{eq::hadronrep_1}, \\ F(\mu)^2 \cdot (\lambda_H^2 + \lambda_E^2)^2 \frac{1}{\bar{\Lambda} - \omega - i0^+} =& \int_{0}^{s^{th}} \mathrm{d}s \frac{\rho_{HE}^{\text{OPE}}(s)}{s - \omega - i0^+} \label{eq::hadronrep_2}. \end{align} \\ Finally, we perform a Borel transformation, which removes possible subtraction terms and further leads to an exponential suppression of higher resonances and the continuum. In addition, the convergence of our sum rule is improved. Generally, the Borel transform can be defined in the following way \cite{Neubert:1993mb,Colangelo:2000dp}: \begin{align} \mathcal{B}_M f(\omega) = \underset{n \rightarrow \infty, - \omega \rightarrow \infty}{\mathrm{lim}} \frac{(-\omega)^{n + 1}}{\Gamma(n + 1)} \Big(\frac{\mathrm{d}}{\mathrm{d} \omega} \Big)^n f(\omega), \end{align} where $f(\omega)$ denotes an arbitrary test function. Furthermore, we keep the ratio $M = \frac{-\omega}{n}$ fixed, where $M$ denotes the Borel parameter. 
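The action of this limit on the single-pole term of Eq. \eqref{eq::hadronrep_1} can be verified directly: for $f(\omega) = 1/(\bar{\Lambda} - \omega)$ one has $(\mathrm{d}/\mathrm{d}\omega)^n f = n!\, (\bar{\Lambda} - \omega)^{-(n+1)}$, so with $\omega = -nM$ the definition reduces to $\lim_n (1 + \bar{\Lambda}/(nM))^{-(n+1)} = e^{-\bar{\Lambda}/M}$. A quick numerical sketch (illustrative parameter values) confirms the convergence:

```python
import math

# Finite-n approximation to the Borel transform of 1/(Lambda_bar - omega):
#   B_M f = lim_n (n*M)^(n+1) / (Lambda_bar + n*M)^(n+1) = exp(-Lambda_bar/M)

Lambda_bar = 0.5   # GeV, illustrative
M = 0.6            # GeV, Borel parameter

def borel_finite_n(n):
    return (n * M / (Lambda_bar + n * M)) ** (n + 1)

# borel_finite_n(n) approaches exp(-Lambda_bar/M) as n grows,
# with corrections of order 1/n
target = math.exp(-Lambda_bar / M)
```

This is exactly the exponential suppression factor $e^{-\bar{\Lambda}/M}$ that appears on the hadronic side of the Borel-transformed sum rules below.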
After applying this transformation, we derive the final form of our sum rule expressions: \begin{align} F(\mu)^2 \cdot \lambda_{E,H}^4 \cdot e^{-\bar{\Lambda}/M} &= \int_{0}^{\omega_{th}} \mathrm{d} \omega \; \rho_{E,H}^{\text{OPE}}(\omega) \; e^{-\omega/M} \nonumber \\ & = \int_{0}^{\omega_{\text{th}}} \mathrm{d} \omega \; \frac{1}{\pi} \mathrm{Im} \Pi_{E,H}^{\text{OPE}}(\omega) \; e^{-\omega/M} \, , \label{eq:SumRuleH} \end{align} \begin{align} F(\mu)^2 \cdot (\lambda_H^2 + \lambda_E^2)^2 \cdot e^{-\bar{\Lambda}/M} &= \int_{0}^{\omega_{th}} \mathrm{d} \omega \; \rho_{HE}^{\text{OPE}}(\omega) \; e^{-\omega/M} \nonumber \\ & \hspace{-0.5cm} = \int_{0}^{\omega_{\text{th}}} \mathrm{d}\omega \; \frac{1}{\pi} \mathrm{Im} \Pi_{HE}^{\text{OPE}}(\omega) \; e^{-\omega/M} \, .\label{eq:SumRuleE} \end{align} These are the QCD sum rules used in this paper. In order to obtain reliable values for the parameters $\lambda_{E,H}^2$ from the sum rules in Eqs. \eqref{eq:SumRuleH} and \eqref{eq:SumRuleE}, the Borel parameter $M$ needs to be chosen appropriately, together with the threshold parameter $\omega_{th}$. The next step will be to determine the correlation function $\Pi_{X}^{\text{OPE}}(\omega)$, which is given by the OPE: \\ \begin{align} \Pi_{\text{X}}^{\text{OPE}}(\omega) =& \; C^{\text{X}}_{\text{pert}}(\omega) + C^{\text{X}}_{\bar{q}q} \braket{\bar{q}q} + C^{\text{X}}_{G^2} \braket{\frac{\alpha_s}{\pi} G^2} \nonumber \\ & + C^{\text{X}}_{\bar{q}Gq} \braket{\bar{q} g_s \sigma \cdot G q} + C^{\text{X}}_{G^3} \braket{g_s^3 f^{abc} G^{a} G^{b} G^{c}} \nonumber \\ & + C^{\text{X}}_{\bar{q}qG^2} \braket{\bar{q}q} \braket{\frac{\alpha_s}{\pi} G^2} + ... \label{eq:OPE} \end{align} The Wilson coefficients $C$ in Eq. \eqref{eq:OPE} will be discussed in Sec. \ref{chp: Contributions}. 
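The continuum dominance noted in the introduction can be made quantitative with a toy estimate. The leading perturbative spectral density of the diagonal correlator grows like $\omega^6$, so only a tiny fraction of the Borel-weighted perturbative integral lies below a typical threshold $\omega_{th}$. The sketch below (toy normalization, illustrative parameter values; the overall constant drops out of the ratio) quantifies this:

```python
import math

# Toy illustration of continuum dominance for a spectral density
# rho(omega) ~ omega^6: fraction of the Borel-weighted integral that
# lies below the threshold omega_th.  The normalization cancels.

def borel_fraction(omega_th, M, n=6, steps=100_000):
    """int_0^{omega_th} w^n e^{-w/M} dw / int_0^inf w^n e^{-w/M} dw."""
    total = math.factorial(n) * M ** (n + 1)   # exact Gamma-function integral
    dw = omega_th / steps
    below = sum(((k + 0.5) * dw) ** n * math.exp(-(k + 0.5) * dw / M) * dw
                for k in range(steps))         # midpoint rule
    return below / total

# For omega_th = 1 GeV and M = 0.5 GeV, less than one percent of the
# perturbative integral lies in the ground-state window, illustrating
# why the continuum dominates these high-dimension sum rules.
```

This steep $\omega^6$ weighting is the quantitative reason why ratios such as $\mathcal{R} = \lambda_E^2/\lambda_H^2$, in which the common continuum sensitivity largely cancels, are studied below.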
Moreover, we define a more convenient notation for the condensate contributions: \begin{align} & \braket{\bar{q}q} := \bra{0} \bar{q} q \ket{0}, \braket{G^2} := \bra{0} G_{\mu \nu}^a G^{a, \mu \nu} \ket{0}, \nonumber \\ & \braket{\bar{q} g_s \sigma \cdot G q} := \bra{0} \bar{q} g_s G^{\mu \nu} \sigma_{\mu \nu} q \ket{0}, \nonumber \\ & \braket{g_s^3 f^{abc} G^{a} G^{b} G^{c}} := \bra{0} g_s^3 f^{abc} G^{a}_{\mu \nu} G^{b,\nu \rho} G^{c,\mu}_{\rho} \ket{0}. \end{align} \noindent As previously mentioned, the condensates are uniquely parametrized up to mass dimension five. Starting at dimension six, many different contributions occur, but some of them are related to each other by the QCD equations of motion and Fierz identities \cite{Thomas:2007gx} \footnote{A list is given for example in the review \cite{Gubler:2018ctz}.}. Note that in the power expansion of Eq. \eqref{eq:OPE} we have only stated the dimension six and seven condensates, which give a leading order contribution to the parameters $\lambda_{E,H}^2$. Moreover, many estimates for the values of the condensates are given in the literature, obtained from, e.g., lattice QCD and sum rules \cite{Gubler:2018ctz}, but determining values for condensates of dimension six and higher is an ongoing task due to mixing with lower dimensional condensates. Because of the lack of these values, the vacuum saturation approximation \cite{Shifman:1978bx} is exploited in many cases, where a full set of intermediate states is inserted into the higher dimensional condensate and only the vacuum contribution is assumed to dominate. Thus, the higher dimensional condensate is effectively reduced to a combination of lower dimensional condensates \footnote{This has already been done for the dimension seven condensate in Eq. \eqref{eq:OPE}.}. 
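For the dimension-seven term this factorization can be sketched schematically as follows (an illustrative display that suppresses the precise index and color structure of the intermediate-state insertion):

```latex
% Vacuum saturation sketch for the dimension-seven condensate in Eq. (eq:OPE):
% inserting a complete set of states between the quark and gluon bilinears
% and keeping only the vacuum contribution factorizes the matrix element,
\begin{align}
\bra{0} \bar{q} q \, \frac{\alpha_s}{\pi} G_{\mu \nu}^{a} G^{a, \mu \nu} \ket{0}
\;\approx\; \braket{\bar{q}q} \braket{\frac{\alpha_s}{\pi} G^2} \, ,
\end{align}
% which is the factorized form used for the C_{qqG^2} term in Eq. (eq:OPE).
```

This is the factorized combination $\braket{\bar{q}q} \braket{\frac{\alpha_s}{\pi} G^2}$ that multiplies $C^{\text{X}}_{\bar{q}qG^2}$ in Eq. \eqref{eq:OPE}.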
\section{Computation of the Wilson Coefficients} \label{chp: Contributions} In this section, the leading perturbative and nonperturbative contributions to the correlation function in \eqref{eq:SpectralFuncH} and \eqref{eq:SpectralFuncE} are calculated up to dimension seven. Since the leading order of the diagonal correlator of two three-particle currents is of $\mathcal{O}(\alpha_s)$ in the strong coupling constant, we only investigate contributions up to this order in perturbation theory. For the perturbative contribution we choose the Feynman gauge for the background field, while the nonperturbative contributions to the OPE are evaluated in the fixed-point or Fock-Schwinger (FS) gauge \cite{Fock:1937dy,Schwinger:1951nm}: \begin{align} x_{\mu} \, A^{\mu}(x) = 0 \hspace{0.5cm} \text{and} \hspace{0.5cm} A_{\mu}(x) = \int_0^1 \mathrm{d} u \; u x^{\nu} G_{\nu \mu}(ux). \end{align} In the FS gauge, we set the reference point to $x_0 = 0$; a general reference point would occur in all intermediate steps of the calculation and cancel at the end. It is well known that this gauge is particularly useful in QCD sum rule computations. Within the framework of QCD sum rules, the long-distance effects are encoded in local vacuum matrix elements of increasing mass dimension. In order to obtain these local condensates, the gluon field strength tensor is expanded in its spacetime coordinate $x$, which results in a simple relation between the gluon field $A_{\mu}$ and the field strength tensor $G_{\mu \nu}$. Additionally, gluon fields do not couple to the heavy quark in the FS gauge, since $v \cdot A(sv)$ vanishes by the antisymmetry of $G_{\mu \nu}$; this can be seen from the heavy-quark propagator in position space \cite{Nishikawa:2011qk}: \begin{align} \contraction{}{h_v(0)}{}{\bar{h}_v(x)} h_v(0) \bar{h}_v(x) &= \Theta(-v \cdot x) \, \delta^{(d - 1)}(x_{\perp}) \, P_+ \, \mathcal{P} \; \nonumber \\ & \times \mathrm{exp}\Bigg( i g_s \int_{v \cdot x}^0 \mathrm{d} s v \cdot A(sv) \Bigg) \, . 
\label{eq:HeavyQuarkWick} \end{align} Here, $x_{\perp}^{\mu} = x^{\mu} - (v \cdot x) v^{\mu}$, $P_+ = (1 + \slashed{v})/2$ denotes the projection operator and $\mathcal{P}$ denotes the path-ordering operator. Besides these simplifications, there are three additional subdiagrams, depicted in Fig. \ref{fig:Vanishing}, which vanish due to the FS gauge. Generally, all diagrams can be evaluated in position space as in \cite{Grozin:1996pq,Nishikawa:2011qk}, but in this work we choose to work in momentum space. We make use of dimensional regularization for the loop integrals with the convention $d = 4 - 2 \epsilon$. Figs. \ref{fig:pert&qq-contribution}-\ref{fig:Vanishing} \footnote{All diagrams in this work have been created with JaxoDraw \cite{Binosi:2008ig}.} show the diagrams which need to be computed in order to obtain the Wilson coefficients in Eq. \eqref{eq:OPE}. The calculation of these coefficients proceeds in the following way: First, we use FeynCalc \cite{Shtabovenko:2020gxv} to decompose tensor integrals into scalar integrals. In the next step, these scalar integrals are reduced to master integrals by integration-by-parts identities using LiteRed \cite{Lee:2013mka}. We start by considering the perturbative contribution and the contribution from the quark condensate in Fig. \ref{fig:pert&qq-contribution}: \begin{figure}[h] \centering \subfloat[]{\includegraphics[width = 0.20\textwidth]{Diagrams/Perturbative.png}} \hspace{0.5cm} \subfloat[]{\includegraphics[width = 0.20\textwidth]{Diagrams/Dim3Condensate.png}} \caption{Feynman diagrams for the perturbative and $\braket{\bar{q}q}$ condensate contribution. The double line denotes the heavy quark propagator. 
The solid line denotes the light quark propagator and the curly line denotes the gluon propagator.} \label{fig:pert&qq-contribution} \end{figure} \begin{align} C^{\text{X}}_{\text{pert}}(\omega) =& \; \frac{2 \alpha_s}{\pi^3} \cdot C_F N_c \cdot \mathrm{Tr}[\Gamma_1 P_+ \Gamma_2 \slashed{v}] \cdot \bar{\mu}^{4 \epsilon} \nonumber \\ & \times \Gamma(-6 + 4 \epsilon) \cdot \Gamma(2 - \epsilon) \cdot \omega^{6 - 4 \epsilon} e^{4 i \pi \epsilon} \nonumber \\& \times \Big[\Gamma(2 - \epsilon) \cdot T^1_{\mu \rho \nu \sigma} + \Gamma(3 - \epsilon) \cdot T^2_{\mu \rho \nu \sigma}\Big] \, , \label{eq:WilsonCoeffPert} \end{align} \begin{align} C^{\text{X}}_{\bar{q}q}(\omega) &= -\frac{\alpha_s}{\pi} \cdot C_F \cdot \mathrm{Tr}[\Gamma_1 P_+ \Gamma_2] \cdot \bar{\mu}^{2 \epsilon} \cdot \Gamma(-3 + 2 \epsilon) \nonumber \\& \times \omega^{3 - 2 \epsilon} e^{2 i \pi \epsilon} \Big[\Gamma(2 - \epsilon) \cdot T^1_{\mu \rho \nu \sigma} + \Gamma(3 - \epsilon) \cdot T^2_{\mu \rho \nu \sigma}\Big] \, , \label{eq:WilsonCoeffQq} \end{align} with \begin{align} \bar{\mu}^2 :=& \; \frac{\mu^2 e^{\gamma_E}}{4} \, , \\ T^1_{\mu \rho \nu \sigma} :=& \; g_{\mu \rho} g_{\nu \sigma} - g_{\mu \sigma} g_{\nu \rho} \, , \\ T^2_{\mu \rho \nu \sigma} :=& \; -g_{\nu \sigma} v_{\mu} v_{\rho} + g_{\mu \sigma} v_{\nu} v_{\rho} + g_{\nu \rho} v_{\mu} v_{\sigma} - g_{\mu \rho} v_{\nu} v_{\sigma} \, . \end{align} Notice that the tensor structures of $T^{1,2}_{\mu \rho \nu \sigma}$ satisfy the symmetries imposed by the field strength tensors $G_{\mu \nu}$ and $G_{\rho \sigma}$. In particular, the expressions are anti-symmetric under the exchange of $\{\mu \leftrightarrow \nu\}$, $\{\rho \leftrightarrow \sigma\}$ and symmetric under the combined exchanges $\{\mu \leftrightarrow \rho, \nu \leftrightarrow \sigma \}$ and $\{\mu \leftrightarrow \nu, \rho \leftrightarrow \sigma \}$. 
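The imaginary part needed for the spectral densities in Eqs. \eqref{eq:SumRuleH} and \eqref{eq:SumRuleE} arises from the interplay of the $\Gamma$-function pole with the phase factor in Eq. \eqref{eq:WilsonCoeffPert}: the $1/(4\epsilon)$ pole of $\Gamma(-6+4\epsilon)$ combines with $e^{4 i \pi \epsilon} = 1 + 4 i \pi \epsilon + \dots$ to leave the finite result $\frac{1}{\pi}\mathrm{Im}\big[\Gamma(-6+4\epsilon)\,\omega^{6-4\epsilon} e^{4 i \pi \epsilon}\big] \to \omega^6/\Gamma(7)$. This can be checked numerically at small but finite $\epsilon$:

```python
import math, cmath

# Discontinuity check for the structure appearing in Eq. (eq:WilsonCoeffPert):
#   (1/pi) Im[ Gamma(-6 + 4 eps) * omega^{6-4 eps} * e^{4 i pi eps} ]
#   -> omega^6 / 6!  as eps -> 0,
# since the pole 1/(4 eps * 720) of the Gamma function multiplies the
# O(eps) imaginary part of the phase factor.

def im_over_pi(eps, omega=1.0):
    g = math.gamma(-6.0 + 4.0 * eps)          # real Gamma near its pole at -6
    phase = cmath.exp(4j * math.pi * eps)
    return (g * omega ** (6.0 - 4.0 * eps) * phase).imag / math.pi

# im_over_pi(1e-6) is close to 1/720 = 1/Gamma(7)
```

The same mechanism, with the appropriate powers of $\omega$ and $\epsilon$, produces the imaginary parts of the condensate coefficients such as Eq. \eqref{eq:WilsonCoeffQq}.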
The Wilson coefficients for the gluon condensate and for the higher mass dimension correction to the quark condensate share the same tensor structures as the coefficients stated in Eq. \eqref{eq:WilsonCoeffPert} and \eqref{eq:WilsonCoeffQq}. Furthermore, the mass dimension five contribution with the non-Abelian vertex in Eq. \eqref{eq:WilsonCoeffNonAbelian} and the dimension seven contribution in Eq. \eqref{eq:WilsonDim7} make use of these tensor structures as well. \begin{figure}[h] \centering \subfloat[]{\includegraphics[width=0.20 \textwidth]{Diagrams/Dim4Condensate.png}} \subfloat[]{\includegraphics[width = 0.20\textwidth]{Diagrams/Dim5Condensate3.png}} \caption{(a) shows the Feynman diagram for the dimension four contribution, (b) a schematic illustration of the dimension five condensate originating from the higher order expansion of the dimension three contribution in Fig. \ref{fig:pert&qq-contribution}.} \label{fig:GG&qqCorr-contribution} \end{figure} The Wilson coefficient of the gluon condensate, which corresponds to Fig. \ref{fig:GG&qqCorr-contribution} (a), can be expressed as: \begin{align} C^{\text{X}}_{G^2}(\omega) &= \; \mathrm{Tr}[\Gamma_1 P_+ \Gamma_2 \slashed{v}] \cdot \frac{\bar{\mu}^{2 \epsilon}}{(4 - 2 \epsilon)(3 - 2 \epsilon)} \nonumber \\ &\times \Gamma(-2 + 2 \epsilon) \cdot \Gamma(2 - \epsilon) \cdot \omega^{2 - 2 \epsilon} e^{2 i \pi \epsilon} \cdot T^1_{\mu \rho \nu \sigma} \, .
\end{align} \begin{figure}[h] \centering \subfloat[]{\includegraphics[width=0.20 \textwidth]{Diagrams/Dim5Condensate1.png}} \hspace{0.5cm} \label{fig:QGq1-contribution} \subfloat[]{\includegraphics[width=0.20 \textwidth]{Diagrams/Dim5Condensate2.png}} \label{fig:QGq2-contribution} \subfloat[]{\includegraphics[width=0.20 \textwidth]{Diagrams/Dim5Condensate4.png}} \caption{Feynman diagrams for dimension five condensate contributions.} \label{fig:QGq-contribution} \end{figure} The mass dimension five contributions are given as: \begin{align} C^{\text{X}}_{\bar{q}Gq,1}(\omega) =& -\frac{\alpha_s}{\pi} \cdot C_F \cdot \mathrm{Tr}[\Gamma_1 P_+ \Gamma_2] \cdot \frac{\bar{\mu}^{2 \epsilon}}{(4 - 2 \epsilon)} \nonumber \\ & \times \Gamma(-3 + 2 \epsilon) \cdot \Gamma(3 - \epsilon) \cdot \omega^{1 - 2 \epsilon} e^{2 i \pi \epsilon} \cdot T^1_{\mu \rho \nu \sigma} \, , \label{eq:Dim5QuarkExp} \end{align} \begin{align} C^{\text{X}}_{\bar{q}Gq,2}(\omega) =& \; \frac{\alpha_s}{4\pi} \cdot \frac{C_F \cdot \bar{\mu}^{2 \epsilon}}{(4 - 2 \epsilon)(3 - 2 \epsilon)} \cdot \Gamma(-1 + 2 \epsilon) \cdot \Gamma(1 - \epsilon) \nonumber \\ & \times \omega^{1 - 2 \epsilon} e^{2 i \pi \epsilon} \cdot \Big[\mathrm{Tr}[\Gamma_1 P_+ \Gamma_2 \sigma_{\mu \nu} \sigma_{\rho \sigma}] \nonumber \\ & - (1 - \epsilon) \cdot \mathrm{Tr}[\Gamma_1 P_+ \Gamma_2 \slashed{v} \mathrm{i} (v_{\mu} \gamma_{\nu} - v_{\nu} \gamma_{\mu}) \sigma_{\rho \sigma}] \Big] \, , \label{eq:WilsonCoeffAbelian1} \end{align} \begin{align} C^{\text{X}}_{\bar{q}Gq,3}(\omega) &= \; \frac{\alpha_s}{4\pi} \cdot \frac{C_F \cdot \bar{\mu}^{2 \epsilon}}{(4 - 2 \epsilon)(3 - 2 \epsilon)} \Gamma(-1 + 2 \epsilon) \cdot \Gamma(1 - \epsilon) \nonumber \\ & \times \omega^{1 - 2 \epsilon} e^{2 i \pi \epsilon} \cdot \Big[\mathrm{Tr}[\Gamma_1 P_+ \Gamma_2 \sigma_{\mu \nu} \sigma_{\rho \sigma}] \nonumber \\ & + (1 - \epsilon) \cdot \mathrm{Tr}[\Gamma_1 P_+ \Gamma_2 \sigma_{\mu \nu} \mathrm{i} (v_{\rho} \gamma_{\sigma} - v_{\sigma} 
\gamma_{\rho}) \slashed{v}] \Big] \, , \label{eq:WilsonCoeffAbelian2} \end{align} \begin{align} C^{\text{X}}_{\bar{q}Gq,4}(\omega) &= \; \frac{i \alpha_s}{32 \pi} \cdot \frac{C_A C_F \cdot \bar{\mu}^{2 \epsilon}}{(2 - \epsilon) (3 - 2 \epsilon)} \cdot \mathrm{Tr}[\Gamma_1 P_+ \Gamma_2 \sigma^{\chi \beta}] \nonumber \\ & \times \Gamma(-1 + 2 \epsilon) \cdot \Gamma(1 - \epsilon) \cdot \omega^{1 - 2 \epsilon} e^{2 i \pi \epsilon} \cdot \nonumber \\& \; \Big[ \{g_{\mu \chi} T^1_{\nu \rho \beta \sigma} - (\beta \leftrightarrow \chi) \} + (1 - \epsilon) \nonumber \\ & \times \big(\{v_{\beta} g_{\mu \rho} (v_{\sigma} g_{\nu \chi} - v_{\nu} g_{\sigma \chi}) - (\rho \leftrightarrow \sigma)\} \; + \nonumber \\& \; \{v_{\nu} g_{\mu \chi} (v_{\sigma} g_{\beta \rho} - v_{\rho} g_{\beta \sigma}) - (\beta \leftrightarrow \chi) \} \big) \Big] - (\mu \leftrightarrow \nu) \; . \label{eq:WilsonCoeffNonAbelian} \end{align} Although the other contributions to the mass dimension five condensate (Fig. \ref{fig:QGq-contribution}) possess a more complicated tensor structure, all symmetries described before are still satisfied. The total Wilson coefficient for the mass dimension five condensate is obtained by summing all four contributions, namely $C^{\text{X}}_{\bar{q}Gq} = \sum_{k = 1}^4 C^{\text{X}}_{\bar{q}Gq,k}$. \begin{figure}[h] \centering \subfloat[]{\includegraphics[width=0.20 \textwidth]{Diagrams/Dim6Condensate.png}}\hspace{0.5cm} \subfloat[]{\includegraphics[width=0.20 \textwidth]{Diagrams/Dim7Condensate.png}} \caption{Feynman diagrams for the dimension six and dimension seven condensates, which contribute to the leading order estimate of $\lambda_{E,H}^2$.} \label{fig:GGG&qqG2Corr-contribution} \end{figure} The last two diagrams depicted in Fig. \ref{fig:GGG&qqG2Corr-contribution} are of mass dimension six and seven. Their contributions are expected to be smaller than the dimension five contributions, indicating that the OPE starts to converge.
Other contributions at mass dimension six either vanish or are of $\mathcal{O}(\alpha_s^2)$. Thus, the triple-gluon condensate is the only relevant condensate at this order and the Wilson coefficient reads: \begin{align} C^{\text{X}}_{G^3}(\omega) &= \; \frac{\bar{\mu}^{2 \epsilon}}{64 \pi^2} \cdot B_{\mu \lambda \rho \nu \sigma \alpha} \cdot \Gamma(2 \epsilon) \cdot \Gamma(1 - \epsilon) \cdot \omega^{- 2 \epsilon} e^{2 i \pi \epsilon} \nonumber \\ & \hspace{-0.5cm} \times \Big[ \mathrm{Tr}[-i \cdot \Gamma_1 P_+ \Gamma_2 \slashed{v} \sigma^{\lambda \alpha}] + \mathrm{Tr}[\Gamma_1 P_+ \Gamma_2 (v^{\alpha} \gamma^{\lambda} - v^{\lambda} \gamma^{\alpha})]\Big] \, , \label{eq:tripleGluon} \end{align} \noindent where the expression $B_{\mu \lambda \rho \nu \sigma \alpha}$ is defined in Appendix \ref{chp:Condensate}. Finally, we state the expression for the dimension seven contribution: \begin{align} C^{\text{X}}_{\bar{q} q G^2}(\omega) &= - \mathrm{Tr}[\Gamma_1 P_+ \Gamma_2] \cdot \frac{T^1_{\mu \rho \nu \sigma}}{\omega + i0^+} \cdot \frac{\pi^2}{2 N_c (4 - 2 \epsilon) (3 - 2 \epsilon)} \, . \label{eq:WilsonDim7} \end{align} \begin{figure}[h] \centering \subfloat[]{\includegraphics[width=0.20 \textwidth]{Diagrams/VanishingA.png}} \hspace{0.5cm} \label{fig:VanishingA-contribution} \subfloat[]{\includegraphics[width=0.20 \textwidth]{Diagrams/VanishingC.png}} \label{fig:VanishingB-contribution} \hspace{0.6cm} \subfloat[]{\includegraphics[width=0.20 \textwidth]{Diagrams/VanishingB.png} } \label{fig:VanishingC-contribution} \caption{Vanishing subdiagrams in the Fock-Schwinger gauge.} \label{fig:Vanishing} \end{figure} According to Eq. \eqref{eq:SpectralFuncH} and \eqref{eq:SpectralFuncE}, we still need to take the imaginary part of these diagrams. We choose to compute the loop diagrams directly and take the imaginary part of the resulting expression.
Alternatively, following the Cutkosky rules, one could perform the calculation by considering all possible cuts of the diagrams. Apart from the diagrams in Fig. \ref{fig:QGq-contribution}, all diagrams are finite. Diagrams (a) and (b) in Fig. \ref{fig:QGq-contribution} include both a three-particle and a two-particle cut, where the latter requires a non-trivial renormalization procedure \cite{Grozin:1996hk}. The optical theorem states that both calculations yield the same result. Besides the diagram in Fig. \ref{fig:GG&qqCorr-contribution} (b), all diagrams in Fig. \ref{fig:pert&qq-contribution}-\ref{fig:GGG&qqG2Corr-contribution} can generally be calculated by using perturbative methods. Fig. \ref{fig:GG&qqCorr-contribution} (b) stems from higher order corrections in the expansion of the nonperturbative quark condensate in Eq. \eqref{eq::matrix25}. Moreover, the diagrams contributing to the quark-gluon condensate in Fig. \ref{fig:QGq-contribution} (a) and (b) have the same structure as the contributions in \cite{Grozin:1996pq,Nishikawa:2011qk}, and hence a cross-check is possible after replacing the quark condensate by the quark-gluon condensate, keeping in mind that the Lorentz structures differ. By taking the imaginary part of all Wilson coefficients discussed above, plugging the results into Eq. \eqref{eq:SumRuleH}, \eqref{eq:SumRuleE} and performing the integration over $\omega$ up to the threshold parameter $\omega_{th}$, we obtain the final expression for the sum rules shown in Eq. \eqref{eq::LambdaHPlusE-sumrule-complete} to Eq. \eqref{eq::LambdaE-sumrule-complete}. For convenience, we introduce the function: \begin{align} G_n(x) := 1 - \sum_{k = 0}^n \frac{x^k}{k!} e^{-x}. \end{align} \noindent We see that the sum rules for $\lambda_{E,H}^4$ in Eq. \eqref{eq::LambdaH-sumrule-complete} and \eqref{eq::LambdaE-sumrule-complete} have the same expression for the perturbative contribution.
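The function $G_n(x)$ is the regularized lower incomplete gamma function, $G_n(x) = \gamma(n+1,x)/n!$, and arises from the Borel-weighted moments via $\int_0^{\omega_{th}} \omega^n e^{-\omega/M} \, \mathrm{d}\omega = n! \, M^{n+1} \, G_n(\omega_{th}/M)$. A minimal numerical sketch (in Python; the function names are ours):

```python
import math

def G(n: int, x: float) -> float:
    """G_n(x) = 1 - sum_{k=0}^n x^k/k! * exp(-x), i.e. the regularized
    lower incomplete gamma function gamma(n+1, x)/n!."""
    return 1.0 - sum(x**k / math.factorial(k) for k in range(n + 1)) * math.exp(-x)

def borel_moment(n: int, omega_th: float, M: float) -> float:
    """int_0^{omega_th} omega^n exp(-omega/M) d(omega) = n! M^(n+1) G_n(omega_th/M)."""
    return math.factorial(n) * M**(n + 1) * G(n, omega_th / M)
```

For moments as high as $n = 6$ and $\omega_{th}/M$ of order one, $G_n$ is tiny, which already hints at the continuum dominance encountered in Sec. \ref{chp: NumericalAnalysis}.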
Moreover, this contribution is positive, since we are studying a positive-definite correlation function in Eq. \eqref{eq:CorrelationFunc}. Furthermore, the quark, the gluon and the triple-gluon condensate in Eq. \eqref{eq::LambdaH-sumrule-complete}, \eqref{eq::LambdaE-sumrule-complete} have different signs and the Wilson coefficients in Eq. \eqref{eq:WilsonCoeffAbelian1}, \eqref{eq:WilsonCoeffAbelian2} and \eqref{eq:WilsonCoeffNonAbelian} vanish for $\lambda_E^4$. This has implications for the stability of the sum rule for the parameter $\lambda_E^4$ and will be investigated in Sec. \ref{chp: NumericalAnalysis}. The dimension three, four and six condensates do not appear in Eq. \eqref{eq::LambdaHPlusE-sumrule-complete}, since the signs differ in Eq. \eqref{eq::LambdaE-sumrule-complete} compared to \eqref{eq::LambdaH-sumrule-complete}. All sum rules involve the decay constant $F(\mu)$, whose calculation in terms of the correlation function can be found, e.g., in Ref. \cite{Nishikawa:2011qk}. For consistency, we retain the result at leading order in $\alpha_s$: \begin{align} F^2(\mu) \cdot e^{-\bar{\Lambda}/M} =& \; \frac{2 N_c M^3}{\pi^2} \cdot G_2\Big(\frac{\omega_{th}}{M}\Big) \; - \braket{\bar{q} q} \nonumber \\& \; + \frac{1}{16M^2} \braket{\bar{q} g_s G \cdot \sigma q} \, .
\label{eq:SumRuleF} \end{align} \begin{widetext} \begin{align} & F(\mu)^2 \cdot (\lambda_H^2 + \lambda_E^2)^2 \, e^{-\bar{\Lambda}/M} = \; \frac{\alpha_s C_A C_F}{\pi^3} \cdot 24 M^7 \cdot G_6\Big(\frac{\omega_{th}}{M}\Big) - \frac{\alpha_s C_F C_A}{4\pi} \cdot \braket{\bar{q}g_s \sigma \cdot G q}\cdot M^2 \cdot \nonumber \\ & \hspace{4.0cm} G_1\Big(\frac{\omega_{th}}{M}\Big) - \frac{3 \alpha_s C_F}{2\pi} \cdot \braket{\bar{q}g_s \sigma \cdot G q}\cdot M^2 \cdot G_1\Big(\frac{\omega_{th}}{M}\Big) - \frac{\pi^2}{2 N_c} \braket{\bar{q}q} \braket{\frac{\alpha_s}{\pi} G^2} \, , \label{eq::LambdaHPlusE-sumrule-complete}\\ & F(\mu)^2 \cdot \lambda_H^4 \, e^{-\bar{\Lambda}/M} \; = \; \frac{\alpha_s C_A C_F}{\pi^3} \cdot 12 M^7 \cdot G_6\Big(\frac{\omega_{th}}{M}\Big) - \frac{ \alpha_s C_F}{\pi} \braket{\bar{q}q} \cdot 6 \cdot M^4 \cdot G_3\Big(\frac{\omega_{th}}{M}\Big) \; \nonumber \\ & \hspace{3.5cm} + \frac{1}{2} \braket{\frac{\alpha_s}{\pi} G^2} \cdot M^3 \cdot G_2\Big(\frac{\omega_{th}}{M}\Big) - \frac{\alpha_s C_F C_A}{8\pi} \cdot \braket{\bar{q}g_s \sigma \cdot G q}\cdot M^2 \cdot G_1\Big(\frac{\omega_{th}}{M}\Big) \nonumber \\ &\hspace{3.5cm} - \frac{3\alpha_s C_F}{4\pi} \cdot \braket{\bar{q}g_s \sigma \cdot G q}\cdot M^2 \cdot G_1\Big(\frac{\omega_{th}}{M}\Big) + \frac{\braket{g_s^3 f^{abc} G^a G^b G^c}}{64 \pi^2} \cdot M \cdot \nonumber \\ & \hspace{3.5cm} \; G_0\Big(\frac{\omega_{th}}{M}\Big) - \frac{\pi^2}{4 N_c} \braket{\bar{q}q} \braket{\frac{\alpha_s}{\pi} G^2} \label{eq::LambdaH-sumrule-complete} \, , \\ & F(\mu)^2 \cdot \lambda_E^4 \, e^{-\bar{\Lambda}/M} \; = \; \frac{\alpha_s C_A C_F}{\pi^3} \cdot 12 M^7 \cdot G_6\Big(\frac{\omega_{th}}{M}\Big) + \frac{ \alpha_s C_F}{\pi} \braket{\bar{q}q} \cdot 6 \cdot M^4 \cdot G_3\Big(\frac{\omega_{th}}{M}\Big) \; \nonumber \\ & \hspace{3.5cm} - \frac{1}{2} \braket{\frac{\alpha_s}{\pi} G^2} \cdot M^3 \cdot G_2\Big(\frac{\omega_{th}}{M}\Big) - \frac{\alpha_s C_F}{2 \pi} \cdot \braket{\bar{q}g_s \sigma \cdot G q} \cdot 
M^2 \cdot G_1\Big(\frac{\omega_{th}}{M}\Big) \; \nonumber \\ & \hspace{3.5cm} -\frac{\braket{g_s^3 f^{abc} G^a G^b G^c}}{64 \pi^2} \cdot M \cdot G_0\Big(\frac{\omega_{th}}{M}\Big) - \frac{\pi^2}{4 N_c} \braket{\bar{q}q} \braket{\frac{\alpha_s}{\pi} G^2} \, .\label{eq::LambdaE-sumrule-complete} \end{align} \end{widetext} \noindent \section{Numerical Analysis} \label{chp: NumericalAnalysis} In this section we first compute the HQET parameters by using the sum rules in Eq. \eqref{eq:SumRuleF}, (\ref{eq::LambdaHPlusE-sumrule-complete}), (\ref{eq::LambdaH-sumrule-complete}) and \eqref{eq::LambdaE-sumrule-complete} following the procedure described in Sec. \ref{chp: Contributions}. In particular, we consider the ratios of Eqs. \eqref{eq::LambdaHPlusE-sumrule-complete}-\eqref{eq::LambdaE-sumrule-complete} to \eqref{eq:SumRuleF} in order to cancel the dependence on the low-energy parameter $\bar{\Lambda}$ and the decay constant $F(\mu)$. The numerical inputs for the necessary parameters are given in Table \ref{tab::input}. However, when we investigate the optimal window for the Borel parameter $M$, we observe that the sum rules are dominated by higher resonances and the continuum contribution. This questions the reliability of our estimates for $\lambda_{E,H}^2$(1 GeV) and their ratio \begin{align} \mathcal{R}(\mu) &=\frac{\lambda_E^2(\mu)}{\lambda_H^2(\mu)} \label{eq::R-ratio} \end{align} at $\mu = 1$ GeV. \begin{table}[H] \centering \begin{tabular}{||c c c|} \hline Parameters & Value & Ref. \\ [0.5ex] \hline $\alpha_{s}$(1 GeV) & 0.471 & \cite{Herren:2017osy} \\ \hline $\braket{\bar{q}q}$ & $(-0.242 \pm 0.015)^3$ GeV$^{3}$ & \cite{Jamin:2002ev} \\ \hline $\braket{\frac{\alpha_{s}}{\pi} G^2}$ & $(0.012 \pm 0.004)$ \, \text{GeV}$^4$ & \cite{Gubler:2018ctz} \\ \hline $\braket{\bar{q} g G \cdot \sigma q}/ \braket{\bar{q}q}$ & $(0.8 \pm 0.2)$ GeV$^2$ & \cite{Belyaev:1982sa} \\ \hline $\braket{g_s^3 f^{a b c} G^{a} G^{b} G^{c}}$ & $(0.045 \pm 0.045)$ GeV$^6$ & \cite{Shifman:1978bx} \\ \hline $\bar{\Lambda}$ & $(0.55 \pm 0.06)$ GeV & \cite{Gambino:2017vkx} \\ \hline \end{tabular} \caption{List of the numerical inputs used in our analysis. The vacuum condensates are normalized at the point $\mu = 1$ GeV. For the strong coupling constant we use the two-loop expression with $\Lambda^{(4)}_{\text{QCD}} = 0.31$ GeV.} \label{tab::input} \end{table} Hence, we study different combinations of Eq. \eqref{eq::LambdaHPlusE-sumrule-complete}, \eqref{eq::LambdaH-sumrule-complete}, \eqref{eq::LambdaE-sumrule-complete} and \eqref{eq:SumRuleF}. We plot the higher dimensional contributions for $\lambda_{H}^4$ in the lower part of Fig. \ref{fig:Lambda_uptodim} (a) and observe that each power correction enhances the total value of $\lambda_{H}^4$. The dimension five term gives the largest contribution in Fig. \ref{fig:Lambda_uptodim} (a). It is well known that correlation functions of large mass dimension receive large contributions from local condensates of high mass dimension at small values of the Borel parameter $M$. Moreover, the contributions from dimensions greater than five become smaller, indicating convergence of the OPE. The upper plot in Fig.
\ref{fig:Lambda_uptodim} (a) shows the sum of all contributions up to mass dimension seven for different threshold parameters $\omega_{th}$. This variation of the parameter $\omega_{th}$ probes the stability of the sum rule, since the Borel parameter $M$ and $\omega_{th}$ are correlated. Furthermore, it can be explicitly seen that in the highly nonperturbative regime with small $M$ the condensate contributions become dominant and therefore the sum rule becomes unreliable. To find the optimal window for the threshold $\omega_{th}$, we evaluate the sum rule for $F(\mu)$ in Eq. (\ref{eq:SumRuleF}) for different values of $\omega_{th}$, see Fig. \ref{fig:FalphaS} (a). As we can see, the decay constant $F(\mu)$ gives reliable values in the interval $0.8 \, \text{GeV} \leq \omega_{th} \leq 1.0 \, \text{GeV}$. In order to confirm that our threshold choice gives reasonable results, we compute the physical decay constant $f_{B}$ by using Eq. (\ref{eq:RelationFb}), see Fig. \ref{fig:FalphaS} (b). We observe in Fig. \ref{fig:FalphaS} (b) that for $M \geq 0.8 \, \text{GeV}$ the dependence on the threshold parameter $\omega_{th}$ between $0.8$ GeV and $1.0$ GeV becomes weak, so the extraction is reliable. Although the error of the decay constant $f_B$ given in Eq. \eqref{eq::physicaldecayconstant} is small, we assume a conservative uncertainty of $50\%$: we neglect the $\mathcal{O}(\alpha_s)$ contributions to the HQET decay constant $F(\mu)$, which are known to be large, and our sum rules only account for contributions up to mass dimension seven. The corresponding analysis in \cite{Nishikawa:2011qk} shows the impact of these corrections, which reduce the uncertainty of the analysis to $15\% - 20\%$. Another method to determine the interval for the threshold parameter $\omega_{th}$ is to take the derivative with respect to the Borel parameter, $\partial/\partial (-1/M)$, in Eq. \eqref{eq::LambdaH-sumrule-complete}. Dividing this expression by the original sum rule in Eq.
\eqref{eq::LambdaH-sumrule-complete} yields an estimate for the parameter $\bar{\Lambda}$ which needs to be compatible with the value stated in Table \ref{tab::input}. Both methods give the same interval for $\omega_{th}$, namely $0.8 \, \text{GeV} \leq \omega_{th} \leq 1.0 \, \text{GeV}$. \begin{figure*}[hbt!] \centering \subfloat[]{\includegraphics[width=0.48 \textwidth]{Plots/LambdaH_uptodim.pdf}} \hspace{0.3cm} \subfloat[]{\includegraphics[width=0.48 \textwidth]{Plots/LambdaH_E_uptodim.pdf}} \\ \subfloat[]{\includegraphics[width=0.48 \textwidth]{Plots/LambdaE_uptodim.pdf}} \caption{Fig. (a), (b) and (c) show the full OPE of Eq. (\ref{eq::LambdaHPlusE-sumrule-complete}), (\ref{eq::LambdaH-sumrule-complete}) and (\ref{eq::LambdaE-sumrule-complete}) within the threshold interval $0.8 \, \text{GeV} \leq \omega_{th} \leq 1.0 \, \text{GeV}$, respectively. The lower figures illustrate the individual contributions to the OPE for $\omega_{th} =0.9$ GeV. The plots only show the central values.} \label{fig:Lambda_uptodim} \end{figure*} \begin{figure*}[!htbp] \begin{minipage}{\linewidth} \centering \subfloat[]{\includegraphics[width=0.48 \textwidth]{Plots/F1.pdf}} \subfloat[]{\includegraphics[width=0.48 \textwidth]{Plots/fB1.pdf}} \caption{Fig. (a) shows the comparison of the central values of the decay constant $F(\mu)$ for different values of $\omega_{th}$. The value of the binding energy can be found in Table \ref{tab::input}. Fig. (b) shows the comparison of the central values of the physical decay constant $f_B$ with different values of $\omega_{th}$. The dashed line indicates the lattice result and the shaded green area illustrates its corresponding uncertainty.} \label{fig:FalphaS} \end{minipage} \end{figure*} Similarly, we plot higher dimensional contributions for the sum rule in Eq. \eqref{eq::LambdaHPlusE-sumrule-complete} in Fig. \ref{fig:Lambda_uptodim} (b). The lower plot illustrates each order of the power expansion individually. 
Here, we see that the dimension three, four and six condensates do not contribute to the sum rule. The terms corresponding to the dimension five condensate again provide the largest contribution, and beyond this dimension the power expansion is expected to converge, as indicated by the small contribution of mass dimension seven. Again, the upper plot in Fig. \ref{fig:Lambda_uptodim} (b) shows the value of $(\lambda_H^2 + \lambda_{E}^2)^2$ as a function of $M$ for different threshold parameters $\omega_{th}$. The determination of the threshold window for $\omega_{th}$ follows the same argumentation as for the sum rule in Eq. \eqref{eq::LambdaH-sumrule-complete}. In particular, both methods lead again to the same conclusion and we obtain the interval $0.8 \, \text{GeV} \leq \omega_{th} \leq 1.0 \, \text{GeV}$. The sum rule for the parameter $\lambda_E^4$ in Eq. \eqref{eq::LambdaE-sumrule-complete} requires further investigation. Fig. \ref{fig:Lambda_uptodim} (c) presents in the upper plot the sum of all contributions up to mass dimension seven, while in the lower plot each contribution is considered individually. In comparison to the sum rules in Eq. \eqref{eq::LambdaHPlusE-sumrule-complete} and Eq. \eqref{eq::LambdaH-sumrule-complete}, the mass dimension three and four condensates contribute with the opposite sign to this sum rule. Since these contributions are large, this sum rule becomes unreliable and unstable compared to the previously studied sum rules. Additionally, the dominant dimension five contributions from Eq. \eqref{eq:WilsonCoeffAbelian1}, \eqref{eq:WilsonCoeffAbelian2} and \eqref{eq:WilsonCoeffNonAbelian} do not appear in this sum rule, thus the extraction of $\lambda_E^2$ from this sum rule is unreliable. Moreover, the dimension seven term also gives a sizeable contribution, which calls the convergence of the OPE itself into question.
The instability of this sum rule can also be seen from the threshold interval for $\omega_{th}$. Only the argumentation via the decay constants $F(\mu)$ and $f_B$ gives an appropriate interval, namely $0.55 \; \text{GeV} \leq \omega_{th} \leq 0.65 \; \text{GeV}$. Furthermore, the variation of the threshold seems to give larger deviations than for the sum rules in Eq. \eqref{eq::LambdaHPlusE-sumrule-complete} and \eqref{eq::LambdaH-sumrule-complete}, indicating a less stable sum rule with larger uncertainties. To obtain the lower bound for the Borel parameter $M$, we choose a value where the dimension seven condensate contribution is smaller than $40 \%$ of the total OPE. Notice that too small values of $M$ spoil the convergence of the OPE since the condensate contributions become dominant. For the sum rules in Eq. (\ref{eq::LambdaHPlusE-sumrule-complete}) and (\ref{eq::LambdaH-sumrule-complete}), this condition is fulfilled for $0.5 \, \text{GeV} \leq M$. Based on Fig. \ref{fig:Lambda_uptodim} (a) and \ref{fig:Lambda_uptodim} (b), we also see that for $0.5 \, \text{GeV} \leq M$ the sum rule starts to become more reliable. As already mentioned, the sum rule for $\lambda_E^4$ in Eq. (\ref{eq::LambdaE-sumrule-complete}) is more unstable than those for $\lambda_H^4$ and $(\lambda_H^2 + \lambda_E^2)^2$. Hence, this method to obtain the lower bound of $M$ does not work for $\lambda_E^4$. Instead, we choose the values based on Fig. \ref{fig:Lambda_uptodim} (c). We see that for $0.5 \, \text{GeV} \leq M$ the OPE becomes more reliable, which makes this a good choice for the lower bound. This estimate of the lower bound is taken into account in the uncertainty analysis.
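To illustrate the hierarchy of power corrections discussed above, one can evaluate each term of the Borel sum rule for $F^2 \lambda_H^4 e^{-\bar{\Lambda}/M}$ in Eq. \eqref{eq::LambdaH-sumrule-complete} at a representative point of the windows, $M = 0.5$ GeV and $\omega_{th} = 0.9$ GeV, with the central inputs of Table \ref{tab::input}. The following Python sketch (the function names and the chosen evaluation point are ours) reproduces the ordering seen in Fig. \ref{fig:Lambda_uptodim} (a): all terms are positive, the dimension five term dominates, and the perturbative term is smallest:

```python
import math

# Central inputs from Table (tab::input) at mu = 1 GeV, all in GeV units
alpha_s = 0.471
CF, CA, Nc = 4.0 / 3.0, 3.0, 3.0
qq = (-0.242) ** 3        # <qbar q>                      [GeV^3]
aGG = 0.012               # <alpha_s/pi G^2>              [GeV^4]
qGq = 0.8 * qq            # <qbar g_s sigma.G q>          [GeV^5]
g3G3 = 0.045              # <g_s^3 f^abc G^a G^b G^c>     [GeV^6]

def G(n, x):
    return 1.0 - sum(x**k / math.factorial(k) for k in range(n + 1)) * math.exp(-x)

def lambdaH4_terms(M, w):
    """Individual OPE terms of the Borel sum rule for F^2 lambda_H^4 e^(-Lambda/M)."""
    x = w / M
    return {
        "pert":  alpha_s * CA * CF / math.pi**3 * 12 * M**7 * G(6, x),
        "dim3": -alpha_s * CF / math.pi * qq * 6 * M**4 * G(3, x),
        "dim4":  0.5 * aGG * M**3 * G(2, x),
        "dim5": -(alpha_s * CF * CA / (8 * math.pi)
                  + 3 * alpha_s * CF / (4 * math.pi)) * qGq * M**2 * G(1, x),
        "dim6":  g3G3 / (64 * math.pi**2) * M * G(0, x),
        "dim7": -math.pi**2 / (4 * Nc) * qq * aGG,
    }

terms = lambdaH4_terms(M=0.5, w=0.9)
```

This is only a sketch of the central values, without the error propagation and Borel-window scan of the full analysis.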
For the determination of the upper bound of the Borel parameter we introduce: \begin{align} R_{\text{cont.}} = 1 - \frac{\int_0^{\omega_{th}} \mathrm{d} \omega \frac{1}{\pi} \mathrm{Im} \Pi_X^{\text{OPE}}(\omega) e^{-\omega/M}}{\int_0^{\infty} \mathrm{d} \omega \frac{1}{\pi} \mathrm{Im} \Pi_X^{\text{OPE}}(\omega) e^{-\omega/M}} \label{eq::R-continuum} \end{align} for $X \in \{H,E,HE\}$. A small value of $R_{\text{cont.}}$ guarantees that the ground state still gives a sizeable contribution compared to the higher resonances and the continuum. For reliable results of the sum rule we expect $R_{\text{cont.}} \leq 50 \%$ for $M \leq M_{\text{max}}$. Thus, Eq. (\ref{eq::R-continuum}) fixes the upper bound for the Borel parameter. In the case of Eq. \eqref{eq::LambdaHPlusE-sumrule-complete}, \eqref{eq::LambdaH-sumrule-complete} and \eqref{eq::LambdaE-sumrule-complete}, however, the continuum contribution is dominant, as is to be expected from the large mass dimension of the considered correlation function in Eq. \eqref{eq:CorrelationFunc}. Therefore, an upper bound for $M$ cannot be obtained with this method. To resolve this problem, we consider two combinations of the sum rules in Sec. \ref{chp: Contributions}, which have the feature that $R_{\text{cont.}}$ becomes about $50 \%$ for a reasonable value of $M$. The combinations are the following: \begin{align} &\frac{(\lambda_H^2 + \lambda_E^2)^2 }{\lambda_H^4} = (1 + \mathcal{R})^2 \hspace{0.5cm} \text{and} \nonumber \\& \frac{F(\mu)^2 e^{-\bar{\Lambda}/M} + F(\mu)^2 e^{-\bar{\Lambda}/M} \lambda_H^4}{F(\mu)^2 e^{-\bar{\Lambda}/M} - F(\mu)^2 e^{-\bar{\Lambda}/M} \lambda_E^4} \label{eq:Comb1} \end{align} with $\mathcal{R}$ defined in Eq. (\ref{eq::R-ratio}). The combination $(1+\mathcal{R})^2$ is an appropriate choice, because the dominant mass dimension five contributions due to Eq. \eqref{eq::LambdaH-sumrule-complete} lower the value of $R_{\text{cont.}}$ significantly.
On the other hand, the second combination in Eq. (\ref{eq:Comb1}) is dominated by the large $\mathcal{O}(\alpha_s^0)$ contributions from $F(\mu)$, such that $\lambda_{E,H}^4$ become only small corrections. For both combinations in Eq. (\ref{eq:Comb1}) we find $R_{\text{cont.}} \leq 50 \%$ for $M_{\text{max}} = 0.8$ GeV. In Table \ref{tab::ThresholdAndBorel} we summarize the lower and upper bounds for the parameters $M$ and $\omega_{th}$. \begin{table}[H] \centering \scalebox{0.95}{ \begin{tabular}{||c c c|} \hline Sum rule & Borel window & threshold window \\ [0.5ex] \hline Eq. (\ref{eq:Comb1}) & $0.5 \; \mathrm{GeV} \leq M \leq 0.8 \; \mathrm{GeV}$ & $0.8 \; \mathrm{GeV} \leq \omega_{th} \leq 1.0 \; \mathrm{GeV}$ \\ \hline \end{tabular}} \caption{Summary of the threshold and Borel window for the combination in Eq. (\ref{eq:Comb1}).} \label{tab::ThresholdAndBorel} \end{table} In Fig. \ref{fig::Comb1} (a) and \ref{fig::Comb1} (b) we plot both combinations as a function of $M$ for different values of $\omega_{th}$ within its threshold window. Finally, we are in a position to extract $\mathcal{R}$ and $\lambda_{E,H}^2$ based on Eq. (\ref{eq:Comb1}). The uncertainties of $\lambda_{E,H}^2$ and of the ratio $\mathcal{R}$ are partially determined by varying each input parameter individually within its uncertainty, see Table \ref{tab::input}. For the strong coupling constant we use the two-loop expression with $\Lambda_{\text{QCD}}^{(4)} = 0.31$ GeV to obtain $\alpha_s(1 \, \text{GeV}) = 0.471$. We vary $\Lambda_{\text{QCD}}^{(4)}$ in the interval $0.29 \, \text{GeV} \leq \Lambda_{\text{QCD}}^{(4)} \leq 0.33 \, \text{GeV}$, which corresponds to the running coupling $\alpha_s(1 \, \text{GeV}) = 0.44 - 0.50$.
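The quoted values of $\alpha_s(1 \, \text{GeV})$ follow from a standard form of the two-loop running coupling expressed through $\Lambda_{\text{QCD}}^{(n_f)}$; a short sketch (assuming the conventional coefficients $\beta_0 = 11 - 2n_f/3$ and $\beta_1 = 102 - 38 n_f/3$ with $n_f = 4$):

```python
import math

def alpha_s_2loop(mu: float, lam: float, nf: int = 4) -> float:
    """Two-loop running coupling alpha_s(mu), expressed through
    Lambda_QCD^(nf); mu and lam in GeV."""
    b0 = 11.0 - 2.0 * nf / 3.0
    b1 = 102.0 - 38.0 * nf / 3.0
    L = math.log(mu**2 / lam**2)
    return 4.0 * math.pi / (b0 * L) * (1.0 - b1 * math.log(L) / (b0**2 * L))
```

With $\Lambda_{\text{QCD}}^{(4)} = 0.31$ GeV this reproduces $\alpha_s(1 \, \text{GeV}) \approx 0.471$, and the variation $0.29 - 0.33$ GeV gives the quoted range $0.44 - 0.50$.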
In the last step, we add the individual uncertainties in quadrature: \begin{align} \mathcal{R}(1 \, \text{GeV}) &= 0.1 + \left(\begin{array}{c}+ 0.03\\-0.03\\\end{array}\right)_{\omega_{th}} +\left(\begin{array}{c}+ 0.01\\-0.02\\\end{array}\right)_{M} \nonumber \\ & \hspace{-0.35cm}+ \left(\begin{array}{c} + 0.01\\-0.01\\\end{array}\right)_{\alpha_s} + \left(\begin{array}{c}+ 0.01\\-0.01\\\end{array}\right)_{\braket{\bar{q} q}} + \left(\begin{array}{c}+ 0.02\\-0.03\\\end{array}\right)_{\braket{\frac{\alpha_{s}}{\pi} G^2}} \nonumber \\ & \hspace{-0.35cm} + \left(\begin{array}{c}+ 0.05\\-0.04\\\end{array}\right)_{\braket{\bar{q} g G \cdot \sigma q}} + \left(\begin{array}{c}+ 0.02\\-0.02\\\end{array}\right)_{\braket{g_s^3 f^{a b c} G^{a} G^{b} G^{c}}} \nonumber \\ &= 0.1 \pm 0.07 \label{eq::R_final} \end{align} \begin{align} \lambda_H^2(1 \, \text{GeV}) &= \Big [ 0.150 + \left(\begin{array}{c}+ 0.002\\-0.003\\\end{array}\right)_{\omega_{th}} +\left(\begin{array}{c}+ 0.002\\-0.004\\\end{array}\right)_{M} \nonumber \\ & + \left(\begin{array}{c}+ 0.001\\-0.001\\\end{array}\right)_{\braket{\frac{\alpha_{s}}{\pi} G^2}} + \left(\begin{array}{c}+ 0.001\\-0.001\\\end{array}\right)_{\braket{\bar{q} g G \cdot \sigma q}} \nonumber \\ & + \left(\begin{array}{c}+ 0.001\\-0.001\\\end{array}\right)_{\braket{g_s^3 f^{a b c} G^{a} G^{b} G^{c}}} \Big] \, \text{GeV}^2 \nonumber \\ &= (0.150 \pm 0.006) \,\text{GeV}^2 \label{eq::lambdaH_final} \end{align} For $\lambda_H^2$, the variations of the strong coupling constant $\alpha_s$ and of the dimension three and dimension six condensates do not change the central value significantly. Therefore, these uncertainties can be neglected.
\begin{align} \lambda_E^2(1 \, \text{GeV}) &= \Big [ 0.010 + \left(\begin{array}{c}+ 0.004\\-0.005\\\end{array}\right)_{\omega_{th}} +\left(\begin{array}{c}+ 0.002\\-0.003\\\end{array}\right)_{M} \nonumber \\ & + \left(\begin{array}{c}+ 0.001\\-0.001\\\end{array}\right)_{\alpha_s} + \left(\begin{array}{c}+ 0.003\\-0.003\\\end{array}\right)_{\braket{\bar{q} q}} \nonumber \\ & + \left(\begin{array}{c}+ 0.003\\-0.004\\\end{array}\right)_{\braket{\frac{\alpha_{s}}{\pi} G^2}} +\left(\begin{array}{c}+ 0.007\\-0.006\\\end{array}\right)_{\braket{\bar{q} g G \cdot \sigma q}} \nonumber \\ & + \left(\begin{array}{c}+ 0.002\\-0.002\\\end{array}\right)_{\braket{g_s^3 f^{a b c} G^{a} G^{b} G^{c}}} \Big ] \, \text{GeV}^2 \nonumber \\ &= (0.010 \pm 0.009) \,\text{GeV}^2 \, . \label{eq::lambdaE_final} \end{align} \begin{figure*}[t] \centering \subfloat[]{\includegraphics[width=0.48 \textwidth]{Plots/Relation1.pdf}}\hspace{0.3cm} \subfloat[]{ \includegraphics[width=0.48 \textwidth]{Plots/Relation2.pdf}} \caption{Fig. (a) shows the Borel sum rule for $(1+\mathcal{R})^2$ for the window $0.8 \, \text{GeV} \leq \omega_{th} \leq 1.0 \, \text{GeV}$. The shaded green area illustrates the Borel window. Similarly, Fig. (b) shows the Borel sum rule for $(F(\mu)^2 e^{- \bar{\Lambda}/M} + F(\mu)^2 e^{- \bar{\Lambda}/M} \lambda_H^4)/(F(\mu)^2 e^{- \bar{\Lambda}/M} - F(\mu)^2 e^{- \bar{\Lambda}/M} \lambda_E^4)$ for the window $0.8 \, \text{GeV} \leq \omega_{th} \leq 1.0 \, \text{GeV}$.} \label{fig::Comb1} \end{figure*} Notice that the threshold parameter $\omega_{th}$ and the Borel parameter $M$ are correlated, which can be deduced from the determination of the Borel window and the threshold interval. Since, however, the variation of $\omega_{th}$ with respect to $M$ is negligible, it is possible to choose one point in the combined parameter space where the above conditions are satisfied and to estimate the uncertainty by varying $\omega_{th}$.
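The quadrature combination of the individual shifts can be sketched as follows (shown for the ratio $\mathcal{R}$, with the asymmetric shifts listed in Eq. \eqref{eq::R_final}; the helper name is ours):

```python
import math

def quad(errs):
    """Combine individual (upper, lower) uncertainties in quadrature."""
    up = math.sqrt(sum(p**2 for p, _ in errs))
    down = math.sqrt(sum(m**2 for _, m in errs))
    return up, down

# Individual shifts for R(1 GeV) as (upper, lower), in the order quoted above
R_errs = [(0.03, 0.03),   # omega_th
          (0.01, 0.02),   # M
          (0.01, 0.01),   # alpha_s
          (0.01, 0.01),   # <qbar q>
          (0.02, 0.03),   # <alpha_s/pi G^2>
          (0.05, 0.04),   # <qbar gG.sigma q>
          (0.02, 0.02)]   # triple-gluon condensate

up, down = quad(R_errs)   # both round to 0.07, the total quoted in Eq. (eq::R_final)
```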
Besides these contributions, there are other uncertainties due to several approximations and systematic errors. Since we truncated the perturbative series at $\mathcal{O}(\alpha_s)$ and the power corrections at dimension seven, we introduce another error, which is more difficult to quantify. Moreover, there is also an intrinsic uncertainty caused by the sum rule approach, for instance from the use of quark-hadron duality. The total uncertainties stated in Eq. \eqref{eq::R_final}, \eqref{eq::lambdaH_final} and \eqref{eq::lambdaE_final} only include those quantities that produce deviations from the central values. Before we state our final results, we first derive upper bounds on the parameters $\lambda_{E,H}^2$. Due to the diagonal structure of the correlation function, we know that the spectral density is positive definite. By taking the limit $\omega_{th} \rightarrow \infty$ in Eq. \eqref{eq::LambdaH-sumrule-complete} and \eqref{eq::LambdaE-sumrule-complete}, we include all possible higher resonances and continuum contributions into our analysis. Thus, we obtain a consistent upper bound on these parameters, as was already done for the $f_D/f_{D_s}$ decay constants in \cite{Khodjamirian:2008xt}. The values for the upper bounds within the Borel window in Fig. \ref{fig::Comb1} (a) and \ref{fig::Comb1} (b) are: \begin{align} \lambda_H^2 &< 0.48^{+ 0.17}_{-0.24} \, \; \text{GeV}^2 \, , \label{eq:UpperBoundH}\\ \lambda_E^2 &< 0.41^{+ 0.19}_{-0.24} \, \; \text{GeV}^2 \, . \label{eq:UpperBoundE} \end{align} Now we extract our predictions for these parameters based on our sum rule analysis. We expect these estimates to lie within the bounds of \eqref{eq:UpperBoundH} and \eqref{eq:UpperBoundE}.
A conservative estimate of the uncertainties leads to the following final results: \begin{align} \lambda_{E}^2(\text{1 GeV}) &= (0.01 \pm 0.01) \, \, \text{GeV}^2 \, , \label{eq:EstimateE} \\ \lambda_{H}^2(\text{1 GeV}) &= (0.15 \pm 0.05) \, \, \text{GeV}^2 \, , \label{eq:EstimateH} \\ \mathcal{R} &= 0.1 \pm 0.1 \, . \label{eq:EstimateR} \end{align} If we instead directly consider Eq. \eqref{eq::LambdaHPlusE-sumrule-complete} and \eqref{eq::LambdaH-sumrule-complete}, and take the Borel window and the threshold parameter $\omega_{th}$ as shown in Table \ref{tab::ThresholdAndBorel}, we obtain the values: \begin{align} \lambda_{E}^2(\text{1 GeV}) &= (0.05 \pm 0.03) \, \, \text{GeV}^2 \, , \label{eq:EstimateEContProblem}\\ \lambda_{H}^2(\text{1 GeV}) &= (0.16 \pm 0.05) \, \, \text{GeV}^2 \, , \label{eq:EstimateHContProblem}\\ \mathcal{R} &= 0.3 \pm 0.2 \, . \label{eq:EstimateRContProblem} \end{align} Note that we can also use \eqref{eq::LambdaE-sumrule-complete} to obtain the value of $\lambda_E^2$; however, the threshold window must then be chosen as $0.55 \, \text{GeV} \leq \omega_{th} \leq 0.65 \, \text{GeV}$, as shown in Fig. \ref{fig:Lambda_uptodim} (c). Although the sum rules in Eq. \eqref{eq::LambdaHPlusE-sumrule-complete} to \eqref{eq::LambdaE-sumrule-complete} are dominated by continuum contributions and higher resonances for the Borel window given in Table \ref{tab::ThresholdAndBorel}, the set of parameters and their ratio $\mathcal{R}$ in Eq. \eqref{eq:EstimateEContProblem} to \eqref{eq:EstimateRContProblem} reproduces the values for $\lambda_{E,H}^2$ and $\mathcal{R}$ in Eq. \eqref{eq:EstimateE} to \eqref{eq:EstimateR} within the errors. In particular, the estimate for $\lambda_H^2$ does not change much, which indicates that the continuum contributions are well approximated by the sum rule in Eq. \eqref{eq::LambdaH-sumrule-complete}. All values lie within the bounds given in Eq. \eqref{eq:UpperBoundH} and \eqref{eq:UpperBoundE}. 
Our result for $\lambda_{E}^2$ in Eq. \eqref{eq:EstimateE} is close to the result of \cite{Nishikawa:2011qk} and agrees with it within the error, see Table \ref{tab::finalresult}. Additionally, our result for $\lambda_{H}^2$ tends towards the result of \cite{Grozin:1996pq}. \section{Conclusion} \label{chp:Conclusion} In this work we have suggested alternative diagonal QCD sum rules in order to estimate the HQET parameters $\lambda_{E,H}^2$ and their ratio $\mathcal{R} = \lambda_E^2/\lambda_H^2$. We included all leading contributions to the diagonal correlation function of three-particle quark-antiquark-gluon currents up to mass dimension seven. The advantage of these sum rules is that they are positive definite, and we expect the quark-hadron duality to be more accurate than for the previously studied correlation functions in \cite{Grozin:1996pq,Nishikawa:2011qk}. However, within these sum rules we observe dominant contributions from the continuum and higher resonances due to the large mass dimension of the correlation function. This is why we consider the combinations of these sum rules studied in Section \ref{chp: NumericalAnalysis}, which satisfy the condition that the ground-state contribution still gives a sizeable effect. Moreover, the OPE is expected to converge for the two sum rules in Eq. (\ref{eq::LambdaHPlusE-sumrule-complete}) and (\ref{eq::LambdaH-sumrule-complete}) shown in Fig. \ref{fig:Lambda_uptodim} (a) and \ref{fig:Lambda_uptodim} (b), because the investigated contributions beyond mass dimension five become smaller. However, the OPE in Eq. (\ref{eq::LambdaE-sumrule-complete}) needs additional higher-order corrections, since the contributions of dimension five and seven are both large, which makes the sum rule unstable, see Fig. \ref{fig:Lambda_uptodim} (c). 
For a consistent treatment of the leading-order contributions we also included only the $\mathcal{O}(\alpha_s^0)$ contributions to the HQET decay constant $F(\mu)$, although it is known that the $\mathcal{O}(\alpha_s)$ contributions are sizeable \cite{Broadhurst:1991fc}. Our results are compared to the values obtained in \cite{Grozin:1996pq,Nishikawa:2011qk} in Table \ref{tab::finalresult}. \begin{table}[H] \centering \scalebox{0.80}{ \begin{tabular}{|c c c c|} \hline Parameters & Ref. \cite{Grozin:1996pq} & Ref. \cite{Nishikawa:2011qk} & \textbf{this work} \\ [0.5ex] \hline $\mathcal{R}$(1 GeV) & (0.6 $\pm$ 0.4) & (0.5 $\pm$ 0.4) & (0.1 $\pm$ 0.1) \\ $\lambda_H^2$(1 GeV) & (0.18 $\pm$ 0.07) GeV$^2$ & (0.06 $\pm$ 0.03) GeV$^2$ & (0.15 $\pm$ 0.05) GeV$^2$ \\ $\lambda_E^2$(1 GeV) & (0.11 $\pm$ 0.06) GeV$^2$ & (0.03 $\pm$ 0.02) GeV$^2$ &(0.01 $\pm$ 0.01) GeV$^2$ \\ \hline \end{tabular}} \caption{Comparison of our results for the parameters $\lambda_{E,H}^2$ and $\mathcal{R}$ at $\mu = 1 \; \mathrm{GeV}$ with the values of Refs. \cite{Grozin:1996pq,Nishikawa:2011qk}.} \label{tab::finalresult} \end{table} With these new sum rules we obtain independent estimates for the parameters $\lambda_{E,H}^2$ and the ratio $\mathcal{R}$, which are important ingredients for the second moments of the $B$-meson light-cone distribution amplitudes in $B$-meson factorization theorems. For future improvements of our sum rules we suggest including $\mathcal{O}(\alpha_s^2)$ corrections to the OPE and considering even higher mass dimensions in the power expansion of local vacuum condensates. In this case it would also be necessary to include the $\mathcal{O}(\alpha_s)$ contributions to $F(\mu)$. Especially the sum rule in \eqref{eq::LambdaE-sumrule-complete} would benefit greatly, since we then expect the OPE to converge, resulting in a better determination of $\lambda_{E,H}^2$ and consequently $\mathcal{R}$.
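As a rough consistency check of Table \ref{tab::finalresult} (our own addition, using a simple linear $1\sigma$-overlap criterion, which is not a statement made in the text), one can ask which determinations agree within combined errors:

```python
# Central values and symmetric errors from the comparison table;
# lambda_{E,H}^2 entries in GeV^2, R dimensionless.
results = {
    "R":         {"Grozin": (0.6, 0.4),  "Nishikawa": (0.5, 0.4),  "this work": (0.1, 0.1)},
    "lambda_H2": {"Grozin": (0.18, 0.07), "Nishikawa": (0.06, 0.03), "this work": (0.15, 0.05)},
    "lambda_E2": {"Grozin": (0.11, 0.06), "Nishikawa": (0.03, 0.02), "this work": (0.01, 0.01)},
}

def compatible(a, b):
    """True if |central_a - central_b| <= err_a + err_b (linear 1-sigma overlap)."""
    return abs(a[0] - b[0]) <= a[1] + b[1]

for param, vals in results.items():
    for ref in ("Grozin", "Nishikawa"):
        print(param, "vs", ref, "->", compatible(vals[ref], vals["this work"]))
```

With this crude criterion, $\lambda_E^2$ overlaps with Ref. \cite{Nishikawa:2011qk} but not with Ref. \cite{Grozin:1996pq}, while $\lambda_H^2$ overlaps with Ref. \cite{Grozin:1996pq}, matching the statements made above.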
\acknowledgments We would like to thank Alexander Khodjamirian for proposing this project to us, for his constant feedback throughout the work and for reading the manuscript. We thank Thomas Mannel for useful discussions and for reading the manuscript. Additionally, we are grateful to Thorsten Feldmann and Alexei Pivovarov for helpful discussions. This research was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grant 396021762 - TRR 257.
\section{Introduction} In this paper, we develop sample-path large deviations for one-dimensional L\'evy processes and random walks, assuming the jump sizes are heavy-tailed. Specifically, let $X(t),t\geq 0,$ be a centered L\'evy process with regularly varying L\'evy measure $\nu$. Assume that $\P(X(1)>x)$ is regularly varying of index $-\alpha$, and that $\P(X(1)<-x)$ is regularly varying of index $-\beta$; i.e. there exist slowly varying functions $L_{+}$ and $L_{-}$ such that \begin{equation} \label{intro-eq-twosided} \P(X(1)>x) = L_{+} (x) x^{-\alpha}, \hspace{1cm} \P(X(1)<-x) = L_{-}(x) x^{-\beta}. \end{equation} Throughout the paper, we assume $\alpha, \beta>1$. We also consider spectrally one-sided processes; in that case only $\alpha$ plays a role. Define $\bar X_n= \{\bar X_n(t), t\in [0,1]\}$, with $\bar X_n(t) = X(nt)/n, t\geq 0$. We are interested in large deviations of $\bar X_n$. This topic fits well in a branch of limit theory that has a long history, has intimate connections to point processes and extreme value theory, and is still a subject of intense activity. The investigation of tail estimates of the one-dimensional distributions of $\bar X_n$ (or random walks with heavy-tailed step size distribution) was initiated in \cite{Nagaev69,Nagaev77}. The state of the art of such results is well summarized in \cite{BorovkovBorovkov, DiekerDenisovShneer, EmbrechtsKluppelbergMikosch97, FKZ}. In particular, \cite{DiekerDenisovShneer} describe in detail how fast $x$ needs to grow with $n$ for the asymptotic relation \begin{equation} \label{onebigjump} \P( X(n) > x) = n \P(X(1)>x)(1+o(1)) \end{equation} to hold, as $n\rightarrow\infty$, in settings that go beyond (\ref{intro-eq-twosided}). If (\ref{onebigjump}) is valid, the so-called \emph{principle of one big jump} is said to hold. A functional version of this insight has been derived in \cite{HLMS}. 
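For a concrete instance of (\ref{intro-eq-twosided}) with trivial slowly varying part (a toy illustration of ours, not an example from the literature cited above), take an exact Pareto tail $\P(X(1)>x) = x^{-\alpha}$ for $x\geq 1$. With the normalization $a_n = n^{1/\alpha}$, the rescaled tail $n\,\P(X(1) > a_n x)$ equals $x^{-\alpha}$ exactly whenever $a_n x \geq 1$:

```python
ALPHA = 1.5  # tail index alpha > 1, as assumed throughout the paper

def tail(x):
    """Pareto tail P(X(1) > x) = x^{-alpha} for x >= 1, i.e. L_+ = 1 in (1)."""
    return x ** (-ALPHA) if x >= 1 else 1.0

def normalized_tail(n, x):
    """n * P(X(1) > a_n x) with the regular-variation normalization a_n = n^{1/alpha}."""
    a_n = n ** (1.0 / ALPHA)
    return n * tail(a_n * x)

# The rescaled tail sits at its limit x^{-alpha} for every n (up to rounding):
for n in (10, 10 ** 3, 10 ** 6):
    print(n, normalized_tail(n, 2.0), 2.0 ** (-ALPHA))
```

A general regularly varying tail only reaches this limit asymptotically, with the slowly varying factor $L_+$ washing out as $n\rightarrow\infty$.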
A significant number of studies investigate whether and how the principle of a single big jump is affected by the impact of (various forms of) dependence, and cover stable processes, autoregressive processes, modulated processes, and stochastic differential equations; see \cite{BuraDamekMikosch13, FossModulated, HultLindskog07, KonstantinidesMikosch05, MikoschWintenberger13, mikosch2016large, MikoschSamorodnitsky00, Samorodnitsky04}. The problem we investigate in this paper is markedly different from all of these works. Our aim is to develop asymptotic estimates of $\P(\bar X_n \in A)$ for a sufficiently general collection of sets $A$, so that it is possible to study continuous functionals of $\bar X_n$ in a systematic manner. For many such functionals, and many sets $A$, the associated rare event will not be caused by a single big jump, but by multiple jumps. The results in this domain (e.g. \cite{blanchetrisk, FK12, ZBM}) are few, and each takes an ad-hoc approach. As in large deviations theory for light tails, it is desirable to have more general tools available. Another aspect of heavy-tailed large deviations we aim to clarify in this paper is the connection with the standard large-deviations approach, which has not been touched upon in any of the above-mentioned references. In our setting, the goal would be to obtain a function $I$ such that \begin{equation} \label{weakldp} -\inf_{\xi\in A^\circ} I(\xi)\leq \liminf_{n\rightarrow\infty} \frac{\log \P(\bar X_n \in A)}{\log n} \leq \limsup_{n\rightarrow\infty} \frac{\log \P(\bar X_n \in A)}{\log n} \leq -\inf_{\xi\in \bar A} I(\xi), \end{equation} where ${A}^\circ$ and $\bar A$ are the interior and closure of $A$; all our large deviations results are derived in the Skorokhod $J_1$ topology. Equation (\ref{weakldp}) is a classical large deviations principle (LDP) with sub-linear speed (cf.\ \cite{dembozeitouni}). 
Using existing results in the literature (e.g.\ \cite{DiekerDenisovShneer}), it is not difficult to show that $X(n)/n=\bar X_n(1)$ satisfies an LDP with rate function $I_1=I_1(x)$, which equals $0$ at $x=0$, $(\alpha-1)$ if $x>0$, and $(\beta-1)$ if $x<0$. This is a lower-semicontinuous function whose level sets are not compact. Thus, in large-deviations terminology, $I_1$ is a rate function, but not a good one. This implies that techniques such as the projective limit approach cannot be applied. In fact, in Section \ref{subsec:nonexistence}, we show that there does not exist an LDP of the form (\ref{weakldp}) for general sets $A$, by giving a counterexample. A version of (\ref{weakldp}) for compact sets is derived in Section~\ref{subsec:weak-ldp}, as a corollary of our main results. A result similar to (\ref{weakldp}) for random walks with semi-exponential (Weibullian) tails has been derived in \cite{Gantert98} (see also \cite{Gantert00, Gantert14} for related results). Though an LDP for finite-dimensional distributions can be derived, the lack of exponential tightness persists at the sample-path level as well. To make the rate function good (i.e., to make its level sets compact), the topology chosen in \cite{Gantert98} is considerably weaker than any of the Skorokhod topologies (but sufficient for the application that is central in that work). The approach followed in the present paper is based on recent developments in the theory of regular variation. In particular, in \cite{LRR}, the classical notion of regular variation is redefined through a new convergence concept called $\mathbb M$-convergence (this is in itself a refinement of other reformulations of regular variation in function spaces; see \cite{DeHaanLin01, HultLindskog05, HultLindskog06}). In Section~\ref{sec:preliminaries}, we further investigate the $\mathbb M$-convergence framework by deriving a number of general results that facilitate the development of our proofs. 
This paves the way towards our main large deviations results, which are presented in Section~\ref{sec:sample-path-ldps}. We actually obtain estimates that are sharper than (\ref{weakldp}), though we impose a condition on $A$. For one-sided L\'evy processes, our result takes the form \begin{equation} \label{asymptoticslevy} C_{\mathcal J(A)}(A^\circ) \leq \liminf_{n\rightarrow\infty} \frac{\P(\bar X_n \in A)}{(n\nu[n,\infty))^{\mathcal J(A)}} \leq \limsup_{n\rightarrow\infty} \frac{\P(\bar X_n \in A)}{(n\nu[n,\infty))^{\mathcal J(A)}} \leq C_{\mathcal J(A)}(\bar A). \end{equation} Precise definitions can be found in Section~\ref{subsec:one-sided-large-deviations}; for now we just mention that $C_j$ is a measure on the Skorokhod space, and $\mathcal J(\cdot)$ is an integer-valued set function defined as $\mathcal J(A) = \inf_{\xi\in A\cap \mathbb D_s^\uparrow} \mathcal D_+(\xi)$, where $\mathcal D_+(\xi)$ is the number of discontinuities of $\xi$, and $\mathbb D_s^\uparrow$ is the set of all non-decreasing step functions vanishing at the origin. Throughout the paper, we adopt the convention that the infimum over an empty set is $\infty$. Letting $\mathbb D_j$ and $\mathbb D_{< j}$ be the sets of step functions vanishing at the origin with precisely $j$ and at most $j-1$ steps respectively, we note that the measure $C_j$, defined on $\mathbb D \setminus \mathbb D_{<j}$, has its support on $\mathbb D_j$. A crucial assumption for (\ref{asymptoticslevy}) to hold is that the Skorokhod $J_1$ distance between the sets $A$ and $\mathbb D_{< \mathcal J(A)}$ is strictly positive. For $A$ such that $\mathcal J(A)=1$ this result corresponds to the one shown in \cite{HLMS}. (Note that \cite{HLMS} deals with multivariate regular variation whereas we focus on one-dimensional regular variation in this paper.) The interpretation of the ``rate function'' $\mathcal J(A)$ is that it provides the number of jumps in the L\'evy process that are necessary to make the event $A$ happen. 
This can be seen as an extension of the principle of a single big jump to multiple jumps. A rigorous statement on when (\ref{asymptoticslevy}) holds can be found in Theorem~\ref{thm:one-sided-main-theorem}, which is the first main result of the paper. The result that comes closest to (\ref{asymptoticslevy}) is Theorem~5.1 in \cite{LRR}, which considers the $\mathbb M$-convergence of $\nu[n,\infty)^{-j} \P(X/n \in A)$. This result could be used as a starting point to investigate rare events that happen on a time-scale of $O(1)$. However, in the large-deviations scaling we consider, rare events happen on a time-scale of $O(n)$. Controlling the L\'evy process on this larger time-scale requires more delicate estimates, eventually leading to an additional factor $n^j$ in the asymptotic results. We further show that the choice $j=\mathcal J(A)$ is the only choice that leads to a non-trivial limit. One useful notion that we develop and rely on in our setting is a form of asymptotic equivalence, which can best be compared with exponential equivalence in classical large deviations theory. In Section~\ref{subsec:two-sided-large-deviations} we present sample-path large deviations for two-sided L\'evy processes. Our main results in this case are Theorems \ref{thm:two-sided-limit-theorem}--\ref{thm:two-sided-multiple-asymptotics}. In the two-sided case, determining the most likely path requires resolving significant combinatorial issues which do not appear in the one-sided case. The polynomial rate of decay for $\P(\bar X_n \in A)$, which was described by the function $\mathcal J(A)$ in the one-sided case, has a more complicated description; the corresponding polynomial rate in the two-sided case is \begin{equation} \label{two-sided rate function} \inf_{\xi,\zeta\in \mathbb D_s^\uparrow;\; \xi-\zeta \in A} (\alpha-1)\mathcal D_+(\xi) + (\beta-1)\mathcal D_+(\zeta). 
\end{equation} Note that this is a result that one could expect from the result for one-sided L\'evy processes and a heuristic application of the contraction principle. A rigorous treatment of the two-sided case requires a more delicate argument than the one-sided case. In the one-sided case, the argument simplifies since, if one takes the $j$ largest jumps away from $\bar X_n$, the probability that the residual process is of significant size is $o\big((n\nu[n,\infty))^j\big)$, so that it does not contribute in (\ref{asymptoticslevy}). In the two-sided case, taking the $j$ largest upward jumps and the $k$ largest downward jumps away from $\bar X_n$ does not guarantee that the residual process remains small with high enough probability---i.e., the probability that the residual process is of significant size cannot be bounded by $o\big((n\nu[n,\infty))^j(n\nu(-\infty,-n])^k\big)$. In addition, it may be the case that multiple pairs $(j,k)$ of jumps lead to optimal solutions of (\ref{two-sided rate function}). To overcome these difficulties, we first develop general tools---Lemmas~\ref{thm:simple-product-space} and \ref{thm:union-limsup}---that establish a suitable notion of $\mathbb M$-convergence on product spaces. Using these results, we prove in Theorem~\ref{thm:multi-d-limit-theorem} the suitable $\mathbb M$-convergence for multiple L\'evy processes in the associated product space. Viewing the two-sided L\'evy process as a superposition of one-sided L\'evy processes, we then apply the continuous mapping principle for $\mathbb M$-convergence to Theorem~\ref{thm:multi-d-limit-theorem} to establish our main results. Although no further implications are discussed in this paper, we believe that Theorem~\ref{thm:multi-d-limit-theorem} is also of independent interest, since it can be applied to generate large deviations results for a general class of functionals of multiple L\'evy processes. We derive analogous results for random walks in Section~\ref{subsec:random-walks}. Random walks cannot be decomposed into independent components with small jumps and large jumps as easily as L\'evy processes, making a direct analysis of random walks more technical. However, it is possible to follow an indirect approach. Given a random walk $S_k, k\geq 0$, one can study a subordinated version $S_{N(t)}, t\geq 0$, with $N(t),t\geq 0$, an independent unit-rate Poisson process. The Skorokhod $J_1$ distance between rescaled versions of $S_k, k\geq 0$ and $S_{N(t)}, t\geq 0$ can then be bounded in terms of the deviations of $N(t)$ from $t$, which have been studied thoroughly. In Section~\ref{subsec:conditional-limit-theorem}, we provide conditional limit theorems which give a precise description of the limiting behavior of $\bar X_n$ given that $\bar X_n \in A$, as $n\rightarrow\infty$. An early result of this type is given in \cite{durrett1980conditioned}, which focuses on regularly varying random walks with finite variance conditioned on the event $A= \{\bar X_n(1)>a\}$. Using the recent results that we have discussed (e.g. \cite{HLMS}), more general conditional limit theorems can be derived for single-jump events. We prove an LDP of the form (\ref{weakldp}) in Section~\ref{subsec:weak-ldp}, where the upper bound requires a compactness assumption. We construct a counterexample showing that the compactness assumption cannot be removed entirely, and thus a full LDP does not hold. Essentially, if a rare event is caused by $j$ big jumps, then the framework developed in this paper applies if each of these jumps is bounded from below by a strictly positive constant. Our counterexample in Section~\ref{subsec:nonexistence} indicates that it is not trivial to remove this condition. 
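The combinatorial minimization in (\ref{two-sided rate function}) can be made concrete with a toy example of our own (not one analysed in the paper): suppose the event $A$ requires the path to end at a level of at least $2$ and to exhibit at least one downward jump of size at least $1$. A step function with $j$ upward and $k$ downward jumps of unconstrained sizes can realize this if and only if $j\geq 1$ and $k\geq 1$, so the minimal cost is $(\alpha-1)+(\beta-1)$:

```python
ALPHA, BETA = 1.5, 3.0  # hypothetical tail indices alpha, beta > 1

def cost(j, k):
    """Polynomial decay exponent (alpha-1) j + (beta-1) k for j up- and k down-jumps."""
    return (ALPHA - 1) * j + (BETA - 1) * k

def feasible(j, k):
    """Hypothetical event A: terminal value >= 2 and one downward jump of size >= 1.
    Since jump sizes are unconstrained, any j >= 1 upward jumps can supply the
    terminal level and any k >= 1 downward jumps can supply the required drop."""
    return j >= 1 and k >= 1

# Brute-force the minimizer over a box of candidate jump counts.
best = min(
    ((j, k) for j in range(6) for k in range(6) if feasible(j, k)),
    key=lambda jk: cost(*jk),
)
print(best, cost(*best))
```

Here the minimizer $(j,k)=(1,1)$ is unique, but as noted above, for general sets $A$ several pairs $(j,k)$ may attain the minimum, which is precisely what complicates the two-sided analysis.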
As one may expect, it is not possible to apply classical variational methods to derive an expression for the exponent $\mathcal J(A)$, as is often the case in large deviations for light tails. Nevertheless, there seems to be a generic connection with a class of control problems called impulse control problems. Equation (\ref{two-sided rate function}) is a specific deterministic impulse-control problem, which is related to \cite{Barles}. We expect that techniques similar to those in \cite{Barles} will be useful to characterize optimality of solutions for problems like (\ref{two-sided rate function}). The latter challenge is not taken up in the present study and will be addressed elsewhere. Instead, in Section~\ref{sec:applications}, we analyse (\ref{two-sided rate function}) directly in several examples; see also \cite{chen2017efficient}. In each case, a condition needs to be checked to see whether our framework is applicable. We provide a general result that essentially states that we only need to check this condition for step functions in $A$, which makes this check rather straightforward. In summary, this paper is organized as follows. After developing some preliminary results in Section~\ref{sec:preliminaries}, we present our main results in Section~\ref{sec:sample-path-ldps}. Applications to random walks and connections with classical large deviations theory are investigated in Section~\ref{sec:implications}. Section~\ref{sec:proofs} is devoted to proofs. We collect some useful bounds in Appendix A. \section{$\mathbb M$-convergence}\label{sec:preliminaries} This section reviews and develops general concepts and tools that are useful in deriving our large deviations results. The proofs of the lemmas and corollaries stated throughout this section are provided in Section~\ref{subsec:proofs-for-M-convergence}. We start by briefly reviewing the notion of $\mathbb M$-convergence, introduced in \citet{LRR}. 
Let $(\mathbb S,d)$ be a complete separable metric space, and $\mathscr S$ be the Borel $\sigma$-algebra on $\mathbb S$. Given a closed subset $\mathbb C$ of $\mathbb S$, let $\mathbb S\setminus \mathbb C$ be equipped with the relative topology as a subspace of $\mathbb S$, and consider the associated sub $\sigma$-algebra $\mathscr S_{\mathbb S\setminus \mathbb C} \triangleq \{A: A\subseteq \mathbb S\setminus \mathbb C, A\in \mathscr S\}$ on it. Define $\mathbb C^r \triangleq \{x\in \mathbb S: d(x, \mathbb C) < r\}$ for $r>0$, and let $\mathbb M(\mathbb S\setminus \mathbb C)$ be the class of measures defined on $\mathscr S_{\mathbb S\setminus \mathbb C}$ whose restrictions to $\mathbb S\setminus \mathbb C^r$ are finite for all $r>0$. Topologize $\mathbb M(\mathbb S\setminus \mathbb C)$ with a sub-basis $\big\{\{\nu \in \mathbb M(\mathbb S\setminus \mathbb C): \nu (f) \in G\}$: $f\in \mathcal C_{\mathbb S\setminus \mathbb C}$, $G$ open in $\mathbb{R}_+\big\}$ where $\mathcal C_{\mathbb S\setminus \mathbb C}$ is the set of real-valued, non-negative, bounded, continuous functions whose support is bounded away from $\mathbb C$ (i.e., $f(\mathbb C^r) = \{0\}$ for some $r>0$). A sequence of measures $\mu_n \in \mathbb M(\mathbb S\setminus \mathbb C)$ converges to $\mu\in \mathbb M(\mathbb S\setminus \mathbb C)$ if $\mu_n (f) \to \mu(f)$ for each $f\in \mathcal C_{\mathbb S\setminus \mathbb C}$. Note that this notion of convergence in $\mathbb M(\mathbb S\setminus\mathbb C)$ coincides with the classical notion of weak convergence of measures (\citealp{billingsley2013convergence}) if $\mathbb C$ is an empty set. We say that a set $A\subseteq \mathbb S$ is bounded away from another set $B\subseteq \mathbb S$ if $\inf_{x\in A, y\in B} d(x,y) > 0$. An important characterization of $\mathbb M(\mathbb S\setminus \mathbb C)$-convergence is as follows: \begin{theorem}[Theorem~2.1 of \citealp{LRR}]\label{result:L21} Let $\mu, \mu_n \in \mathbb M({\mathbb S\setminus \mathbb C})$. 
Then $\mu_n \to \mu$ in $\mathbb M({\mathbb S\setminus \mathbb C})$ as $n \to \infty$ if and only if \begin{equation} \label{result1ub} \limsup_{n\to\infty } \mu_n(F) \leq \mu(F) \end{equation} for all closed $F\in \mathscr S_{\mathbb S\setminus \mathbb C}$ bounded away from $\mathbb C$ and \begin{equation} \label{result1lb} \liminf_{n\to\infty} \mu_n(G) \geq \mu(G) \end{equation} for all open $G\in \mathscr S_{\mathbb S\setminus \mathbb C}$ bounded away from $\mathbb C$. \end{theorem} We now introduce a new notion of equivalence between two families of random objects, which will prove to be useful in Sections~\ref{subsec:one-sided-large-deviations} and \ref{subsec:random-walks}. Let $F_\delta \triangleq \{x \in \mathbb S: d(x,F) \leq \delta\}$ and $G^{-\delta} \triangleq ((G^c)_\delta)^c$. (Compare these notations to $\mathbb C^r$; note that we are using the convention that superscript implies open sets and subscript implies closed sets.) \begin{definition}\label{def:asymptotic-equivalence} Suppose that $X_n$ and $Y_n$ are random elements taking values in a complete separable metric space $(\mathbb S, d)$, and $\epsilon_n$ is a sequence of positive real numbers. $Y_n$ is said to be asymptotically equivalent to $X_n$ with respect to $\epsilon_n$ if for each $\delta>0$, $$\limsup_{n\to\infty} \epsilon_n^{-1}\P(d(X_n,Y_n) \geq \delta) = 0.$$ \end{definition} The usefulness of this notion of equivalence comes from the following lemma, which states that if $Y_n$ is asymptotically equivalent to $X_n$, and $X_n$ satisfies a limit theorem, then $Y_n$ satisfies the same limit theorem. 
Moreover, it also allows one to extend the lower and upper bounds to more general sets in case there are asymptotically equivalent distributions that are supported on a subspace $\mathbb S_0$ of $\mathbb S$: \begin{lemma}\label{lem:extended-bounds} Suppose that $\epsilon_n^{-1} \P(X_n \in \cdot)\to \mu(\cdot)$ in $\mathbb M(\mathbb S\setminus \mathbb C)$ for some sequence $\epsilon_n$ and a closed set $\mathbb C$. In addition, suppose that $\mu(\mathbb S \setminus \mathbb S_0) = 0$ and $\P(X_n \in \mathbb S_0) = 1$ for each $n$. If $Y_n$ is asymptotically equivalent to $X_n$ with respect to $\epsilon_n$, then $$ \liminf_{n\to\infty} \epsilon_n^{-1}\P(Y_n \in G) \geq \mu(G) $$ if $G$ is open and $G\cap \mathbb S_0$ is bounded away from $\mathbb C$; $$ \limsup_{n\to\infty} \epsilon_n^{-1}\P(Y_n \in F) \leq \mu(F) $$ if $F$ is closed and there is a $\delta>0$ such that $F_\delta \cap \mathbb S_0$ is bounded away from $\mathbb C$. \end{lemma} This lemma is particularly useful when we work in Skorokhod space, and $\mathbb S_0$ is the class of step functions. Taking $\mathbb S_0 = \mathbb S$, a simpler version of Lemma~\ref{lem:extended-bounds} follows immediately: \begin{corollary}\label{lem:asymptotic-equivalence} Suppose that $\epsilon_n^{-1} \P(X_n \in \cdot) \to \mu(\cdot)$ in $\mathbb M(\mathbb S\setminus \mathbb C)$ for some sequence $\epsilon_n$. If $Y_n$ is asymptotically equivalent to $X_n$ with respect to $\epsilon_n$, then the law of $Y_n$ has the same (normalized) limit, i.e., $\epsilon_n^{-1} \P(Y_n \in \cdot ) \to \mu(\cdot)$ in $\mathbb M(\mathbb S\setminus \mathbb C)$. \end{corollary} Next, we discuss the $\mathbb M$-convergence in a product space as a result of the $\mathbb M$-convergences on each space. \begin{lemma}\label{thm:simple-product-space} Suppose that $\mathbb S_1, \ldots, \mathbb S_d$ are separable metric spaces, $\mathbb C_1, \ldots, \allowbreak \mathbb C_d$ are closed subsets of $\mathbb S_1, \ldots, \mathbb S_d$, respectively. 
If $\mu_n^{(i)}(\cdot)\to\mu^{(i)}(\cdot)$ in $\mathbb M(\mathbb S_i\setminus \mathbb C_i)$ for each $i=1,\ldots, d$ then, \begin{equation}\label{crude-m-convergence-in-product-space} \mu_n^{(1)}\times\cdots\times\mu_n^{(d)}(\cdot) \to \mu^{(1)}\times\cdots\times\mu^{(d)}(\cdot) \end{equation} in $ \mathbb M\Big(\big(\prod_{i=1}^d \mathbb S_i\big) \setminus \bigcup_{i=1}^d \big(\big(\prod_{j=1}^{i-1} \mathbb S_j\big) \times \mathbb C_i \times \big(\prod_{j=i+1}^d \mathbb S_j\big) \big)\Big). $ \end{lemma} It should be noted that Lemma~\ref{thm:simple-product-space} itself is not exactly ``right'' in the sense that the set we take away is unnecessarily large, and hence, has limited applicability. More specifically, the $\mathbb M$-convergence in (\ref{crude-m-convergence-in-product-space}) applies only to the sets that are contained in a ``rectangular'' domain $\prod_{i=1}^d (\mathbb S_i\setminus \mathbb C_i)$. Our next observation allows one to combine multiple instances of $\mathbb M$-convergences to establish a more refined one so that (\ref{crude-m-convergence-in-product-space}) applies to a class of sets that are not confined to a rectangular domain. In particular, we will see later in Theorem~\ref{thm:two-sided-limit-theorem} and Theorem~\ref{thm:multi-d-limit-theorem} that in combination with Lemma~\ref{thm:simple-product-space}, the following lemma produces the ``right'' $\mathbb M$-convergence for two-sided L\'evy processes and random walks. \begin{lemma}\label{thm:union-limsup} Consider a family of measures $\{\mu^{(i)}\}_{i=0,1,\ldots,m}$ and a family of closed subsets $\{\mathbb C(i)\}_{i=0,1,\ldots,m}$ of $\mathbb S$ such that $\frac{1}{\epsilon_n{(i)}}\P(X_n \in \cdot) \to \mu^{(i)}(\cdot)$ in $\mathbb M(\mathbb S\setminus \mathbb C(i))$ for $i=0,\ldots,m$ where $\big\{\{{\epsilon_n{(i)}}: n\geq 1\}\big\}_{i=0,1,\ldots,m}$ is the family of associated normalizing sequences. 
Suppose that $\mu^{(0)} \in \mathbb M\big(\mathbb S \setminus \bigcap_{i=0}^{m}\mathbb C(i)\big)$; $ \limsup_{n\to\infty}\frac{{\epsilon_n{(i)}}}{{\epsilon_n{(0)}}} = 0 $ for $i=1,\ldots,m$; and for each $r>0$, there exist positive numbers $r_0,\ldots,r_m$ such that $\bigcap_{i=0}^m\mathbb C(i)^{r_i}\subseteq \big(\bigcap_{i=0}^{m}\mathbb C(i)\big)^r$. Then $$ \frac{1}{\epsilon_n{(0)}}\P(X_n\in \cdot) \to \mu^{(0)} $$ in $\mathbb M\big(\mathbb S \setminus \bigcap_{i=0}^{m}\mathbb C(i)\big)$. \end{lemma} A version of the continuous mapping principle is satisfied by $\mathbb M$-convergence. Let $(\mathbb S',d')$ be a complete separable metric space, and let $\mathbb C'$ be a closed subset of $\mathbb S'$. \begin{theorem}[Mapping theorem; Theorem~2.3 of \citet{LRR}]\label{result:L23} Let $h:(\mathbb S\setminus \mathbb C, \mathscr S_{\mathbb S\setminus \mathbb C}) \to (\mathbb S'\setminus \mathbb C', \mathscr S_{\mathbb S'\setminus \mathbb C'})$ be a measurable mapping such that $h^{-1}(A')$ is bounded away from $\mathbb C$ for any $A'\in \mathscr S_{\mathbb S'\setminus \mathbb C'}$ bounded away from $\mathbb C'$. Then $\hat h: \mathbb M({\mathbb S\setminus \mathbb C}) \to \mathbb M({\mathbb S'\setminus \mathbb C'})$ defined by $\hat h(\nu) = \nu \circ h^{-1}$ is continuous at $\mu$ provided $\mu(D_h) = 0$, where $D_h$ is the set of discontinuity points of $h$. \end{theorem} For our purpose, the following slight extension will prove to be useful in developing rigorous arguments. \begin{lemma}\label{lem:almost-continuous-mapping} Let $\mathbb S_0$ be a measurable subset of $\mathbb S$, and $h:(\mathbb S_0, \mathscr S_{\mathbb S_0 })\to (\mathbb S'\setminus \mathbb C', \mathscr S'_{\mathbb S' \setminus \mathbb C'})$ be a measurable mapping such that $h^{-1}(A')$ is bounded away from $\mathbb C$ for any $A'\in \mathscr S_{\mathbb S'\setminus \mathbb C'}$ bounded away from $\mathbb C'$. 
Then $\hat h:\mathbb M({\mathbb S\setminus \mathbb C}) \to \mathbb M({\mathbb S'}\setminus \mathbb C')$ defined by $\hat h(\nu) = \nu \circ h^{-1}$ is continuous at $\mu$ provided that $\mu(\partial \mathbb S_0\setminus \mathbb C^r) = 0$ and $\mu(D_h\setminus \mathbb C^r)=0$ for all $r>0$, where $D_h$ is the set of discontinuity points of $h$. \end{lemma} When we focus on L\'evy processes, we are specifically interested in the case where $\mathbb S$ is $\mathbb R_+^{\infty\downarrow}\times[0,1]^\infty$, where $\mathbb R_+^{\infty\downarrow} \triangleq \{x \in \mathbb R_+^\infty: x_1 \geq x_2 \geq \ldots\}$, and $\mathbb S'$ is the Skorokhod space $\mathbb D = \mathbb D([0,1],\mathbb{R})$ --- the space of real-valued RCLL functions on $[0,1]$. We use the usual product metrics $d_{\mathbb R_+^{\infty\downarrow}}(x,y) = \sum_{i=1}^\infty \frac{|x_i - y_i| \wedge 1}{2^i}$ and $d_{[0,1]^\infty}(x,y) = \sum_{i=1}^\infty \frac{|x_i - y_i| }{2^i}$ for $\mathbb R_+^{\infty\downarrow}$ and $[0,1]^\infty$, respectively. For the finite product of metric spaces, we use the maximum metric; i.e., we use $d_{\mathbb S_1\times\cdots\times\mathbb S_d}((x_1,\ldots,x_d), (y_1,\ldots,y_d)) \triangleq \max_{i=1,\ldots,d}d_{\mathbb S_i}(x_i,y_i) $ for the product $\mathbb S_1\times\cdots\times\mathbb S_d$ of metric spaces $(\mathbb S_i,d_{\mathbb S_i})$. For $\mathbb D$, we use the usual Skorokhod $J_1$ metric $d(x,y) \triangleq \inf_{\lambda \in \Lambda} \|\lambda - e\| \vee \| x\circ \lambda - y\|$, where $\Lambda$ denotes the set of all non-decreasing homeomorphisms from $[0,1]$ onto itself, $e$ denotes the identity, and $\|\cdot\|$ denotes the supremum norm. Let $$S_j \triangleq \{(x,u)\in \mathbb{R}_+^{\infty\downarrow}\times [0,1]^\infty: 0, 1, u_1,\ldots, u_j \text{ are all distinct}\}.$$ This set will play the role of $\mathbb S_0$ of Lemma~\ref{lem:almost-continuous-mapping}. Define $T_j: S_j\to \mathbb D$ to be $T_j(x,u) = \sum_{i=1}^j x_i 1_{[u_i,1]}$. 
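The map $T_j$ sends $j$ jump sizes and jump times to a step path; a minimal sketch (illustrative only) evaluates $T_j(x,u)$ pointwise:

```python
def T(x, u, t):
    """Evaluate the step function T_j(x, u) = sum_i x_i 1_{[u_i, 1]} at time t,
    where x lists the j jump sizes and u the corresponding jump times."""
    assert len(x) == len(u)
    # An indicator 1_{[u_i, 1]} contributes x_i whenever u_i <= t.
    return sum((xi for xi, ui in zip(x, u) if ui <= t), 0.0)

# Two upward jumps: size 3 at time 0.2 and size 1 at time 0.5.
path = lambda t: T((3.0, 1.0), (0.2, 0.5), t)
print(path(0.1), path(0.3), path(0.9))  # 0.0 3.0 4.0
```

The requirement $(x,u)\in S_j$ (distinct jump times in $(0,1)$) rules out coinciding jumps, which is what makes $T_j$ continuous on $S_j$ in the $J_1$ metric.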
Let $\mathbb D_j$ be the subspace of the Skorokhod space consisting of nondecreasing step functions, vanishing at the origin, with exactly $j$ jumps, and $\mathbb D_{\leqslant j}\triangleq \bigcup_{0\leq i\leq j} \mathbb D_i$---i.e., nondecreasing step functions vanishing at the origin with at most $j$ jumps. Similarly, let $\mathbb D_{< j}\triangleq \bigcup_{0\leq i<\, j} \mathbb D_i$. Define $\mathbb H_{j} \triangleq \{x \in \mathbb R_+^{\infty\downarrow}: x_j > 0, x_{j+1} = 0\},$ and $\mathbb H_{< \,j} \triangleq \{x\in \mathbb R_+^{\infty \downarrow}: x_{j} = 0\}$. The continuous mapping principle applies to $T_j$, as we can see in the following result. \begin{lemma}[Lemma 5.3 and Lemma 5.4 of \citealp{LRR}]\label{result:L53} Suppose $A \subset \mathbb D$ is bounded away from $\mathbb D_{< \,j}$. Then, $T_j^{-1}(A)$ is bounded away from $\mathbb H_{<\,j} \times [0,1]^\infty$. Moreover, $T_j:S_j \to \mathbb D$ is continuous. \end{lemma} A consequence of Lemma~\ref{result:L53} and Lemma~\ref{lem:almost-continuous-mapping}, along with the observation that $S_j$ is open, is that one can derive a limit theorem in a path space from a limit theorem for jump sizes. \begin{corollary}\label{result:consequence-of-L23-53-54} If $\mu_n\to \mu$ in $\mathbb M\big((\mathbb R_+^{\infty\downarrow} \times [0,1]^\infty) \setminus (\mathbb H_{<\,j} \times [0,1]^\infty)\big)$, and $\mu\big(S_{j}^c\setminus (\mathbb H_{<\, j}\times [0,1]^\infty)^r\big) = 0$ for all $r>0$, then $\mu_n \circ T_{j}^{-1} \to \mu\circ T_{j}^{-1}$ in $\mathbb M(\mathbb D \setminus \mathbb D_{<\, j})$. \end{corollary} To obtain the large deviations for L\'evy processes with two-sided L\'evy measures, we will first establish the large deviations for independent spectrally positive L\'evy processes, and then apply Lemma~\ref{lem:almost-continuous-mapping} with $h(\xi,\zeta) = \xi-\zeta$. The next lemma verifies two important conditions of Lemma~\ref{lem:almost-continuous-mapping} for such $h$.
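The role of $h(\xi,\zeta)=\xi-\zeta$ can be previewed with explicit step paths: when $\xi$ and $\zeta$ never jump at the same time, the difference is again a step function carrying both jump sets. A toy sketch (our own representation of paths by jump lists, for illustration only):

```python
def step_path(jumps):
    """Path t -> sum of jump sizes occurring at times <= t (an RCLL step function)."""
    return lambda t: sum(size for time, size in jumps if time <= t)

# xi jumps by +2 at t = 0.3; zeta jumps by +1 at t = 0.7: no common jump time.
xi = step_path([(0.3, 2.0)])
zeta = step_path([(0.7, 1.0)])
diff = lambda t: xi(t) - zeta(t)   # pointwise version of h(xi, zeta) = xi - zeta

# The difference is again a step function, with an upward jump +2 at 0.3
# and a downward jump -1 at 0.7 -- an element of D_{1,1}.
assert (diff(0.2), diff(0.5), diff(0.9)) == (0.0, 2.0, 1.0)
```

If the two paths jumped at a common time, the jumps would merge (or cancel), which is precisely the situation excluded by the continuity condition of the lemma.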
Let $\mathbb D_{l,m}$ denote the subspace of the Skorokhod space consisting of step functions vanishing at the origin with exactly $l$ upward jumps and $m$ downward jumps. Given $\alpha, \beta>1$, let $\mathbb D_{< \,j,k} \triangleq \bigcup_{(l,m)\in \mathbb{I}_{<\,j,k}} \mathbb D_{l,m}$ and $\mathbb D_{<(j,k)} \triangleq\bigcup_{(l,m)\in \mathbb{I}_{<\,j,k}} \mathbb D_l\times \mathbb D_m$, where $\mathbb{I}_{<\,j,k} \triangleq \big\{(l,m)\in \mathbb{Z}_+^2\setminus\{(j,k)\}: (\alpha-1) l + (\beta-1) m \leq (\alpha-1) j + (\beta-1) k\big\}$ and $\mathbb{Z}_+$ denotes the set of non-negative integers. Note that in the definition of $\mathbb{I}_{<\,j,k}$, the inequality is not strict; however, we choose to use the strict inequality in our notation to emphasize that $(j,k)$ is not included in $\mathbb{I}_{<\,j,k}$. \begin{lemma}\label{lem:continuous-mapping-principle-for-subtraction} Let $h:\mathbb D \times \mathbb D \to \mathbb D$ be defined as $h(\xi,\zeta) \triangleq \xi-\zeta.$ Then, $h$ is continuous at $(\xi,\zeta)\in \mathbb D\times \mathbb D$ such that $(\xi(t) - \xi(t-))(\zeta(t)-\zeta(t-)) = 0$ for all $t\in(0,1]$. Moreover, $h^{-1}(A)\subseteq \mathbb D\times \mathbb D$ is bounded away from $\mathbb D_{<(j,k)}$ for any $A\subseteq \mathbb D$ bounded away from $\mathbb D_{<\,j,k}$. \end{lemma} We next characterize convergence-determining classes for the convergence in $\mathbb M(\mathbb S\setminus \mathbb C)$. \begin{lemma}\label{lem:convergence-determining-class} Suppose that \emph{(i)} $\mathcal A_p$ is a $\pi$-system; \emph{(ii)} each open set $G \subseteq \mathbb S$ bounded away from $\mathbb C$ is a countable union of sets in $\mathcal A_p$; and \emph{(iii)} for each closed set $F\subseteq \mathbb S$ bounded away from $\mathbb C$, there is a set $A \in \mathcal A_p$ bounded away from $\mathbb C$ such that $F\subseteq A^\circ$ and $\mu(A\setminus A^\circ) = 0$.
If, in addition, $\mu\in \mathbb M(\mathbb S\setminus \mathbb C)$ and $\mu_n(A) \to \mu(A)$ for every $A\in \mathcal A_p$ such that $A$ is bounded away from $\mathbb C$, then $\mu_n \to \mu$ in $\mathbb M(\mathbb S\setminus \mathbb C)$. \end{lemma} \begin{remark} Since $\mathbb S$ is a separable metric space, the Lindel\"of property holds. Therefore, a sufficient condition for assumption (ii) of Lemma~\ref{lem:convergence-determining-class} is that for every $x\in \mathbb S \setminus \mathbb C$ and $\epsilon>0$, there is $A\in \mathcal A_p$ such that $x \in A^\circ \subseteq B(x,\epsilon)$. To see that this implies assumption (ii), note that for any given open set $G$, one can construct a cover $\{(A_x)^\circ: x \in G\}$ of $G$ by choosing $A_x$ so that $x \in (A_x)^\circ \subseteq G$ and then extract a countable subcover (due to the Lindel\"of property) whose union is equal to $G$. Note also that if $A$ in assumption (iii) is open, then $\mu(A\setminus A^\circ) = \mu(\emptyset) = 0$ automatically. \end{remark} \section{Sample-Path Large Deviations}\label{sec:sample-path-ldps} In this section, we present large-deviations results for scaled L\'evy processes with heavy-tailed L\'evy measures. Section~\ref{subsec:one-sided-large-deviations} studies a special case, where the L\'evy measure is concentrated on the positive part of the real line, and Section~\ref{subsec:two-sided-large-deviations} extends this result to L\'evy processes with two-sided L\'evy measures. In both cases, let $X_n(t) \triangleq X(nt)$ be a scaled process of $X$, where $X$ is a L\'evy process with a L\'evy measure $\nu$. 
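As a running illustration, one can simulate a pure-jump special case, a compound Poisson process with Pareto jump sizes, and form the scaled path $X_n(t)=X(nt)$. All parameter choices below are our own, made only for the sketch:

```python
import random

random.seed(7)
alpha, n = 1.5, 50          # Pareto tail index and scaling level (our choices)

# Jump times of a unit-rate Poisson process on [0, n], with i.i.d.
# Pareto(alpha) jump sizes: a pure-jump Levy (compound Poisson) process X.
t, jumps = 0.0, []
while True:
    t += random.expovariate(1.0)
    if t > n:
        break
    jumps.append((t, random.paretovariate(alpha)))   # P(size > x) = x^{-alpha}

def X(s):
    """X(s) = sum of all jumps occurring by time s."""
    return sum(size for time, size in jumps if time <= s)

X_n = lambda t: X(n * t)    # the scaled process X_n(t) = X(nt), t in [0, 1]

assert X_n(0.0) == 0.0
assert X_n(1.0) == sum(size for _, size in jumps)   # every jump occurs by t = 1
```

For regularly varying jumps such as these, the spatially rescaled path $X(nt)/n$ is typically dominated by its few largest jumps, which is the phenomenon the limit theorems below quantify.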
Recall that $X_n$ has the It\^{o} representation (see, for example, Section 2 of \citealp{kyprianou2014fluctuations}): \begin{align*} X_n(s) &= nsa + B(ns) \\ &\quad + \int_{|x|\leq 1} x[N([0,ns]\times dx) - ns\nu(dx)] + \int_{|x|>1} xN([0,ns]\times dx), \end{align*} with $a$ a drift parameter, $B$ a Brownian motion, and $N$ a Poisson random measure with mean measure Leb$\times\nu$ on $[0,n]\times(\mathbb R\setminus\{0\})$; Leb denotes the Lebesgue measure. \subsection{One-sided Large Deviations}\label{subsec:one-sided-large-deviations} Let $X$ be a L\'evy process with L\'evy measure $\nu$. In this section, we assume that $\nu$ is a regularly varying (at infinity, with index $-\alpha<-1$) L\'evy measure concentrated on $(0,\infty)$. Consider a centered and scaled process \begin{equation}\label{math-display-above-definition-nu-alpha-j} \bar X_n(s) \triangleq \frac{1}{n}X_n(s) - sa - \mu_1^+\nu_1^+s, \end{equation} where $\mu_1^+ \triangleq \frac{1}{\nu_1^+}\int_{[1,\infty)} x\nu(dx)$, and $\nu_1^+ \triangleq \nu[1,\infty)$. For each constant $\gamma>1$, let $\nu_\gamma(x,\infty) \triangleq x^{-\gamma}$, and let $\nu_\gamma^j$ denote the restriction (to $\mathbb R_+^{j\downarrow}$) of the $j$-fold product measure of $\nu_\gamma$. Let $C_0(\cdot)\triangleq \delta_\mathbf 0(\cdot)$ be the Dirac measure concentrated on the zero function. Additionally, for each $j\geq 1$, define a measure $C_j\in \mathbb M(\mathbb D \setminus \mathbb D_{<\, j})$ concentrated on $\mathbb D_{j}$ as $C_j(\cdot) \triangleq \mathbf{E} \Big[\nu_\alpha ^j \{y\in (0,\infty)^j:\sum_{i=1}^j y_i 1_{[U_i,1]}\in \cdot\}\Big]$, where the random variables $U_i, i\geq 1$, are i.i.d.\ uniform on $[0,1]$. The proof of the main result of this section hinges critically on the following limit theorem.
\begin{theorem}\label{thm:one-sided-limit-theorem} For each $j\geq 0$, \begin{equation}\label{eq:main-result} (n\nu[n,\infty))^{-j}\P(\bar X_n\in \cdot) \to C_j(\cdot), \end{equation} in $\mathbb M(\mathbb D \setminus \mathbb D_{<\, j})$, as $n\to \infty$. Moreover, $\bar X_n$ is asymptotically equivalent to a process that assumes values in $\mathbb D_{\leqslant j}$ almost surely. \end{theorem} \begin{proof}[Proof Sketch] The proof of Theorem~\ref{thm:one-sided-limit-theorem} is based on establishing the asymptotic equivalence of $\bar X_n$ and the process obtained by keeping only its $j$ biggest jumps, which we will denote by $\hat J_n^{\leqslant j}$ in Section~\ref{sec:proofs}. Such an equivalence is established via Proposition~\ref{prop:asymptotic-equivalence-Xbar-Jbar} and Proposition~\ref{prop:asymptotic-equivalence-Jbar-Jj}. Then, Proposition~\ref{prop:Jj} identifies the limit of $\hat J_n^{\leqslant j}$, which coincides with the limit in (\ref{eq:main-result}). The full proof of Theorem~\ref{thm:one-sided-limit-theorem} is provided in Section~\ref{subsec:proofs-for-sample-path-ldps}. \end{proof} Recall that $\mathbb D_s^\uparrow$ denotes the subset of $\mathbb D$ consisting of non-decreasing step functions vanishing at the origin, and $\mathcal D_+(\xi)$ denotes the number of upward jumps of an element $\xi$ in $\mathbb D$. Finally, set \begin{equation}\label{def:JA} \mathcal J(A) \triangleq \inf_{\xi\in \mathbb D_s^\uparrow \cap A } \mathcal D_+(\xi). \end{equation} Now we are ready to present the main result of this section, which is the following large-deviations theorem for $\bar X_n$. \begin{theorem}\label{thm:one-sided-main-theorem} Suppose that $A$ is a measurable set.
If $\mathcal J(A)<\infty$, and if $A_\delta \cap \mathbb D_{\leqslant \mathcal J(A)}$ is bounded away from $\mathbb D_{< \mathcal J(A)}$ for some $\delta>0$, then \begin{equation}\label{eq:one-sided-large-deviations} \begin{aligned} C_{\mathcal J(A)}(A^\circ) & \leq \liminf_{n\rightarrow\infty} \frac{\P(\bar X_n \in A) }{(n \nu[n,\infty))^{\mathcal J(A)}} \\ & \leq \limsup_{n\rightarrow\infty} \frac{\P(\bar X_n \in A)}{(n \nu[n,\infty))^{\mathcal J(A)}} \leq C_{\mathcal J(A)}(\bar A). \end{aligned} \end{equation} If $\mathcal J(A)= \infty$, and $A_\delta \cap \mathbb D_{\leqslant i+1}$ is bounded away from $\mathbb D_{\leqslant i}$ for some $\delta>0$ and $i\geq 0$, then \begin{equation}\label{eq:one-sided-large-deviations-null} \lim_{n\to\infty}\frac{\P(\bar X_n \in A)}{(n\nu[n,\infty))^{i}} = 0. \end{equation} In particular, in case $\mathcal J(A)< \infty$, \eqref{eq:one-sided-large-deviations} holds if $A$ is bounded away from $\mathbb D_{<\,\mathcal J(A)}$; in case $\mathcal J(A) = \infty$, \eqref{eq:one-sided-large-deviations-null} holds if $A$ is bounded away from $\mathbb D_{\leqslant i}$. \end{theorem} \begin{proof} We first consider the case $\mathcal J(A) < \infty$. Note that $\mathcal J(A^\circ) > \mathcal J(A)$ implies that $A^\circ$ does not contain any element of $\mathbb D_{\leqslant \mathcal J(A)}$. Since $C_{\mathcal J(A)}$ is supported on $\mathbb D_{\leqslant \mathcal J(A)}$, $A^\circ$ is a $C_{\mathcal J(A)}$-null set. Therefore, the lower bound holds trivially if $\mathcal J(A^\circ) > \mathcal J(A)$. On the other hand, $\mathcal J(A) = \mathcal J(\bar A)$. To see this, suppose not---i.e., $\mathcal J(\bar A) < \mathcal J(A)$. Then, there exists $\zeta\in \mathbb D_s^\uparrow \cap \bar A$ such that $\zeta \in \mathbb D_{< \mathcal J(A)}$.
This implies that $\zeta \in A_\delta \cap \mathbb D_{\leqslant \mathcal J(A)}$ for any $\delta>0$, which contradicts the assumption that $A_\delta \cap \mathbb D_{\leqslant \mathcal J(A)}$ is bounded away from $\mathbb D_{< \mathcal J(A)}$ for some $\delta>0$. In view of these observations, we can assume w.l.o.g.\ that $\mathcal J(A^\circ) = \mathcal J(A) = \mathcal J(\bar A)$. Now, from Theorem~\ref{thm:one-sided-limit-theorem} with $j=\mathcal J(A^\circ)$ along with the lower bound of Lemma~\ref{lem:extended-bounds}, \begin{align*} C_{\mathcal J(A)} (A^\circ) & = C_{\mathcal J(A^\circ)} (A^\circ) \leq \liminf_{n\to\infty} \frac{\P(\bar X_n \in A^\circ)}{(n\nu[n,\infty))^{\mathcal J(A^\circ)}} \\ &\leq \liminf_{n\to\infty} \frac{\P(\bar X_n \in A)}{(n\nu[n,\infty))^{\mathcal J(A)}}. \end{align*} Similarly, from Theorem~\ref{thm:one-sided-limit-theorem} with $j=\mathcal J(\bar A)$ along with the upper bound of Lemma~\ref{lem:extended-bounds}, \begin{align*} \limsup_{n\to\infty} \frac{\P(\bar X_n \in A)}{(n\nu[n,\infty))^{\mathcal J(A)}} & \leq \limsup_{n\to\infty} \frac{\P(\bar X_n \in \bar A)}{(n\nu[n,\infty))^{\mathcal J(\bar A)}} \\ & \leq C_{\mathcal J(\bar A)} (\bar A) = C_{\mathcal J(A)} (\bar A). \end{align*} In case $\mathcal J(A) = \infty$, we reach the conclusion by applying Theorem~\ref{thm:one-sided-limit-theorem} with $j=i$ along with noting that $C_i(\bar A) = 0$. \end{proof} Theorem~\ref{thm:one-sided-main-theorem} dictates the ``right'' choice of $j$ in Theorem~\ref{thm:one-sided-limit-theorem} for which (\ref{eq:main-result}) can lead to a limit in $(0,\infty)$. We conclude this section with an investigation of a sufficient condition for $C_j$-continuity; i.e., we provide a sufficient condition on $A$ which guarantees $C_j(\partial A) = 0$. The latter property implies \begin{equation} \label{A-continuity} C_j(A^\circ) = C_j(A)= C_j(\bar A), \end{equation} so that the liminf and limsup in our asymptotic estimates yield the same result.
Assume that $A$ is a subset of $\mathbb D_{j}$ bounded away from $\mathbb D_{<\, j}$; i.e., $d(A,\mathbb D_{<\,j})>\gamma$ for some $\gamma>0$. Consider a path $\xi\in A$. Note that every $\xi\in \mathbb D_j$ is determined by the pair of jump sizes and jump times $(x,u) \in (0,\infty)^j\times [0,1]^j$; i.e., $\xi(t) = \sum_{i=1}^j x_i 1_{[u_i,1]}(t)$. Formally, we define a mapping $\hat T_j: \hat S_j \rightarrow \mathbb D_j$ by $\hat T_j(x,u) = \sum_{i=1}^j x_i 1_{[u_i,1]}$, where $\hat S_j \triangleq \{(x,u)\in \mathbb{R}_+^{j\downarrow}\times [0,1]^j: 0, 1, u_1,\ldots, u_j \text{ are all distinct}\}$. Since $d(A,\mathbb D_{<\, j})>\gamma$, we know that $\hat T_j(x,u)\in A$ implies $x\in (\gamma,\infty)^j$; see Lemma~\ref{lem:Djk} (b). In view of this, we can see that (\ref{A-continuity}) holds if the Lebesgue measure of $\hat T_{j}^{-1}(\partial A )$ is 0, since $C_j(A) = \int_{(x,u) \in \hat T_j^{-1}(A)} \,du\, d\nu_\alpha^j(x)$. One of the typical settings that arises in applications is that the set $A$ can be written as a finite combination of unions and intersections of $\phi_1^{-1}(A_1),\ldots,\phi_m^{-1}(A_m)$, where each $\phi_i:\mathbb D \to \mathbb S_i$ is a continuous function, and each $A_i$ is a subset of a general topological space $\mathbb S_i$. If we denote this operation of taking unions and intersections by $\Psi$ (i.e., $A = \Psi(\phi_1^{-1}(A_1),\ldots,\phi_m^{-1}(A_m))$), then \[ \Psi(\phi_1^{-1}(A_1^\circ),\ldots,\phi_m^{-1}(A_m^\circ)) \subseteq A^\circ \subseteq A \subseteq \bar A \subseteq \Psi(\phi_1^{-1}(\bar A_1),\ldots,\phi_m^{-1}(\bar A_m)). \] Therefore, (\ref{A-continuity}) holds if $\hat T_j^{-1}(\Psi(\phi_1^{-1}(\bar A_1),\ldots,\phi_m^{-1}(\bar A_m)))\setminus \hat T_j^{-1}(\Psi(\phi_1^{-1}(A_1^\circ),\allowbreak\ldots,\allowbreak\phi_m^{-1}(A_m^\circ)))$ has Lebesgue measure zero. A similar principle holds for the limit measures $C_{j,k}$, defined in the next section where we deal with two-sided L\'evy processes.
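For a concrete instance (our own example, not taken from the text): take $j=1$ and $A=\{\xi\in\mathbb D: \xi(1)\geq a\}$ with $a>0$. A single jump of size $x$ at time $u\in(0,1)$ yields $\xi(1)=x$, so $\hat T_1^{-1}(\partial A)$ corresponds to the Lebesgue-null set $\{x=a\}$, and $C_1(A)=\nu_\alpha[a,\infty)=a^{-\alpha}$ since the uniform jump time integrates out. The integral can be checked numerically:

```python
alpha, a, M, dx = 1.5, 2.0, 1000.0, 0.01

# nu_alpha has density alpha * x^(-alpha - 1); integrating it over [a, M]
# should give a^(-alpha) - M^(-alpha).  Midpoint Riemann sum as a check.
steps = int((M - a) / dx)
riemann = sum(alpha * (a + (k + 0.5) * dx) ** (-alpha - 1) * dx
              for k in range(steps))

exact = a ** (-alpha) - M ** (-alpha)
assert abs(riemann - exact) < 1e-4
# Since M^(-alpha) is tiny, this is numerically close to C_1(A) = a^(-alpha).
assert abs(riemann - a ** (-alpha)) < 1e-3
```

The same one-dimensional computation underlies the polynomial rates $(n\nu[n,\infty))^{\mathcal J(A)}$ appearing in the main theorems.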
\subsection{Two-sided Large Deviations}\label{subsec:two-sided-large-deviations} Consider a two-sided L\'evy measure $\nu$ for which $\nu[x,\infty)$ is regularly varying with index $-\alpha$ and $\nu(-\infty, -x]$ is regularly varying with index $-\beta$. Let $$ \bar X_n(s) \triangleq \frac{1}{n}X_n(s) - sa - (\mu_1^+\nu_1^+-\mu_1^-\nu_1^-)s, $$ where \begin{align*} \mu_1^+ &\triangleq \frac{1}{\nu_1^+}\int_{[1,\infty)} x\nu(dx), & \nu_1^+ &\triangleq \nu[1,\infty), \\ \mu_1^- &\triangleq \frac{-1}{\nu_1^-}\int_{(-\infty,-1]} x\nu(dx), & \nu_1^- &\triangleq \nu(-\infty,-1]. \end{align*} Recall the definition of $\mathbb D_{j,k}$ given below Corollary~\ref{result:consequence-of-L23-53-54}, and the definition of $\nu_\alpha^j$ and $\nu_\beta^k$ as given below \eqref{math-display-above-definition-nu-alpha-j}. Let $C_{0,0}(\cdot) \triangleq \delta_{\mathbf 0}(\cdot)$ be the Dirac measure concentrated on the zero function. For each $(j,k)\in \mathbb{Z}_+^2\setminus \{(0,0)\}$, define a measure $C_{j,k}\in \mathbb M(\mathbb D\setminus \mathbb D_{<j,k})$ concentrated on $\mathbb D_{j,k}$ as $C_{j,k}(\cdot) \triangleq \mathbf{E} \Big[\nu_\alpha ^j\times\nu_\beta^k \{(x,y)\in (0,\infty)^j\times(0,\infty)^k:\sum_{i=1}^j x_i 1_{[U_i,1]} - \sum_{i=1}^k y_i1_{[V_i,1]}\in \cdot\}\Big]$, where the $U_i$'s and $V_i$'s are i.i.d.\ uniform on $[0,1]$. Recall that $\mathbb D_{< j,k} = \bigcup_{(l,m)\in \mathbb{I}_{<j,k}} \mathbb D_{l,m}$ and $\mathbb{I}_{<j,k} = \big\{(l,m)\in \mathbb{Z}_+^2\setminus\{(j,k)\}: (\alpha-1) l + (\beta-1) m \leq (\alpha-1) j + (\beta-1) k\big\}$. As in the one-sided case, the proof of the main theorem of this section hinges on the following limit theorem. \begin{theorem}\label{thm:two-sided-limit-theorem} For each $(j, k)\in \mathbb{Z}_+^2$, \begin{equation}\label{eq:two-sided-limit-theorem} (n\nu[n,\infty))^{-j}(n\nu(-\infty,-n])^{-k}\P(\bar X_n\in \cdot) \to C_{j,k}(\cdot) \end{equation} in $\mathbb M(\mathbb D \setminus \mathbb D_{< j,k})$ as $n\to \infty$.
\end{theorem} The proof of Theorem~\ref{thm:two-sided-limit-theorem} builds on Theorem~\ref{thm:one-sided-limit-theorem}, using Lemma~\ref{thm:simple-product-space}, Lemma~\ref{thm:union-limsup}, Lemma~\ref{lem:continuous-mapping-principle-for-subtraction}, and Theorem~\ref{thm:multi-d-limit-theorem}. We provide the full proof in Section~\ref{subsec:proofs-for-sample-path-ldps}. Let $\mathcal I(j,k) \triangleq (\alpha-1)j + (\beta-1)k$, and consider a pair of integers $(\mathcal J(A),\mathcal K(A))$ such that \begin{equation}\label{def:JK} (\mathcal J(A), \mathcal K(A)) \in \argmin_{\substack{(j,k)\in \mathbb Z_+^2\\ \mathbb D_{j,k} \cap A \neq \emptyset}}\mathcal I(j,k). \end{equation} The next theorem is the first main result of this section. \begin{theorem}\label{thm:two-sided-main-theorem} Suppose that $A$ is a measurable set. If the argument minimum in (\ref{def:JK}) is non-empty and $A$ is bounded away from $\mathbb D_{< \mathcal J(A), \mathcal K(A)}$, then the argument minimum is unique and \begin{equation}\label{eq:two-sided-main-result} \begin{aligned} \liminf_{n\rightarrow\infty} \frac{\P(\bar X_n \in A) }{(n \nu[n,\infty))^{\mathcal J(A)}(n \nu(-\infty,-n])^{\mathcal K(A)}} &\geq C_{\mathcal J(A), \mathcal K(A)}(A^\circ), \\ \limsup_{n\rightarrow\infty} \frac{\P(\bar X_n \in A)}{(n \nu[n,\infty))^{\mathcal J(A)} (n \nu(-\infty,-n])^{\mathcal K(A)} } &\leq C_{\mathcal J(A),\mathcal K(A)}(\bar A). \end{aligned} \end{equation} Moreover, if the argument minimum in (\ref{def:JK}) is empty and $A$ is bounded away from $\mathbb D_{<l,m}\cup \mathbb D_{l,m}$ for some $(l,m)\in \mathbb{Z}_+^2\setminus \{(0,0)\}$, then \begin{equation}\label{two-sided-limit-tends-to-zero} \lim_{n\to\infty}\frac{ \P(\bar X_n \in A)}{(n \nu[n,\infty))^{l}\allowbreak (n \nu(-\infty,-n])^{m}} = 0. \end{equation} \end{theorem} The proof of the theorem is provided below as a consequence of the following lemma.
\begin{lemma}\label{wasteful-lemma} Suppose that a sequence of $\mathbb D$-valued random elements $Y_n$ satisfies \eqref{eq:two-sided-limit-theorem} (with $\bar X_n$ replaced with $Y_n$) for each $(j,k)\in \mathbb Z_+^2$. Then \eqref{eq:two-sided-main-result} (with $\bar X_n$ replaced with $Y_n$) holds if $A$ is a measurable set for which the argument minimum in \eqref{def:JK} is non-empty, and $A$ is bounded away from $\mathbb D_{<\mathcal J(A), \mathcal K(A)}$. Moreover, \eqref{two-sided-limit-tends-to-zero} (with $\bar X_n$ replaced with $Y_n$) holds if the argument minimum in (\ref{def:JK}) is empty and $A$ is bounded away from $\mathbb D_{<l,m}\cup \mathbb D_{l,m}$ for some $(l,m)\in \mathbb{Z}_+^2\setminus \{(0,0)\}$. \end{lemma} The proof of this lemma is provided in Section~\ref{subsec:proofs-for-sample-path-ldps}. \begin{proof}[Proof of Theorem~\ref{thm:two-sided-main-theorem}] The uniqueness of the argument minimum is immediate from the assumption that $A$ is bounded away from $\mathbb D_{<\mathcal J(A), \mathcal K(A)}$. Since $\bar X_n$ satisfies \eqref{eq:two-sided-limit-theorem} by Theorem~\ref{thm:two-sided-limit-theorem}, the conclusion of the theorem follows from applying Lemma~\ref{wasteful-lemma} with $Y_n = \bar X_n$. \end{proof} In case one is interested in a set for which the $\argmin$ of $\mathcal I$ in (\ref{def:JK}) is not unique, a natural approach is to partition $A$ into smaller sets and analyze each element separately. In the next theorem, we show that this strategy can be successfully employed with a minimal requirement on $A$. However, due to the presence of two different slowly varying functions $n^\alpha\nu[n,\infty)$ and $n^\beta\nu(-\infty,-n]$, the limit behavior may not be dominated by a single $\mathbb D_{l,m}$.
To deal with this case, let $\mathbb{I}_{= j,k} \triangleq \{(l,m)\in \mathbb{Z}_+^2: (\alpha-1)l+(\beta-1)m = (\alpha-1)j+(\beta-1)k\}$, $\mathbb{I}_{\ll j,k} \triangleq \{(l,m)\in \mathbb{Z}_+^2: (\alpha -1)l + (\beta-1)m < (\alpha-1)j+(\beta-1)k\}$, $\mathbb D_{= j,k} \triangleq \bigcup_{(l,m)\in \mathbb{I}_{= j,k}} \mathbb D_{l,m}$, and $\mathbb D_{\ll j,k} \triangleq \bigcup_{(l,m)\in \mathbb{I}_{\ll j,k}} \mathbb D_{l,m}$. Denote the slowly varying functions $n^\alpha\nu[n,\infty)$ and $n^\beta\nu(-\infty,-n]$ by $L_+(n)$ and $L_-(n)$, respectively. \begin{theorem}\label{thm:two-sided-multiple-asymptotics} Let $A$ be a measurable set and suppose that the argument minimum in (\ref{def:JK}) is non-empty and contains a pair of integers $(\mathcal J(A),\mathcal K(A))$. If $A_\delta \cap \mathbb D_{=\mathcal J(A), \mathcal K(A)}$ is bounded away from $\mathbb D_{\ll\mathcal J(A), \mathcal K(A)}$ for some $\delta>0$, then for any given $\epsilon > 0$, there exists $N\in \mathbb N$ such that \begin{equation}\label{two-sided-large-deviation-combined} \begin{aligned} \P(\bar X_n \in A) & \geq \frac{\sum_{(l,m)} \big(C_{l,m}(A^\circ)-\epsilon\big)L_+^l(n)L_-^m(n)}{n^{(\alpha-1)\mathcal J(A)+(\beta-1)\mathcal K(A)}}, \\ \P(\bar X_n \in A) & \leq \frac{\sum_{(l,m)} \big(C_{l,m}(\bar A)+\epsilon\big)L_+^l(n)L_-^m(n)}{n^{(\alpha-1)\mathcal J(A)+(\beta-1)\mathcal K(A)}}, \end{aligned} \end{equation} for all $n\geq N$, where the summations are over the pairs $(l,m)\in \mathbb{I}_{=\mathcal J(A), \mathcal K(A)}$. In particular, \eqref{two-sided-large-deviation-combined} holds if $A$ is bounded away from $\mathbb D_{\ll \mathcal J(A), \mathcal K(A)}$. \end{theorem} \begin{proof} Note first that from Lemma~\ref{lemma-for-two-sided-multiple-optima} (i), there exists a $\delta'>0$ such that $\mathbb D_{\ll \mathcal J(A), \mathcal K(A)}$ is bounded away from $A \cap (\mathbb D_{l,m})_{\delta'}$ for all $(l,m) \in \mathbb{I}_{=\mathcal J(A), \mathcal K(A)}$.
Moreover, applying Lemma~\ref{lemma-for-two-sided-multiple-optima} (ii) to each $A \cap (\mathbb D_{l,m})_{\delta'}$, we conclude that there exists $\rho>0$ such that $A\cap (\mathbb D_{l,m})_\rho$ is bounded away from $(\mathbb D_{j,k})_\rho$ for any two distinct pairs $(l,m),(j,k) \in \mathbb{I}_{=\mathcal J(A), \mathcal K(A)}$. This means that the sets $A\cap (\mathbb D_{l,m})_\rho$ are pairwise disjoint and bounded away from $\mathbb D_{<l,m}$. To derive the lower bound, we apply Theorem~\ref{thm:two-sided-main-theorem} to $A^\circ\cap (\mathbb D_{l,m})^{\rho}$ to obtain \begin{align*} C_{l,m}(A^\circ) &= C_{l,m}(A^\circ\cap \mathbb D_{l,m} ) = C_{l,m}(A^\circ\cap \mathbb D_{l,m} \cap (\mathbb D_{l,m})^{\rho} ) \\ &= C_{l,m}(A^\circ\cap (\mathbb D_{l,m})^{\rho} ) \leq \liminf_{n\to\infty} \frac{\P(\bar X_n \in A^\circ\cap (\mathbb D_{l,m})^\rho)}{(n\nu[n,\infty))^l(n\nu(-\infty,-n])^m} \\ & \leq \liminf_{n\to\infty} \frac{\P(\bar X_n \in A\cap (\mathbb D_{l,m})^\rho)}{(n\nu[n,\infty))^l(n\nu(-\infty,-n])^m}, \end{align*} for each $(l,m)\in \mathbb{I}_{=\mathcal J(A), \mathcal K(A)}$. That is, for any given $\epsilon>0$, there exists an $N_{l,m}\in \mathbb N$ such that \begin{equation}\label{eq:asympt-low-lm} \begin{aligned} \frac{\big(C_{l,m}( A^\circ)-\epsilon\big)L_+^l(n)L_-^m(n)}{n^{(\alpha-1)l+(\beta-1)m}} \leq \P\big(\bar X_n \in A \cap (\mathbb D_{l,m})^\rho\big), \end{aligned} \end{equation} for all $n\geq N_{l,m}$. Meanwhile, an obvious bound holds for $A\setminus \bigcup_{(l,m)\in \mathbb{I}_{=\mathcal J(A), \mathcal K(A)}} \allowbreak (\mathbb D_{l,m})^\rho$; i.e., \begin{equation}\label{eq:asympt-low-0} 0\leq \P\left(\bar X_n \in \textstyle{A\setminus \bigcup_{(l,m)\in \mathbb{I}_{=\mathcal J(A), \mathcal K(A)}} (\mathbb D_{l,m})^\rho}\right).
\end{equation} Since $(\alpha-1)l + (\beta-1)m = (\alpha-1)\mathcal J(A) + (\beta-1)\mathcal K(A)$ for $(l,m)\in \mathbb{I}_{=\mathcal J(A),\mathcal K(A)}$, summing (\ref{eq:asympt-low-lm}) over $(l,m) \in \mathbb{I}_{=\mathcal J(A), \mathcal K(A)}$ together with (\ref{eq:asympt-low-0}), we arrive at the lower bound of the theorem, with $N = \max_{(l,m)\in \mathbb{I}_{=\mathcal J(A), \mathcal K(A)}} N_{l,m}$. Turning to the upper bound, we apply Theorem~\ref{thm:two-sided-main-theorem} to $\bar A\cap (\mathbb D_{l,m})_\rho$ to get \begin{align*} \limsup_{n\to\infty} \frac{\P(\bar X_n \in \bar A \cap (\mathbb D_{l,m})_\rho )}{(n\nu[n,\infty))^l(n\nu(-\infty,-n])^m} \leq C_{l,m}(\bar A\cap (\mathbb D_{l,m})_\rho) = C_{l,m}(\bar A) \end{align*} for each $(l,m) \in \mathbb{I}_{=\mathcal J(A), \mathcal K(A)}$. That is, for any given $\epsilon>0$, there exists $N_{l,m}'\in \mathbb N$ such that \begin{equation}\label{eq:asympt-up-lm} \begin{aligned} \P(\bar X_n \in A \cap (\mathbb D_{l,m})_\rho) \leq \frac{\big(C_{l,m}(\bar A)+\epsilon/2\big)L_+^l(n)L_-^m(n)}{n^{(\alpha-1)l+(\beta-1)m}}, \end{aligned} \end{equation} for all $n\geq N_{l,m}'$. On the other hand, since $\bar A\setminus \bigcup_{(l,m)\in \mathbb{I}_{=\mathcal J(A),\mathcal K(A)}}(\mathbb D_{l,m})^{\rho} $ is closed and bounded away from $\mathbb D_{<\mathcal J(A), \mathcal K(A) }$, \begin{equation*} \begin{aligned} \limsup_{n\to\infty} \frac{\P\left(\bar X_n \in A\setminus \bigcup_{(l,m)}(\mathbb D_{l,m})^\rho \right)}{(n\nu[n,\infty))^{\mathcal J(A)}(n\nu(-\infty,-n])^{\mathcal K(A)}} \leq C_{\mathcal J(A),\mathcal K(A)}\left( \textstyle{ \bar A\setminus \bigcup_{(l,m)}(\mathbb D_{l,m})^\rho} \right), \end{aligned} \end{equation*} where the union is over the pairs $(l,m)\in \mathbb{I}_{=\mathcal J(A),\mathcal K(A)}$.
Therefore, there exists $N'$ such that \begin{equation}\label{eq:asympt-up-0} \begin{aligned} & \P\left(\bar X_n \in \textstyle{A\setminus \bigcup_{(l,m)}(\mathbb D_{l,m})^\rho} \right) \\ & \leq \frac{\left(C_{\mathcal J(A),\mathcal K(A)}\left(\textstyle{ \bar A\setminus \bigcup_{(l,m)}(\mathbb D_{l,m})^\rho}\right)+\epsilon/2\right)L_+^{\mathcal J(A)}(n)L_-^{\mathcal K(A)}(n)}{n^{(\alpha-1)\mathcal J(A)+(\beta-1)\mathcal K(A)}} \\ & = \frac{\left(\epsilon/2\right)L_+^{\mathcal J(A)}(n)L_-^{\mathcal K(A)}(n)}{n^{(\alpha-1)\mathcal J(A)+(\beta-1)\mathcal K(A)}}, \end{aligned} \end{equation} for $n \geq N'$, since $\textstyle{ \bar A\setminus \bigcup_{(l,m)}(\mathbb D_{l,m})^\rho}$ is disjoint from the support of $C_{\mathcal J(A), \mathcal K(A)}$. Summing (\ref{eq:asympt-up-lm}) over $(l,m)\in \mathbb{I}_{=\mathcal J(A), \mathcal K(A)}$ and (\ref{eq:asympt-up-0}), we obtain \begin{equation} \P(\bar X_n \in A) \leq \frac{\sum_{(l,m)} \big(C_{l,m}(\bar A)+\epsilon\big)L_+^l(n)L_-^m(n)}{n^{(\alpha-1)\mathcal J(A)+(\beta-1)\mathcal K(A)}}, \end{equation} for $n \geq N$, where $N = N'\vee\max_{(l,m)\in \mathbb{I}_{=\mathcal J(A), \mathcal K(A)}} N'_{l,m}$; this is the upper bound of the theorem. \end{proof} \section{Implications}\label{sec:implications} This section explores the implications of the large-deviations results in Section~\ref{sec:sample-path-ldps}, and is organized as follows. Section~\ref{subsec:random-walks} proves a result similar to Theorem~\ref{thm:two-sided-main-theorem}, now focusing on random walks with regularly varying increments. Section~\ref{subsec:conditional-limit-theorem} illustrates that conditional limit theorems can easily be studied by means of the limit theorems established in Section~\ref{sec:sample-path-ldps}. Section~\ref{subsec:weak-ldp} develops a weak large deviation principle (LDP) of the form (\ref{weakldp}) for the scaled L\'evy processes.
Finally, Section~\ref{subsec:nonexistence} shows that the weak LDP proved in Section~\ref{subsec:weak-ldp} is the best one can hope for in the presence of regularly varying tails, by showing that a full LDP of the form (\ref{weakldp}) does not exist. \subsection{Random Walks}\label{subsec:random-walks} Let $S_k, k\geq 0,$ be a random walk, set $\bar S_n(t) = S_{[nt]}/n, t\geq 0$, and define $\bar S_n = \{\bar S_n(t), t\in [0,1]\}$. Let $N(t), t\geq 0,$ be an independent unit rate Poisson process. Define the L\'evy process $X(t) \triangleq S_{N(t)}, t\geq 0$, and set $\bar X_n(t) \triangleq X(nt)/n, t\geq 0$. The goal is to prove an analogue of Theorem~\ref{thm:two-sided-main-theorem} for the scaled random walk $\bar S_n$. Let $\mathcal J(\cdot)$, $\mathcal K(\cdot)$, and $C_{j,k}(\cdot)$ be defined as in Section~\ref{subsec:two-sided-large-deviations}. \begin{theorem}\label{thm:random-walk} Suppose that $\P(S_1 \geq x)$ is regularly varying with index $-\alpha$ and $\P(S_1 \leq -x)$ is regularly varying with index $-\beta$. Let $A$ be a measurable set bounded away from $\mathbb D_{< \mathcal J(A), \mathcal K(A)}$. Then \begin{equation}\label{eq:random-walk} \begin{aligned} & \liminf_{n\rightarrow\infty} \frac{\P(\bar S_n \in A) }{(n \P(S_1\geq n))^{\mathcal J(A)}(n \P(S_1\leq -n))^{\mathcal K(A)}} \geq C_{\mathcal J(A), \mathcal K(A)}(A^\circ), \\ & \limsup_{n\rightarrow\infty} \frac{ \P(\bar S_n \in A)}{(n \P(S_1\geq n))^{\mathcal J(A)} (n \P(S_1\leq -n))^{\mathcal K(A)} } \leq C_{\mathcal J(A),\mathcal K(A)}(\bar A). \end{aligned} \end{equation} \end{theorem} \begin{proof} The idea is to combine our notion of asymptotic equivalence with Theorem~\ref{thm:two-sided-main-theorem}. First, we need to derive the asymptotic behavior of the L\'evy measure of the constructed L\'evy process. From Example A3.17 in \cite{EmbrechtsKluppelbergMikosch97}, we obtain $\P(X(1)\geq x) \sim \P(S_1\geq x)$. 
Moreover, \cite{EmbrechtsVeraverbeke} implies that $\nu(x,\infty) \sim \P(X(1)\geq x)$. Similarly, it follows that $\nu(-\infty,-x)\sim \P(S_1 \leq - x)$. Now, from Lemma~\ref{wasteful-lemma}, \eqref{eq:random-walk} is proved if \eqref{eq:two-sided-limit-theorem} holds for $\bar S_n$. In view of Corollary~\ref{lem:asymptotic-equivalence}, \eqref{eq:two-sided-limit-theorem} holds---and hence, the proof is completed---if we prove the asymptotic equivalence between $\bar X_n$ and $\bar S_n$ (w.r.t.\ a geometrically decaying sequence). To prove the asymptotic equivalence, we first argue that the Skorokhod distance between $\bar S_n$ and $\bar X_n$ is bounded by $\sup_{t\in [0,1]} |N(tn)/n - t|$. To see this, define the homeomorphism $\lambda_n(t)$ as the linear interpolation of the jump points of $N(nt)/n$, and observe that $\bar X_n(t) = \bar S_n (\lambda_n(t))$. Thus, the distance between $\bar S_n$ and $\bar X_n$ is bounded by $\sup_{t\in [0,1]} |\lambda_n(t)-t|$ which, in itself, is bounded by $\sup_{t\in [0,1]} |N(tn)/n - t|$. From Lemma~\ref{lem:cont_etemadi}, \begin{equation*} \P\Big(\sup_{t\in [0,1]} |N(tn)/n - t|>\delta\Big) \leq 3\sup_{t\in [0,1]}\P\big( |N(tn)/n - t|>\delta/3\big), \end{equation*} where $\P( |N(tn)/n - t|>\delta/3)$ vanishes at a geometric rate in $n$, uniformly in $t\in [0,1]$, from which the asymptotic equivalence follows. \end{proof} \subsection{Conditional Limit Theorems}\label{subsec:conditional-limit-theorem} As before, $\bar{X}_{n}$ denotes the scaled L\'{e}vy process defined as in Section~\ref{subsec:one-sided-large-deviations} for the one-sided case and Section~\ref{subsec:two-sided-large-deviations} for the two-sided case, respectively. In this section, we present conditional limit theorems which give a precise description of the limit law of $\bar X_n$ conditional on $\bar X_n\in A$.
The next result, for the one-sided case, follows immediately from the definition of weak convergence and Theorem~\ref{thm:one-sided-main-theorem}. \begin{corollary} Suppose that a subset $B$ of $\mathbb{D}$ satisfies the conditions in Theorem~\ref{thm:one-sided-main-theorem} and that $C_{\mathcal{J}(B)}(B^{\circ})=C_{\mathcal{J}(B)}(B)=C_{\mathcal{J}(B)}(\bar B)>0$. Let $\bar{X}_{n}^{|B}$ be a process having the conditional law of $\bar{X}_{n}$ given that $\bar{X}_{n}\in B$. Then there exists a process $\bar{X}_{\infty}^{|B}$ such that \[ \bar{X}_{n}^{|B}\Rightarrow\bar{X}_{\infty}^{|B}, \] in $\mathbb{D}$. Moreover, if $\P^{|B}\left( \cdot\right)$ is the law of $\bar{X}_{\infty}^{|B}$, then \[ \P^{|B}\left( \bar{X}_{\infty}^{|B}\in \cdot\right) =\frac{C_{\mathcal{J}(B)}(\cdot\cap B)}{C_{\mathcal{J}(B)}(B)}. \] \end{corollary} Let us provide a more direct probabilistic description of the process $\bar{X}_{\infty}^{|B}$. Directly from the definition of $\P^{|B}$ we have that \[ \bar{X}_{\infty}^{|B}\left( t\right) =\sum_{n=1}^{\mathcal{J}(B)}\chi_{n}1_{[U_{n},1]}\left( t\right), \] where $U_{1},...,U_{\mathcal{J}(B)}$ are i.i.d.\ uniform random variables on $[0,1]$ and \begin{align*} & \P^{|B}\left( \chi_{1}\in dx_{1},...,\chi_{\mathcal{J}(B)}\in dx_{\mathcal{J}(B)}\right) \\ & =\frac{\Pi_{i=1}^{\mathcal{J}(B)}\left( \alpha x_{i}^{-\alpha-1}dx_{i}\right) \,\mathbb{I}\left( x_{\mathcal{J}(B)}>...>x_{1}>0\right) \P\left( \sum_{n=1}^{\mathcal{J}(B)}x_{n}1_{[U_{n},1]}\left( \cdot\right) \in B\right) }{C_{\mathcal{J}(B)}(B)}. \end{align*} An easier-to-interpret description of $\P^{|B}$ can be obtained by using the fact that $\delta_{B}:=d\left( B,\mathbb{D}_{\leqslant\mathcal{J}(B)-1}\right) >0$. Define an auxiliary probability measure, $\P_{\#}^{|B}$, under which, not only $U_{1},...,U_{\mathcal{J}(B)}$ are i.i.d. Uniform$\left( 0,1\right)$, but also $\chi_{1},...,\chi_{\mathcal{J}(B)}$ are i.i.d.
distributed Pareto$\left( \alpha,\delta_{B}\right) $ and independent of the $U_{i}$'s; that is, \begin{align*} &\P_{\#}^{|B}\left( \chi_{1}\in dx_{1},...,\chi_{\mathcal{J}(B)}\in dx_{\mathcal{J}(B)}\right) \\ & =\prod_{i=1}^{\mathcal{J}(B)}(\alpha/\delta_{B})(x_{i}/\delta_{B})^{-\alpha-1}\,\mathbb{I}\left( x_{i}\geq\delta_{B}\right) dx_{i}. \end{align*} Then, we have that \begin{equation} \P^{|B}\left( \bar{X}_{\infty}^{|B}\in\cdot\right) =\P_{\#}^{|B}\left( \bar{X}_{\infty}^{|B}\in\cdot\text{ }|\text{ }\bar{X}_{\infty}^{|B}\in B\right) .\label{Rec_Cond} \end{equation} Moreover, note that \begin{equation} \P_{\#}^{|B}\left( \bar{X}_{\infty}^{|B}\in B\right) =\delta_{B}^{-\mathcal{J}(B)\left( \alpha+2\right) }C_{\mathcal{J}(B)}(B)>0.\label{Rec_Cond_2} \end{equation} In view of (\ref{Rec_Cond}) and (\ref{Rec_Cond_2}), one can say, at least qualitatively, that the most likely way in which the event $\bar{X}_{n}\in B$ is seen to occur is by means of $\mathcal{J}(B)$ i.i.d.\ jumps which are suitably Pareto distributed and occur uniformly throughout the time interval $[0,1]$. We are now ready to provide the corresponding conditional limit theorem for the two-sided case, building on Theorem~\ref{thm:two-sided-main-theorem}. The proof is again immediate, using the definition of weak convergence. \begin{corollary} \label{cor:two-sided-conditional-probability_special_case} Suppose that a subset $B$ of $\mathbb{D}$ satisfies the conditions in Theorem~\ref{thm:two-sided-main-theorem} and that \[ C_{\mathcal{J}(B),\mathcal{K}(B)}(B^{\circ})=C_{\mathcal{J}(B),\mathcal{K}(B)}(B)=C_{\mathcal{J}(B),\mathcal{K}(B)}(\bar B)>0. \] Let $\bar{X}_{n}^{|B}$ be a process having the conditional law of $\bar{X}_{n}$ given that $\bar{X}_{n}\in B$. Then \[ \bar{X}_{n}^{|B}\Rightarrow\bar{X}_{\infty}^{|B}, \] in $\mathbb{D}$.
Moreover, if $\P^{|B}\left( \cdot\right) $ is the law of $\bar{X}_{\infty}^{|B}$, then \[ \P^{|B}\left( \bar{X}_{\infty}^{|B}\in\cdot\right) :=\frac{C_{\mathcal{J}(B),\mathcal{K}(B)}(\cdot\cap B)}{C_{\mathcal{J}(B),\mathcal{K}(B)}(B)}. \] \end{corollary} A probabilistic description, completely analogous to that given for the one-sided case, can also be provided in this case. Define $\delta_{B}=d\left( B,\mathbb{D}_{<\mathcal{J}(B),\mathcal{K}(B)}\right) >0$ and introduce a probability measure $\P_{\#}^{|B}$ under which we have the following: First, $U_{1},...,U_{\mathcal{J}(B)},V_{1},...,V_{\mathcal{K}(B)}$ are i.i.d. Uniform$\left( 0,1\right) $; second, $\chi_{1},...,\chi_{\mathcal{J}(B)}$ are i.i.d. Pareto($\alpha,\delta_{B}$); and, finally, $\varrho_{1},...,\varrho_{\mathcal{K}(B)}$ are i.i.d. Pareto($\beta,\delta_{B}$) random variables (all of these random variables are mutually independent). Then, write \[ \bar{X}_{\infty}^{|B}\left( t\right) =\sum_{n=1}^{\mathcal{J}(B)}\chi_{n}1_{[U_{n},1]}\left( t\right) -\sum_{n=1}^{\mathcal{K}(B)}\varrho_{n}1_{[V_{n},1]}\left( t\right) . \] Applying the same reasoning as in the one-sided case, we have that \[ \P^{|B}\left( \bar{X}_{\infty}^{|B}\in\cdot\right) =\P_{\#}^{|B}\left( \bar{X}_{\infty}^{|B}\in\cdot\text{ }|\text{ }\bar{X}_{\infty}^{|B}\in B\right) \] and \[ \P_{\#}^{|B}\left( \bar{X}_{\infty}^{|B}\in B\right) =\delta_{B}^{-\mathcal{J}(B)\left( \alpha+2\right) -\mathcal{K}(B)\left( \beta+2\right) }C_{\mathcal{J}(B),\mathcal{K}(B)}(B)>0. \] We note that these results also hold for random walks, and thus constitute a significant extension of Theorem 3.1 in \cite{durrett1980conditioned}, where it is assumed that $\alpha>2$ and $B=\{\bar{X}_{n}\left(1\right) \geq a\}$. \subsection{Large Deviation Principle}\label{subsec:weak-ldp} In this section, we show that $\bar X_n$ satisfies a weak large deviation principle with speed $\log n$ and a rate function which is piecewise linear in the number of discontinuities.
More specifically, define \begin{equation}\label{eq:rate-function} I(\xi)\triangleq \left\{\begin{array}{ll} (\alpha-1)\mathcal D_+(\xi) + (\beta-1)\mathcal D_-(\xi), & \text{if $\xi$ is a step function \& $\xi(0) = 0$;} \\ \infty, & \text{otherwise,} \end{array} \right. \end{equation} where $\mathcal D_+(\xi)$ and $\mathcal D_-(\xi)$ denote the number of upward and downward jumps in $\xi$, respectively. \begin{theorem}\label{thm:weak-ldp} The scaled process $\bar X_n$ satisfies the weak large deviation principle with rate function $I$ and speed $\log n$, i.e., \begin{equation}\label{eq:ldp-lower-bound} -\inf_{x \in G} I(x) \leq \liminf_{n\to\infty} \frac{\log \P(\bar X_n \in G)}{\log n} \end{equation} for every open set $G$, and \begin{equation}\label{eq:ldp-upper-bound} \limsup_{n\to\infty} \frac{\log \P(\bar X_n \in K)}{\log n} \leq -\inf_{x\in K} I(x) \end{equation} for every compact set $K$. \end{theorem} The proof of Theorem~\ref{thm:weak-ldp} is provided in Section~\ref{subsec:proofs-for-implications}. It is based on Theorem~\ref{thm:two-sided-main-theorem} and a reduction of the case of general $A$ to open neighborhoods, reminiscent of arguments used in the proof of Cram\'er's theorem \cite{dembozeitouni}. \subsection{Nonexistence of Strong Large Deviation Principle}\label{subsec:nonexistence} We conclude the current section by showing that the weak LDP presented in the previous section is the best one can hope for in our setting, in the sense that for any L\'evy process $X$ with a regularly varying L\'evy measure, $\bar X_n$ cannot satisfy a strong LDP; i.e., (\ref{eq:ldp-upper-bound}) in Theorem~\ref{thm:weak-ldp} cannot be extended to all closed sets. Consider the mapping $\pi:\mathbb D \to \mathbb{R}_+^2$ that maps paths in $\mathbb D$ to their largest upward and downward jump sizes, i.e., $$ \pi(\xi) \triangleq \Big(\sup_{t\in (0,1]} \big(\xi(t)-\xi(t-)\big), \sup_{t\in(0,1]} \big(\xi(t-) - \xi(t)\big)\Big).
$$ Note that $\pi$ is continuous, since each coordinate is continuous: for example, if the first coordinates (the largest upward jump sizes) of $\pi(\xi)$ and $\pi(\zeta)$ differ by $\epsilon$, then $d(\xi,\zeta) \geq \epsilon/2$, which implies that the first coordinate is continuous. Now, to derive a contradiction, suppose that $\bar X_n$ satisfies a strong LDP. In particular, suppose (\ref{eq:ldp-upper-bound}) in Theorem~\ref{thm:weak-ldp} is true for all closed sets rather than just compact sets. Since $\pi$ is continuous w.r.t.\ the $J_1$ metric, $\pi(\bar X_n)$ has to satisfy a strong LDP with rate function $I'(y) = \inf \{I(\xi): \xi\in \mathbb D, y=\pi(\xi)\}$ by the contraction principle, provided that $I'$ is a rate function. (Since $I$ is not a good rate function, $I'$ is not automatically guaranteed to be a rate function per se; see, for example, Theorem~4.2.1 and the subsequent remarks of \citealp{dembozeitouni}.) From the exact form of $I'$, given by $$ I'(y_1,y_2) = (\alpha-1)\mathbb{I}(y_1>0) + (\beta-1)\mathbb{I}(y_2>0), $$ one can check that $I'$ indeed happens to be a rate function. For the sake of simplicity, suppose that $\alpha = \beta = 2$, and $\nu[x,\infty) = \nu(-\infty,-x] = x^{-2}$. Let $ \hat J_n^{\leqslant 1} \triangleq \frac1n Q_n^\gets (\Gamma_1)1_{[U_1,1]} $ and $ \hat K_n^{\leqslant 1} \triangleq \frac1n R_n^\gets (\Delta_1)1_{[V_1,1]} $, where $ Q_n^{\gets}(y) \triangleq \inf\{s>0: n\nu[s,\infty)< y\}= \left(n/y\right)^{1/2} $ and $R_n^{\gets}(y) \triangleq \inf\{s>0: n\nu(-\infty,-s]< y\}= \left(n/y\right)^{1/2}$. The random variables $\Gamma_1$ and $\Delta_1$ are standard exponential, and $U_1,V_1$ are uniform on $[0,1]$ (see also Section~\ref{sec:proofs} for similar and more general notational conventions).
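Before carrying out the argument, we record a numerical sanity check (illustrative only, not part of the proof). With $\Gamma_1$ and $\Delta_1$ standard exponential, the probability $\P\big(\Gamma_1 < \tfrac{1}{n(\log n)^2}\big)\P(\Delta_1<1) = \big(1-e^{-1/(n(\log n)^2)}\big)(1-e^{-1})$, which appears as a lower bound in the computation below, has decay exponent $\log \P/\log n$ strictly above $-2$, the value that the contraction rate function $I'$ would assign:

```python
import math

def lower_bound_prob(n):
    # P(Gamma_1 < 1 / (n (log n)^2)) * P(Delta_1 < 1) for standard exponentials.
    return (1.0 - math.exp(-1.0 / (n * math.log(n) ** 2))) * (1.0 - math.exp(-1.0))

# Decay exponent log P / log n along n = 10^2, 10^4, 10^6, 10^8.
ratios = [math.log(lower_bound_prob(n)) / math.log(n)
          for n in (10**2, 10**4, 10**6, 10**8)]
```

The exponents increase monotonically toward $-1$ (the convergence is slow, of order $\log\log n/\log n$), and in particular never approach $-2$.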
Note that $\bar Y_n\triangleq (\hat J_n^{\leqslant 1}, \hat K_n^{\leqslant 1})$ is exponentially equivalent to $\pi(\bar X_n)$ if we couple $\pi(\bar X_n)$ and $(\hat J_n^{\leqslant 1}, \hat K_n^{\leqslant 1})$, using the representation of $\bar X_n$ as in (\ref{eq:Poisson-Jump-representation}): for any $\delta>0$, $\P\big( |\bar Y_n - \pi(\bar X_n)|>\delta \big) \leq \P\big(\bar Y_n \neq \pi(\bar X_n)\big) = \P\big( Q_n^{\gets} (\Gamma_1)\leq 1 \text{ or } R_n^{\gets} (\Delta_1)\leq 1\big) $, which decays at an exponential rate. Hence, $$ \frac{\log \P\big( |\bar Y_n - \pi(\bar X_n)|>\delta \big)}{\log n}\to -\infty, $$ as $n\to \infty$, where $|\cdot|$ is the Euclidean distance. As a result, $\bar Y_n$ must satisfy the same (strong) LDP as $\pi(\bar X_n)$. Now, consider the set $A \triangleq \bigcup_{k=2}^\infty [\log k, \infty) \times [k^{-1/2},\infty)$. Then, since $[\log k, \infty) \times [k^{-1/2},\infty) \subseteq A$ for $k\geq 2$, \begin{align*} \P(\bar Y_n \in A) & \geq \P\big((\hat J_n^{\leqslant 1}, \hat K_n^{\leqslant 1}) \in [\log n, \infty) \times[n^{-1/2},\infty)\big) \\ & = \P\big( Q_n^{\gets}(\Gamma_1) > n\log n, R_n^{\gets}(\Delta_1) > n^{1/2}\big) \\ & = \P\left( \left(\frac{n}{\Gamma_1}\right)^{1/2} > n\log n, \left(\frac{n}{\Delta_1}\right)^{1/2} > n^{1/2}\right) \\ & = \P\left(\Gamma_1 < \frac{1}{n(\log n)^2}\right)\P( \Delta_1 < 1) \\ & = (1-e^{- \frac{1}{n(\log n)^2}})(1-e^{-1}). \end{align*} Thus, \begin{equation}\label{eq:no-full-ldp-1} \begin{aligned} \limsup_{n\to\infty} \frac{\log \P(\bar Y_n \in A)}{\log n} & \geq \limsup_{n\to\infty} \frac{\log (1-e^{- \frac{1}{n(\log n)^2}})(1-e^{-1})}{\log n} \\ & \geq \limsup_{n\to\infty} \frac{\log \frac{1}{n(\log n)^2} (1-\frac{1}{2n(\log n)^2})(1-e^{-1})}{\log n} \\ & = -1. \end{aligned} \end{equation} On the other hand, since $A \subseteq (0,\infty) \times (0,\infty)$, \begin{equation}\label{eq:no-full-ldp-2} -\inf_{(y_1,y_2)\in A} I'(y_1,y_2) = -2.
\end{equation} Noting that $A$ is a closed (but not compact) set, we arrive at a contradiction to the large deviation upper bound for $\bar Y_n$. This, in turn, proves that $\bar X_n$ cannot satisfy a full LDP. \section{Proofs}\label{sec:proofs} Section~\ref{subsec:proofs-for-M-convergence}, Section~\ref{subsec:proofs-for-sample-path-ldps}, and Section~\ref{subsec:proofs-for-implications} provide proofs of the results in Section~\ref{sec:preliminaries}, Section~\ref{sec:sample-path-ldps}, and Section~\ref{sec:implications}, respectively. \subsection{Proofs of Section~\ref{sec:preliminaries}}\label{subsec:proofs-for-M-convergence} Recall that $F_\delta = \{x \in \mathbb S: d(x,F) \leq \delta\}$ and $G^{-\delta} = ((G^c)_\delta)^c$. \begin{proof}[Proof of Lemma~\ref{lem:extended-bounds}] Let $G$ be an open set such that $G\cap \mathbb S_0$ is bounded away from $\mathbb C$. For a given $\delta>0$, due to the assumed asymptotic equivalence, $\P(X_n \in G^{-\delta}, d(X_n,Y_n) \geq \delta) = o(\epsilon_n)$. Therefore, \begin{equation}\label{eq:asymp-equiv-lower-bound} \begin{aligned} & \liminf_{n\to\infty} \epsilon_n^{-1}\P(Y_{n} \in G) \\ & \geq \liminf_{n\to\infty} \epsilon_n^{-1}\P\left(X_n \in G^{-\delta}, d(X_n,Y_n) < \delta\right) \\& = \liminf_{n\to\infty} \epsilon_n^{-1}\left\{\P\left(X_n \in G^{-\delta}\right)- \P\left(X_n\in G^{-\delta},d(X_n,Y_n) \geq \delta\right) \right\} \\& = \liminf_{n\to\infty} \epsilon_n^{-1}\P\left(X_n \in G^{-\delta}\right). \end{aligned} \end{equation} Pick $r>0$ such that $G^{-\delta}\cap \mathbb S_0 \cap \mathbb C_r = \emptyset$ and note that $G^{-\delta}\cap {\mathbb C_r}^\mathsf{c}$ is an open set bounded away from $\mathbb C$.
Then, \begin{align*} \liminf_{n\to\infty} \epsilon_n^{-1}\P(X_n \in G^{-\delta}) & = \liminf_{n\to\infty} \epsilon_n^{-1}\P(X_n \in G^{-\delta}\cap \mathbb S_0) \\ & = \liminf_{n\to\infty} \epsilon_n^{-1}\P(X_n \in G^{-\delta}\cap \mathbb S_0 \cap {\mathbb C_r}^\mathsf{c}) \\ & = \liminf_{n\to\infty} \epsilon_n^{-1}\P(X_n \in G^{-\delta}\cap {\mathbb C_r}^\mathsf{c}) \geq \mu(G^{-\delta}\cap {\mathbb C_r}^\mathsf{c}) \\ & = \mu(G^{-\delta}\cap {\mathbb C_r}^\mathsf{c}\cap \mathbb S_0) = \mu(G^{-\delta}\cap \mathbb S_0) = \mu(G^{-\delta}). \end{align*} Since $G$ is an open set, $G = \bigcup_{\delta>0} G^{-\delta}$. Due to the continuity of measures, $\lim_{\delta\to 0}\mu(G^{-\delta}) = \mu(G),$ and hence, we arrive at the lower bound \begin{align*} &\liminf_{n\to\infty} \epsilon_n^{-1}\P(Y_n \in G) \geq \mu(G) \end{align*} by taking $\delta \to 0$. Now, turning to the upper bound, consider a closed set $F$ such that $F_\delta \cap \mathbb S_0$ is bounded away from $\mathbb C$. Given a $\delta>0$, by the equivalence assumption, $\P(Y_n \in F, d(X_n,Y_n) \geq \delta) = o(\epsilon_n)$. Therefore, \begin{equation}\label{eq:asymp-equiv-upper-bound} \begin{aligned} & \limsup_{n\to\infty} \epsilon_n^{-1}\P(Y_{n} \in F) \\ & = \limsup_{n\to\infty} \epsilon_n^{-1}\big\{\P\left(Y_n \in F,\, d(X_n,Y_n) < \delta\right) \\ &\qquad\qquad\qquad\qquad + \P\left(Y_n \in F,\, d(X_n,Y_n) \geq \delta\right)\big\} \\& \leq \limsup_{n\to\infty} \epsilon_n^{-1}\P\left(X_n \in F_{\delta}\right) = \limsup_{n\to\infty}\epsilon_n^{-1}\P(X_n \in F_\delta \cap \mathbb S_0) \\ & \leq \limsup_{n\to\infty}\epsilon_n^{-1}\P(X_n \in \overline{F_\delta \cap \mathbb S_0}\,) \leq \mu\big(\,\overline{F_\delta \cap \mathbb S_0}\,\big) =\mu\big(\,\overline{F_\delta \cap \mathbb S_0}\,\cap \mathbb S_0\big) \\ & \leq\mu\big(\bar{F}_\delta\cap \mathbb S_0\big) =\mu(\bar F_\delta) = \mu(F_\delta).
\end{aligned} \end{equation} Note that $\{F_\delta\}$ is a decreasing sequence of sets, $F = \bigcap_{\delta>0} F_\delta$ (since $F$ is closed), and $\mu\in \mathbb M(\mathbb S\setminus \mathbb C)$ (and hence $\mu$ is a finite measure on $\mathbb S\setminus \mathbb C^r$ for some $r>0$ such that $F_\delta \subseteq \mathbb S\setminus \mathbb C^r$ for some $\delta>0$). Due to the continuity (from above) of finite measures, $ \lim_{\delta \to 0} \mu(F_{\delta}) = \mu(F). $ Therefore, we arrive at the upper bound \begin{align*} &\limsup_{n\to\infty} \epsilon_n^{-1}\P(Y_n \in F) \leq \mu(F) \end{align*} by taking $\delta \to 0$. \end{proof} For a measure $\mu$ on a measurable space $\mathbb S$, denote the restriction of $\mu$ to a subspace $\mathbb O\subseteq \mathbb S$ by $\mu_{|\mathbb O}$. \begin{proof}[Proof of Lemma~\ref{thm:simple-product-space}] We provide a proof for $d=2$, which suffices for the application in this article. The extension to general $d$ is straightforward, and hence, omitted. In view of the Portmanteau theorem for $\mathbb M$-convergence---in particular, item \emph{(v)} of Theorem 2.1 of \cite{LRR}---it is enough to show that for all but countably many $r>0$, $(\mu^{(1)}_n\times\mu^{(2)}_n)_{|(\mathbb S_1\times\mathbb S_2)\setminus ((\mathbb C_1\times\mathbb S_2)\cup (\mathbb S_1\times\mathbb C_2))^r}(\cdot)$ converges to $(\mu^{(1)}\times\mu^{(2)}\allowbreak)_{|(\mathbb S_1\times\mathbb S_2)\setminus ((\mathbb C_1\times\mathbb S_2)\cup (\mathbb S_1\times\mathbb C_2))^r}(\cdot)$ weakly on $(\mathbb S_1\times\mathbb S_2)\setminus \big((\mathbb C_1\times\mathbb S_2)\cup (\mathbb S_1\times\mathbb C_2)\big)^r$, which is equipped with the relative topology as a subspace of $\mathbb S_1\times\mathbb S_2$.
From the assumptions of the lemma and again by the Portmanteau theorem for $\mathbb M$-convergence, we note that ${\mu^{(1)}_n}_{|{\mathbb S_1\setminus\mathbb C_1^r}}$ converges to ${\mu^{(1)}}_{|\mathbb S_1\setminus\mathbb C_1^r}$ weakly in $\mathbb S_1 \setminus \mathbb C_1^r$, and ${\mu^{(2)}_n}_{|\mathbb S_2\setminus\mathbb C_2^r}$ converges to ${\mu^{(2)}}_{|\mathbb S_2\setminus\mathbb C_2^r}$ weakly in $\mathbb S_2 \setminus \mathbb C_2^r$ for all but countably many $r>0$. For such $r$'s, ${\mu^{(1)}_n}_{|\mathbb S_1\setminus\mathbb C_1^r}\times {\mu^{(2)}_n}_{|\mathbb S_2\setminus\mathbb C_2^r}$ converges weakly to ${\mu^{(1)}}_{|\mathbb S_1\setminus\mathbb C_1^r}\times {\mu^{(2)}}_{|\mathbb S_2\setminus\mathbb C_2^r}$ in $\big(\mathbb S_1 \setminus \mathbb C_1^r\big)\times\big(\mathbb S_2 \setminus \mathbb C_2^r\big)$. Noting that $(\mathbb S_1\times\mathbb S_2)\setminus \big((\mathbb C_1\times\mathbb S_2)\cup (\mathbb S_1\times\mathbb C_2)\big)^r$ coincides with $\big(\mathbb S_1 \setminus \mathbb C_1^r\big)\times\big(\mathbb S_2 \setminus \mathbb C_2^r\big)$, and ${\mu^{(1)}}_{|\mathbb S_1\setminus\mathbb C_1^r} \times{\mu^{(2)}}_{|\mathbb S_2\setminus \mathbb C_2^r}$ and ${\mu_n^{(1)}}_{|\mathbb S_1\setminus \mathbb C_1^r}\times{\mu_n^{(2)}}_{|\mathbb S_2\setminus \mathbb C_2^r}$ coincide with $(\mu^{(1)}\times\mu^{(2)})_{|(\mathbb S_1\times \mathbb S_2)\setminus((\mathbb C_1\times \mathbb S_2)\cup(\mathbb S_1 \times \mathbb C_2))^r}$ and $({\mu_n^{(1)}}\times {\mu_n^{(2)}})_{|(\mathbb S_1\times \mathbb S_2)\setminus((\mathbb C_1\times \mathbb S_2)\cup(\mathbb S_1 \times \mathbb C_2))^r}$, respectively, we reach the conclusion. \end{proof} \begin{proof}[Proof of Lemma~\ref{thm:union-limsup}] Starting with the upper bound, suppose that $F$ is a closed set bounded away from $\bigcap_{i=0}^m\mathbb C(i)$.
From the assumption, there exist $r_0,\ldots,r_m$ such that $F\subseteq \bigcup_{i=0}^m (\mathbb S \setminus \mathbb C(i)^{r_i})$, and hence, \begin{align*} \limsup_{n\to\infty}\frac{\P(X_n\in F)}{{\epsilon_n{(0)}}} &\leq \limsup_{n\to\infty}\sum_{i=0}^m\frac{\P\big(X_n\in F \cap (\mathbb S \setminus \mathbb C(i)^{r_i})\big)}{{\epsilon_n{(i)}}}\frac{{\epsilon_n{(i)}}}{{\epsilon_n{(0)}}} \\& \leq \limsup_{n\to\infty}\sum_{i=0}^m\frac{\P(X_n\in F\setminus \mathbb C(i)^{r_i})}{{\epsilon_n{(i)}}}\frac{{\epsilon_n{(i)}}}{{\epsilon_n{(0)}}} \\ &= \limsup_{n\to\infty}\frac{\P(X_n\in F\setminus \mathbb C(0)^{r_0})}{{\epsilon_n{(0)}}} \\ & \leq \mu^{(0)}(F\setminus \mathbb C(0)^{r_0}) \leq \mu^{(0)}(F) \end{align*} Turning to the lower bound, if $G$ is an open set bounded away from $\bigcap_{i=0}^m\mathbb C(i)$, \begin{align*} \liminf_{n\to\infty}\frac{\P(X_n\in G)}{{\epsilon_n{(0)}}} &\geq \liminf_{n\to\infty}\frac{\P(X_n\in G \setminus \mathbb C(0)_{r})}{{\epsilon_n{(0)}}} \geq \mu^{(0)}(G \setminus \mathbb C(0)_{r}). \end{align*} Taking $r\to 0$ yields the lower bound. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:almost-continuous-mapping}] Suppose that $\mu_n \to \mu$ in $\mathbb M(\mathbb S\setminus \mathbb C)$, and $\mu(D_h\setminus \mathbb C^r) = 0$ and $\mu(\partial \mathbb S_0 \setminus \mathbb C^r)= 0$ for each $r>0$. Note that $\partial h^{-1}(A') \subseteq \mathbb S \setminus \mathbb C^r$ for some $r>0$ due to the assumption, and $\partial h^{-1}(A') \subseteq h^{-1}(\partial A') \cup D_h \cup \partial \mathbb S_0$. Therefore, $\mu( \partial h^{-1}(A')) \leq \mu\circ h^{-1}(\partial A') + \mu(D_h\setminus \mathbb C^{r}) + \mu(\partial \mathbb S_0 \setminus \mathbb C^{r}) = 0$. Applying Theorem 2.1 (iv) of \cite{LRR} for $h^{-1}(A')$, we conclude that $\mu_n(h^{-1}(A')) \to \mu(h^{-1}(A'))$. 
Again, by Theorem 2.1 (iv) of \cite{LRR}, this means that $\mu_n\circ h^{-1} \to \mu \circ h^{-1}$ in $\mathbb M(\mathbb S'\setminus \mathbb C')$, and hence, $\hat h$ is continuous at $\mu$. \end{proof} \begin{proof} [Proof of Lemma~\ref{lemma:two-sided-continuous-mapping}] The first two claims are straightforward analogues of Result~\ref{result:L53}. As a consequence of those claims, we can apply Lemma~\ref{lem:almost-continuous-mapping} with $\mathbb S_0 = S_{j,k}$ and $h = T_{j,k}$ to conclude the proof of the last claim. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:continuous-mapping-principle-for-subtraction}] The continuity of $h$ is well known; see, for example, \cite{whitt1980some}. For the second claim, it is enough to prove that for each $j$ and $k$, $h^{-1}(A)\subseteq \mathbb D\times \mathbb D$ is bounded away from $\mathbb D_{j}\times \mathbb D_{k}$ whenever $A\subseteq \mathbb D$ is bounded away from $\mathbb D_{j,k}$. Given $j$ and $k$, let $A\subseteq \mathbb D$ be bounded away from $\mathbb D_{j,k}$. Suppose, for contradiction, that $h^{-1}(A)$ is not bounded away from $\mathbb D_j\times \mathbb D_k$. Then, for any given $\epsilon>0$, one can find $\xi\in \mathbb D$ and $\zeta \in \mathbb D$ such that $d(\xi,\mathbb D_j) < \epsilon/2$, $d(\zeta,\mathbb D_k)< \epsilon/2$, and $\xi-\zeta \in A$. Since a time-change of a step function does not change the number of jumps or the jump sizes, there exist $\xi'\in \mathbb D_j$ and $\zeta' \in \mathbb D_k$ such that $\|\xi - \xi'\|_\infty < \epsilon/2$ and $\|\zeta - \zeta'\|_\infty < \epsilon/2$. Therefore, $ d(\xi-\zeta,\xi'-\zeta') \leq \| (\xi - \zeta) - (\xi'-\zeta')\|_\infty \leq \|\xi - \xi'\|_\infty + \|\zeta - \zeta'\|_\infty < \epsilon. $ From this along with the property $d(\xi'-\zeta', \mathbb D_{j,k}) = 0$, we conclude that $ d(\xi-\zeta, \mathbb D_{j,k}) < \epsilon $.
Taking $\epsilon\to 0$, we arrive at $d(A, \mathbb D_{j,k}) = 0$, contradicting the assumption that $A$ is bounded away from $\mathbb D_{j,k}$. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:convergence-determining-class}] From (i) and the inclusion-exclusion formula, $\mu_n(\bigcup_{i=1}^m A_i) \to \mu(\bigcup_{i=1}^m A_i)$ as $n\to \infty$ for any finite $m$ if $A_i \in \mathcal A_p$ is bounded away from $\mathbb C$ for each $i=1,\ldots,m$. If $G$ is open and bounded away from $\mathbb C$, there is a sequence of sets $A_i, i\geq 1$ in $\mathcal A_p$ such that $G=\bigcup_{i=1}^\infty A_i$; note that since $G$ is bounded away from $\mathbb C$, the $A_i$'s are also bounded away from $\mathbb C$. For any $\epsilon >0$, one can find $M_\epsilon$ such that $\mu(\bigcup_{i=1}^{M_\epsilon} A_i) \geq \mu( G)-\epsilon$, and hence, $$\liminf_{n\to\infty} \mu_n(G) \geq \liminf_{n\to\infty} \mu_n(\bigcup_{i=1}^{M_\epsilon} A_i) = \mu(\bigcup_{i=1}^{M_\epsilon} A_i) \geq \mu(G) -\epsilon.$$ Taking $\epsilon \to 0$, we arrive at the lower bound (\ref{result1lb}). Turning to the upper bound, given a closed set $F$, we pick $A\in \mathcal A_p$ bounded away from $\mathbb C$ such that $F\subseteq A^\circ$. Then, \begin{align*} \mu(A) - \limsup_{n\to\infty} \mu_n(F) & = \lim_{n\to\infty}\mu_n(A) + \liminf_{n\to\infty} (-\mu_n(F)) \\ &= \liminf_{n\to\infty} (\mu_n(A)-\mu_n(F)) = \liminf_{n\to\infty} \mu_n(A\setminus F) \\ & \geq \liminf_{n\to\infty} \mu_n(A^\circ \setminus F) \geq \mu(A^\circ\setminus F) \\ & = \mu(A) - \mu(F). \end{align*} Note that $\mu(A)<\infty$ since $A$ is bounded away from $\mathbb C$, which together with the above inequality establishes the upper bound (\ref{result1lb}). \end{proof} \subsection{Proofs of Section~\ref{sec:sample-path-ldps}}\label{subsec:proofs-for-sample-path-ldps} This section provides the proofs of the limit theorems (Theorems~\ref{thm:one-sided-limit-theorem} and~\ref{thm:two-sided-limit-theorem}) presented in Section~\ref{sec:sample-path-ldps}.
The proof of Theorem~\ref{thm:one-sided-limit-theorem} is based on \begin{itemize} \item[1.] The asymptotic equivalence between the target object $\bar X_n$ and the process obtained by keeping its $j$ largest jumps, which will be denoted by $\hat J_n^{\leqslant j}$: Proposition~\ref{prop:asymptotic-equivalence-Xbar-Jbar} and Proposition~\ref{prop:asymptotic-equivalence-Jbar-Jj} prove such asymptotic equivalences. Two technical lemmas (Lemma~\ref{lem:key-upper-bound-1} and Lemma~\ref{lem:key-upper-bound-2}) play key roles in Proposition~\ref{prop:asymptotic-equivalence-Jbar-Jj}. \item[2.] $\mathbb M$-convergence of $\hat J_n^{\leqslant j}$: Lemma~\ref{lem:poisson-jumps} identifies the convergence of the jump size sequences, and Proposition~\ref{prop:Jj} deduces the convergence of $\hat J_n^{\leqslant j}$ from the convergence of the jump size sequences via the mapping theorem established in Section~\ref{sec:preliminaries}. \end{itemize} For Theorem~\ref{thm:two-sided-limit-theorem}, we first establish a general result (Theorem~\ref{thm:multi-d-limit-theorem}) for the $\mathbb M$-convergence of multiple L\'evy processes in the associated product space using Lemmas~\ref{thm:simple-product-space} and~\ref{thm:union-limsup}. We then apply Lemma~\ref{lem:continuous-mapping-principle-for-subtraction} to prove Theorem~\ref{thm:two-sided-limit-theorem}. Recall that $X_n(t) \triangleq X(nt)$ is the scaled process of $X$, where $X$ is a L\'evy process with a L\'evy measure $\nu$ supported on $(0,\infty)$. Also recall that $X_n$ has the It\^{o} representation \begin{align}\label{eq:ito_rep_onesided2} X_n(s) & = nsa + B(ns) + \int_{|x|\leq 1} x[N([0,ns]\times dx) - ns\nu(dx)] \\ & \hspace{75pt} + \int_{|x|>1} xN([0,ns]\times dx), \nonumber \end{align} where $N$ is the Poisson random measure with mean measure Leb$\times\nu$ on $[0,n]\times(0,\infty)$ and Leb denotes the Lebesgue measure.
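The large-jump integral in this representation admits a concrete simulation recipe, made precise immediately below: jump sizes are obtained by feeding cumulative standard exponentials $\Gamma_l$ through the generalized inverse $Q_n^{\gets}(y)\triangleq\inf\{s>0: n\nu[s,\infty)<y\}$ of the tail measure, and the jumps are attached at i.i.d.\ uniform times. The following Python sketch is illustrative only: it takes $\nu[x,\infty)=x^{-\alpha}$ for $x\geq 1$ (so that $\nu_1^+=1$ and $Q_n^{\gets}(y)=(n/y)^{1/\alpha}$ for $y\leq n$), and the parameter choices are arbitrary.

```python
import random

def sample_large_jumps(n, alpha, rng):
    # Jumps of the large-jump part for nu[x, inf) = x**(-alpha), x >= 1, so that
    # Q_n^{<-}(y) = (n / y)**(1 / alpha); only Gamma_l <= n yield jumps of size >= 1.
    jumps = []
    gamma = rng.expovariate(1.0)  # Gamma_1
    while gamma <= n:
        jumps.append((rng.random(), (n / gamma) ** (1.0 / alpha)))
        gamma += rng.expovariate(1.0)  # Gamma_{l+1} = Gamma_l + E_{l+1}
    return jumps  # pairs (uniform jump time U_l, jump size Q_n^{<-}(Gamma_l))

rng = random.Random(1)
jumps = sample_large_jumps(n=50, alpha=1.5, rng=rng)
sizes = [size for (_, size) in jumps]
```

Since the $\Gamma_l$ are increasing, the sampled jump sizes come out in decreasing order, so truncating the list to its first $j$ entries keeps exactly the $j$ largest jumps; the number of jumps is Poisson with mean $n\nu_1^+$.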
It is easy to see that \begin{align*} J_n(s) & \triangleq \sum_{l=1}^{\tilde N_n} Q_n^\gets (\Gamma_l)1_{[U_l,1]}(s) \stackrel{\mathcal D}{=} \int_{|x|>1} xN([0,ns]\times dx), \end{align*} where $\Gamma_l = E_1 + E_2 + ... + E_l$; the $E_i$'s are i.i.d.\ standard exponential random variables; the $U_l$'s are i.i.d.\ uniform random variables on $[0,1]$; $\tilde N_n = N_n\big([0,1]\times [1,\infty)\big)$; $ N_n = \sum_{l=1}^\infty \delta_{(U_l, Q_n^{\gets}(\Gamma_l))}, $ where $\delta_{(x,y)}$ is the Dirac measure concentrated on $(x,y)$; $ Q_n(x) \triangleq n\nu[x,\infty)$; and $ Q_n^{\gets}(y) \triangleq \inf\{s>0: n\nu[s,\infty)< y\} $. Note that $\tilde N_n$ is the number of $\Gamma_l$'s such that $\Gamma_l \leq n\nu_1^+$, where $\nu_1^+ \triangleq \nu[1,\infty)$, and hence, $\tilde N_n \sim \text{Poisson}(n\nu_1^+)$. Throughout the rest of this section, we use the following representation for the centered and scaled process $\bar X_n \triangleq \frac{1}{n} X_n$: \begin{align}\label{eq:Poisson-Jump-representation} \bar X_n(s) &\stackrel{\mathcal D}{=} \frac1n J_n(s) + \frac1n B(ns) \\ &\hspace{15pt} + \frac1n\int_{|x|\leq 1} x[N([0,ns]\times dx) - ns\nu(dx)] - (\mu_1^+\nu_1^+)s. \nonumber \end{align} \begin{proof}[Proof of Theorem~\ref{thm:one-sided-limit-theorem}] We decompose $\bar X_n$ into a centered compound Poisson process $\bar J_n$, a centered L\'evy process with small jumps and continuous increments $\bar Y_n$, and a residual process $\bar Z_n$ that arises from the centering. After that, we will show that the compound Poisson process determines the limit.
More specifically, consider the following decomposition: \begin{equation}\label{eq:main-decomposition} \begin{aligned} \bar X_n(s) &\stackrel{\mathcal{D}}{=} \bar Y_n(s) + \bar J_n(s) + \bar Z_n(s),\\ \bar Y_n(s) &\triangleq \frac1n B(ns) + \frac1n \int_{|x|\leq 1} x[N([0,ns]\times dx) - ns\nu(dx)],\\ \bar J_n(s) &\triangleq \frac1n \sum_{l=1}^{\tilde N_n} (Q_n^{\gets}(\Gamma_l)-\mu_1^+)1_{[U_l,1]}(s),\\ \bar Z_n(s) &\triangleq \frac1n \sum_{l=1}^{\tilde N_n} \mu_1^+ 1_{[U_l, 1]}(s) - \mu_1^+\nu_1^+s, \end{aligned} \end{equation} where $\mu_1^+ \triangleq \frac{1}{\nu_1^+}\int_{[1,\infty)} x\nu(dx)$. Let $\hat J_n^{\leqslant j} \triangleq \frac1n \sum_{l=1}^j Q_n^\gets(\Gamma_l)1_{[U_l, 1]}$ be, roughly speaking, the process obtained by keeping just the $j$ largest (un-centered) jumps of $\bar J_n$. In view of Corollary~\ref{lem:asymptotic-equivalence} and Proposition~\ref{prop:Jj}, it suffices to show that $\bar X_n$ and $\hat J_n^{\leqslant j}$ are asymptotically equivalent. Proposition~\ref{prop:asymptotic-equivalence-Xbar-Jbar} along with Proposition~\ref{prop:asymptotic-equivalence-Jbar-Jj} proves the desired asymptotic equivalence, and hence, concludes the proof of Theorem~\ref{thm:one-sided-limit-theorem}. \end{proof} \begin{proposition}\label{prop:asymptotic-equivalence-Xbar-Jbar} Let $\bar X_n$ and $\bar J_n$ be as in the proof of Theorem~\ref{thm:one-sided-limit-theorem}. Then, $\bar X_n$ and $\bar J_n$ are asymptotically equivalent w.r.t.\ $\big(n\nu[n,\infty)\big)^{j}$ for any $j\geq 0$. \end{proposition} \begin{proof} In view of the decomposition (\ref{eq:main-decomposition}), we are done if we show that $\P(\|\bar Y_n\| > \delta) = o\left((n\nu[n,\infty))^{j}\right)$ and $\P(\|\bar Z_n\| > \delta) = o\left((n\nu[n,\infty))^{j}\right)$.
For the tail probability of $\|\bar Y_n\|$, \begin{align*} \P\bigg[\sup_{t\in[0,1]}|\bar Y_{n}(t)| > \delta\bigg] &\leq \P\bigg[\sup_{t\in[0,n]}\big|B(t)\big|>n\delta/2\bigg] \\ & + \P\bigg[\sup_{t\in[0,n]}\left|\int_{|x|\leq 1} x[N((0,t] \times dx) - t\nu(dx)]\right| > n\delta/2\bigg]. \end{align*} We have an explicit expression for the first term by the reflection principle, and in particular, it decays at a geometric rate w.r.t.\ $n$. For the second term, let $Y'(t)\triangleq \int_{|x|\leq 1} x[N((0,t] \times dx) - t\nu(dx)]$. Using Etemadi's bound for L\'evy processes (see Lemma~\ref{lem:cont_etemadi}), we obtain \begin{align*} &\P\bigg[\sup_{t\in[0,n]}\left|\int_{|x|\leq 1} x[N([0,t] \times dx) - t\nu(dx)]\right| > n\delta/2\bigg]\\ &\leq 3 \sup_{t\in[0,n]}\P\bigg[\left|Y'(t)\right| > n\delta/6\bigg]\\ &\leq 3 \sup_{t\in[0,n]}\bigg\{\P\bigg[|Y'(\lfloor t\rfloor)| > n\delta/12\bigg] + \P\bigg[|Y'(t)-Y'(\lfloor t\rfloor) | > n\delta /12\bigg]\bigg\}\\ &\leq 3 \sup_{t\in[0,n]}\P\bigg[|Y'(\lfloor t\rfloor)| > n\delta/12\bigg] + 3\sup_{t\in[0,n]}\P\bigg[|Y'(t)-Y'(\lfloor t\rfloor) | > n\delta /12\bigg]\\ &= 3 \sup_{1 \leq k \leq n}\P\bigg[|Y'(k)| > n\delta/12\bigg] + 3\sup_{t\in[0,1]}\P\bigg[|Y'(t) | > n\delta /12\bigg]\\ & \leq 3 \sup_{1 \leq k \leq n}\P\bigg[\bigg|\sum_{i=1}^k\{Y'(i)-Y'(i-1)\}\bigg| > n\delta/12\bigg] \\ &\hspace{180pt} + 3\P\bigg[\sup_{t\in[0,1]}|Y'(t) |^m > (n\delta /12)^m\bigg]. \end{align*} Since $Y'(i)-Y'(i-1)$ are i.i.d.\ with $Y'(i)-Y'(i-1)\stackrel{\mathcal{D}}{=}Y'(1)= \int_{|x|\leq 1}\allowbreak x[N((0,1]\times dx) - \nu(dx)]$ and $Y'(1)$ has exponential moments, the first term decreases at a geometric rate w.r.t.\ $n$ due to the Chernoff bound; on the other hand, since $Y'(t)$ is a martingale, the second term is bounded by $3\frac{\mathbf{E} |Y'(1)|^m}{n^m (\delta/12)^m}$ for any $m$ by Doob's submartingale maximal inequality. Therefore, by choosing $m$ large enough, this term can be made negligible. 
For the tail probability of $\|\bar Z_n\|$, note that $\bar Z_n$ is a mean zero L\'evy process with the same distribution as $\mu_1^+(N(ns)/n - \nu_1^+s)$, where $N$ is the Poisson process with rate $\nu_1^+$. Therefore, again from the continuous-time version of Etemadi's bound, we see that $\P(\|\bar Z_n\| > \delta)$ decays at a geometric rate w.r.t.\ $n$ for any $\delta>0$. \end{proof} \begin{proposition}\label{prop:asymptotic-equivalence-Jbar-Jj} For each $j\geq 0$, let $\bar J_n$ and $\hat J_n^{\leqslant j}$ be defined as in the proof of Theorem~\ref{thm:one-sided-limit-theorem}. Then, $\bar J_n$ and $\hat J_n^{\leqslant j}$ are asymptotically equivalent w.r.t.\ $\big(n\nu[n,\infty)\big)^{j}$. \end{proposition} \begin{proof} With the convention that a summation is $0$ when its upper limit is strictly smaller than its lower limit, consider the following decomposition of $\bar J_n$: \begin{align*} \hat J_n^{\leqslant j} &\triangleq \frac1n \sum_{l=1}^{j} Q_n^\gets(\Gamma_l) 1_{[U_l,1]}, & \bar J_n^{>j} &\triangleq \frac1n \sum_{l=j+1}^{\tilde N_n} (Q_n^\gets(\Gamma_l) -\mu_1^+)1_{[U_l,1]}, \\ \check J_n^{\leqslant j} &\triangleq \frac1n \sum_{l=1}^j -\mu_1^+1_{[U_l,1]}, & \bar R_n &\triangleq \frac1n \mathbb{I}(\tilde N_n < j)\sum_{l=\tilde N_n+1}^j (Q_n^\gets(\Gamma_l) - \mu_1^+)1_{[U_l,1]}, \end{align*} so that $$ \bar J_n = \hat J_n^{\leqslant j} + \check J_n^{\leqslant j} + \bar J_n^{>j} - \bar R_n. $$ Note that $\P(\|\check J_n^{\leqslant j}\| \geq \delta) =0$ for sufficiently large $n$ since $\|\check J_n^{\leqslant j}\| = j\mu_1^+/n$. On the other hand, $\P(\|\bar R_n\|\geq \delta)$ decays at a geometric rate since $\{\|\bar R_n\|\geq \delta\} \subseteq \{\tilde N_n < j\}$ and $\P(\tilde N_n < j)$ decays at a geometric rate.
Since $\P(\|\bar J_n^{>j}\| \geq \delta) \leq \P(\|\bar J_n^{>j}\| \geq \delta, Q_n^\gets(\Gamma_j)\geq n\gamma ) + \P(\|\bar J_n^{>j}\| \geq \delta, Q_n^\gets(\Gamma_j)\leq n\gamma )$, Lemma~\ref{lem:key-upper-bound-1} and Lemma~\ref{lem:key-upper-bound-2} given below imply $\P(\|\bar J_n^{>j}\| \geq \delta)=o\left( (n\nu[n,\infty))^j\right)$ by choosing $\gamma$ small enough. Therefore, $\hat J_n^{\leqslant j}$ and $\bar J_n$ are asymptotically equivalent w.r.t.\ $(n\nu[n,\infty))^j$. \end{proof} Define a measure $\mu_\alpha^{(j)}$ on $\mathbb R_+^{\infty \downarrow}$ by \begin{equation*} \begin{aligned} \mu_\alpha^{(j)}(dx_1, dx_2, \cdots) &\triangleq \prod_{i=1}^j \nu_\alpha(dx_i)\mathbb{I}_{[x_1\geq x_2\geq \cdots \geq x_j > 0]} \prod_{i=j+1}^\infty \delta_0(dx_i), \end{aligned} \end{equation*} where $\nu_\alpha(x,\infty) = x^{-\alpha}$, and $\delta_0$ is the Dirac measure concentrated at $0$. \begin{proposition}\label{prop:Jj} For each $j\geq 0$, $$ \big(n\nu[n,\infty)\big)^{-j}\P(\hat J_n^{\leqslant j}\in \cdot) \to C_j(\cdot) $$ in $\mathbb M\big(\mathbb D \setminus \mathbb D_{<\, j}\big)$ as $n\to\infty$. \end{proposition} \begin{proof} Noting that $(\mu_\alpha^{(j)}\times \text{Leb})\circ T_j^{-1} = C_j$ and $\P(\hat J_n^{\leqslant j} \in \cdot) = \P\big(\allowbreak\big((Q_n^\gets(\Gamma_l)/n,l\geq 1),(U_l,l\geq 1)\big)\in T_j^{-1}(\cdot)\big)$, Lemma~\ref{lem:poisson-jumps} and Corollary~\ref{result:consequence-of-L23-53-54} prove the proposition. \end{proof} \begin{lemma}\label{lem:key-upper-bound-1} For any fixed $\gamma>0$, $\delta>0$, and $j\geq 0$, \begin{equation}\label{eq:hatJ_on_geq} \P \left\{\| \bar J_n^{>j}\|\geq \delta, Q_n^\gets(\Gamma_j) \geq n\gamma\right\} = o\left((n\nu[n,\infty))^j\right). \end{equation} \end{lemma} \begin{proof} (Throughout the proof of this lemma, we use $\mu_1$ and $\nu_1$ in place of $\mu_1^+$ and $\nu_1^+$, respectively.)
We start with the following decomposition of $\bar J_n^{>j}$: for any fixed $\lambda \in \left(0, \frac{\delta}{3\nu_1\mu_1}\right)$, \begin{align*} \bar J_n^{>j} &= \frac1n \sum_{l=j+1}^{\tilde N_n} (Q_n^\gets(\Gamma_l) - \mu_1)1_{[U_l,1]} \\ & = \tilde J_n^{[j+1,n\nu_1(1+\lambda)]} - \tilde J_n^{[\tilde N_n + 1, n\nu_1(1+\lambda)]}\mathbb{I}(\tilde N_n < n\nu_1(1+\lambda)) \\ &\hspace{80pt} + \tilde J_n^{[n\nu_1(1+\lambda)+1,\tilde N_n]}\mathbb{I}(\tilde N_n > n\nu_1(1+\lambda)), \end{align*} where $$ \tilde J_n^{[a,b]}\triangleq \frac1n \sum_{l=\lceil a\rceil}^{\lfloor b\rfloor} (Q_n^\gets(\Gamma_l)-\mu_1)1_{[U_l,1]}. $$ Therefore, \begin{align*} &\P \left\{\| \bar J_n^{>j} \|\geq \delta, Q_n^\gets(\Gamma_j) \geq n\gamma\right\} \\ & \leq \P\left(\left\|\tilde J_n^{[j+1,n\nu_1(1+\lambda)]}\right\|\geq \delta/3, Q_n^\gets(\Gamma_j)\geq n\gamma\right) \\ &\hspace{10pt} +\P\left( \left\|\tilde J_n^{[\tilde N_n + 1, n\nu_1(1+\lambda)]}\right\| \geq \delta/3\right) +\P\left(\tilde N_n > n\nu_1(1+\lambda)\right) \\ & = \text{(i)} + \text{(ii)} + \text{(iii)}. \end{align*} Noting that $\left\|\tilde J_n^{[\tilde N_n + 1, n\nu_1(1+\lambda)]}\right\| \leq (\nu_1(1+\lambda)-\tilde N_n/n)\mu_1$ --- recall that $\tilde N_n$ is defined to be the number of $l$'s such that $Q_n^\gets(\Gamma_l) \geq 1$, and hence, $0\leq Q_n^\gets(\Gamma_l)<1$ for $l>\tilde N_n$ --- we see that (ii) is bounded by $$\P((\nu_1(1+\lambda)-\tilde N_n/n)\mu_1\geq \delta/3)= \P\left(\frac{\tilde N_n}{n\nu_1}\leq 1+\lambda-\frac{\delta}{3\nu_1\mu_1}\right),$$ which decays at a geometric rate w.r.t.\ $n$ since $\tilde N_n$ is Poisson with rate $n\nu_1$. For the same reason, (iii) decays at a geometric rate w.r.t.\ $n$. We are done if we prove that (i) is $o\left((n\nu[n,\infty))^j\right)$.
Note that $Q_{n}^{\gets}(\Gamma_j) \geq n\gamma$ implies $Q_n(n\gamma)\geq \Gamma_j$, and hence, \begin{align*} &\sum_{l=j+1}^{(1+\lambda)n\nu_1} \big(Q_{n}^{\gets}(\Gamma_l - \Gamma_j + Q_n(n\gamma))-\mu_1\big)1_{[U_l,1]} \\ &\leq \sum_{l=j+1}^{(1+\lambda)n\nu_1} \big(Q_{n}^{\gets}(\Gamma_l)-\mu_1\big)1_{[U_l,1]} \\ &\leq \sum_{l=j+1}^{(1+\lambda)n\nu_1} \big(Q_{n}^{\gets}(\Gamma_l - \Gamma_j)-\mu_1\big)1_{[U_l,1]}. \end{align*} Therefore, if we define \begin{align*} A_n &\triangleq \left\{Q_n^\gets(\Gamma_j) \geq n\gamma\right\},\\ B_n' &\triangleq \left\{ \sup_{t\in[0,1]}\sum_{l=j+1}^{(1+\lambda)n\nu_1} \big(Q_{n}^{\gets}(\Gamma_l - \Gamma_j)-\mu_1\big)1_{[U_l,1]}(t) \geq n\delta \right\}, \\ B_n'' &\triangleq \left\{\inf_{t\in[0,1]}\sum_{l=j+1}^{(1+\lambda)n\nu_1} \big(Q_{n}^{\gets}(\Gamma_l - \Gamma_j + Q_n(n\gamma))-\mu_1\big)1_{[U_l,1]}(t) \leq -n\delta\right\}, \end{align*} then we have that $$ \text{(i)} \leq \P(A_n \cap (B'_n \cup B''_n)) \leq \P(A_n \cap B'_n) + \P(A_n \cap B''_n) = \P(A_n)(\P(B'_n)+\P(B''_n)) $$ where the last equality is from the independence of $A_n$ and $B_n'$ as well as of $A_n$ and $B_n''$ (which is, in turn, due to the independence of $\Gamma_j$ and $\Gamma_l - \Gamma_j$). From Lemma~\ref{lem:Djk} (c) and Proposition~\ref{prop:Jj}, $\P(A_n) \leq \P(\hat J_n^{\leqslant j} \in (\mathbb D\setminus \mathbb D_{<\, j})^{-\gamma/2}) = O\left((n\nu[n,\infty))^j\right)$, and hence, it suffices to show that the probabilities of the complements of $B_n'$ and $B_n''$ converge to 1---i.e., for any fixed $\gamma>0$, \begin{equation}\label{eq:lem_cn1} \P\left\{\sup_{t\in[0,1]}\sum_{l=j+1}^{(1+\lambda)n\nu_1} \big(Q_{n}^{\gets}(\Gamma_l - \Gamma_j)-\mu_1\big)1_{[U_l,1]}(t) < n\delta\right\}\to 1, \end{equation} and \begin{equation}\label{eq:lem_cn2} \P \left\{\inf_{t\in[0,1]}\sum_{l=j+1}^{(1+\lambda)n\nu_1} \big(Q_{n}^{\gets}(\Gamma_l - \Gamma_j + Q_n(n\gamma))-\mu_1\big)1_{[U_l,1]}(t) > -n\delta\right\}\to 1.
\end{equation} Starting with (\ref{eq:lem_cn1}), \begin{align*} &\P\left\{\sup_{t\in[0,1]}\sum_{l=j+1}^{(1+\lambda)n\nu_1} \big(Q_{n}^{\gets}(\Gamma_l - \Gamma_j)-\mu_1\big)1_{[U_l,1]}(t) < n\delta\right\}\\ &=\P\left\{\sup_{t\in[0,1]}\sum_{l=1}^{(1+\lambda)n\nu_1-j} \big(Q_{n}^{\gets}(\Gamma_l)-\mu_1\big)1_{[U_l,1]}(t) < n\delta\right\}\\ &\geq\P\left\{\sup_{t\in[0,1]}\sum_{l=1}^{(1+\lambda)n\nu_1-j} \big(Q_{n}^{\gets}(\Gamma_l)-\mu_1\big)1_{[U_l,1]}(t) < n\delta, \tilde N_n \leq (1+\lambda)n\nu_1 - j\right\}\\ &\geq\P\left\{\sup_{t\in[0,1]}\sum_{l=1}^{\tilde N_n} \big(Q_{n}^{\gets}(\Gamma_l)-\mu_1\big)1_{[U_l,1]}(t) < n\delta, \tilde N_n \leq (1+\lambda)n\nu_1 - j\right\}\\ &\geq\P\left\{\sup_{t\in[0,1]}\sum_{l=1}^{\tilde N_n} \big(Q_{n}^{\gets}(\Gamma_l)-\mu_1\big)1_{[U_l,1]}(t) < n\delta\right\} - \P\left\{\tilde N_n > (1+\lambda)n\nu_1 - j\right\}. \end{align*} The second inequality is due to the definition of $Q_{n}^\gets$ and that $\mu_1\geq 1$ (and hence $Q_{n}^\gets(\Gamma_l) - \mu_1 \leq 0$ for $l> \tilde N_n$), while the last inequality comes from the generic inequality $\P(A\cap B) \geq \P(A) - \P(B^c)$. The second probability converges to 0 since $\tilde N_n$ is Poisson with rate $n\nu_1$. Moving on to the first probability in the last expression, observe that $\sum_{l=1}^{\tilde N_n} \big(Q_{n}^{\gets}(\Gamma_l)-\mu_1\big)1_{[U_l,1]}(\cdot)$ has the same distribution as the compound Poisson process $\sum_{i=1}^{J(n\cdot)} (D_i-\mu_1)$, where $J$ is a Poisson process with rate $\nu_1$ and $D_i$'s are i.i.d.\ random variables with the distribution $\nu$ conditioned (and normalized) on $[1,\infty)$, i.e., $\P\{D_i \geq s\} = 1\wedge\big(\nu[s,\infty)/\nu[1,\infty)\big)$.
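As an illustrative aside, the compound Poisson comparison above can be checked by a toy Monte Carlo experiment. The sketch below assumes the pure-power special case $\nu[s,\infty)=s^{-\alpha}$ on $[1,\infty)$, so that $D_i$ is Pareto with $\P\{D_i\geq s\}=s^{-\alpha}$ and $\mu_1=\alpha/(\alpha-1)$; all numerical parameters are arbitrary choices, and the simulation simply confirms that the running maximum of the centered walk rarely reaches a level of order $n$.

```python
import random

random.seed(0)
ALPHA, N_STEPS, TRIALS = 1.5, 4000, 200
MU1 = ALPHA / (ALPHA - 1.0)   # E[D] under the assumed tail P(D >= s) = s^(-ALPHA)
LEVEL = 2000.0                # plays the role of n * delta

def pareto_sample():
    # inverse-transform sampling: P(D >= s) = s^(-ALPHA) for s >= 1
    return random.random() ** (-1.0 / ALPHA)

def walk_maximum():
    # running maximum of the centered walk sum_{l <= m} (D_l - MU1)
    s = peak = 0.0
    for _ in range(N_STEPS):
        s += pareto_sample() - MU1
        peak = max(peak, s)
    return peak

frac_below = sum(walk_maximum() < LEVEL for _ in range(TRIALS)) / TRIALS
```

The fraction of trials staying below the level is close to one, in line with \eqref{eq:lem_cn1}.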
Using this, we obtain \begin{align} &\P\left\{\sup_{t\in[0,1]}\sum_{l=1}^{\tilde N_n} \big(Q_{n}^{\gets}(\Gamma_l)-\mu_1\big)1_{[U_l,1]}(t) < n\delta\right\} \nonumber \\ &= \P\left\{\sup_{1\leq m \leq J(n)}\sum_{l=1}^{m} (D_l -\mu_1) < n\delta\right\} \label{eq:same_form_cp} \\ &\geq \P\left\{\sup_{1\leq m\leq 2n\nu_1} \sum_{l=1}^m (D_l-\mu_1) < n\delta , J(n) \leq 2n\nu_1\right\} \nonumber \\ &\geq \P\left\{\sup_{1\leq m\leq 2n\nu_1} \sum_{l=1}^m (D_l-\mu_1) < n\delta \right\} - \P\big\{J(n) > 2n\nu_1\big\}. \nonumber \end{align} The second probability vanishes at a geometric rate w.r.t.\ $n$ because $J(n)$ is Poisson with rate $n\nu_1$. The first term can be investigated by the generalized Kolmogorov inequality, cf.\ \cite{Shneer2009} (given as Result~\ref{result:gen_kol} in Appendix A): \begin{align*} \P\left(\max_{1\leq m\leq 2n\nu_1} \sum_{l=1}^m(D_l - \mu_1) \geq n\delta/2\right) &\leq C\frac{2n\nu_1V(n\delta/2)}{(n\delta/2)^2}, \end{align*} where $V(x) = \mathbf{E} [(D_l-\mu_1)^2; \mu_1 - x \leq D_l \leq \mu_1 + x]\leq \mu_1^2 + \mathbf{E}[ D_l^2; D_l \leq \mu_1 + x]$. Note that \begin{align*} \mathbf{E} [D_l^2; D_l \leq \mu_1 + x] &\leq \int_0^1 2s ds+\int_1^{\mu_1 + x} 2s \frac{\nu[s,\infty)}{\nu[1,\infty)}ds \\& =1 + \frac{2}{\nu_1} (\mu_1+x)^{2-\alpha}L(\mu_1+x), \end{align*} for some slowly varying $L$. Hence, $$\P\left(\max_{1\leq m\leq 2n\nu_1} \sum_{l=1}^m(D_l - \mu_1) < n\delta\right) \geq 1-\P\left(\max_{1\leq m\leq 2n\nu_1} \sum_{l=1}^m(D_l - \mu_1) \geq n\delta/2\right)\to 1,$$ as $n\to\infty$. Now, turning to (\ref{eq:lem_cn2}), let $\gamma_n \triangleq Q_n(n\gamma)$.
\begin{align*} &\P \left\{\inf_{t\in[0,1]}\sum_{l=j+1}^{(1+\lambda)n\nu_1} \big(Q_{n}^{\gets}(\Gamma_l - \Gamma_j + Q_n(n\gamma))-\mu_1\big)1_{[U_l,1]}(t) > -n\delta\right\} \\ &= \P \left\{\inf_{t\in[0,1]}\sum_{l=1}^{(1+\lambda)n\nu_1-j} \big(Q_{n}^{\gets}(\Gamma_l +\gamma_n)-\mu_1\big)1_{[U_l,1]}(t) > -n\delta\right\} \\ &\geq \P \left\{\inf_{t\in[0,1]}\sum_{l=1}^{(1+\lambda)n\nu_1-j} \big(Q_{n}^{\gets}(\Gamma_l +\gamma_n)-\mu_1\big)1_{[U_l,1]}(t) > -n\delta, E_0 \geq \gamma_n\right\} \\ &\geq \P \left\{\inf_{t\in[0,1]}\sum_{l=1}^{(1+\lambda)n\nu_1-j} \big(Q_{n}^{\gets}(\Gamma_l +E_0)-\mu_1\big)1_{[U_l,1]}(t) > -n\delta, E_0 \geq \gamma_n\right\} \\ &= \P \left\{\inf_{t\in[0,1]}\sum_{l=2}^{(1+\lambda)n\nu_1-j+1} \big(Q_{n}^{\gets}(\Gamma_l)-\mu_1\big)1_{[U_l,1]}(t) > -n\delta, \Gamma_1 \geq \gamma_n\right\} \\ &\geq \P \left\{\inf_{t\in[0,1]}\sum_{l=2}^{(1+\lambda)n\nu_1-j+1} \big(Q_{n}^{\gets}(\Gamma_l)-\mu_1\big)1_{[U_l,1]}(t) > -n\delta\right\} - \P\left\{\Gamma_1 < \gamma_n\right\}\\ &= (A)-(B), \end{align*} where $E_0$ is a standard exponential random variable. (Recall that $\Gamma_l \triangleq E_1 + E_2 + \cdots + E_l$, and hence $(\Gamma_l + E_0, U_l) \stackrel{\mathcal{D}}{=} (\Gamma_{l+1}, U_{l})\stackrel{\mathcal{D}}{=} (\Gamma_{l+1}, U_{l+1})$.) 
Since $(B) = \P\left\{\Gamma_1 < \gamma_n\right\} \to 0$ (recall that $\gamma_n = n\nu[n\gamma,\infty)$ and $\nu$ is regularly varying with index $-\alpha<-1$), we focus on proving that the first term (A) converges to 1: \begin{align*} (A) &=\P \Bigg\{\inf_{t\in[0,1]}\sum_{l=2}^{(1+\lambda)n\nu_1-j+1} \big(Q_{n}^{\gets}(\Gamma_l)-\mu_1\big)1_{[U_l,1]}(t) > -n\delta\Bigg\} \\ &\geq \P \Bigg\{\inf_{t\in[0,1]}\sum_{l=2}^{(1+\lambda)n\nu_1-j+1} \big(Q_{n}^{\gets}(\Gamma_l)-\mu_1\big)1_{[U_l,1]}(t) > -n\delta, \\ &\hspace{180pt} \tilde N_n \leq (1+\lambda)n\nu_1 -j+1 \Bigg\} \\ &\geq \P \Bigg\{\inf_{t\in[0,1]}\sum_{l=1}^{\tilde N_n} \big(Q_{n}^{\gets}(\Gamma_l)-\mu_1\big)1_{[U_l,1]}(t) \geq -n\delta/3, \\ &\hspace{80pt} \inf_{t\in[0,1]}-\big(Q_{n}^{\gets}(\Gamma_1)-\mu_1\big)1_{[U_1,1]}(t)> -n\delta/3, \\ &\hspace{80pt} \inf_{t\in[0,1]}\sum_{l=\tilde N_n+1}^{(1+\lambda)n\nu_1-j+1} \big(Q_{n}^{\gets}(\Gamma_l)-\mu_1\big)1_{[U_l,1]}(t) \geq -n\delta/3, \\ &\hspace{80pt} \tilde N_n \leq (1+\lambda)n\nu_1 -j +1 \Bigg\} \\ &\geq \P \Bigg\{\inf_{t\in[0,1]}\sum_{l=1}^{\tilde N_n} \big(Q_{n}^{\gets}(\Gamma_l)-\mu_1\big)1_{[U_l,1]}(t) \geq -n\delta/3 \Bigg\} \\ &\hspace{50pt} +\P\Big\{Q_{n}^{\gets}(\Gamma_1)-\mu_1< n\delta/3\Big\} \\ &\hspace{50pt} +\P\Bigg\{\inf_{t\in[0,1]}\sum_{l=\tilde N_n+1}^{(1+\lambda)n\nu_1-j+1} \big(Q_{n}^{\gets}(\Gamma_l )-\mu_1\big)1_{[U_l,1]}(t) \geq -n\delta/3 \Bigg\} \\ &\hspace{50pt} +\P\left\{\tilde N_n \leq (1+\lambda)n\nu_1 -j +1\right\}-3 \\ &=\text{(AI) + (AII) + (AIII) + (AIV)}-3. \end{align*} The third inequality comes from applying the generic inequality $\P(A\cap B) \geq \P(A) + \P(B) -1$ three times. Since $\tilde N_n$ is Poisson with rate $n\nu_1$, \begin{align*} \text{(AIV)} = \P\left\{\tilde N_n \leq (1+\lambda)n\nu_1 -j +1\right\} &= \P\left\{\frac{\tilde N_n}{n\nu_1 } \leq 1+\lambda-\frac{j -1}{n\nu_1 }\right\} \to 1.
\end{align*} For the first term (AI), \begin{align*} \text{(AI)} &=\P \Bigg\{\inf_{t\in[0,1]}\sum_{l=1}^{\tilde N_n} \big(Q_{n}^{\gets}(\Gamma_l )-\mu_1\big)1_{[U_l,1]}(t) \geq -n\delta/3 \Bigg\} \\ &= \P \Bigg\{\sup_{t\in[0,1]}\sum_{l=1}^{\tilde N_n} \big(\mu_1 - Q_{n}^{\gets}(\Gamma_l )\big)1_{[U_l,1]}(t) \leq n\delta/3 \Bigg\} \\ &= \P \Bigg\{\sup_{1\leq m\leq J(n)}\sum_{l=1}^{m} (\mu_1-D_l) \leq n\delta/3 \Bigg\}, \end{align*} where $D_i$ is defined as before. Note that this is of exactly the same form as (\ref{eq:same_form_cp}) except for the sign of $D_l$, and hence, we can proceed exactly the same way using the generalized Kolmogorov inequality to prove that this quantity converges to $1$ --- recall that the formula only involves the square of the increments, and hence, the change of the sign has no effect. For the second term (AII), \begin{align*} \text{(AII)} \geq \P\big\{Q_n^\gets(\Gamma_1) \leq n\delta/3\big\}\geq \P\big\{\Gamma_1 > Q_{n}(n\delta/3)\big\} \to 1, \end{align*} since $Q_{n}(n\delta/3)\to 0$. For the third term (AIII), since $Q_n^\gets(\Gamma_l)\geq 0$ for every $l$, \begin{align*} \text{(AIII)} &= \P\Bigg\{\inf_{t\in[0,1]}\sum_{l=\tilde N_n+1}^{(1+\lambda)n\nu_1-j+1} \big(Q_{n}^{\gets}(\Gamma_l)-\mu_1\big)1_{[U_l,1]}(t) \geq -n\delta/3 \Bigg\} \\ &\geq \P\Bigg\{\inf_{t\in[0,1]}\sum_{l=\tilde N_n+1}^{(1+\lambda)n\nu_1-j+1} (-\mu_1) 1_{[U_l,1]}(t) \geq -n\delta/3 \Bigg\} \\ &\geq \P\Bigg\{\sum_{l=\tilde N_n+1}^{(1+\lambda)n\nu_1-j+1} \mu_1 \leq n\delta/3 \Bigg\} \\ &\geq \P\Bigg\{\mu_1\big((1+\lambda)n\nu_1-j - \tilde N_n+1\big) \leq n\delta/3\Bigg\} \\ &\geq \P\Bigg\{1+\lambda-\frac{\delta}{3\nu_1\mu_1}\leq \frac{\tilde N_n}{n\nu_1} +\frac{j-1}{n\nu_1}\Bigg\} \\ &\to 1, \end{align*} since $\lambda < \frac{\delta}{3\nu_1\mu_1}$. This concludes the proof of the lemma.
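Both this proof and the next lean on maximal inequalities of Etemadi and Kolmogorov type. As a standalone sanity check (under assumed toy parameters, and using the textbook form of Etemadi's inequality $\P(\max_{k\leq n}|S_k|\geq 3t)\leq 3\max_{k\leq n}\P(|S_k|\geq t)$ rather than the exact variant cited in the appendix), a simulation with a symmetric $\pm1$ walk:

```python
import random

random.seed(2)
N_STEPS, TRIALS, T = 100, 20_000, 10

exceed_at_k = [0] * (N_STEPS + 1)  # counts of |S_k| >= T for each prefix length k
max_exceeds = 0                    # counts of max_k |S_k| >= 3T
for _ in range(TRIALS):
    s = 0
    peak = 0
    for k in range(1, N_STEPS + 1):
        s += random.choice((-1, 1))
        peak = max(peak, abs(s))
        if abs(s) >= T:
            exceed_at_k[k] += 1
    if peak >= 3 * T:
        max_exceeds += 1

lhs = max_exceeds / TRIALS           # estimates P(max_k |S_k| >= 3T)
rhs = 3 * max(exceed_at_k) / TRIALS  # estimates 3 * max_k P(|S_k| >= T)
```

Empirically the left side is far below the right side, as the inequality requires.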
\end{proof} \begin{lemma} \label{lem:key-upper-bound-2} For any $j\geq0$, $\delta>0$, and $m<\infty$, there is $\gamma_0>0$ such that \begin{align*} \P\left\{\left\|\bar J_n^{>j}\right\| > \delta, Q_{n}^\gets(\Gamma_j)\leq n\gamma_0 \right\}=o(n^{-m}). \end{align*} \end{lemma} \begin{proof} (Throughout the proof of this lemma, we use $\mu_1$ and $\nu_1$ in place of $\mu_1^+$ and $\nu_1^+$ respectively, for the sake of notational simplicity.) Note first that $Q_n^\gets(\Gamma_j)=\infty$ if $j=0$ and hence the claim of the lemma is trivial. Therefore, we assume $j\geq1$ throughout the rest of the proof. Since for any $\lambda > 0$ \begin{align} &\P\left\{\left\|\bar J_n^{>j} \right\| > \delta, Q_{n}^\gets(\Gamma_j)\leq n\gamma \right\} \nonumber \\ & \leq \P\Bigg\{\Bigg\|\sum_{l=j+1}^{\tilde N_n} (Q_{n}^\gets(\Gamma_l)-\mu_1) 1_{[U_l,1]} \Bigg\| > n\delta, Q_{n}^\gets(\Gamma_j)\leq n\gamma, \label{eq:key_bound} \\ \nonumber &\hspace{190pt} \frac{\tilde N_n}{n\nu_1} \in \left[\frac{j}{n\nu_1}, 1+\lambda\right] \Bigg\} \\ & \hspace{10pt}+ \P\left\{\frac{\tilde N_n}{n\nu_1} \notin \left[\frac{j}{n\nu_1}, 1+\lambda\right] \right\}, \nonumber \end{align} and $\P\left\{\frac{\tilde N_n}{n\nu_1} \notin \left[\frac{j}{n\nu_1}, 1+\lambda\right] \right\}$ decays at a geometric rate w.r.t.\ $n$, it suffices to show that (\ref{eq:key_bound}) is $o(n^{-m})$ for small enough $\gamma>0$. First, recall that by the definition of $Q_n^{\gets}(\cdot)$, $$ Q_n^{\gets}(x) \geq s \qquad \Longleftrightarrow \qquad x \leq Q_n(s),$$ and $$ n\nu(Q_n^\gets(x), \infty) \leq x \leq n\nu[Q_n^\gets(x),\infty).$$ Let $L$ be a random variable conditionally (on $\tilde N_n$) independent of everything else and uniformly sampled on $\{j+1, j+2,\ldots, \tilde N_n\}$. 
Recall that given $\tilde N_n$ and $\Gamma_j$, the distribution of $\{\Gamma_{j+1}, \Gamma_{j+2}, \ldots, \Gamma_{\tilde N_n}\}$ is the same as that of the order statistics of $\tilde N_n - j$ uniform random variables on $[\Gamma_j, n\nu[1,\infty)]$. Let $D_l, l\geq 1$, be i.i.d.\ random variables whose conditional distribution is the same as the conditional distribution of $Q_{n}^\gets (\Gamma_L)$ given $\tilde N_n$ and $\Gamma_j$. Then the conditional distribution of $\sum_{l=j+1}^{\tilde N_n} (Q_{n}^\gets(\Gamma_l)-\mu_1) 1_{[U_l,1]}$ is the same as that of $ \sum_{l=1}^{\tilde N_n-j} (D_l-\mu_1)1_{[U_l,1]} $. Therefore, the conditional distribution of $\left\|\sum_{l=j+1}^{\tilde N_n} (Q_{n}^\gets(\Gamma_l)-\mu_1) 1_{[U_l,1]}\right\|_\infty$ is the same as the corresponding conditional distribution of $\sup_{1\leq m \leq \tilde N_n-j}\allowbreak\Big|\sum_{l=1}^{m} (D_l-\mu_1)\Big|$. To make use of this in the analysis that follows, we make a few observations on the conditional distribution of $Q_n^\gets(\Gamma_L)$ given $\Gamma_j$ and $\tilde N_n$. \begin{itemize} \item[(a)] \emph{The conditional distribution of $Q_n^{\gets}(\Gamma_L)$:}\\ Let $q \triangleq Q_n^\gets(\Gamma_j)$. Since $\Gamma_L$ is uniformly distributed on $[\Gamma_j, Q_n(1)] = [\Gamma_j, n\nu[1,\infty)]$, the tail probability is \begin{align*} \P\{Q_n^\gets(\Gamma_L) \geq s | \Gamma_j, \tilde N_n \} &= \P\{ \Gamma_L \leq Q_n(s) | \Gamma_j, \tilde N_n \} \\ & = \P\{\Gamma_L \leq n\nu[s,\infty)|\Gamma_j, \tilde N_n \}\\ &= \P\left\{\left.
\frac{\Gamma_L - \Gamma_j}{n\nu[1,\infty) - \Gamma_j} \leq\frac{n\nu[s,\infty)-\Gamma_j}{n\nu[1,\infty)- \Gamma_j} \right| \Gamma_j, \tilde N_n\right\} \\ &= \frac{n\nu[s,\infty)-\Gamma_j}{n\nu[1,\infty)- \Gamma_j} \end{align*} for $s\in [1,q]$; since this is non-increasing w.r.t.\ $\Gamma_j$ and $n\nu(q,\infty) \leq \Gamma_j \leq n\nu[q,\infty)$, we have that \begin{align*} \frac{\nu[s,q)}{\nu[1,q)} \leq \P\{Q_n^\gets(\Gamma_L) \geq s | \Gamma_j, \tilde N_n \} \leq \frac{\nu[s,q]}{\nu[1,q]}. \end{align*} \item[(b)]\emph{Difference in mean between conditional and unconditional distribution:}\\ From (a), we obtain \begin{align*} \tilde\mu_n \triangleq \mathbf{E}[ Q_n^\gets(\Gamma_L)|\Gamma_j, \tilde N_n] \in \left[1+\int_1^q \frac{\nu[s,q)}{\nu[1,q)}ds, 1+\int_1^q \frac{\nu[s,q]}{\nu[1,q]}ds\right], \end{align*} and hence, \begin{align*} |\mu_1 - \tilde\mu_n| &\leq \left| \frac{\nu[1,q)\int_1^\infty \nu[s,\infty) ds - \nu[1,\infty)\int_1^q \nu[s,q)ds}{\nu[1,\infty)\nu[1,q)}\right| \\ &\hspace{15pt} \vee \left| \frac{\nu[1,q]\int_1^\infty \nu[s,\infty) ds - \nu[1,\infty)\int_1^q \nu[s,q]ds}{\nu[1,\infty)\nu[1,q]}\right|. 
\end{align*} Since \begin{align*} &\frac{\nu[1,q)\int_1^\infty \nu[s,\infty) ds - \nu[1,\infty)\int_1^q \nu[s,q)ds}{\nu[1,\infty)\nu[1,q)} \\ & = \frac{\nu[q,\infty)}{\nu[1,q)}(q-1) + \frac{\int_q^\infty \nu[s,\infty) ds}{\nu[1,\infty)} - \frac{\nu[q,\infty)\int_1^q \nu[s,\infty)ds}{\nu[1,\infty)\nu[1,q)}, \end{align*} and \begin{align*} &\frac{\nu[1,q)\int_1^\infty \nu[s,\infty) ds - \nu[1,\infty)\int_1^q \nu[s,q)ds}{\nu[1,\infty)\nu[1,q)} \\ &\hspace{100pt}-\frac{\nu[1,q]\int_1^\infty \nu[s,\infty) ds - \nu[1,\infty)\int_1^q \nu[s,q]ds}{\nu[1,\infty)\nu[1,q]}\\ &=\frac{\nu\{q\}\left((q-1)\nu[1,\infty) + \int_q^\infty \nu[s,\infty) ds + \int_1^q \nu[s,\infty)ds\right)}{\nu[1,\infty)(\nu[1,q)+\nu\{q\})}, \end{align*} we see that $|\mu_1-\tilde\mu_n|$ is bounded by a regularly varying function with index $1-\alpha$ (w.r.t.\ $q$) from Karamata's theorem. \item[(c)] \emph{Variance of $ Q_n^{\gets}(\Gamma_L)$:} Turning to the variance, we observe that, if $\alpha \leq 2$, \begin{equation} \label{eq:conditional_variance} \begin{aligned} & \mathbf{E} [Q_n^{\gets} (\Gamma_L) ^2 |\Gamma_j, \tilde N_n] \\ & \leq \int_0^1 2s ds + 2\int_1^q s\frac{\nu[s,q]}{\nu[1,q]}ds \\ & \leq 1+\frac{2}{\nu[1,q]}\int_1^q s\nu[s,\infty)ds = 1+q^{2-\alpha} L(q) \end{aligned} \end{equation} for some slowly varying function $L(\cdot)$. If $\alpha > 2$, the variance is bounded w.r.t.\ $q$. \end{itemize} Now, with (b) and (c) in hand, we can proceed with an explicit bound since all the randomness is contained in $q$. 
Namely, we infer \begin{align*} &\P\Bigg(\Bigg\|\sum_{l=j+1}^{\tilde N_n} (Q_{n}^\gets(\Gamma_l)-\mu_1) 1_{[U_l,1]} \Bigg\|_\infty > n\delta, Q_n^\gets(\Gamma_j) \leq n\gamma, \frac{\tilde N_n}{n\nu_1} \in \left[\frac{j}{n\nu_1}, 1+\lambda\right] \Bigg)\\ &=\P\Bigg(\Bigg\|\sum_{l=j+1}^{\tilde N_n} (Q_{n}^\gets(\Gamma_l)-\mu_1) 1_{[U_l,1]} \Bigg\|_\infty > n\delta, \Gamma_j \geq Q_{n}(n\gamma), \frac{\tilde N_n}{n\nu_1} \in \left[\frac{j}{n\nu_1}, 1+\lambda\right] \Bigg)\\ &=\mathbf{E}\left[ \P\Bigg(\Bigg\|\sum_{l=j+1}^{\tilde N_n} (Q_{n}^\gets(\Gamma_l)-\mu_1) 1_{[U_l,1]} \Bigg\|_\infty > n\delta \right|\Gamma_j, \tilde N_n\Bigg) ; \Gamma_j \geq Q_{n}(n\gamma), \\ &\hspace{260pt} \frac{\tilde N_n}{n\nu_1} \in \left[\frac{j}{n\nu_1}, 1+\lambda\right] \Bigg] \\& =\mathbf{E}\Bigg[ \P\Bigg(\left. \max_{1\leq m \leq \tilde N_n-j}\left|\sum_{l=1}^{m} (D_l-\mu_1)\right| > n\delta \right|\Gamma_j, \tilde N_n\Bigg) ; \Gamma_j \geq Q_{n}(n\gamma), \\ &\hspace{260pt} \frac{\tilde N_n}{n\nu_1} \in \left[\frac{j}{n\nu_1}, 1+\lambda\right]\Bigg]. 
\end{align*} By Etemadi's bound (Result~\ref{result:etimedi} in Appendix), \begin{equation}\label{eq:etemadi-in-action} \begin{aligned} & \P\left(\left.\max_{1\leq m \leq \tilde N_n-j}\left|\sum_{l=1}^m(D_l - \mu_1)\right|\geq n\delta\right|\Gamma_j, \tilde N_n\right) \\& \leq 3\max_{1\leq m \leq \tilde N_n}\P\left(\left.\left|\sum_{l=1}^m(D_l - \mu_1)\right|\geq n\delta\right|\Gamma_j, \tilde N_n\right) \\& \leq 3\max_{1\leq m \leq \tilde N_n}\Bigg\{\P\left(\left.\sum_{l=1}^m(D_l - \mu_1)\geq n\delta\right|\Gamma_j,\tilde N_n\right) \\ &\hspace{150pt} +\P\left(\left.\sum_{l=1}^m(\mu_1 - D_l)\geq n\delta\right|\Gamma_j,\tilde N_n\right)\Bigg\} \end{aligned} \end{equation} and as $|D_l-\tilde\mu_n|$ is bounded by $q$, we can apply Prokhorov's bound (Result~\ref{result:prokhorov} in Appendix) to get \begin{align*} &\P\left(\left.\sum_{l=1}^m(\mu_1-D_l)\geq n\delta\right|\Gamma_j,\tilde N_n\right) \\ & = \P\left(\left.\sum_{l=1}^m(\tilde\mu_n-D_l)\geq n\delta-m(\mu_1-\tilde\mu_n)\right|\Gamma_j,\tilde N_n\right) \\& \leq \P\left(\left.\sum_{l=1}^m(\tilde \mu_n-D_l)\geq n\delta-n\nu_1(1+\lambda)(\mu_1-\tilde\mu_n)\right|\Gamma_j,\tilde N_n\right) \\& \leq \left(\frac{qn(\delta - \nu_1(1+\lambda)(\mu_1-\tilde\mu_n))}{m\mathbf{var\,}(Q_n^\gets(\Gamma_L))}\right)^{-\frac{n(\delta - \nu_1(1+\lambda)(\mu_1-\tilde\mu_n))}{2q}} \\& \leq \left(\frac{n\nu_1(1+\lambda)\mathbf{var\,}(Q_n^\gets(\Gamma_L))}{qn(\delta - \nu_1(1+\lambda)(\mu_1-\tilde\mu_n))}\right)^{\frac{n(\delta - \nu_1(1+\lambda)(\mu_1-\tilde\mu_n))}{2q}} \end{align*} \begin{align*} =\left\{ \begin{array}{ll} \left(\frac{\nu_1(1+\lambda)(1+q^{2-\alpha}L_1(q))}{q(\delta - \nu_1(1+\lambda)q^{1-\alpha}L_2(q))}\right)^{\frac{n(\delta - \nu_1(1+\lambda)q^{1-\alpha}L_2(q))}{2q}} & \text{if } \alpha \leq 2, \\ \left(\frac{\nu_1(1+\lambda)C}{q(\delta - \nu_1(1+\lambda)q^{1-\alpha}L_2(q))}\right)^{\frac{n(\delta - \nu_1(1+\lambda)q^{1-\alpha}L_2(q))}{2q}} & \text{otherwise,} \end{array} \right. 
\end{align*} for some $C>0$ if $m\leq (1+\lambda)n\nu_1$. Therefore, there exist constants $M$ and $c$ such that $q \geq M$ (i.e., $\Gamma_j \leq Q_n(M)$) implies \begin{align*} \P\left(\left.\sum_{l=1}^m(\mu_1 - D_l) \geq n\delta\right|\Gamma_j\right) \leq c(q^{1-\alpha\wedge 2})^{\frac{n\delta}{8q}}, \end{align*} and since we are conditioning on $q = Q_{n}^\gets(\Gamma_j)\leq n\gamma$, \begin{align*} c(q^{1-\alpha\wedge 2})^{\frac{n\delta}{8q}} \leq c(q^{1-\alpha\wedge 2})^{\frac{\delta}{8\gamma}}. \end{align*} Hence, \begin{align*} \P\left(\left.\sum_{l=1}^m(\mu_1 - D_l) \geq n\delta\right|\Gamma_j\right) \leq c\left(Q_n^\gets(\Gamma_j)^{1-\alpha\wedge 2}\right)^{\frac{\delta}{8\gamma}}. \end{align*} With the same argument, we also get \begin{align*} \P\left(\left.\sum_{l=1}^m( D_l-\mu_1) \geq n\delta\right|\Gamma_j\right) \leq c\left(Q_n^\gets(\Gamma_j)^{1-\alpha\wedge 2}\right)^{\frac{\delta}{8\gamma}}. \end{align*} Combining (\ref{eq:etemadi-in-action}) with the two previous estimates, we obtain \begin{align*} \P\left(\left.\max_{1\leq m \leq \tilde N_n-j}\left|\sum_{l=1}^m(D_l - \mu_1)\right|\geq n\delta\right|\Gamma_j, \tilde N_n\right) \leq 6 c\left(Q_n^\gets(\Gamma_j)^{1-\alpha\wedge 2}\right)^{\frac{\delta}{8\gamma}}, \end{align*} on $\Gamma_j\geq Q_n(n\gamma)$, $\tilde N_n - j \leq n\nu_1(1+\lambda)$, and $\Gamma_j \leq Q_n(M)$. Now, \begin{align*} &\mathbf{E}\bigg[ \P\Bigg(\left. \max_{1\leq m \leq \tilde N_n-j}\left|\sum_{l=1}^{m} (D_l-\mu_1)\right| > n\delta \right|\Gamma_j, \tilde N_n\Bigg) ; \Gamma_j \geq Q_{n}(n\gamma)\ \\ &\hspace{220pt} \&\ \frac{\tilde N_n}{n\nu_1} \in \left[\frac{j}{n\nu_1}, 1+\lambda\right]\bigg] \\& \leq \mathbf{E}\bigg[ \P\Bigg(\left. 
\max_{1\leq m \leq \tilde N_n-j}\left|\sum_{l=1}^{m} (D_l-\mu_1)\right| > n\delta \right|\Gamma_j, \tilde N_n\Bigg) ; \Gamma_j \geq Q_{n}(n\gamma);\\ & \hspace{180pt} \frac{\tilde N_n}{n\nu_1} \in \left[\frac{j}{n\nu_1}, 1+\lambda\right]; \Gamma_j\leq Q_n(M)\bigg] \\& \hspace{20pt} + \P(\Gamma_j > Q_n(M)) \\& \leq \mathbf{E} \left[6 c\left(Q_n^\gets(\Gamma_j)^{1-\alpha\wedge 2}\right)^{\frac{\delta}{8\gamma}}\right] + \P(\Gamma_j > Q_n(M)) \\& \leq \mathbf{E} \left[6 c\left(Q_n^\gets(\Gamma_j)^{1-\alpha\wedge 2}\right)^{\frac{\delta}{8\gamma}};Q_n^\gets(\Gamma_j) \geq n^\beta \right] + \P \left( Q_n^\gets(\Gamma_j) < n^\beta \right) \\ &\hspace{250pt} + \P\big(\Gamma_j > Q_n(M)\big) \\& \leq 6 c\left(n^{\beta(1-\alpha\wedge 2)}\right)^{\frac{\delta}{8\gamma}} + \P \left( \Gamma_j > Q_n(n^\beta) \right) + \P\big(\Gamma_j > Q_n(M)\big) \\& \leq 6 c\left(n^{\beta(1-\alpha\wedge 2)}\right)^{\frac{\delta}{8\gamma}} + \P \left( \Gamma_j > n^{1-\alpha\beta}L(n) \right) + \P\big(\Gamma_j > Q_n(M)\big), \end{align*} for any $\beta>0$. If one chooses $\beta$ so that $1-\alpha\beta>0$ (for example, $\beta = \frac1{2\alpha}$), the second and third terms vanish at a geometric rate w.r.t.\ $n$. On the other hand, we can pick $\gamma$ small enough compared to $\delta$, so that the first term is decreasing at an arbitrarily fast polynomial rate. This concludes the proof of the lemma. \end{proof} Recall that we denote the Lebesgue measure on $[0,1]^\infty$ with $\text{Leb}$ and defined measures $\mu_\alpha^{(j)}$ and $\mu_\beta^{(j)}$ on $\mathbb R_+^{\infty \downarrow}$ as \begin{equation*} \mu_\alpha^{(j)}(dx_1, dx_2, \ldots) \triangleq \prod_{i=1}^j \nu_\alpha(dx_i)\mathbb{I}_{[x_1\geq x_2\geq \cdots \geq x_j > 0]} \prod_{i=j+1}^\infty \delta_0(dx_i), \end{equation*} where $\nu_\alpha(x,\infty) = x^{-\alpha}$ and $\delta_0$ is the Dirac measure concentrated at $0$; $\mu_\beta^{(j)}$ is defined analogously, with $\beta$ in place of $\alpha$.
\begin{lemma}\label{lem:poisson-jumps} For each $j\geq 0$, $$(n\nu[n,\infty))^{-j}\P[((Q_{n}^{\gets}(\Gamma_l)/n, l \geq 1), (U_l, l\geq 1)) \in \cdot] \to (\mu_\alpha^{(j)}\times \text{Leb})(\cdot)$$ in $\mathbb M\big((\mathbb R_+^{\infty\downarrow}\times[0,1]^\infty) \setminus (\mathbb H_{<\, j}\times [0,1]^\infty)\big)$ as $n\to\infty$. \end{lemma} \begin{proof} We first prove that \begin{equation}\label{eq:claim-poisson-jumps} (n\nu[n,\infty))^{-j}\P[(Q_{n}^{\gets}(\Gamma_l)/n, l \geq 1) \in \cdot] \to \mu_\alpha^{(j)}(\cdot) \end{equation} in $\mathbb M(\mathbb R_+^{\infty\downarrow}\setminus \mathbb H_{<\, j})$ as $n\to\infty$. To show this, we only need to check that \begin{equation}\label{eq:what-to-check} (n\nu[n,\infty))^{-j} \allowbreak \P[(Q_{n}^{\gets}(\Gamma_l)/n, l \geq 1) \in A] \to \mu_\alpha^{(j)}(A) \end{equation} for $A$'s that belong to the convergence-determining class $\mathcal A_{j} \triangleq \big\{\{z\in \mathbb{R}_+^{\infty\downarrow}: x_1 \leq z_1 , \ldots, x_l \leq z_l\}: l\geq j, x_1\geq\ldots\geq x_l>0\big\}$. To see that $\mathcal A_{j}$ is a convergence-determining class for $\mathbb M(\mathbb R_+^{\infty\downarrow}\setminus \mathbb H_{<\, j})$-convergence, note that $\mathcal A'_{j} \triangleq \big\{\{z\in \mathbb{R}_+^{\infty\downarrow}: x_1 \leq z_1 < y_1, \ldots, x_l \leq z_l < y_l\}: l\geq j,\ x_1,\ldots,x_l\in (0,\infty),\ y_1,\ldots,y_l \in (0,\infty] \big\}$ satisfies conditions (i), (ii), and (iii) of Lemma~\ref{lem:convergence-determining-class}, and hence, is a convergence-determining class. Now define $\mathcal A_j(i)$'s recursively as $\mathcal A_{j}(i+1) \triangleq \{B\setminus A: A,B\in \mathcal A_j(i), A\subseteq B\}$ for $i\geq 0$, and $\mathcal A_{j}(0) = \mathcal A_j'' \triangleq \big\{\{z\in \mathbb{R}_+^{\infty\downarrow}: x_1 \leq z_1 , \ldots, x_l \leq z_l\}: l\geq j, x_1,\ldots, x_l>0\big\}$.
Since we restrict the set-difference operation to nested sets, the limit associated with the sets in $\mathcal A_{j}(i+1)$ is determined by the sets in $\mathcal A_j(i)$, and eventually, $\mathcal A_j''$. Noting that $\mathcal A'_j \subseteq \bigcup_{i=0}^\infty \mathcal A_j(i)$, we see that $\mathcal A_j''$ is a convergence-determining class. Now, since both $\P[(Q_n^\gets(\Gamma_l)/n, l\geq 1)\in \cdot]$ and $\mu_\alpha^{(j)}(\cdot)$ are supported on $\mathbb{R}^{\infty\downarrow}_+$, one can further reduce the convergence-determining class from $\mathcal A_j''$ to $\mathcal A_j$. To check the desired convergence for the sets in $\mathcal A_j$, we first characterize the limit measure. Let $l \geq j$ and $x_1 \geq \cdots \geq x_l > 0$. By the change of variables $v_i = x_i^\alpha y_i^{-\alpha}$ for $i=1,\ldots, j$, \begin{align*} & \mu_\alpha^{(j)} (\{z \in \mathbb R_+^{\infty\downarrow}: x_1 \leq z_1, \ldots, x_l \leq z_l\}) \nonumber\\ & =\mathbb{I}(j=l)\cdot \int_{x_j}^\infty \cdots\int_{x_1}^\infty \mathbb{I}(y_1\geq \cdots \geq y_j) d\nu_\alpha(y_1)\cdots d\nu_\alpha(y_j) \nonumber \\ & = \mathbb{I}(j=l)\cdot \left(\prod_{i=1}^j x_i\right)^{-\alpha}\cdot\int_0^1\cdots \int_0^1 \mathbb{I}(x_1^{-\alpha}v_1 \leq \cdots \leq x_j^{-\alpha} v_j) dv_1\cdots dv_j. \end{align*} Next, we find a similar representation for the distribution of $\Gamma_1,\ldots, \Gamma_l$. Let $U_{(1)},\ldots, U_{(l-1)}$ be the order statistics of $l-1$ i.i.d.\ uniform random variables on $[0,1]$. Recall first that the conditional distribution of $(\Gamma_1/\Gamma_l,\ldots, \Gamma_{l-1}/\Gamma_l)$ given $\Gamma_l$ does not depend on $\Gamma_l$ and coincides with the distribution of $(U_{(1)},\ldots, U_{(l-1)})$; see, for example, \cite{pyke1965spacings}. Suppose that $l\geq j$ and $0\leq y_1\leq \cdots \leq y_l$.
By the change of variables $u_i = \gamma^{-1} y_i v_i$ for $i=1,\ldots,l-1$, and $\gamma = y_{l}v_l$, \begin{align*} & \P\big(\Gamma_1 \leq y_1,\ldots, \Gamma_l \leq y_l\big) \\ & = \mathbf{E} \Big[\P\big({\Gamma_1}/{\Gamma_l} \leq {y_1}/{\Gamma_l},\,\ldots,\, {\Gamma_{l-1}}/{\Gamma_l} \leq {y_{l-1}}/{\Gamma_l}\big| \Gamma_l\big)\cdot\mathbb{I}\big(\Gamma_l \leq y_l\big)\Big] \\ & = \int_0^{y_l}\P\big(U_{(1)} \leq y_1/\gamma,\ldots,\, U_{(l-1)} \leq y_{l-1}/\gamma\big) \frac{e^{-\gamma}\gamma^{l-1}}{(l-1)!}d\gamma \\ & = \int_0^{y_l} e^{-\gamma}\gamma^{l-1}\int_0^{y_{l-1}/\gamma}\cdots\int_0^{y_1/\gamma}\mathbb{I}(u_1\leq\cdots\leq u_{l-1} \leq 1) du_1\cdots du_{l-1}d\gamma \\ & = \left(\prod_{i=1}^{l-1} y_i\right)\int_0^{y_l} e^{-\gamma}\int_0^1\cdots\int_0^1\mathbb{I}(y_1v_1\leq\cdots\leq y_{l-1} v_{l-1} \leq \gamma) dv_1\cdots dv_{l-1}d\gamma \\ & = \left(\prod_{i=1}^{l} y_i\right)\cdot \int_0^1\cdots\int_0^1 e^{-y_lv_l}\mathbb{I}(y_1v_1\leq \cdots \leq y_{l}v_{l}) dv_1\cdots dv_l. \end{align*} Since $0\leq Q_n(nx_1)\leq \ldots \leq Q_n(nx_l)$ for $x_1\geq \cdots \geq x_l > 0$, \begin{align*} & (n\nu[n,\infty))^{-j}\P[Q_n^\gets(\Gamma_1)/n \geq x_1, \ldots, Q_n^\gets(\Gamma_l)/n \geq x_l] \\ & = (n\nu[n,\infty))^{-j}\P[\Gamma_1\leq Q_n(n x_1), \ldots, \Gamma_l\leq Q_n(n x_l)] \\ & = (n\nu[n,\infty))^{-j}\cdot\left(\prod_{i=1}^{l} Q_n(nx_i)\right) \\ &\hspace{50pt} \cdot\int_0^1\cdots\int_0^1 e^{-Q_n(nx_l)v_l}\mathbb{I}(Q_n(nx_1)v_1\leq \cdots \leq Q_n(nx_l)v_{l}) dv_1\cdots dv_l \\ & = \Bigg(\prod_{i=1}^{j} \frac{Q_n(nx_i)}{n\nu[n,\infty)}\Bigg)\cdot \Bigg(\prod_{i=j+1}^{l} Q_n(nx_i)\Bigg) \\ &\hspace{30pt} \cdot\int_0^1\cdots\int_0^1 e^{-Q_n(nx_l)v_l}\mathbb{I}\bigg(\frac{Q_n(nx_1)}{n\nu[n,\infty)}v_1\leq \cdots \leq \frac{Q_n(nx_l)}{n\nu[n,\infty)}v_{l}\bigg) dv_1\cdots dv_l. \end{align*} Note that $Q_n(nx_i)\to 0$ and $\frac{Q_n(nx_i)}{n\nu[n,\infty)}\to x_i^{-\alpha}$ as $n\to\infty$ for each $i=1,\ldots, l$.
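The normalization $Q_n(nx)/(n\nu[n,\infty))\to x^{-\alpha}$ invoked here can be illustrated numerically for a concrete regularly varying tail; the sketch below assumes the toy tail $\nu[s,\infty)=s^{-\alpha}\log(e+s)$ and has no bearing on the proof.

```python
import math

ALPHA = 1.5

def nu_tail(s):
    # assumed regularly varying tail nu[s, inf) = s^(-ALPHA) * log(e + s)
    return s ** (-ALPHA) * math.log(math.e + s)

def normalized_ratio(n, x):
    # Q_n(n x) / (n * nu[n, inf)) with Q_n(s) = n * nu[s, inf)
    return nu_tail(n * x) / nu_tail(n)

# the error against the limit x^(-ALPHA) shrinks (slowly) as n grows,
# reflecting the slowly varying logarithmic factor
errors = [abs(normalized_ratio(10 ** k, 2.0) - 2.0 ** -ALPHA) for k in (4, 6, 8)]
```

The errors decrease with $n$, at the slow pace typical of logarithmic slowly varying corrections.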
Therefore, by bounded convergence, \begin{align*} & (n\nu[n,\infty))^{-j}\P[Q_n^\gets(\Gamma_1)/n \geq x_1, \ldots, Q_n^\gets(\Gamma_l)/n \geq x_l] \\ &\to \mathbb{I}(j=l)\Bigg(\prod_{i=1}^{j}x_i\Bigg)^{-\alpha} \cdot\int_0^1\cdots\int_0^1 \mathbb{I}(x_1^{-\alpha}v_1\leq \cdots \leq x_j^{-\alpha}v_{j}) dv_1\cdots dv_j \\ & = \mu_\alpha^{(j)} (\{z \in \mathbb R_+^{\infty\downarrow}: x_1 \leq z_1, \ldots, x_l \leq z_l\}), \end{align*} which concludes the proof of \eqref{eq:claim-poisson-jumps}. The conclusion of the lemma follows from the independence of $(Q_n^\gets(\Gamma_l)/n, l\geq 1)$ and $(U_l,l\geq 1)$ and Lemma~\ref{thm:simple-product-space}. \end{proof} \begin{lemma}\label{lem:Djk} Suppose that $x_1 \geq \cdots \geq x_j \geq 0$; $u_i\in(0,1)$ for $i=1,\ldots,j$; $y_1 \geq \cdots \geq y_k \geq 0$; $v_i \in (0,1)$ for $i=1, \ldots,k$; $u_1,\ldots,u_j, v_1, \ldots, v_k$ are all distinct. \begin{itemize} \item[(a)] For any $\epsilon>0$, \begin{align*} & \{x \in G: d(x,y)< (1+\epsilon)\delta \text{ implies } y\in G \text{ for all } y\} \\ & \subseteq G^{-\delta} \\ & \subseteq\{x \in G: d(x,y)< \delta \text{ implies } y\in G \text{ for all } y\}. \end{align*} Also, $(A\cap B)_\delta \subseteq A_\delta \cap B_\delta$ and $A^{-\delta} \cup B^{-\delta} \subseteq (A\cup B)^{-\delta}$ for any $A$ and $B$. \item[(b)] $\sum_{i=1}^j x_i 1_{[u_i,1]} \in (\mathbb D \setminus \mathbb D_{<\,j})^{-\delta}$ implies $x_j \geq \delta$. \item[(c)] $\sum_{i=1}^j x_i 1_{[u_i,1]} \notin (\mathbb D \setminus \mathbb D_{<\, j})^{-\delta}$ implies $x_j \leq 2\delta$. \item[(d)] $\sum_{i=1}^j x_i 1_{[u_i,1]} - \sum_{i=1}^k y_i 1_{[v_i,1]} \in (\mathbb D \setminus \mathbb D_{< j,k})^{-\delta}$ implies $x_j \geq \delta$ and $y_k \geq \delta$. \item[(e)] Suppose that $\xi \in \mathbb D_{j,k}$. If $l<j$ or $m<k$, then $\xi$ is bounded away from $\mathbb D_{l,m}$. \item[(f)] If $I(\xi) > (\alpha -1)j + (\beta-1)k$, then $\xi$ is bounded away from $\mathbb D_{<j,k}\cup \mathbb D_{j,k}$.
\end{itemize} \end{lemma} \begin{proof} (a) Immediate consequences of the definition. (b) From (a), we see that $\sum_{i=1}^j x_i 1_{[u_i,1]} \in (\mathbb D \setminus \mathbb D_{<\, j})^{-\delta}$ and $\sum_{i=1}^{j-1} x_i 1_{[u_i,1]}\in \mathbb D_{<\, j}$ implies $d\Big(\sum_{i=1}^j x_i 1_{[u_i,1]},\allowbreak\sum_{i=1}^{j-1} x_i 1_{[u_i,1]}\Big) \geq \delta$, which is not possible if $x_j < \delta$. (c) We prove that for any $\epsilon>0$, $\sum_{i=1}^j x_i 1_{[u_i,1]} \notin (\mathbb D \setminus \mathbb D_{<\,j})^{-\delta}$ implies $x_j \leq (2+\epsilon)\delta$. To show this, we argue by contraposition. Suppose that $x_j> (2+\epsilon)\delta$. If $d(\sum_{i=1}^j x_i 1_{[u_i,1]}, \zeta)< (1+\epsilon/2)\delta$, by the definition of the Skorokhod metric, there exists a non-decreasing homeomorphism $\phi$ of $[0,1]$ onto itself such that $\|\sum_{i=1}^j x_i 1_{[u_i,1]}-\zeta\circ \phi\|_\infty < (1+\epsilon/2)\delta$. Note that at each discontinuity point of $\sum_{i=1}^{j}x_i1_{[u_i,1]}$, $\zeta\circ\phi$ must also be discontinuous. Otherwise, the supremum distance between $\sum_{i=1}^j x_i 1_{[u_i,1]}$ and $\zeta\circ \phi$ has to be greater than $(1+\epsilon/2)\delta$, since the smallest jump size of $\sum_{i=1}^j x_i 1_{[u_i,1]}$ is greater than $(2+\epsilon)\delta$. Hence, there must be at least $j$ discontinuities in the path of $\zeta$; i.e., $\zeta \in \mathbb D \setminus \mathbb D_{<\,j}$. We have shown that $d(\sum_{i=1}^j x_i 1_{[u_i,1]}, \zeta)< (1+\epsilon/2)\delta$ implies $\zeta\in \mathbb D \setminus \mathbb D_{<\,j}$, which in turn, along with (a), shows that $\sum_{i=1}^j x_i 1_{[u_i,1]} \in (\mathbb D \setminus \mathbb D_{<\, j})^{-\delta}$. (d) Suppose that $\sum_{i=1}^j x_i 1_{[u_i,1]} - \sum_{i=1}^k y_i 1_{[v_i,1]}\in (\mathbb D\setminus \mathbb D_{<j,k})^{-\delta}$.
Since $\sum_{i=1}^{j-1} x_i 1_{[u_i,1]} - \sum_{i=1}^k y_i 1_{[v_i,1]}\notin \mathbb D \setminus \mathbb D_{<j,k}$, $$x_j \geq d\left(\sum_{i=1}^j x_i 1_{[u_i,1]} - \sum_{i=1}^k y_i 1_{[v_i,1]}, \ \sum_{i=1}^{j-1} x_i 1_{[u_i,1]} - \sum_{i=1}^k y_i 1_{[v_i,1]}\right) \geq \delta.$$ Similarly, we get $y_k \geq \delta$. (e) Let $\xi = \sum_{i=1}^j x_i 1_{[u_i,1]} - \sum_{i=1}^k y_i 1_{[v_i,1]}$. First, we claim that $d(\zeta, \xi) \geq x_j/2$ for any $\zeta\in \mathbb D_{l,m}$ with $l < j$. Suppose not, i.e., $d(\zeta, \xi) < x_j / 2$. Then there exists a non-decreasing homeomorphism $\phi$ of $[0,1]$ onto itself such that $\|\xi-\zeta\circ \phi\|_\infty < x_j/2$. Note that this implies that at each discontinuity point $s$ at which $\xi$ jumps upward, $\zeta\circ\phi$ must also be discontinuous. Otherwise, $|\zeta\circ\phi(s)- \xi(s)|+|\zeta\circ\phi(s-) - \xi(s-)|\geq|\xi(s)-\xi(s-)| \geq x_j$, contradicting the bound on the supremum distance between $\xi$ and $\zeta\circ \phi$. However, this implies that $\zeta$ has at least $j$ upward jumps, contradicting the assumption $\zeta \in \mathbb D_{l,m}$ and proving the claim. Likewise, $d(\zeta, \xi) \geq y_k/2$ for any $\zeta \in \mathbb D_{l,m}$ with $m < k$. (f) Note that in case $I(\xi)$ is finite, $\mathcal D_+(\xi) > j$ or $\mathcal D_-(\xi) > k$. In this case, the conclusion is immediate from (e). In case $I(\xi)=\infty$, either $\mathcal D_+(\xi) =\infty$, $\mathcal D_-(\xi)=\infty$, $\xi(0) \neq 0$, or $\xi$ contains a continuous non-constant piece. By containing a continuous non-constant piece, we refer to the case that there exist $t_1$ and $t_2$ such that $t_1 < t_2$, $\xi(t_1)\neq \xi(t_2-)$ and $\xi$ is continuous on $(t_1, t_2)$. For the first two cases, where the number of jumps is infinite, the conclusion is an immediate consequence of (e). The case $\xi(0)\neq 0$ is also obvious.
It remains to deal with the last case, in which $\xi$ contains a continuous non-constant piece. To discuss this case, assume w.l.o.g.\ that $\xi(t_1) < \xi(t_2-)$. We claim that $d(\xi,\mathbb D_{j,k}) \geq \frac{\xi(t_2-)-\xi(t_1)}{2(j+1)}$. Note that for any step function $\zeta$, \begin{align*} \|\xi-\zeta\| & \geq |\xi(t_2-)-\zeta(t_2-)|\vee |\xi(t_1)-\zeta(t_1)| \\ & \geq (\xi(t_2-)-\zeta(t_2-)) \vee (\zeta(t_1)-\xi(t_1)) \\ & \geq \frac{1}{2}\Big\{ (\xi(t_2-) - \xi(t_1))-(\zeta(t_2-)-\zeta(t_1))\Big\} \\ & \geq \frac12 \Big\{(\xi(t_2-) - \xi(t_1)) - \sum_{t\in(t_1,t_2)} \big(\zeta(t)-\zeta(t-)\big) \Big\} \\ & \geq \frac12 \Big\{(\xi(t_2-)-\xi(t_1)) - 2\mathcal D_+(\zeta) \|\xi-\zeta\|\Big\}, \end{align*} where the last inequality is due to the fact that $\|\xi-\zeta\| \geq \frac{\zeta(t)-\zeta(t-)}{2}$ for all $t\in(t_1,t_2)$. From this, we get $$ \|\xi-\zeta\| \geq \frac{\xi(t_2-)-\xi(t_1)}{2(\mathcal D_+(\zeta) + 1)} \geq \frac{\xi(t_2-)-\xi(t_1)}{2(j + 1)}, $$ for $\zeta \in \mathbb D_{j,k}$. Now, suppose that $\zeta \in \mathbb D_{j,k}$. Since $\zeta\circ\phi$ is again in $\mathbb D_{j,k}$ for any non-decreasing homeomorphism $\phi$ of $[0,1]$ onto itself, $$ d(\xi, \zeta) \geq \frac{\xi(t_2-)-\xi(t_1)}{2(j + 1)}, $$ which proves the claim. \end{proof} Now we move on to the proof of Theorem~\ref{thm:two-sided-limit-theorem}. We first establish Theorem~\ref{thm:multi-d-limit-theorem}, which plays a key role in the proof. Recall that $\mathbb D_{<j} = \bigcup_{0\leq l<j} \mathbb D_l$ and let $\mathbb D_{<(j_1,\ldots,j_d)} \triangleq \bigcup_{(l_1,\ldots,l_d) \in \mathbb{I}_{<(j_1,\ldots,j_d)}}\prod_{i=1}^d\mathbb D_{l_i}$ where $ \mathbb{I}_{<(j_1,\ldots,j_d)} \triangleq \big\{(l_1,\ldots,l_d)\allowbreak\in \mathbb Z_+^d\setminus\{(j_1,\ldots,j_d)\}: (\alpha_1-1)l_1+\cdots+(\alpha_d-1)l_d \leq (\alpha_1-1)j_1+\cdots+(\alpha_d-1)j_d\big\}$.
For each $l\in \mathbb Z_+$ and $i=1,\ldots,d$, let $ C_l^{(i)}(\cdot) \triangleq \mathbf{E} \Big[\nu_{\alpha_i}^l \{x\in (0,\infty)^l:\sum_{j=1}^l x_j 1_{[U_j,1]}\in \cdot\}\Big] $ where $U_1,\ldots,U_l$ are iid uniform on $[0,1]$, and $ \nu_{\alpha_i}^l $ is as defined right below \eqref{math-display-above-definition-nu-alpha-j}. \begin{theorem}\label{thm:multi-d-limit-theorem} Consider independent 1-dimensional L\'evy processes $X^{(1)},\allowbreak\ldots, X^{(d)}$ with spectrally positive L\'evy measures $\nu_1(\cdot), \ldots,\nu_d(\cdot)$, respectively. Suppose that each $\nu_i$ is regularly varying (at infinity) with index $-\alpha_i<-1$, and let $\bar X^{(i)}_n$ be the centered and scaled version of $X^{(i)}$ for each $i=1,\ldots,d$. Then, for each $(j_1,\ldots,j_d)\in \mathbb Z_+^d$, $$ \frac{\P((\bar X_n^{(1)},\ldots,\bar X_n^{(d)})\in \cdot)}{\prod_{i=1}^d\big(n\nu_i[n,\infty)\big)^{j_i}}\to C_{j_1}^{(1)}\times\cdots\times C_{j_d}^{(d)}(\cdot) $$ in $\mathbb M\Big(\prod_{i=1}^d\mathbb D\setminus \mathbb D_{<(j_1,\ldots,j_d)}\Big)$. \end{theorem} \begin{proof} From Theorem~\ref{thm:one-sided-limit-theorem}, we know that $ (n\nu_i[n,\infty))^{-j}\P(\bar X^{(i)}_n\in \cdot)\to C_{j}^{(i)}(\cdot) $ in $\mathbb M(\mathbb D\setminus \mathbb D_{<j})$ for $i=1,\ldots,d$ and any $j\geq 0$. Combining this with Lemma~\ref{thm:simple-product-space}, for each $(l_1,\ldots,l_d)\in \mathbb Z_+^d$ we obtain $$ \prod_{i=1}^d\big(n\nu_i[n,\infty)\big)^{-l_i}\P((\bar X^{(1)}_n,\ldots,\bar X^{(d)}_n)\in \cdot)\to C_{l_1}^{(1)}\times \cdots \times C_{l_d}^{(d)}(\cdot) $$ in $ \mathbb M\Big( \prod_{i=1}^d\mathbb D \setminus \mathbb C_{(l_1,\ldots,l_d)}\Big) $ where $ \mathbb C_{(l_1,\ldots,l_d)} \triangleq \bigcup_{i=1}^d (\mathbb D^{i-1} \times \mathbb D_{<l_i} \times \mathbb D^{d-i}) $.
Since $ \mathbb D_{<(j_1,\ldots,j_d)} = \bigcap_{(l_1,\ldots,l_d)\notin \mathbb{I}_{<(j_1,\ldots,j_d)}}\allowbreak\mathbb C_{(l_1,\ldots,l_d)}, $ our strategy is to proceed with Lemma~\ref{thm:union-limsup} to obtain the desired $\mathbb M\Big(\prod_{i=1}^d\mathbb D\setminus \mathbb D_{<(j_1,\ldots,j_d)}\Big)$-convergence by combining the $\mathbb M\Big( \prod_{i=1}^d\mathbb D \setminus \mathbb C_{(l_1,\ldots,l_d)}\Big)$-convergences for $(l_1,\ldots,l_d)\notin \mathbb{I}_{<(j_1,\ldots,j_d)}$. We first rewrite the infinite intersection over $\mathbb Z_+^d \setminus \mathbb{I}_{<(j_1,\ldots,j_d)}$ as a finite one to facilitate the application of the lemma. Consider a partial order $\prec$ on $\mathbb Z_+^{d}$ such that $(l_1,\ldots,l_d) \prec (m_1,\ldots,m_d)$ if and only if $\mathbb C_{(l_1,\ldots,l_d)} \subsetneq \mathbb C_{(m_1,\ldots,m_d)}$. Note that this is equivalent to $l_i\leq m_i$ for $i=1,\ldots,d$ and $l_i < m_i$ for at least one $i=1,\ldots,d$. Let $\mathbb{J}_{j_1,\ldots,j_d}$ be the subset of $\mathbb Z_+^{d}$ consisting of the minimal elements of $\mathbb Z_+^{d}\setminus \mathbb{I}_{<(j_1,\ldots,j_d)}$, i.e., $ \mathbb{J}_{j_1,\ldots,j_d} \triangleq \{(l_1,\ldots,l_d)\in \mathbb Z_+^{d}\setminus \mathbb{I}_{<(j_1,\ldots,j_d)}: (m_1,\ldots,m_d) \prec (l_1,\ldots,l_d) \text{ implies } (m_1,\ldots,m_d) \in \mathbb{I}_{<(j_1,\ldots,j_d)}\} $. Figure~\ref{fig1} illustrates how the sets $\mathbb{I}_{<(j_1,\ldots,j_d)}$ and $\mathbb{J}_{j_1,\ldots,j_d}$ look when $d=2$, $j_1=2$, $j_2=2$, $\alpha_1 = 2$, $\alpha_2 = 3$. 
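Since the sets $\mathbb{I}_{<(j_1,\ldots,j_d)}$ and $\mathbb{J}_{j_1,\ldots,j_d}$ are defined by elementary arithmetic conditions, the configuration in Figure~\ref{fig1} can be double-checked by brute-force enumeration. The following Python sketch is ours and purely illustrative (all identifiers are ad hoc); it recomputes both sets for $d=2$, $(j_1,j_2)=(2,2)$, $\alpha_1=2$, $\alpha_2=3$.

```python
from itertools import product

alpha = (2, 3)   # (alpha_1, alpha_2), as in Figure 1
j = (2, 2)       # (j_1, j_2)

def cost(l):
    # (alpha_1 - 1) l_1 + (alpha_2 - 1) l_2
    return sum((a - 1) * li for a, li in zip(alpha, l))

grid = list(product(range(10), repeat=2))  # a large enough finite window
I_less = {l for l in grid if l != j and cost(l) <= cost(j)}
complement = [l for l in grid if l not in I_less]

def precedes(m, l):
    # the partial order: componentwise <=, strict in at least one coordinate
    return m != l and all(mi <= li for mi, li in zip(m, l))

# minimal elements of the complement of I_less
J = sorted(l for l in complement
           if all(m in I_less for m in grid if precedes(m, l)))
print(J)  # [(0, 4), (1, 3), (2, 2), (5, 1), (7, 0)]
```

The five minimal pairs returned are exactly the red dots in Figure~\ref{fig1}, so that $\mathbb D_{<(j_1,j_2)}$ is the finite intersection of $\mathbb C_{(l_1,l_2)}$ over these pairs.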
\begin{figure} \centering \begin{tikzpicture} \tikzstyle{axes}=[] \tikzstyle{important line}=[very thick] \draw[style=help lines, step=0.5cm] (-0.1,-0.1) grid (4.1,2.6); \begin{scope}[style=axes] \draw[->] (-0.4,0) -- (4.4,0) node[right] {$l_1$} coordinate(x axis); \draw[->] (0.01,-0.4) -- (0.01,2.9) node[above] {$l_2$} coordinate(y axis); \end{scope} \draw (-0.25,-0.25) node{$0$}; \draw (1,-0.25) node{$2$}; \draw (2,-0.25) node{$4$}; \draw (3,-0.25) node{$6$}; \draw (4,-0.25) node{$8$}; \draw (-0.25,1) node{$2$}; \draw (-0.25,2) node{$4$}; \node[label=270:{$(j_1,j_2)=(2,2)$}] (B) at (6,2.1) {}; \draw [->](1.1,1.1) to [out=45,in=180] (2.5,1.8) to [out=0,in=160] (3.5,0.7) to [out=340,in=230] (5,1.5); \fill[red] (1,1) circle (3pt); \fill[red] (0,2) circle (3pt); \fill[red] (0.5,1.5) circle (3pt); \fill[red] (3.5,0) circle (3pt); \fill[red] (2.5,0.5) circle (3pt); \fill[blue] (0,0) circle (3pt); \fill[blue] (0,0.5) circle (3pt); \fill[blue] (0,1) circle (3pt); \fill[blue] (0,1.5) circle (3pt); \fill[blue] (0.5,0) circle (3pt); \fill[blue] (0.5,0.5) circle (3pt); \fill[blue] (0.5,1) circle (3pt); \fill[blue] (1,0) circle (3pt); \fill[blue] (1,0.5) circle (3pt); \fill[blue] (1.5,0) circle (3pt); \fill[blue] (1.5,0.5) circle (3pt); \fill[blue] (2,0) circle (3pt); \fill[blue] (2,0.5) circle (3pt); \fill[blue] (2.5,0) circle (3pt); \fill[blue] (3,0) circle (3pt); \draw[red,dashed,very thick] (-0.4,1.7) -- (3.8,-0.4); \end{tikzpicture} \caption{An example of $\mathbb{I}_{<(j_1,\ldots,j_d)}$ and $\mathbb{J}_{j_1,\ldots,j_d}$ where $d=2$, $j_1=2$, $j_2 = 2$, $\alpha_1 = 2$, and $\alpha_2 = 3$. The blue dots represent the elements of $\mathbb{I}_{<(j_1,j_2)}$, and the red dots represent the elements of $\mathbb{J}_{j_1,j_2}$.
The dashed red line represents $(l_1, l_2)$ such that $(\alpha_1-1)l_1 +(\alpha_2-1)l_2 = (\alpha_1-1)j_1 +(\alpha_2-1)j_2$.} \label{fig1} \end{figure} It is straightforward to show that $|\mathbb{J}_{j_1,\ldots,j_d}|<\infty$, and that $(m_1,\ldots,m_d)\notin \mathbb{I}_{<(j_1,\ldots,j_d)}$ implies $\mathbb C_{(l_1,\ldots,l_d)} \subseteq \mathbb C_{(m_1,\ldots,m_d)}$ for some $(l_1,\ldots,l_d) \in \mathbb{J}_{j_1,\ldots,j_d}$; therefore, $\mathbb D_{<(j_1,\ldots,j_d)} = \bigcap_{(l_1,\ldots,l_d)\in \mathbb{J}_{j_1,\ldots,j_d}} \mathbb C_{(l_1,\ldots,l_d)}$. In view of this and the fact that $\frac{\prod_{i=1}^d \big(n\nu_i[n,\infty)\big)^{l_i}}{\prod_{i=1}^d \big(n\nu_i[n,\infty)\big)^{j_i}}\to 0$ for $(l_1,\ldots,l_d)\in \mathbb{J}_{j_1,\ldots,j_d}\setminus \{(j_1,\ldots,j_d)\}$, the conclusion of the theorem follows from Lemma~\ref{thm:union-limsup} if we show that for each $r>0$, $ \xi \triangleq (\xi_1,\ldots,\xi_d)\notin \big(\bigcup_{(l_1,\ldots,l_d)\in \mathbb{I}_{<(j_1,\ldots,j_d)}}\prod_{i=1}^d\mathbb D_{l_i}\big)^r $ implies $ \xi\notin (\mathbb C_{(l_1,\ldots,l_d)})^r \text{ for some }(l_1,\ldots,l_d)\in \mathbb{J}_{j_1,\ldots,j_d} $. To see that this is the case, suppose that $\xi$ is bounded away from $\bigcup_{(l_1,\ldots,l_d)\in \mathbb{I}_{<(j_1,\ldots,j_d)}}\prod_{i=1}^d\mathbb D_{l_i}$ by $r>0$. Let $m_i\triangleq \inf\{k\geq0: \xi_i \in (\mathbb D_{\leqslant k})^r\}$. In case $m_i = \infty$ for some $i$, one can pick a large enough $M\in \mathbb Z_+$ such that $M\mathbf e_i \notin \mathbb{I}_{<(j_1,\ldots,j_d)}$ where $\mathbf e_i$ is the unit vector with 0 entries except for the $i$-th coordinate. Letting $(l_1,\ldots,l_d)\in \mathbb{J}_{j_1,\ldots,j_d}$ be an index such that $\mathbb C_{(l_1,\ldots,l_d)} \subseteq \mathbb C_{M\mathbf e_i}$, we find that $\xi \notin (\mathbb C_{(l_1,\ldots,l_d)})^r \subseteq (\mathbb C_{M\mathbf e_i})^r$, verifying the premise.
If $\max_{i=1,\ldots,d} m_i < \infty$, $\xi \in (\prod_{i=1}^d \mathbb D_{m_i})^r$ and hence, $(m_1,\ldots,m_d)\notin \mathbb{I}_{<(j_1,\ldots,j_d)}$, which, in turn, implies that there exists $(l_1,\ldots,l_d)\in \mathbb{J}_{j_1,\ldots,j_d}$ such that $\mathbb C_{(l_1,\ldots,l_d)}\subseteq \mathbb C_{(m_1,\ldots,m_d)}$. However, due to the construction of the $m_i$'s, each $\xi_i$ is bounded away from $\mathbb D_{<m_i}$ by $r$, and hence, $\xi$ is bounded away from $\mathbb D^{i-1}\times \mathbb D_{<m_i}\times \mathbb D^{d-i}$ by $r$ for each $i$. Therefore, $\xi \notin (\mathbb C_{(l_1,\ldots,l_d)})^r \subseteq (\mathbb C_{(m_1,\ldots,m_d)})^r$, and hence, the premise is verified. Now we can apply Lemma~\ref{thm:union-limsup} to reach the conclusion of the theorem. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:two-sided-limit-theorem}] Let $X^{(+)}$ and $X^{(-)}$ be L\'evy processes with spectrally positive L\'evy measures $\nu_+$ and $\nu_-$ respectively, where $\nu_+[x,\infty) = \nu[x,\infty)$ and $\nu_-[x,\infty) = \nu(-\infty,-x]$ for each $x>0$, and denote the corresponding scaled processes as $\bar X_n^{(+)}(\cdot) \triangleq X^{(+)}(n\cdot)/n$ and $\bar X_n^{(-)}(\cdot)\triangleq X^{(-)}(n\cdot)/n$. More specifically, let \begin{align*}\label{eq:plus_minus} \bar X^{(+)}_n(s) &= sa + B(ns)/n + \frac{1}{n}\int_{|x|\leq 1} x[N([0,ns]\times dx) - ns\nu(dx)] \\ & \hspace{160pt}+ \frac{1}{n}\int_{x>1} xN([0,ns]\times dx), \\ \bar X^{(-)}_n(s) &= -\frac{1}{n}\int_{x<-1} xN([0,ns]\times dx).
\end{align*} From Theorem~\ref{thm:multi-d-limit-theorem}, we know that $(n\nu[n,\infty))^{-j}(n\nu(-\infty,-n])^{-k}\P\big((\bar X_n^{(+)},\allowbreak \bar X_n^{(-)}) \in \cdot\big) \to C_{j}^+\times C_{k}^-(\cdot)$ in $\mathbb M\big((\mathbb D\times \mathbb D) \setminus \mathbb D_{<(j,k)}\big)$ where $C_j^+(\cdot) \triangleq \mathbf{E} \Big[\nu_\alpha ^j \{x\in (0,\infty)^j:\sum_{i=1}^j x_i 1_{[U_i,1]}\in \cdot\}\Big]$ and $C_k^-(\cdot) \triangleq \mathbf{E} \Big[\nu_\beta ^k \{y\in (0,\infty)^k:\sum_{i=1}^k y_i 1_{[U_i,1]}\in \cdot\}\Big]$. In view of Lemma~\ref{lem:continuous-mapping-principle-for-subtraction} and that $C_j^+\times C_k^-\big\{(\xi,\zeta)\in \mathbb D\times\mathbb D: (\xi(t) - \xi(t-))(\zeta(t)-\zeta(t-)) \neq 0 \text{ for some } t\in(0,1]\big\} = 0$, we can apply Lemma~\ref{lem:almost-continuous-mapping} for $h(\xi,\zeta)= \xi-\zeta$. Noting that $C_{j,k}(\cdot) = \big(C_j^+\times C_k^-\big)\circ h^{-1}(\cdot)$, we conclude that $(n\nu[n,\infty))^{-j}\allowbreak(n\nu(-\infty,-n])^{-k}\P\big(\bar X_n^{(+)}-\bar X_n^{(-)} \in \cdot\big) \to C_{j,k}(\cdot)$ in $\mathbb M(\mathbb D \setminus \mathbb D_{<j,k})$. Since $\bar X_n$ has the same distribution as $\bar X_n^{(+)} - \bar X_n^{(-)}$, the desired $\mathbb M(\mathbb D \setminus \mathbb D_{<j,k})$-convergence for $\bar X_n$ follows. \end{proof} \begin{proof}[Proof of Lemma~\ref{wasteful-lemma}] In general, $$ \min_{\substack{(j,k)\in \mathbb Z_+^2\\ \mathbb D_{j,k} \cap \bar A \neq \emptyset}}\mathcal I(j,k) \leq \mathcal I(\mathcal J(A), \mathcal K(A)) \leq \min_{\substack{(j,k)\in \mathbb Z_+^2\\ \mathbb D_{j,k} \cap A^\circ \neq \emptyset}}\mathcal I(j,k), $$ and the left inequality cannot be strict since $A$ is bounded away from $\mathbb D_{<\mathcal J(A), \mathcal K(A)}$. 
On the other hand, if the right inequality is strict, then $\mathbb D_{\mathcal J(A), \mathcal K(A)} \cap A^\circ =\emptyset$, which in turn implies $C_{\mathcal J(A), \mathcal K(A)} (A^\circ) = 0$ since $C_{\mathcal J(A), \mathcal K(A)}$ is supported on $\mathbb D_{\mathcal J(A), \mathcal K(A)}$. Therefore, the lower bound is trivial if the right inequality is strict. In view of these observations, we can assume w.l.o.g.\ that $(\mathcal J(A), \mathcal K(A))$ is also in both $\argmin_{\substack{(j,k)\in \mathbb Z_+^2\\ \mathbb D_{j,k} \cap A^\circ \neq \emptyset}}\mathcal I(j,k)$ and $\argmin_{\substack{(j,k)\in \mathbb Z_+^2\\ \mathbb D_{j,k} \cap \bar A \neq \emptyset}}\mathcal I(j,k)$. Since $A^\circ$ and $\bar A$ are also bounded away from $\mathbb D_{<\mathcal J(A), \mathcal K(A)}$, the upper bound of (\ref{eq:two-sided-main-result}) is obtained from \eqref{result1ub} and Theorem~\ref{thm:two-sided-limit-theorem} for $\bar A$, $j= \mathcal J(\bar A)=\mathcal J(A)$, and $k = \mathcal K(\bar A) = \mathcal K(A)$; the lower bound of (\ref{eq:two-sided-main-result}) is obtained from \eqref{result1lb} and Theorem~\ref{thm:two-sided-limit-theorem} for $A^\circ$, $j= \mathcal J(A^\circ)=\mathcal J(A)$, and $k = \mathcal K(A^\circ) =\mathcal K(A)$. Finally, we obtain \eqref{two-sided-limit-tends-to-zero} from Theorem~\ref{thm:two-sided-limit-theorem} and \eqref{result1ub} with $j=l$, $k = m$, $F = \bar A$ along with the fact that $C_{l,m}(\bar A) = 0$ since $A$ is bounded away from $\mathbb D_{l,m}$. \end{proof} \begin{lemma}\label{lemma-for-two-sided-multiple-optima} Let $A$ be a measurable set and suppose that the argument minimum in (\ref{def:JK}) is non-empty and contains a pair of integers $(\mathcal J(A),\mathcal K(A))$. Let $(l,m) \in \mathbb{I}_{=\mathcal J(A), \mathcal K(A)}$.
\begin{itemize} \item[(i)] If $A_\delta \cap \mathbb D_{l,m}$ is bounded away from $\mathbb D_{\ll\mathcal J(A), \mathcal K(A)}$ for some $\delta>0$, then $A \cap (\mathbb D_{l,m})_\gamma$ is bounded away from $\mathbb D_{\ll\mathcal J(A), \mathcal K(A)}$ for some $\gamma>0$. \item[(ii)] If $A$ is bounded away from $\mathbb D_{\ll \mathcal J(A),\mathcal K(A)}$, then there exists $\delta>0$ such that $A\cap (\mathbb D_{l,m})_\delta$ is bounded away from $\mathbb D_{j,k}$ for any $(j,k) \in \mathbb{I}_{=\mathcal J(A),\mathcal K(A)}\setminus \{(l,m)\}$. \end{itemize} \end{lemma} \begin{proof} For (i), we prove that if $d(A_{2\delta} \cap \mathbb D_{l,m},\, \mathbb D_{\ll \mathcal J(A), \mathcal K(A)})> 3\delta$ then $d(A \cap (\mathbb D_{l,m})_\delta,\, \mathbb D_{\ll \mathcal J(A), \mathcal K(A)})\geq \delta$. Suppose that $d( A \cap (\mathbb D_{l,m})_\delta,\, \mathbb D_{\ll\mathcal J(A), \mathcal K(A)}) < \delta$. Then, there exists $\xi \in A\cap (\mathbb D_{l,m})_\delta$ and $\zeta \in \mathbb D_{\ll \mathcal J(A),\mathcal K(A)}$ such that $d(\xi, \zeta) < \delta$. Note that we can find $\xi'\in \mathbb D_{l,m}$ such that $d(\xi, \xi') \leq 2\delta$, which means that $\xi' \in A_{2\delta} \cap \mathbb D_{l,m}$. Therefore, $d(A_{2\delta}\cap \mathbb D_{l,m}, \mathbb D_{\ll \mathcal J(A), \mathcal K(A)} )\leq d(\xi', \zeta) \leq d(\xi',\xi) + d(\xi, \zeta) \leq 2\delta + \delta \leq 3\delta$. For (ii), suppose that $d\big(A, \mathbb D_{\ll \mathcal J(A), \mathcal K(A)}\big) > \gamma$ for some $\gamma > 0$ and $(l,m)$ and $(j,k)$ are two distinct pairs that belong to $\mathbb{I}_{=\mathcal J(A), \mathcal K(A)}$. Assume w.l.o.g.\ that $j < l$. (If $j > l$, it should be the case that $k < m$, and hence one can proceed in the same way by switching the roles of upward jumps and downward jumps in the following argument.) Let $c$ be a positive number such that $c > 8(l-j)+2$ and set $\delta = \gamma / c$. 
We will show that $A \cap (\mathbb D_{l,m})_\delta$ and $(\mathbb D_{j,k})_\delta$ are bounded away from each other. Let $\xi$ be an arbitrary element of $A\cap (\mathbb D_{l,m})_\delta$. Then, there exists a $\zeta \in \mathbb D_{l,m}$ such that $d(\zeta, \xi) \leq 2\delta$. Note that $d\big(\zeta, \mathbb D_{\ll \mathcal J(A), \mathcal K(A)}\big) \geq (c-2)\delta$; in particular, $d\big(\zeta, \mathbb D_{j,m}) \geq (c-2) \delta$. If we write $\zeta \triangleq \sum_{i=1}^l x_i 1_{[u_i,1]} - \sum_{i=1}^m y_i 1_{[v_i,1]}$, this implies that $x_{j+1} \geq \frac{(c-2)\delta}{l-j}$. Otherwise, $(c-2)\delta > \sum_{i=j+1}^lx_i = \|\zeta - \zeta'\| \geq d(\zeta, \zeta')$, where $\zeta' \triangleq \zeta - \sum_{i=j+1}^lx_i1_{[u_i,1]} \in \mathbb D_{j,m}$. Therefore, $d(\zeta, \mathbb D_{j,k}) \geq \frac{(c-2)\delta}{2(l-j)}$, which in turn implies $d(\xi, \mathbb D_{j,k}) \geq \frac{(c-2)\delta}{2(l-j)}-2\delta > 2\delta$. Since $\xi$ was arbitrary, we conclude that $A\cap (\mathbb D_{l,m})_\delta$ is bounded away from $(\mathbb D_{j,k})_\delta$. \end{proof} \subsection{Proofs for Section~\ref{sec:implications}}\label{subsec:proofs-for-implications} Recall that \begin{equation*} I(\xi)\triangleq \left\{\begin{array}{ll} (\alpha-1)\mathcal D_+(\xi) + (\beta-1)\mathcal D_-(\xi) & \text{if $\xi$ is a step function with $\xi(0) = 0$} \\ \infty & \text{otherwise} \end{array} \right. . \end{equation*} \begin{proof}[Proof of Theorem~\ref{thm:weak-ldp}] Observe first that $I(\cdot)$ is a rate function. The level sets $\{\xi: I(\xi) \leq x\}$ equal $\bigcup_{\substack{(l,m)\in \mathbb Z_+^2\\(\alpha-1)l + (\beta-1)m \leq x}}\mathbb D_{l,m}$ and are therefore closed---note the level sets are not compact so $I(\cdot)$ is not a good rate function (see, for example, \cite{dembozeitouni} for the definition and properties of good rate functions). Starting with the lower bound, suppose that $G$ is an open set.
We assume w.l.o.g.\ that $ \inf_{\xi\in G} I(\xi) < \infty$, since the inequality is trivial otherwise. Due to the discrete nature of $I(\cdot)$, there exists a $\xi^*\in G$ such that $I(\xi^*) = \inf_{\xi\in G} I(\xi)$. Set $j\triangleq \mathcal D_+(\xi^*)$ and $k \triangleq \mathcal D_-(\xi^*)$. Let $u_1^+,\ldots, u_j^+$ be the sorted (from the earliest to the latest) upward jump times of $\xi^*$; $x_1^+,\ldots, x_j^+$ be the sorted (from the largest to the smallest) upward jump sizes of $\xi^*$; $u_1^-,\ldots,u_k^-$ be the sorted downward jump times of $\xi^*$; $x_1^-,\ldots,x_k^-$ be the sorted downward jump sizes of $\xi^*$. Also, let $x_{j+1}^+=x_{k+1}^-=0$, $u_0^+=u_0^- = 0$, and $u_{j+1}^+ = u_{k+1}^-= 1$. Note that if $\zeta \in \mathbb D_{l,m}$ for $l<j$, then $d(\xi^*,\zeta) \geq x_j^+/2$ since at least one of the $j$ upward jumps of $\xi^*$ cannot be matched by $\zeta$. Likewise, if $\zeta \in \mathbb D_{l,m}$ for $m < k$, then $d(\xi^*, \zeta) \geq x_k^-/2$. Therefore, $d(\mathbb D_{<j,k}, \xi^*) \geq (x_j^+\wedge x_k^-)/2$. On the other hand, since $G$ is an open set, we can pick $\delta_0>0$ so that the open ball $B_{\xi^*,\delta_0}\triangleq \{\zeta\in \mathbb D: d(\zeta,\xi^*) < \delta_0\}$ centered at $\xi^*$ with radius $\delta_0$ is a subset of $G$---i.e., $B_{\xi^*,\delta_0} \subset G$. Let $\delta = (\delta_0 \wedge x_j^+ \wedge x_k^-)/4$, with the convention that $x_0^+ = x_0^- = \infty$. If $j=k=0$, then $\xi^* \equiv0$, and hence, $\{\bar X_n \in G\}$ contains $\{\|\bar X_n\|\leq \delta\}$ which is a subset of $B_{\xi^*,\delta}$. One can apply Lemma~\ref{lem:cont_etemadi} to show that $\P(\bar X_n\in G)$ converges to 1, which, in turn, proves the inequality. Now, suppose that either $j\geq 1$ or $k\geq 1$. Then, $d(B_{\xi^*,\delta},\mathbb D_{<j,k})\geq \delta$.
As $d(B_{\xi^*,\delta},\mathbb D_{<j,k})>0$ and $B_{\xi^*,\delta}$ is open, we see from our sharp asymptotics (Theorem~\ref{thm:two-sided-limit-theorem}) that \begin{equation*} C_{j,k}(B_{\xi^*,\delta})\leq \liminf_{n\rightarrow\infty} (n\nu[n,\infty))^{-j} (n\nu(-\infty,-n])^{-k}\P( \bar X_n \in B_{\xi^*,\delta}). \end{equation*} From the definition of $C_{j,k}$, it follows that $C_{j,k}(B_{\xi^*,\delta})>0$. To see this, note first that we can assume w.l.o.g.\ that the $x_i^\pm$'s are all distinct since $G$ is open (because, if some of the jump sizes are identical, we can pick $\epsilon$ such that $B_{\xi^*,\epsilon}\subseteq G$, and then perturb those jump sizes slightly to get a new $\xi^*$ which still belongs to $G$ and whose jump sizes are all distinct). Suppose that $\xi^* = \sum_{l=1}^j x^+_{i^+_l}1_{[u^+_l,1]}-\sum_{l=1}^k x^-_{i^-_l}1_{[u^-_l,1]}$, where $(i^+_1,\ldots,i^+_j)$ and $(i^-_1,\ldots,i^-_k)$ are permutations of $(1,\ldots,j)$ and $(1,\ldots,k)$, respectively. Let $2\delta' \triangleq \delta \wedge \underline \Delta^+_u\wedge \underline\Delta^+_x\wedge \underline \Delta^-_u\wedge \underline\Delta^-_x$, where $\underline \Delta^+_u = \min_{i=1,\ldots,j+1} (u^+_i-u^+_{i-1})$, $\underline \Delta^+_x = \min_{i=1,\ldots,j} (x^+_{i-1}-x^+_{i})$, $\underline \Delta^-_u = \min_{i=1,\ldots,k+1} (u^-_i-u^-_{i-1})$, and $\underline \Delta^-_x = \min_{i=1,\ldots,k} (x^-_{i-1}-x^-_{i})$. Consider a subset $B'$ of $B_{\xi^*,\delta}$: \begin{align*} B' \triangleq &\bigg\{\sum_{l=1}^jy^+_{i^+_l}1_{[v^+_l,1]}-\sum_{l=1}^k y^-_{i^-_l}1_{[v^-_l,1]}: \\ &\hspace{50pt} v^+_i \in (u^+_i-\delta', u^+_i+\delta'), y^+_i \in (x^+_i-\delta', x^+_i+\delta'), i=1,\ldots,j; \\ &\hspace{50pt} v^-_i \in (u^-_i-\delta', u^-_i+\delta'), y^-_i \in (x^-_i-\delta', x^-_i+\delta'), i=1,\ldots,k \bigg\}.
\end{align*} Then, \begin{align*} &C_{j,k}(B_{\xi^*,\delta}) \\ &\geq C_{j,k}(B') \\ &= \int_{(u^+_1-\delta',u^+_1+\delta')\times\cdots\times(u^+_j-\delta',u^+_j+\delta')} dLeb \cdot \int_{(x^+_1-\delta',x^+_1+\delta')\times\cdots\times(x^+_j-\delta',x^+_j+\delta')}d\nu_\alpha \\ &\hspace{20pt} \cdot\int_{(u^-_1-\delta',u^-_1+\delta')\times\cdots\times(u^-_k-\delta',u^-_k+\delta')} dLeb \cdot \int_{(x^-_1-\delta',x^-_1+\delta')\times\cdots\times(x^-_k-\delta',x^-_k+\delta')}d\nu_\beta \\ &\geq (2\delta')^{j}\big(2\delta'\alpha(x_1^++\delta')^{-\alpha-1}\big)^{j}(2\delta')^{k}\big(2\delta'\beta(x_1^-+\delta')^{-\beta-1}\big)^{k}>0. \end{align*} We conclude that \begin{equation}\label{eq:getting-lower-bound} \begin{aligned} &\liminf_{n\rightarrow\infty}\frac{\log \P(\bar X_n \in G)}{\log n} \geq \liminf_{n\rightarrow\infty}\frac{\log \P(\bar X_n \in B_{\xi^*,\delta})}{\log n} \\ &\geq \liminf_{n\rightarrow\infty}\frac{\log (C_{j,k}(B_{\xi^*,\delta}) (n \nu[n,\infty))^j(n \nu(-\infty,-n])^k (1+o(1))) }{\log n} \\ &= -\big((\alpha-1)j+(\beta-1)k\big), \end{aligned} \end{equation} which is the lower bound. Turning to the upper bound, suppose that $K$ is a compact set. We first consider the case where $\inf_{\xi \in K} I(\xi) < \infty$. Pick $\xi^*$, $j$ and $k$ as in the lower bound, i.e., $I(\xi^*) = \inf_{\xi\in K} I(\xi)$, $j\triangleq \mathcal D_+(\xi^*)$, and $k \triangleq \mathcal D_-(\xi^*)$. Here we can assume w.l.o.g.\ either $j \geq 1$ or $k\geq 1$ since the inequality is trivial in case $j=k=0$. For each $\zeta \in K$, either $I(\zeta) > I(\xi^*)$, or $I(\zeta) = I(\xi^*)$. We construct an open cover of $K$ by considering these two cases separately: \begin{itemize} \item If $I(\zeta) > I(\xi^*)$, $\zeta$ is bounded away from $\mathbb D_{<j,k}\cup \mathbb D_{j,k}$ (Lemma~\ref{lem:Djk} (f)). For each such $\zeta$, pick a $\delta_\zeta>0$ in such a way that $d(\zeta, \mathbb D_{<j,k}\cup \mathbb D_{j,k})>\delta_\zeta$. Set $j_\zeta \triangleq j$ and $k_\zeta \triangleq k$.
Note that in this case $C_{j_\zeta,k_\zeta}(\bar B_{\zeta,\delta_\zeta}) = 0$. \item If $I(\zeta) = I(\xi^*)$, set $j_\zeta \triangleq \mathcal D_+(\zeta)$ and $k_\zeta \triangleq \mathcal D_-(\zeta)$. Since $\zeta$ is bounded away from $\mathbb D_{<j_\zeta,k_\zeta}$ (Lemma~\ref{lem:Djk} (e)), we can choose $\delta_\zeta>0$ such that $d(\zeta, \mathbb D_{<j_\zeta,k_\zeta}) > \delta_\zeta$ and $C_{j_\zeta,k_\zeta}(\bar B_{\zeta,\delta_\zeta}) < \infty$. \end{itemize} Consider an open cover $\{B_{\zeta;\delta_\zeta}: \zeta\in K\}$ of $K$ and its finite subcover $\{B_{\zeta_i;\delta_{\zeta_i}} \}_{i=1,\ldots,m}$. For each $\zeta_i$, we apply the sharp asymptotics (Theorem~\ref{thm:two-sided-limit-theorem}) to $\bar B_{\zeta_i; \delta_{\zeta_i}}$ to get \begin{equation}\label{ineq:some-upper-bound-in-sec-5} \limsup_{n\rightarrow\infty} \frac{\log \P(\bar X_n \in \bar B_{\zeta_i; \delta_{\zeta_i}})}{\log n} \leq -\big((\alpha-1)j_{\zeta_i} + (\beta-1)k_{\zeta_i}\big) = -I(\xi^*). \end{equation} Therefore, \begin{align} \limsup_{n\rightarrow\infty} \frac{\log \P(\bar X_n \in K)}{\log n} & \leq \limsup_{n\rightarrow\infty} \frac{\log \sum_{i=1}^m \P(\bar X_n \in \bar B_{\zeta_i; \delta_{\zeta_i}})}{\log n} \nonumber \\ & = \max_{i=1,\ldots,m}\limsup_{n\rightarrow\infty} \frac{\log \P(\bar X_n \in \bar B_{\zeta_i; \delta_{\zeta_i}})}{\log n} \nonumber \\ &\leq -I(\xi^*) = -\inf_{\xi\in K} I(\xi),\label{ineq:another-upper-bound-in-sec-5} \end{align} completing the proof of the upper bound in case the right-hand side is finite. Now, turning to the case $\inf_{\xi \in K} I(\xi) = \infty$, fix an arbitrary positive integer $l$. Since $\mathbb D_{<l,l}$ is closed and disjoint from the compact set $K$, it is also bounded away from each $\zeta\in K$. Now picking $\delta_\zeta>0$ so that $\bar B_{\zeta,\delta_\zeta}$ is disjoint from $\mathbb D_{<l,l}$ for each $\zeta$, one can construct an open cover $\{B_{\zeta;\delta_\zeta}:\zeta \in K\}$ of $K$.
Let $\{B_{\zeta_i;\delta_{\zeta_i}}\}_{i=1,\ldots,m}$ be its finite subcover; then, from the same calculation as in \eqref{ineq:some-upper-bound-in-sec-5} and \eqref{ineq:another-upper-bound-in-sec-5}, \begin{equation*} \limsup_{n\rightarrow\infty} \frac{\log \P(\bar X_n \in K)}{\log n} \leq -(\alpha+\beta-2)l. \end{equation*} Taking $l\to\infty$, we arrive at the desired upper bound. \end{proof} \section{Applications}\label{sec:applications} In this section, we illustrate the use of our main results, established in Section~\ref{sec:sample-path-ldps}, in several problem contexts that arise in control, insurance, and finance. In all examples, we assume that $\bar{X}_{n}\left( t\right) =X\left( nt\right) /n$, where $X\left(\cdot\right)$ is a centered L\'{e}vy process satisfying (1.1). \subsection{Crossing High Levels with Moderate Jumps}\label{subsec:moderate-jumps} We are interested in level crossing probabilities of L\'evy processes where the jumps are conditioned to be moderate. More precisely, we are interested in probabilities of the form $\P\big(\sup_{t\in [0,1]} [\bar X_n(t)-ct] \geq a ; \;\sup_{t\in [0,1]} [\bar X_n(t)-\bar X_n(t-)]\leq b\big)$. We make the technical assumption that $a$ is not a multiple of $b$ and focus on the case where the L\'evy process $\bar X_n$ is spectrally positive. The setting of this example is relevant in, for example, insurance, where huge claims may be reinsured and therefore do not play a role in the ruin of an insurance company. \cite{AsmussenPihlsgaard} focus on obtaining various estimates of infinite-time ruin probabilities using analytic methods. Here, we provide complementary sharp asymptotics for the finite-time ruin probability, using probabilistic techniques. Set $A \triangleq \{\xi\in \mathbb D: \sup_{t\in [0,1]} [\xi(t)-ct]\geq a ; \sup_{t\in [0,1]} [\xi(t)-\xi(t-)] \leq b\}$ and define $ j \triangleq \lceil a/b \rceil.
$ Intuitively, $j$ should be the key parameter, as it takes at least $j$ jumps of size $b$ to cross level $a$. Our goal is to make this intuition rigorous by applying Theorem~\ref{thm:one-sided-main-theorem} and by showing that the upper and lower bounds are tight. We first check that $A_\delta \cap \mathbb D_j$ is bounded away from the closed set $\mathbb D_{\leqslant j-1}$ for some $\delta>0$. To see this, it suffices to show that \begin{itemize} \item[1)] $\sup_{t\in[0,1]}[\xi(t)-\xi(t-)] \leq b$ and $\sup_{t\in[0,1]}[\zeta(t)-\zeta(t-)] > b'$ imply $d(\xi,\zeta) > \frac{b'-b}{3}$; and \item[2)] $\sup_{t\in [0,1]}[\xi(t) - ct]< a'$ and $\sup_{t\in[0,1]}[\zeta(t)-ct] \geq a$ imply $d(\xi,\zeta) \geq \frac{a-a'}{c+1}$. \end{itemize} It is straightforward to check 1). To see 2), note that for any $\epsilon>0$, one can find $t^*$ such that $\zeta(t^*) - ct^* \geq a-\epsilon$. Of course, $\xi(\lambda(t^*)) - c\lambda(t^*) < a'$ for any homeomorphism $\lambda (\cdot )$. Subtracting the latter inequality from the former inequality, we obtain \begin{equation}\label{eq:mod-jump-diff} \zeta(t^*)-\xi(\lambda(t^*)) \geq a - a' -\epsilon + c(t^* - \lambda(t^*)). \end{equation} One can choose $\lambda$ so that $d(\xi,\zeta) + \epsilon \geq \| \lambda - e\| \geq \lambda(t^*)- t^*$ and $d(\zeta, \xi) + \epsilon \geq \|\zeta - \xi \circ \lambda\|\geq \zeta(t^*) - \xi(\lambda(t^*))$, which together with (\ref{eq:mod-jump-diff}) yields $$ d(\xi, \zeta) > a-a'-(c+1)\epsilon-cd(\xi,\zeta). $$ This leads to $d(\xi,\zeta) \geq \frac{a-a'}{c+1}$ by taking $\epsilon \to 0$. With 1) and 2) in hand, it follows that $\phi_1(\xi) \triangleq \sup_{t\in[0,1]}[\xi(t)-\xi(t-)]$ and $\phi_2(\xi) \triangleq \sup_{t\in [0,1]}[\xi(t) - ct]$ are continuous functionals and $A_{\delta}\subseteq A(\delta)$, where $A(\delta) \triangleq \{\xi\in \mathbb D: \sup_{t\in [0,1]} [\xi(t)-ct]\geq a-(c+1)\delta ; \sup_{t\in [0,1]} [\xi(t)-\xi(t-)] \leq b+3\delta\}$. 
Since $\xi \in A(\delta)\cap \mathbb D_j$ implies that each jump size of $\xi$ is bounded from below by $(a-(c+1)\delta)-(j-1)(b+3\delta)$, which is strictly positive for small enough $\delta$ because $a>(j-1)b$, one can choose $\delta>0$ so that $A(\delta)\cap \mathbb D_j$ is bounded away from $\mathbb D_{\leqslant j-1}$. This implies that $A_\delta\cap \mathbb D_j$ is also bounded away from $\mathbb D_{\leqslant j-1}$ for sufficiently small $\delta>0$. Hence, Theorem~\ref{thm:one-sided-main-theorem} applies with $\mathcal J(A) = j$. Next, to identify the limit, recall the discussion at the end of Section~\ref{subsec:one-sided-large-deviations}. Note that $A = \phi_2^{-1}[a,\infty) \cap \phi_1^{-1}(-\infty,b]$ and \begin{equation}\label{eq:constraints-for-B} \begin{aligned} &\hat T_{j}^{-1}(\phi_2^{-1}[a,\infty) \cap \phi_1^{-1}(-\infty,b]) \\ &= \textstyle{\left\{(x,u)\in \hat S_j: \sum_{i=1}^j x_i \geq a+ c\max_{i=1,\ldots,j} u_i,\ \max_{i=1,\ldots,j} x_i \leq b\right\}},\\ &\hat T_{j}^{-1}(\phi_2^{-1}(a,\infty) \cap \phi_1^{-1}(-\infty,b)) \\ &= \textstyle{\left\{(x,u)\in \hat S_j: \sum_{i=1}^j x_i > a+ c\max_{i=1,\ldots,j} u_i,\ \max_{i=1,\ldots,j} x_i < b\right\}}. \end{aligned} \end{equation} We see that $\hat T_{j}^{-1}(\phi_2^{-1}[a,\infty) \cap \phi_1^{-1}(-\infty,b]) \setminus \hat T_{j}^{-1}(\phi_2^{-1}(a,\infty) \cap \phi_1^{-1}(-\infty,b))$ has Lebesgue measure 0, and hence, $A$ is $C_j$-continuous. Thus, (\ref{A-continuity}) holds with \begin{equation*} C_{j}(A) = \mathbf{E} \left[\nu_\alpha^j\{x\in(0,\infty)^j: \sum_{i=1}^jx_i1_{[U_i,1]} \in A\}\right] = \int_{(x,u)\in \hat T_{j}^{-1}(A) } \prod_{i=1}^j [\alpha x_i^{-\alpha-1} dx_i du_i]>0. \end{equation*} Therefore, we conclude that \begin{equation} \P\left(\sup_{t\in [0,1]}[ \bar X_n(t)-ct] \geq a ; \sup_{t\in [0,1]} [\bar X_n(t)-\bar X_n(t-)] \leq b\right) \sim C_{j}\big(A\big) (n\nu[n,\infty))^{j}. \end{equation} In particular, the probability of interest is regularly varying with index $-(\alpha-1)\lceil a/b \rceil$.
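As a purely illustrative numerical check (ours, with arbitrarily chosen parameter values), consider the single-jump case $j=1$, i.e.\ $a<b$. For a one-jump path $x1_{[u,1]}$ the constraints read $a+cu\leq x\leq b$, so $C_1(A)=\int_0^1\big[(a+cu)^{-\alpha}-b^{-\alpha}\big]_+\,du$, which has a closed form when $a+c\leq b$ and can be compared with direct quadrature:

```python
import math

def C1_quadrature(a, b, c, alpha, n=100_000):
    """Midpoint rule for C_1(A) = int_0^1 [ (a+c*u)^(-alpha) - b^(-alpha) ]_+ du,
    i.e. the nu_alpha-mass of {x : a + c*u <= x <= b} averaged over the
    uniform jump time u."""
    h = 1.0 / n
    return h * sum(max((a + c * (k + 0.5) * h) ** (-alpha) - b ** (-alpha), 0.0)
                   for k in range(n))

def C1_closed_form(a, b, c, alpha):
    """Closed form, valid when a + c <= b so the positive part never binds."""
    return ((a ** (1 - alpha) - (a + c) ** (1 - alpha)) / (c * (alpha - 1))
            - b ** (-alpha))

a, b, c, alpha = 1.0, 2.0, 0.5, 1.5   # a < b, so j = ceil(a/b) = 1
print(C1_quadrature(a, b, c, alpha))   # approx 0.3805
print(C1_closed_form(a, b, c, alpha))  # approx 0.3805
```

The two evaluations agree to quadrature accuracy; the same midpoint-rule strategy extends to $j>1$ at the cost of a $j$-fold integral.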
\subsection{A Two-sided Barrier Crossing Problem}\label{subsec:two-sided-barrier} We consider a L\'{e}vy-driven Ornstein-Uhlenbeck process of the form \[ d\bar{Y}_{n}\left( t\right) =-\kappa \bar{Y}_{n}\left( t\right) dt+d\bar{X}_{n}\left( t\right), \hspace{1cm} \bar{Y}_{n}\left( 0\right) =0. \] We apply our results to provide sharp large-deviations estimates for \[ b\left( n\right) =\P\left( \inf\{\bar{Y}_{n}\left( t\right) :0\leq t\leq1\}\leq-a_{-},\bar{Y}_{n}\left( 1\right) \geq a_{+}\right) \] as $n\rightarrow\infty$, where $a_{-},a_{+}>0$. This probability can be interpreted as the price of a barrier digital option (see \citealp{cont2004financial}, Section 11.3). In order to apply our results it is useful to represent $\bar{Y}_{n}$ as an explicit function of $\bar{X}_{n}$. In particular, we have that \begin{align} \bar{Y}_{n}\left( t\right) & =\exp\left( -\kappa t\right) \left( \bar{Y}_{n}\left( 0\right) +\int_{0}^{t}\exp\left( \kappa s\right) d\bar{X}_{n}\left( s\right) \right) \label{REP_1}\\ & =\bar{X}_{n}\left( t\right) -\kappa\exp\left( -\kappa t\right) \int_{0}^{t}\exp\left( \kappa s\right) \bar{X}_{n}\left( s\right) ds. \label{REP_2} \end{align} Hence, if $\phi:\mathbb{D}\left( [0,1],\mathbb{R}\right) \rightarrow \mathbb{D}\left( [0,1],\mathbb{R}\right) $ is defined via \[ \phi\left( \xi\right) \left( t\right) =\xi\left( t\right) -\kappa \exp\left( -\kappa t\right) \int_{0}^{t}\exp\left( \kappa s\right) \xi\left( s\right) ds, \] then $\bar{Y}_{n}=\phi\left( \bar{X}_{n}\right) $. Moreover, if we let \[ A=\left\{ \xi\in\mathbb{D}:\inf_{0\leq t\leq1}\phi\left( \xi\right) \left( t\right) \leq-a_{-},\phi\left( \xi\right) \left( 1\right) \geq a_{+}\right\} , \] then we obtain \[ b\left( n\right) ={\P}\left( \bar{X}_{n}\in A\right) .
\] In order to easily verify topological properties of $A$, let us define $ m,\pi_{1}:\mathbb{D}( [0,1],\allowbreak\mathbb{R}) \rightarrow\mathbb{R} $ by $ m\left( \xi\right) =\inf_{0\leq t\leq1}\xi\left( t\right) ,\text{ and } \pi_{1}\left( \xi\right) =\xi\left( 1\right). $ Note that $\pi_{1}$ is continuous (see \citealp{billingsley2013convergence}, Theorem~12.5), that $m$ is continuous as well, and so is $\phi$. Thus, $m\circ\phi$ and $\pi_{1}\circ\phi$ are continuous. We can therefore write \[ A=\left( m\circ\phi\right) ^{-1}(-\infty,-a_{-}]\cap\left( \pi_{1}\circ \phi\right) ^{-1}[a_{+},\infty), \] concluding that $A$ is a closed set. We now apply Theorem~\ref{thm:two-sided-main-theorem}. To show that $\mathbb{D}_{i,0}$ is bounded away from $\left( m\circ\phi\right) ^{-1}(-\infty,-a_{-}]$, select $\theta$ such that $d\left( \theta,\mathbb{D}_{i,0}\right) <r$ with $r<a_{-}/\left( 1+\kappa\exp\left( \kappa\right) \right) $. There exists a $\xi\in\mathbb{D}_{i,0}$ such that $d\left( \theta,\xi\right) <r$ and $\xi$ satisfies $ \xi\left( t\right) =\sum_{j=1}^{i}x_{j}I_{[u_{j},1]}\left( t\right) , $ with $i\geq1$. There also exists a homeomorphism $\lambda:[0,1]\rightarrow \lbrack0,1]$ such that \begin{equation} \sup_{t\in\lbrack0,1]}\left\vert \lambda\left( t\right) -t\right\vert \vee\left\vert \left( \xi\circ\lambda\right) \left( t\right) -\theta\left( t\right) \right\vert <r.\label{BND_AUX_1} \end{equation} Now, define $\psi=\theta-\left( \xi\circ\lambda\right)$.
Due to the linearity of $\phi$, and representations (\ref{REP_1}) and (\ref{REP_2}), we obtain that \begin{align*} \phi\left( \theta\right) \left( t\right) & =\phi\left( \left( \xi \circ\lambda\right) \right) \left( t\right) +\phi\left( \psi\right) \left( t\right) \\ & =\exp\left( -\kappa t\right) \sum_{j=1}^{i}\exp\left( \kappa\lambda ^{-1}\left( u_{j}\right) \right) x_{j}I_{[\lambda^{-1}\left( u_{j}\right) ,1]}\left( t\right) +\psi\left( t\right) \\ &\hspace{50pt} -\kappa\exp\left( -\kappa t\right) \int_{0}^{t}\exp\left( \kappa s\right) \psi\left( s\right) ds. \end{align*} Since $x_{j}\geq0$, applying the triangle inequality and inequality (\ref{BND_AUX_1}), we conclude, by our choice of $r$, that \[ \inf_{0\leq t\leq1}\phi\left( \theta\right) \left( t\right) \geq-r\left( 1+\kappa\exp\left( \kappa\right) \right) >-a_{-}. \] A similar argument allows us to conclude that $\mathbb{D}_{0,i}$ is bounded away from $\left( \pi_{1}\circ\phi\right) ^{-1}[a_{+},\infty)$. Hence, in addition to being closed, $A$ is bounded away from $\mathbb{D}_{0,i}\cup\mathbb{D}_{i,0}$ for any $i\geq1$. Moreover, let $\xi\in A\cap\mathbb{D}_{1,1}$, with \begin{equation} \xi\left( t\right) ={x}I_{[{u},1]}(t)-{y}I_{[{v},1]}(t), \label{fREP} \end{equation} where ${x}>0$ and ${y}>0$. Using (\ref{REP_1}), we obtain that $\xi\in A\cap\mathbb{D}_{1,1}$ is equivalent to \[ {y}\geq a_{-}\text{, \ }{u}>{v}\text{, and }{x}\geq a_{+}\exp\left( \kappa\left( 1-{u}\right) \right) +{y}\exp\left( -\kappa\left( {u}-{v}\right) \right) . \] Now, we claim that \begin{align} A^{\circ} & =\left\{ \xi\in\mathbb{D}:\inf_{0\leq t\leq1}\phi\left( \xi\right) \left( t\right) <-a_{-},\phi\left( \xi\right) \left( 1\right) >a_{+}\right\} \label{A_INT}\\ & =\left( m\circ\phi\right) ^{-1}(-\infty,-a_{-})\cap\left( \pi_{1}\circ\phi\right) ^{-1}(a_{+},\infty).\nonumber \end{align} It is clear that $A^{\circ}$ contains the open set on the right-hand side.
We now argue that such a set is actually maximal, so that equality holds. Suppose that $\phi\left( \xi\right) \left( 1\right) =a_{+}$, while $\min_{0\leq t\leq1}\phi\left( \xi\right) \left( t\right) <-a_{-}$. We then consider $\psi=-\delta I_{\{1\}}\left( t\right)$ with $\delta>0$, and note that $d\left( \xi,\xi+\psi\right) \leq\delta$, and \[ \phi\left( \xi+\psi\right) \left( t\right) =\phi\left( \xi\right) \left( t\right) I_{[0,1)}\left( t\right) +\left( a_{+}-\delta\right) I_{\{1\}}\left( t\right) , \] so that $\xi+\psi\notin A$. Similarly, we can see that the other inequality (involving $a_{-}$) must also be strict, hence concluding that (\ref{A_INT}) holds. We deduce that, if $\xi\in A^{\circ}\cap\mathbb{D}_{1,1}$ with $\xi$ satisfying (\ref{fREP}), then \[ {y}>a_{-}\text{, \ }{u}>{v}\text{, }{x}>a_{+}\exp\left( \kappa\left( 1-{u}\right) \right) +{y}\exp\left( -\kappa\left( {u}-{v}\right) \right) . \] Thus, we can see that $A$ is $C_{1,1}\left( \cdot\right) $-continuous, either directly or by invoking our discussion in Section~\ref{subsec:one-sided-large-deviations} regarding continuity of sets. Therefore, applying Theorem~\ref{thm:two-sided-main-theorem}, we conclude that \[ b\left( n\right) \sim n\nu\lbrack n,\infty)\,n\nu(-\infty,-n]\,C_{1,1}\left( A\right) \] as $n\rightarrow\infty$, where \[ C_{1,1}\left( A\right) =\int_{0}^{1}\int_{a_{-}}^{\infty}\int_{{v}}^{1} \int_{a_{+}\exp\left( \kappa\left( 1-{u}\right) \right) +{y}\exp\left( -\kappa\left( {u}-{v}\right) \right) }^{\infty}\nu_{\alpha}(dx)\,du\,\nu_{\beta}(dy)dv. \] In particular, the probability of interest is regularly varying with index $2-\alpha-\beta$.
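The quadruple integral for $C_{1,1}(A)$ is straightforward to evaluate numerically. The sketch below (our own illustration, with arbitrarily chosen parameter values) uses the fact that the inner $x$-integral equals $x_*^{-\alpha}$ with $x_*=a_+e^{\kappa(1-u)}+ye^{-\kappa(u-v)}$, and the Pareto substitution $y=a_-w^{-1/\beta}$, which maps the $y$-integral to $a_-^{-\beta}\int_0^1 dw$:

```python
import math

def C11(a_minus, a_plus, kappa, alpha, beta, n=40):
    """Midpoint-rule evaluation of the quadruple integral for C_{1,1}(A):
    integrate over v in (0,1), u in (v,1), and w in (0,1) after the
    substitution y = a_- * w**(-1/beta); the innermost nu_alpha-tail
    contributes xstar**(-alpha)."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        v = (i + 0.5) * h
        for j in range(n):
            u = v + (j + 0.5) * h * (1 - v)   # u in (v, 1); Jacobian (1 - v)
            for k in range(n):
                w = (k + 0.5) * h
                y = a_minus * w ** (-1.0 / beta)
                xstar = (a_plus * math.exp(kappa * (1 - u))
                         + y * math.exp(-kappa * (u - v)))
                total += (1 - v) * xstar ** (-alpha)
    return a_minus ** (-beta) * total * h ** 3

print(C11(1.0, 1.0, 0.5, 1.5, 1.5))
```

As expected, the constant shrinks as either barrier $a_-$ or $a_+$ is pushed further out, since the feasible two-jump configurations become more extreme.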
\subsection{Identifying the Optimal Number of Jumps for Sets of the Form $A=\{\xi: l \leq \xi \leq u\}$}\label{subsec:sausage} The sets that appeared in the examples in Section~\ref{subsec:moderate-jumps} and Section~\ref{subsec:two-sided-barrier} lend themselves to a direct characterization of the optimal numbers of jumps $(\mathcal J(A), \mathcal K(A))$. However, in more complicated problems, deciding what kind of paths the most probable limit behaviors consist of may not be as obvious. In this section, we show that for sets of a certain form, we can identify an optimal path. Consider continuous real-valued functions $l$ and $u$, which satisfy $l(t) < u(t)$ for every $t\in [0,1]$, and suppose that $l(0)<0<u(0)$. Define $A=\{\xi: l(t) \leq \xi(t) \leq u(t) \text{ for all } t\in[0,1]\}$. We assume that both $\alpha,\beta<\infty$, which is the most interesting case. The goal of this section is to construct an algorithm which yields an expression for $\mathcal J(A)$ and $\mathcal K(A)$. In fact, we can completely identify a function $h$ that solves the optimization problem defining $(\mathcal J(A),\mathcal K(A))$. This function will be a step function with both positive and negative steps. We first construct such a function, and then verify its optimality. The first step is to identify the times at which this function jumps. Define the sets \begin{align*} A_t &\triangleq \{x: l(t) \leq x \leq u(t) \}, & A_{s,t}^* &\triangleq \cap_{s\leq r\leq t} A_r, \end{align*} and the times $(t_n, n\geq 1)$ by \begin{align*} t_1 &\triangleq 1 \wedge \inf \{t>0: 0\notin A_t\}, & t_{n+1} &\triangleq 1 \wedge \inf \{ t>t_n : A^*_{t_n,t} = \emptyset\} \text{ \ for \ }n\geq 1. \end{align*} Let $n^*= \inf \{n\geq 1: t_n=1\}$. Assume that $n^*>1$, since the zero function is the obvious optimal path in case $n^*=1$. Due to the construction of the times $t_n, n\geq 1,$ we have the following properties: \begin{itemize} \item Either $l(t_1)=0$ or $u(t_1) = 0$.
\item For every $n = 1,\ldots,n^*-2$, $\sup_{t\in [t_{n},t_{n+1}]}l(t) = \inf_{t\in [t_{n},t_{n+1}]}u(t)$. \item $H_{fin} \triangleq [\sup_{t\in [t_{n^*-1},t_{n^*}]}l(t), \inf_{t\in [t_{n^*-1},t_{n^*}]}u(t)]$ is nonempty. \end{itemize} Set $h_n \triangleq \sup_{t\in [t_{n},t_{n+1}]}l(t)$ for $n=1,\ldots,n^*-2$, and set $h_{n^*-1} \triangleq h_{fin}$ for any $h_{fin} \in H_{fin}$. Define now $h(t)$ as $0$ on $t\in [0,t_1)$, $h(t) = h_n$ on $t\in [t_{n},t_{n+1})$ for $n=1,\ldots,n^*-2$, and $h(t) = h_{n^*-1}$ on $t\in [t_{n^*-1}, 1]$. We claim now that $(\mathcal J(A), \mathcal K(A)) = (\mathcal J(\{h\}), \mathcal K(\{h\}))$. In fact, we can prove that if $g\in A$ is a step function, $\mathcal D_+(g) \geq \mathcal D_+(h)$ and $\mathcal D_-(g) \geq \mathcal D_-(h)$, which implies the optimality of $h$. The proof is based on the following observation. At each $t_{n+1}$, either \begin{itemize} \item[1)] for any $\epsilon>0$ one can find $t\in[t_{n+1}, t_{n+1}+\epsilon]$ such that $u(t)<h_{n}$, or \item[2)] for any $\epsilon>0$ one can find $t\in[t_{n+1}, t_{n+1}+\epsilon]$ such that $l(t)>h_{n}$. \end{itemize} Otherwise, there exists $\epsilon>0$ such that $h_n \in A^*_{t_n,t_{n+1}+\epsilon}$, contradicting the definition of $t_{n+1}$, which requires $A^*_{t_n, t_{n+1}+\epsilon} = \emptyset$. From this observation, we can prove that on each interval $(t_n, t_{n+1}]$, any feasible path must jump at least once in the same direction as that of the jump of $h$. To see this, first suppose that 1) is the case at $t_{n+1}$, and $g\in A$ is a step function. Note that, by continuity, $l(\cdot)$ attains its supremum over $[t_n,t_{n+1}]$ at some $t_{sup}\in [t_n,t_{n+1}]$, i.e., $l(t_{sup}) = h_n$, and hence, $g(t_{sup})\geq h_n$. On the other hand, due to the right continuity of $g$ and 1), $g$ has to be strictly less than $h_{n}$ at $t_{n+1}$, i.e., $g(t_{n+1})< h_n$. Therefore, $g$ must have a downward jump on $(t_{sup},t_{n+1}]\subseteq (t_n,t_{n+1}]$.
Note that the direction of the jump of $h$ in the interval $(t_n,t_{n+1}]$ (more specifically at $t_{n+1}$) also has to be downward. Since $g$ is an arbitrary feasible path, this means that whenever $h$ jumps downward on $(t_n,t_{n+1}]$, any feasible path in $A$ should also jump downward. Hence, any feasible path must have at least as many downward jumps as $h$ on $[0,1]$. Case 2) leads to the analogous conclusion about the number of upward jumps of feasible paths, so the numbers of downward and upward jumps of $h$ are both minimal, proving that $h$ is indeed an optimal path. \subsection{Multiple Optima} \label{subsec:multiple-asymptotics-example} This section illustrates how to handle a case where we require Theorem~\ref{thm:two-sided-multiple-asymptotics}, and considers an illustrative example in which a rare event can be caused by two different configurations of big jumps. Suppose that the regularly varying indices $-\alpha$ and $-\beta$ for the positive and negative parts of the L\'evy measure $\nu$ of $X$ are equal, and consider the set $A \triangleq \{\xi\in \mathbb D: |\xi(t)| \geq t - 1/2 \text{ for all } t\in[0,1]\}$. Then, $\argmin_{\substack{(j,k)\in \mathbb Z_+^2\\ \mathbb D_{j,k} \cap A \neq \emptyset}}\mathcal I(j,k) = \{(1,0), (0,1)\}$, and $\mathbb D_{\ll 1,0} = \mathbb D_{\ll 0,1} = \mathbb D_{0,0}$. Since $|\xi(1)| \geq 1/2$ for any $\xi\in A$, $d(A, \mathbb D_{0, 0}) = 1/2>0$. Theorem~\ref{thm:two-sided-multiple-asymptotics} therefore applies, and for each $\epsilon>0$, there exists $N$ such that \begin{align*} \P(\bar X_n \in A) & \geq \frac{\big(C_{1,0}(A^\circ\cap \mathbb D_{1,0})-\epsilon\big)L_+(n) + \big(C_{0,1}( A^\circ\cap \mathbb D_{0,1})-\epsilon\big)L_-(n)}{n^{\alpha-1}}, \\ \P(\bar X_n \in A) &\leq \frac{\big(C_{1,0}(A^-\cap \mathbb D_{1,0})+\epsilon\big)L_+(n) + \big(C_{0,1}(A^-\cap \mathbb D_{0,1})+\epsilon\big)L_-(n)}{n^{\alpha-1}}, \end{align*} for all $n\geq N$.
Note that $A$ is closed, since if there is $\xi\in \mathbb D$ and $s\in [0,1]$ such that $|\xi(s)| < s-1/2$, then $B(\xi, \frac{s-1/2-|\xi(s)|}{2})\subseteq A^c$. Therefore, $A^-\cap \mathbb D_{1,0} = A\cap \mathbb D_{1,0} = \{\xi=x1_{[u,1]}: x \geq 1/2, 0 < u \leq 1/2\}$, and hence, $C_{1,0}(A^- \cap \mathbb D_{1,0}) = \P(U_1 \in (0,1/2])\nu_\alpha[1/2,\infty) = (1/2)^{1-\alpha}$. Noting that $ A^\circ \cap \mathbb D_{1,0} \supseteq(A\cap \mathbb D_{1,0})^\circ = \{\xi = x1_{[u,1]}: x > 1/2, 0<u<1/2\}$, we deduce $C_{1,0}( A^\circ\cap \mathbb D_{1,0})\geq \P(U_1 \in (0,1/2))\nu_\alpha (1/2,\infty) = (1/2)^{1-\alpha}$. Therefore, $C_{1,0}(A^\circ \cap \mathbb D_{1,0}) = C_{1,0}(A^- \cap \mathbb D_{1,0}) = (1/2)^{1-\alpha}$. Similarly, we can check that $C_{0,1}( A^\circ \cap \mathbb D_{0,1}) = C_{0,1}(A^- \cap \mathbb D_{0,1}) = (1/2)^{1-\beta}\, (=(1/2)^{1-\alpha})$. Therefore, for $n\geq N$, \begin{align*} & ((1/2)^{1-\alpha} - \epsilon)(L_+(n)+L_-(n))n^{1-\alpha} \\ & \hspace{97pt} \leq \P(\bar X_n \in A) \\ & \hspace{97pt} \leq ((1/2)^{1-\alpha} + \epsilon)(L_+(n)+L_-(n))n^{1-\alpha}. \end{align*} This is equivalent to \begin{align*} \left(\frac12 \right)^{1-\alpha} & \leq \liminf_{n\to\infty}\frac{\P(\bar X_n \in A)}{(L_+(n)+L_-(n))n^{1-\alpha}}\\ &\leq \limsup_{n\to\infty}\frac{\P(\bar X_n \in A)}{(L_+(n)+L_-(n))n^{1-\alpha}} \leq \left(\frac12 \right)^{1-\alpha}. \end{align*} Hence, $$ \lim_{n\to\infty}\frac{\P(\bar X_n \in A)}{(L_+(n)+L_-(n))n^{1-\alpha}} = \left(\frac12 \right)^{1-\alpha}. $$
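The value $(1/2)^{1-\alpha}$ can also be recovered by simulation. A quick Monte Carlo sketch (ours, with an arbitrary choice $\alpha=1.8$) follows the definition of $C_{1,0}$ as an expectation over the uniform jump time $U_1$:

```python
import random

def C10_estimate(alpha, n_samples=100_000, seed=7):
    """Monte Carlo estimate of C_{1,0}(A^- cap D_{1,0}): draw the jump time
    U ~ Unif(0,1); the one-jump path x*1_{[U,1]} lies in A iff U <= 1/2 and
    x >= 1/2, contributing the tail mass nu_alpha[1/2, inf) = (1/2)**(-alpha)."""
    rng = random.Random(seed)
    nu_tail = 0.5 ** (-alpha)
    acc = 0.0
    for _ in range(n_samples):
        if rng.random() <= 0.5:
            acc += nu_tail
    return acc / n_samples

alpha = 1.8
print(C10_estimate(alpha), 0.5 ** (1 - alpha))
```

The estimate agrees with the analytic value $(1/2)^{1-\alpha}$ to within Monte Carlo error.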
\section{Introduction} The binary black holes (BBHs) detected by the ground-based gravitational-wave (GW) detectors LIGO~\citep{Harry:2010zz} and Virgo~\citep{TheVirgo:2014hva} all merged in the local universe~\citep{2016PhRvL.116x1102A,GW151226-DETECTION,2016PhRvX...6d1015A,2017PhRvL.118v1101A,2017ApJ...851L..35A,2017PhRvL.119n1101A,2018arXiv180511579T}. These detections have made it possible to measure the \emph{local} merger rate of BBHs at {$[24.4-111.7]$}~\ensuremath{\mathrm{Gpc}^{-3} \mathrm{yr}^{-1}}\xspace (90\% credible interval~\citep{o2rates}). The current sensitivity of advanced detectors limits the maximum redshift at which heavy BBHs such as GW150914{} can be detected to {z\si0.3}, while heavier objects could be observed farther away~\citep{2016PhRvL.116x1102A,GW151226-DETECTION,2016PhRvX...6d1015A,2017PhRvL.118v1101A,2017ApJ...851L..35A,2017PhRvL.119n1101A,2018arXiv180511579T,2017PhRvD..96b2001A,o2rates}. As the LIGO and Virgo instruments progress toward their design sensitivity~\citep{2016LRR....19....1A}, and the network of ground-based detectors grows, it will be possible to detect BBHs at redshifts of \si1 (the exact value depending on the BBH mass). This can potentially make it possible to probe the merger rate of BBHs through a significant distance range, and to check how it varies with redshift~\citep{2018arXiv180510270F}. While this might provide precious information on the evolution of the merger rate, it would be interesting to access sources at even higher redshifts. Since compact binaries are constituted of neutron stars and black holes, the leftovers of main-sequence stars, a measurement of their abundance at different stages of cosmic history can potentially tell us something about the star formation rate (SFR). The latter is currently measured using various electromagnetic probes (see~\cite{2014ARA&A..52..415M} for a recent review). However, electromagnetic probes do not directly track the amount of matter being formed in a galaxy.
Instead, they track the luminosity, which is then linked to the mass production through several steps of modeling (e.g.\ on the initial mass function). Furthermore, dust extinction can significantly reduce the bolometric luminosity of a galaxy, or alter its spectral content, which is a key ingredient to infer the SFR from light. These limitations are particularly severe at redshifts above 3 where, additionally, fewer data points are available from electromagnetic observations. Gravitational-wave probes do not suffer from these issues: they cannot be altered by dust and they directly encode information about the mass of the source. Two proposals for third-generation (3G) ground-based detectors are currently being pursued, which would make it possible to detect BBHs at large redshifts: the Einstein Telescope~\citep{2010CQGra..27s4002P} (ET) and Cosmic Explorer (CE)~\citep{2017CQGra..34d4001A}. Using the local merger rate calculated by the LIGO and Virgo collaborations, it has been estimated that $[1-40]\times10^4$ BBHs merge in the universe per year~\citep{2017PhRvL.118o1105R}. \cite{Vitale3G} has shown how BBHs can be detected all the way to redshifts of \si15 by networks of 3G detectors. Since that is a significant fraction of the volume of the universe, one would thus expect that a large fraction of merging BBHs would be detectable. Indeed, \cite{2017PhRvL.118o1105R} estimates that 99.9\% of the BBH mergers will be detectable by 3G detectors~\footnote{In this Letter we solely focus on BBHs. Previous work exists for binary neutron stars~\citep{VanDenBroeck:2010vx,2019ApJ...878L..13S}.}. In this Letter we show how, under quite generic hypotheses, accessing BBHs with 3G gravitational-wave detectors allows for a direct inference of the merger rate and the SFR all the way to redshifts of $\sim 10$.
\section{Event rates}\label{sec:rates} As sources are detected in a gravitational-wave detector network, one can estimate their redshifts \citep{Vitale3G,2016ApJ...825..116F,2015PhRvD..91d2003V} and measure their detection rate in the local frame. Let $\ensuremath{{R}_m}\xspace(z_m)\equiv \frac{\ensuremath{\mathrm{d}} N_m}{\ensuremath{\mathrm{d}} t_d \ensuremath{\mathrm{d}} z}$ be the total redshift rate density of mergers in the detector frame (the number of mergers per detector time per redshift). The shape of this function, given the uncertainty in the observed redshift of the detected sources, can be inferred with hierarchical analysis~\citep{Mandel:2010,Hogg:2010,Youdin:2011,Farr:2011}. The redshift rate density can be written in terms of the volumetric total merger rate in the source frame, $\ensuremath{\mathcal{R}_m}\xspace(z_m)\equiv \frac{\ensuremath{\mathrm{d}} N_m}{\ensuremath{\mathrm{d}} V_c \ensuremath{\mathrm{d}} t_s}$, as \begin{equation} \label{Eq.DifferentialRate} \ensuremath{{R}_m}\xspace(z_m) = \frac{1}{1+z_m}\frac{\ensuremath{\mathrm{d}} V_c}{\ensuremath{\mathrm{d}} z}\ensuremath{\mathcal{R}_m}\xspace(z_m), \end{equation} where the $1+z_m$ term arises from converting source-frame time to detector-frame time~\citep{Dominik:2013tma}. The volumetric merger rate {in galactic fields} depends on the star formation rate and the delay between the formation of the binary black hole progenitors and their eventual merger. All the systems that merge at a lookback time $t_m$ (or, equivalently, at a redshift $z_m=z(t_m)$) are systems that formed at $z_f>z_m$ (or $t_f > t_m$). The delay time distribution, $p(t_m | t_f, \lambda)$, is the probability density that a system that formed at time $t_f$ will merge at time $t_m$. This function may depend on an (unknown) time scale, the parameters of the system that is merging, and possibly other parameters. We capture this dependence using parameters $\lambda$.
We can write the merger rate at redshift $z_m$ as a function of the black hole binary volumetric formation rate, $\ensuremath{\mathcal{R}_f}\xspace\left(z_f \right)$: \begin{eqnarray}\label{Eq.VolumetricRateRed} \ensuremath{\mathcal{R}_m}\xspace(z_m) &=& \int_{z_m}^{\infty}{ \ensuremath{\mathrm{d}} z_f \frac{\ensuremath{\mathrm{d}} t_f}{\ensuremath{\mathrm{d}} z_f}\ensuremath{\mathcal{R}_f}\xspace(z_f) p(t_m| t_f, \lambda)} \end{eqnarray} Here we assume that the volumetric formation rate $\ensuremath{\mathcal{R}_f}\xspace(z_f)$ is simply proportional to the star formation rate density at the same redshift, $\psi(z)$ (Eq.~\eqref{Eq.MDSFR}): \begin{equation} \ensuremath{\mathcal{R}_f}\xspace(z_f)\equiv \frac{\ensuremath{\mathrm{d}} \ensuremath{N_{\mathrm{form}}}\xspace}{\ensuremath{\mathrm{d}} \ensuremath{V_{\mathrm{C}}}\xspace \ensuremath{\mathrm{d}} t_f} \propto \psi(z_f). \end{equation} This is a reasonable assumption~\citep{GW150914-STOCHASTIC,2014ARA&A..52..415M}, since the life-time of massive stars that will become black holes is of the order of tens of Myr and hence negligible when compared to the other time-scales of interest. We do not account here for possible contributions to the formation rate arising from binaries that do not form in galactic fields (e.g.\ binaries from globular clusters or from population III stars). The methods we use can be extended to account for multiple formation channels; we discuss this possibility further below. Both the formation rate and the time delay distribution might depend on some intrinsic properties of the binary being formed, e.g., the component masses, {or on properties of the environment, e.g., the metallicity}~\citep{Dominik:2013tma}. These dependencies can be included in an extension of our analysis in a straightforward manner, by adding the masses and other parameters to $\lambda$ and marginalizing over them in Eq.\ \eqref{Eq.VolumetricRateRed}.
However, for this proof-of-principle study we will assume these details can be neglected. In this work we will follow two different approaches. First, we will assume that nothing is known about the true functional form of the SFR and the time-delay distribution. In this case, we use a non-parametric Gaussian-process algorithm to directly measure the volumetric rate density in the detector frame, $\ensuremath{\mathcal{R}_m}\xspace(z)$. Next, we will show that, assuming parameterized functional forms for both the star formation rate and the time-delay distribution, the parameters on which they depend can be measured from the GW detections. \section{Simulated signals}\label{sec:method} To demonstrate how the cosmic BBH merger rate can be measured, we generate three months of synthetic BBH detections in each time-delay model with realistic redshift uncertainty (see below) \citep{Vitale3G}. We assume that the SFR is the Madau-Dickinson (MD) star-formation rate, which can be written: \begin{eqnarray}\label{Eq.MDSFR} \psi_{MD}(z)&=&\phi_0 \frac{(1 + z)^{\alpha}}{1 + \left(\frac{1+z}{C}\right)^{\beta}}, \end{eqnarray} with parameters $\alpha=2.7$, $\beta=5.6$ and $C=2.9$ \citep{2014ARA&A..52..415M}. The coefficient $\phi_0$ is chosen such that the merger rate at $z=0$ is 50~\ensuremath{\mathrm{Gpc}^{-3} \mathrm{yr}^{-1}}\xspace, consistent with the BBH rate measured by the LIGO and Virgo collaborations~\citep{o2catalog}.
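For concreteness, the MD rate and the convolution of Eq.~\ref{Eq.VolumetricRateRed} with an exponential delay-time distribution can be coded up directly. The sketch below is our own illustration: it uses the Madau \& Dickinson (2014) values $\alpha=2.7$, $\beta=5.6$, $C=2.9$, and assumes a flat $\Lambda$CDM cosmology with $H_0=70$~km/s/Mpc and $\Omega_m=0.3$ (our assumption, used only to convert redshift to lookback time).

```python
import math

OM, OL, T_H = 0.3, 0.7, 13.97   # flat LCDM, h = 0.7; T_H = 1/H0 in Gyr (assumed)

def E(z):
    """Dimensionless Hubble rate for flat LCDM."""
    return math.sqrt(OM * (1 + z) ** 3 + OL)

def lookback_gyr(z, n=200):
    """Lookback time t(z) = (1/H0) * int_0^z dz' / ((1+z') E(z')), midpoint rule."""
    h = z / n
    return T_H * sum(h / ((1 + (k + 0.5) * h) * E((k + 0.5) * h)) for k in range(n))

def psi_md(z, a=2.7, b=5.6, C=2.9):
    """Madau-Dickinson SFR shape (normalisation phi_0 omitted)."""
    return (1 + z) ** a / (1 + ((1 + z) / C) ** b)

def merger_rate(z_m, tau_gyr, z_max=15.0, n=150):
    """Source-frame merger rate: the SFR convolved with an exponential
    delay-time distribution, following the convolution over z_f above."""
    t_m = lookback_gyr(z_m)
    h = (z_max - z_m) / n
    total = 0.0
    for k in range(n):
        z_f = z_m + (k + 0.5) * h
        dt_dz = T_H / ((1 + z_f) * E(z_f))      # |dt/dz| in Gyr
        delay = lookback_gyr(z_f) - t_m          # t_f - t_m >= 0
        total += h * dt_dz * psi_md(z_f) * math.exp(-delay / tau_gyr) / tau_gyr
    return total

# The raw MD rate peaks near z ~ 1.9; longer delays drag the merger-rate
# peak to lower redshift, as in the scenarios considered here.
zs = [0.25 * k for k in range(1, 33)]
z_sfr = max(zs, key=psi_md)
z_short = max(zs, key=lambda z: merger_rate(z, 0.1))
z_long = max(zs, key=lambda z: merger_rate(z, 10.0))
print(z_sfr, z_short, z_long)
```

The qualitative behavior matches the discussion below: a short delay scale leaves the merger-rate peak close to the SFR peak, while a 10~Gyr scale moves it to markedly lower redshift.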
We consider two different functional forms for the distribution of time-delays between formation and merger: an exponential function with time scale parameter $\tau$: \begin{equation}\label{Eq.TimeDelayExp} p(t_m | t_f ,\tau) = \frac{1}{\tau} \exp{\left[-\frac{\left(t_{f} - t_{m}\right)}{\tau}\right]} \end{equation} and a distribution uniform in the logarithm of the time delay: \begin{equation} p(\log(t_f-t_m)) \propto \left\{ \begin{array}{ll} 1 & \;10 \ensuremath{\mathrm{Myr}}\xspace < t_f-t_m<10 \ensuremath{\mathrm{Gyr}}\xspace \\ 0 & \;\mathrm{otherwise} \\ \end{array} \right. \end{equation} The true redshifts of the sources under each delay assumption are randomly drawn from Eq.~\ref{Eq.DifferentialRate}, after normalizing it to unity in the redshift range $z\in [0,15]$. In Fig.~\ref{Fig.InjectedPz} we show the redshift distribution of the simulated BBH merger events using the exponential time delay with $\tau = 0.1 \, \ensuremath{\mathrm{Gyr}}\xspace$, $1 \, \ensuremath{\mathrm{Gyr}}\xspace$, $10\, \ensuremath{\mathrm{Gyr}}\xspace$, and with the flat-in-log distribution, at a fixed local merger rate density of 50~$\ensuremath{\mathrm{Gpc}^{-3} \mathrm{yr}^{-1}}\xspace$. The numbers of events in three months are $M=\ensuremath{48000},\ensuremath{29400},\ensuremath{4200}$ and $\ensuremath{25800}$, respectively. \begin{figure}[htb] \includegraphics[width=\columnwidth,clip=true]{true_dNdzdVc.pdf} \caption{The merger redshift distribution of the simulated population of BBHs.
We assume a Madau-Dickinson SFR, and four different prescriptions for the time delay between formation and merger: an exponential time delay with e-fold time of $100\ensuremath{\mathrm{Myr}}\xspace$, $1\ensuremath{\mathrm{Gyr}}\xspace$ and $10\ensuremath{\mathrm{Gyr}}\xspace$, and a uniform-in-log distribution, with a minimum of 10\ensuremath{\mathrm{Myr}}\xspace and a maximum of 10\ensuremath{\mathrm{Gyr}}\xspace.}\label{Fig.InjectedPz} \end{figure} The redshifts of detected BBHs cannot be perfectly measured using GW detectors. We approximate the results of a full analysis of a three-detector 3G network \citep{Vitale3G} by assuming that the likelihood function for the true redshift follows a log-normal distribution conditioned on the true redshift with standard deviation $\sigma_{\mathrm{LN}} (z_{\mathrm{true}})=0.017 z_{\mathrm{true}}+0.012$. We do not explicitly draw mass values or calculate a signal-to-noise ratio. As long as one works with BBHs of total mass above \si15\ensuremath{\mathrm{M}_\odot}, all sources are detectable by 3G networks including the CE up to redshifts where the merger rate becomes negligible~\citep{Vitale3G,2017PhRvL.118o1105R}. Once the catalog of simulated events and the corresponding redshift likelihoods have been generated, our analysis proceeds hierarchically~\citep{Mandel:2010,Hogg:2010,Youdin:2011,Farr:2011}. We assume that the production of gravitational-wave sources is an (inhomogeneous) Poisson process, with rate density \begin{equation} \ensuremath{\mathcal{R}_m}\xspace \left( z \mid \lambda \right),\nonumber \end{equation} depending on some parameters $\lambda$.
Therefore the posterior for the population-level parameters given (synthetic) data for three months of events, $\vec{d}\equiv \left\{ d_i \right\}_{i=1}^{M}$, is~\citep{Foreman-Mackey:2014,Farr:2015,Youdin:2011} \begin{multline}\label{Eq.PofLambda} p\left( \lambda \mid \vec{d}\,\right) \propto \left[\prod_{i=1}^{M} \int \dd z_i \, p\left( d_i \mid z_i \right) \ensuremath{{R}_m}\xspace\left(z_i \mid \lambda \right)\right] e^{- \chi} \,p\left(\lambda\right) \\ \simeq \left[\prod_{i=1}^{M} \frac{1}{M_i} \sum_{j=1}^{M_i} \ensuremath{{R}_m}\xspace \left( z_{ij} \mid \lambda\right)\right] e^{- \chi} \, p\left(\lambda\right), \end{multline} where $\chi\equiv \int \dd z \, \dd t_d \, \ensuremath{{R}_m}\xspace\left(z \mid \lambda\right)$, $z_i$ is the redshift of event $i$; $p\left( \lambda \right)$ is a prior imposed on the parameters describing the merger rate density; and we use $M_i$ samples, $\left\{z_{ij}\right\}_{j=1}^{M_i}$, drawn from a density proportional to the likelihood, $z_{ij} \sim p\left( d_i \mid z_{ij} \right) \dd z_{ij}$, to approximate the marginalisation integral over $z_i$. \section{Results} We wish to understand how well we can expect to constrain the merger rate density and the time delay distribution from our synthetic data set of three months of observations. We first consider an unmodeled approach, where nothing is assumed about the underlying SFR function and time-delay distribution other than that the resulting merger rate is relatively smooth~\citep{Foreman-Mackey:2014}. We assume that the log of the merger rate can be described by a piecewise-constant function over $K = 29$ redshift bins.
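Evaluating Eq.~\ref{Eq.PofLambda} is inexpensive once the per-event redshift samples are in hand. The sketch below (ours; the rate density `rate_fn` is a stand-in for any parameterized model) computes the log posterior and checks it against the analytically known constant-rate Poisson case:

```python
import math

def log_posterior(rate_fn, event_samples, expected_total, log_prior=0.0):
    """Hierarchical log-posterior: for each event, log of the Monte Carlo
    average of the rate density over that event's redshift-likelihood samples;
    minus the expected total number of events chi; plus the log prior."""
    logL = -expected_total + log_prior
    for samples in event_samples:
        logL += math.log(sum(rate_fn(z) for z in samples) / len(samples))
    return logL

# Sanity check: with a constant rate density R over [0, z_max], perfectly
# measured redshifts (one delta-function sample each), and chi = R * z_max,
# this reduces to the familiar Poisson log-likelihood N log R - R z_max.
R, z_max, N = 50.0, 10.0, 7
events = [[1.0 + 0.5 * i] for i in range(N)]
lp = log_posterior(lambda z: R, events, R * z_max)
print(lp, N * math.log(R) - R * z_max)
```

In a real analysis, `rate_fn` would be the binned or parameterized merger rate and the normalization $\chi$ would be computed by integrating it over redshift and observing time.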
To ensure there are enough samples in each bin, we choose the bins in the following way: $0\leq z<0.32$ for the first bin, while the remaining bins are uniformly distributed in $\log (1+z)$ with $z\in[0.32,15)$ so that the log of the merger rate is \begin{equation} \log \ensuremath{\mathcal{R}_m}\xspace = \begin{cases} n_1 & 0 \leq z < z_1 \\ \ldots & \\ n_i & z_{i-1} \leq z < z_i \\ \ldots & \\ n_K & z_{K-1} \leq z < z_K \end{cases}, \end{equation} and we treat the per-bin merger rates, $n_i$, as parameters, $\lambda$, in Eq.~\ref{Eq.PofLambda}. We apply a squared-exponential Gaussian-Process prior on the $n_i$, which has a covariance kernel of \begin{equation} \cov\left( n_i, n_j \right) = \sigma^2 \exp\left[-\frac{1}{2}\left(\frac{z_{i-1/2} - z_{j-1/2}}{l}\right)^2\right], \end{equation} with $z_{i-1/2} = \left( z_{i} + z_{i-1} \right)/2$ the midpoint of the $i$th redshift bin. We treat the variance of the $n_i$, $\sigma^2$, and the correlation length in redshift space, $l$, as additional parameters in the fit. The squared-exponential Gaussian Process prior enforces the smoothness of the merger rate on scales that are comparable to or larger than $l$ (which may be much larger than the bin spacing if the data support it), and guards against over-fitting when $K$ is large~\citep{Foreman-Mackey:2014}. The results for this fit are shown in Fig.~\ref{Fig.MeasuredGP}, where for each true synthetic population we show the median posterior on the piecewise-constant $\dd N / \dd V_c \dd t_d$, together with 68\% and 95\% (1- and 2-sigma) credible intervals. We see that the unmodeled GP method pinpoints the merger rates so precisely that all four distributions are clearly distinguishable; near $z \sim 2$ the uncertainty in the measured merger rate is $\sim 3\%$. At moderate redshifts, $z<4$, the uncertainties are smaller than the separation between different populations. At larger redshifts the measurement becomes more uncertain, and overlaps exist.
This is due to a combination of two effects: on the one hand, fewer sources merge, and hence are detected, at those redshifts; on the other, the uncertainty in their measured redshift is higher. The advantage of this approach over a more rigid parameterization of the merger rate is that it can fit \emph{any} sufficiently smooth merger rate; a disadvantage is that we learn nothing individually about the time-delay distribution or the star formation rate, since they are completely degenerate in this flexible model. \begin{figure}[htb] \includegraphics[width=\columnwidth,clip=true]{dNdzdVc_GP.pdf} \caption{Posterior on the volumetric merger rate density calculated using an unmodeled approach. The dashed lines are the true rates under the four possible time delay distributions we consider. Full lines give the median measurement, while the bands report the 68\% and 95\% credible intervals. Near the peak $z\sim 2$ the uncertainty in the rate estimate is $\sim 3\%$ for the $\tau=0.1\ensuremath{\mathrm{Gyr}}\xspace$, $1\ensuremath{\mathrm{Gyr}}\xspace$, and flat-in-log models. The uncertainty rises to $10\%$ in the $\tau=10\ensuremath{\mathrm{Gyr}}\xspace$ model around the peak $z\sim 1$, as the total number of events is 10 times smaller than in the other models. The small systematic offset for the flat-in-log and prompt data sets is likely due to a $100 \, \mathrm{Myr}$ lower limit on the delay time imposed for numerical stability; see the corresponding discussion in the parameterized model results.}\label{Fig.MeasuredGP} \end{figure} Next, we want to verify how well we can measure the characteristic parameters of the SFR and time-delay distribution \emph{assuming} we know their functional forms. For this analysis, we take the MD SFR and the exponential time-delay distribution as models, treating the parameters $\alpha$, $\beta$, $C$, as well as the time-delay scale $\tau$ as unknowns.
We then calculate the posterior for $\lambda_{MD}=\{\alpha,\beta,C,\tau\}$ with Eq.~\ref{Eq.PofLambda}. Note that our parameterized model is inconsistent with the flat-in-log data-generating model no matter what value of $\tau$ is used. We use log-normal priors with a width of $\simeq 0.25$ in the log for $\alpha$, $\beta$ and $C$, reflecting an approximation to the uncertainty in the determination of the SFR~\citep{2014ARA&A..52..415M}. For $\tau$, we use a width of 2 in the log to cover the whole dynamic range from 0.1~\ensuremath{\mathrm{Gyr}}\xspace to 10~\ensuremath{\mathrm{Gyr}}\xspace. The uncertainties are large enough that the posterior distributions are not truncated by the prior; even with only three months of data we obtain meaningful constraints on the SFR parameters at the few percent level and on the time delay at a few tens of percent in all models. We place a lower bound on the time-delay parameter $\tau \geq 100 \, \mathrm{Myr}$ in order to ensure numerical stability in our computation of the integral in Eq.\ \eqref{Eq.VolumetricRateRed}. This results in some discrepancy between the fit and the data-generating distribution for the ``prompt'' data set; the prompt data are recovered in the limit $\tau \to 0$, but as this is excluded by our prior there is a bias in the fit, particularly at high redshift where timescales of $100 \, \mathrm{Myr}$ are a significant fraction of the age of the universe. The inferred posterior on the merger rate redshift density is shown in Figure \ref{fig:dNdz-parameteric}. In Fig.~\ref{Fig.MDPosteriorsAll} we show posteriors for the parameters $\lambda_{MD}$ for the set of events with $\tau=1\ensuremath{\mathrm{Gyr}}\xspace$. \begin{figure} \includegraphics[width=\columnwidth,clip=true]{dNdzdVc_param_3months.pdf} \caption{Posterior of the merger rate density calculated from the parameterized fits described in the text. Dashed lines show the true merger rate distributions for our models.
Solid lines give the posterior median, and the dark and light bands the 68\% and 95\% credible intervals. See the text for more details.} \label{fig:dNdz-parameteric} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{posteriors_1Gyr_3months.pdf} \caption{The posterior distribution for the time-delay timescale and the MD SFR parameters after $\ensuremath{29400}$ detections in the $1 \, \mathrm{Gyr}$ delay timescale scenario. True values are indicated by blue lines. Dashed lines indicate the highest posterior density 90\% credible interval; star formation rate parameters are measured to few percent precision, and the delay timescale is measured to $\sim 60\%$. Plot labels give the median and the highest posterior density 90\% credible interval for each parameter.} \label{Fig.MDPosteriorsAll} \end{figure} After three months of detections in the $1 \, \mathrm{Gyr}$ scenario, the scale factor of the time-delay distribution can be measured with a relative uncertainty of 60\% {(90\% credible interval)}: $\tau=\ensuremath{1.14^{+0.59}_{-0.61}}$. The parameters of the MD SFR can also be measured with a precision of $\sim 20\%$ or better. We obtain $\alpha=\ensuremath{2.64^{+0.38}_{-0.29}}$, $\beta=\ensuremath{5.69^{+0.27}_{-0.18}}$, and $C=\ensuremath{3.03^{+0.17}_{-0.19}}$. The parameter recovery for the other scenarios is similar, but for the flat-in-log scenario the systematic bias from model mismatch is significantly larger than the statistical uncertainty. The parameter estimates obtained from all scenarios are given in Table\ \ref{Tab.NearbyInj}. Determination of the time-delay distribution and the parameters of the star formation rate also allows measurement of the total number of BBH mergers per solar mass of star formation (not shown). We observe that correlations exist between some of the parameters. In particular, $\tau$ and $C$ show a clear correlation. This can be understood as follows.
If $C$ increases then the peak of the SFR moves to higher redshift; to keep the \emph{observed} merger rate fixed, this implies an increase in the delay time. \begin{table}[htb] \centering \begin{tabular}{|c|c|c|c|c|} \hline True time-delay & $\alpha$ & $\beta$ & $C$ & $\tau$ \\ \hline Exp. $\tau=0.1\ensuremath{\mathrm{Gyr}}\xspace$ & \ensuremath{2.75^{+0.12}_{-0.11}}& \ensuremath{5.38^{+0.23}_{-0.21}}& \ensuremath{2.98^{+0.10}_{-0.07}} & \ensuremath{0.31^{+0.21}_{-0.17}} \\ \hline Exp. $\tau=1.0\ensuremath{\mathrm{Gyr}}\xspace$ & \ensuremath{2.64^{+0.38}_{-0.29}}& \ensuremath{5.69^{+0.27}_{-0.18}}& \ensuremath{3.03^{+0.17}_{-0.19}} & \ensuremath{1.14^{+0.59}_{-0.61}} \\ \hline Exp. $\tau=10\ensuremath{\mathrm{Gyr}}\xspace$ & \ensuremath{2.54^{+0.89}_{-0.87}}& \ensuremath{5.57^{+0.88}_{-0.75}}& \ensuremath{3.00^{+0.29}_{-0.29}} & \ensuremath{11.64^{+17.32}_{-8.60}} \\ \hline Flat Log & \ensuremath{1.95^{+0.16}_{-0.15}}& \ensuremath{4.83^{+0.26}_{-0.25}}& \ensuremath{3.01^{+0.19}_{-0.16}} & \ensuremath{0.38^{+0.35}_{-0.26}}\\ \hline \end{tabular} \caption{Median and {90\%} credible intervals for the posterior of the MD parameters and time-delay scale. The first column reports which event set is used. \label{Tab.NearbyInj}} \end{table} \section{Discussion and outlook} \label{sec:discussion} In this Letter we have shown how next-generation ground-based detectors will enable using gravitational waves from binary black holes to infer their merger rate throughout cosmic history, even in the absence of any model for the star formation history. On the other hand, if a modeled template is available for the star formation rate and for the time-delay distribution between formation and merger, we have shown how their characteristic parameters can be measured with just three months of data.
We have simulated four different ``Universes'', assuming in each case that the formation rate matches the Madau-Dickinson star formation rate, combined with four different prescriptions for the delay between formation and merger: flat in the logarithm of the time-delay, or exponential, with an e-fold time of 0.1, 1 or 10~\ensuremath{\mathrm{Gyr}}\xspace. The unmodeled approach yields a direct measurement of the volumetric merger rate $\ensuremath{\mathcal{R}_m}\xspace\equiv \ensuremath{\mathrm{d}} N/\ensuremath{\mathrm{d}} V_c\ensuremath{\mathrm{d}} t_d$. Fig.~\ref{Fig.MeasuredGP} shows the measurement obtained with three months of data. The four models are clearly distinguishable, and have uncertainties much smaller than their separation for redshifts below $\sim 4$. At larger redshifts, the uncertainties increase due to the smaller number of sources and the larger uncertainty on their redshifts. Including a model for the star-formation history and the time-delay distribution dramatically increases the power of the method, at the expense of its generality. Using the Madau-Dickinson SFR, Eq.~\ref{Eq.MDSFR}, and an exponential time-delay distribution with unknown e-fold time $\tau$ as templates, we have shown how all unknowns can be measured with good precision after three months of data. The measurement of the SFR parameters is not accurate for the universe with flat-in-log time delays, as one would have expected given the mismatch between the time-delay template and the actual time-delay distribution. This kind of issue can be mitigated using templates with more parameters. A larger number of parameters will increase the computational cost of the analysis, and the uncertainty in the measurement. However, the number of detectable BBHs is in the hundreds of thousands per year, which will compensate for the extra complexity of the model. In this work we have made a few simplifying assumptions to keep the computational cost under control.
{First, we have assumed that the time-delay distribution is the same for all sources at all redshifts, while in reality it will depend on the redshift of the source through the metallicity of the environment~\citep{2019MNRAS.482.5012C}. This limitation can be lifted by introducing a functional form that relates the time delay to redshift and possibly other parameters, which would eventually be marginalized over.} Relatedly, we have neglected the dependence of the SFR and time-delay distribution on the masses and spins of the sources. This is not an intrinsic limitation of the method, and can easily be folded into the analysis. As these extra parameters are accounted for, we would expect that more sources will be required to achieve the same precision. But, as mentioned above, in this work we have considered only three months' worth of data. Many more detections will be available for these tests, which will compensate for the increased complexity of the model. Finally, while generating the simulated signals, we have assumed that all sources come from galactic fields. There is growing evidence that at least a fraction of the BBHs detected by LIGO and Virgo were formed in globular clusters~\citep{2015PhRvL.115e1101R,2016PhRvD..93h4029R}. These sources would show a very different evolution with redshift, with a peak of the merger rate at higher redshift. If black holes from population III stars merge, they could also contribute to the total merger rate, probably with a peak above $z\sim 10$~\citep{Belczynski2016,2016MNRAS.456.1093K}. {Depending on the relative abundance of mergers in these channels, one could be able to calculate their branching ratios as a function of redshift. This would give information which is complementary to what can be obtained studying the mass, spin, and eccentricity distributions of gravitational-wave detections.
The method we developed can be extended to account for multiple populations, which we will explore in a future publication.} \section{Acknowledgments} The authors would like to thank H.-Y.~Chen, M.~Fishbach, R.~O'Shaughnessy, C.~Pankow, and T.~Regimbau for useful comments and suggestions. S.V. acknowledges the support of the National Science Foundation through NSF award PHY-1836814, and the support of the LIGO Laboratory. LIGO was constructed by the California Institute of Technology and Massachusetts Institute of Technology with funding from the National Science Foundation and operates under cooperative agreement PHY-1764464. The authors would like to acknowledge the LIGO Data Grid clusters, without which the simulations could not have been performed. This is LIGO document number P1800219.
\section{Introduction} In this paper we study a rather general class of jump type stochastic differential equations taking values in $ \mathbb {R}^d $ evolving according to \begin{equation}\label{eq:sde} X_t = x +\int_0^t b (s, X_s) d s + \int_{[0, t ]} \int_{ \mathbb {R}^m \times \mathbb {R}_+} c (s, z, X_{s-}) 1_{ u \le \gamma ( s,z, X_{s-})} N (ds,d z, du ) , \end{equation} with $x \in \mathbb {R}^d . $ In the above equation, $N(ds, dz, du ) $ is a Poisson random measure, defined on a fixed probability space $(\Omega, {\mathcal A}, P) ,$ with $ (s, z, u ) \in \mathbb {R}_+ \times \mathbb {R}^m \times \mathbb {R}_+, $ having intensity measure $ d s \mu (d z) d u ,$ for some $\sigma -$finite measure $\mu $ on $ (\mathbb {R}^m , {\mathcal B} ( \mathbb {R}^m ) ) .$ The associated infinitesimal generator at time $t$ is given by \begin{equation}\label{eq:gen} L_t f (x) = \sum_{i=1}^d \frac{\partial f}{\partial x_i } (x) b^i ( t,x) + \int_{\mathbb {R}^m} [ f ( x + c ( t, z, x)) - f(x)] \gamma ( t, z, x) \mu (dz ) . \end{equation} The coefficients of the system are the measurable functions $ b : \mathbb {R}_+ \times \mathbb {R}^d \to \mathbb {R}^d , $ $c : \mathbb {R}_+ \times \mathbb {R}^m \times \mathbb {R}^d \to \mathbb {R}^d $ and $ \gamma : \mathbb {R}_+ \times \mathbb {R}^m \times \mathbb {R}^d \to \mathbb {R}_+ .$ We shall always work under conditions ensuring that \eqref{eq:sde} admits a unique strong non-explosive solution, adapted to the filtration generated by the Poisson random measure, which is Markov, having c\`adl\`ag trajectories of finite variation (see Assumption \ref{conditions} below). Let us give some comments on \eqref{eq:sde}. If the jump rate $ \gamma $ is a constant function, then the above process is a classical jump process. But if $\gamma $ is not constant, then the jump intensity and also the jump amplitude depend on the current position of the process. This is a natural assumption in many modeling issues (see e.g.
\cite{CDMR}, \cite{PTW-10} or \cite{pierrennathalie} for the modeling of biological or chemical phenomena, see \cite{ABGKZ} for an overview). Observe that if the measure $ \mu$ is finite, then the process evolving according to \eqref{eq:sde} is a piecewise deterministic Markov process (PDMP). PDMPs have been introduced by Davis (\cite{Davis84} and \cite{Davis93}); they evolve in a deterministic manner in between successive jump events, and only a finite number of jumps occur during finite time intervals. Here, we will however deal with the general infinite activity case. The goal of this paper is to describe how the solution of \eqref{eq:sde} behaves as $t \to \infty .$ Let us illustrate the main ideas by some examples. Suppose e.g.\ that there exist measurable functions $ c ( z, x) : \mathbb {R}^m \times \mathbb {R}^d \to \mathbb {R}^d ,$ $ \gamma ( z, x) : \mathbb {R}^m \times \mathbb {R}^d \to \mathbb {R}_+ $ and $ b_1 ( x) : \mathbb {R}^d \to \mathbb {R}^d $ such that for all $ z \in \mathbb {R}^m , x\in \mathbb {R}^d , $ $$ |c(t, z, x) - c(z, x) | + | \gamma ( t, z, x ) - \gamma ( z, x) | + | b(t, x) - b_1 ( x) | \to 0 \mbox{ as } t \to \infty .$$ Then (under suitable additional technical conditions) the long time behavior of \eqref{eq:sde} will be well-described by another process of the same type, solution of $$ \bar X_t = x +\int_0^t b_1 ( \bar X_s) d s + \int_{[0, t ]} \int_{ \mathbb {R}^m \times \mathbb {R}_+} c (z, \bar X_{s-}) 1_{ u \le \gamma ( z,\bar X_{s-})} N (ds,d z, du ) , $$ having {\it time homogeneous coefficients.} We will call this regime of convergence the {\it slow regime}, and we notice that in this case jumps survive in the limit process. Of course this is not the only possible scenario, and two other limit regimes exist. They both appear in the situation where, as $t \to \infty, $ the jump heights tend to $0.$ If they do so in a moderate way, they will just produce a limit drift.
If they converge to $0$ sufficiently fast, they will generate a limit diffusive part. Suppose e.g. that the {\it drift} produced by the jumps at time $t, $ given by $$ \tilde b( t, x) = \int_{\mathbb {R}^m} c (t, z, x) \gamma ( t, z, x ) \mu ( dz), $$ converges to a limit drift vector field $b_2 ( x) $ and that $\int_{\mathbb {R}^m} |c (t, z, x)|^2 \gamma ( t, z, x ) \mu ( dz) \to 0 $ as $t \to \infty .$ In this case, the corresponding time homogeneous limit process $\bar X_t$ is solution of the deterministic equation $$ d \bar X_t = b (\bar X_t ) dt , \; \mbox{ with } b (x) = b_1 (x) + b_2 (x) .$$ We call such a limit regime the {\it intermediate jump regime} -- it produces a deterministic limit process, and such results are of course related to the law of large numbers. Finally, the jump part in \eqref{eq:sde} can be centered, that is, $ \tilde b( t, x) = 0 $ for all $ t, x .$ In this case, we are in the {\it fast jump regime} -- and interesting limit features may appear if the variance of the jump part, given by $$ a^{ij} ( t, x) = \int_{\mathbb {R}^m} c^i (t, z, x) c^j (t, z, x) \gamma ( t, z, x ) \mu (dz ) , \quad 1 \le i, j \le d ,$$ converges, as $t \to \infty , $ to some limit variance $ a(x) $ giving rise to a limit diffusive part, and if at the same time, higher order terms $ \int_{\mathbb {R}^m} |c (t, z, x)|^3 \gamma ( t, z, x ) \mu ( dz) $ disappear as time tends to infinity. Notice that these three jump regimes may appear simultaneously, as the following example shows. \begin{ex}\label{ex:CIR} Let $ d=m= 1 $ and $ \mu ( dz ) = dz .
$ For $t > 0$ and for some $r >0,$ let $$ c (t, z, x ) = \frac{\sigma }{2 } e^{-rt} [ 1_{ ]-3 e^{2rt} , -2e^{2rt} [ } (z) - 1_{ ] -2e^{2rt} , -e^{2rt} [ } (z) ] - a x e^{-2rt} 1_{ ]- e^{2rt}, 0 [ } (z) + \frac{d}{(1 + z)^2 } 1_{ ]0, e^{2rt} [ } (z),$$ $b(t, x ) = b $ and $$ \gamma(t, z, x) = f(x) 1_{ ]-3 e^{2rt} , - e^{2rt} [ } (z) + 1_{ ]- e^{2rt}, 0 [ } (z) + f(x) 1_{ ]0, e^{2rt} [ } (z) , $$ for some constants $ \sigma , d \in \mathbb {R} $ and $ a, b > 0 .$ Here, $f : \mathbb {R} \to \mathbb {R}_+$ is bounded, having bounded derivative such that $ \inf_{ x \in \mathbb {R} } f( x) > 0 .$ Let $X_t$ be solution of \eqref{eq:sde} with these parameters, starting from $X_0 = x \in \mathbb {R}_+ . $ By construction, jumps coming from ``noise'' $ z \in ]-3 e^{2rt},- e^{2rt} [ $ are centered, that is, $$ \int_{-3 e^{2rt}}^{-e^{2rt}} c(t, z, x) \gamma ( t, z , x ) \mu (dz ) = 0, $$ and the associated variance is given by $ a (t, x) = \sigma^2 f(x) $ for all $ t, x .$ Moreover, for all $ t, x, $ $$ \int_{-e^{2rt}}^{0} c(t, z, x) \gamma ( t, z , x ) \mu (dz ) = - a x $$ giving rise to a limit drift $-a x .$ Finally, the jumps produced by noise $z > 0 $ survive in the limit process -- this corresponds to the slow jump regime. It is straightforward to show that the associated limit process is a Cox-Ingersoll-Ross type jump process given by $$ d \bar X_t = (b - a \bar X_t ) dt + \sigma \sqrt{ f (\bar X_t ) } d W_t + \int_{ \mathbb {R} \times \mathbb {R}_+} \frac{d}{(1 + z)^2 } 1_{ \{ u \le f( \bar X_{t-}) \}} N (dt,d z, du ) . $$ \end{ex} Let us now come back to the general frame of \eqref{eq:sde}.
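As a numerical aside (ours, not part of the analysis below), the thinning indicator $1_{u \le \gamma}$ in \eqref{eq:sde} suggests a simple acceptance--rejection simulation whenever the jump activity is finite and the jump rate is bounded by a constant $\Gamma$: candidate jump times arrive at rate $\Gamma \mu(G_n)$, each carries a mark $z$ drawn from the normalized restriction of $\mu$, and a candidate is accepted with probability $\gamma(z, x)/\Gamma$. A minimal one-dimensional sketch with Euler integration of the drift between candidates (all names and example coefficients are illustrative):

```python
import numpy as np

def simulate_thinned_jump_sde(x0, T, b, c, gamma, Gamma, sample_z, mu_mass,
                              dt=1e-3, rng=None):
    """Simulate dX = b(X) dt + jumps c(z, X) accepted at rate gamma(z, X),
    via thinning of a Poisson stream of candidates (rate Gamma * mu_mass).
    Requires gamma(z, x) <= Gamma for all z, x."""
    rng = np.random.default_rng() if rng is None else rng
    t, x = 0.0, float(x0)
    times, path = [t], [x]
    next_cand = rng.exponential(1.0 / (Gamma * mu_mass))
    while t < T:
        step = min(dt, next_cand - t, T - t)
        x += b(x) * step                    # Euler drift between candidates
        t += step
        if t >= next_cand and t < T:
            z = sample_z(rng)               # mark drawn from mu restricted to G_n
            if rng.uniform(0.0, Gamma) <= gamma(z, x):   # thinning step
                x += c(z, x)
            next_cand = t + rng.exponential(1.0 / (Gamma * mu_mass))
        times.append(t)
        path.append(x)
    return np.array(times), np.array(path)
```

Accepted jumps then occur with the intended intensity $\gamma(z, x)\, \mu(dz)\, ds$, which is exactly the role of the indicator $1_{u \le \gamma}$ in \eqref{eq:sde}.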
In this paper, we will give conditions on the coefficients of the system that guarantee that the associated limit process is a time homogeneous jump diffusion process $\bar X_t $ having generator \begin{multline}\label{eq:12} L f(x) = \sum_{i=1}^d \frac{\partial f}{\partial x_i } (x) g^i ( x) + \frac{1}{2} \sum_{i, j = 1}^d \frac{\partial^2 f}{\partial x_i \partial x_j } (x) a^{i j } ( x) + \int_{{\mathbb {R}^m}} [ f ( x + c ( z, x)) - f(x)] \gamma ( z, x) \mu (dz ), \end{multline} where $ g = b_1 + b_2, $ with $b_2$ the limit drift of the intermediate regime, $a $ the limit variance associated to the fast regime, and $c $ and $\gamma $ the limit jump height and intensity of the slow regime. Let us now define precisely what we mean when saying that $ \bar X$ is the limit process associated to \eqref{eq:sde}. We formalize this idea by using the notion of {\it asymptotic pseudotrajectories}, which has been introduced in Bena\"{\i}m and Hirsch \cite{bh96} (1996) and further used in \cite{bouguetetal}, and which provides a general framework to deal with the long time behavior of non-autonomous processes. For $X_t $ a solution of \eqref{eq:sde}, starting from any arbitrary initial law $ {\mathcal L} ( X_0) , $ let $$ \mu_t := {\mathcal L} (X_t) $$ and write $P_t$ for the transition semigroup of the limit process $\bar X_t.$ We introduce for any two probability measures $ \mu $ and $\nu $ on $(\mathbb {R}^d , {\mathcal B} ( \mathbb {R}^d ) ) $ and any class of test functions $ {\mathcal F}$ the distance $$ d_{\mathcal F} ( \mu , \nu ) := \sup_{ f \in {\mathcal F}} | \mu ( f) - \nu ( f) | .$$ The main result of this paper, Theorem \ref{cor:1}, gives explicit conditions on the coefficients of \eqref{eq:sde} implying that for a suitable class of test functions $ {\mathcal F} $ and for any $ T < \infty , $ \begin{equation}\label{eq:aspseudo} \lim_{t \to \infty } \sup_{s \le T} d_{\mathcal F} ( \mu_{t + s } , \mu_t P_s ) = 0.
\end{equation} This means that $ ( \mu_t)_t $ is an asymptotic pseudotrajectory of $ (P_t)_t $ in the sense of \cite{bh96}. Furthermore, if the limit process $\bar X_t$ is exponentially ergodic with unique invariant probability measure $\pi , $ we shall also prove the weak convergence of $\mu_t$ to $ \pi , $ as $ t \to \infty ,$ together with a control of the speed of convergence (see Corollary \ref{cor:3}). The study of non-stationarity is a challenging problem, in particular in statistics of stochastic processes, and a lot of papers have been devoted to this topic during the last decade, see e.g. \cite{JM}, \cite{Dahlhaus2017} or \cite{hlt} and the references therein. All these papers deal either with the time discrete case or with some periodic setting (leading again to a discrete description), and to the best of our knowledge, Theorem \ref{cor:1} and Corollary \ref{cor:3} are among the first results that allow one to deal with the long time behavior of time inhomogeneous processes in the framework of continuous time observations. To prove \eqref{eq:aspseudo}, we have to control two schemes of convergence: first, the convergence of $X_t $ to $ \bar X_t ;$ second, the convergence of $ \bar X_t$ to its invariant regime $ \pi .$ The main point in proving the convergence of $X_t$ to the limit process is to control the asymptotic behavior of the generator $L_t $ as $t \to \infty , $ using a Taylor expansion, and to prove that $ L_t $ converges to $L$ in a sense that has to be made precise. It is then a classical approach, relying on the Trotter-Kato theorem, to show that this convergence implies the convergence of the associated transition semigroups. This implication is classical if the limit transition semigroup {\it preserves regularity}, but it is more difficult in the present frame, since the presence of the position dependent jump rate $ \gamma $ makes the regularity analysis of the associated transition semigroup more intricate.
We rely on recent results of Bally et al. \cite{ballyetc} (2017) which enable us to overcome this difficulty. Let us now comment on the second scheme of convergence, the convergence of $\bar X_t$ to its invariant measure $ \pi . $ For classical jump diffusions, there is by now a huge literature on this subject. Masuda \cite{Ma07} (2007) follows the Meyn and Tweedie approach developed in \cite{Me93book} or \cite{Me93}, but he works in the simpler situation where the intensity term $\gamma ( z, x) $ of \eqref{eq:12} is not present. Kulik \cite{kulik} (2009) uses the stratification method to prove exponential ergodicity of jump diffusions, but the models he considers do not include position dependent jump rates either. Finally, Duan and Qiao \cite{DQ14} (2016) are interested in equations driven by non-Lipschitz coefficients. None of the above mentioned papers is applicable to our situation, due to the presence of the position dependent jump intensity $ \gamma ( z, x ) ,$ and therefore the first part of this paper is devoted to the ergodicity analysis of the process $ \bar X_t.$ More precisely, we show that the jumps themselves can be used to generate an explicit coupling method which leads to a control of the speed of convergence to equilibrium of $ \bar X_t.$ This coupling method relies on the regeneration technique which has been introduced in L\"ocherbach and Rabiet \cite{Victor} (2017) and which is applied here to the big jumps. In the spirit of the splitting technique introduced by Nummelin \cite{Nu78} (1978) and Athreya and Ney \cite{At78} (1978), we state a non-degeneracy condition which guarantees that the jump operator associated to the big jumps possesses a Lebesgue absolutely continuous component.
This amounts to imposing that the partial derivatives of the jump term $c (z, x) $ in \eqref{eq:12} with respect to $z$ are sufficiently non-degenerate, see Assumption \ref{ass:final} below, leading to a sort of local Doeblin condition for the jump operator associated to the big jumps. Roughly speaking, this local Doeblin condition implies that jumps issued from a pre-jump position $x $ belonging to a certain set $ C$ generate noise independently of the starting position $x.$ This relevant set $C$ is what \cite{Me93book} call a ``petite set''. In order to be able to couple two trajectories of the limit process, we then have to ensure that they visit the set $C$ at the same time. This is granted by a Lyapunov type condition implying that the process returns to a large compact set $K$ infinitely often, together with a control of the moments of the associated hitting times. Moreover, we need a control argument that allows us to steer the trajectory of $\bar X_t $ from any starting position $x \in K$ to the target set $ C$ -- this control is based on an approximation of the process $\bar X$ by a finite activity process where only big jumps are considered, see Theorem \ref{theo:controlXbar} below. As a consequence, we are able to state our Theorem \ref{theo:tv} which proves the unique ergodicity of $\bar X$ together with a control of the speed of convergence to equilibrium with respect to the total variation distance. Our paper is organized as follows. Section \ref{sec:1} is devoted to the proof of the unique ergodicity of the limit process $ \bar X_t$ together with the control of the speed of convergence to equilibrium, see Theorem \ref{theo:tv}. In this section, we also explain the regeneration technique based on big jumps and prove the existence of a finite coupling time of two copies $ \bar X_t^x $ and $ \bar X_t^y , $ together with the existence of all of its polynomial moments under a suitable Lyapunov type condition.
Section \ref{sec:2} is devoted to the study of the time inhomogeneous process $ X_t $ having generator \eqref{eq:gen}. Here, we state Theorem \ref{cor:1}, which proves the convergence of $ X_t$ to the limit process $\bar X_t , $ and Theorem \ref{theo:2} together with Corollary \ref{cor:3}, proving the convergence of ${\mathcal L}(X_t) $ to the unique invariant probability measure $ \pi $ of the limit process $ \bar X_t .$ Finally, Section 4 gives some examples; in particular, we discuss systems of Hawkes processes with mean field interactions, exponential memory kernels and variable length memory. Some of the mathematical proofs are deferred to the Appendix. \section{Unique ergodicity and speed of convergence to equilibrium for jump diffusions with state dependent jump intensity}\label{sec:1} \subsection{Notation} Throughout this paper, for $x \in \mathbb {R}^d, $ $| x| $ denotes the Euclidean norm on $\mathbb {R}^d .$ Moreover, for a multi-index $\alpha = ( \alpha_1, \ldots, \alpha_q) \in \{ 1, \ldots, d \}^q , $ we write $ | \alpha | = q $ for the length of $\alpha $ and $ \partial_x^\alpha = \partial_{x_{\alpha_1}} \ldots \partial_{x_{\alpha_q}}$ for the associated partial derivative. For a function $ f : \mathbb {R}^d \to \mathbb {R} $ which is $q$ times differentiable, we introduce for any $ 0 \le l \le q , $ \begin{equation} \| f \|_{l, q, \infty } := \sum_{l \le | \alpha | \le q } \sup_{x \in \mathbb {R}^d } | \partial^\alpha_x f (x) | \mbox{ and } \| f \|_{q, \infty } := \| f \|_\infty + \| f \|_{1, q , \infty } .
\end{equation} We write $ C^q_b ( \mathbb {R}^d ) $ for the class of all functions $f : \mathbb {R}^d \to \mathbb {R}$ such that $ \| f \|_{q, \infty } < \infty .$ \subsection{The model} Let $\mu$ be a $\sigma-$finite measure on $({\mathbb {R}^m} , {\mathcal B} ({\mathbb {R}^m})).$ Moreover, let $(\Omega, {\mathcal A} , P ) $ be a probability space on which are defined a Poisson random measure $ N ( ds, dz , du ) ,$ which is a measure on $\mathbb {R}_+ \times {\mathbb {R}^m} \times \mathbb {R}_+$ having intensity measure $ ds \mu ( dz) du , $ as well as independent, standard $1-$dimensional Brownian motions $W^1, \ldots , W^k $ which are independent of $ N .$ We consider the following jump diffusion equation taking values in $\mathbb {R}^d, $ \begin{multline}\label{eq:sde0} \bar X_t = x +\int_0^t g ( \bar X_s) d s + \sum_{l=1}^k \int_0^t \sigma_l ( \bar X_s) d W^l _s \\ + \int_{[0, t ]} \int_{ {\mathbb {R}^m} \times \mathbb {R}_+} c ( z, \bar X_{s-}) {1}_{ u \le \gamma ( z, \bar X_{s-})} N (ds, d z, d u ) , \end{multline} where the coefficients $ g (x), $ $ c(z, x)$ and $ \gamma (z, x) $ are measurable functions satisfying the following assumption. \begin{ass}\label{conditionsbis} \begin{enumerate} \item $g$ and $ \sigma_l, 1 \le l \le k, $ are $C^1 $ and $ \| \nabla g \|_\infty +\sum_{l=1}^k \| \nabla \sigma_l \|_\infty < \infty .$ \item $c$ and $ \gamma$ are continuous, Lipschitz continuous with respect to $x,$ i.e. $$ |c ( z, x ) - c (z, x') | \le L_c( z ) | x - x ' | \mbox{ and } |\gamma ( z, x ) - \gamma (z, x') | \le L_{\gamma}( z ) | x - x ' |,$$ where $ L_c, \ L_{\gamma} $ are measurable functions $ {\mathbb {R}^m} \to \mathbb {R}_+ .$ \item \begin{equation}\label{eq:cmu} C_\mu ( \gamma , c ) := \sup_{x \in \mathbb {R}^d } \int_{{\mathbb {R}^m} }( L_{c} ( z) \gamma (z,x) + L_{\gamma} ( z) |c (z,x) |) \mu (d z) < \infty . 
\end{equation} \item $\sup_{x \in \mathbb {R}^d } \sup_{z \in {\mathbb {R}^m}} \gamma (z,x) = \Gamma < \infty .$ \item $\sup_{ x \in \mathbb {R}^d } \int_{G} | c (z, x) | \gamma ( z, x) \mu (dz) < \infty .$ \end{enumerate} \end{ass} Under these assumptions, Theorem 1.2\ of Graham \cite{carl} (1992) implies that \eqref{eq:sde0} admits a unique strong non-explosive adapted solution which is Markov, having c\`adl\`ag trajectories. We denote by $ \bar X_{t_0, t}^x , t \geq t_0, $ a version of the above process starting from the position $x \in \mathbb {R}^d $ at time $t_0 .$ Whenever $t_0 = 0, $ we shall shortly write $ \bar X_t^x := \bar X_{0, t }^x .$ Finally, if we do not want to specify the initial value of the process at time $0,$ we shall simply write $ \bar X_t .$ Our aim is to state easily verifiable conditions on the parameters $ g, \sigma_l, c $ and $ \gamma $ of the process that imply the unique ergodicity of the process $ \bar X$ together with a control of the speed of convergence to equilibrium. Our approach relies on the regeneration technique based on the jump transitions. Therefore, we first state a sufficient condition implying a local Doeblin condition for the jump transitions. This is done in the next subsection and uses ideas introduced in \cite{Victor}. \subsection{A Doeblin lower bound for the jump-transitions} We control the rate of convergence to equilibrium based on a splitting scheme reminiscent of the regeneration technique. This scheme is entirely based on certain {\it big} jumps of $\bar X$ (that will be defined below) and needs the following non-degeneracy condition on the jump noise and the associated jump rate.
\begin{ass}\label{conditions2} We suppose that $m \geq d.$ Let $ \mu = \mu_{ac} + \mu_{s} $ be the Lebesgue decomposition of $\mu ,$ with $ \mu_{ ac} (d z) = h(z) d z ,$ for some measurable function $h \geq 0 \in L^1_{loc} ( \lambda ) , $ where $\lambda $ is the Lebesgue measure on $\mathbb {R}^m .$ Then there exist $x_0 \in \mathbb {R}^d , z_0 \in \mathbb {R}^m $ and $r, R > 0 $ such that $$ \inf_{ z : |z - z_0 | \le R , x : |x - x_0|\le r } \gamma (z,x) h (z) = \varepsilon > 0. $$ \end{ass} In order to introduce what we shall call {\it big jumps} of the process, we impose the following condition which implies that the measure $ \gamma (z, x) \mu ( dz) $ is sigma-finite, uniformly in $x.$ \begin{ass}\label{conditions2bis} There exists a non-decreasing sequence $(G_n)_n $ of subsets of $ \mathbb {R}^m $ and an increasing sequence of positive numbers $ \Gamma_n $ with $ \Gamma_n \uparrow +\infty $ as $n\to \infty, $ such that $ \bigcup_n G_n = \mathbb {R}^m ,$ \begin{equation}\label{eq:explosion} \int_{G_n}\gamma(z, x) \mu (d z ) =: \bar \gamma_n ( x) \le \Gamma_n < \infty \end{equation} for all $x \in \mathbb {R}^d $ and for all $n.$ \end{ass} We fix some $n.$ Thanks to \eqref{eq:explosion}, we can couple the process $\bar X_t$ with a rate $\Gamma_n-$Poisson process $ N^{[n]} = (N^{[n]}_t)_{t \geq 0}$ such that jumps of $\bar X_t$ produced by noise $z \in G_n,$ $$ \Delta^{[n]} \bar X_t : = \int_{G_n } \int_0^\infty c( z, \bar X_{t-}) 1_{ \{u \le \gamma ( z, \bar X_{t-})\} } N (dt, dz, du) ,$$ can only occur at the jump times $ T_k^{[n]} , k \geq 1 ,$ of $N^{[n]}.$ We will construct our regeneration scheme based on these {\it big jumps} $ T_k^{[n]}, k \geq 1, $ for a suitably chosen truncation level $n.$ Let \begin{equation}\label{eq:Pi} \Pi ^{[n]} ( x, dy ) = {\mathcal L} ( \bar X_{T_k^{[n]} } | \bar X_{T_k^{[n]} - } = x ) (dy ) \end{equation} be the transition kernel associated to the big jumps. Let $ x_0 \in \mathbb {R}^d $ and $ z_0 \in \mathbb {R}^m $ be as in Assumption \ref{conditions2}.
We then assume \begin{ass}\label{ass:final} $ \nabla_z c ( z_0, x_0 ) $ has full rank. \end{ass} \begin{prop} Grant Assumptions \ref{conditions2}, \ref{conditions2bis} and \ref{ass:final}. Let $ r , R > 0 $ be as in Assumption \ref{conditions2} and fix $n_0 $ such that $ \{ z \in \mathbb {R}^m : | z - z_0 | \le R \} \subset G_{n_0 }.$ Then, for any $ n \geq n_0 ,$ there exist $\eta > 0, \beta \in ]0 , 1 [ $ and a probability measure $ \nu $ on $(\mathbb {R}^d, {\mathcal B}(\mathbb {R}^d ) ) $ such that for any $ V \in {\mathcal B} ( \mathbb {R}^d ) , $ \begin{equation}\label{eq:minoration} \inf_{ x : |x - x_0| < \eta } \Pi^{[n]} ( x, V) \geq \beta \nu ( V) . \end{equation} \end{prop} \begin{proof} Write $ \mathcal{K} = \{ z\in \mathbb {R}^m : | z - z_0 | \le R \} $ with $z_0 $ and $ R$ chosen according to Assumption \ref{conditions2}. Then for all $ V \in {\mathcal B} ( \mathbb {R}^d ) $ and for all $ x , |x- x_0 | \le r , $ \begin{multline} \Pi^{[n]}( x, V) \geq \frac{1}{\Gamma_n } \int_{G_{n}\cap {\mathcal K}} \gamma ( z, x ) \mathds{1}_V ( x + c ( z, x ) )\mu (d z) \\ \geq \frac{1}{\Gamma_n} \int_{G_{n}\cap {\mathcal K}} \gamma ( z, x ) \mathds{1}_V ( x + c ( z, x ) )h(z) d z \geq \frac{\varepsilon}{\Gamma_n} \int_{G_{n}\cap {\mathcal K}} \mathds{1}_V ( x + c ( z, x ) ) d z, \end{multline} where $h$ is the Lebesgue density of the absolutely continuous part of $\mu .$ Since $ \nabla_z c ( z_0, x_0) $ is of full rank, standard arguments imply that the mapping $ z \mapsto x + c(z, x) $ is invertible locally around $z_0, $ uniformly in $ x $ belonging to a small ball around $x_0,$ and the assertion follows e.g.\ from Lemma 6.3 of \cite{micheletal}. 
\end{proof} \subsection{Total variation coupling} We now explain how the lower bound \eqref{eq:minoration} allows us to couple two trajectories $\bar X_t^x $ and $ \bar X_t^y , $ the first issued from $x, $ the second from $y $ at time $t= 0,$ once they have both entered $ C := \{ x \in \mathbb {R}^d : |x- x_0 | < \eta \} .$ These ideas rely on the regeneration technique introduced by Athreya and Ney \cite{At78} (1978) and by Nummelin \cite{Nu78} (1978). We apply them here to the jump mechanism. Firstly, fix some $n \geq n_0 $ and write $\Pi ( x, dy ) := \Pi^{[n]} (x, dy) $ for the jump transition kernel of \eqref{eq:Pi}. For any fixed $ x, x' \in \mathbb {R}^d , $ let $ \Pi ((x, x'), dy dy') $ be the maximal coupling of $ \Pi ( x, dy ) $ and $ \Pi ( x', dy') .$ Then \eqref{eq:minoration} implies that $$ \Pi ((x, x'), dy dy') \geq \beta 1_{C \times C} (x, x') \nu (dy ) \delta_{y} (dy'). $$ This lower bound implies that once a jump $T_k^{[n]} $ occurs while the two copies of the process are inside the set $C,$ {\it with probability $ \beta, $ it is possible to choose the same ``after-jump'' position $y$ for the two of them according to the measure $ \nu .$ } We may therefore introduce a split kernel $Q ((x,x', u),dy dy' ), $ which is a transition kernel from $ \mathbb {R}^d \times \mathbb {R}^d \times [0, 1 ] $ to $\mathbb {R}^d \times \mathbb {R}^d, $ defined by \begin{equation}\label{Q} Q((x,x',u), dy dy') = \left\{ \begin{array}{ll} \nu(dy) \delta_{y } (dy') & \mbox{ if } (x,x',u) \in C \times C \times [0, \beta]\\ \frac{1}{1 - \beta} \left( \Pi ((x,x') , dy dy' ) - \beta \nu(dy) \delta_{y} (dy') \right) & \mbox{ if } (x,x',u) \in C \times C \times ] \beta , 1] \\ \Pi ((x, x') ,dy dy') & \mbox{ if } (x,x') \notin C \times C . \end{array} \right. 
\end{equation} Notice that $$ \int_0^1 Q((x,x', u), dy dy' ) du = \Pi (( x,x'), dy dy' ) ;$$ in this sense $Q((x,x', u), dy dy' )$ can be considered as `splitting' the original kernel $\Pi (( x,x'), dy dy' )$ by means of the additional `color' $u.$ We now show how to construct a coupled version of the processes $\bar X^x$ and $\bar X^y$ recursively over time intervals $[T_k^{[n]} , T^{[n]}_{k+1} [ , k \geq 0 .$ To this end, introduce the process $Z^x_t$ defined by \begin{equation}\label{eq:z} Z^x_t = x + \int_0^t g ( Z^x_s ) d s +\sum_{l=1}^m \int_0^t \sigma_l ( Z^x_s) d W^l _s + \int_0^t \int_{ G_n^c} \int_0^ { \infty } c( z, Z^x_{s-} ) 1_{ u \le \gamma ( z, Z^x_{s- }) } N (d s, d z, d u ) . \end{equation} The coupling construction works as follows. \begin{enumerate} \item We use the same realization of jump times $ T_k^{[n]}, k \geq 0, $ for $\bar X^x$ and $\bar X^y .$ \item We start at time $ t= 0 $ with $\bar X^x_0 = x, \bar X^y_0= y.$ \item Take two independent realizations of $Z^x $ and of $Z^y $ and put \begin{equation} \bar X_t^x := Z_t^x , \; \bar X_t^y := Z_t^y \mbox{ for all } 0 \le t < T_1^{[n]} . \end{equation} Notice that $ T_1^{[n]} $ is independent of the right hand side of \eqref{eq:z} and exponentially distributed with parameter $\Gamma_n .$ We put $$ \bar X^x_{T_1^{[n]} - } := Z^x _{T_1^{[n]} - } , \; \bar X^y_{T_1^{[n]} - } := Z^y _{T_1^{[n]} - } .$$ On $\bar X^x_{T_1^{[n]} - } = x' $ and $\bar X^y_{T_1^{[n]} - } = y' $ we do the following. \item We choose a random variable $ U_1 $ uniformly distributed on $ [0, 1 ], $ independently of anything else. \item On $ U_1 = u , $ we choose a random variable $ V_1 \sim Q ( (x',y', u) , \cdot ) $ and we put \begin{equation}\label{eq:vk} (\bar X^x_{T_1^{[n]} } , \bar X^y _{T_1^{[n]} }) := V_1 . 
\end{equation} We then restart the above procedure at item (2) with the new starting point $V_1 $ instead of $(x,y) .$ \end{enumerate} Write $( \mathbf{X}^x_t , \mathbf{X}^y _t) $ for the $2 d +1-$dimensional process with additional color $ U_k , $ defined by $$ ( \mathbf{X}^x_t , \mathbf{X}^y _t) = \sum_{ k \geq 0 } 1_{ [T_k^{[n]} , T_{k+1}^{[n]} [} (t) ( \bar X^x_t,\bar X^y_t, U_k ) ,$$ keeping track of the additional color $U_k.$ In the above formula, we put $ U_0 := 1 $ (during the interval $ [0, T_1^{[n]}[, $ no coupling attempt is made). Let \begin{equation}\label{eq:tauc} \tau_c := \inf\{ T_k^{[n]} , k \geq 1 : (\bar X^x_{T_k^{[n]} - }, \bar X^y_{T_k^{[n]} - }) \in C \times C , U_k \le \beta \} , \end{equation} which is the coupling time of the process. It is clear that by the structure of the splitting kernel $Q ( (x',y', u) , \cdot ),$ if $ \tau_c < \infty , $ then $$ \bar X^x_{\tau_c } = \bar X^y_{\tau_c } \sim \nu .$$ Once the two trajectories have met at time $ \tau_c,$ by the Markov property, we may merge them into a single one, and there is no need to continue the above construction, that is, we apply the construction (1) -- (5) described above only up to the time $ \tau_c.$ It is clear that in this way the speed of convergence to equilibrium of the process is determined by the moments of the coupling time $\tau_c.$ In particular, in order to prove that $ \tau_c < \infty $ almost surely, we have to ensure that joint visits of the set $C $ by $\bar X_t^x $ and $ \bar X_t^y $ do indeed happen. This is granted by a Lyapunov condition plus a control argument that will be developed in the next subsections. These arguments will not only imply the finiteness of the coupling time, but also a control of its moments. 
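The construction (1) -- (5) can be illustrated by a toy discrete-time analogue, not part of the setting of this paper: a three-state Markov chain whose transition matrix satisfies a global minorization $P(x, \cdot) \geq \beta \nu(\cdot)$, coupled through the corresponding split kernel. At each step, with probability $\beta$ both copies regenerate from $\nu$ and merge; otherwise both move independently according to the residual kernel. All numerical values below are illustrative.

```python
import random

# Toy 3-state chain: every row of P dominates beta * nu componentwise,
# a global Doeblin minorization analogous to the lower bound above,
# with C = the whole state space.
P = [[0.5, 0.3, 0.2],
     [0.2, 0.5, 0.3],
     [0.3, 0.2, 0.5]]
nu = [1 / 3, 1 / 3, 1 / 3]
beta = 0.6  # every entry of P is >= 0.6 * (1/3) = 0.2

def sample(weights, rng):
    u, acc = rng.random(), 0.0
    for j, w in enumerate(weights):
        acc += w
        if u <= acc:
            return j
    return len(weights) - 1

def residual(x):
    # R(x, .) = (P(x, .) - beta * nu) / (1 - beta), again a probability vector
    return [(P[x][j] - beta * nu[j]) / (1 - beta) for j in range(3)]

def coupling_time(x, y, rng):
    """Run the split-kernel coupling until the two copies merge."""
    t = 0
    while True:
        t += 1
        if rng.random() <= beta:          # regeneration: common draw from nu
            return t
        x = sample(residual(x), rng)      # otherwise independent residual moves
        y = sample(residual(y), rng)

rng = random.Random(0)
times = [coupling_time(0, 2, rng) for _ in range(5000)]
mean_tau = sum(times) / len(times)
print(round(mean_tau, 2))  # tau is geometric(beta), so E[tau] = 1/beta
```

The empirical mean of the coupling time is close to $1/\beta \approx 1.67$, the mean of the geometric distribution governing the regeneration attempts; in the continuous setting the same role is played by the big jump times at which both copies sit in $C$.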
\subsection{Lyapunov function} We introduce the operator $$L f(x) = \sum_{i=1}^d \frac{\partial f}{\partial x_i } (x)g^i ( x) + \frac{1}{2} \sum_{i, j = 1}^d \frac{\partial^2 f}{\partial x_i \partial x_j } (x) a^{i, j } ( x) + \int_{\mathbb {R}^m } [ f ( x + c ( z, x)) - f(x)] \gamma ( z, x) \mu (dz ), $$ where for $ 1 \le i, j \le d, $ $ a^{i, j } (x) = \sum_{l=1}^m\sigma_l^i \sigma_l^j (x) ,$ for sufficiently regular test functions $f.$ We impose \begin{ass}\label{ass:drift} There exists a continuous function $V : \mathbb {R}^d \to [1, \infty [ $ which belongs to the domain $ {\mathcal D} ({L}) $ of the extended generator ${L}$ of the process $\bar X,$ and constants $ b, c > 0 $ such that for any $x \in \mathbb {R}^d ,$ \begin{equation}\label{eq:driftcond0} {L} V (x) \le - b V(x) + c \mathds{1}_K (x) , \end{equation} where $K \subset \mathbb {R}^d $ is a compact set. \end{ass} \begin{ex} Suppose that there exists a compact set $ K \subset \mathbb {R}^d $ such that \begin{equation} \mathrm{Tr} ( a (x)) + 2 \langle g ( x) , x \rangle + 2 \int_{\mathbb {R}^m } \langle x + c(z,x), c (z, x) \rangle \gamma (z, x ) \mu (dz ) \le - c |x| ^2 , \end{equation} for all $x \in K^c .$ Then \eqref{eq:driftcond0} holds for $ V(x) = |x|^2 . $ We refer to Section 4 of \cite{Victor} for a detailed discussion of other conditions implying \eqref{eq:driftcond0}. \end{ex} \begin{rem} Theorem 4.1 of Douc et al. \cite{DFG09} (2009), applied to $ \Phi ( x) = b x $ and $ \delta = 0, $ shows that \eqref{eq:driftcond0} implies $$ E_x (e^{ b \tau_K} ) \le V(x) , \mbox{ for } \tau_K = \inf \{ t \geq 0 : \bar X_t \in K\} . 
$$ \end{rem} \begin{cor} Under Assumption \ref{ass:drift}, for any coupling of $\bar X_t^x $ and $ \bar X_t^y $ and for $ \tau_{K \times K} := \inf \{ t \geq 0 : (\bar X_t^x , \bar X_t^y ) \in K \times K \}, $ there exists a constant $C > 0 $ such that $$ E_{x,y} ( e^{b \tau_{K \times K}}) \le V(x) + V(y) + C.$$ \end{cor} \begin{proof} It suffices to define the $2d-$dimensional Lyapunov function $ \bar V (x, y ) := V(x) + V(y) $ and to check that \eqref{eq:driftcond0} holds for $ \bar L ,$ where $ \bar L$ denotes the generator of the process $ (\bar X_t^x, \bar X_t^y ).$ \end{proof} As a consequence, under Assumption \ref{ass:drift}, two copies of the process visit the compact set $K$ {\bf at the same time} infinitely often, almost surely. \subsection{Control} Once the two copies of the process have entered the compact set $K, $ we have to steer them to the target set $C $ appearing in the Doeblin lower bound \eqref{eq:minoration}. This is related to the control properties of the process $ \bar X.$ Since the process $\bar X$ is of infinite jump activity, we start by approximating it by a finite activity process in the following way. For any subset $ G \subset \mathbb {R}^m $ with $\mu ( G) < \infty ,$ we define the process $\bar X^{G } $ by \begin{multline} \bar X^{G }_t = x + \sum_{l=1}^m \int_0^t \sigma_l ( \bar X^{G }_s ) d W_s^l + \int_0^t g ( \bar X^{G }_s) ds \\ + \int_{[0, t ]} \int_G \int_{\mathbb {R}_+} c(z, \bar X^{G }_{s-}) 1_{ \{ u \le \gamma (z, \bar X^{G }_{s-})\}} N (ds, dz, du ) . \end{multline} Then the following holds. \begin{prop}[Lemma 6 of \cite{ballyetc}]\label{prop:control} Grant Assumption \ref{conditionsbis}. 
There exists a constant $C> 0 $ such that for any $x \in \mathbb {R}^d $ and $ T > 0, $ \begin{equation}\label{eq:control1} P_x ( \sup_{t \le T} | \bar X^{G }_t - \bar X_t | \geq \varrho ) \le \frac{T e}{\varrho} \exp \left( C T \left[ \sum_l \| \nabla \sigma_l \|_\infty + \| \nabla g \|_\infty + C_{\mu } (\gamma, c )\right]^2 \right) \alpha ( G^c ) , \end{equation} where $ C_{\mu } (\gamma, c ) $ is defined in \eqref{eq:cmu} and where $ \alpha(G^c) : =\sup_{ x \in \mathbb {R}^d } \int_{G^c} | c (z, x) | \gamma ( z, x) \mu (dz) < \infty $ by Assumption \ref{conditionsbis}. \end{prop} \begin{cor}\label{cor:412} Under the conditions of Proposition \ref{prop:control}, for any fixed time horizon $ T$ and any $ \varrho > 0 $ there exists $ G_T \subset \mathbb {R}^m $ such that for all $ G $ with $G_T \subset G,$ \begin{equation}\label{eq:control2} \inf_{ x \in \mathbb {R}^d } P_x ( \sup_{t \le T} | \bar X^{G }_t - \bar X_t | \le \varrho ) > 0 . \end{equation} \end{cor} In the following, we shall choose $ \varrho := \eta/ 4$ (recall \eqref{eq:minoration}) and $T := 1 .$ Fix $ G$ such that \eqref{eq:control2} holds with these parameters. In the next step, we give conditions ensuring that $ \inf_{x \in K } P_x ( |\bar X^{G }_1 - x_0 | \le \eta/4 ) > 0 .$ To do so, let us introduce the following objects. Write $\,\tt H\,$ for the Cameron-Martin space of measurable functions ${h}:[0,1]\to \mathbb {R}^k $ having absolutely continuous components ${ h}^\ell(t) = \int_0^t \dot h^\ell(s) ds$ with $\int_0^{1}[{\dot h}^\ell]^2(s) ds < \infty$, $1\le \ell\le k$. For $x\in \mathbb {R}^d $ and ${ h}\in{\tt H}$, consider the deterministic system $ \varphi^{ (h, s,x ) } $ solution of \begin{equation}\label{eq:generalcontrolsystem} \varphi^{ (h, s,x ) } (t) = x + \int_s^t g ( \varphi^{ (h, s,x ) } (u) ) du + \sum_{\ell=1}^k \int_s^t \sigma_\ell ( \varphi^{ (h, s,x ) }(u) ) \dot h^\ell (u) du. 
\end{equation} If $ s = 0, $ we write for short $ \varphi^{(h,x)} $ instead of $ \varphi^{(h,0, x) }.$ We will impose either the following strong controllability assumption or the weaker Assumption \ref{ass:weakcontrol} below. \begin{ass}\label{ass:strongcontrol} For all $ x \in K, $ there exists $ h \in {\tt H} $ such that $$ \varphi^{(h, x)} (1 ) = x_0 .$$ \end{ass} Assumption \ref{ass:strongcontrol} is satisfied e.g.\ if the matrix $a$ defined through $ \sum_{l=1}^k \sigma_l^i \sigma_l^j = a^{i j } $ for all $ 1 \le i , j \le d,$ is positive definite on $ K $ (here we suppose w.l.o.g. that $ x_0 \in \mathrm{int}\, K $). If Assumption \ref{ass:strongcontrol} does not happen to be satisfied, we may introduce a weaker condition taking into account the jumps of the process $ \bar X^{G }.$ To this end, fix some $n \geq 1 , $ a sequence $ 0 < t_1 < \ldots < t_n < 1 $ as well as a sequence $ (z_1, \ldots , z_n ) $ of elements $ z_k \in G .$ Write for short $ \mathbf{z} = (z_1, \ldots , z_n ) , \mathbf{t} = (t_1, \ldots, t_n ) .$ Consider finally a sequence $ \mathbf{h} := (h_1, \ldots , h_n ) $ of elements of ${\tt H}$ and introduce the skeleton process $ x_t = x_t ( x, \mathbf{t}, \mathbf{z}, \mathbf{h} ) $ which is defined on $ [0, 1 ]$ as follows. $$ x_t = x_t ( x, \mathbf{t}, \mathbf{z}, \mathbf{h} ) = \left\{ \begin{array}{ll} \varphi^{(h_1, x)} ( t) &0 \le t < t_1 \\ x_{t_k-} + c( z_k, x_{t_k-} ) & t = t_k , 1 \le k \le n , \\ \varphi^{(h_k, x_{t_k} )} (t- t_k ) & t_k \le t < t_{k+1 }\wedge 1 , 1 \le k \le n \end{array} \right. ,$$ where we put $t_{n+1} := \infty $ for convenience. 
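The skeleton process can be made concrete by a small numerical sketch. The one-dimensional coefficients $g(x) = -x$, $\sigma \equiv 1$, $c(z,x) = z$ below are purely illustrative (not those of the paper); the controlled flow is integrated by an Euler scheme, and jumps are applied at the prescribed times. With the constant control $\dot h = -5/(e-1)$ the skeleton steers $x = 5$ exactly to $x_0 = 0$ at time $1$.

```python
import math

# Illustrative 1-d coefficients (stand-ins, not taken from the paper).
def g(x): return -x
def sigma(x): return 1.0
def c(z, x): return z

def flow(x, hdot, t0, t1, dt=1e-4):
    """Euler scheme for the controlled ODE  phi' = g(phi) + sigma(phi) * hdot."""
    t = t0
    while t < t1 - 1e-12:
        step = min(dt, t1 - t)
        x += step * (g(x) + sigma(x) * hdot)
        t += step
    return x

def skeleton(x, times, zs, hdots):
    """Terminal value of the skeleton on [0, 1]: flow between the jump
    times in `times`, apply x -> x + c(z, x) at each of them."""
    knots = [0.0] + list(times) + [1.0]
    for k in range(len(knots) - 1):
        x = flow(x, hdots[k], knots[k], knots[k + 1])
        if k < len(times):            # knots[k + 1] is the jump time t_{k+1}
            x = x + c(zs[k], x)
    return x

# Pure control, no jumps: the constant control u solves x(1) = 0 for x' = -x + u.
u = -5.0 / (math.e - 1.0)
x1 = skeleton(5.0, [], [], [u])

# One jump of size -2 at t = 0.5, no control: 5 e^{-1} corrected by the jump.
x2 = skeleton(5.0, [0.5], [-2.0], [0.0, 0.0])
print(round(x1, 4), round(x2, 4))
```

In the uncontrolled case the jump does part of the steering, exactly as in Assumption \ref{ass:weakcontrol}, where jumps with $z \in \mathrm{supp}(\mu) \cap G$ replace (or complement) the Cameron-Martin controls.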
Finally we put $x_1 = x_1 ( x, \mathbf{t}, \mathbf{z}, \mathbf{h} ) = \varphi^{(h_n, x_{t_n} )} (1- t_n ).$ Then we suppose \begin{ass}\label{ass:weakcontrol} The process $ \bar X^{G }$ has a minimal jump rate, i.e., $$ \gamma (z, x) > 0 \mbox{ for all $x$ and for all $ z \in G$},$$ and for all $ x \in K, $ there exist $n \in \mathbb {N}$ and sequences $ \mathbf{t}, \mathbf{z} , \mathbf{h} $ such that $ z_1, \ldots, z_n \in \mathrm{supp} ( \mu) \cap G $ and $$ |x_1 ( x, \mathbf{t}, \mathbf{z}, \mathbf{h} ) - x_0 | \le \eta /4 .$$ \end{ass} \begin{theo}\label{theo:controlXbar} Suppose that Assumption \ref{conditionsbis} holds. Grant either Assumption \ref{ass:strongcontrol} or \ref{ass:weakcontrol}. Then $$ \inf_{ x \in K } P_x ( | \bar X^{G }_1 - x_0 |\le \eta/4 ) > 0 .$$ \end{theo} \begin{proof} Recall that $ \sup_{z, x } \gamma ( z, x) \le \Gamma < \infty$ by Item (4) of Assumption \ref{conditionsbis}. Therefore, the jumps of $\bar X^{G } $ occur at most at the jump times of a rate $\mu ( G) \Gamma -$Poisson process that we shall call $ J.$ We work conditionally on $ J_1 = n $ and on the choice of jump times $ T_1 = t_1 < T_2 = t_2 < \ldots < T_n =t_n < 1.$ On $\{ J_1 = 0 \}, $ $ \bar X^{G } $ does not jump during $ [0, 1 ] $ and thus \begin{equation}\label{eq:nojump} \bar X^{G }_t = Y_t , \mbox{ for all } t \le 1, \end{equation} where \begin{equation}\label{eq:SDEsanssauts} Y_t = x + \sum_l \int_0^t \sigma_l (Y_s) d W^l_s + \int_0^t g (Y_s) ds =: \Phi_t ( x) . \end{equation} Here, $ \Phi_t( x) $ denotes the stochastic flow associated to the above stochastic differential equation. Notice that under our assumptions, $ x \mapsto \Phi_t( x) $ is continuous, see e.g.\ \cite{kunita}. 
Therefore, under Assumption \ref{ass:strongcontrol}, we may conclude as follows: $$ P_x ( | \bar X^{G }_1 - x_0 |\le \eta/4 ) \geq P_x ( | Y_1 - x_0 |\le \eta/4 ; J_1 = 0 ) = P_x ( | Y_1 - x_0 |\le \eta/4 ) \cdot P ( J_1 = 0 ) > 0 ,$$ due to the support theorem for diffusions, which implies the assertion since $ x \mapsto \Phi_1 (x) = Y_1 $ is continuous and since $ K$ is compact. Suppose now that Assumption \ref{ass:weakcontrol} holds. We work conditionally on $ J_1 = n $ and on $ T_1 = t_1 < \ldots < T_n = t_n < 1 $ such that $ n , \mathbf{t} $ satisfy Assumption \ref{ass:weakcontrol}. Our goal is to construct a version of $ \bar X^{G }_1, $ conditionally on these choices, which is continuous in the starting point $x.$ This construction relies on the so-called ``real-shocks''-representation of $ \bar X^{G }_t $ which we define now (cf.\ also Section 2.2.3 in \cite{ballyetc}). During this construction, we will successively choose random variables $ Z_1, \ldots, Z_n $ and define a process $ x_t ( x, Z_1^n ) ,$ depending on these choices, where $Z_1^n = (Z_1, \ldots , Z_n ) ,$ for $ 0 \le t \le 1.$ This process is defined recursively as follows. Firstly, we put $$ x_t = \Phi_t ( x ) , \mbox{ for all } 0 \le t < t_1 .$$ Then, conditionally on $x_{t_1 -} = y_1, $ we choose a random variable $ Z_1 $ with law $q_G ( z, y_1 ) \mu^* ( dz) , $ where for some fixed $ z^* \in G^c ,$ $ \mu^* (dz ) = \mu ( dz ) + \delta_{z^* } (dz ) $ and $$ q_G ( z,y) = \Theta_G ( y) 1_{ z^*} ( z) + \frac{1}{\mu (G) \Gamma } 1_G ( z) \gamma ( z, y) ,$$ with $$ \Theta_G ( y) = 1 - \frac{1}{\mu ( G) \Gamma } \int_G \gamma ( z, y ) \mu (dz) .$$ Then we put $$x_{t_1 } := x_{t_1 -} + c( Z_1, x_{t_1 -} ) 1_G ( Z_1), x_t = \Phi_{t - t_1} ( x_{t_1} ) , \mbox{ for all } t_1 \le t < t_2 , $$ and we proceed iteratively by choosing, conditionally on $x_{t_2 -} = y_2, $ a random variable $$ Z_2 \sim q_G ( z, y_2) \mu^* (dz) ,$$ and so on. 
Finally, we obtain a terminal value $x_1 = \Phi_{1 - t_n } ( x_{t_n} ) .$ It is easy to check that $$ {\mathcal L} ( x_1 ( x, Z_1^n )) = {\mathcal L} ( \bar X^{G }_1 | J_1 = n , T_1 = t_1, \ldots, T_n = t_n) .$$ Due to the support theorem for diffusions and by continuity of $c (z, x),$ we clearly have that $x_1 ( x, \mathbf{t}, \mathbf{z}, \mathbf{h} ) \in \mathrm{supp} ({\mathcal L} ( x_1 ( x, Z_1^n ))) .$ Therefore, Assumption \ref{ass:weakcontrol} implies that for all $x \in K, $ $$ P ( | x_1 ( x, Z_1^n ) - x_0| \le \eta/4 ) > 0 .$$ The important point is now that the above construction ensures the continuity of $$ x \mapsto x_1 ( x, Z_1^n ).$$ Thus, by continuity in $x$ and compactness of $ K,$ $$ \inf_{x \in K } P ( | x_1 ( x, Z_1^n ) - x_0| \le \eta/4 ) > 0 $$ implying the assertion. \end{proof} We may now conclude with the main result of this section. Introduce $$ \tau_1^{[n]} = \inf\{ T_k^{[n]} , k \geq 1 : (\bar X^x_{T_k^{[n]} - }, \bar X^y_{T_k^{[n]} - }) \in C \times C \} .$$ \begin{prop}\label{prop:goodcontrol} Grant Assumptions \ref{conditionsbis}, \ref{conditions2bis}, \ref{ass:drift} and \ref{ass:strongcontrol} or \ref{ass:weakcontrol}. Then there exists $n_0 $ such that for all $ n \geq n_0, $ for all $ p \geq 1, $ there exists a constant $ C = C( p) $ with \begin{equation}\label{eq:goodcontrol} E_{x,y} ( (\tau_1^{[n]} )^p ) \le C(p ) [V (x) + V(y ) ] . \end{equation} \end{prop} The proof of Proposition \ref{prop:goodcontrol} is given in the Appendix. \subsection{Speed of convergence to equilibrium} Let us summarize all assumptions needed so far. \begin{ass}\label{ass:Final!} \begin{enumerate} \item We impose Assumption \ref{conditionsbis}, implying the existence of a unique strong solution of \eqref{eq:sde0}. \item We impose the non-degeneracy condition Assumption \ref{conditions2}. \item We impose Assumption \ref{conditions2bis} implying that the jump measure is sigma-finite. \item We impose the local Doeblin condition \eqref{eq:minoration}. 
\item We impose the Lyapunov type condition Assumption \ref{ass:drift}. \item We impose the controllability condition Assumption \ref{ass:strongcontrol} or \ref{ass:weakcontrol}. \end{enumerate} \end{ass} Let $ {\mathcal G} := \{ f : \mathbb {R}^d \to \mathbb {R} \mbox{ measurable} : \|f\|_\infty \le 1 \} $ and introduce for any two probability measures $\mu $ and $\nu $ on $(\mathbb {R}^d, {\mathcal B}(\mathbb {R}^d ) )$ the distance $$ d_{\mathcal G} ( \mu, \nu ) := \sup_{ f \in {\mathcal G} } | \mu (f) - \nu (f) | ,$$ which is the total variation distance between $ \mu $ and $ \nu .$ Write $P_t$ for the transition semigroup of the limit process, i.e., $P_t f (x) = E (f (\bar X_t^x )) .$ Then we have the following result. \begin{theo}\label{theo:tv} Grant Assumption \ref{ass:Final!}. Then the process $ \bar X_t$ is positive Harris recurrent with unique invariant probability measure $\pi.$ Moreover, for all $ p \geq 1, $ there exists a constant $C(p) $ such that $$ | P_t f(x) - P_t f (y )| \le [ V(x) + V(y ) ] \| f\|_\infty \frac{C(p)}{t^p } .$$ Finally, if $(X_t)_{t \geq 0} $ is any stochastic process defined on $ (\Omega , {\mathcal A}, P)$ satisfying $ \sup_{s \geq 0} E ( V ( X_s ) ) < \infty $ and if we denote $ \mu_s = {\mathcal L} ( X_s ) $ its law at time $s,$ then for all $ p \geq 1, $ $$ d_{\mathcal G} ( \mu_s P_t , \pi ) \le C (p ) t^{ - p }, $$ for all $ s , t \geq 0 ,$ and for a suitable constant $ C (p) $ depending on $p.$ \end{theo} \begin{proof} The Harris recurrence of $ \bar X_t$ follows from Theorem 2.12 of \cite{Victor}. To prove the second assertion, the main point of the proof is the fact that Proposition \ref{prop:goodcontrol} implies that $$ E_{x, y }( \tau_c^p) \le C(p ) [ V (x) + V(y) ] .$$ Indeed, this is a direct consequence of \eqref{eq:goodcontrol} together with the definition of the coupling time $\tau_c$ in \eqref{eq:tauc}. 
Then $$ | P_t f(x) - P_t f (y )| \le 2 \|f\|_\infty P_{x, y } (\tau_c > t) \le 2 \|f\|_\infty \frac{ E_{x, y } (\tau_c^p) }{t^p } $$ allows us to obtain the second assertion. Moreover, notice that by Theorem 4.3 of \cite{Me93}, Assumption \ref{ass:drift} implies in particular that, once we have proven the unique ergodicity of $ \bar X_t$ with invariant probability measure $\pi, $ we necessarily have that $ \pi ( V) < \infty .$ Therefore we obtain, integrating the first assertion against $ \mu_s ( dx) $ and $ \pi ( dy ) , $ that $$ d_{\mathcal G} ( \mu_s P_t , \pi ) \le [ \pi ( V) + E ( V ( X_s ) ) ] \frac{C(p)}{t^p } ,$$ implying the assertion, since by assumption, $ \sup_s E ( V ( X_s ) ) < \infty . $ \end{proof} We are now ready to study the longtime behavior of a time inhomogeneous Markov process having jumps with position dependent jump rate and infinite jump activity. \section{Longtime behavior of time inhomogeneous PDMP's}\label{sec:2} We now turn to the main goal of this paper, the study of the longtime behavior of solutions of \eqref{eq:sde}. 
In order to grant existence and uniqueness of the solution of \eqref{eq:sde}, we impose the following conditions on the coefficients $b, c $ and $ \gamma .$ \begin{ass}\label{conditions} \begin{enumerate} \item $b(t, x ) $ is globally Lipschitz continuous in $x, $ uniformly in time, that is, there exists a constant $L_b$ such that $ \sup_{t \geq 0 } | b ( t, x) - b (t, y ) | \le L_b | x - y | ,$ for all $ x, y \in \mathbb {R}^d .$ \item $c$ and $ \gamma$ are Lipschitz continuous with respect to $x,$ uniformly in time, i.e.\ for all $ t > 0, $ $$ |c (t, z, x ) - c (t, z, x') | \le L_c( z ) | x - x ' | \mbox{ and } |\gamma (t, z, x ) - \gamma (t, z, x') | \le L_{\gamma}( z ) | x - x ' |,$$ where $ L_c, L_{\gamma} $ are measurable functions from $ {\mathbb {R}^m} \to \mathbb {R}_+ .$ \item For all $ T > 0, $ $ \sup_{x \in \mathbb {R}^d } \sup_{0 \le t \le T } \int_{{\mathbb {R}^m} }( L_{c} ( z) \gamma (t, z,x) + L_{\gamma} ( z) |c (t, z,x) |) \mu (d z) < \infty .$ \item For all $ T > 0 ,$ we have that $\sup_{ 0 \le t \le T } \sup_x \int_{{\mathbb {R}^m} } \gamma (t, z,x) |c (t, z,x) | \mu (d z )< \infty .$ \end{enumerate} \end{ass} Under these assumptions, we may still apply Theorem 1.2.\ of \cite{carl} to guarantee that \eqref{eq:sde} admits a unique strong non-explosive adapted solution which is Markov, having c\`adl\`ag trajectories. In the following, for any $ t_0 \geq 0, x \in \mathbb {R}^d , $ we shall write $ X_{t_0, t }^x , t \geq t_0, $ for a version of the above process starting from the position $ x $ at time $t_0.$ Our aim is to show that, as $t \to \infty , $ under suitable conditions, the time-inhomogeneous process $X_{t_0, t}^x$ of \eqref{eq:sde} converges to the time homogeneous limit process solving equation \eqref{eq:sde0} of Section \ref{sec:1}. In order to identify the limit process, we have to distinguish the three possible jump regimes that we have discussed in the introduction, the slow, intermediate and fast regime. 
\begin{ass}\label{ass:1} For all $t \geq 0, $ there exists a measurable partition $ (E^l_t, l=1, 2, 3) $ of $ \mathbb {R}^m $ such that $ E_t^i \cap E_t^j = \emptyset $ for all $ i \neq j $ and $ E_t^1 \cup E_t^2 \cup E_t^3 = \mathbb {R}^m,$ with the following properties. 1. (Fast regime) For all $ x \in \mathbb {R}^d ,$ \begin{equation}\label{eq:et1} \int_{E_t^1 } c(t,z, x) \gamma (t, z, x) \mu ( dz) = 0 , \; \lim_{t \to \infty } \int_{E_t^1 } |c(t,z, x)|^3 \gamma (t, z, x) \mu ( dz) = 0 . \end{equation} Moreover, there exists a measurable function $a : \mathbb {R}^d \to \mathbb {R}^{d \times d } $ such that \begin{equation}\label{eq:at} a^{i j } (t, x) = \int_{E_t^1 } c^i (t, z, x ) c^j (t, z, x) \gamma ( t, z, x ) \mu (dz ), 1 \le i, j \le d, \end{equation} satisfies $\sup_{ t \geq t_0} | a (t, x ) - a (x) | \to 0 $ as $t_0 \to \infty .$ \\ 2. (Intermediate regime) For all $x \in \mathbb {R}^d , $ \begin{equation}\label{eq:et2} \lim_{t \to \infty } \int_{E_t^2 } |c(t,z, x)|^2 \gamma (t, z, x) \mu ( dz) = 0 . \end{equation} Moreover, there exists a measurable function $b_2 : \mathbb {R}^d \to \mathbb {R}^{d } $ such that \begin{equation}\label{eq:bt} \tilde b (t, x) = \int_{E_t^2 } c (t, z, x ) \gamma ( t, z, x ) \mu (dz ) \end{equation} satisfies $\sup_{ t \geq t_0} | \tilde b (t, x ) - b_2 (x) | \to 0 $ as $t_0 \to \infty .$ \\ 3. (Slow regime) There exist measurable functions $\gamma (z, x ) \geq 0 $ and $ c(z, x) $ such that \begin{equation}\label{eq:ct} \sup_{ t \geq t_0} \int_{E_t^3} \Big( | \gamma (t, z, x ) - \gamma ( z, x) | + \gamma ( z, x ) | c ( t, z, x ) - c ( z, x) | \Big) \mu (dz ) \to 0 \end{equation} as $t_0 \to \infty , $ for all $ x \in \mathbb {R}^d .$ \\ 4. There exists a measurable function $b_1 : \mathbb {R}^d \to \mathbb {R}^{d } $ such that $ \sup_{ t \geq t_0} | b (t, x ) - b_1 (x) | \to 0 $ as $t_0 \to \infty .$ \\ 5. 
Introducing \begin{multline} \varepsilon ( x, t_0) = \sup_{ t \geq t_0 } \int_{E_t^1} | c (t, z, x) |^3 \gamma (t, z, x) \mu (dz) + \sup_{t \geq t_0} \int_{E_t^2} | c (t, z, x ) |^2 \gamma ( t, z, x ) \mu (dz ) \\ + \sup_{ t \geq t_0} [ | a (t, x ) - a (x) | +|b ( t, x ) + \tilde b ( t, x) - g(x)| ] \\ + \sup_{ t \geq t_0} \int_{E_t^3} \Big( | \gamma (t, z, x ) - \gamma ( z, x) | + \gamma ( z, x ) | c ( t, z, x ) - c ( z, x) | \Big) \mu (dz ) , \end{multline} there exist $C, r > 0 $ such that $ \varepsilon (x,t) \le C [1+|x|] e^{-rt} .$\\ 6. Let $\sigma_l , 1 \le l \le k, $ be such that $ a^{i j } (x) = \sum_{l=1}^k\sigma_l^i \sigma_l^j (x) $ for all $ 1 \le i, j \le d .$ Put $ g ( x) = b_1 ( x) + b_2 (x) .$ Then $\sigma_l, g , c $ and $\gamma $ are such that Assumption \ref{ass:Final!} holds. \end{ass} With $L_t $ the generator of \eqref{eq:sde} as in \eqref{eq:gen}, it is immediate that the following result holds. \begin{prop} Suppose that the coefficients of the stochastic differential equation \eqref{eq:sde} satisfy Assumption \ref{conditions}. Grant moreover Assumption \ref{ass:1}. Then there exists a constant $C > 0 $ such that for any function $f \in C_b^3 ( \mathbb {R}^d ) $ $$ | Lf (x) - L_t f (x) | \le C e^{-rt } [1+|x|] \|f\|_{3, \infty } .$$ \end{prop} Therefore, the infinitesimal generator of \eqref{eq:sde} converges to that of the limit process, if Assumption \ref{ass:1} holds. If the limit semigroup $P_t f(x) = E ( f ( \bar X_t^x )) $ satisfies suitable regularity conditions, this implies the convergence of the associated semigroups (see e.g.\ \cite{ballyetc}, \cite{bouguetetal}). This regularity of $P_t f (x)$ is actually delicate to show due to the presence of the position dependent jump rate $\gamma ( z, x ) .$ We refer to \cite{ballyetc} for a thorough study on which we rely in the sequel. 
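Item 1 of Assumption \ref{ass:1} can be checked by hand on a toy one-dimensional family (illustrative coefficients, not taken from the paper): jump noise $z \in \{-1, +1\}$ under the counting measure $\mu$, amplitude $c(t,z,x) = z/\sqrt{t}$ and rate $\gamma(t,z,x) = t\, a(x)/2$ with $a(x) = 1 + x^2$. The jumps are centered, $\int c^2 \gamma \, d\mu = a(x)$ exactly, and $\int |c|^3 \gamma \, d\mu = a(x)/\sqrt{t} \to 0$, so this family falls into the fast regime with limit diffusion coefficient $a$. A short numerical check:

```python
import math

# Toy fast-regime family (d = m = 1): z in {-1, +1} under a counting
# measure mu; the coefficients are illustrative stand-ins.
def a(x): return 1.0 + x * x
def c(t, z, x): return z / math.sqrt(t)
def gamma(t, z, x): return t * a(x) / 2.0

def jump_moment(p, t, x):
    """int c^p gamma dmu, i.e. the sum over z = -1, +1."""
    return sum((c(t, z, x) ** p) * gamma(t, z, x) for z in (-1.0, 1.0))

def abs_jump_moment(p, t, x):
    return sum((abs(c(t, z, x)) ** p) * gamma(t, z, x) for z in (-1.0, 1.0))

x = 0.7
for t in (1e2, 1e4, 1e6):
    m1 = jump_moment(1, t, x)        # centering:      0 for every t
    m2 = jump_moment(2, t, x)        # diffusion part: equals a(x)
    m3 = abs_jump_moment(3, t, x)    # third moment:   a(x)/sqrt(t) -> 0
    print(t, m1, round(m2, 6), round(m3, 6))
```

The second moment reproduces $a(x)$ for every $t$ (so $\sup_{t \geq t_0} |a(t,x) - a(x)| = 0$ trivially), while the third absolute moment decays like $t^{-1/2}$; the exponential decay required in item 5 would need a correspondingly faster rescaling.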
\subsection{Asymptotic pseudotrajectories} To prove the convergence of the time dependent process to the limit process, we shall need assumptions both on the coefficients of the limit semigroup and on the time dependent approximating generators $L_t .$ {\bf Conditions on the limit process.} To state the conditions on the limit process in the presence of the position dependent jump rate $ \gamma ( z, x) ,$ we have to introduce the following notation. For a function $ f : \mathbb {R}^m \times \mathbb {R}^d \to \mathbb {R}$ which is $ q -$times differentiable with respect to $x, $ we write for any $ p \geq 1,$ for any time horizon $ T > 0$ and for any constant $C > 0, $ \begin{eqnarray} |f|_{p} &=& \sup_{x \in \mathbb {R}^d } \left( \int_{\mathbb {R}^m } | f (z, x) |^p \gamma ( z, x) \mu ( dz)\right)^{1/p}, \\ {[ f ]}_{ p } &=& \sup_{1 \le p' \le p } |f|_{ p'} , \\ \theta_{q,p} &=& 1 + \| \sigma \|_{2, q, \infty} + \| g \|_{2, q , \infty} + \sum_{ 2 \le |\alpha | \le q } [ \partial^\alpha_x c]_{ p},\\ \alpha_p &=& \| \nabla \sigma\|^2_\infty + \| \nabla g \|_\infty + [ \nabla_x c]^p_{ p } ,\label{eq:alphap}\\ \alpha_{q, p } ( C, T ) &=& C \theta^q_{q, p } \exp{ (C T q \sum_{ 1 \le n \le q } \frac1n \alpha_{ p \cdot q } ) } ,\\ \Gamma_{ q} ( \gamma) &=& \sup_{x \in \mathbb {R}^d } \sum_{l=1}^q \sum_{ 1 \le |\alpha | \le l } \left( \int_{\mathbb {R}^m} | \partial^\alpha_x \ln \gamma (z, x) |^{l/ | \alpha|} \gamma (z, x) \mu (dz) \right)^{q/l} . \end{eqnarray} In the following, we will apply the above notations with $ q=3, p=12$ and write \begin{equation}\label{eq:q3} Q_3(P, T ) := \alpha^{6 }_{3, 12 } ( C, T ) \times \left( 1+ \Gamma_{ 3} ( \gamma) + \sum_{ 0 \le |\beta | \le 3 } [ \partial^\beta_x \ln \gamma ]_{ 12 } \right)^3 . \end{equation} {\bf Conditions on the time inhomogeneous approximating process.} To begin with, we impose the following condition implying that the time inhomogeneous process $ X_{t_0, t }^x $ is $1-$ultimately bounded. 
\begin{ass}\label{unifbounded} There exists a constant $C > 0 $ such that for all $ t_0 > 0 , $ \begin{equation}\label{eq:ultimativelybounded} \sup_{t \geq t_0 } E ( | X_{t_0, t }^x |) \le C ( 1 + |x|) . \end{equation} \end{ass} \begin{rem} To check \eqref{eq:ultimativelybounded}, it is sufficient to impose a Lyapunov condition. Suppose e.g.\ that there exists a function $ V : \mathbb {R}^d \to \mathbb {R}_+ $ with $ E ( V ( X_{t_0, t }^x )) < \infty $ for all $ t \geq t_0 $ and with $ V( x) \geq |x| $ if $ |x| \geq K ,$ where $ K$ is some fixed constant. Assume that $V$ is a Lyapunov function in the sense that there exist $ \alpha , \beta > 0 $ such that \begin{equation}\label{eq:lyapunovlt} L_t V ( x) \le - \alpha V( x) + \beta \end{equation} for all $ t \geq 0 . $ Then \eqref{eq:ultimativelybounded} holds. Indeed, \eqref{eq:lyapunovlt} implies $ \frac{d}{dt} E ( V ( X_{t_0, t }^x )) \le - \alpha E ( V ( X_{t_0, t }^x )) + \beta ,$ whence, by Gronwall's lemma, $ E ( V ( X_{t_0, t }^x )) \le e^{- \alpha ( t - t_0)} V(x) + \beta/\alpha $ for all $ t \geq t_0 ,$ and \eqref{eq:ultimativelybounded} follows since $ V(x) \geq |x| $ outside a compact set. \end{rem} Finally, in order to control the regularity of the approximating semigroup, we introduce \begin{eqnarray*} \Phi_1 (t, z, x)&=& \left[ | \nabla_x \gamma (t, z, x) | |c ( t, z, x) |^2 + ( | \nabla_x c |^2 + |c|^2 ) \gamma (t, z, x)\right] 1_{E_t^1 } (z) ,\\ \Phi_2 (t, z, x)&=& \left[ | \nabla_x \gamma (t, z, x) | |c ( t, z, x) | + ( | \nabla_x c | |c| + | \nabla_x c | ) \gamma (t, z, x)\right] 1_{E_t^2 \cup E_t^3 } (z), \end{eqnarray*} and we suppose that \begin{equation}\label{eq:Ct0} C_{t_0} := \sup_{ t \geq t_0} \sup_{x \in \mathbb {R}^d } \sum_{i=1}^2 \int_{\mathbb {R}^m} \Phi_i ( t, z, x) \mu ( dz) < \infty \end{equation} for some $t_0 > 0.$ {\bf Asymptotic pseudotrajectories.} We state our convergence result in terms of asymptotic pseudotrajectories introduced in Bena\"im and Hirsch \cite{bh96} (1996), see also Bena\"im et al. \cite{bouguetetal} (2016). This notion provides an efficient tool to describe the long time behavior of time dependent processes. 
Consider the class of test functions $$ {\mathcal F} = \{ f : \mathbb {R}^d \to \mathbb {R} : \| f\|_{3, \infty} \le 1 \} ,$$ and introduce for any two probability measures $ \mu $ and $\nu $ on $(\mathbb {R}^d , {\mathcal B} ( \mathbb {R}^d ) ) $ the associated distance $$ d_{\mathcal F} ( \mu , \nu ) := \sup_{ f \in {\mathcal F}} | \mu ( f) - \nu ( f) | .$$ For $X_t $ a solution of \eqref{eq:sde}, starting from any arbitrary initial law $ {\mathcal L} ( X_0) , $ let $$ \mu_t := {\mathcal L} (X_t) $$ and recall that $P_t$ denotes the transition semigroup of the limit process \eqref{eq:sde0}. The following theorem is our main result. \begin{theo}\label{cor:1} Suppose that the coefficients of \eqref{eq:sde} satisfy Assumption \ref{conditions} and that $ E ( |X_0 |) < \infty .$ Suppose moreover that $ Q_3 (P, T ) < \infty $ for all $ T > 0 $ and that $C_{t_0 } < \infty $ for some $t_0 > 0 .$ Finally, grant Assumptions \ref{ass:1} and \ref{unifbounded}. Then there exist constants $C, M_1 > 0 $ such that for any $ T < \infty , $ \begin{equation}\label{eq:217} \sup_{s \le T} d_{\mathcal F} ( \mu_{t + s } , \mu_t P_s ) \le C e^{ M_1 T} \int_t^{t+T} e^{-rs} ds . \end{equation} In particular, $$ \lim_{t \to \infty } \sup_{s \le T} d_{\mathcal F} ( \mu_{t + s } , \mu_t P_s ) = 0,$$ thus $ ( \mu_t)_t $ is an asymptotic pseudotrajectory of $ (P_t)_t $ in the sense of \cite{bh96}. \end{theo} The proof of Theorem \ref{cor:1} is given in the Appendix. Since according to Theorem \ref{cor:1}, $ X_{t_0, t}^x $ is a good approximation of $ \bar X_{t_0, t }^x $ as $t_0$ tends to infinity, it is natural to study to what extent $ X_{t_0, t}^x $ approaches the invariant regime of the limit process, as time tends to infinity. Recall that $P_t$ denotes the limit semigroup and $\pi $ the associated invariant probability measure, which exists according to Theorem \ref{theo:tv}. 
Finally, recall that $ {\mathcal G} = \{ f : \mathbb {R}^d \to \mathbb {R} \mbox{ measurable such that } \| f \|_\infty \le 1 \} .$ The following theorem is an immediate consequence of Theorem \ref{theo:tv}. \begin{theo}\label{theo:2} Grant Assumption \ref{ass:Final!} and the assumptions of Theorem \ref{cor:1}. Let $V(x)$ be the Lyapunov function of Assumption \ref{ass:drift} and suppose that there exists a constant $C$ such that for all $ t_0 > 0 , $ $$ \sup_{t \geq t_0 } E ( V( X_{t_0, t }^x ) ) \le C ( 1 + V(x) ) .$$ Then for all $ p \geq 1 ,$ for a constant $C = C(p) $ depending on $p$ and on ${\mathcal L} ( X_0) ,$ we have \begin{equation}\label{eq:controlg} d_{\mathcal G} ( \mu_s P_t , \pi ) \le C t^{ - p }, \end{equation} for all $ s , t \geq 0 . $ \end{theo} As a consequence, we can then show that $ \mu_t $ converges, as $t \to \infty, $ to the invariant measure of the limit semigroup, in the following sense. \begin{cor}\label{cor:3} Grant the assumptions of Theorem \ref{theo:2}. Then for all $ p \geq 1 , $ there exists a constant $C= C(p, M_1 ) > 0, $ such that for all $ t \geq 0, $ \begin{equation} d_{ \mathcal F} ( \mu_t, \pi ) \le C t^{- p} . 
\end{equation} \end{cor} \begin{proof} Fix some $ 0 < \alpha < 1 .$ Then, since $ {\mathcal F} \subset {\mathcal G} ,$ $$ d_{ \mathcal F} ( \mu_t, \pi ) \le d_{\mathcal F} ( \mu_t , \mu_{ \alpha t } P_{t- \alpha t } ) + d_{\mathcal G} ( \mu_{ \alpha t } P_{t- \alpha t } , \pi ) .$$ Using \eqref{eq:217} with $T = ( 1- \alpha) t $ and with $ \alpha t $ instead of $ t ,$ we obtain $$d_{\mathcal F} ( \mu_t , \mu_{ \alpha t } P_{t- \alpha t } ) \le C e^{ M_1 ( 1- \alpha) t } e^{- r \alpha t }.$$ Therefore, choose $ \alpha $ sufficiently close to $1$ such that $$ r \alpha - M_1 ( 1- \alpha) > r/2 .$$ For this choice of $\alpha, $ $$ d_{\mathcal F} ( \mu_t , \mu_{ \alpha t } P_{t- \alpha t } ) \le C e^{- \frac{r}{2} t }$$ and by \eqref{eq:controlg}, $$ d_{\mathcal G} ( \mu_{ \alpha t } P_{t- \alpha t } , \pi ) \le C (p) (1 - \alpha )^{-p} t^{-p} ,$$ from which we deduce the result. \end{proof} \section{Examples}\label{sec:4} \subsection{Hawkes processes with exponential memory kernels and memory of variable length in a mean-field frame} Hawkes processes are point process models which are very important from a modeling point of view. They have regained a lot of interest in recent years, in particular in econometrics, as good models to account for contagion risk and the clustered arrival of events. They have also been shown to be very useful in neuroscience due to their capacity to reproduce both the typical time dependencies observed in the spike trains of neurons and the interaction structure of neural nets. Originally introduced by \cite{Hawkes} and \cite{ho} as a model for the occurrences of earthquakes, their key feature is the fact that any point event is able to trigger future events -- for this reason, Hawkes processes are sometimes called ``self-exciting point processes''. We start by briefly recalling the definition of a Hawkes process. 
A Hawkes process $Z$ is a counting process on the real line $\mathbb {R} .$ Its law is characterized by its stochastic intensity process $ \lambda_t ,$ which is defined through the relation $ P ( Z \mbox{ has a jump in } ]t , t + dt ] | {\mathcal F}_t ) = \lambda_t dt , $ where $ {\mathcal F}_t = \sigma ( Z ( ]u, s ] ) , \, -\infty < u < s \le t ),$ and where \begin{equation}\label{eq:intensity0} \lambda_ t = f \left( \int_{]-\infty , t [} h ( t-s) d Z_s \right) . \end{equation} Here, $f : \mathbb {R} \to \mathbb {R}_+$ is the {\it jump rate function} and $ h : \mathbb {R}_+ \to \mathbb {R} $ is the {\it memory kernel.} For simplicity, in what follows we suppose that $h$ is an exponential kernel, that is, of the form \begin{equation}\label{eq:Erlang} h(t)=ce^{- \alpha t}, t \geq 0, \end{equation} where $ c \in \mathbb {R} .$ If $ c > 0 , $ then the process is self-exciting; a negative value of $c$ implies that the process is self-inhibiting. Compared to a single Hawkes process $Z, $ systems of interacting Hawkes processes display a much richer behavior. So let us consider, inspired by \cite{SusanneEva} and \cite{aaee}, a system of $N$ Hawkes processes $Z^1, \ldots , Z^N ,$ having intensities $ \lambda_t^1, \ldots , \lambda_t^N ,$ respectively. We will suppose that the interactions between these processes are of mean-field type (see \eqref{eq:intensitymf} and \eqref{eq:int} below), with exponential memory kernel. In many situations it is reasonable to assume that the jump intensity of some of the processes, say of process $Z^1,$ is only influenced by its history since its last jump time. This is what \cite{GL} call ``memory of variable length'', reminiscent of the ``variable length Markov chains'' coined by Rissanen \cite{rissanen1983} (1983). 
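Before turning to the interacting system, note that a single Hawkes process with the exponential kernel \eqref{eq:Erlang} and a bounded rate function $f$ is easy to simulate by Ogata's thinning algorithm, since the memory $M_t = \int_{]-\infty, t[} h(t-s)\, dZ_s$ decays exponentially between jumps and is incremented by $c$ at each jump. The following is a minimal sketch; the particular rate function and constants are arbitrary illustrative choices, not taken from the text.

```python
import math
import random

def simulate_hawkes(f, c, alpha, lam_max, T, seed=0):
    """Simulate a Hawkes process with kernel h(t) = c * exp(-alpha * t)
    and intensity lambda_t = f(M_t) on [0, T] via Ogata's thinning.
    The rate function f must be bounded by lam_max."""
    rng = random.Random(seed)
    t, memory = 0.0, 0.0
    jumps = []
    while True:
        # propose the next candidate point from a dominating Poisson(lam_max)
        wait = rng.expovariate(lam_max)
        t += wait
        if t > T:
            return jumps
        memory *= math.exp(-alpha * wait)  # memory decays between candidates
        if rng.random() * lam_max <= f(memory):  # accept with prob f(M_t)/lam_max
            jumps.append(t)
            memory += c  # an accepted jump excites (c > 0) or inhibits (c < 0)
```

Starting the memory at $0$ corresponds to a process with no jumps before time $0$; conditioning on a nontrivial past would simply mean a different initial value of the memory.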
More precisely, if we put $$ L_t := \sup \{ s \le t :\Delta Z^1 (s) = 1 \} , $$ which is the last jump time of $ Z^1$ before time $t,$ then we suppose that the jump intensity of $Z^1 $ is of the form \begin{equation}\label{eq:intensitymf} \lambda_t^1 = f_1 \left( \frac{1}{{N- 1}} \sum_{j=2}^N \int_{]L_t , t [} e^{- \alpha (t- s)} d Z^j_s\right) . \end{equation} In addition, we suppose that the intensity of each of the remaining processes $Z^2, \ldots, Z^N $ is given by \begin{equation}\label{eq:int} \lambda^2_t =\ldots = \lambda^N_ t = f_2 \left( \frac{b}{\alpha} - \frac{c}{N-1} \sum_{j=2}^N \int_{]- \infty , t [} e^{- \alpha (t- s)} d Z^j_s \right) . \end{equation} Here, $f_1 $ and $f_2$ are non-decreasing, bounded Lipschitz continuous functions which are strictly bounded away from zero and have a bounded derivative. Moreover, $ \alpha , b , c > 0 $ are fixed constants. Introduce now $$ X_t^{1} := \frac{1}{{N- 1}} \sum_{j=2}^N \int_{]L_t , t ]} e^{- \alpha (t- s)} d Z^j_s, \; X_t^2 := \frac{b}{\alpha} - \frac{c}{N-1} \sum_{j=2}^N \int_{]- \infty , t ]} e^{- \alpha (t- s)} d Z^j_s;$$ both processes depend implicitly on $ N, $ the number of interacting components in the system. Therefore, we shall write $ X_t^{[N]} := X_t := (X_t^1 , X_t^2 );$ this is a two-dimensional Markov process with drift coefficient $$ b (N, x) = - \alpha x + b \left( \begin{array}{ll} 0 \\ 1 \end{array} \right) .$$ In order to recognize the different jump regimes, we have to identify the fast, the intermediate and the slow regime. It turns out that due to the mean-field frame, no diffusive regime will appear in the limit (that is, no fast regime is present in this case). The jumps of process $Z^1 $ induce big jumps of $X_t^1. $ Indeed, each time that $Z^1 $ jumps, the process $X^1 $ is reset to $0, $ which is a consequence of its variable length memory structure. All other jumps will lead to a deterministic drift term in the limit. 
Therefore, we may choose $ d= 2, m = 1 $ and $\mu (dz ) = dz $ together with $E_t^1 = \emptyset , $ $E_t^2 = ]0, 1 [ ,$ $E_t^3 = ]1, 2 [ ,$ $$ \gamma (N, z, x) = 1_{ ]0, 1 [} (z) (N-1) f_2 ( x^2) + 1_{ ]1, 2 [} (z) f_1 ( x^1) , $$ and jump amplitude functions \begin{equation}\label{eq:amplitude} c (N, z, x) = 1_{ ]0, 1 [} (z) \frac{1}{{N-1 }} \left( \begin{array}{cc} 1 \\ -c \end{array} \right)+ 1_{ ]1, 2 [} (z) \left( \begin{array}{cc} -x^1 \\ 0 \end{array} \right) . \end{equation} Instead of considering the frame of time-inhomogeneous processes, in the present situation it is reasonable to prove the convergence of $X^{[N]} $ to a limit process, as the number of interacting components, $N, $ tends to infinity. Firstly, we observe that the ``jump drift'' given by $\int_{E_t^2} \gamma (N, z, x) c (N, z, x)dz $ equals $$ \int_{E_t^2} \gamma (N, z, x) c (N, z, x)dz =f_2 (x^2) \left( \begin{array}{cc} 1 \\ -c \end{array} \right) .$$ Therefore, the associated limit process is given by $ \bar X_t = ( \bar X_t^1 , \bar X_t^2 ) , $ where $ \bar X_t^2 $ follows an autonomous deterministic equation given by $$ d \bar X_t^2 = \left( - \alpha \bar X_t^2 - c f_2 ( \bar X_t^2) + b \right) dt ,$$ and where \begin{equation}\label{eq:sdehawkeslimit} d \bar X^1_t = - \alpha \bar X_t^1 dt +f_2 ( \bar X_t^2) dt - \int_{\mathbb {R}_+} \bar X_{t-}^1 1_{ \{ u \le f_1 ( \bar X_{t-}^1 ) \}} \bar N (dt, du ) , \end{equation} with $\bar N (dt, du ) $ a PRM on $ \mathbb {R}_+ \times \mathbb {R}_+ $ having intensity measure $ dt du .$ This limit process is a true PDMP having only ``big jumps'' (those of the first coordinate) and evolving in a deterministic manner in between successive jumps. Obviously, this process can only be ergodic if the coefficients $ \alpha , c $ and $b$ are such that the autonomous deterministic dynamical system describing $ \bar X_t^2 $ possesses an equilibrium. 
Since $f_2$ is non-decreasing, a sufficient condition for this is that $$ \alpha > 0 .$$ Once the second component is at equilibrium, the first component evolves as a renewal process: the successive visits to the state $ \bar X_t^1 = 0 ,$ which occur at each jump of the process, induce an explicit regeneration scheme. We refer the reader to \cite{evafou} for the study of a (much more complicated) related situation. Suppose now that instead of being reset to $0$ after a big jump, the process $X_t^1$ is reset to some random value, that is, we replace \eqref{eq:amplitude} by \begin{equation}\label{eq:amplitudebis} c (N, z, x) = 1_{ ]0, 1 [} (z) \frac{1}{{N-1 }} \left( \begin{array}{cc} 1 \\ -c \end{array} \right)+ 1_{ ]1, 2 [} (z) \left( \begin{array}{cc} -x^1 + \varepsilon (z - 1)\\ 0 \end{array} \right) , \end{equation} for some small $\varepsilon > 0.$ This does not change the evolution of the second coordinate, but for the first one we now obtain \begin{equation}\label{eq:sdehawkeslimitbis} d \bar X^1_t = - \alpha \bar X_t^1 dt +f_2 ( \bar X_t^2) dt - \int_{[0, 1 ] \times \mathbb {R}_+} [ \bar X_{t-}^1 - \varepsilon z] 1_{ \{ u \le f_1 ( \bar X_{t-}^1 ) \}} \bar N (dt, dz, du ) , \end{equation} with $\bar N (dt, dz, du ) $ a PRM on $ \mathbb {R}_+ \times [0, 1 ] \times \mathbb {R}_+ $ having intensity measure $ dt dz du .$ With slight modifications of the tools developed in Section \ref{sec:1}, $ \bar X_t^1 $ can then easily be shown to be exponentially ergodic, and it is straightforward to deduce that $ {\mathcal L} ( X_t^{[N]} ) $ is an asymptotic pseudotrajectory of the limit semigroup $ (P_t)_t$ if we choose a joint convergence of $ (N, t ) $ to infinity such that $ N = e^{rt} $ for some $r > 0. $ In other words, we need to simulate an exponential (in time) number of particles $N$ in order to be sure that the above approximation procedure works -- which is the typical order of magnitude for the speed of convergence in mean-field limits. 
\subsection{A limit Cox-Ingersoll-Ross type jump process.} We continue the example given in the introduction. Thus $ d=m= 1 $ and $ \mu ( dz ) = dz . $ For $t > 0$ and for some $r >0,$ $$ c (t, z, x ) = \frac{\sigma }{2 } e^{-rt} [ 1_{ ]-3 e^{2rt} , -2e^{2rt} [ } (z) - 1_{ ] -2e^{2rt} , -e^{2rt} [ } (z) ] - a x e^{-2rt} 1_{ ]- e^{2rt}, 0 [ } (z) + \frac{d}{(1 + z)^2 } 1_{ ]0, e^{2rt} [ } (z),$$ $b(t, x ) = b $ and $$ \gamma(t, z, x) = f(x) 1_{ ]-3 e^{2rt} , - e^{2rt} [ } (z) + 1_{ ]- e^{2rt}, 0 [ } (z) + f(x) 1_{ ]0, e^{2rt} [ } (z) , $$ for some constants $ \sigma , d \in \mathbb {R} $ and $ a, b > 0 ,$ where $f : \mathbb {R} \to \mathbb {R}_+$ is bounded, has a bounded derivative, and satisfies $ \inf_{ x \in \mathbb {R} } f( x) > 0 .$ By construction, jumps coming from ``noise'' $ z \in ]-3 e^{2rt},- e^{2rt} [ $ are centered, that is, $$ \int_{-3 e^{2rt}}^{-e^{2rt}} c(t, z, x) \gamma ( t, z , x ) \mu (dz ) = 0, $$ and the associated variance is given by $ a (t, x) = \sigma^2 f(x) $ for all $ t, x .$ Moreover, for all $ t, x, $ $$ \int_{-e^{2rt}}^{0} c(t, z, x) \gamma ( t, z , x ) \mu (dz ) = - a x $$ giving rise to a limit drift $-a x .$ Finally, the jumps produced by noise $z > 0 $ survive in the limit process -- this corresponds to the slow jump regime. It is straightforward to show that the associated limit process is a Cox-Ingersoll-Ross type jump process given by $$ d \bar X_t = (b - a \bar X_t ) dt + \sigma \sqrt{ f (\bar X_t ) } d W_t + \int_{ \mathbb {R} \times \mathbb {R}_+} \frac{d}{(1 + z)^2 } 1_{ \{ u \le f( \bar X_{t-}) \}} N (dt,d z, du ) . $$ Taking the Lyapunov function $V(x) = |x|^2, $ it is easy to see that $\bar X_t $ satisfies all conditions of Assumption \ref{ass:Final!} for any choice of $x_0 \in K ,$ since the diffusion part is uniformly elliptic (recall that $ f $ is strictly bounded from below). Concerning the time-inhomogeneous process, Assumptions \ref{conditions} and \ref{ass:1} are satisfied by construction. 
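For concreteness, the centering and drift claims can be verified by direct computation (recall $\mu(dz) = dz,$ so each of the three negative noise intervals has length $e^{2rt}$):

```latex
\int_{-3 e^{2rt}}^{-2e^{2rt}} \frac{\sigma}{2} e^{-rt} f(x) \, dz
 - \int_{-2e^{2rt}}^{-e^{2rt}} \frac{\sigma}{2} e^{-rt} f(x) \, dz
 = \frac{\sigma}{2} e^{rt} f(x) - \frac{\sigma}{2} e^{rt} f(x) = 0 ,
% and the intermediate regime produces the limit drift:
\int_{-e^{2rt}}^{0} \big( - a x e^{-2rt} \big) \cdot 1 \, dz
 = - a x e^{-2rt} \cdot e^{2rt} = - a x .
```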
To check \eqref{eq:lyapunovlt}, it suffices to take once more $ V(x) = |x|^2 $ and $t_0 $ sufficiently large (such that $ a e^{-2 rt_0} < 2$). Finally, it is straightforward to verify that $C_{t_0} $ defined in \eqref{eq:ct} is finite. Therefore, Theorem \ref{theo:2} and Corollary \ref{cor:3} apply in this case.
\section{Introduction} Social coding platforms, such as GitHub, have changed the collaborative nature of open source software development by integrating mechanisms such as issue reporting and pull requests into distributed version control tools~\cite{Dabbish2012,gousios2014exploratory}. This pull-based development workflow offers new opportunities for community engagement but at the same time increases the workload for repository maintainers, who need to communicate, review code, deal with contributor license agreement issues, explain project guidelines, run tests, and merge pull requests~\citep{Gousios2016}. To reduce this intensive workload, developers often rely on automation tools to perform repetitive tasks, such as checking whether the code builds, the tests pass, and the contribution conforms to a defined style guide~\citep{kavaler2019tool}. GitHub projects adopt, for example, tools to support Continuous Integration and Continuous Delivery or Deployment (CI/CD)~\cite{zhao2017impact,cassee2020silent} and for code review~\cite{kavaler2019tool}. In recent years, software bots have been widely adopted to automate a variety of predefined tasks around pull requests~\cite{Wessel2018}. By automating part of the workflow, developers hope to increase both productivity and quality~\cite{Vasilescu2015}. To further support automation, GitHub recently introduced GitHub Actions\footnote{https://github.com/features/actions} (the feature was made available to the public in November 2019). GitHub Actions allow the automation of tasks based on various triggers (e.g., commits, pull requests, issues, comments, etc.) and can be easily shared from one repository to another, making it easier to automate how developers build, test, and deploy software projects. However, little is known about the impact of this kind of automation and the challenges it might impose on the project development process. 
In this paper, we aim to understand how software developers use GitHub Actions to automate their workflows and how the dynamics of pull requests of GitHub projects change following the adoption of GitHub Actions. To achieve our goal, we address the following research questions: \textbf{RQ1:} \textit{How do OSS projects use GitHub Actions?} We aim to understand how commonly repositories use GitHub Actions and what they use them for. As a result of this analysis, we found that only a small subset of active repositories (0.7\% of the 416,266 repositories) adopted GitHub Actions. These Actions are spread across 20 categories, including continuous integration, utilities, and deployment. We also analyzed the commit history of files related to GitHub Actions workflows to understand how the use of predefined Actions evolves over time. Overall, we found that a typical Action is added twice, and never removed or modified. \textbf{RQ2:} \textit{How is the use of GitHub Actions discussed by developers?} To gain insight into how developers perceive GitHub Actions, we manually analyzed a set of 209 GitHub issues that discuss GitHub Actions. We found distinct categories of discussions related to GitHub Actions' maintenance and implementation, including switching from other automation tools to Actions, suggestions to implement Actions, and problems and frustrations. \textbf{RQ3:} \textit{What is the impact of GitHub Actions?} In this RQ, we investigate whether project activity indicators, such as the number of merged and non-merged pull requests, the number of comments, the time to close pull requests, and the number of commits, change after GitHub Actions adoption. We used a \textit{Regression Discontinuity Design}~\cite{thistlethwaite1960regression} to model the effect of Action adoption across 926 projects that had adopted GitHub Actions for at least 6 months. 
Our findings indicate that, on average, there are more rejected pull requests and fewer commits on merged pull requests after adopting GitHub Actions. In summary, we make the following contributions: (i) bringing attention to GitHub Actions, a relevant yet neglected resource that offers support for developers' tasks; (ii) characterizing the usage of GitHub Actions; and (iii) providing an understanding of how GitHub Actions' adoption impacts project activities and what developers discuss about them. \section{Workflow Automation with GitHub Actions} \begin{figure*}[!htbp] \scriptsize \centering \includegraphics[scale=0.7]{img/github_actions.pdf} \caption{GitHub workflow automation with GitHub Actions (adapted from GitHub).} \label{fig:flow} \end{figure*} GitHub Actions is an event-driven API provided by the GitHub platform to automate development workflows. GitHub Actions can run a series of commands after a specified event has occurred. An event is a specific activity that triggers a workflow run, as shown in Figure~\ref{fig:flow} (see the \img{img/actions} icon). For example, a workflow is triggered when a pull request is created for a repository or when a pull request is merged into the main branch. Workflows are defined in the \textbf{.github/workflows/} directory and use YAML syntax, having either a .yml or .yaml file extension. A workflow can contain one or more Actions. Developers can create their own Actions by writing custom code that interacts with their repository, and use them in their workflows or publish them on the GitHub Marketplace. GitHub allows developers to build Docker and JavaScript Actions, and both require a metadata file to define the inputs, outputs, and main entry point of the Action. After the successful execution of a workflow, the outputs can be displayed in different ways. One of the possibilities is through a \textit{GitHub Action bot}. 
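To make the workflow syntax concrete, a minimal workflow file that greets first-time contributors could look as follows (an illustrative sketch, not taken from any specific repository):

```yaml
# Illustrative workflow sketch (generic example, not any project's actual file)
name: Greeting
on: [pull_request, issues]

jobs:
  greeting:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/first-interaction@v1
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          issue-message: "Welcome! Thanks for opening your first issue."
          pr-message: "Welcome! Thanks for opening your first pull request."
```

Committed as, e.g., \textit{.github/workflows/greetings.yml}, this file is picked up automatically by GitHub; the \textbf{on} key lists the triggering events, each \textbf{uses} line references a predefined Action, and the resulting greeting messages are posted by the \textit{github-actions} bot.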
This bot, as any other bot on GitHub, is implemented as a GitHub user that can submit code contributions, interact through comments, and merge or close pull requests~\cite{wessel2020inconvenient}. Recently, developers published GitHub Action variants of many well-known bots (e.g., Coveralls, Codecov, Snyk) and these Actions are rapidly increasing in popularity~\cite{golzadeh2020groundtruth}. As an example of GitHub Actions adoption, consider the case of the project \textit{Gammapy}\footnote{https://github.com/gammapy/gammapy}, an open-source Python package for gamma-ray astronomy. As of the 13$^{th}$ of November, 2019, the \textit{Gammapy} community adopted a GitHub Action called \textit{First Interaction}\footnote{https://github.com/marketplace/actions/first-interaction}, which is responsible for identifying and welcoming newcomers when they create their first issue or open their first pull request on a project. As shown in Listing~\ref{listing:greetings}, \textit{Gammapy} created a workflow called \textit{Greeting} that can be triggered by both new pull requests and issues, as defined by the \textbf{on} keyword. The output of the \textit{First Interaction} Action is displayed through an issue/pull request comment posted by the \textit{GitHub Action bot} when a new pull request or issue is authored by a new contributor. An example of this Action's interaction on a GitHub issue is shown in Figure~\ref{fig:greetings-example}. \begin{figure}[!htbp] \scriptsize \centering \includegraphics[scale=0.6]{img/greetingnew.png} \caption{Greetings workflow of Gammapy -- \textit{greetings.yml}} \label{listing:greetings} \end{figure} \begin{figure}[!htbp] \scriptsize \centering \includegraphics[scale=0.55]{img/greetings.png} \caption{Example of \textit{github-actions} bot greeting a newcomer.} \label{fig:greetings-example} \end{figure} \section{Research Design} This study aims to understand GitHub Actions usage in GitHub projects. 
In the following, we present our study design, data collection, and analysis procedures. \subsection{Selecting Projects} We assembled a dataset of GitHub open-source projects that adopted GitHub Actions at some point in their history. To compose our study sample, we started by selecting repositories from GitHub. For this, we used the GitHub project metadata of Munaiah et al.'s \cite{munaiah2017curating} RepoReapers data set, which contained 446,862 GitHub repositories classified as containing an engineered software project. We then filtered this dataset to keep open-source software projects that at some point had adopted a GitHub Action. To identify these projects, we retrieved data from the GitHub API using a Ruby toolkit called Octokit.rb.\footnote{http://octokit.github.io/octokit.rb/} We verified whether the repositories had YAML files in the \textit{.github/workflows} directory. This filtered dataset comprises 3,190 projects. \subsection{Analyzing the use of GitHub Actions} First, we collected and quantitatively analyzed the number of projects using GitHub Actions and the number of Actions per project (\textbf{RQ1}). We also analyzed the workflow files of the studied projects, searching for the category, the description, and whether the Action was verified by GitHub. To understand the evolution of GitHub Actions, we retrieved the commit history of each workflow file used by the studied projects. For this purpose, we compared the commit histories looking for changes regarding Actions, including additions, removals, configuration changes, and version updates. \subsection{Categorizing GitHub Actions Discussions} To answer \textbf{RQ2}, we gathered issues from the repositories that mention either ``github action'' or ``github actions'', have at least one comment, and were posted after the release of the GitHub Actions feature. We collected 209 issues that met these criteria. After collecting these issues, we manually analyzed and categorized them. 
The manual classification was conducted by two researchers. The first author of this paper classified all 209 issues; another researcher independently categorized a random subset of 25 issues using the same classification scheme. We scored a free-marginal kappa value of 0.66. We then conducted a second, negotiated round and scored a free-marginal kappa value of 0.76. Fleiss et al.~\cite{fleiss2013statistical} state, as a rule of thumb, that kappa values less than 0.40 are poor, values from 0.40 to 0.75 are intermediate to good, and values above 0.75 are excellent. \subsection{Time series analysis} To answer \textbf{RQ3}, we conducted a time series analysis. We collected longitudinal data for different outcome variables and treated the adoption of GitHub Actions by each project in our data set as an ``intervention''. This way, we could align all the time series of project-level outcome variables on the intervention date and compare their trends before and after the adoption of Actions. In the following subsections, we detail the different steps involved, from filtering the initial data set to running the statistical models. \subsubsection{Aggregating project variables} We analyzed data from 6 months before and 6 months after the Action adoption. Similarly to previous work~\cite{zhao2017impact,wessel2020effects,cassee2020silent}, we excluded 30 days around the Action adoption date to avoid the influence of the instability caused during this period. We then aggregated individual pull request data into monthly periods, considering 6 months before and after the Action introduction. Next, we checked the activity level of the candidate projects, since many projects on GitHub are inactive~\cite{gousios2014exploratory}. We removed from our dataset projects that (i) did not receive any pull requests or (ii) disabled Actions during the period we considered. 
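The free-marginal (Randolph) kappa values reported above depend only on the observed agreement proportion and the number of categories; a minimal sketch follows (the category counts and agreement values in the test are illustrative examples, not the ones from our classification):

```python
def free_marginal_kappa(p_observed, n_categories):
    """Randolph's free-marginal multirater kappa.

    kappa = (P_o - P_e) / (1 - P_e), where P_o is the observed agreement
    proportion and the chance agreement is P_e = 1/k for k equiprobable
    categories (no fixed marginal distribution assumed for the raters)."""
    p_chance = 1.0 / n_categories
    return (p_observed - p_chance) / (1.0 - p_chance)
```

A value of 1 corresponds to perfect agreement, 0 to chance-level agreement.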
After applying all filters, our data set comprises 926 active projects that have been using at least one GitHub Action for 6 months. We focused on the same pull request related variables as in previous work \cite{wessel2020effects}: \MyPara{Merged/non-merged pull requests:} the number of monthly contributions (pull requests) that have been merged, or closed but not merged into the project, computed over all closed pull requests in each time frame. \MyPara{Comments on merged/non-merged pull requests:} the median number of monthly comments computed over all merged and non-merged pull requests in each time frame. \MyPara{Time-to-merge/time-to-close pull requests:} the median of monthly pull request latency (in hours), computed as the difference between the time when the pull request was closed and the time when it was opened. The median is computed using all merged and non-merged pull requests in each time frame. \MyPara{Commits of merged/non-merged pull requests:} the median of monthly commits computed over all merged and non-merged pull requests in each time frame. Based on previous work~\cite{cassee2020silent,zhao2017impact,wessel2020effects}, we also collected six known covariates for each project: \MyPara{Project name:} the name of the project to which the pull request belongs. This name is used to uniquely identify the project on GitHub. \MyPara{Programming language:} the primary project programming language, as automatically provided by GitHub. \MyPara{Time since the first pull request:} in months, computed since the earliest recorded pull request in the entire project history. We use this variable to capture the project's maturity with respect to pull request usage. \MyPara{Total number of pull request authors:} we count how many contributors submitted pull requests to the project as a proxy for the size of the project community. \MyPara{Total number of commits:} we compute the total number of commits as a proxy for the activity level of a project. 
\MyPara{Number of pull requests opened:} the number of monthly contributions (pull requests) received in each time frame. We expect that projects with a high number of contributions also observe a high number of comments, latency, commits, and merged and non-merged contributions. \subsubsection{Statistical Approach} \label{sec-statistical-modeling} We modeled the effect of GitHub Actions adoption over time across GitHub repositories using a Regression Discontinuity Design (RDD)~\cite{thistlethwaite1960regression,imbens2008regression}, following the work of Wessel et al. \cite{wessel2020effects}. RDD is a technique used to model the extent of a discontinuity at the moment of intervention and long after the intervention. The technique is based on the assumption that if the intervention does not affect the outcome, there would be no discontinuity, and the outcome would be continuous over time~\cite{quasiexperimentation}. The statistical model behind RDD is \begin{equation*} \begin{split} y_{i} =&\: \alpha + \beta\cdot \mbox{\textit{time}}_{i} + \gamma\cdot \mbox{\textit{intervention}}_{i} \: + \\& \delta\cdot \mbox{\textit{time\_after\_intervention}}_{i} \: + \eta\cdot \mbox{controls}_{i} + \varepsilon_{i} \end{split} \end{equation*} where $i$ indicates the observations for a given project. To model the passage of time as well as the introduction of GitHub Actions, we rely on three variables: \textit{time}, \textit{time after intervention}, and \textit{intervention}. The \textit{time} variable is measured in months at time $j$ from the start to the end of our observation period for each project. We considered a time period of 12 months for this study, 6 months before and 6 months after Action adoption. The \textit{intervention} variable is a binary value used to indicate whether the time $j$ occurs before ($\mbox{\textit{intervention}}=0$) or after ($\mbox{\textit{intervention}}=1$) the adoption event. 
The \textit{time\_after\_intervention} variable counts the number of months at time $j$ since the Action adoption; it is set to 0 before adoption. The $\mbox{\textit{controls}}_{i}$ variables allow us to isolate the effects of Action adoption from confounding effects that influence the dependent variables. For observations before the intervention, holding controls constant, the resulting regression line has a slope of $\beta$, and after the intervention $\beta+\delta$. The size of the intervention effect is measured by $\gamma$, the difference between the two regression values of $y_{i}$ at the moment of the intervention. Since we are interested in the effects of GitHub Actions on the monthly trend of the number of pull requests, the number of comments, the time-to-close pull requests, and the number of commits, for both merged and non-merged pull requests, we fitted eight models ($4$ variables $\times$ $2$ cases). To balance false positives and false negatives, we report the corrected p-values after applying multiple-comparison correction using the method of Benjamini and Hochberg~\cite{benjamini1995controlling}. We implemented the RDD models as mixed-effects linear regressions using the R package \textit{lmerTest}~\cite{kuznetsova2017lmertest}. Following the work of Wessel et al.~\cite{wessel2020effects}, we modeled \textit{project name} and \textit{programming language} as random effects~\cite{galecki2013linear} to capture project-to-project and language-to-language variability~\cite{zhao2017impact}. We evaluate the model fit using \textit{marginal} $(R^2_m)$ and \textit{conditional} $(R^2_c)$ scores, as described by Nakagawa and Schielzeth~\cite{nakagawa2013general}. The $R^2_m$ can be interpreted as the variance explained by the fixed effects alone, and $R^2_c$ as the variance explained by the fixed and random effects together. 
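The discontinuity estimation can be sketched on synthetic data as follows; this is a deliberately simplified ordinary-least-squares version of the model above, without the random effects, controls, or log transforms, and all numbers are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# one synthetic project: monthly outcome, 6 months before / 6 months after
months = np.arange(-6, 7, dtype=float)      # month 0 = Action adoption
intervention = (months >= 0).astype(float)  # 0 before, 1 after adoption
time_after = np.where(months >= 0, months, 0.0)

# ground-truth coefficients: intercept, slope, discontinuity, slope change
alpha_t, beta_t, gamma_t, delta_t = 10.0, 0.5, 3.0, -0.8
y = (alpha_t + beta_t * months + gamma_t * intervention
     + delta_t * time_after + rng.normal(0.0, 0.1, months.size))

# design matrix [1, time, intervention, time_after_intervention]
X = np.column_stack([np.ones_like(months), months, intervention, time_after])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
alpha_hat, beta_hat, gamma_hat, delta_hat = coef
```

Here `gamma_hat` estimates the jump at adoption and `beta_hat + delta_hat` the post-adoption slope, mirroring the interpretation of $\gamma$ and $\beta + \delta$ in the model above.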
In mixed-effects regression, the variables used to model the intervention, along with the other fixed effects, are aggregated across all projects, resulting in coefficients useful for interpretation. The interpretation of these regression coefficients supports the discussion of the intervention and its effects, if any. Thus, we report the significant coefficients ($p < 0.05$) in the regression as well as their variance, obtained using ANOVA. In addition, we \textit{log}-transform the fixed effects and dependent variables that have high variance~\cite{sheather2009modern}. We also account for multicollinearity, excluding any fixed effects for which the variance inflation factor (VIF) is higher than $5$~\cite{sheather2009modern}. \section{Results} In the following, we report the results of our study per research question. \subsection{How do OSS projects use GitHub Actions (RQ1)?} \label{howmany} Analyzing a set of 416,266 active repositories (i.e., repositories that received pull requests in the relevant time frame; see the previous section), we identified 3,190 (0.7\%) open-source software projects that had adopted at least one GitHub Action at the time of our data collection. Figure~\ref{fig:Bar} reports the absolute number of repositories that use GitHub Actions grouped by programming language, showing that the most prominent adopters of GitHub Actions are Python repositories, followed by Java and Ruby. \begin{figure}[!htbp] \scriptsize \centering \includegraphics[scale=0.45]{img/Bar.pdf} \caption[Number of repositories that use GitHub Actions]{Number of repositories that use GitHub Actions.} \label{fig:Bar} \end{figure} We collected the data only 10 months after GitHub Actions was released to the public, and our data show that a number of projects had already adopted the technology. Of the 3,190 GitHub repositories that use GitHub Actions, we found a total of 708 different predefined Actions. 
We collected data from each Action's repository and also from the GitHub Marketplace\footnote{https://github.com/marketplace?type=actions} page to categorize the Actions. If published in the marketplace, an Action is classified in 1--2 categories by the publisher. Table~\ref{tab:categories} presents the categorization of Actions we found. Note that the percentages do not add up to 100 since about half of the Actions are assigned to two categories, a primary one and a secondary one. \begin{table}[!htbp] \centering \caption[Categorization of Actions found within GitHub Actions workflows]{Categorization of Actions found within GitHub Actions workflows.} \begin{tabular}{lrr} \hline \textbf{Actions' Categories} & \textbf{\# of Actions} & \textbf{\%} \\ \hline Continuous integration & 192 & 27.12 \\ Utilities & 173 & 24.44 \\ Deployment & 87 & 12.29 \\ Publishing & 70 & 9.89 \\ Code quality & 53 & 7.49 \\ Code review & 45 & 6.36 \\ Dependency management & 36 & 5.08 \\ Testing & 33 & 4.66 \\ Open Source management & 30 & 4.24 \\ Project management & 27 & 3.81 \\ Container CI & 25 & 3.53 \\ Chat & 18 & 2.54 \\ Security & 13 & 1.84 \\ Community & 6 & 0.85 \\ Desktop tools & 5 & 0.70 \\ Mobile & 5 & 0.70 \\ Mobile CI & 4 & 0.56 \\ IDEs & 3 & 0.42 \\ Monitoring & 3 & 0.42 \\ Localization & 2 & 0.28 \\ Uncategorized & 280 & 39.55 \\ \hline \textbf{total Actions} & \textbf{708} & \textbf{156.77} \\ \hline \end{tabular} \label{tab:categories} \end{table} The five most frequent categories of Actions are the following: \MyPara{Continuous integration:} Actions responsible for running the CI pipeline and notifying contributors of test failures in CI tools (e.g., Retry Step, Chef Delivery). \MyPara{Utilities:} Actions created to automate diverse steps of the development workflow on the GitHub platform, often in support of other Actions. The \textit{Read Properties} Action, for example, inspects Java \textit{.properties} files looking for predefined properties. 
Another example of a utility Action is \textit{Replace string}, which replaces strings that match predefined regular expressions. \MyPara{Deployment:} Actions designed to build and deploy the application upon request. One example is the Action called \textit{Jekyll Deploy}, responsible for building and deploying the Jekyll site to GitHub Pages. \MyPara{Publishing:} Actions responsible for automatically publishing packages to the registry. For example, \textit{Action For Semantic Release} is an Action that leverages \textit{semantic-release} to fully automate the package release workflow, determining the next version number, generating the release notes, and publishing the package. \MyPara{Code quality:} Actions that analyze source code (e.g., code style, code coverage, code quality, and smells) submitted through pull requests and give feedback to developers via GitHub checks or comments. In addition, we found that 42 (5.93\%) out of 708 Actions are verified by GitHub. Creators are verified if they have an existing relationship with GitHub and have worked closely with GitHub to create these Actions. The five most popular Actions are the following: \MyPara{actions/checkout:} A verified utility Action that checks out a repository under \$GITHUB\_WORKSPACE. Therefore, a workflow can access the repository for further workflow tasks. \MyPara{actions/setup-python:} A verified utility Action that sets up a Python environment for use in a workflow, allowing the use of Python features and commands. \MyPara{actions/cache:} A verified utility and dependency management Action that allows caching dependencies and build outputs to improve workflow execution time. \MyPara{actions/upload-artifact:} A verified utility Action that uploads artifacts from a workflow, allowing developers to share data between jobs and store data once a workflow is complete. 
\MyPara{actions/setup-java:} A verified utility Action that sets up a Java environment for use in a workflow, allowing the use of Java features and commands, such as compiling and executing. Analyzing the version histories of these GitHub repositories, we found that in addition to adding a GitHub Action, repositories also removed Actions, modified their arguments, and updated their versions. We investigated how often these events occur and which Actions were most affected. \begin{table}[t] \caption{GitHub Actions that were added most often.} \label{tab:additions} \centering \begin{tabular}{lrl} \toprule Action & \# & Description \\ \midrule actions/checkout & 7,962 & Check out a repository \\ actions/setup-python & 1,756 & Set up workflow with Python \\ actions/cache & 1,729 & Cache dependencies/build outputs \\ actions/upload-artifact & 1,441 & Upload artifacts from workflow \\ actions/setup-java & 877 & Set up workflow with Java \\ actions/download-artifact & 580 & Download artifacts from build \\ shivammathur/setup-php & 434 & Set up workflow with PHP \\ actions/setup-ruby & 373 & Set up workflow with Ruby \\ codecov/codecov-action & 253 & Upload coverage to Codecov \\ actions/setup-dotnet & 225 & Set up workflow with .NET \\ \bottomrule \end{tabular} \end{table} Table~\ref{tab:additions} shows the top 10 GitHub Actions that were added to a repository most often. At the median, Actions were added two times (average: 30). 
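To make the composition of these popular Actions concrete, the following is a minimal, hypothetical workflow file combining several of them. The file name, trigger, job layout, version tags, and paths are illustrative assumptions, not taken from any studied repository:

```yaml
# .github/workflows/ci.yml -- illustrative sketch only
name: CI
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2        # check out the repository
      - uses: actions/setup-python@v2    # set up a Python environment
        with:
          python-version: '3.8'
      - uses: actions/cache@v2           # cache dependencies
        with:
          path: ~/.cache/pip
          key: pip-${{ hashFiles('requirements.txt') }}
      - run: pip install -r requirements.txt && pytest
      - uses: actions/upload-artifact@v2 # store data from the workflow
        with:
          name: test-results
          path: results/
```

Each \texttt{uses:} step references one predefined Action of the kind counted in the analysis above.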
\begin{table}[t] \caption{GitHub Actions that were removed often.} \label{tab:removals} \centering \begin{tabular}{lrl} \toprule Action & $\nicefrac{rm}{add}$ & Description \\ \midrule masa-iwasaki/setup-rbenv & 1.00 & Rbenv setup \\ meeDamian/github-release & 0.94 & Github Releases \\ eregon/use-ruby-action & 0.81 & Prebuilt Ruby \\ jakejarvis/s3-sync-action & 0.77 & Sync with S3 \\ harmon758/postgresql-action & 0.73 & PostgreSQL setup \\ kiegroup/github-action-build-chain & 0.67 & Build multiple projects \\ alexjurkiewicz/setup-ccache & 0.67 & Ccache setup \\ actions/labeler & 0.64 & Pull request labelling \\ SamKirkland/FTP-Deploy-Action & 0.64 & FTP server deploy \\ coverallsapp/github-action & 0.62 & Coveralls upload \\ \bottomrule \end{tabular} \end{table} Naturally, the Actions that were added most often are also the ones that were removed most often. Instead of absolute numbers, we therefore analyzed the relative frequency of removals, i.e., how often different Actions were removed compared to how often they were added. Table~\ref{tab:removals} shows the ten Actions that were removed most often in relative terms. We limited this and the following analyses in this section to Actions that were added at least ten times. As the table shows, \texttt{masa-iwasaki/setup-rbenv} was removed in all cases where it had previously been added. Looking at the Action's README file,\footnote{\url{https://github.com/masa-iwasaki/setup-rbenv}} this is unsurprising since it contains the note `Do Not Use This Action'. Other Actions were removed less frequently, with a median of zero removals (average: 5). 
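The relative removal frequency reported in the table is simply the number of removals divided by the number of additions, restricted to Actions added at least ten times. A sketch of this computation, where only the ratios for \texttt{masa-iwasaki/setup-rbenv} and \texttt{actions/labeler} match the table and the raw event counts are invented for illustration:

```python
# Relative removal frequency: removals / additions, restricted to Actions
# added at least ten times. The counts below are invented for illustration.
events = {
    "masa-iwasaki/setup-rbenv": {"added": 12, "removed": 12},
    "actions/labeler": {"added": 50, "removed": 32},
    "actions/checkout": {"added": 7962, "removed": 150},
}
ratios = {
    name: round(c["removed"] / c["added"], 2)
    for name, c in events.items()
    if c["added"] >= 10
}
for name, r in sorted(ratios.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {r:.2f}")
```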
Some of the Actions in the top 10 have several issues reported against them for failing to run on some operating systems.\footnote{e.g., \url{https://github.com/meeDamian/github-release/issues/20}} \begin{table}[t] \caption{GitHub Actions with many arguments modified.} \label{tab:arguments} \centering \begin{tabular}{l@{}rl} \toprule Action & $\nicefrac{mod}{add}$ & Description \\ \midrule andstor/copycat-action & 5.45 & File copying \\ reactivecircus/android-emulator-runner & 0.62 & Android Emulators \\ julianoes/Publish-Docker-Github-Action & 0.51 & Publish docker \\ archive/github-actions-slack & 0.50 & Messages to Slack \\ nanasess/setup-chromedriver & 0.45 & ChromeDriver setup \\ elgohr/Publish-Docker-Github-Action & 0.43 & Publish docker \\ google/oss-fuzz & 0.42 & Fuzz testing \\ docker/build-push-action & 0.41 & Docker with Buildx \\ actions/setup-dotnet & 0.38 & .NET setup \\ SamKirkland/FTP-Deploy-Action & 0.36 & FTP server deploy \\ \bottomrule \end{tabular} \end{table} Table~\ref{tab:arguments} shows the GitHub Actions which had their arguments modified most often, in relative terms. For example, the arguments of \texttt{andstor/copycat-action}, an Action to copy files from one repository to another external repository, were changed 5.45 times as often as the Action was added. This Action has 15 arguments, including ones to indicate the source and destination of the copy. At the median, GitHub Actions had their arguments modified zero times, with an average of 3. 
\begin{table}[t] \caption{GitHub Actions with many versions changed.} \label{tab:versions} \centering \begin{tabular}{l@{}rl} \toprule Action & $\nicefrac{vc}{add}$ & Description \\ \midrule gaurav-nelson/github-action & & \\ \hspace{1em}-markdown-link-check & 1.79 & Link checker \\ SamKirkland/FTP-Deploy-Action & 0.91 & FTP server deploy \\ technote-space/get-diff-action & 0.75 & Git diff \\ actions/github-script & 0.71 & GitHub via JavaScript \\ homoluctus/slatify & 0.64 & Slack Notifications \\ stefanzweifel/git-auto-commit-action & 0.60 & Automatically Commit \\ crazy-max/ghaction-docker-buildx & 0.60 & Docker with Buildx \\ peter-evans/create-pull-request & 0.53 & Pull request creation \\ cirrus-actions/rebase & 0.50 & Rebase pull requests \\ puppetlabs/action-litmus\_parallel & 0.47 & Workflow files org. \\ \bottomrule \end{tabular} \end{table} Some GitHub Actions were also frequently updated (average: 3, median: 0). Table~\ref{tab:versions} shows the Actions that had their versions modified most often, compared to how often they were added. The Action at the top of this list is under active development, with nine releases in the past six months at the time of writing. \MyBox{\textbf{Answer to RQ1.} We identified 3,190 active GitHub repositories which have adopted the GitHub Actions feature. We found 708 unique predefined Actions being used within the workflows. These Actions are spread across 20 categories. The most recurrent ones are continuous integration, utilities, and deployment. A typical (median) GitHub Action is added twice, and never removed or modified. Some of the Actions are removed, their arguments modified, and their versions changed many times, which might be explained by their characteristics, such as release history or number of arguments.} \subsection{How is the use of GitHub Actions discussed by developers? (RQ2)} We categorized 209 GitHub issues based on the content of the discussion. 
Table~\ref{tab:issues} shows an overview of this categorization, indicating how many issues we found in each category. We present the categories in the following. \begin{table}[!htbp] \centering \caption[Categorization of discussion]{Categorization of discussion.} \begin{tabular}{lrr} \hline \textbf{Issues' Categories} & \multicolumn{1}{l}{\textbf{\# of Issues}} & \multicolumn{1}{l}{\textbf{\%}} \\ \hline GitHub Actions maintenance & 43 & 20.57 \\ Announcement of GitHub Actions & 35 & 16.74 \\ Requesting GitHub Actions to be implemented & 34 & 16.27 \\ Switching CI/CD tools to GitHub Actions & 32 & 15.31 \\ GitHub Actions problems and frustrations & 31 & 14.83 \\ Other & 31 & 14.83 \\ \hline \textbf{sum} & \textbf{209} & \textbf{100} \\ \hline \end{tabular} \label{tab:issues} \end{table} \MyPara{GitHub Actions maintenance:} The most recurrent topic discussed by open-source contributors and maintainers regarding GitHub Actions is their maintenance. Within this category, we classified issues in which developers discuss maintaining GitHub Actions, adding or requesting features for pre-existing GitHub Actions, quick fixes, workarounds, and requests for admin rights. One maintainer, for example, opened an issue to request changes in an Action responsible for reporting diffs: ``\textit{At the moment, GitHub Action for diff report generation is triggered by PR changes and comment addition. [...] Pull request should be removed and only comment trigger is left.}'' Another example relates to issues pointing to small fixes in the README file, for example, as a result of moving to GitHub Actions. \MyPara{Announcement of GitHub Actions:} Another recurrent topic discussed by open-source developers on GitHub issues is when a new Action is announced. This category also comprises issues announcing that GitHub Actions has been implemented (not replacing pre-existing CI/CD platforms). 
\MyPara{Requesting GitHub Actions to be implemented:} We also found issues suggesting that GitHub Actions be looked into or requesting the use of GitHub Actions. We found, for example, a developer requesting an Action that adds a label ``\textit{had PR}'' to an issue once the pull request that solves a specific issue is submitted. \MyPara{Switching CI/CD tools to GitHub Actions:} Developers often create issues to discuss switching from a pre-existing CI/CD platform (e.g., CircleCI, Jenkins, TravisCI) to GitHub Actions (not including implementing GitHub Actions in parallel with pre-existing CI/CD tools). \MyPara{GitHub Actions problems and frustrations:} This category encompasses bugs, broken builds, errors, and frustrations related to GitHub Actions. There are bugs caused by a failure of a service the Action relies on. For example, a maintainer opened an issue to report that ``\textit{GitHub Actions on Mac and Windows fail due to missing Numpy}.'' \MyPara{Other:} Issues within this category relate to bugs pointed out by GitHub Actions, noise, or other discussions that do not fall into other categories. \MyBox{\textbf{Answer to RQ2.} Overall, discussions involving problems and frustrations are outweighed by announcements that GitHub Actions had been implemented, requests for implementation, and discussions about switching CI/CD tools.} \subsection{What is the Impact of GitHub Actions? (RQ3)} To answer this question, we investigated the effects of GitHub Action adoption on project activities along four dimensions: (i) merged and non-merged pull requests, (ii) human conversation, (iii) efficiency in closing pull requests, and (iv) modification effort. We start by investigating how Action adoption impacts the number of merged and non-merged pull requests. We fit two mixed-effects RDD models, as described in Section~\ref{sec-statistical-modeling}. For these models, the \textit{number of merged/non-merged pull requests} per month is the dependent variable. 
Table~\ref{tab:resultspullrequest} summarizes the results of these models. In addition to the model coefficients, the table also shows the sum of squares, indicating the variance explained by each variable. \begin{table}[htbp] \scalefont{0.9} \centering \caption{The Effects of GitHub Actions on PRs. The response is \textbf{log(number of merged/non-merged PRs)} per month.} \label{tab:resultspullrequest} \begin{threeparttable} \begin{tabular}{lrrrrrr} \midrule & \multicolumn{2}{c}{Merged PRs} & & \multicolumn{2}{c}{Non-merged PRs}\\ \cmidrule{2-3}\cmidrule{5-6} & Coeffs & Sum Sq. & & Coeffs & Sum Sq. \\ \cmidrule{2-3}\cmidrule{5-6} Intercept & -0.203*** & & & -0.159** &\\ TimeSinceFirstPR & 0.0002 & 47.7 & & -0.001** & 6.95 \\ log(TotalPRAuthors) & -0.002 & 638.6 & & 0.028*** & 133.69 \\ log(TotalCommits) & 0.020*** & 236.5 & & 0.017** & 34.65 \\ log(OpenedPRs) & 0.770*** & 3393.8 & & 0.230*** & 530.86 \\ log(PRComments) & 0.048*** & 48.9 & & 0.342*** & 410.05 \\ log(PRCommits) & 0.246*** & 105.8 & & 0.200*** & 73.24 \\ time & 0.004 & 1.0 & & -0.004* & 0.01 \\ interventionTrue & 0.014 & 0.1 & & 0.002 & 0.03 \\ time\_after\_intervention & -0.004 & 0.2 && 0.008** & 0.47 \\ \midrule Marginal $R^2$ & 0.88 & & & 0.64 & \\ Conditional $R^2$ & 0.91 & & & 0.78 & \\ \midrule \end{tabular} \begin{tablenotes} \item *** $p < 0.001$, ** $p < 0.01$, * $p < 0.05$ \end{tablenotes} \end{threeparttable} \end{table} Analyzing the model for merged pull requests, we found that the fixed-effects part fits the data well ($R^2_m=0.88$). However, considering $R^2_c=0.91$, there is also project-to-project and language-to-language variability. Among the fixed effects, we note that the number of monthly pull requests explains most of the variability in the model, indicating that projects receiving more contributions tend to have more merged pull requests, with other variables held constant. 
None of the Action-related predictors have statistically significant effects, meaning the trend in the number of merged pull requests is stationary over time, and remains unaffected by the Action adoption. Similarly to the previous model, the fixed-effect part of the non-merged pull requests model fits the data well ($R^2_m=0.64$), even though a considerable amount of variability is explained by random effects ($R^2_c=0.78$). We note similar results on fixed effects: projects receiving more contributions tend to have more non-merged pull requests. In addition, pull requests receiving more comments tend to be rejected. The effect of Action adoption on the non-merged pull requests differs from the previous model. Regarding the time series predictors, the model did not detect any discontinuity at adoption time. However, the negative trend in the number of non-merged pull requests before the Action adoption is reversed, toward an increase after adoption. \begin{table}[htbp] \scalefont{0.9} \centering \caption{The Effects of GitHub Actions on Pull Request Comments. The response is \textbf{log(median of comments)} per month.} \label{tab:resultscomments} \begin{threeparttable} \begin{tabular}{lrrrrrr} \midrule & \multicolumn{2}{c}{Merged PRs} & & \multicolumn{2}{c}{Non-merged PRs}\\ \cmidrule{2-3}\cmidrule{5-6} & Coeffs & Sum Sq. & & Coeffs & Sum Sq. 
\\ \cmidrule{2-3}\cmidrule{5-6} Intercept & 0.020 & & & -0.058** &\\ TimeSinceFirstPR & -0.001*** & 3.29 & & -0.001*** & 10.98 \\ log(TotalPRAuthors) & 0.055*** & 47.49 & & 0.031*** & 169.33 \\ log(TotalCommits) & -0.008 & 2.71 & & 0.002 & 25.67 \\ log(OpenedPRs) & -0.005 & 40.87 & & 0.051*** & 217.05 \\ log(TimeToClosePRs) & 0.077*** & 376.95 & & 0.113*** & 930.91 \\ log(PRCommits) & 0.213*** & 70.48 & & 0.199*** & 75.38 \\ time & -0.009*** & 1.54 & & 0.002 & 0.00 \\ interventionTrue & 0.001 & 0.02 & & -0.002 & 0.01 \\ time\_after\_intervention & 0.009*** & 0.70 && -0.003 & 0.05 \\ \midrule Marginal $R^2$ & 0.38 & & & 0.65 & \\ Conditional $R^2$ & 0.54 & & & 0.70 & \\ \midrule \end{tabular} \begin{tablenotes} \item *** $p < 0.001$, ** $p < 0.01$, * $p < 0.05$ \end{tablenotes} \end{threeparttable} \end{table} To investigate the effects of Action adoption on pull request communication, we fit one model to merged pull requests and another to non-merged ones. The \textit{median of pull request comments} per month is the dependent variable. Table~\ref{tab:resultscomments} shows the results of the fitted models. Considering the model of comments on merged pull requests, we found that the combined fixed-and-random effects ($R^2_c=0.54$) fit the data better than the fixed effects ($R^2_m=0.38$), showing that most of the explained variability in the data is associated with project-to-project and language-to-language variability, rather than the fixed effects. Additionally, we also observe that time-to-close pull requests explains the largest amount of variability in the model, indicating that the communication during the pull request review is strongly associated with the time to merge it. 
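As a concrete reading of the time-series coefficients, the post-adoption monthly slope is $\beta+\delta$. Checking the arithmetic with the \textit{time} and \textit{time\_after\_intervention} coefficients of the merged-PR comment model in Table~\ref{tab:resultscomments}:

```python
# Pre-adoption monthly trend (beta) and its post-adoption change (delta),
# taken from the merged-PR comment model in the table above.
beta = -0.009   # coefficient of 'time'
delta = 0.009   # coefficient of 'time_after_intervention'
post_slope = beta + delta
print(round(post_slope, 3))  # the downward pre-adoption trend is neutralized
```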
Regarding the Action effects, we note a decreasing time baseline trend before adoption; no statistically significant discontinuity at the adoption time; and an apparent neutralization of the aforementioned time trend after adoption, as $\beta(\mbox{\textit{time}}) + \delta(\mbox{\textit{time\_after\_intervention}}) \simeq 0$. Turning to the model of comments on non-merged pull requests, the model fits the data well ($R^2_m=0.65$) and there is also variability explained by the random variables ($R^2_c=0.70$). This model also suggests that communication during the pull request review is strongly associated with the time to reject the pull request. Table~\ref{tab:resultscomments} shows that none of the Action-related predictors have statistically significant effects, meaning the comments trend in non-merged pull requests is stationary over time, and remains unaffected by the Action adoption. \begin{table}[htbp] \scalefont{0.9} \centering \caption{The Effects of GitHub Actions on Time-to-close PRs. The response is \textbf{log(median of time-to-close PRs)} per month.} \label{tab:resultstime} \begin{threeparttable} \begin{tabular}{lrrrrrr} \midrule & \multicolumn{2}{c}{Merged PRs} & & \multicolumn{2}{c}{Non-merged PRs}\\ \cmidrule{2-3}\cmidrule{5-6} & Coeffs & Sum Sq. & & Coeffs & Sum Sq. 
\\ \cmidrule{2-3}\cmidrule{5-6} Intercept & -0.374** & & & 0.053 &\\ TimeSinceFirstPR & -0.003** & 130.3 & & 0.0004 & 322.6 \\ log(TotalPRAuthors) & 0.218*** & 1386.1 & & 0.067** & 3733.9 \\ log(TotalCommits) & 0.021 & 155.6 & & -0.028 & 530.3 \\ log(OpenedPRs) & -0.139*** & 1046.3 & & 0.089*** & 4588.4 \\ log(PRComments) & 1.528*** & 8543.4 & & 2.816*** & 23589.5 \\ log(PRCommits) & 1.520*** & 4145.8 & & 1.011*** & 1967.1 \\ time & 0.013 & 2.5 & & -0.005 & 0.3 \\ interventionTrue & -0.053 & 2.1 & & -0.003 & 0.0 \\ time\_after\_intervention & -0.004 & 0.1 && 0.006 & 0.3 \\ \midrule Marginal $R^2$ & 0.47 & & & 0.63 & \\ Conditional $R^2$ & 0.57 & & & 0.67 & \\ \midrule \end{tabular} \begin{tablenotes} \item *** $p < 0.001$, ** $p < 0.01$, * $p < 0.05$ \end{tablenotes} \end{threeparttable} \end{table} We fitted two RDD models where \textit{median of time to close pull requests} per month is the dependent variable. The results are shown in Table~\ref{tab:resultstime}. Analyzing the results regarding the effect of GitHub Actions on the latency to merge pull requests, we found that the combined fixed-and-random effects fit the data better than the fixed effects. Although several variables affect the trends of pull request latency, communication during the pull requests is responsible for most of the variability in the data. This indicates the expected result: the more effort contributors expend discussing the contribution, the more time the contribution takes to merge. The number of commits also explains a substantial amount of the variability, since a project with many changes needs more time to review and merge them. However, none of the Action-related predictors have statistically significant effects on the time spent to merge pull requests. Turning to the model of non-merged pull requests, we note that it fits the data well ($R^2_m=0.63$), and there is also variability explained by the random variables ($R^2_c=0.67$). 
As above, communication during the pull requests is responsible for most of the variability encountered in the results. Similar to the previous model, none of the Action-related predictors have statistically significant effects on the time spent to reject pull requests. \begin{table}[htbp] \scalefont{0.9} \centering \caption{The Effects of GitHub Actions on Pull Request Commits. The response is \textbf{log(median of commits)} per month.} \label{tab:resultscommits} \begin{threeparttable} \begin{tabular}{lrrrrrr} \midrule & \multicolumn{2}{c}{Merged PRs} & & \multicolumn{2}{c}{Non-merged PRs}\\ \cmidrule{2-3}\cmidrule{5-6} & Coeffs & Sum Sq. & & Coeffs & Sum Sq. \\ \cmidrule{2-3}\cmidrule{5-6} Intercept & -0.020*** & && -0.075** &\\ TimeSinceFirstPR & 0.001** & 7.02 & & -0.00004 & 14.14 \\ log(TotalPRAuthors) & -0.060*** & 83.99 & & -0.012* & 233.35 \\ log(TotalCommits) & 0.019** & 41.86 & & 0.019*** & 71.96 \\ log(OpenedPRs) & 0.247*** & 440.55 && 0.117*** & 352.79 \\ log(PRComments) & 0.541*** & 441.40 && 0.611*** & 758.36 \\ time & 0.010*** & 1.97 && 0.002 & 0.00 \\ interventionTrue & 0.039** & 0.56 & & -0.009 & 0.07 \\ time\_after\_intervention & -0.015*** & 3.18 && -0.002 & 0.03 \\ \midrule Marginal $R^2$ & 0.47 & & & 0.49 & \\ Conditional $R^2$ & 0.60 & & & 0.54 & \\ \midrule \end{tabular} \begin{tablenotes} \item *** $p < 0.001$, ** $p < 0.01$, * $p < 0.05$ \end{tablenotes} \end{threeparttable} \end{table} Finally, we studied whether Action adoption affects the number of commits made before and during the pull request review. Again, we fitted two models for merged and non-merged pull requests, where the \textit{median of pull request commits} per month is the dependent variable. The results are shown in Table~\ref{tab:resultscommits}. Analyzing the model of commits on merged pull requests, we found that the combined fixed-and-random effects ($R^2_c=0.60$) fit the data better than the fixed effects ($R^2_m=0.47$). 
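The gap between these two scores can be read through the Nakagawa--Schielzeth definitions: $R^2_m$ divides the fixed-effect variance by the total variance, while $R^2_c$ also counts the random-effect variance. A sketch with invented variance components chosen to reproduce the values above:

```python
# Nakagawa-Schielzeth R^2 for a mixed model. The variance components are
# invented for illustration, chosen to yield R2_m = 0.47 and R2_c = 0.60.
var_fixed = 0.47    # variance attributable to the fixed effects
var_random = 0.13   # variance of the random effects (project, language)
var_resid = 0.40    # residual variance
total = var_fixed + var_random + var_resid
r2_marginal = var_fixed / total                     # fixed effects alone
r2_conditional = (var_fixed + var_random) / total   # fixed + random effects
print(round(r2_marginal, 2), round(r2_conditional, 2))
```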
The statistical significance of all Action-related coefficients indicates that the adoption of Actions affected the number of commits. We note an increasing trend before adoption and a statistically significant discontinuity at the adoption time. Further, the positive trend in the number of commits on merged pull requests before the Action adoption is reversed, toward a decrease after adoption. Additionally, we also observe that the number of pull request comments and the number of contributions per month explain most of the variability in the result. This result suggests that the more comments and pull requests there are, the more commits there will be. Investigating the results of the non-merged pull request model, we also found that the combined fixed-and-random effects fit the data better than the fixed effects. Similar to the previous model, the number of pull request comments per month explains most of the variability in the result. Regarding the time series predictors, the model did not detect any discontinuity at adoption time. However, the positive trend in the median of commits before the Action adoption is reversed, toward a decrease after adoption. \MyBox{\textbf{Answer to RQ3.} After adopting GitHub Actions, on average, there are more rejected pull requests and fewer commits on merged pull requests.} \section{Discussion} Recently, an easy, reusable, and portable way to automate developers' workflows on GitHub was made possible by the advent of GitHub Actions. So far, the literature presents scarce evidence on the use of GitHub Actions by GitHub repositories~\cite{golzadeh2020groundtruth}. In this work, we contribute by introducing and systematizing evidence on the use, evolution, and impacts of such Actions. Our findings contribute new knowledge about how software developers use GitHub Actions. Firstly, we showed that 3,190 (0.7\%) of the 416,266 repositories in our data set had already adopted GitHub Actions at the time of analysis. 
In addition, we found that 708 unique predefined Actions have been used within the repositories using GitHub Actions. While 39.55\% of these Actions were uncategorized, only 5.93\% were verified, indicating that the majority of the Actions on GitHub are created by the community. Uncategorized Actions are Actions that have not been published to the GitHub Marketplace; this percentage thus indicates an active community surrounding GitHub Actions. Analyzing the historical evolution of Actions, we found that some of the Actions require maintenance after being added to a repository. While a typical (median) Action is added twice (to the same or different repositories) and never removed or modified, a significant minority of Actions are removed, have their arguments modified, or are updated to new versions. For repository maintainers, this means that adding a GitHub Action effectively adds yet another dependency to a project that needs to be maintained, might become outdated, and needs to be adjusted over time. On the other hand, this is not unexpected for a new feature, and in fact all repositories that have adopted GitHub Actions to date can be seen as early adopters. A typical Action also does not appear to require ongoing maintenance, at least not in the time window considered in our study. Announcements that GitHub Actions had been implemented, requests for implementation, and discussions about switching CI/CD tools outweighed the discussions involving problems and frustrations, indicating a positive perception of GitHub Actions. With a new feature such as GitHub Actions, it would not be surprising if some negative sentiment were found in corresponding discussion forums, in particular asking why yet another feature is needed. 
However, anecdotally, developers seem to appreciate the premise of GitHub Actions to help standardize the use of bots on GitHub, and in fact a good portion of the discussions was about switching CI/CD tools that a repository already used to their GitHub Actions equivalents. We found that two activity indicators have a statistically significant effect on the pull request process after the adoption of a GitHub Action. According to the regression results, the median number of rejected pull requests increases after the adoption of Actions. This may indicate that project maintainers started to have faster and clearer feedback on the pull request, helping them to identify major issues on a vast number of contributions. Moreover, GitHub Actions produce different effects on non-merged pull requests when compared to the effects of adopting software bots---Wessel et al.~\cite{wessel2020effects} reported that the introduction of bots into pull request reviews leads to fewer rejected pull requests. From the regression results, we also noticed an increase in the median number of commits on merged pull requests just after Action adoption. This makes sense from the contributors' side, since the Action introduces a secondary evaluation step to the pull request. Especially at the beginning of the adoption, the Action might increase the number of commits due to the need to meet all requirements and obtain stable code. After that, however, a decrease occurs in the median number of commits on merged pull requests per month. Our work has implications for researchers and practitioners. For researchers interested in software bots, it is important to understand the role of GitHub Actions in the bot landscape. Backed by GitHub, GitHub Actions are likely here to stay, and we already see evidence of existing software tools, such as test coverage tools, being integrated into and packaged as GitHub Actions. 
It is important to understand how such Actions affect the interplay of developers in their effort to develop software, and our study provides the first step in this direction. Additional effort is also necessary to investigate the impact on newcomers, who already face a variety of barriers~\cite{balali2018newcomers,steinmacher2015social}. Educators may also see an opportunity in GitHub Actions to build automation tools to better support their OSS assignments~\cite{pinto2017training}. Practitioners need to make informed decisions about whether to adopt GitHub Actions (or software bots in general) in their projects and how to use them effectively. Also, GitHub Actions might allow them to automate repetitive tasks in their projects with their own custom GitHub Action. Already at its current early-adopter stage, GitHub Actions provides hundreds of different Actions, potentially making it difficult for practitioners to decide which Action to use, if any. Our work provides the first empirical data on which Actions are currently used, how they evolve, and what their impact can be on development processes. We hope that this work will inspire more repositories to adopt GitHub Actions for their projects. \section{Limitations and Threats to Validity} In this section, we discuss the limitations and threats to validity and how we have mitigated them. For replication purposes, we made our data and source code publicly available.\footnote{https://zenodo.org/record/4626256} \MyPara{External Validity:} Since we selected engineered software projects, our findings might not generalize to other or all GitHub projects. One way to overcome this threat is by studying non-engineered projects hosted on GitHub. Additionally, even though we considered a large number of projects and our results indicate general trends, we recommend running segmented analyses when applying our results to a given project. 
In addition, we focused on the same pull-request-related variables as in previous work \cite{wessel2020effects,cassee2020silent,zhao2017impact}, leaving other effects and artifacts for future work. \MyPara{Construct Validity:} As stated by Kalliamvakou et al.~\cite{Kalliamvakou2014}, many merged pull requests appear non-merged. Since we consider the number of merged pull requests, our results may be affected by this threat. Our study can be replicated when automated ways of detecting this issue are developed. \MyPara{Internal Validity:} To reduce internal threats, we applied multiple data filtering steps to the statistical models. We varied the data filtering criteria to confirm the robustness of our models. For example, we filtered projects that did not receive pull requests in all months and observed similar phenomena. We also carried out a series of placebo tests~\cite{imbens2008regression} using the same model with the adoption artificially set to different dates to confirm the model robustness. The assumption of exogeneity of the treatment might be a threat. We added several controls that might influence the independent variables to reduce confounding factors. \section{Related Work} There has been no study investigating GitHub Actions. However, previous work has investigated other automation tools such as software bots, continuous integration, and continuous delivery. \subsection{Software Bots} On GitHub, software bots are often integrated into the pull request workflow \cite{Erlenhov2019}, to perform a variety of tasks. These include repairing bugs \cite{Monperrus2019}, refactoring the source code \cite{Wyrich2019}, recommending tools to help developers \cite{Brown2019}, and updating outdated dependencies \cite{Mirhosseini2017}. Software bots have been proposed to support technical and social aspects of software development activities \cite{Lin2016}, such as communication and decision-making \cite{Storey2016}. 
Van Tonder and Le Goues~\cite{Tonder2019} believe software bots are a promising addition to a developer's toolkit as they bridge the gap between human software development and automated processes. However, understanding how software bots' interactions affect human developers is a major challenge. Storey et al.~\cite{Storey2016} highlight that software bots' potential negative impact is still neglected, as the way that these software bots interact on pull requests can be disruptive and perceived as unwelcoming \cite{10.1145/3387940.3391504}. Wessel et al.~\cite{Wessel2018} investigated the usage and impact of software bots to support contributors and maintainers with pull requests. After identifying bots on popular GitHub repositories, the authors classified these bots into $13$ categories according to the tasks they perform. The third most frequently used bots are code review bots. Wessel et al.~\cite{wessel2020effects} also employed a regression discontinuity design on OSS projects, revealing that bot adoption increases the number of monthly merged pull requests, decreases monthly non-merged pull requests, and decreases communication among developers. \subsection{Continuous Integration and Continuous Delivery (CI/CD)} As stated by Duvall et al.~\cite{duvall2007continuous}, the main goal of CI is to improve software quality and reduce risk. The findings of Vasilescu et al.~\cite{Vasilescu2015} point clearly to the benefits of introducing continuous integration into the pull request process: more pull requests were processed, more were accepted and merged, and more were also rejected. In the context of Computer Science education, rising enrollments make it difficult for instructors and teaching assistants to give adequate feedback on each student's work. Hu et al.~\cite{Hu2019} set up a static code analyzer and a continuous integration service on GitHub to help students check code style and functionality.
By implementing three bots, the authors showed that more than 70\% of students consider the advice given by the bots useful and that the bots provide significantly more feedback (six times more on average) than the teaching staff. A survey by Chen et al.~\cite{940726} reports that of the hundreds of billions of dollars spent on developer wages, up to 25\% accounts for fixing bugs. Continuous integration thus holds huge potential to further reduce human effort and costs by automatically fixing bugs. Prior work has also investigated the impact of CI and code review tools on GitHub projects~\cite{zhao2017impact,kavaler2019tool,cassee2020silent} across time. While Zhao et al.~\cite{zhao2017impact} and Cassee et al.~\cite{cassee2020silent} focused on the impact of the Travis CI tool's introduction on development practices, Kavaler et al.~\cite{kavaler2019tool} turned to the impact of linters, dependency managers, and coverage reporter tools. Our work extends this literature by providing a more in-depth investigation of the effects of GitHub Actions adoption. \section{Conclusion} In this paper, we investigate how software developers use GitHub Actions to automate their workflows, how they discuss these Actions on the issue tracker, and what the effects of adopting such Actions on pull requests are. While several Actions have been proposed and adopted by the open-source software community, relatively little has been done to evaluate the state of practice. To understand the state of practice, we collected and analyzed data from 3,190 active GitHub repositories. Further, to understand the impact of adoption, we statistically analyzed a sample of 926 open-source projects hosted on GitHub. Firstly, the findings showed that only a small subset of the 3,190 repositories used GitHub Actions. We also found that 708 unique predefined Actions were being used within the workflows.
Further, we collected and analyzed GitHub Actions-related issues and found that the majority of the discussion was positive. These findings indicate that GitHub Actions were met with an overall positive reception among software developers. By modeling the data around the introduction of GitHub Actions, we observe different results for merged and non-merged pull requests. The monthly number of commits of merged pull requests decreases after the adoption of GitHub Actions, and there are also more monthly rejected pull requests. Our findings bring to light how early adopters are using, discussing, and being impacted by GitHub Actions. Learning from those early adopters can provide insights to assist the open-source community in deciding whether to use GitHub Actions and how to use them effectively. Future work includes a qualitative investigation of the effects of adopting GitHub Actions and an expansion of our analysis to consider the effects of different types of Actions and activity indicators. \section*{Acknowledgments} This work was partially supported by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior – Brasil (CAPES) – Finance Code 001, CNPq (grant 141222/2018-2), NSF grants 1815503 and 1900903, and the Australian Research Council's Discovery Early Career Researcher Award (DECRA) funding scheme (DE180100153). \bibliographystyle{IEEEtran}
\section{Introduction} \defcitealias{pdm00}{PDM} \defcitealias{dem06}{DMPP} In recent years an increasing number of globular clusters (GCs) have been found to be seriously depleted in low-mass stars ($\lesssim 0.5$\,M${_\odot}$\,) when compared with high-concentration clusters (\citealt{pdm00} hereafter \citetalias{pdm00}; \citealt{dem05}). The first heavily depleted cluster to be discovered was NGC\,6712 \citep{dem99, and01}, followed by Pal\,5 \citep{koc04}, NGC\,6218 (\citealt{dem06}, hereafter \citetalias{dem06}), NGC\,2298 \citep{dem07} and NGC\,6838 \citep{pul07}. In all cases, the analysis of the radial variation of the directly observed mass function (MF) confirms that the relative depletion of low-mass stars is not due to the local effect of mass segregation but is a structural property of the global mass function (GMF), i.e. the MF of the entire stellar population of the GC obtained by model fitting. Hereafter, the GMF is defined as a power-law function, where the number of stars $N$ per unit mass $m$ follows a relationship of the type $dN/dm \propto m^\alpha$ for $0.3 < m < 0.8$\,M${_\odot}$\,. For some of the clusters (NGC\,6712, Pal\,5, NGC\,6838), this finding is at least qualitatively consistent with the predicted effect of tidal stripping caused by the Galaxy as these objects have typically shorter disruption times ($T_{\rm dis}$) than the average GC according to the models of \citet{gne97}, \citet{din99} and \citet{bau03}. Quantitatively, however, the agreement is poor for all of them, since the expected tight correlation between $T_{\rm dis}$ and GMF slope \citep{bau03}, whereby clusters with a higher probability of disruption should always have a shallower GMF slope, is not observed (\citetalias{dem06}; \citealt{dem07}). Rather than an error in the models, this is possibly the result of the large uncertainties affecting their input parameters, especially the clusters' orbits and exact shape of the Galactic potential. 
(For example, the redetermination of the orbit of NGC\,6218 with respect to the Hipparcos reference system has led to a value of $T_{\rm dis}$ in better agreement with the cluster's flat GMF \citepalias{dem06} than that based on the previous, less accurate orbit.) This situation is unlikely to improve much until the advent of space astrometry missions like Gaia. The absence of a clear correlation with the effects of tidal stripping, on the other hand, could also be due, at least in part, to our imperfect understanding of the relation between the cluster's GMF and its fundamental structural parameters and their evolution in time. To explore this possibility, we have used the available data to search for possible signs of a more complex situation. We present in this Letter the results of this study, which indeed imply that there is more at work here than was thought up to now. Specifically, we find that there is a systematic trend between the GMF slope $\alpha$ and the central concentration parameter $c$ (defined as $\log(r_{\rm t}/r_{\rm c})$ where $r_{\rm t}$ and $r_{\rm c}$ are the tidal radius and core radius, respectively), in the sense that all five clusters above with a severely depleted GMF have an intermediate or low value of $c$. On the contrary, the twelve halo clusters in the sample studied by \citetalias{pdm00} and \citet{dem05} have typically high central concentration ($\langle c \rangle \simeq 1.9$) and correspondingly much steeper GMF slopes, typically $\alpha=-1.4$ in the range $0.3 - 0.8$\,M${_\odot}$\,. In the following sections, we explore in more detail the origin and nature of this trend and its possible explanations. \section{The sample} The sample used in this investigation includes all GCs for which reliable luminosity functions (LFs) and MFs exist from deep HST or VLT photometry.
This includes the twelve halo clusters studied by \citetalias{pdm00}, the two bulge clusters NGC\,6352 and NGC\,6496 studied by \citet{pul03} and the five GCs, mentioned above, that have recently been shown to have a depleted GMF at the low-mass end. Finally, we have added to the sample NGC\,288, whose LF has been studied by \citet{bel02} with the HST and also reveals a paucity of low-mass stars. The complete sample is listed in Table\,\ref{tab1}, where column (1) gives the cluster name, column (2) the bibliographic reference, column (3) the MF index $\alpha$ in the mass range $0.3 - 0.8$\,M${_\odot}$\,, column (4) the value of the central concentration parameter $c$ and column (5) the total integrated magnitude $M_{\rm V}$, both from Harris (1996). \begin{deluxetable}{lcccc} \tablewidth{245pt} \tablecaption{Mass function index and central concentration\label{tab1}} \tablehead{ \colhead{Object} & \colhead{Reference} & \colhead{$\alpha$} & \colhead{$c$} & \colhead{$M_V$} } \startdata NGC\,104 & a & $-1.2$ & $2.03$ & $-9.42 $ \\ NGC\,288 & b & $+0.0$ & $0.96$ & $-6.74 $ \\ NGC\,2298\tablenotemark{*} & c & $+0.5$ & $1.28$ & $-6.30 $ \\ Pal\,5\tablenotemark{*} & d & $-0.4$ & $0.70$ & $-5.17 $ \\ NGC\,5139 & a & $-1.2$ & $1.61$ & $-10.29$ \\ NGC\,5272 & a & $-1.3$ & $1.84$ & $-8.93 $ \\ NGC\,6121\tablenotemark{*} & e & $-1.0$ & $1.59$ & $-7.20 $ \\ NGC\,6218\tablenotemark{*} & f & $+0.1$ & $1.29$ & $-7.32 $ \\ NGC\,6254 & a & $-1.1$ & $1.40$ & $-7.48 $ \\ NGC\,6341 & a & $-1.5$ & $1.81$ & $-8.20 $ \\ NGC\,6352 & g & $-0.6$ & $1.10$ & $-6.48 $ \\ NGC\,6397\tablenotemark{*} & h & $-1.4$ & $2.50$ & $-6.63 $ \\ NGC\,6496 & g & $-0.7$ & $0.70$ & $-7.23 $ \\ NGC\,6656\tablenotemark{*} & i & $-1.4$ & $1.31$ & $-8.50 $ \\ NGC\,6712\tablenotemark{*} & j & $+0.9$ & $0.90$ & $-7.50 $ \\ NGC\,6752 & a & $-1.6$ & $2.50$ & $-7.73 $ \\ NGC\,6809 & a & $-1.3$ & $0.76$ & $-7.55 $ \\ NGC\,6838\tablenotemark{*} & k & $+0.2$ & $1.15$ & $-5.60 $ \\ NGC\,7078\tablenotemark{*} & l & $-1.9$ & 
$2.50$ & $-9.17 $ \\ NGC\,7099 & a & $-1.4$ & $2.50$ & $-7.43 $ \\ \enddata \tablecomments{For clusters marked with an asterisk following the cluster name, the value of $\alpha$ is that of the GMF. For all other objects, $\alpha$ is the index of the MF measured near the half-mass radius. Bibliographical references in Column (2) are as follows: (a) \citealt{pdm00}; (b) \citealt{bel02}; (c) \citealt{dem07}; (d) \citealt{koc04}; (e) \citealt{pul99}; (f) \citealt{dem06}; (g) \citealt{pul03}; (h) \citealt{dem00}; (i) \citealt{alb02}; (j) \citealt{and01}; (k) \citealt{pul07}; (l) \citealt{pas04}.} \end{deluxetable} As for the value of $\alpha$, it has been derived as follows. For those clusters for which a GMF index exists from multi-mass models (indicated by an asterisk following the cluster name), that value is used. For all other clusters, $\alpha$ is the index of the power-law MF that, once folded through the derivative of the mass--luminosity (M--L) relationship appropriate for the cluster's metallicity, best fits the LF measured near the half-mass radius ($r_{\rm hm}$), since there the MF is expected to closely approach the GMF \citep{dem00}. For clusters in the sample studied by \citetalias{pdm00}, we adopted the same M--L used in that work, while for the remaining objects the M--L relationship of \citet{bar97} for the appropriate metallicity was used. As pointed out by \citetalias{pdm00}, since the LFs of NGC\,6341 and NGC\,7099 were measured farther out in the cluster, namely at about four times the half-mass radius, in principle a small positive correction to the measured index $\alpha$ should be applied to account for the steepening of the MF with increasing radial distance caused by mass segregation. This correction amounts to less than $0.2$ dex and is included in the values listed in Table\,\ref{tab1}. The correction being relatively small, however, none of our conclusions would change if we were to ignore it.
\begin{figure} \centering \plotone{f1.eps} \caption{Observed trend between MF index $\alpha$ and the central concentration parameter $c$. Clusters are indicated by their NGC (or Pal) index number. Objects for which a GMF index is available are marked with a circled cross. The large crosses indicate the average and $1\,\sigma$ distribution of $\alpha$ and $c$ for clusters with $c$ values above and below $1.4$. The dashed line is an eye-ball fit to the distribution.} \label{fig1} \end{figure} In Figure\,\ref{fig1} we show the run of $\alpha$ as a function of $c$ from Table\,\ref{tab1}. It is immediately obvious that there are no high-concentration clusters with a shallow GMF. It also appears that a relatively low concentration is a necessary but not sufficient condition for a depleted GMF. The median value of $c$ ($1.4$) splits the cluster population roughly in two groups, one with lower and one with higher concentration. The mean GMF index of the first group is $\alpha=-0.3 \pm 0.7$, while the second has a much tighter distribution with $\alpha=-1.4 \pm 0.3$. The average values and the associated $\pm 1\,\sigma$ uncertainties are shown as thick crosses in Figure\,\ref{fig1}. Due to the large dispersion of $\alpha$ at low concentration, it is not possible to derive a precise correlation between $c$ and $\alpha$ over the whole range spanned by these parameters. On the other hand, the GMF becomes undoubtedly less steep as $c$ decreases. The relationship $\alpha+2.5=2.3/c$ is a simple yet satisfactory eye-ball fit to the distribution (dashed line in Figure\,\ref{fig1}). \section{Discussion} The result shown in Figure\,\ref{fig1} is very surprising, since there is a clear absence of centrally concentrated clusters that are depleted in low-mass stars as one would expect of GCs undergoing tidal disruption, i.e. objects that should occupy the upper right quadrant of Figure\,\ref{fig1}.
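The group statistics quoted above can be reproduced directly from the ($\alpha$, $c$) pairs of Table~\ref{tab1}. The short script below is illustrative only: it splits the sample at the median concentration $c = 1.4$ (placing the cluster at exactly $c = 1.4$ in the high-concentration group) and quotes the sample standard deviation as the $1\,\sigma$ spread.

```python
from statistics import mean, stdev

# (alpha, c) pairs from Table 1, in the order listed there
data = [(-1.2, 2.03), (0.0, 0.96), (0.5, 1.28), (-0.4, 0.70), (-1.2, 1.61),
        (-1.3, 1.84), (-1.0, 1.59), (0.1, 1.29), (-1.1, 1.40), (-1.5, 1.81),
        (-0.6, 1.10), (-1.4, 2.50), (-0.7, 0.70), (-1.4, 1.31), (0.9, 0.90),
        (-1.6, 2.50), (-1.3, 0.76), (0.2, 1.15), (-1.9, 2.50), (-1.4, 2.50)]

low = [a for a, c in data if c < 1.4]    # loose clusters
high = [a for a, c in data if c >= 1.4]  # concentrated clusters

# mean and 1-sigma spread of the MF index in each group
alpha_low, sig_low = mean(low), stdev(low)      # ~ -0.3 +/- 0.7
alpha_high, sig_high = mean(high), stdev(high)  # ~ -1.4 +/- 0.3
```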
This is contrary to theoretical expectations, since the same relaxation mechanism that drives a cluster towards higher central density and, eventually, core collapse is also responsible for the dissolution of the cluster, via evaporation \citep{spi87}. Therefore, as a cluster evolves dynamically in the course of its lifetime, one would expect that its GMF should become shallower at the low-mass end while the central concentration parameter should increase. Put differently, severe mass loss should be a sufficient, yet not necessary, condition for core collapse. In this simple scheme, one would for instance expect that NGC\,2298 and NGC\,6218, with $\alpha \simeq 0.1-0.5$, should have a much denser core than NGC\,6397 with $\alpha \simeq -1.4$ (see Table\,\ref{tab1}). Clearly, this picture is not at all consistent with the results shown in Figure\,\ref{fig1}. That cluster concentration somehow creates variations in the IMF is not a plausible explanation for the observed trend between $c$ and $\alpha$, since the star formation process could not possibly know about the future structural properties of the forming cluster. Thus, unless the high $c$ value of the densest clusters is primordial and a presently unknown mechanism exists whereby low-mass star formation is hampered in low density environments, the depletion at the low-mass end of the GMF must result from mass loss via dynamical evolution. It is possible that the clusters that have lost a significant fraction of their original mass have indeed undergone core collapse and have already recovered a normal radial density and surface brightness profile. A large body of theoretical studies exists on the evolution of GCs after core collapse \citep{hen75, sug83, goo93, mak96}, according to which they should undergo a homologous re-expansion, within a thermal timescale, triggered by the energy released by hardening binaries \citep{hut85}.
This process, however, is not necessarily stable and can lead to re-collapse and subsequent gravothermal oscillations \citep{sug83, goo87, coh89, mak96} when the number of stars in the cluster exceeds a threshold that depends on the mass spectrum. \citet{mur90} show that for a Salpeter IMF the re-expansion of the core is stable up to $N_{\rm s} \simeq 3 \times 10^5$, a value comparable with the estimated number of objects present in the most depleted clusters in our sample. It, therefore, appears plausible that these clusters may have re-expanded after core collapse. On the other hand, \citet{mur90} predict the observed core radius to expand over time as $r_{\rm c} \propto t^{0.6}$ and it would take too long for the core to reach a size comparable to that of the pre-collapse phase, unless the core shrank only marginally during collapse or current models underestimate the observed size of $r_{\rm c}$ during the collapse phase of a realistic multi-mass cluster with a finite number of stars. The evaporation timescale increases with increasing total cluster mass $M_{\rm T}$, so one could possibly understand the trend shown in Figure\,\ref{fig1} if clusters of lower central concentration were also those with lower mass. Unfortunately, this is not easy to investigate, as less than half the clusters in our sample have a relatively solid estimate of $M_{\rm T}$, based on multi-mass model fitting, while for the rest $M_{\rm T}$ comes from the total luminosity, i.e. the integrated magnitude $M_{\rm V}$, by assuming a constant mass-to-light ratio. The value of $M_{\rm V}$ is shown for each cluster in Table\,\ref{tab1} and no correlation is found between it and $c$. Similarly, no correlation exists between $M_{\rm T}$ and $c$ for clusters with a reliable value of $M_{\rm T}$. For example, NGC\,2298, NGC\,6218 and NGC\,6656 have masses in the ratio $1:2.4:5.3$, but share the same concentration $c \simeq 1.3$.
Conversely, NGC\,6397 is about half as massive as NGC\,6656, but its $c$ is twice as large. It may be argued that, besides evaporation, other tidal mechanisms are responsible for the depleted GMFs that we find. In fact, the compressive heating that GCs undergo when they cross the Galactic plane (disk shocking) or venture close to the Galactic centre (bulge shocking) can have a much stronger effect than evaporation, depending on the cluster's orbit \citep{agu88, gne97, din99}. However, while bulge and disk shocks can cause significant mass loss \citep{spi87,heg03}, they will preferentially remove low-mass stars {\em only if} these have previously been pushed toward the cluster's periphery by mass segregation \citep{ves97}. Even if for some GCs tidal shocks have been as important as (or more important than) internal two-body relaxation in determining the mass loss rate, the observed trend is still puzzling since in any case tidal shocks should accelerate the evolution of a cluster toward higher central density and core collapse \citep[see][]{spi73, che86, spi87}. Only initially very loose GCs ($c \la 0.5$) are expected to quickly dissolve, via mass loss due to stellar evolution, before reaching core collapse \citep{che90, fuk95}. However, Figure\,\ref{fig1} shows that clusters of intermediate concentration may also undergo severe mass loss without necessarily showing signs of core collapse for as long as a Hubble time. This behaviour is in principle consistent with the results of Fokker--Planck and N-body calculations of realistic clusters, including the effects of stellar evolution and two-body relaxation. \citet{che90} and \citet{tak00} have investigated the temporal evolution of $c$ and $\alpha$ for various initial conditions (total mass, concentration, relaxation time and IMF index).
Their models suggest that, unless the IMF is very steep ($\alpha \simeq -3.5$), energy equipartition and mass segregation will initially drive the cluster towards lower values of the central density, mainly because the cluster shrinks (and therefore stars are lost) more quickly than the core can contract. Eventually the cluster undergoes core collapse, but how long this takes and whether the increase in the central concentration $c$ can measurably affect the surface brightness profile depends on the initial conditions and in particular on the IMF slope. If the latter is very shallow ($\alpha \simeq -1.5$), stellar evolution may remove enough mass from the cluster so that its core is reduced to just a few stars. Even in the deepest collapse phase, no central cusp would be visible in the surface brightness profile. The predictions of \citet{che90} and \citet{tak00} are difficult to compare directly to the actual data, since the measured value of $c$, based on the surface brightness profile dominated by red-giant stars, is not a good tracer of the true cluster density. Nevertheless, the overall picture seems compatible with Figure\,\ref{fig1} and their findings allow us to put forth the following hypotheses to explain the presence of clusters with a shallow GMF and low $c$ and the absence of objects with a shallow GMF and high $c$. The dashed line in Figure\,\ref{fig1} approximately traces the evolutionary path of GCs from their birth in two opposite directions of increasing or decreasing concentration. Clusters born with sufficiently high concentration ($c \ga 1.5$) evolve towards core collapse. Mass loss can be important via stellar evolution in the first $\sim 1$\,Gyr, and to a lesser extent via evaporation or tidal stripping throughout the life of the cluster, but the GMF at any time does not depart significantly from the IMF.
Clusters with $c \la 1.5$ at birth also evolve towards core collapse, but mass loss via stellar evolution and, most importantly, via relaxation and tidal stripping proceeds faster, particularly if the orbit has a short perigalactic distance or frequent disk crossings. Therefore, as the tidal boundary shrinks and the cluster loses preferentially low-mass stars, the GMF progressively flattens. This speeds up energy equipartition, but $c$ still decreases, since the tidal radius shrinks more quickly than the luminous core radius (although the central density, particularly that of heavy remnants, is increasing). These clusters can eventually undergo core collapse, but this might only affect a few stars in the core, thereby making it observationally hard to detect. An alternative possibility is that there were IMF variations. Some clusters with a shallow IMF have undergone severe stellar mass-loss and have therefore expanded considerably. This has led to a lower $c$ and a shallower GMF (larger $\alpha$) because these systems were more prone to tidal stripping. Most of these clusters have already been disrupted, but some survive for a long time in a state of low $c$ and large $\alpha$ before collapse occurs. Clusters with a steeper IMF, on the other hand, have proceeded normally to core collapse. The problem with this alternative possibility is the absence of GCs with a shallow GMF slope and high concentration ($c > 1.5$). Since high initial concentration should lead to a rapid collapse phase \citep{che90} and no mechanism is known that could steepen the GMF over time, originally massive clusters with a shallow IMF and high $c$ should still be visible in the ``zone of avoidance'' (upper right corner) of Figure\,\ref{fig1}. Admittedly, our sample is relatively small and we cannot exclude that clusters exist in this region of the parameter space.
In particular, GCs of high concentration ($c \ga 2$) and low total luminosity ($M_{\mathrm V} \ga -6$) are potential candidates and their GMF should be investigated. Conversely, clusters with low concentration and a steep IMF could have formed, NGC\,6809 being a good candidate. Objects like these should follow the general evolution towards lower $\alpha$ and possibly lower $c$. The exact balance between the decrease in $\alpha$ and in $c$ should depend on the initial conditions, and particularly the initial mass (which, together with the tidal radius set by the Galactic potential, defines the relaxation time). Therefore, until GCs are found in the ``zone of avoidance'' of Figure\,\ref{fig1}, it seems plausible that opposite evolutionary paths exist in the $c$-vs-$\alpha$ plane for clusters born with different central concentration and/or on different orbits, but that the IMF was the same or very similar for all of them. At low masses, the IMF of GCs must approach the steepest GMFs in our sample, while at higher masses it cannot be much shallower than Salpeter without mass loss via stellar evolution causing the rapid disruption of the cluster \citep{che90, tak00}. We presently have no direct measurements of the GC IMF above $0.8$\,M${_\odot}$\,, but a value of $\alpha\simeq -2$ is the preferred outcome of multi-mass Michie--King models (\citetalias{pdm00}; \citetalias{dem06}; \citealt{dem07}). In this sense, the tapered power-law distribution proposed by \citet{dem05} remains a viable hypothesis for the IMF of GCs. \section{Conclusions} We have discovered an empirical correlation between the central concentration and the GMF slope of GCs, whereby only loose clusters have a shallow GMF. A low value of the central concentration seems, therefore, a necessary condition for extensive mass loss leading to cluster disruption.
Although it is possible that GCs formed with a certain degree of mass segregation and that some low-mass stars may have been lost due to tidal truncation before two-body relaxation could act upon them, all of the depleted clusters that we have studied have a dynamical structure consistent with their being in a condition of energy equipartition (\citealt{and01}; \citetalias{dem06}; \citealt{dem07, pul07}). Therefore, the observed trend between $\alpha$ and $c$ can only be understood if either the depleted clusters have undergone collapse and subsequently rebounded, or they are proceeding unnoticed towards core collapse or have already reached it without showing it. In either case, this means that the central concentration parameter $c$ derived from the surface brightness profile is not a good tracer of a cluster's true central density or dynamical state. Our current estimate of the fraction of post-core-collapse clusters may therefore need a complete revision, as a large number of them may be lurking in the Milky Way. A reliable assessment of a cluster's dynamical state requires the study of the complete radial variation of its stellar MF. \acknowledgements We thank an anonymous referee whose thorough comments and suggestions have helped us to considerably improve the presentation of this paper. It is a pleasure to thank Torsten B\"oker and Andres Jord\'an for very useful discussions.
\section{Introduction} Macroscopic systems, in the absence of an external drive, equilibrate with the environment. However, relaxation may be slow, i.e. with a relaxation time which exceeds any attainable observation time~\cite{Palmer}. In that case, only dynamical properties are accessible to observation and the question naturally arises of what can be learnt about equilibrium from dynamics. Paradigmatic examples of slow relaxation are glassy systems~\cite{BCKM} or systems undergoing phase ordering after a sudden temperature quench from above to below the critical point~\cite{BCKM,Bray}. Here we shall look at the problem in the latter context, whose prototypical instance is the quench of a ferromagnetic system. In order to make the presentation as simple as possible, we shall mostly concentrate on the Ising model. The extension to other phase-ordering systems will be discussed at the end of the paper, with the example of the spherical model. Phase ordering in the Ising model is by now a mature subject, generally considered to be well understood. For reviews see Refs.~\cite{Bray,Puri,Jo,Henkel}. Among the many interesting features of the process, in this paper we shall be primarily concerned with the lack of equilibration in any finite time, if the system is infinite. This is frequently referred to with the catchy expression that the system remains permanently out of equilibrium, whose meaning, however, has never been fully clarified. For instance, a similar circumstance arises also when the quench is made to the critical temperature $T_c$, because, due to critical slowing down, again equilibrium is not reached in any finite time. Nonetheless, in that case, the process cannot be regarded as substantially different from one of equilibration, because as time grows the system gets closer and closer to the equilibrium critical state, which is unique in the sense that in the thermodynamic limit it is independent of the boundary conditions (BC).
Instead, in the quench to below $T_c$ the picture is qualitatively different, because, although the state extrapolated from dynamics is unique, the same cannot be said of the equilibrium state, which depends on BC even in the thermodynamic limit. This we have shown in Ref.~\cite{FCZ} (to be referred to as I in the following), where we have investigated the nature of the equilibrium state in the Ising model below $T_c$, under different symmetry-preserving BC. We have found that while periodic boundary conditions (PBC) lead to the usual ferromagnetic ordering, due to the breaking of ergodicity with the consequential spontaneous breaking of the $\mathbb{Z}_2$ up-down symmetry, the scenario changes dramatically with antiperiodic boundary conditions (APBC), because ergodicity breaking is precluded. Then, the system cannot order and complies with the requirement of the transition by remaining critical also below $T_c$, all the way down to $T=0$. We have argued that this new transition, without spontaneous symmetry breaking and without ordering, consists in the condensation of fluctuations. In the $1d$ case, since $T_c=0$, the low temperature phase is shrunk to just $T=0$. Motivated by the existence of such diversity in the equilibrium properties, in this paper we address the next natural question, formulated in the title of the paper, of matching statics and dynamics. Using the equal-time correlation function as the probing observable, we shall see that the asymptotic state, extrapolated from dynamics, that is by taking the $t \to \infty$ limit after the thermodynamic limit, is unique and {\it critical}. Now, the point is that this, which we may call the time-asymptotic state and which, we emphasize, is the same for both choices of BC, is found to coincide with the bona fide equilibrium state, i.e. the one computed from equilibrium statistical mechanics, in the APBC case but to be remote from it in the PBC case. 
Thus, we have one and the same dynamical evolution which, although not reaching equilibrium in any finite time, turns out to be informative of the true equilibrium state in one case (APBC), but not in the other (PBC). It is, then, appropriate to regard the APBC case as one in which equilibrium is approached, just as in a quench to $T_c$, while the PBC case offers an instance of a system remaining permanently out of equilibrium. The poor performance in approaching equilibrium with PBC is traceable to the breaking of ergodicity at the working temperature, which, instead, is preserved when APBC are applied. At the end of the paper we shall argue that the connection between the presence/absence of ergodicity breaking and the absence/presence of equilibration goes beyond the Ising example, by showing that it takes place with the same features in the rather different context of the spherical and mean spherical models. The paper is organized as follows: in section~\ref{II} we formulate the problem. In section~\ref{III} the relation between equilibrium and relaxation in the quench to above $T_c$ is analyzed by using scaling arguments. The cases of the quench to $T_c$ with $d=2$, to $T=0$ with $d=1$ and to below $T_c$ with $d=2$ are analyzed in sections~\ref{IV}, \ref{V} and \ref{VI}, respectively. The spherical and mean spherical models are introduced and investigated in section~\ref{SM}. Concluding remarks are made in section~\ref{CR}.
We consider the Ising model on a lattice of size $V=L^d$, with the usual nearest-neighbour interaction \begin{equation} {\cal H}(\boldsymbol{s}) = -J\sum_{<ij>} s_is_j, \label{Ham.1} \end{equation} where $J > 0$ is the ferromagnetic coupling, $\boldsymbol{s} = [s_i]$ is a configuration of spin variables $s_i=\pm1$ and $<ij>$ denotes a pair of nearest neighbours. We shall study the $d=1$ and $d=2$ cases, where in the thermodynamic limit there is a critical point at $T_c=0$ and $T_c=2.269J$, respectively. Since the system's size is finite, BC must be specified and, because of the major role that these will play in the following developments, it is necessary to enter into some detail from the outset. As anticipated in the Introduction, we shall consider PBC and APBC (precisely, cylindrical antiperiodic BC), implemented by adding to the interaction an extra term ${\cal H}_b(\boldsymbol{s})$ with couplings among spins on the boundary~\cite{Gallavotti,Antal,FCZ}. In the $d=2$ case spins on opposite edges are coupled ferromagnetically, just like spins in the bulk, if PBC are applied. Instead, in the APBC case, spins on one pair of opposite edges are coupled ferromagnetically, while those on the other pair antiferromagnetically. Hence, the boundary term reads \begin{equation} {\cal H}_{b}(\boldsymbol{s}) = -J \sum_{y=1}^L s_{1,y} s_{L,y} - b J \sum_{x=1}^L s_{x,1}s_{x,L}, \label{bdr.1} \end{equation} where $b=\pm$ is the sign of the coupling across the second pair of edges, identifying PBC $(+)$ or APBC $(-)$. In the $d=1$ case this term simplifies to \begin{equation} {\cal H}_b (\boldsymbol{s}) = - bJ s_{1}s_{L}, \label{bdr.2} \end{equation} where $L$ is the length of the chain. It is important to note that both these BC preserve the up-down symmetry of the Ising interaction.
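As a concrete illustration of these boundary terms, the total energy ${\cal H}+{\cal H}_b$ of Eqs.~(\ref{Ham.1}) and~(\ref{bdr.1}) can be coded in a few lines. The following Python sketch (ours; the function name and the $8\times 8$ size are purely illustrative) checks that the uniform configuration is a ground state with PBC, while with APBC it pays the extra cost $2JL$ of $L$ frustrated boundary bonds:

```python
import numpy as np

def ising_energy(s, J=1.0, b=+1):
    """Energy H + H_b of a 2d Ising configuration s (entries +/-1):
    bulk nearest-neighbour bonds plus the boundary term, with b=+1
    for PBC and b=-1 for APBC (one pair of opposite edges coupled
    antiferromagnetically)."""
    # bulk bonds, open in both directions (closed by the boundary term)
    E = -J * np.sum(s[:-1, :] * s[1:, :]) - J * np.sum(s[:, :-1] * s[:, 1:])
    # boundary bonds: ferromagnetic across one pair of edges, sign b across the other
    E += -J * np.sum(s[0, :] * s[-1, :]) - b * J * np.sum(s[:, 0] * s[:, -1])
    return E

L = 8
s_up = np.ones((L, L))                       # uniform (all spins up) configuration
# PBC: all 2L^2 bonds satisfied, E = -2JL^2
assert ising_energy(s_up, b=+1) == -2 * L * L
# APBC: the antiperiodic edge frustrates L bonds, costing 2JL
assert ising_energy(s_up, b=-1) == -2 * L * L + 2 * L
```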
Taking, as is customary, $T_I = \infty$ in order to have an uncorrelated initial state, the system is put in contact with a thermal reservoir at the lower and finite temperature $T_F$ and allowed to evolve according to a dynamical rule which does not conserve the order parameter, like Glauber or Metropolis. This simply corresponds to running a Markov chain at the fixed temperature $T_F$, with the so-called hot start, that is with a uniformly random initial condition. The relaxation process is monitored through the equal-time spin-spin correlation function \begin{equation} \mathcal{C}(r, \epsilon,t^{-1},L^{-1}; b) = \left [ \langle s_{i}(t)s_{j}(t)\rangle - \langle s_{i}(t) \rangle \langle s_{j}(t) \rangle \right ], \label{corr.0} \end{equation} where the angular brackets denote averages taken over the noisy dynamics and the initial conditions, while the square brackets stand for the average over all pairs of sites $(i,j)$ keeping fixed the distance $r$ between $i$ and $j$. In the set of control parameters, $\epsilon=T_F-T_c$ is the temperature difference from criticality, $t^{-1}$ is the inverse time and $L^{-1}$ is the inverse linear size. We are interested in taking both the large-time and the thermodynamic limit of the above quantity, and then in comparing the outcomes, depending on the order in which these two limits have been taken. Letting $t^{-1} \to 0$ first, while keeping $L$ fixed, the equilibrium correlation function is obtained \begin{equation} \lim_{t^{-1} \to 0}\mathcal{C}(r,\epsilon,t^{-1},L^{-1};b) = \mathcal{C}_{\rm eq}(r,\epsilon,L^{-1};b) = \left [ \langle s_{i}s_{j}\rangle_{\rm eq} - \langle s_{i} \rangle_{\rm eq} \langle s_{j} \rangle_{\rm eq} \right ], \label{D3.7} \end{equation} where now the angular brackets stand for the Gibbs ensemble average and the square brackets have the same meaning as in Eq.~(\ref{corr.0}).
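The protocol just described (hot start followed by a Markov chain at $T_F$) is easy to mimic. The sketch below is ours, not the simulation code used for the figures; the single-spin-flip loop and the cylindrical APBC seam are deliberately simplified. It runs a short Metropolis quench and estimates the correlation~(\ref{corr.0}) from a single sample:

```python
import numpy as np

def metropolis_quench(L, T_F, sweeps, b=+1, J=1.0, seed=0):
    """Hot start (T_I = infinity): random spins, then Metropolis updates
    at T_F. b=+1 gives PBC; b=-1 sketches cylindrical APBC by flipping
    the sign of the bonds that wrap around in one direction."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(L, L))
    for _ in range(sweeps * L * L):
        i, j = rng.integers(L), rng.integers(L)
        up = b if j == L - 1 else 1      # bond crossing the seam carries sign b
        dn = b if j == 0 else 1
        nn = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
              + up * s[i, (j + 1) % L] + dn * s[i, (j - 1) % L])
        dE = 2 * J * s[i, j] * nn
        if dE <= 0 or rng.random() < np.exp(-dE / T_F):
            s[i, j] = -s[i, j]
    return s

def corr(s, r):
    """Single-sample estimate of the equal-time correlation at distance r
    along one axis (valid with PBC; the seam needs extra care with APBC)."""
    return float(np.mean(s * np.roll(s, r, axis=0)) - np.mean(s) ** 2)

s = metropolis_quench(L=16, T_F=2.0, sweeps=30, b=+1, seed=1)
assert corr(s, 1) > 0.0      # ferromagnetic short-range order builds up
assert abs(corr(s, 0) - (1 - np.mean(s) ** 2)) < 1e-12
```

In a production run one would of course average over many noise realizations and initial conditions, as the square and angular brackets of Eq.~(\ref{corr.0}) prescribe.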
Then, the subsequent thermodynamic limit implements the prescription~\cite{Gallavotti} for the construction of the equilibrium correlation function in the infinite system \begin{equation} \lim_{L^{-1} \to 0} \lim_{t^{-1} \to 0}\mathcal{C}(r,\epsilon,t^{-1},L^{-1};b) = C_{\rm eq}(r,\epsilon;b). \label{D3.6} \end{equation} The crux of the matter is that, after reversing the order of these limits, the end result might not be the same as the one above, because the large-time limit of the time-dependent correlation function for the infinite system \begin{equation} \lim_{t^{-1} \to 0} \lim_{L^{-1} \to 0} \mathcal{C}(r,\epsilon,t^{-1},L^{-1};b) = C^*(r,\epsilon;b), \label{D3.5} \end{equation} exists but does not necessarily coincide with $C_{\rm eq}(r,\epsilon;b)$. Referring to $C^*(r,\epsilon;b)$ as the time-asymptotic correlation function, if it matches $C_{\rm eq}(r,\epsilon;b)$ then the infinite system equilibrates. If not, it remains permanently out of equilibrium. Which of the two occurs depends on $T_F$ and on the choice of BC. In the quench to $T_F \geq T_c$ both $C_{\rm eq}(r,\epsilon)$ and $C^*(r,\epsilon)$ are independent of the BC choice and do coincide, signaling equilibration. Instead, in the quench to below $T_c$, as we shall see, $C^*(r,\epsilon)$ does not depend on $b$, while $C_{\rm eq}(r,\epsilon;b)$ retains this dependence, implying that equilibration can be achieved at most with one of the two BC, but certainly not with both. As anticipated in the Introduction, the equilibration condition is fulfilled with APBC, but not with PBC. In the following sections we shall substantiate the above statements with results for the $1d$ and $2d$ Ising model. We shall take the aforementioned limits, after setting up the general scaling scheme which unifies static and dynamic phenomena into a single framework. In order to do this, it is convenient to treat separately the three cases: $\epsilon > 0$, $\epsilon = 0$ and $\epsilon < 0$.
\section{Statics and Dynamics: $\boldsymbol{\epsilon > 0}$} \label{III} \begin{figure}[ht] \centering \includegraphics[width=8cm]{FigPPT1SXa.pdf} \hspace{-2cm} \includegraphics[width=8cm]{FigPPT1DXb.pdf} \caption{Parameter space of the $1d$ model (a) and of the $2d$ model (b).} \label{fig:pspace} \end{figure} Let us assume that at a generic point in the $\epsilon > 0$ sector of the three-dimensional space of the parameters $(\epsilon,t^{-1},L^{-1})$, depicted in Fig.\ref{fig:pspace}, the correlation function obeys scaling in the form~\footnote{This is a finite-size extension of the scaling form derived in Ref.~\cite{Janssen}.} \begin{equation} \mathcal{C}(r,\epsilon,t^{-1},L^{-1};b) = \frac{1}{r^a} \mathcal{F} \left ( \frac{r}{\ell}, \frac{\ell}{R},\frac{R}{L};b \right ), \label{anml.1} \end{equation} where the exponent $a$ is related to the anomalous dimension exponent $\eta$ by $a = d-2+\eta$ and to the fractal dimensionality $D$ of the Coniglio-Klein (CK)~\cite{CK,FK} correlated clusters~\cite{CF} by \begin{equation} a = 2(d-D). \label{anml.2} \end{equation} From the exact results~\cite{Stanley,Goldenfeld} \begin{equation} \eta = \left \{ \begin{array}{ll} 1, \;\; \text{for} \;\; d=1,\\ 1/4,\;\; \text{for} \;\; d=2, \end{array} \right . \label{z.0} \end{equation} it follows that \begin{equation} a = \left \{ \begin{array}{ll} 0, \;\; \text{for} \;\; d=1,\\ 1/4,\;\; \text{for} \;\; d=2, \end{array} \right . \label{z.1} \end{equation} and \begin{equation} D = \left \{ \begin{array}{ll} 1, \;\; \text{for} \;\; d=1,\\ 15/8,\;\; \text{for} \;\; d=2, \end{array} \right . \label{z.2} \end{equation} which shows that the CK clusters are compact in $1d$ and fractal in $2d$. Up to a proportionality constant, the scaling variable $\ell$ is the equilibrium correlation length of the infinite system, given by~\cite{Stanley} \begin{equation} \ell(\epsilon) = \left \{ \begin{array}{ll} -[\ln \tanh (J/\epsilon)]^{-1}, \;\; \text{for} \;\; d=1,\\ \epsilon^{-\nu},\;\; \text{with} \;\; \nu=1 \;\; \text{for} \;\; d=2.
\end{array} \right . \label{anml.3} \end{equation} The other characteristic length $R(t)$ obeys the power law~\cite{Janssen} \begin{equation} R(t) = t^{1/z}, \label{anml.3bis} \end{equation} with the dynamical exponent~\cite{1d,Nightingale} \begin{equation} z = \left \{ \begin{array}{ll} 2, \;\; \text{for} \;\; d=1,\\ 2.16,\;\; \text{for} \;\; d=2. \end{array} \right . \label{z.3} \end{equation} The connection between $R(t)$ and the time-dependent correlation length will be clarified shortly and is summarized in Fig.\ref{fig:R}. Both lengths diverge as the critical point, which is at the origin of the reference frame in Fig.\ref{fig:pspace}, is approached along the $\epsilon$ axis and the $t^{-1}$ axis, respectively. The scaling ansatz~(\ref{anml.1}) is rich in information and allows one to predict what should be expected in different regions of the parameter space. The most relevant features are the power-law decay $1/r^{a}$ of correlations at short distance and the large-distance cutoff enforced by the scaling function. The separation between short and large distances is fixed by the correlation length \begin{equation} \xi(\epsilon,t^{-1},L^{-1};b) = \left [ \frac{\int_0^L dr \, r^2 \, \mathcal{C}(r,\epsilon,t^{-1},L^{-1};b) } {\int_0^L dr \, \mathcal{C}(r,\epsilon,t^{-1},L^{-1};b) } \right ]^{1/2}, \label{corrl.0} \end{equation} which scales as \begin{equation} \xi(\epsilon,t^{-1},L^{-1};b) = R f \left ( \frac{R}{\ell}, \frac{\ell}{L};b \right ). \label{crrl.1} \end{equation} The behaviour of $\xi$, as parameters are changed, can be unraveled by the following argument. Suppose that $\ell$ and $L$ are fixed in a region where $\ell \ll L$ and let us survey what happens as the quench unfolds and $R$ grows.
\begin{figure}[ht] \centering \includegraphics[width=8cm]{FigPPT2SXa.pdf} \hspace{-2cm} \includegraphics[width=8cm]{FigPPT2DXb.pdf} \caption{Schematic representation of the saturation of $\xi$ vs $R$ for $\ell \ll L$ (a) and for $\ell \gg L$ (b).} \label{fig:R} \end{figure} Approximating the above equation by \begin{equation} \xi(\epsilon,t^{-1}) \simeq R f \left ( \frac{R}{\ell},0 \right ), \label{crrl.101} \end{equation} in the early stage of the quench, when $R \ll \ell$, it can be further reduced to \begin{equation} \xi \sim R, \label{xiR.1} \end{equation} because the system behaves as if it were approaching the critical point along the $t^{-1}$ axis. As $R$ grows further, equilibrium is eventually reached when $R \sim \ell$ and the correlation length saturates to the limiting value \begin{equation} \xi \sim \ell, \label{elle.1} \end{equation} as illustrated in the left panel of Fig.\ref{fig:R}, with the equilibration time given by $t_{\rm eq} = \ell^z$. The BC are immaterial throughout, because $\xi$ always remains much smaller than $L$, so that the system as a whole behaves as a collection of independent finite systems, on which the far away BC have no effect. By the same reasoning, in the region where $\ell \gg L$ we still have $\xi \sim R$ in the early stage, when $R \ll L$, independently of the BC. But then the BC come into play when the system equilibrates and $\xi$ saturates to the limiting value $L$, as illustrated in the right panel of Fig.\ref{fig:R}, since correlations extend up to distances where the BC are effective. In this connection see Ref.~\cite{Das}. Summarizing, $\xi$ is given by the shortest of the three lengths $(\ell, R,L)$, that is \begin{equation} \xi(\epsilon,t^{-1},L^{-1}) \sim \min(\ell,R,L), \label{crrl.4} \end{equation} in the regions of the parameter space where one of the three is considerably shorter than the other two, with crossovers connecting these regions.
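Eq.~(\ref{crrl.4}) can be tried out numerically with the exact $1d$ correlation length of Eq.~(\ref{anml.3}), recalling that $\epsilon = T_F$ in $1d$ since $T_c = 0$. The sketch below is ours and purely illustrative; in this naive implementation double precision limits the smallest usable $\epsilon$ to roughly $0.06$:

```python
import math

def ell_1d(eps, J=1.0):
    """1d equilibrium correlation length, ell = -1/ln tanh(J/eps).
    For eps below ~0.06 tanh(J/eps) rounds to 1.0 in double precision."""
    return -1.0 / math.log(math.tanh(J / eps))

def xi_estimate(eps, t, L, z=2):
    """Crossover estimate xi ~ min(ell, R, L), with R(t) = t^{1/z}."""
    return min(ell_1d(eps), t ** (1.0 / z), L)

# ell grows on approach to the 1d critical point at T = 0
assert ell_1d(0.5) > ell_1d(1.0) > ell_1d(2.0)
# early times are R-limited; past t_eq = ell^z the system saturates at xi ~ ell
eps, L = 1.0, 10**6
t_eq = ell_1d(eps) ** 2
assert xi_estimate(eps, t=1, L=L) == 1.0                    # xi ~ R
assert xi_estimate(eps, t=100 * t_eq, L=L) == ell_1d(eps)   # xi ~ ell
# for ell >> L it is the system size that cuts off the correlations
assert xi_estimate(eps=0.1, t=10**30, L=10**3) == 10**3     # xi ~ L
```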
It is clear from Fig.\ref{fig:R} that $\xi$ and $R$ coincide at all times if both $\ell$ and $L$ are infinite. Besides the correlation length, it is useful also to keep track of the susceptibility \begin{equation} \chi(\epsilon,t^{-1},L^{-1};b) = \int d^d r \, \mathcal{C}(r,\epsilon,t^{-1},L^{-1};b), \label{susc.0} \end{equation} which is related to the correlation length by \begin{equation} \chi \sim \xi^{2D-d}. \label{susc.1} \end{equation} This is an important relation, because it is independent of the direction of approach to the critical point and depends only on the geometrical nature of the correlated clusters through $D$. According to the above reasoning, when the limits $t^{-1} \to 0$ and $L^{-1} \to 0$ are taken in the $\epsilon > 0$ sector, we necessarily have $\xi \sim \ell$, independently of the order in which these limits are taken, because $\ell$ is finite. Moreover, the finite correlation length guarantees that the system equilibrates independently of the BC: \begin{equation} \lim_{L^{-1} \to 0} \lim_{t^{-1} \to 0} \mathcal{F} \left ( \frac{r}{\ell}, \frac{\ell}{R},\frac{R}{L};b \right ) = \lim_{t^{-1} \to 0} \lim_{L^{-1} \to 0} \mathcal{F} \left ( \frac{r}{\ell}, \frac{\ell}{R},\frac{R}{L};b \right ) = F_{\rm eq} \left (\frac{r}{\ell} \right ). \label{D3.6bis} \end{equation} \subsection{Example: $1d$ system} As an example, let us check the above statements against exact results in the particular case of the $t^{-1} \to 0$ limit of the $1d$ model with finite $L$.
The equilibrium correlation function is given by \begin{equation} \mathcal{C}_{\rm eq}(r,\epsilon,L^{-1};b) = \frac{1}{r^a} \mathcal{F}_{\rm eq} \left ( \frac{r}{L}, \frac{L}{\ell};b \right ), \label{anml.4} \end{equation} where $a=0$ according to Eq.~(\ref{z.1}), while the two explicit forms of the scaling function (see I and Ref.~\cite{Antal}) read \begin{equation} \mathcal{F}^{(p)}_{\rm eq}(z,\zeta) = \frac{\cosh[\zeta(1-z)]}{\cosh(\zeta)}, \label{anml.6} \end{equation} \begin{equation} \mathcal{F}^{(a)}_{\rm eq}(z,\zeta) = \frac{\sinh[\zeta(1-z)]}{\sinh(\zeta)}, \label{anml.7} \end{equation} where we have set \begin{equation} z=r/L, \quad \zeta = L/\ell. \label{anml.8} \end{equation} We have considered a chain of length $2L$ in order to simplify notation. The superscripts $(p)$ and $(a)$ have been used for PBC and for APBC, respectively. The equilibrium correlation length, defined through the second moment as in Eq.~(\ref{corrl.0}), scales as \begin{equation} \xi_{\rm eq}(\epsilon,L^{-1};b) = \ell f_{\rm eq}(\zeta;b), \label{crrl.1bis} \end{equation} with the scaling functions \begin{equation} f_{\rm eq}^{(p)} (\zeta) = \left [2-\frac{2\zeta}{\sinh (\zeta)} \right ]^{1/2}, \quad f_{\rm eq}^{(a)} (\zeta) = \left [2-\frac{\zeta^2}{\cosh(\zeta) - 1} \right ]^{1/2}, \label{crrl.5} \end{equation} from which it follows that \begin{equation} \xi_{\rm eq}^{(p)}(\epsilon,L^{-1}) = \left \{ \begin{array}{ll} \sqrt{2} \, \ell, \;\; \text{for} \;\; \ell \ll L,\\ \frac{1}{\sqrt{3}} \, L,\;\; \text{for} \;\; L \ll \ell, \end{array} \right . \quad \xi_{\rm eq}^{(a)}(\epsilon,L^{-1}) = \left \{ \begin{array}{ll} \sqrt{2} \, \ell, \;\; \text{for} \;\; \ell \ll L,\\ \frac{1}{\sqrt{6}} \, L,\;\; \text{for} \;\; L \ll \ell, \end{array} \right . \label{crrl.10} \end{equation} showing that in the regimes $\ell \ll L$ and $\ell \gg L$ one indeed has $\xi \sim \min(\ell,L)$.
Completing, next, the sequence of limits by letting $L^{-1} \to 0$, it is straightforward to check that the dependence on BC disappears, yielding \begin{equation} \lim_{L^{-1} \to 0} \mathcal{F}^{(p)}_{\rm eq}(z,\zeta) = \lim_{L^{-1} \to 0} \mathcal{F}^{(a)}_{\rm eq}(z,\zeta) = e^{-r/\ell}. \label{anml.9} \end{equation} Using the definition, it is immediately verified that $\chi \sim \ell$ as well and, therefore, that moving toward the critical point along the $\epsilon$ axis one has \begin{equation} \chi \sim \xi, \label{susc.2} \end{equation} in agreement with Eq.~(\ref{susc.1}), because $D=1$ when $d=1$. \section{Statics and Dynamics: $\boldsymbol{\epsilon = 0, d=2}$} \label{IV} When $\epsilon = 0$, the $1d$ and $2d$ cases are quite different and need to be treated separately. In the latter, which we shall now consider, $T_c > 0$ and ergodicity does not break. In the former, instead, $T_c = 0$ and ergodicity may break, depending on the BC. This makes it more akin to the $2d$ quench to below $T_c$, so it will be dealt with in the next section. \begin{figure}[ht] \centering \includegraphics[width=10cm]{fig3.pdf} \caption{Scaling function of the time-dependent correlation function in the quench to $T_c$ of the $2d$ model, demonstrating independence from BC, in the system with $L=256$. PBC (black symbols) and APBC (empty symbols).} \label{fig:collapseTc} \end{figure} The specificity of the quench to $\epsilon = 0$ is that $\ell$ diverges and, consequently, that $\xi$ can be limited only by $R$ or $L$. Thus, when the $t^{-1} \to 0$ limit is taken first and $L$ is kept fixed, $\xi$ crosses over from $R$ to $L$ in the finite time $t_{\rm eq} \sim L^z$, as in the right panel of Fig.\ref{fig:R}, and the system equilibrates to \begin{equation} \mathcal{C}_{\rm eq}(r,L^{-1};b) = \frac{1}{r^{1/4}} \mathcal{F}_{\rm eq} \left ( \frac{r}{L};b \right ), \label{sat.1} \end{equation} which depends on the BC because correlations extend up to the boundary.
Letting next $L^{-1} \to 0$, the BC dependence disappears from the critical correlation function of the infinite system \begin{equation} C_{\rm eq}(r) \sim \frac{1}{r^{1/4}}. \label{sat.2} \end{equation} Instead, if the $L^{-1} \to 0$ limit is taken first, $R$ is the only length left in the problem. This implies $\xi \sim R$ at all times, so that there is no finite equilibration time. However, the time-dependent correlation function \begin{equation} C(r,t^{-1}) = \frac{1}{r^{1/4}} F_c (r/R), \label{sat.3} \end{equation} which is BC independent (see Fig.\ref{fig:collapseTc}), gets arbitrarily close to the equilibrium counterpart~(\ref{sat.2}) as time grows, because the limit \begin{equation} \lim_{t^{-1} \to 0} C(r,t^{-1}) = C^*(r) \sim \frac{1}{r^{1/4}}, \label{sat.4} \end{equation} coincides with it. In summary, in the quench to $\epsilon = 0$ of the $2d$ system, as in the $\epsilon > 0$ case previously considered, the equilibrium correlation function of the infinite system $C_{\rm eq}(r)$ does not depend on the BC and coincides with the time-asymptotic one $C^*(r)$, warranting the conclusion that the system can get arbitrarily close to equilibrium by waiting long enough. Comparing Eqs.~(\ref{sat.1}) and~(\ref{sat.3}), it is evident that the scaling structure is the same, the only difference being in the specific forms of the scaling functions, which is inessential for the present considerations. This shows that the time direction along the $t^{-1}$ axis, as far as scaling is concerned, is just another direction of approach to the critical point, on the same footing as the other two. In addition, it follows straightforwardly from the formal similarity of the two scaling expressions that the susceptibility satisfies Eq.~(\ref{susc.1}) in the form \begin{equation} \chi \sim \xi^{7/4}, \label{suscett.1} \end{equation} irrespective of the direction of approach, with $\xi \sim L$ along the $L^{-1}$ axis and $\xi \sim R$ along the $t^{-1}$ axis.
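The exponent bookkeeping behind Eq.~(\ref{suscett.1}) follows mechanically from $a = d-2+\eta$ and Eq.~(\ref{anml.2}); a short arithmetic sketch (ours), using exact rationals:

```python
from fractions import Fraction

# 2d Ising: eta = 1/4
d = 2
eta = Fraction(1, 4)
a = d - 2 + eta                     # short-distance exponent a = d - 2 + eta
D = d - a / 2                       # from a = 2(d - D)
assert a == Fraction(1, 4) and D == Fraction(15, 8)   # fractal CK clusters
assert 2 * D - d == Fraction(7, 4)                    # chi ~ xi^{7/4}

# 1d Ising: eta = 1, clusters compact
a1 = 1 - 2 + 1                      # a = 0
D1 = 1 - Fraction(a1, 2)            # D = 1
assert a1 == 0 and D1 == 1 and 2 * D1 - 1 == 1        # chi ~ xi
```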
\section{Statics and Dynamics: $\boldsymbol{\epsilon = 0, d=1}$} \label{V} As mentioned above and explained at length in I, in the $1d$ system at $\epsilon = 0$ we are confronted with a radically different situation, because ergodicity, which holds for both BC above $T_c$, is now broken with PBC, but not with APBC. In order to ease the comparison with the less familiar case of a transition without ergodicity breaking, and to highlight the contrast, let us first briefly summarize the well-established concept of ergodicity breaking~\cite{Palmer}. In the PBC case there are two degenerate ground states: the two ordered configurations with all spins either up $\boldsymbol{s}_+ = [s_i=+1]$ or down $\boldsymbol{s}_- = [s_i=-1]$. These, by themselves, form two absolutely-confining ergodic components, which are dynamically disconnected because the activated moves needed to go from one to the other are forbidden at zero temperature. Consequently, time averages coincide with ensemble averages taken with either one of the two broken-symmetry ferromagnetic pure states $P_-(\boldsymbol{s})=\delta_{\boldsymbol{s},\boldsymbol{s}_-}, P_+(\boldsymbol{s})=\delta_{\boldsymbol{s},\boldsymbol{s}_+}$ and {\it do not} coincide with the symmetric ensemble averages taken in the Gibbs state, which is the even mixture of the pure states \begin{equation} P^{(p)}(\boldsymbol{s}) = \frac{1}{2}[P_-(\boldsymbol{s}) + P_+(\boldsymbol{s})]. \label{mixt.0} \end{equation} In such a situation, only time averages are physically meaningful. Conversely, in the APBC case all the $4L$ degenerate ground-state configurations with one defect (or domain wall) belong to the same ergodic component, because the defect can freely sweep the whole system at no energy cost. Then, in this case time and ensemble averages coincide.
The qualitative difference between the two zero-temperature states is well illustrated (see Fig.\ref{fig:1}) by the probability distribution $P_b(m)$ of the magnetization density $m=\frac{1}{2L}\sum_i s_i$, which is demonstrated~\cite{Antal} to be double peaked in the PBC case \begin{equation} P^{(p)}(m) = \frac{1}{2} [\delta(m+1) + \delta(m-1)], \label{mxt.1} \end{equation} \begin{figure}[ht] \centering \includegraphics[width=5cm]{fig1a-sublabel.pdf} \includegraphics[width=5cm]{fig1b-sublabel.pdf} \caption{Magnetization density distributions at $\epsilon=0$ in the $1d$ Ising model, with $m_\pm = \pm 1$. The spikes in panel (a) stand for $\delta$ functions.} \label{fig:1} \end{figure} and uniform over the $[-1,1]$ interval in the APBC case \begin{equation} P^{(a)}(m) \to \left \{ \begin{array}{ll} 1/2, \;\; \text{for} \;\; m \in [-1,1],\\ 0,\;\; \text{for} \;\; m \notin [-1,1]. \end{array} \right . \label{mxt.2} \end{equation} So, if we now take the $t^{-1} \to 0$ limit while keeping $L^{-1}$ fixed, we find BC-related differences in the results. With PBC, as explained above, the meaningful averages are those in the broken-symmetry pure states, yielding \begin{equation} \mathcal{C}^{(p)}_{\rm eq}(r,L^{-1}) = [\langle s_is_j \rangle_\pm - \langle s_i \rangle_\pm \langle s_j \rangle_\pm] = 0, \label{1dI.0} \end{equation} where the angular brackets stand for the average with respect to $P_-(\boldsymbol{s})$ or $P_+(\boldsymbol{s})$. The vanishing of correlations for any $r$ holds independently of $L$ and clearly implies that also the correlation length vanishes. Notice that if the correlation function had been extracted by taking the $\epsilon \to 0$ limit of the Gibbs average taken with the distribution~(\ref{mixt.0}), as in Eq.~(\ref{anml.6}), the result would have been \begin{equation} \mathcal{C}^{(p)}_{\rm eq}(r,L^{-1}) = 1, \label{1dI.16} \end{equation} which is independent of $r$ and does not decay. However, this would have been just an artefact of the mixture.
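The two distributions~(\ref{mxt.1}) and~(\ref{mxt.2}) can be verified by brute-force enumeration of the ground states of a short closed chain. In the sketch below (ours; the size $N=8$ is illustrative) the APBC histogram is flat in the interior of $[-1,1]$, with half weight at the endpoints, as expected for the discretization of a uniform density:

```python
from itertools import product
from collections import Counter

def ground_state_magnetizations(N, b):
    """Enumerate all 2^N configurations of a closed N-spin chain with
    boundary-bond sign b (+1 PBC, -1 APBC) and histogram the
    magnetization densities of the degenerate ground states."""
    best, states = None, []
    for s in product((-1, 1), repeat=N):
        E = -sum(s[i] * s[i + 1] for i in range(N - 1)) - b * s[-1] * s[0]
        if best is None or E < best:
            best, states = E, [s]
        elif E == best:
            states.append(s)
    return Counter(sum(s) / N for s in states)

# PBC: only the two ordered ground states, i.e. two delta peaks at m = +/-1
assert ground_state_magnetizations(8, +1) == Counter({1.0: 1, -1.0: 1})
# APBC: 2N degenerate one-defect ground states, spread evenly over [-1, 1]
hist = ground_state_magnetizations(8, -1)
assert sum(hist.values()) == 2 * 8
assert all(hist[m] == 2 for m in hist if abs(m) < 1)   # flat interior
assert hist[1.0] == hist[-1.0] == 1                    # interval endpoints
```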
Conversely, in the APBC case the ensemble Gibbs average, as calculated in Eq.~(\ref{anml.7}), gives the correct time-average result because ergodicity is not broken. Hence, in the $\epsilon \to 0$ limit, from Eq.~(\ref{anml.7}) one has \begin{equation} \mathcal{C}^{(a)}_{\rm eq}(r,L^{-1}) = (1-r/L). \label{1dI.17} \end{equation} The dependence on $r/L$ in the above expression reveals that correlations extend over a distance of order $L$, in agreement with the general argument expounded in section~\ref{III}. Hence, by letting $L^{-1} \to 0$ the correlation length diverges, leading to the conclusion that the state at the origin of the parameter space is a {\it critical point} for the APBC system, where the correlation function displays the constant behaviour \begin{equation} C^{(a)}_{\rm eq}(r) = 1. \label{corrl.01} \end{equation} Contrary to Eq.~(\ref{1dI.16}), now the lack of decay is a real physical effect, which corresponds to the critical power-law decay $1/r^{a}$ with a vanishing exponent $a$, due to the compactness of the CK correlated clusters. When the sequence of limits is reversed, after taking the thermodynamic limit we are again in the situation in which $R$ is the only length in the problem. Therefore $\xi \sim R$, as in the previous section, and we get the BC-independent result \begin{equation} C(r,t^{-1}) = \frac{1}{r^a} F (r/R ), \label{q.1} \end{equation} with $a=0$. The function $F(x)$ is known from exact analytical computation with PBC~\cite{1d,Bray} and is given by \begin{equation} F(x) = \mathrm{erfc}\,(x/2), \quad \text{with} \quad x=r/R. \label{quench.1} \end{equation} That the same scaling function applies also to the case of APBC is demonstrated by the numerical data displayed in Fig.~\ref{fig:scaledC}, which have been obtained by simulating the quench dynamics with the Metropolis algorithm on a system with $L=10^5$, after imposing PBC and APBC.
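As an analytic complement to the simulation data, the master curve can be evaluated directly. A minimal sketch (ours) of the combined form $C(r,t)=\mathrm{erfc}\,(r/2R)$ with $R = t^{1/2}$, exhibiting both the approach to the plateau $C^* = 1$ at fixed $r$ and the cutoff beyond $R(t)$ at fixed $t$:

```python
import math

def C_quench(r, t):
    """1d quench correlation C(r,t) = erfc(r/2R), with R = t^{1/z}, z = 2."""
    return math.erfc(r / (2.0 * math.sqrt(t)))

# fixed r, growing t: the correlation climbs to the critical plateau C* = 1
assert C_quench(10, 10**2) < C_quench(10, 10**4) < C_quench(10, 10**8)
assert abs(C_quench(10, 10**8) - 1.0) < 1e-3
# fixed t, growing r: cutoff beyond the growing length R(t)
assert C_quench(100, 100) < 1e-6 < C_quench(1, 100)
```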
The plot shows that the above result indeed holds irrespective of the BC choice, because the PBC and APBC data superimpose on the theoretical curve of Eq.~(\ref{quench.1}) with great accuracy, as long as $R(t) \ll L$. The existence of an endlessly growing correlation length $R(t)$ means that the relaxation dynamics along the $t^{-1}$ axis drives both systems, with PBC and with APBC, toward the same asymptotic critical state at the origin \begin{figure}[ht] \centering \includegraphics[width=10cm]{fig5.pdf} \caption{Collapse on the master curve of Eq.~(\ref{quench.1}) of the data for $C(r,t^{-1},L^{-1})$ in the time regime $R \ll L$. PBC (black symbols) and APBC (empty symbols). The data for $R/L=10^{-4},10^{-3}$ have been obtained with $L=10^5$, those with $R/L=10^{-2}$ with $L=10^4$.} \label{fig:scaledC} \end{figure} with the unique time-asymptotic correlation function given by \begin{equation} \lim_{t^{-1} \to 0} C(r,t^{-1}) = C^*(r)= 1, \label{elle.2} \end{equation} which coincides with the APBC equilibrium result in Eq.~(\ref{corrl.01}). So, if we compare the asymptotic result of Eq.~(\ref{elle.2}) with the APBC static one of Eq.~(\ref{corrl.01}), and with the PBC equilibrium result of Eq.~(\ref{1dI.0}), we see, as stated in the Introduction, that the APBC system tends toward equilibrium, although with an infinite relaxation time, while the PBC system remains permanently out of equilibrium. It is evident that the origin of the diversity of behaviours lies in the presence or absence of ergodicity breaking. In fact, we shall see in the next section that the same behaviour occurs in the quench of the $2d$ system to below the critical point. In order to complete the picture of critical behaviour, let us check the validity of Eq.~(\ref{susc.1}).
From Eqs.~(\ref{anml.9},\ref{1dI.17},\ref{quench.1}) it follows that along the three directions one has $\xi \sim \ell, \xi \sim R, \xi \sim L$, as well as $\chi \sim \ell, \chi \sim R, \chi \sim L$, yielding \begin{equation} \chi \sim \xi, \label{susc.10} \end{equation} independently of the direction of approach to the critical point, as it should be since $D=1$. \section{Statics and Dynamics: $\boldsymbol{\epsilon < 0, 2d}$} \label{VI} As in the previous case, the nature of the equilibrium state of the $2d$ model below $T_c$ depends strongly on the BC, even in the thermodynamic limit. In I we have shown that the segment with $\epsilon < 0$ in the parameter space (see right panel of Fig.\ref{fig:pspace}) is the coexistence line of states spontaneously magnetized in opposite directions when PBC are imposed, while it is a line of critical points with APBC. Since this is a crucial point, let us overview the equilibrium picture before turning to the discussion of the quench dynamics. \begin{figure}[ht] \centering \includegraphics[width=10cm]{pannello2-sublabel.pdf} \caption{Typical equilibrium configurations below $T_c$. In the PBC case (a) one black domain of up spins fills the entire system. Thermal fluctuations produce the small white domains of down spins. In the APBC case (b) there are two large domains separated by one interface cutting across the system. Within each domain there are small patches of reversed spins due to thermal fluctuations.} \label{fig:conf} \end{figure} \subsection{{\bf Equilibrium with PBC}} \label{PBCsotto} When PBC are imposed, two confining components, with spins aligned either prevalently up or prevalently down, are formed in phase space. A configuration typical of the up component is shown in the left panel of Fig.\ref{fig:conf}. In the thermodynamic limit these components become absolutely confining, ergodicity breaks down and, therefore, we are confronted with the same situation discussed in the $1d$ case at $T=0$.
Namely, the Gibbs state becomes the even mixture of the two broken-symmetry pure states, as in Eq.~(\ref{mixt.0}), that is \begin{equation} P_{\rm eq}^{(p)}(\boldsymbol{s}) = \sum_\alpha p(\alpha) P_\alpha (\boldsymbol{s}). \label{mix.1} \end{equation} Here, $\alpha = \pm$ is the component label, the mixing probability is uniform, $p(\alpha)=1/2$, and $P_\alpha (\boldsymbol{s})$ is the ferromagnetic pure state. The nonvanishing spontaneous magnetization density $m_\alpha$ is given by \begin{equation} m_-=-m_+, \quad |m_\alpha| = |\epsilon|^\beta, \quad \beta=1/8. \label{mix.2} \end{equation} Using the above definitions and rewriting the Gibbs average in terms of the component averages, i.e. $\langle \cdot \rangle_{\rm eq} = \sum_\alpha p(\alpha) \langle \cdot \rangle_\alpha$, the equilibrium correlation function can be rearranged in the form \begin{equation} C_{\rm eq}^{(p)}(r,\epsilon) = \overline{ \langle (s_i -m_\alpha) (s_{i+r} -m_\alpha) \rangle_\alpha} \; + \; \overline{[\langle s_i \rangle_\alpha - \overline{m_\alpha}] [\langle s_{i+r} \rangle_\alpha - \overline{m_\alpha}]}, \label{mix.4} \end{equation} where the overline denotes averaging with respect to $p(\alpha)$. The first contribution is the average over components of the {\it intra}-component correlation function $\langle \psi_i \psi_{i+r} \rangle_\alpha$, where the variables $\psi_i = s_i - m_\alpha$ represent the thermal fluctuations in the pure state $P_\alpha (\boldsymbol{s})$. As is intuitively clear, by symmetry these deviations from the average do not depend on $\alpha$, so we shall use the notation $G_{\rm eq}(r,\epsilon)$ for $\langle \psi_i \psi_{i+r} \rangle_\alpha$. At low $T_F$ this quantity is short ranged, since in the broken-symmetry state the correlation length $\xi_\psi$ of the $\psi$ variables vanishes as $T_F \to 0$.
The second term, instead, represents the {\it inter}-component contribution, which reduces to $m^2_\alpha$, since $\overline{m_\alpha}=0$ and $m^2_\alpha$ is independent of $\alpha$. Thus, in the end, from the Gibbs average we have \begin{equation} C_{\rm eq}^{(p)}(r,\epsilon) = G_{\rm eq}(r,\epsilon) +m^2_\alpha. \label{mix.5} \end{equation} It is important, for what follows, to keep in mind that the constant term $m^2_\alpha$, which is the variance of the variable $m_\alpha$ distributed according to $p(\alpha)$, arises exclusively from the mixing, just like the constant term in Eq.~(\ref{1dI.16}). Therefore, in the PBC case the only dynamical variables are the $\psi_i$, which means that the dynamical rule updates $\psi_i$, but not $m_\alpha$. The magnetization distribution exhibits the double-peak structure~\cite{Binder,Bruce} which, in the thermodynamic limit, becomes the sum of the two $\delta$ functions \begin{equation} P^{(p)}(m|\epsilon) = \frac{1}{2} [ \delta(m-m_-) + \delta(m-m_+)]. \label{mix.60} \end{equation} Hence, as explained in the $1d$ case, the meaningful averages are those taken with the broken-symmetry ensembles $P_\alpha (\boldsymbol{s})$, which coincide with time averages and give \begin{equation} C_{\alpha,\rm eq}^{(p)}(r,\epsilon) = G_{\rm eq}(r,\epsilon). \label{mix.5bis} \end{equation} \subsection{{\bf Equilibrium with APBC}} \label{APBCeq} When APBC are imposed, as in the $1d$ case, ergodicity does not break. As explained in I, there is only one ergodic component, whose typical configurations at sufficiently low $T_F$ are composed of two large ordered domains, separated by one interface cutting across the system and sweeping through it, as illustrated in the right panel of Fig.\ref{fig:conf}.
This suggests splitting the spin variable into the sum of two independent components \begin{equation} s_i = m_{\alpha(i)}+ \psi_i, \label{split.1} \end{equation} where $\alpha(i)=\pm$ is the label of the domain to which the site $i$ belongs and $\psi_i=s_i - m_{\alpha(i)}$ is, as before, the thermal fluctuation variable. The significant difference with respect to the previous case is that now $\psi_i$ and $m_{\alpha(i)}$ are both dynamical variables, since the fluctuations of the latter are due not to the mixing of pure states, but to the transit of the interface through the site $i$, which means that the dynamical rule updates both $\psi_i$ and $m_{\alpha(i)}$. Using the independence of these variables and the vanishing of the averages $\langle s_i \rangle_{\rm eq} = \langle m_{\alpha(i)} \rangle_{\rm eq} =\langle \psi_i \rangle_{\rm eq} =0$, the correlation function can be written as the sum of two contributions \begin{equation} C_{\rm eq}^{(a)}(r,\epsilon,L^{-1}) = G_{\rm eq}(r,\epsilon) + D_{\rm eq}(r,\epsilon,L^{-1}), \label{mix.6} \end{equation} which have quite different properties. The first one, which is the same as in Eq.~(\ref{mix.5}), is short ranged. The $L$ dependence has been neglected, because we may always assume that the condition $\xi_\psi \ll L$ is realized. The second one, which contains the correlations of the background variables $m_{\alpha(i)}$, i.e. \begin{equation} D_{\rm eq}(r,\epsilon,L^{-1}) = \frac{1}{V} \sum_i \langle m_{\alpha(i)} m_{\alpha(i+r)} \rangle_{\rm eq}, \label{mix.7} \end{equation} has been studied numerically in I and scales as \begin{equation} D_{\rm eq}(r,\epsilon,L^{-1}) = \frac{1}{r^a}Y(\epsilon,r/L), \quad \text{where} \quad Y(\epsilon,x) = m^2_\alpha (1-x).
\label{mix.8} \end{equation} We have retained the power-law prefactor $r^{-a}$, even though now $a=0$ because the correlated clusters of the background variables are compact, in order to emphasize the similarity with Eq.~(\ref{sat.1}) and to make it evident by inspection that the correlation length $\xi_m$ of these variables coincides with $L$. Notice that $\epsilon$ does not enter the scaling function, but only its amplitude, through $m^2_\alpha$. From the divergence of $\xi_m$ in the thermodynamic limit, it follows that the whole segment on the $\epsilon$ axis with $\epsilon < 0$ is a locus of critical points, as anticipated above. The corresponding critical properties can be extracted by using $L^{-1}$ as the parameter of approach to criticality. It should be clear that this is bulk criticality, in no way related to the properties of the interface, to which the attention of previous studies of the APBC model was primarily directed. In I we have shown that the exponents satisfy the relations $\dot{\beta}/\dot{\nu} = 0$ and $\dot{\gamma}/\dot{\nu}=d$, where the dots identify the exponents with respect to $L^{-1}$; e.g., from $\xi_m \sim L$ follows $\dot{\nu} = 1$. This implies $\dot{\beta} = 0$ and $\dot{\gamma} = d$. Hence, the hyperscaling relation $2\dot{\beta} + \dot{\gamma} = \dot{\nu}d$ is satisfied, suggesting that the upper critical dimensionality might diverge. So, if now we take the thermodynamic limit, from Eqs.~(\ref{mix.6}) and~(\ref{mix.8}) we get \begin{equation} C_{\rm eq}^{(a)}(r,\epsilon) = G_{\rm eq}(r,\epsilon) + \frac{m^2_\alpha}{r^a}, \label{mix.6bis} \end{equation} and, consequently, the susceptibility of the background variables $\chi^{(a)}_m$ diverges like \begin{equation} \chi^{(a)}_m(\epsilon,L^{-1}) \sim L^d, \label{mix.9bis} \end{equation} in agreement with Eq.~(\ref{susc.1}), the $m$-CK clusters being compact.
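The divergence of the background susceptibility follows directly from the scaling form~(\ref{mix.8}). As a minimal numerical sketch (in Python, for illustration only, taking $d=1$ for simplicity and an arbitrary amplitude $m^2_\alpha$), summing $D_{\rm eq}(r)=m^2_\alpha(1-r/L)$ over the system reproduces the $L^d$ growth:

```python
import numpy as np

m2 = 0.64   # assumed amplitude m_alpha^2 (illustrative value)

def chi_background(L):
    """Background susceptibility obtained by summing the scaling form
    D_eq(r) = m2 * (1 - r/L), i.e. Eq. (mix.8) with a = 0, over a d = 1 chain."""
    r = np.arange(L)
    return np.sum(m2 * (1.0 - r / L))

# Doubling L doubles chi: the L^d divergence (here d = 1) produced by compact,
# system-spanning correlated clusters of the background variables.
print(chi_background(1000) / chi_background(500))   # ~ 2
```

The ratio approaches $2^d$ because the correlated clusters are compact ($a=0$) and extend over the whole system, $\xi_m \sim L$.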
The strong magnetization fluctuations, implied by the divergence of the susceptibility, are indeed exhibited by the distribution $P^{(a)}(m)$ which, instead of being double peaked as in Eq.~(\ref{mix.60}), has been shown in I to be uniform over the interval $[m_-,m_+]$. The qualitative difference between $P^{(p)}(m)$ and $P^{(a)}(m)$ is the same as that previously analyzed in the $1d$ case and schematically represented in Fig.\ref{fig:1}. The uniformity of $P^{(a)}(m)$ is the distinctive feature which highlights the difference between condensation of fluctuations and the usual ordering transition associated with the double-peak structure of Eq.~(\ref{mix.60}). \subsection{Relaxation Dynamics} When the relaxation of the infinite system is studied, by taking the thermodynamic limit first, the dependence on BC is expected to disappear, because at any finite time the correlation length is limited by $R$. This is confirmed by the snapshots of the typical configurations (see Fig.\ref{fig:conf1}) taken after the quench to $T_F/T_c = 0.79$. The top panel depicts the PBC case and the bottom panel the APBC one. In each panel time increases from left to right. The first three snapshots, taken at $t=1,10,100$, display the self-similar morphology characteristic of coarsening domains, which does not appear to be affected by the type of BC imposed, because $R \ll L$. The influence of the BC is evident, instead, in the fourth snapshot, taken at $t=10^5$, when $R \gtrsim L$ and the system has equilibrated. \begin{figure}[ht] \centering \includegraphics[width=10cm]{pannello-sublabel.pdf} \caption{The two panels show the sequence of snapshots taken at $t=1, 10, 100, 10^5$ with PBC (a) and APBC (b), after a quench to $T_F=1.8$, with $L=256$. Time increases from left to right. The first three configurations belong to the coarsening regime and show independence from BC.
The last pair of configurations is morphologically similar to those in Fig.~\ref{fig:conf} and shows that at $t=10^5$ the system has equilibrated.} \label{fig:conf1} \end{figure} The morphology of the configurations, with large compact growing domains containing in their interior small patches of thermal fluctuations, suggests generalizing the split of variables~(\ref{split.1}) to the off-equilibrium regime by $s_i(t) = m_{\alpha(i,t)}+ \psi_i(t)$, where $\alpha(i,t)$ is the label of the domain to which the site $i$ belongs at time $t$. Then, as in Eq.~(\ref{mix.6}), the correlation function separates into the sum of two contributions \begin{equation} C(r,\epsilon,t^{-1}) = G_{\rm eq}(r,\epsilon) + D(r;\epsilon,t^{-1}), \label{gen.19} \end{equation} where the first one is BC-independent, time-independent and identical to the analogous term appearing in Eqs.~(\ref{mix.5}) and~(\ref{mix.6}), because thermal fluctuations equilibrate quickly. The second contribution contains the correlations of the background variables and obeys scaling in the form \begin{equation} D(r,\epsilon,t^{-1}) = \frac{m^2_\alpha}{r^a} F(r/R), \label{gen.20} \end{equation} where $a=0$, due to the compactness of domains, the growth law $R(t) = t^{1/z}$ is the same as in Eq.~(\ref{anml.3bis}) with $z=2$, and the $\epsilon$ dependence has been factorized in the amplitude $m^2_\alpha$. Comparing with Eq.~(\ref{sat.3}), we see that the same behavior as in the quench to $T_c$ is obtained, apart from the change of the exponents $z$ and $a$, and from the specific forms of the functions $F_c(x)$ and $F(x)$. \begin{figure}[ht] \centering \includegraphics[width=10cm]{fig8.pdf} \caption{Collapse on the master curve of Eq.~(\ref{quench.1}) of the data for $C(r,t^{-1},L^{-1})$ in the time regime $R \ll L$, system size $L=256$. PBC (black symbols) and APBC (empty symbols).
The continuous line is the plot of the Ohta-Jasnow-Kawasaki function defined in Eq.~(\ref{OJK}).} \label{fig:scaledCbis} \end{figure} The above statements are substantiated by the plot in Fig.\ref{fig:scaledCbis} of the numerical data for the equal-time correlation function, generated for a quench to $T_F/T_c=0.666$, which corresponds to $\epsilon = -0.9$, with $L=256$ and with both PBC and APBC. The APBC data have been circularly averaged to smooth out the anisotropy induced by the cylindrical BC. The good collapse of the data, in the time regime such that $R \ll L$, shows that for the chosen value of $T_F$ the thermal fluctuations contribution is negligible. Moreover, the master curve $F(x)$ compares well with the Ohta-Jasnow-Kawasaki~\cite{OJK} approximate result \begin{equation} F(x) = \left (\frac{2}{\pi} \right ) \arcsin (\gamma), \quad \gamma = \exp(-x^2/b), \label{OJK} \end{equation} where $b$ is a constant, as demonstrated by Fig.\ref{fig:scaledCbis}. Therefore, the relaxation to below $T_c$ is not qualitatively different from the one to $T_c$. Both are coarsening processes and neither depends on the imposed BC. Differences between the two lie in the quantitative details, such as the values of the exponent $z$, the dimensionality of the correlated clusters and the shape of the scaling functions. The implication is that, in the quench to below $T_c$ as well, the system tends toward a critical state, because the time-dependent correlation length $R$ diverges, eventually yielding the time-asymptotic critical correlation function \begin{equation} C^*(r,\epsilon)= G_{\rm eq}(r,\epsilon) + \frac{m^2_\alpha}{r^a}. \label{dyna.5} \end{equation} It is then evident, according to the discussion made at the end of subsection~\ref{APBCeq}, that this asymptotic form, which we emphasize once more is the same for both choices of BC, matches $C^{(a)}_{\rm eq}(r;\epsilon)$ but not $C^{(p)}_{\rm eq}(r;\epsilon)$.
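As a minimal numerical sketch (with arbitrary, illustrative choices of the constant $b$ and of the amplitude $m^2_\alpha$), the form~(\ref{OJK}) can be checked against its limiting values and against the scaling property behind the data collapse, namely that $D$ depends on $(r,t)$ only through $x=r/R(t)$ with $R(t)=t^{1/z}$, $z=2$:

```python
import numpy as np

def F_ojk(x, b=1.0):
    """Ohta-Jasnow-Kawasaki scaling function of Eq. (OJK); b is a constant."""
    return (2.0 / np.pi) * np.arcsin(np.exp(-x ** 2 / b))

print(F_ojk(0.0))            # ~ 1: full correlation at zero separation
print(F_ojk(5.0) < 1e-5)     # True: fast decay for r >> R

# Scaling check behind the collapse: D = m2 * F(r / R(t)) is invariant under
# r -> lam * r, t -> lam^z * t (z = 2), so all times fall on one master curve.
m2, lam, z, r, t = 0.9, 3.0, 2, 2.0, 50.0
C1 = m2 * F_ojk(r / t ** (1.0 / z))
C2 = m2 * F_ojk(lam * r / (lam ** z * t) ** (1.0 / z))
print(abs(C1 - C2) < 1e-12)  # True
```

The same invariance check, with $z$ and $F$ replaced by their critical counterparts, applies to the quench to $T_c$.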
Finally, recalling that $a=0$, it is straightforward to see from Eq.~(\ref{gen.20}) that the background susceptibility scales like \begin{equation} \chi (\epsilon,t^{-1}) \sim R^d, \label{dyna.6} \end{equation} in agreement with the result~(\ref{mix.9bis}) for $\chi^{(a)}_m(\epsilon,L^{-1})$. In conclusion, in the PBC case, as anticipated in the Introduction, the asymptotic state and the equilibrium one are remote from one another, and the system may be regarded as remaining strongly out of equilibrium, because the former exhibits long-range correlations, which are absent in the latter. In the APBC case, instead, both the asymptotic and the equilibrium state are critical and with the same universal properties; hence the system equilibrates, although with an infinite equilibration time, just as in the quench to $T_c$. \subsection{Summary} So far we have shown that when the Ising model is quenched into the two-phase region, i.e. to below $T_c$ for $d=2$ and to $T_F=0$ for $d=1$, the APBC system equilibrates and the PBC one remains off equilibrium. The basic elements of the mechanism underlying this phenomenology are as follows: \begin{enumerate} \item To different BC, in principle, there correspond different statistical ensembles. \item These ensembles become {\it equivalent} when the limits are taken according to the sequence: $L^{-1} \to 0$ first and then $t^{-1} \to 0$, for all temperatures $T_F$. \item Instead, the ensembles {\it may} become {\it non equivalent}, depending on the BC, when the limits are taken in the reverse sequence: $t^{-1} \to 0$ first and then $L^{-1} \to 0$ with $T_F < T_c$. \item Equivalence fails with PBC because of ergodicity breaking, and holds with APBC since ergodicity is preserved. \item Ergodicity breaking induces spontaneous symmetry breaking, which makes correlations short-ranged. \item Instead, when ergodicity holds an unusual type of criticality sets in, with long-range correlations and {\it compact} correlated domains.
\end{enumerate} In the next section we shall show that this is not just a peculiarity of the Ising model, but that it is a more general phenomenon, since it takes place with the same characteristics also in the quench to below the critical point of the spherical model, without invoking the imposition of different types of BC. In fact, two different ensembles arise not from the choice of BC, which is taken to be the standard PBC one, but from enforcing the spherical constraint either sharply or smoothly. These ensembles turn out to be equivalent or non equivalent, just as in the Ising case, depending on the order of the $L^{-1} \to 0$ and $t^{-1} \to 0$ limits. \section{Spherical models} \label{SM} \subsection{Equilibrium} Let us briefly recall what the spherical model is about starting from equilibrium, which means that the $t^{-1} \to 0$ limit has been taken beforehand. Consider a classical paramagnet in the volume $V=L^d$ and with the energy function~\cite{Ma} \begin{equation} {\cal H}(\boldsymbol{\varphi}) = \int_V d\vec r \, \varphi(\vec r) \left (-\frac{1}{2}\nabla^2 \right )\varphi(\vec r), \label{gauss.1} \end{equation} where $\boldsymbol{\varphi}$ stands for a configuration of the local, continuous and unbounded spin variable $\varphi(\vec r)$. PBC are understood throughout. Due to its bilinear character, the above Hamiltonian can be diagonalized by Fourier transform \begin{equation} {\cal H} = \frac{1}{2V}\sum_{\vec k} k^2|\varphi_{\vec k}|^2. \label{gauss.1bis} \end{equation} In the spherical model (SM) of Berlin and Kac~\cite{BK} a coupling among the modes is induced by the imposition of an overall sharp constraint on the square magnetization density \begin{equation} \mathit{s}(\boldsymbol{\varphi}) = \frac{1}{V} \int_V d \vec r \, \varphi^2(\vec r) = \frac{1}{V^2}\sum_{\vec k} |\varphi_{\vec k}|^2 = 1. 
\label{gauss.2} \end{equation} Then, in thermal equilibrium the statistical ensemble is given by \begin{equation} P_\textrm{SM}(\boldsymbol{\varphi}) = \frac{1}{Z_\textrm{SM}} e^{-\beta {\cal H}(\boldsymbol{\varphi})} \, \delta \left (\mathit{s}(\boldsymbol{\varphi})-1 \right ), \label{Gauss.4} \end{equation} where $Z_\textrm{SM}$ is the partition function. A variant of the model, called the mean-spherical model (MSM)~\cite{LW,KT}, is obtained by imposing the constraint in the mean: an exponential bias is introduced in place of the $\delta$ function \begin{equation} P_\textrm{MSM}(\boldsymbol{\varphi}) = \frac{1}{Z_\textrm{MSM}}e^{-\beta [{\cal H}(\boldsymbol{\varphi}) +\frac{\kappa}{2} {\cal S}(\boldsymbol{\varphi})]}, \label{Gauss.3} \end{equation} where ${\cal S}(\boldsymbol{\varphi}) = V\mathit{s}(\boldsymbol{\varphi})$ and the parameter $\kappa$ must be adjusted so as to satisfy the requirement \begin{equation} \langle \mathit{s}(\boldsymbol{\varphi}) \rangle_\textrm{MSM} = 1. \label{msph.1} \end{equation} Although it is common usage to refer to these as models, it should be clear from Eqs.~(\ref{Gauss.4}) and~(\ref{Gauss.3}) that we are dealing with two conjugate ensembles, distinguished by whether the density $\mathit{s}$ is conserved or allowed to fluctuate. In both models there exists a phase transition at the same critical temperature $T_c$, above which they are equivalent and below which they are not, which means that the nature of the low-temperature phase is different. It is worthwhile to go into some detail here~\cite{CSZ}, because the point is quite illuminating on the issue of equivalence or the lack thereof. Let us separate, in $\mathit{s}$, the excitations from the ground-state contribution \begin{equation} \mathit{s} = \mathit{s}_0 + \mathit{s}^*, \quad \text{with} \quad \mathit{s}_0 = \frac{1}{V^2}\varphi_0^2, \quad \quad \mathit{s}^*= \frac{1}{V^2} \sum_{\vec k \neq 0} |\varphi_{\vec k}|^2.
\label{s.1} \end{equation} Then, taking the average in either ensemble, the spherical constraint yields the sum rule \begin{equation} \langle \mathit{s}_0 \rangle + \langle \mathit{s}^* \rangle = 1, \label{s.2} \end{equation} which must be satisfied at all temperatures and is the motor of the transition. In fact, in the thermodynamic limit the excitations contribution is bounded from above~\cite{CSZ} by \begin{equation} \langle \mathit{s}^* \rangle \leq TB, \label{s.3} \end{equation} where $B$ is a dimensionality-dependent positive constant, which is finite for $d > 2$ and diverges at $d=2$. Therefore, enforcing the constraint~(\ref{s.2}) defines the critical temperature \begin{equation} T_c = 1/B, \label{s.4} \end{equation} above which the sum rule~(\ref{s.2}) is saturated without any contribution from $\langle \mathit{s}_0 \rangle$, while below it there must necessarily be a finite contribution from the ground state, yielding \begin{equation} \langle \mathit{s}_0 \rangle = \left \{ \begin{array}{ll} 0, & \text{for } T \geq T_c,\\ 1 - T/T_c, & \text{for } T < T_c. \end{array} \right . \label{s.5} \end{equation} Rewriting $\mathit{s}_0 = \psi_0^2$, where $\psi_0$ is the density $\frac{1}{V}\varphi_0$, the question is how a finite contribution to $\langle \mathit{s}_0 \rangle$ can arise from this single degree of freedom, and here is precisely where the two models differ. In the SM the sharp version~(\ref{gauss.2}) of the constraint introduces enough nonlinearity for the transition to take place by {\it ordering}. This means that ergodicity breaks down, inducing the spontaneous breaking of the $\mathbb{Z}_2$ symmetry.
Then, exactly as in the Ising model with PBC, the probability distribution of the magnetization density, that is of $\psi_0$, results from the mixture of the two pure ferromagnetic states \begin{equation} P_\textrm{SM}(\psi_0) = \frac{1}{2}[\delta (\psi_0 - m_-) + \delta (\psi_0 - m_+)], \label{ensmbl.010} \end{equation} where $m_\pm = \pm \sqrt{1 - T/T_c}$ is the spontaneous magnetization. Thus, in this case $\langle \mathit{s}_0 \rangle_{SM}$ stands for the square of the spontaneous magnetization $m_\pm^2$. Instead, in the MSM ordering cannot take place, because the soft version~(\ref{msph.1}) of the constraint leaves the statistics Gaussian. Neither ergodicity nor symmetry break down, as in the Ising APBC case. Then, below $T_c$, the only means of building up the finite value of $\langle \mathit{s}_0 \rangle_{MSM}$ needed to saturate the sum rule is to grow the fluctuations of $\psi_0$ through the spread-out probability distribution given by \begin{equation} P_\textrm{MSM}(\psi_0) = \frac{e^{-\frac{\psi_0^2}{2(1-T/T_c)}}}{\sqrt{2\pi (1 - T/T_c)}}. \label{ensmbl.001} \end{equation} Therefore, now $\langle \mathit{s}_0 \rangle_{MSM}$ stands for the macroscopic variance of $\psi_0$. Elsewhere~\cite{EPL,CCZ,Zannetti,Merhav,Marsili}, this type of transition, characterized by the fluctuations of an extensive quantity condensing into one microscopic component, has been referred to as {\it condensation of fluctuations}. \begin{figure}[!tb] \centering \includegraphics[width=5cm]{fig9a-sublabel.pdf} \hspace{0cm} \includegraphics[width=5cm]{fig9b-sublabel.pdf} \caption{Magnetization distribution in the MSM model (a) and in the SM model (b), for $T < T_c$. The spikes in the right panel stand for $\delta$ functions.} \label{fig1} \end{figure} Comparing Figs.~\ref{fig1} and~\ref{fig:1}, it is evident that the distributions are the same in the two cases where ergodicity breaks down, that is in the Ising model with PBC and in the SM.
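A minimal Monte Carlo sketch makes the distinction concrete (illustrative only; the value $T/T_c=1/2$ and the sample size are arbitrary choices): sampling $\psi_0$ from the two distributions~(\ref{ensmbl.010}) and~(\ref{ensmbl.001}) gives qualitatively different histograms, yet the same ground-state contribution $\langle \mathit{s}_0 \rangle = \langle \psi_0^2 \rangle = 1-T/T_c$ of Eq.~(\ref{s.5}):

```python
import numpy as np

rng = np.random.default_rng(1)
T_over_Tc = 0.5
s0 = 1.0 - T_over_Tc      # required ground-state contribution, Eq. (s.5)
n = 200_000

# SM, Eq. (ensmbl.010): ergodicity breaks and psi_0 sits at one of m_pm.
psi0_sm = np.sqrt(s0) * rng.choice([-1.0, 1.0], size=n)

# MSM, Eq. (ensmbl.001): no ordering; psi_0 is a zero-mean Gaussian whose
# variance remains macroscopic, i.e. condensation of fluctuations.
psi0_msm = rng.normal(0.0, np.sqrt(s0), size=n)

# Both ensembles saturate the same sum rule through <psi_0^2> = 1 - T/T_c,
# as the square of the spontaneous magnetization (SM) or as a variance (MSM).
print(np.mean(psi0_sm ** 2))    # ~ 0.5
print(np.mean(psi0_msm ** 2))   # ~ 0.5
```

The histograms of the two samples reproduce the two panels of Fig.~\ref{fig1}: a pair of spikes at $m_\pm$ for the SM, a broad Gaussian for the MSM.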
In the other two cases, Ising with APBC and MSM, the distributions are not superimposable but show the same physical phenomenon: ergodicity is preserved by developing macroscopic fluctuations of the magnetization, which remain finite in the thermodynamic limit and reveal the critical nature of the low-temperature phase. In fact, in the MSM the structure factor, i.e. the Fourier transform of the correlation function, is given by~\cite{CCZ} \begin{equation} C_\textrm{MSM}(\vec k) = \frac{T}{k^2} + m_\pm^2 \delta(\vec k). \label{str.1} \end{equation} The two terms appearing above are the analogues in Fourier space of those entering $C_{\rm eq}^{(a)}(r;\epsilon)$ in Eq.~(\ref{mix.6bis}), with the correspondences \begin{equation} G_{\rm eq}(r;\epsilon) \longleftrightarrow \frac{T}{k^2}, \quad \frac{m^2_\alpha}{r^a} \longleftrightarrow m_\pm^2 \delta(\vec k). \label{str.2} \end{equation} Notice that, as is well known, the thermal fluctuations contribution in the MSM is massless, i.e. critical, at all temperatures below $T_c$. For simplicity, let us set $T=0$ in order to get rid of this contribution and to focus on the interesting one, which is the $\delta$-function term (Bragg peak). We emphasize that this is the Fourier transform of the background {\it critical} contribution with compact correlated clusters, just as the corresponding term in the Ising APBC case. Finally, we point out that the $d=2$ case is analogous to Ising with $d=1$, because $T_c$ vanishes. However, for brevity, we shall not elaborate on this case here. \subsection{Dynamics} Let us next consider the relaxation dynamics in the quench to $T_F=0$. In Ref.~\cite{Fusco} it was shown that, when the thermodynamic limit is taken first, the two models are equivalent at all times.
Then, it is an exact result that the dynamical structure factor for both the SM and the MSM is given by \begin{equation} C(\vec k,t) = \Delta \left (1 +\frac{2r^2_0}{R^2} \right )^{d/2} R^d \, e^{-(kR)^2}, \label{str.3} \end{equation} where $C(\vec k,0)=\Delta$ is the spatially uncorrelated initial condition at $T_I=\infty$, $R = \sqrt{2t}$ is the growth law for nonconserved dynamics and $r_0 = 1/(\sqrt{2}\Lambda)$ is the microscopic length related to the momentum cutoff $\Lambda$, which is imposed exponentially when integrating over $\vec k$ and is responsible for the corrections to scaling in the early regime. From the normalization condition at $t=0$ \begin{equation} \int \frac{d^d k}{(2\pi)^d} \, C(\vec k,0) e^{-k^2/\Lambda^2} = 1, \label{str.4} \end{equation} there follows $\Delta = (2\sqrt{\pi})^d$. Inserting this into Eq.~(\ref{str.3}), with a little algebra one can verify that the normalization is indeed satisfied at all times. Then, since the peak grows like $C(0,t) \sim R^d$, one can conclude that the asymptotic structure factor is the $\delta$ function \begin{equation} \lim_{t \to \infty} C(\vec k,t) = C^*(\vec k) = \delta(\vec k), \label{str.5} \end{equation} which matches the equilibrium Bragg peak~(\ref{str.1}) in the MSM. Since the growth of $R$ implies that the asymptotic state is critical and with compact correlated clusters, in the quench to below $T_c$ the MSM approaches equilibrium arbitrarily closely, while the SM, which is not critical in the equilibrium state, remains permanently out of equilibrium. In conclusion, going through all the items listed at the end of the previous section, one can check that a perfect correspondence is established between Ising-PBC and SM on the one side, and Ising-APBC and MSM on the other. \section{Concluding Remarks} \label{CR} In this paper we have addressed a problem which is of basic interest in the physics of slowly relaxing systems.
Since slow relaxation means that equilibrium is not reached on the observable time scale, the relevant questions are whether a criterion for equilibration, or for the lack of it, can be established and, if so, whether the nature of the equilibrium state can be inferred from the available dynamical information. Although the task of giving general answers to these questions is of formidable difficulty, we have shown that, at least in the restricted realm of phase-ordering systems, it is possible to arrive at some definite conclusions. By analyzing the relaxation of the Ising model after temperature quenches, we have found that the system does or does not equilibrate, depending on whether the dynamics at the final temperature of the quench is ergodic or not. This has been established by investigating the dependence of the spin-spin correlation function upon the order of the large-time and thermodynamic limits, when different BC are imposed. The findings are that the APBC system equilibrates in all conditions, because the dynamics are ergodic at all temperatures, while the PBC system does not equilibrate for $T_F < T_c$, because ergodicity does not hold there. These statements are strengthened and corroborated by exact analytical results from the quench of the spherical and mean-spherical models, which reproduce very closely, although in a quite different context, the picture just outlined. We may then answer the first question asked at the beginning of the section by saying that it might take an infinite time to equilibrate, but nonetheless the system can get arbitrarily close to equilibrium if the dynamics are ergodic. Instead, if ergodicity is broken, and if the initial state is symmetric, the system does not get close to equilibrium, no matter how long it is left to relax.
For what concerns the second question, the answer is that, yes, once it is established that the system approaches equilibrium, then the nature of the equilibrium state can be inferred from the dynamical information. Consider first the quench to $T_c$, in which case the time-dependent correlation function obeys the scaling form~(\ref{sat.3}), while in the equilibrium state it decays according to the pure power law~(\ref{sat.2}). It is then evident that the latter result can be reconstructed from the short-distance behavior, i.e. for $r \ll R$, at times finite but large enough to detect a clean scaling behavior. The same procedure also applies in the case of the quench to below $T_c$ with APBC, where the background component of the correlation function is given by Eq.~(\ref{gen.20}). Then, again, the short-distance approximation, which in this case is a constant term since $a=0$, correctly reproduces the form of the equilibrium critical correlation function. Finally, let us comment on the nature of the line of critical points on the $\epsilon < 0$ segment in the Ising APBC case. According to the view put forward in this paper, $t^{-1}$ is just another relevant parameter measuring the distance from criticality, on the same footing as $\epsilon$ and $L^{-1}$, so that these critical points control both statics and dynamics. In subsection \ref{APBCeq} we have pointed out that the static critical exponents, defined with respect to $L^{-1}$, satisfy the hyperscaling relation $2\dot{\beta} + \dot{\gamma} = \dot{\nu}d$ for all $d$, suggesting that the upper critical dimension is at $d=\infty$. It is then interesting to note the concomitance with the fact that the Ohta-Jasnow-Kawasaki approximate theory, which accounts well for the time-dependent correlation function as shown in Fig.\ref{fig:scaledCbis}, becomes exact in the $d \to \infty$ limit~\cite{Bray}. \begin{acknowledgements} A.F.
acknowledges financial support of the MIUR PRIN 2017WZFTZP "Stochastic forecasting in complex systems". \end{acknowledgements}
\section{Introduction} Fluid-structure interaction (FSI) problems arise in many applications, such as aerodynamics, hemodynamics and geomechanics. They are used to predict flow properties in patient-specific arterial geometries, microfluidic devices and in the design of many industrial components. FSI problems are moving domain problems, characterized by highly non-linear coupling between fluid flow and structure deformation. As a result, the development of robust numerical algorithms is a subject of intensive research. The solution strategies for FSI problems can be classified as monolithic and partitioned methods. In monolithic algorithms~\cite{bazilevs2008isogeometric,deparis2003acceleration,gerbeau2003quasi,nobile2001numerical,gee2011truly,ryzhakov2010monolithic,hron2006monolithic,bathe2004finite}, the coupling conditions are imposed implicitly and the entire coupled problem is solved as one system of algebraic equations. However, they may require long computational time, large memory allocation and well-designed preconditioners~\cite{gee2011truly,badia2008modular,heil2008solvers}. In partitioned methods~\cite{degroote2008stability,farhat2006provably,bukavc2012fluid,badia2009robin,BorSunMulti,Fernandez2012incremental,fernandez2013fully,nobile2008effective,hansbo2005nitsche,lukavcova2013kinematic,banks2014analysis,banks2014analysis2,oyekole2018second,bukavc2016stability}, the fluid flow and structure deformation are solved separately as smaller and better conditioned sub-problems, which reduces the computational cost. However, they often suffer from numerical instabilities, which makes the design and analysis of stable and efficient partitioned schemes challenging even for simplified, linear problems. The design of partitioned algorithms is especially challenging in blood flow applications due to numerical instabilities known as \textit{the added mass effect}~\cite{causin2005added}, which are manifested when the fluid and structure have comparable densities. 
Furthermore, the design of non-iterative, partitioned methods is particularly difficult when the dimension of the solid domain is the same as the dimension of the fluid domain. When the structure is thin, i.e., described by a lower-dimensional model, it serves as a fluid-structure interface with mass, which is exploited in the design of many partitioned methods~\cite{oyekole2018second,bukavc2016stability,Fernandez2012incremental,lukavcova2013kinematic} where parts of the structure equation are used as a Robin boundary condition for the fluid problem. However, when the structure is thick, no additional mass is present at the fluid-structure interface, which makes the design of stable, non-iterative partitioned algorithms especially challenging. It is well known that classical, Dirichlet-Neumann partitioned methods are unconditionally unstable when the fluid and structure have comparable densities~\cite{causin2005added}, which can be resolved by sub-iterating between the fluid and structure sub-problems within each time step. As an alternative to the Dirichlet-Neumann approach, which can exhibit convergence issues, Robin-Dirichlet, Robin-Neumann, or Robin-Robin methods were designed in~\cite{nobile2008effective,badia2009robin,badia2008fluid,gerardo2010analysis,degroote2011similarity}. In the design of these methods, the coupling conditions are linearly combined to obtain generalized Robin interface conditions, which are then used in the fluid and/or structure sub-problems. We also mention the fictitious-pressure and fictitious-mass algorithms proposed in~\cite{baek2012convergence,yu2013generalized}, in which the added mass effect is accounted for by incorporating additional terms into the governing equations.
However, algorithms proposed in~\cite{nobile2008effective,badia2009robin,badia2008fluid,gerardo2010analysis,degroote2011similarity,baek2012convergence,yu2013generalized} still require sub-iterations between the fluid and the structure sub-problems in order to achieve stability. A different partitioned scheme was proposed in~\cite{burman2009stabilization,burman2013unfitted}, where the fluid-structure coupling conditions are imposed using Nitsche's penalty method~\cite{hansbo2005nitsche} and some terms are time-lagged to uncouple the fluid and solid sub-problems. It was shown that the scheme is stable under a CFL condition if a weakly consistent stabilization term that includes pressure variations at the interface is added. The authors show that the rate of convergence in time is sub-optimal, which is then corrected by proposing a few defect-correction sub-iterations. A non-iterative, partitioned algorithm based on the so-called added-mass partitioned Robin conditions was proposed in~\cite{banks2014analysis2}. It was shown that the algorithm is stable under a condition on the time step, which depends on the structure parameters. Even though the authors do not derive the convergence rates, their numerical results indicate that the scheme is second-order accurate in time. A generalized Robin-Neumann explicit coupling scheme based on an interface operator accounting for the solid inertial effects within the fluid has been proposed in~\cite{fernandez2015generalized}. The scheme has been analyzed on a linear FSI problem and shown to be stable under a time-step condition. In our previous work~\cite{bukavc2014modular}, we developed a partitioned scheme for FSI with a thick, linearly viscoelastic structure based on an operator-splitting approach. However, the assumption that the structure is viscoelastic was necessary in the derivation of the scheme, and the solid viscosity was solved implicitly with the fluid problem. 
Furthermore, the scheme was shown to be stable only under a condition on the time step~\cite{bukavc2016stability}. In this work, we propose a partitioned, loosely-coupled method for FSI problems with thick structures. As opposed to the previous work, the method presented here is unconditionally stable, and sub-iterations or stabilization terms are not needed to achieve stability. Furthermore, the stability analysis is carried out on a moving-domain problem. The fluid is modeled using the Navier-Stokes equations for an incompressible, viscous fluid, and the structure using the equations of linear elasticity. The deformation of the fluid mesh is treated using the Arbitrary Lagrangian-Eulerian (ALE) approach~\cite{hughes1981lagrangian,donea1983arbitrary,nobile2001numerical}, where the fluid mesh is allowed to deform, matching the deformation of the structural domain. The proposed partitioned method is based on generalized Robin boundary conditions, which are formulated in a novel way. Unconditional stability is shown for a moving-domain, semi-discrete problem using energy estimates. The proposed method is discretized in space and implemented using the finite element method. We perform error analysis of the fully discrete method on a linearized problem and show that the scheme exhibits $\mathcal{O}(\Delta t^{\frac12})$ convergence in time and optimal convergence in space. The relation between the combination parameter used in the formulation of the generalized Robin boundary conditions and the accuracy of the method is explored in the numerical examples. We also compare our method to an implicit scheme on a benchmark problem under realistic parameters in blood flow modeling. This paper is organized as follows. The non-linear FSI problem is presented in Section 2, and the proposed numerical scheme is presented in Section 3. Stability analysis is performed in Section 4 and error analysis is performed in Section 5. Numerical examples are presented in Section 6.
Conclusions are drawn in Section 7. \section{Mathematical model} We are interested in modeling fluid flow in a deformable channel, whose walls represent an elastic structure. We assume that the fluid is viscous and incompressible, that the structure is linearly elastic, and that both the fluid and the structure are described in two-dimensional domains. The fluid and structure are two-way coupled, resulting in a non-linear, moving domain problem. \subsection{Computational domains and mappings} We denote the reference fluid domain by $\hat{\Omega}_F$ and the reference structure domain by $\hat{\Omega}_S$ (see Figure~\ref{domain}). The fluid and structure domains at time $t$ are denoted by $\Omega_F(t)$ and $\Omega_S(t)$, respectively. \begin{figure}[ht] \centering{ \includegraphics[scale=0.6]{domain2D.pdf} } \caption{Left: Reference domain $\hat{\Omega}_F \cup \hat{\Omega}_S$. Right: Deformed domain $\Omega_F(t) \cup \Omega_S(t).$} \label{domain} \end{figure} We assume that the structure equations are given in a Lagrangian framework, with respect to the reference domain $\hat{\Omega}_S$. The fluid equations will be described in the ALE formulation. To track the deformation of the fluid domain in time, we introduce a smooth, invertible ALE mapping ${\mathcal{A}}: \hat{\Omega}_F \times [0,T] \rightarrow \Omega_F(t)$ given by \begin{equation*} {\mathcal{A}} ({\boldsymbol X},t)= {\boldsymbol X} + {\boldsymbol \eta}_F({\boldsymbol X},t), \quad \; \textrm{for all } {\boldsymbol X} \in \hat{\Omega}_F, t \in [0,T], \label{ale} \end{equation*} where ${\boldsymbol \eta}_F$ denotes the displacement of the fluid domain. We assume that ${\boldsymbol \eta}_F$ equals the structure displacement on $\hat\Gamma$, and is arbitrarily extended into the fluid domain $\hat{\Omega}_F$~\cite{langer2018numerical}. We denote the fluid deformation gradient by ${\boldsymbol F} = {\nabla}\mathcal{A}$ and its determinant by ${J}$. 
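For intuition, the deformation gradient ${\boldsymbol F} = {\nabla}\mathcal{A}$ and its Jacobian $J$ can be evaluated numerically from a given displacement field. The following is a minimal sketch, assuming a hypothetical smooth displacement ${\boldsymbol \eta}_F$ (not a field used in this paper):

```python
import numpy as np

# Sketch: approximate F = grad(A) and J = det(F) for a sample ALE map
# A(X, t) = X + eta_F(X, t) at one reference point, via central finite
# differences.  The displacement field below is hypothetical, chosen only
# so that the map stays invertible (J > 0); it is not taken from the paper.
def eta_F(X, Y, t):
    return 0.05 * t * X, 0.02 * t * np.sin(np.pi * X)

def F_and_J(X, Y, t, h=1e-6):
    def A(X_, Y_):
        dx, dy = eta_F(X_, Y_, t)
        return X_ + dx, Y_ + dy
    ax1, ay1 = A(X + h, Y); ax0, ay0 = A(X - h, Y)   # vary X
    bx1, by1 = A(X, Y + h); bx0, by0 = A(X, Y - h)   # vary Y
    F = np.array([[ax1 - ax0, bx1 - bx0],
                  [ay1 - ay0, by1 - by0]]) / (2.0 * h)
    return F, float(np.linalg.det(F))

F, J = F_and_J(0.3, 0.4, t=1.0)
print(J > 0.0)  # True: the map is locally invertible at this point
```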
\subsection{Fluid sub-problem} To model the fluid flow, we use the Navier-Stokes equations in the ALE formulation~\cite{langer2018numerical,multilayered,thick}, given as follows: \begin{align}\label{NSale1} & \rho_F \left( \partial_t \boldsymbol{v} |_{\hat{\Omega}_F}+ (\boldsymbol{v}-\boldsymbol{w}) \cdot \nabla \boldsymbol{v} \right) = \nabla \cdot \boldsymbol\sigma_F(\boldsymbol v, p) + \boldsymbol f_F& \textrm{in}\; \Omega_F(t)\times(0,T), \\ \label{NSale2} &\nabla \cdot \boldsymbol{v} = 0 & \textrm{in}\; \Omega_F(t)\times(0,T), \end{align} where $\boldsymbol{v}$ is the fluid velocity, $\boldsymbol w= \partial_t \boldsymbol x|_{\hat{\Omega}_F} = \partial_t \mathcal{A} \circ \mathcal{A}^{-1}$ is the domain velocity, $\rho_F$ is the fluid density, $\boldsymbol\sigma_F$ is the fluid stress tensor and $\boldsymbol f_F$ is the forcing term. For a Newtonian fluid, the stress tensor is given by $\boldsymbol\sigma_F(\boldsymbol v,p) = -p \boldsymbol{I} + 2 \mu_F \boldsymbol{D}(\boldsymbol{v}),$ where $p$ is the fluid pressure, $\mu_F$ is the fluid viscosity and $\boldsymbol{D}(\boldsymbol{v}) = (\nabla \boldsymbol{v}+(\nabla \boldsymbol{v})^{T})/2$ is the strain rate tensor. The notation $ \partial_t \boldsymbol{v} |_{\hat{\Omega}_F}$ denotes the Eulerian description of the ALE time derivative $\partial_t \left( {\boldsymbol{v}} \circ \mathcal{A} \right)$~\cite{formaggia2010cardiovascular}, \emph{i.e.}, \begin{equation*} \partial_t \boldsymbol{v}(\boldsymbol x,t) |_{\hat{\Omega}_F} = \partial_t \left( \boldsymbol{v} \circ \mathcal{A} \right) (\mathcal{A}^{-1}(\boldsymbol x,t),t). \end{equation*} We denote the inlet and outlet of the fluid domain by $\Gamma_F^{in}(t)$ and $\Gamma_F^{out}(t)$, respectively. 
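By the chain rule, the ALE derivative is related to the Eulerian one through $\partial_t \boldsymbol{v} |_{\hat{\Omega}_F} = \partial_t \boldsymbol v + (\boldsymbol w \cdot \nabla) \boldsymbol v$, which is how the convective term in~\eqref{NSale1} acquires the relative velocity $\boldsymbol v - \boldsymbol w$. A quick 1-D finite-difference check of this identity, with a hypothetical map and velocity field:

```python
import numpy as np

# 1-D check of the chain rule behind the ALE derivative:
#   d/dt [v(A(X,t), t)] = (dv/dt + w dv/dx) evaluated at x = A(X,t),
# where w = (d/dt A) o A^{-1} is the domain velocity.  The map and the
# field below are hypothetical smooth choices, not taken from the paper.
c = 0.1
A  = lambda X, t: X * (1.0 + c * t)          # linear-in-time stretching map
w  = lambda x, t: c * x / (1.0 + c * t)      # domain velocity in Eulerian form
v  = lambda x, t: np.sin(x) * np.exp(-t)     # sample fluid-velocity field
vt = lambda x, t: -np.sin(x) * np.exp(-t)    # analytic dv/dt
vx = lambda x, t:  np.cos(x) * np.exp(-t)    # analytic dv/dx

X0, t0, h = 0.7, 0.5, 1e-5
x0 = A(X0, t0)

# left side: time derivative holding the reference coordinate X fixed
lhs = (v(A(X0, t0 + h), t0 + h) - v(A(X0, t0 - h), t0 - h)) / (2 * h)
# right side: Eulerian derivative plus convection with the domain velocity
rhs = vt(x0, t0) + w(x0, t0) * vx(x0, t0)

print(abs(lhs - rhs))  # agrees up to the O(h^2) finite-difference error
```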
At the inlet and outlet sections, we prescribe Neumann boundary conditions: \begin{align} & \boldsymbol {\sigma}_F \boldsymbol{ n}_{F} = -p_{in} (t) \boldsymbol{n}_{F} & \textrm{on} \; \Gamma_{F}^{in}(t) \times (0,T), \label{inlet} \\ & \boldsymbol {\sigma}_F \boldsymbol{ n}_{F} = -p_{out}(t) \boldsymbol{n}_{F} & \textrm{on} \; \Gamma_{F}^{out}(t) \times (0,T), \label{outlet} \end{align} where $\boldsymbol n_F$ is the outward unit normal to the deformed fluid domain. We will also consider dynamic pressure inlet and outlet data: \begin{align} & \displaystyle{p+\frac{\rho_F}{2}|\boldsymbol v|^2}=p_{in}(t) & \textrm{on} \; \Gamma_{F}^{in}(t) \times (0,T), \label{inlet2} \\ & \displaystyle{p+\frac{\rho_F}{2}|\boldsymbol v|^2}=p_{out}(t) & \textrm{on} \; \Gamma_{F}^{out}(t) \times (0,T), \label{outlet2} \\ & \boldsymbol v \times \boldsymbol{n}_F = 0 & \textrm{on} \; \left( \Gamma_{F}^{in}(t) \cup \Gamma_F^{out}(t) \right) \times (0,T). \label{dp3} \end{align} Here, the fluid flow is driven by a prescribed dynamic pressure drop, and the flow enters and leaves the fluid domain orthogonally to the inlet and outlet boundary. While the Neumann boundary conditions~\eqref{inlet}-\eqref{outlet} are more convenient to use in numerical simulations, the dynamic pressure boundary conditions~\eqref{inlet2}-\eqref{dp3} are used to derive the energy estimates of the fluid problem on a moving domain and in the stability analysis. 
\subsection{Structure sub-problem} To model the elastic structure, we use the elastodynamics equations written in first-order form as \begin{align} & \partial_{t} {\boldsymbol \eta} = \boldsymbol \xi & \textrm{in}\; \hat{\Omega}_S\times(0,T), \\ &{\rho}_S \partial_{t} {\boldsymbol \xi} = {\nabla} \cdot \boldsymbol \sigma_S(\boldsymbol \eta) + \boldsymbol f_S& \textrm{in}\; \hat{\Omega}_S\times(0,T), \end{align} where $\boldsymbol{\eta}$ is the structure displacement, $\boldsymbol{\xi}$ is the structure velocity, ${\rho}_S$ is the structure density, ${\boldsymbol \sigma_S}$ is the solid stress tensor and $\boldsymbol f_S$ is the volume force applied to the structure. We assume that the deformations are small and use the linearly elastic model, obtained as the linearization of the Saint-Venant--Kirchhoff model and given as \begin{align*} \boldsymbol \sigma_S(\boldsymbol \eta) = 2 \mu_S \boldsymbol D(\boldsymbol \eta) + \lambda_S (\nabla \cdot \boldsymbol \eta) \boldsymbol I, \end{align*} where $\mu_S$ and $\lambda_S$ are the Lam\'e constants. We assume that the structure is fixed at the inlet and outlet boundaries: \begin{equation}\label{homostructure1} {\boldsymbol \eta} = 0 \quad \textrm{on} \;\; \left( \hat{\Gamma}_S^{in} \cup \hat{\Gamma}_S^{out} \right) \times(0,T), \end{equation} and that the external structure boundary $\hat{\Gamma}_S^{ext}$ is exposed to zero external ambient pressure: \begin{equation}\label{homostructure2} {\boldsymbol \sigma_S} {\boldsymbol{ n}}_S = 0 \quad \textrm{on} \;\; \hat{\Gamma}_S^{ext} \times (0,T), \end{equation} where ${\boldsymbol n}_S$ is the outward unit normal to the reference structure domain. 
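In computations, the Lam\'e constants are typically obtained from Young's modulus $E$ and the Poisson ratio $\nu$ via the standard relations $\mu_S = E/(2(1+\nu))$ and $\lambda_S = E\nu/((1+\nu)(1-2\nu))$; a small helper (the sample values are placeholders, not parameters used in this paper):

```python
# Convert engineering constants (Young's modulus E, Poisson ratio nu) to the
# Lame constants mu_S and lambda_S appearing in the solid stress tensor.
# Standard elasticity relations; the sample values below are placeholders.
def lame_constants(E, nu):
    mu_S = E / (2.0 * (1.0 + nu))
    lambda_S = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    return mu_S, lambda_S

mu_S, lambda_S = lame_constants(E=1.0e6, nu=0.4)   # placeholder material data
print(mu_S, lambda_S)
```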
\subsection{The coupled FSI problem} To couple the fluid and structure sub-problems, we prescribe the kinematic and dynamic coupling conditions~\cite{langer2018numerical,multilayered} given as follows: \noindent \textbf{Kinematic coupling condition} describes the continuity of velocity at the fluid-structure interface (no-slip): \begin{equation} {\boldsymbol{v}} \circ \mathcal{A}= {\boldsymbol \xi} \;\; \; \textrm{on} \; \hat{\Gamma} \times (0,T). \label{kinematic} \end{equation} \noindent \textbf{Dynamic coupling condition} describes the continuity of stresses at the fluid-structure interface due to the action-reaction principle. The condition reads: \begin{equation} {J} \boldsymbol \sigma_F \boldsymbol{F}^{-T} \boldsymbol n_F + {\boldsymbol \sigma_S } {\boldsymbol n}_S=0 \;\;\; \textrm{on} \; \hat\Gamma \times (0,T). \label{dynamic} \end{equation} Hence, the fully-coupled fluid-structure interaction problem is given by: \begin{align}\label{fsi1} & \rho_F \left( \partial_t \boldsymbol{v} |_{\hat{\Omega}_F}+ (\boldsymbol{v}-\boldsymbol{w}) \cdot \nabla \boldsymbol{v} \right) = \nabla \cdot \boldsymbol\sigma_F(\boldsymbol v, p) & \textrm{in}\; \Omega_F(t)\times(0,T), \\ &\nabla \cdot \boldsymbol{v} = 0 & \textrm{in}\; \Omega_F(t)\times(0,T), \label{fsi11}\\ & \partial_{t} {\boldsymbol \eta} = \boldsymbol \xi & \textrm{in}\; \hat{\Omega}_S\times(0,T), \\ &{\rho}_S \partial_{t} {\boldsymbol \xi} = {\nabla} \cdot \boldsymbol \sigma_S(\boldsymbol \eta) & \textrm{in}\; \hat{\Omega}_S\times(0,T), \label{fsi12} \\ & {\boldsymbol{v}} \circ \mathcal{A}= {\boldsymbol \xi} & \textrm{on} \; \hat{\Gamma} \times (0,T), \label{coupling_noslip}\\ & {J} \boldsymbol \sigma_F \boldsymbol{F}^{-T} \boldsymbol n_F+ {\boldsymbol \sigma_S } {\boldsymbol n}_S =0 & \textrm{on} \; \hat\Gamma \times (0,T).\label{fsi2} \end{align} To update the fluid domain, we extend the solid displacement at the interface using the harmonic extension, which is a common choice of the extension 
operator~\cite{badia}. The fluid domain and domain velocity are determined, respectively, by \begin{gather*} \Omega_F(t) = \mathcal{A}(\hat{\Omega}_F, t), \quad \boldsymbol w = \partial_t \mathcal{A} \circ \mathcal{A}^{-1}. \end{gather*} Initially, the fluid and the structure are assumed to be at rest, with zero displacement from the reference configuration. \subsection{The weak formulation of the coupled problem} Given an open set $S$, we consider the usual Sobolev spaces $H^k(S)$, with $k \geq 0$. For all $t \in [0,T]$ we introduce the following functional spaces: \begin{align*} &V^F(t) =\left\{ \boldsymbol \phi: \Omega_F(t) \rightarrow \mathbb{R}^2 \ | \ \boldsymbol \phi = \hat{\boldsymbol \phi} \circ \mathcal{A}^{-1}, \ \hat{\boldsymbol \phi} \in (H^1(\hat{\Omega}_F))^2 \right\}, \\ &V^{F,0}(t) =\left\{ \boldsymbol \phi \in V^F(t) \ | \ \boldsymbol \phi \times \boldsymbol n =0 \; \; \textrm{on} \; \Gamma_{F}^{in} \cup \Gamma_F^{out} \right\}, \\ & Q^F(t)= \left\{ \psi: \Omega_F(t) \rightarrow \mathbb{R} \ | \ \psi = \hat{ \psi} \circ \mathcal{A}^{-1}, \ \hat{ \psi} \in L^2(\hat{\Omega}_F) \right\}, \\ & V^S = \left\{ {\boldsymbol {\zeta}}: \hat{\Omega}_S \rightarrow \mathbb{R}^2 \ | \ {\boldsymbol {\zeta}} \in (H^1(\hat{\Omega}_S))^2 , \; {\boldsymbol {\zeta}}=0 \; \textrm{on} \; \hat{\Gamma}_S^{in} \cup \hat{\Gamma}_S^{out} \right\}, \\ & V^{FSI}(t) = \left\{ (\boldsymbol \phi, {\boldsymbol \zeta}) \in V^{F,0}(t) \times V^S \ | \ \boldsymbol \phi = {\boldsymbol \zeta} \circ \mathcal{A}^{-1} \; \textrm{on} \; \Gamma(t) \right\}. 
\end{align*} We define the following bilinear forms associated with the fluid and structure problems: \begin{align} &a_F(\boldsymbol v, \boldsymbol \phi) = 2 \mu_F \int_{\Omega_F(t)} \boldsymbol D(\boldsymbol v) : \boldsymbol D(\boldsymbol \phi) d \boldsymbol x, \quad \forall \boldsymbol v, \boldsymbol \phi \in V^F(t), \\ &b_F(\boldsymbol v, \psi) = \int_{\Omega_F(t)} \nabla \cdot \boldsymbol v \psi d \boldsymbol x, \quad \forall \boldsymbol v \in V^F(t), \psi \in Q^F(t), \\ &a_S(\boldsymbol \eta, \boldsymbol \zeta) = 2 \mu_S \int_{\hat{\Omega}_S} \boldsymbol D(\boldsymbol \eta) : \boldsymbol D(\boldsymbol \zeta) d \boldsymbol x+ \lambda_S \int_{\hat{\Omega}_S} (\nabla \cdot \boldsymbol \eta)(\nabla \cdot \boldsymbol \zeta) d \boldsymbol x, \quad \forall \boldsymbol \eta, \boldsymbol \zeta \in V^S. \end{align} We also define norm $\| \cdot \|_S$ associated with the bilinear form $a_S(\cdot, \cdot)$ as \begin{gather} \| \boldsymbol \eta \|_S = \left(a_S(\boldsymbol \eta,\boldsymbol \eta)\right)^{\frac12}. 
\end{gather} The weak formulation of the coupled fluid-structure interaction problem~\eqref{fsi1}-\eqref{fsi2} with boundary conditions~\eqref{inlet2}-\eqref{dp3} and~\eqref{homostructure1}-\eqref{homostructure2} is given as follows: Find $(\boldsymbol v, \boldsymbol \xi) \in V^{FSI}(t), p \in Q^F(t)$ and $ \boldsymbol \eta \in V^S$ such that $\partial_t \boldsymbol \eta = \boldsymbol \xi$ and \begin{gather*} \rho_F \int_{\Omega_F(t)} \partial_t \boldsymbol v|_{\hat{\Omega}_F} \cdot \boldsymbol \phi d\boldsymbol{x} +\rho_F \int_{\Omega_F(t)} \left((\boldsymbol v-\boldsymbol w) \cdot \nabla \right) \boldsymbol v \cdot \boldsymbol \phi d\boldsymbol{x} +2 \mu_F \int_{\Omega_F(t)} \boldsymbol D(\boldsymbol v) : \boldsymbol D (\boldsymbol \phi) d \boldsymbol x \notag \\ - \int_{\Omega_F(t)} p \nabla \cdot \boldsymbol \phi d\boldsymbol{x} + \int_{\Omega_F(t)} q \nabla \cdot \boldsymbol v d\boldsymbol{x} + {\rho}_S \int_{\hat\Omega_S} \partial_{t} {\boldsymbol \xi} \cdot {\boldsymbol \zeta} d {\boldsymbol X} + 2 \mu_S \int_{\hat\Omega_S} D(\boldsymbol \eta) : \boldsymbol D (\boldsymbol \zeta) d \boldsymbol X \notag \\ + \lambda_S \int_{\hat\Omega_S} (\nabla \cdot \boldsymbol \eta) (\nabla \cdot \boldsymbol \zeta) d \boldsymbol X = - \int_{\Gamma_F^{in}} p_{in} \boldsymbol \phi \cdot \boldsymbol{n}_F dx - \int_{\Gamma_F^{out}} p_{out} \boldsymbol \phi \cdot \boldsymbol{n}_F dx \notag \\ +\frac{\rho_F}{2} \int_{\Gamma_F^{in} \cup \Gamma_F^{out}} |\boldsymbol v |^2 \boldsymbol \phi \cdot \boldsymbol{n}_F dx, \end{gather*} for all $(\boldsymbol \phi, {\boldsymbol \zeta}) \in V^{FSI}(t), q \in Q^F(t)$. To derive the energy of the coupled FSI problem, we take $\boldsymbol \phi = \boldsymbol v, q=p$ and $\boldsymbol \zeta=\boldsymbol \xi$. 
We transform $\int_{\Omega_F(t)} \rho_F \partial_t {\boldsymbol{v}} |_{\hat{\Omega}_F} \cdot \boldsymbol v d\boldsymbol x$ to the reference domain $\hat{\Omega}_F$ as follows: \begin{align*} \int_{\Omega_F(t)} \rho_F \partial_t {\boldsymbol{v}} |_{\hat{\Omega}_F} \cdot \boldsymbol v d\boldsymbol x & = \int_{\hat{\Omega}_F} \rho_F J \partial_t \left({\boldsymbol{v} \circ \mathcal{A}}\right) \cdot \left( {\boldsymbol v} \circ \mathcal{A} \right) d\hat{\boldsymbol x} \\ & = \frac12 \int_{\hat{\Omega}_F} \rho_F \partial_t \left( J | {\boldsymbol{v}} \circ \mathcal{A} |^2 \right) d\hat{\boldsymbol x} -\frac12 \int_{\hat{\Omega}_F} \rho_F \partial_t J | {\boldsymbol{v}} \circ \mathcal{A}|^2 d\hat{\boldsymbol x}. \end{align*} Using the Euler expansion formula, \begin{align} \partial_t J|_{\hat{\Omega}_F} = J \left( {\nabla} \cdot {\boldsymbol w} \right) \circ \mathcal{A}, \label{euleref} \end{align} we have \begin{align*} \int_{\Omega_F(t)} \rho_F \partial_t {\boldsymbol{v}} |_{\hat{\Omega}_F} \cdot \boldsymbol v d\boldsymbol x & = \frac12 \int_{\hat{\Omega}_F} \rho_F \partial_t \left( J | {\boldsymbol{v} \circ \mathcal{A} }|^2 \right) d\hat{\boldsymbol x} -\frac12 \int_{\hat{\Omega}_F} \rho_F J \left( {\nabla} \cdot {\boldsymbol w} \right) \circ \mathcal{A} \, | {\boldsymbol{v}\circ \mathcal{A} }|^2 d\hat{\boldsymbol x} \\ & = \frac12 \frac{d}{dt} \int_{\hat{\Omega}_F} \rho_F J | {\boldsymbol{v} \circ \mathcal{A}}|^2 d\hat{\boldsymbol x} -\frac12 \int_{\hat{\Omega}_F} \rho_F J \left( {\nabla} \cdot {\boldsymbol w} \right) \circ \mathcal{A} \, | {\boldsymbol{v}\circ \mathcal{A} }|^2 d\hat{\boldsymbol x} \\ & = \frac12 \frac{d}{dt} \int_{{\Omega_F(t)}} \rho_F | {\boldsymbol{v}}|^2 d{\boldsymbol x} -\frac12 \int_{{\Omega_F(t)}} \rho_F {\nabla} \cdot {\boldsymbol w} \, | {\boldsymbol{v}}|^2 d{\boldsymbol x}. 
\end{align*} To handle the convective term, after integration by parts and taking into account $\nabla \cdot \boldsymbol v=0$, we have \begin{align} \rho_F\int_{\Omega_F(t)}\left((\boldsymbol{v}-\boldsymbol{w}) \cdot \nabla\right) \boldsymbol{v} \cdot \boldsymbol v d\boldsymbol x & = \frac{\rho_F}{2} \int_{\Omega_F(t)} \nabla \cdot \boldsymbol w |\boldsymbol v|^2 d\boldsymbol x +\frac{\rho_F}{2} \int_{\Gamma(t)} \left((\boldsymbol v - \boldsymbol w) \cdot \boldsymbol n_F \right) |\boldsymbol v|^2 dS \notag \\ & +\frac{\rho_F}{2} \int_{\Gamma_F^{in}(t) \cup \Gamma_F^{out}(t)} \left((\boldsymbol v - \boldsymbol w) \cdot \boldsymbol n_F \right) |\boldsymbol v|^2 dS. \notag \end{align} Since $\boldsymbol w = \boldsymbol v$ on $\Gamma(t)$ due to the kinematic coupling condition, and $\boldsymbol w = \boldsymbol 0$ on $\Gamma_F^{in} \cup \Gamma_F^{out}$, the following energy equality holds: \begin{gather*} \frac{\rho_F}{2} \frac{d}{dt} \| {\boldsymbol{v}} \|^2_{L^2(\Omega_F(t))} +2 \mu_F \| \boldsymbol D(\boldsymbol v) \|^2_{L^2(\Omega_F(t))} +\frac{\rho_S}{2} \frac{d}{dt} \| {\boldsymbol \xi} \|^2_{L^2(\hat{\Omega}_S)} +\frac{1}{2} \frac{d}{dt}\| {\boldsymbol \eta} \|^2_{S} \\ = - \int_{\Gamma_F^{in}} p_{in}(t) \boldsymbol v \cdot \boldsymbol n d S - \int_{\Gamma_F^{out}} p_{out}(t) \boldsymbol v \cdot \boldsymbol n d S. \end{gather*} \section{Numerical method} Let $\Delta t$ be the time step and $t^n = n \Delta t$ for $n=0, \ldots, N.$ We denote by $z^n$ the approximation of a time-dependent function $z$ at time level $t^n$. We define the discrete backward difference operator $d_t z^{n+1}$ and the average $z^{n+\frac12}$ as $$ d_t z^{n+1} = \frac{z^{n+1}-z^n}{\Delta t}, \qquad z^{n+\frac12} = \frac{z^{n+1}+z^n}{2}. 
$$ Similarly to~\cite{badia,badia2009robin}, we consider a linear combination of the FSI coupling conditions~\eqref{kinematic}-\eqref{dynamic}: \begin{align} & \alpha \boldsymbol \xi+ \boldsymbol{\sigma}_S {\boldsymbol n}_S= \alpha \boldsymbol{v} \circ \mathcal{A}(t) - {J} \boldsymbol \sigma_F \boldsymbol{F}^{-T} \boldsymbol n_F \;\;\; \textrm{on} \; \hat\Gamma \times (0,T), \label{lcomb} \end{align} where $\alpha>0$ is a combination parameter. Using~\eqref{dynamic} again, we introduce the following two time-discrete transmission conditions of Robin type: \begin{align} & \alpha \boldsymbol \xi^{n+1}+ \boldsymbol{\sigma}_S^{n+1} {\boldsymbol n}_S= \alpha \boldsymbol{v}^{n} \circ \mathcal{A}(t^n) - {J}^n \boldsymbol \sigma_F^n (\boldsymbol{F}^n)^{-T} \boldsymbol n_F^n \;\;\; \textrm{on} \; \hat\Gamma \times (0,T), \label{cc1} \\ & \alpha \boldsymbol \xi^{n+1} - {J}^{n+1} \boldsymbol \sigma_F^{n+1} (\boldsymbol{F}^{n+1})^{-T} \boldsymbol n_F^{n+1} = \alpha \boldsymbol{v}^{n+1}\circ \mathcal{A}(t^{n+1}) - {J}^n \boldsymbol \sigma_F^n (\boldsymbol{F}^n)^{-T} \boldsymbol n_F^n \quad \textrm{on} \; \hat\Gamma \times (0,T). \label{cc2} \end{align} Condition~\eqref{cc1} will serve as a Robin-type boundary condition for the structure sub-problem, and condition~\eqref{cc2} will serve as a Robin-type boundary condition for the fluid sub-problem. To discretize the fluid and structure sub-problems in time, we use the backward Euler scheme. 
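As a sanity check, when all time levels coincide (steady state), conditions \eqref{cc1}-\eqref{cc2} reduce back to the kinematic and dynamic coupling conditions. With scalar stand-ins for the interface traces, an illustrative simplification, this can be verified symbolically:

```python
import sympy as sp

# Scalar stand-ins for the interface traces (an illustrative simplification):
# xi = solid velocity, v = fluid velocity, s_f = fluid traction, s_s = solid traction.
alpha = sp.symbols('alpha', positive=True)
xi, v, s_f, s_s = sp.symbols('xi v s_f s_s')

# Robin transmission conditions with all time levels set equal (steady state):
cc1 = sp.Eq(alpha * xi + s_s, alpha * v - s_f)   # structure-side condition
cc2 = sp.Eq(alpha * xi - s_f, alpha * v - s_f)   # fluid-side condition

sol = sp.solve([cc1, cc2], [xi, s_s], dict=True)[0]
print(sol[xi])   # -> v    : the kinematic condition is recovered
print(sol[s_s])  # -> -s_f : the dynamic condition (stress continuity) is recovered
```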
The fluid and structure sub-problems, semi-discretized in time, are now given as follows: \noindent \textbf{Structure sub-problem:} Find ${\boldsymbol \eta}^{n+1}$ and $\boldsymbol \xi^{n+1}$ such that \begin{align} & d_t \boldsymbol \eta^{n+1} = \boldsymbol \xi^{n+1} & \textrm{in}\; \hat{\Omega}_S, \label{scheme1} \\ & {\rho}_S d_t {\boldsymbol \xi}^{n+1} = {\nabla} \cdot \boldsymbol \sigma_S (\boldsymbol \eta^{n+1}) & \textrm{in}\; \hat{\Omega}_S, \\ & \alpha \boldsymbol \xi^{n+1}+ \boldsymbol{\sigma}_S^{n+1} {\boldsymbol n}_S= \alpha \boldsymbol{v}^{n} \circ \mathcal{A}(t^n) - {J}^n \boldsymbol \sigma_F^n (\boldsymbol{F}^n)^{-T} \boldsymbol n_F^n & \textrm{on} \; \hat{\Gamma}. \end{align} \noindent \textbf{Geometry sub-problem:} Find ${\boldsymbol \eta}_F^{n+1}$ such that \begin{align} &- \Delta {\boldsymbol \eta}^{n+1}_F = 0 & \textrm{in} \; \hat{\Omega}_F, \\ & {\boldsymbol \eta}^{n+1}_F = 0 & \textrm{on} \; \hat{\Gamma}^{in}_F \cup \hat{\Gamma}^{out}_F, \\ & {\boldsymbol \eta}^{n+1}_F = {\boldsymbol \eta}^{n+1} & \textrm{on} \; \hat{\Gamma}, \end{align} and ${\boldsymbol w}^{n+1}$ such that \begin{equation} {\boldsymbol w}^{n+1} \circ \mathcal{A}(t^{n+1}) = d_t {\boldsymbol \eta}_F^{n+1} \quad \textrm{in} \; \hat{\Omega}_F. \end{equation} Compute $\Omega_F(t^{n+1})$ as $\Omega_F(t^{n+1}) = (I + {\boldsymbol \eta}_F^{n+1})(\hat{\Omega}_F)$. Set $ \boldsymbol v^{n} \circ \mathcal{A}(t^{n}) = \boldsymbol w^{n+1} \circ \mathcal{A}(t^{n+1})$ on $\hat\Gamma$. 
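Since the geometry sub-problem advances the mesh displacement linearly in time, the discrete Geometric Conservation Law invoked in the stability analysis is satisfied exactly under midpoint quadrature. A 1-D sketch on a uniformly stretching interval (all parameter values and the field are hypothetical) illustrates this:

```python
import numpy as np

# 1-D illustration of the discrete Geometric Conservation Law for a mesh
# moving linearly in time: Omega_F(t) = (0, a + b*t) via the ALE map
# A(X,t) = X*(1 + b*t/a), so J(t) = 1 + b*t/a and div w = b/(a + b*t).
# The field v_hat and all parameter values below are hypothetical.
a, b = 1.0, 0.3                 # initial length and boundary speed
dt, tn = 0.05, 0.2              # time step and current time level
t1, th = tn + dt, tn + 0.5 * dt

X = np.linspace(0.0, a, 2001)   # reference quadrature nodes
v_hat = np.sin(2.0 * np.pi * X)
I = np.sum(v_hat[:-1] ** 2 + v_hat[1:] ** 2) * 0.5 * (X[1] - X[0])  # trapezoid rule

J = lambda t: 1.0 + b * t / a   # Jacobian of the 1-D ALE map

# Left side: change of ||v^{n+1}||^2 between the meshes at t^n and t^{n+1},
# with the field transported by the ALE map (pulled back to the reference).
lhs = (J(t1) - J(tn)) * I
# Right side: midpoint-in-time evaluation of dt * int |v|^2 div w dx over
# the half-step domain, pulled back to the reference interval.
rhs = dt * J(th) * (b / (a + b * th)) * I

print(abs(lhs - rhs))  # equal up to round-off: the midpoint rule is exact here
```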
\noindent \textbf{Fluid sub-problem:} Find $\boldsymbol v^{n+1}$ and $p^{n+1}$ such that \begin{align}\label{Tfluid} & \rho_F \left( J^{n} \frac{ {\boldsymbol{v}}^{n+1} \circ \mathcal{A}(t^{n+1})-{\boldsymbol{v}}^{n} \circ \mathcal{A}(t^{n})}{\Delta t} + J^{n+\frac12} ({\boldsymbol{v}}^{n} \circ \mathcal{A}(t^{n})-\boldsymbol{w}^{n+1} \circ \mathcal{A}(t^{n+1})) \cdot \nabla \boldsymbol{v}^{n+1} \circ \mathcal{A}(t^{n+1}) \right) \notag \\ &\qquad = J^{n+1} \nabla \cdot \boldsymbol\sigma_F({\boldsymbol v}^{n+1}, {p}^{n+1}) \circ \mathcal{A}(t^{n+1}) \qquad \textrm{in}\; \hat{\Omega}_F, \\ &J^{n+1} \nabla \cdot {\boldsymbol{v}}^{n+1} = 0 \qquad \textrm{in}\; \hat{\Omega}_F, \\ & \alpha \boldsymbol \xi^{n+1} - {J}^{n+1} \boldsymbol \sigma_F^{n+1} (\boldsymbol{F}^{n+1})^{-T} \boldsymbol n_F^{n+1} = \alpha \boldsymbol{v}^{n+1}\circ \mathcal{A}(t^{n+1}) - {J}^n \boldsymbol \sigma_F^n (\boldsymbol{F}^n)^{-T} \boldsymbol n_F^n \qquad \textrm{on} \; \hat{\Gamma}. \label{scheme2} \end{align} We note that the continuous formulation of the fluid sub-problem is written on the reference domain due to the use of different time discretizations of the computational domain for different terms in the equation. However, the deformed domains, as described in~\eqref{Fweak}, are considered in practice. \subsection{Weak formulation of the semi-discrete partitioned scheme} We define the following bilinear forms associated with the fluid problem: \begin{gather*} a_F^n(\boldsymbol v, \boldsymbol \phi) = 2 \mu_F \int_{\Omega_F(t^n)} \boldsymbol D(\boldsymbol v) : \boldsymbol D (\boldsymbol \phi) d \boldsymbol x, \qquad b_F^n(p, \boldsymbol \phi) = \int_{\Omega_F (t^n)} p \nabla \cdot \boldsymbol \phi d \boldsymbol x, \end{gather*} for all $\boldsymbol v, \boldsymbol \phi \in V^F(t^n)$ and $p \in Q^F(t^n)$. 
To simplify the notation moving forward, we will write $$ \int_{\Omega(t^{m})} \boldsymbol v^{n} \qquad \textrm{instead of} \quad \int_{\Omega(t^{m})} \boldsymbol v^{n} \circ \mathcal{A}(t^{n}) \circ \mathcal{A}^{-1}(t^{m}) $$ whenever we need to integrate $\boldsymbol v^n$ on a domain $\Omega(t^{m})$, for $m \neq n$. The weak formulation of the fluid and structure sub-problems is given as: \noindent \textbf{Structure sub-problem:} Find $\boldsymbol \xi^{n+1} \in V^S$ and $\boldsymbol \eta^{n+1} \in V^S$, where $\boldsymbol \xi^{n+1} = d_t \boldsymbol \eta^{n+1}$, such that for all $\boldsymbol \zeta \in V^S$ we have \begin{align} \rho_S \int_{\hat{\Omega}_S} d_t \boldsymbol \xi^{n+1} \cdot \boldsymbol \zeta d \boldsymbol x + a_S(\boldsymbol \eta^{n+1}, \boldsymbol \zeta) +\alpha \int_{\hat{\Gamma}} (\boldsymbol \xi^{n+1} - \boldsymbol v^n ) \cdot \boldsymbol \zeta d \boldsymbol x = -\int_{\hat\Gamma} {J}^n \boldsymbol \sigma_F^n (\boldsymbol{F}^n)^{-T} \boldsymbol n_F^n \cdot \boldsymbol \zeta d\boldsymbol x. 
\label{Sweak} \end{align} \noindent \textbf{Fluid sub-problem:} Find $\boldsymbol v^{n+1} \in V^F(t^{n+1})$ and $p^{n+1} \in Q^F(t^{n+1})$ such that for all $\boldsymbol \phi \in V^F(t^{n+1})$ and $\psi \in Q^F(t^{n+1})$ we have \begin{align} &\rho_F \int_{\Omega_F(t^{n})} \frac{ {\boldsymbol{v}}^{n+1}-{\boldsymbol{v}}^{n}}{\Delta t} \cdot {\boldsymbol \phi} d \boldsymbol x +\rho_F \int_{\Omega_F(t^{n+\frac12})} \left(({\boldsymbol{v}}^{n}-{\boldsymbol{w}}^{n+1}) \cdot \nabla \right) {\boldsymbol{v}}^{n+1} \cdot {\boldsymbol \phi} d \boldsymbol x +a_F^{n+1}({\boldsymbol v}^{n+1}, \boldsymbol \phi) \notag \\ & \quad -b_F^{n+1}({p}^{n+1}, \boldsymbol \phi) +b_F^{n+1}(\psi, {\boldsymbol v}^{n+1}) + \alpha \int_{\Gamma(t^{n+1})}(\boldsymbol v^{n+1} - \boldsymbol \xi^{n+1} ) \cdot \boldsymbol \phi d \boldsymbol x \notag \\ & \quad =\int_{\Gamma(t^{n})} \boldsymbol \sigma_F(\boldsymbol v^n, p^n) \boldsymbol n_F^n \cdot {\boldsymbol \phi} d \boldsymbol x +\int_{\Gamma_F^{in} \cup \Gamma_F^{out}} \boldsymbol \sigma_F (\boldsymbol v^{n+1}, p^{n+1}) \boldsymbol n_F^{n+1} \cdot \boldsymbol \phi d \boldsymbol x. \label{Fweak} \end{align} We note that the boundary conditions in the fluid sub-problem are not specified. Conditions~\eqref{inlet}-\eqref{outlet} will be used in numerical simulations in Section~\ref{numerics}, while conditions~\eqref{inlet2}-\eqref{dp3} will be used in stability analysis in Section~\ref{StabAn}. 
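Before turning to the analysis, the mechanics of the splitting, and the energy bound proved in the next section, can be exercised on a lumped (0-D) analogue: a single damped ``fluid'' degree of freedom coupled to a mass-spring ``solid'' through scalar stand-ins for the interface tractions. This is only an illustrative caricature with hypothetical parameters, not the PDE scheme itself:

```python
import numpy as np

# Lumped (0-D) caricature of the partitioned Robin scheme: the "fluid" is a
# single damped degree of freedom v, the "solid" a mass-spring (eta, xi), and
# s_f plays the role of the (lagged) fluid interface traction.  All parameter
# values are hypothetical.  The augmented energy E^n + N1^n, mimicking the
# quantities in the stability theorem, should be non-increasing.
m_f, d = 1.0, 0.5          # fluid "mass" and viscosity
m_s, k = 0.8, 4.0          # solid mass and stiffness
alpha, dt = 2.0, 0.4       # Robin parameter and a deliberately large step

v, xi, eta, s_f = 1.0, 0.0, 0.3, 0.0   # initial data
energies = []
for n in range(200):
    E = 0.5 * m_f * v**2 + 0.5 * m_s * xi**2 + 0.5 * k * eta**2
    N1 = 0.5 * alpha * dt * v**2 + 0.5 * dt / alpha * s_f**2
    energies.append(E + N1)
    # --- solid step: backward Euler with the Robin condition (cc1) ---
    xi_new = (m_s * xi / dt - k * eta + alpha * v - s_f) / (m_s / dt + k * dt + alpha)
    eta = eta + dt * xi_new
    # --- fluid step: backward Euler with the Robin condition (cc2) ---
    v_new = (m_f * v / dt + alpha * xi_new + s_f) / (m_f / dt + d + alpha)
    s_f = s_f + alpha * (xi_new - v_new)   # update the lagged traction
    xi, v = xi_new, v_new

diffs = np.diff(energies)
print(diffs.max())  # <= 0 up to round-off: the augmented energy decays
```

Even with the deliberately large step, the augmented energy decays monotonically, mirroring the unconditional stability of the splitting.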
\section{Stability analysis}~\label{StabAn} Let $\mathcal{E}^n$ denote the sum of the kinetic energy of the fluid and the kinetic and elastic energy of the solid, given by $$ \mathcal{E}^n= \frac{\rho_F}{2} \| \boldsymbol v^{n} \|^2_{L^2(\Omega_F(t^n))} + \frac{\rho_S}{2} \| \boldsymbol \xi^{n} \|^2_{L^2(\hat{\Omega}_S)} +\frac12 \| \boldsymbol \eta^{n}\|^2_S, $$ let $\mathcal{D}^n$ denote the fluid viscous dissipation, given by $$ \mathcal{D}^n = \mu_F \Delta t \sum_{k=1}^{n} \| \boldsymbol D(\boldsymbol v^{k}) \|^2_{L^2(\Omega_F(t^{k}))}, $$ and let $\mathcal{N}_1^n$ and $\mathcal{N}_2^n$ denote terms due to numerical dissipation, given by \begin{align*} & \mathcal{N}_1^n = \frac{\alpha \Delta t}{2} \| \boldsymbol v^{n}\|^2_{L^2(\hat{\Gamma})} +\frac{ \Delta t}{2 \alpha} \| {J}^n \boldsymbol \sigma_F^n (\boldsymbol{F}^n)^{-T} \boldsymbol n_F^n \|^2_{L^2(\hat\Gamma)} , \\ & \mathcal{N}_2^n = \frac{\rho_S}{2} \sum_{k=0}^{n-1} \| \boldsymbol \xi^{k+1} - \boldsymbol \xi^{k} \|^2_{L^2(\hat{\Omega}_S)} +\frac12 \sum_{k=0}^{n-1} \| \boldsymbol \eta^{k+1} - \boldsymbol \eta^k\|^2_S + \frac{\rho_F}{2} \sum_{k=0}^{n-1} \| {\boldsymbol v}^{k+1} - \boldsymbol v^{k} \|^2_{L^2(\Omega_F(t^{k}))} \\ &\qquad +\frac{\alpha \Delta t}{2} \sum_{k=0}^{n-1} \| \boldsymbol \xi^{k+1}-\boldsymbol v^k \|^2_{L^2(\hat{\Gamma})}. \end{align*} The stability of method~\eqref{Sweak}-\eqref{Fweak} is presented in the following theorem. \begin{theorem} Let $(\boldsymbol \xi^n, \boldsymbol \eta^n, \boldsymbol v^n, p^n)$ be the solution of~\eqref{Sweak}-\eqref{Fweak}. Assume that boundary conditions~\eqref{inlet2}-\eqref{dp3} are imposed. Then, the following a priori energy estimate holds, where $C_P$ and $C_K$ denote the constants in the Poincar\'e and Korn inequalities: \begin{align} & \mathcal{E}^{N}+\mathcal{D}^N+\mathcal{N}_1^N+\mathcal{N}_2^N \leq \mathcal{E}^0+\mathcal{N}_1^0 + \frac{\Delta t C_P^2 C_K^2 }{2 \mu_F} \| p_{in} \|^2_{L^2(\Gamma_F^{in})} + \frac{\Delta t C_P^2 C_K^2}{2 \mu_F} \| p_{out} \|^2_{L^2(\Gamma_F^{out})}. 
\label{energy_inequality} \end{align} \end{theorem} \begin{proof} Take $\boldsymbol \zeta =\Delta t \boldsymbol \xi^{n+1}$ in~\eqref{Sweak} and $\boldsymbol \phi = \Delta t\boldsymbol v^{n+1}, \psi =\Delta t p^{n+1}$ in~\eqref{Fweak}. Adding the equations and recasting the interface integrals in the fluid problem on the reference domain, we have \begin{align} & \frac{\rho_S}{2} \left( \| \boldsymbol \xi^{n+1} \|^2_{L^2(\hat{\Omega}_S)} - \| \boldsymbol \xi^{n} \|^2_{L^2(\hat{\Omega}_S)} + \| \boldsymbol \xi^{n+1} - \boldsymbol \xi^{n} \|^2_{L^2(\hat{\Omega}_S)} \right) +\frac12 \left( \| \boldsymbol \eta^{n+1}\|^2_S - \| \boldsymbol \eta^{n}\|^2_S + \| \boldsymbol \eta^{n+1} - \boldsymbol \eta^n\|^2_S \right) \notag \\ & +\rho_F \int_{\Omega_F(t^{n})} ({ \boldsymbol{v}}^{n+1}-{\boldsymbol{v}}^{n}) \cdot {\boldsymbol v}^{n+1} d \boldsymbol x +\rho_F \Delta t \int_{\Omega_F( t^{n+\frac12})} \left( ({\boldsymbol{v}}^{n}-{\boldsymbol{w}}^{n+1} ) \cdot \nabla \right) {\boldsymbol{v}}^{n+1} \cdot {\boldsymbol v}^{n+1} d \boldsymbol x \notag \\ & +2\mu_F \Delta t \| \boldsymbol D(\boldsymbol v^{n+1}) \|^2_{L^2(\Omega_F(t^{n+1}))} +\frac{\alpha \Delta t}{2} \left( \| \boldsymbol v^{n+1} \|^2_{L^2(\hat\Gamma)} -\| \boldsymbol v^n \|^2_{L^2(\hat\Gamma)} \right) \notag \\ & +\frac{\alpha \Delta t}{2} \left( \| \boldsymbol \xi^{n+1}-\boldsymbol v^n \|^2_{L^2(\hat\Gamma)} +\| \boldsymbol v^{n+1} - \boldsymbol \xi^{n+1} \|^2_{L^2(\hat \Gamma)} \right) \notag \\ & = \Delta t \int_{\hat\Gamma} {J}^n \boldsymbol \sigma_F^n (\boldsymbol{F}^n)^{-T} \boldsymbol n_F^n \cdot (\boldsymbol v^{n+1} - \boldsymbol \xi^{n+1}) d \boldsymbol x -\Delta t \int_{\Gamma_F^{in}} p_{in} \boldsymbol v^{n+1} \cdot \boldsymbol n_F^{n+1} d x \notag \\ & -\Delta t \int_{\Gamma_F^{out}} p_{out} \boldsymbol v^{n+1} \cdot \boldsymbol n_F^{n+1} dx +\frac{\rho_F \Delta t}{2} \int_{\Gamma_F^{in} \cup \Gamma_F^{out}} |\boldsymbol v^{n+1} |^2 \boldsymbol v^{n+1} \cdot \boldsymbol{n}_F^{n+1} dx. 
\label{stab1} \end{align} We transform the integral containing the time derivative of the fluid velocity to the reference domain as follows: \begin{gather*} \rho_F \int_{\Omega_F(t^{n})} ( {\boldsymbol{v}}^{n+1}-{\boldsymbol{v}}^{n}) \cdot {\boldsymbol v}^{n+1} d \boldsymbol x =\rho_F \int_{\hat{\Omega}_F} J^{n} \left( \boldsymbol{v}^{n+1}-{\boldsymbol{v}}^{n} \right) \cdot \boldsymbol v^{n+1} d \boldsymbol x. \end{gather*} Using the identity \begin{align*} &\int_{\hat{\Omega}_F} J^{n} \left( \boldsymbol{v}^{n+1}-{\boldsymbol{v}}^{n} \right) \cdot \boldsymbol v^{n+1} d \boldsymbol x \notag \\ & \quad = \frac12 \int_{\hat{\Omega}_F} \left(J^{n+1} |\boldsymbol{v}^{n+1}|^2 - J^n |\boldsymbol{v}^{n} |^2 \right) d \boldsymbol x -\frac12 \int_{\hat{\Omega}_F} \left(J^{n+1}-J^n \right) |\boldsymbol{v}^{n+1} |^2 d \boldsymbol x \notag \\ & \quad +\frac{1}{2 } \int_{\hat{\Omega}_F} J^n |\boldsymbol{v}^{n+1} -\boldsymbol{v}^{n} |^2 d \boldsymbol x, \end{align*} we obtain \begin{align} \rho_F \int_{\Omega_F(t^{n})} \left( {\boldsymbol{v}}^{n+1}-{\boldsymbol{v}}^{n} \right) \cdot {\boldsymbol v}^{n+1} d \boldsymbol x =&\frac{\rho_F}{2 } \left( \|\boldsymbol v^{n+1} \|^2_{L^2(\Omega_F(t^{n+1}))} -\|\boldsymbol v^{n} \|^2_{L^2(\Omega_F(t^{n}))}+\|{\boldsymbol v}^{n+1}-\boldsymbol v^{n} \|^2_{L^2(\Omega_F(t^{n}))} \right) \notag \\ &-\frac{\rho_F}{2} \int_{\hat{\Omega}_F} \left( J^{n+1}-J^n \right) |\boldsymbol{v}^{n+1}|^2 d \boldsymbol x. \label{kineticTerm} \end{align} To handle the last term in~\eqref{kineticTerm}, we use the~\emph{Geometric Conservation Law}~\cite{boffi2004stability,nobile1999stability,lukavcova2013kinematic,donea2004arbitrary} given as \begin{align*} \| \boldsymbol v^{n+1} \|^2_{L^2(\Omega_F(t^{n+1}))} -\| \boldsymbol v^{n+1} \|^2_{L^2(\Omega_F(t^{n}))} = \int_{t^n}^{t^{n+1}}\left( \int_{\Omega_F(t)}|\boldsymbol v^{n+1}|^2 \nabla \cdot \boldsymbol w d \boldsymbol x \right) dt. 
\end{align*} Since we consider a linear-in-time variation of the displacement of the points of the fluid domain, the domain velocity is constant on the time interval $[t^n, t^{n+1}]$. In that case, it has been shown in~\cite{lesoinne1996geometric} that the Geometric Conservation Law is exactly satisfied if the midpoint formula is used for time integration in two dimensions, yielding \begin{align} \| \boldsymbol v^{n+1} \|^2_{L^2(\Omega_F(t^{n+1}))} -\| \boldsymbol v^{n+1} \|^2_{L^2(\Omega_F(t^{n}))} = \Delta t \int_{\Omega_F(t^{n+\frac12})}|\boldsymbol v^{n+1}|^2 \nabla \cdot \boldsymbol w^{n+\frac12} d \boldsymbol x. \label{GCL} \end{align} As in~\cite{nobile2008effective}, we note that since the domain velocity is piecewise constant in time, we have $\boldsymbol w^{n+\frac12}=\boldsymbol w^{n+1}$. Therefore, equation~\eqref{kineticTerm} can be written as \begin{align} \rho_F \int_{\Omega_F(t^{n})} \left( {\boldsymbol{v}}^{n+1}-{\boldsymbol{v}}^{n} \right) \cdot {\boldsymbol v}^{n+1} d \boldsymbol x =&\frac{\rho_F}{2 } \left( \|\boldsymbol v^{n+1} \|^2_{L^2(\Omega_F(t^{n+1}))} -\|\boldsymbol v^{n} \|^2_{L^2(\Omega_F(t^{n}))}+\|{\boldsymbol v}^{n+1}-\boldsymbol v^{n} \|^2_{L^2(\Omega_F(t^{n}))} \right) \notag \\ &-\frac{\rho_F \Delta t}{2}\int_{\Omega_F(t^{n+\frac12})}|\boldsymbol v^{n+1}|^2 \nabla \cdot \boldsymbol w^{n+1} d \boldsymbol x. \label{timeterm} \end{align} For the advection term, we proceed as follows: \begin{align} \rho_F \Delta t \int_{\Omega_F(t^{n+\frac12})} \left(({\boldsymbol{v}}^{n}-{\boldsymbol{w}}^{n+1}) \cdot \nabla \right) {\boldsymbol{v}}^{n+1} \cdot {\boldsymbol{v}}^{n+1} d \boldsymbol x = \frac{\rho_F \Delta t}{2} \int_{\Omega_F(t^{n+\frac12})} \nabla \cdot {\boldsymbol w}^{n+1} |{\boldsymbol v}^{n+1} |^2 d\boldsymbol x \notag \\ + \frac{\rho_F \Delta t}{2} \int_{\Gamma_F^{in} \cup \Gamma_F^{out}} |\boldsymbol v^{n+1} |^2 \boldsymbol v^{n+1} \cdot \boldsymbol{n}_F^{n+1} dx. 
\label{advection_simplified} \end{align} To handle the interface term in~\eqref{stab1}, using~\eqref{scheme2}, we have \begin{align} & \Delta t \int_{\hat\Gamma} {J}^n \boldsymbol \sigma_F^n (\boldsymbol{F}^n)^{-T} \boldsymbol n_F^n \cdot (\boldsymbol v^{n+1} - \boldsymbol \xi^{n+1}) d \boldsymbol x \notag \\ & =\frac{ \Delta t}{\alpha} \int_{\hat\Gamma} {J}^n \boldsymbol \sigma_F^n (\boldsymbol{F}^n)^{-T} \boldsymbol n_F^n \cdot \left( {J}^n \boldsymbol \sigma_F^n (\boldsymbol{F}^n)^{-T} \boldsymbol n_F^n -{J}^{n+1} \boldsymbol \sigma_F^{n+1} (\boldsymbol{F}^{n+1})^{-T} \boldsymbol n_F^{n+1} \right) \notag \\ & = \frac{ \Delta t}{2 \alpha} \left( \| {J}^n \boldsymbol \sigma_F^n (\boldsymbol{F}^n)^{-T} \boldsymbol n_F^n \|^2_{L^2(\hat\Gamma)} - \| {J}^{n+1} \boldsymbol \sigma_F^{n+1} (\boldsymbol{F}^{n+1})^{-T} \boldsymbol n_F^{n+1} \|^2_{L^2(\hat\Gamma)} \right) \notag \\ & \qquad +\frac{ \Delta t}{2 \alpha} \| {J}^n \boldsymbol \sigma_F^n (\boldsymbol{F}^n)^{-T} \boldsymbol n_F^n-{J}^{n+1} \boldsymbol \sigma_F^{n+1} (\boldsymbol{F}^{n+1})^{-T} \boldsymbol n_F^{n+1} \|^2_{L^2(\hat\Gamma) } \notag \\ &= \frac{ \Delta t}{2 \alpha} \left( \| {J}^n \boldsymbol \sigma_F^n (\boldsymbol{F}^n)^{-T} \boldsymbol n_F^n \|^2_{L^2(\hat\Gamma)} - \| {J}^{n+1} \boldsymbol \sigma_F^{n+1} (\boldsymbol{F}^{n+1})^{-T} \boldsymbol n_F^{n+1} \|^2_{L^2(\hat\Gamma)} \right) \notag \\ & \qquad +\frac{ \alpha \Delta t }{2 } \| \boldsymbol v^{n+1} - \boldsymbol \xi^{n+1} \|^2_{L^2(\hat\Gamma)}. 
\label{energyB} \end{align} To estimate the forcing terms, we use the Cauchy-Schwarz, Young's, Poincar\'e and Korn's inequalities as follows: \begin{align} & -\Delta t \int_{\Gamma_F^{in}} p_{in} \boldsymbol v^{n+1} \cdot \boldsymbol n_F -\Delta t \int_{\Gamma_F^{out}} p_{out} \boldsymbol v^{n+1} \cdot \boldsymbol n_F \notag \\ & \leq \frac{\Delta t C_P^2 C_K^2 }{2 \mu_F} \| p_{in} \|^2_{L^2(\Gamma_F^{in})} + \frac{\Delta t C_P^2 C_K^2}{2 \mu_F} \| p_{out} \|^2_{L^2(\Gamma_F^{out})} +\Delta t \mu_F \| \boldsymbol D( \boldsymbol v^{n+1}) \|^2_{L^2(\Omega_F(t^{n+1}))}. \label{inequality} \end{align} Using~\eqref{timeterm}-\eqref{inequality} in~\eqref{stab1} and summing from $n=0$ to $N-1$ completes the proof. \end{proof} \begin{remark} As in~\cite{nobile2008effective,badia2009robin,badia2008fluid,gerardo2010analysis}, the method proposed here is developed using generalized Robin boundary conditions. However, in this work, generalized Robin boundary conditions are designed and discretized in a novel way, leading to an unconditionally stable scheme which does not require sub-iterations. In contrast to these previous works, where two combination parameters are introduced, here we have only one combination parameter, $\alpha$. This method also exhibits similarities to the method proposed in~\cite{burman2009stabilization}. In particular, the weak form of the partitioned scheme presented in this work is similar to the incomplete version of the explicit method presented in~\cite{burman2009stabilization}, which was obtained by enforcing coupling conditions using Nitsche's penalty method. However, only conditional stability was proved for the method presented in~\cite{burman2009stabilization} after a stabilization term was added.
\end{remark} \section{Convergence analysis}\label{conv} To analyze the convergence of the fully discrete proposed method, we assume that the fluid is described by the time-dependent Stokes equations, that the structure deformation is infinitesimal, and that the fluid-structure interaction is linear. These assumptions are common in the analysis of partitioned schemes for FSI problems, as the main difficulties related to the splitting between the fluid and structure sub-problems are still present~\cite{fernandez2015generalized,bukavc2016stability,burman2009stabilization,banks2014analysis2}. Therefore, to simplify the notation, in the following we omit the hats. The resulting numerical method is given by: \noindent \textbf{Structure sub-problem:} Find ${\boldsymbol \eta}^{n+1}$ and $\boldsymbol \xi^{n+1} = d_t {\boldsymbol \eta}^{n+1}$ such that \begin{align} & {\rho}_S d_t {\boldsymbol \xi}^{n+1} = {\nabla} \cdot \boldsymbol \sigma_S (\boldsymbol \eta^{n+1}) & \textrm{in}\; {\Omega}_S, \label{Ssolid}\\ & \alpha {\boldsymbol \xi}^{n+1} + \boldsymbol \sigma_S ({\boldsymbol \eta}^{n+1}) {\boldsymbol n}_S=\alpha{\boldsymbol{v}}^{n} - \boldsymbol \sigma_F (\boldsymbol v^{n}, p^{n}) \boldsymbol n_F & \textrm{on} \; {\Gamma}. \end{align} \noindent \textbf{Fluid sub-problem:} Find $\boldsymbol v^{n+1}$ and $p^{n+1}$ such that \begin{align} & \rho_F d_t \boldsymbol{v}^{n+1} = \nabla \cdot \boldsymbol\sigma_F({\boldsymbol v}^{n+1}, {p}^{n+1}) & \textrm{in}\; {\Omega}_F, \\ & \nabla \cdot {\boldsymbol{v}}^{n+1} = 0 & \textrm{in}\; {\Omega}_F, \\ & \alpha \boldsymbol{v}^{n+1} + \boldsymbol \sigma_F (\boldsymbol v^{n+1}, p^{n+1}) \boldsymbol n_F = \alpha \boldsymbol \xi^{n+1} + {\boldsymbol \sigma}_F(\boldsymbol v^{n}, p^{n}) {\boldsymbol n}_F & \textrm{on} \; {\Gamma}. \label{Sfluid} \end{align} To discretize~\eqref{Ssolid}-\eqref{Sfluid} in space, we use the finite element method.
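To see why the two Robin conditions above are consistent with the coupling conditions $\boldsymbol v = \boldsymbol \xi$ and $\boldsymbol \sigma_F \boldsymbol n_F + \boldsymbol \sigma_S \boldsymbol n_S = \boldsymbol 0$ on $\Gamma$, one can check that any fixed point of the two updates (a state in which the superscripts $n$ and $n+1$ coincide) satisfies both conditions. The following Python sketch verifies this with scalar stand-ins for the interface traces; the variable names \texttt{v}, \texttt{xi}, \texttt{sF}, \texttt{sS} are illustrative and not part of the scheme itself.

```python
import numpy as np

# Scalar stand-ins for the interface traces (illustrative names):
# v ~ fluid velocity trace, sF ~ sigma_F n_F, sS ~ sigma_S n_S.
alpha = 10.0                      # combination parameter alpha > 0
rng = np.random.default_rng(0)
v, sF = rng.normal(size=2)        # arbitrary fixed-point fluid data

# Fluid Robin condition with superscripts n and n+1 identified:
#   alpha*v + sF = alpha*xi + sF
xi = v + (sF - sF) / alpha        # the lagged traction cancels, so xi = v

# Structure Robin condition with superscripts identified:
#   alpha*xi + sS = alpha*v - sF
sS = alpha * (v - xi) - sF

assert np.isclose(xi, v)          # kinematic condition v = xi recovered
assert np.isclose(sF + sS, 0.0)   # dynamic condition: tractions balance
```

The fluid Robin condition enforces the kinematic condition at the fixed point, after which the structure Robin condition reduces to the balance of tractions; this is the algebraic reason the splitting introduces no consistency error in the coupling conditions themselves.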
The finite element spaces are defined as the subspaces $V^F_h \subset V^F, Q^F_h \subset Q^F$ and $V^S_h \subset V^S$ based on a conforming finite element triangulation with maximum triangle diameter $h$. We assume that spaces $V^F_h$ and $Q^F_h$ are \textit{inf-sup} stable and that the fluid boundary conditions are~\eqref{inlet}-\eqref{outlet}. The weak formulation of the scheme is given as follows: \noindent \textbf{Structure sub-problem:} Find $\boldsymbol \xi_h^{n+1} \in V^S_h$ and $\boldsymbol \eta_h^{n+1} \in V^S_h$, where $\boldsymbol \xi^{n+1}_h = d_t \boldsymbol \eta_h^{n+1}$, such that for all $\boldsymbol \zeta_h \in V^S_h$ we have \begin{align} \rho_S \int_{{\Omega}_S} d_t \boldsymbol \xi_h^{n+1} \cdot \boldsymbol \zeta_h d \boldsymbol x + a_S(\boldsymbol \eta_h^{n+1}, \boldsymbol \zeta_h) +\alpha \int_{{\Gamma}} (\boldsymbol \xi_h^{n+1} - \boldsymbol v_h^n ) \cdot \boldsymbol \zeta_h d \boldsymbol x = -\int_{\Gamma} \boldsymbol \sigma_F (\boldsymbol v_h^{n}, p_h^{n}) \boldsymbol n_F \cdot \boldsymbol \zeta_h d\boldsymbol x. \label{SSweak} \end{align} \noindent \textbf{Fluid sub-problem:} Find $\boldsymbol v_h^{n+1} \in V^F_h$ and $p_h^{n+1} \in Q^F_h$ such that for all $\boldsymbol \phi_h \in V^F_h$ and $\psi_h \in Q^F_h$ we have \begin{align} &\rho_F \int_{\Omega_F} d_t \boldsymbol{v}_h^{n+1} \cdot {\boldsymbol \phi_h} d \boldsymbol x +a_F({\boldsymbol v}_h^{n+1}, \boldsymbol \phi_h) -b_F({p}^{n+1}_h, \boldsymbol \phi_h) +b_F(\psi_h, {\boldsymbol v}^{n+1}_h) + \alpha \int_{\Gamma}(\boldsymbol v_h^{n+1} - \boldsymbol \xi_h^{n+1} ) \cdot \boldsymbol \phi_h d \boldsymbol x \notag \\ & \quad =\int_{\Gamma} {\boldsymbol \sigma}_F({\boldsymbol v}_h^{n}, {p}_h^{n}) {\boldsymbol n}_F \cdot \boldsymbol \phi_h d \boldsymbol x -\int_{\Gamma_F^{in} } p_{in}(t) \boldsymbol \phi_h \cdot \boldsymbol n_F dx -\int_{ \Gamma_F^{out}} p_{out}(t) \boldsymbol \phi_h \cdot \boldsymbol n_F dx. 
\label{SFweak} \end{align} For spatial discretization, we use Lagrangian finite elements of polynomial degree $k$ for all variables except for the fluid pressure, for which we use elements of degree $r<k$. Assume that the continuous solution satisfies the following regularity assumptions: \begin{align} &\boldsymbol v \in L^{\infty}(0,T; H^{k+1}(\Omega_F)) \cap H^1(0,T; H^{k+1}(\Omega_F))\cap H^2(0,T; L^2(\Omega_F)), \label{reg1} \\ &\boldsymbol v|_{\Gamma} \in L^{\infty} (0,T; H^{k+1}(\Gamma)) \cap H^1(0,T; H^{k+1}(\Gamma)) , \\ & p \in L^2(0,T; H^{r+1}(\Omega_F)),\;\; p|_{\Gamma} \in H^{1}(0,T; L^2(\Gamma)), \\ &\boldsymbol \eta \in W^{1,\infty}(0,T; H^{k+1}(\Omega_S)) \cap H^2(0,T; H^{k+1}(\Omega_S))\cap H^3(0,T; L^2(\Omega_S)). \label{reg2} \end{align} Let $a \lesssim(\gtrsim) b$ denote that there exists a positive constant $C$, independent of $h$ and $\Delta t$, such that $a \leq(\geq) C b$. We introduce the following time-discrete norms: \begin{equation*} \|\boldsymbol \varphi\|_{L^2(0,T; X)} = \left(\Delta t \sum_{n=0}^{N-1} \|\boldsymbol \varphi^{n+1}\|^2_{X} \right)^{\frac12}, \quad \| \boldsymbol \varphi\|_{L^{\infty}(0,T; X)} = \max_{0 \le n \le N} \|\boldsymbol \varphi^n \|_{X}, \label{tdiscnorm} \end{equation*} where $X \in \{H^k(\Omega_F), H^k(\Omega_S), H^k(\Gamma), S\}$. Note that these norms are equivalent to the continuous norms since we use piecewise constant approximations in time. Furthermore, the following inequality holds: \begin{equation*} \Delta{t}\sum_{n=1}^{N-1} \| d_t\boldsymbol \varphi^{n+1} \|^2_{X} \lesssim \| \partial_t\boldsymbol \varphi \|^2_{L^2(0,T; X)}. \label{ineq} \end{equation*} Let $P_h$ be the Lagrangian interpolation operator onto $V^S_h.$ Then, $I_h := P_h|_{\Gamma}$ is a Lagrangian interpolation operator on $\Gamma$.
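As a concrete illustration of these time-discrete norms (a minimal numerical sketch with the illustrative choice $\boldsymbol \varphi(t) = t$ and $X = \mathbb{R}$; not part of the analysis), the discrete $L^2(0,T;X)$ norm agrees with its continuous counterpart up to $O(\Delta t)$:

```python
import numpy as np

# phi(t) = t on [0, T] with X = R, so ||phi^n||_X = |t^n| (illustrative choice).
T, N = 1.0, 1000
dt = T / N
t = np.linspace(0.0, T, N + 1)       # grid t^0, ..., t^N

# Time-discrete norms as defined in the text:
#   ||phi||_{L2(0,T;X)}   = ( dt * sum_{n=0}^{N-1} ||phi^{n+1}||_X^2 )^{1/2}
#   ||phi||_{Linf(0,T;X)} = max_{0 <= n <= N} ||phi^n||_X
l2_disc = np.sqrt(dt * np.sum(t[1:] ** 2))
linf_disc = np.max(np.abs(t))

l2_exact = (1.0 / 3.0) ** 0.5        # continuous L^2(0,1) norm of phi(t) = t
assert abs(l2_disc - l2_exact) < 1e-3    # first-order agreement in dt
assert abs(linf_disc - 1.0) < 1e-12
```

The discrete $L^2$ norm here is a right-endpoint quadrature of the continuous one, which is first-order accurate in $\Delta t$ for smooth $\boldsymbol \varphi$.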
As in~\cite{bukavc2016stability,Fernandez2012incremental}, we introduce a Stokes-like projection operator $(S_h, R_h^F): V^F \rightarrow V^F_h \times Q_h^F$, defined for all $\boldsymbol v \in V^F$ by \begin{align} & (S_h \boldsymbol v, R_h^F \boldsymbol v) \in V^F_h \times Q^F_h, \\ & (S_h \boldsymbol v)|_{\Gamma} = I_h (\boldsymbol v|_{\Gamma}), \\ & a_F (S_h \boldsymbol v, \boldsymbol \varphi_h) -b_F (R_h^F \boldsymbol v, \boldsymbol \varphi_h) = a_F (\boldsymbol v, \boldsymbol \varphi_h), \; \forall \boldsymbol \varphi_h \in V^F_h \; \textrm{such that} \; \boldsymbol \varphi_h|_{\Gamma}=0, \\ &b_F( q, S_h \boldsymbol v) = 0, \quad \forall q \in Q^F_h. \label{press_proj} \end{align} The projection operators $S_h$ and $I_h$ satisfy the following approximation properties (see~\cite{ciarlet1978finite,bukavc2012fluid}): \begin{align} & \| \boldsymbol D( \boldsymbol v - S_h \boldsymbol v) \|_{L^2(\Omega_F)} \lesssim h^k \| \boldsymbol v \|_{H^{k+1}(\Omega_F)} \quad \textrm{for all} \; \boldsymbol v \in V^F, \label{app1} \\ & \| \boldsymbol \xi - I_h \boldsymbol \xi \|_{L^2(\Gamma)} + h \| \boldsymbol \xi - I_h \boldsymbol \xi \|_{H^1(\Gamma)} \lesssim h^{k+1} \| \boldsymbol \xi \|_{H^{k+1}(\Gamma)} \quad \textrm{for all} \; \boldsymbol \xi \in V^S. \end{align} Let $\Pi_h$ be a projection operator onto $Q_h^F$ such that \begin{equation} \| p - \Pi_h p \|_{L^2(\Omega_F)} \lesssim h^{r+1} \| p \|_{H^{r+1}(\Omega_F)}, \quad \textrm{for all} \; p \in Q^F. \end{equation} Let $R_h$ be the Ritz projector onto $V_h^S$ such that for all $\boldsymbol \eta \in V^S$, \begin{equation} a_S(\boldsymbol \eta - R_h \boldsymbol \eta, \boldsymbol \chi_h) = 0 \quad \textrm{for all} \; \boldsymbol \chi_h \in V_h^S.
\label{Ritz} \end{equation} Then, the finite element theory for Ritz projections~\cite{ciarlet1978finite} gives \begin{equation} \| \boldsymbol \eta - R_h \boldsymbol \eta \|_{S} \lesssim h^{k} \| \boldsymbol \eta \|_{H^{k+1}(\Omega_S)} \quad \textrm{for all} \; \boldsymbol \eta \in V^S. \label{app2} \end{equation} In the following, in addition to standard inequalities~\cite{bukavc2012fluid}, we will also use the discrete trace-inverse inequality: For a triangulated domain $\Omega_F \subset \mathbb{R}^2$ there exists a positive constant $C_{TI}$, depending on the angles in the finite element mesh, such that \begin{gather} \| \boldsymbol v_h \|^2_{L^2(\Gamma)} \leq \frac{C_{TI} k^2}{h} \| \boldsymbol v_h \|^2_{L^2(\Omega_F)}, \label{traceinverse} \end{gather} for all $\boldsymbol v_h \in V^F_h.$ We assume that the continuous fluid velocity belongs to the space $V^{FD}=\{\boldsymbol v \in V^F | \; \nabla \cdot \boldsymbol v =0\}$. Since the test functions for the partitioned scheme do not satisfy the kinematic coupling condition, we start by deriving the monolithic variational formulation with the test functions in $V_h^S \times V_h^{F} \times Q^F_h$: Find $(\boldsymbol \xi^{n+1} =\partial_t \boldsymbol \eta^{n+1}, \boldsymbol v^{n+1}, p^{n+1}) \in V^{S} \times V^F \times Q^F$ with $\boldsymbol v^{n+1} = \boldsymbol \xi^{n+1}$ on $\Gamma$ such that for all $(\boldsymbol \zeta_h, \boldsymbol \phi_h) \in V_h^{S}\times V_h^F$ we have \begin{align} & \rho_F \int_{\Omega_F} \partial_t \boldsymbol v^{n+1} \cdot \boldsymbol \phi_h +a_F(\boldsymbol v^{n+1}, \boldsymbol \phi_h) -b_F(p^{n+1}, \boldsymbol \phi_h) + \rho_S \int_{\Omega_S} \partial_t \boldsymbol \xi^{n+1} \cdot \boldsymbol \zeta_h +a_S(\boldsymbol \eta^{n+1}, \boldsymbol \zeta_h) \notag \\ &= \int_{\Gamma} \boldsymbol \sigma_F (\boldsymbol v^{n+1},p^{n+1}) \boldsymbol n_F \cdot (\boldsymbol \phi_h - \boldsymbol \zeta_h) - \int_{\Gamma_{in}} p_{in}(t^{n+1}) \boldsymbol \phi_h \cdot \boldsymbol n - \int_{\Gamma_{out}}
p_{out}(t^{n+1}) \boldsymbol \phi_h \cdot \boldsymbol n. \label{monoweak} \end{align} Subtracting~\eqref{SSweak}-\eqref{SFweak} from~\eqref{monoweak}, we obtain the following error equation: \begin{align} & \rho_F \int_{\Omega_F} d_t( \boldsymbol v^{n+1}- \boldsymbol v_h^{n+1}) \cdot \boldsymbol \phi_h +a_F(\boldsymbol v^{n+1} - \boldsymbol v_h^{n+1}, \boldsymbol \phi_h) -b_F (p^{n+1}-p_h^{n+1}, \boldsymbol \phi_h) -b_F( \psi_h, \boldsymbol v_h^{n+1}) \notag \\ & \; + \rho_S \int_{\Omega_S} d_t( \boldsymbol \xi^{n+1}- \boldsymbol \xi_h^{n+1}) \cdot \boldsymbol \zeta_h +a_S(\boldsymbol \eta^{n+1} - \boldsymbol \eta_h^{n+1}, \boldsymbol \zeta_h) +\alpha \int_{\Gamma} (\boldsymbol \xi^{n+1} - \boldsymbol \xi_h^{n+1} - \boldsymbol v^{n}+\boldsymbol v_h^n) \cdot \boldsymbol \zeta_h d \boldsymbol x \notag \\ & \; +\alpha \int_{\Gamma}(\boldsymbol v^{n+1} - \boldsymbol v_h^{n+1}-\boldsymbol \xi^{n+1} + \boldsymbol \xi_h^{n+1}) \cdot \boldsymbol \phi_h d \boldsymbol x \notag \\ & = \int_{\Gamma} \boldsymbol \sigma_F (\boldsymbol v^{n} - \boldsymbol v_h^n,p^{n}-p_h^n) \boldsymbol n_F \cdot (\boldsymbol \phi_h - \boldsymbol \zeta_h) +\mathcal{R}_1 (\boldsymbol \phi_h, \boldsymbol \zeta_h), \label{erroreq} \end{align} for all $(\boldsymbol \zeta_h, \boldsymbol \phi_h, \psi_h) \in V_h^{S}\times V_h^F \times Q_h^F$, where, since $\boldsymbol v^{n+1}=\boldsymbol \xi^{n+1}$ on $\Gamma,$ \begin{align*} \mathcal{R}_1 (\boldsymbol \phi_h, \boldsymbol \zeta_h) &= \rho_F \int_{\Omega_F} (d_t \boldsymbol v^{n+1} - \partial_t \boldsymbol v^{n+1}) \cdot \boldsymbol \phi_h +\rho_S \int_{\Omega_S} (d_t \boldsymbol \xi^{n+1} - \partial_t \boldsymbol \xi^{n+1}) \cdot \boldsymbol \zeta_h \notag \\ & \; +\alpha \int_{\Gamma} (\boldsymbol v^{n+1} - \boldsymbol v^{n}) \cdot \boldsymbol \zeta_h d \boldsymbol x +\int_{\Gamma} \boldsymbol \sigma_F (\boldsymbol v^{n+1} - \boldsymbol v^n,p^{n+1}-p^n) \boldsymbol n_F \cdot (\boldsymbol \phi_h - \boldsymbol \zeta_h) . 
\end{align*} We split the error of the method as a sum of the approximation error, $\theta_r^{n+1}$, and the truncation error, $\delta_r^{n+1},$ for $r \in \{F,P,\eta, \xi \}$ as follows: \begin{align} \boldsymbol e_F^{n+1} & =\boldsymbol v^{n+1}-\boldsymbol v_h^{n+1} = (\boldsymbol v^{n+1}-S_h \boldsymbol v^{n+1})+(S_h \boldsymbol v^{n+1}-\boldsymbol v_h^{n+1}) = \boldsymbol \theta_F^{n+1}+\boldsymbol \delta_F^{n+1}, \label{error1}\\ e_P^{n+1} & =p^{n+1}-p_h^{n+1} = (p^{n+1}-\Pi_h p^{n+1})+(\Pi_h p^{n+1}-p_h^{n+1}) = \theta_P^{n+1}+\delta_P^{n+1}, \label{errorpom2}\\ \boldsymbol e_\eta^{n+1} & =\boldsymbol \eta^{n+1}-\boldsymbol \eta_h^{n+1} = (\boldsymbol \eta^{n+1}-R_h \boldsymbol \eta^{n+1})+(R_h \boldsymbol \eta^{n+1}-\boldsymbol \eta_h^{n+1}) = \boldsymbol \theta_{\eta}^{n+1}+ \boldsymbol \delta_{\eta}^{n+1},\\ \boldsymbol e_{\xi}^{n+1} & =\boldsymbol \xi^{n+1}-\boldsymbol \xi_h^{n+1} = (\boldsymbol \xi^{n+1}-P_h \boldsymbol \xi^{n+1})+(P_h \boldsymbol \xi^{n+1}-\boldsymbol \xi_h^{n+1}) = \boldsymbol \theta_{\xi}^{n+1}+\boldsymbol \delta_{\xi}^{n+1}. \label{error2} \end{align} The main result of this section is stated in the following theorem. \begin{theorem}\label{MainThm} Consider the solution $(\boldsymbol \xi_h, \boldsymbol \eta_h, \boldsymbol v_h, p_h)$ of~\eqref{SSweak}-\eqref{SFweak}, with discrete initial data given by $(\boldsymbol \xi_h^0, \boldsymbol \eta_h^0, \boldsymbol v_h^0, p_h^0) = (P_h \boldsymbol \xi^0, R_h \boldsymbol \eta^0, S_h \boldsymbol v^0, \Pi_h p^0)$. Assume that the exact solution satisfies assumptions~\eqref{reg1}-\eqref{reg2} and that the following inequality is satisfied: \begin{gather} \Delta t \leq \frac{\rho_F}{\alpha C_{TI} k^2} h. 
\label{CFLconv} \end{gather} Then, the following estimate holds: \begin{align} & \frac{\rho_F}{2} \|\boldsymbol e_F^{N} \|^2_{L^2(\Omega_F)} +\frac{\rho_S}{2} \|\boldsymbol e_\xi^{N} \|^2_{L^2(\Omega_S)} +\frac12 \| \boldsymbol e_{\eta}^{N} \|^2_{S} + \frac{\alpha \Delta t}{2} \| \boldsymbol e_F^{N} \|^2_{L^2(\Gamma)} + \mu_F \Delta t \sum_{n=0}^{N-1} \| \boldsymbol{D} ( \boldsymbol e_F^{n+1} ) \|^2_{L^2(\Omega_F)} \notag \\ & \lesssim e^T \left( h^{2k+2} \mathcal{A}_0 +h^{2r+2} \mathcal{A}_1 +h^{2k} \mathcal{A}_2 +\Delta t^2 h^{2k+2} \mathcal{A}_3 +\Delta t^2 \mathcal{A}_4 +\Delta t \mathcal{A}_{5} \right), \notag \end{align} where \begin{align} \mathcal{A}_0 & = \rho_S \| \boldsymbol \xi \|^2_{L^{\infty}(0,T; H^{k+1}(\Omega_S))} +\rho_S \| \partial_t \boldsymbol \xi \|^2_{L^2(0,T; H^{k+1}(\Omega_S))}, \notag \\ \mathcal{A}_1 & = \frac{1 }{\mu_F} \|p\|^2_{L^2(0,T; H^{r+1}(\Omega_F))}, \notag \\ \mathcal{A}_2 & = \rho_F \| \boldsymbol v \|^2_{L^{\infty}(0,T; H^{k+1}(\Omega_F))} + \| \boldsymbol \eta \|^2_{L^{\infty}(0,T; H^{k+1}(\Omega_S))} + \| \boldsymbol {\xi} \|^2_{L^2(0,T; H^{k+1}(\Omega_S))} \notag \\ & \; \;\; +\frac{ \rho_F^2}{\mu_F} \| \partial_t \boldsymbol v \|^2_{L^2(0,T; H^{k+1}(\Omega_F))} + \mu_F \| \boldsymbol v \|^2_{L^2(0,T; H^{k+1}(\Omega_F))}, \notag \\ \mathcal{A}_3 & = \left(\frac{\alpha^2 }{\mu_F}+\alpha \right) \| \partial_t \boldsymbol v \|^2_{L^2(0,T; H^{k+1}(\Gamma))}, \notag \\ \mathcal{A}_4 & = \frac{ \rho_F^2 }{ \mu_F} \| \partial_{tt} \boldsymbol v \|^2_{L^2(0,T; L^2(\Omega_F))} + \rho_S \| \partial_{tt} \boldsymbol \xi \|^2_{L^2(0,T; L^2(\Omega_S))} +\alpha \left(\frac{\alpha}{2 \mu_F}+1\right) \| \partial_t \boldsymbol v \|^2_{L^2(0,T; L^2(\Gamma))} \notag \\ & \;\;\; + \frac{1}{\alpha} \| \partial_t \boldsymbol \sigma_F \boldsymbol n_F \|^2_{L^2(0,T; L^2(\Gamma))} + \|\partial_{tt} \boldsymbol \eta\|^2_{L^2(0,T; S)}, \notag \\ \mathcal{A}_{5} & = \frac{1}{\alpha} \| \partial_t \boldsymbol \sigma_F \boldsymbol n_F \|^2_{L^2(0,T;
L^2(\Gamma))}. \notag \end{align} \end{theorem} \begin{proof} Rearranging the error equation~\eqref{erroreq}, using $\boldsymbol \theta_F^{n+1} = \boldsymbol \theta_{\xi}^{n+1}$ on $\Gamma$, and taking the property~\eqref{Ritz} of the Ritz projection operator into account, we obtain \begin{align} & \rho_F \int_{\Omega_F} d_t \boldsymbol \delta_F^{n+1} \cdot \boldsymbol \phi_h +a_F(\boldsymbol \delta_F^{n+1}, \boldsymbol \phi_h) -b_F (\delta_P^{n+1}, \boldsymbol \phi_h) -b_F( \psi_h, \boldsymbol v_h^{n+1}) + \rho_S \int_{\Omega_S} d_t \boldsymbol \delta_{\xi}^{n+1} \cdot \boldsymbol \zeta_h \notag \\ & \; +a_S( \boldsymbol \delta_{\eta}^{n+1}, \boldsymbol \zeta_h) +\alpha \int_{\Gamma} (\boldsymbol \delta_{\xi}^{n+1} - \boldsymbol \delta_F^{n}) \cdot \boldsymbol \zeta_h d \boldsymbol x +\alpha \int_{\Gamma}(\boldsymbol \delta_F^{n+1} -\boldsymbol \delta_{\xi}^{n+1}) \cdot \boldsymbol \phi_h d \boldsymbol x \notag \\ & = \int_{\Gamma} \boldsymbol \sigma_F (\boldsymbol e_F^{n} ,e_P^n) \boldsymbol n_F \cdot (\boldsymbol \phi_h - \boldsymbol \zeta_h) -\rho_F \int_{\Omega_F} d_t \boldsymbol
\theta_F^{n+1} \cdot \boldsymbol \phi_h -a_F(\boldsymbol \theta_F^{n+1}, \boldsymbol \phi_h) \notag \\ & +b_F (\theta_P^{n+1}, \boldsymbol \phi_h) -\rho_S \int_{\Omega_S} d_t \boldsymbol \theta_{\xi}^{n+1} \cdot \boldsymbol \zeta_h -\alpha \int_{\Gamma} (\boldsymbol \theta_{\xi}^{n+1} - \boldsymbol \theta_F^{n}) \cdot \boldsymbol \zeta_h d \boldsymbol x +\mathcal{R}_1 (\boldsymbol \phi_h, \boldsymbol \zeta_h). \label{error_s} \end{align} Let $\boldsymbol \phi_h = \Delta t\boldsymbol \delta_F^{n+1}, \boldsymbol \zeta_h =\Delta t \boldsymbol \delta_{\xi}^{n+1}$ and $\psi_h = \Delta t\delta_P^{n+1}$. Thanks to~\eqref{press_proj}, the pressure terms simplify as follows: \begin{gather*} - \Delta t b_F(\delta_P^{n+1}, \boldsymbol \delta_F^{n+1}) -\Delta t b_F(\delta_P^{n+1}, \boldsymbol v_h^{n+1}) = -\Delta t b_F(\delta_P^{n+1}, S_h \boldsymbol v^{n+1})= 0. \end{gather*} Equation~\eqref{error_s} now becomes \begin{align} & \frac{\rho_F}{2}\left( \|\boldsymbol \delta_F^{n+1} \|^2_{L^2(\Omega_F)} - \|\boldsymbol \delta_F^{n} \|^2_{L^2(\Omega_F)} + \|\boldsymbol \delta_F^{n+1}-\boldsymbol \delta_F^{n} \|^2_{L^2(\Omega_F)}\right) +2 \mu_F \Delta t \| {\boldsymbol D}( \boldsymbol \delta_F^{n+1})\|^2_{L^2(\Omega_F)} \notag \\ & \; +\frac{\rho_S}{2}\left( \|\boldsymbol \delta_\xi^{n+1} \|^2_{L^2(\Omega_S)} - \|\boldsymbol \delta_\xi^{n} \|^2_{L^2(\Omega_S)} + \|\boldsymbol \delta_\xi^{n+1}-\boldsymbol \delta_\xi^{n} \|^2_{L^2(\Omega_S)}\right) +\Delta t a_S( \boldsymbol \delta_{\eta}^{n+1}, \boldsymbol \delta_\xi^{n+1}) \notag \\ & \; + \frac{\alpha \Delta t }{2} \left( \| \boldsymbol \delta_{F}^{n+1} \|^2_{L^2(\Gamma)} - \| \boldsymbol \delta_{F}^{n} \|^2_{L^2(\Gamma)} + \| \boldsymbol \delta_{\xi}^{n+1} - \boldsymbol \delta_{F}^{n} \|^2_{L^2(\Gamma)} + \| \boldsymbol \delta_{F}^{n+1} - \boldsymbol \delta_{\xi}^{n+1} \|^2_{L^2(\Gamma)} \right) \notag \\ & = \Delta t \int_{\Gamma} \boldsymbol \sigma_F (\boldsymbol e_F^n, e_P^{n}) \boldsymbol n_F \cdot (\boldsymbol 
\delta_{F}^{n+1} - \boldsymbol \delta_{\xi}^{n+1} ) -\Delta t\rho_F \int_{\Omega_F} d_t \boldsymbol \theta_F^{n+1} \cdot \boldsymbol \delta_{F}^{n+1} \notag \\ & \; - \Delta t a_F( \boldsymbol \theta_F^{n+1}, \boldsymbol \delta_{F}^{n+1}) +\Delta t b_F(\theta_P^{n+1}, \boldsymbol \delta_{F}^{n+1}) -\Delta t \rho_S \int_{\Omega_S} d_t \boldsymbol \theta_{\xi}^{n+1} \cdot \boldsymbol \delta_{\xi}^{n+1} \notag \\ & \; -\alpha \Delta t \int_{\Gamma} (\boldsymbol \theta_{\xi}^{n+1} - \boldsymbol \theta_F^{n}) \cdot \boldsymbol \delta_{\xi}^{n+1} +\Delta t \mathcal{R}_1 (\boldsymbol \delta_{F}^{n+1}, \boldsymbol \delta_{\xi}^{n+1}). \label{error_eng} \end{align} For the term $\Delta t a_S( \boldsymbol \delta_{\eta}^{n+1}, \boldsymbol \delta_{\xi}^{n+1})$, we proceed as follows: \begin{gather*} \Delta t a_S( \boldsymbol \delta_{\eta}^{n+1}, \boldsymbol \delta_{\xi}^{n+1})= \Delta t a_S(\boldsymbol \delta_{\eta}^{n+1}, d_t \boldsymbol \delta_{\eta}^{n+1}+P_h \boldsymbol{\xi}^{n+1}-R_h d_t \boldsymbol \eta^{n+1}) =\frac12 \| \boldsymbol \delta_{\eta}^{n+1}\|^2_{S} -\frac12 \| \boldsymbol \delta_{\eta}^{n}\|^2_{S} \notag \\ +\frac{\Delta t^2}{2} \| d_t \boldsymbol \delta_{\eta}^{n+1}\|^2_S + \Delta t a_S(\boldsymbol \delta_{\eta}^{n+1}, P_h \boldsymbol{\xi}^{n+1}-R_h d_t \boldsymbol \eta^{n+1}).
\end{gather*} Note that $P_h \boldsymbol{\xi}^{n+1}-R_h d_t \boldsymbol \eta^{n+1} = P_h \boldsymbol \xi^{n+1}-\boldsymbol \xi^{n+1}+\boldsymbol \xi^{n+1}-R_h d_t \boldsymbol \eta^{n+1} = -\boldsymbol{\theta}_{\xi}^{n+1}+d_t \boldsymbol{\theta}_{\eta}^{n+1}+\partial_t \boldsymbol \eta^{n+1}-d_t \boldsymbol \eta^{n+1}.$ Hence, using property~\eqref{Ritz} of the Ritz projection operator and the Cauchy-Schwarz and Young's inequalities, we have \begin{align} \Delta t a_S( \boldsymbol \delta_{\eta}^{n+1}, P_h \boldsymbol{\xi}^{n+1}-R_h d_t \boldsymbol \eta^{n+1}) \leq \Delta t \| \boldsymbol{\theta}_{\xi}^{n+1} \|^2_S+\frac{\Delta t }{4} \|\boldsymbol \delta_{\eta}^{n+1}\|^2_S+\Delta t \mathcal{R}_2(\boldsymbol \delta_{\eta}^{n+1}), \end{align} where $\mathcal{R}_2(\boldsymbol \delta_{\eta}^{n+1}) = a_S(\boldsymbol\delta_{\eta}^{n+1}, \partial_t \boldsymbol \eta^{n+1}-d_t \boldsymbol \eta^{n+1})$. To estimate the first term on the right-hand side of~\eqref{error_eng}, as in~\cite{bukavc2016stability}, we note that $\boldsymbol \delta_{F}^{n+1} -\boldsymbol \delta_{\xi}^{n+1} = -(\boldsymbol v_h^{n+1}-\boldsymbol \xi_h^{n+1})$ on $\Gamma$. Furthermore, adding and subtracting the continuous velocity and pressure in~\eqref{Sfluid}, the following relation holds on $\Gamma$: \begin{align} & \boldsymbol \delta_{F}^{n+1}- \boldsymbol \delta_{\xi}^{n+1} = \frac{1}{\alpha} \left( \boldsymbol\sigma_F(\boldsymbol e_F^{n}, e_P^{n}) \boldsymbol n_F - \boldsymbol\sigma_F(\boldsymbol e_F^{n+1}, e_P^{n+1})\boldsymbol n_F + \boldsymbol\sigma_F(\boldsymbol v^{n+1} -\boldsymbol v^{n}, p^{n+1} -p^{n})\boldsymbol n_F \right).
\label{trstres} \end{align} Employing identity~\eqref{trstres}, we have \begin{align} & \Delta t \int_{\Gamma} \boldsymbol \sigma_F (\boldsymbol e_F^{n}, e_P^{n}) \boldsymbol n_F \cdot (\boldsymbol \delta_{F}^{n+1} - \boldsymbol \delta_{\xi}^{n+1} ) \notag \\ & \quad = \underbrace{ \frac{\Delta t}{\alpha} \int_{\Gamma}\boldsymbol \sigma_F (\boldsymbol e_F^{n}, e_P^{n}) \boldsymbol n_F \cdot \left( \boldsymbol\sigma_F(\boldsymbol e_F^{n}, e_P^{n}) \boldsymbol n_F - \boldsymbol\sigma_F(\boldsymbol e_F^{n+1}, e_P^{n+1})\boldsymbol n_F \right) }_{\mathcal{T}_1} \notag \\ & \quad \; +\underbrace{ \frac{\Delta t }{\alpha} \int_{\Gamma} \boldsymbol \sigma_F (\boldsymbol e_F^{n}, e_P^{n}) \boldsymbol n_F \cdot \boldsymbol\sigma_F(\boldsymbol v^{n+1} -\boldsymbol v^{n}, p^{n+1} -p^{n})\boldsymbol n_F.}_{\mathcal{T}_2} \end{align} Using the polarization identity, $\mathcal{T}_1$ is given as \begin{align} \mathcal{T}_1 &= - \frac{\Delta t}{2 \alpha} \| \boldsymbol \sigma_F(\boldsymbol e_F^{n+1}, e_P^{n+1}) \boldsymbol n_F \|^2_{L^2(\Gamma)} + \frac{\Delta t}{2 \alpha} \| \boldsymbol \sigma_F(\boldsymbol e_F^{n}, e_P^{n}) \boldsymbol n_F \|^2_{L^2(\Gamma)} \notag \\ & \; +\frac{\Delta t}{2 \alpha} \| \boldsymbol \sigma_F(\boldsymbol e_F^{n+1}, e_P^{n+1}) \boldsymbol n_F -\boldsymbol \sigma_F(\boldsymbol e_F^{n}, e_P^{n}) \boldsymbol n_F \|^2_{L^2(\Gamma)}.
\label{t1} \end{align} To estimate the last term in~\eqref{t1}, we again use identity~\eqref{trstres} and Young's inequality as follows: \begin{align} & \frac{\Delta t}{2 \alpha} \left\| \boldsymbol \sigma_F(\boldsymbol e_F^{n+1}, e_P^{n+1}) \boldsymbol n_F -\boldsymbol \sigma_F(\boldsymbol e_F^{n}, e_P^{n}) \boldsymbol n_F \right\|^2_{L^2(\Gamma)} \notag \\ & \quad = \frac{\Delta t}{2 \alpha} \left\| \boldsymbol\sigma_F(\boldsymbol v^{n+1} -\boldsymbol v^{n}, p^{n+1} -p^{n})\boldsymbol n_F -\alpha (\boldsymbol \delta_{F}^{n+1}-\boldsymbol \delta_{\xi}^{n+1}) \right\|^2_{L^2(\Gamma)} \notag \\ & \quad = \frac{\Delta t}{2 \alpha} \left\| \boldsymbol\sigma_F(\boldsymbol v^{n+1} -\boldsymbol v^{n}, p^{n+1} -p^{n})\boldsymbol n_F \right\|^2_{L^2(\Gamma)} + \frac{\alpha \Delta t }{2} \| \boldsymbol \delta_{F}^{n+1}-\boldsymbol \delta_{\xi}^{n+1} \|^2_{L^2(\Gamma)} \notag \\ & \quad \; - \Delta t \int_{\Gamma} (\boldsymbol \delta_{F}^{n+1} -\boldsymbol \delta_{\xi}^{n+1} ) \cdot \boldsymbol\sigma_F(\boldsymbol v^{n+1} -\boldsymbol v^{n}, p^{n+1} -p^{n})\boldsymbol n_F \notag \\ & \quad \leq \frac{\Delta t}{2 \alpha} \left\| \boldsymbol\sigma_F(\boldsymbol v^{n+1} -\boldsymbol v^{n}, p^{n+1} -p^{n})\boldsymbol n_F \right\|^2_{L^2(\Gamma)} + \frac{\alpha \Delta t}{2} \| \boldsymbol \delta_{F}^{n+1}-\boldsymbol \delta_{\xi}^{n+1} \|^2_{L^2(\Gamma)} \notag \\ & \quad +\frac{\alpha \Delta t}{12} \| \boldsymbol \delta_{F}^{n+1}-\boldsymbol \delta_{\xi}^{n+1} \|^2_{L^2(\Gamma)} +\frac{3\Delta t}{ \alpha} \left\| \boldsymbol\sigma_F(\boldsymbol v^{n+1} -\boldsymbol v^{n}, p^{n+1} -p^{n})\boldsymbol n_F \right\|^2_{L^2(\Gamma)}. 
\end{align} Finally, we estimate $\mathcal{T}_2$ using the Cauchy-Schwarz and Young's inequalities as \begin{align} \mathcal{T}_2 & \leq \frac{ \Delta t^2}{2 \alpha } \left\|\boldsymbol \sigma_F(\boldsymbol e_F^{n}, e_P^{n}) \boldsymbol n_F \right\|^2_{L^2(\Gamma)} +\frac{ 1}{ 2 \alpha } \left\| \boldsymbol \sigma_F\left( \boldsymbol v^{n+1}-\boldsymbol v^{n}, p^{n+1}-p^n \right) \boldsymbol n_F \right\|^2_{L^2(\Gamma)}. \end{align} We bound the remaining terms in~\eqref{error_eng} as follows. Using the Cauchy-Schwarz, Young's, Poincar\'e-Friedrichs, and Korn's inequalities, we have \begin{align*} &-\Delta t\rho_F \int_{\Omega_F} d_t \boldsymbol \theta_F^{n+1} \cdot \boldsymbol \delta_{F}^{n+1} - \Delta t a_F ( \boldsymbol \theta_F^{n+1}, \boldsymbol \delta_{F}^{n+1}) +\Delta t b_F (\theta_P^{n+1}, \boldsymbol \delta_{F}^{n+1}) -\Delta t \rho_S \int_{\Omega_S} d_t \boldsymbol \theta_{\xi}^{n+1} \cdot \boldsymbol \delta_{\xi}^{n+1} \\ &\quad \lesssim \frac{ \Delta t \rho_F^2}{\mu_F} \| d_t \boldsymbol \theta_F^{n+1}\|^2_{L^2(\Omega_F)} + \Delta t \mu_F \| \boldsymbol{D} (\boldsymbol \theta_F^{n+1}) \|^2_{L^2(\Omega_F)}+\frac{ \Delta t }{\mu_F} \|\theta_P^{n+1}\|^2_{L^2(\Omega_F)} +\frac{\mu_F \Delta t}{4}\|\boldsymbol D (\boldsymbol \delta_F^{n+1}) \|^2_{L^2(\Omega_F)} \notag \\ &\qquad +\Delta t \rho_S \| d_t \boldsymbol \theta_\xi^{n+1} \|^2_{L^2(\Omega_S)} +\frac{\Delta t \rho_S }{4} \| \boldsymbol \delta_\xi^{n+1}\|^2_{L^2(\Omega_S)}.
\end{align*} Next, noting that $\boldsymbol \theta_F^{n+1} = \boldsymbol \theta_{\xi}^{n+1}$ on $\Gamma$ and adding and subtracting $\boldsymbol \delta_F^{n+1}$, we have \begin{align} &-\alpha \Delta t \int_{\Gamma} (\boldsymbol \theta_{\xi}^{n+1} - \boldsymbol \theta_F^{n}) \cdot \boldsymbol \delta_{\xi}^{n+1} \notag \\ & \quad = -\alpha \Delta t \int_{\Gamma}(\boldsymbol \theta_F^{n+1} -\boldsymbol \theta_{F}^{n}) \cdot \boldsymbol \delta_F^{n+1} -\alpha \Delta t \int_{\Gamma}(\boldsymbol \theta_F^{n+1} -\boldsymbol \theta_{F}^{n}) \cdot (\boldsymbol \delta_\xi^{n+1}-\boldsymbol \delta_F^{n+1}) \notag \\ &\quad \lesssim \Delta t^3 \left(\frac{\alpha^2 }{\mu_F}+ \alpha \right) \|d_t \boldsymbol \theta_F^{n+1} \|^2_{L^2(\Gamma)} +\frac{ \mu_F \Delta t }{4} \|\boldsymbol D(\boldsymbol \delta_F^{n+1}) \|^2_{L^2(\Omega_F)} +\frac{\alpha \Delta t }{12} \|\boldsymbol \delta_F^{n+1}-\boldsymbol \delta_\xi^{n+1} \|^2_{L^2(\Gamma)}. \notag \end{align} Combining the estimates above with equation~\eqref{error_eng}, summing from $n=0, \ldots, N-1$ and taking into account the assumption on the initial data, we have \begin{align} & \frac{\rho_F}{2} \|\boldsymbol \delta_F^{N} \|^2_{L^2(\Omega_F)} +\frac{\rho_S}{2} \|\boldsymbol \delta_\xi^{N} \|^2_{L^2(\Omega_S)} +\frac12 \| \boldsymbol \delta_{\eta}^{N}\|^2_{S} + \frac{\alpha \Delta t}{2} \| \boldsymbol \delta_{F}^{N} \|^2_{L^2(\Gamma)} + \frac{\Delta t}{2 \alpha} \| \boldsymbol \sigma_F(\boldsymbol e_F^{N}, e_P^{N}) \boldsymbol n_F \|^2_{L^2(\Gamma)} \notag \\ & \; +\frac{3}{2} \mu_F \Delta t \sum_{n=0}^{N-1} \| {\boldsymbol D}( \boldsymbol \delta_F^{n+1})\|^2_{L^2(\Omega_F)} + \frac{\rho_F \Delta t^2}{2} \sum_{n=0}^{N-1} \|d_t\boldsymbol \delta_F^{n+1} \|^2_{L^2(\Omega_F)} +\frac{\rho_S \Delta t^2}{2} \sum_{n=0}^{N-1} \| d_t \boldsymbol \delta_\xi^{n+1} \|^2_{L^2(\Omega_S)} \notag \\ & \; +\frac{\Delta t^2}{2} \sum_{n=0}^{N-1} \| d_t \boldsymbol \delta_{\eta}^{n+1}\|^2_S + \frac{\alpha \Delta t}{2} \sum_{n=0}^{N-1} \| 
\boldsymbol \delta_{\xi}^{n+1} - \boldsymbol \delta_{F}^{n} \|^2_{L^2(\Gamma)} \notag \\ & \lesssim \Delta t \sum_{n=0}^{N-1} \| \boldsymbol{\theta}_{\xi}^{n+1} \|^2_S +\frac{ \Delta t \rho_F^2}{\mu_F} \sum_{n=0}^{N-1} \| d_t \boldsymbol \theta_F^{n+1}\|^2_{L^2(\Omega_F)} + \Delta t \mu_F \sum_{n=0}^{N-1} \| \boldsymbol{D} (\boldsymbol \theta_F^{n+1}) \|^2_{L^2(\Omega_F)} +\frac{ \Delta t }{\mu_F} \sum_{n=0}^{N-1} \|\theta_P^{n+1}\|^2_{L^2(\Omega_F)} \notag \\ & \; +\Delta t \rho_S \sum_{n=0}^{N-1} \| d_t \boldsymbol \theta_\xi^{n+1} \|^2_{L^2(\Omega_S)} + \Delta t^3 \left(\frac{\alpha^2 }{\mu_F}+ \alpha \right) \sum_{n=0}^{N-1} \|d_t \boldsymbol \theta_F^{n+1} \|^2_{L^2(\Gamma)} \notag \\ & \; + \frac{\Delta t+1}{\alpha} \sum_{n=0}^{N-1} \left\|\boldsymbol \sigma_F\left(\boldsymbol v^{n+1}-\boldsymbol v^{n}, p^{n+1}-p^n \right) \boldsymbol n_F \right\|^2_{L^2(\Gamma)} \notag \\ & \; +\frac{\Delta t^2}{2 \alpha } \sum_{n=0}^{N-1} \left\|\boldsymbol \sigma_F(\boldsymbol e_F^{n}, e_P^{n}) \boldsymbol n_F \right\|^2_{L^2(\Gamma)} +\frac{ \alpha \Delta t}{6} \sum_{n=0}^{N-1} \| \boldsymbol \delta_{F}^{n+1}-\boldsymbol \delta_{\xi}^{n+1} \|^2_{L^2(\Gamma)} +\frac{\Delta t \rho_S }{4} \sum_{n=0}^{N-1} \| \boldsymbol \delta_\xi^{n+1}\|^2_{L^2(\Omega_S)} \notag \\ &\; +\frac{\Delta t}{4} \sum_{n=0}^{N-1} \|\boldsymbol \delta_{\eta}^{n+1}\|^2_S +\Delta t \sum_{n=0}^{N-1} \mathcal{R}_1 (\boldsymbol \delta_{F}^{n+1}, \boldsymbol \delta_{\xi}^{n+1}) +\Delta t \sum_{n=0}^{N-1}\mathcal{R}_2(\boldsymbol \delta_{\eta}^{n+1}). 
\label{error_eng2} \end{align} To estimate the approximation and consistency errors, we use Lemmas~\ref{cons1} and~\ref{lemma_interpolation}, leading to the following inequality: \begin{align} & \frac{\rho_F}{2} \|\boldsymbol \delta_F^{N} \|^2_{L^2(\Omega_F)} +\frac{\rho_S}{2} \|\boldsymbol \delta_\xi^{N} \|^2_{L^2(\Omega_S)} +\frac12 \| \boldsymbol \delta_{\eta}^{N}\|^2_{S} + \frac{\alpha \Delta t}{2} \| \boldsymbol \delta_{F}^{N} \|^2_{L^2(\Gamma)} + \frac{\Delta t}{2 \alpha} \| \boldsymbol \sigma_F(\boldsymbol e_F^{N}, e_P^{N}) \boldsymbol n_F \|^2_{L^2(\Gamma)} \notag \\ & \; + \mu_F \Delta t \sum_{n=0}^{N-1} \| {\boldsymbol D}( \boldsymbol \delta_F^{n+1})\|^2_{L^2(\Omega_F)} + \frac{\rho_F \Delta t^2}{2} \sum_{n=0}^{N-1} \|d_t\boldsymbol \delta_F^{n+1} \|^2_{L^2(\Omega_F)} +\frac{\rho_S \Delta t^2}{2} \sum_{n=0}^{N-1} \| d_t \boldsymbol \delta_\xi^{n+1} \|^2_{L^2(\Omega_S)} \notag \\ & \; +\frac{\Delta t^2}{2} \sum_{n=0}^{N-1} \| d_t \boldsymbol \delta_{\eta}^{n+1}\|^2_S + \frac{\alpha \Delta t}{2} \sum_{n=0}^{N-1} \| \boldsymbol \delta_{\xi}^{n+1} - \boldsymbol \delta_{F}^{n} \|^2_{L^2(\Gamma)} \notag \\ & \lesssim h^{2k} \| \boldsymbol {\xi} \|^2_{L^2(0,T: H^{k+1}(\Omega_S))} +\frac{ \rho_F^2}{\mu_F} h^{2k}\| \partial_t \boldsymbol v \|^2_{L^2(0,T: H^{k+1}(\Omega_F))} + \mu_F h^{2k} \| \boldsymbol v \|^2_{L^2(0,T; H^{k+1}(\Omega_F))} \notag \\ & \; +\frac{1 }{\mu_F} h^{2r+2} \|p\|^2_{L^2(0,T; H^{r+1}(\Omega_F))} +\rho_S h^{2k+2}\| \partial_t \boldsymbol \xi \|^2_{L^2(0,T; H^{k+1}(\Omega_S))} \notag \\ & \; + \Delta t^2 \left(\frac{\alpha^2 }{\mu_F}+\alpha \right) h^{2k+2} \| \partial_t \boldsymbol v \|^2_{L^2(0,T; H^{k+1}(\Gamma))} +\frac{ \Delta t^2 \rho_F^2 }{ \mu_F} \| \partial_{tt} \boldsymbol v \|^2_{L^2(0,T; L^2(\Omega_F))} \notag\\ & \; + \Delta t^2 \rho_S \| \partial_{tt} \boldsymbol \xi \|^2_{L^2(0,T; L^2(\Omega_S))} +\alpha \Delta t^2 \left(\frac{\alpha}{\mu_F}+1 \right) \| \partial_t \boldsymbol v \|^2_{L^2(0,T; L^2(\Gamma))} \notag \\ & \; + 
\frac{ \Delta t (\Delta t +1)}{\alpha} \| \partial_t \boldsymbol \sigma_F \boldsymbol n_F \|^2_{L^2(0,T: L^2(\Gamma))} +\Delta t^2 \|\partial_{tt} \boldsymbol \eta\|^2_{L^2(0,T; S)} +\frac{ \Delta t^2}{2 \alpha } \sum_{n=0}^{N-1} \left\|\boldsymbol \sigma_F(\boldsymbol e_F^{n}, e_P^{n}) \boldsymbol n_F \right\|^2_{L^2(\Gamma)} \notag \\ & \; +\frac{\alpha \Delta t}{4} \sum_{n=0}^{N-1} \| \boldsymbol \delta_{F}^{n+1}-\boldsymbol \delta_{\xi}^{n+1} \|^2_{L^2(\Gamma)} +\frac{\Delta t \rho_S}{2} \sum_{n=0}^{N-1} \| \boldsymbol \delta_\xi^{n+1}\|^2_{L^2(\Omega_S)} + \frac{\Delta t}{2}\sum_{n=0}^{N-1} \| \boldsymbol \delta_{\eta}^{n+1} \|^2_S. \label{errorN} \end{align} We estimate term $\displaystyle\frac{\alpha \Delta t}{4} \sum_{n=0}^{N-1} \|\boldsymbol \delta_F^{n+1} - \boldsymbol \delta_{\xi}^{n+1} \|^2_{L^2(\Gamma)}$ by adding and subtracting $\boldsymbol \delta_F^n$ and using trace-inverse inequality~\eqref{traceinverse} as follows: \begin{align} & \frac{\alpha \Delta t}{4} \sum_{n=0}^{N-1} \| \boldsymbol \delta_F^{n+1} - \boldsymbol \delta_{\xi}^{n+1} \|^2_{L^2(\Gamma)} =\frac{\alpha \Delta t}{4} \sum_{n=0}^{N-1} \| \boldsymbol \delta_F^{n+1}- \boldsymbol \delta_F^{n}+ \boldsymbol \delta_F^{n} - \boldsymbol \delta_{\xi}^{n+1} \|^2_{L^2(\Gamma)} \notag \\ & \quad \leq \frac{\alpha \Delta t}{2} \sum_{n=0}^{N-1} \| \boldsymbol \delta_F^{n+1} - \boldsymbol \delta_{F}^{n} \|^2_{L^2(\Gamma)} +\frac{\alpha \Delta t}{2} \sum_{n=0}^{N-1} \| \boldsymbol \delta_{\xi}^{n+1} -\boldsymbol \delta_F^{n} \|^2_{L^2(\Gamma)} \notag \\ & \quad \leq \frac{\alpha C_{TI} k^2 \Delta t}{2 h} \sum_{n=0}^{N-1} \| \boldsymbol \delta_F^{n+1} - \boldsymbol \delta_{F}^{n} \|^2_{L^2(\Omega_F)} +\frac{\alpha \Delta t}{2} \sum_{n=0}^{N-1} \| \boldsymbol \delta_{\xi}^{n+1} -\boldsymbol \delta_F^{n} \|^2_{L^2(\Gamma)}. 
\label{deltaDiff} \end{align} Combining~\eqref{deltaDiff} with~\eqref{errorN}, we get \begin{align} & \frac{\rho_F}{2} \|\boldsymbol \delta_F^{N} \|^2_{L^2(\Omega_F)} +\frac{\rho_S}{2} \|\boldsymbol \delta_\xi^{N} \|^2_{L^2(\Omega_S)} +\frac12 \| \boldsymbol \delta_{\eta}^{N}\|^2_{S} + \frac{\alpha \Delta t}{2} \| \boldsymbol \delta_{F}^{N} \|^2_{L^2(\Gamma)} + \frac{\Delta t}{2 \alpha} \| \boldsymbol \sigma_F(\boldsymbol e_F^{N}, e_P^{N}) \boldsymbol n_F \|^2_{L^2(\Gamma)} \notag \\ & \; + \mu_F \Delta t \sum_{n=0}^{N-1} \| {\boldsymbol D}( \boldsymbol \delta_F^{n+1})\|^2_{L^2(\Omega_F)} + \frac{\Delta t^2}{2} \left( \rho_F - \frac{\alpha C_{TI} k^2 \Delta t}{h} \right) \sum_{n=0}^{N-1} \|d_t\boldsymbol \delta_F^{n+1} \|^2_{L^2(\Omega_F)} \notag \\ & \; +\frac{\rho_S \Delta t^2}{2} \sum_{n=0}^{N-1} \| d_t \boldsymbol \delta_\xi^{n+1} \|^2_{L^2(\Omega_S)} +\frac{\Delta t^2}{2} \sum_{n=0}^{N-1} \| d_t \boldsymbol \delta_{\eta}^{n+1}\|^2_S \notag \\ & \lesssim h^{2k} \| \boldsymbol {\xi} \|^2_{L^2(0,T: H^{k+1}(\Omega_S))} +\frac{ \rho_F^2}{\mu_F} h^{2k}\| \partial_t \boldsymbol v \|^2_{L^2(0,T: H^{k+1}(\Omega_F))} + \mu_F h^{2k} \| \boldsymbol v \|^2_{L^2(0,T; H^{k+1}(\Omega_F))} \notag \\ & \; +\frac{1 }{\mu_F} h^{2r+2} \|p\|^2_{L^2(0,T; H^{r+1}(\Omega_F))} +\rho_S h^{2k+2}\| \partial_t \boldsymbol \xi \|^2_{L^2(0,T; H^{k+1}(\Omega_S))} \notag \\ & \; + \Delta t^2 \left(\frac{\alpha^2 }{\mu_F}+\alpha \right) h^{2k+2} \| \partial_t \boldsymbol v \|^2_{L^2(0,T; H^{k+1}(\Gamma))} +\frac{ \Delta t^2 \rho_F^2 }{ \mu_F} \| \partial_{tt} \boldsymbol v \|^2_{L^2(0,T; L^2(\Omega_F))} \notag\\ & \; + \Delta t^2 \rho_S \| \partial_{tt} \boldsymbol \xi \|^2_{L^2(0,T; L^2(\Omega_S))} +\alpha \Delta t^2 \left(\frac{\alpha}{\mu_F}+1 \right) \| \partial_t \boldsymbol v \|^2_{L^2(0,T; L^2(\Gamma))} \notag \\ & \; + \frac{ \Delta t (\Delta t +1)}{\alpha} \| \partial_t \boldsymbol \sigma_F \boldsymbol n_F \|^2_{L^2(0,T: L^2(\Gamma))} +\Delta t^2 \|\partial_{tt} \boldsymbol 
\eta\|^2_{L^2(0,T; S)} +\frac{ \Delta t^2}{2 \alpha } \sum_{n=0}^{N-1} \left\|\boldsymbol \sigma_F(\boldsymbol e_F^{n}, e_P^{n}) \boldsymbol n_F \right\|^2_{L^2(\Gamma)} \notag \\ & \; +\frac{\Delta t \rho_S}{2} \sum_{n=0}^{N-1} \| \boldsymbol \delta_\xi^{n+1}\|^2_{L^2(\Omega_S)} + \frac{\Delta t}{2}\sum_{n=0}^{N-1} \| \boldsymbol \delta_{\eta}^{n+1} \|^2_S. \label{errorNF} \end{align} We recall that the error between the exact and the discrete solution is the sum of the approximation error and the truncation error. Thus, using the triangle inequality, approximation properties~\eqref{app1}-\eqref{app2} and the Gronwall lemma, we prove the desired estimate. \end{proof} Using Taylor-Hood elements, i.e. $k = 2, r = 1$, for the fluid problem and piecewise quadratic elements for the solid problem, we have the following estimate. \begin{corollary} Consider algorithm~\eqref{SSweak}-\eqref{SFweak}. Suppose that $(V^F_h,Q_h^F)$ is given by $\mathbb{P}_2-\mathbb{P}_1$ Taylor-Hood approximation elements and $V_h^S$ is given by $\mathbb{P}_2$ approximation elements. Under the assumptions of Theorem~\ref{MainThm}, we have \begin{align} & \frac{\rho_F}{2} \|\boldsymbol e_F^{N} \|^2_{L^2(\Omega_F)} +\frac{\rho_S}{2} \|\boldsymbol e_\xi^{N} \|^2_{L^2(\Omega_S)} +\frac12 \| \boldsymbol e_{\eta}^{N} \|^2_{S} + \frac{\alpha \Delta t}{2} \| \boldsymbol e_F^{N} \|^2_{L^2(\Gamma)} + \mu_F \Delta t \sum_{n=0}^{N-1} \| \boldsymbol{D} ( \boldsymbol e_F^{N} ) \|^2_{L^2(\Omega_F)} \notag \\ & \lesssim e^T \left( h^4+\Delta t \right). \notag \end{align} \end{corollary} The following lemmas are used in the proof of Theorem~\ref{MainThm}. 
\begin{lemma} \label{cons1} The following estimate holds: \begin{align*} & \Delta t \sum_{n=0}^{N-1} \big(\mathcal{R}_1 (\boldsymbol \delta_{F}^{n+1}, \boldsymbol \delta_{\xi}^{n+1})+\mathcal{R}_2(\boldsymbol \delta_{\eta}^{n+1}) \big) \\ & \lesssim \Delta t^2 \left( \frac{\rho_F^2 }{ \mu_F}\| \partial_{tt} \boldsymbol v \|^2_{L^2(0,T; L^2(\Omega_F))} +\rho_S \| \partial_{tt} \boldsymbol \xi \|^2_{L^2(0,T; L^2(\Omega_S))} +\alpha \left(\frac{\alpha}{\mu_F}+1 \right) \| \partial_t \boldsymbol v\|^2_{L^2(0,T; L^2(\Gamma))} \right. \\ & \quad \left. + \frac{1}{\alpha} \| \partial_t \boldsymbol \sigma_F \boldsymbol n_F \|^2_{L^2(0,T; L^2(\Gamma))} + \| \partial_{tt} \boldsymbol \eta\|^2_{L^2(0,T; S)} \right) + \frac{\mu_F \Delta t}{2} \sum_{n=0}^{N-1} \| \boldsymbol D(\boldsymbol \delta_F^{n+1})\|^2_{L^2(\Omega_F)} \\ & \quad + \frac{ \Delta t \rho_S }{4} \sum_{n=0}^{N-1} \| \boldsymbol \delta_{\xi}^{n+1} \|^2_{L^2(\Omega_S)} +\frac{\alpha \Delta t}{10} \sum_{n=0}^{N-1} \| \boldsymbol \delta_{F}^{n+1}- \boldsymbol \delta_{\xi}^{n+1} \|^2_{L^2(\Gamma)} +\frac{ \Delta t}{4} \sum_{n=0}^{N-1} \| \boldsymbol \delta_{\eta}^{n+1}\|^2_S. 
\end{align*} \end{lemma} \begin{proof} Rearranging and using the Cauchy-Schwarz, Young's, Poincar\'e-Friedrichs, and Korn's inequalities, we have \begin{align*} \Delta t\mathcal{R}_1 (\boldsymbol \delta_{F}^{n+1}, \boldsymbol \delta_{\xi}^{n+1}) & = \Delta t\rho_F \int_{\Omega_F} (d_t \boldsymbol v^{n+1} - \partial_t \boldsymbol v^{n+1}) \cdot \boldsymbol \delta_{F}^{n+1} +\Delta t\rho_S \int_{\Omega_S} (d_t \boldsymbol \xi^{n+1} - \partial_t \boldsymbol \xi^{n+1}) \cdot \boldsymbol \delta_{\xi}^{n+1} \notag \\ & \; +\alpha \Delta t \int_{\Gamma} (\boldsymbol v^{n+1} - \boldsymbol v^{n}) \cdot \boldsymbol \delta_{F}^{n+1} d \boldsymbol x +\alpha \Delta t \int_{\Gamma} (\boldsymbol v^{n+1} - \boldsymbol v^{n}) \cdot (\boldsymbol \delta_{\xi}^{n+1} -\boldsymbol \delta_{F}^{n+1}) d \boldsymbol x \notag \\ & \; +\Delta t\int_{\Gamma} \boldsymbol \sigma_F (\boldsymbol v^{n+1} - \boldsymbol v^n,p^{n+1}-p^n) \boldsymbol n_F \cdot (\boldsymbol \delta_{F}^{n+1} -\boldsymbol \delta_{\xi}^{n+1}) \notag \\ & \lesssim \frac{\Delta t \rho_F^2 }{ \mu_F} \| d_t \boldsymbol v^{n+1}-\partial_t \boldsymbol v^{n+1}\|^2_{L^2(\Omega_F)} +\frac{\mu_F \Delta t}{2} \| \boldsymbol D(\boldsymbol \delta_F^{n+1})\|^2_{L^2(\Omega_F)} \notag\\ & \; + \Delta t \rho_S \| d_t \boldsymbol \xi^{n+1}-\partial_t \boldsymbol \xi^{n+1}\|^2_{L^2(\Omega_S)} + \frac{ \Delta t \rho_S }{4}\| \boldsymbol \delta_{\xi}^{n+1} \|^2_{L^2(\Omega_S)} \notag\\ & \; +\alpha \Delta t \left(\frac{\alpha}{\mu_F}+1 \right) \| \boldsymbol v^{n+1} - \boldsymbol v^n \|^2_{L^2(\Gamma)} +\frac{\alpha \Delta t}{12} \| \boldsymbol \delta_{F}^{n+1}- \boldsymbol \delta_{\xi}^{n+1} \|^2_{L^2(\Gamma)} \notag \\ & \; + \frac{ \Delta t}{\alpha} \|\boldsymbol \sigma_F (\boldsymbol v^{n+1} - \boldsymbol v^n,p^{n+1}-p^n) \boldsymbol n_F \|^2_{L^2(\Gamma)}. 
\end{align*} Furthermore, using the Cauchy-Schwarz and Young's inequalities, we have \begin{align*} \Delta t \mathcal{R}_2(\boldsymbol \delta_{\eta}^{n+1}) &= \Delta t a_S(\boldsymbol\delta_{\eta}^{n+1}, \partial_t \boldsymbol \eta^{n+1}-d_t \boldsymbol \eta^{n+1}) \\ & \leq \Delta t \| d_t \boldsymbol \eta^{n+1}-\partial_t \boldsymbol \eta^{n+1}\|^2_S +\frac{\Delta t}{4} \| \boldsymbol \delta_{\eta}^{n+1}\|^2_S. \end{align*} The final estimate follows by summing from $n=0$ to $N-1$ and applying Lemma~\ref{consistency}. \end{proof} \begin{lemma}[Consistency errors] \label{consistency} Assume $X \in \{\Omega, \Gamma\}$. The following inequalities hold: \begin{align*} &\Delta t \sum_{n=0}^{N-1}\| d_t \boldsymbol \varphi^{n+1}-\partial_t \boldsymbol \varphi^{n+1}\|^2_{L^2(X)} \lesssim \Delta t^2 \|\partial_{tt} \boldsymbol \varphi\|^2_{L^2(0,T;L^2(X))}, \\ &\Delta t \displaystyle\sum_{n=0}^{N-1}\| \boldsymbol \varphi^{n+1} -\boldsymbol \varphi^{n} \|^2_{L^2(X)}\lesssim \Delta t^2 \| \partial_t \boldsymbol \varphi \|^2_{L^2(0,T; L^2(X))}. \end{align*} \end{lemma} \begin{proof} See~\cite{bukavc2016stability} for the proof. 
\end{proof} \begin{lemma}[Interpolation errors] \label{lemma_interpolation} The following inequalities hold: \begin{gather*} \Delta t \sum_{n=0}^{N-1} \| d_t \boldsymbol \theta_F^{n+1}\|^2_{L^2(\Omega_F)} \le \| \partial_t \boldsymbol \theta_F\|^2_{L^2(0,T;L^2(\Omega_F))} \lesssim h^{2k} \|\partial_t \boldsymbol v \|^2_{L^2(0,T;H^{k+1}(\Omega_F))}, \\ \Delta t \sum_{n=0}^{N-1} \| d_t \boldsymbol \theta_{\xi}^{n+1}\|^2_{L^2(\Omega_S)} \le \| \partial_t \boldsymbol \theta_{\xi}\|^2_{L^2(0,T;L^2(\Omega_S))} \lesssim h^{2k+2} \|\partial_t \boldsymbol \xi\|^2_{L^2(0,T;H^{k+1}(\Omega_S))}, \\ \Delta t \sum_{n=0}^{N-1} \|\boldsymbol D(\boldsymbol \theta_F^{n+1})\|^2_{L^2(\Omega_F)} \lesssim \Delta t \sum_{n=0}^{N-1} h^{2k} \| \boldsymbol v^{n+1}\|^2_{H^{k+1}(\Omega_F)} \lesssim h^{2k} \| \boldsymbol v\|^2_{L^2(0,T;H^{k+1}(\Omega_F))}, \\ \Delta t \sum_{n=0}^{N-1} \| \boldsymbol \theta_{\eta}^{n+1} \|^2_S \lesssim h^{2k} \|\boldsymbol \eta\|^2_{L^2(0,T;H^{k+1}(\Omega_S))}, \qquad \Delta t \sum_{n=0}^{N-1} \|\theta_p^{n+1}\|^2_{L^2(\Omega_F)} \lesssim h^{2r+2} \| p \|^2_{L^2(0,T; H^{r+1}(\Omega_F))}. \end{gather*} \end{lemma} \begin{proof} The last three inequalities follow directly from approximation properties~\eqref{app1}-\eqref{app2}. For the other inequalities, see~\cite{bukavc2016stability} for more details. \end{proof} \begin{remark} The sub-optimal order of convergence in time shown in this paper is often obtained in partitioned methods for the interaction between a fluid and a thick structure. In particular, sub-optimal accuracy has been shown for the partitioned method based on Nitsche's approach in~\cite{burman2009stabilization} and for the Robin-Neumann method in~\cite{fernandez2015generalized}. Extending the algorithm to optimal accuracy could be achieved by using higher-order extrapolations in the design of the generalized Robin coupling conditions, but this is beyond the scope of this paper. 
\end{remark} \section{Numerical examples} \label{numerics} To demonstrate the performance of the proposed numerical scheme, we present three numerical examples. In the first example, we investigate the accuracy of the linearized FSI problem~\eqref{Ssolid}-\eqref{Sfluid} considered in Section~\ref{conv} and compare the approximate solution to a manufactured one. We consider the same benchmark problem in the second example, but apply it to the moving domain FSI problem~\eqref{fsi1}-\eqref{fsi2}. In both of these examples, the convergence rates are calculated using different combination parameters, $\alpha$, in order to show that the theory is satisfied and, in some cases, exceeded. In our final example, we model pressure propagation in a two-dimensional channel with physiologically realistic parameters for blood flow, and compare the results obtained using the proposed partitioned scheme and a monolithic method. \subsection{Example 1} In the first numerical example, we use the method of manufactured solutions to verify the theoretical convergence results from Section~\ref{conv}. We define the structure and fluid domains as the upper and lower parts of the unit square, respectively, i.e. ${\Omega}_S=(0,1) \times (\frac{1}{2},1)$ and ${\Omega}_F=(0,1) \times (0, \frac{1}{2})$. The true solutions for the structure displacement, $\boldsymbol \eta$, the fluid velocity, $\boldsymbol v$, and the fluid pressure, $p$, are defined as: \begin{align} & \begin{bmatrix} \eta_x \\ \eta_y \end{bmatrix} = \begin{bmatrix} 10^{-3}2x(1-x)y(1-y)e^t \\ 10^{-3}x(1-x)y(1-y) e^t \label{true_eta} \end{bmatrix}, \\ &\begin{bmatrix} v_x \\ v_y \end{bmatrix} = \begin{bmatrix} 10^{-3}2x(1-x)y(1-y)e^t \\ 10^{-3}x(1-x)y(1-y)e^t \label{true_u} \end{bmatrix}, \\ &p =-10^{-3} e^t \lambda_S \left(2(1-2x)y(1-y) + x(1-x)(1-2y)\right). \label{true_p} \end{align} We note that the fluid velocity is not divergence-free. Therefore, we add a forcing term to the conservation of mass equation. 
We also add forcing terms in both the fluid and structure equations~\eqref{Ssolid}-\eqref{Sfluid}, resulting in the following system: \begin{align*} & \rho_F \partial_t \boldsymbol{v} = \nabla \cdot \boldsymbol\sigma_F(\boldsymbol v, p) + \boldsymbol{f}_F& \textrm{in}\; \Omega_F \times(0,T), \\ &\nabla \cdot \boldsymbol{v} = s & \textrm{in}\; \Omega_F \times(0,T), \\ & \partial_{t} {\boldsymbol \eta} = \boldsymbol \xi & \textrm{in}\; {\Omega}_S\times(0,T), \\ &{\rho}_S \partial_{t} {\boldsymbol \xi} = {\nabla} \cdot \boldsymbol \sigma_S(\boldsymbol \eta) + \boldsymbol{f}_S& \textrm{in}\; {\Omega}_S\times(0,T), \\ & \boldsymbol v= \boldsymbol 0 & \textrm{on} \; \partial \Omega_F \setminus {\Gamma} \times (0,T), \\ & \boldsymbol \eta = \boldsymbol 0 & \textrm{on} \; \partial \Omega_S \setminus {\Gamma} \times (0,T). \end{align*} Using the exact solutions, we compute the forcing terms $\boldsymbol f_F, \boldsymbol f_S$ and $s$. The finite element implementation of our methodology was carried out using the FreeFem++ software~\cite{hecht2012new}. For the space discretization, $\mathbb{P}_1$ elements were used for both the structure velocity and displacement, while $\mathbb{P}_1$ bubble - $\mathbb{P}_1$ elements were used for the fluid velocity and pressure, respectively. We set the parameters $\lambda_S, \rho_S, \mu_S, \rho_F \text{ and } \mu_F$ equal to one. The simulations were performed until the final time $T= 0.3$ s was reached. Figure~\ref{comp_actual} shows a comparison of the computed and exact fluid velocity (top) and structure displacement (bottom) obtained with $\alpha=10$. An excellent agreement is observed. 
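The forcing terms can be generated symbolically from the manufactured solutions. As a quick illustration (independent of the FreeFem++ implementation), the sketch below computes the mass source $s = \nabla \cdot \boldsymbol v$ for the velocity~\eqref{true_u} and verifies that the pressure~\eqref{true_p} was chosen so that $p = -\lambda_S\, s$:

```python
import sympy as sp

x, y, t, lam_S = sp.symbols('x y t lambda_S')

# Manufactured velocity and pressure from eqs. (true_u) and (true_p)
vx = sp.Rational(1, 1000) * 2*x*(1 - x)*y*(1 - y)*sp.exp(t)
vy = sp.Rational(1, 1000) * x*(1 - x)*y*(1 - y)*sp.exp(t)
p = -sp.Rational(1, 1000) * sp.exp(t) * lam_S * (2*(1 - 2*x)*y*(1 - y)
                                                 + x*(1 - x)*(1 - 2*y))

# Mass source: the manufactured velocity is not divergence-free
s = sp.simplify(sp.diff(vx, x) + sp.diff(vy, y))

# The manufactured pressure satisfies p = -lambda_S * s
assert sp.simplify(s + p / lam_S) == 0
```

The forcing terms $\boldsymbol f_F$ and $\boldsymbol f_S$ are obtained analogously by substituting the exact solutions into the momentum equations.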
\begin{figure}[ht] \centering{ \includegraphics[scale=0.15]{actual_vs_calc.pdf} } \caption{Example 1: A comparison of the computed and exact fluid velocity (top) and structure displacement (bottom) at $T=0.3$ s.} \label{comp_actual} \end{figure} In addition to comparing the numerical results with the exact solution, we compute the convergence rates described in Theorem~\ref{MainThm} and analyze how well the coupling conditions are satisfied at the interface. In particular, we compute the following errors for the structure displacement and velocity, and fluid velocity: \begin{align*} e_{\boldsymbol \eta} = \frac{ \left\Vert \boldsymbol \eta -\boldsymbol \eta_{ref} \right\Vert^2_{S}}{\left\Vert \boldsymbol \eta_{ref} \right\Vert^2_S}, \quad e_{\boldsymbol \xi}=\frac{\left\Vert \boldsymbol \xi - \boldsymbol \xi_{ref} \right\Vert_{L^2({\Omega}_S)}}{\left\Vert \boldsymbol \xi_{ref} \right\Vert_{L^2({\Omega}_S)}}, \quad e_F=\frac{\left\Vert \boldsymbol v- \boldsymbol v_{ref} \right\Vert_{L^2(\Omega_F)}}{\left\Vert \boldsymbol v_{ref} \right\Vert_{L^2(\Omega_F)}}, \end{align*} as well as the error for the kinematic coupling condition: \begin{align*} e_{ke}=\frac{\left \Vert \boldsymbol v - \boldsymbol \xi \right \Vert_{\Gamma}}{\left \Vert\boldsymbol v \right\Vert_{\Gamma}}, \end{align*} and the error for the dynamic coupling condition: \begin{align*} e_{\boldsymbol \sigma}=\frac{\left \Vert \boldsymbol \sigma_F \boldsymbol n_F - \boldsymbol \sigma_S \boldsymbol n_F \right \Vert_{\Gamma}}{\left\Vert \boldsymbol \sigma_F \boldsymbol n_F \right \Vert_{\Gamma}}. \end{align*} In order to compute the convergence rates, we start with an initial time step $\Delta t = 0.01$ and mesh size $h=0.1$, and halve them four times. Each variable is then evaluated for $\alpha$ equal to 1, 10, 100, 200, and 500. 
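Since $\Delta t$ and $h$ are halved simultaneously, the observed order of convergence is the base-2 logarithm of successive error ratios. A minimal sketch of this computation (with synthetic first-order data rather than our measured errors):

```python
import math

def observed_rates(errors):
    """Observed convergence orders from errors on successively halved (dt, h)."""
    return [math.log2(e_coarse / e_fine)
            for e_coarse, e_fine in zip(errors, errors[1:])]

# Synthetic example: an exactly first-order error sequence e_k = C * h_k
h = [0.1 / 2**k for k in range(5)]
errors = [0.7 * hk for hk in h]
rates = observed_rates(errors)
assert all(abs(r - 1.0) < 1e-12 for r in rates)
```

In practice the same formula is applied to the measured errors $e_{\boldsymbol \eta}$, $e_{\boldsymbol \xi}$ and $e_F$ on each refinement level.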
\begin{figure}[ht] \centering{ \includegraphics[scale=0.55]{ETAE_Ex2.pdf} \includegraphics[scale=0.55]{XI_Ex2.pdf} \includegraphics[scale=0.55]{V_Ex2.pdf} } \caption{Example 1: Errors for the solid displacement $\boldsymbol \eta$ (top-left), solid velocity $\boldsymbol \xi$ (top-right), and fluid velocity $\boldsymbol v$ (bottom) at the final time $T=0.3$ s. } \label{eta_xi_v} \end{figure} Figure~\ref{eta_xi_v} shows the convergence rates for the structure displacement (top left), structure velocity (top right) and fluid velocity (bottom) computed at the final time. We observe that the convergence rates for the structure displacement are close to one across all values of $\alpha$. The convergence rates for the structure velocity are first-order, or better, when $\alpha$ is equal to 1 and 10. As $\alpha$ increases, the convergence rates begin to decrease, as condition~\eqref{CFLconv} used in the convergence analysis becomes compromised. The same holds for the fluid velocity, which has the best convergence rates for $\alpha$ values of 10 and 100, and the worst when $\alpha$ increases to 500. \begin{figure}[h!] \centering{ \includegraphics[scale=0.55]{Kinematic_Ex2.pdf} \includegraphics[scale=0.55]{Dynamic_Ex2.pdf} } \caption{Example 1: Kinematic (left) and dynamic (right) coupling condition errors at the final time $T=0.3$ s.} \label{kinematic_dynamic} \end{figure} In addition to the errors related to Theorem~\ref{MainThm}, we investigate the relation between the combination parameter $\alpha$ and how well the coupling conditions are satisfied. In particular, the generalized Robin boundary condition~\eqref{lcomb} will turn into the dynamic coupling condition~\eqref{dynamic} as $\alpha \rightarrow 0$, and it will approach the kinematic coupling condition~\eqref{kinematic} as $\alpha \rightarrow \infty$. Therefore, we compute errors $e_{ke}$ and $e_{\boldsymbol \sigma}$ as we take $\alpha=1, 10, 100, 200,$ and $500$. 
In this case, to better approximate the fluid and structure stresses, we used $\mathbb{P}_2$ elements for fluid and structure velocities and the structure displacement, and $\mathbb{P}_1$ elements for pressure. Figure~\ref{kinematic_dynamic} shows errors $e_{ke}$ (left) and $e_{\boldsymbol \sigma}$ (right) computed with the following time and mesh sizes: \begin{gather} (\Delta t, h) \in \left\{ \left( \frac{10^{-2}}{2^k}, \frac{0.0625}{2^k} \right)\right\}_{k=0}^3. \label{discPar} \end{gather} We observe that, with the exception of $\alpha=1$, the convergence rates are closer to one for smaller values of $\alpha$, and they decrease to 0.5 as $\alpha$ increases to 500. We also note that the error in the kinematic coupling condition decreases as $\alpha$ increases, while the opposite holds for the dynamic coupling condition. However, for all the considered cases, the relative error in the kinematic coupling condition is significantly smaller than the relative error in the dynamic coupling condition. \subsection{Example 2} In the second example, we study the accuracy of the proposed method applied to a moving domain FSI problem~\eqref{fsi1}-\eqref{fsi2}. We use the same manufactured solutions,~\eqref{true_eta}-\eqref{true_p}, as in Example~1. Furthermore, we define the true solution for the fluid domain displacement to be $\boldsymbol \eta_F = \boldsymbol \eta$, and the true solution for the fluid domain velocity to be $\boldsymbol w =\partial_t \boldsymbol \eta_F$. Similar to Example~1, we add forcing terms to equations~\eqref{fsi1},~\eqref{fsi11} and~\eqref{fsi12}. To update the fluid domain, we solve \begin{align*} &- \Delta {\boldsymbol \eta}^{n+1}_F = \boldsymbol f_D & \textrm{in} \; \hat{\Omega}_F, \\ & {\boldsymbol \eta}^{n+1}_F = 0 & \textrm{on} \; \hat{\Gamma}^{in}_F \cup \hat{\Gamma}^{out}_F, \\ & {\boldsymbol \eta}^{n+1}_F = {\boldsymbol \eta}^{n+1} & \textrm{on} \; \hat{\Gamma}. 
\end{align*} As with $\boldsymbol f_F, \boldsymbol f_S$ and $s$, we compute $\boldsymbol f_D$ using the exact solution. Every other aspect of this example remains unchanged, meaning the error calculations, space and time discretization specifications, and parameters are the same as in Example~1. \begin{figure}[ht] \centering{ \includegraphics[scale=0.55]{ETAE.pdf} \includegraphics[scale=0.55]{XI.pdf} \includegraphics[scale=0.55]{V.pdf} } \caption{Example 2: Errors for the solid displacement $\boldsymbol \eta$ (top-left), solid velocity $\boldsymbol \xi$ (top-right), and fluid velocity $\boldsymbol v$ (bottom) at the final time $T=0.3$ s.} \label{eta_xi_v2} \end{figure} Figure~\ref{eta_xi_v2} shows the errors for the structure displacement (top left), structure velocity (top right) and fluid velocity (bottom) obtained at $T=0.3$ s. Behavior similar to that of Example 1 is observed. For all values of $\alpha$, the convergence rates for the solid displacement are close to one, while the errors are roughly the same, with the very slight exception of $\alpha=500$. The convergence rates for the solid velocity decrease from one to 0.5 as the values of $\alpha$ increase, while the errors themselves grow as $\alpha$ increases, with the exception of $\alpha = 1$. In a similar trend, the rates for the fluid velocity decrease and the errors increase as $\alpha$ grows, with the exception of $\alpha=1$. For all variables, the best convergence rates and the smallest errors are obtained with $\alpha=10$. \begin{figure}[h!] \centering{ \includegraphics[scale=0.55]{Kinematic.pdf} \includegraphics[scale=0.55]{Dynamic.pdf} } \caption{Example 2: Kinematic (left) and dynamic (right) coupling condition errors at the final time $T=0.3$ s.} \label{kinematic_dynamic2} \end{figure} As in Example~1, we calculate the errors in approximating the coupling conditions using a $\mathbb{P}_1$ space discretization for pressure and $\mathbb{P}_2$ for all other variables. 
The temporal and spatial discretization parameters are the same as described in~\eqref{discPar}. Figure~\ref{kinematic_dynamic2} shows the kinematic coupling condition error (left) and the dynamic coupling condition error (right) at $T=0.3$ s obtained using different values of $\alpha$. Similar to what we observed in Example~1, as $\alpha$ increases, the error decreases for the kinematic coupling condition, while the opposite holds for the dynamic coupling condition. As for the convergence rates, we obtain values around 0.5 using $\alpha=1$ and values very close to one using $\alpha=10$, which then decrease back down to 0.5 as $\alpha$ increases. \subsection{Example 3} The third example focuses on a classical benchmark problem used in the validation of FSI solvers~\cite{bukavc2014modular}. We consider the fluid flow in a two-dimensional channel interacting with a deformable wall. The reference fluid and structure domains are defined as $\hat{\Omega}_F = (0,6) \times (0,0.5)$ and $\hat\Omega_S= (0,6) \times (0.5,0.6)$, respectively. We consider the moving domain FSI problem~\eqref{fsi1}-\eqref{fsi2}, where we add a linearly elastic spring term, $\gamma \boldsymbol \eta$, to the elastodynamic equation, yielding: \begin{align*} \rho_S \partial_t \boldsymbol \xi + \gamma \boldsymbol \eta = \nabla \cdot \boldsymbol \sigma_S(\boldsymbol \eta) \qquad \textrm{in} \; \hat{\Omega}_S \times (0,T). \end{align*} The term $\gamma \boldsymbol \eta$ is obtained from the axially symmetric model, and it represents a spring keeping the top and bottom boundaries in a two-dimensional model connected~\cite{bukavc2014modular}. The parameters used in this example, $\rho_F$ = 1 g/cm$^3, \mu_F$ = 0.035 g/cm s$, \rho_S=1.1$ g/cm$^3, \mu_S=5.75 \cdot 10^5$ dyne/cm$^2$, $ \gamma=4 \cdot 10^6$ dyne/cm$^4$ and $\lambda_S=1.7 \cdot 10^6$ dyne/cm$^2$, are within the physiologically realistic range for blood flow in compliant arteries. In this example, we use $\alpha=100$. 
The flow is driven by prescribing a time-dependent pressure drop at the inlet and outlet sections, as defined in~\eqref{inlet}-\eqref{outlet}, where \begin{align} p_{in}(t)=\left\{ \begin{array}{ll} \frac{p_{max}}{2} \left[1-\cos \left(\frac{2 \pi t}{t_{max}}\right) \right], & \text{if } t \leq t_{max}\\ 0, &\text{if }t > t_{max} \end{array} \right. ,\text{ }p_{out}=0 \text{ } &\forall t \in (0,T). \end{align} The pressure pulse is in effect for $t_{max} = 0.03$ s with maximum pressure $p_{max}=1.333 \times 10^4$ dyne/$\text{cm}^2$. The final time is $T=12$ ms. We use $\mathbb{P}_1$ bubble - $\mathbb{P}_1$ elements for the fluid velocity and pressure, respectively, and $\mathbb{P}_1$ elements for the structure velocity and displacement. The results are obtained using $\Delta t=10^{-5}$ on a mesh containing 7,500 elements in the fluid domain and 1,200 elements in the structure domain. \begin{figure}[h!] \centering{ \includegraphics[scale=0.55]{flowrate.pdf} } \caption{Fluid flowrate vs. x-axis compared with a monolithic scheme.} \label{monolithiccomparisonF} \end{figure} \begin{figure}[h!] \centering{ \includegraphics[scale=0.55]{pressure.pdf} } \caption{Fluid pressure vs. x-axis compared with a monolithic scheme.} \label{monolithiccomparisonP} \end{figure} \begin{figure}[h!] \centering{ \includegraphics[scale=0.55]{radius.pdf} } \caption{Fluid-structure interface displacement vs. x-axis compared with a monolithic scheme.} \label{monolithiccomparisonD} \end{figure} Figures~\ref{monolithiccomparisonF},~\ref{monolithiccomparisonP} and~\ref{monolithiccomparisonD} show a comparison of the flowrate, mean pressure and fluid-structure interface displacement obtained using the proposed numerical method and a monolithic scheme used in~\cite{bukavc2014modular,quaini2009algorithms} at times $t=4, 8,$ and 12 ms. Good agreement is observed in all cases, with only small discrepancies in the interface displacement. 
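The inlet pressure profile defined above is straightforward to implement; a minimal sketch with the stated $t_{max}$ and $p_{max}$, which checks that the half-cosine pulse starts and ends at zero and peaks at $t = t_{max}/2$:

```python
import math

T_MAX = 0.03     # duration of the pressure pulse [s]
P_MAX = 1.333e4  # peak pressure [dyne/cm^2]

def p_in(t):
    """Inlet pressure: half-cosine pulse active for t <= T_MAX, zero after."""
    if t <= T_MAX:
        return 0.5 * P_MAX * (1.0 - math.cos(2.0 * math.pi * t / T_MAX))
    return 0.0

def p_out(t):
    """Outlet pressure is identically zero."""
    return 0.0

assert p_in(0.0) == 0.0                       # pulse starts at zero
assert abs(p_in(T_MAX / 2) - P_MAX) < 1e-9    # peak at t = t_max / 2
assert abs(p_in(T_MAX)) < 1e-9                # pulse ends at zero
assert p_in(0.05) == 0.0                      # zero after the pulse
```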
We note that the time step used in the simulations obtained with a monolithic solver is $\Delta t=10^{-4}$. As expected, due to the splitting error, a smaller time step was needed in the partitioned scheme. \section{Conclusions} We present a novel partitioned, non-iterative method for FSI problems with thick structures. The presented method is based on generalized Robin boundary conditions, which are designed by linearly combining the kinematic and dynamic coupling conditions using a combination parameter, $\alpha$. Thanks to the novel design of the Robin boundary conditions used in the fluid and structure sub-problems, we prove unconditional stability of the semi-discrete numerical method applied to a moving domain FSI problem. Convergence analysis was performed for a fully-discrete, linearized problem, yielding $\mathcal{O} (\Delta t^{\frac12})$ accuracy in time and optimal accuracy in space. The theoretically obtained results are verified in numerical examples. In particular, using the method of manufactured solutions, we compute the relative errors between the numerical and exact solutions on both fixed domain and moving domain problems. Furthermore, we compute the convergence rates for different values of the combination parameter $\alpha$, and note that increasing values of $\alpha$ lead to a decrease of the convergence rates from one to 0.5 for a fixed $\Delta t$. We also compare our results to the ones obtained using a monolithic scheme on a benchmark problem of pressure propagation in a two-dimensional channel, obtaining a good agreement. However, due to the splitting error and sub-optimal accuracy, a smaller time step was used in the partitioned scheme. An extension of the proposed method to higher-order accuracy will be considered in our future work. {\color{black} One of the drawbacks of the proposed method is its dependence on the combination parameter $\alpha$, which is, generally, problem dependent. 
In other work where similar combination parameters are introduced, such as~\cite{gerardo2010analysis}, the authors suggest using \begin{gather} \alpha = \frac{\rho_S H_S}{\Delta t} + \beta H_S \Delta t, \label{alphaformula} \end{gather} where $H_S$ is the height of the solid domain and $$\beta=\frac{E}{1-\nu^2}(4\rho_1^2 - 2(1-\nu)\rho_2^2),$$ with $E$ the Young's modulus, $\nu$ the Poisson's ratio, and $\rho_1$ and $\rho_2$ the mean and Gaussian curvatures of the fluid-structure interface, respectively. However, this choice of $\alpha$ is proposed to ensure the convergence of a subiterative solution procedure when solving strongly coupled FSI problems. Since we do not need subiterations to achieve stability, we do not require similar conditions on $\alpha$. Indeed, using~\eqref{alphaformula} to compute $\alpha$ in our method gives results that are not optimally accurate. Therefore, $\alpha$ needs to be estimated separately for each problem. } \section{Acknowledgments} This work was partially supported by NSF under grants DMS 1619993 and 1912908, and DCSD 1934300. We would like to thank Prof. Catalin Trenchea for helpful discussions and suggestions. \bibliographystyle{ieeetr}
\section{Introduction} In recent years string theory or, more specifically, gauge-gravity duality has seen interesting applications in the field of condensed matter physics. One of the earliest such applications is the discovery of a superconductor-like phase transition in AdS with a Reissner-Nordstr\"{o}m black hole and a charged scalar field minimally coupled to a $U(1)$ gauge field \cite{Gubser:2008px,Hartnoll:2008vx,Hartnoll:2008kx}. Other systems with similar properties include the so-called ``p-wave'' holographic superconductors, with non-abelian gauge fields instead of a scalar coupled to a $U(1)$ gauge field \cite{Gubser:2008wv,Gubser:2008zu,Roberts:2008ns}. In various works \cite{Horowitz:2009ij,Gubser:2009cg,Gubser:2009gp, Gubser:2008pf, Gauntlett:2009dn, Gubser:2008wz, Konoplya:2009hv,Basu:2009vv,Ammon:2009xh}, authors have studied related systems in the zero temperature limit. Generically the zero temperature solutions turn out to be solitonic, with a zero sized horizon. The authors find that the effective potential for small gauge field fluctuations vanishes near the black hole horizon in the abelian cases. This implies that the normal component of the A.C. conductivity never vanishes, even at zero temperature, which in turn indicates that the superconductor is gapless \cite{Horowitz:2009ij}. However, the corresponding effective potential does not vanish at the horizon in the non-abelian case. It is concluded that the holographic non-abelian superconductor does have a finite gap for the relevant gauge field fluctuations \cite{Basu:2009vv}. The non-abelian system is anisotropic and shows different conductivities in different directions. The near horizon geometries of these zero temperature solutions are interesting and range from simple $AdS^4$ to various complicated, Lifshitz-like geometries. Near the horizon these geometries can be constructed by an analytic perturbation theory \cite{Horowitz:2009ij,Basu:2009vv}. 
The near-horizon value of the scalar field, or of the appropriate component of the gauge field, enters as an undetermined parameter in these perturbative expansions. These parameters are determined by a numerical integration out to infinity and subsequent application of the proper boundary conditions. In this work we generalize a simple version of these zero temperature solutions to small but non-zero temperature $T$. It should be noted that non-zero temperature solutions have already been obtained numerically \cite{Hartnoll:2008kx,Horowitz:2009ij,Ammon:2009xh}. However, from a purely numerical solution it is difficult to draw conclusions about the low temperature analytic behaviour of various quantities, whereas we will be able to determine the behaviour of various interesting physical quantities analytically. We confine ourselves to cases where the near-horizon geometry is $AdS^4$. We expect that at non-zero temperature a small horizon forms deep inside this $AdS^4$. Intuitively this follows from a separation of scales: as the black hole is situated deep inside the $AdS^4$, it does not affect the UV physics. Hence we expect that at an intermediate scale ($r_i$) the non-zero temperature solution approaches the zero temperature one. Importantly, for a very small horizon size ($r_0$) the intermediate scale may itself be chosen very small, so that the zero temperature perturbative method remains valid there. Here we have $r_0 \ll r_i \ll 1$. We show that we can set up a perturbative expansion in terms of the gauge field $A_0$ which interpolates between the black hole horizon and the intermediate scale. From the matching at the intermediate scale we argue that a slight variation of the zero temperature numerics may be applied to the non-zero temperature case. From our solution we may calculate how the entropy, specific heat, etc.\ vanish near zero temperature. We also calculate the various energy gaps associated with these systems. 
In particular, in the non-abelian case we calculate the various energy gaps in the system, and from their ratio we find some hint of an underlying ``pairing mechanism''. The ratio deviates by around $33\%$ from its weak coupling BCS counterpart. Our results may be generalized to various cases where the near-horizon geometry at zero temperature deviates from $AdS^4$ \cite{Horowitz:2009ij,Gubser:2009cg,Gubser:2009gp, Gubser:2008pf, Gauntlett:2009dn, Gubser:2008wz}. Applications may include the calculation of various fermionic propagators, of second sound and, more interestingly, of the low temperature behaviour of the non-universality of the viscosity-entropy ratio \cite{Herzog:2009ci,Chen:2009pt,Gubser:2009dt,Ammon:2010pg,Gubser:2010dm,Faulkner:2009am,Erdmenger:2010xm,Natsuume:2010ky}. The plan of this paper is as follows. In section \ref{sec:ab} we discuss the abelian or s-wave case. In section \ref{sec:nonab} we discuss the non-abelian or p-wave case. \section{Abelian holographic superconductors} \label{sec:ab} We begin with the following four dimensional action describing gravity minimally coupled to a Maxwell field and a charged scalar: \begin{equation}\label{eq:bulktheory} {\cal L} = R + \frac{6}{L^2} - \frac{1}{4} F^{\mu\nu} F_{\mu\nu} - |\nabla \psi - i q A \psi |^2 -V( |\psi|) \,. \end{equation} As usual we write $F=dA$; the cosmological constant is $-3/L^2$, the potential $V(|\psi|)$ contains the scalar mass term, and $m,q$ are the mass and charge of the scalar field. We are interested in plane symmetric solutions, so we set \begin{equation}\label{metric} ds^2=-g(r) e^{-\chi(r)} dt^2+{dr^2\over g(r)}+r^2(dx^2+dy^2) \end{equation} \begin{equation} A=A_0(r)~dt, \quad \psi = \psi(r) \end{equation} We can choose a gauge in which $\psi$ is real, and we work in units with $L=1$. 
The equations of motion are: \begin{equation} \psi''+\left(\frac{g'}{g}-\frac{\chi'}{2}+\frac{2}{r} \right)\psi' +\frac{q^2A_0^2e^\chi}{g^2} \psi -{V'(\psi)\over 2g}=0\label{psieom}\end{equation} \begin{equation}\label{phieom} A_0''+\left(\frac{\chi'}{2}+\frac{2}{r} \right)A_0'-\frac{2q^2\psi^2}{g}A_0=0 \end{equation} \begin{equation} \chi'+r\psi'^2+\frac{rq^2A_0^2\psi^2e^\chi}{g^2}=0\label{chieom} \end{equation} \begin{equation}\label{geom} g' + \left(\frac{1}{r} - { \chi'\over 2}\right) g+\frac{rA_0'^2e^\chi}{4}- 3r+\frac{rV(\psi)}{2}=0 \end{equation} These equations are invariant under the following scaling symmetries: \begin{eqnarray}\label{rescales} r \to a r \,, \quad (t,x,y) \to (t,x,y)/a \,, \quad g \to a^2 g \,, \quad A_0 \to a A_0 \\ e^\chi \to b^2 e^\chi, \quad t\to bt, \quad A_0 \to A_0/b \end{eqnarray} Once a solution is found, this symmetry can be used to set $\chi =0$ at the boundary at infinity, so that the metric takes the standard AdS form asymptotically. At large radius \begin{equation} A_0 = \mu -{\rho\over r}, \qquad \psi ={\psi^{(\lambda)}\over r^\lambda}+{\psi^{(3-\lambda)}\over r^{3-\lambda}}, \end{equation} where $\lambda = (3 +\sqrt{9+4m^2})/2 $. In the boundary CFT, $\mu$ is the chemical potential, $\rho$ is the charge density, and $\lambda$ is the scaling dimension of the operator dual to $\psi$. We want this operator to condense without being sourced, so we are only interested in solutions where $\psi$ is normalizable. This typically requires setting $\psi^{(3-\lambda)} = 0$. As the boundary chemical potential is increased beyond a certain critical value, $\psi$ condenses. One may ask about the zero temperature limit of such a configuration. At $T=0$, a condensation of $\psi$ is possible only if $m^2-2 q^2 < -3/2$. \subsection{$m^2 = 0$} Here we rephrase the results of \cite{Horowitz:2009ij} in our terms. We would like to find the superconducting ground state of the system with a non-zero condensate, and we confine ourselves to the $m^2 = 0$ case. 
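As a quick consistency check (illustrative, not part of the original analysis), one can verify symbolically that the falloff $\psi \sim r^{-\lambda}$ with $\lambda = (3+\sqrt{9+4m^2})/2$ solves the linearized scalar equation in the pure $AdS_4$ background ($g=r^2$, $\chi=0$, $V=m^2|\psi|^2$, with the subleading gauge-field term dropped):

```python
import sympy as sp

r, m = sp.symbols('r m', positive=True)
lam = (3 + sp.sqrt(9 + 4*m**2)) / 2
psi = r**(-lam)

# Linearized scalar equation in pure AdS_4 (g = r^2, chi = 0),
# dropping the subleading A_0^2 term: psi'' + (4/r) psi' - (m^2/r^2) psi = 0
residual = sp.diff(psi, r, 2) + (4/r)*sp.diff(psi, r) - (m**2/r**2)*psi
print(sp.simplify(residual))   # 0
```

The same computation with the exponent $3-\lambda$ gives the other, non-normalizable falloff.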
Being a single state without any degeneracy, a superconducting ground state does not have any entropy associated with it (\cite{Horowitz:2009ij},\cite{Gubser:2009cg,Gubser:2009gp}). We start by guessing a near horizon ansatz, $g(r)=r^2,\psi = \psi_0$. We use the scaling symmetries \eref{rescales} to set the coefficient in front of $g$ to unity. Once a suitable $g(r)$ is chosen, we set up a step by step perturbation in $A_0$. The first step is to solve the equation of motion for $A_0$ in this metric, \begin{equation} A_0 = r^{2+\alpha}, \quad q\psi_0 = \left({\alpha^2 + 5\alpha + 6\over 2}\right)^{1/2} \end{equation} Here we have used the scaling symmetries to rescale the coefficient of $A_0$ to $1$. All the other metric components and the scalar field are kept $r$ independent at this step. At the next order in $A_0$ one may solve for the $r$ dependence of the other fields (assuming $\alpha > -1$). This procedure works as long as the various $A_0$ dependent quantities appearing in the perturbative expansion are small. We get, \begin{equation} \quad \psi = \psi_0 - \psi_1(r), \quad \chi = - \chi_1(r), \quad g=r^2 - g_1(r) \end{equation} where, \begin{equation} \quad \chi_1 = {\alpha^2 + 5\alpha + 6\over 4(\alpha + 1)}e^{\chi_0}r^{2(1+\alpha)} \end{equation} \begin{equation} g_1 = {\alpha + 2\over 4} e^{\chi_0}r^{4+2\alpha}, \quad \psi_1 = {q e^{\chi_0} \over 2(2\alpha^2 + 7\alpha +5)}\left({\alpha^2 + 5\alpha + 6\over 2}\right)^{1/2} r^{2(1+\alpha)}. \label{scaling} \end{equation} This scaling solution is valid in the regime $r \ll 1$. It may be used as a boundary condition for a numerical integration of the EOMs out to infinity. In general the value of $\psi(\infty)$ will be non-zero; the value of $\alpha$ is determined by the requirement that $\psi(\infty)$ be zero. 
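The relation between $q\psi_0$ and $\alpha$ can be checked directly: with $g=r^2$, constant $\chi$ and $\psi=\psi_0$, the gauge field equation reduces to an Euler equation solved by $r^{2+\alpha}$. A short symbolic check (illustrative, not from the original text):

```python
import sympy as sp

r, alpha = sp.symbols('r alpha', positive=True)
A0 = r**(2 + alpha)
qpsi0_sq = (alpha**2 + 5*alpha + 6) / 2   # (q psi_0)^2 as quoted in the text

# Gauge field equation in the g = r^2, chi = const background:
# A_0'' + (2/r) A_0' - (2 q^2 psi_0^2 / r^2) A_0 = 0
residual = sp.diff(A0, r, 2) + (2/r)*sp.diff(A0, r) - (2*qpsi0_sq/r**2)*A0
print(sp.simplify(residual))   # 0
```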
\subsection{Small non-zero $T$} From the scaling relations \eref{scaling} one finds that the IR geometry ($r \rightarrow 0$) is an emergent $AdS^4$ with the same cosmological constant as that of the boundary $AdS^4$. At small non-zero $T$ we guess that a black hole horizon will be created in the deep IR region of the emergent $AdS^4$. Hence as a first step we choose the following ansatz, \begin{align} g(r)=r^2 \left(1-\frac{r_0^3}{r^3}\right), \quad \psi=\psi_0 \label{testmetric} \end{align} We follow the same chain of logic as in the zero temperature case and construct a solution as a perturbation in $A_0(r)$. Our aim is to find a solution which approaches the scaling solution for $\frac{r}{r_0} \gg 1$. As we argue below, it is then meaningful to match with the numerical solution to get a full solution of the EOMs; the other quantities then automatically match the scaling solution. The solution for $A_0$ in the above metric (eqn (\ref{testmetric})) is given by, \begin{align} A_0&= r_0^{2+\alpha} F(\frac{r}{r_0}) \\ F(r)&=\frac{\Gamma\left(\frac{2+\alpha}{3}\right) \Gamma\left(\frac{5+\alpha }{3}\right) }{\Gamma\left(\frac{2}{3}\right)\Gamma\left(\frac{5}{3}+\frac{2 \alpha }{3}\right)\sin(\frac{\pi \alpha}{3})}\frac{1}{r} \text{Im}\left(\text{\mbox{$_2${F}$_1$}}\left(-1-\frac{\alpha}{3},\frac{2}{3}+\frac{\alpha }{3},\frac{2}{3},r^3\right)\right) \label{ftemp} \end{align} Here, $A_0$ vanishes linearly near the black hole horizon. For $r\rightarrow \infty$, $F(r)\approx r^{2+\alpha}+O(r^{1+\alpha})$. From that we get $A_0(r)\approx r^{2+\alpha}(1+r_0 O(\frac{1}{r}))$ for $r \gg r_0$. In an intermediate region $r=r_* \gg r_0$, the solution in Eq. (\ref{ftemp}) can be matched with the zero temperature solution for $A_0$. Now if $r_0 \ll 1$, then we can choose the matching region such that $r_0 \ll r_* \ll 1$. 
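The claim that the regular finite-temperature solution reduces to the zero-temperature power law away from the horizon can be illustrated numerically. The sketch below (with arbitrary illustrative values of $\alpha$ and $r_0$, not taken from the text) integrates the leading-order equation for $A_0$ in the background \eqref{testmetric} starting just outside the horizon, and checks that the logarithmic derivative $rA_0'/A_0$ approaches $2+\alpha$ for $r\gg r_0$:

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, r0 = 0.5, 0.01            # illustrative values
m2 = alpha**2 + 5*alpha + 6      # 2 q^2 psi_0^2 for this alpha

def rhs(r, y):
    # A_0'' + (2/r) A_0' - (2 q^2 psi_0^2 / g) A_0 = 0,  g = r^2 (1 - r0^3/r^3)
    g = r**2 * (1.0 - (r0 / r)**3)
    return [y[1], -2.0 * y[1] / r + m2 * y[0] / g]

# Regular boundary condition: A_0 vanishes (linearly) at the horizon.
start = r0 * (1.0 + 1e-4)
sol = solve_ivp(rhs, [start, 1.0], [start - r0, 1.0], rtol=1e-10, atol=1e-14)

r_end, A, Ap = sol.t[-1], sol.y[0][-1], sol.y[1][-1]
print(r_end * Ap / A)            # -> close to 2 + alpha = 2.5
```

Overall normalization is irrelevant since the equation is linear; only the exponent of the growing mode is being tested.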
Importantly, $A_0$ remains small in the matching region, and the other fields can be solved for perturbatively in terms of $A_0$ (see appendix \ref{app:lim}). We get, \begin{align} \psi_1(r)&=-q^2 \psi_0 \int^r_{r_0} \frac{d\tilde r}{g\tilde r^2} \int^{\tilde r}_{r_0} \frac{r'^2 A_0^2}{g} dr' \\ \chi_1(r)&=-q^2 \psi_0^2 \int^r_{r_0} \tilde r \frac{A_0^2}{g^2} d\tilde r \\ g_1(r)&=- \frac{1}{r} \int^r_{r_0} \tilde r^2 \frac{A_0'^2}{4} d\tilde r. \label{perturb} \end{align} Using the asymptotic expansion of the hypergeometric function one finds that the above quantities approach their zero temperature values in the matching region $r \sim r_*$. The non-zero temperature solution can now be integrated out to infinity. As our solution matches the zero temperature solution at leading order in $\frac{r_0}{r_*} \ll 1$, almost the same numerical solution may be used to extend our solution to all values of $r$. We assume that the EOMs are numerically stable in the region $(r_*,\infty)$, in the sense that a small perturbative change in the initial condition gives rise to a small change at infinity. \subsection{Some results} We now perform some simple calculations using our finite temperature solution. At $T\rightarrow 0$ the horizon behaves like a black hole horizon situated deep inside an IR $AdS^4$, and various $AdS^4$ results are applicable at leading order in $r_0$. The temperature $T$ of the black hole is given by, \begin{align} 4\pi T= [g'(g\exp(-\chi_0))']^\frac{1}{2}|_{r=r_0} \approx 3 \exp(-\chi_0/2) r_0, \quad \text{for } r_0 \ll 1. \end{align} Hence $T \propto r_0$ for small $r_0$. The total entropy ($S$)\footnote{The condensate does not carry any entropy. Hence the entropy of the whole solution is the same as the entropy of the non-superconducting part.} and the mass ($M$) of the non-superconducting part (i.e. of the black hole) vary as, \begin{align} S \propto r_0^2 \propto T^2 \end{align} and, \begin{align} M \propto r_0 ^3 \propto T^3. 
\end{align} We may define two kinds of specific heat for our system: one at fixed chemical potential ($C_\mu$) and the other at fixed total charge ($C_\rho$) \cite{Peeters:2009sr}. Here we will calculate \begin{align} C_\mu \sim T \frac{\partial S}{\partial T},\quad \mu \text{ fixed.} \end{align} Say a small change $\delta r_0$ in $r_0$ changes the system from $(T,\mu)$ to $(T+\delta T,\mu+\delta \mu)$. By scale invariance, the resulting system is equivalent to a system $(\frac{T+\delta T}{\mu+\delta \mu} ,\mu)$. Here $\delta \mu \propto \delta r_0$ and $\delta T \propto \delta r_0$. Hence, \begin{align} C_{\mu} \propto r_0 ^2 \propto T^2 \end{align} The charge density of the non-superconducting part behaves like, \begin{align} \rho \propto r_0^2 A' \propto r_0 ^{3+\alpha} \propto T^{3+\alpha}. \end{align} \subsection{Conductivity} To obtain the conductivity in a given background, one solves for a linearized perturbation of the vector potential $A_x$ in the same geometry. Assuming $A_x=a(r) \exp(i\omega t)$ we get in our case, \begin{align} a''+\left(\frac{g'}{g}-\frac{\chi'}{2}\right) a'+ \left(e^{\chi}(\frac{\omega^2}{g^2}-\frac{A'^2}{g})-2q^2 \psi^2\right)a=0 \end{align} We define a new variable, \begin{align} d \tilde r= \frac{e^{\frac{\chi}{2}}}{g} dr. \end{align} In terms of this new variable, \begin{align} -\frac{d^2}{d\tilde r^2}a+V(\tilde r)a=\omega^2 a \label{Aeq} \end{align} where, \begin{align} V(\tilde r)=g [A_0'^2+2 q^2 \psi^2 \exp(-\chi)] \end{align} This potential vanishes near $r=0$. The superconducting nature of the system is argued from the existence of a supercurrent solution. If we set $\omega=0$ and integrate $A_x$ from the horizon (with a regularity condition at the horizon), we expect to get a non-trivial $A_x$. The existence of such a solution implies a $\delta$ function in the real part of the conductivity at $\omega=0$ \cite{Basu:2008st,Horowitz:2009ij,Herzog:2008he}. 
The field $a$ has the following asymptotic behaviours near the boundary ($\tilde{r} \rightarrow 0$) and the horizon ($\tilde{r} \rightarrow -\infty$): \begin{eqnarray} a(\tilde{r} \rightarrow 0) \sim a_0^b + a_1^b \tilde{r} \\ a(\tilde{r} \rightarrow -\infty) = a_0 e^{i \omega \tilde{r}}, \end{eqnarray} where we have chosen the incoming boundary condition near the horizon. The conductivity is defined as follows, \begin{equation} \label{conductivity} \sigma = -\frac{ia_1^b}{\omega a_0^b} \end{equation} It has been argued that at zero temperature the zero frequency limit (i.e. the non-superconducting part) of ${\text Re}(\sigma)$ vanishes as a power law \cite{Horowitz:2009ij,Gubser:2008pf}, i.e. \begin{eqnarray} {\text Re}(\sigma(\omega)) \sim \omega^\delta, \text{ for } \omega\ll 1. \end{eqnarray} Here $\delta=\sqrt{4 V_0+1}-1$, where $V_0=\lim_{\tilde r \rightarrow -\infty} \tilde r^2 V(\tilde r)$. As the non-superconducting contribution to ${\text Re}(\sigma)$ is non-zero even at small frequencies, the system does not have an energy gap in this channel. However, the non-superconducting part of ${\text Re}(\sigma(\omega))$ vanishes in the zero frequency limit. \subsubsection{Small non-zero $T$} The non-superconducting part of $\lim_{\omega \rightarrow 0}{\text Re}(\sigma(\omega))$ is non-zero at any finite temperature. Due to the gaplessness of the system we expect a power law decay of this quantity with the temperature. The non-superconducting contribution to $\lim_{\omega \rightarrow 0}{\text Re}(\sigma(\omega))$ has a smooth zero frequency limit (see appendix \ref{app:cond}) and can be calculated by setting $\omega=0$ in \eref{Aeq}, i.e. \begin{align} \frac{d^2}{d\tilde r^2} a=g \left ( A_0'^2+2 q^2 e^{-\chi}\psi^2 \right)a \end{align} and we have, \begin{align} \lim_{\omega \rightarrow 0}{\text Re}(\sigma(\omega))=\frac{a_{h}^2}{a_b^2}, \end{align} where $a_h$ and $a_b$ are the values of $a$ at the horizon and the boundary respectively. 
We break the domain of $r$ into two parts, $(r_0,r_*)$ and $(r_*,\infty)$, such that $r_0 \ll r_* \ll 1$. We have, \begin{align} \lim_{\omega \rightarrow 0}{\text Re}(\sigma(\omega))=\frac{a_{h}^2}{a_b^2}=\frac{a_{h}^2}{a(r_*)^2}\frac{a(r_*)^2}{a_b^2}. \end{align} (Here $a(r_*)$ is understood in terms of the corresponding value of the new coordinate $\tilde r$.) Our goal is to fix $r_*$ and take $r_0$ to zero. In that case, \begin{align} \lim_{\omega \rightarrow 0}{\text Re}(\sigma(\omega))=\frac{a_{h}^2}{a_b^2} \sim \frac{a_{h}^2}{a(r_*)^2} C_1 \end{align} where $C_1$ is the limiting value of the quantity $\frac{a(r_*)^2}{a_b^2}$ as $r_0 \rightarrow 0$. This value may be calculated from the numerics. The leading dependence of the non-superconducting part of $\mathrm{Re}(\sigma(0))$ on $\frac{1}{T}$ comes from the behaviour of the solution between $(r_0,r_*)$. Taking $r_*$ in our matching region, we can use our analytic solution there. Defining a rescaled variable $r_1=\frac{r}{r_0}$ and the corresponding rescaled variable $\tilde r_1=\tilde r r_0$, we write equation \eref{Aeq} as, \begin{align} \frac{d^2}{d\tilde r_1^2} a&= r_1^2 (1-\frac{1}{r_1^3}) \left ( A_0'^2+2 q^2 e^{-\chi}\psi^2 \right)a \\ &\approx 2q^2 r_1^2 (1-\frac{1}{r_1^3}) e^{-\chi_0}\psi_0^2 a \end{align} where we have kept the leading order terms in $r_0$. The regular solution at the horizon has the following form, \begin{align} a=\text{Im}\left[\text{\mbox{$_2${F}$_1$}}\left(-\frac{\alpha }{3}-\frac{2}{3},\frac{\alpha }{3}+1,\frac{1}{3},\frac{r^3}{r_0^3}\right)\right]. 
\end{align} Using the asymptotic expansion of the hypergeometric function we get, \begin{eqnarray} \lim_{\omega \rightarrow 0}{\text Re}(\sigma(\omega)) \sim T^{2+\alpha} \end{eqnarray} \section{Non-abelian case} \label{sec:nonab} The Einstein-Yang-Mills action for a non-abelian gauge field with a negative cosmological constant is given by \cite{Gubser:2008zu}, \begin{eqnarray} S =\int d^4x\sqrt{-g}\left({\mathcal {R}}+\frac{6}{l^2}-\frac{1}{4}F_a^{\mu\nu}F_{\mu\nu}^a\right), \label{actionem} \end{eqnarray} where $F^a_{\mu\nu}$ is the field strength of an $SU(2)$ gauge field. The fully backreacted solution of the resulting equations is constructed in \cite{Ammon:2009xh,Basu:2009vv}. The ansatz for the gauge fields is\footnote{Due to a repulsive term coming from the non-abelian interactions, it is expected that an isotropic ansatz will have a quartic instability and would possibly have more free energy than the anisotropic one \cite{Gubser:2008wv,Basu:2008bh}.}, \begin{eqnarray} A=A(r)\tau^3 dt+B(r)\tau^1 dx. \end{eqnarray} To accommodate the anisotropy of the gauge field ansatz in the spatial directions, we choose the following ansatz for our metric, \begin{eqnarray} ds^2=-g(r)e^{-\chi(r)}dt^2+\frac{dr^2}{g(r)}+r^2\Big(c(r)^2 dx^2+dy^2\Big). \end{eqnarray} The Yang-Mills equations for $A(r)$ and $B(r)$ are \begin{eqnarray} A_t^3\longrightarrow& A''+A'\left(\frac{2}{r}+\frac{\chi'}{2}+\frac{c'}{c}\right)-\frac{q^2B^2}{r^2gc}A=0, \nonumber\\ A_x^1\longrightarrow& B''+B'\left(\frac{g'}{g}-\frac{\chi'}{2}-\frac{c'}{c}\right)+\frac{e^{\chi}q^2A^2}{g^2}B=0. 
\label{maineq} \end{eqnarray} The diagonal Einstein equations give, \begin{eqnarray} -g'\left(\frac{1}{r}+\frac{c'}{2c}\right)-g\left(\frac{1}{r^2}+\frac{3c'}{r c}+\frac{c''}{c}\right)+3 &=& \frac{e^{\chi}}{4}A'^2+\frac{g}{4r^2c}B'^2+e^{\chi}\frac{q^2A^2B^2}{4r^2gc}, \nonumber\\ -\frac{\chi'}{r}+\frac{c'}{c}\left(-\chi'+\frac{g'}{g}\right) &=& \frac{e^{\chi}q^2A^2B^2}{g^2r^2c^2}, \nonumber\\ cc''+cc'\left(\frac{g'}{g}+\left(\frac{2}{r}-\frac{\chi'}{2}\right)\right) &=& -\frac{B'^2}{2r^2}+e^{\chi}\frac{q^2A^2B^2}{2g^2r^2}. \label{maineq2} \end{eqnarray} The above equations are invariant under the following scaling symmetries: \begin{eqnarray} \label{rescale} & &r \rightarrow a_1 r, \quad (t,x,y) \rightarrow (t,x,y)/a_1, \quad g \rightarrow a_1^2g, \quad A \rightarrow a_1 A, \quad B \rightarrow a_1 B, \\ \nonumber & &e^\chi \rightarrow a_2^2 e^\chi, \quad t \rightarrow a_2 t, \quad A \rightarrow A/a_2, \\ \nonumber & & x \rightarrow x/a_3,\quad B \rightarrow a_3B, \quad c \rightarrow a_3 c. \end{eqnarray} The second scaling symmetry may be used to set $\chi=0$ at infinity and the third to set $c=1$ at infinity, so that the asymptotic metric is that of $AdS_4$. The fields have the following asymptotic behavior: \begin{equation} A = \mu - \frac{\rho}{r}, \quad B = B_0^b + \frac{B_1^b}{r}, \label{asymptotic} \end{equation} where $\mu$ is the chemical potential and $\rho$ is the charge density in the boundary theory. In what follows we will only consider solutions for which the field $B$ vanishes near the boundary, i.e. $B_0^b = 0$. \subsection{Zero temperature solution} \label{sec:nonabzero} As in the abelian case, the zero temperature solution is constructed by similar techniques \cite{Basu:2009vv}. 
We start by guessing the near horizon ansatz $g=r^2$, $B= B_0$. Putting this in \eref{maineq}, we find the equation of motion for $A$, \begin{eqnarray} r^2 (r^2 A')'=\frac{q^2 B_0^2}{c_0^2} A \Rightarrow A = e^{-\frac{\beta}{r}}, \quad \beta=q B_0/{c_0}, \end{eqnarray} where we have used the observation that $A \rightarrow 0$ at the horizon, and by the rescaling (\ref{rescale}) we have set the coefficient of $A$ to $1$. At the next order in the perturbation $A$, we get, \begin{eqnarray} ~~B\sim B_0-B_1(r) ,~~\chi\sim \chi_0-\chi_1(r),~~g\sim r^2+g_1(r),~~c\sim c_0+c_1(r) . \label{ansatz} \end{eqnarray} All the terms with subscript $1$ are sub-leading and go to zero faster than the leading part, where applicable, as $r\rightarrow0$. Here, \begin{eqnarray} ~~~B_1= B_0\left(\frac{e^{\chi_0}q^2 }{4\beta^2}e^{-2\beta/r}\right), ~~~c_1 =c_0\left(\frac{e^{\chi_0} }{8r^2}e^{-2\beta/r}\right), \\ \nonumber \chi_1=-\frac{e^{\chi_0}}{2r}e^{-2\beta/r},~~~g_1= -\frac{e^{\chi_0}\beta}{4r}e^{-2\beta/r}, \label{scaling2} \end{eqnarray} where by the rescaling (\ref{rescale}) we may set $\chi_0=0,c_0=1$. After solving these equations numerically, one again uses the rescalings of $g,c,\chi$ to make the asymptotic geometry the same as that of $AdS_4$. For a given $q$, one numerically chooses $\beta$ in such a fashion that $B$ vanishes near the boundary \cite{Basu:2009vv}. \subsection{Small non-zero $T$} We follow the same strategy as in the abelian case and choose a finite temperature metric and $B$ field of the form\footnote{In a similar discussion of a non-zero temperature solution, the metric used in \cite{Basu:2009vv} was not correct, and other formulas there are also schematic.}, \begin{eqnarray} g(r)=r^2 \left(1-\frac{r_0^3}{r^3}\right), \quad B=B_0 \end{eqnarray} This satisfies the Einstein equations at zeroth order. 
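As a quick symbolic check (illustrative, not from the original text), one can confirm that $A=e^{-\beta/r}$ solves the near-horizon equation exactly:

```python
import sympy as sp

r, beta = sp.symbols('r beta', positive=True)
A = sp.exp(-beta / r)

# Near-horizon equation: r^2 (r^2 A')' = beta^2 A
residual = r**2 * sp.diff(r**2 * sp.diff(A, r), r) - beta**2 * A
print(sp.simplify(residual))   # 0
```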
Using these background fields we may write down the equation for $A$, \begin{eqnarray} A''+\frac{2}{ r}A'-\frac{\beta^2}{ r^4 (1-r_0^3/r^3)}A=0,\quad \beta=\frac{q B_0}{c_0} \label{nonabA} \end{eqnarray} We would like to find a solution of the above equation which is regular at the horizon and approaches the zero temperature solution $\exp(-\frac{\beta}{r})$ for $r \gg r_0$. Unfortunately there seems to be no analytic solution of the above equation. However, any solution $A(r)$ may be written as a linear combination \begin{align} A(r)=C_1 \exp(-\frac{\beta}{r}) + C_2 \exp(\frac{\beta}{r}) \text{ for } r \gg r_0. \end{align} We need to show that as $r_0 \rightarrow 0$, we may choose $r_* (\text{with } r_* \gg r_0)$ in such a way that $C_2/C_1 \ll \exp(-2\frac{\beta}{r_*})$. Just like in the abelian case we may choose an $r_*$ such that $r_0 \ll r_* \ll 1$, so that the non-zero temperature solution approaches the zero temperature solution in the regime $r \sim r_*$. This amounts to saying that \eref{nonabA} has a smooth zero temperature limit, at least for the solutions which are regular at the horizon. This is argued using a matched asymptotic expansion. Near the horizon $r\approx r_0+\delta r$, $\delta r \ll 1$, and it is possible to linearise \eref{nonabA}. The solution of the linearised equation which is regular at the horizon is given by, \begin{align} A(r)\propto \sqrt{\delta r}\, {\text I}_1\left(\beta \frac{2 \sqrt{\delta r}}{\sqrt{3} {r_0}^{3/2}}\right) \end{align} For small enough $r_0$ there is a region where the above linear approximation and the WKB solution of \eref{nonabA} are both valid. Moreover one may choose $ \frac{r_0^{3/2}}{\sqrt{\delta r}} \gg 1$ in such a region. Using the asymptotic expansion of the Bessel function one argues that the regular solution matches onto the correct WKB solution. Extrapolating the correct solution to $r \gg r_0$ one gets, \begin{align} A(r)\approx C_1 \exp(-\frac{\beta}{r}) \text{ for } r \gg r_0. 
\end{align} This guarantees a matching region where the non-zero temperature solution approaches the zero temperature solution. The other fields may be solved for by a procedure similar to that of the abelian case. Using a similar numerical stability argument, we expect that our solution may be integrated out to infinity using a slight variation of the zero temperature numerics. \subsection{Some results} We now make some simple calculations using our scaling solution. As the black hole becomes a small black hole situated deep inside an IR $AdS^4$, various $AdS^4$ results are applicable at leading order in $r_0$. The temperature $T$, entropy, mass and specific heat behave similarly to the abelian case. The charge density of the non-superconducting part behaves like, \begin{align} \rho \propto r_0^2 A' \propto \exp(-\frac{\beta}{r_0}) \sim \exp(-\frac{\beta}{T}) . \end{align} \subsection{Conductivity and energy gap} In order to calculate the conductivity of this system, we need to turn on a small perturbation of the vector potential. We turn on gauge field perturbations of the form: \begin{equation} A_y^3 = \epsilon a(r)e^{-i\omega t}\tau^3 dy \end{equation} Here we get, \begin{eqnarray} \label{nonabcond} a''+a'\left(\frac{g'}{g}-\frac{\chi'}{2}+\frac{c'}{c}\right) +a\left(\frac{e^{\chi}\omega^2}{g^2}-\frac{q^2B^2}{gr^2c^2} -e^{\chi}\frac{A'^2}{g}\right)=0. \end{eqnarray} This can be written as a Schr\"{o}dinger equation: \begin{equation} \label{schroedinger} -a'' + V(\tilde{r})a = c^2 \omega^2 a, \end{equation} where \begin{eqnarray} V(r)=g\left(c^2A'^2+e^{-\chi}\frac{q^2B^2}{r^2}\right), \label{vr} \end{eqnarray} and all the derivatives in \eref{schroedinger} are in terms of the new variable $\tilde{r}$ (``tortoise coordinate'') given by: \begin{equation} \frac{d}{d\tilde{r}} \equiv e^{-\chi/2} gc \frac{d}{dr}. 
\label{tortoise} \end{equation} In terms of the new coordinate the horizon is mapped to $\tilde r=-\infty$. Here $c$ approaches unity as $r \rightarrow \infty$, so that the spacetime is asymptotically $AdS_4$. It then follows from \eref{asymptotic} that the potential $V(r)$ vanishes near the boundary. If we require $g\sim r^2$ near the horizon at $r=0$, then the first term vanishes, while the second term is finite, as $B(r=0) \equiv B_0 \neq 0$ and $\chi$ is also finite at the horizon. Note that since $c \rightarrow 1$ near the boundary, the quantity $\omega$ can be interpreted as the frequency of the incoming wave. As in the abelian case, the superconducting nature of the system is argued from the existence of a supercurrent solution. The nature of the finite part of the conductivity can be inferred from the potential $V(r)$, and it shows a hard gap \cite{Basu:2009vv} at $T=0$. The fact that the potential is nonzero at the horizon at $T=0$ is what makes it possible for this system to exhibit a hard gap. From \eref{schroedinger}, the field $a$ has the following asymptotic behaviours near the boundary ($\tilde{r} \rightarrow 0$) and the horizon ($\tilde{r} \rightarrow -\infty$): \begin{eqnarray} a(\tilde{r} \rightarrow 0) \sim a_0^b + a_1^b \tilde{r} \\ a(\tilde{r} \rightarrow -\infty) = a_0 e^{i \tilde{\omega} \tilde{r}}, \end{eqnarray} where $\tilde{\omega} = \sqrt{c_0^2 \omega^2 - V_0}$, with $c_0$ and $V_0$ being the near-horizon values of $c$ and of the potential respectively. Here we have chosen the incoming boundary condition near the horizon. The conductivity of the system is defined in the same way as in \eref{conductivity}. It follows from \eref{schroedinger} that: \begin{equation} a^* a'' - a a^{*''} = 0, \end{equation} which implies that the quantity $\Lambda = a^* a' - a a^{*'} = 2i \mathrm{Im}(a^* a')$ is a constant. 
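The constancy of $\Lambda$ is just conservation of the Wronskian flux for the Schr\"{o}dinger problem, and it is what ties horizon data to boundary data in the conductivity formula. A small numerical illustration with a model barrier (the potential below is purely illustrative, not the holographic $V$):

```python
import numpy as np
from scipy.integrate import solve_ivp

w = 2.0                                   # frequency
V = lambda x: 1.5 * np.exp(-x**2)         # model barrier, illustrative only

def rhs(x, y):
    # -a'' + V a = w^2 a  =>  a'' = (V - w^2) a
    return [y[1], (V(x) - w**2) * y[0]]

# Plane wave a = exp(i w x) deep in the flat region x -> -infinity
x0 = -20.0
a0 = np.exp(1j * w * x0)
sol = solve_ivp(rhs, [x0, 20.0], [a0, 1j * w * a0], rtol=1e-10, atol=1e-12)

# Lambda/(2i) = Im(a^* a') should be the same at every point
flux = np.imag(np.conj(sol.y[0]) * sol.y[1])
print(flux.max() - flux.min())            # ~ 0 : the flux is conserved
```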
\subsubsection{$T=0$ case} At $T=0$, equating the values of $\Lambda$ near the horizon and the boundary we get: \begin{equation} \mathrm{Re}(\sigma) = \left\{ \begin{array}{l l} \frac{\tilde{\omega}}{\omega} \frac{|a_0|^2}{|a_0^b|^2} & , \quad \tilde \omega^2 > 0 \\ 0,& \quad \tilde \omega^2 < 0 \\ \end{array} \right. \label{finalcond} \end{equation} Therefore, the real part of the conductivity vanishes whenever $\tilde{\omega}$ is imaginary, i.e. when $\omega < \Delta_1=\sqrt{V_0}/c_0$, which defines the gap. \subsubsection{Small nonzero $T$} At any finite temperature $T$, however, the system does not show a hard gap, as the potential $V(r)$ actually vanishes near the black hole horizon. This is expected behaviour, considering thermal excitations of the condensate. One may define another gap through the low temperature behaviour of the conductivity. Considering the finite part (i.e. the non-superconducting contribution) of $\mathrm{Re}(\sigma)$ in the zero frequency limit at low temperature, one expects \begin{align} \lim_{\omega\rightarrow 0} \mathrm{Re}(\sigma(\omega)) \sim \exp(-\frac{\Delta_2}{T}). \end{align} Generically $\Delta_2 \neq \Delta_1$, and their ratio gives information about the pairing mechanism. $\Delta_2$ may be thought of as the mass of the charged quasiparticle carriers in the system, while $\Delta_1$ may be thought of as the mass of the ``pairs/combinations'' of the quasiparticles which give excitations over the pure condensate. In BCS theory such a combination of basic carriers is a Cooper pair, and in BCS theory $\Delta_1=2 \Delta_2$. We would like to calculate $\Delta_2$ from our low temperature solution. The non-superconducting contribution to $\mathrm{Re}(\sigma)(0)$ has a smooth zero frequency limit and can be calculated by setting $\omega=0$ in \eref{schroedinger}, i.e. 
\begin{align} \label{sch2} \frac{d^2}{d\tilde r^2} a=g \left ( c^2 A'^2+q^2 e^{-\chi}\frac{B^2}{r^2}\right)a \end{align} and we have, \begin{align} \lim_{\omega\rightarrow 0} \mathrm{Re}(\sigma(\omega))=\frac{a_{h}^2}{a_b^2}, \end{align} where $a_h$ and $a_b$ are the values of $a$ at the horizon and the boundary respectively. We break the domain of $r$ into two parts, $(r_0,r_*)$ and $(r_*,\infty)$, such that $r_0 \ll r_* \ll 1$. We have, \begin{align} \lim_{\omega\rightarrow 0} \mathrm{Re}(\sigma(\omega)) =\frac{a_{h}^2}{a_b^2}=\frac{a_{h}^2}{a(r_*)^2}\frac{a(r_*)^2}{a_b^2}. \end{align} (As before, $a(r_*)$ is understood in terms of the corresponding value of the coordinate $\tilde r$.) Our goal is to fix $r_*$ and take $r_0$ to zero. In that case, \begin{align} \lim_{\omega\rightarrow 0} \mathrm{Re}(\sigma(\omega))=\frac{a_{h}^2}{a_b^2} \sim \frac{a_{h}^2}{a(r_*)^2} C_1 \end{align} where $C_1$ is the limiting value of the quantity $\frac{a(r_*)^2}{a_b^2}$ as $r_0 \rightarrow 0$. This value may be calculated from the numerics. The leading dependence of the non-superconducting part of $\mathrm{Re}(\sigma(0))$ on $\frac{1}{T}$ comes from the behaviour of the solution between $(r_0,r_*)$. Taking $r_*$ in our matching region, we can use our analytic solution there. Defining a rescaled variable $r_1=\frac{r}{r_0}$ and the corresponding rescaled variable $\tilde r_1=\tilde r r_0$, we write equation \eref{sch2} as, \begin{align} \frac{d^2}{d\tilde r_1^2} a&=\frac{1}{r_0^2} r_1^2 (1-\frac{1}{r_1^3}) \left ( c^2 A'^2+q^2 e^{-\chi}\frac{B^2}{r_1^2}\right)a \\ &\approx q^2 \frac{1}{r_0^2} r_1^2 (1-\frac{1}{r_1^3}) e^{-\chi_0}\frac{B_0^2}{r_1^2} a \end{align} where we have kept the leading order terms in $r_0$. For $r_0\ll 1$ the above equation may be solved using the WKB approximation. One may question the validity of the WKB approximation, as the potential vanishes near the black hole horizon. This turns out not to be a problem, as we can again break the range of $r_1$ into two parts, $[1,r_2]$ and $[r_2,r_*)$. 
We choose $r_2$ such that at $r\sim r_2$ the WKB solution is valid. For small $r_0$, $r_2$ lies very close to the horizon $r_1=1$; for example one may choose $r_2=1+\sqrt{r_0}$. We can also use the near horizon linearisation of the metric and solve for $a(r_1)$ in terms of Bessel functions for $r_1<r_2$, \begin{eqnarray} a(r_1)={\rm I}_0\left(\frac{q B_0 e^{-\frac{\chi_0}{2}}\sqrt{r_1-1}}{r_0}\right) \end{eqnarray} At $r \sim r_2$ both the Bessel function and the WKB solution are valid, and we can match the two following the principle of matched asymptotic expansion. Using this method one finds that at leading order only the WKB contribution matters, \begin{align} \lim_{\omega\rightarrow 0} \mathrm{Re}(\sigma(\omega)) &\sim \exp\left(-2 \frac{q B_0}{c_0 r_0} \int_{1}^{\infty} \frac{dr}{r^2 \sqrt{1-\frac{1}{r^3}}}\right) \end{align} In the above we have also taken the limits $\frac{r_*}{r_0}\rightarrow \infty$ and $r_2\rightarrow 1$. Using the formula for the temperature we get, \begin{align} \Delta_2= \Delta_1 \frac{3}{2\pi}\int_{1}^{\infty} \frac{dr}{r^2 \sqrt{1-\frac{1}{r^3}}} = \Delta_1 \frac{3\Gamma\left(\frac{4}{3}\right)}{2\sqrt{\pi}\Gamma\left(\frac{5}{6}\right)} \approx 0.669 \Delta_1 \end{align} This may be contrasted with the weak coupling BCS value $\Delta_2= \frac{1}{2}\Delta_1$; we find around a $33\%$ deviation from the BCS value. Interestingly, the numerical factor is close to $\frac{2}{3}$. This gives us information about a possible underlying strong coupling pairing mechanism \cite{Hartnoll:2008vx}. \section{Acknowledgements} I thank Jianyang He, Moshe Rozali and Sumit Das for various discussions. I thank the people of University of BC and University of KY for support. I am supported by grants NSF-PHY-0855614 and NSF-PHY-0970069.
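As a closing aside, the closed form quoted above for the gap ratio is easy to verify numerically (an illustrative check, not part of the original text). Substituting $u=r^{-3}$ turns the WKB integral into a Beta-type integral on $[0,1]$:

```python
import math
from scipy.integrate import quad

# WKB integral I = \int_1^\infty dr / (r^2 sqrt(1 - r^{-3}));
# substituting u = r^{-3} gives I = (1/3) \int_0^1 u^{-2/3} (1-u)^{-1/2} du
I, err = quad(lambda u: u**(-2.0/3.0) / math.sqrt(1.0 - u), 0.0, 1.0)
I /= 3.0

ratio_numeric = 3.0 * I / (2.0 * math.pi)
ratio_closed = 3.0 * math.gamma(4.0/3.0) / (2.0 * math.sqrt(math.pi)
                                            * math.gamma(5.0/6.0))
print(ratio_numeric, ratio_closed)   # both ~ 0.6695
```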
\section{Introduction} \label{S1} Andrews, Bhattacharjee and Dastidar \cite{Andrews1} introduced a new statistic of integer partitions, which they named the $k$-measure. \begin{definition} The $k$-measure of a partition is the length of the largest subsequence of parts in the partition in which the difference between any two consecutive parts of the subsequence is at least $k$. \end{definition} Recall that the Durfee square of a partition is the largest square that can be constructed in the Ferrers diagram of the partition beginning from the top left corner. Andrews, Bhattacharjee and Dastidar \cite{Andrews1} found a deep connection between these two statistics of a partition, as described in the theorem below. \begin{theorem}[Andrews, Bhattacharjee and Dastidar (2022)] \label{Wonderful} The number of partitions of $n$ with $2$-measure $m$ equals the number of partitions of $n$ with Durfee square of side $m$. \end{theorem} The authors proved Theorem \ref{Wonderful} using $q$-series analysis. They concluded the paper by noting that a theorem as simply stated as Theorem \ref{Wonderful} should definitely have a bijective proof. They also suggested a further study of the properties of $k$-measures of partitions for $k > 2$. In a subsequent work, Andrews, Chern and Li \cite{Andrews2} used generalized Heine transformations to establish some trivariate generating function identities counting both the length and the $k$-measure for partitions and distinct partitions, respectively. As a corollary of their result, they obtained the following refinement of Theorem \ref{Wonderful}. \begin{theorem}[Andrews, Chern and Li] \label{Wonderful2} The number of partitions of $n$ with $l$ parts and $2$-measure $m$ equals the number of partitions of $n$ with $l$ parts and Durfee square of side $m$. \end{theorem} As another corollary of their result, they obtained the following result for partitions with distinct odd parts.
They defined $l(\lambda)$ to be the number of parts of $\lambda$ and $\mu_2(\lambda)$ to be the $2$-measure of $\lambda$. \begin{theorem}[Andrews, Chern and Li] \label{Wonderful3} The excess of the number of partitions $\lambda$ of $n$ with $l(\lambda) + \mu_2(\lambda)$ even over those with $l(\lambda) + \mu_2(\lambda)$ odd equals the number of partitions of $n$ into distinct odd parts. \end{theorem} Also refer to \cite{Lin} for some recent refinements of Theorem \ref{Wonderful} and an alternate $q$-series proof of Theorem \ref{Wonderful2}. In Section \ref{S2}, we obtain a short combinatorial proof of Theorem \ref{Wonderful}. In fact, our proof also proves the more general result of Theorem \ref{Wonderful2}. In Section \ref{S3}, we use the ideas in this proof to generalize Theorem \ref{Wonderful} for $k$-measures. \section{Proof of Theorem \ref{Wonderful}} \label{S2} Let \begin{itemize} \item $a_m(n)$ denote the number of partitions of $n$ with $2$-measure $m$. \item $b_m(n)$ denote the number of partitions of $n$ with Durfee square of side $m$. \end{itemize} Thus, Theorem \ref{Wonderful} asserts that $a_m(n) = b_m(n)$ for all $m$ and $n$. We begin by noting that $$a_m(n) = c_m(n) - c_{m+1}(n)$$ and $$b_m(n) = d_m(n) - d_{m+1}(n),$$ where $c_m(n)$ and $d_m(n)$ are defined as follows. \begin{itemize} \item $C_m(n)$ denotes the set of partitions of $n$ in which there exists a subsequence of length $m$ of parts in the partition in which the difference between any two consecutive parts of the subsequence is at least $2$. \item $D_m(n)$ denotes the set of partitions of $n$ which have at least $m$ parts greater than or equal to $m$. \item $c_m(n) = |C_m(n)|$. \item $d_m(n) = |D_m(n)|$. \end{itemize} From here, it immediately follows that $a_m(n) = b_m(n)$ for all $m$ and $n$ if and only if $c_m(n)=d_m(n)$ for all $m$ and $n$. That is, Theorem \ref{Wonderful} is equivalent to Theorem \ref{Easy} described below, for which we provide a short bijective proof.
\begin{theorem} \label{Easy} The number of partitions of $n$ in which there exists a subsequence of length $m$ of parts in the partition in which the difference between any two consecutive parts of the subsequence is at least $2$ is equal to the number of partitions of $n$ which have at least $m$ parts greater than or equal to $m$. \end{theorem} \begin{proof} We construct a bijection $\phi$ between the sets $C_m(n)$ and $D_m(n)$. Prior to that, we describe the motivation behind this bijection with the help of an example. Suppose $m=5$. Then, we are given a subsequence $(\lambda_1, \lambda_2, \lambda_3, \lambda_4, \lambda_5)$ with $$\lambda_1 \geq \lambda_2 + 2, \lambda_2 \geq \lambda_3 + 2, \lambda_3 \geq \lambda_4 + 2 , \lambda_4 \geq \lambda_5 + 2.$$ Thus, in particular, we have $$\lambda_1 \geq 9, \lambda_2 \geq 7, \lambda_3 \geq 5, \lambda_4 \geq 3, \lambda_5 \geq 1.$$ We need to map this to a partition with all parts greater than or equal to $5$. Note that the average of these lower bounds $9,7,5,3,1$ is equal to $5$. Therefore, we modify all the members of the subsequence $(\lambda_1, \lambda_2, \lambda_3, \lambda_4, \lambda_5)$ in such a way that their lower bound sequence $9,7,5,3,1$ becomes the average sequence $5,5,5,5,5$. To do this, we basically reduce the first number by $4$, reduce the second number by $2$, keep the third number unchanged, increase the fourth number by $2$ and increase the last number by $4$. That is, we map the subsequence $(\lambda_1, \lambda_2, \lambda_3, \lambda_4, \lambda_5)$ to $(\lambda_1-4, \lambda_2-2, \lambda_3, \lambda_4+2, \lambda_5+4)$. Clearly, each component of this vector is a number greater than or equal to $5$. To make the pattern even more clear, we can also express the above vector as $(\lambda_1-4, \lambda_2-2, \lambda_3 - 0, \lambda_4 -(-2), \lambda_5-(-4))$. Now, we describe the bijection $\phi$ for a general $m$. Suppose we have a partition $\lambda \in C_m(n)$. 
Further, suppose that $(\lambda_1, \lambda_2, \cdots, \lambda_m)$ is a subsequence of parts of $\lambda$ in which the difference between any two consecutive parts of the subsequence is at least $2$. That is, $\lambda_i \geq \lambda_{i+1} + 2$ for all $i$. In particular, we have $\lambda_i \geq 1 + 2(m-i)$ for all $1 \leq i \leq m$. We define the map $\phi$ to be such that the elements of $\lambda$ not in our chosen subsequence $(\lambda_1, \lambda_2, \cdots, \lambda_m)$ are left unchanged, and $(\lambda_1, \lambda_2, \cdots, \lambda_m)$ is mapped under $\phi$ to \begin{equation} \label{Balance2} \Big(\lambda_1 - (m-1), \lambda_2 - (m-3), \cdots, \lambda_i - (m-(2i-1)), \cdots, \lambda_{m-1} + (m-3), \lambda_m + (m-1) \Big) . \end{equation} We verify that the resultant partition is indeed a member of $D_{m}(n)$. For that, we note two things. First, we have $$ \sum_{i = 1}^m \Big(\lambda_i - (m-(2i-1))\Big) = \sum_{i = 1}^m \lambda_i.$$ That is, the terms with positive and negative signs cancel out. Secondly, using $\lambda_i \geq 1 + 2(m-i)$ for all $1 \leq i \leq m$, one easily observes that $$\lambda_i - (m-(2i-1)) \geq m$$ for all $1 \leq i \leq m$. Thus, all the members of the vector in \eqref{Balance2} are greater than or equal to $m$, and therefore the resultant partition is indeed a member of $D_{m}(n)$. Next, we show that the map $\phi$ is invertible by constructing the inverse map $\psi$. Though $\psi$ is easy to predict, we describe it below in some detail. Suppose we have a partition $\pi \in D_m(n)$. There exist some parts $\pi_1, \pi_2, \cdots, \pi_m$ of $\pi$ such that $$ \pi_1 \geq \pi_2 \geq \cdots \geq \pi_m \geq m. $$ We define the map $\psi$ to be such that the elements of $\pi$ other than $(\pi_1, \pi_2, \cdots, \pi_m)$ are left unchanged, and $(\pi_1, \pi_2, \cdots, \pi_m)$ is mapped under $\psi$ to \begin{multline*} \Big(\pi_1 + (m-1), \pi_2 + (m-3), \cdots, \pi_i + (m-(2i-1)), \cdots, \pi_{m-1} - (m-3), \pi_m - (m-1) \Big) .
\end{multline*} It is straightforward to verify that consecutive members of the above vector differ by at least $2$, and thus the resultant partition is indeed a member of $C_m(n)$. Finally, it is easy to check that the maps $\phi$ and $\psi$ are indeed inverses of each other, completing the proof of Theorem \ref{Easy}, and thus also of Theorem \ref{Wonderful}. \end{proof} \begin{remark} Since the maps $\phi$ and $\psi$ preserve the number of parts, our proof also gives a combinatorial proof of Theorem \ref{Wonderful2}. \end{remark} \section{Generalization of Theorem \ref{Wonderful} for $k$-measures} \label{S3} It turns out that Theorem \ref{Easy} is easy to generalize by appropriately modifying the maps $\phi$ and $\psi$. We provide all the details for the sake of completeness. We denote the floor and ceiling functions of $x$ by $\lfloor x \rfloor$ and $\lceil x \rceil$ respectively. Note that for any natural number $m$, $ \left \lfloor \frac{m}{2} \right \rfloor + \left \lceil \frac{m}{2} \right \rceil = m$. This fact will be used frequently in the proof of the next theorem. \begin{theorem} \label{General} The number of partitions of $n$ in which there exists a subsequence of length $m$ of parts in the partition in which the difference between any two consecutive parts of the subsequence is at least $k$ is equal to the number of partitions of $n$ which have at least $\left \lfloor \frac{m}{2} \right \rfloor $ parts greater than or equal to $1 + \left \lceil \frac{k(m-1)}{2} \right \rceil $, and an additional at least $\left \lceil \frac{m}{2} \right \rceil $ parts greater than or equal to $1 + \left \lfloor \frac{k(m-1)}{2} \right \rfloor $. \end{theorem} \begin{proof} Let \begin{itemize} \item $C_{k,m}(n)$ denotes the set of partitions of $n$ in which there exists a subsequence of length $m$ of parts in the partition in which the difference between any two consecutive parts of the subsequence is at least $k$. 
\item $D_{k,m}(n)$ denotes the set of partitions of $n$ which have at least $\left \lfloor \frac{m}{2} \right \rfloor $ parts greater than or equal to $1 + \left \lceil \frac{k(m-1)}{2} \right \rceil $, and an additional at least $\left \lceil \frac{m}{2} \right \rceil $ parts greater than or equal to $1 + \left \lfloor \frac{k(m-1)}{2} \right \rfloor $. \end{itemize} We construct a bijection $\phi'$ between $C_{k,m}(n)$ and $D_{k,m}(n)$. Prior to that, we describe the motivation behind this bijection. Suppose we have a partition $\lambda \in C_{k,m}(n)$. Further, suppose that $(\lambda_1, \lambda_2, \cdots, \lambda_m)$ is a subsequence of parts of $\lambda$ in which the difference between any two consecutive parts of the subsequence is at least $k$. That is, $\lambda_i \geq \lambda_{i+1} + k$ for all $i$. In particular, we have $\lambda_i \geq 1+k(m-i)$ for all $1 \leq i \leq m$. As suggested by the proof of Theorem \ref{Easy}, we calculate the average of the lower bounds on the $\lambda_i$'s, which comes out to be $1 + \frac{k(m-1)}{2}$. If $k$ is even or $m$ is odd, then $\frac{k(m-1)}{2}$ is an integer and our idea in the proof of Theorem \ref{Easy} can be easily generalized to construct the required bijection. However, if $k$ is odd and $m$ is even, then the average $1 + \frac{k(m-1)}{2}$ of the lower bounds on the $\lambda_i$'s is not an integer. This makes the analysis of this case a little harder. We explain this difficulty by considering an example. Suppose $k=3$ and $m=4$. Then, we are given a subsequence $(\lambda_1, \lambda_2, \lambda_3, \lambda_4)$ with $$\lambda_1 \geq \lambda_2 + 3, \lambda_2 \geq \lambda_3 + 3, \lambda_3 \geq \lambda_4 + 3.$$ Thus, in particular, we have $$\lambda_1 \geq 10, \lambda_2 \geq 7, \lambda_3 \geq 4, \lambda_4 \geq 1.$$ The average of these lower bounds on the $\lambda_i$'s comes out to be $5.5$.
Now if we just try to use the previous approach, we would map this subsequence to $ (\lambda_1 - 4.5, \lambda_2-1.5, \lambda_3 + 1.5, \lambda_4 + 4.5)$. However, since the parts of a partition must be integers, we cannot map it like this. Thus, the appropriate modification we make in this case is to map the subsequence to $ (\lambda_1 - 4, \lambda_2-1, \lambda_3 + 1, \lambda_4 + 4)$ instead. Note that in this case, the resultant partition has the property that at least two parts are greater than or equal to $6$ and an additional two parts are greater than or equal to $5$, explaining the strange condition in the definition of $D_{k,m}(n)$. Next, we describe the map $\phi'$ for any $k$ and $m$. Using floor and ceiling functions, we will be able to handle together the two cases when $k(m-1)$ is odd or even. We define the map $\phi'$ to be such that the elements of $\lambda$ not in our chosen subsequence $(\lambda_1, \lambda_2, \cdots, \lambda_m)$ are left unchanged, and $(\lambda_1, \lambda_2, \cdots, \lambda_m)$ is mapped under $\phi'$ to \begin{multline} \label{Balancek} \Bigg(\lambda_1 - \left \lfloor \frac{k(m-1)}{2} \right \rfloor, \lambda_2 - \left \lfloor \frac{k(m-3)}{2} \right \rfloor, \cdots, \lambda_{\left \lfloor \frac{m}{2} \right \rfloor} - \left \lfloor \frac{k}{2} \left(m+1-2\left\lfloor \frac{m}{2} \right \rfloor \right) \right \rfloor, \\ \lambda_{\left \lfloor \frac{m}{2} \right \rfloor + 1} + \left \lfloor \frac{k}{2} \left(m+1-2\left\lceil \frac{m}{2} \right \rceil \right) \right \rfloor, \lambda_{\left \lfloor \frac{m}{2} \right \rfloor + 2} + \left \lfloor \frac{k}{2} \left(m+3-2\left\lceil \frac{m}{2} \right \rceil \right) \right \rfloor, \cdots \\ \cdots, \lambda_{m-1} + \left \lfloor \frac{k(m-3)}{2} \right \rfloor, \lambda_m + \left \lfloor \frac{k(m-1)}{2} \right \rfloor \Bigg) . \end{multline} The members of the vector in \eqref{Balancek} can be described compactly as follows. 
The first $\left \lfloor \frac{m}{2} \right \rfloor$ members can be written as $$ \left\{\lambda_i - \left \lfloor \frac{k(m-(2i-1))}{2} \right \rfloor: 1 \leq i \leq \left \lfloor \frac{m}{2} \right \rfloor \right\}, $$ while the remaining $\left \lceil \frac{m}{2} \right \rceil $ members can be written (beginning from right to left) as $$ \left\{\lambda_{m-i} + \left \lfloor \frac{k(m-(2i+1))}{2} \right \rfloor : 0 \leq i < \left \lceil \frac{m}{2} \right \rceil \right\}. $$ We prove that the resultant partition is indeed a member of $D_{k,m}(n)$. First, considering two cases based on the parity of $m$, it is an easy exercise to confirm that the sum of the members of the vector in \eqref{Balancek} is equal to the sum of the $\lambda_i$'s. That is, the terms with positive and negative signs cancel out. Secondly, we show that the first $\left \lfloor \frac{m}{2} \right \rfloor$ members of the vector in \eqref{Balancek} are greater than or equal to $1 + \left \lceil \frac{k(m-1)}{2} \right \rceil $, and the last $\left \lceil \frac{m}{2} \right \rceil $ members are greater than or equal to $1 + \left \lfloor \frac{k(m-1)}{2} \right \rfloor $. To prove these facts, we crucially use $\lambda_i \geq 1+k(m-i)$ for all $1 \leq i \leq m$. Therefore, for $1 \leq i \leq \left \lfloor \frac{m}{2} \right \rfloor$, we have \begin{align*} \lambda_i - \left \lfloor \frac{k(m-(2i-1))}{2} \right \rfloor &\geq 1+k(m-i) - \left \lfloor \frac{k(m-(2i-1))}{2} \right \rfloor \\ &= 1+km - \left \lfloor \frac{k(m+1)}{2} \right \rfloor \\ &= 1+k(m-1) - \left \lfloor \frac{k(m-1)}{2} \right \rfloor \\ &= 1 + \left \lceil \frac{k(m-1)}{2} \right \rceil. \end{align*} Similarly, for $0 \leq i < \left \lceil \frac{m}{2} \right \rceil$, we have \begin{align*} \lambda_{m-i} + \left \lfloor \frac{k(m-(2i+1))}{2} \right \rfloor &\geq 1+ki+ \left \lfloor \frac{k(m-(2i+1))}{2} \right \rfloor \\ & = 1 + \left \lfloor \frac{k(m-1)}{2} \right \rfloor, \end{align*} as required.
Thus, the resultant partition is indeed a member of $D_{k,m}(n)$. Next, we show that the map $\phi'$ is invertible by constructing the inverse map $\psi'$. The map $\psi'$ is again easy to guess, but we describe it in some detail below. Suppose we have a partition $\pi \in D_{k,m}(n)$. There exist some parts $\pi_1 \geq \pi_2 \geq \cdots \geq \pi_m$ of $\pi$ such that $\pi_i \geq 1 + \left \lceil \frac{k(m-1)}{2} \right \rceil$ for $1 \leq i \leq \left \lfloor \frac{m}{2} \right \rfloor$, and $\pi_i \geq 1 + \left \lfloor \frac{k(m-1)}{2} \right \rfloor$ for $\left \lfloor \frac{m}{2} \right \rfloor < i \leq m$. We define the map $\psi'$ to be such that the elements of $\pi$ other than $(\pi_1, \pi_2, \cdots, \pi_m)$ are left unchanged, and $(\pi_1, \pi_2, \cdots, \pi_m)$ is mapped under $\psi'$ to \begin{multline*} \Bigg(\pi_1 + \left \lfloor \frac{k(m-1)}{2} \right \rfloor, \pi_2 + \left \lfloor \frac{k(m-3)}{2} \right \rfloor, \cdots, \pi_{\left \lfloor \frac{m}{2} \right \rfloor} + \left \lfloor \frac{k}{2} \left(m+1-2\left\lfloor \frac{m}{2} \right \rfloor \right) \right \rfloor, \\ \pi_{\left \lfloor \frac{m}{2} \right \rfloor + 1} - \left \lfloor \frac{k}{2} \left(m+1-2\left\lceil \frac{m}{2} \right \rceil \right) \right \rfloor, \pi_{\left \lfloor \frac{m}{2} \right \rfloor + 2} - \left \lfloor \frac{k}{2} \left(m+3-2\left\lceil \frac{m}{2} \right \rceil \right) \right \rfloor, \cdots \\ \cdots, \pi_{m-1} - \left \lfloor \frac{k(m-3)}{2} \right \rfloor, \pi_m - \left \lfloor \frac{k(m-1)}{2} \right \rfloor \Bigg) . \end{multline*} It is easy to verify that consecutive members of the above vector differ by at least $k$, and thus the resultant partition is indeed a member of $C_{k,m}(n)$. Finally, it is also straightforward to verify that the maps $\phi'$ and $\psi'$ are indeed inverses of each other, completing the proof of Theorem \ref{General}. \end{proof} Next, we use Theorem \ref{General} to generalize Theorems \ref{Wonderful} and \ref{Wonderful2}. First, we deduce two immediate corollaries of Theorem \ref{General} that will be helpful in obtaining these generalizations. \begin{corollary} \label{EO} Suppose either $k$ is even or $m$ is odd.
Then the number of partitions of $n$ in which there exists a subsequence of length $m$ of parts in the partition in which the difference between any two consecutive parts of the subsequence is at least $k$ is equal to the number of partitions of $n$ which have at least $m$ parts greater than or equal to $1+\frac{k(m-1)}{2}$. \end{corollary} \begin{corollary} \label{OE} Suppose $k$ is odd and $m$ is even. Then the number of partitions of $n$ in which there exists a subsequence of length $m$ of parts in the partition in which the difference between any two consecutive parts of the subsequence is at least $k$ is equal to the number of partitions of $n$ which have at least $\frac{m}{2}$ parts greater than or equal to $\frac{k(m-1)+3}{2}$, and an additional at least $\frac{m}{2}$ parts greater than or equal to $\frac{k(m-1)+1}{2}$. \end{corollary} Based on the results in these corollaries, we define a $(k,m)$-polygon associated to an integer partition. \begin{itemize} \item Suppose $k$ is even or $m$ is odd. Then define the $(k,m)$-polygon of a partition $\pi$ to be the rectangle containing $m$ rows of $1+\frac{k(m-1)}{2}$ nodes (in other words, the rectangle with vertical side $m$ and horizontal side $1+\frac{k(m-1)}{2}$) beginning from the top left corner of the Ferrers diagram of $\pi$. For example, the partition $9+9+8+7+4+3+1$ of $41$ has the following $(4,3)$-polygon. \vspace{1cm} \begin{center} \begin{tikzpicture} \foreach \x in {0,...,8} \filldraw (\x*.5, -5) circle (.5mm); \foreach \x in {0,...,8} \filldraw (\x*.5, -5.5) circle (.5mm); \foreach \x in {0,...,7} \filldraw (\x*.5, -6) circle (.5mm); \foreach \x in {0,...,6} \filldraw (\x*.5, -6.5) circle (.5mm); \foreach \x in {0,...,3} \filldraw (\x*.5, -7) circle (.5mm); \foreach \x in {0,...,2} \filldraw (\x*.5, -7.5) circle (.5mm); \foreach \x in {0} \filldraw (\x*.5, -8) circle (.5mm);
\draw[solid,black] (0,-5)-- (0,-6); \draw[solid,black] (0,-6)-- (2,-6); \draw[solid,black] (2,-5)-- (2,-6); \draw[solid,black] (0,-5)-- (2,-5); \end{tikzpicture} \end{center} \vspace{1cm} \item Suppose $k$ is odd and $m$ is even. Then define the $(k,m)$-polygon of a partition $\pi$ to be the polygon containing $\frac{m}{2}$ rows of $\frac{k(m-1)+3}{2}$ nodes followed by $\frac{m}{2}$ rows of $\frac{k(m-1)+1}{2}$ nodes beginning from the top left corner of the Ferrers diagram of $\pi$. For example, the partition $9+9+8+7+4+3+1$ of $41$ has the following $(3,4)$-polygon. \end{itemize} \vspace{1cm} \begin{center} \begin{tikzpicture} \foreach \x in {0,...,8} \filldraw (\x*.5, -5) circle (.5mm); \foreach \x in {0,...,8} \filldraw (\x*.5, -5.5) circle (.5mm); \foreach \x in {0,...,7} \filldraw (\x*.5, -6) circle (.5mm); \foreach \x in {0,...,6} \filldraw (\x*.5, -6.5) circle (.5mm); \foreach \x in {0,...,3} \filldraw (\x*.5, -7) circle (.5mm); \foreach \x in {0,...,2} \filldraw (\x*.5, -7.5) circle (.5mm); \foreach \x in {0} \filldraw (\x*.5, -8) circle (.5mm); \draw[solid,black] (0,-5)-- (0,-6.5); \draw[solid,black] (0,-5)-- (2.5,-5); \draw[solid,black] (0,-6.5)-- (2,-6.5); \draw[solid,black] (2,-6.5)-- (2,-6); \draw[solid,black] (2,-6)-- (2.5,-5.5); \draw[solid,black] (2.5,-5)-- (2.5,-5.5); \end{tikzpicture} \end{center} \vspace{1cm} Based on this definition, we can rewrite Corollaries \ref{EO} and \ref{OE} together as follows. \begin{corollary} \label{Final} The number of partitions of $n$ in which there exists a subsequence of length $m$ of parts in the partition in which the difference between any two consecutive parts of the subsequence is at least $k$ is equal to the number of partitions of $n$ whose Ferrers diagram contains its $(k,m)$-polygon. \end{corollary} It is easy to observe the following properties of the $(k,m)$-polygons of partitions.
\begin{itemize} \item For a given $k$ and a partition $\pi$, the $(k,m)$-polygon of $\pi$ is strictly contained in the $(k,m+1)$-polygon of $\pi$ for any $m$, irrespective of the parities of $k$ and $m$. \item For a given $k$ and a partition $\pi$, there are only finitely many values of $m$ such that the Ferrers diagram of $\pi$ contains the $(k,m)$-polygon of $\pi$. \end{itemize} From these observations, it is obvious that for a given $k$ and a partition $\pi$, there exists a largest value of $m$ such that the Ferrers diagram of $\pi$ contains the $(k,m)$-polygon of $\pi$. For this largest value of $m$, we say that $\pi$ has a $(k,m)$-Durfee polygon. Then from Corollary \ref{Final}, the following result immediately follows. \begin{theorem} \label{FinalMeasure} The number of partitions of $n$ with $k$-measure $m$ is equal to the number of partitions of $n$ with $(k,m)$-Durfee polygon. \end{theorem} \begin{remark} \label{Durfee} Note that for $k=2$, the $(2,m)$-polygon of $\pi$ is a square of side $m$, and thus for a partition $\pi$ to have a $(2,m)$-Durfee polygon is the same thing as $\pi$ having a Durfee square of side $m$. Thus, Theorem \ref{FinalMeasure} is a generalization of Theorem \ref{Wonderful}. \end{remark} \begin{remark} Since the maps $\phi'$ and $\psi'$ preserve the number of parts, our proof also gives a generalization of Theorem \ref{Wonderful2}. That is, the number of partitions of $n$ with $l$ parts and $k$-measure $m$ is equal to the number of partitions of $n$ with $l$ parts and $(k,m)$-Durfee polygon. \end{remark} \begin{remark} For even $k$, the $(k,m)$-polygons are rectangles irrespective of the value of $m$. Thus, for an even $k$, Theorem \ref{FinalMeasure} provides a very simple looking generalization of Theorem \ref{Wonderful}. For example, the number of partitions of $n$ with $4$-measure $m$ is equal to the number of partitions for which the largest value of $j$ such that the Ferrers diagram of the partition contains a $j \times (2j-1)$ rectangle equals $m$.
Similarly, the number of partitions of $n$ with $6$-measure $m$ is equal to the number of partitions for which the largest value of $j$ such that the Ferrers diagram of the partition contains a $j \times (3j-2)$ rectangle equals $m$. For the case when $k$ is odd, the situation is relatively complex, as $(k,m)$-polygons may or may not be rectangles depending on the parity of $m$. \end{remark} These results seem to yield interesting properties even in the case $k=1$. For example, substituting $k=1$ in Theorem \ref{General}, we get the following result. \begin{corollary} \label{k=1} The number of partitions of $n$ which have at least $m$ distinct parts is equal to the number of partitions of $n$ which have at least $\left \lfloor \frac{m}{2} \right \rfloor $ parts greater than or equal to $ \left \lceil \frac{m+1}{2} \right \rceil $, and an additional at least $\left \lceil \frac{m}{2} \right \rceil $ parts greater than or equal to $ \left \lfloor \frac{m+1}{2} \right \rfloor $. \end{corollary} That is, for an odd number $m$, the number of partitions of $n$ which have at least $m$ distinct parts is equal to the number of partitions of $n$ which have at least $m$ parts greater than or equal to $\frac{m+1}{2}$. For example, the number of partitions of $n$ which have at least $7$ distinct parts is equal to the number of partitions of $n$ which have at least $7$ parts greater than or equal to $4$. Similarly, for an even number $m$, the number of partitions of $n$ which have at least $m$ distinct parts is equal to the number of partitions of $n$ which have at least $\frac{m}{2}$ parts greater than or equal to $\frac{m}{2}+1$, and an additional at least $\frac{m}{2}$ parts greater than or equal to $\frac{m}{2}$. For example, the number of partitions of $n$ which have at least $6$ distinct parts is equal to the number of partitions of $n$ which have at least $3$ parts greater than or equal to $4$, and an additional at least $3$ parts greater than or equal to $3$.
Finally, substituting $k=1$ in Corollary \ref{Final} gives the following result. \begin{corollary} \label{2;k=1} The number of partitions of $n$ with $m$ distinct parts is equal to the number of partitions of $n$ with $(1,m)$-Durfee polygon. \end{corollary} \section{Concluding Remarks} It will be very interesting to see if one could utilize the ideas in this paper to obtain a combinatorial proof and possibly generalizations of Theorem \ref{Wonderful3}. \section{Acknowledgement} The author acknowledges the support of IISER Mohali for providing research facilities and fellowship.
\section{Introduction} The concept of outliers has been studied extensively in the statistics community since the $19^{th}$ century \cite{Edgeworth1887}. In real-world application scenarios, there is usually outlier data, a.k.a. anomalies, which differ from the rest of the data. The word outlier stands for \textit{a statistical observation that is markedly different in value from the others of the sample.}\footnote{https://www.merriam-webster.com/dictionary/outlier. Accessed: 06 April 2020} Barnett and Lewis (1984) \cite{Barnett1984} formally defined an outlier as: ``An observation (or a subset of observations) which appears to be inconsistent with the remainder of that set of data''. \textit{Outlier Detection} (OD) is an essential task in data mining that deals with detecting outliers in data sets automatically. Over the years, an enormous amount of research has been carried out in an attempt to detect outliers in a data set. Although these algorithms detect outliers very well, they are not able to explain why a point is considered an outlier, i.e., they cannot tell in which feature subset(s) the data object significantly deviates from the rest of the data. The need to explain outliers has led to a renewed interest in \textit{Outlying Aspect Mining} (OAM). Outlying aspect mining is formally defined as the task of recognizing the feature subset(s) in which a given data object is inconsistent with the remainder of the set of data objects. The given data object is called a query, and those feature subset(s) are called the outlying aspects of the given query.
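To make the task concrete, the following toy sketch illustrates it end to end. It is our own illustration, not taken from any of the surveyed systems: the synthetic data set, the query, and the choice of score (sum of distances to the $k$ nearest neighbors, compared across subspaces by the query's rank rather than by the raw score, so that higher-dimensional subspaces are not automatically favored) are all assumptions made purely for demonstration.

```python
# Toy outlying aspect mining by exhaustive search: in every non-empty feature
# subset, score every object by the sum of distances to its k nearest
# neighbors, then rank the query among all objects (rank 1 = most outlying).
# The subset where the query attains the best rank is its outlying aspect.
import itertools
import math

def dist(a, b, subspace):
    return math.sqrt(sum((a[i] - b[i]) ** 2 for i in subspace))

def knn_score(point, others, subspace, k=2):
    return sum(sorted(dist(point, o, subspace) for o in others)[:k])

def query_rank(query, data, subspace):
    """Rank of the query's score among all objects (1 = most outlying)."""
    qs = knn_score(query, data, subspace)
    scores = [knn_score(p, [o for o in data if o is not p], subspace)
              for p in data]
    return 1 + sum(1 for s in scores if s > qs)

# Synthetic data: points lie on the diagonal f1 = f0; the query is ordinary
# in each single feature but far from the diagonal, so the feature pair
# (0, 1) is the aspect in which it stands out.
data = [(float(i), float(i)) for i in range(1, 9)]
query = (3.0, 7.0)

subspaces = [s for r in (1, 2) for s in itertools.combinations(range(2), r)]
ranks = {s: query_rank(query, data, s) for s in subspaces}
best = min(ranks, key=ranks.get)
print(ranks)                      # {(0,): 9, (1,): 9, (0, 1): 1}
print("outlying aspect:", best)   # outlying aspect: (0, 1)
```

In each single feature the query sits in the middle of the data and ranks last, while in the pair $(0,1)$ it is farther from every point than any point is from its own neighbors, so it ranks first there.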
The following are some of the definitions found in the literature: \begin{itemize} \item ``\textit{Outlying aspects mining discovers feature subsets (or subspaces) that describe how a query stand out from a given data set.}'' \cite{Wells2019} \item \cite{Vinh2016} define outlying aspect mining as the ``\textit{problem of investigating, for a particular query object, the sets of features (a.k.a attributes, dimensions) that make it most unusual compared to the rest of the data.}'' \end{itemize} Previous studies have termed this problem \textit{outlier explanation} \cite{Micenkova2013}, \textit{outlier interpretation} \cite{Dang2014}, \textit{outlying subspace detection} \cite{Zhang2004}, and \textit{outlying aspect mining} \cite{Duan2015, Vinh2016}. A recent line of research has established this problem as outlying aspect mining \cite{Duan2015, Vinh2016, Wells2019, Samariya2020}. Past studies have hinted at a link between OAM and OD. However, it is worth noting that OAM and OD are different: the main aim of OAM is to find the aspects in which a given data object exhibits the most outlying characteristics, while OD focuses on detecting all instances exhibiting outlying characteristics in the given original input space. Outlying aspect mining has many practical applications. For example, an insurance analyst may be interested in finding out in which particular aspect an insurance claim looks suspicious. Similarly, when evaluating job applications, a selection panel may want to investigate in which specific aspect an applicant is most different from the others; for example, with similar qualifications and experience, John has the highest number of successfully completed projects. Outlying aspect mining is a new and interesting topic among researchers. To the best of our knowledge, no survey of this topic has been conducted so far, which motivates us to write this survey.
In this survey paper, we provide a structured and in-depth review of research on OAM techniques. The work on OAM is categorized into three categories: 1) score-and-search based approaches, 2) feature selection based approaches, and 3) hybrid approaches. This paper is organized into seven distinct sections. Section \ref{sec:overview} provides an overview of OAM approaches. Outlying aspect mining techniques are categorized into score-and-search based approaches (Section \ref{sec:score-and-search}), feature selection based approaches (Section \ref{sec:feature-selection}) and hybrid approaches (Section \ref{sec:hybrid}). We discuss open challenges in Section \ref{sec:open-challenges}. Concluding remarks are provided in Section \ref{sec:Conclusion}. \section{Overview of OAM approaches} \label{sec:overview} \begin{table}[bt] \centering \caption{Key symbols and notations used in this paper. } \begin{tabular}{l @{\hspace{15pt}} l} \toprule\noalign{\smallskip} Symbol & Definition \\ \noalign{\smallskip}\midrule\noalign{\smallskip} $\mathcal{O}$ & A set of $n$ data instances in an $D$-dimensional space, $|\mathcal{O}|=n$ \\ ${\bf o}\in \mathcal{O}$ & A data instance represented as a vector, ${\bf o}=\langle o.1,o.2,\cdots,o.D \rangle$ \\ $\mathcal{F}$ & The set of input features, i.e., $\mathcal{F} = \{1, 2, \cdots, D\}$ \\ $\mathcal{S}_{\mathcal{F}}$ & The set of all possible subspaces (non-empty subsets) of $\mathcal{F}$ \\ $d_S({\bf a}, {\bf b})$ & The Euclidean distance between ${\bf a}$ and ${\bf b}$ in subspace $S\in \mathcal{S}_{\mathcal{F}}$ \\ $\aleph_S^k({\bf q})$ & The set of $k$-nearest neighbors of ${\bf q}$ in subspace $S\in \mathcal{S}_{\mathcal{F}}$ \\ \noalign{\smallskip}\bottomrule \end{tabular} \label{tab:symbols} \end{table} To start with, we fix some notation for the rest of the paper and introduce a few preliminary definitions. The primary symbols and notations used are provided in Table \ref{tab:symbols}.
Let $\mathcal{O} = \{{\bf o}_1, {\bf o}_2, \cdots, {\bf o}_n\}$ be a collection of $n$ data objects in a $D$-dimensional space. Each data object ${\bf o}$ is represented as a $D$-dimensional vector $\langle o.1, o.2, \cdots, o.D\rangle$. As mentioned above, the OAM approaches are categorized into three categories, which are as follows: \begin{enumerate} \item Score-and-search: In the score-and-search based approach, an OAM algorithm computes the outlying degree of the query in each possible subspace in order to identify the subspace where the query exhibits the highest degree of outlying characteristics w.r.t. the rest of the data. \item Feature selection: In this approach, the problem of OAM is treated as a traditional problem of feature selection for classification. \item Hybrid approach: In the hybrid approach, the problem of OAM is solved using a combination of the score-and-search and feature selection based approaches. \end{enumerate} \section{Score-and-Search based approach} \label{sec:score-and-search} To date, most of the studies conducted to solve the OAM problem belong to this category. The score-and-search approach requires a scoring function to measure the outlying degree of the given query. The outlyingness of the query is then compared across all possible subspaces to detect the most outlying aspects. To the best of our knowledge, \cite{Zhang2004} is the earliest work that addresses the problem of outlying aspect mining. Therein, the authors introduced a framework that detects the outlying subspaces of a given query, termed HOS-Miner, which stands for {\bf H}igh-dimensional {\bf O}utlying {\bf S}ubspace {\bf Miner}. They formulate the problem as: for a given data object, identify the subspaces in which this query object is considerably dissimilar or inconsistent w.r.t. the rest of the data objects.
Mathematically, the problem is stated as follows: for a given data object ${\bf q}$, find all subspaces $S \in \mathcal{S}_{\mathcal{F}}$ such that $OD_{S}({\bf q}) \geq \delta$, where $OD$ is the distance-based scoring function (Equation \ref{EQ:HOS-Miner}) and $\delta$ is a distance threshold. They described HOS-Miner as an ``outlier $\rightarrow$ spaces'' method. In their work, they employed a distance-based scoring measure called the {\bf O}utlying {\bf D}egree ($OD$ in short) to measure the outlyingness of the given query, defined as the sum of the distances between the query and its $k$-nearest neighbors. The $OD$ of a query point ${\bf q}$ in subspace $S$ is calculated as: \begin{equation} \label{EQ:HOS-Miner} OD_S({\bf q}) = \sum\limits_{{\bf x} \in \aleph_S^k({\bf q})} d_S({\bf q},{\bf x}) \end{equation} \begin{figure}[t] \centering \includegraphics[scale=0.45]{figures/hos-miner.png} \caption{The overview of HOS-Miner \cite{Zhang2004}.} \label{fig:hos-miner} \end{figure} The process of HOS-Miner is shown in Fig. \ref{fig:hos-miner}. The proposed framework is divided into four steps. In the first step, the X-tree indexing module builds an X-Tree \cite{Berchtold1996} index on the data set to speed up $k$-nearest neighbor ($k$NN) search in any subspace $S$. In the second step, the random sampling module randomly selects samples from the data set and then performs a dynamic subspace search to examine downward and upward pruning possibilities from low- to high-dimensional subspaces. In the subsequent step, the subspace outlier detection module calculates the outlier score of the query and performs a dynamic subspace search to find subspaces where the query object deviates from the rest of the data. The last module is a filtering module, which filters out the most outlying subspaces and returns them to the user. Duan et al.
(2015) \cite{Duan2015} introduced the {\bf O}utlying {\bf A}spect {\bf Miner} ({\bf OAMiner} in short), which uses a Kernel Density Estimation (KDE) \cite{Silverman1986} based scoring measure to compute the outlyingness of query ${\bf q}$ in subspace $S$: \begin{equation} f_{S}({\bf q}) = \frac{1}{n(2 \pi)^{\frac{m}{2}} \prod\limits_{i \in S} h_{i}} \sum\limits_{{\bf x} \in \mathcal{O}} e^ {- \sum\limits_{i\in S} \frac{(q.i - x.i)^2}{2 h^2_{i}}} \end{equation} \noindent where $f_S({\bf q})$ is the kernel density estimate of ${\bf q}$ in subspace $S$, $m$ is the dimensionality of subspace $S$ ($|S|=m$), and $h_{i}$ is the kernel bandwidth in dimension $i$. Duan et al. (2015) \cite{Duan2015} observed that density is biased towards high-dimensional subspaces -- density tends to decrease as dimensionality increases. Thus, to remove this dimensionality bias, they proposed to use the density rank of the query as a measure of outlyingness. To find the most outlying subspace of a query, the densities of all data points need to be computed in each subspace, and the subspace with the best rank is selected as the outlying aspect of the given query. OAMiner systematically enumerates all possible subspaces using the set enumeration tree approach \cite{Rymon1992}, which is widely used in the data mining research community, and searches for subspaces by traversing the tree in a depth-first manner \cite{Russell2009}. OAMiner uses anti-monotonicity properties to prune subspaces: given a data set $\mathcal{O}$, a query object ${\bf q}$ and a subspace $S$, if $rank(f_{S}({\bf q})) = 1$, then no super-set of $S$ can be a minimal subspace, and all such super-sets can be pruned. OAMiner has two fundamental challenges: \begin{enumerate} \item OAMiner uses a density-based scoring function. Computing the density of each data point in each subspace is computationally expensive, which makes OAMiner infeasible for large and high-dimensional data sets.
The time complexity of finding the rank of ${\bf q}$ in subspace $S$ is $O(n^2 m)$. \item OAMiner employs depth-first search and relies on the anti-monotonicity property to prune subspaces; an expensive search is therefore still required to find the outlying aspects of the given query. \end{enumerate} The work of Vinh et al. (2016) \cite{Vinh2016} formalizes the concept of dimensionality unbiasedness and further investigates scoring functions that are dimensionally unbiased. Dimensionality unbiasedness is an essential property for outlyingness measures because the query object is compared across subspaces with different numbers of dimensions. They proposed two novel outlyingness scoring metrics, (1) the density $Z$-score and (2) the \textbf{i}solation \textbf{Path} score (iPath in short), and showed that both are dimensionally unbiased. The density $Z$-score is defined as follows: \begin{equation} \mbox{Z-Score} (\tilde{f}_S({\bf q})) \triangleq \frac{\tilde{f}_S({\bf q}) -\mu_{\tilde{f}_S}}{\sigma_{\tilde{f}_S}} \end{equation} \noindent where $\mu_{\tilde{f}_S}$ and $\sigma_{\tilde{f}_S}$ are the mean and standard deviation of the densities of all data instances in subspace $S$, respectively. The iPath score is motivated by the \textbf{i}solation \textbf{Forest} (iForest) anomaly detection approach \cite{Liu2008}. The intuition behind iForest is that anomalies are few and susceptible to isolation. iForest constructs $t$ trees, where each tree is built from a randomly selected sub-sample of $\psi$ points ($\psi \ll n$) and recursively partitions it using axis-parallel random splits. Since in the outlying aspect mining context the main focus is on the path length of the query, the authors ignore the other parts of each tree. The intuition behind the iPath score is that, in the most outlying subspace, a given query is easier to isolate than the rest of the data. The iPath of query ${\bf q}$ w.r.t.
sub-samples $\psi$ of the data is \begin{equation} iPath_S({\bf q}) = \frac{1}{t} \sum\limits_{i=1}^t l_S^i({\bf q}) \end{equation} \noindent where $l_S^i({\bf q})$ is the path length of ${\bf q}$ in the $i^{th}$ tree in subspace $S$. An illustration of iPath is presented in Fig. \ref{fig:iPathDemo}, where the red square is a query point in $2$-dimensional space and each numbered horizontal or vertical line represents a split. In Fig. \ref{fig:iPathDemo}(a), 3 splits are required to isolate the query, whereas 7 splits are required in Fig. \ref{fig:iPathDemo}(b). \begin{figure}[t] \centering \includegraphics[scale=0.55]{figures/iPath.png} \caption{An illustrative example of iPath. The query is presented as a red square. (a) A random isolation path of a query point where it is an outlier. (b) A random isolation path of a query point where it is an inlier \cite{Vinh2016}.} \label{fig:iPathDemo} \end{figure} Vinh et al. (2016) \cite{Vinh2016} were the first to coin the term dimensionality unbiasedness. \begin{definition}[\textbf{Dimensionality unbiased} \cite{Vinh2016}] A dimensionality unbiased outlyingness measure ($OM$) is a measure whose baseline value, i.e., the average value for any data sample $\mathcal{O} = \{o_1, o_2, \cdots, o_n \}$ drawn from a uniform distribution, is a quantity independent of the dimension of the subspace $S$, i.e., $$ E[OM_S(x) | x \in \mathcal{O}] = \frac{1}{n} \sum\limits_{x \in \mathcal{O}} OM(x) = \mbox{const. w.r.t } |S| $$ \label{def1} \end{definition} In \cite[Theorem 3]{Vinh2016}, it is proven that the rank transformation and $Z$-score normalization result in a constant average value under any data distribution. It is worth noting that, for the $Z$-score, not only the mean but also the variance of the normalized measure is constant w.r.t. dimensionality. To avoid exhaustively enumerating all subspaces, Vinh et al. also proposed a beam search strategy. The overall beam search process is divided into three stages.
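The density $Z$-score above can be sketched as follows. This is a simplified illustration of our own, assuming a product Gaussian kernel (matching the KDE formula used by OAMiner) and a rule-of-thumb bandwidth of our choosing; lower scores indicate a more outlying query:

```python
import numpy as np

def kde_density(X_S, q_S, h):
    """Product-Gaussian-kernel density estimate of q in the projected data X_S,
    with a per-dimension bandwidth h[i]."""
    n, m = X_S.shape
    z = ((q_S - X_S) / h) ** 2                       # (n, m) squared scaled deviations
    norm = n * (2 * np.pi) ** (m / 2) * np.prod(h)
    return np.exp(-0.5 * z.sum(axis=1)).sum() / norm

def density_zscore(X, S, q, h=None):
    """Z-score of the query's density among all point densities in subspace S."""
    X_S = X[:, S]
    if h is None:
        # Scott-like rule-of-thumb bandwidth -- an assumption, not Vinh et al.'s choice
        h = X_S.std(axis=0) * len(X) ** (-1.0 / (len(S) + 4))
    dens = np.array([kde_density(X_S, x, h) for x in X_S])
    return (kde_density(X_S, q[S], h) - dens.mean()) / dens.std()
```

Because the mean and standard deviation are taken over the densities within the same subspace, the score is comparable across subspaces of different dimensionality, which is the point of Theorem 3 above.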
In the first stage, all $\mbox{1-D}$ subspaces are inspected to identify trivial outlying features. In the second stage, an exhaustive search is performed over all possible $2$-dimensional subspaces. In the third stage, beam search proceeds level by level: at each level $l$, the algorithm keeps only the top $W$ subspaces, where $W$ is called the beam width. The total number of subspaces considered by the beam algorithm is of the order $O(D^2 + W D_{max})$, where $D_{max}$ is the maximum subspace dimension. \cite{Wells2019} introduced a simple grid-based density estimator called sGrid, a smoothed variant of the classical grid-based density estimator \cite{Silverman1986}. Let $\mathcal{O}$ be a collection of $n$ data objects in $D$-dimensional space and $x.S$ be the projection of a data object $x \in \mathcal{O}$ onto subspace $S$. The sGrid density of a point ${\bf q}$ is computed from the points that fall into the bin covering ${\bf q}$ and its neighboring bins. Fig. \ref{fig:SGrid} shows an illustrative example of sGrid, in which the density of $x$ is estimated using $9$ bins while that of $y$ is estimated using $6$ bins. \begin{figure}[tb] \centering \includegraphics[scale=0.75]{figures/SGrid.png} \caption{An illustrative example of the sGrid \cite{Wells2019}.} \label{fig:SGrid} \end{figure} They showed that replacing the kernel density estimator with sGrid makes OAMiner \cite{Duan2015} and Beam \cite{Vinh2016} run two orders of magnitude faster than their original implementations. However, sGrid is not a dimensionally unbiased measure and hence requires $Z$-score normalization, which again makes it computationally inefficient.
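The idea behind sGrid can be illustrated with a simplified grid estimator of our own (not the authors' bitset-based implementation): the density of a query is the fraction of points falling in the bin that covers it or in any adjacent bin.

```python
import numpy as np

def sgrid_like_density(X_S, q_S, bins=8):
    """Smoothed grid density: count points in the query's bin and all
    neighbouring bins, normalized by the data size. Assumes each dimension
    of X_S has non-zero range."""
    lo, hi = X_S.min(axis=0), X_S.max(axis=0)
    width = (hi - lo) / bins
    # integer bin index per point and per dimension, clipped to the grid
    cells = np.clip(((X_S - lo) / width).astype(int), 0, bins - 1)
    q_cell = np.clip(((q_S - lo) / width).astype(int), 0, bins - 1)
    # a point contributes if it lies in the query's bin or an adjacent bin
    near = (np.abs(cells - q_cell) <= 1).all(axis=1)
    return near.sum() / len(X_S)
```

The speed advantage comes from the fact that bin counts can be precomputed once per subspace, so scoring a query needs no per-point distance computation.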
Very recently, \cite{Samariya2020} proposed the \textbf{S}imple \textbf{I}solation score using \textbf{N}earest \textbf{N}eighbor \textbf{E}nsemble (SiNNE in short) measure, which is motivated by the Isolation using Nearest Neighbor Ensembles (iNNE) method for outlier detection \cite{Tharindu2017}. SiNNE constructs an ensemble of $t$ models ($\mathcal{M}_1, \mathcal{M}_2, \cdots, \mathcal{M}_t$). Each model $\mathcal{M}_i$ is constructed from a randomly chosen sub-sample ($\mathcal{D}_i \subset \mathcal{O}, |\mathcal{D}_i| = \psi < n)$ and consists of $\psi$ hyperspheres, where the radius of the hypersphere centered at ${\bf a} \in \mathcal{D}_i$ is the Euclidean distance from ${\bf a}$ to its nearest neighbor in $\mathcal{D}_i$. A working example of a SiNNE model constructed on a $2$-dimensional data set of 20 data objects with $\psi = 8$ is presented in Fig. \ref{fig:SiNNE-Model}. The outlying score of ${\bf q}$ in model $\mathcal{M}_i$ is $I({\bf q}|\mathcal{M}_i) = 0$ if ${\bf q}$ falls in any of the hyperspheres and 1 otherwise. The final outlying score of ${\bf q}$ using $t$ models is: \begin{equation} \mbox{SiNNE}({\bf q}) = \frac{1}{t} \sum\limits_{i=1}^t I({\bf q}|\mathcal{M}_i) \end{equation} In their work, they argue that $Z$-score normalization is biased towards subspaces with high density variance and that the definition of dimensionality unbiasedness is not sufficient. SiNNE is computationally faster than density- and distance-based measures.
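A minimal sketch of the SiNNE construction and scoring described above (the parameter defaults and random-number handling are our assumptions):

```python
import numpy as np

def build_sinne_models(X, t=100, psi=8, rng=None):
    """Each model: psi hyperspheres centred on a random sub-sample; the radius
    of each sphere is the distance from its centre to its nearest neighbour
    within the sub-sample."""
    rng = rng or np.random.default_rng(0)
    models = []
    for _ in range(t):
        D_i = X[rng.choice(len(X), size=psi, replace=False)]
        d = np.sqrt(((D_i[:, None, :] - D_i[None, :, :]) ** 2).sum(-1))
        np.fill_diagonal(d, np.inf)          # exclude self-distance
        models.append((D_i, d.min(axis=1)))  # (centres, radii)
    return models

def sinne_score(q, models):
    """Fraction of models in which q falls outside every hypersphere
    (1 = outlier in all models, 0 = covered by some sphere in every model)."""
    out = 0
    for centres, radii in models:
        dq = np.sqrt(((centres - q) ** 2).sum(axis=1))
        out += int((dq > radii).all())
    return out / len(models)
```

Because the hypersphere radii adapt to the local data density, a query in a sparse region can still be covered by a large sphere, which is how SiNNE retains sensitivity to local outliers without any normalization.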
\begin{figure}[t] \centering \subfloat[]{\includegraphics[width=0.4\textwidth]{./figures/Example-dataset.png}}\hspace{50pt} \subfloat[]{\includegraphics[width=0.4\textwidth]{./figures/Example-normal-regions.png}} \caption{(a) An example data set $\mathcal{O}$ (samples in dark black are selected to be in $\mathcal{D}_i$ to construct ${\mathcal M}_i$); and (b) the normal region, defined as the area covered by the hyperspheres in ${\mathcal M}_i$ \cite{Samariya2020}.} \label{fig:SiNNE-Model} \end{figure} \paragraph{\textbf{Strengths and Weaknesses.}} The existing score-and-search techniques show good performance. However, distance- and density-based measures are computationally expensive; as a result, they are only applicable to very small data sets. The iPath score is the computationally fastest measure because it does not require any distance computation, but it is unable to detect local outliers. The sGrid density estimator is an attractive replacement for KDE because it is computationally much more efficient; however, sGrid is biased towards high-dimensional subspaces and thus requires $Z$-score normalization, which adds significant computational overhead. SiNNE is the second-fastest measure after iPath and, unlike iPath, it can detect local outliers. In addition, it is an unbiased measure, so no normalization is needed. The time complexity of each scoring measure is summarized in Table \ref{tab:features}. \begin{table}[htb] \caption{The time complexity of computing the score of one query ${\bf q}$ in a subspace using different measures.
Note that $n$ is the data size; $m$ is the dimensionality of the subspace; $w$ is the block size in the bitset operation, a parameter used in sGrid; $\psi$ is the sub-sample size and $t$ is the number of trees (models), parameters used in iPath and SiNNE.} \label{tab:features} \centering \begin{tabular}{l @{\hspace{50pt}} l} \toprule\noalign{\smallskip} Scoring Measure & Time Complexity \\ \noalign{\smallskip}\midrule\noalign{\smallskip} Density & $O(nm)$ \\ Density Rank & $O(n^2m)$ \\ Density $Z$-Score & $O(n^2m)$ \\ iPath & $O(t \psi)$ \\ sGrid $Z$-Score & $O(n^2m/w)$ \\ SiNNE & $O(t \psi m + t \psi^2 m)$ \\ \noalign{\smallskip}\bottomrule \end{tabular} \end{table} \section{Feature Selection} \label{sec:feature-selection} Compared to the score-and-search approach, relatively little work is available on feature selection based methods. In the feature selection approach \cite{Micenkova2013, Dang2014}, the outlying aspect mining problem is first transformed into a classification problem, and classical feature selection techniques are then applied to find an explanatory subspace for a given outlier. In this line of work, \cite{Micenkova2013} is the earliest study, performing outlier explanation on numeric data sets; the authors termed the outlying aspect mining problem \textit{outlier explanation}. They formulate the problem as: for a given outlier, detected by any outlier detection algorithm, find a possible explanation for that outlier. The outlier is assumed to be given as the query (input), and the aim is to find an explanatory subspace. Outlier explanation converts the OAM problem into a two-class (inlier vs. outlier) classification problem.
For each outlier ${\bf q}$, an artificial outlier class is generated from a Gaussian $\mathcal{N}_D({\bf q}, \Sigma)$, where $\Sigma$ is a $D \times D$ scalar matrix, $\Sigma = \lambda^2 I$, and $\lambda = \alpha \cdot \frac{1}{\sqrt{D}} \cdot d_k({\bf q})$, with $d_k({\bf q})$ the distance between ${\bf q}$ and its $k^{th}$ nearest neighbor. The negative class is constructed from the $k$-nearest neighbors of the outlier point ${\bf q}$ in the full feature space and $k$ points from the rest of the data set. \cite{Angiulli2009} studied the problem of outlier property detection and introduced an outlying property detection technique: given a categorical data set, the goal is to find the top-$k$ sets of attributes in which the query point ${\bf q}$ has the highest outlierness score. \cite{Angiulli2017} proposed a version for numeric data sets. For a given data set $\mathcal{O}$ in $D$-dimensional space and a query object ${\bf q} \in \mathcal{O}$, \cite{Angiulli2017} finds the pairs ($E$,$S$), with $E \subseteq \mathcal{O}$ and $S \subseteq \mathcal{F}$, where $E$ is referred to as the explanation and $S$ as the property (dimensions). In 2014, \cite{Dang2014} introduced LOGP, which stands for \textbf{L}ocal \textbf{O}utliers with \textbf{G}raph \textbf{P}rojection. LOGP is a technique that offers a solution to two problems: (1) outlier detection and (2) outlier interpretation. \paragraph{\textbf{Strengths and Weaknesses.}} The advantage of these methods is that they do not require any subspace search, so they are faster than score-and-search methods. However, feature selection based methods depend on $k$-nearest-neighbor techniques, and, as pointed out by Vinh et al. (2016) \cite{Vinh2016}, the $k$-nearest neighbors in the full-dimensional space can be dramatically different from the $k$-NN in a subspace.
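The two-class construction used by the outlier-explanation approach \cite{Micenkova2013} can be sketched as follows; the number of sampled positives and the default value of $\alpha$ are illustrative assumptions of ours:

```python
import numpy as np

def make_two_class_problem(X, q, k=10, alpha=0.35, rng=None):
    """Build the artificial classification problem for outlier explanation:
    positives are sampled from a Gaussian around the query q with
    lambda = alpha * d_k(q) / sqrt(D); negatives are q's k-NN plus
    k random points from the rest of the data."""
    rng = rng or np.random.default_rng(0)
    n, D = X.shape
    d = np.sqrt(((X - q) ** 2).sum(axis=1))
    knn_idx = np.argsort(d)[1:k + 1]              # skip q itself (assumed to be in X)
    lam = alpha * d[knn_idx][-1] / np.sqrt(D)     # lambda from the k-th NN distance
    positives = rng.normal(loc=q, scale=lam, size=(2 * k, D))
    rest = np.setdiff1d(np.arange(n), np.r_[knn_idx, d.argmin()])
    negatives = np.vstack([X[knn_idx], X[rng.choice(rest, size=k, replace=False)]])
    X_cls = np.vstack([positives, negatives])
    y_cls = np.array([1] * len(positives) + [0] * len(negatives))
    return X_cls, y_cls
```

Any standard feature selection method for classification can then be applied to `(X_cls, y_cls)`; the selected features form the explanatory subspace.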
\section{Hybrid Approach} \label{sec:hybrid} To the best of our knowledge, {\bf OARank} ({\bf O}utlying {\bf A}spect Mining via Feature {\bf Rank}ing) \cite{Vinh2015} is the only work that solves the OAM problem using a hybrid approach, combining the strengths of the feature selection and score-and-search based approaches. The OARank framework is a two-stage process. In the first stage, OARank ranks the features according to their potential contribution to the outlyingness of the query. The second stage is optional: the top-ranked features can either be inspected manually by the user, or score-and-search can be performed on the subset of $m$ top-ranked features, where $m \leq D$. The criterion for choosing the $m$ features is as follows: \begin{equation} SS = \min_{\substack{S \subseteq \mathcal{F} \\ |S| = m}} \Bigg\{ C(m) \sum\limits_{i=1}^n \sum\limits_{\substack{t,j \in S \\ t<j}} K(q.j-o_{i}.j, h.j) \, K(q.t - o_{i}.t, h.t) \Bigg\} \end{equation} \noindent where $K(x-\mu,h) = (2\pi h^2)^{-\frac{1}{2}} \exp\left(-\frac{(x-\mu)^2}{2h^2}\right)$ is the one-dimensional Gaussian kernel, with $h$ and $\mu$ the bandwidth and center of the kernel, respectively, and $C(m) = \frac{2}{nm(m-1)2^{(m-2)}}$ is a normalization constant. \paragraph{\textbf{Strengths and Weaknesses.}} Hybrid systems are built upon the connection between the score-and-search and feature selection based approaches. However, OARank uses a kernel density estimator to determine the subspace where the criterion is minimized, which is again computationally prohibitive for large and high-dimensional data sets. \section{Open Challenges} \label{sec:open-challenges} Outlying aspect mining has so far received relatively little attention from researchers, and many challenges still need to be addressed.
The first and foremost challenge is that traditional score-and-search approaches use distance- or density-based scoring measures. These are easy to implement, but their time complexity of $O(n^2 D)$ makes them infeasible for high-dimensional and very large data sets. The most computationally expensive part of OAM is the score computation, which is repeated for every data object in each subspace. Another issue that still needs attention is that there is no globally accepted evaluation measure for outlying aspect mining systems. Vinh et al. (2016) \cite{Vinh2016} proposed an entropy-based evaluation measure called the consensus index. However, Wells and Ting (2019) \cite{Wells2019} pointed out that the consensus index is more suitable for evaluating clustering outcomes than for assessing the outlierness of a query in a subspace. Therefore, one of the open research challenges is the development of an evaluation metric for the outlying aspects detected by OAM systems. Another important part of OAM is searching the subspaces in which a given data object differs from the rest of the data. With systematic search methods, OAM has to compute the outlierness of a given query in each subspace, which makes OAM methods computationally expensive; an appropriate search technique is therefore needed to reduce the effect of the curse of dimensionality. \section{Conclusion} \label{sec:Conclusion} Outlying aspect mining is a young field that is still little known among the research community, which motivated us to write this survey. We have summarised the various ways in which the problem of outlying aspect mining has been solved in the past, organized existing work by approach, and discussed the strengths and weaknesses of each approach in its respective category.
We are specifically interested in problems related to efficiency and effectiveness for high-dimensional and large data sets. We believe there is still room for improvement in the area of outlying aspect mining, which offers many research opportunities for the future. \subsubsection*{Acknowledgments} This work is supported by a Federation University Research Priority Area (RPA) scholarship, awarded to Durgesh Samariya. \bibliographystyle{splncs04}
\section{Introduction} Binaries with orbital periods shorter than a few hours, namely { ultracompact binaries (UCBs)}, play a crucial role in the functional tests of space gravitational wave observatories \citep{Shah+etal+2012}. Over the past two years, the Zwicky Transient Facility (ZTF) has discovered a few UCBs with orbital periods shorter than 20 minutes through densely sampled photometric measurements \citep{Burdge+etal+2019+Nature,Burdge+etal+2020+systematic,Burdge+etal+2020+8.8min}; these binaries are predicted to be detected by LISA with high signal-to-noise ratios (SNRs) and to aid their gravitational-wave (GW) parameter estimation. On the other hand, as the class of binaries with the shortest orbital periods, UCBs represent the terminal phase of some binary evolution channels, providing opportunities to study physics under extreme conditions and giving crucial constraints on binary evolution, such as mass-accretion/loss processes, common-envelope evolution, and angular-momentum loss mechanisms (see also \citealt{Toonen+etal+2014,Wang+etal+2021,Chen+etal+2020,Zhu+etal+2012,Rappaport+etal+1983}). Noninteracting black hole binaries (or candidates), which cannot be detected by current X-ray detectors, have been discovered through the periodic photometric variability and radial velocities (RVs) of their visible companion stars \citep{Thompson+etal+2019+Science,Liu+etal+2019+Nature}. Furthermore, some recent studies suggest that the black holes in short-period ellipsoidal variables can be revealed by analyzing the Fourier amplitudes of their light curves \citep{Gomel+etal+2021+ellipsoidals_I,Gomel+etal+2021+ellipsoidals_II,Gomel+etal+2021+ellipsoidals_III}.
Lying at the intersection of GW verification binaries and black hole binaries, ultracompact black hole (X-ray) binaries \citep{Bahramian+etal+2017+UCBHXB}, in which the X-ray radiation should be inefficient \citep{Knevitt+etal+2014,Menou+etal+1999}, are expected to be first discovered by high-cadence, wide-area optical survey missions or next-generation gravitational-wave observatories. Ultracompact black hole binaries could be the unique Galactic black hole systems detectable through both gravitational and electromagnetic waves, implying that they would provide the most direct evidence that stellar-mass black holes exist. \begin{figure*} \includegraphics[width=0.94\textwidth]{figures/skymap.pdf} \caption{ \textbf{Observation sky areas of the TMTS shown in equatorial coordinates.} The sky map is plotted using the HEALPix package (\url{http://healpix.sourceforge.net}) with NSIDE=128 \citep{Gorski+etal+2005}. The depth of the color represents the total number of 1-min exposures. } \label{fig:skymap} \end{figure*} So far, a dozen ground-based survey missions have operated to search for transients and variables on different timescales.
These missions include the Deep Lens Survey (DLS, 1999--2005, \citealt{DLS+2004_short}), the Faint Sky Variability Survey (FSVS, \citealt{FSVS+2003+reduction}), the RApid Temporal Survey (RATS, \citealt{RATS+2005+survey}), the Catalina Real Time Survey (CRTS, since 2007, \citealt{Catalina+2009+first,Catalina+2014+ultracompact,Catelina+2014+CV}), the Palomar Transient Factory (PTF, 2009--2012, \citealt{PTF+2009+performance,PTF+2009+OT}), the OmegaWhite survey (\citealt{OmegaWhite+2015+survey}), the Intermediate Palomar Transient Factory (iPTF, 2013--2017, \citealt{iPTF+2019+detectability,Ho+etal+2018+iptf}), the High Cadence Transient Survey (HiTS, \citealt{HiTS+2018}), the Evryscope (since 2015, \citealt{Evryscope+2019+performance}), the Zwicky Transient Facility (ZTF, since 2017, \citealt{ZTF+2019+first,ZTF+2019+products}), and the Compact binary HIgh CAdence Survey (CHiCaS, \citealt{CHiCaS+2020+survey}). High-cadence surveys, especially uninterrupted time-series photometry, are more efficient at discovering short-period light variations and phenomena associated with stellar flares/bursts. However, only a few missions (e.g. the ZTF high-cadence Galactic Plane Survey, \citealt{Kupfer+etal+2021}) insist on performing uninterrupted photometry, as high-cadence surveys significantly sacrifice sky-area coverage. Fortunately, several space-based survey missions, such as the \emph{Kepler} mission \citep{Kepler+2010+first,Koch+etal+2010+Kepler_intrument} and the Transiting Exoplanet Survey Satellite (TESS, \citealt{Ricker+etal+2014+TESS,TESS+2015}), have performed long-duration uninterrupted photometry. However, \emph{Kepler} was limited to the Cygnus--Lyra region and the ecliptic plane, while TESS was designed to monitor the brightest dwarf stars. Moreover, these space-based missions usually suffer from low data-transfer efficiency and thus ultimately provide light curves for only hundreds of thousands of objects.
As \cite{Burdge+etal+2020+systematic} mentioned, a systematic search for and study of ultracompact binaries relies not only on high-cadence photometry but also on time-resolved spectroscopy: the high-cadence photometry is used to search for periodic signals, while the spectra are used to determine the semi-amplitudes of the radial velocities. On the other hand, studies of fast-evolving transients such as flare stars also require spectroscopic confirmation \citep{Kulkarni+FOTs+DLS+2006,Ho+etal+2018+iptf}. Therefore, we initiated a new high-cadence survey mission that attempts to cover the LAMOST sky areas with the Tsinghua University--Ma Huateng Telescopes for Survey (TMTS, \citealt{TMTS+2020+survey}). LAMOST has been carrying out a time-domain medium-resolution spectroscopic survey since October 2018 \citep{Liu+etal+2020}, which provides precise measurements of the RV variations for stars brighter than 15.0 mag. In this paper, we present the methods of data analysis and preliminary results from the first-year high-cadence survey of the TMTS. The scheme of the first-year observations and the light-curve dataset are described in Section~\ref{sec:observation}. The photometry and calibration are presented in Section~\ref{sec:reduction}. In Section~\ref{sec:methods}, we introduce the methodology for detecting variability, periodicity and flares in the TMTS light curves; in this section, we also describe the source selection with the Hertzsprung--Russell (HR) diagram. We present selected results in Section~\ref{results}.
\section{Observation} \label{sec:observation} TMTS is a multiple-tube telescope system consisting of four {40-cm} optical telescopes with a total field of view (FoV) of about 18~deg$^2$ { (4.5~deg$^2$ for each telescope) and a plate scale of $1.86^{\prime \prime}\,$pixel$^{-1}$.} The TMTS system is equipped with 4096$\times$4096 pixel CMOS cameras, which have a short read-out time ($< 1$~s) and allow high-cadence photometry of targets over large sky areas. {A detailed introduction to the performance of TMTS is given in \cite{TMTS+2020+survey}.} Since TMTS and LAMOST have similar FoVs and are located at the same site (i.e. Xinglong Station of NAOC), the former is an ideal telescope system for carrying out collaborative tasks with the latter. { At Xinglong Station, the typical seeing is better than or comparable to $2.6^{\prime \prime}$ for 80\% of nights, and the sky brightness at zenith is around 21.1~mag/arcsec$^2$ \citep{Huang+etal+2012+TNT,Zhang+etal+2015+Xinglong_conditions}. Due to the light pollution from the surrounding cities, the sky brightness increases with increasing zenith angle. As \cite{Zhang+etal+2015+Xinglong_conditions} reported, 32\% of nights at Xinglong Station allow cloud-free observations for at least 6 hours, which means that about 117~nights per year are suitable for the uninterrupted photometric observations required by the TMTS.} The TMTS has two observation modes: (i) staring at the LAMOST areas for the whole night whenever possible with a cadence of about 1 minute; (ii) a supernova survey with a cadence of about 1--2 days. In this paper, we concentrate on the first-year observations of the LAMOST sky areas. In order to achieve a high signal-to-noise ratio, the observations are conducted in the Luminous filter {(L filter hereafter), which has a very wide wavelength coverage ranging from 330~nm to about 900~nm when combined with the CMOS detector (see Figure~6 in \citealt{TMTS+2020+survey}).
Similar to \emph{Gaia}'s G band (330--1050~nm, \citealt{Gaia_collaboration+2018+data}), this ``white-light'' band maximizes the detection depth of the optical telescopes. For a 1-min exposure, the 3-$\sigma$ detection limit of the TMTS can reach about 19.4~mag.} As shown in Figure~\ref{fig:skymap}, the TMTS observed 188 LAMOST plates during the whole year of 2020, covering a total sky area of $\approx 1970~{\rm deg}^2$. Among them, a sky area of $\approx1793$ deg$^2$ has at least 100 uninterrupted 1-min exposures, as shown in the left panel of Figure~\ref{fig:obs_stat}. Notice that each 1-min image here is combined from six 10-second images, so the frame rate drops to $\approx 1/75$~Hz. For the purpose of selecting variables based on light-curve analysis, we focus on the observed sky areas with at least 100 epochs, which take up about 96\% of the LAMOST sky areas monitored during the first year. The high-cadence survey allows us to discover and identify variables in the LAMOST fields on a timescale of about 1 minute. \begin{figure*} \includegraphics[width=0.49\textwidth]{figures/areas_exposures.pdf} \includegraphics[width=0.49\textwidth]{figures/lc_exposures.pdf} \caption{ \textbf{Histograms of the number of (1-min) exposures for the observed areas (left) and light curves (right).} \emph{Left:} The black solid line and red dashed line represent the statistics based on overall and uninterrupted observations, respectively. The purple dot-dashed lines indicate the cut-off value (i.e. 100 repeated exposures). \emph{Right:} The blue dashed line represents the exposure number of valid measurements (i.e. flag$=0$, see details in Section~\ref{sec:reduction}). } \label{fig:obs_stat} \end{figure*} As can be seen from the right panel of Figure~\ref{fig:obs_stat}, the TMTS produced $\approx$13 million uninterrupted light curves during the survey conducted in 2020, of which {$\approx$6 million} have at least 100 repeated measurements.
Notice that there are about 4.7 million light curves with fewer than 20 epochs; the sources with such sparse measurements are either located near the edge of the FoV or hover around the detection limit. Based on the light curves with at least 100 ``valid'' measurements (see details in Section~\ref{sec:reduction}), we built a dataset of 4.9 million selected light curves from the first-year survey, namely the \textbf{\tmtsdata} dataset. It is worth noting that multiple light curves may correspond to the same source, because some sources are located in the overlapping FoVs of multiple TMTS telescopes. Since each light curve can be used to detect variability and periodicity of a source independently, and only a small fraction of sources have multiple light curves, the repeated light curves are not spliced together. \section{Photometry and Calibration} \label{sec:reduction} All of the 10-sec raw images from the TMTS are first bias-, dark- and flat-corrected using the \emph{FITSH} package \citep{Pal+2012+FITSH}. Astrometric calibration is then applied to each 10-sec frame using the software package \emph{SCAMP} \citep{Bertin+2006+SCAMP} and the reference catalog of the PPM-Extended (PPMX, \citealt{Roser+etal+2008+PPMX}). \emph{SCAMP} automatically generates accurate World Coordinate System (WCS) information by cross-matching against the reference catalog and provides accurate astrometric solutions for the FITS images. To improve the detection depth, 6 successive frames are median-combined into a 1-min image using the software module \emph{SWarp} from the \emph{TERAPIX} pipeline \citep{Bertin+etal+2002+TERAPIX}. We extract the fluxes of sources in the combined images using the software package Source Extractor (SExtractor, \citealt{Bertin+Arnouts+1996+SExtractor}), and we checked the SExtractor flag for all of the TMTS measurements. A SExtractor flag $\ne 0$ means that there is some problem with the measurement, e.g.
blending or saturation (see details in \url{https://sextractor.readthedocs.io/en/latest/Flagging.html}). In addition, we added a new flag bit (value$=256$) to mark the measurements within 100 pixels of the detector boundary, as these measurements frequently cause spurious variations in the light curves and are difficult to calibrate. Owing to limitations in the manufacturing process, the backgrounds of the four regions of the CMOS detector divided by the X/Y midlines are not completely consistent, especially on bright-moon nights (see \citealt{TMTS+2020+survey}). As this inconsistency can cause spurious variations in the light curves of objects crossing the midlines, we added an additional flag bit (value$=512$) to detections within 40 pixels of the detector midlines. The histogram of flag$=0$ measurements (``valid measurements'' hereafter) is also shown in the right panel of Figure~\ref{fig:obs_stat}, and the number of light curves with at least 100 repeated valid measurements is $\approx4.9$ million. \begin{figure} \includegraphics[width=0.47\textwidth]{figures/lc_example.pdf} \caption{ Example of a TMTS light curve and correction factor (i.e., the $\alpha_{i}$ in Eq.~\ref{Eq+corrected_flux}) for the W UMa-type eclipsing binary CRTS J075625.0+420405 \citep{Drake+etal+2014+catalina_period,Marsh+etal+2017+eb}, which was observed by telescope \#3 of TMTS for $\approx$5.3 hours on January 15, 2020.} \label{fig:lc_example} \end{figure} The flux measurements from continuous observations were combined into a light curve. Before detecting real variability and periodicity, we first need to remove systematic effects, such as spurious variations caused by changes in airmass, lunar phase, solar altitude, etc. The method of \cite{Tamuz+etal+2005+sysremove} and principal component analysis (PCA) are not adopted in our analysis.
This is because the extinction coefficients in these algorithms cannot be correctly determined for variable stars: these methods implicitly assume that the magnitudes of each light curve scatter around a constant average. Some methods can better constrain the coefficients for variable stars, but they require assuming a prior model for the intrinsic variation (see also \citealt{Aigrain+etal+2017+sysrem,Ofir+etal+2010+corot}). For these reasons, we developed a weighted version of ``differential photometry'' to reduce the systematic errors of the light curves. Similar to the method of \cite{Tamuz+etal+2005+sysremove} and PCA, our algorithm also removes the common trends and features among a large set of light curves. These common trends and features can be modeled by (weighted) averaging of all measurements of constant stars (i.e., stars that show no variations during the observations) within the FoV. The ``effective extinction coefficient'' for each star (i.e., $c_i$ in \citealt{Tamuz+etal+2005+sysremove}) is not used in our algorithm, since this coefficient cannot be determined accurately for variable stars. Instead, a weighting factor based on the separation between each constant-luminosity reference star and the target is introduced to account for position-dependent effects across the detector. Hence, the corrected flux $F_{i}^{\rm corr}$ of the target source at epoch $t_i$ is calculated as \begin{equation} F_{i}^{\rm corr}=F_i\times \alpha_i= F_i\times \prod\limits_{j=1}^M \left(\frac{\overline{F_{j}^{\rm ref} }}{ F_{i,j}^{\rm ref} }\right)^{\omega_j/\sum\limits_{k=1}^M \omega_k } \label{Eq+corrected_flux} \end{equation} where $\alpha_i$ is the correction factor and $M$ is the total number of reference stars. $F_i$ and $F_{i,j}^{\rm ref}$ are the uncorrected fluxes of the target and the $j$th reference star at epoch $t_i$, respectively.
$\overline{F_{j}^{\rm ref} }$ is the uncorrected flux of the $j$th reference star averaged over an observation night, expressed as $ \sum\limits_{i=1}^N F_{i,j}^{\rm ref}/N$, where $N$ is the number of epochs. $\omega_j$ is the weighting factor of the $j$th reference star, set to $1~{\rm arcsec^2}/(a_j+C)^2$, where $a_j$ is the angular separation between the target and the $j$th reference star. The characteristic separation $C$ is set to a small value (i.e., $60$~arcsec in our work) to avoid a singularity when a reference star is very close to the target. The corrected fluxes are insensitive to the value of $C$: the results do not change significantly even when $C$ is set to 10~arcmin. Sources with $13~{\rm mag} < G < 17~{\rm mag}$ were selected as reference-star candidates, where $G$ is the mean $G$-band magnitude from the Gaia DR2 database \citep{Gaia_Collaboration+2016+performance,Gaia_collaboration+2018+data}. To improve the calibration process, we only adopted reference-star candidates with $q=100\%$ for their light curves, where $q=N({\rm flag=0})/N$ is a parameter evaluating the quality of a light curve. Here $N$ is the total number of epochs of a given light curve and $N({\rm flag=0})$ is the number of valid measurements. Note that the reference stars may contain some variables, which should be identified and removed through an iterative process. In order to reveal such variables among the reference stars, we calculated the inverse von Neumann ratio for all of our light curves (see \citealt{Shin+etal+2009+variable,Sokolovsky+etal+2017+variables}). This ratio is a useful variability index that tests the independence of successive measurements.
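For illustration, the correction in Equation~\ref{Eq+corrected_flux} amounts to a weighted geometric mean of the reference-star flux ratios. A minimal NumPy sketch (hypothetical array shapes and values; not the actual TMTS pipeline code) is:

```python
import numpy as np

def correction_factor(f_ref, sep, C=60.0):
    """Correction factors alpha_i of the corrected-flux equation.

    f_ref : (N, M) uncorrected fluxes of M reference stars at N epochs
    sep   : (M,) angular separations a_j to the target, in arcsec
    C     : characteristic separation, in arcsec
    """
    w = 1.0 / (sep + C) ** 2            # omega_j = 1 arcsec^2 / (a_j + C)^2
    w = w / w.sum()                     # normalized exponents omega_j / sum_k omega_k
    mean_ref = f_ref.mean(axis=0)       # nightly mean flux of each reference star
    # weighted geometric mean of the flux ratios, evaluated in log space
    log_alpha = (w * np.log(mean_ref / f_ref)).sum(axis=1)
    return np.exp(log_alpha)

# toy example: 5 epochs, 3 reference stars (hypothetical numbers)
rng = np.random.default_rng(0)
f_ref = rng.uniform(900.0, 1100.0, size=(5, 3))
alpha = correction_factor(f_ref, sep=np.array([30.0, 120.0, 400.0]))
f_target = rng.uniform(450.0, 550.0, size=5)
f_corr = f_target * alpha               # corrected target fluxes
```

Working in log space keeps the weighted geometric mean numerically stable, and the weights fall off with separation exactly as $\omega_j=1~{\rm arcsec^2}/(a_j+C)^2$ prescribes.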
The inverse von Neumann ratio \citep{Sokolovsky+etal+2017+variables} is defined as \begin{equation} \frac{1}{\eta}=\frac{\sum\limits_{i=1}^{N} (F_{i}^{\rm corr}-\overline{F^{\rm corr}} )^2} {\sum\limits_{i=1}^{N-1} (F_{i+1}^{\rm corr}-F_{i}^{\rm corr})^2 } \label{Eq+neumann_ratio} \end{equation} where $\overline{F^{\rm corr}}$ represents the corrected flux averaged over all epochs. We set a tight cut-off value, i.e., 0.8, to exclude all variables from the reference stars. We will explain why $\frac{1}{\eta}=0.8$ is a robust threshold in Section~\ref{sec+variable_detection}. \begin{figure} \includegraphics[width=0.47\textwidth]{figures/example_tess.pdf} \caption{ \textbf{A comparison between TESS and TMTS light curves.} The observed object is HS 0455+8315, an eclipsing cataclysmic variable with a visual magnitude varying from about 15 to 17 \citep{Downes+etal+2001+CV}. TMTS observed this object on November 2, 2020, and the TESS (PDC) light curve was obtained from the observations on June 9, 2020 (Sector~26). The start time of the TESS light curve is shifted to make its primary eclipse coincide with that of TMTS. } \label{fig:tess_tmts_lc} \end{figure} The re-calibrated flux and the correction factor for the TMTS light curve of the W UMa-type eclipsing binary CRTS J075625.0+420405 are shown in the top and middle panels of Figure~\ref{fig:lc_example}, respectively. To obtain the corresponding magnitudes, we also calculate the magnitude zero point $m_0$ for each target, given by \begin{equation} m_0=\frac{\sum\limits_{j=1}^M \omega_j\times (2.5\, \log_{10} \overline{F_{j}^{\rm ref}} + G_j)}{\sum\limits_{j=1}^M \omega_j} \label{Eq+zero_magnitude} \end{equation} where $G_j$ is the Gaia DR2 $G$ magnitude of the $j$th reference star. The magnitude at epoch $t_i$ is thus estimated as $ m_i=-2.5\times \log_{10}F_{i}^{\rm corr} + m_0$.
Inserting Equations~\ref{Eq+corrected_flux} and \ref{Eq+zero_magnitude} into the above equation, the magnitude can be expressed as \begin{equation} m_i=\frac{\sum\limits_{j=1}^M \omega_j\times (-2.5\, \log_{10}\frac{F_i}{ F_{i,j}^{\rm ref}}+ G_j)}{\sum\limits_{j=1}^M \omega_j}~. \end{equation} The bottom panel of Figure~\ref{fig:lc_example} shows the final magnitudes obtained with the TMTS for CRTS~J075625.0+420405. It is worth noting that the measurement precision of the TMTS is superior to that of the space-based survey mission TESS, as shown by the comparison of the light curves obtained for the same source (see Figure~\ref{fig:tess_tmts_lc}). \begin{figure} \includegraphics[width=0.47\textwidth]{figures/gaia_tmts_mag.png} \caption{ \textbf{Comparison of TMTS L-band magnitudes with Gaia DR2 G-band magnitudes.} The blue points represent the sources with reliable parallax measurements ($\sigma_\varpi/\varpi \leq 0.2$), i.e., sources with more precise astrometric solutions. The purple dot-dashed line represents the one-to-one (diagonal) line. } \label{fig:gaia_tmts_mag} \end{figure} Figure~\ref{fig:gaia_tmts_mag} compares the TMTS magnitudes obtained in the L band with the G magnitudes from Gaia DR2. The mean TMTS magnitudes here were taken from the light curves of \tmtsdata. The Gaia sources with reliable parallax measurements ($\sigma_\varpi/\varpi \leq 0.2$ here, where $\varpi$ is the parallax and $\sigma_\varpi$ its uncertainty) are cross-matched with the TMTS sources. One can see that the TMTS L magnitudes are broadly consistent with the Gaia G magnitudes, while the scatter between the two magnitude systems tends to increase at the faint end (see the blue points in Figure~\ref{fig:gaia_tmts_mag}). Notice that the Gaia sources with spurious parallax values (e.g. negative values) are usually faint and likely located in crowded regions (e.g.
low Galactic latitude) where their astrometric solutions are poorly constrained \citep{Gaia_collaboration+2018+data}. For comparison, we also show an overall version that includes the sources with spurious astrometric measurements (see the grey points in Figure~\ref{fig:gaia_tmts_mag}). These sources with poor Gaia astrometry produce an additional cluster above the original distribution when matched with the TMTS sources, a feature that also appeared in the comparison between Gaia DR1 and DR2 G magnitudes (see details in \url{https://gea.esac.esa.int/archive/documentation/GDR2/index.html}). \section{Methods} \label{sec:methods} \subsection{Variability detection} \label{sec:variability_detection} \begin{figure} \includegraphics[width=0.47\textwidth]{figures/variability_detection.png} \caption{ \textbf{Distributions of light curves, robust StD and inverse von Neumann ratio versus the instrumental magnitude.} \emph{Upper panel}: Distribution of the number of TMTS light curves from the \tmtsdata dataset against magnitude. The red dashed line represents the light curves with quality higher than 95\% { (see Section~\ref{sec:reduction}) }. The bin size of the histogram is 0.1 mag. \emph{Middle panel}: The normalized robust StD versus the magnitude. The black and red solid lines represent the polynomial fits to the median and the 10-$\sigma$ threshold, respectively. The blue squares indicate light curves with a variability index above the 10-$\sigma$ threshold. \emph{Lower panel}: The inverse von Neumann ratio versus the magnitude. } \label{fig:variability_detection} \end{figure} \label{sec+variable_detection} Difference image analysis (DIA, \citealt{Tomaney+Crotts+1996+DIA,Alard+Lupton+1998+DIA}) and light-curve analysis (LCA, \citealt{Sokolovsky+etal+2017+variables}) are the two main methods used to search for variables.
Compared with DIA, light-curve analysis, which is based on measurements obtained at more than two epochs, can reveal low-amplitude variability. To search for variable sources in the TMTS light curves, we calculated two common variability indices. As it is difficult to detect reliable variability in light curves covering a very short duration, we identified variable sources only among the light curves with at least 100 valid epochs (see also \citealt{Gomel+etal+2021+ellipsoidals_III,Kupfer+etal+2021}). We selected the TMTS light curves of the \tmtsdata dataset ($\approx$4.9 million) with instrumental magnitude $ 11.0 < \widetilde{m} <18.5 $. Since the average zero point of all measurements is $25.59\pm0.25$, we defined the instrumental magnitude as $\widetilde{m}=-2.5\times\log_{10}( \overline{F^{\rm corr}} )+25.6$. { The instrumental magnitude here is close to, but not equal to, the astrophysical magnitude, because the variations of the photometric zero point with sky area and observing conditions are not taken into account.} The upper panel of Figure~\ref{fig:variability_detection} shows the histogram of the number of TMTS {light curves} from the \tmtsdata dataset as a function of the instrumental magnitude. From magnitude 12 to 17, the number density of TMTS {light curves} increases by about an order of magnitude. The highest number density appears at $\widetilde{m} \approx 17.0$~mag. At the fainter end, the decrease in number density is primarily because the detection depth varies with observing conditions; at the brighter end, the detections suffer from both saturation and the small number of bright stars. We calculated the robust standard deviation (StD) and the inverse von Neumann ratio ($1/\eta$ here) as functions of the instrumental magnitude $\widetilde{m}$ for the selected TMTS light curves (see the middle and lower panels of Figure~\ref{fig:variability_detection}).
The robust StD is the standard deviation inferred from the central 50 percentiles of the data points by assuming a Gaussian distribution \citep{ZTF+DR1+2020}, so it is highly insensitive to outliers or occasional variations. The normalized robust StD, defined as the ratio of the robust StD to the median flux, increases from $\sim 0.01$ to $\sim 0.1$ as the brightness of the sources decreases from 12 mag to 18 mag. Points with significantly larger ``scatter'' than expected are very likely due to light variations. We use a 5th-order polynomial to fit the median and the 10-$\sigma$ threshold, respectively, both calculated in bins of 0.1 mag. About 5,600 light curves have a robust StD above the 10-$\sigma$ threshold (the blue squares in the middle panel of Figure~\ref{fig:variability_detection}). These light curves correspond to about 5,300 Gaia DR2 sources. However, it is difficult to conclude that these sources are all astrophysically variable stars, since blended sources can also show variability in their light curves. For example, \cite{Kupfer+etal+2021} recently reported a false positive rate ({i.e. the rate of non-astrophysically variable sources) of up to 85\%} for the variability detection in the high-cadence Galactic-plane observations of ZTF. { By visually inspecting 300 TMTS light curves with the significance of light variations above the 10-$\sigma$ threshold, we found that non-astrophysically variable stars account for about 67\%. For a lower threshold, i.e. 5$\sigma$, about 23,000 light curves can be selected, but the false positive rate increases to about 86\%. Lower thresholds can thus pick up more astrophysically variable stars, but the higher false positive rates also result in a huge sample containing more non-astrophysical sources, which is hard to inspect visually.
Therefore, we take the variability indices as an auxiliary condition to select periodic variables and flare stars. } \begin{figure} \includegraphics[width=0.47\textwidth]{figures/neumann_ratio_dist.pdf} \caption{ \textbf{Density distributions of the inverse von Neumann ratio $1/\eta$ of light curves from the \tmtsdata dataset for all four telescopes combined (grey solid line) and for each of the four telescopes (colored dashed lines).} The bin size here is 0.02. The purple dot-dashed line indicates the cut-off value used to determine the non-variable reference sources for photometric calibration. } \label{fig:neumann_dist} \end{figure} The inverse von Neumann ratio quantifies the smoothness of successive variations in a time series and does not depend on the uncertainty of the measurements, as the contribution of the uncertainty is nearly {offset} by the denominator in Eq.~\ref{Eq+neumann_ratio}. For an ideal photometric time series following a Gaussian distribution, the expected value of $1/\eta$ is 0.5. However, for real photometric measurements, which do not follow a Gaussian distribution or are not completely independent of each other, the cut-off value should be determined from the distribution of $1/\eta$ (see \citealt{Sokolovsky+etal+2017+variables}). Given the characteristics of $1/\eta$, we use a 3rd-order polynomial (rather than a 5th-order polynomial) to fit the median and robust StD versus $\widetilde{m}$, respectively, which yields $\approx$24 thousand light curves showing variations beyond the 10-$\sigma$ threshold. Notice that $1/\eta$ does not obey a Gaussian distribution in practice, hence the ``$\sigma$'' here does not represent the confidence level of a Gaussian distribution. Moreover, we found that $1/\eta$ tends to be larger for brighter sources, which could be caused by unflagged saturation. The saturation effect tends to reduce the independence of successive measurements and thus increases the value of $1/\eta$.
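The two variability indices above can be sketched as follows (schematic NumPy code; the IQR-to-$\sigma$ factor of 1.349 is one common convention for a Gaussian-based robust StD, and the exact TMTS implementation may differ):

```python
import numpy as np

def robust_std(flux):
    """StD inferred from the central 50% of points (IQR / 1.349 for a
    Gaussian), hence highly insensitive to outliers such as flares."""
    q25, q75 = np.percentile(flux, [25, 75])
    return (q75 - q25) / 1.349

def inv_von_neumann(flux):
    """Inverse von Neumann ratio 1/eta: variance over squared successive
    differences, a measure of the smoothness of the time series."""
    num = np.sum((flux - flux.mean()) ** 2)
    den = np.sum(np.diff(flux) ** 2)
    return num / den

# synthetic examples (hypothetical numbers)
rng = np.random.default_rng(1)
noise = rng.normal(1000.0, 10.0, size=500)    # constant star: 1/eta near 0.5
phase = np.linspace(0.0, 4.0 * np.pi, 500)
smooth = 1000.0 + 50.0 * np.sin(phase)        # smooth variable: 1/eta >> 0.8
```

For the white-noise series, $1/\eta$ sits near the expected value of 0.5, while the smooth sinusoid drives it far above the 0.8 cut-off used for reference-star screening.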
For the purpose of excluding all variable stars from the reference stars, we use $1/\eta$ to identify variable sources, because the robust StD parameter is insensitive to occasional variations (e.g. the variations of flare stars). Since the threshold of the variability index varies with the instrumental magnitude, we defined a statistic $\epsilon_{\frac{1}{\eta}}=[\frac{1}{\eta}- \nu(\widetilde{m}) ]/\sigma (\widetilde{m})$, where $\nu(\widetilde{m})$ and $\sigma (\widetilde{m})$ are the median and robust standard deviation of the inverse von Neumann ratios of the TMTS light curves at a given magnitude, respectively. The parameter $\epsilon_{\frac{1}{\eta}}$ quantifies the significance of variability for a light curve. As introduced above, although identifying astrophysically variable stars via the variability indices has a low true positive rate, these indices can identify stars of constant luminosity very reliably, since spurious light variations caused by poor image quality can only exclude constant stars from the selection rather than admit variable ones. The setting of thresholds for variability indices is usually arbitrary (see \citealt{ZTF+2019+first,Nidever+etal+2021+survey,Kupfer+etal+2021}). In the photometric calibration process (see Section~\ref{sec:reduction}), a fixed threshold $1/\eta\leq0.8$ is empirically set to identify non-variable sources. This tight threshold ensures $\epsilon_{\frac{1}{\eta}} \lesssim 1.5$ for the reference stars at all observed magnitudes, corresponding to the exclusion of about 12\% of the reference-star candidates. Furthermore, we compare the density distributions of the inverse von Neumann ratio for the individual telescopes of the TMTS system.
As Figure~\ref{fig:neumann_dist} shows, the $1/\eta$ distributions of the four telescopes are roughly consistent with one another, except that telescopes \#1 and \#2 have slightly more concentrated distributions, implying that the capability of variability detection is almost equivalent for the four telescopes. Therefore, the same threshold on the variability indices is adopted for all four telescopes. \subsection{Periodicity detection} \label{sec:periodicity_detection} \begin{figure} \includegraphics[width=0.47\textwidth]{figures/periodic_prob_histo.pdf} \caption{ \textbf{ Cumulative distribution function (CDF) of the unmodified false alarm probability (FAP) for periodicity detection in the \tmtsdata dataset. } The bin size is 0.5. The grey solid line represents the FAPs obtained from the \tmtsdata light curves, and the red solid line is the model fitted to the grey bins above 1$-$CDF$=0.1$. The grey dashed line represents the FAP distribution derived from 10,000 simulated time series obeying a normal distribution, and the blue solid line is the ideal null distribution with 1$-$CDF$=$FAP. } \label{fig:periodic_prob} \end{figure} Because some ``bad'' measurements can make the sampling nonuniform, we test the periodicity of the TMTS light curves using the Lomb--Scargle periodogram (LSP hereafter; \citealt{Lomb+1976,Scargle+1982,VanderPlas+2018+LSP_understanding}), which is well suited to unevenly sampled data. The LSP here is defined as \begin{equation} \begin{aligned} P(f)=\frac{1}{2 \sigma^2}\times \left \{ \frac{ \left [\sum\limits_{i=1}^{N} F_{i} \times \cos\, 2\pi f(t_i-\tau) \right ]^2}{\sum\limits_{i=1}^{N} \cos^2\, 2\pi f(t_i-\tau)} \right . \\ +\left .
\frac{ \left [\sum\limits_{i=1}^{N} F_{i} \times \sin\, 2\pi f(t_i-\tau) \right ]^2}{\sum\limits_{i=1}^{N} \sin^2\, 2\pi f(t_i-\tau)} \right \}, \\ {\rm and}\,\, \tau= \frac{1}{4 \pi f} \times \left ( \arctan \, \frac{\sum\limits_{i=1}^{N} \sin\, 4\pi f t_i }{\sum\limits_{i=1}^{N} \cos\, 4\pi f t_i } \right ) \\ \end{aligned} \end{equation} where $F_{i}$ is the calibrated flux at epoch $t_i$ (see Section~\ref{sec:reduction}), $f$ is the test frequency, and $\sigma^2$ is the variance of the fluxes. Since the LSP here is normalized by the variance, white noise in the LSP follows the exponential distribution $\exp{(-z)}$ (see also \citealt{Coughlin+etal+2020+ZTFprojectionII}). We determine the most likely photometric period by searching for the highest LSP peak $P_{\rm max}$ in the frequency range $ 3/(2T) \leq f \leq { f_{\rm nyq}} $, where $T$ is the time span of the observations and $f_{\rm nyq}$ is the (pseudo-)Nyquist frequency \citep{VanderPlas+2018+LSP_understanding}, estimated as half of the average sampling rate ($\approx 1/75$~Hz). Note that the observations must cover at least one and a half cycles before the periodicity can be determined. Based on the cumulative distribution function (CDF) of $\exp{(-z)}$ and the multiplicative property of independent probabilities, the false alarm probability (FAP; see \citealt{Lomb+1976}) of the periodicity can be estimated as \begin{equation} {\rm FAP}=1-[1-\exp{({ -P_{\rm max} })}]^{N_{\rm eff}} \label{Eq+FAP} \end{equation} where $N_{\rm eff}$ is the number of independent frequencies, approximately $N_{\rm eff}=f_{\rm nyq}T$ \citep{VanderPlas+2018+LSP_understanding}, namely half the total number of epochs. Note that this FAP estimate relies entirely on the white-noise assumption.
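As a concrete illustration, the power and FAP defined above can be evaluated directly. The following is a straightforward, unoptimized NumPy sketch on a synthetic light curve (the fluxes are mean-subtracted before the power is computed, which the exponential null distribution presumes; this is not the pipeline implementation):

```python
import numpy as np

def lomb_scargle_power(t, f_obs, freq):
    """Variance-normalized Lomb-Scargle power, one value per test frequency.

    arctan2 is used as a quadrant-safe form of the arctan in the tau formula.
    """
    y = f_obs - f_obs.mean()
    var = y.var()
    p = np.empty(len(freq))
    for k, f in enumerate(freq):
        tau = np.arctan2(np.sum(np.sin(4 * np.pi * f * t)),
                         np.sum(np.cos(4 * np.pi * f * t))) / (4 * np.pi * f)
        arg = 2 * np.pi * f * (t - tau)
        c, s = np.cos(arg), np.sin(arg)
        p[k] = ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s)) / (2 * var)
    return p

def fap(p_max, n_eff):
    """White-noise false alarm probability of the highest peak."""
    return 1.0 - (1.0 - np.exp(-p_max)) ** n_eff

# hypothetical light curve: ~1/75 Hz cadence, 0.1-day period, white noise
t = np.arange(300) * 75.0 / 86400.0                  # days
rng = np.random.default_rng(2)
y = 1000.0 + 20.0 * np.sin(2 * np.pi * t / 0.1) + rng.normal(0.0, 5.0, t.size)
T = t[-1] - t[0]
f_nyq = 0.5 * (t.size - 1) / T                       # half the mean sampling rate
freq = np.linspace(1.5 / T, f_nyq, 2000)             # 3/(2T) <= f <= f_nyq
power = lomb_scargle_power(t, y, freq)
best_f = freq[np.argmax(power)]                      # cycles per day, ~10 here
prob = fap(power.max(), n_eff=f_nyq * T)             # ~half the number of epochs
```

With roughly 2.6 cycles covered, the peak lands at the injected frequency and the white-noise FAP is vanishingly small.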
However, LSP power tends to be higher at lower frequencies because of red noise, so the resultant FAP can be seriously underestimated, especially at the low-frequency end. Owing to the relatively short duration of our continuous photometry, the LSP is more likely to be polluted by red noise generated by non-periodic or long-period variations. Therefore, we also search for high LSP powers in the lower frequency range (i.e. $f < 3/(2T) $), since strong powers in this range (denoted $P^{\rm red}$) are very likely caused by red noise rather than real periodic behavior. All light curves with $P^{\rm red} > P_{\rm max} $ were marked to indicate possible red noise in their LSPs. Several examples of the period search are shown in Figure~\ref{fig:lsp_and_phivv}. Note that both panels \emph{ii-b} and \emph{v-b} have { higher powers $P^{\rm red}$} below the frequency threshold, implying that these sources suffered from non-periodic variations during the observations. \begin{figure*} \includegraphics[width=0.94\textwidth]{figures/lc_lsp_phivv_example.pdf} \caption{ \textbf{ Examples of detections of periodic variables and flare stars.} Rows \emph{i} to \emph{v} correspond to a W UMa-type eclipsing binary, an RR Lyr-type variable, a $\delta$ Scuti star, an AM Her-type cataclysmic variable and a multi-peak flare, respectively. Columns \emph{a}, \emph{b} and \emph{c} show the TMTS light curves, Lomb--Scargle periodograms, and time series of $\phi_{\rm VV}$, respectively. The red lines in column \emph{b} represent the frequency threshold (i.e., $3/(2T)$) used to investigate the red noise in the Lomb--Scargle periodograms. { Notice that the Lomb--Scargle periodograms shown here are a zoom-in version; the complete periodograms extend to frequencies even higher than 20~hour$^{-1}$.
} } \label{fig:lsp_and_phivv} \end{figure*} To check the periodicity FAPs obtained from the light curves in the \tmtsdata dataset, we plot their cumulative distribution function (CDF) in Figure~\ref{fig:periodic_prob}. For comparison, we also generated 10,000 simulated time series, each composed of 100--1000 random points obeying a normal distribution. As shown in Figure~\ref{fig:periodic_prob}, the periodicity FAPs calculated from these simulated time series follow exactly the ideal null distribution 1$-$CDF$=$FAP. This implies that our method of obtaining periodicity FAPs is valid if the TMTS measurements are independent and follow a Gaussian distribution. However, the FAPs estimated from the real dataset deviate significantly from the ideal null distribution. Such a deviation was also found in the datasets of other survey missions \citep{Drake+etal+2013,Drake+etal+2014+catalina_period}. Since detectable periodic variable stars make up only a small percentage of all observed sources \citep{Drake+etal+2014+catalina_period,Drake+etal+2017+catalina_period,Chen+etal+2020+ZTF_periodic,ZTF+DR1+2020}, the periodicity FAP discussed here must be seriously underestimated in practice. \begin{figure} \includegraphics[width=0.47\textwidth]{figures/modified_periodic_prob_histo.pdf} \caption{ \textbf{ Cumulative distribution function (CDF) of the modified false alarm probability (FAP) for periodicity detection in the \tmtsdata dataset. } The bin size is 0.5. The grey solid line represents the FAP modified by $k=0.225$, and the blue solid line is the ideal null distribution with 1$-$CDF$=$FAP. } \label{fig:modified_periodic_prob} \end{figure} To derive more reliable estimates of the periodicity FAP, several methods have been developed, such as Baluev's method \citep{Baluev+2008+FAP_corrected} and the bootstrap method \citep{Ivezi+2014+book}.
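The simulation check described above is easy to reproduce. For evenly sampled data the LSP reduces to the classical periodogram, so a quick FFT-based version (a schematic stand-in for the 10,000-series test, not the exact procedure used here) suffices to show that Gaussian noise yields uniform FAPs:

```python
import numpy as np

rng = np.random.default_rng(3)
n_sim, n_pts = 2000, 256
faps = np.empty(n_sim)
for i in range(n_sim):
    y = rng.normal(size=n_pts)
    y = y - y.mean()
    # evenly sampled: the LSP reduces to the classical periodogram, and
    # after normalization by the variance the white-noise powers are ~Exp(1)
    z = np.abs(np.fft.rfft(y)[1:-1]) ** 2 / (n_pts * y.var())
    faps[i] = 1.0 - (1.0 - np.exp(-z.max())) ** z.size
faps_sorted = np.sort(faps)   # under the null, 1 - CDF(FAP) = FAP
```

Under the null hypothesis the resulting FAPs are uniform on (0, 1), i.e. 1$-$CDF$=$FAP, matching the blue line in Figure~\ref{fig:periodic_prob}.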
As an alternative, we construct a true null distribution from the real dataset, since we have already calculated the FAPs for all available light curves of the \tmtsdata { dataset}. By assuming that most observed sources (typically $\gtrsim 90\%$) are non-periodic and that periodic sources tend to have higher LSP peaks and thus lower FAPs, we can take { the highest 90\% of FAPs as approximate} null samples. We found that these null samples (corresponding to the 1$-$CDF$> 0.1$ part of the grey solid line in Figure~\ref{fig:periodic_prob}) follow a straight line in the logarithmic 1$-$CDF versus logarithmic FAP diagram, implying that the null distribution of the real data may differ from the ideal null distribution by only a constant $k$, namely $\log_{10}(1-{\rm CDF})=k\times\log_{10} {\rm FAP}$. By fitting the distribution of these null samples, we obtained $k=0.225$ for all light curves of the \tmtsdata { dataset}, and the modified false alarm probabilities can then be expressed as $ {\rm FAP_{mod} }= {\rm FAP}^k$. Notice that the $k$ value varies among datasets, as the performance of the flux measurements depends on the observing conditions. Thus, a \tmtslocal $k$ value is determined from the daily observation dataset in real time, and \tmtslocal modified FAPs are generated for the light curves. The distribution of the modified FAP is shown in Figure~\ref{fig:modified_periodic_prob}, where one can see that the area between the grey line and the blue line corresponds to the candidates of periodic variables. About $99\%$ of the TMTS light curves [i.e. $\log_{10}(1-{\rm CDF}) >-2$] match the ideal null distribution, implying that the (detectable) periodic light curves account for only a few thousandths of the \tmtsdata dataset. { Notice that, owing to the limited duration of the current observations (typically within a night), it is difficult to reveal long-period variables (e.g. $P>0.5$~day; see details in Section~\ref{sec:periodic_variable_sources} ) using the \tmtsdata dataset.
However, the number of detected periodic variables will grow substantially as the supernova survey of TMTS proceeds. } \subsection{Flare search} A common method of flare detection is to search for outliers in light curves from which non-flare variations (e.g. large-amplitude and long-duration variations) have been removed. To avoid false flares caused by instrumental errors or cosmic rays, the flare search often requires at least two consecutive outliers rather than single-point outliers \citep{Walkowicz+etal+2011+flares,Osten+etal+2012+flare_archive,Yang+etal+2018+SC_LC_flares}. In order to remove the non-flare variations, \cite{Osten+etal+2012+flare_archive} use two different models to fit periodic and non-periodic light curves, respectively. To avoid comparing the goodness of fit of two models and to speed up the data analysis, we fit all light curves with a unified compound model consisting of a 4th-order Fourier series \citep{Pojmanski+2002+06hvariable, Kim+etal+2016+UPSILON,Drake+etal+2014+catalina_period,Drake+etal+2017+catalina_period} and a 2nd-order polynomial. The polynomial terms here are used to offset potential long-timescale variations. Notice that the purpose of the fitting here is to remove the non-flare variations, rather than to model the true variations of the light curves. The compound model is expressed as \begin{equation} F_{i}^{\rm model}=\sum\limits_{j=0}^2 c_j\times t_i^{j} + \sum\limits_{k=1}^4 a_k\times \cos (2\pi \,k\,f_{\rm max}\,t_i ) + b_k\times \sin (2\pi \,k\,f_{\rm max}\, t_i ) \end{equation} where $f_{\rm max}$ is the frequency corresponding to the highest power $P_{\rm max}$ in the LSP (see Section~\ref{sec:periodicity_detection}). Notice that even if a light curve is non-periodic, we can still find an $f_{\rm max}$ in its LSP. For such a light curve, our model is still applicable, while the best-fit values of $a_k$ and $b_k$ are very small.
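Because $f_{\rm max}$ is fixed by the LSP, the compound model is linear in its eleven coefficients and can be fitted with a single linear least-squares solve. A schematic NumPy version (hypothetical data, not the pipeline implementation):

```python
import numpy as np

def fit_compound_model(t, flux, f_max):
    """Linear least-squares fit of the 2nd-order polynomial plus
    4th-order Fourier compound model, with f_max fixed from the LSP."""
    cols = [t ** j for j in range(3)]                  # c_j * t^j terms
    for k in range(1, 5):                              # a_k, b_k Fourier terms
        cols.append(np.cos(2.0 * np.pi * k * f_max * t))
        cols.append(np.sin(2.0 * np.pi * k * f_max * t))
    design = np.stack(cols, axis=1)                    # (N, 11) design matrix
    coef, *_ = np.linalg.lstsq(design, flux, rcond=None)
    return design @ coef                               # model fluxes F_i^model

# hypothetical periodic light curve with f_max = 12 cycles per day
rng = np.random.default_rng(4)
t = np.arange(400) * 75.0 / 86400.0                    # days, ~1/75 Hz cadence
flux = (1000.0 + 30.0 * np.cos(2.0 * np.pi * 12.0 * t)
        + rng.normal(0.0, 3.0, t.size))
model = fit_compound_model(t, flux, f_max=12.0)
residual = flux - model                                # input to the flare statistic
```

After the fit, the residuals carry only the noise (and any flares), which is exactly what the $\phi_{\rm VV}$ statistic below operates on.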
The residual flux $F_{i}^{\rm res}= F_{i} - F_{i}^{\rm model}$ can then be obtained, and the normalized residual is calculated as \begin{equation} r_i= \frac{F_{i}^{\rm res}- \overline{F^{\rm res}}}{\sigma^{\rm res}} \end{equation} where $\overline{F^{\rm res}}$ and $\sigma^{\rm res}$ represent the median and the robust standard deviation of the residual fluxes, respectively. It is worth noting that we adopted the robust StD instead of the uncertainty of the flux measurements, since the former reveals the true scatter of the residual fluxes. In this way, the normalized residuals (except the points corresponding to flares) will obey a normal distribution, which is the prerequisite for estimating the significance of selected flare candidates. The flare candidates are selected by locating the maximum $\phi_{\rm VV}$ in a time series \citep{Osten+etal+2012+flare_archive}, where $\phi_{\rm VV}$ is defined as the product of two consecutive normalized residuals, \begin{equation} \phi_{\rm VV, i}=r_{i} \times r_{i+1} \end{equation} Examples of the flare search using the time series of $\phi_{\rm VV}$ are also shown in Figure~\ref{fig:lsp_and_phivv}, where one can see that only row \emph{v} (corresponding to a flare) has very strong $\phi_{\rm VV}$ values in its time series. To estimate the false discovery rate (FDR), some studies \citep{Kowalski+etal+2009+FDR_flare,Osten+etal+2012+flare_archive,Paudel+etal+2018+K2_flare,Paudel+etal+2020_LDwarf_flares} have applied the FDR analysis following \cite{Miller+etal+2001+FDR}. However, Miller's method is not applied in our project, because it requires a large number of null samples manually selected from the light-curve dataset, which cannot be done automatically in real time by our pipeline. Therefore, we explored the mathematical form of the null distribution of $\phi_{\rm VV}$. As introduced above, the (non-flare) normalized residual fluxes obey a normal distribution.
Assuming that successive residuals are independent of each other, the product of two normalized residuals (i.e. $\phi_{\rm VV}$) should follow the probability density function (PDF) \begin{equation} {\rm PDF}= \frac{K_0(\vert \phi_{\rm VV} \vert)}{\pi}= \frac{1}{\pi} \int^{\infty}_{0} \frac{\cos (\phi_{\rm VV}\,t)}{\sqrt{t^2+1}}\,{\rm d}\,t \end{equation} where $K_0$ is the special ($n=0$) case of the \emph{modified Bessel function of the second kind} \citep{Abramowitz+Stegun+1972+mathematical+function}. Figure~\ref{fig:phivv_distribution} shows the density distribution of about 36~million $\phi_{\rm VV}$ measurements from the TMTS observations conducted on December 19, 2020. The density distribution of $\phi_{\rm VV}$ is well characterized by the Bessel function, implying that the CDF of the Bessel function can be applied to estimate the FDR of flare candidates. \begin{figure} \includegraphics[width=0.47\textwidth]{figures/phivv.pdf} \caption{ \textbf{ The density distribution of $\phi_{\rm VV}$ calculated from the TMTS observations on December 19, 2020. } The observational data include about 36~million $\phi_{\rm VV}$ measurements from the light curves of about 82 thousand sources. The bin size is 0.06. } \label{fig:phivv_distribution} \end{figure} Similar to Eq~\ref{Eq+FAP}, the FDR of flares can be estimated as \begin{equation} {\rm FDR}=1-[1 - \frac{1}{2\pi} \int^{\infty}_{ \phi_{\rm VV, max} } \,K_0(\vert x \vert) \,{\rm d}\,x ]^{N-1} \label{Eq+FDR} \end{equation} where $\phi_{\rm VV, max}$ is the maximum value of $\phi_{\rm VV}$ in a time series and $N-1$ is the number of $\phi_{\rm VV}$ values. Notice that, for the purpose of selecting flares, we must exclude the $\phi_{\rm VV}$ values derived from the product of a pair of negative $r_i$ values (hence the integral probability term is multiplied by a factor of $\frac{1}{2}$ in Eq~\ref{Eq+FDR}).
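A sketch of this flare statistic and Eq.~\ref{Eq+FDR}, using SciPy's modified Bessel function (schematic code with synthetic residuals and an assumed helper name, not the pipeline's implementation):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import k0

def flare_fdr(r):
    """False discovery rate for the strongest phi_VV event in a series of
    normalized residuals r (assumed ~N(0,1) after detrending).

    Only pairs with both residuals positive count as flare candidates,
    hence the factor 1/2 in front of the K_0 tail integral.
    """
    phi = r[:-1] * r[1:]                       # phi_VV,i = r_i * r_{i+1}
    both_pos = (r[:-1] > 0) & (r[1:] > 0)
    if not np.any(both_pos):
        return 1.0
    phi_max = phi[both_pos].max()
    # tail probability of the product-normal null distribution
    tail, _ = quad(lambda x: k0(x) / (2.0 * np.pi), phi_max, np.inf)
    return 1.0 - (1.0 - tail) ** (r.size - 1)

rng = np.random.default_rng(5)
quiet = rng.normal(size=500)                   # no flare: FDR stays large
flare = quiet.copy()
flare[200:202] += 8.0                          # two consecutive strong outliers
```

Two consecutive strong positive residuals drive $\phi_{\rm VV,max}$ far into the $K_0$ tail, so the FDR of the flaring series collapses toward zero while the quiet series retains a large FDR.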
\begin{figure} \includegraphics[width=0.46\textwidth]{figures/phiVV_prob_histo.pdf} \caption{ \textbf{ Cumulative distribution function (CDF) of the unmodified false discovery rate (FDR) for flare detection in the \tmtsdata light curves. } The bin size is 0.2. } \label{fig:phivv_prob} \end{figure} The cumulative distribution function of flare FDRs for about 4.9~million { light curves of the \tmtsdata dataset} is shown in Figure~\ref{fig:phivv_prob}. The FDR distribution generated from 10,000 simulated random time series (see more in Section~\ref{sec:periodicity_detection}) matches the ideal null distribution well, suggesting that our formula for calculating the FDR is applicable to normally distributed and independent residuals. However, as successive normalized residuals are not completely independent of each other, the FDRs generated from the \tmtsdata dataset deviate significantly from the ideal null distribution. Here we applied the same methods and assumptions described in Section~\ref{sec:periodicity_detection} to modify the FDR. By fitting the cumulative distribution above 0.1, we obtained $k=0.376$ for the overall \tmtsdata data and modified the false discovery rate as ${\rm FDR}_{\rm mod}={\rm FDR}^k$. Moreover, for each daily observation dataset, our data-analysis pipeline automatically determines a \tmtslocal $k$ value, which modifies the FDR derived from that day's observations more accurately. \subsection{Cross-match with other catalogs} In order to obtain additional information (e.g. distance and radial velocity) for source selection, we have cross-matched all of the TMTS sources with the Gaia DR2 database and the LAMOST DR7 (including both low- and medium-resolution spectra) catalogs.
\subsubsection{Gaia} The Gaia DR2 covers 1.69 billion sources brighter than 21 mag, among which astrometric positions, parallaxes and proper motions are available for more than 1.33 billion sources \citep{Gaia_collaboration+2018+data}. Most of these sources also have photometric data in the $G$ (330--1050~nm), $G_{\rm BP}$ (330--630~nm) and $G_{\rm RP}$ (630--1050~nm) bands. Out of the 4.87 million light curves from the \tmtsdata dataset, about 4.83 million light curves (99\%) are found to have Gaia DR2 counterparts. The remaining sources without Gaia counterparts are either transients or bad detections. After removing sources located in multiple LAMOST plates or observed repeatedly by different TMTS telescopes, the total number of Gaia DR2 counterparts is 4.26 million. Among them, about 4.18 million (98\%) Gaia sources have parallax measurements, but only 2.63 million (62\%) have reliable parallax measurements (i.e. $\sigma_\varpi/\varpi \leq 0.2$ here). \begin{figure} \includegraphics[width=0.47\textwidth]{figures/gaia_HR.pdf} \caption{ \textbf{ Density distribution of the Gaia DR2-TMTS sources across the Hertzsprung–Russell (HR) diagram.} The bin size is 0.1$\times$0.1 mag$^2$, and the total number of sources is 2.63 million. } \label{fig:gaia_hr} \end{figure} Based on the parallax, $G$ magnitude and color ($G_{\rm BP}-G_{\rm RP}$), we can plot the Hertzsprung-Russell (HR) diagram for these Gaia DR2-TMTS sources, which is a valuable tool for selecting white dwarfs \citep{Esteban+etal+2018+gaia_wd,Fusillo+etal+2019+gaia_wd,Pelisoli+Joris+2019+ELM,Kim+etal+2020+wd} and hot subdwarf stars \citep{Geier+etal+2019+hot_subdwarf,Geier+2020+subdwarf}. As shown in Figure~\ref{fig:gaia_hr}, the Gaia DR2-TMTS sources cover a wide area across the HR diagram. Reddening corrections were not applied to these sources.
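The parallax-reliability cut and the HR-diagram coordinates follow directly from the Gaia quantities. A sketch (our own function name; it takes the parallax in mas, uses $d=1/\varpi$, and applies no reddening correction, as in the text):

```python
import numpy as np

def hr_coordinates(g_mag, bp_rp, parallax_mas, parallax_err_mas, max_rel_err=0.2):
    """Return (BP-RP color, absolute G magnitude) for sources with a
    reliable parallax, i.e. sigma_parallax / parallax <= max_rel_err."""
    plx = np.asarray(parallax_mas, dtype=float)
    err = np.asarray(parallax_err_mas, dtype=float)
    good = (plx > 0) & (err / plx <= max_rel_err)
    # M_G = G + 5 log10(parallax/mas) - 10, equivalent to G - 5 log10(d/10pc)
    g_abs = np.asarray(g_mag)[good] + 5.0 * np.log10(plx[good]) - 10.0
    return np.asarray(bp_rp)[good], g_abs
```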
The high-density areas in the HR diagram correspond to giant stars and main-sequence stars, which include some classes of variables, such as pulsating stars and eclipsing binaries. The TMTS has also captured a small number of white dwarfs in its first-year observations. By applying a simple set of cuts on $G_{\rm abs}$ and $G_{\rm BP}-G_{\rm RP}$ (Eqs.~2--5 in \citealt{Fusillo+etal+2019+gaia_wd}), we identified 565 ($\approx$0.02\%) white dwarf candidates out of the 2.63 million TMTS sources. Furthermore, we can identify cataclysmic variables (CVs, \citealt{Pala+etal+2020+gaia_CV}) and $\delta$ Scuti stars among the periodic variables (see details in Section~\ref{sec:periodic_variable_sources}). \subsubsection{LAMOST} From 2011 to 2019, the LAMOST spectroscopic survey obtained 10.6 million low-resolution ($R\sim 1800$) spectra and 11.4 million single-exposure medium-resolution ($R\sim 7500$) spectra for about 8.9 million LAMOST targets brighter than 17.5 mag (see more in \citealt{Cui+etal+2012+LAMOST,Zhao+etal+2012+LAMOST} and \url{http://dr7.lamost.org}). In the LAMOST DR7 catalog \emph{v1.2}, all of the LAMOST targets have already been cross-matched with the Gaia DR2 catalog. Atmospheric parameters (including effective temperature, surface gravity and metallicity) can be inferred from the observed spectra by applying the LAMOST Stellar Parameter Pipeline (LASP). These abundant spectral parameters from LAMOST help identify the variable stars found by the TMTS. Notably, LAMOST provides radial velocity (RV) measurements for millions of stars. These RV measurements were determined by LASP or by cross-correlation with KURUCZ synthetic templates \citep{Wang+etal+2019+RV_measurements}. As multi-epoch spectra are available for millions of objects, the LAMOST data can reveal radial velocity variations for a large number of stars. Hence, the LAMOST data can be used to select stars with significant RV variations (e.g.
eclipsing binaries, \citealt{Yang+etal+2020+EB_lamost}) and even binaries harboring an invisible high-mass companion (i.e. black hole binary candidates, \citealt{Thompson+etal+2019+Science,Liu+etal+2019+Nature,Yi+etal+2019+theory_bh,Zheng+etal+2019+lamost_bh}). \begin{figure} \includegraphics[width=0.47\textwidth]{figures/n_exposure_dist.pdf} \caption{ \textbf{Number distribution of LAMOST DR7 epochs (including both low-resolution and single-exposure medium-resolution spectra).} The blue squares represent the distribution for the LAMOST-TMTS sources. } \label{fig:n_exposure_dist} \end{figure} Figure~\ref{fig:n_exposure_dist} shows the number of LAMOST DR7 objects as a function of the number of corresponding spectra (including both low-resolution and single-exposure medium-resolution spectra). About 2.35 million sources ($32\%$ of LAMOST sources) have 2 to over 70 epochs, and the number of sources decreases with the number of epochs. After cross-matching the TMTS sources in the \tmtsdata dataset with the LAMOST DR7 catalog, we find that there are {626 thousand sources (8.5\% of the LAMOST sources)} in common. Among them, 285 thousand TMTS sources have multi-epoch spectra. For a five-year survey, the TMTS is expected to provide high-cadence photometric data for more than half of the LAMOST sources. \section{Preliminary Results} \label{results} In this section, we present some preliminary results from the first-year high-cadence photometric observations of the LAMOST fields with the TMTS. Results of the search for periodic variable sources are presented in Section~\ref{sec:periodic_variable_sources}, and those for flares in Section~\ref{sec:flare_stars}.
\subsection{Periodic Variable Sources} \label{sec:periodic_variable_sources} \begin{figure} \includegraphics[width=0.47\textwidth]{figures/periodic_variables_dist.pdf} \caption{ \textbf{ Period distribution of selected light curves from the \tmtsdata dataset.} The red and blue lines represent the sets of light curves selected with different ratios of the period of light variation to the observation duration. } \label{fig:periodic_variables_dist} \end{figure} \begin{figure} \includegraphics[width=0.47\textwidth]{figures/period_vsx.pdf} \caption{ \textbf{Period of the highest peak in the periodogram from the \tmtsdata dataset versus the period given by the International Variable Star Index (VSX).} The solid and dashed lines represent the relations $y=x$ and $y=2x$, respectively. } \label{fig:period_vsx} \end{figure} \begin{figure} \includegraphics[width=0.47\textwidth]{figures/phased_lc_examples.pdf} \caption{ \textbf{The phase-folded light curves for the two shortest-period stars from \tmtsdata.} The upper panel shows a $\delta$ Scuti star candidate with a period of 18.4~minutes and an amplitude of 0.03~mag; the lower panel shows a blue large-amplitude pulsator (BLAP) candidate with a period of 18.9~minutes and an amplitude of 0.26~mag. Note that the BLAP candidate was observed by telescopes \#1 (grey) and \#3 (red) on different nights. } \label{fig:phased_lc_examples} \end{figure} \begin{figure*} \includegraphics[width=0.94\textwidth]{figures/periodic_variables_hrd.pdf} \caption{ \textbf{Distribution of periodic variable star candidates across the HR diagram.} { All points represent the periodic variable star candidates selected from the \tmtsdata dataset, among which the eclipsing binaries, $\delta$ Scuti stars, CVs and other variables identified in the International Variable Star Index (VSX)} are highlighted with square, triangle, star and diamond symbols, respectively.
The open circles represent the periodic variable candidates that are not yet registered in VSX. The color of each symbol encodes the period corresponding to the maximum power in its LSP(s). The orange area indicates the instability strip. Note that the period given for eclipsing binaries here is the photometric period rather than the orbital period, since these candidates are not yet fully classified. } \label{fig:periodic_variables_hr} \end{figure*} The \tmtsdata light curves with periodic variations (typically shorter than 4~hours) are selected by the following criteria: (i) \tmtslocal periodic FAP $< 0.001$; (ii) no significant red noise; (iii) LC quality $> 95\%$. { As a result, 6626 light curves were selected. However, these criteria may admit a large number of light curves with false periodic variations, because a broad periodicity criterion was applied. In order to improve the true positive rate (TPR), we applied two sets of criteria to select the light curves with different ratios of photometric period to observation duration ($P/T$). To select light curves that cover at least two complete periods of light variation [i.e., new criterion (iv) $P \leq T/2$], we followed the previous selection criteria (i)--(iii). This resulted in 2835 candidate light curves. To collect light curves with periodic variations slightly longer than half of the observation duration (i.e., $T/2 < P \leq 2T/3$), we adopted the following tighter criteria: (i+) \tmtslocal periodic FAP $< 10^{-5}$; (ii) and (iii) are the same as the previous criteria; (iv+) $T/2 < P \leq 2T/3$; (v) the variability index $\epsilon_{\frac{1}{\eta}} \geq 3.0$ (see details in Section~\ref{sec:variability_detection}). With these modified criteria, 988 additional light curves were selected.
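The two-tier selection above can be expressed as a boolean filter over per-light-curve statistics (a sketch; the dictionary keys are invented, and the FAP, red-noise flag, LC quality and variability index $\epsilon_{1/\eta}$ are assumed to have been computed by earlier stages of the pipeline):

```python
def select_periodic(lc):
    """Apply the two-tier periodicity criteria to one light curve,
    given as a dict of statistics (invented key names)."""
    base = (not lc["red_noise"]) and lc["quality"] > 0.95
    # Tier 1: FAP < 1e-3 and at least two full cycles within the night.
    tier1 = base and lc["fap"] < 1e-3 and lc["period"] <= lc["duration"] / 2
    # Tier 2: T/2 < P <= 2T/3 requires a tighter FAP and a significant
    # variability index epsilon_{1/eta} >= 3.
    tier2 = (base and lc["fap"] < 1e-5
             and lc["duration"] / 2 < lc["period"] <= 2 * lc["duration"] / 3
             and lc["var_index"] >= 3.0)
    return tier1 or tier2
```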
The period distribution of all 3823 candidate light curves (corresponding to 3723 Gaia DR2 sources) is shown in Figure~\ref{fig:periodic_variables_dist}.} The periods of these selected candidates range from about 20 minutes to 7.5 hours (see Figure~\ref{fig:periodic_variables_dist}). The number of candidates increases linearly with the photometric period until it peaks at around 3~hours. For uninterrupted observations, the detectable period clearly depends on the observation duration within a night. Although the observation strategy prevents us from discovering longer-period variable stars, the number of short-period variables found by TMTS is very competitive (see also \citealt{ZTF+DR1+2020,Burdge+etal+2020+systematic,Chen+etal+2020+ZTF_periodic}). For a 5-year survey plan, TMTS is expected to reveal more than 20,000 periodic variable stars with periods shorter than 8 hours. Notably, 81 periodic variable star candidates (corresponding to 77 unique sources) have a photometric period below 1~hour, whereas the total number of periodic variable stars with periods below 1~hour in the International Variable Star Index (VSX, the version updated on May 31, 2021, \citealt{Watson+etal+2006+VSX}) is only 887. In the next few years, TMTS will greatly expand the sample of ultra-short-period variable stars. The vast majority of these sources are $\delta$ Scuti stars, although some blue large-amplitude pulsators (BLAPs, \citealt{Pietrukowicz+etal+2017+blap}) and ultracompact binaries (UCBs) are also likely captured. To distinguish the {BLAPs and} UCBs from the $\delta$ Scuti stars, we need to investigate their absolute magnitudes, colors and spectra. { The phase-folded light curves for the two shortest-period stars are shown in Figure~\ref{fig:phased_lc_examples}. Their periods (and amplitudes) are 18.4~minutes (0.03~mag) and 18.9~minutes (0.26~mag), respectively.
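Phase folding of the kind shown in Figure~\ref{fig:phased_lc_examples} is a one-liner once the period is known (a generic sketch, not the actual plotting code used for the figure):

```python
import numpy as np

def phase_fold(t, flux, period):
    """Fold a light curve on `period` (same units as `t`), sorted by phase."""
    phase = np.mod(np.asarray(t), period) / period
    idx = np.argsort(phase)
    return phase[idx], np.asarray(flux)[idx]
```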
With the \emph{Gaia} $G_{\rm abs}$ and $G_{\rm BP}-G_{\rm RP}$ color measurements, we inferred that the former is a low-amplitude $\delta$ Scuti star while the latter is a BLAP. Further photometric and spectroscopic observations are underway. We will study these ultra-short-period variable stars in separate papers.} All objects of these candidate light curves were cross-matched with the VSX. As a result, 1603 light curves have a recorded VSX counterpart, and almost all of these counterparts have a period measurement given by VSX. We plotted the period corresponding to the maximum power in the TMTS LSP against the VSX period in Figure~\ref{fig:period_vsx}. The VSX periods generally coincide with the TMTS periods or twice the TMTS periods (typically for eclipsing binaries), except for a few inconsistent measurements ($\lesssim 2\%$ of all samples). These inconsistent measurements are typically caused by multi-period variable stars or spurious periods in light curves. All objects corresponding to candidate light curves were also cross-matched with the Gaia DR2 catalogue. Because some light curves correspond to the same source, these candidate light curves map to \textbf{3723} unique Gaia sources. Among these stars, \textbf{2987} sources have both a $G_{\rm BP}-G_{\rm RP}$ measurement and a reliable parallax measurement. Figure~\ref{fig:periodic_variables_hr} shows the distribution of these sources across the HR diagram. Both the absolute magnitudes and the colors of these sources were dereddened using the 3D dust map from \cite{Green+etal+2019+3dmap} and the \emph{DUSTMAPS Python} package\footnote{https://github.com/gregreen/dustmaps} \citep{Green+2018+python}. After cross-checking with the latest VSX catalogue, we find that more than half of our periodic variable star candidates are newly discovered.
In Figure~\ref{fig:periodic_variables_hr}, we compared these periodic variable star candidates with the $\delta$ Scuti instability strip, inferred by using the strip boundaries from \cite{Murphy+etal+2019+ds} and the evolutionary tracks of single stars from the Padova Stellar Evolution Database \citep{Girardi+etal+2000+track}. As a result, about 940 periodic variable stars were found to lie within the instability strip, among which about 800 candidates could be newly discovered $\delta$ Scuti stars. Note that this number is still an underestimate, as more than 700 candidates lack good parallax measurements and thus cannot be placed in the HR diagram. This demonstrates the high efficiency of the TMTS in discovering short-period variables. In comparison, estimating the number of eclipsing binaries is more difficult, as they are distributed over a much wider area of the HR diagram than the $\delta$ Scuti stars. Owing to the characteristic profiles of their light curves, eclipsing binaries can be identified by random forests (RFs) or neural networks (NNs). By adopting the cyclic-permutation invariant neural networks from \cite{Zhang+Bloom+2021+nn}, we find about 1900 eclipsing binaries among our 3723 periodic variable star candidates, of which about 600 are newly discovered (see details in Xi et al. in prep.). Note that these results do not include eclipsing binaries covered by TMTS observations for less than 1.5 photometric periods (i.e., 0.75 orbital periods). These longer-period eclipsing binaries are expected to be identified by eye or by new neural-network algorithms.
\begin{figure} \includegraphics[width=0.47\textwidth]{figures/periodic_variables_logg_teff.pdf} \caption{ \textbf{LAMOST surface gravity versus effective temperature for periodic variable candidates from the \tmtsdata dataset.} The red triangles and blue squares represent the $\delta$ Scuti stars and eclipsing binaries identified in VSX, respectively. The unclassified periodic variable candidates are labeled as grey circles. The green dashed line indicates the cut-off temperature for selecting $\delta$ Scuti stars (see also \citealt{Murphy+etal+2019+ds}). } \label{fig:periodic_variables_logg_teff} \end{figure} { Since about 20\% of the periodic variable star candidates lack reliable \emph{Gaia} parallax measurements, their identification relies on additional spectroscopic observations. Among the 3723 periodic variable star candidates, 1252 sources were observed by LAMOST and 814 sources have measurements of both surface gravity (log$\,$g) and effective temperature ($\rm T_{eff}$), which means that a part of the candidates can be identified using the parameters from the LAMOST spectra. This is because the two dominant classes of short-period variable sources, $\delta$ Scuti stars and eclipsing binaries, occupy distinct regions in the $\rm T_{eff}$--log$\,$g diagram (see Figure~\ref{fig:periodic_variables_logg_teff}). With the cut-off temperature of 6500~K from \cite{Murphy+etal+2019+ds}, 290 new $\delta$ Scuti star candidates are selected from the 502 unidentified periodic variable candidates, and the remaining unidentified candidates are very likely eclipsing binaries. } LAMOST started a time-domain medium-resolution spectroscopic survey in 2018, and one of the main scientific goals of this survey is to discover quiescent or noninteracting black hole binaries in our Galaxy. A few binaries harboring an invisible high-mass companion (e.g.
$M_{\rm unseen} > 3~M_\odot$) have been proposed as black hole binary candidates \citep{Thompson+etal+2019+Science,Liu+etal+2019+Nature,Gu+etal+2019+bh,Zheng+etal+2019+lamost_bh,Yi+etal+2019+theory_bh,Clavel+etal+2021+bh_test,Gomel+etal+2021+ellipsoidals_III}. With multi-epoch RV measurements from the spectroscopic surveys and periods provided by wide-field photometric survey missions like TMTS, the mass functions, which place lower limits on the companion masses, are easily estimated for the observed binaries. Note that multi-epoch spectroscopy tends to favor the discovery of short-period binaries, since these systems usually have larger Keplerian velocities. As introduced by \cite{Yang+etal+2020+EB_lamost}, most spectroscopic binaries (SBs) identified from LAMOST have an orbital period shorter than 0.6~day. In the next few years, the TMTS is expected to play an important role in providing a large sample of sources with short-period light variations. As the time-domain surveys progress, LAMOST and TMTS observations will constrain the mass functions of a large number of single-lined binaries and thus provide an opportunity to search for black hole or neutron star binaries. \subsection{Flare Stars} \label{sec:flare_stars} As a subclass of eruptive variable stars, flare stars exhibit flaring behaviour during which their brightness increases dramatically within a few minutes and then decays over several hours. Since the start of the \emph{Kepler} mission, about 3,400 stars have been found to flare according to the Q1--Q17 (Data Release 25) long-cadence (LC) data of \emph{Kepler} \citep{Yang+Liu+2019+flare_catalog}. However, only about 200 flare stars have been identified through its short-cadence (SC) data, since only about 5,000 targets in \emph{Kepler} have SC observations \citep{Balona+etal+2015+flares,Yang+etal+2018+SC_LC_flares}.
Compared to the LC data, the good time resolution of the SC data allows further study of the morphology of flares \citep{Balona+etal+2015+flares}. During the high-cadence observations of the LAMOST sky area, TMTS has also captured a series of flare events. Because flares cause only occasional variations in the light curves, the variability indices tend to be less sensitive to them than to periodic variations. Therefore, we relaxed the criterion on $\epsilon_{\frac{1}{\eta}}$ when identifying flares. On the other hand, the flare search depends strongly on the observing conditions, since two consecutive outliers in a light curve are already enough to produce a flare candidate. Note that some spurious ``flares'' can be generated when objects fall on bad pixels of the detectors. These bad pixels usually distort the point spread function (PSF) of the sources, so the SExtractor Flags of the corresponding measurements are very likely non-zero. Hence, a very tight criterion is needed for the LC quality. \begin{figure} \includegraphics[width=0.47\textwidth]{figures/flare_variables_hrd.pdf} \caption{ \textbf{Distribution of the flare stars and candidates in the HR diagram.} The confirmed flare stars are labeled with red star symbols. } \label{fig:flar_star_hr} \end{figure} In summary, we searched for flares in the \tmtsdata light curves using the following criteria: (i) \tmtslocal flare FDR $< 10^{-5}$; (ii) $\epsilon_{\frac{1}{\eta}} > 2.0$; (iii) LC quality = 100\%; (iv) excluding observations taken under bad weather conditions. This finally resulted in 356 flare candidates, among which 42 flares were visually confirmed. Among them, 39 flares correspond to Gaia sources with both a $G_{\rm BP}-G_{\rm RP}$ color and a reliable parallax measurement.
As shown in Figure~\ref{fig:flar_star_hr}, these flare stars are distributed along the lower right branch of the main sequence in the HR diagram, with absolute magnitudes lying between 6.0~{\rm mag} and 12.0~{\rm mag}. Compared to flare stars discovered by other surveys, the 1-min cadence of uninterrupted photometry allows us to further study their shapes and durations (an example flare is shown in row \emph{v} of Figure~\ref{fig:lsp_and_phivv}). Over the 5-year survey plan of TMTS, the flares discovered with high-quality light-curve coverage are expected to significantly expand the sample of flares observed with short time resolution. \section{Summary} We present the methodology for detecting variables and preliminary scientific results from the first-year high-cadence monitoring of the LAMOST plates with the TMTS. From January 2020 to December 2020, the TMTS observed 188 LAMOST plates and generated 4.9~million uninterrupted light curves for over 4.2~million objects with a cadence of about 1 minute. We have applied the inverse von Neumann ratio, the Lomb–Scargle periodogram and Osten's method to detect variability, periodicity and flares in these light curves, respectively, and then estimated the corresponding significance using their cumulative distribution functions. A preliminary periodicity search reveals 3723 short-period variable candidates, with periods shorter than 7.5 hours. Hence, TMTS is expected to find more than 20,000 short-period periodic variables over a five-year observation plan. By plotting the TMTS sources across the HR diagram using the Gaia DR2 parameters, we estimate that at least 600 new eclipsing binaries and 800 new $\delta$ Scuti stars were discovered in 2020. Furthermore, over 40 flares with excellent temporal resolution were detected during the observations. Further analysis of the periodic variables and flares will be presented in forthcoming papers.
\section*{Acknowledgements} We acknowledge the support of the staff from Xinglong Observatory of NAOC during the installation, commissioning and operation of the TMTS system. This work is supported by the Ma Huateng Foundation, the National Natural Science Foundation of China (NSFC grants 12033003 and 11633002), and the National Program on Key Research and Development Project (grant no. 2016YFA0400803). X.W. is also supported by the Scholar Program of Beijing Academy of Science and Technology (BS2020002) and the Strategic Priority Research Program of the Chinese Academy of Sciences, Grant No. XDB23040100. Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope, LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. { This paper includes data collected with the TESS mission, obtained from the MAST data archive at the Space Telescope Science Institute (STScI). Funding for the TESS mission is provided by the NASA Explorer Program. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5–26555.
This research has made use of the International Variable Star Index (VSX, \citealt{Watson+etal+2006+VSX}) database, operated at AAVSO, Cambridge, Massachusetts, USA.} Some of the results in this paper have been derived using the HEALPix \citep{Gorski+etal+2005} package. \section*{Data Availability} The data underlying this article are subject to an embargo of about 10 months from the publication date of the article; however, the data can be shared upon reasonable request to the corresponding author. The light curves from the first-year survey of TMTS will be made publicly available in the TMTS Public Data Release 1 (in prep.) in 2022. \input{TMTS.bbl} \bsp \label{lastpage} \end{document}
\section*{Introduction} Networks and network theory have been utilized to represent and analyze the structure and function of a myriad of biological systems. These systems span scales from cells to ecosystems and include gene regulatory networks~\cite{barabasi_nat2004,alon_2007}, metabolic pathways~\cite{guimera_nat2005,duarte_2007}, disease dynamics~\cite{newman_pre2002,meyers_2005}, food webs~\cite{pascual_book2006,allesina_sci2008}, host-parasite webs~\cite{lafferty_pnas2006,flores_pnas2011}, and social interactions \cite{watts_nat1998,dodds_sci2003,christakis_2007}. In the process, structural archetypes have been identified including scale-free behavior, motifs, modularity, the emergence of hubs, and small-world structure~\cite{watts_nat1998,barabasi_sci1999,strogatz2001,girvan_2002,albert_rmp2002,newman_siam2003,newman_pnas2006,alon_book2007}. However, these theories do not typically incorporate the spatial constraints that underlie the location and connections amongst nodes and edges. Indeed, there are many examples of delivery and distribution networks where nodes and edges are physical structures embedded in space, e.g., leaf venation networks \cite{turcotte_jtb1998,brodribb_2010}, cardiovascular networks \cite{labarbera_1990,kassab_2006}, cortical networks \cite{zhang_pnas2000}, root networks \cite{waisel_2002}, ant trails \cite{latty_2011} and road networks \cite{masucci_2009}. Hence, theory is also needed to characterize biological networks whose structure is strongly influenced by physical constraints (for a review, see \cite{barthelemy_2011}). Although the theory of spatial networks is quite diverse, the theory as applied to resource delivery networks in biology often involves certain simplifying assumptions. For example, in fractal branching theory, a network is seen as a perfectly self-similar structure, e.g. a dividing binary tree \cite{rashevsky62}. 
A prominent theory of metabolic scaling in mammals assumes the cardiovascular system is a fractal whose physical dimensions have evolved to optimally transport fluid from the aorta to capillaries \cite{west_sci1997,brown_eco2004}. An extension of this model to the above-ground structure of tree branches makes similar assumptions \cite{enquist_nat1998}. Both models have inspired a wide array of follow-up work with increased recognition that the original fractal branching assumption is overly simplistic \cite{dodds_2001,price_2007,savage_2008,banavar_2010,kolokotrones_nat2010}. For example, in reality, physical networks in biology have side branches and are not perfectly balanced binary trees \cite{turcotte_jtb1998}. Theories of side-branching resource delivery and distribution networks have their origins in the study of river networks. In a river network, streams merge together to form larger streams. However, small streams can merge into larger streams of all scales. The topological structure of river networks can be analyzed using the so-called Horton-Strahler order \cite{horton1945, strahler1957}. This scheme assigns an integer number to every branch of the network. The numbers represent different levels of the branch hierarchy, with larger numbers corresponding to the larger stream segments in the network. The Horton-Strahler ordering is the basis for the characterization of the statistical properties of river networks\cite{dodds_2000}, including the finding that river networks are fractal \cite{rodrigueziturbe_1997}. Moreover, the side-branching statistics first introduced by Tokunaga \cite{tokunaga_1978} can be used to characterize universal features of river networks and departures thereof \cite{dodds_pre2001}. Leaf venation networks are a prominent example of a physical delivery and distribution network whose structure possess numerous side branches. The structure of leaf venation networks has broad functional implications. 
For example, leaf vein density is positively correlated with photosynthetic rates~\cite{brodribb_2007} and also influences the extent to which leaves form a hydraulic bottleneck in whole plants~\cite{cochard_2004,sack_2006}. However, many leaves of higher plants (notably most leaves of angiosperm lineages) have reticulate venation networks, involving loops within loops \cite{brodribb_2010}. It has been hypothesized that reticulate patterns allow leaves to maintain the supply of water and nutrients to and from photosynthetically active chloroplasts even when flow through some edges in the network is lost~\cite{nardini_2001,katifori_2010,sack_pnas2008,corson_2010} due to mechanical damage or herbivory. Unfortunately, the Horton-Strahler ordering scheme developed for the analysis of river networks is not directly applicable to reticular networks. The reason is that loops lead to inconsistencies in the merging procedure in which a strictly hierarchical order is assigned to all edges. In this paper we propose a method that generalizes the Horton-Strahler order to planar, weighted reticular networks. Such networks encompass a large class of physical networks, where weights can often be obtained by estimating dimensions of edges, such as branch widths, or other indicators of cost or importance. While coinciding with the Horton-Strahler order for branching networks, our method also assigns hierarchical levels to the loops of the network. Moreover, it categorizes the branches into those responsible for the formation of loops and those forming the tree structure of the network. Edge weights play an important role in our algorithm, and we perform a theoretical analysis of possible effects of weight perturbations on the hierarchical levels. We find that the loop hierarchy is more robust to measurement error of network edge weights than is the tree hierarchy.
In the past, comparisons of the statistical similarity between river networks and leaves have been proposed, albeit such comparisons are restricted to leaves without loops~\cite{pelettier_2000}. Hence, we also discuss applications of the current method to the characterization and comparison of reticulate leaf venation networks as well as obstacles to extending this method to a more general class of networks. \section*{A graph theoretic approach to Horton-Strahler ordering of rooted trees} We start by reviewing the algorithm for constructing the Horton-Strahler order. For the remainder of the paper, we shall adopt the language of graph theory \cite{chartrand_book1985, bollobas_book1998}. Note that in graph theory, the ``leaves'' of the network are those vertices which only have a single edge that connects to them. In this context, the input to the Horton-Strahler ordering algorithm is simply a rooted tree, $T=(V, E)$, where $V$ is the set of vertices and $E$ is the set of edges. Given such a tree, the algorithm assigns a level, $\lambda(e)$, to each edge $e\in E$ in the following way: \begin{itemize} \item Assign level $1$ to all edges connected to the leaves of $T$. \item For each vertex having only one incident edge, $e$, with undefined $\lambda(e)$, let $l$ be the maximal level among the other incident edges. If there is a single incident edge of level $l$, then $\lambda(e)=l$. If there are two or more incident edges of level $l$, then $\lambda(e)=l+1$. \end{itemize} The result of this algorithm is illustrated in Fig. \ref{fig:strahler}(a). Conventionally in the study of river networks~\cite{rodrigueziturbe_1997}, this algorithm can be summarized by a single rule which states that the order of a downstream segment is equal to $$ \lambda = \textrm{max}(\lambda_1,\lambda_2)+\delta_{\lambda_1,\lambda_2} $$ where $\lambda_1$ and $\lambda_2$ are the order of the two upstream segments that are merging and $\delta$ is the Kronecker delta. 
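The two rules above amount to a simple recursion on the rooted tree. The following sketch is our own minimal illustration (the tree encoding and vertex names are hypothetical, not taken from any existing library); it assigns a level to every edge $(v, u)$ from a parent $v$ to a child $u$:

```python
def horton_strahler(children, root):
    """Horton-Strahler level of every edge (parent, child) of a rooted tree.

    `children` maps each vertex to the list of its children; leaves either
    do not appear as keys or map to an empty list."""
    levels = {}

    def edge_level(v):
        # Level of the edge entering v from its parent.
        kids = children.get(v, [])
        if not kids:                  # v is a leaf: its edge has level 1
            return 1
        sub = []
        for u in kids:
            l = edge_level(u)
            levels[(v, u)] = l
            sub.append(l)
        m = max(sub)
        # Two or more incident edges of maximal level raise the level by one.
        return m + 1 if sub.count(m) >= 2 else m

    order = edge_level(root)          # level of the segment at the outlet
    return levels, order
```

For the example tree rooted at `r` with `{'r': ['a'], 'a': ['b', 'c'], 'b': ['d', 'e']}`, the edges `(b, d)`, `(b, e)`, and `(a, c)` receive level $1$, while `(a, b)` and `(r, a)` receive level $2$.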
It is clear, however, that if the network has loops, as in Fig. \ref{fig:strahler}(b), the algorithm simply cannot proceed because there will always be a vertex having more than one incident edge with an undefined level. Moreover, loops in this graph seem to also form a hierarchy. For example, the loop outlined in Fig. \ref{fig:strahler}(b) by the red dotted line may belong to a higher level than the loop outlined by the blue dashed line. It turns out that such a hierarchy can be constructed and separated from the tree hierarchy if edges have weights and the graph itself is planar. An example of such a graph is shown in Fig. \ref{fig:strahler}(c), where the weights represent widths of the branches. \begin{figure}[tb] \begin{center} \includegraphics[width=0.5\textwidth]{pics/strahler.eps} \caption{\label{fig:strahler} \footnotesize Examples of networks with hierarchical structure with a common ``root'' or outlet denoted by the red dot at the bottom of each network: (a) Horton-Strahler stream order of branch hierarchy in a tree network; (b) Reticular network with possible loop hierarchy: the blue, dashed loop might be less important than the red, dotted loop; (c) Reticular network of (b) with weights.} \end{center} \end{figure} To guide the reader's intuition, we first provide an alternative description of the Horton-Strahler algorithm for the case when the tree $T$ is binary and weighted. Let $w:E\to\R$ be the weight function, that is, $w(e)$ is the weight of an edge $e\in E$. Since the tree is rooted, there is a partial order defined on $E$ as follows: $e_1\leq e_2$ if there is a path from the root to $e_1$ which passes through $e_2$ (in other words, $e_2$ is closer to the root than $e_1$). Let us assume that the weight function is strictly increasing with respect to this order, that is, $w(e_1)<w(e_2)$ if $e_1<e_2$. 
In this case, the Horton-Strahler order can be computed using the following procedure: \begin{itemize} \item Let $C\subset E$ be the set of edges incident to leaves of $T$, regarded as a set of disjoint components. For each $e\in C$ let $\lambda(e)=1$. \item Iterate through (the rest of the) edges in order of increasing weight. For each edge $e$ do the following: \begin{itemize} \item If $e$ shares a vertex with a single component $c_1\in C$, then merge $c_1$ and $e$ into a new component $c$, and let $\lambda(e)=\lambda(c)=\lambda(c_1)$. \item If $e$ shares a vertex with two components $c_1, c_2\in C$, then merge $c_1, c_2$, and $e$ into a new component $c$, and assign levels as follows: \begin{itemize} \item If $\lambda(c_1)=\lambda(c_2)$, then $\lambda(c)=\lambda(c_1)+1$. \item If $\lambda(c_1)\neq\lambda(c_2)$, then $\lambda(c)=\max{\{\lambda(c_1), \lambda(c_2)\}}$. \item $\lambda(e)=\lambda(c)$. \end{itemize} \end{itemize} \end{itemize} \section*{Ordering of planar weighted graphs} We now present the algorithm for constructing the generalized Horton-Strahler order. Let $G=(V, E)$ be a planar graph, not necessarily a tree, and again let $w:E\to \R$ be a weight function. We shall assume that $w$ is injective (i.e., all weights are unique). Otherwise, the ties will be resolved arbitrarily. The merging procedure for computing the Horton-Strahler order works with disjoint components, which, in the language of algebraic topology, are $0$-dimensional homology classes. Loops, on the other hand, are $1$-dimensional homology classes. Hence, we may try to construct a hierarchy by merging loops. Notice that the boundary of a face of the graph $G$ is a loop, and we can merge two neighboring faces by removing a shared edge. Using these two observations, we obtain the following merging procedure for loops: \begin{itemize} \item Sort the edges so that $w(e_1)<w(e_2)<\cdots<w(e_n)$, where $n=|E|$ is the number of edges. \item Let $\lambda(f)=1$ for each face $f$. 
\item Iterate through $e_1,\ldots, e_n$ and do the following: \begin{itemize} \item If $e_i$ is adjacent to a single face, skip to the next edge. \item If $e_i$ is adjacent to two distinct faces $f_L$ and $f_R$, remove $e_i$ from the graph, let $f_{merged}=f_L\cup f_R$, and assign the levels as follows: \begin{itemize} \item If $\lambda(f_L)=\lambda(f_R)$ then $\lambda(f_{merged})=\lambda(f_L)+1$. \item If $\lambda(f_L)\neq\lambda(f_R)$ then $\lambda(f_{merged})=\max\{\lambda(f_L), \lambda(f_R)\}$. \item $\lambda(e_i)=\min\{\lambda(f_L), \lambda(f_R)\}$. \end{itemize} \end{itemize} \end{itemize} \begin{figure}[tb] \begin{center} \includegraphics[width=0.5\textwidth]{pics/merge.eps} \caption{\label{fig:merge} \footnotesize An illustration of the loop merging procedure applied to the graph from Fig. \ref{fig:strahler}(c). Red, dashed edges are the ones removed during merging; the corresponding numbers show their levels. Levels of faces are encoded by color: white faces have level $1$, light blue faces have level $2$, and gold faces have level $3$. Note that $f_7$ is the unbounded face.} \end{center} \end{figure} A step-by-step illustration of loop merging applied to the network in Fig. \ref{fig:strahler}(c) is shown in Fig. \ref{fig:merge}. Notice that this procedure builds a rooted binary tree, where leaves correspond to the faces of $G$, and the rest of the vertices correspond to unions of these faces. The assignment of levels in this tree follows the original Horton-Strahler algorithm. It is also useful to remember that faces of $G$ are vertices of its dual graph, $G^*$, and merging faces of $G$ can be thought of as adding an edge to $G^*$. Hence, the two merging procedures that we described are, in some sense, dual. We shall refer to the binary tree of faces as the \emph{co-tree} of $G$, and denote it by $T^*(G)$. The construction of $T^*(G)$ removes edges from $G$ which are responsible for the existence of loops. We shall call such edges \emph{reticular}. 
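The face-merging procedure above maps naturally onto a union-find structure over the faces. The sketch below is our own minimal illustration, not a reference implementation: it assumes the two faces adjacent to each edge are already known from a planar embedding, and that the input edges are sorted by increasing weight (all names are hypothetical).

```python
def loop_hierarchy(sorted_edges):
    """sorted_edges: list of (edge, face_left, face_right) in order of
    increasing weight.  Returns the loop levels of the reticular edges and
    the list of remaining tree edges."""
    parent, level = {}, {}

    def find(f):
        parent.setdefault(f, f)
        level.setdefault(f, 1)              # every face starts at level 1
        while parent[f] != f:
            parent[f] = parent[parent[f]]   # path halving
            f = parent[f]
        return f

    reticular, tree = {}, []
    for e, fl, fr in sorted_edges:
        a, b = find(fl), find(fr)
        if a == b:                  # both sides already one face: tree edge
            tree.append(e)
            continue
        la, lb = level[a], level[b]
        reticular[e] = min(la, lb)  # lambda(e) = min of the two face levels
        parent[b] = a               # remove e, i.e. merge the two faces
        level[a] = la + 1 if la == lb else max(la, lb)
    return reticular, tree
```

For two unit squares `A` and `B` sharing the lightest edge `s`, with unbounded face `U`, removing `s` merges `A` and `B` into a level-$2$ face with $\lambda(s)=1$; the next-lightest edge merges that face with `U`, and every remaining edge then borders a single face and joins the spanning tree.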
Assignment of levels for such edges is based on the assumption that a merger should not be more significant than any of the merging elements. Notice that after removing reticular edges from $G$ we have a spanning tree of $G$, which we denote by $T(G)$. This tree captures the tree-like structure of the original network, and we can assign hierarchical levels to its edges using the original Horton-Strahler algorithm. We only need to determine which vertex should be the root, and we do this by finding the vertex with a single incident edge of maximal weight. Hence, we augment the procedure for constructing the loop hierarchy by the following statement: \begin{itemize} \item Apply the Horton-Strahler ordering to the remainder of the graph (which is a rooted tree). \end{itemize} The result of the complete algorithm applied to the network in Fig. \ref{fig:strahler}(c) is provided in Fig. \ref{fig:merge_result}. \begin{figure}[tb] \begin{center} \includegraphics[width=0.5\textwidth]{pics/merge_result.eps} \caption{\label{fig:merge_result} \footnotesize Hierarchical levels assigned to the loops and branches of the network from Fig. \ref{fig:strahler}(c). Edge levels are shown on the left, where black edges have order $1$, light blue edges have order $2$ and gold edges have order $3$; reticular edges are dashed. Face levels are shown in the co-tree on the right, where white nodes have order $1$, light blue nodes have order $2$ and gold nodes have order $3$. Leaves of the co-tree are labeled by the corresponding faces while other nodes are labeled by the reticular edges causing the merger of the two child nodes. Numbers $N_{ij}$ are Tokunaga statistics for the spanning tree and indicate the number of edges of level $j$ joining with edges of level $i$~\cite{tokunaga_1978}. Similarly, $M_{ij}$ are Tokunaga statistics for the reticulate co-tree and indicate the number of edges and faces of level $j$ merging with edges of level $i$. 
For both $M$ and $N$, statistics are only collected when $i>j$.} \end{center} \end{figure} The algorithm produces three types of output. First, it provides a unique set of orders to those edges involved in the non-reticulate component of the network (Fig.~\ref{fig:merge_result}, left panel). Second, it provides a unique set of orders to those edges involved in the formation of loops (Fig.~\ref{fig:merge_result}, right panel). Third, one can also calculate the side-branching statistics associated with both orderings. The side-branching statistics, i.e., ``Tokunaga'' statistics~\cite{tokunaga_1978}, for a conventional non-loopy tree are summarized by the numbers $N_{ij}$, which are the number of edges of level $j$ that join with edges of level $i$. Because of the ordering process, these statistics are evaluated for $i>j$. These numbers can also be divided by the number of absorbing edges, i.e., the total number of edges of level $i$, to yield an average number of side-branches per segment. Here, the algorithm produces two sets of Tokunaga statistics, the numbers $N_{ij}$ for the side-branching of tree edges (Fig.~\ref{fig:merge_result}, left panel) and $M_{ij}$ for the side-branching of reticulate edges (Fig.~\ref{fig:merge_result}, right panel). \begin{figure}[tb] \begin{center} \includegraphics[width=0.5\textwidth]{pics/worst_case.eps} \caption{\label{fig:worst_case} \footnotesize Example of the two extreme cases of the loop hierarchy. The network has $m$ faces, where $m=2^{k}$ for some integer $k>0$. $(m-1)$ of these faces are adjacent squares and the other one is the unbounded face. Vertical edges are removed before horizontal edges as follows: (a) The edges are removed sequentially from left to right. The corresponding co-tree has the shape of a ``comb'' and the maximal hierarchical level is $2$; (b) The edges are removed from left to right skipping every second edge. The process is repeated until all vertical edges except the rightmost one are removed. 
The corresponding co-tree has height $\log_2(m)=k$, which is the maximal hierarchical level.} \end{center} \end{figure} \section*{Sensitivity of planar network ordering to weight perturbations} Clearly, edge weights play an important role in the construction of both loop and tree hierarchies. Unfortunately, weight estimation in practice is often imprecise, so the order in which the algorithm iterates through the edges may be perturbed. In this section we investigate how such a perturbation affects the loop and tree hierarchies. We start by considering the worst possible change in the hierarchical levels of loops. Notice that the highest level in the hierarchy of loops can be as low as $2$. This happens when the first reticular edge creates a level $2$ face and every other reticular edge merges a level $1$ face with the only level $2$ face (see Fig. \ref{fig:worst_case}(a)). On the other hand, the highest level in the loop hierarchy can be as high as $\log_2(m)$, where $m$ is the number of faces. This happens when level $1$ faces are merged only with level $1$ faces until only faces of level $2$ are left, then level $2$ faces are merged with level $2$ faces until only faces of level $3$ are left, and so on (see Fig. \ref{fig:worst_case}(b)). It is clear from the example in Fig. \ref{fig:worst_case} that there is a permutation of edges that can change the loop hierarchy from one of the extreme cases to the other. However, in practice such a permutation would generally result from a significant perturbation of the weights. For small perturbations, it is more likely that only a few transpositions of edges will occur. Let $e_1,\ldots,e_n$ be the order of edges with respect to their weights. We shall now analyze how the structure of $T(G)$ and $T^*(G)$ changes when a single transposition occurs, that is, when the order of $e_i$ and $e_{i+1}$ is swapped. 
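The two extremes of Fig.~\ref{fig:worst_case} can be reproduced with a small simulation of the face-level merging rule. The sketch below is our own illustration (function names are hypothetical); it merges a row of level-$1$ faces in the two removal orders described above:

```python
def merge_level(a, b):
    # Horton-Strahler rule for the level of a merged pair of faces.
    return a + 1 if a == b else max(a, b)

def comb_order(n):
    # Left-to-right removal (the "comb" case): each new face is
    # absorbed one at a time into a single growing face.
    acc = 1
    for _ in range(n - 1):
        acc = merge_level(acc, 1)
    return acc

def balanced_order(n):
    # Skip-every-second removal (the balanced case): faces of equal
    # level are merged pairwise, round after round.
    levels = [1] * n
    while len(levels) > 1:
        nxt = [merge_level(levels[i], levels[i + 1])
               for i in range(0, len(levels) - 1, 2)]
        if len(levels) % 2:        # an odd face is carried to the next round
            nxt.append(levels[-1])
        levels = nxt
    return levels[0]
```

With $m=2^4$ faces, and hence $15$ bounded squares, `comb_order(15)` yields the minimal maximal level $2$, while `balanced_order(15)` yields $\log_2(m)=4$.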
First, we notice that there will be no changes to the structure of the co-tree or the spanning tree if $e_i$ and $e_{i+1}$ are both tree edges, or if $e_i$ is a tree edge and $e_{i+1}$ is a reticular edge. Hence, there are two cases to consider: when both $e_i$ and $e_{i+1}$ are reticular, and when $e_i$ is a reticular edge and $e_{i+1}$ is a tree edge. In the former case, we can regard reticular edges as edges of the co-tree. We see then that swapping the two edges may shift a subtree of the co-tree only one level up or down. Therefore, it is reasonable to expect that hierarchical levels of loops will change at most by one. The case of a reticular edge and a tree edge is more complicated. Such a transposition may lead to detaching a subtree of the remaining spanning tree and attaching it at a different place. This may have a drastic effect on the tree hierarchy. A detailed analysis of the two cases justifying the above conclusions is presented below. \paragraph{Case 1.} $e_i$ and $e_{i+1}$ are both reticular. Only the co-tree can be affected in this case. Let $f_i^R$, $f_i^L$ and $f_{i+1}^R$, $f_{i+1}^L$ be the faces merged by removing $e_i$ and $e_{i+1}$, respectively. Also, let $f_i=f_i^L\cup f_i^R$ and $f_{i+1}=f_{i+1}^L\cup f_{i+1}^R$. Notice that if $f_i\neq f_{i+1}^L$ and $f_i\neq f_{i+1}^R$, then $f_i$ is not a child of $f_{i+1}$ in $T^*(G)$, and there will be no changes to the structure of the co-tree. Suppose that $f_i=f_{i+1}^L$ (the case when $f_i=f_{i+1}^R$ follows the same argument). Then $e_{i+1}$ is adjacent to either $f_i^L$ or $f_i^R$; let us assume it is $f_i^R$. Removing $e_{i+1}$ before $e_i$ leads to merging $f_{i+1}^R$ with $f_i^R$ first, and then merging the resulting face with $f_i^L$. The corresponding change in the tree structure, shown in Fig. \ref{fig:case_1}, is a single rotation around $f_{i+1}$. Possible changes in the levels of the nodes involved in the rotation are also shown in Fig. \ref{fig:case_1}. 
We can see that these levels can change at most by one. However, in the worst case the change in levels may propagate up $T^*(G)$ all the way to the root. \begin{figure*}[tb] \begin{center} \includegraphics[width=\textwidth]{pics/case_1.eps} \caption{\label{fig:case_1} \footnotesize Example of the effect of a single transposition of two reticular edges: (a) the part of the network containing the two edges being transposed and the effect of the transposition on the structure of the co-tree; (b) possible level changes caused by the transposition.} \end{center} \end{figure*} \paragraph{Case 2.} $e_i$ is a reticular edge and $e_{i+1}$ is a tree edge. Let $f^L_i$ and $f^R_i$ be the two faces merged by removing $e_i$. Notice that there will be no changes in the structure of $T(G)$ or $T^*(G)$ if $e_{i+1}$ is not adjacent to both $f^L_i$ and $f^R_i$. So, let $e_{i+1}$ be adjacent to $f^L_i$ and $f^R_i$. Then removing $e_{i+1}$ before $e_i$ merges the same $f^L_i$ and $f^R_i$, so the structure of the co-tree does not change. However, $e_{i+1}$ turns into a reticular edge, and $e_i$ becomes a tree edge. Consequently, the structure of the spanning tree changes. Let $E^{L,R}$ be the set of edges incident to both $f^L_i$ and $f^R_i$, and let $T^{L,R}$ be the tree formed by the edges in $E^{L,R}$ and the edges connected to $E^{L,R}$ and having only $f^L_i$ or $f^R_i$ as an adjacent face (see Fig. \ref{fig:case_2}). Removing $e_{i+1}$ and $e_i$ splits $T^{L,R}$ into three trees, $T^L$, $T^R$, and $T^M$, such that $T^L$ and $T^R$ are connected to the boundary of $f^L_i\cup f^R_i$, and $T^M$ is not (Fig. \ref{fig:case_2}). If $e_i$ is removed before $e_{i+1}$, then $T^M$ is connected to $T^R$ by $e_{i+1}$. However, if the transposition happens and $e_{i+1}$ is removed before $e_i$, then $T^M$ is connected to $T^L$ by $e_i$ (Fig. \ref{fig:case_2}). To understand the effect of such a change on hierarchical levels, we first assume that $T^M$ does not contain the root of $T(G)$. 
Let $v^R$ be the vertex incident to $e_{i+1}$ and $T^R$, and let $v^L$ be the vertex incident to $e_i$ and $T^L$. Also, let $\lambda_{i+1}=\lambda(e_{i+1})$, where $e_{i+1}$ is regarded as an edge in $T^M\cup e_{i+1}$ rooted at $v^R$, and let $\lambda_i=\lambda(e_i)$, where $e_i$ is regarded as an edge in $T^M\cup e_i$ rooted at $v^L$. Denote by $e^R$ the edge of $T(G)$ which is next to $e_{i+1}$ in the path from $e_{i+1}$ to the root of $T(G)$, and by $e^L$ the edge of $T(G)$ which is next to $e_i$ in the path from $e_i$ to the root of $T(G)$. Then we can see that removing $e_{i+1}$ before $e_i$ can decrease the level of $e^R$ by at most $\max\{{1,\lambda_{i+1}-1}\}$. At the same time, the level of $e^L$ can increase by at most $\max\{{1,\lambda_i-1}\}$. In the worst case, these changes can propagate up $T(G)$ all the way to the root. The case when the root of $T(G)$ belongs to $T^M$ can lead to more drastic changes. In this case, removing $e_{i+1}$ before $e_i$ leads to recomputing levels of all edges in $T(G)-T^M$ by changing the root from $v^R$ to $v^L$. Again, this change can then propagate further to the root of $T(G)$. \begin{figure*}[tb] \begin{center} \includegraphics[width=\textwidth]{pics/case_2.eps} \caption{\label{fig:case_2} \footnotesize Example of the effect of a single transposition of a reticular edge and a tree edge: (a) The part of the network containing the two edges being transposed. The brown, blue, and green triangles (and edges) denote the subtrees adjacent to the edges. (b) The effect of the transposition on the structure of the spanning tree and its hierarchical levels.} \end{center} \end{figure*} \section*{Discussion} We have shown that the hierarchy of loops often observed in reticular physical networks can be defined explicitly using a generalization of the Horton-Strahler order. To obtain such a generalization we regard the network as a weighted graph, with weights corresponding to the widths of the network branches. 
Noticing that the Horton-Strahler order can be computed by analyzing how specific disjoint components (sub-networks) of a (non-reticular) network are merged as the edges are \emph{added} in the order of increasing weight, we show that the hierarchical order of loops in a weighted planar graph can then be computed by analyzing how the \emph{faces} of the graph are merged as we \emph{remove} the edges in the order of increasing weight. This approach naturally classifies graph edges into reticular edges, which are responsible for loop formation, and tree edges, which constitute a spanning tree of the graph. Hence, both the loop and the tree hierarchies can be computed. Being able to compute hierarchical levels for loops creates new possibilities for analyzing the structure of reticular networks. By analogy, river networks can be compared by representing their connectivity in terms of side-branching statistics~\cite{tokunaga_1978}. These statistics depict the ways in which smaller streams connect to larger streams at all scales of the network~\cite{dodds_pre2001}. A similar procedure could be applied to leaf networks. For example, the current algorithm decomposes reticulate networks into a binary tree for the loop hierarchy and a separate binary tree for the tree hierarchy. Both networks have associated Horton-Strahler orders and therefore their structure can be estimated using Tokunaga statistics. Recent innovations in software now permit semi-automated extraction of the dimensions and connectivity of entire leaf vein networks and the areoles that veins surround~\cite{price_leafgui2011}. Hence, greater quantification of leaf vein networks from across a wide range of biological diversity will soon be available with which to analyze leaf development, variation across environmental gradients, and paleobotanical specimens. 
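To illustrate, the side-branch counts $N_{ij}$ can be tallied from a tree whose edge levels have already been computed. The sketch below is our own illustration (the tree encoding and names are hypothetical); it treats a child edge as a side branch when its level is strictly below the maximal level among its sibling edges, so that the main-stream edges forming the downstream segment are excluded from the counts:

```python
def tokunaga_counts(children, edge_level, root):
    """Tally N[(i, j)]: the number of level-j edges joining a level-i
    segment, collected for i > j.  `children` maps vertex -> list of
    children; `edge_level` maps (parent, child) -> Horton-Strahler level."""
    counts = {}
    stack = [root]
    while stack:
        v = stack.pop()
        kids = children.get(v, [])
        stack.extend(kids)
        if not kids:
            continue
        sub = [edge_level[(v, u)] for u in kids]
        m = max(sub)
        # Level of the downstream segment at v (Horton-Strahler rule).
        i = m + 1 if sub.count(m) >= 2 else m
        for l in sub:
            if l < m:     # side branches absorbed by the downstream segment
                counts[(i, l)] = counts.get((i, l), 0) + 1
    return counts
```

For a small tree in which a level-$1$ edge joins a level-$2$ segment exactly once, the routine returns a single count $N_{21}=1$; the same tally applied to the co-tree yields the $M_{ij}$ statistics.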
Current attempts to compare reticulate structure have largely focused on the density of areoles (i.e., network faces) as a proxy for the ``loopiness'' of the network~\cite{blonder_2011}. The current study will provide additional metrics to compare the detailed branching structure of reticulate networks. An important caveat to keep in mind when comparing reticulate network structure is that estimating weights in physical networks is by no means a trivial problem. Therefore, we have performed a theoretical analysis of possible changes in the loop and tree hierarchies due to perturbations in edge weights. We have shown that the worst possible change in the loop hierarchy is attainable, but requires a significant perturbation of weights. Taking into account that small perturbations are likely to cause only a few transpositions in the order in which the edges are removed, we have shown that a single such transposition can change the hierarchical levels of loops at most by one. We have also shown that the change in the hierarchical levels of the remaining spanning tree can be arbitrarily large even when a single transposition is performed. It is important to note that in either case the change does not happen for every transposition. Rather, the transposed edges have to satisfy a particular condition, which may only rarely be satisfied in practice. The latter claim is supported by the numerous successful applications of the Horton-Strahler order. While the method itself does not depend on any weights, the connectivity of the network is obtained by analyzing digital elevation maps, which contain noise~\cite{tarboton_1991,peckham_1995}. In particular, the difference between the correct and the computed connectivity may be exactly the same as the difference in the connectivity of our spanning tree caused by transposing two edges. Hence, the resulting hierarchy may be drastically different from the correct one. 
Nevertheless, the Horton-Strahler order has been successfully used for over five decades despite the potential instability identified here~\cite{horton1945,strahler1957,tarboton_1991,rodrigueziturbe_1997,dodds_2000}. We suggest that empirical characterizations of reticulate planar networks include randomization analysis on edge weights to identify the robustness of claims regarding statistical structure of side-branching of the tree and co-tree. Many biological and physical systems are represented by non-planar physical networks~\cite{gastner_2006,barthelemy_2011} and computing hierarchical levels of loops in such networks is still an open question. While our method can be applied to obtain the tree hierarchy of such networks, the loop hierarchy cannot be computed in this case because the algorithm relies on the fact that any loop in a planar network corresponds to a union of faces. In mathematical language, (boundaries of) faces of a planar graph form a canonical basis for loops ($1$-dimensional homology classes). Such a canonical basis is not present in non-planar graphs. It is not clear at this point how to handle the non-planar case. Perhaps a method for computing loop hierarchies which is not based on the widths of the network branches could provide an answer. We hope that our approach of using the language of algebraic topology to deal with nodes and loops of networks will prove useful in developing such a method and complement other approaches. \section*{Acknowledgments} This work was supported by the National Science Foundation Plant Genome Research Program (grant 0820624 to H.E. and J.S.W.) and the Defense Advanced Research Projects Agency (grant HR0011-09-1-0055 to J.S.W.). Joshua S.~Weitz, Ph.D., holds a Career Award at the Scientific Interface from the Burroughs Wellcome Fund. During preparation of this manuscript the authors became aware of a related work by Katifori and Magnasco, concurrently submitted for publication.